Unnamed: 0 (int64, 0–350k) | level_0 (int64, 0–351k) | ApplicationNumber (int64, 9.75M–96.1M) | ArtUnit (int64, 1.6k–3.99k) | Abstract (string, lengths 1–8.37k) | Claims (string, lengths 3–292k) | abstract-claims (string, lengths 68–293k) | TechCenter (int64, 1.6k–3.9k) |
|---|---|---|---|---|---|---|---|
11,000 | 11,000 | 16,298,062 | 2,642 | A network system for managing an on-demand service can receive, from a user, a service request indicating a destination location. In addition to facilitating available service providers to fulfill the service request, the network system can create and manage a session for the user for an entity identified based on, for example, the destination location. The session for the user can be used to procure items and/or services provided by the entity. The network system can transmit, to terminal computing system(s) associated with the entity, session initiation data that includes user data such as identification information to identify the user. The transmission of the session initiation data can cause the terminal computing system(s) to automatically create the session for the user. In addition, the network system can receive, from the terminal computing system(s), session data upon termination of the session for the user. | 1. A network system comprising:
one or more processors; and one or more memory resources storing instructions that, when executed by the one or more processors of the network system, cause the network system to:
receive, over a network from a user device of a user, a first set of request data corresponding to a first request for a transport service, the first set of request data indicating a destination location of the transport service;
in response to receiving the first set of request data corresponding to the first request for the transport service, identify one or more entities based, at least in part, on the first set of request data;
receive, over the network from the user device, a second set of request data corresponding to a request to create a session for recording one or more transactions of the user at a selected entity of the one or more entities;
in response to receiving the second set of request data corresponding to the request to create the session, cause a terminal computing system located at the selected entity to create the session for recording one or more transactions of the user at the selected entity; and
receive, over the network from the terminal computing system located at the selected entity, a set of session data associated with the session that indicates one or more transactions of the user at the selected entity. 2. The network system of claim 1, wherein identifying the one or more entities based, at least in part, on the first set of request data includes identifying the one or more entities based, at least in part, on the destination location of the transport service. 3. The network system of claim 1, wherein the executed instructions further cause the network system to:
determine an estimated time of arrival at the destination location; and wherein identifying the one or more entities based, at least in part, on the first set of request data includes identifying the one or more entities based, at least in part, on the estimated time of arrival at the destination location. 4. The network system of claim 1, wherein the executed instructions further cause the network system to identify the one or more entities based, at least in part, on one or more of: (i) a preference of the user, (ii) historical session information of the user, or (iii) historical requests of the user. 5. The network system of claim 1, wherein the executed instructions further cause the network system to transmit, over the network to the user device, a set of content data to cause the user device to present, within a user application, the one or more entities that are identified based, at least in part, on the first set of request data. 6. The network system of claim 5, wherein the user device is configured to transmit the second set of request data corresponding to the request to create the session in response to a user selection of the selected entity from the one or more entities presented within the user application. 7. The network system of claim 1, wherein the executed instructions further cause the network system to cause the terminal computing system located at the selected entity to create the session by transmitting, over the network to the terminal computing system, a set of session initiation data that includes identification information of the user. 8. The network system of claim 1, wherein the executed instructions further cause the network system to:
transmit, over the network to the user device, a set of content data to cause the user device to present a user interface for selecting from a plurality of items available at the selected entity; receive, over the network from the user device, a set of selection data indicating a user selection of one or more items from the plurality of available items; and wherein the user selection of the one or more items is made via the user interface presented on the user device and one or more transactions associated with the one or more selected items are recorded as session data associated with the session for the user. 9. The network system of claim 8, wherein the set of content data is transmitted over the network to the user device prior to the user arriving at the destination location of the transport service. 10. The network system of claim 8, wherein the executed instructions further cause the network system to:
determine an estimated time of arrival at the destination location; and transmit the set of content data over the network to the user device based at least in part on the estimated time of arrival at the destination location. 11. The network system of claim 8, wherein the set of selection data indicating the user selection of one or more items to be provided by the selected entity is received by the network system prior to the user arriving at the destination location of the transport service. 12. The network system of claim 1, wherein the executed instructions further cause the network system to transmit a set of query data to cause the terminal computing system to: (i) terminate the session for the user, and (ii) transmit the set of session data to the network system. 13. The network system of claim 12, wherein the executed instructions further cause the network system to transmit the set of query data in response to receiving, over the network from the user device, a third set of request data corresponding to a second request for the transport service. 14. The network system of claim 12, wherein the executed instructions further cause the network system to transmit the set of query data in response to receiving, over the network from the user device, a termination signal. 15. The network system of claim 12, wherein the executed instructions further cause the network system to transmit the set of query data based on location data received from the user device. 16. The network system of claim 12, wherein the executed instructions further cause the network system to transmit the set of query data after a timeout period during which no session data is received from the terminal computing system. 17. 
The network system of claim 1, wherein the set of session data associated with the session is transmitted by the terminal computing system in response to one or more operator inputs received at the terminal computing system, the one or more operator inputs further causing the terminal computing system to terminate the session for the user. 18. The network system of claim 1, wherein the executed instructions further cause the network system to:
in response to receiving the first set of request data corresponding to the first request for the transport service, identify a transport provider based, at least in part, on a start location of the transport service indicated by the first set of request data; and transmit, over the network to a provider device of the transport provider, a set of data corresponding to an invitation to fulfill the first request for the transport service. 19. A computer-implemented method comprising:
receiving, over a network from a user device of a user, a first set of request data corresponding to a first request for a transport service to a destination location; in response to receiving the first set of request data corresponding to the first request for the transport service, identifying one or more entities based, at least in part, on the destination location; receiving, over the network from the user device, a second set of request data corresponding to a request to create a session for recording one or more transactions of the user at a selected entity of the one or more entities; in response to receiving the second set of request data corresponding to the request to create the session, causing a terminal computing system located at the selected entity to create the session for recording one or more transactions of the user at the selected entity; and receiving, over the network from the terminal computing system located at the selected entity, a set of session data associated with the session that indicates one or more transactions of the user at the selected entity. 20. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a network system, cause the network system to:
receive, over a network from a user device of a user, a first set of request data corresponding to a first request for a transport service to a destination location; in response to receiving the first set of request data corresponding to the first request for the transport service, identify one or more entities based, at least in part, on the destination location; receive, over the network from the user device, a second set of request data corresponding to a request to create a session for recording one or more transactions of the user at a selected entity of the one or more entities; in response to receiving the second set of request data corresponding to the request to create the session, cause a terminal computing system located at the selected entity to create the session for recording one or more transactions of the user at the selected entity; and receive, over the network from the terminal computing system located at the selected entity, a set of session data associated with the session that indicates one or more transactions of the user at the selected entity. | (abstract-claims: verbatim concatenation of the Abstract and Claims cells above) | 2,600 |
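The first record's claims describe a request/session protocol: a transport request names a destination, the network system identifies nearby entities, a second request causes a terminal at the selected entity to open a session, and query data later terminates the session and returns its data. As an illustration only, under loose assumptions — every name below (`NetworkSystem`, `Terminal`, the prefix-match entity lookup) is hypothetical and not from the patent — the flow might be sketched as:

```python
from dataclasses import dataclass, field


@dataclass
class Terminal:
    """Stand-in for the terminal computing system at a selected entity."""
    entity: str
    sessions: dict = field(default_factory=dict)

    def create_session(self, user_id: str) -> None:
        # Session initiation data causes the terminal to open a session
        # for recording the user's transactions (claims 1 and 7).
        self.sessions[user_id] = []

    def record(self, user_id: str, item: str) -> None:
        # A transaction at the entity is recorded against the session.
        self.sessions[user_id].append(item)

    def terminate(self, user_id: str) -> list:
        # Query data causes the terminal to terminate the session and
        # return the accumulated session data (claim 12).
        return self.sessions.pop(user_id)


class NetworkSystem:
    def __init__(self, terminals: dict):
        self.terminals = terminals  # entity name -> Terminal

    def handle_transport_request(self, destination: str) -> list:
        # First request: identify candidate entities from the destination
        # location (claims 1 and 2). Prefix matching is a toy stand-in.
        return [e for e in self.terminals if e.startswith(destination)]

    def create_session(self, entity: str, user_id: str) -> None:
        # Second request: cause the entity's terminal to open a session.
        self.terminals[entity].create_session(user_id)

    def end_session(self, entity: str, user_id: str) -> list:
        # Transmit query data and receive the session data back.
        return self.terminals[entity].terminate(user_id)
```

A usage pass under the same assumptions: request transport to `"airport"`, pick the returned entity, open a session, record one transaction at the terminal, then end the session and receive `["coffee"]` back.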
11,001 | 11,001 | 16,342,499 | 2,613 | An example method includes acquiring a 3D surface model at a processor. The processor may determine a plurality of differently segmented 2D representations of the 3D surface model. The processor may select, based on a predetermined 3D surface forming criteria, a segmented 2D representation of the plurality of differently segmented 2D representations for refinement. The processor may determine, based on the selected segmented 2D representation, a refined 2D representation. The refined 2D representation may be determined such that the refined 2D representation provides an output which, when formed in a heat deformable substrate, is formable to a shape of the 3D surface model with better accuracy than an output of the selected segmented 2D representation. | 1. A method comprising:
acquiring, at a processor, a 3D surface model; determining, by the processor, a plurality of differently segmented 2D representations of the 3D surface model; selecting, by the processor, and based on a predetermined 3D surface forming criteria, a segmented 2D representation of the plurality of differently segmented 2D representations for refinement; and determining, by the processor and based on the selected 2D representation, a refined 2D representation, wherein the refined 2D representation is determined such that the refined 2D representation provides an output which, when formed in a heat deformable substrate, is formable to a shape of the 3D surface model with better accuracy than an output of the selected 2D representation. 2. The method according to claim 1 wherein determining, by the processor, a plurality of differently segmented 2D representations of the 3D surface model comprises determining a segmented 2D representation according to criteria comprising at least one of:
determining each segment to have at least a minimum size;
mapping a surface of the 3D surface model which subtends an arc of more than a predetermined size to at least two segments; and
determining the segments such that angles between segments are within predetermined criteria. 3. The method according to claim 1 wherein selecting a segmented 2D representation for refinement based on the predetermined 3D surface forming criteria comprises selecting a 2D representation based on at least one of:
conformability to the 3D surface model;
appearance of at least one pattern element when formed as a 3D surface; and
manageability of at least one portion of a heat deformable substrate to which the segmented 2D representation is applied. 4. The method according to claim 1 in which determining, by the processor and based on the selected 2D representation, a refined 2D representation comprises at least one of:
changing at least one angle between segments;
dividing a segment into a plurality of segments;
merging a plurality of segments into a merged segment;
altering a shape of a segment;
rescaling a segment;
reconfiguring a relative arrangement of segments;
adding a segment;
altering an image to be applied to the substrate in an output of the 2D representation; and
merging aspects of different selected 2D representations. 5. The method according to claim 1 wherein determining, by the processor and based on the selected 2D representation, a refined 2D representation comprises modifying the selected 2D representation so as to reduce a substrate region consumed in printing the 2D representation. 6. The method according to claim 5 wherein refining the selected 2D representation comprises rearranging the segments into a different configuration. 7. The method according to claim 1 comprising generating, using the processor, a representation of a 3D surface formable from a substrate output formed according to the selected 2D representation and displaying the generated 3D surface model. 8. The method according to claim 1 comprising:
determining a plurality of refined 2D representations;
selecting, by the processor, and based on predetermined 3D surface forming criteria, a refined 2D representation of the plurality of refined 2D representations for further refinement; and
determining, by the processor and based on the selected refined 2D representation, a next generation refined 2D representation, wherein the next generation refined 2D representation is determined such that the next generation refined 2D representation provides an output which, when formed in a heat deformable substrate, is formable to a shape of the 3D surface model with better accuracy than an output of the selected refined 2D representation on which it is based. 9. The method according to claim 1 comprising printing the refined 2D representation on a heat deformable substrate. 10. Processing circuitry comprising:
an unwrap module to generate a plurality of segmented representations of a 3D surface, each segmented representation comprising at least one plane; a selection module to select a segmented representation of the plurality of segmented representations based on a suitability of each segmented representation to form the 3D surface in a heat deformable material; and a refinement module to refine the selected segmented representation to determine a refined segmented representation having an increased suitability to form the 3D surface in a heat deformable material. 11. The processing circuitry according to claim 10 wherein the unwrap module is to segment a model of a three dimensional object having the 3D surface into a plurality of geometric shapes and to unwrap the geometrical shapes to provide a segmented representation of the 3D surface. 12. The processing circuitry according to claim 10 wherein the refinement module is to merge aspects of different segmented representations to determine a refined segmented representation. 13. The processing circuitry according to claim 10 wherein the refinement module is to:
change at least one angle between segments;
divide a segment into a plurality of segments;
merge a plurality of segments into a merged segment;
rescale a segment;
alter a shape of a segment;
merge aspects of different selected 2D representations;
reconfigure a relative arrangement of segments;
add a segment;
alter an image to be printed on a substrate to form the 3D surface; and
increase a size of a segment. 14. A non-transitory machine readable medium comprising instructions which, when executed by a processor, cause the processor to:
assess, from a plurality of 2D representations of a 3D surface, which representations are better suited to form the 3D surface in a heat deformable material; and refine a 2D representation of the plurality of 2D representations to increase its suitability to form the 3D surface in the heat deformable material, wherein the suitability is determined based on at least one of:
conformability to the 3D surface;
appearance of a pattern element when formed as a 3D surface; and
manageability of a portion of the heat deformable material. 15. The non-transitory machine readable medium according to claim 14 wherein the instructions to refine a 2D representation comprise instructions which, when executed by a processor, cause the processor to at least one of:
change at least one angle between segments of the 2D representation;
divide a segment into a plurality of segments of the 2D representation;
merge a plurality of segments of the 2D representation into a merged segment;
rescale a segment of the 2D representation;
alter a shape of a segment;
merge aspects of different selected 2D representations;
reconfigure a relative arrangement of segments;
add a segment;
alter an image to be printed on a substrate to form the 3D surface; and
increase a size of a segment of the 2D representation. | An example method includes acquiring a 3D surface model at a processor. The processor may determine a plurality of differently segmented 2D representations of the 3D surface model. The processor may select, based on a predetermined 3D surface forming criteria, a segmented 2D representation of the plurality of differently segmented 2D representations for refinement. The processor may determine, based on the selected segmented 2D representation, a refined 2D representation. The refined 2D representation may be determined such that the refined 2D representation provides an output which, when formed in a heat deformable substrate, is formable to a shape of the 3D surface model with better accuracy than an output of the selected segmented 2D representation.1. A method comprising:
acquiring, at a processor, a 3D surface model; determining, by the processor, a plurality of differently segmented 2D representations of the 3D surface model; selecting, by the processor, and based on a predetermined 3D surface forming criteria, a segmented 2D representation of the plurality of differently segmented 2D representations for refinement; and determining, by the processor and based on the selected 2D representation, a refined 2D representation, wherein the refined 2D representation is determined such that the refined 2D representation provides an output which, when formed in a heat deformable substrate, is formable to a shape of the 3D surface model with better accuracy than an output of the selected 2D representation. 2. The method according to claim 1 wherein determining, by the processor, a plurality of differently segmented 2D representations of the 3D surface model comprises determining a segmented 2D representation according to criteria comprising at least one of:
determining each segment to have at least a minimum size;
mapping a surface of the 3D surface model which subtends an arc of more than a predetermined size to at least two segments; and
determining the segments such that angles between segments are within predetermined criteria. 3. The method according to claim 1 wherein selecting a segmented 2D representations for refinement based on the predetermined 3D surface forming criteria comprises selecting a 2D representation based on at least one of:
conformability to the 3D surface model;
appearance of at least one pattern element when formed as a 3D surface; and
manageability of at least one portion of a heat deformable substrate to which the segmented 2D representation is applied. 4. The method according to claim 1 in which determining, by the processor and based on the selected 2D representation, a refined 2D representation comprises at least one of:
changing at least one angle between segments;
dividing a segment into a plurality of segments;
merging a plurality of segments into a merged segment;
altering a shape of a segment,
rescaling a segment;
reconfiguring a relative arrangement of segments;
adding a segment;
altering an image to be applied to the substrate in an output of the 2D representation; and
merging aspects of different selected 2D representations. 5. The method according to claim 1 wherein determining, by the processor and based on the selected 2D representation, a refined 2D representation comprises modifying the selected 2D representation so as to reduce a substrate region consumed in printing the 2D representation. 6. The method according to claim 5 wherein refining the selected 2D representation comprises rearranging the segments into a different configuration. 7. The method according to claim 1 comprising generating, using the processor, a representation of a 3D surface formable from a substrate output formed according to the selected 2D representation and displaying the generated a 3D surface model. 8. The method according to claim 1 comprising:
determining a plurality of refined 2D representations;
selecting, by the processor, and based on predetermined 3D surface forming criteria, a refined 2D representation of the plurality of refined 2D representations for further refinement; and
determining, by the processor and based on the selected refined 2D representation, a next generation refined 2D representation, wherein the next generation refined 2D representation is determined such that the next generation refined 2D representation provides an output which, when formed in a heat deformable substrate, is formable to a shape of the 3D surface model with better accuracy than an output of the selected refined 2D representation on which it is based. 9. The method according to claim 1 comprising printing the refined 2D representation on a heat deformable substrate. 10. Processing circuitry comprising:
an unwrap module to generate a plurality of segmented representations of a 3D surface, each segmented representation comprising at least one plane; a selection module to select a segmented representation of the plurality of segmented representations based on a suitability of each segmented representation to form the 3D surface in a heat deformable material; and a refinement module to refine the selected segmented representation to determine a refined segmented representation having an increased suitability to form the 3D surface in a heat deformable material. 11. The processing circuitry according to claim 10 wherein the unwrap module is to segment a model of a three dimensional object having the 3D surface into a plurality of geometric shapes and to unwrap the geometrical shapes to provide a segmented representation of the 3D surface. 12. The processing circuitry according to claim 10 wherein the refinement module is to merge aspects of different segmented representations to determine a refined segmented representation. 13. The processing circuitry according to claim 10 wherein the refinement module is to:
change at least one angle between segments;
divide a segment into a plurality of segments;
merge a plurality of segments into a merged segment;
rescale a segment;
alter a shape of a segment;
merge aspects of different selected 2D representations;
reconfigure a relative arrangement of segments;
add a segment;
alter an image to be printed on a substrate to form the 3D surface; and
increase a size of a segment. 14. A non-transitory machine readable medium comprising instructions which, when executed by a processor, cause the processor to:
assess, from a plurality of 2D representations of a 3D surface, which representations are better suited to form the 3D surface in a heat deformable material; and refine a 2D representation of the plurality of 2D representations to increase its suitability to form the 3D surface in the heat deformable material, wherein the suitability is determined based on at least one of:
conformability to the 3D surface;
appearance of a pattern element when formed as a 3D surface; and
manageability of a portion of the heat deformable material. 15. The non-transitory machine readable medium according to claim 14 wherein the instructions to refine a 2D representation comprise instructions which, when executed by a processor, cause the processor to at least one of:
change at least one angle between segments of the 2D representation;
divide a segment into a plurality of segments of the 2D representation;
merge a plurality of segments of the 2D representation into a merged segment;
rescale a segment of the 2D representation;
alter a shape of a segment;
merge aspects of different selected 2D representations;
reconfigure a relative arrangement of segments;
add a segment;
alter an image to be printed on a substrate to form the 3D surface; and
increase a size of a segment of the 2D representation. | 2,600 |
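The segmentation criteria recited in claim 2 above (each segment has at least a minimum size; a surface subtending more than a predetermined arc maps to at least two segments) can be sketched numerically. This is an illustrative sketch only, not the patented method: the function name `segment_surface` and the threshold values are hypothetical.

```python
import math

def segment_surface(total_arc_deg, max_arc_deg=60.0, min_arc_deg=10.0):
    """Split a curved surface, described by the arc it subtends, into flat
    segments: any surface subtending more than the predetermined arc
    (max_arc_deg here) maps to two or more segments, and every segment is
    kept at or above a minimum size (min_arc_deg).
    Returns the per-segment arcs in degrees."""
    if total_arc_deg <= max_arc_deg:
        # Surface does not exceed the predetermined arc: one segment suffices.
        return [total_arc_deg]
    n = math.ceil(total_arc_deg / max_arc_deg)
    arc = total_arc_deg / n
    if arc < min_arc_deg:
        # Respect the minimum-segment-size criterion by using fewer segments.
        n = max(1, int(total_arc_deg // min_arc_deg))
        arc = total_arc_deg / n
    return [arc] * n
```

For example, a half-cylinder subtending 180° would be unwrapped into three 60° segments under these illustrative thresholds.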
11,002 | 11,002 | 16,085,358 | 2,646 | There is provided a method of secondary serving cell selection for a wireless communication device served by an in-door radio system. The indoor radio system is operating in a first set of at least one frequency band, shared with at least one neighboring base station, and a second set of at least one frequency band, unused by the at least one neighboring base station. The method comprises selecting (S1), if it is determined that there is lack of indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of the second set of frequency band(s) to carry the secondary serving cell with higher priority than a component carrier of said one or more bands of the first set of frequency band(s) on which there is lack of indoor dominance, and selecting (S2), if it is determined that there is indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of the one or more bands of the first set of frequency band(s) on which there is indoor dominance, from the indoor radio system, to carry the secondary serving cell with higher priority than a component carrier of the second set of frequency band(s). | 1. A method of secondary serving cell selection for a wireless communication device served by an indoor radio system employing carrier aggregation of component carriers to enable selection of a secondary serving cell, the indoor radio system operating in a first set of at least one frequency band, shared with at least one neighboring base station, and a second set of at least one frequency band, unused by the at least one neighboring base station, wherein said method comprises:
selecting, if it is determined that there is lack of indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of the second set of frequency band(s) to carry the secondary serving cell with higher priority than a component carrier of said one or more bands of the first set of frequency band(s) on which there is lack of indoor dominance; and
selecting, if it is determined that there is indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of said one or more bands of the first set of frequency band(s) on which there is indoor dominance, from the indoor radio system, to carry the secondary serving cell with higher priority than a component carrier of the second set of frequency band(s). 2. The method of claim 1, wherein the second set of frequency band(s) is a set of frequency band(s) in unlicensed spectrum and/or a set of indoor frequency band(s). 3. The method of claim 1, wherein,
if it is determined that there is lack of indoor dominance on each band of the first set of frequency band(s), a component carrier of the second set of frequency band(s) is selected to carry the secondary serving cell; and if it is determined that there is indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of said one or more bands of the first set of frequency band(s) on which there is indoor dominance is selected to carry the secondary serving cell. 4. The method of claim 1, wherein the step of selecting a component carrier of the second set of frequency band(s) comprises selecting a component carrier of a licensed frequency band of the second set with higher priority than an unlicensed frequency band of the second set. 5. The method of claim 1, further comprising determining whether there is indoor dominance or lack of indoor dominance at least partly based on signal measurements on one or more bands of the first set of frequency band(s) performed by the wireless communication device. 6. The method of claim 5, wherein it is determined that there is lack of indoor dominance on said one or more bands of the first set of frequency band(s) when a signal on said one or more bands of the first set of frequency band(s) from the neighboring base station is stronger than a signal on said one or more bands of the first set of frequency band(s) from the indoor radio system as measured by the wireless communication device. 7. The method of claim 5, wherein it is determined that there is indoor dominance on said one or more bands of first set of frequency band(s) when a signal on said one or more bands of the first set of frequency band(s) from the neighboring base station is weaker than a signal on said one or more bands of the first set of frequency band(s) from the indoor radio system as measured by the wireless communication device. 8. 
The method of claim 1, wherein the indoor radio system is an indoor base station, and the at least one neighboring base station is an outdoor base station. 9. The method of claim 1, wherein
the method is performed by the indoor radio system or the method is performed by a network device in connection with the indoor radio system, and the network device is part of a centralized radio access network deployment or a cloud-based network device. 10. (canceled) 11. A device configured to perform secondary serving cell selection for a wireless communication device served by an indoor radio system employing carrier aggregation of component carriers to enable selection of a secondary serving cell, the indoor radio system operating in a first set of at least one frequency band, shared with at least one neighboring base station, and a second set of at least one frequency band, unused by the at least one neighboring base station, wherein
the device is configured to select, if it is determined that there is lack of indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of the second set of frequency band(s) to carry the secondary serving cell with higher priority than a component carrier of said one or more bands of the first set of frequency band(s) on which there is lack of indoor dominance; and the device is configured to select, if it is determined that there is indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of said one or more bands of the first set of frequency band(s) on which there is indoor dominance, from the indoor radio system, to carry the secondary serving cell with higher priority than a component carrier of the second set of frequency band(s). 12. The device of claim 11, wherein the device is configured to perform secondary serving cell selection based on the second set of frequency band(s) being a set of frequency band(s) in unlicensed spectrum and/or a set of indoor frequency band(s). 13. The device of claim 11, wherein the device is configured to select, if it is determined that there is lack of indoor dominance on each band of the first set of frequency band(s), a component carrier of the second set of frequency band(s) to carry the secondary serving cell, and
wherein the device is configured to select, if it is determined that there is indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of said one or more bands of the first set of frequency band(s), on which there is indoor dominance, to carry the secondary serving cell. 14. The device of claim 11, wherein the device is configured to select a component carrier of the second set of frequency band(s) by selecting a component carrier of a licensed frequency band of the second set with higher priority than an unlicensed frequency band of the second set. 15. The device of claim 11, wherein the device is configured to determine whether there is indoor dominance or lack of indoor dominance at least partly based on signal measurements on one or more bands of the first set of frequency band(s) performed by the wireless communication device. 16. The device of claim 15, wherein the device is configured to determine that there is lack of indoor dominance on said one or more bands of the first set of frequency band(s) when a signal on said one or more bands of the first set of frequency band(s) from the neighboring base station is stronger than a signal on said one or more bands of the first set of frequency band(s) from the indoor base station as measured by the wireless communication device. 17. The device of claim 15, wherein the device is configured to determine that there is indoor dominance on said one or more bands of the first set of frequency band(s) when a signal on said one or more bands of the first set of frequency band(s) from the neighboring base station is weaker than a signal on said one or more bands of the first set of frequency band(s) from the indoor base station as measured by the wireless communication device. 18. 
The device of claim 11, wherein the device comprises a processor and memory, said memory comprising instructions executable by said processor, whereby said processor is operative to perform secondary serving cell selection. 19. An indoor radio system comprising the device of claim 11. 20. A network device comprising the device of claim 11. 21. (canceled) 22. A computer program for performing, when executed, secondary serving cell selection for a wireless communication device served by an indoor radio system employing carrier aggregation of component carriers to enable selection of secondary serving cell, the indoor radio system operating in a first set of at least one frequency band, shared with at least one neighboring base station, and a second set of at least one frequency band, unused by the at least one neighboring base station, wherein the computer program comprises instructions, which when executed by at least one processor, cause the at least one processor to:
select, if it is determined that there is lack of indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of the second set of frequency band(s) to carry the secondary serving cell with higher priority than a component carrier of said one or more bands of the first set of frequency band(s) on which there is lack of indoor dominance; and select, if it is determined that there is indoor dominance on one or more bands of the first set of frequency band(s), a component carrier of said one or more bands of the first set of frequency band(s) on which there is indoor dominance, from the indoor radio system, to carry the secondary serving cell with higher priority than a component carrier of the second set of frequency band(s). 23. (canceled) 24. (canceled) | 2,600 |
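The priority rule of claim 1, together with the measurement-based dominance test of claims 5-7, amounts to a simple selection procedure. The sketch below is illustrative, assuming per-band RSRP-style measurements in dBm; all names and data shapes are hypothetical, not part of the claimed system.

```python
def indoor_dominant(indoor_signal_dbm, neighbor_signal_dbm):
    """Claims 6-7: indoor dominance holds on a band when the indoor
    system's signal, as measured by the wireless communication device,
    is stronger than the neighboring base station's signal."""
    return indoor_signal_dbm > neighbor_signal_dbm

def select_scell_carrier(measurements, shared_carriers, second_set_carriers):
    """Claim 1 priority rule: on a shared (first-set) band with indoor
    dominance, prefer a first-set component carrier; where dominance is
    lacking, prefer a carrier of the second set (bands unused by the
    neighboring base station).
    measurements: band -> (indoor_dbm, neighbor_dbm) for the first set."""
    for band, (indoor_dbm, neighbor_dbm) in measurements.items():
        if indoor_dominant(indoor_dbm, neighbor_dbm) and shared_carriers.get(band):
            return shared_carriers[band][0]
    # Lack of indoor dominance on every first-set band (claim 3):
    # fall back to a component carrier of the second set.
    return second_set_carriers[0] if second_set_carriers else None
```

With indoor dominance on one shared band, the shared-band carrier wins; with dominance lacking everywhere, the second-set carrier is selected, mirroring the two selecting steps of the claim.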
11,003 | 11,003 | 16,292,852 | 2,625 | A display device includes a pixel array unit formed by disposing pixel circuits having a P-channel type drive transistor that drives a light-emitting unit, a sampling transistor that applies a signal voltage, a light emission control transistor that controls emission/non-emission of the light-emitting unit, a storage capacitor that is connected between a gate electrode and a source electrode of the drive transistor and an auxiliary capacitor that is connected to the source electrode, and a drive unit that, during threshold correction, respectively applies a first voltage and a second voltage to the source electrode of the drive transistor and the gate electrode thereof, the difference between the first voltage and the second voltage being less than a threshold voltage of the drive transistor, and subsequently performs driving that applies a standard voltage used in threshold correction to the gate electrode when the source electrode is in a floating state. | 1. A display device comprising:
a pixel array unit that is formed by disposing pixel circuits that include a P-channel type drive transistor that drives a light-emitting unit, a sampling transistor that applies a signal voltage, a light emission control transistor that controls light emission and non-light emission of the light-emitting unit, a storage capacitor that is connected between a gate electrode and a source electrode of the drive transistor and an auxiliary capacitor that is connected to the source electrode of the drive transistor; and a drive unit that, during threshold correction, respectively applies a first voltage and a second voltage to the source electrode of the drive transistor and the gate electrode thereof, the difference between the first voltage and the second voltage being less than a threshold voltage of the drive transistor, and subsequently performs driving that applies a standard voltage that is used in threshold correction to the gate electrode in a state in which the source electrode of the drive transistor has been set to a floating state. 
| 2,600 |
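The claimed threshold-correction sequence (apply a first and second voltage to source and gate whose difference is smaller than the threshold voltage, then float the source and apply the standard voltage to the gate) can be illustrated with a toy numerical model. This is not the actual pixel circuit: the voltages, step size, and function name are illustrative assumptions, and the floating source is modeled as a node that discharges through the P-channel drive transistor until it cuts off.

```python
def threshold_correction(v_th, v_first, v_second, v_standard, step=0.01):
    """Toy model of the drive sequence: with the source floating and the
    standard voltage on the gate, the source node discharges through the
    P-channel drive transistor until the source-gate voltage falls to
    |v_th|, at which point the transistor cuts off. The stored
    source-gate voltage then equals the threshold being corrected."""
    assert abs(v_first - v_second) < abs(v_th), "initialisation condition of the claim"
    v_source = v_first       # first voltage on the source electrode
    v_gate = v_standard      # standard voltage on the gate, source floating
    # A P-channel device conducts while v_source - v_gate exceeds |v_th|.
    while v_source - v_gate > abs(v_th):
        v_source -= step     # floating source discharges toward cut-off
    return v_source - v_gate  # stored Vsg, ideally |v_th|
```

The returned source-gate voltage converges (to within one step) on |v_th|, which is what lets the storage capacitor cancel threshold variation between pixels.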
11,004 | 11,004 | 15,765,920 | 2,644 | An imaging system includes an imaging device; and an information processing device that receives a first communication from the imaging device, makes a condition determination, and, in a case where the imaging device satisfies a predetermined condition, sends a second communication to the imaging device, wherein the second communication includes at least a power ON request data. | 1. An imaging system including:
an imaging device; and an information processing device configured to receive a first communication from the imaging device, to make a condition determination, and, in a case where the imaging device satisfies a predetermined condition, to send a second communication to the imaging device, wherein the second communication includes at least a power ON request data. 2. The imaging system according to claim 1, wherein the first communication is a scan response. 3. The imaging system according to claim 1, wherein the second communication includes a connection request. 4. An information processing device configured to:
receive a first communication from an imaging device; make a condition determination; and in a case where the imaging device satisfies a predetermined condition, to send a second communication to the imaging device, wherein the second communication includes at least a power ON request data. 5. The information processing device according to claim 4, wherein the information processing device is configured to receive advertising packets from the imaging device. 6. The information processing device according to claim 5, wherein the information processing device is configured to determine whether the imaging device has been paired with the information processing device, upon receiving the advertising packets continuously sent from the imaging device. 7. The information processing device according to claim 6, wherein, in a case where the information processing device determines that the imaging device has been paired with the information processing device, the information processing device is configured to transmit a scan request to the imaging device. 8. The information processing device according to claim 4, wherein the imaging device is one of a plurality of imaging devices, and the information processing device is configured to perform a scanning until connections to a predetermined number of the plurality of imaging devices have been established. 9. The information processing device according to claim 8, wherein the predetermined number is set by a user. 10. The information processing device according to claim 4, wherein the first communication is a scan response. 11. The information processing device according to claim 4, wherein the second communication includes a connection request. 12. The information processing device according to claim 4, wherein the predetermined condition corresponds to whether the imaging device and the information processing device have a predetermined master-slave relationship. 13. 
The information processing device according to claim 4, wherein the predetermined condition corresponds to whether the imaging device and the information processing device have been recently connected within a predetermined amount of time. 14. The information processing device according to claim 4, wherein the predetermined condition corresponds to whether the imaging device and the information processing device have a predetermined connection history. 15. The information processing device according to claim 4, wherein the imaging device and the information processing device are respectively configured to perform communication in a plurality of communication schemes. 16. The information processing device according to claim 15, wherein the plurality of communication schemes includes two or more selected from: a Bluetooth standard, a Wi-Fi standard, a near field communication standard, an optical wireless communication scheme, an audio communication scheme, a wireless communication scheme, and a wired communication scheme. 17. The information processing device according to claim 4, wherein the information processing device includes a control unit, a communication unit, and a display. 18. The information processing device according to claim 4, wherein the imaging device is a wearable wristband terminal. 19. A method of remotely communicating with an imaging device, comprising:
receiving, by an information processing device, a first communication from the imaging device; making, by the information processing device, a condition determination; and in a case where the imaging device satisfies a predetermined condition, sending, by the information processing device, a second communication to the imaging device, wherein the second communication includes at least a power ON request data. 20. A non-transitory computer-readable medium storing therein instructions that, when executed by a processor of an information processing device, cause the information processing device to perform operations comprising:
receiving a first communication from the imaging device; making a condition determination; and in a case where the imaging device satisfies a predetermined condition, sending a second communication to the imaging device, wherein the second communication includes at least a power ON request data. | 2,600
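Claims 4–14 in the row above describe a concrete flow: the information processing device receives advertising packets and a scan response (the first communication), checks a predetermined condition (pairing state, a recent-connection window, or connection history), and, if satisfied, sends a connection request carrying power-ON request data (the second communication), scanning until a user-settable number of devices is connected. The following is a minimal sketch of that gating logic only; the class names, the 300-second recency window, and the default of two connections are all assumptions, not taken from the claims.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ImagingDevice:
    address: str
    paired: bool = False
    last_connected: Optional[float] = None  # epoch seconds; None if never connected

@dataclass
class InformationProcessingDevice:
    recent_window_s: float = 300.0   # "predetermined amount of time" (claim 13), assumed value
    target_connections: int = 2      # "predetermined number" (claims 8-9), user-settable
    connected: List[str] = field(default_factory=list)

    def condition_satisfied(self, dev: ImagingDevice, now: float) -> bool:
        # Claims 6 and 12-14 allow several predetermined conditions; this sketch
        # accepts a paired device or one connected within the recency window.
        if dev.paired:
            return True
        return (dev.last_connected is not None
                and now - dev.last_connected <= self.recent_window_s)

    def on_scan_response(self, dev: ImagingDevice, now: float) -> Optional[Dict]:
        """Handle the 'first communication'; maybe return the 'second communication'."""
        if len(self.connected) >= self.target_connections:
            return None  # claim 8: stop scanning once enough connections exist
        if not self.condition_satisfied(dev, now):
            return None
        self.connected.append(dev.address)
        # Claims 1 and 11: a connection request that includes power ON request data
        return {"type": "connection_request", "power_on": True, "to": dev.address}
```

The returned dictionary merely stands in for a real connection-request PDU; an actual implementation would sit on a Bluetooth stack rather than plain Python objects.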
11,005 | 11,005 | 15,937,653 | 2,659 | A system and method of speech recognition involving a mobile device. Speech input is received ( 202 ) on a mobile device ( 102 ) and converted ( 204 ) to a set of phonetic symbols. Data relating to the phonetic symbols is transferred ( 206 ) from the mobile device over a communications network ( 104 ) to a remote processing device ( 106 ) where it is used ( 208 ) to identify at least one matching data item from a set of data items ( 114 ). Data relating to the at least one matching data item is transferred ( 210 ) from the remote processing device to the mobile device and presented ( 214 ) thereon. | 1. An electronic device, comprising:
one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving speech input;
converting the speech input to a set of phonetic symbols on the electronic device; and
transferring data relating to the phonetic symbols to a remote processing device over a communications network;
receiving data relating to at least one matching data item from a remote processing device; and
presenting data relating to the at least one received matching data item. 2. The electronic device of claim 1, wherein presenting data relating to the at least one received matching data item comprises:
displaying an orthographic representation of the at least one received matching data item. 3. The electronic device of claim 1, wherein presenting data relating to the at least one received matching data item comprises:
outputting data corresponding to a map coordinate of a location represented by a received matching data item. 4. The electronic device of claim 1, wherein presenting data relating to the at least one received matching data item comprises:
outputting data corresponding to an identification code for a media item represented by the received matching data item. 5. The electronic device of claim 1, wherein presenting data relating to the at least one received matching data item comprises:
generating a spoken form of the at least one received matching data item. 6. The electronic device of claim 1, wherein a number of the matching data items received from the remote processing device corresponds to a lower of: a maximum number of data items to be displayed, or a number of data items arranged in order of decreasing posterior probability corresponding to the phonetic symbols down to a predetermined threshold value. 7. The electronic device of claim 6, wherein the posterior probability is based on a match score, wherein the match score is generated by matching phonetic symbols to phonetic reference forms corresponding to the matching data items, and wherein the match score is normalized. 8. The electronic device of claim 1, wherein the set of phonetic symbols comprises:
a sequence of phonetic symbols, a lattice of phonetic symbols, or a combination thereof. 9. The electronic device of claim 1, wherein the one or more programs further include instructions for:
storing data representing the speech input; and performing a rescoring process using the data relating to the at least one received matching data item and the stored speech input data. 10. The electronic device of claim 9, wherein the rescoring process further comprises producing a network including data representing phonetic specifications of the received data relating to at least one matching data item. 11. The electronic device of claim 9, wherein the rescoring process further comprises:
generating, for each received matching data item to be rescored, sequences or lattices of acoustic hidden Markov models (HMMs), the HMMs representing phonetic units corresponding to a phonetic specification of reference pronunciations, wherein the reference pronunciations correspond to each received matching data item to be rescored. 12. The electronic device of claim 11, wherein the rescoring process further comprises:
receiving compressed data based on shared common elements in the phonetic specification to reduce an amount of data received at the electronic device for the rescoring process. 13. The electronic device of claim 11, wherein the rescoring process further comprises:
receiving data specifying a general-purpose sub-grammar representing a set of alternative number-word sequences for the rescoring process; and determining, using the sub-grammar, a most likely one of the number-word sequences to be presented. 14. The electronic device of claim 11, wherein the rescoring process further comprises:
receiving phonetic specifications corresponding to the received matching data items, the specifications associated with indices; selecting one or more phonetic specifications to be presented; transferring indices corresponding to each of the selected phonetic specifications to the remote processing device; receiving, from the remote processing device, data representing a full description of data items corresponding to the transferred indices; and presenting the data representing the full description of data items. 15. The electronic device of claim 11, wherein the rescoring process further comprises:
receiving, from the remote processing device, a predetermined number of matching data items corresponding to best matches of the phonetic symbols and data items in a set of data items. 16. The electronic device of claim 10, wherein each of the sequences or lattices is matched against a sequence of frames of spectrum parameters corresponding to the speech input data stored in the electronic device to produce a match score. 17. The electronic device of claim 16, wherein the matching of the sequences or lattices involves at least one of a Viterbi time alignment and a full forward probability method. 18. The electronic device of claim 1, wherein the mobile device comprises a mobile telephone. 19. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to:
receive speech input; convert the speech input to a set of phonetic symbols on the electronic device; and transfer data relating to the phonetic symbols to a remote processing device over a communications network. 20. A method, comprising:
at an electronic device having one or more processors:
receiving speech input;
converting the speech input to a set of phonetic symbols on the electronic device; and
transferring data relating to the phonetic symbols to a remote processing device over a communications network. | 2,600
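Claims 6–7 of the row above select matches by posterior probability: at most the display limit is returned, keeping only items whose normalized match score stays above a threshold, in decreasing order. The sketch below assumes softmax normalization of the raw match scores — the claims only require that the score be normalized — and the function name and signature are invented for illustration.

```python
import math

def select_matches(scored, max_display, threshold):
    """scored: list of (item, raw_match_score) pairs.

    Returns items in decreasing posterior order, keeping only those whose
    softmax-normalized posterior is >= threshold (claim 6's "predetermined
    threshold"), capped at max_display (claim 6's display maximum)."""
    if not scored:
        return []
    z = sum(math.exp(s) for _, s in scored)  # claim 7: normalize match scores
    posteriors = sorted(((item, math.exp(s) / z) for item, s in scored),
                        key=lambda pair: pair[1], reverse=True)
    above = [item for item, p in posteriors if p >= threshold]
    return above[:max_display]
```

With scores 2.0, 1.0 and −3.0 the posteriors are roughly 0.73, 0.27 and 0.005, so a 0.1 threshold keeps two items and a display cap of one keeps only the best — matching the "lower of" rule in claim 6.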
11,006 | 11,006 | 15,903,501 | 2,612 | Described herein are systems and methods for using depth and image information for an object within a space to determine physical dimensions for the object and a relative location of the object within the space with respect to a capturing device such as a computer device. In embodiments, the object may be a building feature that is accessible prior to obstructing objects, such as dry wall, obstructing the visibility and accessibility of the building feature. A scene or image information of the space associated with the object may be associated with the depth information and object identification for recall after an obstructing object has been placed over the building feature. The scene may be recalled via an application and may present the location and dimensions for the building feature despite the presence of an obscuring object. | 1. A computer-implemented method, comprising:
receiving, via a user interface of a computing device, user input that comprises a number of points in a space that correspond to a building feature in the space; determining, by the computing device, a distance from the computing device to the building feature based at least in part on depth information received from at least one sensor of the computing device, in relation to the number of points; determining, by the computing device, a relative location of the computing device with respect to the building feature based at least in part on location information obtained by the computing device in response to receiving the user input; capturing, by the computing device, at least one image of the building feature and generating, based on the at least one image, first image information that corresponds to the building feature within the space; determining, by the computing device, an origin point within the space to associate with a data file for recalling the building feature within a display of the computing device based at least in part on the relative location of the computing device, the first image information, the building feature within the space, and received second image information that corresponds to objects within the space; and generating, by the computing device, the data file that includes a data object for the building feature, the distance to the building feature within the space, the relative location of the computing device with respect to the building feature, the first image information, and the second image information, the data file configured to present, within a display of the computing device, a 3D representation of the building feature in a particular location of the display that visually corresponds to the location of the building feature within the space, the 3D representation being recallable by the computing device based at least in part on the origin point and configured to present the 3D representation within the display after an 
obscuring object is installed within the space so as to physically obscure the building feature from view by the computing device. 2. The computer-implemented method of claim 1, wherein the computing device captures the depth information using one or more depth sensors. 3. The computer-implemented method of claim 2, wherein the one or more depth sensors include one or more stereoscopic cameras of the computing device. 4. The computer-implemented method of claim 1, further comprising calculating physical dimensions for the building feature in the space based at least in part on the depth information. 5. The computer-implemented method of claim 1, wherein the user input associates the building feature with a particular utility. 6. The computer-implemented method of claim 1, further comprising receiving information, via the user interface, that modifies metadata for the data file that corresponds to the data object for the building feature, the metadata comprising particular information that corresponds to a type of utility line or a type of structural component. 7. The computer-implemented method of claim 1, wherein the data file is configured to be presented in a web browser. 8. A system comprising:
one or more camera devices;
a processor;
a display; and
a memory including instructions that, when executed with the processor, cause the system to, at least:
obtain, from the one or more camera devices, depth information that corresponds to a plurality of points in a space captured by the one or more camera devices, the plurality of points in the space associated with input received by the system;
calculate, using the depth information, a distance from the one or more camera devices to the plurality of points in the space and physical dimensions for a building feature associated with the plurality of points in the space;
obtain, from the one or more camera devices, first image information for the building feature;
obtain, from the one or more camera devices, second image information for one or more objects within the space;
determine an origin point within the space to associate with a data file for recalling the building feature within the display based at least in part on the distance, the building feature associated with the plurality of points in the space, the first image information, and the second image information; and
generate the data file that includes a 3D representation of the building feature within the space and conforms to the calculated distance and the physical dimensions based at least in part on the first image information, the second image information, and the calculated distance, the data file configured to present, within the display, the 3D representation of the building feature in a particular location of the display that visually corresponds to the location of the building feature within the space, the 3D representation being recallable by the processor based at least in part on the origin point and configured to present the 3D representation within the display after an obscuring object is installed within the space so as to physically obscure the building feature from view by the system. 9. The system of claim 8, wherein the data file is further configured to be consumed by an application of a computer device to present the 3D representation of the building feature within the space. 10. The system of claim 9, wherein presenting the 3D representation of the building feature within the space comprises presenting, within the display, the 3D representation of the building feature in a location within the space and between a current location of the computer device and the location of the building feature. 11. The system of claim 10, wherein the application is configured to display an augmented reality image that corresponds to the first image information and the second image information. 12. The system of claim 10, wherein presenting the 3D representation of the building feature within the space, via the application, is in response to matching the augmented reality image to a particular portion of the space that corresponds to the second image information. 13. The system of claim 8, wherein the instructions are further configured to cause the system to at least:
in response to a request for the data file:
generate an image file that comprises a static image of the 3D representation of the building feature within the space that conforms to the calculated distance and the physical dimensions; and
transmit the image file to a requestor associated with the request. 14. An apparatus comprising:
a camera device configured to capture image information; a depth sensor device configured to capture depth information; a mobile application stored in a computer-readable medium that, when executed, causes the apparatus to, at least:
receive depth information from the depth sensor for one or more points within a space that correspond to first image information captured using the camera device, the one or more points indicated via the mobile application;
receive user input, via the mobile application, that indicates a type of building feature to associate with the one or more points within the space;
calculate, using the depth information and the user input, a distance and physical dimensions for the building feature within the space that corresponds to the one or more points;
receive second image information captured using the camera device that corresponds to one or more objects within the space;
determine an origin point within the space to associate with a data file for recalling the building feature within a display of the mobile application based at least in part on the distance, the building feature associated with the one or more points within the space, the first image information, and the second image information; and
generate the data file that includes a data object that comprises a 3D representation of the building feature within the space that is configured to communicate, via the display of the mobile application, a first location of the building feature within the space relative to the apparatus and the physical dimensions for the building feature based at least in part on the depth information, the first image information, and the second image information,
wherein, after an obscuring object is installed within the space so as to physically obscure the building feature from view by the apparatus, the 3D representation is recallable by the mobile application based at least in part on the origin point and configured to present the 3D representation within the display in a particular location of the display that visually corresponds to the location of the building feature. 15. The apparatus of claim 14, wherein the mobile application is further configured to cause the apparatus to capture, using the camera device, an image of an object of the one or more objects or a structural feature within the space to associate with the 3D representation of the object. 16. The apparatus of claim 14, wherein the mobile application is further configured to present, via the mobile application, an augmented reality presentation of the second image information within the space to match to a real-time image of the space to further present the data object within the space. 17. The apparatus of claim 14, wherein the mobile application is further configured to use the one or more points within the space to identify a plurality of types of objects within the space. 18. The apparatus of claim 17, wherein each type of object of the plurality of types of objects is stored as a layer within the data object. 19. The apparatus of claim 14, wherein the apparatus further comprises an accelerometer and a compass. 20. The apparatus of claim 19, wherein the mobile application is further configured to determine a relative location of the apparatus within the space based at least in part on sensor input obtained by the accelerometer and the compass in response to receiving the user input. 
determine an origin point within the space to associate with a data file for recalling the building feature within the display based at least in part on the distance, the building feature associated with the plurality of points in the space, the first image information, and the second image information; and
generate the data file that includes a 3D representation of the building feature within the space and conforms to the calculated distance and the physical dimensions based at least in part on the first image information, the second image information, and the calculated distance, the data file configured to present, within the display, the 3D representation of the building feature in a particular location of the display that visually corresponds to the location of the building feature within the space, the 3D representation being recallable by the processor based at least in part on the origin point and configured to present the 3D representation within the display after an obscuring object is installed within the space so as to physically obscure the building feature from view by the system. 9. The system of claim 8, wherein the data file is further configured to be consumed by an application of a computer device to present the 3D representation of the building feature within the space. 10. The system of claim 9, wherein presenting the 3D representation of the building feature within the space comprises presenting, within the display, the 3D representation of the building feature in a location within the space and between a current location of the computer device and the location of the building feature. 11. The system of claim 10, wherein the application is configured to display an augmented reality image that corresponds to the first image information and the second image information. 12. The system of claim 10, wherein presenting the 3D representation of the building feature within the space, via the application, is in response to matching the augmented reality image to a particular portion of the space that corresponds to the second image information. 13. The system of claim 8, wherein the instructions are further configured to cause the system to at least:
in response to a request for the data file:
generate an image file that comprises a static image of the 3D representation of the building feature within the space that conforms to the calculated distance and the physical dimensions; and
transmit the image file to a requestor associated with the request. 14. An apparatus comprising:
a camera device configured to capture image information; a depth sensor device configured to capture depth information; a mobile application stored in a computer-readable medium that, when executed, causes the apparatus to, at least:
receive depth information from the depth sensor for one or more points within a space that correspond to first image information captured using the camera device, the one or more points indicated via the mobile application;
receive user input, via the mobile application, that indicates a type of building feature to associate with the one or more points within the space;
calculate, using the depth information and the user input, a distance and physical dimensions for the building feature within the space that corresponds to the one or more points;
receive second image information captured using the camera device that corresponds to one or more objects within the space;
determine an origin point within the space to associate with a data file for recalling the building feature within a display of the mobile application based at least in part on the distance, the building feature associated with the one or more points within the space, the first image information, and the second image information; and
generate the data file that includes a data object that comprises a 3D representation of the building feature within the space that is configured to communicate, via the display of the mobile application, a first location of the building feature within the space relative to the apparatus and the physical dimensions for the building feature based at least in part on the depth information, the first image information, and the second image information,
wherein, after an obscuring object is installed within the space so as to physically obscure the building feature from view by the apparatus, the 3D representation is recallable by the mobile application based at least in part on the origin point and configured to present the 3D representation within the display in a particular location of the display that visually corresponds to the location of the building feature. 15. The apparatus of claim 14, wherein the mobile application is further configured to cause the apparatus to capture, using the camera device, an image of an object of the one or more objects or a structural feature within the space to associate with the 3D representation of the object. 16. The apparatus of claim 14, wherein the mobile application is further configured to present, via the mobile application, an augmented reality presentation of the second image information within the space to match to a real-time image of the space to further present the data object within the space. 17. The apparatus of claim 14, wherein the mobile application is further configured to use the one or more points within the space to identify a plurality of types of objects within the space. 18. The apparatus of claim 17, wherein each type of object of the plurality of types of objects is stored as a layer within the data object. 19. The apparatus of claim 14, wherein the apparatus further comprises an accelerometer and a compass. 20. The apparatus of claim 19, wherein the mobile application is further configured to determine a relative location of the apparatus within the space based at least in part on sensor input obtained by the accelerometer and the compass in response to receiving the user input. | 2,600 |
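The distance and dimension calculation recited in claims 1, 4, and 8 of the record above (turning per-pixel depth readings at user-selected points into physical coordinates and sizes) can be illustrated with standard pinhole-camera back-projection. This is a minimal sketch, not the patent's implementation; the function names, intrinsics, and point values are invented for illustration.

```python
import math

def back_project(u, v, depth, fx, fy, cx, cy):
    """Convert a pixel (u, v) with a depth reading into camera-space (X, Y, Z)
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    return ((u - cx) * depth / fx, (v - cy) * depth / fy, depth)

def feature_dimension(p1, p2):
    """Euclidean distance between two back-projected points, i.e. the physical
    size of the span the user marked on the building feature."""
    return math.dist(p1, p2)

# Illustrative intrinsics: 500 px focal length, principal point at image center.
fx = fy = 500.0
cx, cy = 320.0, 240.0

# Two tapped points on a stud, both at 2.0 m depth, 500 px apart horizontally.
left = back_project(320, 240, 2.0, fx, fy, cx, cy)
right = back_project(820, 240, 2.0, fx, fy, cx, cy)

print(feature_dimension(left, right))  # width of the marked span, in meters
```

With these made-up numbers the marked span works out to 2.0 m wide, showing how a depth sensor plus selected points yields the physical dimensions stored in the claimed data file.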
11,007 | 11,007 | 15,636,066 | 2,657 | Examples of the present disclosure describe systems and methods for automatically assisting conversations using a graph database. In order to minimize misunderstanding of words and phrases used by participants during a conversation, phrases from the conversation may be received by a conversation assistance application as the conversation takes place. Entities may be extracted from the phrase based on natural language recognition according to a domain context of the participant being assisted. One or more tags may be looked up from a graph database, and may be provided to the participant as a list of hashtags related to the conversation. Links to documents may be extracted based on the tags for the participant for viewing during the conversation. | 1. A system for automatically assisting conversation, the system comprising:
at least one processor; and a memory storing instructions that when executed by the at least one processor perform a method for providing tags for the conversation and links to documents related to the conversation, the method comprising:
receiving at least one phrase from a conversation between a first participant and a second participant;
extracting at least one tag based on the at least one received phrase;
providing the at least one tag;
receiving a tag;
based on the received tag, retrieving one or more links to documents from at least one graph database; and
providing a first set of links to the first participant and a second set of links to the second participant, wherein the first set of links is different from the second set of links. 2. The system of claim 1, wherein extracting at least one tag comprises:
extracting at least one entity from the received phrase based on natural language recognition; and based on the extracted at least one entity, retrieving at least one tag from at least one graph database. 3. The system of claim 1, wherein retrieving links to documents comprises:
retrieving tag nodes from the at least one graph database; based on the retrieved tag nodes, retrieving at least one link to documents associated with the retrieved tag nodes; and ranking the retrieved at least one link to documents based on relevance to the extracted at least one entity. 4. The system of claim 1, wherein the at least one tag is associated with at least one document. 5. The system of claim 2, wherein the natural language recognition is based on at least one of a first domain of the first participant and a second domain of the second participant. 6. The system of claim 2, further comprising identifying at least one common domain context among the first participant and the second participant to the conversation, wherein the natural language recognition is based on the common domain context. 7. The system of claim 1, wherein the at least one graph database comprises a tag graph, and wherein the tag graph comprises a tag node and an edge originating from the tag node to a document link node. 8. The system of claim 1, wherein the at least one graph database comprises a document link graph, the document link graph comprising a document link node, at least one edge from the document link node to a document node, and at least one edge from the document link node to an access control node. 9. A method for automatically assisting conversation using a graph database, the method comprising:
receiving identities of participants of a conversation, wherein one of the participants is a requesting participant; receiving at least one phrase from the conversation; extracting at least one tag from the at least one received phrase; providing the at least one tag to the requesting participant; receiving a tag from the requesting participant; based on the received tag, retrieving links to documents from at least one graph database; and providing the links to documents to the requesting participant. 10. The method of claim 9, the method further comprising:
receiving identity of a participant of the conversation, wherein extracting at least one tag comprises:
extracting at least one entity from the received phrase based on natural language recognition; and
based on the extracted at least one entity, retrieving at least one tag from at least one graph database. 11. The method of claim 9, the method further comprising:
receiving identity of a participant of the conversation, wherein retrieving links to documents comprises:
retrieving tag nodes from the at least one graph database;
based on the retrieved tag nodes, retrieving at least one link to documents associated with the retrieved tag nodes; and
ranking the retrieved at least one link to documents based on relevance to the extracted at least one entity. 12. The method of claim 9, wherein the at least one tag is associated with at least one document, and wherein the participant has access to the at least one document. 13. The method of claim 10, wherein the natural language recognition is based on domain context of the participant. 14. The method of claim 10, further comprising:
receiving identities of all participants of the conversation; identifying at least one common domain context among all participants of the conversation, wherein the natural language recognition is based on the common domain context of the participant. 15. The method of claim 9, wherein the at least one graph database comprises a tag graph comprising a tag node and an edge originating from the tag node to a document link node. 16. The method of claim 9, wherein the at least one graph database comprises a document link graph comprising a document link node, at least one edge from the document link node to a document, and at least one edge from the document link node to an access control node. 17. A computer-readable storage device with a memory storing computer executable instructions which, when connected to and executed by at least one processor, perform a method for automatically assisting conversations using a graph database, the method comprising:
receiving at least one phrase from a conversation; transcoding the at least one phrase to at least one text phrase; extracting at least one tag from the at least one text phrase; providing the at least one tag; receiving a tag; based on the received tag, retrieving links to documents from at least one graph database; and providing the links to documents. 18. The computer-readable storage device of claim 17, the method further comprising:
receiving identity of a participant of the conversation, wherein extracting at least one tag comprises: extracting at least one entity from the received phrase based on natural language recognition; and based on the extracted at least one entity, retrieving at least one tag from at least one graph database. 19. The computer-readable storage device of claim 17, the method further comprising:
receiving identity of a participant of the conversation, wherein retrieving links to documents comprises:
retrieving tag nodes from the at least one graph database;
based on the retrieved tag nodes, retrieving at least one link to documents associated with the retrieved tag nodes; and
ranking the retrieved at least one link to documents based on relevance to the extracted at least one entity. 20. The computer-readable storage device of claim 17, wherein the at least one tag is associated with at least one document, and wherein the participant has access to the at least one document. | 2,600 |
11,008 | 11,008 | 16,277,014 | 2,646 | Example methods and systems for adjusting the beam width of radio frequency (RF) signals for purposes of balloon-to-ground communication are described. One example method includes determining, based on respective locations of a plurality of balloons and areas covered by respective ground-facing communication beams of the balloons, a contiguous ground coverage area served by the plurality of balloons, where the communication beam of a balloon defines a corresponding individual coverage area within the ground coverage area, determining a change in position of at least one of the balloons, based on the change in position of the at least one balloon, determining an adjustment to a first of the individual coverage areas in an effort to maintain the contiguous ground coverage area after the change in position of at least one of the balloons, and adjusting a width of the ground-facing communication beam of the balloon corresponding to the first individual coverage area in order to make the determined adjustment to the first individual coverage area. | 1. A method comprising:
determining, by one or more processors, state information for a plurality of high altitude platforms forming at least part of a wireless communication network, the state information including location data for each of the plurality of high altitude platforms, communication link information and meteorological information; predicting, by the one or more processors, a potential coverage gap of the wireless communication network based on expected movement of one or more of the plurality of high altitude platforms and at least a portion of the state information; in response to the predicted potential coverage gap, selecting one of a plurality of antennas of a given one of the plurality of high altitude platforms to adjust a coverage area of the given one of the plurality of high altitude platforms; and transmitting a communication signal to a receiver device via the selected antenna of the given one of the plurality of high altitude platforms. 2. The method of claim 1, wherein selecting one of the plurality of antennas includes switching from a first antenna having a first beam width to a second antenna having a second beam width, the first beam width being narrower than the second beam width. 3. The method of claim 1, wherein selecting one of the plurality of antennas includes switching from a first antenna having a first beam width to a second antenna having a second beam width, the first beam width being wider than the second beam width. 4. The method of claim 3, wherein the antenna with the second beam width is selected to decrease a coverage area size and to provide a stronger signal to a smaller area on the ground. 5. The method of claim 1, wherein selecting one of the plurality of antennas includes adjusting a beam width to provide a certain amount of coverage to particular areas on the ground based on demand level. 6. The method of claim 1, wherein:
the given one of the plurality of high altitude platforms comprises a balloon; and predicting the potential coverage gap of the wireless communication network includes evaluating the meteorological information associated with the balloon to determine an expected altitude or lateral position change to the balloon. 7. The method of claim 1, further comprising:
determining a contiguous ground coverage area by the plurality of high altitude platforms based on individual coverage areas for each of the high altitude platforms; wherein selecting one of the plurality of antennas includes choosing an antenna beam width to provide the contiguous ground coverage area with a minimal amount of overlap between the individual coverage areas. 8. The method of claim 1, further comprising:
determining a contiguous ground coverage area by the plurality of high altitude platforms based on individual coverage areas for each of the high altitude platforms; wherein selecting one of the plurality of antennas includes choosing an antenna beam width to provide the contiguous ground coverage area with a determined amount of overlap between the individual coverage areas to provide redundancy between at least some of the plurality of high altitude platforms. 9. The method of claim 1, further comprising:
determining a service level demand within a first region of a contiguous coverage area; wherein selecting one of the plurality of antennas includes choosing from among the plurality of antennas to satisfy the service level demand within the first region of the contiguous coverage area. 10. A system comprising:
a plurality of high altitude platforms forming at least part of a wireless communication network; and a control system including one or more processors, the one or more processors being configured to: determine state information for the plurality of high altitude platforms, the state information including location data for each of the plurality of high altitude platforms, communication link information and meteorological information; predict a potential coverage gap of the wireless communication network based on expected movement of one or more of the plurality of high altitude platforms and at least a portion of the state information; in response to the predicted potential coverage gap, select one of a plurality of antennas of a given one of the plurality of high altitude platforms to adjust a coverage area of the given one of the plurality of high altitude platforms; and instruct the given high altitude platform to use the selected antenna when transmitting a communication signal to a receiver device. 11. The system of claim 10, wherein:
the given one of the plurality of high altitude platforms comprises a balloon; and prediction of the potential coverage gap of the wireless communication network includes an evaluation of the meteorological information associated with the balloon to determine an expected altitude or lateral position change to the balloon. 12. A signal routing method comprising:
determining, by one or more processors in a communication network, state information for a plurality of high altitude platforms forming at least part of the communication network, the state information including one or more of location data for each of the plurality of high altitude platforms, communication link information or meteorological information; determining, by the one or more processors according to the state information, one or more routing paths for a communication signal through a subset of the plurality of high altitude platforms, at least one of the one or more routing paths being transparent without signal conversion; selecting, by the one or more processors, a transparent routing path from among the one or more routing paths; and transmitting the communication signal to a receiver device via the transparent routing path. 13. The signal routing method of claim 12, wherein the transparent routing path comprises a plurality of free-space optical links between the subset of high altitude platforms. 14. The signal routing method of claim 12, wherein determining the one or more routing paths includes identifying adaptive routing between first and second high altitude platforms of the plurality of high altitude platforms, where a lightpath between the first and second high altitude platforms is determined and set up when a connection is needed and released at a later time. 15. The signal routing method of claim 14, wherein the lightpath is determined dynamically depending upon at least one of a current state, a past state, or a predicted state of the plurality of high altitude platforms. 16. The signal routing method of claim 12, wherein determining the one or more routing paths includes evaluating which paths implement wavelength division multiplexing. 17. The signal routing method of claim 16, wherein selecting the transparent routing path includes assigning a same wavelength for all optical links on the transparent routing path. 18. The signal routing method of claim 12, wherein one or more of the plurality of high altitude platforms comprises a balloon. 19. A system comprising:
a plurality of high altitude platforms forming at least part of a wireless communication network; and a control system including one or more processors, the one or more processors being configured to:
determine state information for the plurality of high altitude platforms, the state information including one or more of location data for each of the plurality of high altitude platforms, communication link information or meteorological information;
determine, according to the state information, one or more routing paths for a communication signal through a subset of the plurality of high altitude platforms, at least one of the one or more routing paths being transparent without signal conversion;
select a transparent routing path from among the one or more routing paths; and
inform all high altitude platforms along the transparent routing path of the selection. 20. The system of claim 19, wherein one or more of the high altitude platforms along the transparent routing path comprises a balloon. | Example methods and systems for adjusting the beam width of radio frequency (RF) signals for purposes of balloon-to-ground communication are described. One example method includes determining, based on respective locations of a plurality of balloons and areas covered by respective ground-facing communication beams of the balloons, a contiguous ground coverage area served by the plurality of balloons, where the communication beam of a balloon defines a corresponding individual coverage area within the ground coverage area, determining a change in position of at least one of the balloons, based on the change in position of the at least one balloon, determining an adjustment to a first of the individual coverage areas in an effort to maintain the contiguous ground coverage area after the change in position of at least one of the balloons, and adjusting a width of the ground-facing communication beam of the balloon corresponding to the first individual coverage area in order to make the determined adjustment to the first individual coverage area.1. A method comprising:
determining, by one or more processors, state information for a plurality of high altitude platforms forming at least part of a wireless communication network, the state information including location data for each of the plurality of high altitude platforms, communication link information and meteorological information; predicting, by the one or more processors, a potential coverage gap of the wireless communication network based on expected movement of one or more of the plurality of high altitude platforms and at least a portion of the state information; in response to the predicted potential coverage gap, selecting one of a plurality of antennas of a given one of the plurality of high altitude platforms to adjust a coverage area of the given one of the plurality of high altitude platforms; and transmitting a communication signal to a receiver device via the selected antenna of the given one of the plurality of high altitude platforms. 2. The method of claim 1, wherein selecting one of the plurality of antennas includes switching from a first antenna having a first beam width to a second antenna having a second beam width, the first beam width being narrower than the second beam width. 3. The method of claim 1, wherein selecting one of the plurality of antennas includes switching from a first antenna having a first beam width to a second antenna having a second beam width, the first beam width being wider than the second beam width. 4. The method of claim 3, wherein the antenna with the second beam width is selected to decrease a coverage area size and to provide a stronger signal to a smaller area on the ground. 5. The method of claim 1, wherein selecting one of the plurality of antennas includes adjusting a beam width to provide a certain amount of coverage to particular areas on the ground based on demand level. 6. The method of claim 1, wherein:
the given one of the plurality of high altitude platforms comprises a balloon; and predicting the potential coverage gap of the wireless communication network includes evaluating the meteorological information associated with the balloon to determine an expected altitude or lateral position change to the balloon. 7. The method of claim 1, further comprising:
determining a contiguous ground coverage area by the plurality of high altitude platforms based on individual coverage areas for each of the high altitude platforms; wherein selecting one of the plurality of antennas includes choosing an antenna beam width to provide the contiguous ground coverage area with a minimal amount of overlap between the individual coverage areas. 8. The method of claim 1, further comprising:
determining a contiguous ground coverage area by the plurality of high altitude platforms based on individual coverage areas for each of the high altitude platforms; wherein selecting one of the plurality of antennas includes choosing an antenna beam width to provide the contiguous ground coverage area with a determined amount of overlap between the individual coverage areas to provide redundancy between at least some of the plurality of high altitude platforms. 9. The method of claim 1, further comprising:
determining a service level demand within a first region of a contiguous coverage area; wherein selecting one of the plurality of antennas includes choosing from among the plurality of antennas to satisfy the service level demand within the first region of the contiguous coverage area. 10. A system comprising:
a plurality of high altitude platforms forming at least part of a wireless communication network; and a control system including one or more processors, the one or more processors being configured to: determine state information for the plurality of high altitude platforms, the state information including location data for each of the plurality of high altitude platforms, communication link information and meteorological information; predict a potential coverage gap of the wireless communication network based on expected movement of one or more of the plurality of high altitude platforms and at least a portion of the state information; in response to the predicted potential coverage gap, select one of a plurality of antennas of a given one of the plurality of high altitude platforms to adjust a coverage area of the given one of the plurality of high altitude platforms; and instruct the given high altitude platform to use the selected antenna when transmitting a communication signal to a receiver device. 11. The system of claim 10, wherein:
the given one of the plurality of high altitude platforms comprises a balloon; and prediction of the potential coverage gap of the wireless communication network includes an evaluation of the meteorological information associated with the balloon to determine an expected altitude or lateral position change to the balloon. 12. A signal routing method comprising:
determining, by one or more processors in a communication network, state information for a plurality of high altitude platforms forming at least part of the communication network, the state information including one or more of location data for each of the plurality of high altitude platforms, communication link information or meteorological information; determining, by the one or more processors according to the state information, one or more routing paths for a communication signal through a subset of the plurality of high altitude platforms, wherein at least one of the one or more routing paths is transparent without signal conversion; selecting, by the one or more processors, a transparent routing path from among the one or more routing paths; and transmitting the communication signal to a receiver device via the transparent routing path. 13. The signal routing method of claim 12, wherein the transparent routing path comprises a plurality of free-space optical links between the subset of high altitude platforms. 14. The signal routing method of claim 12, wherein determining the one or more routing paths includes identifying adaptive routing between first and second high altitude platforms of the plurality of high altitude platforms, where a lightpath between the first and second high altitude platforms is determined and set-up when a connection is needed and released at a later time. 15. The signal routing method of claim 14, wherein the lightpath is determined dynamically depending upon at least one of a current state, a past state, or a predicted state of the plurality of high altitude platforms. 16. The signal routing method of claim 12, wherein determining the one or more routing paths includes evaluating which paths implement wavelength division multiplexing. 17. The signal routing method of claim 16, wherein selecting the transparent routing path includes assigning a same wavelength for all optical links on the transparent routing path. 18. 
The signal routing method of claim 12, wherein one or more of the plurality of high altitude platforms comprises a balloon. 19. A system comprising:
a plurality of high altitude platforms forming at least part of a wireless communication network; and a control system including one or more processors, the one or more processors being configured to:
determine state information for the plurality of high altitude platforms, the state information including one or more of location data for each of the plurality of high altitude platforms, communication link information or meteorological information;
determine, according to the state information, one or more routing paths for a communication signal through a subset of the plurality of high altitude platforms, wherein at least one of the one or more routing paths is transparent without signal conversion;
select a transparent routing path from among the one or more routing paths; and
inform all high altitude platforms along the transparent routing path of the selection. 20. The system of claim 19, wherein one or more of the high altitude platforms along the transparent routing path comprises a balloon. | 2,600 |
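The beam-width selection recited in the record above (e.g. claims 2-5: switching between antennas of different beam widths to close a predicted coverage gap) can be sketched with a simple flat-earth disc-coverage model. Everything here — function names, the geometry, and the numeric values — is an illustrative assumption by the editor, not taken from the patent:

```python
import math

def coverage_radius(altitude_m: float, beam_width_deg: float) -> float:
    """Ground-coverage radius of a nadir-pointing beam (flat-earth model)."""
    return altitude_m * math.tan(math.radians(beam_width_deg / 2.0))

def select_antenna(altitude_m, neighbor_distance_m, beam_widths_deg):
    """Pick the narrowest beam that still closes the predicted coverage gap.

    A gap is predicted when two platforms' coverage discs no longer overlap,
    i.e. 2 * radius < distance between the platforms. Narrow beams are
    preferred because they concentrate signal power on a smaller area
    (cf. claim 4); the widest beam is the fallback.
    """
    for bw in sorted(beam_widths_deg):
        if 2.0 * coverage_radius(altitude_m, bw) >= neighbor_distance_m:
            return bw
    return max(beam_widths_deg)

# 20 km altitude, neighbor predicted to drift 40 km away; 30/60/90 degree antennas
print(select_antenna(20_000.0, 40_000.0, [30.0, 60.0, 90.0]))
```

With the neighbor at 40 km, only the 90° beam (radius 20 km) keeps the discs touching, so the widest antenna is selected; at 10 km separation the 30° beam already suffices.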
11,009 | 11,009 | 15,980,318 | 2,645 | A method and apparatus for storing a computer location including the steps of initiating, by a user of a device, a capture sequence, wherein the device includes a processor and a display screen, obtaining, by the processor, visual data corresponding to an image displayed on the display screen at the time of the initiating step, obtaining, by the processor, location data corresponding to a computer location accessed by the device, and storing, by the processor, the visual data and the location data as associated data such that the visual data and the location data are associated with each other. A method and apparatus for sending the computer location further including the step of sending, by the first device, the associated data to a second device, wherein the second device includes a second processor and a second display screen. | 1. A method for storing a computer location comprising the steps of:
initiating a capture sequence of a device, wherein the device comprises a processor and a display screen; wherein the capture sequence comprises:
obtaining, by the processor, visual data corresponding to an image displayed on the display screen at the time of the initiating of the capture sequence;
obtaining, by the processor, location data corresponding to a computer location accessed by the device; and
storing, by the processor, the visual data and the location data as associated data such that the visual data and the location data are associated with each other. 2. The method according to claim 1, wherein the initiating of the capture sequence is caused by a detection, by the processor, of an activation input from a user. 3. The method according to claim 2, wherein the initiating of the capture sequence causes the processor to perform the obtaining steps and the storing step without additional input from the user beyond the activation input. 4. The method according to claim 1, wherein the storing of the visual data and the location data comprises generating a data file that includes the associated data and storing the data file in a storage medium of the device. 5. The method according to claim 1, wherein the storing of the visual data and the location data comprises transmitting the associated data to a server for storage, wherein the server is remote from the device. 6. The method according to claim 1, wherein the location data includes a unique session identifier. 7. The method according to claim 1, further comprising obtaining, by the processor, video progress data, wherein the video progress data indicates a particular point in time of a video provided at the computer location. 8. The method according to claim 7, wherein the storing of the visual data and location data further comprises storing the video progress data with the associated data such that the visual data, the location data and the video progress data are associated with each other. 9. The method according to claim 1, further comprising displaying a plurality of images on the display screen, each image of the plurality of images corresponding to visual data of an associated data of a plurality of associated data. 10. A method for storing and sending a computer location comprising:
initiating a capture sequence of a first device, wherein the first device comprises a first processor and a first display screen; wherein the capture sequence comprises:
obtaining, by the first processor, visual data corresponding to an image displayed on the first display screen at the time of the initiating step;
obtaining, by the first processor, location data corresponding to a computer location accessed by the first device;
storing, by the first processor, the visual data and the location data as associated data such that the visual data and location data are associated with each other; and
sending, by the first device, the associated data to a second device, wherein the second device comprises a second processor and a second display screen. 11. The method according to claim 10, wherein the initiating of the capture sequence is caused by a detection, by the first processor, of an activation input from a user of the first device. 12. The method according to claim 10, further comprising displaying the image on the second display screen of the second device based on the associated data. 13. The method according to claim 12, further comprising interacting, by a user of the second device, with the image, thereby causing the second device to access the computer location. 14. The method according to claim 10, further comprising obtaining, by the first processor, video progress data, wherein the video progress data indicates a particular point in time of a video provided at the computer location, and wherein the storing of the visual data and location data further comprises storing the video progress data as the associated data such that the visual data, the location data and the video progress data are associated with each other. 15. The method according to claim 10, wherein the location data contains a unique session identifier. 16. The method according to claim 10, wherein the storing of the visual data and the location data comprises generating a data file that includes the associated data and storing the data file in a storage medium of the first device. 17. The method according to claim 10, wherein the storing of the visual data and the location data comprises transmitting the associated data to a server for storage, wherein the server is remote from the first device and the second device. 18. A device for storing a computer location comprising:
a processor; a display screen; an input element; and a computer readable non-transitory storage medium; wherein the device is configured to perform a capture sequence based on a detection, by the processor, of an activation input performed by a user on the input element; and wherein the capture sequence comprises:
obtaining, by the processor, visual data corresponding to an image displayed on the display screen at the time of the detection of the activation input by the processor;
obtaining, by the processor, location data corresponding to a computer location accessed by the device; and
storing, by the processor, the visual data and the location data as associated data in the storage medium such that the visual data and the location data are associated with each other. 19. The device according to claim 18, wherein the detection of the activation input by the processor causes the device to perform the capture sequence without additional input from the user beyond the activation input. 20. The device according to claim 18, wherein the device is configured to store the associated data with a plurality of other associated data. | A method and apparatus for storing a computer location including the steps of initiating, by a user of a device, a capture sequence, wherein the device includes a processor and a display screen, obtaining, by the processor, visual data corresponding to an image displayed on the display screen at the time of the initiating step, obtaining, by the processor, location data corresponding to a computer location accessed by the device, and storing, by the processor, the visual data and the location data as associated data such that the visual data and the location data are associated with each other. A method and apparatus for sending the computer location further including the step of sending, by the first device, the associated data to a second device, wherein the second device includes a second processor and a second display screen.1. A method for storing a computer location comprising the steps of:
initiating a capture sequence of a device, wherein the device comprises a processor and a display screen; wherein the capture sequence comprises:
obtaining, by the processor, visual data corresponding to an image displayed on the display screen at the time of the initiating of the capture sequence;
obtaining, by the processor, location data corresponding to a computer location accessed by the device; and
storing, by the processor, the visual data and the location data as associated data such that the visual data and the location data are associated with each other. 2. The method according to claim 1, wherein the initiating of the capture sequence is caused by a detection, by the processor, of an activation input from a user. 3. The method according to claim 2, wherein the initiating of the capture sequence causes the processor to perform the obtaining steps and the storing step without additional input from the user beyond the activation input. 4. The method according to claim 1, wherein the storing of the visual data and the location data comprises generating a data file that includes the associated data and storing the data file in a storage medium of the device. 5. The method according to claim 1, wherein the storing of the visual data and the location data comprises transmitting the associated data to a server for storage, wherein the server is remote from the device. 6. The method according to claim 1, wherein the location data includes a unique session identifier. 7. The method according to claim 1, further comprising obtaining, by the processor, video progress data, wherein the video progress data indicates a particular point in time of a video provided at the computer location. 8. The method according to claim 7, wherein the storing of the visual data and location data further comprises storing the video progress data with the associated data such that the visual data, the location data and the video progress data are associated with each other. 9. The method according to claim 1, further comprising displaying a plurality of images on the display screen, each image of the plurality of images corresponding to visual data of an associated data of a plurality of associated data. 10. A method for storing and sending a computer location comprising:
initiating a capture sequence of a first device, wherein the first device comprises a first processor and a first display screen; wherein the capture sequence comprises:
obtaining, by the first processor, visual data corresponding to an image displayed on the first display screen at the time of the initiating step;
obtaining, by the first processor, location data corresponding to a computer location accessed by the first device;
storing, by the first processor, the visual data and the location data as associated data such that the visual data and location data are associated with each other; and
sending, by the first device, the associated data to a second device, wherein the second device comprises a second processor and a second display screen. 11. The method according to claim 10, wherein the initiating of the capture sequence is caused by a detection, by the first processor, of an activation input from a user of the first device. 12. The method according to claim 10, further comprising displaying the image on the second display screen of the second device based on the associated data. 13. The method according to claim 12, further comprising interacting, by a user of the second device, with the image, thereby causing the second device to access the computer location. 14. The method according to claim 10, further comprising obtaining, by the first processor, video progress data, wherein the video progress data indicates a particular point in time of a video provided at the computer location, and wherein the storing of the visual data and location data further comprises storing the video progress data as the associated data such that the visual data, the location data and the video progress data are associated with each other. 15. The method according to claim 10, wherein the location data contains a unique session identifier. 16. The method according to claim 10, wherein the storing of the visual data and the location data comprises generating a data file that includes the associated data and storing the data file in a storage medium of the first device. 17. The method according to claim 10, wherein the storing of the visual data and the location data comprises transmitting the associated data to a server for storage, wherein the server is remote from the first device and the second device. 18. A device for storing a computer location comprising:
a processor; a display screen; an input element; and a computer readable non-transitory storage medium; wherein the device is configured to perform a capture sequence based on a detection, by the processor, of an activation input performed by a user on the input element; and wherein the capture sequence comprises:
obtaining, by the processor, visual data corresponding to an image displayed on the display screen at the time of the detection of the activation input by the processor;
obtaining, by the processor, location data corresponding to a computer location accessed by the device; and
storing, by the processor, the visual data and the location data as associated data in the storage medium such that the visual data and the location data are associated with each other. 19. The device according to claim 18, wherein the detection of the activation input by the processor causes the device to perform the capture sequence without additional input from the user beyond the activation input. 20. The device according to claim 18, wherein the device is configured to store the associated data with a plurality of other associated data. | 2,600 |
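The capture sequence claimed in the record above (obtain visual data, obtain location data, store them as associated data; optionally a video playback position per claim 7) can be sketched as follows. The record layout, field names, and serialization are illustrative assumptions by the editor, not taken from the patent:

```python
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CapturedLocation:
    visual_data: str                          # e.g. path of the captured screen image
    location_data: str                        # the computer location (URL) being accessed
    video_progress_s: Optional[float] = None  # playback position, per claim 7

def capture(visual_data: str, location_data: str,
            progress: Optional[float] = None) -> dict:
    """Store visual data and location data as one associated record (claim 1)."""
    record = asdict(CapturedLocation(visual_data, location_data, progress))
    record["captured_at"] = time.time()  # when the capture sequence was initiated
    return record

rec = capture("screen_001.png", "https://example.com/watch?v=abc", progress=42.5)
print(rec["location_data"])
```

Storing both values in one record is what makes them "associated data": retrieving the image later also retrieves the location (and playback position) it was captured from.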
11,010 | 11,010 | 15,641,632 | 2,675 | An encoder and a method therein for Pyramid Vector Quantizer, PVQ, shape search, the PVQ taking a target vector x as input and deriving a vector y by iteratively adding unit pulses in an inner dimension search loop. The method comprises, before entering a next inner dimension search loop for unit pulse addition, determining, based on the maximum pulse amplitude, maxamp y , of a current vector y, whether more than a current bit word length is needed to represent enloop y , in a lossless manner in the upcoming inner dimension loop. The variable enloop y is related to an accumulated energy of the vector y. The performing of this method enables the encoder to keep the complexity of the search at a reasonable level. | 1. A method for Pyramid Vector Quantizer (PVQ) shape search, performed by an audio encoder, the PVQ taking a target vector x as input and deriving a vector y by iteratively adding unit pulses in an inner dimension search loop, the method comprising:
before entering a next inner dimension search loop for unit pulse addition, determining, based on a maximum pulse amplitude, maxampy, of a current vector y, whether more than a current bit word length is needed to represent, in a lossless manner, a variable, enloopy, related to an accumulated energy of y, in the next inner dimension search loop. 2. The method according to claim 1, wherein the method further comprises:
before entering the next inner dimension search loop for unit pulse addition, determining, based on a maximum absolute value, xabsmax, of the input vector, x, a possible upshift, in a bit word, of the next loop's accumulated in-loop correlation value, corrxy, between x and the vector y. 3. The method according to claim 1, further comprising:
when more than the current bit word length is needed to represent enloopy, performing the inner loop calculations using a longer bit word length to represent enloopy. 4. The method according to claim 1, further comprising:
when more than the current bit word length is needed to represent enloopy, performing the inner loop calculations using a longer bit word length to represent a squared accumulated in-loop correlation value, corrxy 2, between x and the vector y, in the inner loop. 5. The method according to claim 1, further comprising:
when more than the current bit word length is not needed to represent enloopy, performing the inner loop calculations by employing a first unit pulse addition loop using a first bit word length to represent enloopy; and when more than the current bit word length is needed to represent enloopy, performing the inner loop calculations by employing a second unit pulse addition loop using a longer bit word length to represent enloopy than the first unit pulse addition loop. 6. The method according to claim 1 further comprising:
when more than the current bit word length is not needed to represent enloopy, performing the inner loop calculations by employing a first unit pulse addition loop having a certain precision; and
when more than the current bit word length is needed to represent enloopy, performing the inner loop calculations by employing a second unit pulse addition loop having a higher precision than the first unit pulse addition loop. 7. The method according to claim 1, wherein the determining, based on maxampy, of whether more than the current bit word length is needed to represent enloopy comprises determining characteristics of the case when, in the next inner dimension search loop, a unit pulse is added to the position in y being associated with maxampy. 8. The method according to claim 1, further comprising:
in the inner dimension search loop for unit pulse addition: determining a position, nbest, in y for addition of a unit pulse by evaluating a cross-multiplication, for each position n in y, of a correlation and energy value for the current n; and a squared correlation, BestCorrSq and an energy value, bestEn, saved from previous values of n, as:
corrxy 2*bestEn>BestCorrSq*enloopy
where {nbest=n, bestEn=enloopy, BestCorrSq=corrxy 2}, when corrxy 2*bestEn>BestCorrSq*enloopy 9. The method according to claim 1, further comprising:
keeping track of maxampy when a final value of K, associated with the target vector x, exceeds a threshold value. 10. A computer program product comprising a non-transitory computer readable medium storing a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to claim 1. 11. The computer program product according to claim 10, wherein at least one of the at least one processor is a Digital Signal Processor. 12. An audio encoder configured for Pyramid Vector Quantization (PVQ) shape search, the PVQ taking a target vector x as input and deriving a vector y by iteratively adding unit pulses in an inner dimension search loop, the audio encoder being configured to:
before entering a next inner dimension search loop for unit pulse addition, determine, based on a maximum pulse amplitude, maxampy, of a current vector y, whether more than a current bit word length is needed to represent, in a lossless manner, a variable, enloopy, related to an accumulated energy of y, in the next inner dimension loop. 13. The audio encoder according to claim 12, being further configured to:
before entering the next inner dimension loop for unit pulse addition, determine, based on a maximum absolute value, xabsmax, of the input vector, x, a possible upshift, in a bit word, of the next loop's accumulated in-loop correlation value, corrxy, between x and the vector y. 14. The audio encoder according to claim 12, being further configured to:
perform the inner loop calculations using a longer bit word length to represent enloopy, when more than the current bit word length is needed to represent enloopy. 15. The audio encoder according to claim 12, being further configured to:
perform the inner loop calculations by employing a first unit pulse addition loop using a first bit word length when more than the current bit word length is not needed to represent enloopy, and perform the inner loop calculations by employing a second unit pulse addition loop using a longer bit word length than the first unit pulse addition loop when more than the current bit word length is needed to represent enloopy. 16. The audio encoder according to claim 12, being further configured to:
perform the inner loop calculations by employing a first unit pulse addition loop, having a certain precision, when more than the current bit word length is not needed to represent enloopy; and perform the inner loop calculations by employing a second unit pulse addition loop, having a higher precision than the first unit pulse addition loop, when more than the current bit word length is needed to represent enloopy. 17. The audio encoder according to claim 12, wherein the determining, based on maxampy, of whether more than the current bit word length is needed to represent enloopy is configured to comprise determining characteristics of the case when, in the next inner dimension search loop, a unit pulse is added to the position in y being associated with maxampy. 18. The audio encoder according to claim 12, being further configured to:
in the inner dimension search loop for unit pulse addition, determine a position, nbest, in y for addition of a unit pulse by evaluating a cross-multiplication, for each position n in y, of a correlation and energy value for the current n; and a squared correlation, BestCorrSq, and an energy value, bestEn, saved from previous values of n, as:
corrxy 2*bestEn>BestCorrSq*enloopy
where {nbest=n, bestEn=enloopy, BestCorrSq=corrxy 2}, when corrxy 2*bestEn>BestCorrSq*enloopy 19. The audio encoder according to claim 12, being further configured to keep track of maxampy when a number of final unit pulses, K, associated with the target vector x, exceeds a threshold value. 20. A communication device comprising the audio encoder according to claim 12. | An encoder and a method therein for Pyramid Vector Quantizer, PVQ, shape search, the PVQ taking a target vector x as input and deriving a vector y by iteratively adding unit pulses in an inner dimension search loop. The method comprises, before entering a next inner dimension search loop for unit pulse addition, determining, based on the maximum pulse amplitude, maxampy, of a current vector y, whether more than a current bit word length is needed to represent enloopy, in a lossless manner in the upcoming inner dimension loop. The variable enloopy is related to an accumulated energy of the vector y. The performing of this method enables the encoder to keep the complexity of the search at a reasonable level. 1. A method for Pyramid Vector Quantizer (PVQ) shape search, performed by an audio encoder, the PVQ taking a target vector x as input and deriving a vector y by iteratively adding unit pulses in an inner dimension search loop, the method comprising:
before entering a next inner dimension search loop for unit pulse addition, determining, based on a maximum pulse amplitude, maxampy, of a current vector y, whether more than a current bit word length is needed to represent, in a lossless manner, a variable, enloopy, related to an accumulated energy of y, in the next inner dimension search loop. 2. The method according to claim 1, wherein the method further comprises:
before entering the next inner dimension search loop for unit pulse addition, determining, based on a maximum absolute value, xabsmax, of the input vector, x, a possible upshift, in a bit word, of the next loop's accumulated in-loop correlation value, corrxy, between x and the vector y. 3. The method according to claim 1, further comprising:
when more than the current bit word length is needed to represent enloopy, performing the inner loop calculations using a longer bit word length to represent enloopy. 4. The method according to claim 1, further comprising:
when more than the current bit word length is needed to represent enloopy, performing the inner loop calculations using a longer bit word length to represent a squared accumulated in-loop correlation value, corrxy 2, between x and the vector y, in the inner loop. 5. The method according to claim 1, further comprising:
when more than the current bit word length is not needed to represent enloopy, performing the inner loop calculations by employing a first unit pulse addition loop using a first bit word length to represent enloopy; and when more than the current bit word length is needed to represent enloopy, performing the inner loop calculations by employing a second unit pulse addition loop using a longer bit word length to represent enloopy than the first unit pulse addition loop. 6. The method according to claim 1 further comprising:
when more than the current bit word length is not needed to represent enloopy, performing the inner loop calculations by employing a first unit pulse addition loop having a certain precision; and
when more than the current bit word length is needed to represent enloopy, performing the inner loop calculations by employing a second unit pulse addition loop having a higher precision than the first unit pulse addition loop. 7. The method according to claim 1, wherein the determining, based on maxampy, of whether more than the current bit word length is needed to represent enloopy comprises determining characteristics of the case when, in the next inner dimension search loop, a unit pulse is added to the position in y being associated with maxampy. 8. The method according to claim 1, further comprising:
in the inner dimension search loop for unit pulse addition: determining a position, nbest, in y for addition of a unit pulse by evaluating a cross-multiplication, for each position n in y, of a correlation and energy value for the current n; and a squared correlation, BestCorrSq, and an energy value, bestEn, saved from previous values of n, as:
corrxy 2*bestEn > BestCorrSq*enloopy,
where {nbest = n; bestEn = enloopy; BestCorrSq = corrxy 2}, when corrxy 2*bestEn > BestCorrSq*enloopy. 9. The method according to claim 1, further comprising:
keeping track of maxampy when a final value of K, associated with the target vector x, exceeds a threshold value. 10. A computer program product comprising a non-transitory computer readable medium storing a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to claim 1. 11. The computer program product according to claim 10, wherein at least one of the at least one processor is a Digital Signal Processor. 12. An audio encoder configured for Pyramid Vector Quantization (PVQ) shape search, the PVQ taking a target vector x as input and deriving a vector y by iteratively adding unit pulses in an inner dimension search loop, the audio encoder being configured to:
before entering a next inner dimension search loop for unit pulse addition, determine, based on a maximum pulse amplitude, maxampy, of a current vector y, whether more than a current bit word length is needed to represent, in a lossless manner, a variable, enloopy, related to an accumulated energy of y, in the next inner dimension loop. 13. The audio encoder according to claim 12, being further configured to:
before entering the next inner dimension loop for unit pulse addition, determine, based on a maximum absolute value, xabsmax, of the input vector, x, a possible upshift, in a bit word, of the next loop's accumulated in-loop correlation value, corrxy, between x and the vector y. 14. The audio encoder according to claim 12, being further configured to:
perform the inner loop calculations using a longer bit word length to represent enloopy, when more than the current bit word length is needed to represent enloopy. 15. The audio encoder according to claim 12, being further configured to:
perform the inner loop calculations by employing a first unit pulse addition loop using a first bit word length when more than the current bit word length is not needed to represent enloopy, and perform the inner loop calculations by employing a second unit pulse addition loop using a longer bit word length than the first unit pulse addition loop when more than the current bit word length is needed to represent enloopy. 16. The audio encoder according to claim 12, being further configured to:
perform the inner loop calculations by employing a first unit pulse addition loop, having a certain precision, when more than the current bit word length is not needed to represent enloopy; and perform the inner loop calculations by employing a second unit pulse addition loop, having a higher precision than the first unit pulse addition loop, when more than the current bit word length is needed to represent enloopy. 17. The audio encoder according to claim 12, wherein the determining, based on maxampy, of whether more than the current bit word length is needed to represent enloopy is configured to comprise determining characteristics of the case when, in the next inner dimension search loop, a unit pulse is added to the position in y being associated with maxampy. 18. The audio encoder according to claim 12, being further configured to:
in the inner dimension search loop for unit pulse addition, determine a position, nbest, in y for addition of a unit pulse by evaluating a cross-multiplication, for each position n in y, of a correlation and energy value for the current n; and a correlation, BestCorrSq, and energy value, bestEn, saved from previous values of n, as:
corrxy 2*bestEn > BestCorrSq*enloopy,
where {nbest = n; bestEn = enloopy; BestCorrSq = corrxy 2}, when corrxy 2*bestEn > BestCorrSq*enloopy. 19. The audio encoder according to claim 12, being further configured to keep track of maxampy when a number of final unit pulses, K, associated with the target vector x, exceeds a threshold value. 20. A communication device comprising the audio encoder according to claim 12. | 2,600 |
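The inner-loop selection criterion recited in claims 8 and 18 above (pick the pulse position n whose correlation/energy pair wins a division-free, cross-multiplied comparison against the best pair so far) can be sketched in code. This is an illustrative floating-point reconstruction only, not the patented fixed-point implementation; the function and variable names are invented, and the word-length/precision switching of claims 3-6 is omitted.

```python
def pvq_inner_search(x, y, corr_xy, en_y):
    """One unit-pulse placement step of a PVQ shape search (illustrative sketch).

    For each position n, the candidate correlation and energy that would result
    from adding a unit pulse at n are compared against the best pair so far by
    cross-multiplication, avoiding a division:
        corr_new**2 * best_en > best_corr_sq * enloop_y
    which mirrors the claimed criterion corrxy^2 * bestEn > BestCorrSq * enloopy.
    """
    n_best = -1
    best_corr_sq = 0.0
    best_en = 1.0
    for n in range(len(x)):
        # Adding a unit pulse at position n updates the accumulated correlation
        # and energy: (y[n]+1)^2 = y[n]^2 + 2*y[n] + 1.
        corr_new = corr_xy + abs(x[n])
        enloop_y = en_y + 1.0 + 2.0 * y[n]
        # Division-free comparison from claim 8:
        if corr_new * corr_new * best_en > best_corr_sq * enloop_y:
            n_best = n
            best_en = enloop_y
            best_corr_sq = corr_new * corr_new
    return n_best, best_corr_sq, best_en

# One iteration on a toy target vector: the largest component wins the pulse.
x = [3.0, 1.0, 0.5]
y = [0.0, 0.0, 0.0]
print(pvq_inner_search(x, y, 0.0, 0.0))  # (0, 9.0, 1.0)
```

In a full search this step would repeat K times, adding a pulse at `n_best` and updating `corr_xy` and `en_y` after each iteration.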
11,011 | 11,011 | 16,208,090 | 2,683 | A hand-held electronic device having a remote control application user interface that functions to display operational mode information to a user. The graphical user interface may be used, for example, to set up the remote control application to control appliances for one or more users in one or more rooms, to perform activities, and to access favorites. The remote control application is also adapted to be upgradeable. Furthermore, the remote control application provides for the sharing of operational mode information. | 1. A method for configuring a controlling device to command functional operations of a target appliance, the method comprising:
receiving at a speech recognition engine voice data indicative of at least a type for and a brand of the target appliance whereupon the speech recognition engine uses the voice data indicative of at least the type for and the brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the brand of the target appliance; and causing the at least one codeset to be provisioned to the controlling device for use in commanding functional operations of the target appliance. 2. The method as recited in claim 1, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 3. The method as recited in claim 1, wherein the controlling device has a plurality of function keys activatable to cause a transmission of a command to the target appliance and wherein the method comprises receiving at the speech recognition engine voice data indicative of a command to be assigned to a function key of the controlling device whereupon the speech recognition engine uses the voice data indicative of a command to be assigned to a function key to identify within the at least one codeset command data that is cross-referenced to the command and causing the command data of the at least one codeset to be used by the controlling device in response to the function key being subsequently activated to cause a transmission of a command to the target device. 4. The method as recited in claim 1, wherein location data is additionally utilized in the process of identifying the at least one codeset that is cross-referenced to the type for and the brand of the target appliance. 5. A system for configuring a controlling device to command functional operations of a target appliance, the system comprising:
a processing device having associated instructions stored on a non-transient readable media which instructions, when executed by the processing device, cause a speech recognition engine to use received voice data indicative of at least a type for and a brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the type for and the brand of the target appliance and to cause the at least one codeset to be provisioned to the controlling device for use in commanding functional operations of the target appliance. 6. The system as recited in claim 5, wherein the instructions are downloaded to the controlling device in a downloadable app. 7. The system as recited in claim 5, wherein the controlling device comprises one of a smart phone or a tablet computing device. 8. The system as recited in claim 5, wherein the controlling device has a plurality of function keys activatable to cause a transmission of a command to the target appliance and wherein the instructions cause the speech recognition engine to use voice data indicative of a command to be assigned to a function key of the controlling device to identify within the at least one codeset command data that is cross-referenced to the command and to cause the command data of the at least one codeset to be used by the controlling device in response to the function key being subsequently activated to cause a transmission of a command to the target device. 9. The system as recited in claim 5, wherein the instructions additionally cause location data to be considered when identifying the at least one codeset that is cross-referenced to the brand of the target appliance. | A hand-held electronic device having a remote control application user interface that functions to display operational mode information to a user.
The graphical user interface may be used, for example, to set up the remote control application to control appliances for one or more users in one or more rooms, to perform activities, and to access favorites. The remote control application is also adapted to be upgradeable. Furthermore, the remote control application provides for the sharing of operational mode information. 1. A method for configuring a controlling device to command functional operations of a target appliance, the method comprising:
receiving at a speech recognition engine voice data indicative of at least a type for and a brand of the target appliance whereupon the speech recognition engine uses the voice data indicative of at least the type for and the brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the brand of the target appliance; and causing the at least one codeset to be provisioned to the controlling device for use in commanding functional operations of the target appliance. 2. The method as recited in claim 1, wherein the library of codesets is stored remotely from the controlling device and the at least one codeset is provisioned to the controlling device by being downloaded thereto. 3. The method as recited in claim 1, wherein the controlling device has a plurality of function keys activatable to cause a transmission of a command to the target appliance and wherein the method comprises receiving at the speech recognition engine voice data indicative of a command to be assigned to a function key of the controlling device whereupon the speech recognition engine uses the voice data indicative of a command to be assigned to a function key to identify within the at least one codeset command data that is cross-referenced to the command and causing the command data of the at least one codeset to be used by the controlling device in response to the function key being subsequently activated to cause a transmission of a command to the target device. 4. The method as recited in claim 1, wherein location data is additionally utilized in the process of identifying the at least one codeset that is cross-referenced to the type for and the brand of the target appliance. 5. A system for configuring a controlling device to command functional operations of a target appliance, the system comprising:
a processing device having associated instructions stored on a non-transient readable media which instructions, when executed by the processing device, cause a speech recognition engine to use received voice data indicative of at least a type for and a brand of the target appliance to identify within a library of codesets at least one codeset that is cross-referenced to the type for and the brand of the target appliance and to cause the at least one codeset to be provisioned to the controlling device for use in commanding functional operations of the target appliance. 6. The system as recited in claim 5, wherein the instructions are downloaded to the controlling device in a downloadable app. 7. The system as recited in claim 5, wherein the controlling device comprises one of a smart phone or a tablet computing device. 8. The system as recited in claim 5, wherein the controlling device has a plurality of function keys activatable to cause a transmission of a command to the target appliance and wherein the instructions cause the speech recognition engine to use voice data indicative of a command to be assigned to a function key of the controlling device to identify within the at least one codeset command data that is cross-referenced to the command and to cause the command data of the at least one codeset to be used by the controlling device in response to the function key being subsequently activated to cause a transmission of a command to the target device. 9. The system as recited in claim 5, wherein the instructions additionally cause location data to be considered when identifying the at least one codeset that is cross-referenced to the brand of the target appliance. | 2,600 |
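Claims 1 and 5 of this record describe a speech recognition engine resolving a recognized appliance type and brand to at least one codeset in a library, optionally narrowed by location data (claims 4 and 9). A minimal sketch of that lookup follows; the toy library contents, region convention, and all names are invented for illustration and are not from the source.

```python
# Toy codeset library keyed by (appliance type, brand); entries are invented.
CODESET_LIBRARY = {
    ("tv", "brandx"): ["codeset_101", "codeset_102"],
    ("tv", "brandx-eu"): ["codeset_103"],
    ("tv", "brandy"): ["codeset_201"],
    ("dvd", "brandx"): ["codeset_301"],
}

def find_codesets(voice_type, voice_brand, region=None):
    """Return codesets cross-referenced to the recognized type and brand.

    `region` stands in for the optional location data of claims 4/9: when a
    region-specific entry exists it takes precedence over the generic one.
    """
    key = (voice_type.lower(), voice_brand.lower())
    codesets = CODESET_LIBRARY.get(key, [])
    if region is not None:
        regional_key = (voice_type.lower(), f"{voice_brand.lower()}-{region.lower()}")
        regional = CODESET_LIBRARY.get(regional_key)
        if regional:
            codesets = regional
    return codesets

print(find_codesets("TV", "BrandX"))             # ['codeset_101', 'codeset_102']
print(find_codesets("TV", "BrandX", region="EU"))  # ['codeset_103']
```

The matched codesets would then be provisioned (e.g. downloaded, per claim 2) to the controlling device.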
11,012 | 11,012 | 15,777,554 | 2,626 | An input apparatus includes an operation interface configured to receive an input operation from a user, a piezoelectric element attached to the operation interface, and a controller configured to acquire output based on the input operation to the operation interface from the piezoelectric element and to execute different control with respect to a controlled apparatus in accordance with the output. | 1. An input apparatus comprising:
an operation interface configured to receive an input operation from a user; a piezoelectric element attached to the operation interface; and a controller configured to acquire output based on the input operation to the operation interface from the piezoelectric element and to execute different control with respect to a controlled apparatus in accordance with the output. 2. The input apparatus of claim 1, wherein the controller is configured to execute the different control in accordance with a polarity of voltage output from the piezoelectric element. 3. The input apparatus of claim 2, wherein the controller is configured to determine a speed of the control with respect to the controlled apparatus in accordance with the magnitude of the voltage. 4. The input apparatus of claim 1, wherein the controller is configured to provide a tactile sensation to the user by causing the piezoelectric element to expand and contract when the controller acquires the output from the piezoelectric element. 5. An input method for an input apparatus comprising an operation interface configured to receive an input operation from a user, a piezoelectric element attached to the operation interface, and a controller, the input method comprising:
acquiring, using the controller, output based on the input operation to the operation interface from the piezoelectric element; and executing different control with respect to a controlled apparatus in accordance with the output. | An input apparatus includes an operation interface configured to receive an input operation from a user, a piezoelectric element attached to the operation interface, and a controller configured to acquire output based on the input operation to the operation interface from the piezoelectric element and to execute different control with respect to a controlled apparatus in accordance with the output.1. An input apparatus comprising:
an operation interface configured to receive an input operation from a user; a piezoelectric element attached to the operation interface; and a controller configured to acquire output based on the input operation to the operation interface from the piezoelectric element and to execute different control with respect to a controlled apparatus in accordance with the output. 2. The input apparatus of claim 1, wherein the controller is configured to execute the different control in accordance with a polarity of voltage output from the piezoelectric element. 3. The input apparatus of claim 2, wherein the controller is configured to determine a speed of the control with respect to the controlled apparatus in accordance with the magnitude of the voltage. 4. The input apparatus of claim 1, wherein the controller is configured to provide a tactile sensation to the user by causing the piezoelectric element to expand and contract when the controller acquires the output from the piezoelectric element. 5. An input method for an input apparatus comprising an operation interface configured to receive an input operation from a user, a piezoelectric element attached to the operation interface, and a controller, the input method comprising:
acquiring, using the controller, output based on the input operation to the operation interface from the piezoelectric element; and executing different control with respect to a controlled apparatus in accordance with the output. | 2,600 |
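Claims 2 and 3 of this record select different controls based on the polarity of the piezoelectric element's output voltage and set the control speed from its magnitude. A hedged sketch of such a dispatch; the deadband threshold, command names, and proportional-speed mapping are assumptions for illustration, not details from the source.

```python
def dispatch_piezo(voltage, deadband=0.05):
    """Map a piezoelectric output voltage to a (command, speed) pair.

    Positive and negative voltage (e.g. press vs. release of the operation
    interface) select different controls per claim 2; the magnitude sets the
    control speed per claim 3. Readings below `deadband` are ignored as noise.
    """
    if abs(voltage) < deadband:
        return None
    command = "increase" if voltage > 0 else "decrease"
    speed = abs(voltage)  # speed proportional to the voltage magnitude
    return (command, speed)

print(dispatch_piezo(0.8))   # ('increase', 0.8)
print(dispatch_piezo(-0.3))  # ('decrease', 0.3)
print(dispatch_piezo(0.01))  # None
```

Claim 4's tactile feedback would hang off the non-`None` branch, driving the same element in reverse to expand and contract.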
11,013 | 11,013 | 15,927,973 | 2,648 | This disclosure relates to transport link selection for an accessory wireless device in association with a companion device. The accessory device may communicate via a short range wireless communication link with the companion device. The companion device may detect an event and, based on the event, transmit assistance information to the accessory device. The accessory device may evaluate various conditions. The accessory device may select a transport link and/or short range link based at least in part on the received assistance information and/or the evaluated conditions. | 1. An apparatus, comprising:
one or more processing elements, wherein the one or more processing elements are configured to cause an accessory wireless device to:
establish a short range link with a companion wireless device;
receive assistance information from the companion device, wherein the assistance information concerns an upcoming change in connectivity of a communication link between the companion device and a network;
select a transport link from at least two potential transport links for communication between the accessory wireless device and the network, wherein the transport link is selected based at least in part on the assistance information; and
communicate with the network using the transport link. 2. The apparatus of claim 1, wherein the transport link is further selected based at least in part on at least one application executing on the accessory wireless device. 3. The apparatus of claim 1, wherein the transport link is further selected based at least in part on battery state of at least one of:
the accessory wireless device; or the companion wireless device. 4. The apparatus of claim 1, wherein the transport link is selected based at least in part on the availability of one or more direct links between the accessory wireless device and a network. 5. The apparatus of claim 1, wherein the one or more processing elements are further configured to cause the accessory wireless device to:
measure a quality of the short range link with the companion device, wherein the transport link is selected based at least in part on the quality of the short range link. 6. The apparatus of claim 1, wherein the transport link is selected based at least in part on energy use of at least one link. 7. The apparatus of claim 6, wherein the transport link is selected based at least in part on a comparison of energy use per throughput ratio of the at least two potential transport links. 8. The apparatus of claim 1, wherein the transport link is selected based at least in part on at least one user setting. 9. A method for operating a companion wireless device, the method comprising:
communicating with a network using a first radio access technology (RAT); communicating with an accessory device using a second RAT; detecting a first event, wherein the first event concerns a connectivity status of the companion wireless device and the network; and transmitting assistance information to the accessory device, wherein the assistance information is based at least in part on the first event, wherein the assistance information is useable to select a transport link from a plurality of potential transport links. 10. The method of claim 9, wherein the first RAT is a cellular RAT, wherein the first event comprises the companion device entering or exiting a full service state with regard to the network. 11. The method of claim 9, wherein the first RAT is a cellular RAT, wherein the first event comprises the companion device entering or exiting a state that curtails background traffic over the network. 12. The method of claim 9, wherein the first event comprises detecting a change in the quality of a link with the network using the first RAT. 13. The method of claim 12, wherein the first RAT is a wireless local area network (WLAN) RAT, wherein said detecting the change in the quality of the link with the network using the first RAT is based on at least one of:
a radio quality indicator; or a link quality indicator. 14. The method of claim 9, wherein the first event comprises an event initiated by the user of the companion wireless device. 15. An apparatus, comprising:
one or more processing elements, wherein the one or more processing elements are configured to cause a companion wireless device to:
establish a short range link with an accessory wireless device;
detect a first event related to a remote link with a network, wherein the first event impacts connectivity of the accessory device with the network via the companion wireless device; and
transmit assistance information to the accessory wireless device, wherein the assistance information is based at least in part on the event, wherein the assistance information is useable to select a transport link from a plurality of potential transport links. 16. The apparatus of claim 15, wherein the one or more processing elements are further configured to cause the companion wireless device to:
exchange data with the accessory device via the short range link, wherein at least some data is exchanged prior to the detection of the first event. 17. The apparatus of claim 15, wherein the assistance information comprises a data rate available to the accessory device via a connection of the companion wireless device. 18. The apparatus of claim 15, wherein the assistance information comprises the time of the first event. 19. The apparatus of claim 15, wherein the assistance information comprises a response of the companion device to the first event. 20. The apparatus of claim 15, wherein the one or more processing elements are further configured to cause the companion wireless device to:
detect a second event, wherein the assistance information is further based on the second event. 21. An apparatus, comprising:
one or more processing elements, wherein the one or more processing elements are configured to cause an accessory wireless device to:
establish a first short range link with a companion wireless device;
evaluate a first criterion for the first short range link, wherein the first criterion comprises an exit criterion;
evaluate a second criterion for an alternative short range link with the companion wireless device, wherein the second criterion comprises an entry criterion; and
based on the evaluation of the first and second criteria, use one of the first short range link or the alternative short range link for communication with the companion wireless device, wherein using the alternative short range link comprises establishing the alternative short range link. 22. The apparatus of claim 21,
wherein the evaluation of the first criterion comprises a determination that the exit criteria are satisfied, wherein said using one of the first short range link or the alternative short range link includes using the first short range link. 23. The apparatus of claim 22, wherein the one or more processing elements are further configured to cause the accessory wireless device to:
determine whether a timer has expired, wherein said using one of the first short range link or the alternative short range link is further based at least in part on a determination that the timer has not expired. 24. The apparatus of claim 22,
wherein the evaluation of the second criterion comprises a determination that the entry criteria are not satisfied. 25. The apparatus of claim 21, wherein the one or more processing elements are further configured to cause the accessory wireless device to determine at least one of:
whether at least one foreground application is executing on the accessory device; and the proximity of the accessory device to a user of the accessory device,
wherein the evaluation of the first and second criteria is based at least in part on the determination. | This disclosure relates to transport link selection for an accessory wireless device in association with a companion device. The accessory device may communicate via a short range wireless communication link with the companion device. The companion device may detect an event and, based on the event, transmit assistance information to the accessory device. The accessory device may evaluate various conditions. The accessory device may select a transport link and/or short range link based at least in part on the received assistance information and/or the evaluated conditions.1. An apparatus, comprising:
one or more processing elements, wherein the one or more processing elements are configured to cause an accessory wireless device to:
establish a short range link with a companion wireless device;
receive assistance information from the companion device, wherein the assistance information concerns an upcoming change in connectivity of a communication link between the companion device and a network;
select a transport link from at least two potential transport links for communication between the accessory wireless device and the network, wherein the transport link is selected based at least in part on the assistance information; and
communicate with the network using the transport link. 2. The apparatus of claim 1, wherein the transport link is further selected based at least in part on at least one application executing on the accessory wireless device. 3. The apparatus of claim 1, wherein the transport link is further selected based at least in part on battery state of at least one of:
the accessory wireless device; or the companion wireless device. 4. The apparatus of claim 1, wherein the transport link is selected based at least in part on the availability of one or more direct links between the accessory wireless device and a network. 5. The apparatus of claim 1, wherein the one or more processing elements are further configured to cause the accessory wireless device to:
measure a quality of the short range link with the companion device, wherein the transport link is selected based at least in part on the quality of the short range link. 6. The apparatus of claim 1, wherein the transport link is selected based at least in part on energy use of at least one link. 7. The apparatus of claim 6, wherein the transport link is selected based at least in part on a comparison of energy use per throughput ratio of the at least two potential transport links. 8. The apparatus of claim 1, wherein the transport link is selected based at least in part on at least one user setting. 9. A method for operating a companion wireless device, the method comprising:
communicating with a network using a first radio access technology (RAT); communicating with an accessory device using a second RAT; detecting a first event, wherein the first event concerns a connectivity status of the companion wireless device and the network; and transmitting assistance information to the accessory device, wherein the assistance information is based at least in part on the first event, wherein the assistance information is useable to select a transport link from a plurality of potential transport links. 10. The method of claim 9, wherein the first RAT is a cellular RAT, wherein the first event comprises the companion device entering or exiting a full service state with regard to the network. 11. The method of claim 9, wherein the first RAT is a cellular RAT, wherein the first event comprises the companion device entering or exiting a state that curtails background traffic over the network. 12. The method of claim 9, wherein the first event comprises detecting a change in the quality of a link with the network using the first RAT. 13. The method of claim 12, wherein the first RAT is a wireless local area network (WLAN) RAT, wherein said detecting the change in the quality of the link with the network using the first RAT is based on at least one of:
a radio quality indicator; or a link quality indicator. 14. The method of claim 9, wherein the first event comprises an event initiated by the user of the companion wireless device. 15. An apparatus, comprising:
one or more processing elements, wherein the one or more processing elements are configured to cause a companion wireless device to:
establish a short range link with an accessory wireless device;
detect a first event related to a remote link with a network, wherein the first event impacts connectivity of the accessory device with the network via the companion wireless device; and
transmit assistance information to the accessory wireless device, wherein the assistance information is based at least in part on the event, wherein the assistance information is useable to select a transport link from a plurality of potential transport links. 16. The apparatus of claim 15, wherein the one or more processing elements are further configured to cause the companion wireless device to:
exchange data with the accessory device via the short range link, wherein at least some data is exchanged prior to the detection of the first event. 17. The apparatus of claim 15, wherein the assistance information comprises a data rate available to the accessory device via a connection of the companion wireless device. 18. The apparatus of claim 15, wherein the assistance information comprises the time of the first event. 19. The apparatus of claim 15, wherein the assistance information comprises a response of the companion device to the first event. 20. The apparatus of claim 15, wherein the one or more processing elements are further configured to cause the companion wireless device to:
detect a second event, wherein the assistance information is further based on the second event. 21. An apparatus, comprising:
one or more processing elements, wherein the one or more processing elements are configured to cause an accessory wireless device to:
establish a first short range link with a companion wireless device;
evaluate a first criterion for the first short range link, wherein the first criterion comprises an exit criterion;
evaluate a second criterion for an alternative short range link with the companion wireless device, wherein the second criterion comprises an entry criterion; and
based on the evaluation of the first and second criteria, use one of the first short range link or the alternative short range link for communication with the companion wireless device, wherein using the alternative short range link comprises establishing the alternative short range link. 22. The apparatus of claim 21,
wherein the evaluation of the first criterion comprises a determination that the exit criterion is satisfied, wherein said using one of the first short range link or the alternative short range link includes using the first short range link. 23. The apparatus of claim 22, wherein the one or more processing elements are further configured to cause the accessory wireless device to:
determine whether a timer has expired, wherein said using one of the first short range link or the alternative short range link is further based at least in part on a determination that the timer has not expired. 24. The apparatus of claim 22,
wherein the evaluation of the second criterion comprises a determination that the entry criterion is not satisfied. 25. The apparatus of claim 21, wherein the one or more processing elements are further configured to cause the accessory wireless device to determine at least one of:
whether at least one foreground application is executing on the accessory device; and the proximity of the accessory device to a user of the accessory device,
wherein the evaluation of the first and second criteria is based at least in part on the determination. | 2,600 |
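The link-selection logic recited in claims 21-24 above (keep the current short range link unless its exit criterion is met, the alternative link's entry criterion is met, and a hold-off timer has expired) can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the `LinkState` fields, RSSI/error-rate thresholds, and function names are all assumptions introduced for the example:

```python
# Illustrative sketch of hysteresis-style link selection between a current
# short range link and an alternative link (claims 21-24). Thresholds and
# field names are assumptions, not taken from the patent.

from dataclasses import dataclass

@dataclass
class LinkState:
    rssi_dbm: float    # assumed signal-strength measurement for the link
    error_rate: float  # assumed packet error rate, 0.0 - 1.0

def exit_criterion_met(current: LinkState) -> bool:
    # Leave the current link only when it degrades past assumed thresholds.
    return current.rssi_dbm < -85.0 or current.error_rate > 0.10

def entry_criterion_met(alternative: LinkState) -> bool:
    # Join the alternative link only when it is comfortably usable.
    return alternative.rssi_dbm > -70.0 and alternative.error_rate < 0.02

def select_link(current: LinkState, alternative: LinkState,
                timer_expired: bool) -> str:
    # Switch only when every condition holds; otherwise stay on the
    # current link (claims 22-23: exit satisfied but timer not expired,
    # or entry not satisfied, keeps the first link in use).
    if (exit_criterion_met(current) and entry_criterion_met(alternative)
            and timer_expired):
        return "alternative"
    return "current"

# Degraded current link, healthy alternative, timer expired -> switch.
choice = select_link(LinkState(-90.0, 0.15), LinkState(-60.0, 0.01), True)
```

Evaluating an exit criterion on the old link and a separate entry criterion on the new one is a standard way to avoid ping-ponging between links when conditions hover near a single threshold.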
11,014 | 11,014 | 16,229,270 | 2,689 | The present invention relates to a haptic information provision device. The haptic information provision device ( 100 ) according to the present invention comprises: a receiver ( 120 ) for receiving external notification information; a controller ( 130 ) for converting the notification information to a haptic signal; and an operation unit ( 110 ) for transferring haptic information to a user according to the haptic signal, wherein the operation unit ( 110 ) includes a plurality of operation units ( 110 a - 110 j ), the respective operation units ( 110 a - 110 j ) operating in response to different notification information and thus transferring different haptic information to the user. | 1. A driver assistance information feedback system, comprising:
a vehicle sensor for detecting road lane markings or an object in the environment outside of and around a vehicle; and a tactile information supply device for receiving notification information from the vehicle sensor and converting the notification information to a tactile signal, thereby providing a user with tactile information, wherein the tactile information supply device includes:
a receiver for receiving the notification information;
a controller for converting the notification information to the tactile signal; and
an operator for providing the user with the tactile information according to the tactile signal, and
wherein the operator includes a plurality of operation units and each of the operation units operates in response to different notification information to provide different tactile information to the user. 2. The driver assistance information feedback system of claim 1,
wherein the controller controls at least one of an operation intensity and an operation pulse of each of the operation units according to the notification information. 3. The driver assistance information feedback system of claim 2,
wherein each of the operation units operates corresponding to information on the position of the road lane markings or the object with reference to the vehicle. 4. The driver assistance information feedback system of claim 3,
wherein the vehicle sensor is a lane departure detection sensor, and wherein each of the operation units operates when the lane departure detection sensor detects the lane departure of the vehicle. 5. The driver assistance information feedback system of claim 1,
wherein the operator comprises:
a band portion in close contact with the body of the user; and
the plurality of operation units provided on the band portion. 6. The driver assistance information feedback system of claim 1,
wherein the operation unit comprises:
a tactile sensation provider having a shape or position that changes in response to a magnetic field; and
a magnetic field generator for generating the magnetic field to change the shape or position of the tactile sensation provider. 7. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider comprises magnetic particles and a matrix material. 8. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider includes a magnetorheological fluid (MRF) or a magnetorheological elastomer (MRE). 9. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider vibrates when an AC magnetic field is generated by the magnetic field generator, and wherein the stiffness of the tactile sensation provider changes when a DC magnetic field is generated by the magnetic field generator. 10. The driver assistance information feedback system of claim 6,
wherein the magnetic field generator is at least one of a planar coil and a solenoid coil. 11. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider is a permanent magnet and provides tactile information by vibration due to application of the magnetic field. 12. The driver assistance information feedback system of claim 1,
wherein the plurality of operation units includes at least one of an eccentric motor, a linear resonant actuator, a piezoelectric actuator, an electroactive polymer actuator, and an electrostatic actuator. 13. The driver assistance information feedback system of claim 1,
wherein the plurality of operation units provides at least one tactile information of vibration, brushing, constriction, beating, pressing, tapping and tilting. 14. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider maintains a first shape when not influenced by the magnetic field, wherein the tactile sensation provider maintains a second shape when influenced by the magnetic field, and wherein the tactile sensation provider produces a reciprocating motion between the second shape and the first shape, thereby providing tactile information. 15. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider maintains a first position when not influenced by the magnetic field, wherein the tactile sensation provider maintains a second position when influenced by the magnetic field, and wherein the tactile sensation provider produces a reciprocating motion between the second position and the first position, thereby providing tactile information. 16. The driver assistance information feedback system of claim 14,
wherein at least one of a degree, a direction, and a frequency of the transformation from the first shape to the second shape is controlled by controlling at least one of an intensity, a direction, and a frequency of the magnetic field generated by the magnetic field generator. 17. The driver assistance information feedback system of claim 15,
wherein at least one of a degree, a direction, and a frequency of the transformation from the first position to the second position is controlled by controlling at least one of an intensity, a direction, and a frequency of the magnetic field generated by the magnetic field generator. 18. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider includes at least one of iron (Fe), cobalt (Co) and nickel (Ni). | The present invention relates to a haptic information provision device. The haptic information provision device ( 100 ) according to the present invention comprises: a receiver ( 120 ) for receiving external notification information; a controller ( 130 ) for converting the notification information to a haptic signal; and an operation unit ( 110 ) for transferring haptic information to a user according to the haptic signal, wherein the operation unit ( 110 ) includes a plurality of operation units ( 110 a - 110 j ), the respective operation units ( 110 a - 110 j ) operating in response to different notification information and thus transferring different haptic information to the user.1. A driver assistance information feedback system, comprising:
a vehicle sensor for detecting road lane markings or an object in the environment outside of and around a vehicle; and a tactile information supply device for receiving notification information from the vehicle sensor and converting the notification information to a tactile signal, thereby providing a user with tactile information, wherein the tactile information supply device includes:
a receiver for receiving the notification information;
a controller for converting the notification information to the tactile signal; and
an operator for providing the user with the tactile information according to the tactile signal, and
wherein the operator includes a plurality of operation units and each of the operation units operates in response to different notification information to provide different tactile information to the user. 2. The driver assistance information feedback system of claim 1,
wherein the controller controls at least one of an operation intensity and an operation pulse of each of the operation units according to the notification information. 3. The driver assistance information feedback system of claim 2,
wherein each of the operation units operates corresponding to information on the position of the road lane markings or the object with reference to the vehicle. 4. The driver assistance information feedback system of claim 3,
wherein the vehicle sensor is a lane departure detection sensor, and wherein each of the operation units operates when the lane departure detection sensor detects the lane departure of the vehicle. 5. The driver assistance information feedback system of claim 1,
wherein the operator comprises:
a band portion in close contact with the body of the user; and
the plurality of operation units provided on the band portion. 6. The driver assistance information feedback system of claim 1,
wherein the operation unit comprises:
a tactile sensation provider having a shape or position that changes in response to a magnetic field; and
a magnetic field generator for generating the magnetic field to change the shape or position of the tactile sensation provider. 7. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider comprises magnetic particles and a matrix material. 8. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider includes a magnetorheological fluid (MRF) or a magnetorheological elastomer (MRE). 9. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider vibrates when an AC magnetic field is generated by the magnetic field generator, and wherein the stiffness of the tactile sensation provider changes when a DC magnetic field is generated by the magnetic field generator. 10. The driver assistance information feedback system of claim 6,
wherein the magnetic field generator is at least one of a planar coil and a solenoid coil. 11. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider is a permanent magnet and provides tactile information by vibration due to application of the magnetic field. 12. The driver assistance information feedback system of claim 1,
wherein the plurality of operation units includes at least one of an eccentric motor, a linear resonant actuator, a piezoelectric actuator, an electroactive polymer actuator, and an electrostatic actuator. 13. The driver assistance information feedback system of claim 1,
wherein the plurality of operation units provides at least one tactile information of vibration, brushing, constriction, beating, pressing, tapping and tilting. 14. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider maintains a first shape when not influenced by the magnetic field, wherein the tactile sensation provider maintains a second shape when influenced by the magnetic field, and wherein the tactile sensation provider produces a reciprocating motion between the second shape and the first shape, thereby providing tactile information. 15. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider maintains a first position when not influenced by the magnetic field, wherein the tactile sensation provider maintains a second position when influenced by the magnetic field, and wherein the tactile sensation provider produces a reciprocating motion between the second position and the first position, thereby providing tactile information. 16. The driver assistance information feedback system of claim 14,
wherein at least one of a degree, a direction, and a frequency of the transformation from the first shape to the second shape is controlled by controlling at least one of an intensity, a direction, and a frequency of the magnetic field generated by the magnetic field generator. 17. The driver assistance information feedback system of claim 15,
wherein at least one of a degree, a direction, and a frequency of the transformation from the first position to the second position is controlled by controlling at least one of an intensity, a direction, and a frequency of the magnetic field generated by the magnetic field generator. 18. The driver assistance information feedback system of claim 6,
wherein the tactile sensation provider includes at least one of iron (Fe), cobalt (Co) and nickel (Ni). | 2,600 |
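The controller behavior in claims 1-4 of the record above (convert a lane-departure notification to a tactile signal, route it to the operation unit corresponding to the position of the departure, and modulate operation intensity and pulse) can be sketched as follows. This is a hypothetical illustration, not the patent's implementation; the notification fields, unit names, and the 0.5 m saturation scale are assumptions:

```python
# Hypothetical sketch of claims 1-4: a controller converts a lane-departure
# notification into a tactile signal for one of several operation units.
# All field names, unit names, and numeric scales are illustrative assumptions.

def to_tactile_signal(notification: dict) -> dict:
    # Claim 3: the activated operation unit corresponds to the position of
    # the road lane marking relative to the vehicle (here, left or right).
    unit = "left_unit" if notification["side"] == "left" else "right_unit"
    # Claim 2: intensity scales with the notification content; here it
    # grows with drift distance and saturates at an assumed 0.5 m.
    intensity = min(1.0, notification["departure_m"] / 0.5)
    # Claim 2: the operation pulse also depends on the notification;
    # pulse faster once drift exceeds an assumed 0.25 m.
    pulse_hz = 4 if notification["departure_m"] > 0.25 else 2
    return {"unit": unit, "intensity": intensity, "pulse_hz": pulse_hz}

# Vehicle drifting 0.4 m over the left lane marking.
signal = to_tactile_signal({"side": "left", "departure_m": 0.4})
```

Keeping the sensor-to-signal mapping in one pure function like this makes it easy to swap in other notification types (e.g., object proximity) without touching the actuator-driving code.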
11,015 | 11,015 | 16,403,278 | 2,664 | Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for image rendering. In one aspect, a method comprises receiving a plurality of observations characterizing a particular scene, each observation comprising an image of the particular scene and data identifying a location of a camera that captured the image. In another aspect, the method comprises receiving a plurality of observations characterizing a particular video, each observation comprising a video frame from the particular video and data identifying a time stamp of the video frame in the particular video. In yet another aspect, the method comprises receiving a plurality of observations characterizing a particular image, each observation comprising a crop of the particular image and data characterizing the crop of the particular image. The method processes each of the plurality of observations using an observation neural network to determine a numeric representation as output. | 1. A computer implemented method comprising:
receiving a plurality of observations characterizing a particular scene, each observation comprising an image of the particular scene and data identifying a location of a camera that captured the image; processing each of the plurality of observations using an observation neural network, wherein the observation neural network is configured to, for each of the observations:
process the observation to generate as output a lower-dimensional representation of the observation;
determining a numeric representation of the particular scene by combining the lower-dimension representations of the observations; providing the numeric representation of the particular scene for use in characterizing contents of the particular scene; receiving data identifying a new camera location; and processing the data identifying the new camera location and the numeric representation of the particular scene using a generator neural network to generate a new image of the particular scene taken from a camera at the new camera location. 2. The method of claim 1, wherein the numeric representation is a collection of numeric values that represents underlying contents of the particular scene. 3. The method of claim 1, wherein the numeric representation is a semantic description of the particular scene. 4. The method of claim 1, wherein combining the lower-dimension representations of the observations comprises:
summing the lower-dimension representations to generate the numeric representation. 5. The method of claim 1, wherein the generator neural network is configured to:
at each of a plurality of time steps:
sample one or more latent variables for the time step, and
update a hidden state as of the time step by processing the hidden state, the sampled latent variables, the numeric representation, and the data identifying the new camera location using a deep convolutional neural network to generate an updated hidden state; and
after a last time step in the plurality of time steps:
generate the new image of the particular scene from the updated hidden state after the last time step. 6. The method of claim 5, wherein generating the new image of the particular scene from the updated hidden state after the last time step comprises:
generating pixel sufficient statistics from the updated hidden state after the last time step; and sampling color values of pixels in the new image using the pixel sufficient statistics. 7. The method of claim 5, wherein the generator neural network and the observation neural network have been trained jointly with a posterior neural network configured to, during the training, receive a plurality of training observations and a target observation and generate a posterior output that defines a distribution over the one or more latent variables. 8. The method of claim 1, wherein the observation neural network has been trained to generate numeric representations that, in combination with a particular camera location, are usable by a generator neural network to generate a reconstruction of a particular image of the particular scene taken from the particular camera location. 9. A computer implemented method comprising:
receiving a plurality of observations characterizing a particular video, each observation comprising a video frame from the particular video and data identifying a time stamp of the video frame in the particular video; processing each of the plurality of observations using an observation neural network, wherein the observation neural network is configured to, for each of the observations:
process the observation to generate as output a lower-dimensional representation of the observation;
determining a numeric representation of the particular video by combining the lower-dimension representations of the observations; providing the numeric representation of the particular video for use in characterizing contents of the particular video; receiving data identifying a new time stamp; and processing the data identifying the new time stamp and the numeric representation of the particular video using a generator neural network to generate a new video frame at the new time stamp in the particular video. 10. The method of claim 9, wherein the numeric representation is a collection of numeric values that represents underlying contents of the particular video. 11. The method of claim 9, wherein the numeric representation is a semantic description of the particular video. 12. The method of claim 9, wherein combining the lower-dimension representations of the observations comprises:
summing the lower-dimension representations to generate the numeric representation. 13. The method of claim 9, wherein the generator neural network is configured to:
at each of a plurality of time steps:
sample one or more latent variables for the time step, and
update a hidden state as of the time step by processing the hidden state, the sampled latent variables, the numeric representation, and the data identifying the new time stamp using a deep convolutional neural network to generate an updated hidden state; and
after a last time step in the plurality of time steps:
generate the new video frame from the updated hidden state after the last time step. 14. The method of claim 13, wherein generating the new video frame comprises:
generating pixel sufficient statistics from the updated hidden state after the last time step; and sampling color values of pixels in the new video frame using the pixel sufficient statistics. 15. The method of claim 13, wherein the generator neural network and the observation neural network have been trained jointly with a posterior neural network configured to, during the training, receive a plurality of training observations and a target observation and generate a posterior output that defines a distribution over the one or more latent variables. 16. The method of claim 9, wherein the observation neural network has been trained to generate numeric representations that, in combination with a particular time stamp, are usable by a generator neural network to generate a reconstruction of a particular video frame from the particular video at the particular time stamp. 17. A computer implemented method comprising:
receiving a plurality of observations characterizing a particular image, each observation comprising a crop of the particular image and data identifying a location and size of the crop in the particular image; processing each of the plurality of observations using an observation neural network, wherein the observation neural network is configured to, for each of the observations:
process the observation to generate as output a lower-dimensional representation of the observation;
determining a numeric representation of the particular image by combining the lower-dimension representations of the observations; providing the numeric representation of the particular image for use in characterizing contents of the particular image; receiving data identifying a new crop location and a new crop size; and processing the data identifying the new crop location and the new crop size and the numeric representation of the particular image using a generator neural network to generate a new crop of the particular image at the new crop location and having the new crop size. 18. The method of claim 17, wherein the numeric representation is a collection of numeric values that represents underlying contents of the particular image. 19. The method of claim 17, wherein the numeric representation is a semantic description of the particular image. 20. The method of claim 17, wherein combining the lower-dimension representations of the observations comprises:
summing the lower-dimension representations to generate the numeric representation. 21. The method of claim 17, wherein the generator neural network is configured to:
at each of a plurality of time steps:
sample one or more latent variables for the time step, and
update a hidden state as of the time step by processing the hidden state, the sampled latent variables, the numeric representation, and the data identifying the new crop location and the new crop size using a deep convolutional neural network to generate an updated hidden state; and
after a last time step in the plurality of time steps:
generate the new crop of the particular image from the updated hidden state after the last time step. 22. The method of claim 21, wherein generating the new crop of the particular image from the updated hidden state after the last time step comprises:
generating pixel sufficient statistics from the updated hidden state after the last time step; and sampling color values of pixels in the new crop using the pixel sufficient statistics. 23. The method of claim 21, wherein the generator neural network and the observation neural network have been trained jointly with a posterior neural network configured to, during the training, receive a plurality of training observations and a target observation and generate a posterior output that defines a distribution over the one or more latent variables. 24. The method of claim 17, wherein the observation neural network has been trained to generate numeric representations that, in combination with a particular crop location and a particular crop size, are usable by a generator neural network to generate a reconstruction of a particular crop of the particular image at the particular crop location and having the particular crop size. | Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for image rendering. In one aspect, a method comprises receiving a plurality of observations characterizing a particular scene, each observation comprising an image of the particular scene and data identifying a location of a camera that captured the image. In another aspect, the method comprises receiving a plurality of observations characterizing a particular video, each observation comprising a video frame from the particular video and data identifying a time stamp of the video frame in the particular video. In yet another aspect, the method comprises receiving a plurality of observations characterizing a particular image, each observation comprising a crop of the particular image and data characterizing the crop of the particular image. The method processes each of the plurality of observations using an observation neural network to determine a numeric representation as output.1. A computer implemented method comprising:
receiving a plurality of observations characterizing a particular scene, each observation comprising an image of the particular scene and data identifying a location of a camera that captured the image; processing each of the plurality of observations using an observation neural network, wherein the observation neural network is configured to, for each of the observations:
process the observation to generate as output a lower-dimensional representation of the observation;
determining a numeric representation of the particular scene by combining the lower-dimension representations of the observations; providing the numeric representation of the particular scene for use in characterizing contents of the particular scene; receiving data identifying a new camera location; and processing the data identifying the new camera location and the numeric representation of the particular scene using a generator neural network to generate a new image of the particular scene taken from a camera at the new camera location. 2. The method of claim 1, wherein the numeric representation is a collection of numeric values that represents underlying contents of the particular scene. 3. The method of claim 1, wherein the numeric representation is a semantic description of the particular scene. 4. The method of claim 1, wherein combining the lower-dimension representations of the observations comprises:
summing the lower-dimension representations to generate the numeric representation. 5. The method of claim 1, wherein the generator neural network is configured to:
at each of a plurality of time steps:
sample one or more latent variables for the time step, and
update a hidden state as of the time step by processing the hidden state, the sampled latent variables, the numeric representation, and the data identifying the new camera location using a deep convolutional neural network to generate an updated hidden state; and
after a last time step in the plurality of time steps:
generate the new image of the particular scene from the updated hidden state after the last time step. 6. The method of claim 5, wherein generating the new image of the particular scene from the updated hidden state after the last time step comprises:
generating pixel sufficient statistics from the updated hidden state after the last time step; and sampling color values of pixels in the new image using the pixel sufficient statistics. 7. The method of claim 5, wherein the generator neural network and the observation neural network have been trained jointly with a posterior neural network configured to, during the training, receive a plurality of training observations and a target observation and generate a posterior output that defines a distribution over the one or more latent variables. 8. The method of claim 1, wherein the observation neural network has been trained to generate numeric representations that, in combination with a particular camera location, are usable by a generator neural network to generate a reconstruction of a particular image of the particular scene taken from the particular camera location. 9. A computer implemented method comprising:
receiving a plurality of observations characterizing a particular video, each observation comprising a video frame from the particular video and data identifying a time stamp of the video frame in the particular video; processing each of the plurality of observations using an observation neural network, wherein the observation neural network is configured to, for each of the observations:
process the observation to generate as output a lower-dimensional representation of the observation;
determining a numeric representation of the particular video by combining the lower-dimension representations of the observations; providing the numeric representation of the particular video for use in characterizing contents of the particular video; receiving data identifying a new time stamp; and processing the data identifying the new time stamp and the numeric representation of the particular video using a generator neural network to generate a new video frame at the new time stamp in the particular video. 10. The method of claim 9, wherein the numeric representation is a collection of numeric values that represents underlying contents of the particular video. 11. The method of claim 9, wherein the numeric representation is a semantic description of the particular video. 12. The method of claim 9, wherein combining the lower-dimension representations of the observations comprises:
summing the lower-dimension representations to generate the numeric representation. 13. The method of claim 9, wherein the generator neural network is configured to:
at each of a plurality of time steps:
sample one or more latent variables for the time step, and
update a hidden state as of the time step by processing the hidden state, the sampled latent variables, the numeric representation, and the data identifying the new time stamp using a deep convolutional neural network to generate an updated hidden state; and
after a last time step in the plurality of time steps:
generate the new video frame from the updated hidden state after the last time step. 14. The method of claim 13, wherein generating the new video frame comprises:
generating pixel sufficient statistics from the updated hidden state after the last time step; and sampling color values of pixels in the new video frame using the pixel sufficient statistics. 15. The method of claim 13, wherein the generator neural network and the observation neural network have been trained jointly with a posterior neural network configured to, during the training, receive a plurality of training observations and a target observation and generate a posterior output that defines a distribution over the one or more latent variables. 16. The method of claim 9, wherein the observation neural network has been trained to generate numeric representations that, in combination with a particular time stamp, are usable by a generator neural network to generate a reconstruction of a particular video frame from the particular video at the particular time stamp. 17. A computer implemented method comprising:
receiving a plurality of observations characterizing a particular image, each observation comprising a crop of the particular image and data identifying a location and size of the crop in the particular image; processing each of the plurality of observations using an observation neural network, wherein the observation neural network is configured to, for each of the observations:
process the observation to generate as output a lower-dimensional representation of the observation;
determining a numeric representation of the particular image by combining the lower-dimension representations of the observations; providing the numeric representation of the particular image for use in characterizing contents of the particular image; receiving data identifying a new crop location and a new crop size; and processing the data identifying the new crop location and the new crop size and the numeric representation of the particular image using a generator neural network to generate a new crop of the particular image at the new crop location and having the new crop size. 18. The method of claim 17, wherein the numeric representation is a collection of numeric values that represents underlying contents of the particular image. 19. The method of claim 17, wherein the numeric representation is a semantic description of the particular image. 20. The method of claim 17, wherein combining the lower-dimension representations of the observations comprises:
summing the lower-dimension representations to generate the numeric representation. 21. The method of claim 17, wherein the generator neural network is configured to:
at each of a plurality of time steps:
sample one or more latent variables for the time step, and
update a hidden state as of the time step by processing the hidden state, the sampled latent variables, the numeric representation, and the data identifying the new crop location and the new crop size using a deep convolutional neural network to generate an updated hidden state; and
after a last time step in the plurality of time steps:
generate the new crop of the particular image from the updated hidden state after the last time step. 22. The method of claim 21, wherein generating the new crop of the particular image from the updated hidden state after the last time step comprises:
generating pixel sufficient statistics from the updated hidden state after the last time step; and sampling color values of pixels in the new crop using the pixel sufficient statistics. 23. The method of claim 21, wherein the generator neural network and the observation neural network have been trained jointly with a posterior neural network configured to, during the training, receive a plurality of training observations and a target observation and generate a posterior output that defines a distribution over the one or more latent variables. 24. The method of claim 17, wherein the observation neural network has been trained to generate numeric representations that, in combination with a particular crop location and a particular crop size, are usable by a generator neural network to generate a reconstruction of a particular crop of the particular image at the particular crop location and having the particular crop size. | 2,600 |
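The generator loop recited in claims 17 and 20-22 of the record above (sum the per-crop representations, then repeatedly sample latent variables and update a hidden state, and finally sample pixels from pixel sufficient statistics) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the dimensions, the fixed seed, and the single dense `tanh` update standing in for the deep convolutional neural network are all assumptions.

```python
import math
import random

REPR_DIM, HIDDEN_DIM, LATENT_DIM = 8, 16, 4   # illustrative sizes (assumed)
CROP_H = CROP_W = 4
rng = random.Random(0)

def combine_representations(crop_reprs):
    """Claim 20: sum the per-crop lower-dimensional representations."""
    return [sum(r[i] for r in crop_reprs) for i in range(REPR_DIM)]

def generate_crop(numeric_repr, crop_x, crop_y, crop_size, num_steps=6):
    """Claims 21-22: per time step, sample latent variables and update the
    hidden state from (hidden, latents, representation, crop location/size);
    after the last step, derive pixel sufficient statistics (here per-pixel
    Gaussian means) and sample color values from them."""
    cond = list(numeric_repr) + [crop_x, crop_y, float(crop_size)]
    hidden = [0.0] * HIDDEN_DIM
    in_dim = HIDDEN_DIM + LATENT_DIM + len(cond)
    W = [[rng.gauss(0, 0.1) for _ in range(in_dim)] for _ in range(HIDDEN_DIM)]
    for _ in range(num_steps):
        z = [rng.gauss(0, 1) for _ in range(LATENT_DIM)]      # sample latents
        inp = hidden + z + cond
        hidden = [math.tanh(sum(w * x for w, x in zip(row, inp))) for row in W]
    # Readout of pixel sufficient statistics, then sample the pixel values.
    W_out = [[rng.gauss(0, 0.1) for _ in range((HIDDEN_DIM))]
             for _ in range(CROP_H * CROP_W)]
    means = [sum(w * h for w, h in zip(row, hidden)) for row in W_out]
    pixels = [m + rng.gauss(0, 0.01) for m in means]
    return [pixels[r * CROP_W:(r + 1) * CROP_W] for r in range(CROP_H)]

observed = [[rng.gauss(0, 1) for _ in range(REPR_DIM)] for _ in range(3)]
crop = generate_crop(combine_representations(observed), 0.25, 0.75, CROP_H)
```

In a real system the dense update would be the claimed deep convolutional network and the sufficient statistics would parameterize a proper pixel likelihood; the loop structure is what the claims describe.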
11,016 | 11,016 | 16,060,128 | 2,665 | A method comprising: processing a recording of a scene to recognise a predetermined user command event performed within the scene; and automatically controlling image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the predetermined user command event performed within the scene. | 1-15. (canceled) 16. An apparatus comprising:
at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
process a recording of a scene to recognise a user command event performed within the scene; and
control image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the user command event performed within the scene. 17. An apparatus as claimed in claim 16, wherein processing a recording of a scene to recognise a user command event performed within the scene comprises image analysis of a recorded image of the scene to recognise a user command event performed within the scene. 18. An apparatus as claimed in claim 16, wherein processing a recording of a scene to recognise a user command event performed within the scene comprises gesture recognition of a person within the scene. 19. An apparatus as claimed in claim 16, wherein processing a recording of a scene to recognise a user command event performed within the scene occurs after a user initiation event dependent upon proximal user event detection and/or user action recognition and/or user identity recognition. 20. An apparatus as claimed in claim 16, further caused to perform at least the following: control image processing of other captured images of the scene to adapt the other captured images, in dependence on said recognition of the user command event, wherein the other captured images of the scene are captured after the captured image comprising the recognised user command event and/or wherein the other captured images of the scene are captured before the captured image comprising the recognised user command event. 21. An apparatus as claimed in claim 16, wherein controlling image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the user command event, comprises removal and replacement of content from the captured image of the scene to create an adapted captured image of the scene. 22. 
An apparatus as claimed in claim 16, wherein controlling image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the user command event performed within the scene comprises controlling processing of:
a first image of a scene captured at a first time, where a portion of the scene has a first state in the captured first image, and a second image of the scene captured at a second time, where the portion of the scene has a second state in the captured second image, wherein the portion of the scene changes between the first state and the second state, to generate a third image, comprising the second image of the scene adapted such that the portion of the second image has a first state in the adapted captured second image. 23. An apparatus as claimed in claim 22, wherein the third image, comprises the second image of the scene having a replacement image portion for the portion of the scene, the replacement image portion depending upon the portion of the scene captured in the first image. 24. An apparatus as claimed in claim 22, comprising enabling user selection of the portion or enabling automatic selection of the portion. 25. An apparatus as claimed in claim 16, wherein controlling image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the user command event, comprises removal of one or more objects in the scene from the captured image of the scene. 26. An apparatus as claimed in claim 25, wherein said user command event indicates at least one of the one or more objects. 27. An apparatus as claimed in claim 16, wherein controlling image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the user command event comprises controlling image processing of successive video frames, each capturing an image of the scene, to adapt the successive video frames, in dependence on said recognition of the user command event, wherein each video frame comprises an effective field of view exceeding 100°. 28. An apparatus as claimed in claim 27, wherein the video frames are used for mediated reality. 29. A method comprising:
processing a recording of a scene to recognise a user command event performed within the scene; and controlling image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the user command event performed within the scene. 30. A method as claimed in claim 29, wherein processing a recording of a scene to recognise a user command event performed within the scene comprises image analysis of a recorded image of the scene to recognise a user command event performed within the scene. 31. A method as claimed in claim 29, wherein processing a recording of a scene to recognise a user command event performed within the scene comprises gesture recognition of a person within the scene. 32. A method as claimed in claim 29, wherein processing a recording of a scene to recognise a user command event performed within the scene occurs after a user initiation event dependent upon proximal user event detection and/or user action recognition and/or user identity recognition. 33. A method as claimed in claim 29, comprising: controlling image processing of other captured images of the scene to adapt the other captured images, in dependence on said recognition of the user command event wherein the other captured images of the scene are captured after the captured image comprising the recognised user command event and/or wherein the other captured images of the scene are captured before the captured image comprising the recognised user command event. 34. A method as claimed in claim 29, wherein controlling image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the user command event, comprises removal and replacement of content from the captured image of the scene to create an adapted captured image of the scene. 35. At least one non-transitory computer readable medium comprising instructions that, when executed, perform at least the following:
process a recording of a scene to recognise a user command event performed within the scene; and control image processing of a captured image of the scene to adapt the captured image, in dependence on said recognition of the user command event performed within the scene. | 2,600 |
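Claims 22-23 of the record above describe generating a "third image": the second captured image adapted so that a portion of the scene reverts to the state it had in the first captured image. A minimal sketch of that replacement step, with images as 2D lists and the portion given as a set of pixel coordinates (both representations are illustrative assumptions, not the patent's):

```python
def restore_portion(first_image, second_image, portion):
    """Claims 22-23 sketch: produce a third image, i.e. the second image
    adapted so the given portion has its state from the first image."""
    third = [row[:] for row in second_image]
    for r, c in portion:
        # Replacement image portion taken from the first captured image.
        third[r][c] = first_image[r][c]
    return third

# First capture: the portion in its first state (value 1 everywhere).
first = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
# Second capture: the same portion has changed state (value 9).
second = [[1, 1, 1], [1, 9, 9], [1, 9, 9]]
portion = {(1, 1), (1, 2), (2, 1), (2, 2)}
third = restore_portion(first, second, portion)  # portion reverts to state 1
```

In the claimed apparatus the portion could be user-selected or automatically selected (claim 24), e.g. via the recognised gesture; here it is simply passed in.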
11,017 | 11,017 | 15,728,716 | 2,619 | A method and apparatus are provided for tessellating patches of surfaces in a tile based three dimensional computer graphics rendering system. For each tile in an image a per tile list of primitive indices is derived for tessellated primitives which make up a patch. Hidden surface removal is then performed on the patch and any domain points which remain after hidden surface removal are derived. The primitives are then shaded for display. | 1. A method for processing tessellated patches of surfaces in a tile-based computer graphics rendering system, comprising:
deriving, in a tiling unit, a per-tile list of primitive indices for tessellated primitives produced from a surface patch; performing hidden surface removal on the tessellated primitives in the surface patch indicated by the primitive indices in the per-tile list for each tile; and shading, by a programmable shading unit, the tessellated primitives which remain after hidden surface removal for each tile for display. 2. The method as claimed in claim 1, wherein the method comprises implementing a geometry processing phase, the geometry processing phase comprising the deriving step. 3. The method as claimed in claim 2, wherein the step of implementing the geometry processing phase further comprises performing domain shading to generate vertex position data for tessellated primitives making up a surface patch, the per-tile list of primitive indices for the tessellated primitives making up the surface patch being derived from the generated vertex position data. 4. The method as claimed in claim 3, wherein only a position part of the domain shading is performed to generate the vertex position data. 5. The method as claimed in claim 3, wherein the step of implementing the geometry processing phase further comprises performing hull shading for the surface patch to calculate tessellation factors for the surface patch. 6. The method as claimed in claim 5, wherein the domain shading is performed on domain points making up the surface patch calculated from the tessellation factors. 7. The method as claimed in claim 5, wherein the step of implementing the geometry processing phase further comprises writing the calculated tessellation factors to a memory. 8. The method as claimed in claim 1, wherein the method further comprises compressing the per-tile list of primitive indices for the tessellated primitives produced from the surface patch. 9. 
The method as claimed in claim 1, wherein the method further comprises implementing a rasterization phase, the rasterization phase comprising the step of performing hidden surface removal and the shading step. 10. The method as claimed in claim 9, wherein the step of implementing the rasterization phase further comprises performing domain shading to generate vertex position data for the tessellated primitives indicated by the primitive indices in the per-tile list for each tile. 11. The method as claimed in claim 10, wherein only a position part of the domain shading is performed to generate the vertex position data for the tessellated primitives indicated by the primitive indices in the per-tile list for each tile. 12. The method as claimed in claim 10, wherein the domain shading is performed on domain points making up the surface patch calculated from fetched tessellation factors. 13. The method as claimed in claim 12, wherein the domain points are calculated from the fetched tessellation factors by performing domain tessellation. 14. The method as claimed in claim 1, wherein hidden surface removal is performed on the tessellated primitives in the surface patch for each tile to generate a per-tile list of visible tessellated primitives for each tile. 15. The method as claimed in claim 14, wherein the step of shading the tessellated primitives which remain after hidden surface removal comprises performing domain shading to generate vertex position and attribute data for the visible tessellated primitives for each tile. 16. The method as claimed in claim 15, wherein full domain shading is performed to generate the vertex position and attribute data for the tessellated primitives remaining after hidden surface removal. 17. The method as claimed in claim 16, wherein the full domain shading is performed on domain points calculated for visible primitives for each tile from fetched tessellation factors. 18. 
The method as claimed in claim 17, wherein the domain points are calculated from the fetched tessellation factors by performing domain tessellation. 19. An apparatus for processing tessellated patches of surfaces in a tile-based computer-graphics rendering system, comprising:
a tiling unit configured to derive a per-tile list of primitive indices for tessellated primitives produced from a surface patch; a hidden surface removal unit configured to perform hidden surface removal on the tessellated primitives in the surface patch indicated by the primitive indices in the per-tile list for each tile; and a programmable shading unit configured to shade the tessellated primitives which remain after hidden surface removal for each tile for display. 20. A method for processing tessellated patches of surfaces in a tile based computer graphics rendering system, comprising:
implementing a geometry processing phase comprising:
performing domain shading to generate vertex position data for tessellated primitives making up a surface patch; and
deriving a per-tile list of primitive indices for the tessellated primitives making up the patch using the generated vertex position data;
implementing a rasterisation phase comprising:
performing domain shading to generate vertex position data for the tessellated primitives indicated by the primitive indices in the per-tile list for each tile;
performing hidden surface removal on the tessellated primitives making up the surface patch for each tile; and
shading the primitives which remain after hidden surface removal for each tile for display. | 2,600 |
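The two-phase flow of claim 20 above (a geometry phase deriving per-tile lists of primitive indices, then a rasterisation phase doing hidden surface removal per tile and shading only the survivors) can be sketched as follows. The 2x2-pixel tiles, axis-aligned rectangle "primitives" with constant depth, and per-pixel depth test are toy stand-ins, not the patent's tessellated triangles or domain shading.

```python
from collections import defaultdict

TILE = 2  # 2x2-pixel tiles (assumed size)

# Each toy "primitive": inclusive pixel rect (x0, y0, x1, y1) + constant depth.
primitives = [
    {"rect": (0, 0, 3, 3), "depth": 0.9},  # far background quad
    {"rect": (0, 0, 1, 1), "depth": 0.2},  # near quad occluding one tile
]

def geometry_phase(prims):
    """Derive the per-tile list of primitive indices (the tiling step)."""
    per_tile = defaultdict(list)
    for i, p in enumerate(prims):
        x0, y0, x1, y1 = p["rect"]
        for ty in range(y0 // TILE, y1 // TILE + 1):
            for tx in range(x0 // TILE, x1 // TILE + 1):
                per_tile[(tx, ty)].append(i)
    return per_tile

def rasterization_phase(prims, per_tile):
    """Per-pixel hidden surface removal within each tile; only the winning
    (visible) primitives per tile would then be shaded for display."""
    visible = defaultdict(set)
    for (tx, ty), indices in per_tile.items():
        for py in range(ty * TILE, ty * TILE + TILE):
            for px in range(tx * TILE, tx * TILE + TILE):
                best, best_z = None, float("inf")
                for i in indices:
                    x0, y0, x1, y1 = prims[i]["rect"]
                    if x0 <= px <= x1 and y0 <= py <= y1 and prims[i]["depth"] < best_z:
                        best, best_z = i, prims[i]["depth"]
                if best is not None:
                    visible[(tx, ty)].add(best)
    return visible

tiles = geometry_phase(primitives)
shaded = rasterization_phase(primitives, tiles)
# The fully occluded background is never shaded in the near quad's tile.
```

The payoff the claims aim at is visible even in this toy: the background primitive is listed in the top-left tile but removed there before shading, so expensive (full domain) shading runs only on visible primitives per tile.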
11,018 | 11,018 | 15,674,094 | 2,641 | Certain aspects of the present disclosure relate to techniques for performing handover from a source base station to a target base station. According to one aspect, a method generally includes receiving a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station, generating a scheduling uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request, and communicating the grant to the user equipment. | 1. A method of wireless communication by a target base station, the method comprising:
receiving a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station; generating an uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request; and communicating the grant to the user equipment. 2. The method of claim 1, wherein communicating the grant to the user equipment comprises transmitting the grant to the source base station, which transmits the grant to the user equipment. 3. The method of claim 2, wherein transmitting the grant to the source base station comprises sending a radio resource control container including a radio resource control connection reconfiguration message to the source base station. 4. The method of claim 1, further comprising communicating a starting time for the grant to the user equipment with respect to a timing offset between the target base station and the source base station. 5. The method of claim 4, wherein the starting time comprises a system frame number of the target base station and a subframe number. 6. The method of claim 1, further comprising determining of a time offset between the target base station and the source base station; and providing the time offset to the user equipment. 7. The method of claim 1, further comprising receiving the handover complete message from the user equipment and cancelling the grant based on receiving the handover complete message. 8. The method of claim 1, wherein communicating the grant to the user equipment comprises transmitting, to the user equipment, signaling indicating a grant of uplink resources for the target base station, and wherein the method further comprises receiving, from the user equipment, an indication that handover from the source base station to the target base station is complete using the granted uplink resources. 9. 
The method of claim 8, wherein the grant of uplink resources is transmitted from the target base station on a physical downlink control channel (PDCCH). 10. The method of claim 8, wherein communicating the grant to the user equipment comprises prompting, by the target base station, the source base station to transmit the grant of uplink resources. 11. The method of claim 10, wherein the grant is transmitted via radio resource control (RRC) signaling. 12. The method of claim 8, wherein the grant of uplink resources includes an indication of a periodicity and timing offset for uplink transmissions on the granted resources. 13. The method of claim 12, wherein the periodicity and timing offset for uplink transmissions on the granted uplink resources comprises a plurality of grant occasions that are consistent across a plurality of frames. 14. The method of claim 8, wherein the signaling indicating a grant of uplink resources further comprises signaling indicating transmission power for uplink transmissions on the granted uplink resources. 15. The method of claim 1, further comprising selecting one or more resources of an uplink channel and a modulation and coding scheme for the user equipment to transmit the handover complete message, wherein the grant comprises an indication of the one or more resources of the uplink channel and the modulation and coding scheme. 16. The method of claim 15, wherein selecting the one or more resources and the modulation and coding scheme is based on measurements made by the user equipment provided in the handover request. 17. The method of claim 1, further comprising selecting a periodicity for the grant based on a type of the user equipment or a number of active bearers of the user equipment. 18. The method of claim 1, further comprising selecting a periodicity for the grant based on available resources of the target base station. 19. 
The method of claim 1, further comprising selecting a periodicity for the grant based on latency requirements of bearers of the user equipment. 20. The method of claim 1, further comprising communicating a power offset for transmitting the handover complete message to the user equipment. 21. The method of claim 1, wherein the grant is for a physical uplink shared channel. 22. The method of claim 1, wherein the uplink grant comprises a semi-persistent scheduling uplink grant. 23. The method of claim 1, wherein the uplink grant comprises a dynamic uplink grant. 24. A method of wireless communication by a user equipment, the method comprising:
receiving, from a base station, an uplink grant for the user equipment to transmit a handover complete message to a target base station for handover of communication of the user equipment from a source base station to the target base station; and transmitting the handover complete message based on the grant. 25. The method of claim 24, wherein the uplink grant is received from the source base station. 26. The method of claim 25, wherein the uplink grant is received via radio resource control (RRC) signaling. 27. The method of claim 24, wherein the uplink grant includes an indication of a periodicity and timing offset for uplink transmissions on granted uplink resources indicated in the uplink grant. 28. The method of claim 27, wherein the periodicity and timing offset for uplink transmissions on the granted uplink resources comprises a plurality of grant occasions that are consistent across a plurality of frames. 29. The method of claim 27, wherein the periodicity is based on a bearer type of the granted uplink resources. 30. The method of claim 24, further comprising reading a master information block broadcast by the target base station; and determining a timing of the target base station based on the master information block. 31. The method of claim 30, wherein reading the master information block comprises reading the master information block while communicating with the source base station. 32. The method of claim 24, further comprising receiving a time offset between the target base station and the source base station. 33. The method of claim 24, further comprising receiving a starting time for the grant to the user equipment with respect to a timing offset between the target base station and the source base station. 34. The method of claim 33, wherein the starting time comprises a system frame number of the target base station and a subframe number. 35. The method of claim 24, wherein the uplink grant comprises a semi-persistent scheduling uplink grant. 36. 
The method of claim 24, wherein the uplink grant comprises a dynamic uplink grant. 37. The method of claim 24, wherein the uplink grant is received from the target base station on a physical downlink control channel. 38. The method of claim 24, wherein the grant comprises one or more resources of an uplink channel and a modulation and coding scheme. 39. The method of claim 38, wherein the one or more resources and the modulation and coding scheme are based on measurements made by the user equipment. 40. The method of claim 24, further comprising receiving a power offset for transmitting the handover complete message. 41. The method of claim 40, further comprising applying the power offset to a power calculated via an open loop power control procedure performed by the user equipment. 42. The method of claim 24, wherein the grant is for a physical uplink shared channel. 43. The method of claim 24, wherein the handover complete message is transmitted using granted uplink resources indicated in the uplink grant. 44. The method of claim 24, wherein the uplink grant comprises signaling indicating transmission power for uplink transmissions using granted uplink resources. 45. A target base station comprising:
a memory; and a processor configured to:
receive a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station;
generate an uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request; and
communicate the grant to the user equipment. 46. The target base station of claim 45, wherein the processor is configured to communicate the grant to the user equipment by transmitting the grant to the source base station, which transmits the grant to the user equipment. 47. The target base station of claim 46, wherein transmitting the grant to the source base station comprises sending a radio resource control container including a radio resource control connection reconfiguration message to the source base station. 48. The target base station of claim 45, wherein the processor is further configured to communicate a starting time for the grant to the user equipment with respect to a timing offset between the target base station and the source base station. 49. The target base station of claim 48, wherein the starting time comprises a system frame number of the target base station and a subframe number. 50. The target base station of claim 45, wherein the processor is further configured to determine a time offset between the target base station and the source base station;
and provide the time offset to the user equipment. 51. The target base station of claim 45, wherein the processor is further configured to receive the handover complete message from the user equipment and cancel the grant based on receiving the handover complete message. 52. The target base station of claim 45, wherein the processor is configured to communicate the grant to the user equipment by transmitting, to the user equipment, signaling indicating a grant of uplink resources to the target base station, and wherein the processor is further configured to receive, from the user equipment, an indication that handover from the source base station to the target base station is complete using the granted uplink resources. 53. The target base station of claim 52, wherein the grant of uplink resources is transmitted from the target base station on a physical downlink control channel (PDCCH). 54. The target base station of claim 52, wherein the processor is configured to communicate the grant to the user equipment by prompting, by the target base station, the source base station to transmit the grant of uplink resources. 55. The target base station of claim 54, wherein the grant is transmitted via radio resource control (RRC) signaling. 56. The target base station of claim 52, wherein the grant of uplink resources includes an indication of a periodicity and timing offset for uplink transmissions on the granted resources. 57. The target base station of claim 56, wherein the periodicity and timing offset for uplink transmissions on the granted uplink resources comprises a plurality of grant occasions that are consistent across a plurality of frames. 58. The target base station of claim 52, wherein the signaling indicating the grant of uplink resources further comprises signaling indicating transmission power for uplink transmissions using the granted uplink resources. 59. 
The target base station of claim 45, wherein the processor is further configured to select one or more resources of an uplink channel and a modulation and coding scheme for the user equipment to transmit the handover complete message, and wherein the grant comprises the one or more resources of the uplink channel and the modulation and coding scheme. 60. The target base station of claim 59, wherein the processor is configured to select the one or more resources and the modulation and coding scheme based on measurements made by the user equipment provided in the handover request. 61. The target base station of claim 45, wherein the processor is further configured to select a periodicity for the grant based on a type of the user equipment or a number of active bearers of the user equipment. 62. The target base station of claim 45, wherein the processor is further configured to select a periodicity for the grant based on available resources of the target base station. 63. The target base station of claim 45, wherein the processor is further configured to select a periodicity for the grant based on latency requirements of bearers of the user equipment. 64. The target base station of claim 45, wherein the processor is further configured to communicate a power offset for transmitting the handover complete message to the user equipment. 65. The target base station of claim 45, wherein the grant is for a physical uplink shared channel. 66. The target base station of claim 45, wherein the uplink grant comprises a semi-persistent scheduling uplink grant. 67. The target base station of claim 45, wherein the uplink grant comprises a dynamic uplink grant. 68. A user equipment comprising:
a memory; and a processor configured to:
receive, from a base station, an uplink grant for the user equipment to transmit a handover complete message to a target base station for handover of communication of the user equipment from a source base station to the target base station; and
transmit the handover complete message based on the grant. 69. The user equipment of claim 68, wherein the uplink grant is received from the source base station. 70. The user equipment of claim 69, wherein the uplink grant is received via radio resource control (RRC) signaling. 71. The user equipment of claim 68, wherein the uplink grant includes an indication of a periodicity and timing offset for uplink transmissions on granted uplink resources indicated in the uplink grant. 72. The user equipment of claim 71, wherein the periodicity and timing offset for uplink transmissions on the granted uplink resources comprises a plurality of grant occasions that are consistent across a plurality of frames. 73. The user equipment of claim 71, wherein the periodicity is based on a bearer type of the granted uplink resources. 74. The user equipment of claim 68, wherein the processor is further configured to read a master information block broadcast by the target base station; and determine a timing of the target base station based on the master information block. 75. The user equipment of claim 74, wherein the processor is configured to read the master information block by reading the master information block while communicating with the source base station. 76. The user equipment of claim 68, wherein the processor is further configured to receive a time offset between the target base station and the source base station. 77. The user equipment of claim 68, wherein the processor is further configured to receive a starting time for the grant to the user equipment with respect to a timing offset between the target base station and the source base station. 78. The user equipment of claim 77, wherein the starting time comprises a system frame number of the target base station and a subframe number. 79. The user equipment of claim 68, wherein the uplink grant comprises a semi-persistent scheduling uplink grant. 80. 
The user equipment of claim 68, wherein the uplink grant comprises a dynamic uplink grant. 81. The user equipment of claim 68, wherein the uplink grant is received from the target base station on a physical downlink control channel. 82. The user equipment of claim 68, wherein the grant comprises one or more resources of an uplink channel and a modulation and coding scheme. 83. The user equipment of claim 82, wherein the one or more resources and the modulation and coding scheme are based on measurements made by the user equipment. 84. The user equipment of claim 68, wherein the processor is further configured to receive a power offset for transmitting the handover complete message. 85. The user equipment of claim 84, wherein the processor is further configured to apply the power offset to a power calculated via an open loop power control procedure performed by the user equipment. 86. The user equipment of claim 68, wherein the grant is for a physical uplink shared channel. 87. The user equipment of claim 68, wherein the handover complete message is transmitted using granted uplink resources indicated in the uplink grant. 88. The user equipment of claim 68, wherein the uplink grant comprises signaling indicating transmission power for uplink transmissions using granted uplink resources. 89. A target base station comprising:
means for receiving a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station; means for generating an uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request; and means for communicating the grant to the user equipment. 90. A user equipment comprising:
means for receiving, from a base station, an uplink grant for the user equipment to transmit a handover complete message to a target base station for handover of communication of the user equipment from a source base station to the target base station; and means for transmitting the handover complete message based on the grant. 91. A computer readable medium having instructions stored thereon for causing at least one processor in a target base station to perform a method, the method comprising:
receiving a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station; generating a semi-persistent scheduling uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request; and communicating the grant to the user equipment. 92. A computer readable medium having instructions stored thereon for causing at least one processor in a user equipment to perform a method, the method comprising:
receiving, from a source base station, a semi-persistent scheduling uplink grant for the user equipment to transmit a handover complete message to a target base station for handover of communication of the user equipment from the source base station to the target base station; and transmitting the handover complete message based on the grant. | Certain aspects of the present disclosure relate to techniques for performing handover from a source base station to a target base station. According to one aspect, a method generally includes receiving a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station, generating a scheduling uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request, and communicating the grant to the user equipment. 1. A method of wireless communication by a target base station, the method comprising:
receiving a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station; generating an uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request; and communicating the grant to the user equipment. 2. The method of claim 1, wherein communicating the grant to the user equipment comprises transmitting the grant to the source base station, which transmits the grant to the user equipment. 3. The method of claim 2, wherein transmitting the grant to the source base station comprises sending a radio resource control container including a radio resource control connection reconfiguration message to the source base station. 4. The method of claim 1, further comprising communicating a starting time for the grant to the user equipment with respect to a timing offset between the target base station and the source base station. 5. The method of claim 4, wherein the starting time comprises a system frame number of the target base station and a subframe number. 6. The method of claim 1, further comprising determining a time offset between the target base station and the source base station; and providing the time offset to the user equipment. 7. The method of claim 1, further comprising receiving the handover complete message from the user equipment and cancelling the grant based on receiving the handover complete message. 8. The method of claim 1, wherein communicating the grant to the user equipment comprises transmitting, to the user equipment, signaling indicating a grant of uplink resources for the target base station, and wherein the method further comprises receiving, from the user equipment, an indication that handover from the source base station to the target base station is complete using the granted uplink resources. 9. 
The method of claim 8, wherein the grant of uplink resources is transmitted from the target base station on a physical downlink control channel (PDCCH). 10. The method of claim 8, wherein communicating the grant to the user equipment comprises prompting, by the target base station, the source base station to transmit the grant of uplink resources. 11. The method of claim 10, wherein the grant is transmitted via radio resource control (RRC) signaling. 12. The method of claim 8, wherein the grant of uplink resources includes an indication of a periodicity and timing offset for uplink transmissions on the granted resources. 13. The method of claim 12, wherein the periodicity and timing offset for uplink transmissions on the granted uplink resources comprises a plurality of grant occasions that are consistent across a plurality of frames. 14. The method of claim 8, wherein the signaling indicating a grant of uplink resources further comprises signaling indicating transmission power for uplink transmissions on the granted uplink resources. 15. The method of claim 1, further comprising selecting one or more resources of an uplink channel and a modulation and coding scheme for the user equipment to transmit the handover complete message, wherein the grant comprises an indication of the one or more resources of the uplink channel and the modulation and coding scheme. 16. The method of claim 15, wherein selecting the one or more resources and the modulation and coding scheme is based on measurements made by the user equipment provided in the handover request. 17. The method of claim 1, further comprising selecting a periodicity for the grant based on a type of the user equipment or a number of active bearers of the user equipment. 18. The method of claim 1, further comprising selecting a periodicity for the grant based on available resources of the target base station. 19. 
The method of claim 1, further comprising selecting a periodicity for the grant based on latency requirements of bearers of the user equipment. 20. The method of claim 1, further comprising communicating a power offset for transmitting the handover complete message to the user equipment. 21. The method of claim 1, wherein the grant is for a physical uplink shared channel. 22. The method of claim 1, wherein the uplink grant comprises a semi-persistent scheduling uplink grant. 23. The method of claim 1, wherein the uplink grant comprises a dynamic uplink grant. 24. A method of wireless communication by a user equipment, the method comprising:
receiving, from a base station, an uplink grant for the user equipment to transmit a handover complete message to a target base station for handover of communication of the user equipment from a source base station to the target base station; and transmitting the handover complete message based on the grant. 25. The method of claim 24, wherein the uplink grant is received from the source base station. 26. The method of claim 25, wherein the uplink grant is received via radio resource control (RRC) signaling. 27. The method of claim 24, wherein the uplink grant includes an indication of a periodicity and timing offset for uplink transmissions on granted uplink resources indicated in the uplink grant. 28. The method of claim 27, wherein the periodicity and timing offset for uplink transmissions on the granted uplink resources comprises a plurality of grant occasions that are consistent across a plurality of frames. 29. The method of claim 27, wherein the periodicity is based on a bearer type of the granted uplink resources. 30. The method of claim 24, further comprising reading a master information block broadcast by the target base station; and determining a timing of the target base station based on the master information block. 31. The method of claim 30, wherein reading the master information block comprises reading the master information block while communicating with the source base station. 32. The method of claim 24, further comprising receiving a time offset between the target base station and the source base station. 33. The method of claim 24, further comprising receiving a starting time for the grant to the user equipment with respect to a timing offset between the target base station and the source base station. 34. The method of claim 33, wherein the starting time comprises a system frame number of the target base station and a subframe number. 35. The method of claim 24, wherein the uplink grant comprises a semi-persistent scheduling uplink grant. 36. 
The method of claim 24, wherein the uplink grant comprises a dynamic uplink grant. 37. The method of claim 24, wherein the uplink grant is received from the target base station on a physical downlink control channel. 38. The method of claim 24, wherein the grant comprises one or more resources of an uplink channel and a modulation and coding scheme. 39. The method of claim 38, wherein the one or more resources and the modulation and coding scheme are based on measurements made by the user equipment. 40. The method of claim 24, further comprising receiving a power offset for transmitting the handover complete message. 41. The method of claim 40, further comprising applying the power offset to a power calculated via an open loop power control procedure performed by the user equipment. 42. The method of claim 24, wherein the grant is for a physical uplink shared channel. 43. The method of claim 24, wherein the handover complete message is transmitted using granted uplink resources indicated in the uplink grant. 44. The method of claim 24, wherein the uplink grant comprises signaling indicating transmission power for uplink transmissions using granted uplink resources. 45. A target base station comprising:
a memory; and a processor configured to:
receive a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station;
generate an uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request; and
communicate the grant to the user equipment. 46. The target base station of claim 45, wherein the processor is configured to communicate the grant to the user equipment by transmitting the grant to the source base station, which transmits the grant to the user equipment. 47. The target base station of claim 46, wherein transmitting the grant to the source base station comprises sending a radio resource control container including a radio resource control connection reconfiguration message to the source base station. 48. The target base station of claim 45, wherein the processor is further configured to communicate a starting time for the grant to the user equipment with respect to a timing offset between the target base station and the source base station. 49. The target base station of claim 48, wherein the starting time comprises a system frame number of the target base station and a subframe number. 50. The target base station of claim 45, wherein the processor is further configured to determine a time offset between the target base station and the source base station;
and provide the time offset to the user equipment. 51. The target base station of claim 45, wherein the processor is further configured to receive the handover complete message from the user equipment and cancel the grant based on receiving the handover complete message. 52. The target base station of claim 45, wherein the processor is configured to communicate the grant to the user equipment by transmitting, to the user equipment, signaling indicating a grant of uplink resources to the target base station, and wherein the processor is further configured to receive, from the user equipment, an indication that handover from the source base station to the target base station is complete using the granted uplink resources. 53. The target base station of claim 52, wherein the grant of uplink resources is transmitted from the target base station on a physical downlink control channel (PDCCH). 54. The target base station of claim 52, wherein the processor is configured to communicate the grant to the user equipment by prompting, by the target base station, the source base station to transmit the grant of uplink resources. 55. The target base station of claim 54, wherein the grant is transmitted via radio resource control (RRC) signaling. 56. The target base station of claim 52, wherein the grant of uplink resources includes an indication of a periodicity and timing offset for uplink transmissions on the granted resources. 57. The target base station of claim 56, wherein the periodicity and timing offset for uplink transmissions on the granted uplink resources comprises a plurality of grant occasions that are consistent across a plurality of frames. 58. The target base station of claim 52, wherein the signaling indicating the grant of uplink resources further comprises signaling indicating transmission power for uplink transmissions using the granted uplink resources. 59. 
The target base station of claim 45, wherein the processor is further configured to select one or more resources of an uplink channel and a modulation and coding scheme for the user equipment to transmit the handover complete message, and wherein the grant comprises the one or more resources of the uplink channel and the modulation and coding scheme. 60. The target base station of claim 59, wherein the processor is configured to select the one or more resources and the modulation and coding scheme based on measurements made by the user equipment provided in the handover request. 61. The target base station of claim 45, wherein the processor is further configured to select a periodicity for the grant based on a type of the user equipment or a number of active bearers of the user equipment. 62. The target base station of claim 45, wherein the processor is further configured to select a periodicity for the grant based on available resources of the target base station. 63. The target base station of claim 45, wherein the processor is further configured to select a periodicity for the grant based on latency requirements of bearers of the user equipment. 64. The target base station of claim 45, wherein the processor is further configured to communicate a power offset for transmitting the handover complete message to the user equipment. 65. The target base station of claim 45, wherein the grant is for a physical uplink shared channel. 66. The target base station of claim 45, wherein the uplink grant comprises a semi-persistent scheduling uplink grant. 67. The target base station of claim 45, wherein the uplink grant comprises a dynamic uplink grant. 68. A user equipment comprising:
a memory; and a processor configured to:
receive, from a base station, an uplink grant for the user equipment to transmit a handover complete message to a target base station for handover of communication of the user equipment from a source base station to the target base station; and
transmit the handover complete message based on the grant. 69. The user equipment of claim 68, wherein the uplink grant is received from the source base station. 70. The user equipment of claim 69, wherein the uplink grant is received via radio resource control (RRC) signaling. 71. The user equipment of claim 68, wherein the uplink grant includes an indication of a periodicity and timing offset for uplink transmissions on granted uplink resources indicated in the uplink grant. 72. The user equipment of claim 71, wherein the periodicity and timing offset for uplink transmissions on the granted uplink resources comprises a plurality of grant occasions that are consistent across a plurality of frames. 73. The user equipment of claim 71, wherein the periodicity is based on a bearer type of the granted uplink resources. 74. The user equipment of claim 68, wherein the processor is further configured to read a master information block broadcast by the target base station; and determine a timing of the target base station based on the master information block. 75. The user equipment of claim 74, wherein the processor is configured to read the master information block by reading the master information block while communicating with the source base station. 76. The user equipment of claim 68, wherein the processor is further configured to receive a time offset between the target base station and the source base station. 77. The user equipment of claim 68, wherein the processor is further configured to receive a starting time for the grant to the user equipment with respect to a timing offset between the target base station and the source base station. 78. The user equipment of claim 77, wherein the starting time comprises a system frame number of the target base station and a subframe number. 79. The user equipment of claim 68, wherein the uplink grant comprises a semi-persistent scheduling uplink grant. 80. 
The user equipment of claim 68, wherein the uplink grant comprises a dynamic uplink grant. 81. The user equipment of claim 68, wherein the uplink grant is received from the target base station on a physical downlink control channel. 82. The user equipment of claim 68, wherein the grant comprises one or more resources of an uplink channel and a modulation and coding scheme. 83. The user equipment of claim 82, wherein the one or more resources and the modulation and coding scheme are based on measurements made by the user equipment. 84. The user equipment of claim 68, wherein the processor is further configured to receive a power offset for transmitting the handover complete message. 85. The user equipment of claim 84, wherein the processor is further configured to apply the power offset to a power calculated via an open loop power control procedure performed by the user equipment. 86. The user equipment of claim 68, wherein the grant is for a physical uplink shared channel. 87. The user equipment of claim 68, wherein the handover complete message is transmitted using granted uplink resources indicated in the uplink grant. 88. The user equipment of claim 68, wherein the uplink grant comprises signaling indicating transmission power for uplink transmissions using granted uplink resources. 89. A target base station comprising:
means for receiving a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station; means for generating an uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request; and means for communicating the grant to the user equipment. 90. A user equipment comprising:
means for receiving, from a base station, an uplink grant for the user equipment to transmit a handover complete message to a target base station for handover of communication of the user equipment from a source base station to the target base station; and means for transmitting the handover complete message based on the grant. 91. A computer readable medium having instructions stored thereon for causing at least one processor in a target base station to perform a method, the method comprising:
receiving a handover request from a source base station for handover of communication of a user equipment from the source base station to the target base station; generating a semi-persistent scheduling uplink grant for the user equipment to transmit a handover complete message based on receiving the handover request; and communicating the grant to the user equipment. 92. A computer readable medium having instructions stored thereon for causing at least one processor in a user equipment to perform a method, the method comprising:
receiving, from a source base station, a semi-persistent scheduling uplink grant for the user equipment to transmit a handover complete message to a target base station for handover of communication of the user equipment from the source base station to the target base station; and transmitting the handover complete message based on the grant. | 2,600 |
11,019 | 11,019 | 16,418,872 | 2,612 | The technology disclosed relates to positioning and revealing a control interface in a virtual or augmented reality that includes causing display of a plurality of interface projectiles at a first region of a virtual or augmented reality. Input is received that is interpreted as user interaction with an interface projectile. User interaction includes selecting and throwing the interface projectile in a first direction. An animation of the interface projectile is displayed along a trajectory in the first direction to a place where it lands. A blooming of the control interface from the interface projectile at the place where it lands is displayed. | 1. A method of positioning and revealing a control interface in a virtual or augmented reality, including:
causing display of a plurality of interface projectiles at a first region of a virtual or augmented reality; receiving input interpreted as user interaction with an interface projectile, including selecting and throwing the interface projectile in a first direction, causing display of an animation of the interface projectile along a trajectory in the first direction to a place where it lands; and causing display of the control interface blooming from the interface projectile at the place where it lands. 2. The method of claim 1, further implementing determining from the input, a throw direction and a throw speed for the user interaction with the interface projectile. 3. The method of claim 2, further implementing determining from the throw direction and the throw speed, a user's intended interface angle and an interface distance. 4. The method of claim 1, further implementing heuristics based on user comfort factors including at least an arm length for the user and a location of pre-existing interfaces in a user's workspace. 5. The method of claim 4, further implementing using the user comfort factors to refine a target interface position and rotation to place the control interface in a location that is immediately accessible without discomfort or significant movement required of a user. 6. The method of claim 1, wherein input is received from an optical sensor device comprising at least one camera having a field of view disposed to sense motions of hands of a user. 7. The method of claim 6, wherein a user's hand is sensed without aid of markers, gloves, or hand held controllers. 8. The method of claim 6, further including capturing a set of images of one or more hands in a three-dimensional (3D) sensory space and sensing a location of at least one hand using a video capturing sensor including at least one camera. 9. 
The method of claim 1, further including the interface projectiles bear a representation of the control interface that will be launched by throwing. 10. The method of claim 9, further including detecting a grab gesture made by the user that indicates the user has grasped the interface projectile. 11. A graphic user interface generator system, including:
a processor coupled with a computer readable medium storing instructions thereon that when executed implement: a display generator configurable to cause display of a plurality of interface projectiles in a first region of a virtual or augmented reality; a gesture data input that receives gesture data representative of a user selecting an interface projectile and throwing it towards a place where it lands; the display generator configured to respond to the gesture data by animating a trajectory of the selected interface projectile from the first region to the place where the interface projectile lands; and the display generator further configured to generate a control interface bloom that reveals a control interface at the place where the interface projectile lands. 12. The system of claim 11, further implementing determining from the input, a throw direction and a throw speed for a user interaction with the interface projectile. 13. The system of claim 11, further implementing determining from the throw direction and the throw speed, a user's intended interface angle and an interface distance. 14. The system of claim 11, further implementing heuristics based on user comfort factors including at least an arm length for the user and a location of pre-existing interfaces in a user's workspace. 15. The system of claim 14, further implementing using the user comfort factors to refine a target interface position and rotation to place the control interface in a location that is immediately accessible without discomfort or significant movement required of a user. 16. The system of claim 11, wherein input is received from an optical sensor device comprising at least one camera having a field of view disposed to sense motions of hands of a user. 17. The system of claim 16, wherein a user's hand is sensed without aid of markers, gloves, or hand held controllers. 18. 
The system of claim 16, further including capturing a set of images of one or more hands in a three-dimensional (3D) sensory space and sensing a location of at least one hand using a video capturing sensor including at least one camera. 19. The system of claim 11, further including the interface projectiles bear a representation of the control interface that will be launched by throwing. 20. The system of claim 11, further including the gesture input implementing detecting a grab gesture made by the user that indicates the user has grasped the interface projectile. 21. A non-transitory computer readable medium storing instructions thereon, which instructions when executed by one or more processors implement a graphic user interface capable wearable device including:
presenting a plurality of interface projectiles displayed in a virtual or augmented reality at a first time, wherein each interface projectile is throwable and, upon landing, blooms into a control interface where it lands, presenting an interface projectile trajectory animation, responsive to user manipulation of an interface projectile, which displays travel of the interface projectile from its location at the first time to a place where it lands in the virtual or augmented reality at a second time, and presenting a control interface that becomes visible, blooming from the interface projectile at the place where it lands at a third time. 22. The non-transitory computer readable medium of claim 21, further including using an iconographic representation. | The technology disclosed relates to positioning and revealing a control interface in a virtual or augmented reality that includes causing display of a plurality of interface projectiles at a first region of a virtual or augmented reality. Input is received that is interpreted as user interaction with an interface projectile. User interaction includes selecting and throwing the interface projectile in a first direction. An animation of the interface projectile is displayed along a trajectory in the first direction to a place where it lands. A blooming of the control interface from the interface projectile at the place where it lands is displayed.1. A method of positioning and revealing a control interface in a virtual or augmented reality, including:
causing display of a plurality of interface projectiles at a first region of a virtual or augmented reality; receiving input interpreted as user interaction with an interface projectile, including selecting and throwing the interface projectile in a first direction, causing display of an animation of the interface projectile along a trajectory in the first direction to a place where it lands; and causing display of the control interface blooming from the interface projectile at the place where it lands. 2. The method of claim 1, further implementing determining from the input, a throw direction and a throw speed for the user interaction with the interface projectile. 3. The method of claim 2, further implementing determining from the throw direction and the throw speed, a user's intended interface angle and an interface distance. 4. The method of claim 1, further implementing heuristics based on user comfort factors including at least an arm length for the user and a location of pre-existing interfaces in a user's workspace. 5. The method of claim 4, further implementing using the user comfort factors to refine a target interface position and rotation to place the control interface in a location that is immediately accessible without discomfort or significant movement required of a user. 6. The method of claim 1, wherein input is received from an optical sensor device comprising at least one camera having a field of view disposed to sense motions of hands of a user. 7. The method of claim 6, wherein a user's hand is sensed without aid of markers, gloves, or hand held controllers. 8. The method of claim 6, further including capturing a set of images of one or more hands in a three-dimensional (3D) sensory space and sensing a location of at least one hand using a video capturing sensor including at least one camera. 9. 
The method of claim 1, further including the interface projectiles bear a representation of the control interface that will be launched by throwing. 10. The method of claim 9, further including detecting a grab gesture made by the user that indicates the user has grasped the interface projectile. 11. A graphic user interface generator system, including:
a processor coupled with a computer readable medium storing instructions thereon that when executed implement: a display generator configurable to cause display of a plurality of interface projectiles in a first region of a virtual or augmented reality; a gesture data input that receives gesture data representative of a user selecting an interface projectile and throwing it towards a place where it lands; the display generator configured to respond to the gesture data by animating a trajectory of the selected interface projectile from the first region to the place where the interface projectile lands; and the display generator further configured to generate a control interface bloom that reveals a control interface at the place where the interface projectile lands. 12. The system of claim 11, further implementing determining from the input, a throw direction and a throw speed for a user interaction with the interface projectile. 13. The system of claim 11, further implementing determining from the throw direction and the throw speed, a user's intended interface angle and an interface distance. 14. The system of claim 11, further implementing heuristics based on user comfort factors including at least an arm length for the user and a location of pre-existing interfaces in a user's workspace. 15. The system of claim 14, further implementing using the user comfort factors to refine a target interface position and rotation to place the control interface in a location that is immediately accessible without discomfort or significant movement required of a user. 16. The system of claim 11, wherein input is received from an optical sensor device comprising at least one camera having a field of view disposed to sense motions of hands of a user. 17. The system of claim 16, wherein a user's hand is sensed without aid of markers, gloves, or hand held controllers. 18. 
The system of claim 16, further including capturing a set of images of one or more hands in a three-dimensional (3D) sensory space and sensing a location of at least one hand using a video capturing sensor including at least one camera. 19. The system of claim 11, further including the interface projectiles bear a representation of the control interface that will be launched by throwing. 20. The system of claim 11, further including the gesture input implementing detecting a grab gesture made by the user that indicates the user has grasped the interface projectile. 21. A non-transitory computer readable medium storing instructions thereon, which instructions when executed by one or more processors implement a graphic user interface capable wearable device including:
presenting a plurality of interface projectiles displayed in a virtual or augmented reality at a first time, wherein each interface projectile is throwable and, upon landing, blooms into a control interface where it lands, presenting an interface projectile trajectory animation, responsive to user manipulation of an interface projectile, which displays travel of the interface projectile from its location at the first time to a place where it lands in the virtual or augmented reality at a second time, and presenting a control interface that becomes visible, blooming from the interface projectile at the place where it lands at a third time. 22. The non-transitory computer readable medium of claim 21, further including using an iconographic representation. | 2,600 |
11,020 | 11,020 | 14,524,733 | 2,625 | The present invention relates to an organic light emitting display device that changes a reference voltage commonly applied to a driving transistor in all pixels, based on a characteristic value sensed according to each pixel. | 1. An organic light emitting display device comprising:
a sensing unit that senses a characteristic value of a driving transistor in each pixel of a display panel; and a compensation unit that controls to change a reference voltage commonly applied to a driving transistor in all pixels, based on the characteristic value sensed according to each pixel. 2. The organic light emitting display device of claim 1, wherein the compensation unit calculates dispersion information of the characteristic value sensed according to each pixel, and controls to change the reference voltage which is a common voltage of all pixels of the display panel, according to a result of a comparison between the calculated dispersion information and a predetermined reference dispersion information. 3. The organic light emitting display device of claim 2, wherein the compensation unit compares the calculated dispersion information with the pre-stored reference dispersion information, and when a difference between the calculated dispersion information and the reference dispersion information is out of a predetermined range, the compensation unit controls to change the reference voltage according to a reference voltage change value determined so that the difference between the calculated dispersion information and the reference dispersion information is within the predetermined range. 4. The organic light emitting display device of claim 1, wherein the compensation unit comprises a calculation unit that calculates the dispersion information with an average value and a deviation of the characteristic value sensed according to each pixel, a first compensation unit that compensates for the deviation of the characteristic value sensed according to each pixel, and a second compensation unit that compensates for the average value of the characteristic value sensed according to each pixel. 5. 
The organic light emitting display device of claim 4, wherein the first compensation unit controls to change a data voltage provided to each pixel in order to compensate for the deviation of the characteristic value sensed according to each pixel, and the second compensation unit controls to change the reference voltage provided to all pixels of the display panel in order to compensate for the average value of the characteristic value sensed according to each pixel. 6. The organic light emitting display device of claim 5, further comprising:
a data driving unit that provides a data voltage to each pixel of the display panel, wherein the data driving unit provides each pixel with a data voltage changed within a data voltage range including a first voltage range allocated for a grayscale expression and a second voltage range allocated for a deviation compensation. 7. An organic light emitting display device comprising:
a display panel comprising data lines and gate lines defining a plurality of pixels; a data driving unit that provides a data voltage to the data lines; and a power providing unit that changes and provides a reference voltage commonly provided to a driving transistor in the plurality of pixels. 8. The organic light emitting display device of claim 7, wherein the power providing unit changes and outputs the reference voltage according to an average value of a characteristic value of the driving transistor in each of the plurality of pixels. 9. The organic light emitting display device of claim 7, wherein the data driving unit changes and provides the data voltage according to a deviation of a characteristic value of the driving transistor in each of the plurality of pixels.
a sensing unit that senses a characteristic value of a driving transistor in each pixel of a display panel; and a compensation unit that controls to change a reference voltage commonly applied to a driving transistor in all pixels, based on the characteristic value sensed according to each pixel. 2. The organic light emitting display device of claim 1, wherein the compensation unit calculates dispersion information of the characteristic value sensed according to each pixel, and controls to change the reference voltage which is a common voltage of all pixels of the display panel, according to a result of a comparison between the calculated dispersion information and a predetermined reference dispersion information. 3. The organic light emitting display device of claim 2, wherein the compensation unit compares the calculated dispersion information with the pre-stored reference dispersion information, and when a difference between the calculated dispersion information and the reference dispersion information is out of a predetermined range, the compensation unit controls to change the reference voltage according to a reference voltage change value determined so that the difference between the calculated dispersion information and the reference dispersion information is within the predetermined range. 4. The organic light emitting display device of claim 1, wherein the compensation unit comprises a calculation unit that calculates the dispersion information with an average value and a deviation of the characteristic value sensed according to each pixel, a first compensation unit that compensates for the deviation of the characteristic value sensed according to each pixel, and a second compensation unit that compensates for the average value of the characteristic value sensed according to each pixel. 5. 
The organic light emitting display device of claim 4, wherein the first compensation unit controls to change a data voltage provided to each pixel in order to compensate for the deviation of the characteristic value sensed according to each pixel, and the second compensation unit controls to change the reference voltage provided to all pixels of the display panel in order to compensate for the average value of the characteristic value sensed according to each pixel. 6. The organic light emitting display device of claim 5, further comprising:
a data driving unit that provides a data voltage to each pixel of the display panel, wherein the data driving unit provides each pixel with a data voltage changed within a data voltage range including a first voltage range allocated for a grayscale expression and a second voltage range allocated for a deviation compensation. 7. An organic light emitting display device comprising:
a display panel comprising data lines and gate lines defining a plurality of pixels; a data driving unit that provides a data voltage to the data lines; and a power providing unit that changes and provides a reference voltage commonly provided to a driving transistor in the plurality of pixels. 8. The organic light emitting display device of claim 7, wherein the power providing unit changes and outputs the reference voltage according to an average value of a characteristic value of the driving transistor in each of the plurality of pixels. 9. The organic light emitting display device of claim 7, wherein the data driving unit changes and provides the data voltage according to a deviation of a characteristic value of the driving transistor in each of the plurality of pixels. | 2,600 |
11,021 | 11,021 | 16,801,256 | 2,689 | An indicator assembly includes, among other things, a vehicle model that is configured to communicate with a vehicle. The indicator assembly further includes an indicator portion of the vehicle model. The indicator portion is configured to indicate a status of the vehicle based on a communication sent to the vehicle model. | 1. An indicator assembly, comprising:
a vehicle model that is configured to communicate with a vehicle; and an indicator portion of the vehicle model, the indicator portion configured to provide an indication that the vehicle has completed an action in response to a first communication sent from a keyfob to the vehicle, the indicator portion providing the indication based on a second communication sent to the vehicle model from the vehicle, wherein the vehicle model is a three-dimensional model of the vehicle. 2. The indicator assembly of claim 1, wherein the vehicle is an electrified vehicle having a traction battery, wherein the indicator portion includes a traction battery indicator portion configured to provide an indication of a charge status of the electrified vehicle. 3. The indicator assembly of claim 1, further comprising a plurality of lighting devices, the indicator portion including the plurality of lighting devices. 4. The indicator assembly of claim 1, wherein the indicator portion includes a tire pressure indicator portion configured to provide an indication of a low tire pressure status of at least one tire of the vehicle, the tire pressure indicator portion configured to illuminate at least one tire of the vehicle model, the at least one tire of the vehicle model corresponding to the at least one tire of the vehicle having the low tire pressure status. 5. The indicator assembly of claim 1, wherein the first and second communications are wireless communications. 6. The indicator assembly of claim 1, further comprising a wireless receiver of the vehicle model, the wireless receiver configured to receive a wireless communication from a cloud server, the indicator portion configured to indicate the status based on the wireless communication. 7-8. (canceled) 9. A vehicle status indicating method, comprising:
receiving, at a vehicle model, a communication that includes a confirmation status of a vehicle; and indicating the confirmation status of the vehicle on the vehicle model by adjusting an indicator portion of the vehicle model based on the communication, wherein the indicator portion is adjusted a first way if the vehicle completes an action requested by a user, and the indicator portion is adjusted a different, second way if the vehicle does not complete the action requested by the user, wherein the vehicle model is a three-dimensional model of the vehicle. 10. The vehicle status indicating method of claim 9, wherein the vehicle is an electrified vehicle and further comprising indicating a charging status of a traction battery of the electrified vehicle by adjusting the indicator portion of the vehicle model based on another communication. 11. The vehicle status indicating method of claim 10, wherein the indicator portion includes a plurality of lighting devices, wherein the indicating comprises illuminating a percentage of the lighting devices within the plurality of lighting devices in proportion to a state of charge of the traction battery. 12. The vehicle status indicating method of claim 10, wherein the indicator portion includes a plurality of lighting devices, wherein the indicating comprises flashing at least some of the plurality of lighting devices when the electrified vehicle is charging. 13. The vehicle status indicating method of claim 9, wherein the status is a tire pressure status of at least one tire of the vehicle. 14-16. (canceled) 17. The vehicle status indicating method of claim 9, wherein the action requested by the user is requested by the user initiating a wireless command to the vehicle. 18. A vehicle status indicating method, comprising:
at a vehicle, receiving a first communication from a keyfob; in response to the first communication, initiating an action at the vehicle; and sending a second communication from the vehicle to a vehicle model of the vehicle, the second communication including a status of the vehicle, wherein the vehicle model is configured to adjust an indicator portion of the vehicle model based on the second communication to provide an indication of the status on the vehicle model, wherein the vehicle is an electrified vehicle, and the vehicle model is a replica of the electrified vehicle. 19-20. (canceled) 21. The indicator assembly of claim 1, wherein the status included in the second communication is a confirmation status that causes the indicator portion to adjust in a way that confirms the action initiated in response to the first communication. 22. The vehicle status indicating method of claim 9, wherein the wireless command is sent from a keyfob that is separate and distinct from the vehicle model. 23. The vehicle status indicating method of claim 18, wherein the status included in the second communication is a confirmation status that causes the indicator portion to adjust in a way that confirms the action initiated in response to the first communication. 24. The vehicle status indicating method of claim 18, wherein the keyfob is separate and distinct from the vehicle model.
a vehicle model that is configured to communicate with a vehicle; and an indicator portion of the vehicle model, the indicator portion configured to provide an indication that the vehicle has completed an action in response to a first communication sent from a keyfob to the vehicle, the indicator portion providing the indication based on a second communication sent to the vehicle model from the vehicle, wherein the vehicle model is a three-dimensional model of the vehicle. 2. The indicator assembly of claim 1, wherein the vehicle is an electrified vehicle having a traction battery, wherein the indicator portion includes a traction battery indicator portion configured to provide an indication of a charge status of the electrified vehicle. 3. The indicator assembly of claim 1, further comprising a plurality of lighting devices, the indicator portion including the plurality of lighting devices. 4. The indicator assembly of claim 1, wherein the indicator portion includes a tire pressure indicator portion configured to provide an indication of a low tire pressure status of at least one tire of the vehicle, the tire pressure indicator portion configured to illuminate at least one tire of the vehicle model, the at least one tire of the vehicle model corresponding to the at least one tire of the vehicle having the low tire pressure status. 5. The indicator assembly of claim 1, wherein the first and second communications are wireless communications. 6. The indicator assembly of claim 1, further comprising a wireless receiver of the vehicle model, the wireless receiver configured to receive a wireless communication from a cloud server, the indicator portion configured to indicate the status based on the wireless communication. 7-8. (canceled) 9. A vehicle status indicating method, comprising:
receiving, at a vehicle model, a communication that includes a confirmation status of a vehicle; and indicating the confirmation status of the vehicle on the vehicle model by adjusting an indicator portion of the vehicle model based on the communication, wherein the indicator portion is adjusted a first way if the vehicle completes an action requested by a user, and the indicator portion is adjusted a different, second way if the vehicle does not complete the action requested by the user, wherein the vehicle model is a three-dimensional model of the vehicle. 10. The vehicle status indicating method of claim 9, wherein the vehicle is an electrified vehicle and further comprising indicating a charging status of a traction battery of the electrified vehicle by adjusting the indicator portion of the vehicle model based on another communication. 11. The vehicle status indicating method of claim 10, wherein the indicator portion includes a plurality of lighting devices, wherein the indicating comprises illuminating a percentage of the lighting devices within the plurality of lighting devices in proportion to a state of charge of the traction battery. 12. The vehicle status indicating method of claim 10, wherein the indicator portion includes a plurality of lighting devices, wherein the indicating comprises flashing at least some of the plurality of lighting devices when the electrified vehicle is charging. 13. The vehicle status indicating method of claim 9, wherein the status is a tire pressure status of at least one tire of the vehicle. 14-16. (canceled) 17. The vehicle status indicating method of claim 9, wherein the action requested by the user is requested by the user initiating a wireless command to the vehicle. 18. A vehicle status indicating method, comprising:
at a vehicle, receiving a first communication from a keyfob; in response to the first communication, initiating an action at the vehicle; and sending a second communication from the vehicle to a vehicle model of the vehicle, the second communication including a status of the vehicle, wherein the vehicle model is configured to adjust an indicator portion of the vehicle model based on the second communication to provide an indication of the status on the vehicle model, wherein the vehicle is an electrified vehicle, and the vehicle model is a replica of the electrified vehicle. 19-20. (canceled) 21. The indicator assembly of claim 1, wherein the status included in the second communication is a confirmation status that causes the indicator portion to adjust in a way that confirms the action initiated in response to the first communication. 22. The vehicle status indicating method of claim 9, wherein the wireless command is sent from a keyfob that is separate and distinct from the vehicle model. 23. The vehicle status indicating method of claim 18, wherein the status included in the second communication is a confirmation status that causes the indicator portion to adjust in a way that confirms the action initiated in response to the first communication. 24. The vehicle status indicating method of claim 18, wherein the keyfob is separate and distinct from the vehicle model. | 2,600 |
11,022 | 11,022 | 16,256,406 | 2,636 | The disclosure provides a system for transmitting and receiving optical signals. The system includes a first mirror of a communication device, a first mirror actuator configured to control a pointing direction of the first mirror, a second mirror of the communication device, a second mirror actuator configured to control a pointing direction of the second mirror, and one or more processors. The one or more processors are configured to direct the second mirror actuator to move the second mirror to track a signal within a zone in an area of coverage of the communication device and meanwhile keep the first mirror stationary at a first angle. The one or more processors are also configured to direct the first mirror actuator to move the first mirror to a second angle in a direction of motion of the signal when the signal reaches an edge of the zone and meanwhile move the second mirror to a default angle. | 1. A system for transmitting and receiving optical signals, the system including:
a first mirror of a communication device; a first mirror actuator configured to control a pointing direction of the first mirror; a second mirror of the communication device; a second mirror actuator configured to control a pointing direction of the second mirror; and one or more processors operatively coupled to the first mirror actuator and the second mirror actuator, the one or more processors being configured to:
direct the second mirror actuator to move the second mirror to track a signal within a first zone in a plurality of zones in an area of coverage of the communication device,
wherein each zone is adjacent to at least one other zone of the plurality of zones, and
keep the first mirror stationary at a first angle during a first time period while using the second mirror to track the signal in the first zone;
when the signal reaches an edge of the first zone at a first time, direct the first mirror actuator to move the first mirror, during a second time period starting at the first time and ending at a second time, from the first angle to a second angle in a direction of motion of the signal as the signal moves into a second zone of the plurality of zones;
direct the second mirror actuator to move the second mirror, during the second time period, to a default angle while moving the first mirror from the first angle to the second angle; and
keep the first mirror stationary at the second angle during the second time period while using the second mirror to track the signal in the second zone. 2. The system of claim 1, wherein the first mirror is kept stationary by locking the first mirror in place and turning off power to the first mirror actuator. 3. The system of claim 1, wherein the area of each zone of the plurality of zones does not overlap with any other zone of the plurality of zones and is approximately a same percentage of a range of motion of the second mirror. 4. The system of claim 1, wherein the area of coverage of the communication device is determined by combining a range of motion of the first mirror and a range of motion of the second mirror. 5. (canceled) 6. The system of claim 1, wherein the one or more processors are further configured to:
when the signal reaches an edge of the second zone at a third time, direct the first mirror actuator to move the first mirror, during a third time period starting at the third time and ending at a fourth time, from the second angle to a third angle in a direction of motion of the signal as the signal moves into a third zone of the plurality of zones; and keep the first mirror stationary at the third angle during the third time period while using the second mirror to track the signal in the third zone. 7. The system of claim 1, wherein the second angle is a set interval from the first angle. 8. (canceled) 9. (canceled) 10. The system of claim 1, further comprising the communication device. 11. (canceled) 12. A method for transmitting and receiving optical signals, the method including:
controlling, by one or more processors, a first mirror actuator to control a pointing direction of a first mirror of a communication device; and controlling, by the one or more processors, a second mirror actuator to control a pointing direction of a second mirror of the communication device, wherein the second mirror actuator moves the second mirror to track a signal within a first zone in a plurality of zones in an area of coverage of the communication device, wherein each zone is adjacent to at least one other zone of the plurality of zones, wherein the first mirror is kept stationary at a first angle during a first time period while the second mirror is used to track the signal in the first zone, wherein, when the signal reaches an edge of the first zone at a first time, the first mirror actuator moves the first mirror, during a second time period starting at the first time and ending at a second time, from the first angle to a second angle in a direction of motion of the signal as the signal moves into a second zone of the plurality of zones, wherein the second mirror actuator moves the second mirror, during the second time period, to a default angle while the first mirror actuator moves the first mirror from the first angle to the second angle, and wherein the first mirror is kept stationary at the second angle during the second time period while the second mirror is used to track the signal in the second zone. 13. The method of claim 12, wherein the first mirror is kept stationary by locking the first mirror in place and turning off power to the first mirror actuator. 14. The method of claim 12, wherein the area of each zone of the plurality of zones does not overlap with any other zone of the plurality of zones and is approximately a same percentage of a range of motion of the second mirror. 15. 
The method of claim 12, further comprising determining, by the one or more processors, the area of coverage of the communication device by combining a range of motion of the first mirror and a range of motion of the second mirror. 16. (canceled) 17. The method of claim 12, wherein, when the signal reaches an edge of the second zone at a third time, the first mirror actuator moves the first mirror, during a third time period starting at the third time and ending at a fourth time, from the second angle to a third angle in a direction of motion of the signal as the signal moves into a third zone of the plurality of zones, and
wherein the first mirror is kept stationary at the third angle during the third time period while the second mirror is used to track the signal in the third zone. 18. The method of claim 12, wherein the second angle is a set interval from the first angle. 19. (canceled) 20. (canceled) | The disclosure provides a system for transmitting and receiving optical signals. The system includes a first mirror of a communication device, a first mirror actuator configured to control a pointing direction of the first mirror, a second mirror of the communication device, a second mirror actuator configured to control a pointing direction of the second mirror, and one or more processors. The one or more processors are configured to direct the second mirror actuator to move the second mirror to track a signal within a zone in an area of coverage of the communication device and meanwhile keep the first mirror stationary at a first angle. The one or more processors are also configured to direct the first mirror actuator to move the first mirror to a second angle in a direction of motion of the signal when the signal reaches an edge of the zone and meanwhile move the second mirror to a default angle. 1. A system for transmitting and receiving optical signals, the system including:
a first mirror of a communication device; a first mirror actuator configured to control a pointing direction of the first mirror; a second mirror of the communication device; a second mirror actuator configured to control a pointing direction of the second mirror; and one or more processors operatively coupled to the first mirror actuator and the second mirror actuator, the one or more processors being configured to:
direct the second mirror actuator to move the second mirror to track a signal within a first zone in a plurality of zones in an area of coverage of the communication device,
wherein each zone is adjacent to at least one other zone of the plurality of zones, and
keep the first mirror stationary at a first angle during a first time period while using the second mirror to track the signal in the first zone;
when the signal reaches an edge of the first zone at a first time, direct the first mirror actuator to move the first mirror, during a second time period starting at the first time and ending at a second time, from the first angle to a second angle in a direction of motion of the signal as the signal moves into a second zone of the plurality of zones;
direct the second mirror actuator to move the second mirror, during the second time period, to a default angle while moving the first mirror from the first angle to the second angle; and
keep the first mirror stationary at the second angle during the second time period while using the second mirror to track the signal in the second zone. 2. The system of claim 1, wherein the first mirror is kept stationary by locking the first mirror in place and turning off power to the first mirror actuator. 3. The system of claim 1, wherein the area of each zone of the plurality of zones does not overlap with any other zone of the plurality of zones and is approximately a same percentage of a range of motion of the second mirror. 4. The system of claim 1, wherein the area of coverage of the communication device is determined by combining a range of motion of the first mirror and a range of motion of the second mirror. 5. (canceled) 6. The system of claim 1, wherein the one or more processors are further configured to:
when the signal reaches an edge of the second zone at a third time, direct the first mirror actuator to move the first mirror, during a third time period starting at the third time and ending at a fourth time, from the second angle to a third angle in a direction of motion of the signal as the signal moves into a third zone of the plurality of zones; and keep the first mirror stationary at the third angle during the third time period while using the second mirror to track the signal in the third zone. 7. The system of claim 1, wherein the second angle is a set interval from the first angle. 8. (canceled) 9. (canceled) 10. The system of claim 1, further comprising the communication device. 11. (canceled) 12. A method for transmitting and receiving optical signals, the method including:
controlling, by one or more processors, a first mirror actuator to control a pointing direction of a first mirror of a communication device; and controlling, by the one or more processors, a second mirror actuator to control a pointing direction of a second mirror of the communication device, wherein the second mirror actuator moves the second mirror to track a signal within a first zone in a plurality of zones in an area of coverage of the communication device, wherein each zone is adjacent to at least one other zone of the plurality of zones, wherein the first mirror is kept stationary at a first angle during a first time period while the second mirror is used to track the signal in the first zone, wherein, when the signal reaches an edge of the first zone at a first time, the first mirror actuator moves the first mirror, during a second time period starting at the first time and ending at a second time, from the first angle to a second angle in a direction of motion of the signal as the signal moves into a second zone of the plurality of zones, wherein the second mirror actuator moves the second mirror, during the second time period, to a default angle while the first mirror actuator moves the first mirror from the first angle to the second angle, and wherein the first mirror is kept stationary at the second angle during the second time period while the second mirror is used to track the signal in the second zone. 13. The method of claim 12, wherein the first mirror is kept stationary by locking the first mirror in place and turning off power to the first mirror actuator. 14. The method of claim 12, wherein the area of each zone of the plurality of zones does not overlap with any other zone of the plurality of zones and is approximately a same percentage of a range of motion of the second mirror. 15. 
The method of claim 12, further comprising determining, by the one or more processors, the area of coverage of the communication device by combining a range of motion of the first mirror and a range of motion of the second mirror. 16. (canceled) 17. The method of claim 12, wherein, when the signal reaches an edge of the second zone at a third time, the first mirror actuator moves the first mirror, during a third time period starting at the third time and ending at a fourth time, from the second angle to a third angle in a direction of motion of the signal as the signal moves into a third zone of the plurality of zones, and
wherein the first mirror is kept stationary at the third angle during the third time period while the second mirror is used to track the signal in the third zone. 18. The method of claim 12, wherein the second angle is a set interval from the first angle. 19. (canceled) 20. (canceled) | 2,600 |
11,023 | 11,023 | 16,208,927 | 2,611 | A contextually-aware graphical avatar system includes a cue capturing assembly and a user device. The cue capturing assembly includes a visual cue capturing unit configured to generate agent video data of a human agent, and a visual cue encoder configured to process the agent video data to generate visual cue data corresponding to visual cues of the human agent. The user device is configured to receive the visual cue data. The user device includes (i) an avatar rendering unit configured to modify a graphical avatar, such that visual cues of the graphical avatar correspond to the visual cues of the human agent, and (ii) a display screen configured to display the modified graphical avatar. The system increases the efficiency with which the visual cues of the agent are conveyed as compared to receiving and to displaying the agent video data on the display of the user device. | 1. A contextually-aware graphical avatar system, comprising:
a cue capturing assembly including a visual cue capturing unit configured to generate agent video data of a human agent, and a visual cue encoder configured to process the agent video data to generate visual cue data corresponding to visual cues of the human agent; and a user device configured to receive the visual cue data, the user device including (i) an avatar rendering unit configured to modify a graphical avatar based on the visual cue data, such that visual cues of the graphical avatar correspond to the visual cues of the human agent, and (ii) a display screen configured to display the modified graphical avatar in order to convey the visual cues of the human agent, wherein the graphical avatar system increases the efficiency with which the visual cues of the human agent are conveyed as compared to receiving and to displaying the agent video data on the display screen of the user device. 2. The contextually-aware graphical avatar system of claim 1, wherein:
the cue capturing assembly further includes a camera configured to generate the agent video data and an audio capturing unit configured to record audio of the human agent as agent audio data, the user device is configured to receive the agent audio data, and the user device further includes a speaker configured to generate audio based on the agent audio data. 3. The contextually-aware graphical avatar system of claim 1, wherein:
the cue capturing assembly further includes a first network adapter configured to transmit the visual cue data by way of an electronic network, and the user device further includes a second network adapter configured to receive the transmitted visual cue data from the electronic network. 4. The contextually-aware graphical avatar system of claim 3, wherein:
transmitting the agent video data by way of the electronic network uses a first network bandwidth, transmitting the visual cue data by way of the electronic network uses a second network bandwidth, and the second network bandwidth is less than the first network bandwidth. 5. The contextually-aware graphical avatar system of claim 1, wherein the agent video data is not transmitted to the user device. 6. The contextually-aware graphical avatar system of claim 5, wherein the cue capturing assembly includes a memory configured to store the agent video data and the visual cue data. 7. The contextually-aware graphical avatar system of claim 1, wherein the visual cue data includes at least one of eyebrow data, eye data, and mouth data. 8. The contextually-aware graphical avatar system of claim 7, wherein:
the eye data includes eye position data, eye size data, pupil data, and eye color data, and the mouth data includes lip data, tongue data, and teeth data. 9. The contextually-aware graphical avatar system of claim 1, wherein:
the visual cue data during a first time period corresponds to a first facial expression made by the human agent, the visual cue data during a second time period corresponds to a second facial expression made by the human agent, the modified graphical avatar exhibits the first facial expression based on the visual cue data of the first time period, the modified graphical avatar exhibits the second facial expression based on the visual cue data of the second time period, and the first facial expression is different from the second facial expression. 10. The contextually-aware graphical avatar system of claim 9, wherein:
the human agent changes from the first facial expression to the second facial expression based on a context of a communication session between the human agent and a user of the user device, and the modified graphical avatar as displayed on the display screen conveys the context of the communication session to the user. 11. The contextually-aware graphical avatar system of claim 1, wherein the user device is configured as an autonomous vehicle. 12. A method of generating and displaying a contextually-aware graphical avatar on a user device, the method comprising:
capturing agent video data of a human agent with a cue capturing assembly; encoding the agent captured video data to generate visual cue data corresponding to visual cues of the human agent with a visual cue encoder of the cue capturing assembly; transmitting the visual cue data from the cue capturing assembly to the user device without transmitting the captured agent video data to the user device; modifying the graphical avatar based on the transmitted visual cue data with an avatar rendering unit of the user device, such that visual cues of the graphical avatar correspond to the visual cues of the human agent; displaying the modified graphical avatar on a display screen of the user device in order to convey the visual cues of the human agent to a user of the user device; and conveying the visual cues of the human agent to the user using the modified graphical avatar without transmitting the captured agent video data to the user device to increase the efficiency with which the visual cues of the human agent are conveyed as compared to receiving and to displaying the captured agent video data on the display screen of the user device. 13. The method according to claim 12, further comprising:
recording agent audio data of the human agent with an audio capturing unit of the cue capturing assembly; transmitting the recorded agent audio data from the cue capturing assembly to the user device; and emitting audio corresponding to the transmitted agent audio data with a speaker of the user device. 14. The method according to claim 13, further comprising:
recording user audio data of the user with a microphone of the user device; transmitting the recorded user audio data from the user device to the cue capturing assembly; and emitting audio corresponding to the transmitted user audio data with a speaker of the cue capturing assembly. 15. The method according to claim 12, further comprising:
generating visual cue data during a first time period corresponding to a first facial expression made by the human agent; generating visual cue data during a second time period corresponding to a second facial expression made by the human agent; modifying the graphical avatar to exhibit the first facial expression based on the visual cue data of the first time period; and modifying the graphical avatar to exhibit the second facial expression based on the visual cue data of the second time period, wherein the first facial expression is different from the second facial expression. 16. The method according to claim 12, further comprising:
generating visual cue data while the human agent makes a gesture; modifying the graphical avatar to exhibit the gesture based on the generated visual cue data; and displaying the modified graphical avatar exhibiting the gesture on the display screen of the user device. 17. The method according to claim 12, wherein:
transmitting the visual cue data by way of an electronic network with a first network adapter of the cue capturing assembly; and receiving the transmitted visual cue data with a second network adapter of the user device. | A contextually-aware graphical avatar system includes a cue capturing assembly and a user device. The cue capturing assembly includes a visual cue capturing unit configured to generate agent video data of a human agent, and a visual cue encoder configured to process the agent video data to generate visual cue data corresponding to visual cues of the human agent. The user device is configured to receive the visual cue data. The user device includes (i) an avatar rendering unit configured to modify a graphical avatar, such that visual cues of the graphical avatar correspond to the visual cues of the human agent, and (ii) a display screen configured to display the modified graphical avatar. The system increases the efficiency with which the visual cues of the agent are conveyed as compared to receiving and to displaying the agent video data on the display of the user device. 1. A contextually-aware graphical avatar system, comprising:
a cue capturing assembly including a visual cue capturing unit configured to generate agent video data of a human agent, and a visual cue encoder configured to process the agent video data to generate visual cue data corresponding to visual cues of the human agent; and a user device configured to receive the visual cue data, the user device including (i) an avatar rendering unit configured to modify a graphical avatar based on the visual cue data, such that visual cues of the graphical avatar correspond to the visual cues of the human agent, and (ii) a display screen configured to display the modified graphical avatar in order to convey the visual cues of the human agent, wherein the graphical avatar system increases the efficiency with which the visual cues of the human agent are conveyed as compared to receiving and to displaying the agent video data on the display screen of the user device. 2. The contextually-aware graphical avatar system of claim 1, wherein:
the cue capturing assembly further includes a camera configured to generate the agent video data and an audio capturing unit configured to record audio of the human agent as agent audio data, the user device is configured to receive the agent audio data, and the user device further includes a speaker configured to generate audio based on the agent audio data. 3. The contextually-aware graphical avatar system of claim 1, wherein:
the cue capturing assembly further includes a first network adapter configured to transmit the visual cue data by way of an electronic network, and the user device further includes a second network adapter configured to receive the transmitted visual cue data from the electronic network. 4. The contextually-aware graphical avatar system of claim 3, wherein:
transmitting the agent video data by way of the electronic network uses a first network bandwidth, transmitting the visual cue data by way of the electronic network uses a second network bandwidth, and the second network bandwidth is less than the first network bandwidth. 5. The contextually-aware graphical avatar system of claim 1, wherein the agent video data is not transmitted to the user device. 6. The contextually-aware graphical avatar system of claim 5, wherein the cue capturing assembly includes a memory configured to store the agent video data and the visual cue data. 7. The contextually-aware graphical avatar system of claim 1, wherein the visual cue data includes at least one of eyebrow data, eye data, and mouth data. 8. The contextually-aware graphical avatar system of claim 7, wherein:
the eye data includes eye position data, eye size data, pupil data, and eye color data, and the mouth data includes lip data, tongue data, and teeth data. 9. The contextually-aware graphical avatar system of claim 1, wherein:
the visual cue data during a first time period corresponds to a first facial expression made by the human agent, the visual cue data during a second time period corresponds to a second facial expression made by the human agent, the modified graphical avatar exhibits the first facial expression based on the visual cue data of the first time period, the modified graphical avatar exhibits the second facial expression based on the visual cue data of the second time period, and the first facial expression is different from the second facial expression. 10. The contextually-aware graphical avatar system of claim 9, wherein:
the human agent changes from the first facial expression to the second facial expression based on a context of a communication session between the human agent and a user of the user device, and the modified graphical avatar as displayed on the display screen conveys the context of the communication session to the user. 11. The contextually-aware graphical avatar system of claim 1, wherein the user device is configured as an autonomous vehicle. 12. A method of generating and displaying a contextually-aware graphical avatar on a user device, the method comprising:
capturing agent video data of a human agent with a cue capturing assembly; encoding the agent captured video data to generate visual cue data corresponding to visual cues of the human agent with a visual cue encoder of the cue capturing assembly; transmitting the visual cue data from the cue capturing assembly to the user device without transmitting the captured agent video data to the user device; modifying the graphical avatar based on the transmitted visual cue data with an avatar rendering unit of the user device, such that visual cues of the graphical avatar correspond to the visual cues of the human agent; displaying the modified graphical avatar on a display screen of the user device in order to convey the visual cues of the human agent to a user of the user device; and conveying the visual cues of the human agent to the user using the modified graphical avatar without transmitting the captured agent video data to the user device to increase the efficiency with which the visual cues of the human agent are conveyed as compared to receiving and to displaying the captured agent video data on the display screen of the user device. 13. The method according to claim 12, further comprising:
recording agent audio data of the human agent with an audio capturing unit of the cue capturing assembly; transmitting the recorded agent audio data from the cue capturing assembly to the user device; and emitting audio corresponding to the transmitted agent audio data with a speaker of the user device. 14. The method according to claim 13, further comprising:
recording user audio data of the user with a microphone of the user device; transmitting the recorded user audio data from the user device to the cue capturing assembly; and emitting audio corresponding to the transmitted user audio data with a speaker of the cue capturing assembly. 15. The method according to claim 12, further comprising:
generating visual cue data during a first time period corresponding to a first facial expression made by the human agent; generating visual cue data during a second time period corresponding to a second facial expression made by the human agent; modifying the graphical avatar to exhibit the first facial expression based on the visual cue data of the first time period; and modifying the graphical avatar to exhibit the second facial expression based on the visual cue data of the second time period, wherein the first facial expression is different from the second facial expression. 16. The method according to claim 12, further comprising:
generating visual cue data while the human agent makes a gesture; modifying the graphical avatar to exhibit the gesture based on the generated visual cue data; and displaying the modified graphical avatar exhibiting the gesture on the display screen of the user device. 17. The method according to claim 12, wherein:
transmitting the visual cue data by way of an electronic network with a first network adapter of the cue capturing assembly; and receiving the transmitted visual cue data with a second network adapter of the user device. | 2,600 |
11,024 | 11,024 | 16,107,079 | 2,637 | Systems and methods for distributed measurement in a network implemented by an orchestrator include directing one or more modules associated with one or more network elements to each perform a subset of the distributed measurement; receiving results from at least one network element of the one or more network elements based on the directing; and detecting an event or property based on the results. The subset of the distributed measurement can be based on performance monitoring data. The method can further include shuffling assignments for the distributed measurement between the one or more modules. | 1. A method for distributed measurement in a network implemented by an orchestrator, the method comprising:
directing one or more modules associated with one or more network elements to each perform a subset of the distributed measurement; receiving results from at least one network element of the one or more network elements based on the directing; and detecting an event or property based on the results. 2. The method of claim 1, wherein the subset of the distributed measurement is based on performance monitoring data. 3. The method of claim 1, further comprising:
shuffling assignments for the distributed measurement between the one or more modules. 4. The method of claim 1, wherein the results are based on local analysis performed by the one or more modules on the subset of the distributed measurement. 5. The method of claim 1, wherein the results are based on a trigger condition detected locally by the associated module. 6. The method of claim 1, wherein the directing comprises control of the one or more modules to modify data acquisition characteristics for the subset. 7. The method of claim 1, wherein the directing to a specific module is modified based on a signal from another module or network element or the orchestrator. 8. The method of claim 1, wherein the distributed measurement is for polarization transient, and wherein the one or more modules comprise a plurality of modules associated with channels that co-propagate over all or part of the same link and the plurality of modules are set to different sampling rates. 9. The method of claim 1, wherein the distributed measurement is to identify a process on a link and correlate the process with error events to provide an identification of a module responsible for the error events. 10. The method of claim 1, wherein the distributed measurement is to discover correlation between performance monitoring data, and possibly other data sources, utilizing machine learning. 11. The method of claim 1, wherein the distributed measurement is to detect errors in sections in a meshed optical network by sampling signal-to-noise ratio at the one or more network elements. 12. An orchestrator configured to perform distributed measurement in a network, the orchestrator comprising:
a processor; and memory storing instructions that, when executed, cause the processor to
direct one or more modules associated with one or more network elements to each perform a subset of the distributed measurement,
receive results from at least one network element of the one or more network elements based on the direction, and
detect an event or property based on the results. 13. The orchestrator of claim 12, wherein the subset of the distributed measurement is based on performance monitoring data. 14. The orchestrator of claim 12, wherein the memory storing instructions that, when executed, further cause the processor to
shuffle assignments for the distributed measurement between the one or more modules. 15. The orchestrator of claim 12, wherein the results are based on local analysis performed by the one or more modules on the subset of the distributed measurement. 16. The orchestrator of claim 12, wherein the results are based on a trigger condition detected locally by the associated module. 17. The orchestrator of claim 12, wherein the one or more modules are directed to modify data acquisition characteristics for the subset. 18. A non-transitory computer readable medium storing instructions executable by a processor, and, in response to such execution, causes the processor to perform operations comprising:
directing one or more modules associated with one or more network elements to each perform a subset of the distributed measurement; receiving results from at least one network element of the one or more network elements based on the directing; and detecting an event or property based on the results. 19. The non-transitory computer readable medium of claim 18, wherein the subset of the distributed measurement is based on performance monitoring data. 20. The non-transitory computer readable medium of claim 18, wherein the instructions further cause the processor to perform operations comprising:
shuffling assignments for the distributed measurement between the one or more modules. | Systems and methods for distributed measurement in a network implemented by an orchestrator include directing one or more modules associated with one or more network elements to each perform a subset of the distributed measurement; receiving results from at least one network element of the one or more network elements based on the directing; and detecting an event or property based on the results. The subset of the distributed measurement can be based on performance monitoring data. The method can further include shuffling assignments for the distributed measurement between the one or more modules. 1. A method for distributed measurement in a network implemented by an orchestrator, the method comprising:
directing one or more modules associated with one or more network elements to each perform a subset of the distributed measurement; receiving results from at least one network element of the one or more network elements based on the directing; and detecting an event or property based on the results. 2. The method of claim 1, wherein the subset of the distributed measurement is based on performance monitoring data. 3. The method of claim 1, further comprising:
shuffling assignments for the distributed measurement between the one or more modules. 4. The method of claim 1, wherein the results are based on local analysis performed by the one or more modules on the subset of the distributed measurement. 5. The method of claim 1, wherein the results are based on a trigger condition detected locally by the associated module. 6. The method of claim 1, wherein the directing comprises control of the one or more modules to modify data acquisition characteristics for the subset. 7. The method of claim 1, wherein the directing to a specific module is modified based on a signal from another module or network element or the orchestrator. 8. The method of claim 1, wherein the distributed measurement is for polarization transient, and wherein the one or more modules comprise a plurality of modules associated with channels that co-propagate over all or part of the same link and the plurality of modules are set to different sampling rates. 9. The method of claim 1, wherein the distributed measurement is to identify a process on a link and correlate the process with error events to provide an identification of a module responsible for the error events. 10. The method of claim 1, wherein the distributed measurement is to discover correlation between performance monitoring data, and possibly other data sources, utilizing machine learning. 11. The method of claim 1, wherein the distributed measurement is to detect errors in sections in a meshed optical network by sampling signal-to-noise ratio at the one or more network elements. 12. An orchestrator configured to perform distributed measurement in a network, the orchestrator comprising:
a processor; and memory storing instructions that, when executed, cause the processor to
direct one or more modules associated with one or more network elements to each perform a subset of the distributed measurement,
receive results from at least one network element of the one or more network elements based on the direction, and
detect an event or property based on the results. 13. The orchestrator of claim 12, wherein the subset of the distributed measurement is based on performance monitoring data. 14. The orchestrator of claim 12, wherein the memory storing instructions that, when executed, further cause the processor to
shuffle assignments for the distributed measurement between the one or more modules. 15. The orchestrator of claim 12, wherein the results are based on local analysis performed by the one or more modules on the subset of the distributed measurement. 16. The orchestrator of claim 12, wherein the results are based on a trigger condition detected locally by the associated module. 17. The orchestrator of claim 12, wherein the one or more modules are directed to modify data acquisition characteristics for the subset. 18. A non-transitory computer readable medium storing instructions executable by a processor, and, in response to such execution, causes the processor to perform operations comprising:
directing one or more modules associated with one or more network elements to each perform a subset of the distributed measurement; receiving results from at least one network element of the one or more network elements based on the directing; and detecting an event or property based on the results. 19. The non-transitory computer readable medium of claim 18, wherein the subset of the distributed measurement is based on performance monitoring data. 20. The non-transitory computer readable medium of claim 18, wherein the instructions further cause the processor to perform operations comprising:
shuffling assignments for the distributed measurement between the one or more modules. | 2,600 |
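The orchestration pattern recited in the claims above — direct each module to a subset of the distributed measurement, shuffle assignments between modules, receive results, and detect an event or property from them — can be sketched in Python. This is an illustrative toy only; every class and method name here is invented, and the event check is a stand-in for whatever local analysis or trigger conditions a real orchestrator would use:

```python
import random

class Module:
    """Hypothetical measurement module attached to a network element."""
    def __init__(self, name):
        self.name = name
        self.subset = None  # the subset of the distributed measurement assigned to us

    def measure(self):
        # Stand-in for local analysis on the assigned subset; a real module
        # would report performance-monitoring data (e.g. SNR samples).
        return {"module": self.name, "subset": self.subset, "value": random.random()}

class Orchestrator:
    """Directs modules to subsets of a distributed measurement, shuffles
    assignments, and flags an event when a result crosses a threshold."""
    def __init__(self, modules, threshold=0.95):
        self.modules = modules
        self.threshold = threshold

    def direct(self, subsets):
        # Each module is directed to perform one subset of the measurement.
        for module, subset in zip(self.modules, subsets):
            module.subset = subset

    def shuffle(self):
        # Shuffle assignments for the distributed measurement between modules.
        subsets = [m.subset for m in self.modules]
        random.shuffle(subsets)
        self.direct(subsets)

    def collect_and_detect(self):
        # Receive results, then detect an event or property based on them.
        results = [m.measure() for m in self.modules]
        events = [r for r in results if r["value"] > self.threshold]
        return results, events
```

A caller would build an `Orchestrator` over a few `Module` objects, `direct` them to named subsets (say `"snr"`, `"polarization"`, `"ber"`), periodically `shuffle`, and act on anything returned in `events`.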
11,025 | 11,025 | 14,975,931 | 2,683 | A radio frequency identification (RFID) tag reading system and method read RFID tags in a controlled area in real time with an enhanced performance. An RFID reader reads a mixed tag population of interesting RFID tags and of uninteresting RFID tags in the controlled area at a read rate. A controller dynamically and continuously monitors the read rate in real time, dynamically selects the interesting RFID tags, or deselects the uninteresting RFID tags, in real time when the read rate is below a reading threshold, and dynamically controls the RFID reader in real time to only read the interesting RFID tags when the read rate is below the reading threshold. | 1. A radio frequency (RF) identification (RFID) tag reading system for reading RFID tags in a controlled area in real time with an enhanced performance, the system comprising:
an RFID reader for reading a mixed tag population of interesting RFID tags, which are associated with items of interest, and of uninteresting RFID tags, which are associated with items of less interest, in the controlled area at a read rate; and a controller operatively connected to the RFID reader, and operative for dynamically monitoring the read rate in real time, for dynamically selecting the interesting RFID tags in real time when the read rate is below a reading threshold, and for dynamically controlling the RFID reader in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 2. The system of claim 1, wherein the RFID reader is mounted in an overhead location in the controlled area. 3. The system of claim 1, and additional RFID readers mounted in overhead locations in the controlled area, and wherein the controller is operatively connected to all the RFID readers to dynamically control all the RFID readers in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 4. The system of claim 1, wherein the controller is configured to select the interesting RFID tags by deselecting the uninteresting tags. 5. The system of claim 1, wherein the controller is configured to continuously monitor the read rate, and wherein the controller is configured to select the interesting RFID tags when the reading threshold is a predetermined number of RFID tags per unit of time. 6. The system of claim 1, wherein the controller is configured to select the interesting RFID tags based on preselected criteria. 7. The system of claim 6, wherein one of the preselected criteria is a likelihood of motion of an item. 8. The system of claim 1, wherein the controller is further operative for assigning tag priorities to the mixed tag population, and for designating which of the RFID tags are interesting and which of the RFID tags are uninteresting. 9. 
A radio frequency (RF) identification (RFID) tag reading system for reading RFID tags in a controlled area in real time with an enhanced performance, the system comprising:
a plurality of RFID readers mounted in overhead locations in the controlled area, for reading a mixed tag population of interesting RFID tags, which are associated with items of interest, and of uninteresting RFID tags, which are associated with items of less interest, in the controlled area at a read rate; a server operatively connected to the RFID readers; and a controller located in at least one of the RFID readers and the server, and operative for dynamically and continuously monitoring the read rate in real time, for dynamically selecting the interesting RFID tags in real time when the read rate is below a reading threshold, and for dynamically controlling the RFID readers in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 10. The system of claim 9, wherein the controller is configured to select the interesting RFID tags, or deselect the uninteresting RFID tags, when the reading threshold is a predetermined number of RFID tags per unit of time. 11. The system of claim 9, wherein the controller is configured to select the interesting RFID tags based on preselected criteria. 12. The system of claim 9, wherein the controller is further operative for assigning tag priorities to the mixed tag population, and for designating which of the RFID tags are interesting and which of the RFID tags are uninteresting. 13. A method of reading radio frequency (RF) identification (RFID) tags in a controlled area in real time with an enhanced performance, the method comprising:
reading a mixed tag population of interesting RFID tags, which are associated with items of interest, and of uninteresting RFID tags, which are associated with items of less interest, in the controlled area at a read rate; dynamically and continuously monitoring the read rate in real time; dynamically selecting the interesting RFID tags in real time when the read rate is below a reading threshold; and dynamically controlling the reading in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 14. The method of claim 13, wherein the reading is performed by an RFID reader, and mounting the RFID reader in an overhead location in the controlled area. 15. The method of claim 13, wherein the reading is performed by a plurality of RFID readers, and mounting all the RFID readers in overhead locations in the controlled area, and controlling all the RFID readers in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 16. The method of claim 13, wherein the selecting of the interesting RFID tags is performed by deselecting the uninteresting tags. 17. The method of claim 13, wherein the selecting is performed by selecting the interesting RFID tags when the reading threshold is a predetermined number of RFID tags per unit of time. 18. The method of claim 13, wherein the selecting is performed by selecting the interesting RFID tags based on preselected criteria. 19. The method of claim 18, wherein one of the preselected criteria is a likelihood of motion of an item. 20. The method of claim 13, and assigning tag priorities to the mixed tag population, and designating which of the RFID tags are interesting and which of the RFID tags are uninteresting. | A radio frequency identification (RFID) tag reading system and method read RFID tags in a controlled area in real time with an enhanced performance. 
An RFID reader reads a mixed tag population of interesting RFID tags and of uninteresting RFID tags in the controlled area at a read rate. A controller dynamically and continuously monitors the read rate in real time, dynamically selects the interesting RFID tags, or deselects the uninteresting RFID tags, in real time when the read rate is below a reading threshold, and dynamically controls the RFID reader in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 1. A radio frequency (RF) identification (RFID) tag reading system for reading RFID tags in a controlled area in real time with an enhanced performance, the system comprising:
an RFID reader for reading a mixed tag population of interesting RFID tags, which are associated with items of interest, and of uninteresting RFID tags, which are associated with items of less interest, in the controlled area at a read rate; and a controller operatively connected to the RFID reader, and operative for dynamically monitoring the read rate in real time, for dynamically selecting the interesting RFID tags in real time when the read rate is below a reading threshold, and for dynamically controlling the RFID reader in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 2. The system of claim 1, wherein the RFID reader is mounted in an overhead location in the controlled area. 3. The system of claim 1, and additional RFID readers mounted in overhead locations in the controlled area, and wherein the controller is operatively connected to all the RFID readers to dynamically control all the RFID readers in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 4. The system of claim 1, wherein the controller is configured to select the interesting RFID tags by deselecting the uninteresting tags. 5. The system of claim 1, wherein the controller is configured to continuously monitor the read rate, and wherein the controller is configured to select the interesting RFID tags when the reading threshold is a predetermined number of RFID tags per unit of time. 6. The system of claim 1, wherein the controller is configured to select the interesting RFID tags based on preselected criteria. 7. The system of claim 6, wherein one of the preselected criteria is a likelihood of motion of an item. 8. The system of claim 1, wherein the controller is further operative for assigning tag priorities to the mixed tag population, and for designating which of the RFID tags are interesting and which of the RFID tags are uninteresting. 9. 
A radio frequency (RF) identification (RFID) tag reading system for reading RFID tags in a controlled area in real time with an enhanced performance, the system comprising:
a plurality of RFID readers mounted in overhead locations in the controlled area, for reading a mixed tag population of interesting RFID tags, which are associated with items of interest, and of uninteresting RFID tags, which are associated with items of less interest, in the controlled area at a read rate; a server operatively connected to the RFID readers; and a controller located in at least one of the RFID readers and the server, and operative for dynamically and continuously monitoring the read rate in real time, for dynamically selecting the interesting RFID tags in real time when the read rate is below a reading threshold, and for dynamically controlling the RFID readers in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 10. The system of claim 9, wherein the controller is configured to select the interesting RFID tags, or deselect the uninteresting RFID tags, when the reading threshold is a predetermined number of RFID tags per unit of time. 11. The system of claim 9, wherein the controller is configured to select the interesting RFID tags based on preselected criteria. 12. The system of claim 9, wherein the controller is further operative for assigning tag priorities to the mixed tag population, and for designating which of the RFID tags are interesting and which of the RFID tags are uninteresting. 13. A method of reading radio frequency (RF) identification (RFID) tags in a controlled area in real time with an enhanced performance, the method comprising:
reading a mixed tag population of interesting RFID tags, which are associated with items of interest, and of uninteresting RFID tags, which are associated with items of less interest, in the controlled area at a read rate; dynamically and continuously monitoring the read rate in real time; dynamically selecting the interesting RFID tags in real time when the read rate is below a reading threshold; and dynamically controlling the reading in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 14. The method of claim 13, wherein the reading is performed by an RFID reader, and mounting the RFID reader in an overhead location in the controlled area. 15. The method of claim 13, wherein the reading is performed by a plurality of RFID readers, and mounting all the RFID readers in overhead locations in the controlled area, and controlling all the RFID readers in real time to only read the interesting RFID tags when the read rate is below the reading threshold. 16. The method of claim 13, wherein the selecting of the interesting RFID tags is performed by deselecting the uninteresting tags. 17. The method of claim 13, wherein the selecting is performed by selecting the interesting RFID tags when the reading threshold is a predetermined number of RFID tags per unit of time. 18. The method of claim 13, wherein the selecting is performed by selecting the interesting RFID tags based on preselected criteria. 19. The method of claim 18, wherein one of the preselected criteria is a likelihood of motion of an item. 20. The method of claim 13, and assigning tag priorities to the mixed tag population, and designating which of the RFID tags are interesting and which of the RFID tags are uninteresting. | 2,600 |
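The selection behaviour these claims describe — continuously monitor the read rate and, when it falls below a reading threshold, restrict reading to the interesting tags (equivalently, deselect the uninteresting ones) — reduces to a small conditional. The sketch below is illustrative only; the class and parameter names are invented, not taken from the patent:

```python
class RFIDController:
    """Sketch of the dynamic tag-selection logic from the claims above."""
    def __init__(self, reading_threshold):
        # reading_threshold: a predetermined number of tags per unit of time.
        self.reading_threshold = reading_threshold

    def select_tags(self, read_rate, mixed_population, interesting_ids):
        # Below the threshold, only interesting tags are read; deselecting
        # the uninteresting tags is the equivalent formulation in claim 4.
        if read_rate < self.reading_threshold:
            return [tag for tag in mixed_population if tag in interesting_ids]
        return list(mixed_population)
```

In a deployment with several overhead readers, the same decision would be pushed to every reader, and `interesting_ids` would come from preselected criteria such as a likelihood of motion of an item.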
11,026 | 11,026 | 13,366,992 | 2,649 | Multiple low noise amplifiers (LNAs) with combined outputs are disclosed. In an exemplary design, an apparatus includes a front-end module and an integrated circuit (IC). The front-end module includes a plurality of LNAs having outputs that are combined. The IC includes receive circuits coupled to the plurality of LNAs via a single interconnection. In an exemplary design, each of the plurality of LNAs may be enabled or disabled via a respective control signal for that LNA. The front-end module may also include receive filters coupled to the plurality of LNAs and a switchplexer coupled to the receive filters. The front-end module may further include at least one power amplifier, and the IC may further include transmit circuits coupled to the at least one power amplifier. | 1. An apparatus comprising:
a front-end module comprising a plurality of low noise amplifiers (LNAs) having outputs that are combined; and an integrated circuit (IC) comprising receive circuits coupled to the plurality of LNAs via a single interconnection. 2. The apparatus of claim 1, the front-end module further comprising at least one power amplifier, and the IC further comprising transmit circuits coupled to the at least one power amplifier. 3. The apparatus of claim 2, the front-end module further comprising a plurality of transmit filters or a plurality of duplexers for a plurality of frequency bands, and the at least one power amplifier comprising a power amplifier supporting the plurality of frequency bands and coupled to the plurality of transmit filters or the plurality of duplexers. 4. The apparatus of claim 1, wherein a subset of the plurality of LNAs is enabled at any given moment and remaining ones of the plurality of LNAs are disabled. 5. The apparatus of claim 1, the front-end module comprising:
at least one receive filter coupled to at least one of the plurality of LNAs. 6. The apparatus of claim 1, each of the plurality of LNAs comprising:
a first transistor having a gate receiving an input radio frequency (RF) signal; and a second transistor having a drain coupled to a summing node and a source coupled to a drain of the first transistor. 7. The apparatus of claim 1, each of the plurality of LNAs comprising:
a first transistor having a gate receiving an input radio frequency (RF) signal and a drain coupled to a summing node. 8. The apparatus of claim 7, the plurality of LNAs comprising:
a second transistor having a source coupled to the summing node and a drain providing an amplified RF signal. 9. The apparatus of claim 7, the receive circuits comprising:
a common gate stage configured to provide bias current for the plurality of LNAs; and an amplifier coupled to the common gate stage. 10. The apparatus of claim 1, each of the plurality of LNAs comprising:
a single-ended LNA receiving a single-ended input radio frequency (RF) signal and providing a single-ended amplified RF signal. 11. The apparatus of claim 1, each of the plurality of LNAs comprising:
a differential LNA receiving a differential input radio frequency (RF) signal and providing a differential amplified RF signal. 12. The apparatus of claim 1, the plurality of LNAs comprising:
a load inductor shared by the plurality of LNAs. 13. The apparatus of claim 12, the plurality of LNAs further comprising:
an adjustable capacitor coupled in parallel with the load inductor. 14. The apparatus of claim 1, wherein the plurality of LNAs are associated with at least one of different transistor sizes, different transistor biasing, or different LNA circuit designs. 15. The apparatus of claim 1, the plurality of LNAs are for low band, the front-end module further comprising a second plurality of LNAs for high band and having outputs that are combined, and the IC further comprising second receive circuits coupled to the second plurality of LNAs via a second interconnection. 16. A method comprising:
amplifying an input radio frequency (RF) signal with a selected low noise amplifier (LNA) among a plurality of LNAs having outputs that are combined and residing on a front-end module; and receiving, at receive circuits residing on an integrated circuit (IC), an amplified RF signal from the selected LNA via a single interconnection coupling the plurality of LNAs to the receive circuits. 17. The method of claim 16, further comprising:
filtering a received RF signal with one of at least one filter to obtain the input RF signal, the at least one filter being coupled to at least one of the plurality of LNAs. 18. The method of claim 16, further comprising:
conditioning, with transmit circuits residing on the IC, an analog output signal to obtain an output RF signal; and amplifying the output RF signal with a selected power amplifier among at least one power amplifier residing on the front-end module. 19. An apparatus comprising:
a plurality of means for amplifying having outputs that are combined and residing on a first module, one of the plurality of means for amplifying being selected to amplify an input radio frequency (RF) signal and provide an amplified RF signal; and means for processing the amplified RF signal, the means for processing residing on a second module and being coupled to the plurality of means for amplifying via a single interconnection. 20. The apparatus of claim 19, further comprising:
means for filtering a received RF signal to obtain the input RF signal, the means for filtering being coupled to the selected one of the plurality of means for amplifying. | Multiple low noise amplifiers (LNAs) with combined outputs are disclosed. In an exemplary design, an apparatus includes a front-end module and an integrated circuit (IC). The front-end module includes a plurality of LNAs having outputs that are combined. The IC includes receive circuits coupled to the plurality of LNAs via a single interconnection. In an exemplary design, each of the plurality of LNAs may be enabled or disabled via a respective control signal for that LNA. The front-end module may also include receive filters coupled to the plurality of LNAs and a switchplexer coupled to the receive filters. The front-end module may further include at least one power amplifier, and the IC may further include transmit circuits coupled to the at least one power amplifier. 1. An apparatus comprising:
a front-end module comprising a plurality of low noise amplifiers (LNAs) having outputs that are combined; and an integrated circuit (IC) comprising receive circuits coupled to the plurality of LNAs via a single interconnection. 2. The apparatus of claim 1, the front-end module further comprising at least one power amplifier, and the IC further comprising transmit circuits coupled to the at least one power amplifier. 3. The apparatus of claim 2, the front-end module further comprising a plurality of transmit filters or a plurality of duplexers for a plurality of frequency bands, and the at least one power amplifier comprising a power amplifier supporting the plurality of frequency bands and coupled to the plurality of transmit filters or the plurality of duplexers. 4. The apparatus of claim 1, wherein a subset of the plurality of LNAs is enabled at any given moment and remaining ones of the plurality of LNAs are disabled. 5. The apparatus of claim 1, the front-end module comprising:
at least one receive filter coupled to at least one of the plurality of LNAs. 6. The apparatus of claim 1, each of the plurality of LNAs comprising:
a first transistor having a gate receiving an input radio frequency (RF) signal; and a second transistor having a drain coupled to a summing node and a source coupled to a drain of the first transistor. 7. The apparatus of claim 1, each of the plurality of LNAs comprising:
a first transistor having a gate receiving an input radio frequency (RF) signal and a drain coupled to a summing node. 8. The apparatus of claim 7, the plurality of LNAs comprising:
a second transistor having a source coupled to the summing node and a drain providing an amplified RF signal. 9. The apparatus of claim 7, the receive circuits comprising:
a common gate stage configured to provide bias current for the plurality of LNAs; and an amplifier coupled to the common gate stage. 10. The apparatus of claim 1, each of the plurality of LNAs comprising:
a single-ended LNA receiving a single-ended input radio frequency (RF) signal and providing a single-ended amplified RF signal. 11. The apparatus of claim 1, each of the plurality of LNAs comprising:
a differential LNA receiving a differential input radio frequency (RF) signal and providing a differential amplified RF signal. 12. The apparatus of claim 1, the plurality of LNAs comprising:
a load inductor shared by the plurality of LNAs. 13. The apparatus of claim 12, the plurality of LNAs further comprising:
an adjustable capacitor coupled in parallel with the load inductor. 14. The apparatus of claim 1, wherein the plurality of LNAs are associated with at least one of different transistor sizes, different transistor biasing, or different LNA circuit designs. 15. The apparatus of claim 1, the plurality of LNAs are for low band, the front-end module further comprising a second plurality of LNAs for high band and having outputs that are combined, and the IC further comprising second receive circuits coupled to the second plurality of LNAs via a second interconnection. 16. A method comprising:
amplifying an input radio frequency (RF) signal with a selected low noise amplifier (LNA) among a plurality of LNAs having outputs that are combined and residing on a front-end module; and receiving, at receive circuits residing on an integrated circuit (IC), an amplified RF signal from the selected LNA via a single interconnection coupling the plurality of LNAs to the receive circuits. 17. The method of claim 16, further comprising:
filtering a received RF signal with one of at least one filter to obtain the input RF signal, the at least one filter being coupled to at least one of the plurality of LNAs. 18. The method of claim 16, further comprising:
conditioning, with transmit circuits residing on the IC, an analog output signal to obtain an output RF signal; and amplifying the output RF signal with a selected power amplifier among at least one power amplifier residing on the front-end module. 19. An apparatus comprising:
a plurality of means for amplifying having outputs that are combined and residing on a first module, one of the plurality of means for amplifying being selected to amplify an input radio frequency (RF) signal and provide an amplified RF signal; and means for processing the amplified RF signal, the means for processing residing on a second module and being coupled to the plurality of means for amplifying via a single interconnection. 20. The apparatus of claim 19, further comprising:
means for filtering a received RF signal to obtain the input RF signal, the means for filtering being coupled to the selected one of the plurality of means for amplifying. | 2,600 |
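At a block-diagram level, combining LNA outputs at a summing node means only the enabled amplifiers contribute to the signal on the single interconnection; with a subset of LNAs enabled at any given moment, the disabled ones add nothing. A toy numeric model of that behaviour follows — it is not circuit-accurate (real LNAs have bias, loading, and matching effects), just an illustration of the enable/combine relationship:

```python
def combined_output(input_signals, gains, enabled):
    """Sum of amplified signals at the shared summing node; a disabled LNA
    contributes nothing, matching the enabled-subset behaviour of claim 4."""
    return sum(gain * signal
               for signal, gain, on in zip(input_signals, gains, enabled) if on)
```

For example, with two LNAs of gains 10 and 20 and only the first enabled, the node carries only the first amplified signal.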
11,027 | 11,027 | 16,375,630 | 2,619 | Ray tracing systems process rays through a 3D scene to determine intersections between rays and geometry in the scene, for rendering an image of the scene. Ray direction data for a ray can be compressed, e.g. into an octahedral vector format. The compressed ray direction data for a ray may be represented by two parameters (u,v) which indicate a point on the surface of an octahedron. In order to perform intersection testing on the ray, the ray direction data for the ray is unpacked to determine x, y and z components of a vector to a point on the surface of the octahedron. The unpacked ray direction vector is an unnormalised ray direction vector. Rather than normalising the ray direction vector, the intersection testing is performed on the unnormalised ray direction vector. This avoids the processing steps involved in normalising the ray direction vector. | 1. A ray tracing system for use in rendering an image of a 3D scene, the ray tracing system comprising:
a memory configured to store ray data for a ray to be processed in the ray tracing system, wherein the ray data for the ray comprises ray direction data stored in a compressed format; and intersection testing logic configured to:
partially decompress the compressed ray direction data for the ray to determine partially decompressed ray direction data which is not fully decompressed, and
perform intersection testing on the ray in the 3D scene using the partially decompressed ray direction data for the ray;
wherein the ray tracing system is configured to use results of the intersection testing for rendering the image of the 3D scene. 2. The ray tracing system of claim 1, wherein the intersection testing logic is configured to partially decompress the compressed ray direction data for the ray by constructing an unnormalised ray direction vector for the ray by unpacking the compressed ray direction data for the ray, wherein the intersection testing logic is configured to perform the intersection testing on the ray using the unnormalised ray direction vector for the ray rather than a normalised ray direction vector for the ray. 3. The ray tracing system of claim 2, wherein the intersection testing unit is configured to make use of a clipping distance for the ray, wherein the clipping distance has been scaled by an amount based on the magnitude of the unnormalised ray direction vector. 4. The ray tracing system of claim 3, wherein the clipping distance for the ray has been scaled by transforming the clipping distance into Manhattan space. 5. The ray tracing system of claim 1, wherein the compressed format is an octahedral vector format. 6. The ray tracing system of claim 1, wherein the ray data for the ray further comprises ray origin data. 7. The ray tracing system of claim 1, wherein the intersection testing logic comprises one or more of:
a primitive intersection tester configured to perform intersection testing on the ray by identifying an intersection of the ray with a primitive in the scene; a box intersection tester configured to perform intersection testing on the ray by identifying an intersection of the ray with a bounding box of one or more primitives in the scene; and a sphere intersection tester configured to perform intersection testing on the ray by identifying an intersection of the ray with a sphere representing the position of a portion of geometry in the scene. 8. A ray tracing method for use in rendering an image of a scene, the ray tracing method comprising:
retrieving, from a memory, ray data for a ray to be processed, wherein the ray data for the ray comprises ray direction data stored in a compressed format; partially decompressing the compressed ray direction data for the ray to determine partially decompressed ray direction data which is not fully decompressed; performing intersection testing on the ray in the scene using the partially decompressed ray direction data for the ray; and using results of the intersection testing for rendering the image of the scene. 9. The method of claim 8, wherein said partially decompressing the compressed ray direction data for the ray comprises constructing an unnormalised ray direction vector for the ray by unpacking the compressed ray direction data for the ray. 10. The method of claim 9, wherein the intersection testing is performed on the ray using the unnormalised ray direction vector for the ray rather than using a normalised ray direction vector for the ray for performing the intersection testing. 11. The method of claim 9, further comprising scaling a clipping distance of the ray for use in the intersection testing by an amount based on the magnitude of the unnormalised ray direction vector. 12. The method of claim 11, wherein said scaling the clipping distance of the ray comprises transforming the clipping distance for the ray into Manhattan space. 13. The method of claim 8, wherein core ray data for the ray is stored in the memory, whereas at least some non-core ray data for the ray is stored in a separate memory, wherein the compressed ray direction data is included in the core ray data for the ray. 14. A ray tracing system for use in rendering an image of a scene, the ray tracing system comprising:
one or more execution units configured to execute one or more shader instructions which output a ray for intersection testing; a ray compression module configured to compress ray direction data for the ray; a ray data store configured to store the compressed ray direction data; and intersection testing logic configured to perform intersection testing on the ray without fully decompressing the compressed ray direction data, wherein the ray tracing system is configured to use results of the intersection testing for rendering the image of the scene. 15. The ray tracing system of claim 14, wherein the ray compression module is implemented as a software module executed on at least one of the one or more execution units. 16. The ray tracing system of claim 14, wherein the ray compression module is implemented in fixed-function circuitry as a dedicated hardware module. 17. The ray tracing system of claim 14, wherein the ray data store is configured to store the compressed ray direction data for the ray with other ray data for the ray, said other data including a ray origin and a clipping distance for the ray, and wherein the intersection testing logic is configured to receive ray data including the compressed ray direction data from the ray data store. 18. The ray tracing system of claim 14, wherein the intersection testing logic is configured to perform the intersection testing on the ray without fully decompressing the ray direction data by using the compressed ray direction data in the intersection testing of the ray. 19. The ray tracing system of claim 14, wherein the intersection testing logic is configured to perform the intersection testing on the ray without fully decompressing the ray direction data by partially decompressing the compressed ray direction data and then using the partially decompressed ray direction data in the intersection testing of the ray. 20. 
The ray tracing system of claim 14, wherein the ray tracing system is configured to execute a shader program which uses results of the intersection testing for rendering the image of the scene, wherein the ray direction data for the ray is fully decompressed for use by the shader program. | Ray tracing systems process rays through a 3D scene to determine intersections between rays and geometry in the scene, for rendering an image of the scene. Ray direction data for a ray can be compressed, e.g. into an octahedral vector format. The compressed ray direction data for a ray may be represented by two parameters (u,v) which indicate a point on the surface of an octahedron. In order to perform intersection testing on the ray, the ray direction data for the ray is unpacked to determine x, y and z components of a vector to a point on the surface of the octahedron. The unpacked ray direction vector is an unnormalised ray direction vector. Rather than normalising the ray direction vector, the intersection testing is performed on the unnormalised ray direction vector. This avoids the processing steps involved in normalising the ray direction vector.1. A ray tracing system for use in rendering an image of a 3D scene, the ray tracing system comprising:
a memory configured to store ray data for a ray to be processed in the ray tracing system, wherein the ray data for the ray comprises ray direction data stored in a compressed format; and intersection testing logic configured to:
partially decompress the compressed ray direction data for the ray to determine partially decompressed ray direction data which is not fully decompressed, and
perform intersection testing on the ray in the 3D scene using the partially decompressed ray direction data for the ray;
wherein the ray tracing system is configured to use results of the intersection testing for rendering the image of the 3D scene. 2. The ray tracing system of claim 1, wherein the intersection testing logic is configured to partially decompress the compressed ray direction data for the ray by constructing an unnormalised ray direction vector for the ray by unpacking the compressed ray direction data for the ray, wherein the intersection testing logic is configured to perform the intersection testing on the ray using the unnormalised ray direction vector for the ray rather than a normalised ray direction vector for the ray. 3. The ray tracing system of claim 2, wherein the intersection testing unit is configured to make use of a clipping distance for the ray, wherein the clipping distance has been scaled by an amount based on the magnitude of the unnormalised ray direction vector. 4. The ray tracing system of claim 3, wherein the clipping distance for the ray has been scaled by transforming the clipping distance into Manhattan space. 5. The ray tracing system of claim 1, wherein the compressed format is an octahedral vector format. 6. The ray tracing system of claim 1, wherein the ray data for the ray further comprises ray origin data. 7. The ray tracing system of claim 1, wherein the intersection testing logic comprises one or more of:
a primitive intersection tester configured to perform intersection testing on the ray by identifying an intersection of the ray with a primitive in the scene; a box intersection tester configured to perform intersection testing on the ray by identifying an intersection of the ray with a bounding box of one or more primitives in the scene; and a sphere intersection tester configured to perform intersection testing on the ray by identifying an intersection of the ray with a sphere representing the position of a portion of geometry in the scene. 8. A ray tracing method for use in rendering an image of a scene, the ray tracing method comprising:
retrieving, from a memory, ray data for a ray to be processed, wherein the ray data for the ray comprises ray direction data stored in a compressed format; partially decompressing the compressed ray direction data for the ray to determine partially decompressed ray direction data which is not fully decompressed; performing intersection testing on the ray in the scene using the partially decompressed ray direction data for the ray; and using results of the intersection testing for rendering the image of the scene. 9. The method of claim 8, wherein said partially decompressing the compressed ray direction data for the ray comprises constructing an unnormalised ray direction vector for the ray by unpacking the compressed ray direction data for the ray. 10. The method of claim 9, wherein the intersection testing is performed on the ray using the unnormalised ray direction vector for the ray rather than using a normalised ray direction vector for the ray for performing the intersection testing. 11. The method of claim 9, further comprising scaling a clipping distance of the ray for use in the intersection testing by an amount based on the magnitude of the unnormalised ray direction vector. 12. The method of claim 11, wherein said scaling the clipping distance of the ray comprises transforming the clipping distance for the ray into Manhattan space. 13. The method of claim 8, wherein core ray data for the ray is stored in the memory, whereas at least some non-core ray data for the ray is stored in a separate memory, wherein the compressed ray direction data is included in the core ray data for the ray. 14. A ray tracing system for use in rendering an image of a scene, the ray tracing system comprising:
one or more execution units configured to execute one or more shader instructions which output a ray for intersection testing; a ray compression module configured to compress ray direction data for the ray; a ray data store configured to store the compressed ray direction data; and intersection testing logic configured to perform intersection testing on the ray without fully decompressing the compressed ray direction data, wherein the ray tracing system is configured to use results of the intersection testing for rendering the image of the scene. 15. The ray tracing system of claim 14, wherein the ray compression module is implemented as a software module executed on at least one of the one or more execution units. 16. The ray tracing system of claim 14, wherein the ray compression module is implemented in fixed-function circuitry as a dedicated hardware module. 17. The ray tracing system of claim 14, wherein the ray data store is configured to store the compressed ray direction data for the ray with other ray data for the ray, said other data including a ray origin and a clipping distance for the ray, and wherein the intersection testing logic is configured to receive ray data including the compressed ray direction data from the ray data store. 18. The ray tracing system of claim 14, wherein the intersection testing logic is configured to perform the intersection testing on the ray without fully decompressing the ray direction data by using the compressed ray direction data in the intersection testing of the ray. 19. The ray tracing system of claim 14, wherein the intersection testing logic is configured to perform the intersection testing on the ray without fully decompressing the ray direction data by partially decompressing the compressed ray direction data and then using the partially decompressed ray direction data in the intersection testing of the ray. 20. 
The ray tracing system of claim 14, wherein the ray tracing system is configured to execute a shader program which uses results of the intersection testing for rendering the image of the scene, wherein the ray direction data for the ray is fully decompressed for use by the shader program. | 2,600 |
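The octahedral compression described in the record above (claim 5's "octahedral vector format", and the unpacking recited in claims 2 and 9) can be sketched in a few lines. This is an illustrative reconstruction of the well-known octahedral mapping, not the patented implementation, and the function names are hypothetical. The decoded vector lies on the octahedron surface |x|+|y|+|z| = 1 rather than on the unit sphere, i.e. it is an unnormalised direction with Manhattan norm 1, which is consistent with claim 4 scaling the clipping distance into "Manhattan space" instead of normalising the vector.

```python
import math

def encode_octahedral(x, y, z):
    # Project a direction onto the octahedron |x|+|y|+|z| = 1, then fold
    # the lower hemisphere into the square, yielding the two parameters
    # (u, v) mentioned in the abstract. (Hypothetical helper name.)
    s = abs(x) + abs(y) + abs(z)
    x, y, z = x / s, y / s, z / s
    if z < 0.0:
        x, y = ((1.0 - abs(y)) * math.copysign(1.0, x),
                (1.0 - abs(x)) * math.copysign(1.0, y))
    return x, y

def decode_octahedral(u, v):
    # Unpack (u, v) to the x, y and z components of a point on the
    # octahedron surface -- an *unnormalised* ray direction vector.
    x, y = u, v
    z = 1.0 - abs(u) - abs(v)
    if z < 0.0:  # the point was folded up from the lower hemisphere
        x = (1.0 - abs(v)) * math.copysign(1.0, u)
        y = (1.0 - abs(u)) * math.copysign(1.0, v)
    return x, y, z

# The decoded vector has Manhattan norm 1 by construction, not
# Euclidean norm 1, so intersection tests that use it directly avoid
# the normalisation step the abstract says is skipped.
x, y, z = decode_octahedral(0.25, -0.25)
assert abs(abs(x) + abs(y) + abs(z) - 1.0) < 1e-9
```

Encoding followed by decoding round-trips a direction up to the initial projection, which is why the format suits per-ray storage in the ray data store of claim 17.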
11,028 | 11,028 | 16,460,023 | 2,648 | A system includes a magnetic field transmitter assembly. The magnetic field transmitter assembly has a housing with a first layer comprising an electrically-conductive material, a second layer comprising an electrically-insulating material, and a third layer comprising an electrically-conductive material. The second layer is positioned between the first layer and the third layer. The magnetic field transmitter assembly also includes a plurality of magnetic field generator assemblies positioned within the housing. | 1. A system comprising:
a magnetic field transmitter assembly including:
a housing including a first layer comprising an electrically-conductive material, a second layer comprising an electrically-insulating material, and a third layer comprising an electrically-conductive material, wherein the second layer is positioned between the first layer and the third layer, and
a plurality of magnetic field generator assemblies positioned within the housing and configured to generate a plurality of magnetic fields. 2. The system of claim 1, wherein the plurality of magnetic field generator assemblies includes a coil and/or a permanent magnet. 3. The system of claim 1, wherein each of the plurality of magnetic field generator assemblies includes first, second, and third coil windings in an orthogonal arrangement. 4. The system of claim 1, wherein each of the plurality of magnetic field generator assemblies is positioned within the housing such that respective magnetic fields generated by the plurality of magnetic field generator assemblies overlap each other. 5. The system of claim 1, wherein coil windings within the plurality of magnetic field generator assemblies are either energized at different periods of time or energized simultaneously at different frequencies from each other. 6. The system of claim 1, wherein the housing includes a fourth layer comprising an electrically-insulating material and a fifth layer comprising an electrically-conductive material, and wherein the fourth layer is positioned between the third layer and the fifth layer. 7. The system of claim 1, wherein the layers of the housing form a skin that substantially covers an exterior of the housing. 8. The system of claim 1, wherein the electrically-conductive material comprises carbon fiber, and wherein the electrically-insulating material comprises a para-aramid fiber. 9. The system of claim 1, wherein the layers comprising an electrically-conductive material further comprise an electrically-insulating material. 10. The system of claim 9, wherein the layers comprising the electrically-conductive material and the electrically-insulating material are arranged such that the electrically-conductive material and the electrically-insulating material are woven together. 11. 
The system of claim 1, wherein the electrically-conductive material comprises a plurality of fibers extending parallel to each other. 12. The system of claim 1, further comprising:
a plurality of reference sensors each positioned adjacent to one of the plurality of magnetic field generator assemblies. 13. The system of claim 1, further comprising:
a magnetic field controller configured to control current applied to the plurality of magnetic field generator assemblies. 14. The system of claim 13, further comprising:
a receiver coupled to a medical device, the receiver configured to sense magnetic fields generated by the plurality of magnetic field generator assemblies and generate one or more sensed field signals; and a signal processor configured to receive the sensed field signals and to determine location of the receiver based on phase of the one or more sensed field signals. 15. The system of claim 1, wherein the housing assembly comprises a fluorotranslucent material. 16. The system of claim 1, further comprising:
a mattress coupled to the housing. 17. The system of claim 16, wherein the mattress includes a plurality of sections foldable upon each other. 18. The system of claim 1, further comprising:
a plurality of adjustable clamp assemblies for coupling the magnetic field transmitter assembly to a table. 19. The system of claim 1, wherein the housing includes a plurality of protruding structures at or near an outer periphery of the housing. 20. A magnetic field transmitter assembly comprising:
a housing including means for mitigating magnetic field distortion; and a plurality of magnetic field generator assemblies positioned within the housing. | A system includes a magnetic field transmitter assembly. The magnetic field transmitter assembly has a housing with a first layer comprising an electrically-conductive material, a second layer comprising an electrically-insulating material, and a third layer comprising an electrically-conductive material. The second layer is positioned between the first layer and the third layer. The magnetic field transmitter assembly also includes a plurality of magnetic field generator assemblies positioned within the housing.1. A system comprising:
a magnetic field transmitter assembly including:
a housing including a first layer comprising an electrically-conductive material, a second layer comprising an electrically-insulating material, and a third layer comprising an electrically-conductive material, wherein the second layer is positioned between the first layer and the third layer, and
a plurality of magnetic field generator assemblies positioned within the housing and configured to generate a plurality of magnetic fields. 2. The system of claim 1, wherein the plurality of magnetic field generator assemblies includes a coil and/or a permanent magnet. 3. The system of claim 1, wherein each of the plurality of magnetic field generator assemblies includes first, second, and third coil windings in an orthogonal arrangement. 4. The system of claim 1, wherein each of the plurality of magnetic field generator assemblies is positioned within the housing such that respective magnetic fields generated by the plurality of magnetic field generator assemblies overlap each other. 5. The system of claim 1, wherein coil windings within the plurality of magnetic field generator assemblies are either energized at different periods of time or energized simultaneously at different frequencies from each other. 6. The system of claim 1, wherein the housing includes a fourth layer comprising an electrically-insulating material and a fifth layer comprising an electrically-conductive material, and wherein the fourth layer is positioned between the third layer and the fifth layer. 7. The system of claim 1, wherein the layers of the housing form a skin that substantially covers an exterior of the housing. 8. The system of claim 1, wherein the electrically-conductive material comprises carbon fiber, and wherein the electrically-insulating material comprises a para-aramid fiber. 9. The system of claim 1, wherein the layers comprising an electrically-conductive material further comprise an electrically-insulating material. 10. The system of claim 9, wherein the layers comprising the electrically-conductive material and the electrically-insulating material are arranged such that the electrically-conductive material and the electrically-insulating material are woven together. 11. 
The system of claim 1, wherein the electrically-conductive material comprises a plurality of fibers extending parallel to each other. 12. The system of claim 1, further comprising:
a plurality of reference sensors each positioned adjacent to one of the plurality of magnetic field generator assemblies. 13. The system of claim 1, further comprising:
a magnetic field controller configured to control current applied to the plurality of magnetic field generator assemblies. 14. The system of claim 13, further comprising:
a receiver coupled to a medical device, the receiver configured to sense magnetic fields generated by the plurality of magnetic field generator assemblies and generate one or more sensed field signals; and a signal processor configured to receive the sensed field signals and to determine location of the receiver based on phase of the one or more sensed field signals. 15. The system of claim 1, wherein the housing assembly comprises a fluorotranslucent material. 16. The system of claim 1, further comprising:
a mattress coupled to the housing. 17. The system of claim 16, wherein the mattress includes a plurality of sections foldable upon each other. 18. The system of claim 1, further comprising:
a plurality of adjustable clamp assemblies for coupling the magnetic field transmitter assembly to a table. 19. The system of claim 1, wherein the housing includes a plurality of protruding structures at or near an outer periphery of the housing. 20. A magnetic field transmitter assembly comprising:
a housing including means for mitigating magnetic field distortion; and a plurality of magnetic field generator assemblies positioned within the housing. | 2,600 |
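Claim 5 of the record above recites coil windings either energized at different periods of time or energized simultaneously at different frequencies. A minimal sketch of the frequency-multiplexed alternative, assuming ideal sinusoidal drive and a receiver that separates coils by correlating against each drive frequency; the function names are illustrative, not from the patent:

```python
import math

def coil_field_sample(t, coils):
    # Superposed field from coils energized simultaneously, each at its
    # own frequency: coils is a list of (amplitude, frequency_hz) pairs.
    return sum(a * math.sin(2 * math.pi * f * t) for a, f in coils)

def recover_amplitude(samples, dt, f):
    # Separate one coil's contribution by correlating at its frequency.
    # Valid when the sample window spans whole periods of every tone,
    # so sinusoids at distinct drive frequencies are orthogonal.
    s = sum(x * math.sin(2 * math.pi * f * i * dt)
            for i, x in enumerate(samples))
    return 2.0 * s / len(samples)
```

For example, sampling one second at 1 kHz with coils driven at 50 Hz and 120 Hz, `recover_amplitude` returns each coil's drive amplitude despite the fields overlapping, which is what lets a receiver attribute the sensed signal to individual generator assemblies.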
11,029 | 11,029 | 15,835,891 | 2,692 | Systems are presented herein, which may be implemented in a wearable device. The system is designed to allow a user to edit media images captured with the wearable device. The system employs eye tracking data to control various editing functions, whether prior to the time of capture, during the time of capture, or after the time of capture. Also presented are methods for determining which sections or regions of media images may be of greater interest to a user or viewer. The method employs eye tracking data to assign saliency to captured media. In both the system and the method, eye tracking data may be combined with data from additional sensors in order to enhance operation. | 1-39. (canceled) 40. A system for editing media images, the system comprising:
a wearable device; a scene camera mounted on the device such that the scene camera captures media images of surroundings of a user; a memory for storing the media images; an eye tracking subsystem that projects a reference frame onto at least one eye of a user and associates the projected reference frame with a second reference frame of a display for capturing eye tracking data of the at least one eye of the user; and one or more processors communicating with the scene camera and eye tracking subsystem for tagging media images captured by the scene camera based at least in part on the eye tracking data and for non-destructively editing the media images such that the media images are recoverable at a later time from the memory by the one or more processors. 41. The system of claim 40, wherein the one or more processors are configured to assign a validity weight to the eye tracking data based on a graceful degradation in eye tracking precision. 42. The system of claim 40, further comprising a transceiver or other wireless communication interface communicating with the one or more processors to transmit the media images to a remote location. 43. The system of claim 40, further comprising a transceiver or other wireless communication interface communicating with the one or more processors to receive data from one or more sensors remote from the wearable device. 44. A method for selecting or editing media images from a wearable device worn by a user, comprising:
capturing media images of the user's surroundings at a scene camera of the wearable device; capturing eye tracking data of at least one eye of the user at a user-facing camera of the wearable device; projecting a reference frame onto the at least one eye of the user and associating the projected reference frame with a second reference frame of a display at an eye tracking subsystem of the wearable device; at least one of selecting and editing the media images based at least in part on actions of the at least one eye identified from the eye tracking data; detecting a threshold event based on an accelerometer of the wearable device; and wherein at least one of selecting and editing the media images comprises automatically tagging the media images for a predetermined time immediately before the threshold event. 45. The method of claim 44, further comprising determining a saliency value of the media images from the scene camera based at least in part on eye movement of the user identified in the eye tracking data. 46. The method of claim 45, wherein detecting the threshold event selectively overrides other saliency determinations. 47. The method of claim 44, further comprising recognizing predetermined faces within the media images. 48. The method of claim 44, further comprising:
receiving, at a communication interface of the wearable device, metrics or heuristics from a remote location; wherein at least one of selecting and editing the media images comprises assigning saliency to one or more segments of the media images based upon the metrics or heuristics. 49. The method of claim 45, further comprising using data from social media applications as input for partial determination of saliency in the media images. 50. The method of claim 44, further comprising cropping media images to boundaries suggested by a user's hand motions to create a picture frame. 51. The method of claim 44, further comprising assigning saliency to media images based at least in part on input from multiple sensors, such as squinting, smiling or laughter. 52. The method of claim 44, further comprising automatically cropping media images to a region in which the at least one eye of the user is fixating. 53. The method of claim 44, further comprising automatically cropping media images to a region bounded by saccadic movements of the at least one eye of the user. 54. A method, comprising:
projecting a point of light onto a user's eye; capturing eye tracking data comprising a glint from a reflection of the point of light from the user's eye at an eye tracking camera of a wearable device; determining movement of the user's eye relative to the glint; capturing media images of the user's surroundings at a scene camera of the wearable device; and editing the media images based on movement of the user's eye relative to the glint. 55. The method of claim 54, further comprising detecting a threshold event based on an accelerometer of the wearable device. 56. The method of claim 55, further comprising determining a saliency value of the media images from the scene camera based at least in part on eye movement of the user identified in the eye tracking data. 57. The method of claim 56, wherein detecting the threshold event selectively overrides other saliency determinations. 58. The method of claim 54, further comprising recognizing predetermined faces within the media images. 59. The method of claim 54, further comprising:
receiving, at a communication interface of the wearable device, metrics or heuristics from a remote location; wherein editing the media images comprises assigning saliency to one or more segments of the media images based upon the metrics or heuristics. | Systems are presented herein, which may be implemented in a wearable device. The system is designed to allow a user to edit media images captured with the wearable device. The system employs eye tracking data to control various editing functions, whether prior to the time of capture, during the time of capture, or after the time of capture. Also presented are methods for determining which sections or regions of media images may be of greater interest to a user or viewer. The method employs eye tracking data to assign saliency to captured media. In both the system and the method, eye tracking data may be combined with data from additional sensors in order to enhance operation.1-39. (canceled) 40. A system for editing media images, the system comprising:
a wearable device; a scene camera mounted on the device such that the scene camera captures media images of surroundings of a user; a memory for storing the media images; an eye tracking subsystem that projects a reference frame onto at least one eye of a user and associates the projected reference frame with a second reference frame of a display for capturing eye tracking data of the at least one eye of the user; and one or more processors communicating with the scene camera and eye tracking subsystem for tagging media images captured by the scene camera based at least in part on the eye tracking data and for non-destructively editing the media images such that the media images are recoverable at a later time from the memory by the one or more processors. 41. The system of claim 40, wherein the one or more processors are configured to assign a validity weight to the eye tracking data based on a graceful degradation in eye tracking precision. 42. The system of claim 40, further comprising a transceiver or other wireless communication interface communicating with the one or more processors to transmit the media images to a remote location. 43. The system of claim 40, further comprising a transceiver or other wireless communication interface communicating with the one or more processors to receive data from one or more sensors remote from the wearable device. 44. A method for selecting or editing media images from a wearable device worn by a user, comprising:
capturing media images of the user's surroundings at a scene camera of the wearable device; capturing eye tracking data of at least one eye of the user at a user-facing camera of the wearable device; projecting a reference frame onto the at least one eye of the user and associating the projected reference frame with a second reference frame of a display at an eye tracking subsystem of the wearable device; at least one of selecting and editing the media images based at least in part on actions of the at least one eye identified from the eye tracking data; detecting a threshold event based on an accelerometer of the wearable device; and wherein at least one of selecting and editing the media images comprises automatically tagging the media images for a predetermined time immediately before the threshold event. 45. The method of claim 44, further comprising determining a saliency value of the media images from the scene camera based at least in part on eye movement of the user identified in the eye tracking data. 46. The method of claim 45, wherein detecting the threshold event selectively overrides other saliency determinations. 47. The method of claim 44, further comprising recognizing predetermined faces within the media images. 48. The method of claim 44, further comprising:
receiving, at a communication interface of the wearable device, metrics or heuristics from a remote location; wherein at least one of selecting and editing the media images comprises assigning saliency to one or more segments of the media images based upon the metrics or heuristics. 49. The method of claim 45, further comprising using data from social media applications as input for partial determination of saliency in the media images. 50. The method of claim 44, further comprising cropping media images to boundaries suggested by a user's hand motions to create a picture frame. 51. The method of claim 44, further comprising assigning saliency to media images based at least in part on input from multiple sensors, such as squinting, smiling or laughter. 52. The method of claim 44, further comprising automatically cropping media images to a region in which the at least one eye of the user is fixating. 53. The method of claim 44, further comprising automatically cropping media images to a region bounded by saccadic movements of the at least one eye of the user. 54. A method, comprising:
projecting a point of light onto a user's eye; capturing eye tracking data comprising a glint from a reflection of the point of light from the user's eye at an eye tracking camera of a wearable device; determining movement of the user's eye relative to the glint; capturing media images of the user's surroundings at a scene camera of the wearable device; and editing the media images based on movement of the user's eye relative to the glint. 55. The method of claim 54, further comprising detecting a threshold event based on an accelerometer of the wearable device. 56. The method of claim 55, further comprising determining a saliency value of the media images from the scene camera based at least in part on eye movement of the user identified in the eye tracking data. 57. The method of claim 56, wherein detecting the threshold event selectively overrides other saliency determinations. 58. The method of claim 54, further comprising recognizing predetermined faces within the media images. 59. The method of claim 54, further comprising:
receiving, at a communication interface of the wearable device, metrics or heuristics from a remote location; wherein editing the media images comprises assigning saliency to one or more segments of the media images based upon the metrics or heuristics. | 2,600 |
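Claim 44 above tags media images "for a predetermined time immediately before the threshold event" detected by the accelerometer. That behavior can be sketched as a rolling pre-event buffer. This is a minimal illustrative sketch, not the patent's implementation; the window length, frame rate, and function names are all assumptions.

```python
from collections import deque

PRE_EVENT_SECONDS = 5.0   # hypothetical "predetermined time" window
FRAME_RATE = 30.0         # assumed scene-camera frame rate

# Ring buffer sized to hold exactly the pre-event window of frames;
# older frames fall off the left automatically.
buffer = deque(maxlen=int(PRE_EVENT_SECONDS * FRAME_RATE))

def on_frame(timestamp, frame):
    """Called for every scene-camera frame as it is captured."""
    buffer.append((timestamp, frame))

def on_threshold_event():
    """Accelerometer exceeded its threshold: tag the buffered frames,
    i.e. everything captured in the window immediately before the event."""
    return [(t, f, "tagged") for t, f in buffer]

# Usage: feed 200 frames, then simulate a jolt.
for i in range(200):
    on_frame(i / FRAME_RATE, f"frame-{i}")
tagged = on_threshold_event()
```

Only the most recent 150 frames (5 s at 30 fps) survive in the buffer, so the tagged set covers precisely the pre-event window the claim describes.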
11,030 | 11,030 | 15,583,189 | 2,658 | A communication system supports communication paths within an environment by receiving the speech signals of a speaker and playing them back for one or more listeners. Signal processing tasks are split into a microphone-related part and a loudspeaker-related part. A sound processing system suitable for use in an environment having multiple acoustic zones includes a plurality of microphone communication instances and a plurality of loudspeaker instances. | 1. A sound processing system having a plurality of microphones and a plurality of loudspeakers distributed in multiple acoustic zones, the system comprising:
a plurality of microphone communication instances coupled to corresponding ones of the plurality of microphones; a plurality of loudspeaker instances coupled to corresponding ones of the plurality of loudspeakers; and a dynamic audio router coupled to the plurality of microphone communication instances and the plurality of loudspeaker instances. | 2,600 |
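The claim's "dynamic audio router" couples microphone instances to loudspeaker instances across acoustic zones. One plausible policy (e.g. an in-car intercom) forwards a microphone's signal to the loudspeakers of every *other* zone, so listeners hear a speaker who is not in their own zone. The zone layout and routing policy below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical zone layout: microphone/loudspeaker instances mapped to
# the acoustic zone they serve.
mic_zone = {"mic_driver": "front", "mic_rear_l": "rear", "mic_rear_r": "rear"}
speaker_zone = {"spk_front_l": "front", "spk_front_r": "front",
                "spk_rear_l": "rear", "spk_rear_r": "rear"}

def route(mic_id):
    """Return the loudspeaker instances that should reproduce this
    microphone's signal: all speakers outside the microphone's own zone."""
    src = mic_zone[mic_id]
    return sorted(s for s, z in speaker_zone.items() if z != src)

targets = route("mic_driver")  # speech path: front zone -> rear zone
```

A real router would update these mappings dynamically (hence "dynamic audio router") as speech paths are opened and closed.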
11,031 | 11,031 | 16,548,764 | 2,685 | Systems and methods for identifying road users on at least one roadway are provided. In one embodiment, a method includes identifying a non-vehicle road user (NVRU) detection zone on the at least one roadway. The method further includes identifying at least one road user within the NVRU detection zone based on sensor data from at least one device sensor of road side equipment (RSE) associated with the infrastructure of the at least one roadway. The method yet further includes determining the road user is an NVRU. | 1. A computer-implemented method for identifying road users on at least one roadway, comprising:
identifying a non-vehicle road user (NVRU) detection zone on the at least one roadway; identifying at least one road user within the NVRU detection zone based on sensor data from at least one device sensor of road side equipment (RSE) associated with infrastructure of the at least one roadway; and determining the road user is an NVRU. 2. The computer-implemented method of claim 1, wherein the NVRU detection zone is an area of the at least one roadway frequented by the road users. 3. The computer-implemented method of claim 1, wherein identifying the NVRU detection zone is based on historical data received from the at least one device sensor of the RSE. 4. The computer-implemented method of claim 3, wherein the historical data is compared to a threshold number of previously identified NVRUs. 5. The computer-implemented method of claim 1, further comprising:
determining a boundary limit based on the NVRU detection zone; determining the NVRU satisfies the boundary limit; classifying the road user as an NVRU; and storing the classification of the road user. 6. The computer-implemented method of claim 5, wherein the boundary limit is based on a distance from at least one device sensor to the NVRU detection zone. 7. The computer-implemented method of claim 1, wherein
determining a boundary limit based on the NVRU detection zone; determining the NVRU satisfies the boundary limit; determining the NVRU does not satisfy the boundary limit; and classifying the road user as a vehicle. 8. A system for identifying road users on at least one roadway, comprising:
a memory storing instructions that, when executed by a processor, cause the processor to: identify a non-vehicle road user (NVRU) detection zone on the at least one roadway; identify at least one road user within the NVRU detection zone based on sensor data from at least one device sensor of road side equipment (RSE) associated with infrastructure of the at least one roadway; and determine the road user is an NVRU. 9. The system of claim 8, wherein the NVRU detection zone is an area of the at least one roadway frequented by the road users. 10. The system of claim 8, wherein the NVRU detection zone is based on historical data received from the at least one device sensor of the RSE. 11. The system of claim 10, wherein the instructions when executed by the processor further cause the processor to compare historical data to a threshold number of previously identified NVRUs. 12. The system of claim 8, wherein the instructions when executed by the processor further cause the processor to:
determine a boundary limit based on the NVRU detection zone; determine the NVRU satisfies the boundary limit; classify the road user as an NVRU; and store the classification of the road user. 13. The system of claim 12, wherein the boundary limit is based on a distance from at least one device sensor to the NVRU detection zone. 14. The system of claim 8, wherein the instructions when executed by the processor further cause the processor to:
determine a boundary limit based on the NVRU detection zone; determine the NVRU satisfies the boundary limit; determine the NVRU does not satisfy the boundary limit; and classify the road user as a vehicle. 15. A non-transitory computer readable storage medium storing instructions that when executed by a computer, which includes a processor, perform a method, the method comprising:
receiving a plurality of frames from at least one device sensor of road side equipment (RSE) associated with infrastructure of at least one roadway; receiving a coordinated universal time (UTC) signal; determining a timestamp interval between frames of the plurality of frames; calculating timestamps for the frames of the plurality of frames based on the UTC signal and the timestamp interval; and assigning a calculated timestamp to each of the frames of the plurality of the frames. 16. The non-transitory computer readable storage medium of claim 15, wherein the time interval is based on specifications of the at least one device sensor of the RSE. 17. The non-transitory computer readable storage medium of claim 15, wherein the time interval is based on metadata associated with the frames in the plurality of frames. 18. The non-transitory computer readable storage medium of claim 15, further comprising:
calculating a first timestamp based on the UTC signal and a time when a first frame of the plurality of frames was received. 19. The non-transitory computer readable storage medium of claim 15, further comprising:
assigning the timestamped plurality of frames unique identifiers; and forwarding the timestamped plurality of frames for processing. 20. The non-transitory computer readable storage medium of claim 15, wherein the plurality of frames are aggregated from multiple sensors of RSEs. | 2,600 |
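Claims 15 and 18 describe a concrete timestamping scheme: the first frame is anchored to the coordinated universal time (UTC) signal, and every later frame adds a fixed inter-frame interval (e.g. the sensor's frame period from its specifications, per claim 16). A minimal sketch of that arithmetic, with illustrative values and hypothetical names:

```python
def assign_timestamps(num_frames, utc_first_frame, interval):
    """Anchor the first frame's timestamp to the UTC reference, then
    offset each subsequent frame by a fixed inter-frame interval."""
    return [utc_first_frame + i * interval for i in range(num_frames)]

# A 30 fps sensor implies a ~33.3 ms interval; the UTC reference value
# here is an arbitrary example epoch time.
stamps = assign_timestamps(num_frames=4,
                           utc_first_frame=1_700_000_000.0,
                           interval=1 / 30)
```

Each frame then carries a calculated UTC-aligned timestamp even though only one UTC reading was taken, which is what lets frames aggregated from multiple RSE sensors (claim 20) be ordered on a common clock.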
11,032 | 11,032 | 15,620,872 | 2,659 | A variety of approaches to provide an event based activity service are described. A communication service initiates operations to provide the event based activity service upon analyzing at least a portion of a conversation. An activity associated with the conversation is inferred from the analyzed portion. Next, an item associated with the activity is created. A creation notification associated with the item is inserted into the conversation. | 1. A method to provide an event based activity service for a conversational environment, the method comprising:
analyzing at least a portion of a conversation; inferring an activity associated with the conversation from the analyzed portion; creating an item associated with the activity; and inserting a creation notification associated with the item into the conversation. 2. The method of claim 1, wherein the conversation includes a combination of text based messages or emails. 3. The method of claim 1, further comprising:
identifying an owner and one or more of a title and a description of a task based on the activity; generating the task as the item; and inserting the task into a calendar of the owner. 4. The method of claim 1, further comprising:
identifying an owner and one or more of a subject and a description of an appointment based on the activity; generating the appointment as the item; and inserting the appointment into a calendar of the owner. 5. The method of claim 1, further comprising:
identifying one or more invitees and one or more of a subject and a description of a meeting based on the activity; generating the meeting as the item; and inserting the meeting into one or more calendars of the one or more invitees. 6. The method of claim 5, wherein the meeting includes a conference call. 7. The method of claim 1, further comprising:
identifying an owner of the item based on the activity, wherein the owner includes one or more of a group member of a group that is collaborating through the conversation, a group owner of the group, and an external entity that interacts with the group. 8. The method of claim 7, further comprising:
detecting the group owner as the owner of the item; identifying an association with the group owner and a second group; and associating the item with the second group based on a context of the activity or a relationship of the group owner with the second group. 9. The method of claim 1, further comprising:
tracking a status of the item; detecting a change to the status of the item; and inserting a change notification associated with the status of the item into the conversation. 10. The method of claim 1, further comprising:
detecting a completion event associated with the item; and inserting a completion notification associated with the item into the conversation. 11. The method of claim 1 further comprising:
receiving a request to duplicate the item;
creating a duplicate of the item; and
inserting a second creation notification associated with the duplicate of the item. 12. A server configured to provide an event based activity service for a conversational environment, the server comprising:
a communication device configured to facilitate communication between a communication service and one or more client devices; a memory configured to store instructions; and a processor coupled to the memory and the communication device, the processor executing the communication service in conjunction with the instructions stored in the memory, wherein the communication service includes:
a detection module configured to:
analyze a content of a conversation based on one or more of a text based analysis, a voice based analysis, and a gesture based analysis;
infer an activity associated with the conversation from the analyzed content;
prompt a group member of a group who participates in the conversation to confirm the activity;
upon receiving a confirmation, create an item associated with the activity; and
insert a creation notification associated with the item into the conversation. 13. The server of claim 12, wherein the detection module is further configured to:
detect an identifier of a stakeholder associated with the item within the activity; and assign the stakeholder as an owner of the item. 14. The server of claim 13, wherein the detection module is further configured to:
detect an association between the identifier of the stakeholder and the group member; and designate the group member as the owner of the item. 15. The server of claim 13, wherein the detection module is further configured to:
detect an association between the identifier of the stakeholder and an external entity; validate an ownership authorization of the external entity to inherit the item; and upon a validation of the ownership authorization, designate the external entity as the owner of the item. 16. The server of claim 12, wherein the detection module is further configured to:
detect a time period associated with the item within the activity; and configure the item based on the time period. 17. The server of claim 16, wherein the detection module is further configured to:
insert a reminder notification associated with the item into the conversation prior to a start time of the time period associated with the item. 18. The server of claim 12, wherein the detection module is further configured to:
detect an association between the item and a third party provider within the activity; and transmit the item to the third party provider with an instruction to host and manage the item. 19. A computer-readable memory device with instructions stored thereon to provide an event based activity service for a conversational environment, the instructions comprising:
analyzing a content of a conversation based on one or more of a text based analysis, a voice based analysis, and a gesture; inferring an activity associated with the conversation from the analyzed content; prompting a group member of a group who participates in the conversation to confirm the activity; upon receiving a confirmation, creating an item associated with the activity; and inserting a creation notification associated with the item into the conversation. 20. The computer-readable memory device of claim 19, wherein the instructions further comprise:
detecting an identifier of the group member as associated with the item within the activity; and assigning the group member as an owner of the activity. | 2,600 |
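Claim 1 of this record describes a four-step pipeline: analyze a conversation portion, infer an activity, create an item, and insert a creation notification back into the conversation. The sketch below uses naive keyword matching as a toy stand-in for the analysis step — a real service would use text/voice/gesture analysis as the claims describe — and the patterns and item fields are assumptions.

```python
import re

# Toy activity patterns; illustrative only.
PATTERNS = {
    "meeting": re.compile(r"\b(meet|meeting|call)\b", re.I),
    "task": re.compile(r"\b(todo|task|finish|deadline)\b", re.I),
}

def infer_activity(message):
    """Infer an activity kind from one message, or None."""
    for kind, pat in PATTERNS.items():
        if pat.search(message):
            return kind
    return None

def process(conversation):
    """Analyze each message, create an item for any inferred activity,
    and append a creation notification into the conversation."""
    items = []
    # Snapshot the list so appended notifications are not re-analyzed.
    for msg in list(conversation):
        kind = infer_activity(msg)
        if kind:
            items.append({"type": kind, "source": msg})
            conversation.append(f"[notification] created {kind} item")
    return items

chat = ["Let's meet Tuesday about the launch", "Sounds good"]
items = process(chat)
```

The snapshot in `process` matters: without it, the inserted notification text could itself match a pattern and trigger a second, spurious item.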
11,033 | 11,033 | 15,379,341 | 2,647 | A telecommunication device includes a casing, smart phone components disposed within the casing, two opposed flexible wrist grips having upper and lower surfaces, and a flexible screen disposed on the lower surfaces of the casing and the opposed flexible wrist grips and in communication with the smart phone components. | 1. A telecommunication device comprising:
a) a casing having a pair of opposed sides, b) a first retractor, c) smart phone components disposed within the casing, d) first flexible screen in communication with the smart phone components, the first flexible screen having proximal and distal ends, the proximal end secured to the first retractor, the distal end being secured to an end unit, the first flexible screen being extensible from and retractable to a first retractor to which its proximal end is secured, the end unit being configured to prevent retraction of the distal end to which it is secured to the first retractor to which the corresponding proximal end is secured. 2. The telecommunication device of claim 1, where the casing is a flexible casing. 3. The telecommunication device of claim 1, where the first flexible screen is secured into the first retractor. 4. The telecommunication device of claim 1, where a portion of the first flexible screen is visible when retracted around the first retractor. 5. The telecommunication device of claim 1, includes a second flexible screen in communication with the smart phone components, the second flexible screen having a second screen proximal and second screen distal end, each second screen proximal end secured to a second retractor, the second screen distal end being secured to an end unit, the second flexible screens being extensible from and retractable to the second retractor to which its second screen proximal end is secured, each end unit being configured to prevent retraction of the second screen distal end to which it is secured to the second retractor to which the corresponding second screen proximal end is secured. 6. The telecommunication device of claim 5, further includes a protective layer applied to at least a portion of the first flexible screen and the second flexible screen. 7. 
The telecommunication device of claim 5, wherein the casing is secured with two opposed flexible wrist grips that have distal ends, and wherein a clasp secures the two distal ends. 8. A telecommunication device comprising:
a) a casing having a pair of opposed sides, b) smart phone components disposed within the casing, d) first flexible screen in communication with the smart phone components, the first flexible screen being integrated with and covering a flexible wrist grip. 9. The telecommunication device of claim 8, includes a clasp that secures the flexible wrist grip which has two distal ends at the two distal ends. 10. A telecommunication device comprising:
a) a casing having upper, lower and opposing surfaces, b) two opposed flexible wrist grips one secured to and extending from each opposing end surface of the case, the opposed flexible wrist grips having upper and lower surfaces, and c) a flexible screen disposed on the lower surfaces of the casing and the opposed flexible wrist grips, whereby the flexible screen faces away from the lower surface and is in communication with smart phone components, d) a second screen in communication with the smart phone components, wherein the casing has an upper surface and the second screen is disposed on or in the upper surface of the casing, wherein the second screen is a fixed screen. 11. The telecommunication device of claim 10 where the opposed flexible wrist grips have opposing side edges and the second flexible screen is secured to the casing but not the opposing flexible wrist grip and where the telecommunication device further comprises two opposed elongated flexible grooved guides extending along the lower surface of each opposed flexible wrist grip along the opposing side edges of the wrist grip, where the two opposed elongated flexible grooved guides moveably engage the second flexible screen within the grooved guides along the edges of the second flexible screen so that the second flexible screen is still visible when engaged by the grooved guides. 12. The telecommunication device of claim 11 where the two opposed flexible wrist grips includes a plurality of segments. 13. The telecommunication device of claim 11 further includes a protective layer applied to at least a portion of the top surface of the flexible screen. 14. The telecommunication device of claim 11 further includes a second screen in communication with the smart phone components, wherein the casing has an upper surface and the second screen is disposed on or in the upper surface of the casing. 15. 
The telecommunication device of claim 11 wherein the two opposed flexible wrist grips have distal ends, and wherein a clasp secures the two distal ends. 16. A telecommunication device comprising:
a) a flexible wrist grip; b) a flexible screen having proximal and distal ends, the flexible screen being secured to the flexible wrist grip, such that the flexible screen is able to be worn around a wrist of a user; and c) a second screen, wherein the second screen and the flexible screen create a single display, and the flexible screen and the second screen are in communication with the smart phone components. 17. The telecommunication device of claim 16 where the flexible wrist grips include a plurality of segments. 18. The telecommunication device of claim 16 further includes a protective layer applied to at least a portion of the top surface of the flexible screen. 19. The telecommunication device of claim 18 further includes a protective layer applied to at least a portion of the top surface of the second screen. 20. A telecommunication device comprising:
a) a casing having a pair of opposed sides, c) a flexible screen having
i) an interior portion secured to the casing and in communication with smart phone components, and
ii) opposed extensible distal ends, each extensible distal end extending away from one of the opposed sides of the casing, and
d) retractors secured to each of the opposed extensible distal ends of the flexible screen, each retractor being configured to retract an extensible distal end of the flexible screen until the retractor contacts a side of the casing away from which the extensible distal end of the flexible screen to which the retractor is secured extends, each retractor being further configured to enable extension of the extensible distal end of the flexible screen to which the retractor is secured; and e) a second screen in communication with the smart phone components, wherein the casing has an upper surface and the second screen is disposed on or in the upper surface of the casing, wherein the second screen is a fixed screen. 21. The telecommunication device of claim 20, where the flexible screen rolls up on the retractors. 22. The telecommunication device of claim 21, where a portion of the flexible screen is visible when rolled up on the retractors. | A telecommunication device includes a casing, smart phone components disposed within the casing, two opposed flexible wrist grips having upper and lower surfaces, and a flexible screen disposed on the lower surfaces of the casing and the opposed flexible wrist grips and in communication with the smart phone components.1. A telecommunication device comprising:
a) a casing having a pair of opposed sides, b) a first retractor, c) smart phone components disposed within the casing, d) first flexible screen in communication with the smart phone components, the first flexible screen having proximal and distal ends, the proximal end secured to the first retractor, the distal end being secured to an end unit, the first flexible screen being extensible from and retractable to a first retractor to which its proximal end is secured, the end unit being configured to prevent retraction of the distal end to which it is secured to the first retractor to which the corresponding proximal end is secured. 2. The telecommunication device of claim 1, where the casing is a flexible casing. 3. The telecommunication device of claim 1, where the first flexible screen is secured into the first retractor. 4. The telecommunication device of claim 1, where a portion of the first flexible screen is visible when retracted around the first retractor. 5. The telecommunication device of claim 1, includes a second flexible screen in communication with the smart phone components, the second flexible screen having a second screen proximal and second screen distal end, each second screen proximal end secured to a second retractor, the second screen distal end being secured to an end unit, the second flexible screens being extensible from and retractable to the second retractor to which its second screen proximal end is secured, each end unit being configured to prevent retraction of the second screen distal end to which it is secured to the second retractor to which the corresponding second screen proximal end is secured. 6. The telecommunication device of claim 5, further includes a protective layer applied to at least a portion of the first flexible screen and the second flexible screen. 7. 
The telecommunication device of claim 5 wherein casing is secured with two opposed flexible wrist grips that have distal ends, and wherein a clasp secures the two distal ends. 8. A telecommunication device comprising:
a) a casing having a pair of opposed sides, b) smart phone components disposed within the casing, d) first flexible screen in communication with the smart phone components, the first flexible screen being integrated with and covering a flexible wrist grip. 9. The telecommunication device of claim 8, includes a clasp that secures the flexible wrist grip which has two distal ends at the two distal ends. | 2,600 |
11,034 | 11,034 | 15,790,430 | 2,699 | A method and apparatus for use in a digital imaging device for correcting image blur in digital images by combining a plurality of images. The plurality of images that are combined include a main subject that can be selected by user input or automatically by the digital imaging device. Blur correction can be performed to make the main subject blur-free while the rest of the image is blurred. All of the image may be made blur-free or the main subject can be made blur-free at the expense of the rest of the image. The result is a blur-corrected image that is recorded in a memory. | 1. A method for capturing stabilized video, for use in a device that includes a lens, an image sensor, a display, a memory, and a processor, the device having a field of view, the method comprising:
displaying, in the display of the device, a preview of a subject within the field of view of the device; capturing a video of the subject, with the lens and the image sensor, wherein the video is a sequence of images; detecting, by the processor, the subject in one or more images of the sequence of images and determining a location of the subject within the images; shifting, by the processor, the one or more images vertically and horizontally by an integer number of pixels to obtain corrected images, wherein the amount of vertical and horizontal shift for each of the one or more images is determined at least in part based on the location of the subject in the image; combining the corrected images to obtain a stabilized video; and displaying the stabilized video in the display of the device. | 2,600 |
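The stabilization step recited in the video-capture claim above — shifting each image vertically and horizontally by an integer number of pixels based on the detected subject location, then combining the corrected images — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the function name, the use of the first frame's subject position as the anchor, and the `np.roll` wrap-around behavior are all assumptions.

```python
import numpy as np

def stabilize(frames, subject_locations):
    """Subject-anchored stabilization sketch: shift each frame by an
    integer number of pixels so the detected subject stays where it was
    in the first frame. Hypothetical helper, not the claimed method."""
    target_row, target_col = subject_locations[0]
    corrected = []
    for frame, (row, col) in zip(frames, subject_locations):
        # Vertical and horizontal integer shifts derived from the
        # subject's location in this frame.
        dr, dc = target_row - row, target_col - col
        # np.roll wraps pixels around the frame edge; a real
        # implementation would crop or pad the border instead.
        corrected.append(np.roll(frame, shift=(dr, dc), axis=(0, 1)))
    # "combining the corrected images to obtain a stabilized video"
    return np.stack(corrected)
```

With two frames whose subject moves from (2, 2) to (3, 4), the second corrected frame has the subject back at (2, 2), so the subject appears stationary in the combined sequence.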
11,035 | 11,035 | 16,732,381 | 2,688 | An optical disk ( 100 ) of the present invention includes (i) a medium information region ( 101 ) (a) in which type identification information is recorded by recesses and/or protrusions which are formed by a given modulation method and whose lengths are longer than a length of an optical system resolution limit of a playback device and (b) in which first address information is recorded in a first address data format and (ii) a data region ( 102 ) (a) in which content data is recorded by recesses and/or protrusions which are formed by the given modulation method and which include a recess and/or a protrusion whose length is shorter than the length of the optical system resolution limit and (b) in which second address information is recorded in a second address data format. | 1. A method for playing back an information recording medium, said information recording medium comprising:
a first region in which type identification information for identifying a type of the information recording medium is recorded by recesses and/or protrusions which are formed by a given modulation method and whose lengths are longer than a length of an optical system resolution limit of a playback device; and a second region in which content data is recorded by recesses and/or protrusions which are formed by the given modulation method and which include a recess and/or a protrusion whose length is shorter than the length of the optical system resolution limit, the first region containing first address information recorded therein in a first address data format, and the second region containing second address information recorded therein in a second address data format that differs from the first address data format, said method comprising determining playback setting of the second region based on type identification information obtained by irradiating the first region with playback light. | 2,600 |
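The playback method claimed above follows a two-step flow: first read the type identification information from the medium information region (whose marks are longer than the optical resolution limit, so any compatible drive can resolve them), then use that type to determine the playback settings for the high-density data region. A minimal sketch of that control flow is below; the type codes, setting names, and values are invented placeholders, not values from the patent.

```python
# Placeholder mapping from decoded medium type to data-region playback
# settings (equalization mode, address data format). All values are
# illustrative assumptions.
PLAYBACK_SETTINGS = {
    "standard-density": {"equalizer": "limit", "address_format": "first"},
    "high-density": {"equalizer": "PRML", "address_format": "second"},
}

def configure_playback(read_medium_info_region):
    """read_medium_info_region is a hypothetical callable that returns the
    type identification string decoded from the first region."""
    medium_type = read_medium_info_region()
    try:
        # "determining playback setting of the second region based on
        # type identification information"
        return PLAYBACK_SETTINGS[medium_type]
    except KeyError:
        raise ValueError(f"unknown medium type: {medium_type!r}")

settings = configure_playback(lambda: "high-density")
```

The key point the sketch illustrates is the ordering: the low-density first region is always readable, and its contents gate how the drive is configured before it attempts the beyond-resolution-limit second region.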
11,036 | 11,036 | 16,250,551 | 2,689 | A mobile energy delivery system is provided. The mobile energy delivery system includes an unmanned aerial vehicle (UAV) configured to deliver energy, a controller configured to deploy the UAV responsive to a request and a ground-based, drivable vehicle. The ground-based drivable vehicle includes an energy storage component disposed to store energy for ground-based driving, a controller configured to determine a current energy requirement for the ground-based driving and to issue the request to the controller accordingly and a frame. The frame is configured to accommodate the energy storage component and includes a single entirely smooth uppermost surface. The energy storage component is chargeable by the UAV upon the UAV being deployed by the controller in response to the request and subsequently contacting or entering into an immediate vicinity of the single entirely smooth uppermost surface during the ground-based driving. | 1. A mobile energy delivery system, comprising:
an unmanned aerial vehicle (UAV) configured to deliver energy; a controller configured to deploy the UAV responsive to a request; and a ground-based, drivable vehicle comprising an energy storage component disposed to store energy for ground-based driving, a controller configured to determine a current energy requirement for the ground-based driving and to issue the request to the controller accordingly and a frame configured to accommodate the energy storage component and comprising a single entirely smooth uppermost surface, and the energy storage component being chargeable by the UAV upon the UAV being deployed by the controller in response to the request and subsequently contacting or entering into an immediate vicinity of the single entirely smooth uppermost surface during the ground-based driving. 2. The mobile energy delivery system according to claim 1, wherein the UAV is one of multiple UAVs of a fleet, each of the multiple UAVs being deployable from and returnable to various home bases. 3. The mobile energy delivery system according to claim 1, wherein the UAV contacts or enters into an immediate vicinity of the single entirely smooth uppermost surface during the ground-based driving by matching a speed and direction of the ground-based, drivable vehicle at a substantially flat and straight roadway. 4. The mobile energy delivery system according to claim 1, wherein:
the controller determines the current energy requirement by calculating a remaining amount of energy in the energy storage component and an amount of energy required for the ground-based driving, and the controller issues the request in an event the remaining amount of energy in the energy storage component is less than the amount of energy required for the ground-based driving. 5. The mobile energy delivery system according to claim 1, wherein the request comprises location, speed, route and identification information of the ground-based, drivable vehicle. 6. The mobile energy delivery system according to claim 1, wherein:
the single entirely smooth uppermost surface comprises conductive paint which is electrically communicative with the energy storage component, and the energy storage component is chargeable by the UAV via the conductive paint upon the UAV contacting the single entirely smooth uppermost surface, or the energy storage component is inductively or capacitively chargeable by the UAV upon the UAV entering into an immediate vicinity of the single entirely smooth uppermost surface. 7. The mobile energy delivery system according to claim 1, wherein the UAV is configured to confirm that the ground-based, drivable vehicle issued the request, that no other UAV responded to the request and executes a charging operation subject to available UAV power. 8. A mobile energy delivery system, comprising:
an unmanned ground-based vehicle (UGV) configured to deliver energy; a controller configured to deploy the UGV responsive to a request; and a ground-based, drivable vehicle comprising an energy storage component disposed to store energy for ground-based driving, a controller configured to determine a current energy requirement for the ground-based driving and to issue the request to the controller accordingly and a frame configured to accommodate the energy storage component and comprising an undercarriage, and the energy storage component being chargeable by the UGV upon the UGV being deployed by the controller in response to the request and subsequently contacting or entering into an immediate vicinity of the undercarriage during the ground-based driving. 9. The mobile energy delivery system according to claim 8, wherein the UGV is one of multiple UGVs of a fleet, each of the multiple UGVs being deployable from and returnable to various home bases. 10. The mobile energy delivery system according to claim 8, wherein the UGV contacts or enters into an immediate vicinity of the undercarriage during the ground-based driving by matching a speed and direction of the ground-based, drivable vehicle at a substantially flat and straight roadway. 11. The mobile energy delivery system according to claim 8, wherein:
the controller determines the current energy requirement by calculating a remaining amount of energy in the energy storage component and an amount of energy required for the ground-based driving, and the controller issues the request in an event the remaining amount of energy in the energy storage component is less than the amount of energy required for the ground-based driving. 12. The mobile energy delivery system according to claim 8, wherein the request comprises location, speed, route and identification information of the ground-based, drivable vehicle. 13. The mobile energy delivery system according to claim 8, wherein:
the undercarriage is electrically communicative with the energy storage component, and the energy storage component is chargeable by the UGV upon the UGV contacting the undercarriage, or the energy storage component is inductively or capacitively chargeable by the UGV upon the UGV entering into an immediate vicinity of the undercarriage. 14. The mobile energy delivery system according to claim 8, wherein the UGV is configured to confirm that the ground-based, drivable vehicle issued the request, that no other UGV responded to the request and executes a charging operation subject to available UGV power. 15. A mobile energy delivery system, comprising:
a fleet of unmanned aerial or ground-based vehicles (UAVs or UGVs) respectively configured to deliver energy; a controller configured to deploy one of the UAVs or the UGVs responsive to a request; and a ground-based, drivable vehicle comprising an energy storage component disposed to store energy for ground-based driving, a controller configured to determine a current energy requirement for the ground-based driving and to issue the request to the controller accordingly and a frame configured to accommodate the energy storage component and comprising one or more of a single entirely smooth uppermost surface and an undercarriage, and the energy storage component being chargeable by a deployed one of the UAVs or the UGVs upon the deployed one of the UAVs being deployed by the controller in response to the request and subsequently contacting or entering into an immediate vicinity of the single entirely smooth uppermost surface during the ground-based driving or upon the deployed one of the UGVs being deployed by the controller in response to the request and subsequently contacting or entering into an immediate vicinity of the undercarriage during the ground-based driving. 16. The mobile energy delivery system according to claim 15, wherein the deployed one of the UAVs or UGVs contacts or enters into an immediate vicinity of the single entirely smooth uppermost surface or the undercarriage during the ground-based driving by matching a speed and direction of the ground-based, drivable vehicle at a substantially flat and straight roadway. 17. The mobile energy delivery system according to claim 15, wherein:
the controller determines the current energy requirement by calculating a remaining amount of energy in the energy storage component and an amount of energy required for the ground-based driving, and the controller issues the request in an event the remaining amount of energy in the energy storage component is less than the amount of energy required for the ground-based driving. 18. The mobile energy delivery system according to claim 15, wherein the request comprises location, speed, route and identification information of the ground-based, drivable vehicle. 19. The mobile energy delivery system according to claim 15, wherein:
the single entirely smooth uppermost surface comprises conductive paint which is electrically communicative with the energy storage component, and the energy storage component is chargeable by the deployed one of the UAVs via the conductive paint upon the deployed one of the UAVs contacting the single entirely smooth uppermost surface, or the energy storage component is inductively or capacitively chargeable by the deployed one of the UAVs upon the deployed one of the UAVs entering into an immediate vicinity of the single entirely smooth uppermost surface. 20. The mobile energy delivery system according to claim 8, wherein:
the undercarriage is electrically communicative with the energy storage component, and the energy storage component is chargeable by the deployed one of the UGVs upon the deployed one of the UGVs contacting the undercarriage, or the energy storage component is inductively or capacitively chargeable by the deployed one of the UGVs upon the deployed one of the UGVs entering into an immediate vicinity of the undercarriage.
the single entirely smooth uppermost surface comprises conductive paint which is electrically communicative with the energy storage component, and the energy storage component is chargeable by the deployed one of the UAVs via the conductive paint upon the deployed one of the UAVs contacting the single entirely smooth uppermost surface, or the energy storage component is inductively or capacitively chargeable by the deployed one of the UAVs upon the deployed one of the UAVs entering into an immediate vicinity of the single entirely smooth uppermost surface. 20. The mobile energy delivery system according to claim 15, wherein:
the undercarriage is electrically communicative with the energy storage component, and the energy storage component is chargeable by the deployed one of the UGVs upon the deployed one of the UGVs contacting the undercarriage, or the energy storage component is inductively or capacitively chargeable by the deployed one of the UGVs upon the deployed one of the UGVs entering into an immediate vicinity of the undercarriage. | TechCenter 2,600 |
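Claims 11 and 17 above recite the dispatch condition procedurally: the vehicle's controller computes the remaining energy in the storage component and the energy the remaining route requires, and issues the request only when the former falls short. The sketch below illustrates that comparison in Python; the function names, the kWh units, and the energy-per-kilometer figure are illustrative assumptions, not taken from the claims.

```python
from dataclasses import dataclass

@dataclass
class ChargeRequest:
    # Claim 12: the request comprises location, speed, route and
    # identification information of the ground-based, drivable vehicle.
    location: tuple
    speed_kmh: float
    route: list
    vehicle_id: str

def maybe_issue_request(remaining_kwh, route_km, kwh_per_km, vehicle):
    """Issue a request only when the remaining stored energy is less than
    the energy required for the ground-based driving (claims 11 and 17)."""
    required_kwh = route_km * kwh_per_km
    if remaining_kwh < required_kwh:
        return ChargeRequest(vehicle["location"], vehicle["speed_kmh"],
                             vehicle["route"], vehicle["id"])
    return None  # enough energy on board: no UGV/UAV deployment needed
```

The fields of the hypothetical `ChargeRequest` record mirror claim 12's requirement that the request carry location, speed, route, and identification information.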
Row 11,037 | ApplicationNumber 15,902,007 | ArtUnit 2,691 | Abstract: A controlling device has a moveable touch sensitive panel positioned above a plurality of switches. When the controlling device senses an activation of at least one of the plurality of switches caused by a movement of the touch sensitive panel resulting from an input at an input location upon the touch sensitive surface, the controlling device responds by transmitting a signal to an appliance wherein the signal is reflective of the input location upon the touch sensitive surface. | Claims: 1. (canceled) 2. A method for remotely controlling one or more devices and/or a user interface, the method comprising:
detecting a user input event at a portion of a user input device of a remote control; determining whether the user input event is a click event or a touch event; mapping a control command to the user input event based on whether the user input event is a click event or a touch event and on the portion of the user input element at which the user input event was detected; and for a particular portion of the user input element at which the user input event was detected, causing a first control command to be executed in response to determining that the user input event is a click event and causing a second control command to be executed in response to determining that the user input event is a touch event; wherein a threshold associated with a depression of the user input element is used to determine whether the user input event is a click event or a touch event. 3. The method as recited in claim 2, wherein the user input element comprises a touch sensing device and wherein detecting the user input event at the portion of the user input device comprises determining an X/Y location of a touch upon the touch sensing device. 4. The method as recited in claim 2, wherein the threshold associated with the depression comprises a threshold associated with a depression of a metallic dome underlying the user input element. 5. The method as recited in claim 2, wherein the second control command comprises a graphical user interface navigational control command. 6. The method as recited in claim 2, comprising causing indicia to be displayed on a display device for indicating to a user a plurality of control commands transmittable from the remote control via user interaction with the user input element. 7. The method as recited in claim 2, further comprising enabling a user to selectively map different control commands of a plurality of control commands to different user input events receivable via the user input element. 8.
The method as recited in claim 2, further comprising enabling a user to selectively map different control commands of a plurality of control commands to different portions of the user input element. 9. The method as recited in claim 2, wherein control commands transmitted from the remote control are executable through interaction with the graphical user interface when displayed on a display screen. 10. The method as recited in claim 2, wherein the first and/or second control command is a command for remotely controlling a controlled device and the method further comprising executing the first and/or second control command at the controlled device. 11. The method as recited in claim 2, wherein a unique control command is mapped to each portion of the user input element for each of at least a click input event and a touch input event. 12. A remote control system for remotely controlling one or more devices and/or a user interface, the remote control system comprising:
a plurality of user input elements, each of the user input elements configured to receive a user input event; a plurality of sensors associated with a one of the plurality of user input elements, the sensors being configured to generate sensor data in response to a user input event being received at the one of the plurality of user input elements; user input event detection logic configured to receive the sensor data and identify whether the user input event received at the one of the plurality of user input elements was a click event or a touch event, or another user input event; and command selection logic configured, for a particular portion of the user input element at which the user input event was received, to cause a first control command to be executed in response to determining that the user input event received at the one of the plurality of user input elements was a click event and to cause a second control command to be executed in response to determining that the user input event received at the one of the plurality of user input elements was a touch event; wherein a threshold associated with a depression of the user input element is used to determine whether the user input event is a click event or a touch event. 13. The remote control system as recited in claim 12, wherein the user input element comprises a touch sensing device and wherein the command selection logic detects the user input event at the particular portion of the user input device by determining an X/Y location of a touch upon the touch sensing device. 14. The remote control system as recited in claim 12, wherein the threshold associated with the depression comprises a threshold associated with a depression of a metallic dome underlying the user input element. 15. The remote control system as recited in claim 12, wherein the second control command comprises a graphical user interface navigational control command. 16. 
The remote control system as recited in claim 12, wherein control commands transmitted from the remote control system are executable through interaction with the graphical user interface when displayed on a display screen. 17. The remote control system as recited in claim 12, wherein the first and/or second control command is a command for remotely controlling a controlled device and the first and/or second control command is executed at the controlled device. 18. The remote control system as recited in claim 12, wherein a unique control command is mapped to each portion of the user input element for each of at least a click input event and a touch input event. 19. The remote control system as recited in claim 12, further comprising remote control user customization logic, the remote control user customization logic being configured to enable a user to selectively map different control commands to different portions of the one of the plurality of user input elements. | TechCenter 2,600 |
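Claim 2 turns on a single disambiguation step: a depression threshold separates click events from touch events, and the (portion, event type) pair then selects the command, with claim 11 requiring a unique command per combination. A minimal Python sketch under assumed names and values follows; the threshold value, the portion labels, and the command table are invented for illustration and are not part of the claims.

```python
# Assumed threshold; e.g. the travel of a metallic dome underlying the
# panel (claim 4). Depression at or beyond it counts as a click.
DEPRESSION_THRESHOLD_MM = 0.5

# Claim 11: a unique control command per portion for each event type.
# Portion names and commands here are hypothetical.
COMMAND_MAP = {
    ("top", "click"): "channel_up",
    ("top", "touch"): "scroll_up",      # claim 5: touch -> GUI navigation
    ("bottom", "click"): "channel_down",
    ("bottom", "touch"): "scroll_down",
}

def classify_event(depression_mm):
    """Claim 2's wherein clause: the depression threshold decides
    whether the user input event is a click event or a touch event."""
    return "click" if depression_mm >= DEPRESSION_THRESHOLD_MM else "touch"

def select_command(portion, depression_mm):
    """Map the (portion, event type) pair to a control command."""
    event = classify_event(depression_mm)
    return COMMAND_MAP[(portion, event)]
```

The scroll commands in the assumed table mirror claim 5's point that the touch-side command can be a graphical user interface navigational command.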
Row 11,038 | ApplicationNumber 16,596,035 | ArtUnit 2,623 | Abstract: Wearable electronic systems having varying interactions based on device orientations are described herein. The systems include a first wearable electronic device and a second wearable electronic device having an input device and a device orientation sensor. The device orientation sensor detects a device orientation of the second wearable electronic device and generates a device orientation signal. The systems have a first mapping orientation mode that performs a first mapping between inputs from the input device and functions of a user interface displayed on the first wearable electronic device when the second wearable electronic device has a first device orientation, and a second mapping orientation mode that performs a second mapping between inputs from the input device and functions of the user interface displayed on the first wearable electronic device when the device orientation of the second wearable electronic device detected by the device orientation sensor is a second device orientation. | Claims: 1. A wearable electronic system comprising:
a first wearable electronic device having a first processor, a first non-transitory processor-readable storage medium communicatively coupled to the first processor, a display communicatively coupled to the first processor, and a first communication interface communicatively coupled to the first processor; and a second wearable electronic device having a device orientation sensor, a second communication interface communicatively coupled to the device orientation sensor, and an input device communicatively coupled to the second communication interface, the device orientation sensor to detect a device orientation of the second wearable electronic device and generate a device orientation signal in response to detecting the device orientation of the second wearable electronic device, the input device to receive an input from a user of the second wearable electronic device and generate an input signal in response to receiving the input from the user, and the second communication interface to transmit signals; wherein: the first communication interface is communicatively coupleable with the second communication interface to provide communications between the first wearable electronic device and the second wearable electronic device; the first non-transitory processor-readable storage medium of the first wearable electronic device stores processor-executable instructions that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device; and the wearable electronic system has: a first mapping orientation mode that performs a first mapping between inputs from the user detected by the input device of the second wearable electronic device and functions of the user interface displayed on the display of the first wearable electronic device when the device orientation of the second wearable electronic device detected by the device orientation sensor is a first 
device orientation; and a second mapping orientation mode that performs a second mapping between inputs from the user detected by the input device of the second wearable electronic device and functions of the user interface displayed on the display of the first wearable electronic device when the device orientation of the second wearable electronic device detected by the device orientation sensor is a second device orientation. 2. The system of claim 1 wherein the second communication interface of the second wearable electronic device is to transmit the device orientation signal and the input signal, and wherein the processor-executable instructions stored in the first non-transitory processor-readable storage medium of the first wearable electronic device that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device, further cause the first processor to effect:
the first mapping orientation mode in response to receiving a first device orientation signal from the second wearable electronic device indicative of the second wearable electronic device having the first device orientation; or
the second mapping orientation mode in response to receiving a second device orientation signal from the second wearable electronic device indicative of the second wearable electronic device having the second device orientation. 3. The system of claim 2 wherein:
in the first mapping orientation mode, the processor-executable instructions stored in the first non-transitory processor-readable storage medium of the first wearable electronic device that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device, cause the first processor to effect a first function of the user interface displayed on the display of the first wearable electronic device in response to receiving the input signal via the first communication interface; and
in the second mapping orientation mode, the processor-executable instructions stored in the first non-transitory processor-readable storage medium of the first wearable electronic device that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device, cause the first processor to effect a second function of the user interface displayed on the display of the first wearable electronic device in response to receiving the input signal via the first communication interface. 4. The system of claim 1 wherein in the first mapping orientation mode the first processor maps a first user input received from the second wearable electronic device to a first function of the user interface displayed on the display of the first wearable electronic device, and in the second mapping orientation mode the first processor maps the first user input received from the second wearable electronic device to a second function of the user interface displayed on the display of the first wearable electronic device. 5. The system of claim 4 wherein in the second mapping orientation mode the first processor maps a second user input received from the second wearable electronic device to the first function of the user interface displayed on the display of the first wearable electronic device. 6. The system of claim 1 wherein the second wearable electronic device further comprises a second processor and a second non-transitory processor-readable storage medium communicatively coupled to the second processor, the second non-transitory processor-readable storage medium storing processor executable instructions that, when executed by the second processor, cause the second wearable electronic device to:
process the device orientation signal and the input signal to produce an oriented input signal; and
transmit the oriented input signal via the second communication interface. 7. The system of claim 6 wherein processor-executable instructions stored in the second non-transitory processor-readable storage medium that, when executed by the second processor, cause the second processor to process the device orientation signal and the input signal to produce an oriented input signal, cause the second processor to effect:
the first mapping orientation mode in response to receiving a first device orientation signal from the device orientation sensor indicative of the second wearable electronic device having the first device orientation, wherein in the first mapping orientation mode the processor-executable instructions stored in the second non-transitory processor-readable storage medium that, when executed by the second processor, cause the second processor to transmit the oriented input signal via the second communication interface, cause the second processor to transmit a first oriented input signal via the second communication interface; or
the second mapping orientation mode in response to receiving a second device orientation signal from the device orientation sensor indicative of the second wearable electronic device having the second device orientation, wherein in the second mapping orientation mode the processor-executable instructions stored in the second non-transitory processor-readable storage medium that, when executed by the second processor, cause the second processor to transmit the oriented input signal via the second communication interface, cause the second processor to transmit a second oriented input signal via the second communication interface. 8. The system of claim 7 wherein the processor-executable instructions stored in the first non-transitory processor-readable storage medium of the first wearable electronic device that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device, cause the first processor to effect:
a first function of the user interface displayed on the display of the first wearable electronic device in response to receiving the first oriented input signal via the first communication interface; or
a second function of the user interface displayed on the display of the first wearable electronic device in response to receiving the second oriented input signal via the first communication interface. 9. The system of claim 1 wherein the first non-transitory processor-readable storage medium of the first wearable electronic device stores processor-executable instructions that, when executed by the first processor, cause the first wearable electronic device to:
set the first mapping orientation mode when the second wearable electronic device has a first device orientation;
set the second mapping orientation mode when the second wearable electronic device has a second device orientation; and
dynamically change between the first mapping orientation mode and the second mapping orientation mode in response to the first wearable electronic device receiving the device orientation signal indicative of a change in the device orientation of the second wearable electronic device. 10. The system of claim 1 wherein the first wearable electronic device comprises a head mounted electronic display unit. 11. The system of claim 1 wherein the second wearable electronic device comprises an electronic ring, and wherein the first device orientation corresponds to the electronic ring being worn on the user's left hand and the second device orientation corresponds to the electronic ring being worn on the user's right hand. 12. The system of claim 1 wherein the second wearable electronic device comprises an electronic ring including an annular structure with a hole therethrough, and wherein:
the first device orientation corresponds to a finger of the user extending through the hole in the annular structure of the electronic ring in a first direction; and
the second device orientation corresponds to the finger of the user extending through the hole in the annular structure of the electronic ring in a second direction, the second direction opposite the first direction. 13. A method of operating a wearable electronic system comprising a first wearable electronic device and a second wearable electronic device, wherein the second wearable electronic device is operable in at least two different device orientations, the method comprising:
detecting, by a device orientation sensor of the second wearable electronic device, a device orientation of the second wearable electronic device; receiving, by an input device of the second wearable electronic device, an input from a user of the second wearable electronic device; wirelessly transmitting a signal by the second wearable electronic device, the signal based on at least one of the device orientation of the second wearable electronic device detected by the device orientation sensor and the input from the user of the second wearable electronic device received by the input device; wirelessly receiving the signal by the first wearable electronic device; displaying, by a display of the first wearable electronic device, a user interface to the user; in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to a first device orientation:
effecting, by the wearable electronic system, a first mapping orientation mode; and
effecting, by the first wearable electronic device, a first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device;
and in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to a second device orientation:
effecting, by the wearable electronic system, a second mapping orientation mode; and
effecting, by the first wearable electronic device, a second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device. 14. The method of claim 13 wherein:
wirelessly transmitting a signal by the second wearable electronic device includes wirelessly transmitting, by the second wearable electronic device, both a device orientation signal from the device orientation sensor and an input signal from the input device;
wirelessly receiving the signal by the first wearable electronic device includes wirelessly receiving both the device orientation signal and the input signal by the first wearable electronic device;
in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the first device orientation:
effecting, by the wearable electronic system, a first mapping orientation mode in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to a first device orientation includes effecting, by a processor of the first wearable electronic device, the first mapping orientation mode in response to wirelessly receiving the device orientation signal by the first wearable electronic device; and
effecting, by the first wearable electronic device, a first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device includes effecting, by the processor of the first wearable electronic device, the first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the input signal by the first wearable electronic device;
and
in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the second device orientation:
effecting, by the wearable electronic system, a second mapping orientation mode in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to a second device orientation includes effecting, by the processor of the first wearable electronic device, the second mapping orientation mode in response to wirelessly receiving the device orientation signal by the first wearable electronic device; and
effecting, by the first wearable electronic device, a second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device includes effecting, by the processor of the first wearable electronic device, the second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the input signal by the first wearable electronic device. 15. The method of claim 13, further comprising:
generating, by the device orientation sensor of the second wearable electronic device, a device orientation signal in response to detecting, by the device orientation sensor, the device orientation of the second wearable electronic device; generating, by the input device of the second wearable electronic device, an input signal in response to receiving, by the input device, the input from the user of the second wearable electronic device; and processing, by a second processor of the second wearable electronic device that is communicatively coupled to both the device orientation sensor and the input device, the device orientation signal and the input signal to define an oriented input signal, wherein: in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the first device orientation, effecting, by the wearable electronic system, the first mapping orientation mode includes defining, by the second processor of the second wearable electronic device, a first oriented input signal; and in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the second device orientation, effecting, by the wearable electronic system, the second mapping orientation mode includes defining, by the second processor of the second wearable electronic device, a second oriented input signal. 16. The method of claim 15 wherein:
wirelessly transmitting a signal by the second wearable electronic device includes wirelessly transmitting, by the second wearable electronic device, the oriented input signal;
wirelessly receiving the signal by the first wearable electronic device includes wirelessly receiving the oriented input signal by the first wearable electronic device;
in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the first device orientation, effecting, by the first wearable electronic device, a first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device includes effecting, by a first processor of the first wearable electronic device, the first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the first oriented input signal by the first wearable electronic device; and
in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the second device orientation, effecting, by the first wearable electronic device, a second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device includes effecting, by the first processor of the first wearable electronic device, the second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the second oriented input signal by the first wearable electronic device. 17. The method of claim 13 wherein the effecting, by the wearable electronic system, the first mapping orientation mode includes performing a first mapping, by the first processor, of a first user input received from the second wearable electronic device to a first function of the user interface displayed on the display of the first wearable electronic device, and effecting, by the wearable electronic system, the second mapping orientation mode includes performing a second mapping, by the first processor, of the first user input received from the second wearable electronic device to a second function of the user interface displayed on the display of the first wearable electronic device. 18. The method of claim 13, further comprising:
setting the first mapping orientation mode when the second wearable electronic device has a first device orientation; setting the second mapping orientation mode when the second wearable electronic device has a second device orientation; and dynamically changing between the first mapping orientation mode and the second mapping orientation mode in response to the first wearable electronic device receiving the signal indicative of a change in the device orientation of the second wearable electronic device. 19. The method of claim 13 wherein the second wearable electronic device comprises an electronic ring and
detecting, by the device orientation sensor of the first wearable electronic device, a device orientation of the second wearable electronic device includes detecting, by the device orientation sensor of the first wearable electronic device, if the electronic ring is being worn on the user's left hand or if the electronic ring is being worn on the user's right hand. 20. The method of claim 13 wherein the second wearable electronic device comprises an electronic ring including an annular structure with a hole therethrough, and wherein:
detecting, by the device orientation sensor of the first wearable electronic device, a device orientation of the second wearable electronic device includes detecting, by the device orientation sensor of the first wearable electronic device, if a finger of the user is extending through the hole in the annular structure of the electronic ring in a first direction or if the finger of the user is extending through the hole in the annular structure of the electronic ring in a second direction, the second direction opposite the first direction. | Wearable electronic systems having varying interactions based on device orientations are described herein. The systems include a first wearable electronic device and a second wearable electronic device having an input device and a device orientation sensor. The device orientation sensor detects a device orientation of the second wearable electronic device and generates a device orientation signal. The systems have a first mapping orientation mode that performs a first mapping between inputs from the input device and functions of a user interface displayed on the first wearable electronic device when the second wearable electronic device has a first device orientation and a second mapping orientation mode that performs a second mapping between inputs from the input device and functions of the user interface displayed on the first wearable electronic device when the device orientation of the second wearable electronic device detected by the device orientation sensor is a second device orientation. 1. A wearable electronic system comprising:
a first wearable electronic device having a first processor, a first non-transitory processor-readable storage medium communicatively coupled to the first processor, a display communicatively coupled to the first processor, and a first communication interface communicatively coupled to the first processor; and a second wearable electronic device having a device orientation sensor, a second communication interface communicatively coupled to the device orientation sensor, and an input device communicatively coupled to the second communication interface, the device orientation sensor to detect a device orientation of the second wearable electronic device and generate a device orientation signal in response to detecting the device orientation of the second wearable electronic device, the input device to receive an input from a user of the second wearable electronic device and generate an input signal in response to receiving the input from the user, and the second communication interface to transmit signals; wherein: the first communication interface is communicatively coupleable with the second communication interface to provide communications between the first wearable electronic device and the second wearable electronic device; the first non-transitory processor-readable storage medium of the first wearable electronic device stores processor-executable instructions that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device; and the wearable electronic system has: a first mapping orientation mode that performs a first mapping between inputs from the user detected by the input device of the second wearable electronic device and functions of the user interface displayed on the display of the first wearable electronic device when the device orientation of the second wearable electronic device detected by the device orientation sensor is a first 
device orientation; and a second mapping orientation mode that performs a second mapping between inputs from the user detected by the input device of the second wearable electronic device and functions of the user interface displayed on the display of the first wearable electronic device when the device orientation of the second wearable electronic device detected by the device orientation sensor is a second device orientation. 2. The system of claim 1 wherein the second communication interface of the second wearable electronic device is to transmit the device orientation signal and the input signal, and wherein the processor-executable instructions stored in the first non-transitory processor-readable storage medium of the first wearable electronic device that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device, further cause the first processor to effect:
the first mapping orientation mode in response to receiving a first device orientation signal from the second wearable electronic device indicative of the second wearable electronic device having the first device orientation; or
the second mapping orientation mode in response to receiving a second device orientation signal from the second wearable electronic device indicative of the second wearable electronic device having the second device orientation. 3. The system of claim 2 wherein:
in the first mapping orientation mode, the processor-executable instructions stored in the first non-transitory processor-readable storage medium of the first wearable electronic device that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device, cause the first processor to effect a first function of the user interface displayed on the display of the first wearable electronic device in response to receiving the input signal via the first communication interface; and
in the second mapping orientation mode, the processor-executable instructions stored in the first non-transitory processor-readable storage medium of the first wearable electronic device that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device, cause the first processor to effect a second function of the user interface displayed on the display of the first wearable electronic device in response to receiving the input signal via the first communication interface. 4. The system of claim 1 wherein in the first mapping orientation mode the first processor maps a first user input received from the second wearable electronic device to a first function of the user interface displayed on the display of the first wearable electronic device, and in the second mapping orientation mode the first processor maps the first user input received from the second wearable electronic device to a second function of the user interface displayed on the display of the first wearable electronic device. 5. The system of claim 4 wherein in the second mapping orientation mode the first processor maps a second user input received from the second wearable electronic device to the first function of the user interface displayed on the display of the first wearable electronic device. 6. The system of claim 1 wherein the second wearable electronic device further comprises a second processor and a second non-transitory processor-readable storage medium communicatively coupled to the second processor, the second non-transitory processor-readable storage medium storing processor executable instructions that, when executed by the second processor, cause the second wearable electronic device to:
process the device orientation signal and the input signal to produce an oriented input signal; and
transmit the oriented input signal via the second communication interface. 7. The system of claim 6 wherein processor-executable instructions stored in the second non-transitory processor-readable storage medium that, when executed by the second processor, cause the second processor to process the device orientation signal and the input signal to produce an oriented input signal, cause the second processor to effect:
the first mapping orientation mode in response to receiving a first device orientation signal from the device orientation sensor indicative of the second wearable electronic device having the first device orientation, wherein in the first mapping orientation mode the processor-executable instructions stored in the second non-transitory processor-readable storage medium that, when executed by the second processor, cause the second processor to transmit the oriented input signal via the second communication interface, cause the second processor to transmit a first oriented input signal via the second communication interface; or
the second mapping orientation mode in response to receiving a second device orientation signal from the device orientation sensor indicative of the second wearable electronic device having the second device orientation, wherein in the second mapping orientation mode the processor-executable instructions stored in the second non-transitory processor-readable storage medium that, when executed by the second processor, cause the second processor to transmit the oriented input signal via the second communication interface, cause the second processor to transmit a second oriented input signal via the second communication interface. 8. The system of claim 7 wherein the processor-executable instructions stored in the first non-transitory processor-readable storage medium of the first wearable electronic device that, when executed by the first processor, cause the first wearable electronic device to generate and display a user interface on the display of the first wearable electronic device, cause the first processor to effect:
a first function of the user interface displayed on the display of the first wearable electronic device in response to receiving the first oriented input signal via the first communication interface; or
a second function of the user interface displayed on the display of the first wearable electronic device in response to receiving the second oriented input signal via the first communication interface. 9. The system of claim 1 wherein the first non-transitory processor-readable storage medium of the first wearable electronic device stores processor-executable instructions that, when executed by the first processor, cause the first wearable electronic device to:
set the first mapping orientation mode when the second wearable electronic device has a first device orientation;
set the second mapping orientation mode when the second wearable electronic device has a second device orientation; and
dynamically change between the first mapping orientation mode and the second mapping orientation mode in response to the first wearable electronic device receiving the device orientation signal indicative of a change in the device orientation of the second wearable electronic device. 10. The system of claim 1 wherein the first wearable electronic device comprises a head mounted electronic display unit. 11. The system of claim 1 wherein the second wearable electronic device comprises an electronic ring, and wherein the first device orientation corresponds to the electronic ring being worn on the user's left hand and the second device orientation corresponds to the electronic ring being worn on the user's right hand. 12. The system of claim 1 wherein the second wearable electronic device comprises an electronic ring including an annular structure with a hole therethrough, and wherein:
the first device orientation corresponds to a finger of the user extending through the hole in the annular structure of the electronic ring in a first direction; and
the second device orientation corresponds to the finger of the user extending through the hole in the annular structure of the electronic ring in a second direction, the second direction opposite the first direction. 13. A method of operating a wearable electronic system comprising a first wearable electronic device and a second wearable electronic device, wherein the second wearable electronic device is operable in at least two different device orientations, the method comprising:
detecting, by a device orientation sensor of the second wearable electronic device, a device orientation of the second wearable electronic device; receiving, by an input device of the second wearable electronic device, an input from a user of the second wearable electronic device; wirelessly transmitting a signal by the second wearable electronic device, the signal based on at least one of the device orientation of the second wearable electronic device detected by the device orientation sensor and the input from the user of the second wearable electronic device received by the input device; wirelessly receiving the signal by the first wearable electronic device; displaying, by a display of the first wearable electronic device, a user interface to the user; in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to a first device orientation:
effecting, by the wearable electronic system, a first mapping orientation mode; and
effecting, by the first wearable electronic device, a first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device;
and in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to a second device orientation:
effecting, by the wearable electronic system, a second mapping orientation mode; and
effecting, by the first wearable electronic device, a second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device. 14. The method of claim 13 wherein:
wirelessly transmitting a signal by the second wearable electronic device includes wirelessly transmitting, by the second wearable electronic device, both a device orientation signal from the device orientation sensor and an input signal from the input device;
wirelessly receiving the signal by the first wearable electronic device includes wirelessly receiving both the device orientation signal and the input signal by the first wearable electronic device;
in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the first device orientation:
effecting, by the wearable electronic system, a first mapping orientation mode in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to a first device orientation includes effecting, by a processor of the first wearable electronic device, the first mapping orientation mode in response to wirelessly receiving the device orientation signal by the first wearable electronic device; and
effecting, by the first wearable electronic device, a first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device includes effecting, by the processor of the first wearable electronic device, the first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the input signal by the first wearable electronic device;
and
in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the second device orientation:
effecting, by the wearable electronic system, a second mapping orientation mode in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to a second device orientation includes effecting, by the processor of the first wearable electronic device, the second mapping orientation mode in response to wirelessly receiving the device orientation signal by the first wearable electronic device; and
effecting, by the first wearable electronic device, a second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device includes effecting, by the processor of the first wearable electronic device, the second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the input signal by the first wearable electronic device. 15. The method of claim 13, further comprising:
generating, by the device orientation sensor of the second wearable electronic device, a device orientation signal in response to detecting, by the device orientation sensor, the device orientation of the second wearable electronic device; generating, by the input device of the second wearable electronic device, an input signal in response to receiving, by the input device, the input from the user of the second wearable electronic device; and processing, by a second processor of the second wearable electronic device that is communicatively coupled to both the device orientation sensor and the input device, the device orientation signal and the input signal to define an oriented input signal, wherein: in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the first device orientation, effecting, by the wearable electronic system, the first mapping orientation mode includes defining, by the second processor of the second wearable electronic device, a first oriented input signal; and in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the second device orientation, effecting, by the wearable electronic system, the second mapping orientation mode includes defining, by the second processor of the second wearable electronic device, a second oriented input signal. 16. The method of claim 15 wherein:
wirelessly transmitting a signal by the second wearable electronic device includes wirelessly transmitting, by the second wearable electronic device, the oriented input signal;
wirelessly receiving the signal by the first wearable electronic device includes wirelessly receiving the oriented input signal by the first wearable electronic device;
in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the first device orientation, effecting, by the first wearable electronic device, a first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device includes effecting, by a first processor of the first wearable electronic device, the first function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the first oriented input signal by the first wearable electronic device; and
in response to the device orientation sensor of the second wearable electronic device detecting that the device orientation of the second wearable electronic device corresponds to the second device orientation, effecting, by the first wearable electronic device, a second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the signal by the first wearable electronic device includes effecting, by the first processor of the first wearable electronic device, the second function of the user interface displayed to the user by the display of the first wearable electronic device in response to wirelessly receiving the second oriented input signal by the first wearable electronic device. 17. The method of claim 13 wherein the effecting, by the wearable electronic system, the first mapping orientation mode includes performing a first mapping, by the first processor, of a first user input received from the second wearable electronic device to a first function of the user interface displayed on the display of the first wearable electronic device, and effecting, by the wearable electronic system, the second mapping orientation mode includes performing a second mapping, by the first processor, of the first user input received from the second wearable electronic device to a second function of the user interface displayed on the display of the first wearable electronic device. 18. The method of claim 13, further comprising:
setting the first mapping orientation mode when the second wearable electronic device has a first device orientation; setting the second mapping orientation mode when the second wearable electronic device has a second device orientation; and dynamically changing between the first mapping orientation mode and the second mapping orientation mode in response to the first wearable electronic device receiving the signal indicative of a change in the device orientation of the second wearable electronic device. 19. The method of claim 13 wherein the second wearable electronic device comprises an electronic ring and
detecting, by the device orientation sensor of the first wearable electronic device, a device orientation of the second wearable electronic device includes detecting, by the device orientation sensor of the first wearable electronic device, if the electronic ring is being worn on the user's left hand or if the electronic ring is being worn on the user's right hand. 20. The method of claim 13 wherein the second wearable electronic device comprises an electronic ring including an annular structure with a hole therethrough, and wherein:
detecting, by the device orientation sensor of the first wearable electronic device, a device orientation of the second wearable electronic device includes detecting, by the device orientation sensor of the first wearable electronic device, if a finger of the user is extending through the hole in the annular structure of the electronic ring in a first direction or if the finger of the user is extending through the hole in the annular structure of the electronic ring in a second direction, the second direction opposite the first direction. | 2,600 |
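The "mapping orientation mode" mechanism recited in the claims above — a worn input device (e.g. an electronic ring) whose inputs map to different user-interface functions depending on the detected device orientation, such as which hand it is worn on — can be sketched as a small dispatch table. This is a minimal illustration under stated assumptions, not the patented implementation; all names (`Orientation`, `handle_input`, the swipe/input labels, and the UI functions) are hypothetical.

```python
from enum import Enum


class Orientation(Enum):
    FIRST = "left_hand"    # e.g. ring worn on the user's left hand
    SECOND = "right_hand"  # e.g. ring worn on the user's right hand


# Hypothetical UI functions; names are illustrative, not from the patent.
def next_item(ui_state):
    return ui_state + ["next_item"]


def previous_item(ui_state):
    return ui_state + ["previous_item"]


# The two "mapping orientation modes" of claim 1: the same raw input signal
# maps to different UI functions depending on the detected orientation
# (cf. claims 4-5, where the first user input maps to a second function in
# the second mapping orientation mode).
MAPPING_MODES = {
    Orientation.FIRST: {
        "swipe_forward": next_item,
        "swipe_backward": previous_item,
    },
    Orientation.SECOND: {
        "swipe_forward": previous_item,
        "swipe_backward": next_item,
    },
}


def handle_input(orientation, input_signal, ui_state):
    """Dispatch an input signal through the mapping mode selected by the
    device orientation reported by the orientation sensor."""
    return MAPPING_MODES[orientation][input_signal](ui_state)
```

With this sketch, `handle_input(Orientation.FIRST, "swipe_forward", [])` effects the first function (`next_item`), while the same swipe in the second orientation effects the second function (`previous_item`) — the orientation-dependent remapping the claims describe.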
11,039 | 11,039 | 14,964,322 | 2,616 | One embodiment provides a method, including: receiving, at a head mounted display, data indicating a contextual environment; identifying, using a processor, the contextual environment using the data; and altering, using a processor, data displayed by the head mounted display based on the contextual environment identified, the altered data comprising one or more virtual objects. Other aspects are described and claimed. | 1. A method, comprising:
receiving, at a head mounted display, data indicating a contextual environment; identifying, using a processor, the contextual environment using the data; and altering, using a processor, data displayed by the head mounted display based on the contextual environment identified, the altered data comprising one or more virtual objects. 2. The method of claim 1, wherein said altering comprises displaying a predetermined set of virtual objects matched to the contextual environment identified. 3. The method of claim 1, wherein said altering comprises adding a virtual object to the display based on the contextual environment identified. 4. The method of claim 1, wherein said altering comprises removing a virtual object from the display based on the contextual environment identified. 5. The method of claim 1, wherein the one or more virtual objects comprise application generated data. 6. The method of claim 1, wherein said receiving comprises receiving data from one or more sensors. 7. The method of claim 6, wherein at least one of the one or more sensors is physically coupled to the head mounted display. 8. The method of claim 1, further comprising:
detecting user input tagging a virtual object to the contextual environment; and storing an association between the virtual object and the contextual environment. 9. The method of claim 8, wherein said altering comprises retrieving and displaying a previously tagged virtual object based on the user input. 10. The method of claim 1, wherein:
the contextual environment is biking; and the display comprises two or more of a map virtual object, a speed virtual object, a camera virtual object, and a fitness virtual object. 11. A device, comprising:
a head mount; a display coupled to the head mount; a processor operatively coupled to the display; a memory storing instructions executable by the processor to: receive data indicating a contextual environment; identify the contextual environment using the data; and alter data displayed by the display based on the contextual environment identified, the altered data comprising one or more virtual objects. 12. The device of claim 11, wherein to alter comprises displaying a predetermined set of virtual objects matched to the contextual environment identified. 13. The device of claim 11, wherein to alter comprises adding a virtual object to the display based on the contextual environment identified. 14. The device of claim 11, wherein to alter comprises removing a virtual object from the display based on the contextual environment identified. 15. The device of claim 11, wherein the one or more virtual objects comprise application generated data. 16. The device of claim 11, wherein to receive comprises receiving data from one or more sensors. 17. The device of claim 16, wherein the device comprises at least one of the one or more sensors. 18. The device of claim 11, wherein the instructions are further executable by the processor to:
detect user input tagging a virtual object to the contextual environment; and store an association between the virtual object and the contextual environment. 19. The device of claim 18, wherein to alter comprises retrieving and displaying a previously tagged virtual object based on the user input. 20. A system, comprising:
a plurality of sensors; a head mount; a display coupled to the head mount; a processor operatively coupled to the display; a memory storing instructions executable by the processor to: receive, from one or more of the plurality of sensors, data indicating a contextual environment; identify the contextual environment using the data; and alter data displayed by the display based on the contextual environment identified, the altered data comprising one or more virtual objects. | One embodiment provides a method, including: receiving, at a head mounted display, data indicating a contextual environment; identifying, using a processor, the contextual environment using the data; and altering, using a processor, data displayed by the head mounted display based on the contextual environment identified, the altered data comprising one or more virtual objects. Other aspects are described and claimed.1. A method, comprising:
receiving, at a head mounted display, data indicating a contextual environment; identifying, using a processor, the contextual environment using the data; and altering, using a processor, data displayed by the head mounted display based on the contextual environment identified, the altered data comprising one or more virtual objects. 2. The method of claim 1, wherein said altering comprises displaying a predetermined set of virtual objects matched to the contextual environment identified. 3. The method of claim 1, wherein said altering comprises adding a virtual object to the display based on the contextual environment identified. 4. The method of claim 1, wherein said altering comprises removing a virtual object from the display based on the contextual environment identified. 5. The method of claim 1, wherein the one or more virtual objects comprise application generated data. 6. The method of claim 1, wherein said receiving comprises receiving data from one or more sensors. 7. The method of claim 6, wherein at least one of the one or more sensors is physically coupled to the head mounted display. 8. The method of claim 1, further comprising:
detecting user input tagging a virtual object to the contextual environment; and storing an association between the virtual object and the contextual environment. 9. The method of claim 8, wherein said altering comprises retrieving and displaying a previously tagged virtual object based on the user input. 10. The method of claim 1, wherein:
the contextual environment is biking; and the display comprises two or more of a map virtual object, a speed virtual object, a camera virtual object, and a fitness virtual object. 11. A device, comprising:
a head mount; a display coupled to the head mount; a processor operatively coupled to the display; a memory storing instructions executable by the processor to: receive data indicating a contextual environment; identify the contextual environment using the data; and alter data displayed by the display based on the contextual environment identified, the altered data comprising one or more virtual objects. 12. The device of claim 11, wherein to alter comprises displaying a predetermined set of virtual objects matched to the contextual environment identified. 13. The device of claim 11, wherein to alter comprises adding a virtual object to the display based on the contextual environment identified. 14. The device of claim 11, wherein to alter comprises removing a virtual object from the display based on the contextual environment identified. 15. The device of claim 11, wherein the one or more virtual objects comprise application generated data. 16. The device of claim 11, wherein to receive comprises receiving data from one or more sensors. 17. The device of claim 16, wherein the device comprises at least one of the one or more sensors. 18. The device of claim 11, wherein the instructions are further executable by the processor to:
detect user input tagging a virtual object to the contextual environment; and store an association between the virtual object and the contextual environment. 19. The device of claim 18, wherein to alter comprises retrieving and displaying a previously tagged virtual object based on the user input. 20. A system, comprising:
a plurality of sensors; a head mount; a display coupled to the head mount; a processor operatively coupled to the display; a memory storing instructions executable by the processor to: receive, from one or more of the plurality of sensors, data indicating a contextual environment; identify the contextual environment using the data; and alter data displayed by the display based on the contextual environment identified, the altered data comprising one or more virtual objects. | 2,600 |
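The head-mounted-display claims above describe identifying a contextual environment from sensor data and altering the displayed virtual objects to match it, including user tagging of objects to a context (claims 8-9) and a predetermined "biking" set (claim 10). The sketch below is illustrative only, not the patented implementation; all names (`ContextualDisplay`, `PRESETS`, the speed-threshold classifier) are assumptions introduced for the example.

```python
PRESETS = {
    # Claim 10: for "biking", show map/speed/camera/fitness virtual objects.
    "biking": {"map", "speed", "camera", "fitness"},
    "walking": {"map", "fitness"},
}

class ContextualDisplay:
    def __init__(self):
        self.objects = set()   # virtual objects currently displayed
        self.tagged = {}       # claim 8: context -> user-tagged virtual objects

    def identify_context(self, sensor_data):
        # Stand-in for the real classifier: infer context from a speed reading.
        speed = sensor_data.get("speed_kmh", 0)
        return "biking" if speed > 8 else "walking"

    def alter_display(self, context):
        # Claim 2: display the predetermined set matched to the identified
        # context, plus any objects the user previously tagged to it (claim 9).
        self.objects = set(PRESETS.get(context, set()))
        self.objects |= self.tagged.get(context, set())
        return self.objects

    def tag(self, obj, context):
        # Claim 8: store an association between a virtual object and a context.
        self.tagged.setdefault(context, set()).add(obj)

hmd = ContextualDisplay()
ctx = hmd.identify_context({"speed_kmh": 20})
shown = hmd.alter_display(ctx)
```

Adding or removing a single object (claims 3-4) would be a small variation on `alter_display`, mutating `self.objects` instead of rebuilding it.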
11,040 | 11,040 | 16,374,772 | 2,616 | A subtractive color change system for displaying a selected color to a viewer and a method of changing color. The system includes a layered assembly having transparent panels of primary and key colors, with a fixed-color background behind the layered assembly. The subtractive color change system may have a control unit to individually control the intensities and values of the primary color panels to render a color and to control the intensities and values of the panels in the layered assembly to reduce differences between the color rendered and the selected color and to display the selected color to a viewer. | 1. (canceled) 2. A wearable article, comprising:
a background material having an interior surface configured to be worn proximate a body of a wearer of the wearable article and an exterior surface configured to be worn distal the body of the wearer; a control unit; and a color change portion, secured to the exterior surface and operatively coupled to the control unit, the color change portion comprising a plurality of panels, layered with respect to one another, each having a separately variable transparency; wherein the control unit is configured to selectively induce a current through each of the plurality of panels to change the transparency of each of the plurality of panels, wherein the transparency of each panel produces an adjustable rendered color of the color change portion. 3. The wearable article of claim 2, wherein the plurality of panels are comprised of at least one of: electrochromic compounds or electrochromic fibers. 4. The wearable article of claim 3, wherein the plurality of panels includes a first panel proximate the exterior surface, a second panel, and a third panel, the second panel positioned between the first and third panels. 5. The wearable article of claim 4, wherein the first panel is variably transparent to blue, the second panel is variably transparent to green, and the third panel is variably transparent to red. 6. The wearable article of claim 5, wherein the exterior surface is white. 7. The wearable article of claim 5, wherein the exterior surface is fluorescent. 8. The wearable article of claim 2, wherein the adjustable rendered color is one of a spectrum of target colors. 9. A system, comprising:
a background material having a major surface; a control unit; and a color change portion, secured to the major surface and operatively coupled to the control unit, the color change portion comprising a plurality of panels, layered with respect to one another, each having a separately variable transparency; wherein the control unit is configured to selectively induce a current through each of the plurality of panels to change the transparency of each of the plurality of panels, wherein the transparency of each panel produces an adjustable rendered color of the color change portion. 10. The system of claim 9, wherein the plurality of panels are comprised of at least one of: electrochromic compounds or electrochromic fibers. 11. The system of claim 10, wherein the plurality of panels includes a first panel proximate the major surface, a second panel, and a third panel, the second panel positioned between the first and third panels. 12. The system of claim 11, wherein the first panel is variably transparent to blue, the second panel is variably transparent to green, and the third panel is variably transparent to red. 13. The system of claim 12, wherein the major surface is white. 14. The system of claim 12, wherein the major surface is fluorescent. 15. The system of claim 9, wherein the adjustable rendered color is one of a spectrum of target colors. 16. A method of making a wearable article, comprising:
obtaining a background material having an interior surface configured to be worn proximate a body of a wearer of the wearable article and an exterior surface configured to be worn distal the body of the wearer; obtaining a control unit; and securing a color change portion to the exterior surface and operatively coupling the color change portion to the control unit, the color change portion comprising a plurality of panels, layered with respect to one another, each having a separately variable transparency; wherein the control unit is configured to selectively induce a current through each of the plurality of panels to change the transparency of each of the plurality of panels, wherein the transparency of each panel produces an adjustable rendered color of the color change portion. 17. The method of claim 16, wherein the plurality of panels are comprised of at least one of: electrochromic compounds or electrochromic fibers. 18. The method of claim 17, wherein the plurality of panels includes a first panel proximate the exterior surface, a second panel, and a third panel, the second panel positioned between the first and third panels. 19. The method of claim 18, wherein the first panel is variably transparent to blue, the second panel is variably transparent to green, and the third panel is variably transparent to red. 20. The method of claim 19, wherein the exterior surface is white. 21. The method of claim 19, wherein the exterior surface is fluorescent. | A subtractive color change system for displaying a selected color to a viewer and a method of changing color. The system includes a layered assembly having transparent panels of primary and key colors, with a fixed-color background behind the layered assembly. 
The subtractive color change system may have a control unit to individually control the intensities and values of the primary color panels to render a color and to control the intensities and values of the panels in the layered assembly to reduce differences between the color rendered and the selected color and to display the selected color to a viewer.1. (canceled) 2. A wearable article, comprising:
a background material having an interior surface configured to be worn proximate a body of a wearer of the wearable article and an exterior surface configured to be worn distal the body of the wearer; a control unit; and a color change portion, secured to the exterior surface and operatively coupled to the control unit, the color change portion comprising a plurality of panels, layered with respect to one another, each having a separately variable transparency; wherein the control unit is configured to selectively induce a current through each of the plurality of panels to change the transparency of each of the plurality of panels, wherein the transparency of each panel produces an adjustable rendered color of the color change portion. 3. The wearable article of claim 2, wherein the plurality of panels are comprised of at least one of: electrochromic compounds or electrochromic fibers. 4. The wearable article of claim 3, wherein the plurality of panels includes a first panel proximate the exterior surface, a second panel, and a third panel, the second panel positioned between the first and third panels. 5. The wearable article of claim 4, wherein the first panel is variably transparent to blue, the second panel is variably transparent to green, and the third panel is variably transparent to red. 6. The wearable article of claim 5, wherein the exterior surface is white. 7. The wearable article of claim 5, wherein the exterior surface is fluorescent. 8. The wearable article of claim 2, wherein the adjustable rendered color is one of a spectrum of target colors. 9. A system, comprising:
a background material having a major surface; a control unit; and a color change portion, secured to the major surface and operatively coupled to the control unit, the color change portion comprising a plurality of panels, layered with respect to one another, each having a separately variable transparency; wherein the control unit is configured to selectively induce a current through each of the plurality of panels to change the transparency of each of the plurality of panels, wherein the transparency of each panel produces an adjustable rendered color of the color change portion. 10. The system of claim 9, wherein the plurality of panels are comprised of at least one of: electrochromic compounds or electrochromic fibers. 11. The system of claim 10, wherein the plurality of panels includes a first panel proximate the major surface, a second panel, and a third panel, the second panel positioned between the first and third panels. 12. The system of claim 11, wherein the first panel is variably transparent to blue, the second panel is variably transparent to green, and the third panel is variably transparent to red. 13. The system of claim 12, wherein the major surface is white. 14. The system of claim 12, wherein the major surface is fluorescent. 15. The system of claim 9, wherein the adjustable rendered color is one of a spectrum of target colors. 16. A method of making a wearable article, comprising:
obtaining a background material having an interior surface configured to be worn proximate a body of a wearer of the wearable article and an exterior surface configured to be worn distal the body of the wearer; obtaining a control unit; and securing a color change portion to the exterior surface and operatively coupling the color change portion to the control unit, the color change portion comprising a plurality of panels, layered with respect to one another, each having a separately variable transparency; wherein the control unit is configured to selectively induce a current through each of the plurality of panels to change the transparency of each of the plurality of panels, wherein the transparency of each panel produces an adjustable rendered color of the color change portion. 17. The method of claim 16, wherein the plurality of panels are comprised of at least one of: electrochromic compounds or electrochromic fibers. 18. The method of claim 17, wherein the plurality of panels includes a first panel proximate the exterior surface, a second panel, and a third panel, the second panel positioned between the first and third panels. 19. The method of claim 18, wherein the first panel is variably transparent to blue, the second panel is variably transparent to green, and the third panel is variably transparent to red. 20. The method of claim 19, wherein the exterior surface is white. 21. The method of claim 19, wherein the exterior surface is fluorescent. | 2,600 |
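The subtractive color change claims above stack three variably transparent panels (blue, green, red) over a white background, with a control unit adjusting panel transparencies to reduce the difference between the rendered color and a selected target color. Below is a minimal numeric sketch of that idea, assuming each panel attenuates only its own primary channel and using a naive proportional feedback loop in place of the patent's control unit; the function names and the gain value are illustrative assumptions.

```python
WHITE = (255.0, 255.0, 255.0)

def rendered_color(t_red, t_green, t_blue, background=WHITE):
    """Rendered color seen by a viewer: each panel passes a fraction
    t in [0, 1] of its own primary (1 = fully clear) and, in this
    idealized model, does not affect the other two channels."""
    r, g, b = background
    return (r * t_red, g * t_green, b * t_blue)

def adjust_toward(target, t=(1.0, 1.0, 1.0), steps=50, gain=0.002):
    """Feedback loop standing in for the control unit: nudge each
    panel's transparency to shrink the rendered-vs-target difference."""
    t = list(t)
    for _ in range(steps):
        current = rendered_color(*t)
        for i in range(3):
            t[i] += gain * (target[i] - current[i])
            t[i] = min(1.0, max(0.0, t[i]))  # transparency stays in [0, 1]
    return tuple(t)

t = adjust_toward((200.0, 100.0, 50.0))
color = rendered_color(*t)  # converges near the selected target color
```

Each iteration contracts the error by a factor of (1 - 255 * gain) per channel, so 50 steps are more than enough for this gain; a real control unit would map the resulting transparencies to drive currents for the electrochromic panels.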
11,041 | 11,041 | 16,037,745 | 2,616 | An image processing system includes a computing platform having a hardware processor and a system memory storing an image augmentation software code, a three-dimensional (3D) shapes library, and/or a 3D poses library. The image processing system also includes a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code. The hardware processor executes the image augmentation software code to provide an image to the 2D pose estimation module and to receive a 2D pose data generated by the 2D pose estimation module based on the image. The image augmentation software code identifies a 3D shape and/or a 3D pose corresponding to the image using an optimization algorithm applied to the 2D pose data and one or both of the 3D poses library and the 3D shapes library, and may output the 3D shape and/or 3D pose to render an augmented image on a display. | 1. An image processing system comprising:
a computing platform including a hardware processor and a system memory; the system memory storing an image augmentation software code and at least one of a three-dimensional (3D) poses library and a 3D shapes library; a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code; the hardware processor configured to execute the image augmentation software code to:
provide an image as an input to the 2D pose estimation module;
receive from the 2D pose estimation module, a 2D pose data generated based on the image; and
identify at least one of a 3D pose and a 3D shape corresponding to the image, based on the 2D pose data, wherein the at least one of the 3D pose and the 3D shape is identified using an optimization algorithm applied to the 2D pose data and the at least one of the 3D poses library and the 3D shapes library. 2. The image processing system of claim 1, wherein the hardware processor is further configured to execute the image augmentation software code to output the at least one of the 3D pose and the 3D shape to render an augmented image corresponding to the image on a display. 3. The image processing system of claim 2, wherein the computing platform is part of a communication device remote from the 2D pose estimation module, the computing platform further comprising the display and a camera. 4. The image processing system of claim 3, wherein the hardware processor is further configured to execute the image augmentation application to obtain the image using the camera. 5. The image processing system of claim 3, wherein the hardware processor is further configured to execute the image augmentation software code to:
use the at least one of the 3D pose and the 3D shape to produce the augmented image, and render the augmented image on the display. 6. The image processing system of claim 1, wherein the image is an RGB image. 7. The image processing system of claim 1, wherein a time lapse between receiving the image by the image augmentation software code and rendering the augmented image on the display is less than five seconds. 8. The image processing system of claim 1, wherein the image comprises a human image, and wherein the augmented image includes the human image and a virtual character generated based on the at least one of the 3D pose and the 3D shape. 9. The image processing system of claim 8, wherein the virtual character is generated based on the 3D pose, and wherein the virtual character and human image are non-overlapping in the augmented image. 10. The image processing system of claim 8, wherein the virtual character is generated based on the 3D pose and the 3D shape, and wherein the virtual character at least partially overlaps the human image in the augmented image. 11. The image processing system of claim 8, wherein the hardware processor is further configured to execute the image augmentation software code to:
estimate a 3D shape corresponding to the human image; and utilize the 3D shape corresponding to the human image to generate at least one of a partial occlusion of the virtual character by the human image and a shadow cast from the human image to the virtual character. 12. A method for use by an image processing system including a computing platform having a hardware processor and a system memory storing an image augmentation software code and at least one of a three-dimensional (3D) poses library and a 3D shapes library, the method comprising:
providing, by the image augmentation software code executed by the hardware processor, an image as an input to a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code; receiving from the 2D pose estimation module, by the image augmentation software code executed by the hardware processor, a 2D pose data generated based on an image; and identifying, by the image augmentation software code executed by the hardware processor, at least one of a 3D pose and a 3D shape corresponding to the image, based on the 2D pose data, wherein the at least one of the 3D pose and the 3D shape is identified using an optimization algorithm applied to the 2D pose data and the at least one of the 3D poses library and the 3D shapes library. 13. The method of claim 12, further comprising outputting, by the image augmentation software code executed by the hardware processor, the at least one of the 3D pose and the 3D shape to render an augmented image corresponding to the image on a display. 14. The method of claim 13, wherein the computing platform is part of a communication device remote from the 2D pose estimation module, the computing platform further comprising the display and a camera, the method further comprising obtaining, by the image augmentation application executed by the hardware processor, the image using the camera. 15. The method of claim 13, the method further comprising:
using, by the image augmentation application executed by the hardware processor, the at least one of the 3D pose and the 3D shape to produce the augmented image, and rendering, by the image augmentation application executed by the hardware processor, the augmented image on the display. 16. The method of claim 12, wherein the image is an RGB image. 17. The method of claim 12, wherein a time lapse between receiving the image by the image augmentation software code and rendering the augmented image on the display is less than five seconds. 18. The method of claim 12, wherein the image comprises a human image, and wherein the augmented image includes the human image and a virtual character generated based on the at least one of the 3D pose and the 3D shape. 19. The method of claim 18, wherein the virtual character is generated based on the 3D pose and the 3D shape, and wherein the virtual character at least partially overlaps the human image in the augmented image. 20. The method of claim 18, further comprising:
estimating, by the image augmentation application executed by the hardware processor, a 3D shape corresponding to the human image; and utilizing the 3D shape corresponding to the human image, by the image augmentation application executed by the hardware processor, to generate at least one of a partial occlusion of the virtual character by the human image and a shadow cast from the human image to the virtual character. | An image processing system includes a computing platform having a hardware processor and a system memory storing an image augmentation software code, a three-dimensional (3D) shapes library, and/or a 3D poses library. The image processing system also includes a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code. The hardware processor executes the image augmentation software code to provide an image to the 2D pose estimation module and to receive a 2D pose data generated by the 2D pose estimation module based on the image. The image augmentation software code identifies a 3D shape and/or a 3D pose corresponding to the image using an optimization algorithm applied to the 2D pose data and one or both of the 3D poses library and the 3D shapes library, and may output the 3D shape and/or 3D pose to render an augmented image on a display.1. An image processing system comprising:
a computing platform including a hardware processor and a system memory; the system memory storing an image augmentation software code and at least one of a three-dimensional (3D) poses library and a 3D shapes library; a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code; the hardware processor configured to execute the image augmentation software code to:
provide an image as an input to the 2D pose estimation module;
receive from the 2D pose estimation module, a 2D pose data generated based on the image; and
identify at least one of a 3D pose and a 3D shape corresponding to the image, based on the 2D pose data, wherein the at least one of the 3D pose and the 3D shape is identified using an optimization algorithm applied to the 2D pose data and the at least one of the 3D poses library and the 3D shapes library. 2. The image processing system of claim 1, wherein the hardware processor is further configured to execute the image augmentation software code to output the at least one of the 3D pose and the 3D shape to render an augmented image corresponding to the image on a display. 3. The image processing system of claim 2, wherein the computing platform is part of a communication device remote from the 2D pose estimation module, the computing platform further comprising the display and a camera. 4. The image processing system of claim 3, wherein the hardware processor is further configured to execute the image augmentation application to obtain the image using the camera. 5. The image processing system of claim 3, wherein the hardware processor is further configured to execute the image augmentation software code to:
use the at least one of the 3D pose and the 3D shape to produce the augmented image, and render the augmented image on the display. 6. The image processing system of claim 1, wherein the image is an RGB image. 7. The image processing system of claim 1, wherein a time lapse between receiving the image by the image augmentation software code and rendering the augmented image on the display is less than five seconds. 8. The image processing system of claim 1, wherein the image comprises a human image, and wherein the augmented image includes the human image and a virtual character generated based on the at least one of the 3D pose and the 3D shape. 9. The image processing system of claim 8, wherein the virtual character is generated based on the 3D pose, and wherein the virtual character and human image are non-overlapping in the augmented image. 10. The image processing system of claim 8, wherein the virtual character is generated based on the 3D pose and the 3D shape, and wherein the virtual character at least partially overlaps the human image in the augmented image. 11. The image processing system of claim 8, wherein the hardware processor is further configured to execute the image augmentation software code to:
estimate a 3D shape corresponding to the human image; and utilize the 3D shape corresponding to the human image to generate at least one of a partial occlusion of the virtual character by the human image and a shadow cast from the human image to the virtual character. 12. A method for use by an image processing system including a computing platform having a hardware processor and a system memory storing an image augmentation software code and at least one of a three-dimensional (3D) poses library and a 3D shapes library, the method comprising:
providing, by the image augmentation software code executed by the hardware processor, an image as an input to a two-dimensional (2D) pose estimation module communicatively coupled to the image augmentation software code; receiving from the 2D pose estimation module, by the image augmentation software code executed by the hardware processor, a 2D pose data generated based on an image; and identifying, by the image augmentation software code executed by the hardware processor, at least one of a 3D pose and a 3D shape corresponding to the image, based on the 2D pose data, wherein the at least one of the 3D pose and the 3D shape is identified using an optimization algorithm applied to the 2D pose data and the at least one of the 3D poses library and the 3D shapes library. 13. The method of claim 12, further comprising outputting, by the image augmentation software code executed by the hardware processor, the at least one of the 3D pose and the 3D shape to render an augmented image corresponding to the image on a display. 14. The method of claim 13, wherein the computing platform is part of a communication device remote from the 2D pose estimation module, the computing platform further comprising the display and a camera, the method further comprising obtaining, by the image augmentation application executed by the hardware processor, the image using the camera. 15. The method of claim 13, the method further comprising:
using, by the image augmentation application executed by the hardware processor, the at least one of the 3D pose and the 3D shape to produce the augmented image, and rendering, by the image augmentation application executed by the hardware processor, the augmented image on the display. 16. The method of claim 12, wherein the image is an RGB image. 17. The method of claim 12, wherein a time lapse between receiving the image by the image augmentation software code and rendering the augmented image on the display is less than five seconds. 18. The method of claim 12, wherein the image comprises a human image, and wherein the augmented image includes the human image and a virtual character generated based on the at least one of the 3D pose and the 3D shape. 19. The method of claim 18, wherein the virtual character is generated based on the 3D pose and the 3D shape, and wherein the virtual character at least partially overlaps the human image in the augmented image. 20. The method of claim 18, further comprising:
estimating, by the image augmentation application executed by the hardware processor, a 3D shape corresponding to the human image; and utilizing the 3D shape corresponding to the human image, by the image augmentation application executed by the hardware processor, to generate at least one of a partial occlusion of the virtual character by the human image and a shadow cast from the human image to the virtual character. | 2,600 |
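The image-processing claims above identify a 3D pose by applying an optimization algorithm to 2D pose data and a 3D poses library. A common concrete form of that step, shown here only as a hedged sketch (the library contents, landmark values, and orthographic-projection assumption are all illustrative, not from the patent), is to project each candidate 3D pose to 2D, solve for the best scale and translation in closed form, and keep the candidate with the lowest residual.

```python
def fit_similarity(model_2d, landmarks):
    """Least-squares scale s and translation (tx, ty) aligning model_2d
    (list of (x, y)) to the 2D landmarks; returns (s, tx, ty, residual)."""
    n = len(landmarks)
    mx = sum(p[0] for p in model_2d) / n
    my = sum(p[1] for p in model_2d) / n
    lx = sum(q[0] for q in landmarks) / n
    ly = sum(q[1] for q in landmarks) / n
    num = sum((p[0] - mx) * (q[0] - lx) + (p[1] - my) * (q[1] - ly)
              for p, q in zip(model_2d, landmarks))
    den = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in model_2d)
    s = num / den if den else 1.0
    tx, ty = lx - s * mx, ly - s * my
    residual = sum((s * p[0] + tx - q[0]) ** 2 + (s * p[1] + ty - q[1]) ** 2
                   for p, q in zip(model_2d, landmarks))
    return s, tx, ty, residual

def identify_3d_pose(pose_library, landmarks_2d):
    """Return the library pose whose orthographic projection (drop z)
    best explains the 2D pose data under a scale + translation fit."""
    best = None
    for name, points_3d in pose_library.items():
        proj = [(x, y) for x, y, _z in points_3d]
        *_, residual = fit_similarity(proj, landmarks_2d)
        if best is None or residual < best[1]:
            best = (name, residual)
    return best[0]

# Toy 3-keypoint library; the landmarks are "standing" scaled by 10
# and shifted, so the optimization should select "standing".
library = {
    "standing": [(0, 0, 0), (0, 2, 0), (1, 1, 0)],
    "sitting":  [(0, 0, 0), (2, 0, 1), (1, -1, 0)],
}
landmarks = [(5, 5), (5, 25), (15, 15)]
pose = identify_3d_pose(library, landmarks)
```

A production system would optimize over far more degrees of freedom (rotation, shape coefficients, perspective projection), but the select-by-reprojection-residual structure is the same.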
11,042 | 11,042 | 15,815,635 | 2,664 | Approaches are described for determining facial landmarks in images. An input image is provided to at least one trained neural network that determines a face region (e.g., bounding box of a face) of the input image and initial facial landmark locations corresponding to the face region. The initial facial landmark locations are provided to a 3D face mapper that maps the initial facial landmark locations to a 3D face model. A set of facial landmark locations are determined from the 3D face model. The set of facial landmark locations are provided to a landmark location adjuster that adjusts positions of the set of facial landmark locations based on the input image. The input image is presented on a user device using the adjusted set of facial landmark locations. | 1. A computer-performed method for determining facial landmarks in images, comprising:
generating adjustments to a face region of an input image using a trained joint calibration and alignment neural network; identifying initial facial landmark locations corresponding to the adjustments using the trained joint calibration and alignment neural network; generating refined facial landmark locations from the initial facial landmark locations; and causing presentation of the input image on a user device using the refined facial landmark locations. 2. The method of claim 1, wherein the adjustments to the face region and the initial facial landmark locations are generated from a common fully-connected layer of the trained joint calibration and alignment neural network. 3. The method of claim 1, wherein the landmark location refiner comprises a landmark location adjuster that adjusts positions of a set of facial landmark locations corresponding to the initial facial landmark locations. 4. The method of claim 1, wherein the generating refined facial landmark locations from the initial facial landmark locations comprises:
mapping the initial facial landmark locations to a 3D face model; determining a set of facial landmark locations from the 3D face model; and adjusting positions of the set of facial landmark locations based on the input image. 5. The method of claim 1, further comprising classifying, using a face region classifier neural network, a candidate region of the input image as containing a face, wherein the candidate region is provided to the trained joint calibration and alignment neural network as the face region based on being classified as containing the face. 6. The method of claim 1, further comprising:
training a face region classifier neural network to determine classifications on whether candidate regions contain faces; and after completing the training of the face region classifier, training a joint calibration and alignment neural network to generate adjustments to face regions and identify initial facial landmark locations corresponding to the adjustments using the trained face region classifier, wherein the training produces the trained joint calibration and alignment neural network. 7. The method of claim 1, wherein the trained joint calibration and alignment neural network attempts to determine the adjustments to the face region as a bounding box around a face in the input image. 8. The method of claim 1, wherein the causing presentation of the input image on the user device further uses the adjustments to the face region. 9. The method of claim 1, wherein each of the initial facial landmark locations represents a respective two-dimensional point in the input image. 10. One or more non-transitory computer-readable media having a plurality of executable instructions embodied thereon, which, when executed by one or more processors, cause the one or more processors to perform a method for determining facial landmarks in images, the method comprising:
determining, using at least one neural network, a face region of an input image and initial facial landmark locations corresponding to the face region; mapping the initial facial landmark locations to a 3D face model; determining a set of facial landmark locations from the 3D face model; adjusting positions of the set of facial landmark locations based on the input image; and causing presentation of the input image on a user device using the adjusted set of facial landmark locations. 11. The computer-readable media of claim 10, wherein the at least one trained neural network comprises:
a face region classifier neural network that classifies a candidate region of the input image as containing a face; and a trained joint calibration and alignment neural network that, based on the candidate region being classified as containing the face, generates the initial facial landmark locations and adjustments that are applied to the candidate region to result in the face region. 12. The computer-readable media of claim 10, wherein the determining, from the 3D face model, the set of facial landmark locations comprises:
projecting the 3D face model into two-dimensions based on a pose of a face in the input image; and determining the set of facial landmark locations from the projected 3D face model. 13. The computer-readable media of claim 10, wherein the set of facial landmark locations includes more facial landmark locations than the initial facial landmark locations. 14. The computer-readable media of claim 10, wherein the at least one trained neural network is trained to determine the face region as a bounding box around a face in the input image. 15. The computer-readable media of claim 10, wherein the presentation of the input image on the user device uses the adjusted set of facial landmark locations and the face region. 16. A computer-implemented system for determining facial landmarks in images, comprising:
a model manager means for:
providing an input image and a face region of the input image to a trained joint calibration and alignment neural network that generates adjustments to the face region and identifies initial facial landmark locations corresponding to the adjustments to the face region; and
providing the initial facial landmark locations to a landmark location refiner that generates refined facial landmark locations from the initial facial landmark locations; and
a presentation component means for causing presentation of the input image on a user device using the refined facial landmark locations. 17. The system of claim 16, further comprising an image processor means for processing the input image using the refined facial landmark locations, wherein the presentation is of the processed input image. 18. The system of claim 16, wherein the adjustments to the face region and the initial facial landmark locations are generated from a common fully-connected layer of the trained joint calibration and alignment neural network. 19. The system of claim 16, wherein the landmark location refiner comprises a landmark location adjuster that adjusts positions of a set of facial landmark locations corresponding to the initial facial landmark locations. 20. The system of claim 16, wherein the providing the initial facial landmark locations to the landmark location refiner causes the landmark location refiner to:
map the initial facial landmark locations to a 3D face model; determine a set of facial landmark locations from the 3D face model; and adjust positions of the set of facial landmark locations based on the input image. | Approaches are described for determining facial landmarks in images. An input image is provided to at least one trained neural network that determines a face region (e.g., bounding box of a face) of the input image and initial facial landmark locations corresponding to the face region. The initial facial landmark locations are provided to a 3D face mapper that maps the initial facial landmark locations to a 3D face model. A set of facial landmark locations are determined from the 3D face model. The set of facial landmark locations are provided to a landmark location adjuster that adjusts positions of the set of facial landmark locations based on the input image. The input image is presented on a user device using the adjusted set of facial landmark locations.1. A computer-performed method for determining facial landmarks in images, comprising:
generating adjustments to a face region of an input image using a trained joint calibration and alignment neural network; identifying initial facial landmark locations corresponding to the adjustments using the trained joint calibration and alignment neural network; generating refined facial landmark locations from the initial facial landmark locations; and causing presentation of the input image on a user device using the refined facial landmark locations. 2. The method of claim 1, wherein the adjustments to the face region and the initial facial landmark locations are generated from a common fully-connected layer of the trained joint calibration and alignment neural network. 3. The method of claim 1, wherein the landmark location refiner comprises a landmark location adjuster that adjusts positions of a set of facial landmark locations corresponding to the initial facial landmark locations. 4. The method of claim 1, wherein the generating refined facial landmark locations from the initial facial landmark locations comprises:
mapping the initial facial landmark locations to a 3D face model; determining a set of facial landmark locations from the 3D face model; and adjusting positions of the set of facial landmark locations based on the input image. 5. The method of claim 1, further comprising classifying, using a face region classifier neural network, a candidate region of the input image as containing a face, wherein the candidate region is provided to the trained joint calibration and alignment neural network as the face region based on being classified as containing the face. 6. The method of claim 1, further comprising:
training a face region classifier neural network to determine classifications on whether candidate regions contain faces; and after completing the training of the face region classifier, training a joint calibration and alignment neural network to generate adjustments to face regions and identify initial facial landmark locations corresponding to the adjustments using the trained face region classifier, wherein the training produces the trained joint calibration and alignment neural network. 7. The method of claim 1, wherein the trained joint calibration and alignment neural network attempts to determine the adjustments to the face region as a bounding box around a face in the input image. 8. The method of claim 1, wherein the causing presentation of the input image on the user device further uses the adjustments to the face region. 9. The method of claim 1, wherein each of the initial facial landmark locations represents a respective two-dimensional point in the input image. 10. One or more non-transitory computer-readable media having a plurality of executable instructions embodied thereon, which, when executed by one or more processors, cause the one or more processors to perform a method for determining facial landmarks in images, the method comprising:
determining, using at least one neural network, a face region of an input image and initial facial landmark locations corresponding to the face region; mapping the initial facial landmark locations to a 3D face model; determining a set of facial landmark locations from the 3D face model; adjusting positions of the set of facial landmark locations based on the input image; and causing presentation of the input image on a user device using the adjusted set of facial landmark locations. 11. The computer-readable media of claim 10, wherein the at least one trained neural network comprises:
a face region classifier neural network that classifies a candidate region of the input image as containing a face; and a trained joint calibration and alignment neural network that, based on the candidate region being classified as containing the face, generates the initial facial landmark locations and adjustments that are applied to the candidate region to result in the face region. 12. The computer-readable media of claim 10, wherein the determining, from the 3D face model, the set of facial landmark locations comprises:
projecting the 3D face model into two-dimensions based on a pose of a face in the input image; and determining the set of facial landmark locations from the projected 3D face model. 13. The computer-readable media of claim 10, wherein the set of facial landmark locations includes more facial landmark locations than the initial facial landmark locations. 14. The computer-readable media of claim 10, wherein the at least one trained neural network is trained to determine the face region as a bounding box around a face in the input image. 15. The computer-readable media of claim 10, wherein the presentation of the input image on the user device uses the adjusted set of facial landmark locations and the face region. 16. A computer-implemented system for determining facial landmarks in images, comprising:
a model manager means for:
providing an input image and a face region of the input image to a trained joint calibration and alignment neural network that generates adjustments to the face region and identifies initial facial landmark locations corresponding to the adjustments to the face region; and
providing the initial facial landmark locations to a landmark location refiner that generates refined facial landmark locations from the initial facial landmark locations; and
a presentation component means for causing presentation of the input image on a user device using the refined facial landmark locations. 17. The system of claim 16, further comprising an image processor means for processing the input image using the refined facial landmark locations, wherein the presentation is of the processed input image. 18. The system of claim 16, wherein the adjustments to the face region and the initial facial landmark locations are generated from a common fully-connected layer of the trained joint calibration and alignment neural network. 19. The system of claim 16, wherein the landmark location refiner comprises a landmark location adjuster that adjusts positions of a set of facial landmark locations corresponding to the initial facial landmark locations. 20. The system of claim 16, wherein the providing the initial facial landmark locations to the landmark location refiner causes the landmark location refiner to:
map the initial facial landmark locations to a 3D face model; determine a set of facial landmark locations from the 3D face model; and adjust positions of the set of facial landmark locations based on the input image. | 2,600 |
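The record above (claims 10 and 12) maps initial landmarks to a 3D face model, projects the model according to the face's pose, and reads landmark locations from the projection. A minimal sketch of that projection step follows, under strong simplifying assumptions: a three-point "model", a yaw-only pose, and an orthographic camera, none of which come from the patent text.

```python
import math

def project_landmarks(model_points, yaw_rad, scale=100.0, cx=0.0, cy=0.0):
    """Rotate 3D model landmarks about the vertical axis by `yaw_rad`,
    then orthographically project to 2D, scaled and centered at (cx, cy)."""
    cos_y, sin_y = math.cos(yaw_rad), math.sin(yaw_rad)
    projected = []
    for x, y, z in model_points:
        xr = cos_y * x + sin_y * z   # yaw rotation moves depth into x
        projected.append((cx + scale * xr, cy + scale * y))
    return projected

# Three schematic landmarks: left eye, right eye, nose tip (unit coordinates).
model = [(-0.25, 0.25, 0.0), (0.25, 0.25, 0.0), (0.0, 0.0, 0.25)]
pts = project_landmarks(model, yaw_rad=0.0, cx=320.0, cy=240.0)
print(pts)
```

A subsequent image-based adjustment stage (claim 12's sibling steps) would then refine these projected positions; that refinement is not modeled here.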
11,043 | 11,043 | 16,208,706 | 2,646 | A method controls a first motor vehicle. The method includes the steps of: i) receiving status data of a roadway section and a geographic position of the roadway section by the first motor vehicle, ii) sensing, as a function of the received status data, the roadway section by a surroundings sensor of the first motor vehicle, and in response thereto, iii) aligning the surroundings sensor with a particular roadway feature of the roadway section, and/or iv) adapting an evaluation of data of the surroundings sensor, and/or v) updating the status data and transmitting the updated status data. | 1. A method for controlling a first motor vehicle, the method comprising the steps of:
receiving status data of a roadway section and a geographic position of the roadway section by the first motor vehicle, sensing, as a function of the received status data, the roadway section by a surroundings sensor of the first motor vehicle, and in response thereto:
(i) aligning the surroundings sensor with a particular roadway feature of the roadway section, and/or
(ii) adapting an evaluation of data of the surroundings sensor, and/or
(iii) updating the status data and transmitting the updated status data. 2. The method as claimed in claim 1, wherein the receiving of the status data of the roadway section takes place while the first motor vehicle is traveling. 3. The method as claimed in claim 1, wherein the receiving of the status data of the roadway section and of the geographic position of the roadway section is preceded by:
determining the status data of the roadway section by a second motor vehicle via at least one surroundings sensor of the second motor vehicle, and/or determining the geographic position of the roadway section. 4. The method as claimed in claim 3, wherein
the determining of the status data by the second motor vehicle comprises analysis of the status data of the roadway section with respect to at least one particular roadway feature. 5. The method as claimed in claim 1, further comprising the step of:
analyzing the status data of the roadway section which is sensed by the surroundings sensor of the first motor vehicle. 6. The method as claimed in claim 5, further comprising the step of:
adapting at least one chassis parameter and/or a driving behavior of the first motor vehicle as a function of the analyzed status data. 7. The method as claimed in claim 6, wherein
the adapting of at least one chassis parameter and/or a driving behavior of the first motor vehicle takes place in response to a predefined threshold value for the status data, which are acquired by the first motor vehicle, being exceeded. 8. The method as claimed in claim 1, wherein
the surroundings sensor comprises one or more of: acceleration sensors, distance sensors, roadway distance sensors, ride height sensors, rolling sensors, roadway unevenness sensors including cameras and LIDAR, sensors for sensing the motor vehicle's own movements, and ultrasonic sensors. 9. The method as claimed in claim 1, wherein
the status data comprise a lane, a travel direction, a frequency of roadway unevennesses, an amplitude of roadway unevennesses, and/or a date at which the roadway section was passed through. 10. A driver assistance system for controlling a first motor vehicle, comprising:
a data input; an evaluation unit; and a data output, wherein the data input is configured for the reception of status data of a roadway section and a geographic position of the roadway section by a first motor vehicle, wherein the evaluation unit is configured to sense, as a function of the received status data, the roadway section via a surroundings sensor of the first motor vehicle, and wherein the data output is configured, in response thereto, to align the surroundings sensor with a particular roadway feature, to adapt an evaluation of the surroundings sensor and/or to update the status data and to transmit the updated status data. 11. The driver assistance system as claimed in claim 10, wherein
the evaluation unit is further configured to analyze status data of the roadway section which is sensed by the surroundings sensor of the first motor vehicle, and the data output is further configured to adapt at least one chassis parameter and/or a driving behavior of the first motor vehicle as a function of the analyzed status data. 12. A system, comprising:
a driver assistance system that controls a first motor vehicle, wherein the driver assistance system is operatively configured to execute processing to:
receive status data of a roadway section and a geographic position of the roadway section by the first motor vehicle,
sense, as a function of the received status data, the roadway section by a surroundings sensor of the first motor vehicle, and in response thereto:
(i) align the surroundings sensor with a particular roadway feature of the roadway section, and/or
(ii) adapt an evaluation of data of the surroundings sensor, and/or
(iii) update the status data and transmit the updated status data. 13. A motor vehicle comprising a driver assistance system as claimed in claim 10. | A method controls a first motor vehicle. The method includes the steps of: i) receiving status data of a roadway section and a geographic position of the roadway section by the first motor vehicle, ii) sensing, as a function of the received status data, the roadway section by a surroundings sensor of the first motor vehicle, and in response thereto, iii) aligning the surroundings sensor with a particular roadway feature of the roadway section, and/or iv) adapting an evaluation of data of the surroundings sensor, and/or v) updating the status data and transmitting the updated status data.1. A method for controlling a first motor vehicle, the method comprising the steps of:
receiving status data of a roadway section and a geographic position of the roadway section by the first motor vehicle, sensing, as a function of the received status data, the roadway section by a surroundings sensor of the first motor vehicle, and in response thereto:
(i) aligning the surroundings sensor with a particular roadway feature of the roadway section, and/or
(ii) adapting an evaluation of data of the surroundings sensor, and/or
(iii) updating the status data and transmitting the updated status data. 2. The method as claimed in claim 1, wherein the receiving of the status data of the roadway section takes place while the first motor vehicle is traveling. 3. The method as claimed in claim 1, wherein the receiving of the status data of the roadway section and of the geographic position of the roadway section is preceded by:
determining the status data of the roadway section by a second motor vehicle via at least one surroundings sensor of the second motor vehicle, and/or determining the geographic position of the roadway section. 4. The method as claimed in claim 3, wherein
the determining of the status data by the second motor vehicle comprises analysis of the status data of the roadway section with respect to at least one particular roadway feature. 5. The method as claimed in claim 1, further comprising the step of:
analyzing the status data of the roadway section which is sensed by the surroundings sensor of the first motor vehicle. 6. The method as claimed in claim 5, further comprising the step of:
adapting at least one chassis parameter and/or a driving behavior of the first motor vehicle as a function of the analyzed status data. 7. The method as claimed in claim 6, wherein
the adapting of at least one chassis parameter and/or a driving behavior of the first motor vehicle takes place in response to a predefined threshold value for the status data, which are acquired by the first motor vehicle, being exceeded. 8. The method as claimed in claim 1, wherein
the surroundings sensor comprises one or more of: acceleration sensors, distance sensors, roadway distance sensors, ride height sensors, rolling sensors, roadway unevenness sensors including cameras and LIDAR, sensors for sensing the motor vehicle's own movements, and ultrasonic sensors. 9. The method as claimed in claim 1, wherein
the status data comprise a lane, a travel direction, a frequency of roadway unevennesses, an amplitude of roadway unevennesses, and/or a date at which the roadway section was passed through. 10. A driver assistance system for controlling a first motor vehicle, comprising:
a data input; an evaluation unit; and a data output, wherein the data input is configured for the reception of status data of a roadway section and a geographic position of the roadway section by a first motor vehicle, wherein the evaluation unit is configured to sense, as a function of the received status data, the roadway section via a surroundings sensor of the first motor vehicle, and wherein the data output is configured, in response thereto, to align the surroundings sensor with a particular roadway feature, to adapt an evaluation of the surroundings sensor and/or to update the status data and to transmit the updated status data. 11. The driver assistance system as claimed in claim 10, wherein
the evaluation unit is further configured to analyze status data of the roadway section which is sensed by the surroundings sensor of the first motor vehicle, and the data output is further configured to adapt at least one chassis parameter and/or a driving behavior of the first motor vehicle as a function of the analyzed status data. 12. A system, comprising:
a driver assistance system that controls a first motor vehicle, wherein the driver assistance system is operatively configured to execute processing to:
receive status data of a roadway section and a geographic position of the roadway section by the first motor vehicle,
sense, as a function of the received status data, the roadway section by a surroundings sensor of the first motor vehicle, and in response thereto:
(i) align the surroundings sensor with a particular roadway feature of the roadway section, and/or
(ii) adapt an evaluation of data of the surroundings sensor, and/or
(iii) update the status data and transmit the updated status data. 13. A motor vehicle comprising a driver assistance system as claimed in claim 10. | 2,600 |
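Claims 6 and 7 of the record above adapt a chassis parameter only when a predefined threshold for the acquired status data is exceeded. The sketch below illustrates that trigger logic; the field name, the single unevenness-amplitude metric, the threshold value, and the damping rule are all illustrative assumptions rather than anything specified in the patent.

```python
UNEVENNESS_THRESHOLD = 0.5  # assumed predefined threshold (arbitrary units)

def adapt_chassis(status_data, current_damping):
    """Return the (possibly adapted) damping setting for a sensed section.

    `status_data` is a dict holding an 'unevenness_amplitude' entry,
    echoing the status data of claim 9; adaptation fires only when the
    amplitude exceeds the predefined threshold, as in claim 7.
    """
    amplitude = status_data.get("unevenness_amplitude", 0.0)
    if amplitude > UNEVENNESS_THRESHOLD:
        # Adjust damping in proportion to how far the threshold is exceeded.
        return current_damping + 0.1 * (amplitude - UNEVENNESS_THRESHOLD)
    return current_damping  # below threshold: leave the chassis untouched

print(adapt_chassis({"unevenness_amplitude": 0.2}, 1.0))  # unchanged
print(adapt_chassis({"unevenness_amplitude": 1.5}, 1.0))  # adapted upward
```

The same pattern would extend to driving-behavior parameters (e.g. target speed) by swapping the adjusted quantity.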
11,044 | 11,044 | 16,353,005 | 2,621 | A voice actuated visual communication system for a driver of a vehicle in traffic to communicate visually with drivers of other vehicles in proximity to his or her vehicle for, among other things, driving safety. | 1. A message signaling system for use by the driver of a vehicle comprising:
a microphone located in said vehicle to receive voice commands from said driver; a processing device controlled by an operating system and running a program having an input from the output of said microphone; a first screen display attached to a window of said vehicle having an input from the output of said processing device; and a power supply connected to and providing electrical power to said processing device and said screen display, whereby said program controls the input of said screen display as a function of the output of said microphone with the result that an image matched with said voice commands of said driver is displayed thereon as a visual message to drivers of vehicles in close proximity to said vehicle. 2. The system of claim 1 wherein said first screen display is attached to the rear window of said vehicle in a manner so that said message is visible to drivers of vehicles following said vehicle. 3. The system of claim 2 in which said first screen display further comprises a second screen display displaying said image so as to be visible to and readable by said driver of said vehicle in the rearview mirror of said vehicle. 4. The system of claim 1 further comprising timing circuitry powered by said power supply and connected to the input of said screen display
whereby the period of time that said message is visible is governed by said timing circuitry. 5. The system of claim 1 further comprising: a speaker powered by said power supply and having an input from the output of said processing device,
whereby said speaker provides audible messages to said driver in connection with said driver's voice commands and said visual messages displayed in response to said voice commands. 6. The system of claim 5 in which such audible messages are selected from the group comprised of prompts, queries, and feedback. 7. The system of claim 1 in which said program comprises:
a speech recognition engine;
natural language and inflection detection software;
an in-memory database; and
a rules engine. 8. The system of claim 5 in which said program comprises:
a speech recognition engine;
natural language and inflection detection software;
an in-memory database;
a rules engine; and
a speech to text conversion engine. 9. The system of claim 8 in which such audible messages are selected from the group comprised of prompts, queries, and feedback. 10. The system of claim 1 in which said power supply is a rechargeable battery. 11. The system of claim 10 further comprising external charging ports by which said battery may be charged selected from the group comprised of a USB port, an RJ45 port, an inductive power transfer port, and a 12 volt DC port. 12. The system of claim 11 in which one of said ports is connected to a power source selected from the group of a 12 volt automotive battery, an AC power supply, an inductive power supply, an electrical source powered by solar energy, and an electrical source powered by automotive kinetic energy. 13. The system of claim 1 further comprising at least one screen display in addition to said first screen display, each of which additional screen displays having an input from the output of said processing device and receiving electrical power from said power supply. 14. The system of claim 3 in which said image displayed on said second screen display is said image displayed on said first screen display inverted 180 degrees on its x-axis,
whereby said message displayed on said second screen display is visible to and can be correctly read by said driver in the rearview mirror of said vehicle. 15. The system of claim 14 in which inverting said image is mirror writing. 16. The system of claim 4 in which the input to said timing circuitry is connected to the output of said program,
whereby said timing circuitry can be overridden by voice command of said driver so that the period of time that said message is visible is governed by said voice command. 17. The system of claim 1 further comprising means to connect said processing device to an external network. 18. The system of claim 17 in which said external network is selected from the group comprising the internet, a local area network, or the global positioning system. 19. The system of claim 17 in which said connecting means is selected from the group comprising Wi-Fi, Bluetooth®, and broadband wireless. 20. A method to provide a visual message to drivers in close proximity to the driver of a vehicle comprising the steps of:
speaking a message into a microphone; using speech recognition software to analyze the output of said microphone; matching the output of such speech recognition software to the entries in a library of images stored in a database; using a signal corresponding to a matched image as an input to a first screen display attached to a window of said driver's vehicle; and displaying said image on said screen display
whereby a visual message corresponding to said spoken message is provided to drivers in close proximity to said vehicle. 21. The method of claim 20 wherein said first screen display is attached to the rear window of said vehicle in a manner so that said message is visible to drivers of vehicles following said vehicle. 22. The method of claim 21 in which said first screen display further comprises a second screen display displaying said message so as to be visible to and readable by said driver of said vehicle in the rearview mirror of said vehicle. 23. The method of claim 22 further comprising the step of:
reviewing said image in the rearview mirror of said vehicle. 24. The method of claim 20 further comprising the step of:
listening to audible feedback related to said image. 25. The method of claim 20 further comprising the steps of:
listening to one or more audible prompts related to said image; and
speaking a voice command into said microphone in order to terminate display of or modifying said image. 26. The method of claim 20 further comprising the step of:
modifying said image. 27. The method of claim 20 further comprising the step of:
terminating display of said image. 28. The method of claim 26 further comprising the step of:
terminating display of said image. | A voice actuated visual communication system for a driver of a vehicle in traffic to communicate visually with drivers of other vehicles in proximity to his or her vehicle for, among other things, driving safety.1. A message signaling system for use by the driver of a vehicle comprising:
a microphone located in said vehicle to receive voice commands from said driver; a processing device controlled by an operating system and running a program having an input from the output of said microphone; a first screen display attached to a window of said vehicle having an input from the output of said processing device; and a power supply connected to and providing electrical power to said processing device and said screen display, whereby said program controls the input of said screen display as a function of the output of said microphone with the result that an image matched with said voice commands of said driver is displayed thereon as a visual message to drivers of vehicles in close proximity to said vehicle. 2. The system of claim 1 wherein said first screen display is attached to the rear window of said vehicle in a manner so that said message is visible to drivers of vehicles following said vehicle. 3. The system of claim 2 in which said first screen display further comprises a second screen display displaying said image so as to be visible to and readable by said driver of said vehicle in the rearview mirror of said vehicle. 4. The system of claim 1 further comprising timing circuitry powered by said power supply and connected to the input of said screen display
whereby the period of time that said message is visible is governed by said timing circuitry. 5. The system of claim 1 further comprising: a speaker powered by said power supply and having an input from the output of said processing device,
whereby said speaker provides audible messages to said driver in connection with said driver's voice commands and said visual messages displayed in response to said voice commands. 6. The system of claim 5 in which such audible messages are selected from the group comprised of prompts, queries, and feedback. 7. The system of claim 1 in which said program comprises:
a speech recognition engine;
natural language and inflection detection software;
an in-memory database; and
a rules engine. 8. The system of claim 5 in which said program comprises:
a speech recognition engine;
natural language and inflection detection software;
an in-memory database;
a rules engine; and
a speech to text conversion engine. 9. The system of claim 8 in which such audible messages are selected from the group comprised of prompts, queries, and feedback. 10. The system of claim 1 in which said power supply is a rechargeable battery. 11. The system of claim 10 further comprising external charging ports by which said battery may be charged selected from the group comprised of a USB port, an RJ45 port, an inductive power transfer port, and a 12 volt DC port. 12. The system of claim 11 in which one of said ports is connected to a power source selected from the group of a 12 volt automotive battery, an AC power supply, an inductive power supply, an electrical source powered by solar energy, and an electrical source powered by automotive kinetic energy. 13. The system of claim 1 further comprising at least one screen display in addition to said first screen display, each of which additional screen displays having an input from the output of said processing device and receiving electrical power from said power supply. 14. The system of claim 3 in which said image displayed on said second screen display is said image displayed on said first screen display inverted 180 degrees on its x-axis,
whereby said message displayed on said second screen display is visible to and can be correctly read by said driver in the rearview mirror of said vehicle. 15. The system of claim 14 in which inverting said image is mirror writing. 16. The system of claim 4 in which the input to said timing circuitry is connected to the output of said program,
whereby said timing circuitry can be overridden by voice command of said driver so that the period of time that said message is visible is governed by said voice command. 17. The system of claim 1 further comprising means to connect said processing device to an external network. 18. The system of claim 17 in which said external network is selected from the group comprising the internet, a local area network, or the global positioning system. 19. The system of claim 17 in which said connecting means is selected from the group comprising Wi-Fi, Bluetooth®, and broadband wireless. 20. A method to provide a visual message to drivers in close proximity to the driver of a vehicle comprising the steps of:
speaking a message into a microphone; using speech recognition software to analyze the output of said microphone; matching the output of such speech recognition software to the entries in a library of images stored in a database; using a signal corresponding to a matched image as an input to a first screen display attached to a window of said driver's vehicle; and displaying said image on said screen display
whereby a visual message corresponding to said spoken message is provided to drivers in close proximity to said vehicle. 21. The method of claim 20 wherein said first screen display is attached to the rear window of said vehicle in a manner so that said message is visible to drivers of vehicles following said vehicle. 22. The method of claim 21 in which said first screen display further comprises a second screen display displaying said message so as to be visible to and readable by said driver of said vehicle in the rearview mirror of said vehicle. 23. The method of claim 22 further comprising the step of:
reviewing said image in the rearview mirror of said vehicle. 24. The method of claim 20 further comprising the step of:
listening to audible feedback related to said image. 25. The method of claim 20 further comprising the steps of:
listening to one or more audible prompts related to said image; and
speaking a voice command into said microphone in order to terminate display of or modifying said image. 26. The method of claim 20 further comprising the step of:
modifying said image. 27. The method of claim 20 further comprising the step of:
terminating display of said image. 28. The method of claim 26 further comprising the step of:
terminating display of said image. | 2,600 |
11,045 | 11,045 | 16,925,609 | 2,691 | When the speed of head movement exceeds the processing capability of the system, a reduced depiction is displayed. As one example, the resolution may be reduced using coarse pixel shading in order to create a new depiction at the speed of head movement. In accordance with another embodiment, only the region the user is looking at is processed in full resolution and the remainder of the depiction is processed at lower resolution. In still another embodiment, the background depictions may be blurred or grayed out to reduce processing time. | 1. A smart phone comprising:
a system on chip (SoC) comprising:
an integrated circuit die having:
a central processing unit (CPU) comprising a first plurality of cores including a first core and a second core;
a graphics processing unit (GPU) coupled to the CPU, the GPU comprising a second plurality of cores; and
a shared cache memory;
a non-transitory storage medium coupled to the SoC, the non-transitory storage medium comprising instructions that when executed:
cause the CPU to identify a first portion of a frame and a second portion of the frame, based at least in part on information from an application programming interface (API); and
cause the GPU to:
render the first portion of the frame at a first resolution, including to apply a first shading rate within the first portion of the frame; and
render the second portion of the frame at a second resolution lower than the first resolution, including to apply a second shading rate within at least the second portion of the frame,
wherein the first shading rate is twice the second shading rate;
one or more cameras coupled to the SoC; at least one wireless interface; and a display comprising a touchscreen. 2. The smart phone of claim 1, further comprising at least one accelerometer. 3. The smart phone of claim 1, wherein the SoC comprises at least one accelerator. 4. The smart phone of claim 1, wherein the at least one wireless interface comprises a Wi-Fi interface and a Bluetooth interface. 5. The smart phone of claim 1, wherein the SoC is to execute a neural engine. 6. The smart phone of claim 1, wherein the at least one of the second plurality of cores is to render the first portion of the frame at the first resolution, the first resolution comprising a full resolution. 7. The smart phone of claim 1, wherein the at least one of the second plurality of cores is to render the first portion of the frame and render the second portion of the frame, the first portion of the frame comprising a foreground region, and the second portion of the frame comprising a background region. 8. The smart phone of claim 1, further comprising an interface to communicate with a headset. 9. The smart phone of claim 8, wherein the interface is to receive motion tracking information from the headset. 10. A tablet comprising:
a processor comprising:
a central processing unit (CPU) comprising a first plurality of cores including a first core and a second core;
a graphics processing unit (GPU) coupled to the CPU, the GPU comprising a second plurality of cores; and
a shared cache memory;
a non-transitory storage medium coupled to the processor, the non-transitory storage medium comprising instructions that when executed:
cause the CPU to identify a first portion of a frame and a second portion of the frame, based at least in part on information from an application programming interface (API); and
cause the GPU to:
render the first portion of the frame at a first resolution, including to apply a first shading rate within the first portion of the frame; and
render the second portion of the frame at a second resolution lower than the first resolution, including to apply a second shading rate within at least the second portion of the frame,
wherein the first shading rate is twice the second shading rate;
at least one wireless interface; and a display comprising a touchscreen. 11. The tablet of claim 10, further comprising at least one accelerometer. 12. The tablet of claim 10, wherein the processor comprises at least one accelerator. 13. The tablet of claim 10, wherein the at least one wireless interface comprises a Wi-Fi interface and a Bluetooth interface. 14. The tablet of claim 10, wherein the at least one of the second plurality of cores is to render the first portion of the frame at the first resolution, the first resolution comprising a full resolution. 15. The tablet of claim 10, wherein the at least one of the second plurality of cores is to render the first portion of the frame and render the second portion of the frame, the first portion of the frame comprising a foreground region, and the second portion of the frame comprising a background region. 16. A non-transitory storage medium comprising instructions that when executed by a machine cause the machine to:
identify a first portion of a frame and a second portion of the frame, based at least in part on information from an application programming interface (API); render the first portion of the frame at a first resolution, including to apply a first shading rate within the first portion of the frame; and render the second portion of the frame at a second resolution lower than the first resolution, including to apply a second shading rate within at least the second portion of the frame, wherein the first shading rate is at least twice the second shading rate. 17. The non-transitory storage medium of claim 16, further comprising instructions that when executed cause the machine to render the first portion of the frame at the first resolution, the first resolution comprising a full resolution. 18. The non-transitory storage medium of claim 16, further comprising instructions that when executed cause the machine to render the first portion of the frame and render the second portion of the frame, the first portion of the frame comprising a foreground region, and the second portion of the frame comprising a background region. | 2,600 |
11,046 | 11,046 | 16,195,809 | 2,651 | A phone appliance and method of use are provided where the phone appliance can be used to make VoIP communications calls. In a preferred embodiment, the phone appliance includes an RF connection for connecting to a computer or other computing device for facilitating the placement of the VoIP communications calls. The phone appliance further includes a display or portal for depicting advertisements provided by various advertisers. The advertisements provided can be used to defray all or part of the cost associated with making VoIP communications calls. The portal can also be used to communicate with businesses for ordering products, such as ordering a pizza, and to perform various services, such as purchasing stocks. In an exemplary system, the phone appliance is used to transmit to a control center information related to the user of the phone appliance, such as interests and buying habits, and queries for receiving additional information for various advertised products and services. The control center transmits the queries to the appropriate vendors for providing the user with additional information. Other functions and features are provided to the phone appliance, such as being able to download e-mail messages stored within or received by the computer. | 1. A recording system for recording voice communications during a voice communication call, comprising:
at least one phone appliance for transmitting and/or receiving voice communications, wherein the voice communications are voice-over-data communications or another type of data communications; a converter configured to convert at least one voice communication from analog to digital format; and a computing device configured to facilitate recording the at least one voice communication in digital format on at least one computer memory of the computing device or a disk of the computing device, wherein the at least one voice communication is recorded during a voice communication call. 2. The phone appliance according to claim 1, wherein the at least one phone appliance includes a key for enabling initiating recording of the at least one voice communication. 3. The phone appliance according to claim 1, wherein the at least one voice communication in digital format is compressed. 4. The phone appliance according to claim 1, wherein the stored at least one voice communication can be retrieved for use. 5. The phone appliance according to claim 1, wherein the at least one phone appliance includes at least one transceiver for transmitting or receiving the at least one voice communication. 6. The phone appliance according to claim 5, wherein the at least one phone appliance performs data communications including Internet access for viewing and interacting with Internet content. 7. A system for recording voice communications during a voice communication call, comprising:
a computing device in operative communication with a phone appliance for enabling initiating recording of at least one voice communication, wherein the at least one voice communication is a voice-over-data communication or another type of digital voice communication transmitted or received by at least one phone appliance; the computing device receiving and recording the at least one voice communication, wherein the at least one voice communication is recorded during a voice communication call; and the computing device is enabled to facilitate retrieving the recorded at least one voice communication. 8. The system according to claim 7, wherein the at least one voice communication is compressed. 9. The system according to claim 7, wherein the computing device or a control center records at least one voice communication on at least one memory or a disk. 10. The system according to claim 7, wherein the system includes at least a converter to convert at least one voice communication from analog to digital format. 11. The system according to claim 7, wherein the at least one phone appliance includes at least one transceiver for transmitting or receiving the at least one voice communication for recording. 12. The system according to claim 7, wherein the at least one phone appliance is configured for data communications including Internet access for viewing and accessing Internet content. 13. The system according to claim 7, wherein the computing device determines a fee for storing the recorded at least one voice communication. 14. A method for recording voice communications during a voice communication call, comprising:
establishing a voice communication call with a phone appliance, wherein the voice communication call is a voice-over-data or another type of data communication call; converting at least one voice communication from analog to digital format; compressing the digital at least one voice communication; initiating recording the at least one voice communication by the phone appliance, wherein the at least one voice communication is recorded during the voice communication call; and facilitating recording and/or storing of the at least one voice communication by a computing device. 15. A method according to claim 14, further comprising enabling recording the at least one voice communication via a key. 16. A method according to claim 14, further comprising retrieving the recorded and/or the stored at least one voice communication. 17. A method according to claim 14, further comprising transmitting or receiving the at least one voice communication for recording via a transceiver. 18. A method according to claim 17, further comprising storing the at least one voice communication for recording in at least one memory or at least one disk of a computing device or a control center. 19. A method according to claim 14, further comprising performing data communications including Internet access for viewing and interacting with Internet content. 20. A method according to claim 14, further comprising determining a fee for storing the recorded at least one voice communication. 21. A method for recording voice communications during a voice communication call, comprising:
receiving, by a computing device, a communication for enabling recording at least one voice communication, wherein the at least one voice communication is a voice-over-data or another type of data communication call; recording the at least one voice communication, wherein the at least one voice communication is recorded during a voice communication call; and retrieving the recorded at least one voice communication by a computing device. | 2,600 |
11,047 | 11,047 | 15,531,421 | 2,649 | A test system for testing a device under test includes: a signal processor configured to generate a plurality of independent signals and to apply first fading channel characteristics to each of the independent signals to generate a plurality of first faded test signals; a test system interface configured to provide the plurality of first faded test signals to one or more signal input interfaces of the device under test (DUT); a second signal processor configured to apply second fading channel characteristics to a plurality of output signals of the DUT to generate a plurality of second faded test signals, wherein the second fading channel characteristics are derived from the first fading channel characteristics; and one or more test instruments configured to measure at least one performance characteristic of the DUT from the plurality of second faded test signals. | 1. A method of testing a device under test, the method comprising:
generating a plurality of independent signals; applying first fading channel characteristics to the independent signals to generate a plurality of first faded test signals; providing the plurality of first faded test signals to one or more signal input interfaces of the device under test (DUT); applying second fading channel characteristics to a plurality of output signals of the DUT to generate a plurality of second faded test signals, wherein the second fading channel characteristics are derived from the first fading channel characteristics; and measuring with one or more test instruments at least one performance characteristic of the DUT from the plurality of second faded test signals. 2. The method of claim 1, wherein measuring at least one performance characteristic includes measuring a signal-to-interference-and-noise ratio (SINR) of a plurality of channels of the DUT. 3. The method of claim 2, further comprising:
varying at least one of the first fading channel characteristics and the second fading channel characteristics; and measuring the SINR of the plurality of channels of the DUT with the varied at least one of the first fading channel characteristics and the second fading channel characteristics. 4. The method of claim 1, wherein providing the plurality of first faded test signals to one or more signal input interfaces of the DUT comprises providing the plurality of first faded test signals to one or more signal input interfaces of the DUT via an optical baseband input of the DUT. 5. The method of claim 1, further comprising providing one or more baseband output signals of the DUT to one of the test instruments, and measuring at least one performance characteristic of a baseband processing module of the DUT in response to the plurality of first faded test signals. 6. The method of claim 1, wherein applying the first fading channel characteristics to each of the independent signals to generate the plurality of first faded test signals, comprises:
applying the first fading channel characteristics to each of the independent signals to generate a plurality of faded baseband uplink signals; and applying the plurality of faded baseband uplink signals to one or more RF signal generators to generate the plurality of first faded test signals as RF signals. 7. The method of claim 6, wherein applying the plurality of faded baseband uplink signals to one or more RF signal generators to generate the plurality of first faded test signals comprises providing each of the independent signals to a corresponding one of the RF signal generators, wherein each RF signal generator generates a corresponding one of the first faded test signals as a corresponding RF signal. 8. The method of claim 7, wherein the DUT includes a multiple-input, multiple output (MIMO) transceiver, and wherein providing the plurality of first faded test signals to one or more signal input interfaces of the DUT comprises providing the plurality of first faded test signals to a plurality of RF inputs of the MIMO transceiver. 9. The method of claim 6, wherein the one or more test instruments includes one or more RF test instruments, the method further comprising providing one or more RF output signals of the DUT as one or more input signals to the one or more RF test instruments, and wherein measuring at least one performance characteristic of the DUT from the plurality of second faded test signals includes measuring at least one performance characteristic of an RF processing module of the DUT with the one or more RF test instruments. 10. The method of claim 6, wherein the one or more test instruments includes one or more RF test instruments, the method further comprising:
providing one or more RF output signals of the DUT to the one or more RF test instruments; and measuring the at least one performance characteristic of the DUT using the one or more RF test instruments. 11. A test system for testing a device under test, the test system comprising:
one or more signal processors configured to generate a plurality of independent signals and to apply first fading channel characteristics to each of the independent signals to generate a plurality of first faded test signals; at least one test system interface configured to provide the plurality of first faded test signals to one or more signal input interfaces of the device under test (DUT); and one or more test instruments, wherein the one or more signal processors are configured to apply second fading channel characteristics to a plurality of output signals of the DUT to generate a plurality of second faded test signals, wherein the one or more signal processors are configured to derive the second fading channel characteristics from the first fading channel characteristics, and wherein the one or more test instruments are configured to measure at least one performance characteristic of the DUT from the plurality of second faded test signals. 12. The test system of claim 11, wherein the one or more test instruments are configured to measure a signal-to-interference-and-noise ratio (SINR) of a plurality of channels of the DUT. 13. The test system of claim 12, wherein the one or more signal processors includes a first signal processor comprising memory and a digital processor configured to execute instructions stored in the memory to cause the digital processor to generate the plurality of first faded test signals. 14. The test system of claim 13, wherein the digital processor is further configured to vary the first fading channel characteristics, wherein the one or more test instruments are further configured to measure the SINR of the plurality of channels of the DUT with the varied first fading channel characteristics. 15. 
The test system of claim 11, wherein the one or more output signals generated by the DUT include one or more baseband output signals, and the one or more test instruments include one or more baseband test instruments configured to receive the one or more baseband output signals, and to measure at least one performance characteristic of the DUT from the one or more baseband output signals. 16. The test system of claim 11, further comprising one or more RF signal generators,
wherein the one or more signal processors is configured to apply the first fading channel characteristics to each of the independent signals to generate a plurality of faded baseband uplink signals, and wherein each of the one or more RF signal generators is configured to receive one or more of the plurality of faded baseband uplink signals and to generate therefrom the plurality of first faded test signals as RF signals. 17. The test system of claim 16, further comprising one or more RF signal generators,
wherein the one or more signal processors includes a first signal processor configured to apply the first fading channel characteristics to each of the independent signals to generate a plurality of faded baseband uplink signals, and wherein each of the RF signal generators is configured to receive one of the plurality of faded baseband uplink signals and to generate therefrom a corresponding one of the first faded test signals as a corresponding RF signal. 18. The test system of claim 17, wherein the DUT includes a multiple-input, multiple output (MIMO) transceiver, and wherein the RF signal generators are configured to provide the plurality of first faded test signals to a plurality of RF inputs of the MIMO transceiver. 19. The test system of claim 16, wherein the one or more test instruments includes one or more RF test instruments, the RF test instruments being configured to receive one or more RF output signals of the DUT as one or more input signals to the one or more RF test instruments and to measure at least one performance characteristic of an RF processing module of the DUT. | A test system for testing a device under test includes: a signal processor configured to generate a plurality of independent signals and to apply first fading channel characteristics to each of the independent signals to generate a plurality of first faded test signals; a test system interface configured to provide the plurality of first faded test signals to one or more signal input interfaces of the device under test (DUT); a second signal processor configured to apply second fading channel characteristics to a plurality of output signals of the DUT to generate a plurality of second faded test signals, wherein the second fading channel characteristics are derived from the first fading channel characteristics; and one or more test instruments configured to measure at least one performance characteristic of the DUT from the plurality of second faded test signals.1. 
A method of testing a device under test, the method comprising:
generating a plurality of independent signals; applying first fading channel characteristics to the independent signals to generate a plurality of first faded test signals; providing the plurality of first faded test signals to one or more signal input interfaces of the device under test (DUT); applying second fading channel characteristics to a plurality of output signals of the DUT to generate a plurality of second faded test signals, wherein the second fading channel characteristics are derived from the first fading channel characteristics; and measuring with one or more test instruments at least one performance characteristic of the DUT from the plurality of second faded test signals. 2. The method of claim 1, wherein measuring at least one performance characteristic includes measuring a signal-to-interference-and-noise ratio (SINR) of a plurality of channels of the DUT. 3. The method of claim 2, further comprising:
varying at least one of the first fading channel characteristics and the second fading channel characteristics; and measuring the SINR of the plurality of channels of the DUT with the varied at least one of the first fading channel characteristics and the second fading channel characteristics. 4. The method of claim 1, wherein providing the plurality of first faded test signals to one or more signal input interfaces of the DUT comprises providing the plurality of first faded test signals to one or more signal input interfaces of the DUT via an optical baseband input of the DUT. 5. The method of claim 1, further comprising providing one or more baseband output signals of the DUT to one of the test instruments, and measuring at least one performance characteristic of a baseband processing module of the DUT in response to the plurality of first faded test signals. 6. The method of claim 1, wherein applying the first fading channel characteristics to each of the independent signals to generate the plurality of first faded test signals, comprises:
applying the first fading channel characteristics to each of the independent signals to generate a plurality of faded baseband uplink signals; and applying the plurality of faded baseband uplink signals to one or more RF signal generators to generate the plurality of first faded test signals as RF signals. 7. The method of claim 6, wherein applying the plurality of faded baseband uplink signals to one or more RF signal generators to generate the plurality of first faded test signals comprises providing each of the independent signals to a corresponding one of the RF signal generators, wherein each RF signal generator generates a corresponding one of the first faded test signals as a corresponding RF signal. 8. The method of claim 7, wherein the DUT includes a multiple-input, multiple output (MIMO) transceiver, and wherein providing the plurality of first faded test signals to one or more signal input interfaces of the DUT comprises providing the plurality of first faded test signals to a plurality of RF inputs of the MIMO transceiver. 9. The method of claim 6, wherein the one or more test instruments includes one or more RF test instruments, the method further comprising providing one or more RF output signals of the DUT as one or more input signals to the one or more RF test instruments, and wherein measuring at least one performance characteristic of the DUT from the plurality of second faded test signals includes measuring at least one performance characteristic of an RF processing module of the DUT with the one or more RF test instruments. 10. The method of claim 6, wherein the one or more test instruments includes one or more RF test instruments, the method further comprising:
providing one or more RF output signals of the DUT to the one or more RF test instruments; and measuring the at least one performance characteristic of the DUT using the one or more RF test instruments. 11. A test system for testing a device under test, the test system comprising:
one or more signal processors configured to generate a plurality of independent signals and to apply first fading channel characteristics to each of the independent signals to generate a plurality of first faded test signals; at least one test system interface configured to provide the plurality of first faded test signals to one or more signal input interfaces of the device under test (DUT); and one or more test instruments, wherein the one or more signal processors are configured to apply second fading channel characteristics to a plurality of output signals of the DUT to generate a plurality of second faded test signals, wherein the one or more signal processors are configured to derive the second fading channel characteristics from the first fading channel characteristics, and wherein the one or more test instruments are configured to measure at least one performance characteristic of the DUT from the plurality of second faded test signals. 12. The test system of claim 11, wherein the one or more test instruments are configured to measure a signal-to-interference-and-noise ratio (SINR) of a plurality of channels of the DUT. 13. The test system of claim 12, wherein the one or more signal processors includes a first signal processor comprising memory and a digital processor configured to execute instructions stored in the memory to cause the digital processor to generate the plurality of first faded test signals. 14. The test system of claim 13, wherein the digital processor is further configured to vary the first fading channel characteristics, wherein the one or more test instruments are further configured to measure the SINR of the plurality of channels of the DUT with the varied first fading channel characteristics. 15. 
The test system of claim 11, wherein the one or more output signals generated by the DUT include one or more baseband output signals, and the one or more test instruments include one or more baseband test instruments configured to receive the one or more baseband output signals, and to measure at least one performance characteristic of the DUT from the one or more baseband output signals. 16. The test system of claim 11, further comprising one or more RF signal generators,
wherein the one or more signal processors is configured to apply the first fading channel characteristics to each of the independent signals to generate a plurality of faded baseband uplink signals, and wherein each of the one or more RF signal generators is configured to receive one or more of the plurality of faded baseband uplink signals and to generate therefrom the plurality of first faded test signals as RF signals. 17. The test system of claim 16, further comprising one or more RF signal generators,
wherein the one or more signal processors includes a first signal processor configured to apply the first fading channel characteristics to each of the independent signals to generate a plurality of faded baseband uplink signals, and wherein each of the RF signal generators is configured to receive one of the plurality of faded baseband uplink signals and to generate therefrom a corresponding one of the first faded test signals as a corresponding RF signal. 18. The test system of claim 17, wherein the DUT includes a multiple-input, multiple output (MIMO) transceiver, and wherein the RF signal generators are configured to provide the plurality of first faded test signals to a plurality of RF inputs of the MIMO transceiver. 19. The test system of claim 16, wherein the one or more test instruments includes one or more RF test instruments, the RF test instruments being configured to receive one or more RF output signals of the DUT as one or more input signals to the one or more RF test instruments and to measure at least one performance characteristic of an RF processing module of the DUT. | 2,600 |
11,048 | 11,048 | 15,136,949 | 2,616 | Systems and methods are described for retexturing portions of surface in a 2-D image, where the surface is an image of a 3-D object. The systems and methods analyze, with user input, the 2-D image and then, in computer memory, generate a 3-D model of the imaged surface. The surface may then be retextured, that is, for example, artwork may be added to the 2-D image in a realistic manner, taking into account a 3-D geometry in the scene of the 2-D image. | 1. A method implemented on a computer having memory, a processor, and a display, where the method includes:
accepting a file containing a 2-D image of a 3-D scene into memory, where the 2-D image includes a 2-D surface that is an image of a 3-D surface of an object in the 3-D scene; generating a 3-D model to approximate the 3-D surface of the object in the 3-D scene, where said generating includes:
forming a perspective model, where the perspective model includes a mathematical transformation for: 1) projecting points from the 3-D model to the 2-D image, and 2) re-projecting points from the 2-D image to the 3-D scene,
defining a first silhouette segment of the 2-D surface,
defining a second silhouette segment of the 2-D surface,
defining one or more profile curves each extending from the first silhouette segment to the second silhouette segment,
identifying a spine disposed between said first segment and said second segment,
re-projecting the spine to a 3-D spine in the 3-D scene,
obtaining geometric information related to the object in the scene,
re-projecting each of said one or more profile curves to a corresponding 3-D profile curve, where each 3-D profile curve is in a corresponding profile plane perpendicular to the 3-D spine, where said re-projecting includes using the obtained geometric information,
forming a 3-D model by interpolating the one or more profile segments along the 3-D spine, and
displaying the 3-D model on the display. 2. The method of claim 1, where said geometric information is two physical measurements of the object. 3. The method of claim 1, where said geometric information is an indication of the geometric shape of the 3-D surface. 4. The method of claim 3, where the indication is the geometric shape of the 3-D surface. 5. The method of claim 3, where the indication is of the cross-section of the 3-D surface. 6. The method of claim 1, where said obtaining geometric information includes obtaining user input. 7. The method of claim 6, where said generating the 3-D model includes iteratively repeating the steps of:
generating the 3-D model; accepting user input of geometric information; and displaying the 3-D model on the display, such that the user provides user input to produce an acceptable 3-D model on the display. 8. The method of claim 1, where said generating the 3-D model further includes accepting user input defining one or more of the first silhouette segment, the second silhouette segment, one or more profile segments, or the spine. 9. The method of claim 1, where said generating the 3-D model includes iteratively repeating the steps of:
generating the 3-D model, accepting user input defining one or more of a first silhouette segment of the 2-D surface, defining a second silhouette segment of the 2-D surface, defining one or more profile curves each extending from the first silhouette segment to the second silhouette segment, or identifying a spine disposed between said first segment and said second segment, and displaying the 3-D model on the display, such that the user provides user input to produce an acceptable 3-D model on the display. 10. The method of claim 1, where the image is obtained from a camera trained on the scene. 11. The method of claim 1, where said perspective model includes the camera field of view or EXIF data. 12. The method of claim 1, further comprising generating a mesh on the 3-D surface model. 13. The method of claim 12, further comprising:
accepting artwork; and applying the accepted artwork to the generated mesh. 14. The method of claim 1, where said defining the first silhouette segment on the 2-D surface includes defining the first corner point and defining the second corner point. 15. The method of claim 1, where said defining the second silhouette segment on the 2-D surface includes defining the third corner point and defining the fourth corner point. 16. The method of claim 1, where defining one or more profile curves includes defining, for the one profile segment, the point of the profile's intersection with the spine. 17. The method of claim 1, where defining one or more profile curves includes defining one profile curve. 18. The method of claim 1, where defining one or more profile curves includes defining two profile curves. 19. The method of claim 1, where said defining the spine includes defining two points of the spine. 20. The method of claim 1, where said re-projecting points from the 2-D image to the 3-D scene is accurate to within a scale factor around the camera. 21. The method of claim 1, where said re-projecting points from the 2-D image to the 3-D scene re-projects points onto a pre-determined 3-D plane. 22. The method of claim 1, where the silhouette of the 3-D model projected onto the 2-D image includes the first silhouette segment and the second silhouette segment. 23. The method of claim 12, where a texture map is automatically calculated for the said mesh. 24. A system comprising:
a computer having memory, a processor, and a display, where the memory includes instructions for the processor to perform the steps of: accepting a file containing a 2-D image of a 3-D scene into memory, where the 2-D image includes a 2-D surface that is an image of a 3-D surface of an object in the 3-D scene; generating a 3-D model to approximate the 3-D surface of the object in the 3-D scene, where said generating includes:
forming a perspective model, where the perspective model includes a mathematical transformation for: 1) projecting points from the 3-D model to the 2-D image, and 2) re-projecting points from the 2-D image to the 3-D scene,
defining a first silhouette segment of the 2-D surface,
defining a second silhouette segment of the 2-D surface,
defining one or more profile curves each extending from the first silhouette segment to the second silhouette segment,
identifying a spine disposed between said first segment and said second segment,
re-projecting the spine to a 3-D spine in the 3-D scene,
obtaining geometric information related to the object in the scene,
re-projecting each of said one or more profile curves to a corresponding 3-D profile curve, where each 3-D profile curve is in a corresponding profile plane perpendicular to the 3-D spine, where said re-projecting includes using the obtained geometric information,
forming a 3-D model by interpolating the one or more profile segments along the 3-D spine, and
displaying the 3-D model on the display. 25. The system of claim 24, where said geometric information is two physical measurements of the object. 26. The system of claim 24, where said geometric information is an indication of the geometric shape of the 3-D surface. 27. The system of claim 26, where the indication is the geometric shape of the 3-D surface. 28. The system of claim 26, where the indication is of the cross-section of the 3-D surface. 29. The system of claim 24, where said obtaining geometric information includes obtaining user input. 30. The system of claim 29, where said generating the 3-D model includes iteratively repeating the steps of:
generating the 3-D model; accepting user input of geometric information; and displaying the 3-D model on the display, such that the user provides user input to produce an acceptable 3-D model on the display. 31. The system of claim 24, where said generating the 3-D model further includes accepting user input defining one or more of the first silhouette segment, the second silhouette segment, one or more profile segments, or the spine. 32. The system of claim 24, where said generating the 3-D model includes iteratively repeating the steps of:
generating the 3-D model, accepting user input defining one or more of a first silhouette segment of the 2-D surface, defining a second silhouette segment of the 2-D surface, defining one or more profile curves each extending from the first silhouette segment to the second silhouette segment, or identifying a spine disposed between said first segment and said second segment, and displaying the 3-D model on the display, such that the user provides user input to produce an acceptable 3-D model on the display. 33. The system of claim 24, where the image is obtained from a camera trained on the scene. 34. The system of claim 24, where said perspective model includes the camera field of view or EXIF data. 35. The system of claim 24, further comprising generating a mesh on the 3-D surface model. 36. The system of claim 35, further comprising:
accepting artwork; and applying the accepted artwork to the generated mesh. 37. The system of claim 24, where said defining the first silhouette segment on the 2-D surface includes defining the first corner point and defining the second corner point. 38. The system of claim 24, where said defining the second silhouette segment on the 2-D surface includes defining the third corner point and defining the fourth corner point. 39. The system of claim 24, where defining one or more profile curves includes defining, for the one profile segment, the point of the profile's intersection with the spine. 40. The system of claim 24, where defining one or more profile curves includes defining one profile curve. 41. The system of claim 24, where defining one or more profile curves includes defining two profile curves. 42. The system of claim 24, where said defining the spine includes defining two points of the spine. 43. The system of claim 24, where said re-projecting points from the 2-D image to the 3-D scene is accurate to within a scale factor around the camera. 44. The system of claim 24, where said re-projecting points from the 2-D image to the 3-D scene re-projects points onto a pre-determined 3-D plane. 45. The system of claim 24, where the silhouette of the 3-D model projected onto the 2-D image includes the first silhouette segment and the second silhouette segment. 46. The system of claim 35, where a texture map is automatically calculated for the said mesh. | Systems and methods are described for retexturing portions of surface in a 2-D image, where the surface is an image of a 3-D object. The systems and methods analyze, with user input, the 2-D image and then, in computer memory, generate a 3-D model of the imaged surface. The surface may then be retextured, that is, for example, artwork may be added to the 2-D image in a realistic manner, taking into account a 3-D geometry in the scene of the 2-D image.1. 
A method implemented on a computer having memory, a processor, and a display, where the method includes:
accepting a file containing a 2-D image of a 3-D scene into memory, where the 2-D image includes a 2-D surface that is an image of a 3-D surface of an object in the 3-D scene; generating a 3-D model to approximate the 3-D surface of the object in the 3-D scene, where said generating includes:
forming a perspective model, where the perspective model includes a mathematical transformation for: 1) projecting points from the 3-D model to the 2-D image, and 2) re-projecting points from the 2-D image to the 3-D scene,
defining a first silhouette segment of the 2-D surface,
defining a second silhouette segment of the 2-D surface,
defining one or more profile curves each extending from the first silhouette segment to the second silhouette segment,
identifying a spine disposed between said first segment and said second segment,
re-projecting the spine to a 3-D spine in the 3-D scene,
obtaining geometric information related to the object in the scene,
re-projecting each of said one or more profile curves to a corresponding 3-D profile curve, where each 3-D profile curve is in a corresponding profile plane perpendicular to the 3-D spine, where said re-projecting includes using the obtained geometric information,
forming a 3-D model by interpolating the one or more profile segments along the 3-D spine, and
displaying the 3-D model on the display. 2. The method of claim 1, where said geometric information is two physical measurements of the object. 3. The method of claim 1, where said geometric information is an indication of the geometric shape of the 3-D surface. 4. The method of claim 3, where the indication is the geometric shape of the 3-D surface. 5. The method of claim 3, where the indication is of the cross-section of the 3-D surface. 6. The method of claim 1, where said obtaining geometric information includes obtaining user input. 7. The method of claim 6, where said generating the 3-D model includes iteratively repeating the steps of:
generating the 3-D model; accepting user input of geometric information; and displaying the 3-D model on the display, such that the user provides user input to produce an acceptable 3-D model on the display. 8. The method of claim 1, where said generating the 3-D model further includes accepting user input defining one or more of the first silhouette segment, the second silhouette segment, one or more profile segments, or the spine. 9. The method of claim 1, where said generating the 3-D model includes iteratively repeating the steps of:
generating the 3-D model, accepting user input defining one or more of a first silhouette segment of the 2-D surface, defining a second silhouette segment of the 2-D surface, defining one or more profile curves each extending from the first silhouette segment to the second silhouette segment, or identifying a spine disposed between said first segment and said second segment, and displaying the 3-D model on the display, such that the user provides user input to produce an acceptable 3-D model on the display. 10. The method of claim 1, where the image is obtained from a camera trained on the scene. 11. The method of claim 1, where said perspective model includes the camera field of view or EXIF data. 12. The method of claim 1, further comprising generating a mesh on the 3-D surface model. 13. The method of claim 12, further comprising:
accepting artwork; and applying the accepted artwork to the generated mesh. 14. The method of claim 1, where said defining the first silhouette segment on the 2-D surface includes defining the first corner point and defining the second corner point. 15. The method of claim 1, where said defining the second silhouette segment on the 2-D surface includes defining the third corner point and defining the fourth corner point. 16. The method of claim 1, where defining one or more profile curves includes defining, for the one profile segment, the point of the profile's intersection with the spine. 17. The method of claim 1, where defining one or more profile curves includes defining one profile curve. 18. The method of claim 1, where defining one or more profile curves includes defining two profile curves. 19. The method of claim 1, where said defining the spine includes defining two points of the spine. 20. The method of claim 1, where said re-projecting points from the 2-D image to the 3-D scene is accurate to within a scale factor around the camera. 21. The method of claim 1, where said re-projecting points from the 2-D image to the 3-D scene re-projects points onto a pre-determined 3-D plane. 22. The method of claim 1, where the silhouette of the 3-D model projected onto the 2-D image includes the first silhouette segment and the second silhouette segment. 23. The method of claim 12, where a texture map is automatically calculated for the said mesh. 24. A system comprising:
a computer having memory, a processor, and a display, where the memory includes instructions for the processor to perform the steps of: accepting a file containing a 2-D image of a 3-D scene into memory, where the 2-D image includes a 2-D surface that is an image of a 3-D surface of an object in the 3-D scene; generating a 3-D model to approximate the 3-D surface of the object in the 3-D scene, where said generating includes:
forming a perspective model, where the perspective model includes a mathematical transformation for: 1) projecting points from the 3-D model to the 2-D image, and 2) re-projecting points from the 2-D image to the 3-D scene,
defining a first silhouette segment of the 2-D surface,
defining a second silhouette segment of the 2-D surface,
defining one or more profile curves each extending from the first silhouette segment to the second silhouette segment,
identifying a spine disposed between said first segment and said second segment,
re-projecting the spine to a 3-D spine in the 3-D scene,
obtaining geometric information related to the object in the scene,
re-projecting each of said one or more profile curves to a corresponding 3-D profile curve, where each 3-D profile curve is in a corresponding profile plane perpendicular to the 3-D spine, where said re-projecting includes using the obtained geometric information,
forming a 3-D model by interpolating the one or more profile segments along the 3-D spine, and
displaying the 3-D model on the display. 25. The system of claim 24, where said geometric information is two physical measurements of the object. 26. The system of claim 24, where said geometric information is an indication of the geometric shape of the 3-D surface. 27. The system of claim 26, where the indication is the geometric shape of the 3-D surface. 28. The system of claim 26, where the indication is of the cross-section of the 3-D surface. 29. The system of claim 24, where said obtaining geometric information includes obtaining user input. 30. The system of claim 29, where said generating the 3-D model includes iteratively repeating the steps of:
generating the 3-D model; accepting user input of geometric information; and displaying the 3-D model on the display, such that the user provides user input to produce an acceptable 3-D model on the display. 31. The system of claim 24, where said generating the 3-D model further includes accepting user input defining one or more of the first silhouette segment, the second silhouette segment, one or more profile segments, or the spine. 32. The system of claim 24, where said generating the 3-D model includes iteratively repeating the steps of:
generating the 3-D model, accepting user input defining one or more of a first silhouette segment of the 2-D surface, defining a second silhouette segment of the 2-D surface, defining one or more profile curves each extending from the first silhouette segment to the second silhouette segment, or identifying a spine disposed between said first segment and said second segment, and displaying the 3-D model on the display, such that the user provides user input to produce an acceptable 3-D model on the display. 33. The system of claim 24, where the image is obtained from a camera trained on the scene. 34. The system of claim 24, where said perspective model includes the camera field of view or EXIF data. 35. The system of claim 24, further comprising generating a mesh on the 3-D surface model. 36. The system of claim 35, further comprising:
accepting artwork; and applying the accepted artwork to the generated mesh. 37. The system of claim 24, where said defining the first silhouette segment on the 2-D surface includes defining the first corner point and defining the second corner point. 38. The system of claim 24, where said defining the second silhouette segment on the 2-D surface includes defining the third corner point and defining the fourth corner point. 39. The system of claim 24, where defining one or more profile curves includes defining, for the one profile segment, the point of the profile's intersection with the spine. 40. The system of claim 24, where defining one or more profile curves includes defining one profile curve. 41. The system of claim 24, where defining one or more profile curves includes defining two profile curves. 42. The system of claim 24, where said defining the spine includes defining two points of the spine. 43. The system of claim 24, where said re-projecting points from the 2-D image to the 3-D scene is accurate to within a scale factor around the camera. 44. The system of claim 24, where said re-projecting points from the 2-D image to the 3-D scene re-projects points onto a pre-determined 3-D plane. 45. The system of claim 24, where the silhouette of the 3-D model projected onto the 2-D image includes the first silhouette segment and the second silhouette segment. 46. The system of claim 35, where a texture map is automatically calculated for the said mesh. | 2,600 |
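The re-projection recited in claims 21 and 44 (mapping a 2-D image point back onto a pre-determined 3-D plane) can be sketched as a ray-plane intersection, assuming a pinhole camera at the origin; the intrinsics fx, fy, cx, cy and the helper name reproject_to_plane are illustrative assumptions, not terms from the claims.

```python
def reproject_to_plane(px, py, fx, fy, cx, cy, n, d):
    """Cast the viewing ray through pixel (px, py) of a pinhole camera at
    the origin and intersect it with the plane n . X = d (camera frame)."""
    # Direction of the viewing ray for this pixel (z component fixed at 1).
    rx, ry, rz = (px - cx) / fx, (py - cy) / fy, 1.0
    denom = n[0] * rx + n[1] * ry + n[2] * rz
    if abs(denom) < 1e-12:
        raise ValueError("viewing ray is parallel to the plane")
    t = d / denom  # ray parameter at the intersection
    return (t * rx, t * ry, t * rz)

# Principal ray hits the plane z = 5 at (0, 0, 5); an off-centre pixel
# lands proportionally further out on the same plane.
centre = reproject_to_plane(320.0, 240.0, 500.0, 500.0, 320.0, 240.0, (0.0, 0.0, 1.0), 5.0)
offset = reproject_to_plane(420.0, 240.0, 500.0, 500.0, 320.0, 240.0, (0.0, 0.0, 1.0), 5.0)
```

Because the camera position is fixed only up to scale (claims 20 and 43), the recovered coordinates are likewise meaningful only up to a scale factor around the camera.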
11,049 | 11,049 | 16,335,158 | 2,672 | In an example, a method includes identifying, within data for use in printing, a first element set associated with a first print addressable area, wherein elements of the element set are each associated with a print instruction. An element may be selected from the first element set and assigned to the first print addressable area. A second print addressable area may be identified as a candidate print addressable area for error diffusion, the second print addressable area being associated with a second element set. An error associated with the selection of the element from the first element set is scaled based on a criterion and may be diffused to elements of the second element set. | 1. A method comprising:
identifying, using a processor, within data for use in printing, a first element set associated with a first print addressable area, wherein elements of the element set are each associated with a print instruction, selecting, using the processor, an element from the first element set and assigning the element to the first print addressable area; identifying, using the processor, a second print addressable area as a candidate print addressable area for error diffusion, the second print addressable area being associated with a second element set; scaling, using the processor, an error associated with the selection of the element from the first element set based on a criterion, and diffusing, using the processor, the scaled error to elements of the second element set. 2. A method according to claim 1 in which the first and second element sets comprise, respectively, first and second material coverage vectors, in which each element of the element set is associated with a value indicating a probability that the print material or print material combination identified by the element is applied to a print addressable area, wherein the criterion for scaling comprises a value associated with an element in the first material coverage vector and a value associated with the corresponding element in the second material coverage vector. 3. A method according to claim 2 in which diffusing the error comprises:
determining, using the processor, an error vector based on the first material coverage vector and the assigned element, the error vector having the first element set,
determining, using the processor, a ratio of the value associated with the element in the second material coverage vector and the value associated with the corresponding element in the first material coverage vector, and multiplying the ratio with a corresponding element of the error vector, thereby scaling the error associated with selection of the assigned element for transforming the corresponding element of the second material coverage vector, and
transforming, using the processor, the second material coverage vector to an error diffused material coverage vector by combining the scaled error and the corresponding element of the second material coverage vector, the error diffused material coverage vector having the second element set. 4. A method according to claim 3 in which transforming the second material coverage vector comprises assigning a weight to each of the values of the error vector and combining the weighted value associated with the corresponding element of the error vector and the second material coverage vector. 5. A method according to claim 1 in which diffusing the error comprises:
determining, using the processor, a third print addressable area in which the error associated with the selection of the element from the first element set is to be diffused, the third print addressable area being associated with a third element set, and
diffuse a proportion of the scaled error to the second print addressable area and a further proportion to the third print addressable area. 6. A method according to claim 5 in which the scaled error is diffused in equal proportions to the second print addressable area and to the third print addressable area. 7. A method according to claim 5 in which the scaled error is diffused in accordance with a weighting value associated with the second print addressable area and the third print addressable area, the weighting value indicative of the proportion in which the scaled error is to be diffused in the associated print addressable area. 8. A method according to claim 1 in which the second print addressable area is a spatial neighbour of the first print addressable area. 9. A processing apparatus, comprising:
a memory; a processor coupled to the memory; a material selection module stored on the memory and configured to cause the processor to:
receive data for use in printing, the data characterising a print addressable area associated with a respective element set, wherein elements of the element set are each associated with a print instruction, and
select, from a first element set, an element for a first print addressable area; and
an error diffusion module stored on the memory and configured to cause the processor to:
scale an error associated with the selection from the first element set based on a criterion, and
diffuse the scaled error to at least a second print addressable area characterised in data for use in printing and associated with a second element set, wherein the scaled error is diffused to elements of the second element set such that the elements in the second element set do not change. 10. The processing apparatus according to claim 9 in which each element of an element set is associated with a probability, the material selection module is to select the element of an element set associated with the highest probability, and the error diffusion module is to determine a ratio of the probability associated with an element of the second element set and the probability associated with the corresponding element of the first element set, and scale the error associated with the selection from the first element set based on the determined ratio. 11. The processing apparatus according to claim 10 in which the error diffusion module is to diffuse the scaled error by changing the probability associated with the element of the second element set. 12. The processing apparatus according to claim 9 in which the error diffusion module is to diffuse a proportion of the scaled error to an element set of each of a plurality of print addressable areas. 13. The processing apparatus according to claim 9, further comprising a print apparatus to carry out a print operation according to the print control data. 14. A non-transitory machine readable medium comprising instructions which, when executed by a processor, cause the processor to:
select a print instruction for each print addressable area from candidate possible print instructions associated with received print input data; and scale an error associated with selected print instruction for a first print addressable area for use to transform print instructions associated with candidate second and third print addressable areas, such that scaling of the error is based on the candidate print instructions associated with the second and third print addressable areas, and diffusion of the error varies from the second print addressable area to the third print addressable area. 15. The non-transitory machine readable medium according to claim 14 wherein the instructions further cause the processor to determine print instructions for three-dimensional printing. | 2,600 |
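The probability-ratio scaling recited in claims 2-3 (and apparatus claim 10) can be sketched as follows, a minimal sketch assuming dict-based material coverage vectors; the names select_and_diffuse and the example materials "A" and "B" are illustrative assumptions, not terms from the claims.

```python
def select_and_diffuse(current, neighbour):
    """current / neighbour: material coverage vectors, here dicts mapping a
    print material to the probability it is placed at that area."""
    # Pick the most probable element for the first area (cf. apparatus claim 10).
    chosen = max(current, key=current.get)
    # Error vector: coverage vector minus the one-hot realised selection.
    error = {m: p - (1.0 if m == chosen else 0.0) for m, p in current.items()}
    diffused = dict(neighbour)
    for m, e in error.items():
        if current[m] > 0.0:
            # Scale each error component by the ratio of the neighbour's
            # value to the current area's value before diffusing it (claim 3).
            ratio = neighbour.get(m, 0.0) / current[m]
            diffused[m] = neighbour.get(m, 0.0) + e * ratio
    return chosen, diffused

chosen, out = select_and_diffuse({"A": 0.7, "B": 0.3}, {"A": 0.5, "B": 0.5})
```

Splitting the scaled error across several neighbouring areas with per-area weights, as in claims 5-7, would amount to multiplying each diffused component by that area's weighting value.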
11,050 | 11,050 | 16,006,186 | 2,666 | Various embodiments enable computers to automatically create photographic lineups for police use and, in so doing, eliminate risks associated with subjective judgment involved in human selection of fillers for such photographic lineups. Moreover, various embodiments improve the reliability of photographic lineups by selecting images of fillers that are similar to, but not too similar to, an image of the suspect. | 1. A system for creating a photographic lineup, including an image of a suspect in a crime, the system comprising:
a communications interface configured to receive the image of the suspect; a filler module configured to:
compare the image of the suspect to a plurality of available images, and
select, from the plurality of available images, a set of filler images, the filler images being similar to, but not overly similar to, the image of the suspect; and
a lineup generation module configured to produce a photographic lineup including the image of the suspect and the filler images. 2. The system of claim 1, further comprising a lineup presentation module configured to display the photographic lineup on a display device. 3. The system of claim 2, further comprising an evidence module configured to record the selection, by a witness, of a single image from the photographic lineup. 4. The system of claim 3, wherein the evidence module is further configured to record an authentication of the selection by the witness. 5. The system of claim 1, wherein the image of the suspect is located, within the photographic lineup, at a randomly selected location with respect to the filler images. 6. The system of claim 1, wherein the image of the suspect is located, within the photographic lineup, at a specified location relative to the filler images. 7. The system of claim 1, wherein:
the system further comprises a face analysis module configured to generate, from the image of the suspect, a set of suspect facial features; and wherein the filler module is configured to compare the image of the suspect to a plurality of available images by comparing the set of suspect facial features to corresponding sets of facial features for each of the plurality of available images. 8. The system of claim 1, wherein:
the system further comprises a face analysis module configured to generate, from the image of the suspect, a set of suspect facial features; and wherein the filler module is configured to generate, for each of the plurality of available images, a confidence score indicating the similarity of the image to the image of the suspect; and wherein the filler module is configured to apply a rule to each such confidence score, the rule selecting, from among the available images, a set of filler images. 9. The system of claim 8, wherein the rule rejects a given image from among the available images if the confidence score for the given image exceeds a high threshold. 10. The system of claim 8, wherein the rule rejects a given image from among the available images if the confidence score for the given image is below a low threshold. 11. A method for operating a photographic lineup, including an image of the face of a suspect, the method comprising:
analyzing the image of the face of the suspect to develop a set of objective suspect facial features; analyzing each face in a set of potential filler images to develop, for each face in the set of filler images, a set of objective filler facial features; for each face in the set of filler images, applying a rule comparing the set of objective suspect facial features to a corresponding set of objective facial features, to select, from the set of potential filler images, a set of selected filler images; and producing a photographic lineup, the photographic lineup including the image of the face of the suspect, and the set of selected filler images. 12. The method of claim 11, wherein:
analyzing each face in a set of potential filler images further comprises developing, based on the set of filler facial features, a confidence score for each face in a set of filler images, the confidence score indicating, for each face in the set of potential filler images, a likelihood that said face is the face of the suspect; and wherein applying a rule comprises comparing the confidence score to a threshold. 13. The method of claim 12, wherein applying the rule further comprises rejecting an image when the confidence score exceeds the threshold. 14. The method of claim 12, wherein applying the rule further comprises rejecting an image when the confidence score is below the threshold. 15. The method of claim 12, wherein producing a photographic lineup comprises including a given image in the photographic lineup when the confidence score of the given image is between a low threshold and a high threshold. 16. The method of claim 11, further comprising presenting the photographic lineup to a witness. 17. The method of claim 16, further comprising receiving, from the witness, a selection of an image from the lineup. 18. A system for creating a photographic lineup, including an image of a suspect in a crime, the system comprising:
a communications interface configured to receive the image of the suspect; means for comparing the image of the suspect to a plurality of available images; means for selecting, from the plurality of available images, a set of filler images, the filler images being similar to, but not overly similar to, the image of the suspect; and means for generating a photographic lineup including the image of the suspect and the set of filler images. 19. The system of claim 18, further comprising means for displaying the photographic lineup to a witness. 20. The system of claim 19, further comprising means for receiving, from the witness, a selection by the witness of an image from the photographic lineup. | 2,600 |
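The two-threshold filler rule recited in claims 9-10 and 15 amounts to keeping candidates whose confidence score falls between a low and a high threshold. A minimal sketch, assuming hypothetical image ids and scores; the helper name select_fillers is not a term from the claims.

```python
def select_fillers(scores, low, high, k):
    """scores: {image_id: confidence that the image depicts the suspect}.
    Keep candidates that are similar to, but not overly similar to, the
    suspect: score strictly between the low and high thresholds."""
    band = [(score, img) for img, score in scores.items() if low < score < high]
    band.sort(reverse=True)  # prefer the closest of the remaining matches
    return [img for _, img in band[:k]]

# Hypothetical candidate pool: "a" is rejected as overly similar to the
# suspect (score above the high threshold), "c" as not similar enough.
fillers = select_fillers({"a": 0.98, "b": 0.75, "c": 0.10, "d": 0.60}, 0.30, 0.90, 5)
```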
11,051 | 11,051 | 15,846,559 | 2,641 | A system for use by prospective real estate customers includes a mobile software module that has been downloaded to the user's mobile device via at least one of a mobile phone network and the Internet. A server module is associated with a website that includes one or more secure pages for receiving input from the user to perform entering, displaying, updating, requesting, filtering and sorting of information. The mobile software module receives updated information from the server module through the Internet and synchronizes the information with the mobile device. The mobile software module also enables the user to: request real estate information from the server module for properties in a neighborhood determined from a location based service, a zip code or an address; and receive from the server module real estate information in response to the request. | 1. A system for use by prospective real estate customers for managing personal information including real estate information, wherein the system comprises:
a mobile software module configured to run as an application for managing real estate information on a mobile device of a user that is a prospective real estate customer, wherein the software module has been downloaded by the user to the mobile device via at least one of a mobile phone network and the Internet and is recorded on a non-transitory computer-readable medium of the mobile device; a server module associated with a website for managing personal information including real estate information; wherein the website includes one or more secure pages for receiving input from the user to perform one or more of entering, displaying, updating, requesting, filtering and sorting of information; wherein the user can visit the website through the Internet and provide a username and password associated with the user to access the secure pages of the website; wherein the mobile software module is configured to receive updated information from the server module through the Internet and to synchronize the information so that the user has access to the synchronized information both on the mobile device and on the website; and wherein the mobile software module includes programming instructions that enable the user to:
make a request to the server module for real estate information associated with a plurality of properties in a neighborhood determined from at least one of a location based service, a zip code and an address; and
receive from the server module real estate information in response to the request and relating to one or more properties within the neighborhood. 2. The system according to claim 1 wherein the real estate information received from the server module includes at least one of a picture and a graphic and wherein the mobile software module further includes programming instructions for making the received real estate information accessible to the user and sorted by real estate topic. 3. The system according to claim 1 wherein at least one of the server module and the website is in communication with one or more real estate servers to obtain real estate information requested by the user. 4. The system according to claim 1 wherein at least one of the server module and the website provides a secure storage area for storing the user's personal information received from the user. 5. The system according to claim 1 wherein the mobile software module includes programming instructions that enable the user to update the information on the website through the mobile phone. 6. A system for dissemination of real estate information to a prospective real estate customer, wherein the system comprises:
a mobile software module configured to receive real estate information from one or more real estate servers including a Multiple Listing Service (MLS) server via a server software module recorded on a computer-readable medium; wherein the mobile software module is configured to run as an application on a mobile device of the prospective real estate customer and is downloaded by the prospective real estate customer to the mobile device via at least one of a mobile phone network and the Internet and is recorded on a non-transitory computer-readable medium of the mobile device; wherein the mobile software module includes programming instructions that enable the prospective real estate customer to request from the server software module real estate information for a plurality of properties in a neighborhood determined from at least one of a location based service, a zip code and an address; wherein the server software module is in communication with the one or more real estate servers including the Multiple Listing Service (MLS) server and has programming instructions to obtain the real estate information for the plurality of properties requested by the user; wherein the server software module has programming instructions to send the obtained real estate information for the plurality of properties to the mobile software module; and wherein the mobile software module has programming instructions to receive the obtained real estate information for the plurality of properties and make it accessible to the user sorted by one or more real estate topics. 7. The system according to claim 6 wherein the obtained real estate information for the plurality of properties includes at least one of a picture and a graphic. 8. The system according to claim 6 wherein the server software module is associated with a website for managing personal information including real estate information. 9. 
The system of claim 8 wherein at least one of the server module and the website includes a secure storage area for storing personal information received from the prospective real estate customer. 10. The system according to claim 8 wherein the mobile software module includes programming instructions that enable the prospective real estate customer to update the information on the website through the mobile phone. 11. The system according to claim 8 wherein the mobile software module is configured to receive updated information from the server software module through the Internet and to synchronize the information so that the prospective real estate customer has access to the synchronized information both on the mobile device and on the website. 12. The system of claim 8 wherein the website includes one or more secure pages configured to receive input from the prospective real estate customer to perform one or more of entering, displaying, updating, requesting, filtering and sorting of information, and wherein the prospective real estate customer can visit the website through the Internet and provide a username and password associated with the prospective real estate customer to access the secure pages of the website. | A system for use by prospective real estate customers includes a mobile software module that has been downloaded to the user's mobile device via at least one of a mobile phone network and the Internet. A server module is associated with a website that includes one or more secure pages for receiving input from the user to perform entering, displaying, updating, requesting, filtering and sorting of information. The mobile software module receives updated information from the server module through the Internet and synchronizes the information with the mobile device. 
The mobile software module also enables the user to: request real estate information from the server module for properties in a neighborhood determined from a location based service, a zip code or an address; and receive from the server module real estate information in response to the request. 1. A system for use by prospective real estate customers for managing personal information including real estate information, wherein the system comprises:
a mobile software module configured to run as an application for managing real estate information on a mobile device of a user that is a prospective real estate customer, wherein the software module has been downloaded by the user to the mobile device via at least one of a mobile phone network and the Internet and is recorded on a non-transitory computer-readable medium of the mobile device; a server module associated with a website for managing personal information including real estate information; wherein the website includes one or more secure pages for receiving input from the user to perform one or more of entering, displaying, updating, requesting, filtering and sorting of information; wherein the user can visit the website through the Internet and provide a username and password associated with the user to access the secure pages of the website; wherein the mobile software module is configured to receive updated information from the server module through the Internet and to synchronize the information so that the user has access to the synchronized information both on the mobile device and on the website; and wherein the mobile software module includes programming instructions that enable the user to:
make a request to the server module for real estate information associated with a plurality of properties in a neighborhood determined from at least one of a location based service, a zip code and an address; and
receive from the server module real estate information in response to the request and relating to one or more properties within the neighborhood. 2. The system according to claim 1 wherein the real estate information received from the server module includes at least one of a picture and a graphic and wherein the mobile software module further includes programming instructions for making the received real estate information accessible to the user and sorted by real estate topic. 3. The system according to claim 1 wherein at least one of the server module and the website is in communication with one or more real estate servers to obtain real estate information requested by the user. 4. The system according to claim 1 wherein at least one of the server module and the website provides a secure storage area for storing the user's personal information received from the user. 5. The system according to claim 1 wherein the mobile software module includes programming instructions that enable the user to update the information on the website through the mobile phone. 6. A system for dissemination of real estate information to a prospective real estate customer, wherein the system comprises:
a mobile software module configured to receive real estate information from one or more real estate servers including a Multiple Listing Service (MLS) server via a server software module recorded on a computer-readable medium; wherein the mobile software module is configured to run as an application on a mobile device of the prospective real estate customer and is downloaded by the prospective real estate customer to the mobile device via at least one of a mobile phone network and the Internet and is recorded on a non-transitory computer-readable medium of the mobile device; wherein the mobile software module includes programming instructions that enable the prospective real estate customer to request from the server software module real estate information for a plurality of properties in a neighborhood determined from at least one of a location based service, a zip code and an address; wherein the server software module is in communication with the one or more real estate servers including the Multiple Listing Service (MLS) server and has programming instructions to obtain the real estate information for the plurality of properties requested by the user; wherein the server software module has programming instructions to send the obtained real estate information for the plurality of properties to the mobile software module; and wherein the mobile software module has programming instructions to receive the obtained real estate information for the plurality of properties and make it accessible to the user sorted by one or more real estate topics. 7. The system according to claim 6 wherein the obtained real estate information for the plurality of properties includes at least one of a picture and a graphic. 8. The system according to claim 6 wherein the server software module is associated with a website for managing personal information including real estate information. 9. 
The system of claim 8 wherein at least one of the server module and the website includes a secure storage area for storing personal information received from the prospective real estate customer. 10. The system according to claim 8 wherein the mobile software module includes programming instructions that enable the prospective real estate customer to update the information on the website through the mobile phone. 11. The system according to claim 8 wherein the mobile software module is configured to receive updated information from the server software module through the Internet and to synchronize the information so that the prospective real estate customer has access to the synchronized information both on the mobile device and on the website. 12. The system of claim 8 wherein the website includes one or more secure pages configured to receive input from the prospective real estate customer to perform one or more of entering, displaying, updating, requesting, filtering and sorting of information, and wherein the prospective real estate customer can visit the website through the Internet and provide a username and password associated with the prospective real estate customer to access the secure pages of the website. | 2,600 |
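The client-server flow claimed in row 11,051 (application 15,846,559) — a mobile module requests listings for a neighborhood identified by a zip code, and a server module returns them for display sorted by real estate topic — can be sketched as follows. This is a minimal illustration only; all class names, field names, and sample data are invented for this sketch and do not come from the patent.

```python
# Hypothetical sketch of the claimed mobile/server request-and-sync flow.
# Names and data are illustrative, not from application 15,846,559.

LISTINGS_DB = [
    {"zip": "94105", "address": "1 Main St", "topic": "condo"},
    {"zip": "94105", "address": "2 Oak Ave", "topic": "house"},
    {"zip": "10001", "address": "3 Elm Rd", "topic": "condo"},
]

class ServerModule:
    """Stands in for the server module that obtains listings from
    real estate servers (e.g. an MLS feed)."""
    def get_listings(self, zip_code):
        return [l for l in LISTINGS_DB if l["zip"] == zip_code]

class MobileModule:
    """Stands in for the downloaded mobile software module."""
    def __init__(self, server):
        self.server = server
        self.cache = []  # the locally synchronized copy

    def request_neighborhood(self, zip_code):
        # Request listings for a neighborhood determined from a zip code,
        # then keep a local copy sorted by real estate topic.
        self.cache = sorted(self.server.get_listings(zip_code),
                            key=lambda l: l["topic"])
        return self.cache

app = MobileModule(ServerModule())
results = app.request_neighborhood("94105")
print([l["address"] for l in results])  # → ['1 Main St', '2 Oak Ave']
```

The local `cache` attribute mirrors the claimed synchronization step: after each request the mobile side holds the same topic-sorted view the server produced.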
11,052 | 11,052 | 16,235,746 | 2,651 | A hearing device case that includes a first body portion and a second body portion that is movably connected to the first body portion is described herein. The first body portion may include an inner surface that defines a first cavity extending into the first body portion to receive a first hearing device. The second body portion may include an inner surface that defines a second cavity extending into the second body portion to receive a second hearing device. The hearing device case may also include a battery disposed in one of the first and second body portions and electronics disposed in the other of the first and second body portions. The hearing device case may be configured into a closed position to charge the hearing devices disposed therein. | 1. A case comprising:
a first body portion comprising an inner surface, wherein the inner surface of the first body portion defines a first cavity extending into the first body portion to receive a first hearing device; and a second body portion movably connected to the first body portion and comprising an inner surface, wherein the inner surface of the second body portion defines a second cavity extending into the second body portion to receive a second hearing device. 2. The case of claim 1, further comprising one or more hinges operably coupling the first and second body portions such that the first and second body portions move relative to one another. 3. The case of claim 1, further comprising
a battery disposed in one of the first and second body portions; and electronics disposed in the other of the first and second body portions. 4. The case of claim 1, wherein each of the first and second body portions comprises an outer surface opposing the inner surface, wherein the outer surface of each of the first and second body portions defines an opening extending through the outer surface to the first and second cavity, respectively. 5. The case of claim 1, wherein the first body portion comprises a charging contact located within the first cavity and the second body portion comprises a charging contact located within the second cavity. 6. The case of claim 1, wherein one of the first and second body portion comprises an interface port adapted to receive a connector. 7. A case configurable in an open position and a closed position, the case comprising:
a first body portion comprising an inner surface, wherein the inner surface of the first body portion defines a first cavity extending into the first body portion; electronics disposed within the first body portion; a second body portion movably connected to the first body portion and comprising an inner surface, wherein the inner surface of the second body portion defines a second cavity extending into the second body portion; a battery disposed within the second body portion; and one or more hinges operably coupling the first and second body portions such that the first and second body portions move relative to one another between the open position and the closed position, wherein the one or more hinges comprise a biasing element configured to bias the case in the open position and the closed position. 8. The case of claim 7, wherein the biasing element comprises a first magnet portion positioned in the one or more hinges of the first body portion and a second magnet portion positioned in the one or more hinges of the second body portion, wherein the first and second magnet portions are configured to be in equilibrium with one another only when positioned at 0 degrees and 180 degrees relative to one another. 9. The case of claim 7, wherein the one or more hinges define a first opening proximate the first body portion and a second opening proximate the second body portion, wherein the case further comprises a wire extending between the first and second body portions through the first and second openings of the one or more hinges. 10. The case of claim 7, wherein each of the first and second body portions comprises an outer surface opposing the inner surface, wherein the outer surface of each of the first and second body portions defines an opening extending through the outer surface to the first and second cavity, respectively. 11. 
The case of claim 7, wherein the first body portion comprises a charging contact located within the first cavity and the second body portion comprises a charging contact located within the second cavity, wherein the charging contact of the first body portion interacts with a first hearing device when received by the first cavity and the charging contact of the second body portion interacts with a second hearing device when received by the second cavity. 12. The case of claim 7, wherein the inner surface of the first body portion faces and is parallel with the inner surface of the second body portion when the case is in the closed position. 13. The case of claim 7, wherein the first cavity is offset from the second cavity when the case is in the closed position. 14. A system comprising:
a case comprising,
a first body portion comprising an inner surface, wherein the inner surface of the first body portion defines a first cavity extending into the first body portion; and
a second body portion movably connected to the first body portion and comprising an inner surface, wherein the inner surface of the second body portion defines a second cavity extending into the second body portion;
a first hearing device received by the first cavity of the first body portion; and a second hearing device received by the second cavity of the second body portion. 15. The system of claim 14, further comprising one or more hinges operably coupling the first and second body portions such that the first and second body portions move relative to one another. 16. The system of claim 14, further comprising:
a battery disposed in one of the first and second body portions; and electronics disposed in the other of the first and second body portions. 17. The system of claim 14, wherein each of the first and second body portions comprises an outer surface opposing the inner surface, wherein the outer surface of each of the first and second body portions defines an opening extending through the outer surface to the first and second cavity, respectively, such that the first hearing device is visible through the opening of the first body portion and the second hearing device is visible through the opening of the second body portion. 18. The system of claim 14, wherein the first hearing device is completely contained within the first body portion when received by the first body portion and the second hearing device is completely contained within the second body portion when received by the second body portion. 19. The system of claim 14, wherein the first body portion comprises a charging contact located within the first cavity and the second body portion comprises a charging contact located within the second cavity, wherein the charging contact of the first body portion interacts with the first hearing device when received by the first cavity and the charging contact of the second body portion interacts with the second hearing device when received by the second cavity. 20. The system of claim 14, wherein one of the first and second body portion comprises an interface port adapted to receive a connector. | A hearing device case that includes a first body portion and a second body portion that is movably connected to the first body portion is described herein. The first body portion may include an inner surface that defines a first cavity extending into the first body portion to receive a first hearing device. The second body portion may include an inner surface that defines a second cavity extending into the second body portion to receive a second hearing device. 
The hearing device case may also include a battery disposed in one of the first and second body portions and electronics disposed in the other of the first and second body portions. The hearing device case may be configured into a closed position to charge the hearing devices disposed therein. 1. A case comprising:
a first body portion comprising an inner surface, wherein the inner surface of the first body portion defines a first cavity extending into the first body portion to receive a first hearing device; and a second body portion movably connected to the first body portion and comprising an inner surface, wherein the inner surface of the second body portion defines a second cavity extending into the second body portion to receive a second hearing device. 2. The case of claim 1, further comprising one or more hinges operably coupling the first and second body portions such that the first and second body portions move relative to one another. 3. The case of claim 1, further comprising
a battery disposed in one of the first and second body portions; and electronics disposed in the other of the first and second body portions. 4. The case of claim 1, wherein each of the first and second body portions comprises an outer surface opposing the inner surface, wherein the outer surface of each of the first and second body portions defines an opening extending through the outer surface to the first and second cavity, respectively. 5. The case of claim 1, wherein the first body portion comprises a charging contact located within the first cavity and the second body portion comprises a charging contact located within the second cavity. 6. The case of claim 1, wherein one of the first and second body portion comprises an interface port adapted to receive a connector. 7. A case configurable in an open position and a closed position, the case comprising:
a first body portion comprising an inner surface, wherein the inner surface of the first body portion defines a first cavity extending into the first body portion; electronics disposed within the first body portion; a second body portion movably connected to the first body portion and comprising an inner surface, wherein the inner surface of the second body portion defines a second cavity extending into the second body portion; a battery disposed within the second body portion; and one or more hinges operably coupling the first and second body portions such that the first and second body portions move relative to one another between the open position and the closed position, wherein the one or more hinges comprise a biasing element configured to bias the case in the open position and the closed position. 8. The case of claim 7, wherein the biasing element comprises a first magnet portion positioned in the one or more hinges of the first body portion and a second magnet portion positioned in the one or more hinges of the second body portion, wherein the first and second magnet portions are configured to be in equilibrium with one another only when positioned at 0 degrees and 180 degrees relative to one another. 9. The case of claim 7, wherein the one or more hinges define a first opening proximate the first body portion and a second opening proximate the second body portion, wherein the case further comprises a wire extending between the first and second body portions through the first and second openings of the one or more hinges. 10. The case of claim 7, wherein each of the first and second body portions comprises an outer surface opposing the inner surface, wherein the outer surface of each of the first and second body portions defines an opening extending through the outer surface to the first and second cavity, respectively. 11. 
The case of claim 7, wherein the first body portion comprises a charging contact located within the first cavity and the second body portion comprises a charging contact located within the second cavity, wherein the charging contact of the first body portion interacts with a first hearing device when received by the first cavity and the charging contact of the second body portion interacts with a second hearing device when received by the second cavity. 12. The case of claim 7, wherein the inner surface of the first body portion faces and is parallel with the inner surface of the second body portion when the case is in the closed position. 13. The case of claim 7, wherein the first cavity is offset from the second cavity when the case is in the closed position. 14. A system comprising:
a case comprising,
a first body portion comprising an inner surface, wherein the inner surface of the first body portion defines a first cavity extending into the first body portion; and
a second body portion movably connected to the first body portion and comprising an inner surface, wherein the inner surface of the second body portion defines a second cavity extending into the second body portion;
a first hearing device received by the first cavity of the first body portion; and a second hearing device received by the second cavity of the second body portion. 15. The system of claim 14, further comprising one or more hinges operably coupling the first and second body portions such that the first and second body portions move relative to one another. 16. The system of claim 14, further comprising:
a battery disposed in one of the first and second body portions; and electronics disposed in the other of the first and second body portions. 17. The system of claim 14, wherein each of the first and second body portions comprises an outer surface opposing the inner surface, wherein the outer surface of each of the first and second body portions defines an opening extending through the outer surface to the first and second cavity, respectively, such that the first hearing device is visible through the opening of the first body portion and the second hearing device is visible through the opening of the second body portion. 18. The system of claim 14, wherein the first hearing device is completely contained within the first body portion when received by the first body portion and the second hearing device is completely contained within the second body portion when received by the second body portion. 19. The system of claim 14, wherein the first body portion comprises a charging contact located within the first cavity and the second body portion comprises a charging contact located within the second cavity, wherein the charging contact of the first body portion interacts with the first hearing device when received by the first cavity and the charging contact of the second body portion interacts with the second hearing device when received by the second cavity. 20. The system of claim 14, wherein one of the first and second body portion comprises an interface port adapted to receive a connector. | 2,600 |
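The case behavior recited in row 11,052 (application 16,235,746) — two hinged body portions with cavities, biased stable at the open and closed positions, with devices charging when the case is closed — can be modeled as simple state logic. The class and attribute names below are invented for illustration and are not from the patent.

```python
class ChargingCase:
    """Toy model of the claimed two-portion hearing device case.
    Names are illustrative, not from application 16,235,746."""
    def __init__(self):
        # Per claim 8, the hinge bias is in equilibrium only at
        # 0 degrees (closed) and 180 degrees (open).
        self.hinge_degrees = 180
        self.cavities = {"first": None, "second": None}

    def insert(self, cavity, device):
        self.cavities[cavity] = device

    def close(self):
        self.hinge_degrees = 0

    def open(self):
        self.hinge_degrees = 180

    def charging(self):
        # Per the abstract, devices charge only in the closed position.
        if self.hinge_degrees != 0:
            return []
        return [d for d in self.cavities.values() if d is not None]

case = ChargingCase()
case.insert("first", "left-ear device")
case.insert("second", "right-ear device")
case.close()
print(case.charging())  # → ['left-ear device', 'right-ear device']
```

Opening the case empties the charging list again, matching the claim language that the charging contacts interact with a device only "when received by" a cavity of the closed case.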
11,053 | 11,053 | 15,744,649 | 2,622 | A display generator, after activation of a display area assigned to at least one function in the interior of a vehicle, holographically displays virtual operator control elements assigned to the at least one function in the display area assigned to the at least one function in the interior of the vehicle. A detection and sensing device detects and senses gestures by a vehicle occupant which are to be interpreted as operator control inputs upon the virtual operator control elements being actuated in a manner desired by the vehicle occupant. A computing unit interprets the gestures as operator control inputs and initiates an implementation, corresponding to the operator control inputs, of the at least one function. | 1-14. (canceled) 15. An operator control system for operating at least one function in an interior of a vehicle, comprising:
a display generator configured to holographically display at least one virtual operator control element assigned to at least one function, after activation of a display area assigned to the at least one function in the interior of the vehicle; a detection and sensing device configured to detect and sense at least one gesture by a vehicle occupant as an operator control input upon the at least one virtual operator control element; and a computing unit configured to interpret the at least one gesture as the operator control input and to initiate an implementation, corresponding to the operator control input, of the at least one function. 16. The operator control system as claimed in claim 15,
further comprising a storage medium, and wherein the display generator is configured to realize the holographic display of the at least one virtual operator control element assigned to the at least one function in the assigned display area by displaying a hologram correspondingly illuminated and retrievable from the storage medium. 17. The operator control system as claimed in claim 15, wherein the display generator comprises a light source and an optical projection system. 18. The operator control system as claimed in claim 17, wherein the light source is a laser. 19. The operator control system as claimed in claim 17, wherein the optical projection system comprises at least one holographic photographic plate on which a hologram that images the at least one virtual operator control element is recorded. 20. The operator control system as claimed in claim 15, further comprising a controller configured to activate the display area automatically, only when a hand of the vehicle occupant, performing the at least one gesture, enters a defined operative area assigned to the display area, and thereby cause the display generator to holographically display the at least one virtual operator control element assigned to the at least one function. 21. The operator control system as claimed in claim 15, further comprising a controller configured to deactivate the display area automatically, after initiation of the implementation of the at least one function corresponding to the operator control input, so that the at least one virtual operator control element is no longer displayed by the display generator. 22. The operator control system as claimed in claim 15, further comprising a controller configured to deactivate the display area automatically, a defined time duration after activation of the display area without sensing any gesture interpreted as an operator control input. 23. 
The operator control system as claimed in claim 15, wherein the vehicle has a front roof console and a dashboard,
wherein at least one of the detection and sensing device and the display generator is disposed proximate to the front roof console of the vehicle, and wherein the display generator displays the at least one virtual operator control element assigned to the at least one function proximate to the dashboard of the vehicle. 24. A method for operating at least one function in an interior of a vehicle, comprising:
holographically displaying at least one virtual operator control element assigned to the at least one function in a display area, assigned to the at least one function, in the interior of the vehicle after activation of the display area; detecting and sensing at least one gesture by a vehicle occupant as an operator control input of the operator control element; interpreting the at least one gesture as the operator control input of the operator control element; and initiating an implementation, corresponding to the at least one operator control input, of the at least one function. 25. The method as claimed in claim 24, further comprising providing a recording containing at least one hologram of the at least one operator control element. 26. The method as claimed in claim 25, wherein said holographically displaying comprises correspondingly illuminating the at least one hologram of the at least one operator control element. 27. The method as claimed in claim 24, wherein said interpreting comprises comparing the at least one gesture with gestures stored in and retrieved from a storage unit. 28. A vehicle, comprising:
a chassis; and an operator control system configured to operate at least one function in an interior of the vehicle, including
a display generator configured to holographically display at least one virtual operator control element assigned to at least one function, after activation of a display area assigned to the at least one function in the interior of the vehicle;
a detection and sensing device configured to detect and sense at least one gesture by a vehicle occupant as an operator control input upon the at least one virtual operator control element; and
a computing unit configured to interpret the at least one gesture as the operator control input and to initiate an implementation of the at least one function corresponding to the operator control input. 29. The vehicle as claimed in claim 28,
wherein the operator control system further comprises a storage medium, and wherein the display generator is configured to realize the holographic display of the at least one virtual operator control element assigned to the at least one function in the assigned display area by displaying a hologram correspondingly illuminated and retrievable from the storage medium. 30. The vehicle as claimed in claim 28, wherein the display generator comprises a laser and an optical projection system. 31. The vehicle as claimed in claim 30, wherein the optical projection system comprises at least one holographic photographic plate on which a hologram that images the at least one virtual operator control element is recorded. 32. The vehicle as claimed in claim 28, wherein the operator control system further comprises a controller configured to activate the display area automatically, only when a hand of the vehicle occupant, performing the at least one gesture, enters a defined operative area assigned to the display area, and thereby cause the display generator to holographically display the at least one virtual operator control element assigned to the at least one function. 33. The vehicle as claimed in claim 29, wherein the operator control system further comprises a controller configured to deactivate the display area automatically at least one of a defined time duration after activation of the display area without sensing any gesture interpreted as an operator control input, and after initiation of the implementation of the at least one function corresponding to the operator control input, so that the at least one virtual operator control element is no longer displayed by the display generator. 34. The vehicle as claimed in claim 15,
further comprising a front roof console and a dashboard, and wherein at least one of the detection and sensing device and the display generator is disposed proximate to the front roof console of the vehicle, and wherein the display generator displays the at least one virtual operator control element assigned to the at least one function proximate to the dashboard of the vehicle. | A display generator, after activation of a display area assigned to at least one function in the interior of a vehicle, holographically displays virtual operator control elements assigned to the at least one function in the display area assigned to the at least one function in the interior of the vehicle. A detection and sensing device detects and senses gestures by a vehicle occupant which are to be interpreted as operator control inputs upon the virtual operator control elements being actuated in a manner desired by the vehicle occupant. A computing unit interprets the gestures as operator control inputs and initiates an implementation, corresponding to the operator control inputs, of the at least one function.1-14. (canceled) 15. An operator control system for operating at least one function in an interior of a vehicle, comprising:
a display generator configured to holographically display at least one virtual operator control element assigned to at least one function, after activation of a display area assigned to the at least one function in the interior of the vehicle; a detection and sensing device configured to detect and sense at least one gesture by a vehicle occupant as an operator control input upon the at least one virtual operator control element; and a computing unit configured to interpret the at least one gesture as the operator control input and to initiate an implementation, corresponding to the operator control input, of the at least one function. 16. The operator control system as claimed in claim 15,
further comprising a storage medium, and wherein the display generator is configured to realize the holographic display of the at least one virtual operator control element assigned to the at least one function in the assigned display area by displaying a hologram correspondingly illuminated and retrievable from the storage medium. 17. The operator control system as claimed in claim 15, wherein the display generator comprises a light source and an optical projection system. 18. The operator control system as claimed in claim 17, wherein the light source is a laser. 19. The operator control system as claimed in claim 17, wherein the optical projection system comprises at least one holographic photographic plate on which a hologram that images the at least one virtual operator control element is recorded. 20. The operator control system as claimed in claim 15, further comprising a controller configured to activate the display area automatically, only when a hand of the vehicle occupant, performing the at least one gesture, enters a defined operative area assigned to the display area, and thereby cause the display generator to holographically display the at least one virtual operator control element assigned to the at least one function. 21. The operator control system as claimed in claim 15, further comprising a controller configured to deactivate the display area automatically, after initiation of the implementation of the at least one function corresponding to the operator control input, so that the at least one virtual operator control element is no longer displayed by the display generator. 22. The operator control system as claimed in claim 15, further comprising a controller configured to deactivate the display area automatically, a defined time duration after activation of the display area without sensing any gesture interpreted as an operator control input. 23. 
The operator control system as claimed in claim 15, wherein the vehicle has a front roof console and a dashboard,
wherein at least one of the detection and sensing device and the display generator is disposed proximate to the front roof console of the vehicle, and wherein the display generator displays the at least one virtual operator control element assigned to the at least one function proximate to the dashboard of the vehicle. 24. A method for operating at least one function in an interior of a vehicle, comprising:
holographically displaying at least one virtual operator control element assigned to the at least one function in a display area, assigned to the at least one function, in the interior of the vehicle after activation of the display area; detecting and sensing at least one gesture by a vehicle occupant as an operator control input of the operator control element; interpreting the at least one gesture as the operator control input of the operator control element; and initiating an implementation, corresponding to the at least one operator control input, of the at least one function. 25. The method as claimed in claim 24, further comprising providing a recording containing at least one hologram of the at least one operator control element. 26. The method as claimed in claim 25, wherein said holographically displaying comprises correspondingly illuminating the at least one hologram of the at least one operator control element. 27. The method as claimed in claim 24, wherein said interpreting comprises comparing the at least one gesture with gestures stored in and retrieved from a storage unit. 28. A vehicle, comprising:
a chassis; and an operator control system configured to operate at least one function in an interior of the vehicle, including
a display generator configured to holographically display at least one virtual operator control element assigned to at least one function, after activation of a display area assigned to the at least one function in the interior of the vehicle;
a detection and sensing device configured to detect and sense at least one gesture by a vehicle occupant as an operator control input upon the at least one virtual operator control element; and
a computing unit configured to interpret the at least one gesture as the operator control input and to initiate an implementation of the at least one function corresponding to the operator control input. 29. The vehicle as claimed in claim 28,
wherein the operator control system further comprises a storage medium, and wherein the display generator is configured to realize the holographic display of the at least one virtual operator control element assigned to the at least one function in the assigned display area by displaying a hologram correspondingly illuminated and retrievable from the storage medium. 30. The vehicle as claimed in claim 28, wherein the display generator comprises a laser and an optical projection system. 31. The vehicle as claimed in claim 30, wherein the optical projection system comprises at least one holographic photographic plate on which a hologram that images the at least one virtual operator control element is recorded. 32. The vehicle as claimed in claim 28, wherein the operator control system further comprises a controller configured to activate the display area automatically, only when a hand of the vehicle occupant, performing the at least one gesture, enters a defined operative area assigned to the display area, and thereby cause the display generator to holographically display the at least one virtual operator control element assigned to the at least one function. 33. The vehicle as claimed in claim 29, wherein the operator control system further comprises a controller configured to deactivate the display area automatically at least one of a defined time duration after activation of the display area without sensing any gesture interpreted as an operator control input, and after initiation of the implementation of the at least one function corresponding to the operator control input, so that the at least one virtual operator control element is no longer displayed by the display generator. 34. The vehicle as claimed in claim 15,
further comprising a front roof console and a dashboard, and wherein at least one of the detection and sensing device and the display generator is disposed proximate to the front roof console of the vehicle, and wherein the display generator displays the at least one virtual operator control element assigned to the at least one function proximate to the dashboard of the vehicle. | 2,600 |
11,054 | 11,054 | 16,522,300 | 2,627 | A method is disclosed. The method includes providing an interactive device having at least one of a display screen and an audio component, pairing the interactive device with a user device, transferring data to the interactive device, and displaying images via the display screen or emitting sound via the audio component based on the transferred data. The interactive device is a non-transmitting device. | 1. A method, comprising:
providing an interactive device having at least one of a display screen and an audio component; pairing the interactive device with a user device; transferring data to the interactive device; and displaying images via the display screen or emitting sound via the audio component based on the transferred data; wherein the interactive device is a non-transmitting device; and wherein the interactive device is a non-camera-equipped device or a non-microphone-equipped device. 2. The method of claim 1, wherein the interactive device is a non-recording device. 3. The method of claim 2, wherein the interactive device is both a non-camera-equipped device and a non-microphone-equipped device. 4. The method of claim 1, wherein the interactive device receives data from the user device but does not transmit data to the user device. 5. The method of claim 1, wherein the interactive device is a Christmas ornament that is paired with the user device that is a smartphone. 6. The method of claim 1, wherein the interactive device is paired with the user device via at least one of a Wi-Fi data-streaming connection and a Bluetooth data-streaming connection. 7. The method of claim 1, wherein transferring data to the interactive device includes at least one of transferring data via a storage medium physically inserted into the interactive device and transferring data via a data-streaming connection between the paired interactive device and user device. 8. The method of claim 1, further comprising controlling the display screen and the audio component by actuating a button disposed on the interactive device. 9. The method of claim 1, further comprising controlling the display screen and the audio component by using the user device to control a controller of the interactive device. 10. The method of claim 1, wherein the interactive device is a bell-shaped Christmas ornament or a ball-shaped Christmas ornament. 11. An interactive device that is pairable with a user device, comprising:
a housing; a data storage; a controller configured to pair the interactive device with the user device, the pairing allowing one-way image and audio data transfer from the user device to the interactive device; a display screen controlled by the controller; and an audio component controlled by the controller; wherein the display screen displays images and the audio component emits sound based on the one-way image and audio data transfer; and wherein the interactive device is a non-camera-equipped device. 12. The interactive device of claim 11, wherein the interactive device is a non-data-recording passive device. 13. The interactive device of claim 12, wherein the interactive device does not transmit data to the user device. 14. The interactive device of claim 11, wherein the interactive device is both a non-camera-equipped and a non-microphone-equipped device. 15. (canceled) 16. A method, comprising:
providing a plurality of interactive devices, each interactive device having a display screen and an audio component; pairing at least one of the plurality of interactive devices with at least one of a plurality of user devices to provide a paired set of at least one paired interactive device and at least one paired user device; streaming data from the at least one paired user device to the at least one paired interactive device of the paired set; and displaying images via at least one display screen of the at least one paired interactive device or emitting sound via at least one audio component of the at least one paired interactive device based on the streamed data; wherein the at least one paired interactive device of the paired set is a non-recording device; and wherein each interactive device of the plurality of interactive devices is a non-camera-equipped device and a non-microphone-equipped device. 17. (canceled) 18. The method of claim 16, wherein the at least one paired interactive device includes a plurality of paired interactive devices that are all paired to the at least one paired user device that is a single user device. 19. The method of claim 18, wherein each of the plurality of paired interactive devices displays images and emits sound when the single user device is within a predetermined distance of the plurality of paired interactive devices. 20. The method of claim 16, wherein each of the plurality of interactive devices is one of a ball-shaped Christmas ornament and a bell-shaped Christmas ornament having a flat front portion on which one of a liquid-crystal display and a light-emitting diode display is disposed. 21. The method of claim 16, wherein:
the at least one paired interactive device includes a plurality of paired interactive devices; the at least one paired user device includes a plurality of paired user devices; and the plurality of paired user devices includes a priority paired user device that has a first pairing priority over the other of the plurality of paired user devices. 22. The method of claim 21, further comprising:
pairing all of the plurality of paired interactive devices with the priority paired user device when the priority paired user device is within a predetermined distance of the plurality of paired interactive devices; and streaming data from the priority paired user device to all of the plurality of paired interactive devices. | A method is disclosed. The method includes providing an interactive device having at least one of a display screen and an audio component, pairing the interactive device with a user device, transferring data to the interactive device, and displaying images via the display screen or emitting sound via the audio component based on the transferred data. The interactive device is a non-transmitting device.1. A method, comprising:
providing an interactive device having at least one of a display screen and an audio component; pairing the interactive device with a user device; transferring data to the interactive device; and displaying images via the display screen or emitting sound via the audio component based on the transferred data; wherein the interactive device is a non-transmitting device; and wherein the interactive device is a non-camera-equipped device or a non-microphone-equipped device. 2. The method of claim 1, wherein the interactive device is a non-recording device. 3. The method of claim 2, wherein the interactive device is both a non-camera-equipped device and a non-microphone-equipped device. 4. The method of claim 1, wherein the interactive device receives data from the user device but does not transmit data to the user device. 5. The method of claim 1, wherein the interactive device is a Christmas ornament that is paired with the user device that is a smartphone. 6. The method of claim 1, wherein the interactive device is paired with the user device via at least one of a Wi-Fi data-streaming connection and a Bluetooth data-streaming connection. 7. The method of claim 1, wherein transferring data to the interactive device includes at least one of transferring data via a storage medium physically inserted into the interactive device and transferring data via a data-streaming connection between the paired interactive device and user device. 8. The method of claim 1, further comprising controlling the display screen and the audio component by actuating a button disposed on the interactive device. 9. The method of claim 1, further comprising controlling the display screen and the audio component by using the user device to control a controller of the interactive device. 10. The method of claim 1, wherein the interactive device is a bell-shaped Christmas ornament or a ball-shaped Christmas ornament. 11. An interactive device that is pairable with a user device, comprising:
a housing; a data storage; a controller configured to pair the interactive device with the user device, the pairing allowing one-way image and audio data transfer from the user device to the interactive device; a display screen controlled by the controller; and an audio component controlled by the controller; wherein the display screen displays images and the audio component emits sound based on the one-way image and audio data transfer; and wherein the interactive device is a non-camera-equipped device. 12. The interactive device of claim 11, wherein the interactive device is a non-data-recording passive device. 13. The interactive device of claim 12, wherein the interactive device does not transmit data to the user device. 14. The interactive device of claim 11, wherein the interactive device is both a non-camera-equipped and a non-microphone-equipped device. 15. (canceled) 16. A method, comprising:
providing a plurality of interactive devices, each interactive device having a display screen and an audio component; pairing at least one of the plurality of interactive devices with at least one of a plurality of user devices to provide a paired set of at least one paired interactive device and at least one paired user device; streaming data from the at least one paired user device to the at least one paired interactive device of the paired set; and displaying images via at least one display screen of the at least one paired interactive device or emitting sound via at least one audio component of the at least one paired interactive device based on the streamed data; wherein the at least one paired interactive device of the paired set is a non-recording device; and wherein each interactive device of the plurality of interactive devices is a non-camera-equipped device and a non-microphone-equipped device. 17. (canceled) 18. The method of claim 16, wherein the at least one paired interactive device includes a plurality of paired interactive devices that are all paired to the at least one paired user device that is a single user device. 19. The method of claim 18, wherein each of the plurality of paired interactive devices displays images and emits sound when the single user device is within a predetermined distance of the plurality of paired interactive devices. 20. The method of claim 16, wherein each of the plurality of interactive devices is one of a ball-shaped Christmas ornament and a bell-shaped Christmas ornament having a flat front portion on which one of a liquid-crystal display and a light-emitting diode display is disposed. 21. The method of claim 16, wherein:
the at least one paired interactive device includes a plurality of paired interactive devices; the at least one paired user device includes a plurality of paired user devices; and the plurality of paired user devices includes a priority paired user device that has a first pairing priority over the other of the plurality of paired user devices. 22. The method of claim 21, further comprising:
pairing all of the plurality of paired interactive devices with the priority paired user device when the priority paired user device is within a predetermined distance of the plurality of paired interactive devices; and streaming data from the priority paired user device to all of the plurality of paired interactive devices. | 2,600 |
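The priority-pairing rule recited in claims 21-22 above (a priority user device takes over pairing of all interactive devices whenever it comes within a predetermined distance) can be sketched as a small selection function. This is a minimal illustration, not the patent's implementation: the tuple layout, the names, and the nearest-device fallback when no priority device is in range are all assumptions.

```python
def select_pairing(user_devices, max_distance):
    """Pick which user device the interactive devices pair to.

    user_devices: list of (name, distance, is_priority) tuples (illustrative).
    The priority device wins whenever it is within max_distance; otherwise
    the nearest in-range device is chosen (an assumed fallback).
    """
    in_range = [d for d in user_devices if d[1] <= max_distance]
    if not in_range:
        return None  # nothing close enough to pair with
    for name, _dist, is_priority in in_range:
        if is_priority:
            return name  # claims 21-22: priority device takes precedence
    return min(in_range, key=lambda d: d[1])[0]


# The priority phone wins while in range, even though another phone is closer.
print(select_pairing([("phoneA", 5, False), ("phoneB", 8, True)], 10))
```

Tightening `max_distance` so the priority device falls out of range hands pairing back to the nearest remaining device, matching the "when ... within a predetermined distance" condition of claim 22.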
11,055 | 11,055 | 12,939,969 | 2,689 | One example embodiment includes a keyboard for providing back typing with a mobile device. The keyboard includes a keypad, where the first keypad includes a set of keys for input to a mobile device and a first hinge, where the first hinge is attached to a first portion of the keypad and where the first hinge is configured to allow movement of the first portion of the keypad relative to the mobile device. The keyboard also includes a second hinge, wherein the second hinge is attached to a second portion of the keypad and wherein the second hinge is configured to allow movement of the second portion of the keypad relative to the mobile device. | 1. A system for providing an adjustable keyboard, the system comprising:
a keyboard panel, wherein the keyboard panel includes:
a first keypad, wherein the first keypad includes a first set of keys for input to a mobile device; and
a second keypad, wherein the second keypad includes a second set of keys for input to the mobile device;
a first swivel, wherein the first swivel:
is attached to the first keypad; and
is configured to allow rotation of the first keypad; and
a second swivel, wherein the second swivel:
is attached to the second keypad; and
is configured to allow rotation of the second keypad. 2. The system of claim 1, wherein the keyboard panel includes a QWERTY keyboard. 3. The system of claim 2, wherein the first keypad of the keyboard panel includes the left-hand keys of the QWERTY keyboard. 4. The system of claim 2, wherein the second keypad of the keyboard panel includes the right-hand keys of the QWERTY keyboard. 5. The system of claim 1, wherein the keyboard panel includes the core keys of a QWERTY keyboard. 6. The system of claim 5, wherein the first keypad includes the left-hand core keys of the QWERTY keyboard. 7. The system of claim 6, wherein the second keypad includes the right-hand core keys of the QWERTY keyboard. 8. The system of claim 1 further comprising a hinge, wherein the hinge is configured to allow the first keypad to move from the front of a mobile device to a rear surface of the mobile device. 9. The system of claim 8, wherein the hinge is further configured to allow the second keypad to move from the front of the mobile device to the rear surface of the mobile device. 10. The system of claim 1 further comprising a logic device. 11. The system of claim 10, wherein the logic device is configured to modify the output signal of a first depressed key if a second key is depressed prior to the release of the first key. 12. A system for use with a mobile device, wherein the system allows a user to back type on the mobile device, the system comprising:
a mobile device, wherein the mobile device includes a front surface and a rear surface; a first keypad on the rear surface, wherein the first keypad includes a first set of keys oriented in a horizontal direction; a second keypad on the rear surface, wherein the second keypad includes a second set of keys oriented in the horizontal direction; a third keypad on the rear surface, wherein the third keypad includes a third set of keys oriented in a vertical direction; a fourth keypad on the rear surface, wherein the fourth keypad includes a fourth set of keys oriented in a vertical direction. 13. The system of claim 12, wherein the first set of keys includes the same keys as the third set of keys. 14. The system of claim 13, wherein the second set of keys includes the same keys as the fourth set of keys. 15. A system for use with a mobile device, wherein the system allows a user to back type on the mobile device, the system comprising:
a mobile device, wherein the mobile device includes:
a front surface, wherein the front surface includes a display; and
a rear surface, wherein the rear surface is opposite the front surface;
a keyboard panel, wherein the keyboard panel includes:
a first keypad, wherein the first keypad includes:
a first set of keys; and
a first swivel, wherein the first swivel is configured to allow the user to rotate the first keypad in the plane of the keyboard panel; and
a second keypad, wherein the second keypad includes:
a second set of keys; and
a second swivel, wherein the second swivel is configured to allow the user to rotate the second keypad in the plane of the keyboard panel; and
a hinge, wherein the hinge is configured to allow a user to move the keyboard panel from the front of the mobile device to the back surface of the mobile device. 16. The system of claim 15, wherein the mobile device includes a cell phone. 17. The system of claim 15, wherein the mobile device includes a laptop. 18. The system of claim 15, wherein the mobile device includes a tablet personal computer. 19. The keyboard of claim 15, wherein the hinge includes a double hinge, wherein the double hinge is configured to allow the keyboard panel to lie flat on the rear surface. 20. The system of claim 15, wherein:
the first swivel is configured to allow 360 degree rotation of the first keypad; and the second swivel is configured to allow 360 degree rotation of the second keypad. | One example embodiment includes a keyboard for providing back typing with a mobile device. The keyboard includes a keypad, where the first keypad includes a set of keys for input to a mobile device and a first hinge, where the first hinge is attached to a first portion of the keypad and where the first hinge is configured to allow movement of the first portion of the keypad relative to the mobile device. The keyboard also includes a second hinge, wherein the second hinge is attached to a second portion of the keypad and wherein the second hinge is configured to allow movement of the second portion of the keypad relative to the mobile device.1. A system for providing an adjustable keyboard, the system comprising:
a keyboard panel, wherein the keyboard panel includes:
a first keypad, wherein the first keypad includes a first set of keys for input to a mobile device; and
a second keypad, wherein the second keypad includes a second set of keys for input to the mobile device;
a first swivel, wherein the first swivel:
is attached to the first keypad; and
is configured to allow rotation of the first keypad; and
a second swivel, wherein the second swivel:
is attached to the second keypad; and
is configured to allow rotation of the second keypad. 2. The system of claim 1, wherein the keyboard panel includes a QWERTY keyboard. 3. The system of claim 2, wherein the first keypad of the keyboard panel includes the left-hand keys of the QWERTY keyboard. 4. The system of claim 2, wherein the second keypad of the keyboard panel includes the right-hand keys of the QWERTY keyboard. 5. The system of claim 1, wherein the keyboard panel includes the core keys of a QWERTY keyboard. 6. The system of claim 5, wherein the first keypad includes the left-hand core keys of the QWERTY keyboard. 7. The system of claim 6, wherein the second keypad includes the right-hand core keys of the QWERTY keyboard. 8. The system of claim 1 further comprising a hinge, wherein the hinge is configured to allow the first keypad to move from the front of a mobile device to a rear surface of the mobile device. 9. The system of claim 8, wherein the hinge is further configured to allow the second keypad to move from the front of the mobile device to the rear surface of the mobile device. 10. The system of claim 1 further comprising a logic device. 11. The system of claim 10, wherein the logic device is configured to modify the output signal of a first depressed key if a second key is depressed prior to the release of the first key. 12. A system for use with a mobile device, wherein the system allows a user to back type on the mobile device, the system comprising:
a mobile device, wherein the mobile device includes a front surface and a rear surface; a first keypad on the rear surface, wherein the first keypad includes a first set of keys oriented in a horizontal direction; a second keypad on the rear surface, wherein the second keypad includes a second set of keys oriented in the horizontal direction; a third keypad on the rear surface, wherein the third keypad includes a third set of keys oriented in a vertical direction; a fourth keypad on the rear surface, wherein the fourth keypad includes a fourth set of keys oriented in a vertical direction. 13. The system of claim 12, wherein the first set of keys includes the same keys as the third set of keys. 14. The system of claim 13, wherein the second set of keys includes the same keys as the fourth set of keys. 15. A system for use with a mobile device, wherein the system allows a user to back type on the mobile device, the system comprising:
a mobile device, wherein the mobile device includes:
a front surface, wherein the front surface includes a display; and
a rear surface, wherein the rear surface is opposite the front surface;
a keyboard panel, wherein the keyboard panel includes:
a first keypad, wherein the first keypad includes:
a first set of keys; and
a first swivel, wherein the first swivel is configured to allow the user to rotate the first keypad in the plane of the keyboard panel; and
a second keypad, wherein the second keypad includes:
a second set of keys; and
a second swivel, wherein the second swivel is configured to allow the user to rotate the second keypad in the plane of the keyboard panel; and
a hinge, wherein the hinge is configured to allow a user to move the keyboard panel from the front of the mobile device to the back surface of the mobile device. 16. The system of claim 15, wherein the mobile device includes a cell phone. 17. The system of claim 15, wherein the mobile device includes a laptop. 18. The system of claim 15, wherein the mobile device includes a tablet personal computer. 19. The keyboard of claim 15, wherein the hinge includes a double hinge, wherein the double hinge is configured to allow the keyboard panel to lie flat on the rear surface. 20. The system of claim 15, wherein:
the first swivel is configured to allow 360 degree rotation of the first keypad; and the second swivel is configured to allow 360 degree rotation of the second keypad. | 2,600 |
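Claim 11 of the keyboard application above describes a logic device that modifies the output signal of a first depressed key if a second key is pressed before the first is released. One way to model that behavior is a small state machine; the `"chord"`/`"plain"` tagging here is an invented stand-in for whatever signal modification a real device would apply, and all names are illustrative.

```python
class KeyOutputLogic:
    """Sketch of the claim-11 logic device (assumed semantics)."""

    def __init__(self):
        self.held = {}      # key -> number of other presses seen while held
        self.outputs = []   # emitted (tag, key) signals

    def press(self, key):
        for k in self.held:
            self.held[k] += 1  # another key arrived before k was released
        self.held[key] = 0

    def release(self, key):
        overlapped = self.held.pop(key)
        # Modify the output if a second key was pressed before this release.
        self.outputs.append(("chord", key) if overlapped else ("plain", key))


logic = KeyOutputLogic()
logic.press("a")
logic.press("b")   # "b" pressed before "a" is released
logic.release("a")
logic.release("b")
print(logic.outputs)
```

Pressing and releasing a single key in isolation yields an unmodified `"plain"` output, which is the complementary case claim 11 leaves implicit.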
11,056 | 11,056 | 16,379,759 | 2,637 | A calibration system comprises control circuitry and waveform capture circuitry. The control circuitry selects a first calibration waveform for input to a digital predistortion circuit of a transmitter. The capture circuitry captures a first waveform output by the transmitter in response to the first calibration waveform. The control circuitry compares the first calibration waveform to the captured first waveform. The control circuitry selects a first one of a plurality of mapping circuit configurations based on the result of the comparison, wherein the mapping circuit is configured to map outputs of a plurality of delay circuits among inputs of a plurality of filter taps. The control circuitry stores the one of the mapping circuit configurations in nonvolatile memory associated with the transmitter. | 1. A method comprising:
selecting, by circuitry of a calibration system, a first calibration waveform for input to a digital predistortion circuit of a transmitter; capturing, by the circuitry of the calibration system, a first waveform output by the transmitter in response to the first calibration waveform; comparing, by the circuitry of the calibration system, the first calibration waveform to the captured first waveform; selecting, by the circuitry of the calibration system, a first one of a plurality of mapping circuit configurations for the digital predistortion circuit based on a result of the comparing, wherein the mapping circuit is configured to map outputs of a plurality of delay circuits among inputs of a plurality of filter taps; and storing, by the circuitry of the calibration system, the first one of the mapping circuit configurations in nonvolatile memory associated with the transmitter. 2. The method of claim 1, comprising:
determining, by the circuitry of the calibration system, a filter tap configuration to use based on the result of the comparing; and storing, by the circuitry of the calibration system, the determined filter tap configuration in nonvolatile memory associated with the digital predistortion circuit. 3. The method of claim 2, wherein the filter tap configuration comprises one or more lookup tables. 4. The method of claim 1, comprising selecting, by the circuitry of the calibration system, the first calibration waveform from among a plurality of calibration waveforms based on a desired characteristic of the transmitter after calibration. 5. The method of claim 4, wherein the desired characteristic is an amount of peak to average power ratio expansion introduced by the transmitter. 6. The method of claim 1, wherein the transmitter comprises a laser diode and the method comprises:
determining, by the circuitry of the calibration system based on the captured first waveform, an input current setting and/or bias voltage setting to be used for the laser diode; and storing, by the circuitry of the calibration system, the input current setting and/or bias voltage setting in nonvolatile memory associated with the transmitter. 7. The method of claim 1, wherein each of the plurality of mapping circuit configurations corresponds to a different mapping of the outputs of the plurality of delay circuits among the plurality of filter taps. 8. The method of claim 1, comprising:
selecting, by the circuitry of the calibration system, a second calibration waveform for input to the digital predistortion circuit of the transmitter; capturing, by the circuitry of the calibration system, a second waveform output by the transmitter in response to the second calibration waveform; comparing, by the circuitry of the calibration system, the second calibration waveform to the captured second waveform; selecting, by the circuitry of the calibration system, a second one of the plurality of mapping circuit configurations for the digital predistortion circuit based on the result of the comparing the second calibration waveform and the second captured waveform; and storing, by the circuitry of the calibration system, the second one of the plurality of mapping circuit configurations in nonvolatile memory associated with the transmitter. 9. The method of claim 8, wherein the first calibration waveform has different phase, frequency, and/or amplitude characteristics than the second calibration waveform. 10. The method of claim 1, comprising performing the selecting the first calibration waveform, the capturing, the comparing, the selecting the first one of the plurality of mapping circuit configurations, and the storing for each of a plurality of calibration temperatures. 11. A system comprising:
a calibration system comprising control circuitry and waveform capture circuitry, the calibration system being operable to:
select, by the control circuitry, a first calibration waveform for input to a digital predistortion circuit of a transmitter;
capture, by the waveform capture circuitry, a first waveform output by the transmitter in response to the first calibration waveform;
compare, by the control circuitry, the first calibration waveform to the captured first waveform;
select, by the control circuitry, a first one of a plurality of mapping circuit configurations for the digital predistortion circuit based on the result of the comparing, wherein the mapping circuit is configured to map outputs of a plurality of delay circuits among inputs of a plurality of filter taps; and
store, by the control circuitry, the first one of the mapping circuit configurations in nonvolatile memory associated with the transmitter. 12. The system of claim 11, wherein the control circuitry is operable to:
determine a filter tap configuration to use based on the result of the comparison; and store the determined filter tap configuration in nonvolatile memory associated with the digital predistortion circuit. 13. The system of claim 12, wherein the filter tap configuration comprises one or more lookup tables. 14. The system of claim 11, wherein the control circuitry is operable to select the first calibration waveform from among a plurality of calibration waveforms based on a desired characteristic of the transmitter after calibration. 15. The system of claim 14, wherein the desired characteristic is an amount of peak to average power ratio expansion introduced by the transmitter. 16. The system of claim 11, wherein the transmitter comprises a laser diode, and the control circuitry is operable to:
determine, based on the captured first waveform, an input current setting and/or bias voltage setting to be used for the laser diode; and store the input current setting and/or bias voltage setting in nonvolatile memory associated with the transmitter. 17. The system of claim 11, wherein each of the plurality of mapping circuit configurations corresponds to a different mapping of the outputs of the plurality of delay circuits among the plurality of filter taps. 18. The system of claim 11, wherein the calibration system is operable to:
select, by the control circuitry, a second calibration waveform for input to the digital predistortion circuit of the transmitter; capture, by the waveform capture circuitry, a second waveform output by the transmitter in response to the second calibration waveform; compare, by the control circuitry, the second calibration waveform to the captured second waveform; select, by the control circuitry, a second one of the plurality of mapping circuit configurations for the digital predistortion circuit based on the result of the comparison of the second calibration waveform and the captured second waveform; and store, by the control circuitry, the second one of the plurality of mapping circuit configurations in nonvolatile memory associated with the transmitter. 19. The system of claim 18, wherein the first calibration waveform has different phase, frequency, and/or amplitude characteristics than the second calibration waveform. 20. The system of claim 11, wherein the calibration system is operable to perform the selection of the first calibration waveform, the capture, the compare, the select of the first one of the plurality of mapping circuit configurations, and the store for each of a plurality of calibration temperatures. | 2,600 |
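The calibration procedure claimed above (select a calibration waveform, capture the transmitter output, compare, pick a mapping-circuit configuration, store it in nonvolatile memory) can be sketched as a simple search loop. This is an illustrative assumption, not the patent's implementation: the `FakeTransmitter` class, the `gain` field, and the mean-squared-error comparison are invented stand-ins for the real mapping of delay-circuit outputs to filter taps.

```python
# Hedged sketch of the claimed calibration flow. All names here
# (FakeTransmitter, "gain", nvm) are illustrative, not from the patent.

def compare_waveforms(reference, captured):
    """Mean squared error between the reference and captured samples
    (stand-in for the claimed waveform comparison)."""
    return sum((r - c) ** 2 for r, c in zip(reference, captured)) / len(reference)

class FakeTransmitter:
    """Toy transmitter: each mapping configuration is modeled as a
    gain error applied to the input waveform."""
    def __init__(self):
        self.mapping = None
        self.nvm = {}          # stand-in for nonvolatile memory
    def apply_mapping(self, config):
        self.mapping = config
    def transmit(self, waveform):
        return [self.mapping["gain"] * s for s in waveform]
    def store_nonvolatile(self, config):
        self.nvm["mapping"] = config

def calibrate(transmitter, calibration_waveform, mapping_configs):
    """Return (and persist) the mapping configuration whose transmitter
    output best matches the calibration waveform."""
    best_config, best_error = None, float("inf")
    for config in mapping_configs:
        transmitter.apply_mapping(config)
        captured = transmitter.transmit(calibration_waveform)
        error = compare_waveforms(calibration_waveform, captured)
        if error < best_error:
            best_config, best_error = config, error
    transmitter.store_nonvolatile(best_config)
    return best_config
```

Per claims 8 and 10, the same loop would be repeated for a second calibration waveform and for each calibration temperature, storing one configuration per condition.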
11,057 | 11,057 | 16,251,502 | 2,653 | The technology disclosed herein enables an endpoint system to present a visual indicator that user communications have been suspended. In a particular embodiment, a method includes exchanging audio user communications for the communication between the first endpoint system and a second endpoint system. At the first endpoint system, the method includes determining that the second endpoint system caused a suspension of the audio user communications and providing a first visual indicator of the suspension. | 1. A method for improving a first endpoint system connected on a communication, the method comprising:
exchanging audio user communications for the communication between the first endpoint system operated by a first user and a second endpoint system operated by a second user, wherein the audio user communications include voice communications between the first user and the second user; at the first endpoint system, determining that the second endpoint system caused a suspension of the audio user communications; and at the first endpoint system, providing a first visual indicator of the suspension. 2. The method of claim 1, further comprising:
at the first endpoint system, determining that the second endpoint system resumed the audio user communications; and at the first endpoint system, providing a second visual indicator that the audio user communications resumed. 3. The method of claim 2, wherein providing the second visual indicator comprises removing the first visual indicator. 4. The method of claim 1, wherein determining that the second endpoint system caused the suspension comprises:
receiving a message indicating the suspension from the second endpoint system. 5. The method of claim 4, wherein the message comprises a control message defined by one of a Session Initiation Protocol (SIP), HyperText Transfer Protocol (HTTP), or H.323. 6. The method of claim 1, wherein determining that the second endpoint system caused the suspension comprises:
receiving a message indicating the suspension from a communication control system facilitating the communication. 7. The method of claim 1, wherein determining that the second endpoint system caused the suspension comprises:
processing the audio user communications to identify hold music indicating the suspension. 8. The method of claim 1, wherein the suspension comprises one of the first endpoint system being placed on hold by the second endpoint system at the direction of the second user or the communication being in process of transfer to another endpoint system at the direction of the second user. 9. The method of claim 1, wherein the first visual indicator comprises an illuminated light on the first endpoint system. 10. The method of claim 1, wherein the first visual indicator comprises a graphical element presented on a display of the first endpoint system. 11. An apparatus implementing a first endpoint system connected on a communication, the first endpoint system comprising:
one or more computer readable storage media; a processing system operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when read and executed by the processing system, direct the processing system to:
exchange audio user communications for the communication between the first endpoint system operated by a first user and a second endpoint system operated by a second user, wherein the audio user communications include voice communications between the first user and the second user;
determine that the second endpoint system caused a suspension of the audio user communications; and
provide a first visual indicator of the suspension. 12. The apparatus of claim 11, wherein the program instructions further direct the processing system to:
determine that the second endpoint system resumed the audio user communications; and provide a second visual indicator that the audio user communications resumed. 13. The apparatus of claim 12, wherein to provide the second visual indicator, the program instructions direct the processing system to remove the first visual indicator. 14. The apparatus of claim 11, wherein to determine that the second endpoint system caused the suspension, the program instructions direct the processing system to:
receive a message indicating the suspension from the second endpoint system. 15. The apparatus of claim 11, wherein to determine that the second endpoint system caused the suspension, the program instructions direct the processing system to:
receive a message indicating the suspension from a communication control system facilitating the communication. 16. The apparatus of claim 11, wherein to determine that the second endpoint system caused the suspension, the program instructions direct the processing system to:
process the audio user communications to identify hold music indicating the suspension. 17. The apparatus of claim 11, wherein the suspension comprises one of the first endpoint system being placed on hold by the second endpoint system at the direction of the second user or the communication being in process of transfer to another endpoint system at the direction of the second user. 18. The apparatus of claim 11, further comprising:
a light; and wherein to provide the first visual indicator of the suspension, the program instructions direct the processing system to illuminate the light. 19. The apparatus of claim 11, further comprising:
a display; and wherein to provide the first visual indicator of the suspension, the program instructions direct the processing system to present a graphical element on the display. 20. One or more computer readable storage media having program instructions stored thereon for improving a first endpoint system connected on a communication, the program instructions, when executed by a processing system of the first endpoint system, direct the first endpoint system to:
exchange audio user communications for the communication between the first endpoint system operated by a first user and a second endpoint system operated by a second user, wherein the audio user communications include voice communications between the first user and the second user; determine that the second endpoint system caused a suspension of the audio user communications; and provide a first visual indicator of the suspension. | 2,600 |
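Claims 1-3 above amount to a small state tracker: when the far endpoint signals a suspension (e.g. a SIP control message or detected hold music), show a visual indicator; when it resumes, remove it. The sketch below is an assumption for illustration only: the event strings `"hold"`/`"resume"` and the show/hide callbacks are invented, not taken from the patent.

```python
# Hedged sketch of the claimed hold-indicator behavior. Event names and
# callbacks are illustrative assumptions.

class HoldIndicator:
    def __init__(self, show, hide):
        # show/hide could illuminate an LED or draw a graphical element
        # (claims 9-10); here they are plain callables.
        self._show, self._hide = show, hide
        self.on_hold = False

    def handle_event(self, event):
        """event: a far-end signal, e.g. a SIP control message (claim 5)
        or the output of a hold-music detector (claim 7)."""
        if event == "hold" and not self.on_hold:
            self.on_hold = True
            self._show()          # claim 1: provide a first visual indicator
        elif event == "resume" and self.on_hold:
            self.on_hold = False
            self._hide()          # claim 3: the second indicator is removal
```

Duplicate "hold" events are ignored, so the indicator toggles only on actual state changes.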
11,058 | 11,058 | 16,389,499 | 2,674 | Techniques and devices for holding and releasing virtual objects on a display based on input received from one or more handheld controllers are described herein. In some instances, a handheld controller includes one or more sensors, such as proximity sensors, force sensors (e.g., force resisting sensors, etc.), accelerometers, and/or other types of sensors configured to receive input from a hand of a user gripping the handheld controller. Hardware, software, and/or firmware on the controller and/or on a device coupled to the controller (e.g., a game console, a server, etc.) may receive data from these sensors and generate a representation of a corresponding gesture on a display, such as a monitor, a virtual-reality system, and/or the like. | 1. A method comprising:
receiving first data from one or more sensors of a handheld controller, the first data indicating at least one of a force or proximity of at least a portion of a hand of a user holding the handheld controller at a first time; storing, based at least in part on the first data, a first indication that a virtual object rendered on a display has been picked up by the user; presenting, on the display and based at least in part on the first indication, a virtual hand of the user holding the virtual object; receiving second data from the one or more sensors, the second data indicating at least one of a force or proximity of at least a portion of the hand at a second time; storing, based at least in part on the second data, a second indication that the virtual hand is to release the virtual object within a predetermined amount of time from the second time; receiving third data from the one or more sensors, the third data indicating a velocity of the handheld controller at a third time; and presenting, on the display, the virtual hand releasing the virtual object prior to or upon expiration of the predetermined amount of time. 2. The method as recited in claim 1, wherein the first data indicates a force of at least a portion of the hand at the first time, and further comprising:
determining that the force at the first time is greater than a force threshold; and wherein the storing the first indication comprises storing the first indication that the virtual object rendered on the display has been picked up by the user based at least in part on determining that the force at the first time is greater than the force threshold. 3. The method as recited in claim 1, wherein the first data comprises a first capacitance value measured by the one or more sensors, and further comprising:
determining that at least one of the first capacitance value or a second capacitance value that is based at least in part on the first capacitance value is greater than a capacitance threshold; and wherein the storing the first indication comprises storing the first indication that the virtual object rendered on the display has been picked up by the user based at least in part on determining that the at least one of the first capacitance value or the second capacitance value is greater than the capacitance threshold. 4. The method as recited in claim 1, wherein:
the first data indicates a force of at least a portion of the hand at the first time; the second data indicates a force of at least a portion of the hand at the second time; and the method further comprises:
determining a difference between the force at the first time and the force at the second time; and
determining that the difference is greater than a difference threshold;
and wherein storing the second indication comprises storing the second indication that the hand is to release the virtual object based at least in part on determining that the difference is greater than the difference threshold. 5. The method as recited in claim 1, wherein the second data further indicates a velocity of the handheld controller at the second time, and further comprising:
determining that the velocity of the handheld controller at the second time is greater than a velocity threshold; and wherein storing the second indication comprises storing the second indication that the hand is to release the virtual object based at least in part on determining that the velocity of the handheld controller at the second time is greater than the velocity threshold. 6. The method as recited in claim 1, wherein the second data comprises a first capacitance value measured by the one or more sensors, and further comprising:
determining that at least one of the first capacitance value or a second capacitance value that is based at least in part on the first capacitance value is not greater than a capacitance threshold; and wherein storing the second indication comprises storing the second indication that the hand is to release the virtual object based at least in part on determining that the first capacitance value is not greater than the capacitance threshold. 7. The method as recited in claim 1, wherein the second data comprises a velocity of the handheld controller at the second time, and further comprising:
determining that the velocity of the handheld controller at the third time is not greater than the velocity of the handheld controller at the second time; and wherein the presenting the virtual hand releasing the virtual object comprises presenting, on the display, the virtual hand releasing the virtual object based at least in part on determining that the velocity of the handheld controller at the third time is not greater than the velocity of the handheld controller at the second time. 8. The method as recited in claim 1, further comprising:
determining that the predetermined amount of time has expired without presenting the virtual hand releasing the virtual object; and wherein the presenting the virtual hand releasing the virtual object comprises presenting, on the display, the virtual hand releasing the virtual object based at least in part on determining that the predetermined amount of time has expired without presenting the virtual hand releasing the virtual object. 9. The method as recited in claim 1, wherein the second data comprises a velocity of the handheld controller at the second time, and further comprising:
determining that the velocity of the handheld controller at the third time is greater than the velocity of the handheld controller at the second time; storing an indication that the velocity of the handheld controller at the third time corresponds to a peak velocity; storing an indication that the velocity of the handheld controller at the second time corresponds to a floor velocity; calculating, based at least in part on the peak velocity and the floor velocity, an ending velocity; and storing an indication of the ending velocity. 10. The method as recited in claim 9, further comprising:
receiving fourth data from the one or more sensors, the fourth data indicating a velocity of the handheld controller at a fourth time; and determining that the velocity of the handheld controller at the fourth time is less than the floor velocity; and wherein the presenting the virtual hand releasing the virtual object comprises presenting the virtual hand releasing the virtual object based at least in part on determining that the velocity of the handheld controller at the fourth time is less than the floor velocity. 11. The method as recited in claim 9, further comprising:
receiving fourth data from the one or more sensors, the fourth data indicating a velocity of the handheld controller at a fourth time; determining that the velocity of the handheld controller at the fourth time is not less than the floor velocity; determining that the velocity of the handheld controller at the fourth time is less than the ending velocity; and determining that a velocity of the handheld controller has remained less than the ending velocity for greater than a threshold amount of time; and wherein the presenting the virtual hand releasing the virtual object comprises presenting the virtual hand releasing the virtual object based at least in part on determining that the velocity of the handheld controller has remained less than the ending velocity for greater than the threshold amount of time. 12. A system comprising:
one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving first data from one or more sensors of a handheld controller, the first data indicating a force or proximity of at least a portion of a hand of a user holding the handheld controller;
storing an indication that a virtual object is to be held based at least in part on the first data;
receiving second data from the one or more sensors, the second data comprising a velocity of the handheld controller; and
storing an indication that the virtual object is to be released based at least in part on the second data. 13. The system as recited in claim 12, wherein the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
determining that the velocity of the handheld controller is less than a previously measured velocity of the handheld controller; and wherein the storing the indication that the virtual object is to be released comprises storing the indication that the virtual object is to be released based at least in part on determining that the velocity of the handheld controller is less than the previously measured velocity of the handheld controller. 14. The system as recited in claim 12, wherein the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
determining that a predetermined amount of time has elapsed since storing the indication that the virtual object is to be released; and causing a display to present release of the virtual object based at least in part on determining that the predetermined amount of time has elapsed. 15. The system as recited in claim 12, wherein the velocity of the handheld controller comprises a velocity of the handheld controller at a first time, and the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving third data from the one or more sensors, the third data comprising a velocity of the handheld controller at a second time that is after the first time; determining that the velocity of the handheld controller at the second time is not greater than the velocity of the handheld controller at the first time; and causing a display to present release of the virtual object based at least in part on determining that the velocity of the handheld controller at the second time is not greater than the velocity of the handheld controller at the first time. 16. The system as recited in claim 12, wherein the velocity of the handheld controller comprises a velocity of the handheld controller at a first time, and the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving third data from the one or more sensors, the third data comprising a velocity of the handheld controller at a second time that is after the first time; determining that the velocity of the handheld controller at the second time is greater than the velocity of the handheld controller at the first time; storing an indication that the velocity of the handheld controller at the second time corresponds to a peak velocity; storing an indication that the velocity of the handheld controller at the first time corresponds to a floor velocity; calculating an ending velocity based at least in part on at least one of the peak velocity and the floor velocity; and storing an indication of the ending velocity. 17. The system as recited in claim 16, wherein the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving fourth data from the one or more sensors, the fourth data indicating a velocity of the handheld controller at a third time; determining that the velocity of the handheld controller at the third time is less than the floor velocity; and causing a display to present release of the virtual object based at least in part on determining that the velocity of the handheld controller at the third time is less than the floor velocity. 18. The system as recited in claim 16, wherein the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving fourth data from the one or more sensors, the fourth data indicating a velocity of the handheld controller at a third time; determining that the velocity of the handheld controller at the third time is not less than the floor velocity; determining that the velocity of the handheld controller at the third time is less than the ending velocity; determining that a velocity of the handheld controller has remained less than the ending velocity for greater than a threshold amount of time; and causing a display to present release of the virtual object based at least in part on determining that the velocity of the handheld controller has remained less than the ending velocity for greater than the threshold amount of time. 19. A handheld controller comprising:
a controller body; one or more sensors coupled to the controller body; one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving first data from the one or more sensors, the first data indicating a force or proximity of at least a portion of a hand of a user holding the handheld controller;
storing an indication that a virtual object is to be held based at least in part on the first data;
receiving second data from the one or more sensors, the second data comprising a velocity of the handheld controller; and
storing an indication that the virtual object is to be released based at least in part on the second data. 20. The handheld controller as recited in claim 19, wherein the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
determining that the velocity of the handheld controller is less than a previously measured velocity of the handheld controller; and wherein the storing the indication that the virtual object is to be released comprises storing the indication that the virtual object is to be released based at least in part on determining that the velocity of the handheld controller is less than the previously measured velocity of the handheld controller. 21. The handheld controller as recited in claim 19, wherein the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
determining that a predetermined amount of time has elapsed since storing the indication that the virtual object is to be released; and causing a display to present release of the virtual object based at least in part on determining that the predetermined amount of time has elapsed. 22. The handheld controller as recited in claim 19, wherein the velocity of the handheld controller comprises a velocity of the handheld controller at a first time, and the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving third data from the one or more sensors, the third data comprising a velocity of the handheld controller at a second time that is after the first time; determining that the velocity of the handheld controller at the second time is not greater than the velocity of the handheld controller at the first time; and causing a display to present release of the virtual object based at least in part on determining that the velocity of the handheld controller at the second time is not greater than the velocity of the handheld controller at the first time. 23. The handheld controller as recited in claim 19, wherein the velocity of the handheld controller comprises a velocity of the handheld controller at a first time, and the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving third data from the one or more sensors, the third data comprising a velocity of the handheld controller at a second time that is after the first time; determining that the velocity of the handheld controller at the second time is greater than the velocity of the handheld controller at the first time; storing an indication that the velocity of the handheld controller at the second time corresponds to a peak velocity; storing an indication that the velocity of the handheld controller at the first time corresponds to a floor velocity; calculating an ending velocity based at least in part on the peak velocity and the floor velocity; and storing an indication of the ending velocity. 24. The handheld controller as recited in claim 23, wherein the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving fourth data from the one or more sensors, the fourth data indicating a velocity of the handheld controller at a third time; determining that the velocity of the handheld controller at the third time is less than the floor velocity; and causing a display to present release of the virtual object based at least in part on determining that the velocity of the handheld controller at the third time is less than the floor velocity. 25. The handheld controller as recited in claim 23, wherein the one or more computer-readable media further store computer-executable instructions that, when executed, cause the one or more processors to perform acts comprising:
receiving fourth data from the one or more sensors, the fourth data indicating a velocity of the handheld controller at a third time; determining that the velocity of the handheld controller at the third time is not greater than the velocity of the handheld controller at the second time; determining that the velocity of the handheld controller at the third time is not less than the floor velocity; determining that the velocity of the handheld controller at the third time is less than the ending velocity; determining that a velocity of the handheld controller has remained less than the ending velocity for greater than a threshold amount of time; and causing a display to present release of the virtual object based at least in part on determining that the velocity of the handheld controller has remained less than the ending velocity for greater than the threshold amount of time. 26. The handheld controller as recited in claim 19, wherein the one or more sensors comprise:
an accelerometer configured to determine the velocity of the handheld controller; a force sensing resistor (FSR) configured to generate data indicating force of at least a portion of the hand of the user on the handheld controller; and a proximity sensor configured to generate data indicating proximity of at least a portion of the hand of the user on the handheld controller.

Techniques and devices for holding and releasing virtual objects on a display based on input received from one or more handheld controllers are described herein. In some instances, a handheld controller includes one or more sensors, such as proximity sensors, force sensors (e.g., force sensing resistors (FSRs), etc.), accelerometers, and/or other types of sensors configured to receive input from a hand of a user gripping the handheld controller. Hardware, software, and/or firmware on the controller and/or on a device coupled to the controller (e.g., a game console, a server, etc.) may receive data from these sensors and generate a representation of a corresponding gesture on a display, such as a monitor, a virtual-reality system, and/or the like.
an accelerometer configured to determine the velocity of the handheld controller; a force sensing resistor (FSR) configured to generate data indicating force of at least a portion of the hand of the user on the handheld controller; and a proximity sensor configured to generate data indicating proximity of at least a portion of the hand of the user on the handheld controller. | 2,600 |
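The release heuristic laid out in claims 19-25 of the record above (record a peak/floor velocity pair on a rising sample, derive an ending velocity from the pair, and release the virtual object when velocity falls below the floor or stays below the ending velocity past a threshold time) can be sketched in Python. This is a minimal illustration only: the midpoint formula for the ending velocity, the class and method names, and the threshold value are all assumptions — the claims only require the ending velocity to be "based at least in part on" the peak velocity and the floor velocity.

```python
class ReleaseDetector:
    """Sketch of the velocity-based release logic of claims 19-25.

    All names and the ending-velocity formula are assumptions; the
    claims leave the concrete computation open.
    """

    def __init__(self, hold_threshold_s=0.1):
        self.hold_threshold_s = hold_threshold_s  # assumed value
        self.peak = None
        self.floor = None
        self.ending = None
        self.below_ending_since = None

    def update(self, velocity, prev_velocity, t):
        """Return True when the virtual object should be released."""
        if velocity > prev_velocity:
            # Claim 23: a rising sample fixes the peak/floor pair and
            # the derived ending velocity (midpoint is an assumption).
            self.peak, self.floor = velocity, prev_velocity
            self.ending = (self.peak + self.floor) / 2.0
            self.below_ending_since = None
            return False
        if self.floor is not None and velocity < self.floor:
            # Claim 24: dropping below the floor releases immediately.
            return True
        if self.ending is not None and velocity < self.ending:
            # Claim 25: staying below the ending velocity for more than
            # the threshold amount of time also releases.
            if self.below_ending_since is None:
                self.below_ending_since = t
            return (t - self.below_ending_since) > self.hold_threshold_s
        self.below_ending_since = None
        return False
```

Feeding the detector a rising sample followed by a slow decay exercises both release paths (floor crossing and time-below-ending-velocity).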
11,059 | 11,059 | 16,027,966 | 2,619 | A method and apparatus for real-time personalization and interactivity for a virtual reality data system uses back-end data mining and machine learning to generate meta-data that may be used by a player to provide interactivity to the user. | 1. A virtual reality data system, comprising:
a virtual reality data backend; a plurality of virtual reality devices coupled to the virtual reality data backend, each virtual reality device having a head mounted display and a player; the virtual reality data backend having a data mining engine that retrieves a piece of virtual reality data content and processes the piece of virtual reality data content to generate interactivity meta-data for the piece of virtual reality data content and a video encoding engine that generates optimized virtual reality data that includes the virtual reality data content and the generated meta-data; and the player in each virtual reality device receiving the optimized virtual reality data from the virtual reality data backend and generating interactivity when the virtual reality data is being displayed in the head mounted display. 2. The system of claim 1, wherein the data mining engine further comprises a machine learning element that performs machine learning to improve the generated meta-data. 3. The system of claim 1, wherein the backend further comprises a storage that stores the meta-data generated for each piece of virtual reality data. 4. The system of claim 3, wherein the storage stores the optimized virtual reality data from the piece of virtual reality content. 5. The system of claim 1, wherein the player generates a trigger that initiates the interactivity based on the received meta-data received from the backend. 6. A method comprising:
retrieving a piece of virtual reality data asset in a virtual reality data backend; performing data mining on the virtual reality data asset to generate information about each scene of the virtual reality data asset; generating meta-data for each scene of the virtual reality data asset; and communicating optimized virtual reality data to a player in a virtual reality device, the optimized virtual reality data having the virtual reality data for the virtual reality data asset and the generated meta-data for each scene of the virtual reality data. 7. The method of claim 6 further comprising providing, by the player in the virtual reality device, interactive data to the user using the generated meta-data for each scene of the virtual reality data. 8. The method of claim 6 further comprising performing machine learning to improve the generated meta-data for each scene of the virtual reality data. 9. The method of claim 6, wherein retrieving the piece of virtual reality data asset further comprises retrieving the piece of virtual reality data asset from one of an external source and a storage of the virtual reality data backend. 10. The method of claim 7 further comprising generating, in the player, a trigger that initiates the interactivity based on the received meta-data received from the backend. 11. An apparatus, comprising:
a virtual reality device having a head mounted display and a low complexity computer system connected to the head mounted display, the computer system having a virtual reality data player executed by a processor of the low complexity computer system that is configured to: receive an optimized virtual reality data that includes the virtual reality data content and the generated meta-data, the generated meta-data generated by a virtual reality backend connected to the virtual reality device by retrieving a piece of virtual reality data content and processing the piece of virtual reality data content to generate interactivity meta-data for the piece of virtual reality data content; and generate interactivity on the low complexity computer system when the virtual reality data is being displayed in the head mounted display based on the received meta-data. | 2,600 |
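The backend/player split described by claims 1-11 of the record above (a data mining engine generates per-scene interactivity meta-data, an encoding engine packages content plus meta-data as "optimized virtual reality data", and the player raises a trigger to initiate interactivity during display) can be sketched as a minimal pipeline. All names (`OptimizedVRData`, `mine_scenes`, `Player`) and the metadata shape are assumptions; real data mining, machine learning, and video encoding are reduced to stand-ins.

```python
from dataclasses import dataclass

@dataclass
class OptimizedVRData:
    """Claim 1's 'optimized virtual reality data': the content plus the
    per-scene interactivity meta-data generated by the backend."""
    content: list    # one entry per scene (placeholder for video data)
    metadata: list   # one metadata dict per scene

def mine_scenes(content):
    # Stand-in for the data mining engine of claim 6: derive
    # interactivity meta-data for each scene. A trivial keyword check
    # replaces actual mining/ML here.
    return [{"scene": i, "interactive": "hotspot" in scene}
            for i, scene in enumerate(content)]

def encode(content):
    # Stand-in for the video encoding engine of claim 1: bundle the
    # content with the generated meta-data.
    return OptimizedVRData(content=content, metadata=mine_scenes(content))

class Player:
    """Player side of claims 5 and 10: generate a trigger when the scene
    being displayed carries interactivity meta-data."""
    def __init__(self, data: OptimizedVRData):
        self.data = data

    def display(self, scene_index):
        meta = self.data.metadata[scene_index]
        return meta["interactive"]  # the claim-5 trigger

data = encode(["intro", "hotspot: choose a door", "credits"])
player = Player(data)
```

The design point the claims emphasize is that the heavy work (mining, ML, encoding) stays on the backend, so the "low complexity computer system" of claim 11 only consumes pre-generated meta-data.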
11,060 | 11,060 | 15,551,898 | 2,622 | An apparatus for remote controlled physical activity monitoring, the apparatus comprising: at least one orientation measurer, wearable on at least one body part of a user, configured to measure orientation of a body part wearing the orientation measurer during a physical activity of the user, at least one pressure meter, wearable on at least one body part of the user, configured to measure pressure applied by muscle of a body part wearing the pressure meter during the physical activity of the user, a computer processor, associated with the orientation measurer and pressure meter, configured to derive monitoring control data from the measured orientation and pressure, and a data transmitter, associated with the computer processor, configured to transmit the monitoring control data to a physical activity monitoring device, and thereby to remotely control a monitoring of the physical activity of the user by the physical activity monitoring device. | 1.-23. (canceled) 24. An apparatus for remote controlled physical activity monitoring, the apparatus comprising:
at least one pressure meter, wearable on at least one body part of a user, configured to measure pressure applied by muscle of a body part wearing said pressure meter during a physical activity of the user; a computer processor, associated with said pressure meter, configured to derive monitoring control data from the measured pressure; and a data transmitter, associated with said computer processor, configured to transmit the monitoring control data to a physical activity monitoring device, and thereby, to remotely control a monitoring of the physical activity of the user by the physical activity monitoring device. 25. The apparatus of claim 24, further comprising at least one orientation measurer, wearable on at least one body part of a user, configured to measure orientation of a body part wearing said orientation measurer during the physical activity of the user, wherein said computer processor is further configured to derive the monitoring control data from the measured orientation and the measured pressure. 26. The apparatus of claim 24, further comprising said physical activity monitoring device, wherein said physical activity monitoring device comprises a camera and a controller, and said controller is configured to control an operation of said camera based on the monitoring control data. 27. The apparatus of claim 24, further comprising said physical activity monitoring device, wherein said physical activity monitoring device comprises a vehicle and a controller and said controller is configured to maneuver said vehicle based on the monitoring control data. 28. The apparatus of claim 24, wherein said pressure meters comprise at least two pressure meters, and said computer processor is further configured to compare a measurement of a first one of said pressure meters with a measurement of a second one of said pressure meters, for deriving the monitoring control data. 29. 
The apparatus of claim 25, wherein said orientation measurers comprise at least two orientation measurers, and said computer processor is further configured to compare a measurement of a first one of said orientation measurers with a measurement of a second one of said orientation measurers, for deriving monitoring control data. 30. The apparatus of claim 24, where said computer processor is further configured to translate a pressure change measured by at least one of said pressure meters into operation data included in the derived monitoring control data. 31. The apparatus of claim 25, where said computer processor is further configured to translate an angular orientation change measured by at least one of said orientation measurers into operation data included in the derived monitoring control data. 32. The apparatus of claim 25, where said computer processor is further configured to translate a movement in a predefined direction, measured by at least one of said orientation measurers, into operation data included in the derived monitoring control data. 33. An apparatus for remote controlled physical activity monitoring, the apparatus comprising:
at least one orientation measurer, wearable on at least one body part of a user, configured to measure orientation of a body part wearing said orientation measurer during a physical activity of the user; a computer processor, associated with said orientation measurer, configured to derive monitoring control data from the measured orientation; and a data transmitter, associated with said computer processor, configured to transmit the monitoring control data to a physical activity monitoring device, and thereby, to remotely control a monitoring of the physical activity of the user by the physical activity monitoring device. 34. A method for remote controlled physical activity monitoring, the method comprising:
measuring pressure applied by muscle of a body part of a user during a physical activity of the user; deriving monitoring control data from the measured pressure; and transmitting the derived monitoring control data to a physical activity monitoring device, and thereby remote controlling a monitoring of the physical activity of the user by the physical activity monitoring device. 35. The method of claim 34, further comprising: measuring orientation of a body part of the user during the physical activity of the user; and deriving the monitoring control data from the measured orientation and the measured pressure. 36. The method of claim 34, further comprising controlling an operation of a camera of the physical activity monitoring device based on the monitoring control data. 37. The method of claim 34, further comprising maneuvering a vehicle of the physical activity monitoring device based on the monitoring control data. 38. The method of claim 34, further comprising comparing a measurement of a first pressure meter with a measurement of a second pressure meter, for deriving the monitoring control data. 39. The method of claim 35, further comprising comparing a measurement of a first orientation measurer with a measurement of a second orientation measurer, for deriving the monitoring control data. 40. The method of claim 34, further comprising translating a measured pressure change into operation data included in the derived monitoring control data. 41. The method of claim 35, further comprising translating a measured angular orientation change into operation data included in the derived monitoring control data. 42. The method of claim 35, further comprising translating a measured movement in a predefined direction into operation data included in the derived monitoring control data. 43. A method for remote controlled physical activity monitoring, the method comprising:
measuring orientation of a body part of a user during a physical activity of the user; deriving monitoring control data from the measured orientation; and transmitting the derived monitoring control data to a physical activity monitoring device, and thereby remote controlling a monitoring of the physical activity of the user by the physical activity monitoring device. | 2,600 |
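Claims 30-32 and 40-42 of the record above translate a measured pressure change, angular-orientation change, or movement in a predefined direction into "operation data" included in the monitoring control data (which claims 26 and 36 then use to control a camera). A hedged sketch of such a translation, assuming hypothetical command names (`zoom`, `pan_deg`), dead-bands, and gains that the claims leave entirely open:

```python
def derive_control_data(pressure_delta, yaw_delta_deg,
                        pressure_gain=0.5, yaw_gain=1.0):
    """Translate sensor deltas into operation data (claims 30-31, 40-41).

    Command names, dead-band widths, and gains are illustrative
    assumptions, not taken from the source.
    """
    commands = {}
    if abs(pressure_delta) > 0.1:        # assumed pressure dead-band
        # Pressure change -> camera zoom command (hypothetical mapping).
        commands["zoom"] = pressure_gain * pressure_delta
    if abs(yaw_delta_deg) > 1.0:         # assumed angular dead-band
        # Angular orientation change -> camera pan command.
        commands["pan_deg"] = yaw_gain * yaw_delta_deg
    return commands
```

The dead-bands keep sensor noise during the physical activity from generating spurious operation data; only deliberate changes become commands.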
11,061 | 11,061 | 15,384,453 | 2,689 | When multiple readers for RF transponders have to be placed in close proximity, such as in adjacent lanes of a highway toll barrier, they can be set to operate at different frequencies. When signals from two adjacent ones of the readers interfere, the resulting signal includes interference terms whose frequencies equal the sum of the reader frequencies and the difference between the reader frequencies. To remove such interference terms while passing the desired terms, a tag includes a low-pass or other frequency-selective filter. | 1-16. (canceled) 17. A system comprising:
a first reader to transmit a first interrogation signal to a transponder at a first frequency and to receive a response signal from the transponder at the first frequency; a second reader to transmit a second interrogation signal to the transponder at a second frequency, wherein said first and second interrogation signals create intermodulation products that cause interference at the transponder; the transponder comprising:
an antenna;
a detector in electrical communication with the antenna; and
a frequency-selective filter in electrical communication with the detector to mitigate the interference. 18. The system of claim 17, wherein the detector comprises a detector diode. 19. The system of claim 17, wherein the frequency selective filter comprises an RC filter. 20. The system of claim 17, wherein the frequency selective filter comprises a low pass filter. 21. The system of claim 17, wherein the frequency selective filter comprises a band pass filter. 22. The system of claim 17, wherein the frequency selective filter comprises a high pass filter. 23. The system of claim 18, wherein the frequency selective filter comprises a low pass filter. 24. The system of claim 18, wherein the frequency selective filter comprises a band pass filter. 25. The system of claim 18, wherein the frequency selective filter comprises a high pass filter. 26. A system comprising:
a first reader to transmit a first interrogation signal to a transponder at a first frequency and receive a response signal from the transponder at a third frequency; a second reader to transmit a second interrogation signal to the transponder at a second frequency, wherein said first interrogation signal or response signal and the second interrogation signal create intermodulation products that cause interference at the transponder; the transponder comprising:
an antenna;
a detector in electrical communication with the antenna;
a frequency-selective filter in electrical communication with the detector; and
a signal processing circuit in electrical communication with the frequency-selective filter to mitigate the interference. 27. The system of claim 26, wherein the detector comprises a detector diode. 28. The system of claim 26, wherein the frequency selective filter comprises and RC filter. 29. The system of claim 26, wherein the frequency selective filter comprises a low pass filter. 30. The system of claim 26, wherein the frequency selective filter comprises a band pass filter. 31. The system of claim 26, wherein the frequency selective filter comprises a high pass filter. 32. The system of claim 27, wherein the frequency selective filter comprises a low pass filter. 33. The system of claim 27, wherein the frequency selective filter comprises a band pass filter. 34. The system of claim 27, wherein the frequency selective filter comprises a high pass filter. 35. A system comprising:
a first reader to transmit a first interrogation signal to a transponder at a first frequency and to receive a response signal from the transponder at the first frequency; an interference source signal received at the transponder at a second frequency, wherein said first interrogation signal and the interference source signal create intermodulation products that cause interference at the transponder; the transponder comprising:
an antenna;
a detector in electrical communication with the antenna; and
a frequency-selective filter in electrical communication with the detector to mitigate the interference. 35. The system of claim 35, wherein the detector comprises a detector diode. 36. The system of claim 35, wherein the frequency selective filter comprises an RC filter. 37. The system of claim 35, wherein the frequency selective filter comprises a low pass filter. 38. The system of claim 35, wherein the frequency selective filter comprises a band pass filter. 39. The system of claim 35, wherein the frequency selective filter comprises a high pass filter. 40. The system of claim 36, wherein the frequency selective filter comprises a low pass filter. 41. The system of claim 36, wherein the frequency selective filter comprises a band pass filter. 42. The system of claim 36, wherein the frequency selective filter comprises a high pass filter. 43. A transponder comprising:
an antenna; a detector in electrical communication with the antenna; a frequency-selective filter in electrical communication with the detector to mitigate intermodulation products created by a reader supplying a first interrogation signal to a transponder at a first frequency and an interference source signal received at the transponder at a second frequency. 44. The transponder of claim 43, wherein the detector comprises a detector diode. 45. The transponder of claim 43, wherein the frequency selective filter comprises and RC filter. 46. The transponder of claim 43, wherein the frequency selective filter comprises a low pass filter. 47. The transponder of claim 43, wherein the frequency selective filter comprises a band pass filter. 48. The transponder of claim 43, wherein the frequency selective filter comprises a high pass filter. 49. The transponder of claim 44, wherein the frequency selective filter comprises a low pass filter. 50. The transponder of claim 44, wherein the frequency selective filter comprises a band pass filter. 51. The transponder of claim 44, wherein the frequency selective filter comprises a high pass filter. | When multiple readers for RF transponders have to be placed in close proximity, such as in adjacent lanes of a highway toll barrier, they can be set to operate at different frequencies. When signals from two adjacent ones of the readers interfere, the resulting signal includes interference terms whose frequencies equal the sum of the reader frequencies and the difference between the reader frequencies. To remove such interference terms while passing the desired terms, a tag includes a low-pass or other frequency-selective filter.1-16. (canceled) 17. A system comprising:
a first reader to transmit a first interrogation signal to a transponder at a first frequency and to receive a response signal from the transponder at the first frequency; a second reader to transmit a second interrogation signal to the transponder at a second frequency, wherein said first and second interrogation signals create intermodulation products that cause interference at the transponder; the transponder comprising:
an antenna;
a detector in electrical communication with the antenna; and
a frequency-selective filter in electrical communication with the detector to mitigate the interference. 18. The system of claim 17, wherein the detector comprises a detector diode. 19. The system of claim 17, wherein the frequency selective filter comprises an RC filter. 20. The system of claim 17, wherein the frequency selective filter comprises a low pass filter. 21. The system of claim 17, wherein the frequency selective filter comprises a band pass filter. 22. The system of claim 17, wherein the frequency selective filter comprises a high pass filter. 23. The system of claim 18, wherein the frequency selective filter comprises a low pass filter. 24. The system of claim 18, wherein the frequency selective filter comprises a band pass filter. 25. The system of claim 18, wherein the frequency selective filter comprises a high pass filter. 26. A system comprising:
a first reader to transmit a first interrogation signal to a transponder at a first frequency and receive a response signal from the transponder at a third frequency; a second reader to transmit a second interrogation signal to the transponder at a second frequency, wherein said first interrogation signal or response signal and the second interrogation signal create intermodulation products that cause interference at the transponder; the transponder comprising:
an antenna;
a detector in electrical communication with the antenna;
a frequency-selective filter in electrical communication with the detector; and
a signal processing circuit in electrical communication with the frequency-selective filter to mitigate the interference. 27. The system of claim 26, wherein the detector comprises a detector diode. 28. The system of claim 26, wherein the frequency selective filter comprises an RC filter. 29. The system of claim 26, wherein the frequency selective filter comprises a low pass filter. 30. The system of claim 26, wherein the frequency selective filter comprises a band pass filter. 31. The system of claim 26, wherein the frequency selective filter comprises a high pass filter. 32. The system of claim 27, wherein the frequency selective filter comprises a low pass filter. 33. The system of claim 27, wherein the frequency selective filter comprises a band pass filter. 34. The system of claim 27, wherein the frequency selective filter comprises a high pass filter. 35. A system comprising:
a first reader to transmit a first interrogation signal to a transponder at a first frequency and to receive a response signal from the transponder at the first frequency; an interference source signal received at the transponder at a second frequency, wherein said first interrogation signal and the interference source signal create intermodulation products that cause interference at the transponder; the transponder comprising:
an antenna;
a detector in electrical communication with the antenna; and
a frequency-selective filter in electrical communication with the detector to mitigate the interference. 35. The system of claim 35, wherein the detector comprises a detector diode. 36. The system of claim 35, wherein the frequency selective filter comprises an RC filter. 37. The system of claim 35, wherein the frequency selective filter comprises a low pass filter. 38. The system of claim 35, wherein the frequency selective filter comprises a band pass filter. 39. The system of claim 35, wherein the frequency selective filter comprises a high pass filter. 40. The system of claim 36, wherein the frequency selective filter comprises a low pass filter. 41. The system of claim 36, wherein the frequency selective filter comprises a band pass filter. 42. The system of claim 36, wherein the frequency selective filter comprises a high pass filter. 43. A transponder comprising:
an antenna; a detector in electrical communication with the antenna; a frequency-selective filter in electrical communication with the detector to mitigate intermodulation products created by a reader supplying a first interrogation signal to a transponder at a first frequency and an interference source signal received at the transponder at a second frequency. 44. The transponder of claim 43, wherein the detector comprises a detector diode. 45. The transponder of claim 43, wherein the frequency selective filter comprises an RC filter. 46. The transponder of claim 43, wherein the frequency selective filter comprises a low pass filter. 47. The transponder of claim 43, wherein the frequency selective filter comprises a band pass filter. 48. The transponder of claim 43, wherein the frequency selective filter comprises a high pass filter. 49. The transponder of claim 44, wherein the frequency selective filter comprises a low pass filter. 50. The transponder of claim 44, wherein the frequency selective filter comprises a band pass filter. 51. The transponder of claim 44, wherein the frequency selective filter comprises a high pass filter. | 2,600
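The abstract above describes second-order intermodulation: two reader tones at f1 and f2 produce interference terms at f1 + f2 and |f1 − f2|, which a frequency-selective filter after the detector can reject. A minimal numeric sketch of that arithmetic (the 915/918 MHz reader frequencies and 1 MHz cutoff are illustrative assumptions, not values from the patent):

```python
def intermod_products(f1, f2):
    """Second-order intermodulation terms of two tones: sum and difference."""
    return {"sum": f1 + f2, "difference": abs(f1 - f2)}

def low_pass(components, cutoff):
    """Keep only the components an ideal low-pass filter would pass."""
    return {name: f for name, f in components.items() if f <= cutoff}

# Two adjacent toll-lane readers (illustrative frequencies, in Hz).
products = intermod_products(915e6, 918e6)   # sum: 1.833 GHz, difference: 3 MHz
# With the desired baseband assumed to sit below a 1 MHz cutoff, both
# intermodulation terms fall out of band and are rejected by the filter.
passed = low_pass(products, cutoff=1e6)      # -> {}
```

The same check with a band-pass or high-pass characteristic (claims 47–48) would simply swap the predicate in `low_pass`.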
11,062 | 11,062 | 15,669,847 | 2,689 | Systems, devices, and methods for managing a premises management system are described. A method may comprise initiating a first communication session with a premises device and using the first communication session to transmit a command to the premises device by a gateway device. The command may be associated with event data associated with a premises. The method may further comprise initiating a second communication session with the gateway device and using the second communication session to transmit the event data to the gateway device by the premises device. | 1. A system comprising:
a premises device located at a premises; and a gateway device located at the premises and configured to:
determine a command for the premises device, wherein the command is associated with event data, wherein the event data is associated with the premises;
initiate, based on the command, a first communication session with the premises device;
transmit, using the first communication session and to the premises device, the command; and
wherein the premises device is configured to:
determine, based on the command, the event data;
initiate, based on the determination of the event data, a second communication session with the gateway device; and
transmit, using the second communication session and to the gateway device, the event data. 2. The system of claim 1, wherein the premises device comprises at least one of a camera device, a security system device, an automation device, or a personal computing device. 3. The system of claim 1, wherein the premises device is further configured to transmit, to the gateway device, an indication of receipt of the command; and
wherein the gateway device is further configured to, based on the indication of receipt of the command, end the first communication session. 4. The system of claim 1, wherein the gateway device is further configured to:
determine an idle state of the first communication session; and end, based on the determination of the idle state, the first communication session. 5. The system of claim 4, wherein the premises device is further configured to:
determine an idle state of the second communication session; and end, based on the determination of the idle state, the second communication session. 6. The system of claim 1, wherein the gateway device is further configured to:
determine a failure of the first communication session; and re-initiate, based on the determination of the failure, the first communication session. 7. The system of claim 1, wherein at least one of the initiating the first communication session or the initiating the second communication session comprises transmitting a request for a connection, wherein the premises device and the gateway device are configured to authenticate, in response to receiving the request, the request. 8. The system of claim 1, wherein the premises device is further configured to store the event data, wherein the transmitting the event data comprises transmitting, based on the command, the stored event data. 9. The system of claim 8, wherein the command comprises a request for the stored event data. 10. A method comprising:
determining, by a gateway device, a command for a premises device, wherein the gateway device is located at a premises, wherein the command is associated with event data, and wherein the event data is associated with the premises; initiating, based on the command, a first communication session with the premises device; transmitting, using the first communication session and to the premises device, the command; determining, based on the command, the event data; initiating, based on the determination of the event data, a second communication session with the gateway device; and transmitting, using the second communication session and to the gateway device, the event data. 11. The method of claim 10, wherein the initiating the first communication session comprises:
requesting to connect to the premises device; and negotiating an encryption protocol between the premises device and the gateway device. 12. The method of claim 11, wherein the transmitting the command comprises transmitting, using the encryption protocol, the command. 13. The method of claim 10, wherein the initiating the second communication session is further based on a determination that the first communication session has closed. 14. A device comprising:
one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to:
receive, from a gateway device and via a first communication session, a command, wherein the gateway device is located at a premises, wherein the command is associated with event data, and wherein the event data is associated with the premises, and wherein the first communication session was initiated by the gateway device;
determine, based on the command, the event data;
initiate, based on the determination of the event data, a second communication session with the gateway device; and
transmit, using the second communication session and to the gateway device, the event data. 15. The device of claim 14, wherein communications via the first communication session use at least one of a hypertext transfer protocol (HTTP) protocol, a hypertext transfer protocol secure (HTTPS) protocol, a transport layer security (TLS) protocol, and a transmission control protocol (TCP) protocol. 16. The device of claim 14, wherein the event data comprises at least one of a video or an image. 17. The device of claim 14, wherein the command comprises a command to at least one of generate, collect, or determine the event data. 18. The device of claim 14, wherein at least one of the first communication session and the second communication session comprises a persistent connection. 19. The device of claim 14, wherein the instructions, when executed by the one or more processors, further cause the device to transmit, using the first communication session, an indication of receipt of the command. 20. The device of claim 14, wherein the instructions, when executed by the one or more processors, further cause the device to negotiate an encryption with the gateway device, and
wherein transmitting the event data comprises transmitting the event data using the encryption. | Systems, devices, and methods for managing a premises management system are described. A method may comprise initiating a first communication session with a premises device and using the first communication session to transmit a command to the premises device by a gateway device. The command may be associated with event data associated with a premises. The method may further comprise initiating a second communication session with the gateway device and using the second communication session to transmit the event data to the gateway device by the premises device.1. A system comprising:
a premises device located at a premises; and a gateway device located at the premises and configured to:
determine a command for the premises device, wherein the command is associated with event data, wherein the event data is associated with the premises;
initiate, based on the command, a first communication session with the premises device;
transmit, using the first communication session and to the premises device, the command; and
wherein the premises device is configured to:
determine, based on the command, the event data;
initiate, based on the determination of the event data, a second communication session with the gateway device; and
transmit, using the second communication session and to the gateway device, the event data. 2. The system of claim 1, wherein the premises device comprises at least one of a camera device, a security system device, an automation device, or a personal computing device. 3. The system of claim 1, wherein the premises device is further configured to transmit, to the gateway device, an indication of receipt of the command; and
wherein the gateway device is further configured to, based on the indication of receipt of the command, end the first communication session. 4. The system of claim 1, wherein the gateway device is further configured to:
determine an idle state of the first communication session; and end, based on the determination of the idle state, the first communication session. 5. The system of claim 4, wherein the premises device is further configured to:
determine an idle state of the second communication session; and end, based on the determination of the idle state, the second communication session. 6. The system of claim 1, wherein the gateway device is further configured to:
determine a failure of the first communication session; and re-initiate, based on the determination of the failure, the first communication session. 7. The system of claim 1, wherein at least one of the initiating the first communication session or the initiating the second communication session comprises transmitting a request for a connection, wherein the premises device and the gateway device are configured to authenticate, in response to receiving the request, the request. 8. The system of claim 1, wherein the premises device is further configured to store the event data, wherein the transmitting the event data comprises transmitting, based on the command, the stored event data. 9. The system of claim 8, wherein the command comprises a request for the stored event data. 10. A method comprising:
determining, by a gateway device, a command for a premises device, wherein the gateway device is located at a premises, wherein the command is associated with event data, and wherein the event data is associated with the premises; initiating, based on the command, a first communication session with the premises device; transmitting, using the first communication session and to the premises device, the command; determining, based on the command, the event data; initiating, based on the determination of the event data, a second communication session with the gateway device; and transmitting, using the second communication session and to the gateway device, the event data. 11. The method of claim 10, wherein the initiating the first communication session comprises:
requesting to connect to the premises device; and negotiating an encryption protocol between the premises device and the gateway device. 12. The method of claim 11, wherein the transmitting the command comprises transmitting, using the encryption protocol, the command. 13. The method of claim 10, wherein the initiating the second communication session is further based on a determination that the first communication session has closed. 14. A device comprising:
one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to:
receive, from a gateway device and via a first communication session, a command, wherein the gateway device is located at a premises, wherein the command is associated with event data, and wherein the event data is associated with the premises, and wherein the first communication session was initiated by the gateway device;
determine, based on the command, the event data;
initiate, based on the determination of the event data, a second communication session with the gateway device; and
transmit, using the second communication session and to the gateway device, the event data. 15. The device of claim 14, wherein communications via the first communication session use at least one of a hypertext transfer protocol (HTTP) protocol, a hypertext transfer protocol secure (HTTPS) protocol, a transport layer security (TLS) protocol, and a transmission control protocol (TCP) protocol. 16. The device of claim 14, wherein the event data comprises at least one of a video or an image. 17. The device of claim 14, wherein the command comprises a command to at least one of generate, collect, or determine the event data. 18. The device of claim 14, wherein at least one of the first communication session and the second communication session comprises a persistent connection. 19. The device of claim 14, wherein the instructions, when executed by the one or more processors, further cause the device to transmit, using the first communication session, an indication of receipt of the command. 20. The device of claim 14, wherein the instructions, when executed by the one or more processors, further cause the device to negotiate an encryption with the gateway device, and
wherein transmitting the event data comprises transmitting the event data using the encryption. | 2,600
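The exchange in claim 1 — the gateway initiating a first session to deliver a command, and the premises device then initiating a second session back to return the event data — can be sketched as below. All class and method names are hypothetical; the claims do not prescribe an implementation:

```python
class Gateway:
    """Gateway located at the premises; initiates the first session."""

    def __init__(self):
        self.received = None

    def request_event_data(self, device):
        # First communication session: gateway -> device, carrying the command.
        device.receive_command(self, command="get_event_data")

    def receive_event_data(self, event_data):
        # Arrives over the second session, which the device initiates.
        self.received = event_data


class PremisesDevice:
    """Premises device (e.g., a camera) holding stored event data."""

    def __init__(self, stored_event_data):
        self.stored_event_data = stored_event_data

    def receive_command(self, gateway, command):
        event_data = self.determine_event_data(command)
        if event_data is not None:
            # Second communication session: device -> gateway.
            gateway.receive_event_data(event_data)

    def determine_event_data(self, command):
        return self.stored_event_data if command == "get_event_data" else None


gw = Gateway()
cam = PremisesDevice(stored_event_data="motion clip")
gw.request_event_data(cam)   # afterwards gw.received == "motion clip"
```

Session setup, authentication, and encryption negotiation (claims 7 and 11) would wrap each of the two directed calls here.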
11,063 | 11,063 | 15,300,114 | 2,619 | A method for the computer-aided editing of a digital 3D model ( 1 ) of a dental object using digital tools (T 1, T 2, T 3 ) provides for the identification of different dental-specific regions (R 1, R 2, R 3 ) of the 3D model ( 1 ) that are affected by the tool (T 1, T 2, T 3 ) in different ways, for computation of the effect on the whole 3D model and for display thereof as a proposal model ( 2 ) together with the 3D model ( 1 ). The proposal model ( 2 ) is then rejected or accepted in part or in full. If it is accepted in part, at least one subregion of the 3D model ( 1 ) is selected as a region ( 10 ) and a result model ( 6 ) is formed from the 3D model ( 1 ) and the proposal model ( 2 ) by virtue of the 3D model ( 1 ) being taken as the starting point for replacing the selected region ( 10 ) or at least a central portion of the selected region ( 10 ) with a corresponding region of the proposal model ( 2 ) or approaching the latter using a strength factor (S). | 1. A method for the computer-aided editing of a digital 3D model of a dental object using one or more digital tools, wherein at least one digital tool is selected and an effect of the tool is computed for the whole 3D model,
wherein one or more different regions of the 3D model are identified, wherein at least one region corresponds at least in part to an occlusal or incisal or labial or buccal or distal or mesial or lingual or palatal surface of the dental object, wherein the at least one tool affects the different regions differently, wherein the effect computed for the whole 3D model is provided as a proposal model and displayed together with the unchanged 3D model, and wherein the proposal model is rejected or accepted in part or in full, wherein, if it is accepted in full, the proposal model is displayed as a result model and wherein if it is accepted in part, the whole 3D model or at least a region or at least a subregion of the 3D model is selected as a region, a result model is formed from the 3D model and the proposal model by virtue of the 3D model being taken as the starting point for replacing the one or more selected region or at least one central portion of the selected region with a corresponding region of the proposal model or approaching the latter using at least one strength factor and the result model being displayed. 2. The method according to claim 1, wherein a transition region is defined around the selected region, wherein the 3D model is adapted in the transition region step by step to the proposal model weighted by the strength factor. 3. The method according to claim 1, wherein the proposal model is displayed transparently or semi-transparently in the combined presentation. 4. The method according to claim 1, wherein regions, in which the proposal model is covered by the 3D model in the combined presentation, are identified automatically and displayed on the 3D model. 5. The method according to claim 1, wherein a distance between the proposal model and the 3D model is displayed using a color coding of the proposal model. 6. The method according to claim 1, wherein the result model and the 3D model are displayed alternately. 7. 
The method according to claim 1, wherein the one or more selected region is/are selected using an input device. 8. The method according to claim 1, wherein a mark, which can be moved and selected using an input device, is displayed on the displayed 3D model for selecting a region. 9. The method according to claim 8, wherein a region of the proposal model, which corresponds to the selected region and is captured by the mark, is displayed opaquely and the selected region of the 3D model is displayed transparently or not displayed at all. 10. The method according to claim 1, wherein the one or more line on the displayed 3D model is/are defined and/or changed using the input device. 11. The method according to claim 1, wherein at least one characteristic subregion of the 3D model is automatically identified and displayed and can be changed and/or selected using an input device. 12. The method according to claim 1, wherein the different regions are identified by means of averaged normal vectors. 13. The method according to claim 1, wherein cusps and/or fissures are automatically identified as characteristic subregions within an occlusal surface. 14. The method according to claim 13, wherein the cusps and/or fissures are identified by means of the curvature of the occlusal surface. 15. The method according to claim 13, wherein the at least one tool affects identified cusps and fissures differently. 16. Method according to claim 1, wherein the at least one tool takes into account neighboring teeth or restorations and/or opposite teeth or restorations and/or an emergence profile from at least one 3D data set. 
| A method for the computer-aided editing of a digital 3D model ( 1 ) of a dental object using digital tools (T 1, T 2, T 3 ) provides for the identification of different dental-specific regions (R 1, R 2, R 3 ) of the 3D model ( 1 ) that are affected by the tool (T 1, T 2, T 3 ) in different ways, for computation of the effect on the whole 3D model and for display thereof as a proposal model ( 2 ) together with the 3D model ( 1 ). The proposal model ( 2 ) is then rejected or accepted in part or in full. If it is accepted in part, at least one subregion of the 3D model ( 1 ) is selected as a region ( 10 ) and a result model ( 6 ) is formed from the 3D model ( 1 ) and the proposal model ( 2 ) by virtue of the 3D model ( 1 ) being taken as the starting point for replacing the selected region ( 10 ) or at least a central portion of the selected region ( 10 ) with a corresponding region of the proposal model ( 2 ) or approaching the latter using a strength factor (S).1. A method for the computer-aided editing of a digital 3D model of a dental object using one or more digital tools, wherein at least one digital tool is selected and an effect of the tool is computed for the whole 3D model,
wherein one or more different regions of the 3D model are identified, wherein at least one region corresponds at least in part to an occlusal or incisal or labial or buccal or distal or mesial or lingual or palatal surface of the dental object, wherein the at least one tool affects the different regions differently, wherein the effect computed for the whole 3D model is provided as a proposal model and displayed together with the unchanged 3D model, and wherein the proposal model is rejected or accepted in part or in full, wherein, if it is accepted in full, the proposal model is displayed as a result model and wherein if it is accepted in part, the whole 3D model or at least a region or at least a subregion of the 3D model is selected as a region, a result model is formed from the 3D model and the proposal model by virtue of the 3D model being taken as the starting point for replacing the one or more selected region or at least one central portion of the selected region with a corresponding region of the proposal model or approaching the latter using at least one strength factor and the result model being displayed. 2. The method according to claim 1, wherein a transition region is defined around the selected region, wherein the 3D model is adapted in the transition region step by step to the proposal model weighted by the strength factor. 3. The method according to claim 1, wherein the proposal model is displayed transparently or semi-transparently in the combined presentation. 4. The method according to claim 1, wherein regions, in which the proposal model is covered by the 3D model in the combined presentation, are identified automatically and displayed on the 3D model. 5. The method according to claim 1, wherein a distance between the proposal model and the 3D model is displayed using a color coding of the proposal model. 6. The method according to claim 1, wherein the result model and the 3D model are displayed alternately. 7. 
The method according to claim 1, wherein the one or more selected region is/are selected using an input device. 8. The method according to claim 1, wherein a mark, which can be moved and selected using an input device, is displayed on the displayed 3D model for selecting a region. 9. The method according to claim 8, wherein a region of the proposal model, which corresponds to the selected region and is captured by the mark, is displayed opaquely and the selected region of the 3D model is displayed transparently or not displayed at all. 10. The method according to claim 1, wherein the one or more line on the displayed 3D model is/are defined and/or changed using the input device. 11. The method according to claim 1, wherein at least one characteristic subregion of the 3D model is automatically identified and displayed and can be changed and/or selected using an input device. 12. The method according to claim 1, wherein the different regions are identified by means of averaged normal vectors. 13. The method according to claim 1, wherein cusps and/or fissures are automatically identified as characteristic subregions within an occlusal surface. 14. The method according to claim 13, wherein the cusps and/or fissures are identified by means of the curvature of the occlusal surface. 15. The method according to claim 13, wherein the at least one tool affects identified cusps and fissures differently. 16. Method according to claim 1, wherein the at least one tool takes into account neighboring teeth or restorations and/or opposite teeth or restorations and/or an emergence profile from at least one 3D data set. | 2,600 |
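The partial-acceptance step above — replacing a selected region or moving it toward the proposal model by a strength factor S — amounts, per vertex, to linear interpolation. A per-vertex sketch under that assumption (the claims do not fix a formula, so the linear blend is illustrative):

```python
def blend_vertex(model_v, proposal_v, strength):
    """Move a 3D-model vertex toward the matching proposal-model vertex.

    strength = 0.0 keeps the original 3D model unchanged; strength = 1.0
    replaces the vertex with the proposal outright, as in full acceptance.
    """
    return tuple(m + strength * (p - m) for m, p in zip(model_v, proposal_v))

# Full replacement of a selected-region vertex:
blend_vertex((0.0, 0.0, 0.0), (2.0, 4.0, 6.0), 1.0)   # -> (2.0, 4.0, 6.0)
# Half-strength approach toward the proposal:
blend_vertex((0.0, 0.0, 0.0), (2.0, 4.0, 6.0), 0.5)   # -> (1.0, 2.0, 3.0)
```

In the transition region of claim 2, `strength` would be ramped step by step from its full value inside the selected region down to 0 at the untouched model, giving the gradual adaptation the claim describes.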
11,064 | 11,064 | 16,615,183 | 2,677 | An example device includes an imaging portion, an audio portion and at least one acoustic isolation feature. The audio portion is integrally coupled to the imaging portion. The audio portion is to provide an audio output. The at least one acoustic isolation feature is to facilitate acoustic separation of the imaging portion and the audio portion. | 1. A device, comprising:
an imaging portion; an audio portion integrally coupled to the imaging portion, the audio portion to provide an audio output; and at least one acoustic isolation feature to facilitate acoustic separation of the imaging portion and the audio portion. 2. The device of claim 1, wherein the imaging portion, the audio portion and the at least one acoustic isolation feature are integrally formed in the device. 3. The device of claim 1, wherein the imaging portion includes at least one of a printer, scanner or copier. 4. The device of claim 1, wherein the imaging portion includes a multifunction imaging portion, the multifunction imaging portion including at least two of a printer, scanner or copier. 5. The device of claim 1, wherein the audio portion includes at least one speaker. 6. The device of claim 1, wherein the at least one acoustic isolation feature includes at least one of a rib or a textured surface. 7. The device of claim 1, wherein the audio portion includes a wireless interface to couple the audio portion to a user device. 8. The device of claim 7, wherein the wireless interface includes a Bluetooth module. 9. A system, comprising:
a user device; and a multifunction device, the multifunction device including:
an imaging portion to provide at least one imaging function;
an audio portion integrally coupled to the imaging portion, the audio portion to provide an audio output; and
at least one acoustic isolation feature to facilitate acoustic separation of the imaging portion and the audio portion,
wherein the audio portion couples to the user device through a wireless interface. 10. The system of claim 9, wherein the wireless interface includes a Bluetooth interface in the audio portion. 11. The system of claim 9, wherein the wireless interface is a network interface coupled to the imaging portion. 12. The system of claim 9, wherein the imaging portion, the audio portion and the at least one acoustic isolation feature are integrally formed in the multifunction device. 13. The system of claim 9, wherein the audio portion includes at least one speaker. 14. The system of claim 9, wherein the at least one acoustic isolation feature includes at least one of a rib or a textured surface. 15. A method, comprising:
providing at least one imaging function in a housing; providing an audio portion within the housing; and providing at least one acoustic isolation feature to provide acoustic separation between the imaging function and the audio portion. | An example device includes an imaging portion, an audio portion and at least one acoustic isolation feature. The audio portion is integrally coupled to the imaging portion. The audio portion is to provide an audio output. The at least one acoustic isolation feature is to facilitate acoustic separation of the imaging portion and the audio portion.1. A device, comprising:
an imaging portion; an audio portion integrally coupled to the imaging portion, the audio portion to provide an audio output; and at least one acoustic isolation feature to facilitate acoustic separation of the imaging portion and the audio portion. 2. The device of claim 1, wherein the imaging portion, the audio portion and the at least one acoustic isolation feature are integrally formed in the device. 3. The device of claim 1, wherein the imaging portion includes at least one of a printer, scanner or copier. 4. The device of claim 1, wherein the imaging portion includes a multifunction imaging portion, the multifunction imaging portion including at least two of a printer, scanner or copier. 5. The device of claim 1, wherein the audio portion includes at least one speaker. 6. The device of claim 1, wherein the at least one acoustic isolation feature includes at least one of a rib or a textured surface. 7. The device of claim 1, wherein the audio portion includes a wireless interface to couple the audio portion to a user device. 8. The device of claim 7, wherein the wireless interface includes a Bluetooth module. 9. A system, comprising:
a user device; and a multifunction device, the multifunction device including:
an imaging portion to provide at least one imaging function;
an audio portion integrally coupled to the imaging portion, the audio portion to provide an audio output; and
at least one acoustic isolation feature to facilitate acoustic separation of the imaging portion and the audio portion,
wherein the audio portion couples to the user device through a wireless interface. 10. The system of claim 9, wherein the wireless interface includes a Bluetooth interface in the audio portion. 11. The system of claim 9, wherein the wireless interface is a network interface coupled to the imaging portion. 12. The system of claim 9, wherein the imaging portion, the audio portion and the at least one acoustic isolation feature are integrally formed in the multifunction device. 13. The system of claim 9, wherein the audio portion includes at least one speaker. 14. The system of claim 9, wherein the at least one acoustic isolation feature includes at least one of a rib or a textured surface. 15. A method, comprising:
providing at least one imaging function in a housing; providing an audio portion within the housing; and providing at least one acoustic isolation feature to provide acoustic separation between the imaging function and the audio portion. | 2,600 |
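The device claims above recite a composition: an imaging portion (printer/scanner/copier), an audio portion integrally coupled to it with a wireless interface to a user device, and at least one acoustic isolation feature. The following is a minimal illustrative model of that claim structure only; every class, method, and field name is hypothetical and not drawn from the patent, which recites no implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AudioPortion:
    # Claims 7-9: a wireless interface couples the audio portion to a user device
    paired_device: Optional[str] = None

    def pair(self, user_device: str) -> None:
        self.paired_device = user_device

    def play(self, stream: str) -> str:
        # Claim 1: the audio portion is to provide an audio output
        return f"audio output: {stream}"

@dataclass
class MultifunctionDevice:
    imaging_functions: set = field(default_factory=lambda: {"printer", "scanner", "copier"})
    audio: AudioPortion = field(default_factory=AudioPortion)
    # Claim 6: the isolation feature includes at least one of a rib or a textured surface
    isolation_features: tuple = ("rib", "textured surface")

    def is_multifunction(self) -> bool:
        # Claim 4: at least two of a printer, scanner or copier
        return len(self.imaging_functions & {"printer", "scanner", "copier"}) >= 2
```

As a usage sketch, pairing a user device (`device.audio.pair("user-phone")`) models the claim 9 system, in which the audio portion couples to the user device through the wireless interface while the imaging portion operates independently behind the acoustic isolation features.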
11,065 | 11,065 | 15,973,144 | 2,646 | A battery in a mobile phone cover for use with a mobile phone is used to charge a battery in the mobile phone according to a first battery parameter and a second battery parameter. If an application running on the mobile phone determines that a sensed parameter has fallen below a first battery parameter, then the application causes the battery of the mobile phone cover to charge the battery in the mobile phone until the sensed parameter reaches or exceeds the second battery parameter. The mobile phone cover also provides a notification on its display, for example, of an event (e.g., missed call, calendar alert, received message, etc.) when the event occurs on the mobile phone. | 1. A mobile phone, comprising:
one or more processors configured to:
provide a graphical user interface that sets a first battery parameter and a second battery parameter;
receive a sensed battery parameter from a battery of the mobile phone; and
cause a battery of a mobile phone cover to charge the battery of the mobile phone if the sensed battery parameter is less than or equal to the first battery parameter. 2. The mobile phone according to claim 1, wherein the one or more processors are configured to cause the battery of the mobile phone cover to stop charging the battery of the mobile phone if the sensed battery parameter is more than or equal to the second battery parameter. 3. The mobile phone according to claim 1, wherein the first battery parameter is a first battery percentage relating to the battery of the mobile phone. 4. The mobile phone according to claim 3, wherein the second battery parameter is a second battery percentage relating to the battery of the mobile phone, and the second battery percentage is larger than the first battery percentage. 5. The mobile phone according to claim 1, wherein the one or more processors are configured to:
provide a graphical user interface that further sets a third battery parameter; receive a sensed battery parameter from a battery of the mobile phone cover; and cause the battery of the mobile phone cover to stop charging the battery of the mobile phone if the sensed battery parameter from the battery of the mobile phone cover is less than or equal to the third battery parameter. 6. The mobile phone according to claim 5, wherein the first battery parameter and the second battery parameter are battery percentages relating to the battery of the mobile phone. 7. The mobile phone according to claim 1, wherein the one or more processors are configured to:
provide a graphical user interface that sets a fourth battery parameter and a fifth battery parameter; receive a sensed battery parameter from the battery of the mobile phone cover; and cause the battery of the mobile phone to charge the battery of the mobile phone cover if the sensed battery parameter from the battery of the mobile phone cover is less than or equal to the fourth battery parameter. 8. The mobile phone according to claim 7, wherein the one or more processors are configured to cause the battery of the mobile phone to stop charging the battery of the mobile phone cover if the sensed battery parameter from the battery of the mobile phone is more than or equal to the fifth battery parameter. 9. The mobile phone according to claim 7, wherein the fourth battery parameter and the fifth battery parameter are battery percentages relating to the battery of the mobile phone cover. 10. The mobile phone according to claim 1, wherein the one or more processors are configured to provide a graphical user interface that causes the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to the one or more processors of the mobile phone. 11. The mobile phone according to claim 1, wherein the one or more processors are in a power-save mode, and the one or more processors are configured to provide a graphical user interface that causes the battery of the mobile phone cover to supplement one of a current, a voltage, and a power to the one or more processors of the mobile phone to take the one or more processors out of the power-save mode. 12. The mobile phone according to claim 1, wherein the one or more processors are configured to provide a graphical user interface that causes the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to the one or more processors of the mobile phone to cause the one or more processors to enter a turbo mode. 13. 
A mobile phone cover for use with a mobile phone, comprising:
one or more processors configured to:
receive a first battery parameter and a second battery parameter from the mobile phone;
receive a sensed battery parameter from a battery of the mobile phone; and
cause a battery of the mobile phone cover to charge the battery of the mobile phone if the sensed battery parameter is less than or equal to the first battery parameter. 14. The mobile phone cover according to claim 13, wherein the one or more processors are configured to cause the battery of the mobile phone cover to stop charging the battery of the mobile phone if the sensed battery parameter is more than or equal to the second battery parameter. 15. The mobile phone cover according to claim 13, wherein the first battery parameter is a first battery percentage relating to the battery of the mobile phone. 16. The mobile phone cover according to claim 15, wherein the second battery parameter is a second battery percentage relating to the battery of the mobile phone, and the second battery percentage is larger than the first battery percentage. 17. The mobile phone cover according to claim 13, wherein the one or more processors are configured to:
receive a third battery parameter; receive a sensed battery parameter from a battery of the mobile phone cover; and cause the battery of the mobile phone cover to stop charging the battery of the mobile phone if the sensed battery parameter from the battery of the mobile phone cover is less than or equal to the third battery parameter. 18. The mobile phone cover according to claim 13, wherein the one or more processors are configured to:
receive a fourth battery parameter and a fifth battery parameter; receive a sensed battery parameter from the battery of the mobile phone cover; and cause the battery of the mobile phone to charge the battery of the mobile phone cover if the sensed battery parameter from the battery of the mobile phone cover is less than or equal to the fourth battery parameter. 19. The mobile phone cover according to claim 18, wherein the one or more processors are configured to cause the battery of the mobile phone to stop charging the battery of the mobile phone cover if the sensed battery parameter from the battery of the mobile phone is more than or equal to the fifth battery parameter. 20. The mobile phone cover according to claim 13, wherein the one or more processors are configured to cause the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to one or more processors of the mobile phone. 21. The mobile phone cover according to claim 13, wherein the one or more processors of the mobile phone are in a power-save mode, and the one or more processors are configured to cause the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to the one or more processors of the mobile phone to take the one or more processors out of the power-save mode. 22. The mobile phone cover according to claim 13, wherein the one or more processors are configured to cause the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to one or more processors of the mobile phone to cause the one or more processors to enter a turbo mode. 23. An accessory for use with a mobile phone, comprising:
one or more processors configured to:
receive a first battery parameter and a second battery parameter from the mobile phone;
receive a sensed battery parameter from a battery of the mobile phone; and
cause a battery of the accessory to charge the battery of the mobile phone if the sensed battery parameter is less than or equal to the first battery parameter. 24. The accessory according to claim 23, wherein the one or more processors of the mobile phone are in a power-save mode, and the one or more processors are configured to cause the battery of the accessory to supplement one or more of a current, a voltage, and a power to the one or more processors of the mobile phone to take the one or more processors out of the power-save mode. 25. The accessory according to claim 23, wherein the one or more processors are configured to cause the battery of the accessory to supplement one or more of a current, a voltage, and a power to one or more processors of the mobile phone to cause the one or more processors to enter a turbo mode. | A battery in a mobile phone cover for use with a mobile phone is used to charge a battery in the mobile phone according to a first battery parameter and a second battery parameter. If an application running on the mobile phone determines that a sensed parameter has fallen below a first battery parameter, then the application causes the battery of the mobile phone cover to charge the battery in the mobile phone until the sensed parameter reaches or exceeds the second battery parameter. The mobile phone cover also provides a notification on its display, for example, of an event (e.g., missed call, calendar alert, received message, etc.) when the event occurs on the mobile phone.1. A mobile phone, comprising:
one or more processors configured to:
provide a graphical user interface that sets a first battery parameter and a second battery parameter;
receive a sensed battery parameter from a battery of the mobile phone; and
cause a battery of a mobile phone cover to charge the battery of the mobile phone if the sensed battery parameter is less than or equal to the first battery parameter. 2. The mobile phone according to claim 1, wherein the one or more processors are configured to cause the battery of the mobile phone cover to stop charging the battery of the mobile phone if the sensed battery parameter is more than or equal to the second battery parameter. 3. The mobile phone according to claim 1, wherein the first battery parameter is a first battery percentage relating to the battery of the mobile phone. 4. The mobile phone according to claim 3, wherein the second battery parameter is a second battery percentage relating to the battery of the mobile phone, and the second battery percentage is larger than the first battery percentage. 5. The mobile phone according to claim 1, wherein the one or more processors are configured to:
provide a graphical user interface that further sets a third battery parameter; receive a sensed battery parameter from a battery of the mobile phone cover; and cause the battery of the mobile phone cover to stop charging the battery of the mobile phone if the sensed battery parameter from the battery of the mobile phone cover is less than or equal to the third battery parameter. 6. The mobile phone according to claim 5, wherein the first battery parameter and the second battery parameter are battery percentages relating to the battery of the mobile phone. 7. The mobile phone according to claim 1, wherein the one or more processors are configured to:
provide a graphical user interface that sets a fourth battery parameter and a fifth battery parameter; receive a sensed battery parameter from the battery of the mobile phone cover; and cause the battery of the mobile phone to charge the battery of the mobile phone cover if the sensed battery parameter from the battery of the mobile phone cover is less than or equal to the fourth battery parameter. 8. The mobile phone according to claim 7, wherein the one or more processors are configured to cause the battery of the mobile phone to stop charging the battery of the mobile phone cover if the sensed battery parameter from the battery of the mobile phone is more than or equal to the fifth battery parameter. 9. The mobile phone according to claim 7, wherein the fourth battery parameter and the fifth battery parameter are battery percentages relating to the battery of the mobile phone cover. 10. The mobile phone according to claim 1, wherein the one or more processors are configured to provide a graphical user interface that causes the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to the one or more processors of the mobile phone. 11. The mobile phone according to claim 1, wherein the one or more processors are in a power-save mode, and the one or more processors are configured to provide a graphical user interface that causes the battery of the mobile phone cover to supplement one of a current, a voltage, and a power to the one or more processors of the mobile phone to take the one or more processors out of the power-save mode. 12. The mobile phone according to claim 1, wherein the one or more processors are configured to provide a graphical user interface that causes the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to the one or more processors of the mobile phone to cause the one or more processors to enter a turbo mode. 13. 
A mobile phone cover for use with a mobile phone, comprising:
one or more processors configured to:
receive a first battery parameter and a second battery parameter from the mobile phone;
receive a sensed battery parameter from a battery of the mobile phone; and
cause a battery of the mobile phone cover to charge the battery of the mobile phone if the sensed battery parameter is less than or equal to the first battery parameter. 14. The mobile phone cover according to claim 13, wherein the one or more processors are configured to cause the battery of the mobile phone cover to stop charging the battery of the mobile phone if the sensed battery parameter is more than or equal to the second battery parameter. 15. The mobile phone cover according to claim 13, wherein the first battery parameter is a first battery percentage relating to the battery of the mobile phone. 16. The mobile phone cover according to claim 15, wherein the second battery parameter is a second battery percentage relating to the battery of the mobile phone, and the second battery percentage is larger than the first battery percentage. 17. The mobile phone cover according to claim 13, wherein the one or more processors are configured to:
receive a third battery parameter; receive a sensed battery parameter from a battery of the mobile phone cover; and cause the battery of the mobile phone cover to stop charging the battery of the mobile phone if the sensed battery parameter from the battery of the mobile phone cover is less than or equal to the third battery parameter. 18. The mobile phone cover according to claim 13, wherein the one or more processors are configured to:
receive a fourth battery parameter and a fifth battery parameter; receive a sensed battery parameter from the battery of the mobile phone cover; and cause the battery of the mobile phone to charge the battery of the mobile phone cover if the sensed battery parameter from the battery of the mobile phone cover is less than or equal to the fourth battery parameter. 19. The mobile phone cover according to claim 18, wherein the one or more processors are configured to cause the battery of the mobile phone to stop charging the battery of the mobile phone cover if the sensed battery parameter from the battery of the mobile phone is more than or equal to the fifth battery parameter. 20. The mobile phone cover according to claim 13, wherein the one or more processors are configured to cause the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to one or more processors of the mobile phone. 21. The mobile phone cover according to claim 13, wherein the one or more processors of the mobile phone are in a power-save mode, and the one or more processors are configured to cause the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to the one or more processors of the mobile phone to take the one or more processors out of the power-save mode. 22. The mobile phone cover according to claim 13, wherein the one or more processors are configured to cause the battery of the mobile phone cover to supplement one or more of a current, a voltage, and a power to one or more processors of the mobile phone to cause the one or more processors to enter a turbo mode. 23. An accessory for use with a mobile phone, comprising:
one or more processors configured to:
receive a first battery parameter and a second battery parameter from the mobile phone;
receive a sensed battery parameter from a battery of the mobile phone; and
cause a battery of the accessory to charge the battery of the mobile phone if the sensed battery parameter is less than or equal to the first battery parameter. 24. The accessory according to claim 23, wherein the one or more processors of the mobile phone are in a power-save mode, and the one or more processors are configured to cause the battery of the accessory to supplement one or more of a current, a voltage, and a power to the one or more processors of the mobile phone to take the one or more processors out of the power-save mode. 25. The accessory according to claim 23, wherein the one or more processors are configured to cause the battery of the accessory to supplement one or more of a current, a voltage, and a power to one or more processors of the mobile phone to cause the one or more processors to enter a turbo mode. | 2,600 |
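The charging behavior recited in the claims above amounts to a hysteresis controller: the cover starts charging the phone when the phone battery falls to or below the first parameter, stops at or above the second parameter, and cuts off if the cover's own battery drops to or below the third parameter. The following is a minimal sketch of that logic; the class and method names are hypothetical, since the patent recites thresholds but no implementation.

```python
# Hysteresis charging controller sketched from claims 1-5 of the mobile
# phone cover patent. Thresholds are battery percentages (claims 3-6).
class CoverChargingController:
    def __init__(self, first: int, second: int, third: int):
        # Claim 4: the second battery percentage is larger than the first
        assert first < second
        self.first, self.second, self.third = first, second, third
        self.charging = False

    def update(self, phone_pct: int, cover_pct: int) -> bool:
        """Return True while the cover should charge the phone battery."""
        if cover_pct <= self.third:
            # Claim 5: stop if the cover battery itself is depleted
            self.charging = False
        elif phone_pct <= self.first:
            # Claim 1: start charging at or below the first parameter
            self.charging = True
        elif phone_pct >= self.second:
            # Claim 2: stop charging at or above the second parameter
            self.charging = False
        return self.charging
```

With illustrative thresholds of first=20, second=80, third=10, charging begins once the phone reaches 20% and continues through the 20-80% band (the hysteresis), stopping at 80% or whenever the cover falls to 10%, which avoids the rapid start/stop toggling a single threshold would cause.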
11,066 | 11,066 | 16,368,506 | 2,647 | Intelligent network access technology (NAT) control is described herein. An event associated with a mobile computing device can be detected. Responsive to the event being detected, a subscriber identity module (SIM) associated with the mobile computing device can access location information associated with a location of the mobile computing device. The SIM can access a configuration table storing NAT data provided by a network for controlling which NATs the mobile computing device is permitted to use to access the network. The SIM can determine, based at least in part on the NAT data and the location information, that a requested NAT of the mobile computing device is not a permissible NAT for the mobile computing device in the location and the SIM can cause the mobile computing device to connect to the network via an alternate NAT. | 1. A computer-implemented method for implementing network access technology control, the computer-implemented method comprising:
responsive to detecting an event associated with a mobile computing device, accessing location information associated with a location of the mobile computing device; accessing a configuration table stored in association with a subscriber identity module (SIM) of the mobile computing device, wherein the configuration table indicates permissible network access technologies for the mobile computing device to utilize at each of one or more locations; determining, based at least in part on the configuration table, on network access technology data provided by a network and on the location information, whether a current network access technology of the mobile computing device is a permissible network access technology for the mobile computing device in the location; and based at least in part on determining that the current network access technology is an impermissible network access technology for the mobile computing device in the location, sending, from the SIM to another component of the mobile computing device, an error notification to prohibit the mobile computing device from connecting to the network via the current network access technology. 2. The computer-implemented method as claim 1 recites, wherein the event comprises a boot up of the mobile computing device. 3. The computer-implemented method as claim 1 recites, wherein the event comprises a lapse of a preconfigured period of time associated with a polling cycle. 4. The computer-implemented method as claim 1 recites, wherein the event comprises a change from a previous location of the mobile computing device to the location of the mobile computing device. 5. The computer-implemented method as claim 1 recites, wherein the location information comprises at least one of a mobile country code, a mobile network code, a cell identifier, or a location area code. 6. The computer-implemented method as claim 1 recites, further comprising:
receiving, by the SIM, a first request associated with a different network access technology than the current network access technology;
determining, based at least in part on the network access technology data and the location information, whether the different network access technology is a permissible network access technology for the mobile computing device in the location; and
based at least in part on determining that the different network access technology is the permissible network access technology for the mobile computing device in the location, sending, from the SIM to the network, a second request to access the network via the different network access technology. 7. The computer-implemented method as claim 1 recites, wherein the configuration table is customized for a subscriber profile associated with the mobile computing device. 8. The computer-implemented method as claim 1 recites, further comprising receiving, from one or more server computing devices associated with the network, an instruction to update the configuration table, wherein the instruction is received (i) in near-real time and (ii) responsive to a change in at least one of network traffic or a quality of service associated with the network. 9. One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
detecting an event associated with a mobile computing device; accessing location information associated with a location of the mobile computing device; accessing a configuration table stored in association with a subscriber identity module (SIM) of the mobile computing device, wherein the configuration table indicates permissible network access technologies for the mobile computing device to utilize at each of one or more locations; determining, based at least in part on the configuration table, on network access technology data provided by a network, and on the location information, that a requested network access technology of the mobile computing device is an impermissible network access technology for the mobile computing device in the location; and causing the mobile computing device to connect to the network via an alternate network access technology. 10. The one or more non-transitory computer-readable media as claim 9 recites, wherein the event comprises at least one of a boot up of the mobile computing device, a lapse of a preconfigured period of time associated with a polling cycle, or a change to the location of the mobile computing device. 11. The one or more non-transitory computer-readable media as claim 9 recites, wherein the location information comprises at least one of a mobile country code, a mobile network code, a cell identifier, or a location area code. 12. The one or more non-transitory computer-readable media as claim 9 recites, the operations further comprising accessing the location information responsive to detecting the event. 13. The one or more non-transitory computer-readable media as claim 9 recites, the operations further comprising:
accessing the location information responsive to receiving a request from the SIM; and
providing the location information to the SIM, wherein the SIM utilizes the location information to determine that the requested network access technology is an impermissible network access technology. 14. The one or more non-transitory computer-readable media as claim 9 recites, the operations further comprising:
receiving, by the SIM, a request associated with a different network access technology than the requested network access technology;
determining, based at least in part on the network access technology data and the location information, whether the different network access technology is a permissible network access technology for the mobile computing device in the location; and
based at least in part on determining that the different network access technology is the permissible network access technology for the mobile computing device in the location, sending, from the SIM to the network, a request to access the network via the different network access technology. 15. The one or more non-transitory computer-readable media as claim 9 recites, wherein the configuration table is configurable based at least in part on a price plan or a level of service available to an account of a subscriber of the mobile computing device. 16. The one or more non-transitory computer-readable media as claim 9, the operations further comprising receiving, from one or more server computing devices associated with the network, an instruction to update the configuration table, wherein the instruction is received (i) in near-real time and (ii) responsive to a change in at least one of network traffic or a quality of service associated with the network. 17. A mobile computing device comprising:
one or more processors; and one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
detecting an event associated with the mobile computing device;
accessing location information associated with a location of the mobile computing device;
accessing a configuration table stored in association with a subscriber identity module (SIM) of the mobile computing device, wherein the configuration table indicates permissible network access technologies for the mobile computing device to utilize at each of one or more locations;
determining, based at least in part on the configuration table, on network access technology data provided by a network, and on the location information, that a requested network access technology of the mobile computing device is an impermissible network access technology for the mobile computing device in the location; and
causing the mobile computing device to connect to the network via an alternate network access technology. 18. The mobile computing device as claim 17 recites, wherein the SIM determines that the requested network access technology is not a permissible network access technology for the mobile computing device in the location, and causing the mobile computing device to connect to the network via the alternate network access technology comprises sending, from the SIM to another component of the mobile computing device, an error notification to prohibit the mobile computing device from connecting to the network via the requested network access technology. 19. The mobile computing device as claim 17 recites, the operations further comprising:
receiving, by the SIM, a request associated with a different network access technology than the requested network access technology;
determining, based at least in part on the network access technology data and the location information, that the different network access technology is a permissible network access technology for the mobile computing device in the location; and
based at least in part on determining that the different network access technology is the permissible network access technology for the mobile computing device in the location, sending, from the SIM to the network, a request to access the network via the different network access technology. 20. The mobile computing device as claim 19 recites, the operations further comprising:
determining, based at least in part on the network access technology data and the location information, a service available to the mobile computing device in the location via the different network access technology; and
causing the mobile computing device to access the service via the different network access technology. | Intelligent network access technology (NAT) control is described herein. An event associated with a mobile computing device can be detected. Responsive to the event being detected, a subscriber identity module (SIM) associated with the mobile computing device can access location information associated with a location of the mobile computing device. The SIM can access a configuration table storing NAT data provided by a network for controlling which NATs the mobile computing device is permitted to use to access the network. The SIM can determine, based at least in part on the NAT data and the location information, that a requested NAT of the mobile computing device is not a permissible NAT for the mobile computing device in the location and the SIM can cause the mobile computing device to connect to the network via an alternate NAT.1. A computer-implemented method for implementing network access technology control, the computer-implemented method comprising:
responsive to detecting an event associated with a mobile computing device, accessing location information associated with a location of the mobile computing device; accessing a configuration table stored in association with a subscriber identity module (SIM) of the mobile computing device, wherein the configuration table indicates permissible network access technologies for the mobile computing device to utilize at each of one or more locations; determining, based at least in part on the configuration table, on network access technology data provided by a network and on the location information, whether a current network access technology of the mobile computing device is a permissible network access technology for the mobile computing device in the location; and based at least in part on determining that the current network access technology is an impermissible network access technology for the mobile computing device in the location, sending, from the SIM to another component of the mobile computing device, an error notification to prohibit the mobile computing device from connecting to the network via the current network access technology. 2. The computer-implemented method as claim 1 recites, wherein the event comprises a boot up of the mobile computing device. 3. The computer-implemented method as claim 1 recites, wherein the event comprises a lapse of a preconfigured period of time associated with a polling cycle. 4. The computer-implemented method as claim 1 recites, wherein the event comprises a change from a previous location of the mobile computing device to the location of the mobile computing device. 5. The computer-implemented method as claim 1 recites, wherein the location information comprises at least one of a mobile country code, a mobile network code, a cell identifier, or a location area code. 6. The computer-implemented method as claim 1 recites, further comprising:
receiving, by the SIM, a first request associated with a different network access technology than the current network access technology;
determining, based at least in part on the network access technology data and the location information, whether the different network access technology is a permissible network access technology for the mobile computing device in the location; and
based at least in part on determining that the different network access technology is the permissible network access technology for the mobile computing device in the location, sending, from the SIM to the network, a second request to access the network via the different network access technology. 7. The computer-implemented method as claim 1 recites, wherein the configuration table is customized for a subscriber profile associated with the mobile computing device. 8. The computer-implemented method as claim 1 recites, further comprising receiving, from one or more server computing devices associated with the network, an instruction to update the configuration table, wherein the instruction is received (i) in near-real time and (ii) responsive to a change in at least one of network traffic or a quality of service associated with the network. 9. One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
detecting an event associated with a mobile computing device; accessing location information associated with a location of the mobile computing device; accessing a configuration table stored in association with a subscriber identity module (SIM) of the mobile computing device, wherein the configuration table indicates permissible network access technologies for the mobile computing device to utilize at each of one or more locations; determining, based at least in part on the configuration table, on network access technology data provided by a network, and on the location information, that a requested network access technology of the mobile computing device is an impermissible network access technology for the mobile computing device in the location; and causing the mobile computing device to connect to the network via an alternate network access technology. 10. The one or more non-transitory computer-readable media as claim 9 recites, wherein the event comprises at least one of a boot up of the mobile computing device, a lapse of a preconfigured period of time associated with a polling cycle, or a change to the location of the mobile computing device. 11. The one or more non-transitory computer-readable media as claim 9 recites, wherein the location information comprises at least one of a mobile country code, a mobile network code, a cell identifier, or a location area code. 12. The one or more non-transitory computer-readable media as claim 9 recites, the operations further comprising accessing the location information responsive to detecting the event. 13. The one or more non-transitory computer-readable media as claim 9 recites, the operations further comprising:
accessing the location information responsive to receiving a request from the SIM; and
providing the location information to the SIM, wherein the SIM utilizes the location information to determine that the requested network access technology is an impermissible network access technology. 14. The one or more non-transitory computer-readable media as claim 9 recites, the operations further comprising:
receiving, by the SIM, a request associated with a different network access technology than the requested network access technology;
determining, based at least in part on the network access technology data and the location information, whether the different network access technology is a permissible network access technology for the mobile computing device in the location; and
based at least in part on determining that the different network access technology is the permissible network access technology for the mobile computing device in the location, sending, from the SIM to the network, a request to access the network via the different network access technology. 15. The one or more non-transitory computer-readable media as claim 9 recites, wherein the configuration table is configurable based at least in part on a price plan or a level of service available to an account of a subscriber of the mobile computing device. 16. The one or more non-transitory computer-readable media as claim 9, the operations further comprising receiving, from one or more server computing devices associated with the network, an instruction to update the configuration table, wherein the instruction is received (i) in near-real time and (ii) responsive to a change in at least one of network traffic or a quality of service associated with the network. 17. A mobile computing device comprising:
one or more processors; and one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
detecting an event associated with the mobile computing device;
accessing location information associated with a location of the mobile computing device;
accessing a configuration table stored in association with a subscriber identity module (SIM) of the mobile computing device, wherein the configuration table indicates permissible network access technologies for the mobile computing device to utilize at each of one or more locations;
determining, based at least in part on the configuration table, on network access technology data provided by a network, and on the location information, that a requested network access technology of the mobile computing device is an impermissible network access technology for the mobile computing device in the location; and
causing the mobile computing device to connect to the network via an alternate network access technology. 18. The mobile computing device as claim 17 recites, wherein the SIM determines that the requested network access technology is not a permissible network access technology for the mobile computing device in the location, and causing the mobile computing device to connect to the network via the alternate network access technology comprises sending, from the SIM to another component of the mobile computing device, an error notification to prohibit the mobile computing device from connecting to the network via the requested network access technology. 19. The mobile computing device as claim 17 recites, the operations further comprising:
receiving, by the SIM, a request associated with a different network access technology than the requested network access technology;
determining, based at least in part on the network access technology data and the location information, that the different network access technology is a permissible network access technology for the mobile computing device in the location; and
based at least in part on determining that the different network access technology is the permissible network access technology for the mobile computing device in the location, sending, from the SIM to the network, a request to access the network via the different network access technology. 20. The mobile computing device as claim 19 recites, the operations further comprising:
determining, based at least in part on the network access technology data and the location information, a service available to the mobile computing device in the location via the different network access technology; and
causing the mobile computing device to access the service via the different network access technology. | 2,600 |
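The claims above describe a SIM-side decision: look up the device's location in a configuration table of permissible network access technologies, reject an impermissible requested NAT, and fall back to an alternate one. A minimal sketch of that lookup follows; the table contents, the `select_nat` name, and the use of (mobile country code, mobile network code) pairs as location keys are illustrative assumptions, not details taken from the claims.

```python
# Illustrative sketch of the claimed NAT-control logic; all names and
# table entries are hypothetical examples.

# Configuration table keyed by location (here: mobile country code +
# mobile network code), listing permissible NATs for that location.
CONFIG_TABLE = {
    ("310", "260"): ["5G", "LTE"],
    ("262", "01"):  ["LTE", "UMTS"],
}

def select_nat(location, requested_nat, config_table=CONFIG_TABLE):
    """Return the NAT the device should use in `location`.

    If the requested NAT is permissible there, keep it; otherwise fall
    back to the first permissible alternate, mirroring the claimed
    behavior of connecting via an alternate NAT.
    """
    permissible = config_table.get(location, [])
    if requested_nat in permissible:
        return requested_nat
    if permissible:
        return permissible[0]  # alternate NAT for this location
    raise ValueError("no permissible NAT for this location")
```

In the claimed method the impermissible case instead raises an error notification from the SIM to another device component; the fallback return here stands in for that signalling.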
11,067 | 11,067 | 16,827,246 | 2,689 | An embodiment includes a lot management system. The lot management system includes a transmitter comprising a location determination module having a GPS data pathway to an RF transmission module and a receiver including an RF antenna and a receiver processor. | 1. A method for managing an automobile dealership comprising the steps of:
a) providing an inventory comprising multiple vehicles; b) providing a plurality of transmitters, each transmitter associated with a vehicle of the inventory, each transmitter comprising a vehicle information module and a location determination module, each transmitter adapted to collect vehicle information about the associated vehicle, wherein the collected vehicle information comprises at least a first and a second detail about the associated vehicle, the first and second details being selected from the group consisting of make, model, color, options installed, model year, body style, condition, cylinder type, mileage, stock number, VIN, battery voltage, fuel level, and vehicle location; c) providing a receiver in electronic communication with the transmitters; d) transmitting the collected vehicle information about the multiple vehicles to the receiver; e) using the collected vehicle information received by the receiver to perform at least one of the following:
i) identify and display the location of at least one of the multiple vehicles having a desired detail, and
ii) display aggregated data about multiple vehicles. 2. The method of claim 1 further comprising:
determining which of the multiple vehicles have a running engine; and
for each of the vehicles having a running engine, using the vehicle information module to provide a third detail selected from the group consisting of alternator voltage, engine RPMs, vehicle speed, run time since engine start, and distance traveled. 3. The method of claim 1 wherein, in step (e), when the engine of at least one vehicle is running, the transmitter associated with the at least one vehicle transmits vehicle location more frequently than when the engine of the vehicle is not running. 4. The method of claim 1 wherein the item displayed in step e) comprises aggregated data received from multiple transmitters and the aggregated data that is displayed is selected from the group consisting of reports on which vehicles have low fuel, reports on which vehicles have low battery voltage, and reports on which transmitters have not reported for a pre-determined period of time. 5. The method of claim 1 wherein the item displayed in step e) further comprises information about whether a vehicle has been outside a perimeter for a pre-determined period of time, information about whether a vehicle is outside a perimeter after a pre-determined time of day, information about whether the amount of fuel in the vehicle is below a pre-determined level, or information about whether the charge level of a vehicle battery is lower than a pre-determined voltage. 6. The method of claim 1, further including providing at least one handheld device in electronic communication with the receiver, wherein the location or aggregated data displayed in step e) is displayed on the at least one handheld device. 7. The method of claim 1 further comprising powering each transmitter from a vehicle battery of the vehicle associated with the transmitter. 8. The method of claim 1 further comprising plugging each of the vehicle information modules into the OBD-II port of the associated vehicle. 9. 
The method of claim 1 wherein the vehicle location is transmitted only when the transmitter is within a car lot. | An embodiment includes a lot management system. The lot management system includes a transmitter comprising a location determination module having a GPS data pathway to an RF transmission module and a receiver including an RF antenna and a receiver processor.1. A method for managing an automobile dealership comprising the steps of:
a) providing an inventory comprising multiple vehicles; b) providing a plurality of transmitters, each transmitter associated with a vehicle of the inventory, each transmitter comprising a vehicle information module and a location determination module, each transmitter adapted to collect vehicle information about the associated vehicle, wherein the collected vehicle information comprises at least a first and a second detail about the associated vehicle, the first and second details being selected from the group consisting of make, model, color, options installed, model year, body style, condition, cylinder type, mileage, stock number, VIN, battery voltage, fuel level, and vehicle location; c) providing a receiver in electronic communication with the transmitters; d) transmitting the collected vehicle information about the multiple vehicles to the receiver; e) using the collected vehicle information received by the receiver to perform at least one of the following:
i) identify and display the location of at least one of the multiple vehicles having a desired detail, and
ii) display aggregated data about multiple vehicles. 2. The method of claim 1 further comprising:
determining which of the multiple vehicles have a running engine; and
for each of the vehicles having a running engine, using the vehicle information module to provide a third detail selected from the group consisting of alternator voltage, engine RPMs, vehicle speed, run time since engine start, and distance traveled. 3. The method of claim 1 wherein, in step (e), when the engine of at least one vehicle is running, the transmitter associated with the at least one vehicle transmits vehicle location more frequently than when the engine of the vehicle is not running. 4. The method of claim 1 wherein the item displayed in step e) comprises aggregated data received from multiple transmitters and the aggregated data that is displayed is selected from the group consisting of reports on which vehicles have low fuel, reports on which vehicles have low battery voltage, and reports on which transmitters have not reported for a pre-determined period of time. 5. The method of claim 1 wherein the item displayed in step e) further comprises information about whether a vehicle has been outside a perimeter for a pre-determined period of time, information about whether a vehicle is outside a perimeter after a pre-determined time of day, information about whether the amount of fuel in the vehicle is below a pre-determined level, or information about whether the charge level of a vehicle battery is lower than a pre-determined voltage. 6. The method of claim 1, further including providing at least one handheld device in electronic communication with the receiver, wherein the location or aggregated data displayed in step e) is displayed on the at least one handheld device. 7. The method of claim 1 further comprising powering each transmitter from a vehicle battery of the vehicle associated with the transmitter. 8. The method of claim 1 further comprising plugging each of the vehicle information modules into the OBD-II port of the associated vehicle. 9. 
The method of claim 1 wherein the vehicle location is transmitted only when the transmitter is within a car lot. | 2,600 |
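Claim 4 names aggregated reports over the collected vehicle information (low fuel, low battery voltage). A small sketch of that aggregation step follows; the field names, thresholds, and `aggregate_reports` helper are assumptions for illustration only.

```python
# Illustrative sketch of the claim-4 aggregated reports; thresholds and
# record fields are hypothetical, not taken from the patent.

def aggregate_reports(vehicles, fuel_threshold=0.15, battery_threshold=12.0):
    """Build low-fuel and low-battery reports from collected vehicle info.

    `vehicles` is a list of dicts with at least 'stock_number',
    'fuel_level' (fraction of a tank) and 'battery_voltage' (volts).
    """
    return {
        "low_fuel": [v["stock_number"] for v in vehicles
                     if v["fuel_level"] < fuel_threshold],
        "low_battery": [v["stock_number"] for v in vehicles
                        if v["battery_voltage"] < battery_threshold],
    }

# Example inventory with one low-fuel and one low-battery vehicle.
fleet = [
    {"stock_number": "A100", "fuel_level": 0.10, "battery_voltage": 12.6},
    {"stock_number": "A101", "fuel_level": 0.80, "battery_voltage": 11.4},
]
```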
11,068 | 11,068 | 16,466,135 | 2,616 | There is provided a method and apparatus for modifying a contour comprising a sequence of points positioned on an image. A position of a movable indicator on the image relative to one or more points of the sequence is detected ( 202 ). The movable indicator is movable by a user. At least one point is removed from the contour, at least one point is added to the contour, or at least one point is removed from the contour and at least one point is added to the contour based on a distance of the detected position of the movable indicator on the image from the one or more points ( 204 ). | 1. A method for modifying a contour comprising a sequence of points positioned on an image, the method comprising:
detecting a position of a movable indicator on the image relative to one or more points of the sequence, wherein the movable indicator is movable by a user; automatically removing at least one point from the contour, or adding at least one point to the contour, based on the shortest distance between the detected position of the movable indicator on the image and the one or more points; and automatically switching between removing at least one point from the contour and adding at least one point to the contour, based on the distance of the detected position of the movable indicator on the image from the one or more points, wherein removing at least one point from the contour, or adding at least one point to the contour, is based on whether the shortest distance exceeds a threshold distance, wherein removing at least one point from the contour comprises removing points from the contour that are positioned in the sequence of points between the detected position of the moveable indicator and the point that is the shortest distance from the detected position of the moveable indicator. 2. (canceled) 3. (canceled) 4. (canceled) 5. (canceled) 6. (canceled) 7. A method for modifying a contour comprising a sequence of points positioned on an image, the method comprising:
detecting a position of a movable indicator on the image relative to one or more points of the sequence, wherein the movable indicator is movable by a user; automatically removing at least one point from the contour, or adding at least one point to the contour, based on the shortest distance between the detected position of the movable indicator on the image and the one or more points; and automatically switching between removing at least one point from the contour and adding at least one point to the contour, based on the distance of the detected position of the movable indicator on the image from the one or more points, wherein removing at least one point from the contour, or adding at least one point to the contour, is based on whether the shortest distance exceeds a threshold distance, wherein at least one point is removed only when the at least one point comprises more than a threshold number of points or a length of the contour is more than a threshold length. 8. A method as claimed in claim 1, wherein removing at least one point from the contour comprises:
determining whether a first part of the sequence of points or a second part of the sequence of points comprises the point that is the shortest distance from the detected position of the moveable indicator; and removing points from the part of the contour comprising the point that is the shortest distance from the detected position of the moveable indicator. 9. A method as claimed in claim 1, wherein at least one point is added to the contour at the detected position of the moveable indicator where the shortest distance is more than the threshold distance. 10. A method as claimed in claim 1, wherein removing at least one point from the contour comprises:
removing points from the contour that are less than a predefined distance from the detected position of the moveable indicator. 11. A method as claimed in claim 1, wherein adding at least one point to the contour comprises:
adding at least one point to the contour at the detected position of the moveable indicator. 12. A method as claimed in claim 9, the method further comprising:
connecting the at least one added point to a point on the contour that is the shortest distance from the added point. 13. A method for creating a contour comprising a sequence of points to be positioned on an image, the method comprising:
generating a first interaction event to position in the image an initial point for the sequence, carrying out the method of modifying a contour according to any of the preceding claims, and generating a second interaction event to position a final point for the sequence in the image, thereby finalizing the contour. 14. A computer program product comprising a computer readable medium, the computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method of claim 1. 15. An apparatus for modifying a contour comprising a sequence of points positioned on an image, the apparatus comprising:
a processor configured to:
detect a position of a movable indicator on the image relative to one or more points of the sequence, wherein the movable indicator is movable by a user; and
automatically remove at least one point from the contour, or add at least one point to the contour, based on a shortest distance between the detected position of the movable indicator on the image and the one or more points; and
automatically switch between removing at least one point from the contour and adding at least one point to the contour, based on the distance of the detected position of the movable indicator on the image from the one or more points,
wherein removing at least one point from the contour, or adding at least one point to the contour, is based on whether the shortest distance exceeds a threshold distance, and wherein removing at least one point from the contour comprises removing points from the contour that are positioned in the sequence of points between the detected position of the moveable indicator and the points that are the shortest distance from the detected position of the moveable indicator. 16. A method as claimed in claim 1, wherein the at least one point comprises a number of points that is less than a total number of points in the sequence of points. 17. An apparatus for modifying a contour comprising a sequence of points positioned on an image, the apparatus comprising:
a processor configured to:
detect a position of a movable indicator on the image relative to one or more points of the sequence, wherein the movable indicator is movable by a user; and
automatically remove points from the contour, or add at least one point to the contour, based on a shortest distance between the detected position of the movable indicator on the image and the one or more points; and
automatically switch between removing at least one point from the contour and adding at least one point to the contour, based on the distance of the detected position of the movable indicator on the image from the one or more points, wherein the processor is configured to remove the at least one point from the contour, or add the at least one point to the contour, based on whether the shortest distance exceeds a threshold distance, and wherein the processor is further configured to remove the at least one point only when the at least one point comprises more than a threshold number of points or when a length of the contour is more than a threshold length.
detecting a position of a movable indicator on the image relative to one or more points of the sequence, wherein the movable indicator is movable by a user; automatically removing at least one point from the contour, or adding at least one point to the contour, based on the shortest distance between the detected position of the movable indicator on the image and the one or more points; and automatically switching between removing at least one point from the contour and adding at least one point to the contour, based on the distance of the detected position of the movable indicator on the image from the one or more points, wherein removing at least one point from the contour, or adding at least one point to the contour, is based on whether the shortest distance exceeds a threshold distance, wherein removing at least one point from the contour comprises removing points from the contour that are positioned in the sequence of points between the detected position of the moveable indicator and the point that is the shortest distance from the detected position of the moveable indicator. 2. (canceled) 3. (canceled) 4. (canceled) 5. (canceled) 6. (canceled) 7. A method for modifying a contour comprising a sequence of points positioned on an image, the method comprising:
detecting a position of a movable indicator on the image relative to one or more points of the sequence, wherein the movable indicator is movable by a user; automatically removing at least one point from the contour, or adding at least one point to the contour, based on the shortest distance between the detected position of the movable indicator on the image and the one or more points; and automatically switching between removing at least one point from the contour and adding at least one point to the contour, based on the distance of the detected position of the movable indicator on the image from the one or more points, wherein removing at least one point from the contour, or adding at least one point to the contour, is based on whether the shortest distance exceeds a threshold distance, wherein at least one point is removed only when the at least one point comprises more than a threshold number of points or a length of the contour is more than a threshold length. 8. A method as claimed in claim 1, wherein removing at least one point from the contour comprises:
determining whether a first part of the sequence of points or a second part of the sequence of points comprises the point that is the shortest distance from the detected position of the moveable indicator; and removing points from the part of the contour comprising the point that is the shortest distance from the detected position of the moveable indicator. 9. A method as claimed in claim 1, wherein at least one point is added to the contour at the detected position of the moveable indicator where the shortest distance is more than the threshold distance. 10. A method as claimed in claim 1, wherein removing at least one point from the contour comprises:
removing points from the contour that are less than a predefined distance from the detected position of the moveable indicator. 11. A method as claimed in claim 1, wherein adding at least one point to the contour comprises:
adding at least one point to the contour at the detected position of the moveable indicator. 12. A method as claimed in claim 9, the method further comprising:
connecting the at least one added point to a point on the contour that is the shortest distance from the added point. 13. A method for creating a contour comprising a sequence of points to be positioned on an image, the method comprising:
generating a first interaction event to position in the image an initial point for the sequence, carrying out the method of modifying a contour according to any of the preceding claims, and generating a second interaction event to position a final point for the sequence in the image, thereby finalizing the contour. 14. A computer program product comprising a computer readable medium, the computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method of claim 1. 15. An apparatus for modifying a contour comprising a sequence of points positioned on an image, the apparatus comprising:
a processor configured to:
detect a position of a movable indicator on the image relative to one or more points of the sequence, wherein the movable indicator is movable by a user; and
automatically remove at least one point from the contour, or add at least one point to the contour, based on a shortest distance between the detected position of the movable indicator on the image and the one or more points; and
automatically switch between removing at least one point from the contour and adding at least one point to the contour, based on the distance of the detected position of the movable indicator on the image from the one or more points,
wherein removing at least one point from the contour, or adding at least one point to the contour, is based on whether the shortest distance exceeds a threshold distance, and wherein removing at least one point from the contour comprises removing points from the contour that are positioned in the sequence of points between the detected position of the moveable indicator and the points that are the shortest distance from the detected position of the moveable indicator. 16. A method as claimed in claim 1, wherein the at least one point comprises a number of points that is less than a total number of points in the sequence of points. 17. An apparatus for modifying a contour comprising a sequence of points positioned on an image, the apparatus comprising:
a processor configured to:
detect a position of a movable indicator on the image relative to one or more points of the sequence, wherein the movable indicator is movable by a user; and
automatically remove points from the contour, or add at least one point to the contour, based on a shortest distance between the detected position of the movable indicator on the image and the one or more points; and
automatically switch between removing at least one point from the contour and adding at least one point to the contour, based on the distance of the detected position of the movable indicator on the image from the one or more points, wherein the processor is configured to remove the at least one point from the contour, or add the at least one point to the contour, based on whether the shortest distance exceeds a threshold distance, and wherein the processor is further configured to remove the at least one point only when the at least one point comprises more than a threshold number of points or when a length of the contour is more than a threshold length. | 2,600 |
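The contour claims switch between removing and adding points based on whether the shortest cursor-to-contour distance exceeds a threshold. The sketch below shows that switching in simplified form (removal is reduced to deleting the nearest point, rather than the claimed span of points); the threshold value and the `edit_contour` name are assumptions.

```python
# Illustrative, simplified sketch of the claimed add/remove switching.
import math

def edit_contour(contour, cursor, threshold=5.0):
    """Modify `contour` (a list of (x, y) points) for a cursor position.

    If the cursor is farther than `threshold` from every contour point,
    add a new point at the cursor (the add branch); otherwise remove the
    nearest point (a simplified version of the claimed removal branch).
    """
    if not contour:
        return [cursor]
    dists = [math.dist(p, cursor) for p in contour]
    nearest = min(range(len(contour)), key=dists.__getitem__)
    if dists[nearest] > threshold:
        return contour + [cursor]                        # add branch
    return contour[:nearest] + contour[nearest + 1:]     # remove branch
```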
11,069 | 11,069 | 16,172,798 | 2,612 | A method for distributing information includes producing a symbol to be overlaid on at least one primary image presented on a first display screen, the symbol encoding a specified digital value in a set of color elements having different, respective colors. A message is received from a client device containing an indication of the specified digital value decoded by the client device upon capturing and analyzing a secondary image of the first display screen. In response to the message, an item of information relating to the primary image is transmitted to the client device, for presentation on a second display screen associated with the client device. | 1. A method for distributing information, comprising:
producing a symbol to be overlaid on at least one primary image presented on a first display screen, the symbol encoding a specified digital value in a set of color elements having different, respective colors; receiving a message from a client device containing an indication of the specified digital value decoded by the client device upon capturing and analyzing a secondary image of the first display screen; and transmitting to the client device, in response to the message, an item of information relating to the primary image for presentation on a second display screen associated with the client device. 2. The method according to claim 1, wherein the symbol comprises a plurality of regions meeting at a common vertex and having different, respective colors selected so as to encode the specified digital value,
wherein the digital value is encoded by assigning to each color a three-bit code comprising respective binary values representing three primary color components of the colors of the regions, and combining the digital codes to give the specified digital value, and wherein the colors of the regions are selected such that none of the binary values is constant over all of the regions meeting at the common vertex. 3. The method according to claim 2, wherein the colors of the regions are selected from a color group consisting of red, green, blue, cyan, magenta, yellow, white and black. 4. The method according to claim 1, wherein the symbol comprises multiple sub-symbols, which are presented on the first display screen in different, mutually-separated locations, and which together encode the specified digital value. 5. The method according to claim 4, wherein the sub-symbols are presented in different, respective corners of the first display screen. 6. The method according to claim 1, wherein the at least one primary image comprises a stream of video images on which the symbol is overlaid. 7. The method according to claim 1, wherein the at least one primary image comprises a digital still image. 8. The method according to claim 1, wherein transmitting the item of information comprises causing the client device to overlay the item on the secondary image of the first display screen, which is displayed on the second display screen. 9. The method according to claim 8, wherein transmitting the item of information comprises providing one or more interactive controls, to be overlaid on the secondary image so as to enable a user of the client device to actuate the interactive controls by operating a user interface of the client device. 10. 
The method according to claim 9, wherein providing the one or more interactive controls comprises registering the interactive controls with respective features of the primary image as presented on the second display screen, and wherein actuating the interactive controls comprises invoking a selection function of the user interface at a location of one of the features of the at least one primary image on the second display screen. 11. The method according to claim 1, wherein the message further contains an identification of a user of the client device, and wherein transmitting the item of information comprises selecting the item to transmit responsively to both the digital value and the identification of the user. 12. A method for displaying information, comprising:
capturing, using a client device, a secondary image of a first display screen on which at least one primary image is presented, overlaid by a symbol comprising a set of color elements having different, respective colors that encodes a digital value; processing the secondary image in the client device so as to decode the digital value; transmitting a message from the client device to a server, the message containing an indication of the specified digital value; receiving, in response to the message, an item of information relating to the primary image; and presenting the item of information on a second display screen associated with the client device. 13. The method according to claim 12, wherein presenting the item of information comprises displaying the secondary image of the first display screen, and overlaying the item of information on the secondary image. 14. The method according to claim 13, wherein overlaying the item of information comprises overlaying one or more interactive controls on the secondary image so as to enable a user of the client device to actuate the interactive controls by operating a user interface of the client device. 15. The method according to claim 14, wherein overlaying the one or more interactive controls comprises registering the interactive controls with respective features of the primary image as presented on the second display screen, and wherein the method comprises actuating the interactive controls by invoking a selection function of the user interface at a location of one of the features of the primary image on the second display screen. 16. The method according to claim 13, wherein displaying the secondary image comprises registering the first display screen with the second display screen on the client device using the symbol overlaid on the at least one primary image. 17. 
The method according to claim 16, wherein registering the first display comprises computing a transformation to be applied to the secondary image responsively to a disposition of the symbol in the secondary image, and applying the transformation in order to register the first display screen with the second display screen. 18. The method according to claim 16, wherein presenting the item of information comprises detecting a selection by a user of the client device of a feature of the at least one primary image responsively to a gesture made by the user on the second display screen, and generating an output with respect to the feature responsively to the selection. 19. The method according to claim 12, wherein capturing the secondary image comprises adjusting an exposure level of a camera module in the client device in order to detect the symbol and decode the digital value. 20. The method according to claim 19, wherein adjusting the exposure level comprises identifying an area having a greatest brightness value within the secondary image, and setting the exposure level so as to enable capture of the identified area. 21. The method according to claim 19, wherein adjusting the exposure level comprises cycling the camera module over multiple exposure levels until the symbol is detected and decoded. 22. The method according to claim 12, and comprising receiving and storing in a cache on the client device items of information relating to one or more primary images, wherein presenting the item of information comprises, when the item of information corresponding to the specified digital value is present in the cache, retrieving the item of information from the cache for presentation on the second display screen without transmitting the message to the server. 23. Apparatus for distributing information, comprising:
a processor, configured to produce a symbol to be overlaid on at least one primary image presented on a first display screen, the symbol encoding a specified digital value in a set of color elements having different, respective colors; and a communication interface, which is coupled to receive a message from a client device containing an indication of the specified digital value decoded by the client device upon capturing and analyzing a secondary image of the first display screen, and to transmit to the client device, in response to the message by the processor, an item of information relating to the primary image for presentation on a second display screen associated with the client device. 24. The apparatus according to claim 23, wherein the symbol comprises a plurality of regions meeting at a common vertex and having different, respective colors selected so as to encode the specified digital value,
wherein the digital value is encoded by assigning to each color a three-bit code comprising respective binary values representing three primary color components of the colors of the regions, and combining the digital codes to give the specified digital value, and wherein the colors of the regions are selected such that none of the binary values is constant over all of the regions meeting at the common vertex. 25. The apparatus according to claim 24, wherein the colors of the regions are selected from a color group consisting of red, green, blue, cyan, magenta, yellow, white and black. 26. The apparatus according to claim 23, wherein the symbol comprises multiple sub-symbols, which are presented on the first display screen in different, mutually-separated locations, and which together encode the specified digital value. 27. The apparatus according to claim 26, wherein the sub-symbols are presented in different, respective corners of the first display screen. 28. The apparatus according to claim 23, wherein the at least one primary image comprises a stream of video images on which the symbol is overlaid. 29. The apparatus according to claim 23, wherein the at least one primary image comprises a digital still image. 30. The apparatus according to claim 23, wherein transmitting the item of information by the communication interface causes the client device to overlay the item on the secondary image of the first display screen, which is displayed on the second display screen. 31. The apparatus according to claim 30, wherein the item of information comprises one or more interactive controls, to be overlaid on the secondary image so as to enable a user of the client device to actuate the interactive controls by operating a user interface of the client device. 32. 
The apparatus according to claim 23, wherein the message further contains an identification of a user of the client device, and wherein the processor is configured to select the item to transmit responsively to both the digital value and the identification of the user. 33. A device for displaying information, comprising:
a communication interface; a camera module, which is configured to capture a secondary image of a first display screen on which at least one primary image is presented, overlaid by a symbol comprising a set of color elements having different, respective colors that encodes a digital value; and a processor, which is configured to process the secondary image so as to decode the digital value, to transmit a message via the communication interface to a server, the message containing an indication of the specified digital value, to receive, in response to the message, an item of information relating to the primary image, and to present the item of information on a second display screen associated with the client device. 34. The device according to claim 33, wherein the processor is configured to display the secondary image of the first display screen, and to overlay the item of information on the secondary image. 35. The device according to claim 34, and comprising a user interface, wherein the item of information comprises one or more interactive controls, which are overlaid by the processor on the secondary image so as to enable a user of the client device to actuate the interactive controls by operating the user interface. 36. The device according to claim 35, wherein the processor is configured to register the interactive controls with respective features of the primary image as presented on the second display screen, and to actuate the interactive controls in response to invocation of a selection function of the user interface at a location of one of the features of the primary image on the second display screen. 37. The device according to claim 34, wherein the processor is configured to register the first display screen with the second display screen using the symbol overlaid on the at least one primary image. 38. 
The device according to claim 37, wherein the processor is configured to compute a transformation to be applied to the secondary image responsively to a disposition of the symbol in the secondary image, and to apply the transformation in order to register the first display screen with the second display screen. 39. The device according to claim 37, wherein the processor is configured to detect a selection by a user of a feature of the at least one primary image responsively to a gesture made by the user on the second display screen, and to generate an output with respect to the feature responsively to the selection. 40. The device according to claim 33, wherein the processor is configured to adjust an exposure level of the camera module in order to detect the symbol and decode the digital value. 41. The device according to claim 40, wherein the processor is configured to identify an area having a greatest brightness value within the secondary image, and to set the exposure level so as to enable capture of the identified area. 42. The device according to claim 40, wherein the processor is configured to cycle the camera module over multiple exposure levels until the symbol is detected and decoded. 43. The device according to claim 33, and comprising a memory, which is configured to receive and cache items of information relating to one or more primary images, wherein the processor is configured, when the item of information corresponding to the specified digital value is cached in the memory, to retrieve the item of information from the memory for presentation on the second display screen without transmitting the message to the server. 44. 
A computer software product, comprising a tangible, non-transitory computer-readable medium in which program instructions are stored, which instructions, when read by a computer, cause the computer to produce a symbol to be overlaid on at least one primary image presented on a first display screen, the symbol encoding a specified digital value in a set of color elements having different, respective colors, and upon receiving a message from a client device containing an indication of the specified digital value decoded by the client device upon capturing and analyzing a secondary image of the first display screen, to transmit to the client device, in response to the message, an item of information relating to the primary image for presentation on a second display screen associated with the client device. 45. A computer software product, comprising a tangible, non-transitory computer-readable medium in which program instructions are stored, which instructions, when read by a computing device, cause the device to capture a secondary image of a first display screen on which at least one primary image is presented, overlaid by a symbol comprising a set of color elements having different, respective colors that encodes a digital value, to process the secondary image so as to decode the digital value, to transmit a message to a server, the message containing an indication of the specified digital value, and upon receiving, in response to the message, an item of information relating to the primary image, to present the item of information on a second display screen. | A method for distributing information includes producing a symbol to be overlaid on at least one primary image presented on a first display screen, the symbol encoding a specified digital value in a set of color elements having different, respective colors. 
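The three-bit color encoding described in claims 2 and 24 can be sketched in a few lines of Python. This is an illustrative reading of the claims, not code from the patent: each region's color is mapped to the binary values of its three primary components, the per-region codes are concatenated into the digital value, and the claim-2 constraint (no primary-component bit constant over all regions at the vertex) is checked before encoding. All names are invented for the sketch.

```python
# Each of the eight claim-3 colors maps to a three-bit (R, G, B) code.
COLOR_BITS = {
    "black": (0, 0, 0), "red": (1, 0, 0), "green": (0, 1, 0),
    "blue": (0, 0, 1), "yellow": (1, 1, 0), "magenta": (1, 0, 1),
    "cyan": (0, 1, 1), "white": (1, 1, 1),
}

def encode_symbol(region_colors):
    """Combine the regions' three-bit codes into one digital value."""
    codes = [COLOR_BITS[c] for c in region_colors]
    # Claim 2 constraint: no primary-color bit may be constant over all
    # regions meeting at the common vertex.
    for bit in range(3):
        if len({code[bit] for code in codes}) < 2:
            raise ValueError(f"primary component {bit} is constant")
    value = 0
    for r, g, b in codes:
        value = (value << 3) | (r << 2) | (g << 1) | b
    return value

def decode_symbol(value, n_regions):
    """Recover the ordered region colors from a digital value."""
    inverse = {bits: name for name, bits in COLOR_BITS.items()}
    colors = []
    for _ in range(n_regions):
        code = value & 0b111
        colors.append(inverse[(code >> 2 & 1, code >> 1 & 1, code & 1)])
        value >>= 3
    return colors[::-1]
```

Four regions colored red, green, blue and yellow satisfy the constraint (each primary component varies across the regions) and encode to the 12-bit value 2190, which decodes back to the same ordered colors.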
A message is received from a client device containing an indication of the specified digital value decoded by the client device upon capturing and analyzing a secondary image of the first display screen. In response to the message, an item of information relating to the primary image is transmitted to the client device, for presentation on a second display screen associated with the client device. | 2,600 |
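The cache-first behaviour of claims 22 and 43 above (present a locally stored item without messaging the server) reduces to a dictionary lookup before the network round trip. The sketch below is a hedged illustration: the server object and its `fetch` method are stand-ins invented for the example, not an API from the patent.

```python
# Minimal sketch of the claim 22 / claim 43 cache: a decoded digital value
# is looked up locally first, and the server is messaged only on a miss.
class SymbolClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}  # digital value -> item of information

    def item_for(self, digital_value):
        if digital_value in self.cache:
            return self.cache[digital_value]     # no server round trip
        item = self.server.fetch(digital_value)  # message carrying the value
        self.cache[digital_value] = item
        return item
```

A second lookup of the same digital value is then served from memory, matching the "without transmitting the message to the server" language of the claims.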
11,070 | 11,070 | 15,771,345 | 2,675 | In an example, processing apparatus includes an interface to receive input data comprising color data, a color mapping module to map at least one color of the input data to a print apparatus color space according to at least one mapping, and a preview module to generate a representation of a printed output of the print apparatus according to the mapping. The print apparatus color space may be defined based on print materials to be used by a print apparatus, wherein at least one print material comprises phosphorescent material. The preview module may generate a first image representing an externally illuminated printed output of the print apparatus, and a second image representing a phosphorescing printed output of the print apparatus in a low light environment. | 1. Processing apparatus comprising:
an interface to receive input data comprising color data; a color mapping module to map at least one color of the input data to a print apparatus color space according to at least one mapping, the print apparatus color space being defined based on print materials to be used by a print apparatus, wherein at least one print material comprises phosphorescent material; and a preview module to generate a representation of printed output of the print apparatus according to the at least one mapping, wherein the preview module is to generate a first image representing an externally illuminated printed output of the print apparatus, and a second image representing a phosphorescing printed output of the print apparatus in a low light environment. 2. Processing apparatus according to claim 1 further comprising a display output to display the first and second image. 3. Processing apparatus according to claim 1 further comprising an input to receive user instructions to modify at least one mapping of the color mapping module. 4. Processing apparatus according to claim 1 further comprising a control data generation module to generate control data to print the printed output. 5. Processing apparatus according to claim 1 in which the color mapping module is to map the colors of the input data to a device independent color space and from the device independent color space to the print apparatus color space. 6. A method comprising:
receiving input data; and determining, using at least one processor, at least one mapping from an input data color space to a print apparatus color space, wherein each color in the print apparatus color space is associated with at least one print material of a print apparatus, and at least one print material or combination of print materials of the print apparatus is associated with an illuminated output color and a phosphorescent output color; and wherein determining the mapping comprises determining a mapping based on the associated illuminated output color and phosphorescent output color. 7. A method according to claim 6 further comprising:
determining a color separation to print a representation of the input data, the color separation comprising, for the print materials of the print apparatus, an indication of at least one location at which a print material is to be applied in printing a printed output. 8. A method according to claim 6 further comprising printing a printed output using the print materials specified by the mapping. 9. A method, comprising:
printing a set of color samples, wherein each color sample is printed using one or a combination of print materials, and wherein at least one of the print materials comprises phosphorescent material; exciting the phosphorescent material in the color samples; imaging radiation emitted from the color samples in a low light environment; and characterising, using at least one processor, a low light color gamut based on the colors of the emitted radiation. 10. A method according to claim 9 further comprising:
illuminating the color samples; and
characterising, using at least one processor, an illuminated color gamut based on light returned from the color samples. 11. A method according to claim 9 further comprising determining, using at least one processor, a plurality of mappings between the low light color gamut and a device independent color space. 12. A method according to claim 9 further comprising determining, using at least one processor, a plurality of mappings between print materials and combinations of print materials and the low light color gamut. 13. A method according to claim 12 further comprising:
receiving a selection of a color of the low light color gamut;
determining a print material combination which maps to the color; and
generating control data to produce a printable colorant comprising the determined print material combination. 14. A method according to claim 13 further comprising generating the printable colorant. 15. A method according to claim 9 further comprising:
receiving input data in a first color space;
determining, for at least one color of the input data, a mapping from the first color space to a color of the low light color gamut;
determining a color separation to print a representation of the input data, the color separation comprising, for each of a plurality of print materials available as a print material at a print apparatus, an indication of at least one location at which that print material is to be applied in printing a printed output; and
printing a printed output according to the color separation. | In an example, processing apparatus includes an interface to receive input data comprising color data, a color mapping module to map at least one color of the input data to a print apparatus color space according to at least one mapping, and a preview module to generate a representation of a printed output of the print apparatus according to the mapping. The print apparatus color space may be defined based on print materials to be used by a print apparatus, wherein at least one print material comprises phosphorescent material. The preview module may generate a first image representing an externally illuminated printed output of the print apparatus, and a second image representing a phosphorescing printed output of the print apparatus in a low light environment.1. Processing apparatus comprising:
an interface to receive input data comprising color data; a color mapping module to map at least one color of the input data to a print apparatus color space according to at least one mapping, the print apparatus color space being defined based on print materials to be used by a print apparatus, wherein at least one print material comprises phosphorescent material; and a preview module to generate a representation of printed output of the print apparatus according to the at least one mapping, wherein the preview module is to generate a first image representing an externally illuminated printed output of the print apparatus, and a second image representing a phosphorescing printed output of the print apparatus in a low light environment. 2. Processing apparatus according to claim 1 further comprising a display output to display the first and second image. 3. Processing apparatus according to claim 1 further comprising an input to receive user instructions to modify at least one mapping of the color mapping module. 4. Processing apparatus according to claim 1 further comprising a control data generation module to generate control data to print the printed output. 5. Processing apparatus according to claim 1 in which the color mapping module is to map the colors of the input data to a device independent color space and from the device independent color space to the print apparatus color space. 6. A method comprising:
receiving input data; and determining, using at least one processor, at least one mapping from an input data color space to a print apparatus color space, wherein each color in the print apparatus color space is associated with at least one print material of a print apparatus, and at least one print material or combination of print materials of the print apparatus is associated with an illuminated output color and a phosphorescent output color; and wherein determining the mapping comprises determining a mapping based on the associated illuminated output color and phosphorescent output color. 7. A method according to claim 6 further comprising:
determining a color separation to print a representation of the input data, the color separation comprising, for the print materials of the print apparatus, an indication of at least one location at which a print material is to be applied in printing a printed output. 8. A method according to claim 6 further comprising printing a printed output using the print materials specified by the mapping. 9. A method, comprising:
printing a set of color samples, wherein each color sample is printed using one or a combination of print materials, and wherein at least one of the print materials comprises phosphorescent material; exciting the phosphorescent material in the color samples; imaging radiation emitted from the color samples in a low light environment; and characterising, using at least one processor, a low light color gamut based on the colors of the emitted radiation. 10. A method according to claim 9 further comprising:
illuminating the color samples; and
characterising, using at least one processor, an illuminated color gamut based on light returned from the color samples. 11. A method according to claim 9 further comprising determining, using at least one processor, a plurality of mappings between the low light color gamut and a device independent color space. 12. A method according to claim 9 further comprising determining, using at least one processor, a plurality of mappings between print materials and combinations of print materials and the low light color gamut. 13. A method according to claim 12 further comprising:
receiving a selection of a color of the low light color gamut;
determining a print material combination which maps to the color; and
generating control data to produce a printable colorant comprising the determined print material combination. 14. A method according to claim 13 further comprising generating the printable colorant. 15. A method according to claim 9 further comprising:
receiving input data in a first color space;
determining, for at least one color of the input data, a mapping from the first color space to a color of the low light color gamut;
determining a color separation to print a representation of the input data, the color separation comprising, for each of a plurality of print materials available as a print material at a print apparatus, an indication of at least one location at which that print material is to be applied in printing a printed output; and
printing a printed output according to the color separation. | 2,600 |
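The gamut-characterization and mapping steps of claims 9-13 above can be sketched in a few lines: characterize each print-material combination by its illuminated and phosphorescent (low-light) output colors, then map a requested color to the nearest combination. This is an illustrative nearest-neighbor sketch only, not the patented method; the material names and RGB triples are invented for the example.

```python
import math

# Hypothetical characterization data (cf. claims 9-12): each print-material
# combination is associated with an illuminated output color and a
# phosphorescent low-light output color. RGB triples are invented here.
GAMUT = {
    ("cyan",): ((0, 160, 200), (0, 40, 60)),
    ("phosphor_green",): ((180, 220, 180), (40, 255, 80)),
    ("cyan", "phosphor_green"): ((60, 190, 160), (20, 150, 70)),
}

def dist(a, b):
    # Euclidean distance in RGB; a real system would use a device-independent
    # color space (cf. claim 11).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def map_to_materials(target, low_light=False):
    """Nearest-neighbor mapping from a requested color to the print-material
    combination that produces it (cf. claims 12-13)."""
    idx = 1 if low_light else 0
    return min(GAMUT, key=lambda combo: dist(GAMUT[combo][idx], target))

print(map_to_materials((30, 150, 75), low_light=True))
```

Selecting a color of the low-light gamut returns the material combination to print, which could then drive control-data generation as in claim 13.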
11,071 | 11,071 | 15,896,783 | 2,636 | A PON having an OLT configured to send downlink transmissions to ONUs using amplitude modulation and two symbol rates. An example ONU includes a clock-recovery circuit capable of continuous clock extraction from the received variable-rate modulated optical signal. The continuous clock extraction can be achieved, e.g., by (i) configuring the photodetector to convert the higher-rate portions of the received optical signal into transformed electrical waveforms while converting the lower-rate portions thereof into similar electrical waveforms and (ii) configuring the clock-recovery circuit to phase-align the clock signal with signal transitions in the resulting sequence of transformed and similar electrical waveforms. An ONU configured to operate in this manner can advantageously stay locked to the received data signal during transmissions at both symbol rates, without the need to reacquire the clock signal at each rate change and/or at the beginning of each packet intended for the host ONU. | 1. An apparatus comprising an optical receiver configured to receive an optical input signal modulated with data, wherein the optical receiver comprises:
an optical-to-electrical converter configured to:
generate a first amplitude-modulated electrical signal in response to a first amplitude-modulated portion of the optical input signal, the first amplitude-modulated portion having a first symbol rate that is a rate of a clock; and
generate a second amplitude-modulated electrical signal in response to a second amplitude-modulated portion of the optical input signal, the second amplitude-modulated portion having a second symbol rate that is greater than the first symbol rate; and
a clock-recovery circuit configured to generate a first clock signal in response to said first amplitude-modulated electrical signal such that said first clock signal frequency locks to the rate of the clock, and to continue to generate said first clock signal in response to said second amplitude-modulated electrical signal such that said first clock signal continues to frequency lock to the rate of the clock. 2. The apparatus of claim 1, wherein the optical receiver further comprises a signal decoder configured to recover at least some of the data by sampling at least some electrical signals of a sequence of the first and second amplitude-modulated electrical signals at times determined using the first clock signal. 3. The apparatus of claim 2, wherein the signal decoder is configurable to recover said at least some of the data from selected first amplitude-modulated electrical signals of the sequence or from selected second amplitude-modulated electrical signals of the sequence. 4. The apparatus of claim 1, wherein the optical receiver further comprises a frequency multiplier configured to generate a second clock signal by multiplying a frequency of the first clock signal. 5. The apparatus of claim 4, wherein the optical receiver further comprises:
a clock-selector switch configured to select one of the first and second clock signals; and a signal decoder configured to recover at least some of the data by sampling at least some electrical signals of a sequence of the first and second amplitude-modulated electrical signals at times determined using the selected one of the first and second clock signals. 6. The apparatus of claim 1, wherein the clock-recovery circuit comprises:
a voltage-controlled oscillator configured to change a frequency of the first clock signal in response to an error signal; and a phase detector operatively connected to the voltage-controlled oscillator to provide the error signal thereto and configured to generate the error signal based on time differences between the signal transitions in a sequence of said first and second electrical signals and corresponding edges of the first clock signal. 7. The apparatus of claim 6, wherein the clock-recovery circuit further comprises a low-pass filter operatively connected between the phase detector and the voltage-controlled oscillator to cause the error signal to be time-averaged. 8. The apparatus of claim 1, wherein the optical-to-electrical converter is configured to have a low-pass transfer function having a 3-dB attenuation point located between a first frequency and a second frequency, the first and second frequencies being smaller than the second symbol rate or smaller than the first symbol rate. 9. The apparatus of claim 1, wherein:
the first amplitude-modulated portion comprises a non-return-to-zero (NRZ)-modulated optical signal; the second amplitude-modulated portion comprises another NRZ-modulated optical signal; the first amplitude-modulated electrical signal comprises an NRZ-modulated electrical signal; and the second amplitude-modulated electrical signal comprises a duobinary electrical signal. 10. An apparatus comprising an optical receiver configured to receive an optical input signal modulated with data, wherein the optical receiver comprises:
an optical-to-electrical converter configured to:
generate a non-return-to-zero (NRZ)-modulated electrical signal in response to a first NRZ-modulated portion of the optical input signal, the first NRZ-modulated portion having a first symbol rate that is a rate of a clock; and
generate a duobinary electrical signal in response to a second NRZ-modulated portion of the optical input signal, the second NRZ-modulated portion having a second symbol rate that is greater than the first symbol rate; and
a clock-recovery circuit configured to generate a first clock signal in response to said NRZ-modulated electrical signal such that said first clock signal frequency locks to the rate of the clock, and to continue to generate said first clock signal in response to said duobinary electrical signal such that said first clock signal continues to frequency lock to the rate of the clock. 11. The apparatus of claim 10, wherein the optical receiver further comprises a signal decoder configured to recover at least some of the data by sampling at least some electrical signals of a sequence of the NRZ-modulated and duobinary electrical signals at times determined using the first clock signal. 12. The apparatus of claim 11, wherein the signal decoder is configurable to recover said at least some of the data from an NRZ-modulated electrical signal of the sequence or from a duobinary electrical signal of the sequence. 13. The apparatus of claim 10, wherein the optical receiver further comprises a frequency multiplier configured to generate a second clock signal by multiplying a frequency of the first clock signal. 14. The apparatus of claim 13, wherein the optical receiver further comprises a signal decoder configured to recover at least some of the data by sampling at least some duobinary electrical signals of a sequence of the NRZ-modulated and duobinary electrical signals at times determined using the second clock signal. 15. The apparatus of claim 13, wherein the optical receiver further comprises:
a clock-selector switch configured to select one of the first and second clock signals; and a signal decoder configured to recover at least some of the data by sampling at least some electrical signals of a sequence of the NRZ-modulated and duobinary electrical signals at times determined using the selected one of the first and second clock signals. 16. The apparatus of claim 13, wherein the frequency multiplier is configured to generate the second clock signal by multiplying the frequency of the first clock signal by a factor of two or four. 17. The apparatus of claim 10, wherein the clock-recovery circuit comprises:
a voltage-controlled oscillator configured to change a frequency of the first clock signal in response to an error signal; and a phase detector operatively connected to the voltage-controlled oscillator to provide the error signal thereto and configured to generate the error signal based on time differences between the signal transitions in a sequence of the NRZ-modulated and duobinary electrical signals and corresponding edges of the first clock signal. 18. The apparatus of claim 17, wherein the clock-recovery circuit further comprises a low-pass filter operatively connected between the phase detector and the voltage-controlled oscillator to cause the error signal to be time-averaged. 19. The apparatus of claim 10, further comprising an optical transmitter optically connected to apply the optical input signal to the optical receiver;
wherein the optical transmitter comprises a clock generator configured to generate a master clock signal; and wherein the optical transmitter is configured to generate first and second NRZ-modulated portions of the optical input signal using the master clock signal. 20. The apparatus of claim 10, further comprising an optical transmitter and a plurality of additional optical receivers connected to the optical transmitter; and
wherein the optical transmitter is configured to broadcast an optical output signal to the optical receiver and the plurality of additional optical receivers; and wherein the optical output signal so broadcast causes the optical receiver to receive the optical input signal. | A PON having an OLT configured to send downlink transmissions to ONUs using amplitude modulation and two symbol rates. An example ONU includes a clock-recovery circuit capable of continuous clock extraction from the received variable-rate modulated optical signal. The continuous clock extraction can be achieved, e.g., by (i) configuring the photodetector to convert the higher-rate portions of the received optical signal into transformed electrical waveforms while converting the lower-rate portions thereof into similar electrical waveforms and (ii) configuring the clock-recovery circuit to phase-align the clock signal with signal transitions in the resulting sequence of transformed and similar electrical waveforms. An ONU configured to operate in this manner can advantageously stay locked to the received data signal during transmissions at both symbol rates, without the need to reacquire the clock signal at each rate change and/or at the beginning of each packet intended for the host ONU.1. An apparatus comprising an optical receiver configured to receive an optical input signal modulated with data, wherein the optical receiver comprises:
an optical-to-electrical converter configured to:
generate a first amplitude-modulated electrical signal in response to a first amplitude-modulated portion of the optical input signal, the first amplitude-modulated portion having a first symbol rate that is a rate of a clock; and
generate a second amplitude-modulated electrical signal in response to a second amplitude-modulated portion of the optical input signal, the second amplitude-modulated portion having a second symbol rate that is greater than the first symbol rate; and
a clock-recovery circuit configured to generate a first clock signal in response to said first amplitude-modulated electrical signal such that said first clock signal frequency locks to the rate of the clock, and to continue to generate said first clock signal in response to said second amplitude-modulated electrical signal such that said first clock signal continues to frequency lock to the rate of the clock. 2. The apparatus of claim 1, wherein the optical receiver further comprises a signal decoder configured to recover at least some of the data by sampling at least some electrical signals of a sequence of the first and second amplitude-modulated electrical signals at times determined using the first clock signal. 3. The apparatus of claim 2, wherein the signal decoder is configurable to recover said at least some of the data from selected first amplitude-modulated electrical signals of the sequence or from selected second amplitude-modulated electrical signals of the sequence. 4. The apparatus of claim 1, wherein the optical receiver further comprises a frequency multiplier configured to generate a second clock signal by multiplying a frequency of the first clock signal. 5. The apparatus of claim 4, wherein the optical receiver further comprises:
a clock-selector switch configured to select one of the first and second clock signals; and a signal decoder configured to recover at least some of the data by sampling at least some electrical signals of a sequence of the first and second amplitude-modulated electrical signals at times determined using the selected one of the first and second clock signals. 6. The apparatus of claim 1, wherein the clock-recovery circuit comprises:
a voltage-controlled oscillator configured to change a frequency of the first clock signal in response to an error signal; and a phase detector operatively connected to the voltage-controlled oscillator to provide the error signal thereto and configured to generate the error signal based on time differences between the signal transitions in a sequence of said first and second electrical signals and corresponding edges of the first clock signal. 7. The apparatus of claim 6, wherein the clock-recovery circuit further comprises a low-pass filter operatively connected between the phase detector and the voltage-controlled oscillator to cause the error signal to be time-averaged. 8. The apparatus of claim 1, wherein the optical-to-electrical converter is configured to have a low-pass transfer function having a 3-dB attenuation point located between a first frequency and a second frequency, the first and second frequencies being smaller than the second symbol rate or smaller than the first symbol rate. 9. The apparatus of claim 1, wherein:
the first amplitude-modulated portion comprises a non-return-to-zero (NRZ)-modulated optical signal; the second amplitude-modulated portion comprises another NRZ-modulated optical signal; the first amplitude-modulated electrical signal comprises an NRZ-modulated electrical signal; and the second amplitude-modulated electrical signal comprises a duobinary electrical signal. 10. An apparatus comprising an optical receiver configured to receive an optical input signal modulated with data, wherein the optical receiver comprises:
an optical-to-electrical converter configured to:
generate a non-return-to-zero (NRZ)-modulated electrical signal in response to a first NRZ-modulated portion of the optical input signal, the first NRZ-modulated portion having a first symbol rate that is a rate of a clock; and
generate a duobinary electrical signal in response to a second NRZ-modulated portion of the optical input signal, the second NRZ-modulated portion having a second symbol rate that is greater than the first symbol rate; and
a clock-recovery circuit configured to generate a first clock signal in response to said NRZ-modulated electrical signal such that said first clock signal frequency locks to the rate of the clock, and to continue to generate said first clock signal in response to said duobinary electrical signal such that said first clock signal continues to frequency lock to the rate of the clock. 11. The apparatus of claim 10, wherein the optical receiver further comprises a signal decoder configured to recover at least some of the data by sampling at least some electrical signals of a sequence of the NRZ-modulated and duobinary electrical signals at times determined using the first clock signal. 12. The apparatus of claim 11, wherein the signal decoder is configurable to recover said at least some of the data from an NRZ-modulated electrical signal of the sequence or from a duobinary electrical signal of the sequence. 13. The apparatus of claim 10, wherein the optical receiver further comprises a frequency multiplier configured to generate a second clock signal by multiplying a frequency of the first clock signal. 14. The apparatus of claim 13, wherein the optical receiver further comprises a signal decoder configured to recover at least some of the data by sampling at least some duobinary electrical signals of a sequence of the NRZ-modulated and duobinary electrical signals at times determined using the second clock signal. 15. The apparatus of claim 13, wherein the optical receiver further comprises:
a clock-selector switch configured to select one of the first and second clock signals; and a signal decoder configured to recover at least some of the data by sampling at least some electrical signals of a sequence of the NRZ-modulated and duobinary electrical signals at times determined using the selected one of the first and second clock signals. 16. The apparatus of claim 13, wherein the frequency multiplier is configured to generate the second clock signal by multiplying the frequency of the first clock signal by a factor of two or four. 17. The apparatus of claim 10, wherein the clock-recovery circuit comprises:
a voltage-controlled oscillator configured to change a frequency of the first clock signal in response to an error signal; and a phase detector operatively connected to the voltage-controlled oscillator to provide the error signal thereto and configured to generate the error signal based on time differences between the signal transitions in a sequence of the NRZ-modulated and duobinary electrical signals and corresponding edges of the first clock signal. 18. The apparatus of claim 17, wherein the clock-recovery circuit further comprises a low-pass filter operatively connected between the phase detector and the voltage-controlled oscillator to cause the error signal to be time-averaged. 19. The apparatus of claim 10, further comprising an optical transmitter optically connected to apply the optical input signal to the optical receiver;
wherein the optical transmitter comprises a clock generator configured to generate a master clock signal; and wherein the optical transmitter is configured to generate first and second NRZ-modulated portions of the optical input signal using the master clock signal. 20. The apparatus of claim 10, further comprising an optical transmitter and a plurality of additional optical receivers connected to the optical transmitter; and
wherein the optical transmitter is configured to broadcast an optical output signal to the optical receiver and the plurality of additional optical receivers; and wherein the optical output signal so broadcast causes the optical receiver to receive the optical input signal. | 2,600 |
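The key mechanism in claims 8-10 above is that a band-limited optical-to-electrical converter passes the lower-rate NRZ portion largely intact while transforming the higher-rate NRZ portion into a three-level duobinary signal whose transitions stay aligned with the same master clock. A minimal sketch, modeling the band limit as the classic duobinary sum y[n] = x[n] + x[n-1] (an illustrative assumption, not the patented photodetector):

```python
# Band-limited conversion of a high-rate NRZ bit stream into a three-level
# duobinary signal (cf. claim 8): each output sample is the sum of the
# current and previous input bits, giving levels 0, 1, or 2.
def duobinary(bits):
    prev, out = 0, []
    for b in bits:
        out.append(b + prev)  # levels 0, 1, 2
        prev = b
    return out

high_rate_nrz = [1, 0, 1, 1, 0, 1]
print(duobinary(high_rate_nrz))  # -> [1, 1, 1, 2, 1, 1]
```

Because every level change in the duobinary output still falls on an edge of the original symbol clock, a clock-recovery loop phase-aligned to those transitions can stay frequency-locked across changes between the two symbol rates, which is the point of claim 1's "continues to frequency lock."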
11,072 | 11,072 | 15,840,382 | 2,612 | Provided are an image processing method and an image processing device. The image processing method includes generating an image based on viewpoint information of a user; rendering the image based on information about what is in front of the user; and outputting the rendered image using an optical element. | 1. An image processing method comprising:
generating an image based on viewpoint information of a user; rendering the image based on information about at least one of surroundings and an object present in front of the user; and outputting the rendered image using an optical element. 2. The image processing method of claim 1, wherein the rendering the image comprises acquiring at least one of a shape, a position, and a depth of an object present in front of the user using a sensor. 3. The image processing method of claim 1, wherein the generating the image comprises acquiring the viewpoint information by directly detecting an eye of the user using one of an image camera and an infrared camera or by detecting the eye of the user as reflected on a windshield of a vehicle. 4. The image processing method of claim 1, wherein the generating the image comprises:
determining a position of an eye of the user based on the viewpoint information; and allocating an image to a plurality of sub-pixels corresponding to the position of the eye. 5. The image processing method of claim 4, wherein the allocating the image comprises:
allocating an image to be input to a left eye of the user to a plurality of sub-pixels corresponding to a position of the left eye of the user; and allocating an image to be input to a right eye of the user to a plurality of sub-pixels corresponding to a position of the right eye of the user. 6. The image processing method of claim 1, wherein the generating the image comprises generating the image based on the viewpoint information and an optical transform. 7. The image processing method of claim 1, wherein the optical element is at least one of a lenticular lens and a parallax barrier. 8. The image processing method of claim 1, wherein the outputting the rendered image comprises enlarging the rendered image using a magnifying optical system. 9. The image processing method of claim 8, wherein the magnifying optical system comprises at least one of an aspherical mirror and a plane mirror. 10. The image processing method of claim 1, wherein the rendering the image comprises rendering the image so that a depth of the image is greater than a virtual image distance. 11. An image processing device comprising:
a controller configured to generate an image based on viewpoint information of a user, and to render the image based on information about at least one of surroundings and an object present in front of the user; and an optical element configured to output the rendered image. 12. The image processing device of claim 11, further comprising a sensor configured to acquire at least one of a shape, a position, and a depth of an object present in front of the user. 13. The image processing device of claim 11, further comprising at least one of an image camera and an infrared camera configured to acquire the viewpoint information,
wherein the at least one of the image camera and the infrared camera is configured to acquire the viewpoint information based on one of a direct detection of an eye of the user and a detection of the eye of the user as reflected on a windshield of a vehicle. 14. The image processing device of claim 11, wherein the controller is further configured to determine a position of an eye of the user based on the viewpoint information and to allocate an image to a plurality of sub-pixels of a display corresponding to the position of the eye. 15. The image processing device of claim 14, wherein the controller is further configured to allocate an image to be input to a left eye of the user to a plurality of sub-pixels of the display corresponding to a position of the left eye of the user, and to allocate an image to be input to a right eye of the user to a plurality of sub-pixels of the display corresponding to a position of the right eye of the user. 16. The image processing device of claim 11, wherein the controller is further configured to generate the image based on the viewpoint information and an optical transform. 17. The image processing device of claim 11, wherein the optical element is at least one of a lenticular lens and a parallax barrier. 18. The image processing device of claim 11, further comprising:
a magnifying optical system configured to enlarge the image output from the optical element. 19. The image processing device of claim 18, wherein the magnifying optical system comprises at least one of an aspherical mirror and a plane mirror. 20. The image processing device of claim 11, wherein the controller is further configured to render the image so that a depth of the image is greater than a virtual image distance. | Provided are an image processing method and an image processing device. The image processing method includes generating an image based on viewpoint information of a user; rendering the image based on information about what is in front of the user; and outputting the rendered image using an optical element.1. An image processing method comprising:
generating an image based on viewpoint information of a user; rendering the image based on information about at least one of surroundings and an object present in front of the user; and outputting the rendered image using an optical element. 2. The image processing method of claim 1, wherein the rendering the image comprises acquiring at least one of a shape, a position, and a depth of an object present in front of the user using a sensor. 3. The image processing method of claim 1, wherein the generating the image comprises acquiring the viewpoint information by directly detecting an eye of the user using one of an image camera and an infrared camera or by detecting the eye of the user as reflected on a windshield of a vehicle. 4. The image processing method of claim 1, wherein the generating the image comprises:
determining a position of an eye of the user based on the viewpoint information; and allocating an image to a plurality of sub-pixels corresponding to the position of the eye. 5. The image processing method of claim 4, wherein the allocating the image comprises:
allocating an image to be input to a left eye of the user to a plurality of sub-pixels corresponding to a position of the left eye of the user; and allocating an image to be input to a right eye of the user to a plurality of sub-pixels corresponding to a position of the right eye of the user. 6. The image processing method of claim 1, wherein the generating the image comprises generating the image based on the viewpoint information and an optical transform. 7. The image processing method of claim 1, wherein the optical element is at least one of a lenticular lens and a parallax barrier. 8. The image processing method of claim 1, wherein the outputting the rendered image comprises enlarging the rendered image using a magnifying optical system. 9. The image processing method of claim 8, wherein the magnifying optical system comprises at least one of an aspherical mirror and a plane mirror. 10. The image processing method of claim 1, wherein the rendering the image comprises rendering the image so that a depth of the image is greater than a virtual image distance. 11. An image processing device comprising:
a controller configured to generate an image based on viewpoint information of a user, and to render the image based on information about at least one of surroundings and an object present in front of the user; and an optical element configured to output the rendered image. 12. The image processing device of claim 11, further comprising a sensor configured to acquire at least one of a shape, a position, and a depth of an object present in front of the user. 13. The image processing device of claim 11, further comprising at least one of an image camera and an infrared camera configured to acquire the viewpoint information,
wherein the at least one of the image camera and the infrared camera is configured to acquire the viewpoint information based on one of a direct detection of an eye of the user and a detection of the eye of the user as reflected on a windshield of a vehicle. 14. The image processing device of claim 11, wherein the controller is further configured to determine a position of an eye of the user based on the viewpoint information and to allocate an image to a plurality of sub-pixels of a display corresponding to the position of the eye. 15. The image processing device of claim 14, wherein the controller is further configured to allocate an image to be input to a left eye of the user to a plurality of sub-pixels of the display corresponding to a position of the left eye of the user, and to allocate an image to be input to a right eye of the user to a plurality of sub-pixels of the display corresponding to a position of the right eye of the user. 16. The image processing device of claim 11, wherein the controller is further configured to generate the image based on the viewpoint information and an optical transform. 17. The image processing device of claim 11, wherein the optical element is at least one of a lenticular lens and a parallax barrier. 18. The image processing device of claim 11, further comprising:
a magnifying optical system configured to enlarge the image output from the optical element. 19. The image processing device of claim 18, wherein the magnifying optical system comprises at least one of an aspherical mirror and a plane mirror. 20. The image processing device of claim 11, wherein the controller is further configured to render the image so that a depth of the image is greater than a virtual image distance. | 2,600 |
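The sub-pixel allocation of claims 4-5 and 14-15 (steering a left-eye image and a right-eye image to different sub-pixels based on the tracked eye position) can be sketched as follows. This is an illustrative toy model, not the patented method: the `EyePositions` type, the pitch value, and the simple phase-offset parallax model are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class EyePositions:
    left: tuple   # (x, y, z) of the left eye, e.g. from an infrared camera
    right: tuple  # (x, y, z) of the right eye

def allocate_subpixels(width, height, eyes, lenticular_pitch=4):
    """Mark each sub-pixel 'L' or 'R' depending on which eye the optical
    element (lenticular lens / parallax barrier) steers it toward."""
    # Toy parallax model: the lens repeats every `lenticular_pitch`
    # sub-pixels, and the phase offset tracks the horizontal eye position.
    offset = int(eyes.left[0]) % lenticular_pitch
    return [
        ['L' if (x + offset) % lenticular_pitch < lenticular_pitch // 2 else 'R'
         for x in range(width)]
        for _ in range(height)
    ]

eyes = EyePositions(left=(0, 0, 600), right=(65, 0, 600))
alloc = allocate_subpixels(8, 2, eyes)
print(alloc[0])  # ['L', 'L', 'R', 'R', 'L', 'L', 'R', 'R']
```

A real renderer would derive the offset from the display geometry and an optical transform (claims 6 and 16) rather than from the raw x coordinate.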
11,073 | 11,073 | 16,784,185 | 2,689 | A system for passive locking and unlocking a vehicle is described that includes a memory for storing executable instructions and a processor configured to execute the instructions to unlock the vehicle using a time of flight required to transmit a trigger request between the vehicle and a first key fob. The processor receives, from the first key fob, a trigger request to unlock the vehicle, and determines, based on key fob identifiers that identify the first key fob and a second key fob that is also near the vehicle, that the first and second key fobs are associated with the vehicle. The processor selects the first key fob based on a key fob characteristic. The processor evaluates the time of flight using the first key fob, and unlocks the vehicle based on the time of flight satisfying a threshold for signal transmission flight time. | 1. A computer-implemented method, comprising:
receiving, from a first key fob, a trigger request comprising a first identifier; determining, based on the first identifier, that the first key fob is associated with a vehicle; receiving a second identifier from a second key fob; determining, based on the second identifier, that the second key fob is associated with the vehicle; selecting the first key fob, based on a key fob characteristic, the key fob characteristic comprising historical data associated with a previous authorization of the first key fob to access the vehicle; determining a time of flight associated with the first key fob; and unlocking the vehicle based on the time of flight satisfying a threshold. 2. The computer-implemented method according to claim 1, further comprising:
transmitting a first message to the first key fob and a second message to the second key fob; receiving, from the first key fob, a first response message including the first identifier and a second response message from the second key fob including the second identifier, wherein the first key fob is selected for the time of flight determination instead of the second key fob based on the key fob characteristic; and sending, to a body control module (BCM), a command message configured to cause the BCM to perform a trigger operation. 3. The computer-implemented method according to claim 1, wherein the key fob characteristic further comprises a Received Signal Strength Indication (RSSI) value. 4. The computer-implemented method according to claim 3, wherein selecting the first key fob comprises:
receiving a first RSSI value for the first key fob; and wherein selecting the first key fob is based on the first RSSI value. 5. The computer-implemented method according to claim 1, wherein selecting the first key fob comprises:
determining date information and time information associated with a previous vehicle access for the first key fob; and wherein selecting the first key fob is based on the date information and the time information. 6. The computer-implemented method according to claim 1, wherein selecting the first key fob comprises:
retrieving a first key fob setting; and wherein selecting the first key fob is based on the first key fob setting. 7. The computer-implemented method according to claim 1, wherein
the historical data comprises date and time information associated with a prior trigger event request, and wherein selecting the first key fob is performed after the first identifier is received from the first key fob and the second identifier is received from the second key fob. 8. A system, comprising:
a processor; and a memory for storing executable instructions, the processor configured to execute the instructions to: receive, from a first key fob, a trigger request comprising a first identifier; determine, based on the first identifier, that the first key fob is associated with a vehicle; receive a second identifier from a second key fob; determine, based on the second identifier, that the second key fob is associated with the vehicle; select the first key fob, based on a key fob characteristic associated with the first key fob, wherein selecting the first key fob is performed after the first identifier is received from the first key fob and the second identifier is received from the second key fob; determine a time of flight associated with the first key fob; and unlock the vehicle based on a time of flight satisfying a threshold. 9. The system according to claim 8, wherein the processor is further configured to execute the instructions to:
transmit a message to the first key fob; receive, from the first key fob, a response message; determine that the time of flight associated with the response message satisfies the threshold; and send, to a body control module (BCM), a command message configured to cause the BCM to perform a trigger operation. 10. The system according to claim 8, wherein the key fob characteristic comprises a Received Signal Strength Indication (RSSI). 11. The system according to claim 10, wherein the processor is further configured to execute the instructions to:
receive a first RSSI value for the first key fob; and wherein to select the first key fob is based on the first RSSI value. 12. The system according to claim 8, wherein the processor is further configured to execute the instructions to:
determine date information and time information associated with a previous vehicle access for the first key fob; and wherein to select the first key fob is based on the date information and the time information. 13. The system according to claim 8, wherein the processor is further configured to execute the instructions to:
retrieve a first key fob setting; and wherein to select the first key fob is based on the first key fob setting. 14. The system according to claim 9, wherein to select the first key fob is based on date and time information associated with a prior trigger event request. 15. A Bluetooth® Low Energy (BLE) module having instructions stored thereupon which, when executed by a processor, cause the processor to:
receive, from a first key fob, a trigger request comprising a first identifier;
determine, based on the first identifier, that the first key fob is associated with a vehicle;
receive a second identifier from a second key fob;
determine, based on the second identifier, that the second key fob is associated with the vehicle;
select the first key fob, based on a key fob characteristic, the key fob characteristic comprising historical data associated with a previous authorization of the first key fob to access the vehicle;
determine a time of flight associated with the first key fob; and
unlock the vehicle based on the time of flight satisfying a threshold. 16. The BLE module according to claim 15, having further instructions stored thereupon to:
transmit a message to the first key fob; receive a response message from the first key fob; determine that the time of flight associated with the response message satisfies the threshold; and transmit, to a body control module (BCM), a command message configured to cause the BCM to perform a trigger operation. 17. The BLE module according to claim 15, having further instructions stored thereupon to:
receive a first Received Signal Strength Indication (RSSI) value for the first key fob; and wherein to select the first key fob is based on the first RSSI value. 18. The BLE module according to claim 15, wherein the processor is further configured to execute the instructions to:
determine date information and time information associated with a previous vehicle access for the first key fob; and wherein to select the first key fob is based on the date information and the time information. 19. The BLE module according to claim 15, wherein the processor is further configured to execute the instructions to:
retrieve a first key fob setting; and wherein to select the first key fob is based on the first key fob setting. 20. The BLE module according to claim 15, wherein to select the first key fob is based on date and time information associated with a prior trigger event request by the first key fob. | 2,600 |
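The flow of claim 1 above (identify both fobs as associated with the vehicle, select one by a key fob characteristic such as historical authorization data, then unlock only if the time of flight satisfies a threshold) can be sketched as follows. This is a minimal illustration, not the claimed system: the fob identifiers, the threshold value, and the selection heuristic are assumptions.

```python
AUTHORIZED_FOBS = {"fob-A", "fob-B"}        # identifiers paired with the vehicle
LAST_AUTHORIZED = {"fob-A": 1700000000.0}   # historical data: prior authorizations (epoch s)
TOF_THRESHOLD_S = 70e-9                     # round-trip flight-time threshold (~10 m range)

def select_fob(candidates):
    """Pick the fob with the most recent prior authorization (the 'key fob
    characteristic' of claim 1), treating unseen fobs as time 0."""
    return max(candidates, key=lambda f: LAST_AUTHORIZED.get(f, 0.0))

def try_unlock(trigger_fob, other_fob, measure_tof):
    # Keep only fobs whose identifiers are associated with the vehicle.
    candidates = [f for f in (trigger_fob, other_fob) if f in AUTHORIZED_FOBS]
    if not candidates:
        return False
    chosen = select_fob(candidates)
    tof = measure_tof(chosen)        # measured signal flight time, in seconds
    return tof <= TOF_THRESHOLD_S    # unlock only if the threshold is satisfied

# A nearby fob (10 ns flight time) unlocks; a relayed/distant one (1 us) does not.
print(try_unlock("fob-A", "fob-B", lambda f: 10e-9))  # True
print(try_unlock("fob-A", "fob-B", lambda f: 1e-6))   # False
```

The time-of-flight gate is what defeats simple relay attacks: a relayed signal adds propagation delay that pushes `tof` past the threshold even though the identifier check passes.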
11,074 | 11,074 | 16,672,694 | 2,631 | A transmitting apparatus according to one aspect of the present disclosure transmits a plurality of first transmission data and a plurality of second transmission data by using an OFDM (Orthogonal Frequency-Division Multiplexing) method. The transmitting apparatus includes frame configuring circuitry, which in operation, generates a frame including a first period in which a preamble is transmitted, a second period in which the plurality of first transmission data is multiplexed by a time division multiplexing method and is transmitted, and a third period in which the plurality of second transmission data is multiplexed by a frequency division multiplexing method and is transmitted; and transmitting circuitry that transmits the frame. | 1. A transmission method according to Frequency-Division Multiplexing (FDM), the transmission method comprising:
selecting a pattern from among resource allocation patterns, the resource allocation patterns defining respective allocations of Orthogonal Frequency-Division Multiplexing (OFDM) subcarriers to subcarrier groups; generating a preamble symbol including allocation information indicating the selected pattern; mapping data groups onto the subcarrier groups according to the selected pattern to perform FDM; and transmitting the preamble symbol and the mapped data groups according to OFDM, wherein a fixed subcarrier group having fixed OFDM subcarriers is provided regardless of the selected pattern, and the number of the fixed OFDM subcarriers is fixed regardless of the selected pattern. 2. The transmission method according to claim 1, wherein the number of OFDM subcarriers included in each subcarrier group is a multiplier of N, N being an integer greater than two. 3. The transmission method according to claim 1, wherein
the preamble symbol includes a first portion and a second portion, the first portion includes information on at least one of modulation or error correcting coding, and the second portion includes the allocation information. 4. The transmission method according to claim 1, further comprising:
selecting a Time-Division Multiplexing (TDM) scheme; and performing TDM on the data groups if the TDM scheme is selected. 5. The transmission method according to claim 1, wherein transmission control information is mapped onto the fixed subcarrier group. 6. A transmission method according to Frequency-Division Multiplexing (FDM), the transmission method comprising:
selecting a pattern from among resource allocation patterns, the resource allocation patterns defining respective allocations of Orthogonal Frequency-Division Multiplexing (OFDM) subcarriers to subcarrier groups, the resource allocation patterns including a first pattern and a second pattern according to which the OFDM subcarriers are allocated to first subcarrier groups and second subcarrier groups, respectively; generating a preamble symbol including allocation information indicating the selected pattern; mapping data groups onto the subcarrier groups according to the selected pattern to perform FDM; and transmitting the preamble symbol and the mapped data groups according to OFDM, wherein the number of the first subcarrier groups is different from the number of the second subcarrier groups, and the first subcarrier groups and the second subcarrier groups include a fixed subcarrier group in common. 7. The transmission method according to claim 6, wherein the number of OFDM subcarriers included in each subcarrier group is a multiplier of N, N being an integer greater than two. 8. The transmission method according to claim 6, wherein
the preamble symbol includes a first portion and a second portion, the first portion includes information on at least one of modulation or error correcting coding, and the second portion includes the allocation information. 9. The transmission method according to claim 6, further comprising:
selecting a Time-Division Multiplexing (TDM) scheme; and performing TDM on the data groups if the TDM scheme is selected. 10. The transmission method according to claim 6, wherein transmission control information is mapped onto the fixed subcarrier group. 11. A reception method according to Frequency-Division Multiplexing (FDM), the reception method comprising:
receiving a preamble symbol and mapped data groups according to Orthogonal Frequency-Division Multiplexing (OFDM); reading resource allocation information included in the preamble to identify a selected pattern, the selected pattern having been selected from among resource allocation patterns, the resource allocation patterns defining respective allocations of OFDM subcarriers to subcarrier groups, the resource allocation patterns including a first pattern and a second pattern according to which the OFDM subcarriers are allocated to first subcarrier groups and second subcarrier groups, respectively; and demapping the mapped data groups to generate data groups according to the selected pattern, wherein the number of the first subcarrier groups is different from the number of the second subcarrier groups, and the first subcarrier groups and the second subcarrier groups include a fixed subcarrier group in common. 12. The reception method according to claim 11, wherein the number of OFDM subcarriers included in each subcarrier group is a multiplier of N, N being an integer greater than two. 13. The reception method according to claim 11, wherein
the preamble symbol includes a first portion and a second portion, the first portion includes information on at least one of modulation or error correcting coding, and the second portion includes the allocation information. 14. The reception method according to claim 11, further comprising:
reading the preamble symbol to determine whether a Time-Division Multiplexing (TDM) scheme is used; and demapping the mapped data groups according to TDM if the TDM scheme is determined to be used. 15. The reception method according to claim 11, wherein transmission control information is mapped onto the fixed subcarrier group. | 2,600 |
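The resource-allocation scheme of claims 1 and 6 above (several patterns dividing the OFDM subcarriers into groups, with differing group counts but one fixed subcarrier group common to every pattern) can be sketched as follows. The subcarrier count, the group boundaries, and the pattern ids are illustrative assumptions, not values from the application.

```python
N_SUBCARRIERS = 48
FIXED_GROUP = tuple(range(0, 8))  # fixed subcarriers, identical in every pattern

# pattern id -> subcarrier groups; the group counts differ (3 vs 4), but the
# fixed group is common to both patterns, as claim 6 requires.
PATTERNS = {
    1: [FIXED_GROUP, tuple(range(8, 28)), tuple(range(28, 48))],
    2: [FIXED_GROUP, tuple(range(8, 24)), tuple(range(24, 36)), tuple(range(36, 48))],
}

def map_groups(pattern_id, data_groups):
    """FDM mapping: data_groups[i] is carried on subcarrier group i."""
    groups = PATTERNS[pattern_id]
    if len(data_groups) != len(groups):
        raise ValueError("one data group per subcarrier group")
    return dict(zip(groups, data_groups))

# The preamble symbol would carry `pattern_id` as the allocation information;
# here, control data rides on the fixed group regardless of the pattern.
mapping = map_groups(1, ["control", "service-1", "service-2"])
print(mapping[FIXED_GROUP])  # control
```

Because the fixed group never moves, a receiver can decode the transmission control information mapped onto it (claims 5, 10, and 15) before it has parsed the allocation information for the remaining groups.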
11,075 | 11,075 | 16,656,525 | 2,644 | An aspect includes a method for use with a mobile communications device comprising a plurality of subscriber identification modules sharing a cache. The method includes a first subscriber identification module authenticating with and connecting to the network, followed by a second subscriber identification module authenticating with and connecting to the network. The method also includes, while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second subscriber identification module requesting and receiving second data from the network and storing the second data in the cache. The method further includes an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache. | 1. A method for use with a mobile communications device comprising a plurality of subscriber identification modules sharing a hardware cache, the method comprising:
a first subscriber identification module authenticating with and connecting to a network, followed by a second subscriber identification module authenticating with and connecting to the network; while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second subscriber identification module requesting and receiving second data from the network and storing the second data in the cache; and an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache. 2. The method of claim 1, wherein the first data is requested from the network and stored in the cache by the first subscriber identification module for processing by the application substantially in real time, and wherein the second data is requested from the network and stored in the cache by the second subscriber identification module for processing at a time after the first data has been processed. 3. The method of claim 1, wherein at least a portion of the second data is a duplicate of at least a portion of the first data, such that the portion of the second data is processed by the application instead of the portion of the first data. 4. The method of claim 1, wherein the network comprises a citizens broadband radio service. 5. The method of claim 4, wherein the first subscriber identification module connects to the network through a first sector of a given CBSD within the network, and wherein the second subscriber identification module connects to the network through a second sector of the given CBSD within the network. 6.
The method of claim 4, wherein the first subscriber identification module connects to the network through a first one of a plurality of CBSDs within the network, and wherein the second subscriber identification module connects to the network through a second one of the plurality of CBSDs within the network. 7. A method for use with a mobile communications device comprising a plurality of subscriber identification modules sharing a hardware cache, the method comprising:
a first subscriber identification module authenticating with and connecting to a network, followed by a second subscriber identification module authenticating with and connecting to the network; while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second identification module requesting and receiving second data from the network and storing the second data in the cache; and an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache, wherein the first subscriber identification module authenticating with and connecting to the network, followed by the second subscriber identification module authenticating with and connecting to the network, further comprises: the first subscriber identification module authenticating with and connecting to a first one of a plurality of CBSDs within the network; and responsive to detection of a trigger, the second subscriber identification module:
selecting either the first one or a second one of the plurality of CBSDs; and
authenticating with and connecting to the selected one of the plurality of CBSDs. 8. The method of claim 7, further comprising measuring signal metrics for respective ones of the plurality of CBSDs within the network, wherein the first or second one of the plurality of CBSDs is selected based at least in part on the measured signal metrics. 9. The method of claim 8, wherein the second subscriber identification module measures the signal metrics responsive to detection of the trigger. 10. The method of claim 8, wherein the signal metrics comprise at least one of Reference Signal Received Power (RSRP) and Reference Signal Received Quality (RSRQ). 11. The method of claim 7, wherein the trigger is based at least in part on the connection between the first subscriber identification module and the first one of the plurality of CBSDs within the network. 12. The method of claim 11, wherein the trigger indicates that the connection of the first subscriber identification module to the network is sufficiently strong that the second subscriber identification module should connect to the network and receive the second data. 13. The method of claim 11, wherein the trigger indicates that the connection of the first subscriber identification module to the network is sufficiently weak that the second subscriber identification module should connect to the network and receive the second data. 14. The method of claim 7, wherein the trigger indicates that the application requires the second data to be received from the network and stored in the cache. 15. The method of claim 14, wherein the trigger indicates that the application is consuming the first data from the cache faster than the first subscriber identification module is storing the first data in the cache after receiving the first data from the network. 16. The method of claim 14, further comprising the steps of:
after the second subscriber identification module connects to the network, the second subscriber identification module requesting and receiving the second data from the network and storing the second data in the cache; and after the second subscriber identification module stores the second data in the cache, determining whether the application requires additional data to be received from the network and stored in the cache; and if the application does require the additional data to be received from the network and stored in the cache, the second subscriber identification module requesting and receiving the second data from the network and storing the second data in the cache; if the application does not require the additional data to be received from the network and stored in the cache, the second subscriber identification module disconnecting from the network. 17. The method of claim 16, wherein the second subscriber module is powered on responsive to detection of the trigger, and wherein the second subscriber identification module is powered off upon disconnecting from the network. 18. A computer program product usable within a mobile communications device, the device comprising a plurality of subscriber identification modules sharing a hardware cache, the computer program comprising a non-transitory machine-readable storage medium having machine-readable program code embodied therewith, said machine-readable program code being operative to cause the device to perform a method comprising:
a first subscriber identification module authenticating with and connecting to a network, followed by a second subscriber identification module authenticating with and connecting to the network; while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second identification module requesting and receiving second data from the network and storing the second data in the cache; and an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache. 19. A mobile communications device, comprising:
a memory comprising a hardware cache; a plurality of subscriber identification modules sharing the cache; and a processor operative to cause the device to perform a method comprising:
the first subscriber identification module authenticating with and connecting to the network, followed by a second subscriber identification module authenticating with and connecting to the network;
while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second identification module requesting and receiving second data from the network and storing the second data in the cache; and
an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache. 20. The device of claim 19, further comprising a plurality of radio frequency transceivers corresponding to respective ones of the plurality of subscriber identification modules, wherein the first and second subscriber identification modules connect to and receive data from the network through respective first and second radio frequency transceivers. | An aspect includes a method for use with a mobile communications device comprising a plurality of subscriber identification modules sharing a cache. The method includes a first subscriber identification module authenticating with and connecting to the network, followed by a second subscriber identification module authenticating with and connecting to the network. The method also includes, while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second identification module requesting and receiving second data from the network and storing the second data in the cache. The method further includes an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache.1. A method for use with a mobile communications device comprising a plurality of subscriber identification modules sharing a hardware cache, the method comprising:
a first subscriber identification module authenticating with and connecting to a network, followed by a second subscriber identification module authenticating with and connecting to the network; while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second identification module requesting and receiving second data from the network and storing the second data in the cache; and an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache. 2. The method of claim 1, wherein the first data is requested from the network and stored in the cache by the first subscriber module for processing by the application substantially in real time, and wherein the second data is requested from the network and stored in the cache by the second subscriber module for processing at a time after the first data has been processed. 3. The method of claim 1, wherein at least a portion of the second data is a duplicate of at least a portion of the first data, such that the portion of the second data is processed by the application instead of the portion of the first data. 4. The method of claim 1, wherein the network comprises a citizens broadband radio service. 5. The method of claim 4, wherein the first subscriber identification module connects to the network through a first sector of a given CBSD within the network, and wherein the second subscriber identification module connects to the network through a second sector of the given CBSD within the network. 6. 
The method of claim 4, wherein the first subscriber identification module connects to the network through a first one of a plurality of CBSDs within the network, and wherein the second subscriber identification module connects to the network through a second one of the plurality of CBSDs within the network. 7. A method for use with a mobile communications device comprising a plurality of subscriber identification modules sharing a hardware cache, the method comprising:
a first subscriber identification module authenticating with and connecting to a network, followed by a second subscriber identification module authenticating with and connecting to the network; while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second identification module requesting and receiving second data from the network and storing the second data in the cache; and an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache, wherein the first subscriber identification module authenticating with and connecting to the network, followed by the second subscriber identification module authenticating with and connecting to the network, further comprises: the first subscriber identification module authenticating with and connecting to a first one of a plurality of CBSDs within the network; and responsive to detection of a trigger, the second subscriber identification module:
selecting either the first one or a second one of the plurality of CBSDs; and
authenticating with and connecting to the selected one of the plurality of CBSDs. 8. The method of claim 7, further comprising measuring signal metrics for respective ones of the plurality of CBSDs within the network, wherein the first or second one of the plurality of CBSDs is selected based at least in part on the measured signal metrics. 9. The method of claim 8, wherein the second subscriber identification module measures the signal metrics responsive to detection of the trigger. 10. The method of claim 8, wherein the signal metrics comprise at least one of Reference Signal Received Power (RSRP) and Reference Signal Received Quality (RSRQ). 11. The method of claim 7, wherein the trigger is based at least in part on the connection between the first subscriber identification module and the first one of the plurality of CBSDs within the network. 12. The method of claim 11, wherein the trigger indicates that the connection of the first subscriber identification module to the network is sufficiently strong that the second subscriber identification module should connect to the network and receive the second data. 13. The method of claim 11, wherein the trigger indicates that the connection of the first subscriber identification module to the network is sufficiently weak that the second subscriber identification module should connect to the network and receive the second data. 14. The method of claim 7, wherein the trigger indicates that the application requires the second data to be received from the network and stored in the cache. 15. The method of claim 14, wherein the trigger indicates that the application is consuming the first data from the cache faster than the first subscriber identification module is storing the first data in the cache after receiving the first data from the network. 16. The method of claim 14, further comprising the steps of:
after the second subscriber identification module connects to the network, the second subscriber identification module requesting and receiving the second data from the network and storing the second data in the cache; and after the second subscriber identification module stores the second data in the cache, determining whether the application requires additional data to be received from the network and stored in the cache; and if the application does require the additional data to be received from the network and stored in the cache, the second subscriber identification module requesting and receiving the second data from the network and storing the second data in the cache; if the application does not require the additional data to be received from the network and stored in the cache, the second subscriber identification module disconnecting from the network. 17. The method of claim 16, wherein the second subscriber module is powered on responsive to detection of the trigger, and wherein the second subscriber identification module is powered off upon disconnecting from the network. 18. A computer program product usable within a mobile communications device, the device comprising a plurality of subscriber identification modules sharing a hardware cache, the computer program comprising a non-transitory machine-readable storage medium having machine-readable program code embodied therewith, said machine-readable program code being operative to cause the device to perform a method comprising:
a first subscriber identification module authenticating with and connecting to a network, followed by a second subscriber identification module authenticating with and connecting to the network; while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second identification module requesting and receiving second data from the network and storing the second data in the cache; and an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache. 19. A mobile communications device, comprising:
a memory comprising a hardware cache; a plurality of subscriber identification modules sharing the cache; and a processor operative to cause the device to perform a method comprising:
the first subscriber identification module authenticating with and connecting to the network, followed by a second subscriber identification module authenticating with and connecting to the network;
while the first subscriber identification module is requesting and receiving first data from the network and storing the first data in the cache, the second identification module requesting and receiving second data from the network and storing the second data in the cache; and
an application executing on the device retrieving and processing at least a portion of the first and second data stored in the cache, while at least one of the first and second subscriber identification modules is storing additional data received from the network in the cache. 20. The device of claim 19, further comprising a plurality of radio frequency transceivers corresponding to respective ones of the plurality of subscriber identification modules, wherein the first and second subscriber identification modules connect to and receive data from the network through respective first and second radio frequency transceivers. | 2,600 |
11,076 | 11,076 | 13,163,639 | 2,626 | The present disclosure provides an information processing apparatus including, a detection unit configured to detect the position of an operating body relative to a display surface of a display unit displaying an object group made up of a plurality of objects each being related to a content, and a display change portion configured to change a focus position of the objects making up the object group based on a result of the detection performed by the detection unit, wherein if the result of the detection by the detection unit has detected the operating body moving linearly in a predetermined operating direction thereof substantially parallel to the display surface, then the display change portion changes the focus position of the objects spread out circularly to make up the object group, in a manner moving the focus position in the spread-out direction. | 1. An information processing apparatus comprising:
a detection unit configured to detect the position of an operating body relative to a display surface of a display unit displaying an object group made up of a plurality of objects each being related to a content; and a display change portion configured to change a focus position of said objects making up said object group based on a result of the detection performed by said detection unit; wherein based on the result of the detection, if said detection unit has detected said operating body moving linearly in a predetermined operating direction thereof substantially parallel to said display surface, then said display change portion changes the focus position of said objects spread out circularly to make up said object group, in a manner moving said focus position in the spread-out direction. 2. The information processing apparatus according to claim 1, wherein said display change portion changes the format in which said object group is displayed based on a proximate distance between said display surface and said operating body, said proximate distance being acquired from the result of the detection performed by said detection unit. 3. The information processing apparatus according to claim 1, wherein, based on the result of the detection, if said detection unit has detected said operating body moving in a direction substantially perpendicular to said predetermined operating direction, then said display change portion determines to select the content related to the currently focused object. 4. The information processing apparatus according to claim 1, wherein said display change portion changes the focus position of said objects making up said object group in accordance with the amount by which said operating body has moved relative to said display surface. 5. The information processing apparatus according to claim 1, wherein said object group is furnished with a determination region including said objects; wherein
said determination region is divided into as many sub-regions as the number of said objects included in said object group, said sub-regions corresponding individually to said objects; and said display change portion focuses on the object corresponding to the sub-region on which said operating body is detected to be positioned based on the result of the detection performed by said detection unit. 6. The information processing apparatus according to claim 5, wherein said display change portion changes said determination region in such a manner as to include said content group in accordance with how said content group is spread out. 7. The information processing apparatus according to claim 5, wherein, if said operating body is detected to have moved out of said determination region based on the result of the detection performed by said detection unit, then said display change portion displays in aggregate fashion said objects making up said object group. 8. The information processing apparatus according to claim 1, wherein said display change portion highlights the currently focused object. 9. The information processing apparatus according to claim 1, wherein said display change portion displays the currently focused object close to the tip of said operating body. 10. The information processing apparatus according to claim 1, wherein, if an operation input is not detected for longer than a predetermined time period based on the result of the detection performed by said detection unit, then said display change portion stops changing the focus position of said objects making up said object group. 11. An information processing method comprising:
causing a detection unit to detect the position of an operating body relative to a display surface of a display unit displaying an object group made up of a plurality of objects each being related to a content; causing a display change portion to change a focus position of said objects making up said object group based on a result of the detection performed by said detection unit; and based on the result of the detection, if said detection unit has detected said operating body moving linearly in a predetermined operating direction thereof substantially parallel to said display surface, then causing said display change portion to change the focus position of said objects spread out circularly to make up said object group, in a manner moving said focus position in the spread-out direction. 12. A computer program for causing a computer to function as an information processing apparatus comprising:
a detection unit configured to detect the position of an operating body relative to a display surface of a display unit displaying an object group made up of a plurality of objects each being related to a content; and a display change portion configured to change a focus position of said objects making up said object group based on a result of the detection performed by said detection unit; wherein based on the result of the detection, if said detection unit has detected said operating body moving linearly in a predetermined operating direction thereof substantially parallel to said display surface, then said display change portion changes the focus position of said objects spread out circularly to make up said object group, in a manner moving said focus position in the spread-out direction. | The present disclosure provides an information processing apparatus including, a detection unit configured to detect the position of an operating body relative to a display surface of a display unit displaying an object group made up of a plurality of objects each being related to a content, and a display change portion configured to change a focus position of the objects making up the object group based on a result of the detection performed by the detection unit, wherein if the result of the detection by the detection unit has detected the operating body moving linearly in a predetermined operating direction thereof substantially parallel to the display surface, then the display change portion changes the focus position of the objects spread out circularly to make up the object group, in a manner moving the focus position in the spread-out direction.1. An information processing apparatus comprising:
a detection unit configured to detect the position of an operating body relative to a display surface of a display unit displaying an object group made up of a plurality of objects each being related to a content; and a display change portion configured to change a focus position of said objects making up said object group based on a result of the detection performed by said detection unit; wherein based on the result of the detection, if said detection unit has detected said operating body moving linearly in a predetermined operating direction thereof substantially parallel to said display surface, then said display change portion changes the focus position of said objects spread out circularly to make up said object group, in a manner moving said focus position in the spread-out direction. 2. The information processing apparatus according to claim 1, wherein said display change portion changes the format in which said object group is displayed based on a proximate distance between said display surface and said operating body, said proximate distance being acquired from the result of the detection performed by said detection unit. 3. The information processing apparatus according to claim 1, wherein, based on the result of the detection, if said detection unit has detected said operating body moving in a direction substantially perpendicular to said predetermined operating direction, then said display change portion determines to select the content related to the currently focused object. 4. The information processing apparatus according to claim 1, wherein said display change portion changes the focus position of said objects making up said object group in accordance with the amount by which said operating body has moved relative to said display surface. 5. The information processing apparatus according to claim 1, wherein said object group is furnished with a determination region including said objects; wherein
said determination region is divided into as many sub-regions as the number of said objects included in said object group, said sub-regions corresponding individually to said objects; and said display change portion focuses on the object corresponding to the sub-region on which said operating body is detected to be positioned based on the result of the detection performed by said detection unit. 6. The information processing apparatus according to claim 5, wherein said display change portion changes said determination region in such a manner as to include said content group in accordance with how said content group is spread out. 7. The information processing apparatus according to claim 5, wherein, if said operating body is detected to have moved out of said determination region based on the result of the detection performed by said detection unit, then said display change portion displays in aggregate fashion said objects making up said object group. 8. The information processing apparatus according to claim 1, wherein said display change portion highlights the currently focused object. 9. The information processing apparatus according to claim 1, wherein said display change portion displays the currently focused object close to the tip of said operating body. 10. The information processing apparatus according to claim 1, wherein, if an operation input is not detected for longer than a predetermined time period based on the result of the detection performed by said detection unit, then said display change portion stops changing the focus position of said objects making up said object group. 11. An information processing method comprising:
causing a detection unit to detect the position of an operating body relative to a display surface of a display unit displaying an object group made up of a plurality of objects each being related to a content; causing a display change portion to change a focus position of said objects making up said object group based on a result of the detection performed by said detection unit; and based on the result of the detection, if said detection unit has detected said operating body moving linearly in a predetermined operating direction thereof substantially parallel to said display surface, then causing said display change portion to change the focus position of said objects spread out circularly to make up said object group, in a manner moving said focus position in the spread-out direction. 12. A computer program for causing a computer to function as an information processing apparatus comprising:
a detection unit configured to detect the position of an operating body relative to a display surface of a display unit displaying an object group made up of a plurality of objects each being related to a content; and a display change portion configured to change a focus position of said objects making up said object group based on a result of the detection performed by said detection unit; wherein based on the result of the detection, if said detection unit has detected said operating body moving linearly in a predetermined operating direction thereof substantially parallel to said display surface, then said display change portion changes the focus position of said objects spread out circularly to make up said object group, in a manner moving said focus position in the spread-out direction. | 2,600 |
11,077 | 11,077 | 16,831,214 | 2,644 | A method, a device, and a non-transitory storage medium are described in which policy delivery is provided. A network device of a core network may receive a registration request message that includes a request for an end device policy. The network device may perform a registration procedure and a policy procedure pertaining to an end device. The network device determines when the registration and the policy procedures are completed. The network device transmits a registration response that includes an end device policy to an end device. | 1. A method comprising:
receiving, by a network device of a core network from an end device, a first message that includes an initial registration request and an end device policy request, and wherein the end device is not registered with the network device; executing, by the network device in response to receiving the first message, a registration procedure and a policy control procedure; determining, by the network device, whether the registration procedure and the policy control procedure are completed; generating, by the network device, when the registration procedure and the policy control procedure are completed, a second message that includes a registration response and an end device policy; and transmitting, by the network device to the end device, the second message. 2. The method of claim 1, wherein the second message includes a registration accept message that includes an information element that includes the end device policy. 3. The method of claim 1, wherein the first message is received as a non-access stratum message and the second message is transmitted as a non-access stratum message. 4. The method of claim 1, wherein the end device policy request includes a User Equipment (UE) Policy Section Identifier (UPSI) List Transport message. 5. The method of claim 1, wherein determining whether the registration procedure and the policy control procedure are completed further comprises:
determining, by the network device, whether a registration response is received from a unified data management device or a home subscriber server; and determining, by the network device in response to determining that the registration response is received, that the registration procedure is completed. 6. The method of claim 1, wherein determining whether the registration procedure and the policy control procedure are completed further comprises:
determining, by the network device, whether a policy control create response is received from a policy control function device or a policy charging and rules function; and determining, by the network device in response to determining that the policy control create response is received, that the policy control procedure is completed. 7. The method of claim 1, wherein the end device policy includes at least one of a route selection policy or an access discovery and selection policy. 8. The method of claim 1, wherein the network device is a mobility management entity device or an access and management function device, and the core network includes an evolved packet core network or a next generation core network. 9. A network device comprising:
a communication interface; a memory, wherein the memory stores instructions; and a processor, wherein the processor executes the instructions to:
receive, via the communication interface from an end device, a first message that includes an initial registration request and an end device policy request, wherein the network device is of a core network and the end device is not registered with the network device;
execute, in response to the receipt of the first message, a registration procedure and a policy control procedure;
determine whether the registration procedure and the policy control procedure are completed;
generate, when the registration procedure and the policy control procedure are completed, a second message that includes a registration response and an end device policy; and
transmit, via the communication interface to the end device, the second message. 10. The network device of claim 9, wherein the second message includes a registration accept message that includes an information element that includes the end device policy. 11. The network device of claim 9, wherein the first message is received as a non-access stratum message and the second message is transmitted as a non-access stratum message. 12. The network device of claim 9, wherein the end device policy request includes a User Equipment (UE) Policy Section Identifier (UPSI) List Transport message. 13. The network device of claim 9, wherein the end device policy includes at least one of a route selection policy or an access discovery and selection policy. 14. The network device of claim 9, wherein the network device is a mobility management entity device or an access and management function device, and the core network includes an evolved packet core network or a next generation core network. 15. The network device of claim 9, wherein, when determining whether the registration procedure and the policy control procedure are completed, the processor further executes the instructions to:
determine whether a registration response is received from a unified data management device or a home subscriber server; and determine, in response to a determination that the registration response is received, that the registration procedure is completed. 16. The network device of claim 9 wherein, when determining whether the registration procedure and the policy control procedure are completed, the processor further executes the instructions to:
determine whether a policy control create response is received from a policy control function device or a policy charging and rules function; and
determine, in response to a determination that the policy control create response is received, that the policy control procedure is completed. 17. A non-transitory computer-readable storage medium storing instructions executable by a processor of a device of a core network, which when executed cause the device to:
receive, from an end device, a first message that includes an initial registration request and an end device policy request, wherein the end device is not registered with the device; execute, in response to the receipt of the first message, a registration procedure and a policy control procedure; determine whether the registration procedure and the policy control procedure are completed; generate, when the registration procedure and the policy control procedure are completed, a second message that includes a registration response and an end device policy; and transmit, to the end device, the second message. 18. The non-transitory computer-readable storage medium of claim 17, wherein the second message includes a registration accept message that includes an information element that includes the end device policy. 19. The non-transitory computer-readable storage medium of claim 17, wherein the end device policy request includes a User Equipment (UE) Policy Section Identifier (UPSI) List Transport message. 20. The non-transitory computer-readable storage medium of claim 17, wherein the device is a mobility management entity device or an access and management function device, and the core network includes an evolved packet core network or a next generation core network. | A method, a device, and a non-transitory storage medium are described in which policy delivery is provided. A network device of a core network may receive a registration request message that includes a request for an end device policy. The network device may perform a registration procedure and a policy procedure pertaining to an end device. The network device determines when the registration and the policy procedures are completed. The network device transmits a registration response that includes an end device policy to an end device.1. A method comprising:
receiving, by a network device of a core network from an end device, a first message that includes an initial registration request and an end device policy request, and wherein the end device is not registered with the network device; executing, by the network device in response to receiving the first message, a registration procedure and a policy control procedure; determining, by the network device, whether the registration procedure and the policy control procedure are completed; generating, by the network device, when the registration procedure and the policy control procedure are completed, a second message that includes a registration response and an end device policy; and transmitting, by the network device to the end device, the second message. 2. The method of claim 1, wherein the second message includes a registration accept message that includes an information element that includes the end device policy. 3. The method of claim 1, wherein the first message is received as a non-access stratum message and the second message is transmitted as a non-access stratum message. 4. The method of claim 1, wherein the end device policy request includes a User Equipment (UE) Policy Section Identifier (UPSI) List Transport message. 5. The method of claim 1, wherein determining whether the registration procedure and the policy control procedure are completed further comprises:
determining, by the network device, whether a registration response is received from a unified data management device or a home subscriber server; and determining, by the network device in response to determining that the registration response is received, that the registration procedure is completed. 6. The method of claim 1, wherein determining whether the registration procedure and the policy control procedure are completed further comprises:
determining, by the network device, whether a policy control create response is received from a policy control function device or a policy charging and rules function; and determining, by the network device in response to determining that the policy control create response is received, that the policy control procedure is completed. 7. The method of claim 1, wherein the end device policy includes at least one of a route selection policy or an access discovery and selection policy. 8. The method of claim 1, wherein the network device is a mobility management entity device or an access and management function device, and the core network includes an evolved packet core network or a next generation core network. 9. A network device comprising:
a communication interface; a memory, wherein the memory stores instructions; and a processor, wherein the processor executes the instructions to:
receive, via the communication interface from an end device, a first message that includes an initial registration request and an end device policy request, wherein the network device is of a core network and the end device is not registered with the network device;
execute, in response to the receipt of the first message, a registration procedure and a policy control procedure;
determine whether the registration procedure and the policy control procedure are completed;
generate, when the registration procedure and the policy control procedure are completed, a second message that includes a registration response and an end device policy; and
transmit, via the communication interface to the end device, the second message. 10. The network device of claim 9, wherein the second message includes a registration accept message that includes an information element that includes the end device policy. 11. The network device of claim 9, wherein the first message is received as a non-access stratum message and the second message is transmitted as a non-access stratum message. 12. The network device of claim 9, wherein the end device policy request includes a User Equipment (UE) Policy Section Identifier (UPSI) List Transport message. 13. The network device of claim 9, wherein the end device policy includes at least one of a route selection policy or an access discovery and selection policy. 14. The network device of claim 9, wherein the network device is a mobility management entity device or an access and management function device, and the core network includes an evolved packet core network or a next generation core network. 15. The network device of claim 9, wherein, when determining whether the registration procedure and the policy control procedure are completed, the processor further executes the instructions to:
determine whether a registration response is received from a unified data management device or a home subscriber server; and determine, in response to a determination that the registration response is received, that the registration procedure is completed. 16. The network device of claim 9 wherein, when determining whether the registration procedure and the policy control procedure are completed, the processor further executes the instructions to:
determine whether a policy control create response is received from a policy control function device or a policy charging and rules function; and
determine, in response to a determination that the policy control create response is received, that the policy control procedure is completed. 17. A non-transitory computer-readable storage medium storing instructions executable by a processor of a device of a core network, which when executed cause the device to:
receive, from an end device, a first message that includes an initial registration request and an end device policy request, wherein the end device is not registered with the device; execute, in response to the receipt of the first message, a registration procedure and a policy control procedure; determine whether the registration procedure and the policy control procedure are completed; generate, when the registration procedure and the policy control procedure are completed, a second message that includes a registration response and an end device policy; and transmit, to the end device, the second message. 18. The non-transitory computer-readable storage medium of claim 17, wherein the second message includes a registration accept message that includes an information element that includes the end device policy. 19. The non-transitory computer-readable storage medium of claim 17, wherein the end device policy request includes a User Equipment (UE) Policy Section Identifier (UPSI) List Transport message. 20. The non-transitory computer-readable storage medium of claim 17, wherein the device is a mobility management entity device or an access and management function device, and the core network includes an evolved packet core network or a next generation core network. | 2,600 |
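The claims in this row describe a gating pattern: the core-network device runs a registration procedure and a policy-control procedure in response to one combined message, and only generates the combined response once it has determined that both are complete. The following is an illustrative sketch of that flow; all class, method, and value names (e.g. `NetworkDevice`, `"REGISTRATION_ACCEPT"`) are assumptions for illustration, not identifiers from the patent or any 3GPP implementation.

```python
# Hypothetical sketch of the claimed flow: a core-network device receives a
# first message carrying an initial registration request plus an end-device
# policy request, runs both procedures, and builds the second (combined)
# message only after determining that both procedures completed.
from dataclasses import dataclass


@dataclass
class FirstMessage:
    initial_registration_request: bool
    end_device_policy_request: bool


@dataclass
class SecondMessage:
    registration_response: str
    end_device_policy: dict


class NetworkDevice:
    def __init__(self):
        self.registered_devices = set()

    def _registration_procedure(self, device_id):
        # Stand-in for the exchange with a UDM / HSS; assumed synchronous here.
        return "REGISTRATION_ACCEPT"

    def _policy_control_procedure(self, device_id):
        # Stand-in for a Policy Control Create exchange with a PCF / PCRF.
        return {"route_selection_policy": "default"}

    def handle_first_message(self, device_id, msg):
        # Per the claim, the end device is not yet registered with this device.
        assert device_id not in self.registered_devices
        reg = self._registration_procedure(device_id) if msg.initial_registration_request else None
        pol = self._policy_control_procedure(device_id) if msg.end_device_policy_request else None
        # Generate the second message only when both procedures are completed.
        if reg is not None and pol is not None:
            self.registered_devices.add(device_id)
            return SecondMessage(registration_response=reg, end_device_policy=pol)
        return None


nd = NetworkDevice()
second = nd.handle_first_message("ue-1", FirstMessage(True, True))
```

In a real AMF/MME the two procedures would complete asynchronously, so the "determine whether completed" step of the claim would be a state check rather than the sequential calls shown here.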
11,078 | 11,078 | 14,631,232 | 2,621 | A lighting system includes at least one illumination device having a luminaire and a luminaire controller configured to control the operational state of the luminaire according to lighting control data, and a lighting controller configured to transmit the lighting control data to at least one illumination device. The lighting controller is configured to pre-format the lighting control data for the at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for the luminaire. The lighting controller is further configured to encode the pre-formatted color space representation matrices using a data compression algorithm. | 1. A lighting system comprising:
at least one illumination device comprising a luminaire and a luminaire controller configured to control an operational state of the luminaire according to lighting control data; and a lighting controller configured to transmit the lighting control data to the at least one illumination device, wherein the lighting controller is configured to pre-format the lighting control data for the at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for the luminaire, and wherein the lighting controller is further configured to encode the pre-formatted color space representation matrices using a data compression algorithm. 2. The lighting system according to claim 1, wherein the data compression algorithm comprises a run-length encoding scheme, an entropy encoding scheme, a deflation scheme, wavelet transforming, Fourier transforming, Discrete Cosine transforming and/or a fractal compression scheme. 3. The lighting system according to claim 1, wherein the color space representation matrices have a number of columns corresponding to the number of illumination devices in the lighting system. 4. The lighting system according to claim 1, wherein the subsequent rows of the color space representation matrices comprise lighting control data for subsequent time slots of controlling the operational states of the luminaires. 5. The lighting system according to claim 1, wherein the color space representation matrices comprise consecutively arranged subblocks of color space representation values corresponding to lighting control data of luminaires of adjacent illumination devices with respect to the same time slot. 6. The lighting system according to claim 1, wherein the luminaires comprise fluorescent tubes and/or solid-state light emitters. 7. 
The lighting system according to claim 1, wherein the lighting controller is configured to transmit the lighting control data to the at least one illumination device via a wireless communication link. 8. A method for controlling a lighting system, the method comprising:
pre-formatting lighting control data for at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for luminaires of the illumination device; encoding the pre-formatted color space representation matrices using a data compression algorithm; and transmitting the encoded color space representation matrices to the at least one illumination device via a communication link. 9. The method according to claim 8, wherein the data compression algorithm comprises a run-length encoding scheme, an entropy encoding scheme, a deflation scheme, wavelet transforming, Fourier transforming, Discrete Cosine transforming and/or a fractal compression scheme. 10. The method according to claim 9, wherein the color space representation matrices have a number of columns corresponding to the number of illumination devices. 11. The method according to claim 10, wherein the subsequent rows of the color space representation matrices comprise lighting control data for subsequent time slots of controlling the operational states of the luminaires. 12. The method according to claim 8, wherein the color space representation matrices are transmitted via a wireless communication link. 13. The method according to claim 12, wherein the color space representation matrices are streamed to the at least one illumination device. 14. A non-transitory computer-readable medium comprising computer-readable instructions which, when executed on a computer, cause the computer to perform a method for controlling a lighting system, the method comprising:
pre-formatting lighting control data for at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for luminaires of the illumination device; encoding the pre-formatted color space representation matrices using a data compression algorithm; and transmitting the encoded color space representation matrices to the at least one illumination device via a communication link. 15. An airborne vehicle, comprising a lighting system, the lighting system comprising:
at least one illumination device comprising a luminaire and a luminaire controller configured to control the operational state of the luminaire according to lighting control data; and a lighting controller configured to transmit the lighting control data to the at least one illumination device, wherein the lighting controller is configured to pre-format the lighting control data for the at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for the luminaire, and wherein the lighting controller is further configured to encode the pre-formatted color space representation matrices using a data compression algorithm. | A lighting system includes at least one illumination device having a luminaire and a luminaire controller configured to control the operational state of the luminaire according to lighting control data, and a lighting controller configured to transmit the lighting control data to at least one illumination device. The lighting controller is configured to pre-format the lighting control data for the at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for the luminaire. The lighting controller is further configured to encode the pre-formatted color space representation matrices using a data compression algorithm.1. A lighting system comprising:
at least one illumination device comprising a luminaire and a luminaire controller configured to control an operational state of the luminaire according to lighting control data; and a lighting controller configured to transmit the lighting control data to the at least one illumination device, wherein the lighting controller is configured to pre-format the lighting control data for the at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for the luminaire, and wherein the lighting controller is further configured to encode the pre-formatted color space representation matrices using a data compression algorithm. 2. The lighting system according to claim 1, wherein the data compression algorithm comprises a run-length encoding scheme, an entropy encoding scheme, a deflation scheme, wavelet transforming, Fourier transforming, Discrete Cosine transforming and/or a fractal compression scheme. 3. The lighting system according to claim 1, wherein the color space representation matrices have a number of columns corresponding to the number of illumination devices in the lighting system. 4. The lighting system according to claim 1, wherein the subsequent rows of the color space representation matrices comprise lighting control data for subsequent time slots of controlling the operational states of the luminaires. 5. The lighting system according to claim 1, wherein the color space representation matrices comprise consecutively arranged subblocks of color space representation values corresponding to lighting control data of luminaires of adjacent illumination devices with respect to the same time slot. 6. The lighting system according to claim 1, wherein the luminaires comprise fluorescent tubes and/or solid-state light emitters. 7. 
The lighting system according to claim 1, wherein the lighting controller is configured to transmit the lighting control data to the at least one illumination device via a wireless communication link. 8. A method for controlling a lighting system, the method comprising:
pre-formatting lighting control data for at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for luminaires of the illumination device; encoding the pre-formatted color space representation matrices using a data compression algorithm; and transmitting the encoded color space representation matrices to the at least one illumination device via a communication link. 9. The method according to claim 8, wherein the data compression algorithm comprises a run-length encoding scheme, an entropy encoding scheme, a deflation scheme, wavelet transforming, Fourier transforming, Discrete Cosine transforming and/or a fractal compression scheme. 10. The method according to claim 9, wherein the color space representation matrices have a number of columns corresponding to the number of illumination devices. 11. The method according to claim 10, wherein the subsequent rows of the color space representation matrices comprise lighting control data for subsequent time slots of controlling the operational states of the luminaires. 12. The method according to claim 8, wherein the color space representation matrices are transmitted via a wireless communication link. 13. The method according to claim 12, wherein the color space representation matrices are streamed to the at least one illumination device. 14. A non-transitory computer-readable medium comprising computer-readable instructions which, when executed on a computer, cause the computer to perform a method for controlling a lighting system, the method comprising:
pre-formatting lighting control data for at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for luminaires of the illumination device; encoding the pre-formatted color space representation matrices using a data compression algorithm; and transmitting the encoded color space representation matrices to the at least one illumination device via a communication link. 15. An airborne vehicle, comprising a lighting system, the lighting system comprising:
at least one illumination device comprising a luminaire and a luminaire controller configured to control the operational state of the luminaire according to lighting control data; and a lighting controller configured to transmit the lighting control data to the at least one illumination device, wherein the lighting controller is configured to pre-format the lighting control data for the at least one illumination device in one or more color space representation matrices having matrix entries corresponding to color space representation values for the luminaire, and wherein the lighting controller is further configured to encode the pre-formatted color space representation matrices using a data compression algorithm. | 2,600 |
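The lighting-system claims above specify two concrete steps: pre-formatting control data as color-space matrices (the dependent claims suggest columns corresponding to illumination devices and rows to successive time slots) and compressing the result, run-length encoding being one of the listed schemes. The sketch below illustrates those two steps under those assumptions; the function names and the RGB representation are illustrative, not taken from the patent.

```python
# Illustrative sketch of the claimed pipeline: pre-format lighting control
# data as a matrix (rows = time slots, columns = illumination devices, entries
# = color space representation values), then run-length encode it.

def preformat(frames):
    """frames: list of time slots, each a list of per-device RGB tuples."""
    # Each row is one time slot; each column is one illumination device.
    return [list(slot) for slot in frames]


def run_length_encode(matrix):
    # Flatten row-major so runs across adjacent devices / slots compress well.
    flat = [value for row in matrix for value in row]
    encoded, i = [], 0
    while i < len(flat):
        j = i
        while j < len(flat) and flat[j] == flat[i]:
            j += 1
        encoded.append((j - i, flat[i]))  # (run length, color value)
        i = j
    return encoded


matrix = preformat([
    [(255, 0, 0), (255, 0, 0)],   # time slot 0: both luminaires red
    [(255, 0, 0), (0, 0, 255)],   # time slot 1: one red, one blue
])
payload = run_length_encode(matrix)  # → [(3, (255, 0, 0)), (1, (0, 0, 255))]
```

Run-length encoding pays off exactly in the scenario the claims target: cabin-lighting scenes where many adjacent luminaires hold the same color over many time slots, so long runs dominate the matrix.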
11,079 | 11,079 | 16,354,831 | 2,619 | Interactive display and rendering device, system and method for wine bottles are disclosed. An interactive wine management and display device and method using a scanning instrument capable of scanning information regarding a bottle of wine; an imaging instrument capable of collecting an image of at least a portion of a bottle of wine; a connection to a wine database containing information regarding wines; a connection to a database having images of wine bottles. The device is able to render the images as well as information on the wine to a display for user interaction. | 1. An interactive touch screen door for use with a wine cabinet having an interior and a rack in the interior to hold wine bottles, the interactive touch screen door to provide access to the interior and comprising:
a transparent touch display embedded in the interactive touch screen door and having a size at least corresponding to a portion of the interior; and a controller operatively connected to the transparent touch display and configured to:
provide, on the transparent touch display, information associated with the wine bottles stored in the cabinet;
receive, via the transparent touch display, a wine search parameter; and
recommend a bottle of wine from the wine bottles based on the wine search parameter to define a recommended bottle of wine by displaying on the transparent touch display an indicator at a physical location of the recommended bottle of wine in the wine cabinet;
wherein the indicator is superimposed on the transparent touch display to be in line with at least a portion of a contour of at least one of the wine bottles corresponding to the wine search parameter. 2. The interactive touch screen door of claim 1 wherein the wine search parameter comprises at least one of a food type, a price, a year, a location/origin, a vineyard, or a wine type. 3. The interactive touch screen door of claim 1 wherein the controller is further configured to:
receive, via the transparent touch display, a wine selection; and
display, on the transparent touch display, information associated with a bottle of wine of the wine bottles corresponding to the wine selection. 4. The interactive touch screen door of claim 3 wherein the information associated with the bottle of wine comprises an image of a label on the bottle of wine. 5. The interactive touch screen door of claim 4 wherein the information associated with the bottle of wine further includes at least one of a suggested food pairing, a price, a year, a location/origin, a vineyard, or a wine type. 6. The interactive touch screen door of claim 5 wherein the information associated with the bottle of wine further is the suggested food pairing. 7. The interactive touch screen door of claim 1 wherein the controller is further configured to:
receive, via the transparent touch display, a wine consumption indication; and
update a wine database to represent that a bottle of wine corresponding to the wine consumption indication is no longer stored in the wine cabinet. 8. The interactive touch screen door of claim 1, wherein the controller is further configured to:
receive, via the transparent touch display, an add wine bottle indication; display, on the transparent touch display, one or more menus that enable a user to provide information to be associated with a new bottle of wine to be added to the wine cabinet; receive, via the transparent touch display, a new bottle indication of a location where the new bottle of wine is to be stored in the wine cabinet; and store the information associated with the new bottle of wine and the location of the new bottle of wine in the wine cabinet. 9. The interactive touch screen door of claim 1, further comprising a handheld computing device communicatively coupled to the controller, the handheld computing device configured to:
receive the wine search parameter; and direct the controller to display, on the transparent touch display, the indicator corresponding to the physical location of the wine bottle in the wine cabinet corresponding to the wine search parameter received at the handheld computing device. 10. The interactive touch screen door of claim 9, wherein the controller is configured to receive, via the transparent touch display, a wine selection, and the handheld computing device is configured to display information associated with the wine bottles corresponding to the wine selection. 11. The interactive touch screen door of claim 1 further comprising a handheld computing device communicatively coupled to the controller, the handheld computing device configured to:
receive an add wine bottle indicator; and
receive, via the transparent touch display, an indication of where a new bottle of wine is to be stored in the wine cabinet;
wherein the handheld computing device is configured to capture an image of a label of the new bottle of wine and perform image processing on the captured image to identify the new bottle of wine, and to automatically obtain information to be associated with the new bottle of wine based on the identified new bottle of wine. 12. A door for use with a refrigerated cabinet having an interior and a rack in the interior to hold one or more liquid containers, the door to provide access to the interior and comprising:
a transparent touch display embedded in the door; and a controller operatively connected to the transparent touch display and configured to:
receive, via the transparent touch display, a container search parameter; and
recommend a liquid container from the one or more liquid containers based on the container search parameter to define a recommended container by displaying on the transparent touch display an indicator at a physical location of the recommended container;
wherein the indicator is superimposed on the transparent touch display to be in line with at least a portion of a contour of at least one of the one or more liquid containers corresponding to the container search parameter. 13. The door of claim 12 wherein the controller is further configured to provide, on the transparent touch display, information associated with the one or more liquid containers stored in the refrigerated cabinet. 14. The door of claim 12 wherein the controller is further configured to:
retrieve information related to the recommended container; and
display, on the transparent touch display, the information related to the recommended container. 15. The door of claim 12 wherein the controller is further configured to recommend at least one different liquid container based upon the container search parameter and related to the recommended container by displaying, on the transparent touch display, another indicator at a physical location of the at least one different liquid container. 16. The door of claim 15 wherein the at least one different liquid container is indicated with a line connecting the recommended container to the at least one different liquid container. 17. A method of controlling an interactive wine cabinet including a refrigerated compartment, a rack to hold one or more bottles of wine in the refrigerated compartment, a door to provide access to the refrigerated compartment, and a transparent touch display embedded in the door, the method comprising:
receiving, via the transparent touch display, a wine search parameter; and recommending a bottle of wine from the one or more bottles of wine based on the wine search parameter to define a recommended bottle of wine; and displaying, on the transparent touch display, an indicator at a physical location of the recommended bottle of wine in the interactive wine cabinet, wherein the indicator is superimposed on the transparent touch display to be in line with at least a portion of a contour of the recommended bottle of wine. 18. The method of claim 17 further comprising:
receiving, via the transparent touch display, a wine consumption indication for the recommended bottle of wine; and
updating a wine database to represent that the recommended bottle of wine corresponding to the wine consumption indication is no longer stored in the interactive wine cabinet. 19. The method of claim 17, further comprising:
receiving, via the transparent touch display, an add wine bottle indication; displaying, on the transparent touch display, one or more menus that enable a user to provide information to be associated with a new bottle of wine to be added to the interactive wine cabinet; receiving, via the transparent touch display, an indication of a location where the bottle of wine is to be added to the wine cabinet; and storing the information associated with the bottle of wine to be added to the wine cabinet and the location of the bottle of wine to be added to the wine cabinet. 20. The method of claim 17, further comprising a handheld computing device communicatively coupled to the transparent touch display, the handheld computing device configured to:
receiving, at a handheld computing device in communication with the door, the wine search parameter; and directing the transparent touch display to display one or more indicators corresponding to physical locations of the recommended bottle of wine corresponding to the wine search parameter received at the handheld computing device. | Interactive display and rendering device, system and method for wine bottles are disclosed. An interactive wine management and display device and method using a scanning instrument capable of scanning information regarding a bottle of wine; an imaging instrument capable of collecting an image of at least a portion of a bottle of wine; a connection to a wine database containing information regarding wines; a connection to a database having images of wine bottles. The device is able to render the images as well as information on the wine to a display for user interaction.1. An interactive touch screen door for use with a wine cabinet having an interior and a rack in the interior to hold wine bottles, the interactive touch screen door to provide access to the interior and comprising:
a transparent touch display embedded in the interactive touch screen door and having a size at least corresponding to a portion of the interior; and a controller operatively connected to the transparent touch display and configured to:
provide, on the transparent touch display, information associated with the wine bottles stored in the cabinet;
receive, via the transparent touch display, a wine search parameter; and
recommend a bottle of wine from the wine bottles based on the wine search parameter to define a recommended bottle of wine by displaying on the transparent touch display an indicator at a physical location of the recommended bottle of wine in the wine cabinet;
wherein the indicator is superimposed on the transparent touch display to be in line with at least a portion of a contour of at least one of the wine bottles corresponding to the wine search parameter. 2. The interactive touch screen door of claim 1 wherein the wine search parameter comprises at least one of a food type, a price, a year, a location/origin, a vineyard, or a wine type. 3. The interactive touch screen door of claim 1 wherein the controller is further configured to:
receive, via the transparent touch display, a wine selection; and
display, on the transparent touch display, information associated with a bottle of wine of the wine bottles corresponding to the wine selection. 4. The interactive touch screen door of claim 3 wherein the information associated with the bottle of wine comprises an image of a label on the bottle of wine. 5. The interactive touch screen door of claim 4 wherein the information associated with the bottle of wine further includes at least one of a suggested food pairing, a price, a year, a location/origin, a vineyard, or a wine type. 6. The interactive touch screen door of claim 5 wherein the information associated with the bottle of wine further is the suggested food pairing. 7. The interactive touch screen door of claim 1 wherein the controller is further configured to:
receive, via the transparent touch display, a wine consumption indication; and
update a wine database to represent that a bottle of wine corresponding to the wine consumption indication is no longer stored in the wine cabinet. 8. The interactive touch screen door of claim 1, wherein the controller is further configured to:
receive, via the transparent touch display, an add wine bottle indication; display, on the transparent touch display, one or more menus that enable a user to provide information to be associated with a new bottle of wine to be added to the wine cabinet; receive via the transparent touch display a new bottle indication of a location where the new bottle of wine is to be stored in the wine cabinet; and store the information associated with the new bottle of wine and the location of the new bottle of wine in the wine cabinet. 9. The interactive touch screen door of claim 1, further comprising a handheld computing device communicatively coupled to the controller, the handheld computing device configured to:
receive the wine search parameter; and direct the controller to display, on the transparent touch display, the indicator corresponding to the physical location of the wine bottle in the wine cabinet corresponding to the wine search parameter received at the handheld computing device. 10. The interactive touch screen door of claim 9, wherein the controller is configured to receive, via the transparent touch display, a wine selection, and the handheld computing device is configured to display information associated with the wine bottles corresponding to the wine selection. 11. The interactive touch screen door of claim 1 further comprising a handheld computing device communicatively coupled to the controller, the handheld computing device configured to:
receive an add wine bottle indicator; and
receive, via the transparent touch display, an indication of where a new bottle of wine is to be stored in the wine cabinet;
wherein the handheld computing device is configured to capture an image of a label of the new bottle of wine and perform image processing on the captured image to identify the new bottle of wine, and to automatically obtain information to be associated with the new bottle of wine based on the identified new bottle of wine. 12. A door for use with a refrigerated cabinet having an interior and a rack in the interior to hold one or more liquid containers, the door to provide access to the interior and comprising:
a transparent touch display embedded in the door; and a controller operatively connected to the transparent touch display and configured to:
receive, via the transparent touch display, a container search parameter; and
recommend a liquid container from the one or more liquid containers based on the container search parameter to define a recommended container by displaying on the transparent touch display an indicator at a physical location of the recommended container;
wherein the indicator is superimposed on the transparent touch display to be in line with at least a portion of a contour of at least one of the one or more liquid containers corresponding to the container search parameter. 13. The door of claim 12 wherein the controller is further configured to provide, on the transparent touch display, information associated with the one or more liquid containers stored in the refrigerated cabinet. 14. The door of claim 12 wherein the controller is further configured to:
retrieve information related to the recommended container; and
display, on the transparent touch display, the information related to the recommended container. 15. The door of claim 12 wherein the controller is further configured to recommend at least one different liquid container based upon the container search parameter and related to the recommended container by displaying, on the transparent touch display, another indicator at a physical location of the at least one different liquid container. 16. The door of claim 15 wherein the at least one different liquid container is indicated with a line connecting the recommended container to the at least one different liquid container. 17. A method of controlling an interactive wine cabinet including a refrigerated compartment, a rack to hold one or more bottles of wine in the refrigerated compartment, a door to provide access to the refrigerated compartment, and a transparent touch display embedded in the door, the method comprising:
receiving, via the transparent touch display, a wine search parameter; recommending a bottle of wine from the one or more bottles of wine based on the wine search parameter to define a recommended bottle of wine; and displaying, on the transparent touch display, an indicator at a physical location of the recommended bottle of wine in the interactive wine cabinet, wherein the indicator is superimposed on the transparent touch display to be in line with at least a portion of a contour of the recommended bottle of wine. 18. The method of claim 17 further comprising:
receiving, via the transparent touch display, a wine consumption indication for the recommended bottle of wine; and
updating a wine database to represent that the recommended bottle of wine corresponding to the wine consumption indication is no longer stored in the interactive wine cabinet. 19. The method of claim 17, further comprising:
receiving, via the transparent touch display, an add wine bottle indication; displaying, on the transparent touch display, one or more menus that enable a user to provide information to be associated with a new bottle of wine to be added to the interactive wine cabinet; receiving, via the transparent touch display, an indication of a location where the bottle of wine is to be added to the wine cabinet; and storing the information associated with the bottle of wine to be added to the wine cabinet and the location of the bottle of wine to be added to the wine cabinet. 20. The method of claim 17, further comprising a handheld computing device communicatively coupled to the transparent touch display, the handheld computing device configured to:
receiving, at a handheld computing device in communication with the door, the wine search parameter; and directing the transparent touch display to display one or more indicators corresponding to physical locations of the recommended bottle of wine corresponding to the wine search parameter received at the handheld computing device. | 2,600 |
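The recommendation-and-indicator behavior recited in claims 1, 12, and 17 of the wine-cabinet record above can be sketched in code: match stored bottles against a search parameter, then map the recommended bottle's rack slot to display coordinates so the indicator superimposes in line with the bottle's contour. This is an illustrative sketch only; the uniform-grid rack geometry, the attribute names, and the first-match selection rule are assumptions, not taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class RackGeometry:
    """Maps rack slots to pixel positions on the transparent touch display."""
    rows: int
    cols: int
    display_w: int  # display width, pixels
    display_h: int  # display height, pixels

    def slot_center(self, row, col):
        """Pixel center of slot (row, col), assuming a uniform grid of slots."""
        cell_w = self.display_w / self.cols
        cell_h = self.display_h / self.rows
        return (int((col + 0.5) * cell_w), int((row + 0.5) * cell_h))


def recommend(bottles, search):
    """Return the first bottle whose attributes match every search parameter
    (e.g. food pairing, price, year, origin, vineyard, or wine type)."""
    for bottle in bottles:
        if all(bottle.get(key) == value for key, value in search.items()):
            return bottle
    return None
```

A real controller would additionally correct for door thickness and viewing parallax when aligning the indicator with the bottle contour; that refinement is outside what the claims specify.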
11,080 | 11,080 | 14,530,121 | 2,652 | A resource managing computer system for managing at least one resource in an enterprise is disclosed. The resource managing computer system includes a communication interface for establishing at least one web based chat communication session with at least one customer. The system further includes a monitoring module for monitoring one or more parameters associated with the at least one web based chat communication session. The system further includes a computing module for computing at least one confidence score based on the one or more monitored parameters of the at least one web based chat communication session. The system further includes an allocation module for allocating the at least one resource to the at least one web based chat communication session, wherein the allocation is performed based on the at least one computed confidence score. | 1. A computer-implemented method for managing at least one resource in an enterprise, the method comprising:
establishing at least one web based chat communication session with at least one customer; monitoring one or more parameters associated with the at least one web based chat communication session; computing at least one confidence score based on the one or more monitored parameters of the at least one web based chat communication session; and allocating the at least one resource to the web based chat communication session, wherein the allocation is performed based on the at least one computed confidence score. 2. The method of claim 1, wherein the at least one web based chat communication session is an automated web based chat communication session. 3. The method of claim 1, wherein the at least one computed confidence score is compared with a threshold level. 4. The method of claim 3, further comprising optimizing the threshold level based on availability of the at least one allocated resource. 5. The method of claim 1, wherein the at least one computed confidence score is dynamic. 6. The method of claim 1, wherein the at least one computed confidence score is computed by using at least one fuzzy logic. 7. The method of claim 1, further comprising enabling a supervisor to select the at least one resource based on the at least one computed confidence score. 8. The method of claim 1, further comprising routing the at least one web based chat communication session to the at least one allocated resource. 9. The method of claim 1, wherein the at least one allocated resource is one of a reserve agent, an agent, a supervisor, or a Subject Matter Expert (SME). 10. A resource managing computer system for managing at least one resource in an enterprise, the system comprising:
a communication interface for establishing at least one web based chat communication session with at least one customer; a monitoring module for monitoring one or more parameters associated with the at least one web based chat communication session; a computing module for computing at least one confidence score based on the one or more monitored parameters of the at least one web based chat communication session; and an allocation module for allocating the at least one resource to the at least one web based chat communication session, wherein the allocation is performed based on the at least one computed confidence score. 11. The system of claim 10, wherein the at least one web based chat communication session is an automated web based chat communication session. 12. The system of claim 10, wherein the computing module further compares the at least one computed confidence score with a threshold level. 13. The system of claim 12, wherein the computing module further optimizes the threshold level based on availability of the at least one allocated resource. 14. The system of claim 10, wherein the computing module further computes the at least one computed confidence score by using at least one fuzzy logic. 15. The system of claim 10, wherein the at least one computed confidence score is dynamic. 16. The system of claim 10, wherein the computing module further optimizes a number of active web based chat communication sessions that require agent assistance. 17. The system of claim 10, further comprising a routing module for routing the at least one web based chat communication session to the at least one allocated resource. 18. The system of claim 10, wherein the at least one allocated resource is one of a reserve agent, an agent, a supervisor, or a Subject Matter Expert (SME). 19. A computer-implemented method for managing at least one resource in an enterprise, the method comprising:
establishing at least one web based chat communication session with at least one customer; monitoring one or more parameters associated with the at least one web based chat communication session; computing at least one confidence score based on the one or more monitored parameters of the at least one web based chat communication session; allocating the at least one resource to the web based chat communication session, wherein the allocation is performed based on the at least one computed confidence score; and routing the at least one web based chat communication session to the at least one allocated resource. 20. The method of claim 19, further comprising comparing the at least one computed confidence score with a threshold level. | A resource managing computer system for managing at least one resource in an enterprise is disclosed. The resource managing computer system includes a communication interface for establishing at least one web based chat communication session with at least one customer. The system further includes a monitoring module for monitoring one or more parameters associated with the at least one web based chat communication session. The system further includes a computing module for computing at least one confidence score based on the one or more monitored parameters of the at least one web based chat communication session. The system further includes an allocation module for allocating the at least one resource to the at least one web based chat communication session, wherein the allocation is performed based on the at least one computed confidence score.1. A computer-implemented method for managing at least one resource in an enterprise, the method comprising:
establishing at least one web based chat communication session with at least one customer; monitoring one or more parameters associated with the at least one web based chat communication session; computing at least one confidence score based on the one or more monitored parameters of the at least one web based chat communication session; and allocating the at least one resource to the web based chat communication session, wherein the allocation is performed based on the at least one computed confidence score. 2. The method of claim 1, wherein the at least one web based chat communication session is an automated web based chat communication session. 3. The method of claim 1, wherein the at least one computed confidence score is compared with a threshold level. 4. The method of claim 3, further comprising optimizing the threshold level based on availability of the at least one allocated resource. 5. The method of claim 1, wherein the at least one computed confidence score is dynamic. 6. The method of claim 1, wherein the at least one computed confidence score is computed by using at least one fuzzy logic. 7. The method of claim 1, further comprising enabling a supervisor to select the at least one resource based on the at least one computed confidence score. 8. The method of claim 1, further comprising routing the at least one web based chat communication session to the at least one allocated resource. 9. The method of claim 1, wherein the at least one allocated resource is one of a reserve agent, an agent, a supervisor, or a Subject Matter Expert (SME). 10. A resource managing computer system for managing at least one resource in an enterprise, the system comprising:
a communication interface for establishing at least one web based chat communication session with at least one customer; a monitoring module for monitoring one or more parameters associated with the at least one web based chat communication session; a computing module for computing at least one confidence score based on the one or more monitored parameters of the at least one web based chat communication session; and an allocation module for allocating the at least one resource to the at least one web based chat communication session, wherein the allocation is performed based on the at least one computed confidence score. 11. The system of claim 10, wherein the at least one web based chat communication session is an automated web based chat communication session. 12. The system of claim 10, wherein the computing module further compares the at least one computed confidence score with a threshold level. 13. The system of claim 12, wherein the computing module further optimizes the threshold level based on availability of the at least one allocated resource. 14. The system of claim 10, wherein the computing module further computes the at least one computed confidence score by using at least one fuzzy logic. 15. The system of claim 10, wherein the at least one computed confidence score is dynamic. 16. The system of claim 10, wherein the computing module further optimizes a number of active web based chat communication sessions that require agent assistance. 17. The system of claim 10, further comprising a routing module for routing the at least one web based chat communication session to the at least one allocated resource. 18. The system of claim 10, wherein the at least one allocated resource is one of a reserve agent, an agent, a supervisor, or a Subject Matter Expert (SME). 19. A computer-implemented method for managing at least one resource in an enterprise, the method comprising:
establishing at least one web based chat communication session with at least one customer; monitoring one or more parameters associated with the at least one web based chat communication session; computing at least one confidence score based on the one or more monitored parameters of the at least one web based chat communication session; allocating the at least one resource to the web based chat communication session, wherein the allocation is performed based on the at least one computed confidence score; and routing the at least one web based chat communication session to the at least one allocated resource. 20. The method of claim 19, further comprising comparing the at least one computed confidence score with a threshold level. | 2,600 |
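The allocation flow of claims 1 and 19 above (monitor session parameters, compute a confidence score, allocate a resource when warranted) can be sketched as follows. The parameter names and score weights are invented for illustration; claim 6 notes the score may instead be computed with fuzzy logic, and claim 3 supplies the threshold comparison.

```python
def confidence_score(params):
    """Toy confidence in [0, 1] for an automated chat session.

    Illustrative heuristic: repeated customer questions and long waits
    lower confidence that the automated session is succeeding.
    """
    score = 1.0
    score -= 0.2 * params.get("repeated_questions", 0)
    score -= 0.1 * params.get("minutes_waiting", 0)
    return max(0.0, min(1.0, score))


def allocate(params, threshold, available):
    """Allocate the next available resource (a reserve agent, agent,
    supervisor, or SME per claim 9) when confidence falls below threshold."""
    if confidence_score(params) < threshold and available:
        return available.pop(0)
    return None  # automated session continues unassisted
```

Claim 4's threshold optimization could then be layered on top, raising or lowering `threshold` based on how many entries remain in `available`.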
11,081 | 11,081 | 14,258,848 | 2,652 | Content of a communication session, such as a voice communication between a user and an agent of a contact center, is monitored. A keyword, a phrase, an emotion, or a gesture related to a topic in the monitored content of the communication session is identified. A rule based on the identified monitored content is applied. In response to applying the rule based on the monitored content, one or more topic suggestions are identified and presented to the user. For example, the rule can detect that the agent changed the discussion from a first topic to a second topic. In response to the agent discussing the second topic, the user is presented with two topic suggestions for the two topics. The user can select one of the topic suggestions to focus the agent on a specific topic suggestion. The selected topic suggestion along with discussion options is then displayed to the agent. | 1. A method comprising:
monitoring content of a communication session between a user and a first agent of a contact center; identifying at least one of a keyword, a phrase, an emotion, and a gesture included in the monitored content of the communication session; applying a rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture; and in response to applying the rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture, generating a first topic suggestion for presentation to the user. 2. The method of claim 1, wherein the first topic suggestion comprises a plurality of topic suggestions and the method further comprises:
receiving a selection, by the user, of one of the plurality of topic suggestions; and in response to receiving the selection, by the user, of the one of the plurality of topic suggestions, generating for presentation, to the first agent, the selected one of the plurality of topic suggestions along with a first list of one or more discussion options associated with the selected one of the plurality of topic suggestions. 3. The method of claim 2, further comprising:
transferring the communication session to a second agent or conferencing the second agent into the communication session; and in response to transferring the communication session to the second agent or conferencing the second agent into the communication session, generating for presentation, to the second agent, the selected one of the plurality of topic suggestions along with a second list of one or more discussion options associated with the selected one of the plurality of topic suggestions, wherein the first list is different than the second list. 4. The method of claim 1, wherein the rule is applied based on the identified at least one of the keyword, the phrase, the emotion, and the gesture, and an exceed time duration of the communication session. 5. The method of claim 1, further comprising: identifying a second topic suggestion from information stored in a database and wherein generating the first topic suggestion for selection by the user further comprises generating the second topic suggestion for selection by the user. 6. The method of claim 1, wherein the first topic suggestion comprises a plurality of topic suggestions and further comprising: receiving an input from the user to define an order of the plurality of topic suggestions. 7. The method of claim 1, wherein the first topic suggestion further comprises a plurality of topic suggestions identified by the user prior to an initiation of the communication session. 8. The method of claim 1, wherein applying the rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture is based on at least one of the following:
the first agent is not understanding the topic; the first agent discussing a different topic; a specific individual communicating in the communication session; two or more specific individuals speaking in the communication session; the two or more specific individuals calling in from a single communication endpoint; a plurality of keywords and/or phrases being communicated in the same sentence; the plurality of keywords and/or phrases being in a single email or text message; the keyword and/or phrase being communicated within a specific time period; a non response to an Interactive Voice Response (IVR) system; the keyword and/or phrase being an abbreviation; a number of exchanged emails or text messages; a gesture made during the communication session; a detected emotion of the specific individual communicating in the communication session; and a specific type of punctuation being used with a specific word and/or phrase in a sentence. 9. The method of claim 1, wherein at a conclusion of the communication session, information associated with the communication session is stored in a database for use with further communication sessions associated with the user. 10. The method of claim 1, wherein the first topic suggestion is generated based on detecting a defined number of keywords and/or phrases related to the topic and wherein the first topic suggestion is displayed or played to the user. 11. A system comprising:
a content monitor configured to monitor content of a communication session between a user and a first agent of a contact center and identify at least one of a keyword, a phrase, an emotion, and a gesture related to a topic in the monitored content of the communication session; a rule engine configured to apply a rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture; and a topic module configured to generate a first topic suggestion for selection by the user in response to applying the rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture. 12. The system of claim 11, wherein the first topic suggestion comprises a plurality of topic suggestions, wherein the topic module is further configured to receive a selection, by the user, of one of the plurality of topic suggestions and further comprising:
a presentation module configured to generate for presentation, to the first agent, the selected one of the plurality of topic suggestions along with a first list of one or more discussion options associated with the selected one of the plurality of topic suggestions in response to receiving the selection, by the user, of the one of the plurality of topic suggestions. 13. The system of claim 12, further comprising:
a session manager configured to transfer the communication session to a second agent or conference the second agent into the communication session; and the presentation module is further configured to generate for presentation, to the second agent, the selected one of the plurality of topic suggestions along with a second list of one or more discussion options associated with the selected one of the plurality of topic suggestions in response to transferring the communication session to the second agent or conferencing the second agent into the communication session, and wherein the first list is different than the second list. 14. The system of claim 11, wherein the rule is applied based on the identified at least one of the keyword, the phrase, the emotion, and the gesture, and an exceed time duration of the communication session. 15. The system of claim 11, wherein:
the topic module is further configured to identify a second topic suggestion from information stored in a database; and the presentation module is further configured to generate for presentation to the user the second topic suggestion for selection by the user. 16. The system of claim 11, wherein the first topic suggestion comprises a plurality of topic suggestions and wherein the topic module is further configured to receive an input from the user to define an order of the plurality of topic suggestions. 17. The system of claim 11, wherein the first topic suggestion further comprises a plurality of topic suggestions identified by the user prior to an initiation of the communication session. 18. The system of claim 11, wherein applying the rule to the identified at least one of the keyword, the phrase, the emotion, and the gesture is based on at least one of the following:
the first agent is not understanding the topic; the first agent discussing a different topic; a specific individual communicating in the communication session; two or more specific individuals speaking in the communication session; the two or more specific individuals calling in from a single communication endpoint; a plurality of keywords and/or phrases being communicated in the same sentence; the plurality of keywords and/or phrases being in a single email or text message; the keyword and/or phrase being communicated within a specific time period; the keyword and/or phrase being communicated a number of times; a non response to an Interactive Voice Response (IVR) system; the keyword and/or phrase being an abbreviation; a number of exchanged emails or text messages; a gesture made during the communication session; a detected emotion of the specific individual communicating in the communication session; and a specific type of punctuation being used with a specific word and/or phrase in a sentence. 19. The system of claim 11, wherein applying the rule to the identified at least one of the keyword, the phrase, the emotion, and the gesture is based on a specific type of punctuation being used with a specific word and/or phrase in a sentence. 20. A non-transient computer readable medium having stored thereon instructions that cause a processor to execute a method, the method comprising:
instructions to monitor content of a communication session between a user and an agent of a contact center; instructions to identify at least one of a keyword, a phrase, an emotion, and a gesture related to a topic in the monitored content of the communication session; instructions to apply a rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture; and in response to applying the rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture, instructions to generate a topic suggestion for selection by the user. | Content of a communication session, such as a voice communication between a user and an agent of a contact center, is monitored. A keyword, a phrase, an emotion, or a gesture related to a topic in the monitored content of the communication session is identified. A rule based on the identified monitored content is applied. In response to applying the rule based on the monitored content, one or more topic suggestions are identified and presented to the user. For example, the rule can detect that the agent changed the discussion from a first topic to a second topic. In response to the agent discussing the second topic, the user is presented with two topic suggestions for the two topics. The user can select one of the topic suggestions to focus the agent on a specific topic suggestion. The selected topic suggestion along with discussion options is then displayed to the agent.1. A method comprising:
monitoring content of a communication session between a user and a first agent of a contact center; identifying at least one of a keyword, a phrase, an emotion, and a gesture included in the monitored content of the communication session; applying a rule based on the identified at least one of the keyword, the phrase, the emotion and the gesture; and in response to applying the rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture, generating a first topic suggestion for presentation to the user. 2. The method of claim 1, wherein the first topic suggestion comprises a plurality of topic suggestions and the method further comprises:
receiving a selection, by the user, of one of the plurality of topic suggestions; and in response to receiving the selection, by the user, of the one of the plurality of topic suggestions, generating for presentation, to the first agent, the selected one of the plurality of topic suggestions along with a first list of one or more discussion options associated with the selected one of the plurality of topic suggestions. 3. The method of claim 2, further comprising:
transferring the communication session to a second agent or conferencing the second agent into the communication session; and in response to transferring the communication session to the second agent or conferencing the second agent into the communication session, generating for presentation, to the second agent, the selected one of the plurality of topic suggestions along with a second list of one or more discussion options associated with the selected one of the plurality of topic suggestions, wherein the first list is different than the second list. 4. The method of claim 1, wherein the rule is applied based on the identified at least one of the keyword, the phrase, the emotion, and the gesture, and an exceeded time duration of the communication session. 5. The method of claim 1, further comprising: identifying a second topic suggestion from information stored in a database and wherein generating the first topic suggestion for selection by the user further comprises generating the second topic suggestion for selection by the user. 6. The method of claim 1, wherein the first topic suggestion comprises a plurality of topic suggestions and further comprising: receiving an input from the user to define an order of the plurality of topic suggestions. 7. The method of claim 1, wherein the first topic suggestion further comprises a plurality of topic suggestions identified by the user prior to an initiation of the communication session. 8. The method of claim 1, wherein applying the rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture is based on at least one of the following:
the first agent is not understanding the topic; the first agent discussing a different topic; a specific individual communicating in the communication session; two or more specific individuals speaking in the communication session; the two or more specific individuals calling in from a single communication endpoint; a plurality of keywords and/or phrases being communicated in the same sentence; the plurality of keywords and/or phrases being in a single email or text message; the keyword and/or phrase being communicated within a specific time period; a non response to an Interactive Voice Response (IVR) system; the keyword and/or phrase being an abbreviation; a number of exchanged emails or text messages; a gesture made during the communication session; a detected emotion of the specific individual communicating in the communication session; and a specific type of punctuation being used with a specific word and/or phrase in a sentence. 9. The method of claim 1, wherein at a conclusion of the communication session, information associated with the communication session is stored in a database for use with further communication sessions associated with the user. 10. The method of claim 1, wherein the first topic suggestion is generated based on detecting a defined number of keywords and/or phrases related to the topic and wherein the first topic suggestion is displayed or played to the user. 11. A system comprising:
a content monitor configured to monitor content of a communication session between a user and a first agent of a contact center and identify at least one of a keyword, a phrase, an emotion, and a gesture related to a topic in the monitored content of the communication session; a rule engine configured to apply a rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture; and a topic module configured to generate a first topic suggestion for selection by the user in response to applying the rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture. 12. The system of claim 11, wherein the first topic suggestion comprises a plurality of topic suggestions, wherein the topic module is further configured to receive a selection, by the user, of one of the plurality of topic suggestions and further comprising:
a presentation module configured to generate for presentation, to the first agent, the selected one of the plurality of topic suggestions along with a first list of one or more discussion options associated with the selected one of the plurality of topic suggestions in response to receiving the selection, by the user, of the one of the plurality of topic suggestions. 13. The system of claim 12, further comprising:
a session manager configured to transfer the communication session to a second agent or conference the second agent into the communication session; and the presentation module is further configured to generate for presentation, to the second agent, the selected one of the plurality of topic suggestions along with a second list of one or more discussion options associated with the selected one of the plurality of topic suggestions in response to transferring the communication session to the second agent or conferencing the second agent into the communication session, and wherein the first list is different than the second list. 14. The system of claim 11, wherein the rule is applied based on the identified at least one of the keyword, the phrase, the emotion, and the gesture, and an exceeded time duration of the communication session. 15. The system of claim 11, wherein:
the topic module is further configured to identify a second topic suggestion from information stored in a database; and the presentation module is further configured to generate for presentation to the user the second topic suggestion for selection by the user. 16. The system of claim 11, wherein the first topic suggestion comprises a plurality of topic suggestions and wherein the topic module is further configured to receive an input from the user to define an order of the plurality of topic suggestions. 17. The system of claim 11, wherein the first topic suggestion further comprises a plurality of topic suggestions identified by the user prior to an initiation of the communication session. 18. The system of claim 11, wherein applying the rule to the identified at least one of the keyword, the phrase, the emotion, and the gesture is based on at least one of the following:
the first agent is not understanding the topic; the first agent discussing a different topic; a specific individual communicating in the communication session; two or more specific individuals speaking in the communication session; the two or more specific individuals calling in from a single communication endpoint; a plurality of keywords and/or phrases being communicated in the same sentence; the plurality of keywords and/or phrases being in a single email or text message; the keyword and/or phrase being communicated within a specific time period; the keyword and/or phrase being communicated a number of times; a non response to an Interactive Voice Response (IVR) system; the keyword and/or phrase being an abbreviation; a number of exchanged emails or text messages; a gesture made during the communication session; a detected emotion of the specific individual communicating in the communication session; and a specific type of punctuation being used with a specific word and/or phrase in a sentence. 19. The system of claim 11, wherein applying the rule to the identified at least one of the keyword, the phrase, the emotion, and the gesture is based on a specific type of punctuation being used with a specific word and/or phrase in a sentence. 20. A non-transient computer readable medium having stored thereon instructions that cause a processor to execute a method, the method comprising:
instructions to monitor content of a communication session between a user and an agent of a contact center; instructions to identify at least one of a keyword, a phrase, an emotion, and a gesture related to a topic in the monitored content of the communication session; instructions to apply a rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture; and in response to applying the rule based on the identified at least one of the keyword, the phrase, the emotion, and the gesture, instructions to generate a topic suggestion for selection by the user. | 2,600 |
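The rule flow recited in claims 1 and 10 above (monitor session content, apply a rule, and generate a topic suggestion once a defined number of related keywords and/or phrases has been detected) can be sketched as follows. The keyword-to-topic map and the threshold value are illustrative assumptions; the claims do not specify either.

```python
from collections import Counter

# Hypothetical keyword-to-topic map -- the claims leave the actual
# vocabulary unspecified.
TOPIC_KEYWORDS = {
    "billing": {"invoice", "charge", "refund"},
    "shipping": {"delivery", "tracking", "package"},
}
SUGGESTION_THRESHOLD = 2  # claim 10's "defined number" -- value assumed


def suggest_topics(utterances):
    """Monitor session content and return topic suggestions for every
    topic whose related-keyword count reaches the defined threshold."""
    counts = Counter()
    for utterance in utterances:
        words = set(utterance.lower().split())
        for topic, keywords in TOPIC_KEYWORDS.items():
            counts[topic] += len(words & keywords)
    return [topic for topic, n in counts.items() if n >= SUGGESTION_THRESHOLD]


session = [
    "I was charged twice on my invoice",
    "I would like a refund for the extra charge",
]
print(suggest_topics(session))  # -> ['billing']
```

A fuller implementation would add detectors for the emotion and gesture inputs the claims also recite, each feeding the same rule stage.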
11,082 | 11,082 | 16,441,521 | 2,659 | Behavioral profiling and shaping is used in a “closed-loop” in that an interaction with at least one human is monitored and based on inferred characteristics of the interaction with that human (e.g., their behavioral profile) the interaction is guided. In one exemplary embodiment, the interaction is between two humans, for example, a “customer” and an “agent” and the interaction is monitored and the agent is guided according to the inferred behavioral profile of the customer (or optionally of the agent themselves). | 1. A method for aiding a multi-party interaction comprising:
acquiring signals corresponding to successive communication events between multiple parties; processing the signals to generate a plurality of profile indicators; processing the profile indicators to generate a recommendation for presenting to at least one of the parties in the interaction; and presenting the recommendation to the at least one of the parties. 2. The method of claim 1 wherein the successive communication events comprise conversational turns in a dialog between the multiple parties. 3. The method of claim 1 wherein the successive communication events comprise separate dialogs between the multiple parties. 4. The method of claim 1 wherein the successive communication events comprise linguistic communication events comprising spoken or textual communication. 5. The method of claim 4 wherein processing the signals to generate the plurality of profile indicators includes performing automated speech recognition of the signals. 6. The method of claim 4 wherein processing the signals to generate the plurality of profile indicators includes semantic analysis of linguistic content of the signals. 7. The method of claim 1 wherein the signals corresponding to successive communication events represent non-verbal behavioral features. 8. The method of claim 7 wherein processing the signals to generate the plurality of profile indicators includes performing a direct conversion of a speech signal without explicit recognition of words spoken. 9. The method of claim 1 wherein processing the signals to generate the profile indicators comprises processing the signals using a first machine-learning component to generate the profile indicators. 10. The method of claim 9 wherein processing the profile indicators to generate a recommendation comprises processing the profile indicators using a second machine-learning component. 11. The method of claim 1 wherein the generating of the recommendation is ongoing during an interaction based on events in the interaction that have occurred.
12. The method of claim 1 wherein the recommendation includes an indicator related to success of a goal for the interaction. 13. A non-transitory machine-readable medium comprising instructions stored thereon, the instructions when executed by a data processing system cause said system to perform steps comprising:
acquiring signals corresponding to successive communication events between multiple parties; processing the signals to generate a plurality of profile indicators; processing the profile indicators to generate a recommendation for presenting to at least one of the parties in the interaction; and presenting the recommendation to the at least one of the parties. 14. A system for aiding a multi-party interaction, the system comprising:
an input for acquiring signals corresponding to successive communication events between multiple parties; a data processor configured to
process the signals to generate a plurality of profile indicators, and
process the profile indicators to generate a recommendation for presenting to at least one of the parties in the interaction; and
an output for presenting the recommendation to the at least one of the parties. | 2,600 |
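Claims 9 and 10 above describe a two-stage pipeline: a first machine-learning component maps raw interaction signals to profile indicators, and a second component maps those indicators to a recommendation. A minimal sketch of that staging, with hand-written heuristics standing in for the trained components and every feature name invented for illustration:

```python
def profile_indicators(events):
    """First stage: turn successive communication events (here, text
    turns) into profile indicators. A real system would use a trained
    model; these features are placeholders."""
    text = " ".join(e.lower() for e in events)
    return {
        "verbosity": sum(len(e.split()) for e in events) / max(len(events), 1),
        "negative_tone": sum(w in text for w in ("angry", "upset", "cancel")),
    }


def recommend(indicators):
    """Second stage: map profile indicators to an agent-facing
    recommendation. Thresholds and wording are assumptions."""
    if indicators["negative_tone"] >= 1:
        return "acknowledge frustration before proposing a solution"
    if indicators["verbosity"] > 20:
        return "summarize and confirm the customer's goal"
    return "proceed with standard script"


turns = ["I am upset about this bill", "I may cancel my account"]
print(recommend(profile_indicators(turns)))
```

Because the recommendation is regenerated after each turn, the loop naturally supports claim 11's requirement that guidance be ongoing during the interaction.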
11,083 | 11,083 | 15,823,173 | 2,646 | In some embodiments, one or more wireless stations operate to configure direct communication with neighboring mobile stations, e.g., direct communication between the wireless stations without utilizing an intermediate access point. Embodiments of the disclosure relate to a mechanism for a device to perform selective synchronization (or cluster merging) with one or more neighboring peer devices. | 1. A wireless station, comprising:
at least one antenna; at least one radio in communication with the at least one antenna and configured to perform communications via a Wi-Fi interface; and at least one processor in communication with the at least one radio; wherein the at least one processor is configured to cause the wireless station to:
synchronize timing to a first peer wireless station, wherein the first peer wireless station is a timing master of a first cluster of wireless stations;
receive a beacon from a second peer wireless station via a peer-to-peer communication protocol, wherein the beacon includes an indication of one or more services supported by the second peer wireless station, wherein the second peer wireless station is configured to synchronize timing to a third peer wireless station, wherein the third peer wireless station is a timing master of a second cluster of wireless stations;
determine, based at least in part on the indication of one or more services supported, a service that is common to both the wireless station and the second peer wireless station; and
initiate a merger of the first cluster and the second cluster. 2. The wireless station of claim 1,
wherein the indication of one or more services supported comprises a service hash of services supported by the second peer wireless station. 3. The wireless station of claim 2,
wherein to determine a service that is common to the wireless station and the second peer wireless station, the at least one processor is further configured to:
generate a local hash of services supported by the wireless station; and
compare the local hash to the service hash. 4. The wireless station of claim 1,
wherein the indication of services supported specifies at least one parameter for synchronization. 5. The wireless station of claim 4,
wherein the at least one parameter comprises at least one of:
a service identifier;
a hash of supported services; or
a network name. 6. The wireless station of claim 1,
wherein the at least one processor is further configured to:
receive a beacon from a fourth peer wireless station via a peer-to-peer communication protocol, wherein the beacon includes an indication of services supported by the fourth peer wireless station, wherein the fourth peer wireless station is configured to synchronize timing to a fifth peer wireless station, wherein the fifth peer wireless station is a timing master of a third cluster of wireless stations;
determine, based at least in part on the indication of services supported by the fourth peer wireless station, that there are no services that are common to the wireless station and the fourth peer wireless station; and
determine not to initiate a merger of the first cluster with the third cluster. 7. The wireless station of claim 1,
wherein services supported by the second peer wireless station comprise at least one of:
a service provided by the second peer wireless station;
a service consumed by the second peer wireless station;
a service advertised by the second peer wireless station; or
a service sought by the second peer wireless station. 8. The wireless station of claim 1,
wherein the service that is common to the wireless station and the second peer wireless station comprises at least one of:
a service provided by the second peer wireless station and sought or consumed by the wireless station;
a service consumed by the second peer wireless station and sought, advertised, or provided by the wireless station;
a service advertised by the second peer wireless station and sought or consumed by the wireless station;
a service sought by the second peer wireless station and advertised or provided by the wireless station;
a service provided by the wireless station and sought or consumed by the second peer wireless station;
a service consumed by the wireless station and sought, advertised, or provided by the second peer wireless station;
a service advertised by the wireless station and sought or consumed by the second peer wireless station; or
a service sought by the wireless station and advertised or provided by the second peer wireless station. 9. An apparatus, comprising:
a memory; and at least one processor in communication with the memory; wherein the at least one processor is configured to:
synchronize timing to a first peer wireless station;
receive beacons from a plurality of peer wireless stations via a peer-to-peer communication protocol, wherein the beacons include indications of services supported by the peer wireless stations, wherein the peer wireless stations do not synchronize timing to the first peer wireless station;
determine, based at least in part on the indications of services supported by the peer wireless stations, a common service with a target peer wireless station of the plurality of peer wireless stations; and
initiate a timing synchronization of the target peer wireless station and the first peer wireless station. 10. The apparatus of claim 9,
wherein the indication of services supported comprises hashes of services supported by the plurality of peer wireless stations. 11. The apparatus of claim 10,
wherein to determine a common service, the at least one processor is further configured to:
generate a local hash of supported services; and
compare the local hash to the hashes of services supported by the plurality of peer wireless stations. 12. The apparatus of claim 9,
wherein the indication of services supported specifies at least one parameter for synchronization. 13. The apparatus of claim 12,
wherein the at least one parameter comprises at least one of:
a service identifier;
a hash of supported services; or
a network name. 14. The apparatus of claim 9,
wherein services supported by the target peer wireless station comprise one or more of:
a service provided by the at least one wireless station;
a service consumed by the at least one wireless station;
a service advertised by the at least one wireless station; or
a service sought by the at least one peer wireless station. 15. The apparatus of claim 9,
wherein the common service comprises at least one of:
a service provided by the at least one peer wireless station and sought or consumed by an application in communication with the apparatus;
a service consumed by the at least one peer wireless station and sought, advertised, or provided by an application in communication with the apparatus;
a service advertised by the at least one peer wireless station and sought or consumed by an application in communication with the apparatus;
a service sought by the at least one peer wireless station and advertised or provided by an application in communication with the apparatus;
a service provided by an application in communication with the apparatus and sought or consumed by the at least one peer wireless station;
a service consumed by an application in communication with the apparatus and sought, advertised, or provided by the at least one peer wireless station;
a service advertised by an application in communication with the apparatus and sought or consumed by the at least one peer wireless station; or
a service sought by an application in communication with the apparatus and advertised or provided by the at least one peer wireless station. 16. A non-transitory computer readable memory medium storing program instructions executable by processing circuitry of a wireless station to:
synchronize timing to a first peer wireless station, wherein the first peer wireless station is anchor master of a first cluster of wireless stations, wherein the wireless station and the first peer wireless station are associated with the first cluster;
receive a beacon from a second peer wireless station via a peer-to-peer communication protocol, wherein the beacon includes an indication of one or more services supported by the second peer wireless station, wherein the second peer wireless station is associated with a second cluster of wireless stations;
determine, based at least in part on the indication of one or more services supported, a service that is common to both the wireless station and the second peer wireless station; and
in response to determining the service, initiate a merger of the first cluster and the second cluster. 17. The non-transitory computer readable memory medium of claim 16,
wherein the indication of one or more services supported comprises a service hash of services supported by the second peer wireless station; and wherein to determine the service, the program instructions are further executable to:
generate a local hash of services supported by the wireless station; and
compare the local hash to the service hash. 18. The non-transitory computer readable memory medium of claim 16,
wherein the indication of services supported specifies at least one parameter for synchronization, wherein the at least one parameter comprises at least one of:
a service identifier;
a hash of supported services; or
a network name. 19. The non-transitory computer readable memory medium of claim 16,
wherein services supported by the second peer wireless station comprise at least one of:
a service provided by the second peer wireless station;
a service consumed by the second peer wireless station;
a service advertised by the second peer wireless station; or
a service sought by the second peer wireless station. 20. The non-transitory computer readable memory medium of claim 16,
wherein the service comprises at least one of:
a service provided by the second peer wireless station and sought or consumed by the wireless station;
a service consumed by the second peer wireless station and sought, advertised, or provided by the wireless station;
a service advertised by the second peer wireless station and sought or consumed by the wireless station;
a service sought by the second peer wireless station and advertised or provided by the wireless station;
a service provided by the wireless station and sought or consumed by the second peer wireless station;
a service consumed by the wireless station and sought, advertised, or provided by the second peer wireless station;
a service advertised by the wireless station and sought or consumed by the second peer wireless station; or
a service sought by the wireless station and advertised or provided by the second peer wireless station. | In some embodiments, one or more wireless stations operate to configure direct communication with neighboring mobile stations, e.g., direct communication between the wireless stations without utilizing an intermediate access point. Embodiments of the disclosure relate to a mechanism for a device to perform selective synchronization (or cluster merging) with one or more neighboring peer devices.1. A wireless station, comprising:
at least one antenna; at least one radio in communication with the at least one antenna and configured to perform communications via a Wi-Fi interface; and at least one processor in communication with the at least one radio; wherein the at least one processor is configured to cause the wireless station to:
synchronize timing to a first peer wireless station, wherein the first peer wireless station is a timing master of a first cluster of wireless stations;
receive a beacon from a second peer wireless station via a peer-to-peer communication protocol, wherein the beacon includes an indication of one or more services supported by the second peer wireless station, wherein the second peer wireless station is configured to synchronize timing to a third peer wireless station, wherein the third peer wireless station is a timing master of a second cluster of wireless stations;
determine, based at least in part on the indication of one or more services supported, a service that is common to both the wireless station and the second peer wireless station; and
initiate a merger of the first cluster and the second cluster. 2. The wireless station of claim 1,
wherein the indication of one or more services supported comprises a service hash of services supported by the second peer wireless station. 3. The wireless station of claim 2,
wherein to determine a service that is common to the wireless station and the second peer wireless station, the at least one processor is further configured to:
generate a local hash of services supported by the wireless station; and
compare the local hash to the service hash. 4. The wireless station of claim 1,
wherein the indication of services supported specifies at least one parameter for synchronization. 5. The wireless station of claim 4,
wherein the at least one parameter comprises at least one of:
a service identifier;
a hash of supported services; or
a network name. 6. The wireless station of claim 1,
wherein the at least one processor is further configured to:
receive a beacon from a fourth peer wireless station via a peer-to-peer communication protocol, wherein the beacon includes an indication of services supported by the fourth peer wireless station, wherein the fourth peer wireless station is configured to synchronize timing to a fifth peer wireless station, wherein the fifth peer wireless station is a timing master of a third cluster of wireless stations;
determine, based at least in part on the indication of services supported by the fourth peer wireless station, that there are no services that are common to the wireless station and the fourth peer wireless station; and
determine not to initiate a merger of the first cluster with the third cluster. 7. The wireless station of claim 1,
wherein services supported by the second peer wireless station comprise at least one of:
a service provided by the second peer wireless station;
a service consumed by the second peer wireless station;
a service advertised by the second peer wireless station; or
a service sought by the second peer wireless station. 8. The wireless station of claim 1,
wherein the service that is common to the wireless station and the second peer wireless station comprises at least one of:
a service provided by the second peer wireless station and sought or consumed by the wireless station;
a service consumed by the second peer wireless station and sought, advertised, or provided by the wireless station;
a service advertised by the second peer wireless station and sought or consumed by the wireless station;
a service sought by the second peer wireless station and advertised or provided by the wireless station;
a service provided by the wireless station and sought or consumed by the second peer wireless station;
a service consumed by the wireless station and sought, advertised, or provided by the second peer wireless station;
a service advertised by the wireless station and sought or consumed by the second peer wireless station; or
a service sought by the wireless station and advertised or provided by the second peer wireless station. 9. An apparatus, comprising:
a memory; and at least one processor in communication with the memory; wherein the at least one processor is configured to:
synchronize timing to a first peer wireless station;
receive beacons from a plurality of peer wireless stations via a peer-to-peer communication protocol, wherein the beacons include indications of services supported by the peer wireless stations, wherein the peer wireless stations do not synchronize timing to the first peer wireless station;
determine, based at least in part on the indications of services supported by the peer wireless stations, a common service with a target peer wireless station of the plurality of peer wireless stations; and
initiate a timing synchronization of the target peer wireless station and the first peer wireless station. 10. The apparatus of claim 9,
wherein the indication of services supported comprises hashes of services supported by the plurality of peer wireless stations. 11. The apparatus of claim 10,
wherein to determine a common service, the at least one processor is further configured to:
generate a local hash of supported services; and
compare the local hash to the hashes of services supported by the plurality of peer wireless stations. 12. The apparatus of claim 9,
wherein the indication of services supported specifies at least one parameter for synchronization. 13. The apparatus of claim 12,
wherein the at least one parameter comprises at least one of:
a service identifier;
a hash of supported services; or
a network name. 14. The apparatus of claim 9,
wherein services supported by the target peer wireless station comprise one or more of:
a service provided by the target peer wireless station;
a service consumed by the target peer wireless station;
a service advertised by the target peer wireless station; or
a service sought by the target peer wireless station. 15. The apparatus of claim 9,
wherein the common service comprises at least one of:
a service provided by the target peer wireless station and sought or consumed by an application in communication with the apparatus;
a service consumed by the target peer wireless station and sought, advertised, or provided by an application in communication with the apparatus;
a service advertised by the target peer wireless station and sought or consumed by an application in communication with the apparatus;
a service sought by the target peer wireless station and advertised or provided by an application in communication with the apparatus;
a service provided by an application in communication with the apparatus and sought or consumed by the target peer wireless station;
a service consumed by an application in communication with the apparatus and sought, advertised, or provided by the target peer wireless station;
a service advertised by an application in communication with the apparatus and sought or consumed by the target peer wireless station; or
a service sought by an application in communication with the apparatus and advertised or provided by the target peer wireless station. 16. A non-transitory computer readable memory medium storing program instructions executable by processing circuitry of a wireless station to:
synchronize timing to a first peer wireless station, wherein the first peer wireless station is anchor master of a first cluster of wireless stations, wherein the wireless station and the first peer wireless station are associated with the first cluster;
receive a beacon from a second peer wireless station via a peer-to-peer communication protocol, wherein the beacon includes an indication of one or more services supported by the second peer wireless station, wherein the second peer wireless station is associated with a second cluster of wireless stations;
determine, based at least in part on the indication of one or more services supported, a service that is common to both the wireless station and the second peer wireless station; and
in response to determining the service, initiate a merger of the first cluster and the second cluster. 17. The non-transitory computer readable memory medium of claim 16,
wherein the indication of one or more services supported comprises a service hash of services supported by the second peer wireless station; and wherein to determine the service, the program instructions are further executable to:
generate a local hash of services supported by the wireless station; and
compare the local hash to the service hash. 18. The non-transitory computer readable memory medium of claim 16,
wherein the indication of services supported specifies at least one parameter for synchronization, wherein the at least one parameter comprises at least one of:
a service identifier;
a hash of supported services; or
a network name. 19. The non-transitory computer readable memory medium of claim 16,
wherein services supported by the second peer wireless station comprise at least one of:
a service provided by the second peer wireless station;
a service consumed by the second peer wireless station;
a service advertised by the second peer wireless station; or
a service sought by the second peer wireless station. 20. The non-transitory computer readable memory medium of claim 16,
wherein the service comprises at least one of:
a service provided by the second peer wireless station and sought or consumed by the wireless station;
a service consumed by the second peer wireless station and sought, advertised, or provided by the wireless station;
a service advertised by the second peer wireless station and sought or consumed by the wireless station;
a service sought by the second peer wireless station and advertised or provided by the wireless station;
a service provided by the wireless station and sought or consumed by the second peer wireless station;
a service consumed by the wireless station and sought, advertised, or provided by the second peer wireless station;
a service advertised by the wireless station and sought or consumed by the second peer wireless station; or
a service sought by the wireless station and advertised or provided by the second peer wireless station. | 2,600 |
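The hash-based merge decision recited in the record above (claims 10–11 and 17: generate a local hash of supported services and compare it against the service hashes carried in a peer station's beacon before initiating a cluster merger) can be sketched as follows. This is an illustrative reconstruction only — the claims fix neither the hash function nor the digest length, so the truncated SHA-256 digest and all function names here are assumptions.

```python
import hashlib

def service_hash(service_name: str) -> bytes:
    # Hypothetical per-service hash; the claims leave the function open,
    # so a SHA-256 digest truncated to six bytes is used purely for
    # illustration.
    return hashlib.sha256(service_name.encode()).digest()[:6]

def should_merge(local_services, beacon_hashes) -> bool:
    # Generate a local hash for each locally supported service and look
    # for any overlap with the service hashes advertised in the peer's
    # beacon; a common service justifies initiating a cluster merger,
    # while no overlap means the merger is not initiated (claim 6).
    local_hashes = {service_hash(s) for s in local_services}
    return not local_hashes.isdisjoint(beacon_hashes)

# A station supporting "print" and "chat" hears a beacon from another
# cluster whose anchor advertises hashes for "print" and "backup".
beacon = {service_hash("print"), service_hash("backup")}
print(should_merge({"print", "chat"}, beacon))  # True  -> initiate merger
print(should_merge({"video"}, beacon))          # False -> no merger
```

Comparing fixed-length hashes rather than full service names keeps the beacon payload small, which is presumably why the claims recite a "service hash" instead of the service identifiers themselves.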
11,084 | 11,084 | 15,540,423 | 2,626 | The present invention generally relates to an electronic device comprising a touch interface for a user to control functionality of the electronic device, and specifically to a user interface allowing more than one user to control such functionality. The invention also relates to a corresponding method and a computer program product. | 1. A multiple-user touch system for an electronic device, the touch system comprising:
a touch user interface, and touch control circuitry connected to the touch user interface, the touch control circuitry being configured to determine if a first object being positioned at a first coordinate at the touch user interface is related to a second object being positioned at a second coordinate at the touch user interface, wherein the first and the second coordinate are spatially separated at the touch user interface and the relation between the first and the second object is determined based on information relating to the first and the second coordinate for the first and the second object, respectively. 2. The touch system according to claim 1, wherein the touch user interface comprises a set of touch areas and the touch control circuitry is connected to the touch areas. 3. The touch system according to claim 1, wherein the relation between the first and the second object is determined by establishing an electrical coupling between the first and the second object. 4. The touch system according to claim 3, wherein a profile for the electrical coupling between the first and the second object is matched to a predetermined reference profile. 5. The touch system according to claim 2, wherein the electrical coupling between the first and the second object is determined by transmitting a predetermined signal from a touch area at the first coordinate and analyzing a representation of the predetermined signal at a touch area at the second coordinate. 6. The touch system according to claim 1, wherein the touch control circuitry is further configured to determine the first and the second coordinate of the first and the second object, respectively, at the touch interface. 7. The touch system according to claim 2, wherein the set of touch areas comprises a first and a second set of conductive lines comprised with the touch user interface. 8. The touch system according to claim 7, wherein the first set of conductive lines is orthogonal to the second set of conductive lines. 9. 
The touch system according to claim 7, wherein the first and the second coordinate are determined by activating the first set of conductive lines and scanning the second set of conductive lines. 10. The touch system according to claim 7, wherein the touch control circuitry is configured to determine the electrical coupling between the first and the second object by:
transmitting a predetermined signal from a first (Xa) conductive line of the first set of conductive lines, and/or transmitting the predetermined signal from a first (Ya) conductive line of the second set of conductive lines, wherein the first (Xa) conductive line of the first set of conductive lines and the first (Ya) conductive line of the second set of conductive lines are selected to correspond to the first coordinate for the first object, receiving a representation of the predetermined signal at a second (Xb) conductive line of the first set of conductive lines, and/or receiving a representation of the predetermined signal at a second (Yb) conductive line of the second set of conductive lines, wherein the second (Xb) conductive line of the first set of conductive lines and the second (Yb) conductive line of the second set of conductive lines are selected to correspond to the second coordinate for the second object. 11. The touch system according to claim 1, wherein the first and the second objects are at least one of a body part, such as a finger, or a conductive stylus. 12. The touch system according to claim 1, wherein the touch user interface is a touch screen. 13. The touch system according to claim 1, further comprising a display element arranged adjacently to the touch sensor. 14. The touch system according to claim 5, wherein the predetermined signal is selected to reduce interference from a surrounding of the touch system and/or selected based on a conductivity model for the human body. 15. A method for operating a multiple-user touch system for an electronic device, the touch system comprising a touch user interface, and touch control circuitry connected to the touch user interface, wherein the method comprises the step of:
determining if a first object being positioned at a first coordinate at the touch user interface is related to a second object being positioned at a second coordinate at the touch user interface, wherein the first and the second coordinate are spatially separated at the touch user interface and the step of determining the relation between the first and the second object is based on information relating to the first and the second coordinate for the first and the second object, respectively. 16. The method according to claim 15, wherein determining if the first and the second object are related comprises the step of establishing an electrical coupling between the first and the second object. 17. The method according to claim 16, wherein the step of establishing the electrical coupling between the first and the second object comprises the steps of:
transmitting a predetermined signal from a conductive area of the touch interface positioned at the first coordinate, and/or analyzing a representation of the predetermined signal at a conductive area of the touch interface positioned at the second coordinate. 18. The method according to claim 16, wherein the touch interface comprises a first and a second set of conductive lines connected to the touch control circuitry, and the method further comprises the steps of:
transmitting a predetermined signal from a first (Xa) conductive line of the first set of conductive lines; transmitting the predetermined signal from a first (Ya) conductive line of the second set of conductive lines; and/or receiving a representation of the predetermined signal at a second (Xb) conductive line of the first set of conductive lines, and/or receiving a representation of the predetermined signal at a second (Yb) conductive line of the second set of conductive lines, wherein the first (Xa) conductive line of the first set of conductive lines and the first (Ya) conductive line of the second set of conductive lines are selected to correspond to the first coordinate for the first object, and the second (Xb) conductive line of the first set of conductive lines and the second (Yb) conductive line of the second set of conductive lines are selected to correspond to the second coordinate for the second object. 19. A computer program product comprising a computer readable medium having stored thereon computer program means for operating a multiple-user touch system for an electronic device, the touch system comprising a touch user interface, and touch control circuitry connected to the touch user interface, the computer program product comprising:
code for determining if a first object being positioned at a first coordinate at the touch user interface is related to a second object being positioned at a second coordinate at the touch user interface, wherein the first and the second coordinate are spatially separated at the touch user interface and the step of determining the relation between the first and the second object is based on information relating to the first and the second coordinate for the first and the second object, respectively. | The present invention generally relates to an electronic device comprising a touch interface for a user to control functionality of the electronic device, and specifically to a user interface allowing more than one user to control such functionality. The invention also relates to a corresponding method and a computer program product.1. A multiple-user touch system for an electronic device, the touch system comprising:
a touch user interface, and touch control circuitry connected to the touch user interface, the touch control circuitry being configured to determine if a first object being positioned at a first coordinate at the touch user interface is related to a second object being positioned at a second coordinate at the touch user interface, wherein the first and the second coordinate are spatially separated at the touch user interface and the relation between the first and the second object is determined based on information relating to the first and the second coordinate for the first and the second object, respectively. 2. The touch system according to claim 1, wherein the touch user interface comprises a set of touch areas and the touch control circuitry is connected to the touch areas. 3. The touch system according to claim 1, wherein the relation between the first and the second object is determined by establishing an electrical coupling between the first and the second object. 4. The touch system according to claim 3, wherein a profile for the electrical coupling between the first and the second object is matched to a predetermined reference profile. 5. The touch system according to claim 2, wherein the electrical coupling between the first and the second object is determined by transmitting a predetermined signal from a touch area at the first coordinate and analyzing a representation of the predetermined signal at a touch area at the second coordinate. 6. The touch system according to claim 1, wherein the touch control circuitry is further configured to determine the first and the second coordinate of the first and the second object, respectively, at the touch interface. 7. The touch system according to claim 2, wherein the set of touch areas comprises a first and a second set of conductive lines comprised with the touch user interface. 8. The touch system according to claim 7, wherein the first set of conductive lines is orthogonal to the second set of conductive lines. 9. 
The touch system according to claim 7, wherein the first and the second coordinate are determined by activating the first set of conductive lines and scanning the second set of conductive lines. 10. The touch system according to claim 7, wherein the touch control circuitry is configured to determine the electrical coupling between the first and the second object by:
transmitting a predetermined signal from a first (Xa) conductive line of the first set of conductive lines, and/or transmitting the predetermined signal from a first (Ya) conductive line of the second set of conductive lines, wherein the first (Xa) conductive line of the first set of conductive lines and the first (Ya) conductive line of the second set of conductive lines are selected to correspond to the first coordinate for the first object, receiving a representation of the predetermined signal at a second (Xb) conductive line of the first set of conductive lines, and/or receiving a representation of the predetermined signal at a second (Yb) conductive line of the second set of conductive lines, wherein the second (Xb) conductive line of the first set of conductive lines and the second (Yb) conductive line of the second set of conductive lines are selected to correspond to the second coordinate for the second object. 11. The touch system according to claim 1, wherein the first and the second objects are at least one of a body part, such as a finger, or a conductive stylus. 12. The touch system according to claim 1, wherein the touch user interface is a touch screen. 13. The touch system according to claim 1, further comprising a display element arranged adjacently to the touch sensor. 14. The touch system according to claim 5, wherein the predetermined signal is selected to reduce interference from a surrounding of the touch system and/or selected based on a conductivity model for the human body. 15. A method for operating a multiple-user touch system for an electronic device, the touch system comprising a touch user interface, and touch control circuitry connected to the touch user interface, wherein the method comprises the step of:
determining if a first object being positioned at a first coordinate at the touch user interface is related to a second object being positioned at a second coordinate at the touch user interface, wherein the first and the second coordinate are spatially separated at the touch user interface and the step of determining the relation between the first and the second object is based on information relating to the first and the second coordinate for the first and the second object, respectively. 16. The method according to claim 15, wherein determining if the first and the second object are related comprises the step of establishing an electrical coupling between the first and the second object. 17. The method according to claim 16, wherein the step of establishing the electrical coupling between the first and the second object comprises the steps of:
transmitting a predetermined signal from a conductive area of the touch interface positioned at the first coordinate, and/or analyzing a representation of the predetermined signal at a conductive area of the touch interface positioned at the second coordinate. 18. The method according to claim 16, wherein the touch interface comprises a first and a second set of conductive lines connected to the touch control circuitry, and the method further comprises the steps of:
transmitting a predetermined signal from a first (Xa) conductive line of the first set of conductive lines; transmitting the predetermined signal from a first (Ya) conductive line of the second set of conductive lines; and/or receiving a representation of the predetermined signal at a second (Xb) conductive line of the first set of conductive lines, and/or receiving a representation of the predetermined signal at a second (Yb) conductive line of the second set of conductive lines, wherein the first (Xa) conductive line of the first set of conductive lines and the first (Ya) conductive line of the second set of conductive lines are selected to correspond to the first coordinate for the first object, and the second (Xb) conductive line of the first set of conductive lines and the second (Yb) conductive line of the second set of conductive lines are selected to correspond to the second coordinate for the second object. 19. A computer program product comprising a computer readable medium having stored thereon computer program means for operating a multiple-user touch system for an electronic device, the touch system comprising a touch user interface, and touch control circuitry connected to the touch user interface, the computer program product comprising:
code for determining if a first object being positioned at a first coordinate at the touch user interface is related to a second object being positioned at a second coordinate at the touch user interface, wherein the first and the second coordinate are spatially separated at the touch user interface and the step of determining the relation between the first and the second object is based on information relating to the first and the second coordinate for the first and the second object, respectively. | 2,600 |
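The coupling test recited in the record above (claims 3–5 and 16–17: drive a known signal at the touch area under the first touch, sample it at the area under the second touch, and match the coupling profile against a predetermined reference) can be sketched as below. Everything here is a hypothetical reconstruction: the `measure` callback stands in for the unspecified analog front end, and the threshold-based mean-deviation match is only one way the claim 4 profile matching might be realized.

```python
def touches_related(measure, tx_area, rx_area, reference_profile, tol=0.2):
    # `measure(tx_area, rx_area)` is a hypothetical driver hook: it drives
    # the predetermined signal at the touch area under the first coordinate
    # and returns the per-frequency amplitudes picked up at the area under
    # the second coordinate.
    received = measure(tx_area, rx_area)
    # Claim 4-style matching: treat the two touches as electrically coupled
    # (e.g. two fingers of one user, or two users holding hands) when the
    # mean relative deviation from the reference profile stays under `tol`.
    deviations = [abs(r - p) / p for r, p in zip(received, reference_profile)]
    return sum(deviations) / len(deviations) < tol

reference = [1.0, 0.8, 0.5]       # assumed reference coupling profile

def coupled(tx, rx):              # stub front end: strong coupling
    return [0.95, 0.82, 0.48]

def uncoupled(tx, rx):            # stub front end: signal mostly lost
    return [0.10, 0.05, 0.02]

print(touches_related(coupled, (3, 7), (12, 2), reference))    # True
print(touches_related(uncoupled, (3, 7), (12, 2), reference))  # False
```

Using a multi-frequency profile rather than a single amplitude is what lets claim 14's refinement work: frequencies can be chosen to sidestep ambient interference and to match a conductivity model of the human body.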
11,085 | 11,085 | 16,425,310 | 2,647 | In a general aspect, a motion detection system detects gestures (e.g., human gestures) and initiates actions in response to the detected gestures. In some aspects, channel information is obtained based on wireless signals transmitted through a space by one or more wireless communication devices. A gesture recognition engine analyzes the channel information to detect a gesture (e.g., a predetermined gesture sequence) in the space. An action to be initiated in response to the detected gesture is identified. An instruction to perform the action is sent to a network-connected device associated with the space. | 1. A method comprising:
obtaining channel information based on wireless signals transmitted through a space by one or more wireless communication devices; by operation of a gesture recognition engine, analyzing the channel information to detect a gesture in the space, wherein detecting the gesture comprises using a time-frequency filter to detect a time-frequency signature of the gesture; identifying an action to be initiated in response to the detected gesture; and sending, to a network-connected device associated with the space, an instruction to perform the action. 2. The method of claim 1, comprising:
detecting a location of the gesture; and determining the action to be initiated based on the location of the gesture. 3. The method of claim 1, wherein detecting the gesture comprises detecting a sequence of gestures, and detecting the sequence of gestures comprises determining that a first gesture and a second gesture occurred within a gesture timeout period. 4. The method of claim 3, wherein detecting the sequence of gestures comprises:
in response to detecting the first gesture, initiating a state of a state machine and initiating a gesture timeout counter; in response to detecting the second gesture within the gesture timeout period, progressing the state of the state machine and reinitiating the gesture timeout counter; after reinitiating the gesture timeout counter, detecting a gesture timeout based on the gesture timeout counter; and identifying the action based on the state of the state machine at the gesture timeout. 5. (canceled) 6. The method of claim 5, wherein the channel information comprises a time series of channel responses, and using the time-frequency filter comprises applying weighting coefficients to frequency components of the channel responses. 7. The method of claim 6, wherein the time-frequency filter comprises an adaptive time-frequency filter that tunes the weighting coefficients to detect time-frequency signatures of gestures. 8. The method of claim 7, wherein the adaptive time-frequency filter tunes the weighting coefficients to detect gestures that modulate an intensity of the channel responses at a frequency in a frequency range corresponding to human gestures. 9. A non-transitory computer-readable medium comprising instructions that are operable, when executed by data processing apparatus, to perform operations comprising:
obtaining channel information based on wireless signals transmitted through a space by one or more wireless communication devices; by operation of a gesture recognition engine, analyzing the channel information to detect a gesture in the space, wherein detecting the gesture comprises using a time-frequency filter to detect a time-frequency signature of the gesture; identifying an action to be initiated in response to the detected gesture; and sending, to a network-connected device associated with the space, an instruction to perform the action. 10. The computer-readable medium of claim 9, the operations comprising:
detecting a location of the gesture; and determining the action to be initiated based on the location of the gesture. 11. The computer-readable medium of claim 9, wherein detecting the gesture comprises detecting a sequence of gestures, and detecting the sequence of gestures comprises determining that a first gesture and a second gesture occurred within a gesture timeout period. 12. The computer-readable medium of claim 11, wherein detecting the sequence of gestures comprises:
in response to detecting the first gesture, initiating a state of a state machine and initiating a gesture timeout counter; in response to detecting the second gesture within the gesture timeout period, progressing the state of the state machine and reinitiating the gesture timeout counter; after reinitiating the gesture timeout counter, detecting a gesture timeout based on the gesture timeout counter; and identifying the action based on the state of the state machine at the gesture timeout. 13. (canceled) 14. The computer-readable medium of claim 13, wherein the channel information comprises a time series of channel responses, and using the time-frequency filter comprises applying weighting coefficients to frequency components of the channel responses. 15. The computer-readable medium of claim 14, wherein the time-frequency filter comprises an adaptive time-frequency filter that tunes the weighting coefficients to detect time-frequency signatures of gestures. 16. The computer-readable medium of claim 15, wherein the adaptive time-frequency filter tunes the weighting coefficients to detect gestures that modulate an intensity of the channel responses at a frequency in a frequency range corresponding to human gestures. 17. A system comprising:
wireless communication devices operable to transmit wireless signals through a space; a network-connected device associated with the space; and a computer device comprising one or more processors operable to perform operations comprising:
obtaining channel information based on wireless signals transmitted through the space by one or more of the wireless communication devices;
by operation of a gesture recognition engine, analyzing the channel information to detect a gesture in the space, wherein detecting the gesture comprises using a time-frequency filter to detect a time-frequency signature of the gesture;
identifying an action to be initiated in response to the detected gesture; and
sending, to a network-connected device associated with the space, an instruction to perform the action. 18. The system of claim 17, the operations comprising:
detecting a location of the gesture; and determining the action to be initiated based on the location of the gesture. 19. The system of claim 17, wherein detecting the gesture comprises detecting a sequence of gestures, and detecting the sequence of gestures comprises determining that a first gesture and a second gesture occurred within a gesture timeout period. 20. The system of claim 19, wherein detecting the sequence of gestures comprises:
in response to detecting the first gesture, initiating a state of a state machine and initiating a gesture timeout counter; in response to detecting the second gesture within the gesture timeout period, progressing the state of the state machine and reinitiating the gesture timeout counter; after reinitiating the gesture timeout counter, detecting a gesture timeout based on the gesture timeout counter; and identifying the action based on the state of the state machine at the gesture timeout. 21. (canceled) 22. The system of claim 21, wherein the channel information comprises a time series of channel responses, and using the time-frequency filter comprises applying weighting coefficients to frequency components of the channel responses. 23. The system of claim 22, wherein the time-frequency filter comprises an adaptive time-frequency filter that tunes the weighting coefficients to detect time-frequency signatures of gestures. 24. The system of claim 23, wherein the adaptive time-frequency filter tunes the weighting coefficients to detect gestures that modulate an intensity of the channel responses at a frequency in a frequency range corresponding to human gestures. | In a general aspect, a motion detection system detects gestures (e.g., human gestures) and initiates actions in response to the detected gestures. In some aspects, channel information is obtained based on wireless signals transmitted through a space by one or more wireless communication devices. A gesture recognition engine analyzes the channel information to detect a gesture (e.g., a predetermined gesture sequence) in the space. An action to be initiated in response to the detected gesture is identified. An instruction to perform the action is sent to a network-connected device associated with the space.1. A method comprising:
obtaining channel information based on wireless signals transmitted through a space by one or more wireless communication devices; by operation of a gesture recognition engine, analyzing the channel information to detect a gesture in the space, wherein detecting the gesture comprises using a time-frequency filter to detect a time-frequency signature of the gesture; identifying an action to be initiated in response to the detected gesture; and sending, to a network-connected device associated with the space, an instruction to perform the action. 2. The method of claim 1, comprising:
detecting a location of the gesture; and determining the action to be initiated based on the location of the gesture. 3. The method of claim 1, wherein detecting the gesture comprises detecting a sequence of gestures, and detecting the sequence of gestures comprises determining that a first gesture and a second gesture occurred within a gesture timeout period. 4. The method of claim 3, wherein detecting the sequence of gestures comprises:
in response to detecting the first gesture, initiating a state of a state machine and initiating a gesture timeout counter; in response to detecting the second gesture within the gesture timeout period, progressing the state of the state machine and reinitiating the gesture timeout counter; after reinitiating the gesture timeout counter, detecting a gesture timeout based on the gesture timeout counter; and identifying the action based on the state of the state machine at the gesture timeout. 5. (canceled) 6. The method of claim 5, wherein the channel information comprises a time series of channel responses, and using the time-frequency filter comprises applying weighting coefficients to frequency components of the channel responses. 7. The method of claim 6, wherein the time-frequency filter comprises an adaptive time-frequency filter that tunes the weighting coefficients to detect time-frequency signatures of gestures. 8. The method of claim 7, wherein the adaptive time-frequency filter tunes the weighting coefficients to detect gestures that modulate an intensity of the channel responses at a frequency in a frequency range corresponding to human gestures. 9. A non-transitory computer-readable medium comprising instructions that are operable, when executed by data processing apparatus, to perform operations comprising:
obtaining channel information based on wireless signals transmitted through a space by one or more wireless communication devices; by operation of a gesture recognition engine, analyzing the channel information to detect a gesture in the space, wherein detecting the gesture comprises using a time-frequency filter to detect a time-frequency signature of the gesture; identifying an action to be initiated in response to the detected gesture; and sending, to a network-connected device associated with the space, an instruction to perform the action. 10. The computer-readable medium of claim 9, the operations comprising:
detecting a location of the gesture; and determining the action to be initiated based on the location of the gesture. 11. The computer-readable medium of claim 9, wherein detecting the gesture comprises detecting a sequence of gestures, and detecting the sequence of gestures comprises determining that a first gesture and a second gesture occurred within a gesture timeout period. 12. The computer-readable medium of claim 11, wherein detecting the sequence of gestures comprises:
in response to detecting the first gesture, initiating a state of a state machine and initiating a gesture timeout counter; in response to detecting the second gesture within the gesture timeout period, progressing the state of the state machine and reinitiating the gesture timeout counter; after reinitiating the gesture timeout counter, detecting a gesture timeout based on the gesture timeout counter; and identifying the action based on the state of the state machine at the gesture timeout. 13. (canceled) 14. The computer-readable medium of claim 13, wherein the channel information comprises a time series of channel responses, and using the time-frequency filter comprises applying weighting coefficients to frequency components of the channel responses. 15. The computer-readable medium of claim 14, wherein the time-frequency filter comprises an adaptive time-frequency filter that tunes the weighting coefficients to detect time-frequency signatures of gestures. 16. The computer-readable medium of claim 15, wherein the adaptive time-frequency filter tunes the weighting coefficients to detect gestures that modulate an intensity of the channel responses at a frequency in a frequency range corresponding to human gestures. 17. A system comprising:
wireless communication devices operable to transmit wireless signals through a space; a network-connected device associated with the space; and a computer device comprising one or more processors operable to perform operations comprising:
obtaining channel information based on wireless signals transmitted through the space by one or more of the wireless communication devices;
by operation of a gesture recognition engine, analyzing the channel information to detect a gesture in the space, wherein detecting the gesture comprises using a time-frequency filter to detect a time-frequency signature of the gesture;
identifying an action to be initiated in response to the detected gesture; and
sending, to a network-connected device associated with the space, an instruction to perform the action. 18. The system of claim 17, the operations comprising:
detecting a location of the gesture; and determining the action to be initiated based on the location of the gesture. 19. The system of claim 17, wherein detecting the gesture comprises detecting a sequence of gestures, and detecting the sequence of gestures comprises determining that a first gesture and a second gesture occurred within a gesture timeout period. 20. The system of claim 19, wherein detecting the sequence of gestures comprises:
in response to detecting the first gesture, initiating a state of a state machine and initiating a gesture timeout counter; in response to detecting the second gesture within the gesture timeout period, progressing the state of the state machine and reinitiating the gesture timeout counter; after reinitiating the gesture timeout counter, detecting a gesture timeout based on the gesture timeout counter; and identifying the action based on the state of the state machine at the gesture timeout. 21. (canceled) 22. The system of claim 21, wherein the channel information comprises a time series of channel responses, and using the time-frequency filter comprises applying weighting coefficients to frequency components of the channel responses. 23. The system of claim 22, wherein the time-frequency filter comprises an adaptive time-frequency filter that tunes the weighting coefficients to detect time-frequency signatures of gestures. 24. The system of claim 23, wherein the adaptive time-frequency filter tunes the weighting coefficients to detect gestures that modulate an intensity of the channel responses at a frequency in a frequency range corresponding to human gestures. | 2,600 |
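The sequence-detection logic recited in claims 4, 12, and 20 of the gesture patent above (initiate a state machine and a gesture timeout counter on the first gesture, progress the state and reinitiate the counter on each further gesture, and identify the action from the state held when the timeout fires) can be sketched as follows. Class and method names, the tick-driven counter, and the example action table are illustrative assumptions, not taken from the claims:

```python
class GestureSequenceDetector:
    """Tracks a sequence of gestures; the action is identified from the
    state of the state machine at the moment the gesture timeout occurs."""

    def __init__(self, actions_by_state, timeout_ticks):
        self.actions_by_state = actions_by_state  # e.g. {1: "toggle", 2: "dim"}
        self.timeout_ticks = timeout_ticks
        self.state = 0        # 0 = idle, no sequence in progress
        self.counter = None   # gesture timeout counter (None = not running)

    def on_gesture(self):
        # First gesture initiates the state machine and the timeout counter;
        # each further gesture within the timeout progresses the state and
        # reinitiates the counter.
        self.state += 1
        self.counter = self.timeout_ticks

    def on_tick(self):
        # Called periodically. Returns the identified action once the
        # gesture timeout is detected, otherwise None.
        if self.counter is None:
            return None
        self.counter -= 1
        if self.counter > 0:
            return None
        action = self.actions_by_state.get(self.state)
        self.state, self.counter = 0, None
        return action
```

A two-gesture sequence completed within the timeout thus resolves to the action mapped to state 2, and only after the counter expires, matching the claim language identifying the action "based on the state of the state machine at the gesture timeout."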
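Claims 6-8 above (and their mirrors in the medium and system claims) describe the time-frequency filter as applying weighting coefficients to frequency components of a time series of channel responses, tuned toward a frequency range corresponding to human gestures. Below is a minimal non-adaptive sketch of that idea using a plain DFT and a fixed 0/1 weighting mask; the 0.5-4 Hz default band, the function name, and the use of scalar magnitude samples are assumptions, and an adaptive filter per claim 7 would tune the weights rather than fix them:

```python
import math

def gesture_band_energy(samples, sample_rate_hz, band=(0.5, 4.0)):
    """Score how strongly a time series of channel-response magnitudes is
    modulated at frequencies typical of human gestures: compute frequency
    components with a DFT, apply weighting coefficients (a 0/1 mask over
    `band`), and sum the weighted spectral energy."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [s - mean for s in samples]  # drop the static (DC) path
    energy = 0.0
    for k in range(1, n // 2 + 1):
        freq = k * sample_rate_hz / n
        weight = 1.0 if band[0] <= freq <= band[1] else 0.0
        if weight == 0.0:
            continue
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        energy += weight * (re * re + im * im)
    return energy
```

A 2 Hz modulation (a plausible hand-wave rate) scores high, while an 8 Hz modulation outside the assumed gesture band scores near zero, which is the discrimination the weighting coefficients are meant to provide.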
11,086 | 11,086 | 16,910,154 | 2,644 | A mobile device includes a memory, a battery, and a processor. The processor is configured for synchronizing data with a host. The synchronization is based on an amount of battery charge remaining for the battery. | 1-4. (canceled) 5. A server that manages transactions between first and second end user devices, the server comprising:
a communication interface; a processor communicatively coupled to the communication interface; and a memory communicatively coupled to the processor, the memory containing instructions executable by the processor whereby the server is operable to: receive a first connection from a first device; authenticate the first end user device over the first connection; receive a first transaction from the first end user device in response to user directed input at the first end user device, wherein the first connection is maintained independently of the first transaction; generate a trigger for a second end user device based on the first transaction from the first end user device, wherein the trigger is pushed to the second end user device; after the generation of the trigger for the second end user device, receive a second connection from the second end user device while the first connection is maintained; authenticate the second end user device over the second connection, wherein the trigger notifies the second end user device of new data from the first transaction to be received by the second end user device from the server for display to a user. 6. The server of claim 5, wherein the server is further operable to:
send a second transaction to the second end user device using the second connection, wherein the second transaction contains the new data. 7. The server of claim 5, wherein the trigger is pushed over a connection different from the second connection. 8. The server of claim 5, wherein a third transaction is received by the server in response to user input at the second end user device. 9. The server of claim 5, wherein the server is in a publicly accessible network. 10. The server of claim 5, wherein the second end user device is in a wireless network. 11. The server of claim 5, wherein the server is further operable to:
receive configuration information from the first end user device, wherein the configuration information comprises login data associated with a user of the first end user device. | A mobile device includes a memory, a battery, and a processor. The processor is configured for synchronizing data with a host. The synchronization is based on an amount of battery charge remaining for the battery. 1-4. (canceled) 5. A server that manages transactions between first and second end user devices, the server comprising:
a communication interface; a processor communicatively coupled to the communication interface; and a memory communicatively coupled to the processor, the memory containing instructions executable by the processor whereby the server is operable to: receive a first connection from a first device; authenticate the first end user device over the first connection; receive a first transaction from the first end user device in response to user directed input at the first end user device, wherein the first connection is maintained independently of the first transaction; generate a trigger for a second end user device based on the first transaction from the first end user device, wherein the trigger is pushed to the second end user device; after the generation of the trigger for the second end user device, receive a second connection from the second end user device while the first connection is maintained; authenticate the second end user device over the second connection, wherein the trigger notifies the second end user device of new data from the first transaction to be received by the second end user device from the server for display to a user. 6. The server of claim 5, wherein the server is further operable to:
send a second transaction to the second end user device using the second connection, wherein the second transaction contains the new data. 7. The server of claim 5, wherein the trigger is pushed over a connection different from the second connection. 8. The server of claim 5, wherein a third transaction is received by the server in response to user input at the second end user device. 9. The server of claim 5, wherein the server is in a publicly accessible network. 10. The server of claim 5, wherein the second end user device is in a wireless network. 11. The server of claim 5, wherein the server is further operable to:
receive configuration information from the first end user device, wherein the configuration information comprises login data associated with a user of the first end user device. | 2,600 |
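The transaction flow in claim 5 of the server patent above (authenticate the first device over its connection, accept a transaction, push a trigger that merely notifies the second device of new data waiting on the server, then accept and authenticate the second connection while the first is maintained) can be modeled in a few lines. All names and the in-memory queues are illustrative assumptions; a real implementation would sit behind sockets and a push-notification channel:

```python
class SyncServer:
    """In-memory sketch of the claim-5 flow: a transaction from the first
    device produces a trigger pushed to the second device, which then
    opens its own connection and fetches the new data from the server."""

    def __init__(self):
        self.connections = set()
        self.pending = {}          # device_id -> new data awaiting pickup
        self.pushed_triggers = []  # triggers "pushed" to devices

    def connect(self, device_id, credentials):
        # Stand-in for authenticating the device over its connection.
        if credentials != "ok":
            raise PermissionError(device_id)
        self.connections.add(device_id)

    def submit_transaction(self, from_device, to_device, data):
        if from_device not in self.connections:
            raise RuntimeError("not connected")
        # Generate a trigger for the second device; the trigger only
        # notifies it that new data is waiting, over a separate channel.
        self.pending[to_device] = data
        self.pushed_triggers.append(to_device)

    def fetch_new_data(self, device_id):
        if device_id not in self.connections:
            raise RuntimeError("not connected")
        return self.pending.pop(device_id)
```

Note that the first connection is never closed by the transaction: the second device connects and fetches while the first device's session remains in `connections`, mirroring "while the first connection is maintained."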
11,087 | 11,087 | 16,512,173 | 2,645 | A method for reporting performance of a terminal in a mobile communication system includes the steps of receiving a request for performance reporting from a base station, determining an indicator of whether a delay time related operation that the terminal supports is in correspondence with the request which corresponds to a pre-set condition, and transmitting a message including the determined indicator to the base station. The size of the performance reporting message may be minimized in reporting the performance of the terminal. | 1. A method by a terminal in a communication system, the method comprising:
receiving, from a base station, a first message related to a capability of the terminal; generating a second message related to capability information of the terminal, wherein the second message includes first information on a band list and second information indicating whether a multiple timing advance is supported; and transmitting, to the base station, the second message, wherein, in case that a band combination related to the band list is comprised of more than one band entry, the second information indicates whether different timing advances on different band entries are supported. 2. The method of claim 1, wherein, in case that the band combination related to the band list is comprised of one band entry, the second information indicates whether different timing advances across cells related to the band list are supported. 3. The method of claim 1, wherein the second information is indicated by an 1 bit indicator,
wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an inter-band band combination or an intra-band non-contiguous band combination, and wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an intra-band contiguous band combination. 4. The method of claim 1, further comprising:
identifying whether the first message includes a specific radio access technology (RAT) type, wherein the first information and the second information are generated for the second message. 5. The method of claim 1, wherein the first message includes terminal capability enquiry message, and
wherein the second message includes terminal capability information message. 6. A terminal in a communication system, the terminal comprising:
a transceiver; and a controller coupled with the transceiver and configured to:
receive, from a base station, a first message related to a capability of the terminal,
generate a second message related to capability information of the terminal, wherein the second message includes first information on a band list and second information indicating whether a multiple timing advance is supported, and
transmit, to the base station, the second message,
wherein, in case that a band combination related to the band list is comprised of more than one band entry, the second information indicates whether different timing advances on different band entries are supported. 7. The terminal of claim 6, wherein, in case that the band combination related to the band list is comprised of one band entry, the second information indicates whether different timing advances across cells related to the band list are supported. 8. The terminal of claim 6, wherein the second information is indicated by an 1 bit indicator,
wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an inter-band band combination or an intra-band non-contiguous band combination, and wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an intra-band contiguous band combination. 9. The terminal of claim 6, wherein the controller is further configured to identify whether the first message includes a specific radio access technology (RAT) type,
wherein the first information and the second information are generated for the second message. 10. The terminal of claim 6, wherein the first message includes terminal capability enquiry message, and
wherein the second message includes terminal capability information message. 11. A method by a base station in a communication system, the method comprising:
transmitting, to a terminal, a first message related to a capability of the terminal; and receiving, from the terminal, a second message related to capability information of the terminal, wherein the second message includes first information on a band list and second information indicating whether a multiple timing advance is supported, wherein, in case that a band combination related to the band list is comprised of more than one band entry, the second information indicates whether different timing advances on different band entries are supported. 12. The method of claim 11, wherein, in case that the band combination related to the band list is comprised of one band entry, the second information indicates whether different timing advances across cells related to the band list are supported. 13. The method of claim 11, wherein the second information is indicated by an 1 bit indicator,
wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an inter-band band combination or an intra-band non-contiguous band combination, and wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an intra-band contiguous band combination. 14. The method of claim 11, wherein the first information and the second information are generated for the second message, in case that a specific radio access technology (RAT) type information is included in the first message. 15. The method of claim 11, wherein the first message includes terminal capability enquiry message, and
wherein the second message includes terminal capability information message. 16. A base station in a communication system, the base station comprising:
a transceiver; and a controller coupled with the transceiver and configured to:
transmit, to a terminal, a first message related to a capability of the terminal, and
receive, from the terminal, a second message related to capability information of the terminal, wherein the second message includes first information on a band list and second information indicating whether a multiple timing advance is supported,
wherein, in case that a band combination related to the band list is comprised of more than one band entry, the second information indicates whether different timing advances on different band entries are supported. 17. The base station of claim 16, wherein, in case that the band combination related to the band list is comprised of one band entry, the second information indicates whether different timing advances across cells related to the band list are supported. 18. The base station of claim 16, wherein the second information is indicated by an 1 bit indicator,
wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an inter-band band combination or an intra-band non-contiguous band combination, and wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an intra-band contiguous band combination. 19. The base station of claim 16, wherein the first information and the second information are generated for the second message, in case that a specific radio access technology (RAT) type information is included in the first message. 20. The base station of claim 16, wherein the first message includes terminal capability enquiry message, and
wherein the second message includes terminal capability information message. | A method for reporting performance of a terminal in a mobile communication system includes the steps of receiving a request for performance reporting from a base station, determining an indicator of whether a delay time related operation that the terminal supports is in correspondence with the request which corresponds to a pre-set condition, and transmitting a message including the determined indicator to the base station. The size of the performance reporting message may be minimized in reporting the performance of the terminal. 1. A method by a terminal in a communication system, the method comprising:
receiving, from a base station, a first message related to a capability of the terminal; generating a second message related to capability information of the terminal, wherein the second message includes first information on a band list and second information indicating whether a multiple timing advance is supported; and transmitting, to the base station, the second message, wherein, in case that a band combination related to the band list is comprised of more than one band entry, the second information indicates whether different timing advances on different band entries are supported. 2. The method of claim 1, wherein, in case that the band combination related to the band list is comprised of one band entry, the second information indicates whether different timing advances across cells related to the band list are supported. 3. The method of claim 1, wherein the second information is indicated by an 1 bit indicator,
wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an inter-band band combination or an intra-band non-contiguous band combination, and wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an intra-band contiguous band combination. 4. The method of claim 1, further comprising:
identifying whether the first message includes a specific radio access technology (RAT) type, wherein the first information and the second information are generated for the second message. 5. The method of claim 1, wherein the first message includes terminal capability enquiry message, and
wherein the second message includes terminal capability information message. 6. A terminal in a communication system, the terminal comprising:
a transceiver; and a controller coupled with the transceiver and configured to:
receive, from a base station, a first message related to a capability of the terminal,
generate a second message related to capability information of the terminal, wherein the second message includes first information on a band list and second information indicating whether a multiple timing advance is supported, and
transmit, to the base station, the second message,
wherein, in case that a band combination related to the band list is comprised of more than one band entry, the second information indicates whether different timing advances on different band entries are supported. 7. The terminal of claim 6, wherein, in case that the band combination related to the band list is comprised of one band entry, the second information indicates whether different timing advances across cells related to the band list are supported. 8. The terminal of claim 6, wherein the second information is indicated by an 1 bit indicator,
wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an inter-band band combination or an intra-band non-contiguous band combination, and wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an intra-band contiguous band combination. 9. The terminal of claim 6, wherein the controller is further configured to identify whether the first message includes a specific radio access technology (RAT) type,
wherein the first information and the second information are generated for the second message. 10. The terminal of claim 6, wherein the first message includes terminal capability enquiry message, and
wherein the second message includes terminal capability information message. 11. A method by a base station in a communication system, the method comprising:
transmitting, to a terminal, a first message related to a capability of the terminal; and receiving, from the terminal, a second message related to capability information of the terminal, wherein the second message includes first information on a band list and second information indicating whether a multiple timing advance is supported, wherein, in case that a band combination related to the band list is comprised of more than one band entry, the second information indicates whether different timing advances on different band entries are supported. 12. The method of claim 11, wherein, in case that the band combination related to the band list is comprised of one band entry, the second information indicates whether different timing advances across cells related to the band list are supported. 13. The method of claim 11, wherein the second information is indicated by an 1 bit indicator,
wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an inter-band band combination or an intra-band non-contiguous band combination, and wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an intra-band contiguous band combination. 14. The method of claim 11, wherein the first information and the second information are generated for the second message, in case that a specific radio access technology (RAT) type information is included in the first message. 15. The method of claim 11, wherein the first message includes terminal capability enquiry message, and
wherein the second message includes terminal capability information message. 16. A base station in a communication system, the base station comprising:
a transceiver; and a controller coupled with the transceiver and configured to:
transmit, to a terminal, a first message related to a capability of the terminal, and
receive, from the terminal, a second message related to capability information of the terminal, wherein the second message includes first information on a band list and second information indicating whether a multiple timing advance is supported,
wherein, in case that a band combination related to the band list is comprised of more than one band entry, the second information indicates whether different timing advances on different band entries are supported. 17. The base station of claim 16, wherein, in case that the band combination related to the band list is comprised of one band entry, the second information indicates whether different timing advances across cells related to the band list are supported. 18. The base station of claim 16, wherein the second information is indicated by an 1 bit indicator,
wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an inter-band band combination or an intra-band non-contiguous band combination, and wherein, in case that the band combination related to the band list is comprised of more than one band entry, the band combination relates to an intra-band contiguous band combination. 19. The base station of claim 16, wherein the first information and the second information are generated for the second message, in case that a specific radio access technology (RAT) type information is included in the first message. 20. The base station of claim 16, wherein the first message includes terminal capability enquiry message, and
wherein the second message includes terminal capability information message. | 2,600 |
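The 1-bit indicator recited across claims 3, 8, 13, and 18 of the capability-reporting patent above changes meaning with the shape of the band combination: for more than one band entry it reports support for different timing advances on different band entries, while for a single-entry combination it reports support for different timing advances across cells. A sketch of that conditional encoding follows; the function names and dict container are assumptions, and a real terminal capability information message would be ASN.1-encoded rather than a Python dict:

```python
def multiple_ta_indicator(band_entries, per_entry_ta_supported, cross_cell_ta_supported):
    """Return the 1-bit multiple-timing-advance indicator.

    More than one band entry -> bit reports different timing advances on
    different band entries; single entry -> bit reports different timing
    advances across cells related to the band list.
    """
    if len(band_entries) > 1:
        return 1 if per_entry_ta_supported else 0
    return 1 if cross_cell_ta_supported else 0

def build_capability_message(band_list, per_entry_ta, cross_cell_ta):
    # Illustrative container for the second message of claim 1: first
    # information (the band list) plus the second information (the bit).
    return {
        "bandList": band_list,
        "multipleTimingAdvance": multiple_ta_indicator(
            band_list, per_entry_ta, cross_cell_ta
        ),
    }
```

So the same terminal capability (per-entry timing advances supported, cross-cell not) yields a different bit depending on whether the combination spans one band entry or several.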
11,088 | 11,088 | 16,219,837 | 2,657 | A configuration is implemented via a processor to receive a request for spoken language interpretation of a user query from a first spoken language to a second spoken language. The first spoken language is spoken by a user situated at an audio-based device that is remotely situated from the customer care platform. The user query is sent from the audio-based device by the user to the customer care platform. The configuration performs, at a language interpretation platform, a first spoken language interpretation of the user query from the first spoken language to the second spoken language. Further, the configuration transmits, from the language interpretation platform to the customer care platform, the first spoken language interpretation so that a customer care representative speaking the second spoken language understands the first spoken language being spoken by the user. | 1. A computer program product comprising a computer readable storage device having a computer readable program stored thereon, wherein the computer readable program when executed on a computer causes the computer to:
receive, with a processor from a customer care platform, a request for spoken language interpretation of a user query from a first spoken language to a second spoken language, the first spoken language being spoken by a user situated at an audio-based device that is remotely situated from the customer care platform, the user query being sent from the audio-based device by the user to the customer care platform; perform, at a language interpretation platform, a first spoken language interpretation of the user query from the first spoken language to the second spoken language; transmit, from the language interpretation platform to the customer care platform, the first spoken language interpretation so that a customer care representative speaking the second spoken language understands the first spoken language being spoken by the user; receive, at the language interpretation platform from the customer care platform, a customer care response in the second spoken language; perform, at the language interpretation platform via a plurality of language interpretation resources, a second spoken language interpretation of the customer care response from the second spoken language to the first spoken language; generate, with the processor, audio data corresponding to the second spoken language interpretation of the customer care response, the audio data representing a singular voice for the plurality of language interpretation resources; and transmit, with the processor, the audio data to the customer care platform so that the customer care platform sends the audio data to the audio-based device for consumption at the audio-based device without rendering of audio data in the first spoken language. 2. The computer program product of claim 1, wherein the plurality of language interpretation resources comprise a machine interpreter and a language interpreter. 3. 
The computer program product of claim 1, wherein the plurality of language interpretation resources comprise a first machine interpreter and a second machine interpreter, the second machine interpreter being trained according to a different skill set than the first machine interpreter. 4. The computer program product of claim 1, wherein the plurality of language interpretation resources comprise a first human interpreter and a second human interpreter, the second human interpreter being trained according to a different skill set than the first human interpreter. 5. The computer program product of claim 1, wherein the computer is further caused to monitor the second spoken language interpretation for compliance with one or more quality control criteria. 6. The computer program product of claim 5, wherein the one or more quality control criteria comprise speed and accuracy. 7. The computer program product of claim 5, wherein the computer is further caused to transition the second spoken language interpretation from a first language interpretation resource in the plurality of language interpretation resources to a second language interpretation resource in the plurality of language interpretation resources during a presentation to the user of the second language interpretation according to the singular voice. 8. The computer program product of claim 1, wherein the audio-based device is a telephone. 9. The computer program product of claim 1, wherein the audio-based device is a microphone. 10. The computer program product of claim 1, wherein the audio-based device is a computing device. 11. A method comprising:
receiving, with a processor from a customer care platform, a request for spoken language interpretation of a user query from a first spoken language to a second spoken language, the first spoken language being spoken by a user situated at an audio-based device that is remotely situated from the customer care platform, the user query being sent from the audio-based device by the user to the customer care platform; performing, at a language interpretation platform, a first spoken language interpretation of the user query from the first spoken language to the second spoken language; transmitting, from the language interpretation platform to the customer care platform, the first spoken language interpretation so that a customer care representative speaking the second spoken language understands the first spoken language being spoken by the user; receiving, at the language interpretation platform from the customer care platform, a customer care response in the second spoken language; performing, at the language interpretation platform via a plurality of language interpretation resources, a second spoken language interpretation of the customer care response from the second spoken language to the first spoken language; generating, with the processor, audio data corresponding to the second spoken language interpretation of the customer care response, the audio data representing a singular voice for the plurality of language interpretation resources; and transmitting, with the processor, the audio data to the customer care platform so that the customer care platform sends the audio data to the audio-based device for consumption at the audio-based device without rendering of audio data in the first spoken language. 12. The method of claim 11, wherein the plurality of language interpretation resources comprise a machine interpreter and a language interpreter. 13. 
The method of claim 11, wherein the plurality of language interpretation resources comprise a first machine interpreter and a second machine interpreter, the second machine interpreter being trained according to a different skill set than the first machine interpreter. 14. The method of claim 11, wherein the plurality of language interpretation resources comprise a first human interpreter and a second human interpreter, the second human interpreter being trained according to a different skill set than the first human interpreter. 15. The method of claim 11, further comprising monitoring the second spoken language interpretation for compliance with one or more quality control criteria. 16. The method of claim 15, wherein the one or more quality control criteria comprise speed and accuracy. 17. The method of claim 15, further comprising transitioning the second spoken language interpretation from a first language interpretation resource in the plurality of language interpretation resources to a second language interpretation resource in the plurality of language interpretation resources during a presentation to the user of the second language interpretation according to the singular voice. 18. The method of claim 11, wherein the audio-based device is a telephone. 19. The method of claim 11, wherein the audio-based device is a microphone. 20. The method of claim 11, wherein the audio-based device is a computing device. | A configuration is implemented via a processor to receive a request for spoken language interpretation of a user query from a first spoken language to a second spoken language. The first spoken language is spoken by a user situated at an audio-based device that is remotely situated from the customer care platform. The user query is sent from the audio-based device by the user to the customer care platform. 
The configuration performs, at a language interpretation platform, a first spoken language interpretation of the user query from the first spoken language to the second spoken language. Further, the configuration transmits, from the language interpretation platform to the customer care platform, the first spoken language interpretation so that a customer care representative speaking the second spoken language understands the first spoken language being spoken by the user. 1. A computer program product comprising a computer readable storage device having a computer readable program stored thereon, wherein the computer readable program when executed on a computer causes the computer to:
receive, with a processor from a customer care platform, a request for spoken language interpretation of a user query from a first spoken language to a second spoken language, the first spoken language being spoken by a user situated at an audio-based device that is remotely situated from the customer care platform, the user query being sent from the audio-based device by the user to the customer care platform; perform, at a language interpretation platform, a first spoken language interpretation of the user query from the first spoken language to the second spoken language; transmit, from the language interpretation platform to the customer care platform, the first spoken language interpretation so that a customer care representative speaking the second spoken language understands the first spoken language being spoken by the user; receive, at the language interpretation platform from the customer care platform, a customer care response in the second spoken language; perform, at the language interpretation platform via a plurality of language interpretation resources, a second spoken language interpretation of the customer care response from the second spoken language to the first spoken language; generate, with the processor, audio data corresponding to the second spoken language interpretation of the customer care response, the audio data representing a singular voice for the plurality of language interpretation resources; and transmit, with the processor, the audio data to the customer care platform so that the customer care platform sends the audio data to the audio-based device for consumption at the audio-based device without rendering of audio data in the first spoken language. 2. The computer program product of claim 1, wherein the plurality of language interpretation resources comprise a machine interpreter and a language interpreter. 3. 
The computer program product of claim 1, wherein the plurality of language interpretation resources comprise a first machine interpreter and a second machine interpreter, the second machine interpreter being trained according to a different skill set than the first machine interpreter. 4. The computer program product of claim 1, wherein the plurality of language interpretation resources comprise a first human interpreter and a second human interpreter, the second human interpreter being trained according to a different skill set than the first human interpreter. 5. The computer program product of claim 1, wherein the computer is further caused to monitor the second spoken language interpretation for compliance with one or more quality control criteria. 6. The computer program product of claim 5, wherein the one or more quality control criteria comprise speed and accuracy. 7. The computer program product of claim 5, wherein the computer is further caused to transition the second spoken language interpretation from a first language interpretation resource in the plurality of language interpretation resources to a second language interpretation resource in the plurality of language interpretation resources during a presentation to the user of the second language interpretation according to the singular voice. 8. The computer program product of claim 1, wherein the audio-based device is a telephone. 9. The computer program product of claim 1, wherein the audio-based device is a microphone. 10. The computer program product of claim 1, wherein the audio-based device is a computing device. 11. A method comprising:
receiving, with a processor from a customer care platform, a request for spoken language interpretation of a user query from a first spoken language to a second spoken language, the first spoken language being spoken by a user situated at an audio-based device that is remotely situated from the customer care platform, the user query being sent from the audio-based device by the user to the customer care platform; performing, at a language interpretation platform, a first spoken language interpretation of the user query from the first spoken language to the second spoken language; transmitting, from the language interpretation platform to the customer care platform, the first spoken language interpretation so that a customer care representative speaking the second spoken language understands the first spoken language being spoken by the user; receiving, at the language interpretation platform from the customer care platform, a customer care response in the second spoken language; performing, at the language interpretation platform via a plurality of language interpretation resources, a second spoken language interpretation of the customer care response from the second spoken language to the first spoken language; generating, with the processor, audio data corresponding to the second spoken language interpretation of the customer care response, the audio data representing a singular voice for the plurality of language interpretation resources; and transmitting, with the processor, the audio data to the customer care platform so that the customer care platform sends the audio data to the audio-based device for consumption at the audio-based device without rendering of audio data in the first spoken language. 12. The method of claim 11, wherein the plurality of language interpretation resources comprise a machine interpreter and a language interpreter. 13. 
The method of claim 11, wherein the plurality of language interpretation resources comprise a first machine interpreter and a second machine interpreter, the second machine interpreter being trained according to a different skill set than the first machine interpreter. 14. The method of claim 11, wherein the plurality of language interpretation resources comprise a first human interpreter and a second human interpreter, the second human interpreter being trained according to a different skill set than the first human interpreter. 15. The method of claim 11, further comprising monitoring the second spoken language interpretation for compliance with one or more quality control criteria. 16. The method of claim 15, wherein the one or more quality control criteria comprise speed and accuracy. 17. The method of claim 15, further comprising transitioning the second spoken language interpretation from a first language interpretation resource in the plurality of language interpretation resources to a second language interpretation resource in the plurality of language interpretation resources during a presentation to the user of the second language interpretation according to the singular voice. 18. The method of claim 11, wherein the audio-based device is a telephone. 19. The method of claim 11, wherein the audio-based device is a microphone. 20. The method of claim 11, wherein the audio-based device is a computing device. | 2,600 |
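The record above recites a two-leg interpretation flow: the user query is interpreted into the representative's language, the care response is interpreted back, possibly split across several interpretation resources, and the result is rendered in one "singular voice" so the listener cannot tell when the platform switched resources. A minimal Python sketch of that idea; all class and function names here are illustrative, not from the patent:

```python
class MachineInterpreter:
    """Toy interpretation resource; a human resource would expose the same API."""
    def __init__(self, name):
        self.name = name

    def interpret(self, text, src, dst):
        # Toy "interpretation": tag the text with the target language code.
        return f"[{dst}] {text}"


class SingularVoice:
    """Renders segments from any mix of resources in one voice, masking
    transitions between interpretation resources (claims 7 and 17)."""
    def synthesize(self, segments):
        return " ".join(segments)


def interpret_response(response_segments, resources, voice):
    # Round-robin the response segments across the available resources;
    # the singular voice hides which resource handled which segment.
    out = [resources[i % len(resources)].interpret(seg, src="L2", dst="L1")
           for i, seg in enumerate(response_segments)]
    return voice.synthesize(out)
```

For example, two segments interpreted by two different resources still come back as one continuous utterance in the user's language.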
11,089 | 11,089 | 16,281,867 | 2,636 | An optical receiver is provided for a diverged-beam, free space optical communications system. The optical receiver includes a demultiplexer and a detector array. The demultiplexer includes a diffractive optic configured to receive an optical beam propagating in free space. The optical beam includes a plurality of optical carrier signals of respective wavelengths for a plurality of communication channels, and the diffractive optic is configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals. The detector array includes a plurality of optical detectors configured to convert the plurality of optical carrier signals into a respective plurality of electrical signals for the plurality of communication channels. The plurality of optical detectors includes at least twice as many optical detectors as optical carrier signals in the plurality of optical carrier signals. | 1. A diverged-beam, free space optical (DBFSO) communications system comprising:
an optical transmitter configured to produce a plurality of optical carrier signals of respective wavelengths for a plurality of communication channels, combine the plurality of optical carrier signals into an optical beam, and transmit the optical beam for propagation in free space; and an optical receiver configured to receive the optical beam propagating in free space, spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and convert the plurality of optical carrier signals into a respective plurality of electrical signals for the plurality of communication channels, the optical receiver including a plurality of optical detectors configured to convert the plurality of optical carrier signals into the respective plurality of electrical signals, the plurality of optical detectors including at least twice as many optical detectors as optical carrier signals in the plurality of optical carrier signals. 2. The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer with a diffraction grating configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals. 3. The DBFSO system of claim 2, wherein the optical receiver includes a demultiplexer with a holographic volume phase grating configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals. 4. The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer with first and second diffractive optics configured to spatially separate the optical beam along two axes. 5. The DBFSO system of claim 1, wherein each of the optical detectors of the plurality of optical detectors is at most 9 microns in size. 6. 
The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer including a diffractive optic configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and an additional optic configured to split the optical carrier signals by polarization. 7. The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer including a diffractive optic configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and an additional optic configured to split the optical carrier signals by orbital angular momentum. 8. The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer including a diffractive optic configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and further includes an array of optics between the diffractive optic and detector array, the array of optics configured to spatially spread the plurality of optical carrier signals. 9. The DBFSO system of claim 8, wherein the array of optics is an array of mirrors. 10. The DBFSO system of claim 1, wherein the plurality of optical detectors includes multiple detectors for each communication channel of the plurality of communication channels,
wherein the optical receiver further includes solid-state switching devices coupled to respective ones of the plurality of optical detectors, and by which the respective ones of the plurality of optical detectors are individually and separately selectable for connection to processing circuitry, and wherein the multiple detectors for each communication channel are individually and separately selectable so that in some instances at least some but not all of the multiple detectors are selected, and others of the multiple detectors are not utilized. 11. The DBFSO system of claim 10, wherein the others of the multiple detectors that are not utilized are connected to a low voltage or ground, or to a high voltage, by respective ones of the solid-state switching devices coupled to the others of the multiple detectors. 12. An optical receiver for a diverged-beam, free space optical communications system, the optical receiver comprising:
a demultiplexer including a diffractive optic configured to receive an optical beam propagating in free space, the optical beam including a plurality of optical carrier signals of respective wavelengths for a plurality of communication channels, the diffractive optic configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals; and a detector array including a plurality of optical detectors configured to convert the plurality of optical carrier signals into a respective plurality of electrical signals for the plurality of communication channels, the plurality of optical detectors including at least twice as many optical detectors as optical carrier signals in the plurality of optical carrier signals. 13. The optical receiver of claim 12, wherein the diffractive optic is a diffraction grating. 14. The optical receiver of claim 13, wherein the diffraction grating is a holographic volume phase grating. 15. The optical receiver of claim 12, wherein the demultiplexer includes first and second diffractive optics configured to spatially separate the optical beam along two axes. 16. The optical receiver of claim 12, wherein each of the optical detectors of the plurality of optical detectors is at most 9 microns in size. 17. The optical receiver of claim 12 further comprising an additional optic configured to split the optical carrier signals by polarization. 18. The optical receiver of claim 12 further comprising an additional optic configured to split the optical carrier signals by orbital angular momentum. 19. The optical receiver of claim 12 further comprising an array of optics between the diffractive optic and detector array, and configured to spatially spread the plurality of optical carrier signals. 20. The optical receiver of claim 19, wherein the array of optics is an array of mirrors. 21. A diverged-beam, free space optical (DBFSO) communications system comprising:
an optical transmitter configured to produce a plurality of optical carrier signals of respective wavelengths for a plurality of communication channels, combine the plurality of optical carrier signals into an optical beam, and transmit the optical beam for propagation in free space; and an optical receiver configured to receive the optical beam propagating in free space, spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and convert the plurality of optical carrier signals into a respective plurality of electrical signals for the plurality of communication channels, the optical receiver including a plurality of optical detectors configured to convert the plurality of optical carrier signals into the respective plurality of electrical signals, the plurality of optical detectors having an acceptance angle greater than 0.1 degree for at least some of the communication channels. 22. The DBFSO system of claim 21, wherein the optical receiver includes a demultiplexer with a diffraction grating or a holographic volume phase grating configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals. | An optical receiver is provided for a diverged-beam, free space optical communications system. The optical receiver includes a demultiplexer and a detector array. The demultiplexer includes a diffractive optic configured to receive an optical beam propagating in free space. The optical beam includes a plurality of optical carrier signals of respective wavelengths for a plurality of communication channels, and the diffractive optic is configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals. The detector array includes a plurality of optical detectors configured to convert the plurality of optical carrier signals into a respective plurality of electrical signals for the plurality of communication channels. 
The plurality of optical detectors includes at least twice as many optical detectors as optical carrier signals in the plurality of optical carrier signals. 1. A diverged-beam, free space optical (DBFSO) communications system comprising:
an optical transmitter configured to produce a plurality of optical carrier signals of respective wavelengths for a plurality of communication channels, combine the plurality of optical carrier signals into an optical beam, and transmit the optical beam for propagation in free space; and an optical receiver configured to receive the optical beam propagating in free space, spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and convert the plurality of optical carrier signals into a respective plurality of electrical signals for the plurality of communication channels, the optical receiver including a plurality of optical detectors configured to convert the plurality of optical carrier signals into the respective plurality of electrical signals, the plurality of optical detectors including at least twice as many optical detectors as optical carrier signals in the plurality of optical carrier signals. 2. The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer with a diffraction grating configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals. 3. The DBFSO system of claim 2, wherein the optical receiver includes a demultiplexer with a holographic volume phase grating configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals. 4. The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer with first and second diffractive optics configured to spatially separate the optical beam along two axes. 5. The DBFSO system of claim 1, wherein each of the optical detectors of the plurality of optical detectors is at most 9 microns in size. 6. 
The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer including a diffractive optic configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and an additional optic configured to split the optical carrier signals by polarization. 7. The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer including a diffractive optic configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and an additional optic configured to split the optical carrier signals by orbital angular momentum. 8. The DBFSO system of claim 1, wherein the optical receiver includes a demultiplexer including a diffractive optic configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and further includes an array of optics between the diffractive optic and detector array, the array of optics configured to spatially spread the plurality of optical carrier signals. 9. The DBFSO system of claim 8, wherein the array of optics is an array of mirrors. 10. The DBFSO system of claim 1, wherein the plurality of optical detectors includes multiple detectors for each communication channel of the plurality of communication channels,
wherein the optical receiver further includes solid-state switching devices coupled to respective ones of the plurality of optical detectors, and by which the respective ones of the plurality of optical detectors are individually and separately selectable for connection to processing circuitry, and wherein the multiple detectors for each communication channel are individually and separately selectable so that in some instances at least some but not all of the multiple detectors are selected, and others of the multiple detectors are not utilized. 11. The DBFSO system of claim 10, wherein the others of the multiple detectors that are not utilized are connected to a low voltage or ground, or to a high voltage, by respective ones of the solid-state switching devices coupled to the others of the multiple detectors. 12. An optical receiver for a diverged-beam, free space optical communications system, the optical receiver comprising:
a demultiplexer including a diffractive optic configured to receive an optical beam propagating in free space, the optical beam including a plurality of optical carrier signals of respective wavelengths for a plurality of communication channels, the diffractive optic configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals; and a detector array including a plurality of optical detectors configured to convert the plurality of optical carrier signals into a respective plurality of electrical signals for the plurality of communication channels, the plurality of optical detectors including at least twice as many optical detectors as optical carrier signals in the plurality of optical carrier signals. 13. The optical receiver of claim 12, wherein the diffractive optic is a diffraction grating. 14. The optical receiver of claim 13, wherein the diffraction grating is a holographic volume phase grating. 15. The optical receiver of claim 12, wherein the demultiplexer includes first and second diffractive optics configured to spatially separate the optical beam along two axes. 16. The optical receiver of claim 12, wherein each of the optical detectors of the plurality of optical detectors is at most 9 microns in size. 17. The optical receiver of claim 12 further comprising an additional optic configured to split the optical carrier signals by polarization. 18. The optical receiver of claim 12 further comprising an additional optic configured to split the optical carrier signals by orbital angular momentum. 19. The optical receiver of claim 12 further comprising an array of optics between the diffractive optic and detector array, and configured to spatially spread the plurality of optical carrier signals. 20. The optical receiver of claim 19, wherein the array of optics is an array of mirrors. 21. A diverged-beam, free space optical (DBFSO) communications system comprising:
an optical transmitter configured to produce a plurality of optical carrier signals of respective wavelengths for a plurality of communication channels, combine the plurality of optical carrier signals into an optical beam, and transmit the optical beam for propagation in free space; and an optical receiver configured to receive the optical beam propagating in free space, spatially separate the optical beam by wavelength into the plurality of optical carrier signals, and convert the plurality of optical carrier signals into a respective plurality of electrical signals for the plurality of communication channels, the optical receiver including a plurality of optical detectors configured to convert the plurality of optical carrier signals into the respective plurality of electrical signals, the plurality of optical detectors having an acceptance angle greater than 0.1 degree for at least some of the communication channels. 22. The DBFSO system of claim 21, wherein the optical receiver includes a demultiplexer with a diffraction grating or a holographic volume phase grating configured to spatially separate the optical beam by wavelength into the plurality of optical carrier signals. | 2,600 |
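Claim 1 of this record requires at least twice as many optical detectors as carrier signals, and claims 10 and 11 describe individually selecting the illuminated detectors of each channel while tying the unused ones to a fixed potential. A toy Python sketch of that allocation and selection logic; the function names and data layout are hypothetical, not from the patent:

```python
def allocate_detectors(num_channels, num_detectors):
    """Partition a detector array across channels, honoring the claim-1
    constraint of at least twice as many detectors as carrier signals."""
    if num_detectors < 2 * num_channels:
        raise ValueError("need at least 2 detectors per optical carrier")
    per_channel = num_detectors // num_channels
    return {ch: list(range(ch * per_channel, (ch + 1) * per_channel))
            for ch in range(num_channels)}


def select_detectors(allocation, illuminated):
    """Per claims 10-11: connect only the illuminated detectors of each
    channel to the processing circuitry; the rest would be switched to
    ground (or a high voltage) by their solid-state switching devices."""
    selected, grounded = {}, {}
    for ch, dets in allocation.items():
        selected[ch] = [d for d in dets if d in illuminated]
        grounded[ch] = [d for d in dets if d not in illuminated]
    return selected, grounded
```

Oversizing the array this way gives each channel spare detectors, so the receiver tolerates beam wander by re-selecting whichever detectors the spot currently lands on.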
11,090 | 11,090 | 16,287,861 | 2,654 | An exemplary audio enhancement system substantially eliminates latency by returning audio input signals in amplified form directly in the analog domain to the source, thereby reducing signal degradation and removing redundancy from digital and analog audio transmission and processing architectures. | 1. An audio enhancement circuit for enhancing an audio signal from a source, comprising:
a decoupling circuit element connected in series to a non-inverting input of a first amplifier powered by a DC voltage; a variable resistor configured to permit the audio signal to travel from the first amplifier to a second amplifier, wherein the second amplifier is coupled to a first feedback loop at its non-inverting input. 2. The audio enhancement circuit of claim 1, further comprising a gain trimmer. 3. The audio enhancement circuit of claim 1, further comprising a third amplifier coupled to the variable resistor via its non-inverting input. 4. The audio enhancement circuit of claim 3, further comprising a fourth amplifier coupled to the variable resistor and through which the audio signal travels from the first amplifier. 5. The audio enhancement circuit of claim 4, further comprising a fifth amplifier coupled to the variable resistor via the fourth amplifier. 6. The audio enhancement circuit of claim 3, wherein the feedback loop comprises the third amplifier. 7. The audio enhancement circuit of claim 5, further comprising a second feedback loop coupled to the fourth amplifier. 8. The audio enhancement circuit of claim 1, further comprising a third amplifier, a fourth amplifier, and a fifth amplifier through which the audio signal travels from the first amplifier. 9. A device for enhancing audio, comprising the audio enhancement circuit of claim 1. 10. The device for enhancing audio of claim 9, further comprising an adjustment mechanism to adjust audio provided to the audio enhancement circuit. 11. The device for enhancing audio of claim 9, further comprising a transmitter coupled to an audio receiver. 12. The device for enhancing audio of claim 10, further comprising a transmitter coupled to an audio receiver. 13. A method of audio enhancement, the method comprising the steps of:
transmitting audio signals to an enhancement circuit; transmitting enhanced audio signals to a user; and maintaining substantially zero latency between audio signal transmission and enhanced audio signal transmission. 14. The method of claim 13, wherein the substantially zero latency is maintained using a plurality of amplifiers, wherein at least one of the plurality of amplifiers is powered by the DC power source and at least one of the plurality of amplifiers receives the audio signals via its non-inverting input. 15. The method of claim 13, wherein the substantially zero latency is maintained using a variable resistor by which the audio signals travel from the first amplifier to a second amplifier, wherein the second amplifier is coupled to a feedback loop. 16. An audio enhancement system, comprising:
a microphone; and an enhancement circuit coupled to the microphone, the enhancement circuit comprising:
a DC power source; and
a plurality of amplifiers, wherein at least one of the plurality of amplifiers is powered by the DC power source, and at least one of the plurality of amplifiers receives a first signal representative of the sound via a variable resistor and a second signal representative of the sound via a feedback loop coupled at a non-inverting input. 17. The audio enhancement system of claim 16, wherein the system has at least three amplifiers. 18. The audio enhancement system of claim 17, wherein the feedback loop includes at least one of the plurality of amplifiers. 19. The audio enhancement system of claim 18, further comprising a second plurality of amplifiers with a feedback loop coupled at a non-inverting input of at least one of the second plurality of amplifiers. 20. The audio enhancement system of claim 16, wherein the enhancement circuit is coupled to one or both of the microphone and transmitter via electrical or wireless connections. | An exemplary audio enhancement system substantially eliminates latency by returning audio input signals in amplified form directly in the analog domain to the source, thereby reducing signal degradation and removing redundancy from digital and analog audio transmission and processing architectures. 1. An audio enhancement circuit for enhancing an audio signal from a source, comprising:
a decoupling circuit element connected in series to a non-inverting input of a first amplifier powered by a DC voltage; a variable resistor configured to permit the audio signal to travel from the first amplifier to a second amplifier, wherein the second amplifier is coupled to a first feedback loop at its non-inverting input. 2. The audio enhancement circuit of claim 1, further comprising a gain trimmer. 3. The audio enhancement circuit of claim 1, further comprising a third amplifier coupled to the variable resistor via its non-inverting input. 4. The audio enhancement circuit of claim 3, further comprising a fourth amplifier coupled to the variable resistor and through which the audio signal travels from the first amplifier. 5. The audio enhancement circuit of claim 4, further comprising a fifth amplifier coupled to the variable resistor via the fourth amplifier. 6. The audio enhancement circuit of claim 3, wherein the feedback loop comprises the third amplifier. 7. The audio enhancement circuit of claim 5, further comprising a second feedback loop coupled to the fourth amplifier. 8. The audio enhancement circuit of claim 1, further comprising a third amplifier, a fourth amplifier, and a fifth amplifier through which the audio signal travels from the first amplifier. 9. A device for enhancing audio, comprising the audio enhancement circuit of claim 1. 10. The device for enhancing audio of claim 9, further comprising an adjustment mechanism to adjust audio provided to the audio enhancement circuit. 11. The device for enhancing audio of claim 9, further comprising a transmitter coupled to an audio receiver. 12. The device for enhancing audio of claim 10, further comprising a transmitter coupled to an audio receiver. 13. A method of audio enhancement, the method comprising the steps of:
transmitting audio signals to an enhancement circuit; transmitting enhanced audio signals to a user; and maintaining substantially zero latency between audio signal transmission and enhanced audio signal transmission. 14. The method of claim 13, wherein the substantially zero latency is maintained using a plurality of amplifiers, wherein at least one of the plurality of amplifiers is powered by the DC power source and at least one of the plurality of amplifiers receives the audio signals via its non-inverting input. 15. The method of claim 13, wherein the substantially zero latency is maintained using a variable resistor by which the audio signals travel from the first amplifier to a second amplifier, wherein the second amplifier is coupled to a feedback loop. 16. An audio enhancement system, comprising:
a microphone; and an enhancement circuit coupled to the microphone, the enhancement circuit comprising:
a DC power source; and
a plurality of amplifiers, wherein at least one of the plurality of amplifiers is powered by the DC power source, and at least one of the plurality of amplifiers receives a first signal representative of the sound via a variable resistor and a second signal representative of the sound via a feedback loop coupled at a non-inverting input. 17. The audio enhancement system of claim 16, wherein the system has at least three amplifiers. 18. The audio enhancement system of claim 17, wherein the feedback loop includes at least one of the plurality of amplifiers. 19. The audio enhancement system of claim 18, further comprising a second plurality of amplifiers with a feedback loop coupled at a non-inverting input of at least one of the second plurality of amplifiers. 20. The audio enhancement system of claim 16, wherein the enhancement circuit is coupled to one or both of the microphone and transmitter via electrical or wireless connections. | 2,600 |
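The claims above describe cascaded amplifier stages with feedback at the non-inverting input. As a purely illustrative aside (the resistor values and three-stage topology below are hypothetical, not taken from the record), the overall gain of such a cascade follows from the ideal non-inverting op-amp gain formula, 1 + Rf/Rg, multiplied across stages:

```python
def noninverting_gain(r_feedback: float, r_ground: float) -> float:
    """Ideal non-inverting op-amp stage gain: 1 + Rf/Rg."""
    return 1.0 + r_feedback / r_ground

def chain_gain(stage_gains) -> float:
    """Overall gain of cascaded stages is the product of stage gains."""
    total = 1.0
    for gain in stage_gains:
        total *= gain
    return total

# Hypothetical values: three cascaded stages, each with Rf = 10 kOhm, Rg = 1 kOhm.
stages = [noninverting_gain(10_000, 1_000) for _ in range(3)]
print(chain_gain(stages))  # 11.0 ** 3 = 1331.0
```

This models only the idealized DC gain; the latency and feedback behavior claimed in the record depend on the actual analog circuit, which this sketch does not capture.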
11,091 | 11,091 | 16,409,022 | 2,651 | Ear buds may have optical proximity sensors and accelerometers. Control circuitry may analyze output from the optical proximity sensors and the accelerometers to identify a current operational state for the ear buds. The control circuitry may also analyze the accelerometer output to identify tap input such as double taps made by a user on ear bud housings. Samples in the accelerometer output may be analyzed to determine whether the samples associated with a tap have been clipped. If the samples have been clipped, a curve may be fit to the samples. Optical sensor data may be analyzed in conjunction with potential tap input data from the accelerometer. If the optical sensor data is ordered, a tap input may be confirmed. If the optical sensor data is disordered, the control circuitry can conclude that accelerometer data corresponds to false tap input associated with unintentional contact with the housing. | 1. A wireless ear bud, comprising:
a housing; a speaker in the housing; an optical proximity sensor in the housing; an accelerometer in the housing that produces output signals; and control circuitry that identifies double tap input on the housing by detecting first and second pulses in the output signals from the accelerometer. 2. The wireless ear bud defined in claim 1, wherein the housing comprises a main body portion and a stem portion extending from the main body portion. 3. The wireless ear bud defined in claim 2 wherein the optical proximity sensor and speaker are located in the main body portion. 4. The wireless ear bud defined in claim 2 wherein the stem portion has exposed electrical contacts. 5. The wireless ear bud defined in claim 2 wherein the accelerometer detects acceleration along first, second, and third axes. 6. The wireless ear bud defined in claim 5 wherein the second axis is aligned with the stem portion. 7. The wireless ear bud defined in claim 6 wherein the control circuitry compares the output signals associated with the first and second axes to determine whether the wireless ear bud is in an in-ear operating state. 8. The wireless ear bud defined in claim 2 further comprising an additional optical proximity sensor in the main body portion. 9. The wireless ear bud defined in claim 8 wherein the optical proximity sensor comprises a tragus sensor and the additional optical proximity sensor comprises a concha sensor. 10. The wireless ear bud defined in claim 1 wherein the control circuitry determines whether a magnitude of the first and second pulses in the output signals is greater than a threshold. 11. The wireless ear bud defined in claim 10 wherein the control circuitry determines whether the first and second pulses in the output signals occur within a predetermined time span. 12. A wireless ear bud, comprising
a housing having a main body portion and a stem portion extending from the main body portion; a speaker in the main body portion; an accelerometer that produces accelerometer output, wherein the accelerometer measures acceleration along first, second, and third axes and wherein the second axis is aligned with the stem portion; and control circuitry that:
identifies in-ear operation of the wireless ear bud by comparing the acceleration along the first axis with the acceleration along the second axis; and
identifies double tap input on the housing by detecting first and second pulses in the accelerometer output. 13. The wireless ear bud defined in claim 12 further comprising an optical proximity sensor in the main body portion. 14. The wireless ear bud defined in claim 13 further comprising an additional proximity sensor in the main body portion. 15. The wireless ear bud defined in claim 13 further comprising an additional proximity sensor in the stem portion. 16. The wireless ear bud defined in claim 12 wherein the control circuitry identifies the double tap input by determining whether the first and second pulses have a magnitude that exceeds a threshold and whether the first and second pulses occur within a predetermined time span. 17. A wireless ear bud, comprising:
a housing having a main body portion and a stem portion; a first proximity sensor in the main body portion and a second proximity sensor in the stem portion; a speaker in the main body portion; an accelerometer that produces accelerometer output; and control circuitry that controls the speaker based on double tap input on the housing, wherein the control circuitry identifies the double tap input by detecting first and second pulses in the accelerometer output. 18. The wireless ear bud defined in claim 17 wherein the first and second proximity sensors comprise optical proximity sensors. 19. The wireless ear bud defined in claim 17 wherein the control circuitry identifies the double tap input by determining whether the first and second pulses occur within a predetermined time span. 20. The wireless ear bud defined in claim 17 wherein the first proximity sensor produces proximity sensor output and wherein the control circuitry determines whether the double tap input is true double tap input or false double tap input based at least partly on the proximity sensor output. | Ear buds may have optical proximity sensors and accelerometers. Control circuitry may analyze output from the optical proximity sensors and the accelerometers to identify a current operational state for the ear buds. The control circuitry may also analyze the accelerometer output to identify tap input such as double taps made by a user on ear bud housings. Samples in the accelerometer output may be analyzed to determine whether the samples associated with a tap have been clipped. If the samples have been clipped, a curve may be fit to the samples. Optical sensor data may be analyzed in conjunction with potential tap input data from the accelerometer. If the optical sensor data is ordered, a tap input may be confirmed. 
If the optical sensor data is disordered, the control circuitry can conclude that accelerometer data corresponds to false tap input associated with unintentional contact with the housing.1. A wireless ear bud, comprising:
a housing; a speaker in the housing; an optical proximity sensor in the housing; an accelerometer in the housing that produces output signals; and control circuitry that identifies double tap input on the housing by detecting first and second pulses in the output signals from the accelerometer. 2. The wireless ear bud defined in claim 1, wherein the housing comprises a main body portion and a stem portion extending from the main body portion. 3. The wireless ear bud defined in claim 2 wherein the optical proximity sensor and speaker are located in the main body portion. 4. The wireless ear bud defined in claim 2 wherein the stem portion has exposed electrical contacts. 5. The wireless ear bud defined in claim 2 wherein the accelerometer detects acceleration along first, second, and third axes. 6. The wireless ear bud defined in claim 5 wherein the second axis is aligned with the stem portion. 7. The wireless ear bud defined in claim 6 wherein the control circuitry compares the output signals associated with the first and second axes to determine whether the wireless ear bud is in an in-ear operating state. 8. The wireless ear bud defined in claim 2 further comprising an additional optical proximity sensor in the main body portion. 9. The wireless ear bud defined in claim 8 wherein the optical proximity sensor comprises a tragus sensor and the additional optical proximity sensor comprises a concha sensor. 10. The wireless ear bud defined in claim 1 wherein the control circuitry determines whether a magnitude of the first and second pulses in the output signals is greater than a threshold. 11. The wireless ear bud defined in claim 10 wherein the control circuitry determines whether the first and second pulses in the output signals occur within a predetermined time span. 12. A wireless ear bud, comprising
a housing having a main body portion and a stem portion extending from the main body portion; a speaker in the main body portion; an accelerometer that produces accelerometer output, wherein the accelerometer measures acceleration along first, second, and third axes and wherein the second axis is aligned with the stem portion; and control circuitry that:
identifies in-ear operation of the wireless ear bud by comparing the acceleration along the first axis with the acceleration along the second axis; and
identifies double tap input on the housing by detecting first and second pulses in the accelerometer output. 13. The wireless ear bud defined in claim 12 further comprising an optical proximity sensor in the main body portion. 14. The wireless ear bud defined in claim 13 further comprising an additional proximity sensor in the main body portion. 15. The wireless ear bud defined in claim 13 further comprising an additional proximity sensor in the stem portion. 16. The wireless ear bud defined in claim 12 wherein the control circuitry identifies the double tap input by determining whether the first and second pulses have a magnitude that exceeds a threshold and whether the first and second pulses occur within a predetermined time span. 17. A wireless ear bud, comprising:
a housing having a main body portion and a stem portion; a first proximity sensor in the main body portion and a second proximity sensor in the stem portion; a speaker in the main body portion; an accelerometer that produces accelerometer output; and control circuitry that controls the speaker based on double tap input on the housing, wherein the control circuitry identifies the double tap input by detecting first and second pulses in the accelerometer output. 18. The wireless ear bud defined in claim 17 wherein the first and second proximity sensors comprise optical proximity sensors. 19. The wireless ear bud defined in claim 17 wherein the control circuitry identifies the double tap input by determining whether the first and second pulses occur within a predetermined time span. 20. The wireless ear bud defined in claim 17 wherein the first proximity sensor produces proximity sensor output and wherein the control circuitry determines whether the double tap input is true double tap input or false double tap input based at least partly on the proximity sensor output. | 2,600 |
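Claims 10-11, 16, and 19 of the ear-bud record describe identifying a double tap as two accelerometer pulses whose magnitude exceeds a threshold and which occur within a predetermined time span. A minimal single-axis sketch of that test (threshold, gap, and the sample trace below are hypothetical; a real implementation would use three-axis data plus the clipping reconstruction and optical-sensor confirmation described in the abstract):

```python
def find_pulses(samples, threshold):
    """Indices where a new pulse starts: |sample| exceeds the threshold
    and the previous sample was not already part of the same pulse."""
    starts, prev = [], None
    for i, s in enumerate(samples):
        if abs(s) > threshold:
            if prev is None or i - prev > 1:
                starts.append(i)
            prev = i
    return starts

def is_double_tap(samples, threshold, max_gap):
    """True when two above-threshold pulses occur within `max_gap`
    samples of each other, mirroring the magnitude-and-time-span test."""
    starts = find_pulses(samples, threshold)
    return any(b - a <= max_gap for a, b in zip(starts, starts[1:]))

trace = [0.0, 0.1, 2.4, 0.2, 0.0, 0.1, 0.0, 2.1, 0.1, 0.0]
print(is_double_tap(trace, threshold=1.0, max_gap=8))  # True
```

A single pulse, or two pulses farther apart than `max_gap`, would return False, which corresponds to the false-tap rejection path in claim 20.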
11,092 | 11,092 | 16,102,408 | 2,645 | Systems, apparatuses, and methods are described for random access of a wireless device. A distributed radio access network (RAN) entity may configure a random access (RA) resource and a RA preamble for a contention free random access of a wireless device. The distributed RAN entity may transmit, to a central RAN entity, a RA failure indication if it does not detect a RA preamble on the RA resource from a wireless device. | 1. A method, comprising:
transmitting, by a wireless device to a central unit associated with a base station via a distributed unit associated with the base station, a measurement report of a cell; receiving, by the wireless device from the central unit via the distributed unit, cell configuration parameters of the cell; receiving, by the wireless device from the distributed unit, an indication of a random access (RA) resource; transmitting, by the wireless device via the RA resource of the cell, at least one RA preamble; receiving, by the wireless device, cell reconfiguration parameters; reconfiguring, by the wireless device, the RA resource; and transmitting, by the wireless device using the reconfigured RA resource, the at least one RA preamble. 2. The method of claim 1, further comprising accessing, by the wireless device, the cell if a RA response is received. 3. The method of claim 1, further comprising receiving, by the wireless device and from the distributed unit, the RA preamble for random access to the cell. 4. The method of claim 1, further comprising receiving via a radio resource control message, the RA resource. 5. The method of claim 1, wherein the at least one RA preamble is transmitted to the distributed unit. 6. The method of claim 1, wherein the RA preamble is transmitted to a second distributed unit. 7. The method of claim 1, wherein the RA resource comprises an indication of a secondary cell synchronization. 8. The method of claim 1, wherein the RA resource comprises an indication of a handover of the wireless device. 9. A method, comprising:
receiving, by a distributed unit associated with a base station from a central unit associated with the base station, a first message comprising cell configuration parameters of a cell; transmitting, by the distributed unit to a wireless device, the first message and an indication of a random access (RA) resource of the cell; monitoring, by the distributed unit, the RA resource of the cell, wherein the monitoring comprises determining whether the RA resource of the cell comprises at least one RA preamble from the wireless device; and if the distributed unit receives the at least one RA preamble from the wireless device, transmitting, by the distributed unit to the wireless device, a RA response. 10. The method of claim 9, further comprising receiving, by the distributed unit from the wireless device via the cell, transport blocks. 11. The method of claim 10, further comprising transmitting, by the distributed unit, the transport blocks. 12. The method of claim 9, further comprising transmitting, by the distributed unit to the central unit of a base station, a RA failure indication, if the distributed unit does not receive at least one RA preamble from the wireless device. 13. The method of claim 9, further comprising transmitting, by the distributed unit to the wireless device, the RA preamble for RA to the cell. 14. A method, comprising:
monitoring, by a distributed radio access network (RAN) entity, a random access (RA) resource of a cell, wherein the monitoring comprises determining whether at least one RA preamble transmitted by a wireless device is detected; determining, by the distributed RAN entity and based on the RA resource, the at least one RA preamble is not detected; generating, by the distributed RAN entity and based on the determination that the at least one RA preamble is not detected, a first message indicating a RA failure of the wireless device; and transmitting, to a central RAN entity, the first message. 15. The method of claim 14, further comprising:
determining, by the distributed RAN entity and based on the RA resource, the at least one RA preamble is detected; and storing, by the distributed RAN entity, the configurations of the cell for the wireless device. 16. The method of claim 14, further comprising transmitting, by the distributed RAN entity and to the wireless device, a RRC message. 17. The method of claim 16, further comprising receiving, by the central RAN entity and from distributed RAN entity, the RA failure indication. 18. The method of claim 14, further comprising transmitting, by the central RAN entity and to a second distributed RAN entity, a configuration request for the cell and for the wireless device. 19. The method of claim 18, further comprising receiving, by the central RAN entity and from the second distributed RAN entity, information describing RA resources for random access to the cell. 20. The method of claim 19, further comprising transmitting, by the central RAN entity and to the distributed RAN entity, a RRC message comprising the information of RA resources. | Systems, apparatuses, and methods are described for random access of a wireless device. A distributed radio access network (RAN) entity may configure a random access (RA) resource and a RA preamble for a contention free random access of a wireless device. The distributed RAN entity may transmit, to a central RAN entity, a RA failure indication if it does not detect a RA preamble on the RA resource from a wireless device.1. A method, comprising:
transmitting, by a wireless device to a central unit associated with a base station via a distributed unit associated with the base station, a measurement report of a cell; receiving, by the wireless device from the central unit via the distributed unit, cell configuration parameters of the cell; receiving, by the wireless device from the distributed unit, an indication of a random access (RA) resource; transmitting, by the wireless device via the RA resource of the cell, at least one RA preamble; receiving, by the wireless device, cell reconfiguration parameters; reconfiguring, by the wireless device, the RA resource; and transmitting, by the wireless device using the reconfigured RA resource, the at least one RA preamble. 2. The method of claim 1, further comprising accessing, by the wireless device, the cell if a RA response is received. 3. The method of claim 1, further comprising receiving, by the wireless device and from the distributed unit, the RA preamble for random access to the cell. 4. The method of claim 1, further comprising receiving via a radio resource control message, the RA resource. 5. The method of claim 1, wherein the at least one RA preamble is transmitted to the distributed unit. 6. The method of claim 1, wherein the RA preamble is transmitted to a second distributed unit. 7. The method of claim 1, wherein the RA resource comprises an indication of a secondary cell synchronization. 8. The method of claim 1, wherein the RA resource comprises an indication of a handover of the wireless device. 9. A method, comprising:
receiving, by a distributed unit associated with a base station from a central unit associated with the base station, a first message comprising cell configuration parameters of a cell; transmitting, by the distributed unit to a wireless device, the first message and an indication of a random access (RA) resource of the cell; monitoring, by the distributed unit, the RA resource of the cell, wherein the monitoring comprises determining whether the RA resource of the cell comprises at least one RA preamble from the wireless device; and if the distributed unit receives the at least one RA preamble from the wireless device, transmitting, by the distributed unit to the wireless device, a RA response. 10. The method of claim 9, further comprising receiving, by the distributed unit from the wireless device via the cell, transport blocks. 11. The method of claim 10, further comprising transmitting, by the distributed unit, the transport blocks. 12. The method of claim 9, further comprising transmitting, by the distributed unit to the central unit of a base station, a RA failure indication, if the distributed unit does not receive at least one RA preamble from the wireless device. 13. The method of claim 9, further comprising transmitting, by the distributed unit to the wireless device, the RA preamble for RA to the cell. 14. A method, comprising:
monitoring, by a distributed radio access network (RAN) entity, a random access (RA) resource of a cell, wherein the monitoring comprises determining whether at least one RA preamble transmitted by a wireless device is detected; determining, by the distributed RAN entity and based on the RA resource, the at least one RA preamble is not detected; generating, by the distributed RAN entity and based on the determination that the at least one RA preamble is not detected, a first message indicating a RA failure of the wireless device; and transmitting, to a central RAN entity, the first message. 15. The method of claim 14, further comprising:
determining, by the distributed RAN entity and based on the RA resource, the at least one RA preamble is detected; and storing, by the distributed RAN entity, the configurations of the cell for the wireless device. 16. The method of claim 14, further comprising transmitting, by the distributed RAN entity and to the wireless device, a RRC message. 17. The method of claim 16, further comprising receiving, by the central RAN entity and from distributed RAN entity, the RA failure indication. 18. The method of claim 14, further comprising transmitting, by the central RAN entity and to a second distributed RAN entity, a configuration request for the cell and for the wireless device. 19. The method of claim 18, further comprising receiving, by the central RAN entity and from the second distributed RAN entity, information describing RA resources for random access to the cell. 20. The method of claim 19, further comprising transmitting, by the central RAN entity and to the distributed RAN entity, a RRC message comprising the information of RA resources. | 2,600 |
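Claim 14 of the random-access record reduces to a simple branch on the distributed unit's side: if the configured preamble is detected on the monitored RA resource, proceed; otherwise build a failure indication for the central RAN entity. A hedged sketch of that decision (the dictionary message shape and the preamble identifiers are hypothetical, not part of the record or any 3GPP interface):

```python
def monitor_ra_resource(detected_preambles, configured_preamble):
    """Distributed-unit side of the claim-14 method: after monitoring the
    configured RA resource, either record success or build a RA failure
    indication addressed to the central RAN entity."""
    if configured_preamble in detected_preambles:
        return {"event": "ra_success", "preamble": configured_preamble}
    return {"event": "ra_failure_indication", "destination": "central_ran_entity"}

print(monitor_ra_resource(set(), configured_preamble=23)["event"])      # ra_failure_indication
print(monitor_ra_resource({23, 41}, configured_preamble=23)["event"])   # ra_success
```

Per claims 18-20, the failure indication could then prompt the central entity to request RA resources from a second distributed unit and forward them to the wireless device in an RRC message.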
11,093 | 11,093 | 16,277,673 | 2,615 | In implementations of prefabricated building components based on municipal and county codes, a housing design requirement describing a limitation on a dimension of a housing feature is identified. A housing design of a house is generated to comply with the identified limitation on the dimension, and the housing design is configured to be manufactured using a prefabricated component. The housing design is refined by adjusting design dimensions based on stock dimensions of subcomponents of the prefabricated component, and a refined housing design is generated based on the adjustments to the design dimensions. | 1. A method comprising:
identifying a housing design requirement from a housing code, the housing design requirement describing a maximum floor area and at least one feature that is excluded from a calculation of a floor area; generating a housing design of a house configured to be manufactured using at least one prefabricated component, the housing design complying with the maximum floor area, wherein the housing design includes the at least one feature that is excluded from the calculation of the floor area; refining the housing design by adjusting design dimensions based on stock dimensions of subcomponents of the at least one prefabricated component; and generating a refined housing design of the house based on the adjusting, the refined housing design including:
the at least one prefabricated component; and
a lower level of the house, the lower level having a storage space and no living space as defined by the housing code. 2. The method as described in claim 1, wherein the design dimensions include the stock dimensions of the subcomponents. 3. The method as described in claim 1, wherein the subcomponents include cross laminated timber panels. 4. The method as described in claim 1, wherein the subcomponents include structural insulated panels. 5. The method as described in claim 1, wherein the subcomponents include prefabricated walls. 6-9. (canceled) 10. The method as described in claim 1, wherein the housing code is a municipal housing code for cottage housing. 11-20. (canceled) 21. The method as described in claim 1, wherein the at least one feature is a bay window or a fireplace. 22. The method as described in claim 1, wherein the at least one feature is a utility closet or a stairway. 23. The method as described in claim 1, wherein the at least one feature is a storage area. 24. A method comprising:
identifying a housing design requirement from a municipal housing code for cottage housing, the housing design requirement describing a maximum footprint and at least one feature that is excluded from a calculation of a footprint; generating a housing design of a house configured to be manufactured using at least one prefabricated component, the housing design complying with the maximum footprint, wherein the housing design includes the at least one feature that is excluded from the calculation of the footprint; refining the housing design by adjusting design dimensions based on stock dimensions of subcomponents of the at least one prefabricated component; and generating a refined housing design of the house based on the adjusting, the refined housing design including:
the at least one prefabricated component; and
a roof of the house, the roof of the house having a height as defined by the municipal code for cottage housing of 30 feet or less. 25. The method as described in claim 24, wherein the at least one feature is a bay window or a utility closet. 26. The method as described in claim 24, wherein the at least one feature is a stairway or a fireplace. 27. The method as described in claim 24, wherein the at least one feature is a storage area. 28. The method as described in claim 24, wherein the design dimensions include the stock dimensions of the subcomponents. 29. The method as described in claim 28, wherein the subcomponents include cross laminated timber panels. 30. The method as described in claim 28, wherein the subcomponents include structural insulated panels. 31. The method as described in claim 28, wherein the subcomponents include prefabricated walls. 32. A method comprising:
identifying a housing design requirement from a municipal housing code for cottage housing, the housing design requirement describing a maximum floor area and at least one feature that is excluded from a calculation of a floor area; generating a housing design of a house configured to be manufactured using at least one prefabricated component, the housing design complying with the maximum floor area, wherein the housing design includes the at least one feature that is excluded from the calculation of the floor area; refining the housing design by adjusting design dimensions based on stock dimensions of subcomponents of the at least one prefabricated component; and generating a refined housing design of the house based on the adjusting, the refined housing design including:
the at least one prefabricated component;
a lower level of the house, the lower level having a storage space and no living space as defined by the municipal code for cottage housing; and
a roof of the house, the roof of the house having a height as defined by the municipal code for cottage housing of 30 feet or less. 33. The method as described in claim 32, wherein the at least one feature includes at least one of a bay window, a fireplace, a utility closet, a stairway, and a storage area. 34. The method as described in claim 32, wherein subcomponents include cross laminated timber panels. | In implementations of prefabricated building components based on municipal and county codes, a housing design requirement describing a limitation on a dimension of a housing feature is identified. A housing design of a house is generated to comply with the identified limitation on the dimension, and the housing design is configured to be manufactured using a prefabricated component. The housing design is refined by adjusting design dimensions based on stock dimensions of subcomponents of the prefabricated component, and a refined housing design is generated based on the adjustments to the design dimensions.1. A method comprising:
identifying a housing design requirement from a housing code, the housing design requirement describing a maximum floor area and at least one feature that is excluded from a calculation of a floor area; generating a housing design of a house configured to be manufactured using at least one prefabricated component, the housing design complying with the maximum floor area, wherein the housing design includes the at least one feature that is excluded from the calculation of the floor area; refining the housing design by adjusting design dimensions based on stock dimensions of subcomponents of the at least one prefabricated component; and generating a refined housing design of the house based on the adjusting, the refined housing design including:
the at least one prefabricated component; and
a lower level of the house, the lower level having a storage space and no living space as defined by the housing code. 2. The method as described in claim 1, wherein the design dimensions include the stock dimensions of the subcomponents. 3. The method as described in claim 1, wherein the subcomponents include cross laminated timber panels. 4. The method as described in claim 1, wherein the subcomponents include structural insulated panels. 5. The method as described in claim 1, wherein the subcomponents include prefabricated walls. 6-9. (canceled) 10. The method as described in claim 1, wherein the housing code is a municipal housing code for cottage housing. 11-20. (canceled) 21. The method as described in claim 1, wherein the at least one feature is a bay window or a fireplace. 22. The method as described in claim 1, wherein the at least one feature is a utility closet or a stairway. 23. The method as described in claim 1, wherein the at least one feature is a storage area. 24. A method comprising:
identifying a housing design requirement from a municipal housing code for cottage housing, the housing design requirement describing a maximum footprint and at least one feature that is excluded from a calculation of a footprint; generating a housing design of a house configured to be manufactured using at least one prefabricated component, the housing design complying with the maximum footprint, wherein the housing design includes the at least one feature that is excluded from the calculation of the footprint; refining the housing design by adjusting design dimensions based on stock dimensions of subcomponents of the at least one prefabricated component; and generating a refined housing design of the house based on the adjusting, the refined housing design including:
the at least one prefabricated component; and
a roof of the house, the roof of the house having a height as defined by the municipal code for cottage housing of 30 feet or less. 25. The method as described in claim 24, wherein the at least one feature is a bay window or a utility closet. 26. The method as described in claim 24, wherein the at least one feature is a stairway or a fireplace. 27. The method as described in claim 24, wherein the at least one feature is a storage area. 28. The method as described in claim 24, wherein the design dimensions include the stock dimensions of the subcomponents. 29. The method as described in claim 28, wherein the subcomponents include cross laminated timber panels. 30. The method as described in claim 28, wherein the subcomponents include structural insulated panels. 31. The method as described in claim 28, wherein the subcomponents include prefabricated walls. 32. A method comprising:
identifying a housing design requirement from a municipal housing code for cottage housing, the housing design requirement describing a maximum floor area and at least one feature that is excluded from a calculation of a floor area; generating a housing design of a house configured to be manufactured using at least one prefabricated component, the housing design complying with the maximum floor area, wherein the housing design includes the at least one feature that is excluded from the calculation of the floor area; refining the housing design by adjusting design dimensions based on stock dimensions of subcomponents of the at least one prefabricated component; and generating a refined housing design of the house based on the adjusting, the refined housing design including:
the at least one prefabricated component;
a lower level of the house, the lower level having a storage space and no living space as defined by the municipal code for cottage housing; and
a roof of the house, the roof of the house having a height as defined by the municipal code for cottage housing of 30 feet or less. 33. The method as described in claim 32, wherein the at least one feature includes at least one of a bay window, a fireplace, a utility closet, a stairway, and a storage area. 34. The method as described in claim 32, wherein subcomponents include cross laminated timber panels. | 2,600 |
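The refinement step recited in these claims (adjusting design dimensions based on stock dimensions of prefabricated subcomponents, while complying with a maximum footprint that excludes certain features) can be sketched as follows. This is a minimal illustration only: the panel width, the snap-down rule, and both helper names are assumptions, not anything specified by the claims.

```python
# Sketch of the claimed refinement: snap each design dimension to a whole
# number of stock prefabricated panels, then check the maximum-footprint
# requirement with excluded features (e.g. a bay window) subtracted.
# All numeric values here are hypothetical.

STOCK_PANEL_WIDTH_FT = 4.0  # assumed stock width of one prefabricated panel

def snap_to_stock(dimension_ft, panel_width_ft=STOCK_PANEL_WIDTH_FT):
    """Round a design dimension down to a whole number of stock panels
    (never fewer than one panel)."""
    panels = max(1, int(dimension_ft // panel_width_ft))
    return panels * panel_width_ft

def footprint_complies(width_ft, depth_ft, excluded_area_sqft, max_footprint_sqft):
    """Check the maximum-footprint requirement, excluding the listed
    features from the footprint calculation as the claims describe."""
    return width_ft * depth_ft - excluded_area_sqft <= max_footprint_sqft

width = snap_to_stock(25.5)   # -> 24.0
depth = snap_to_stock(18.0)   # -> 16.0
print(width, depth, footprint_complies(width, depth, 20.0, 400.0))
```

Snapping down rather than to the nearest multiple is one plausible reading of "adjusting design dimensions based on stock dimensions"; a real implementation might instead round up or re-plan the layout.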
11,094 | 11,094 | 15,949,720 | 2,625 | A spatial direction of a wearable device that represents an actual viewing direction of the wearable device is determined. The spatial direction of the wearable device is used to select, from a multi-view image comprising single-view images, a set of single-view images. A display image is caused to be rendered on a device display of the wearable device. The display image represents a single-view image as viewed from the actual viewing direction of the wearable device. The display image is constructed based on the spatial direction of the wearable device and the set of single-view images. | 1. A method, comprising:
determining a spatial direction of a wearable device, the spatial direction of the wearable device representing an actual viewing direction of the wearable device at a first time point; using the spatial direction of the wearable device that represents the actual viewing direction of the wearable device to select, from a multi-view image comprising a plurality of single-view images, a set of two or more single-view images corresponding to a set of two or more viewing directions at the first time point, each single-view image in the plurality of single-view images in the multi-view image (a) corresponding to a respective viewing direction in a plurality of viewing directions and (b) representing a view of the multi-view image from the respective viewing direction; causing a display image to be rendered on a device display of the wearable device, the display image representing a single-view image as viewed from the actual viewing direction of the wearable device at the first time point, the display image being constructed based at least in part on the spatial direction of the wearable device and the set of two or more single-view images corresponding to the set of two or more viewing directions. 2. The method of claim 1, wherein the plurality of single-view images of the multi-view image is received with depth information for the plurality of single-view images, and wherein the display images are constructed based further on the depth information. 3. The method of claim 1, wherein the spatial direction of the wearable device is determined based on one or more spatial coordinates of the wearable device, and wherein the one or more spatial coordinates of the wearable device comprise one of: one or more rotational coordinates only, or a combination of translational coordinates and rotational coordinates. 4. 
The method of claim 1, wherein single-view images in the set of two or more single-view images are selected based further on a spatial position of the wearable device. 5. The method of claim 1, wherein the spatial direction of the wearable device is determined in relation to a reference coordinate system of a 3D space in which the wearable device resides. 6. The method of claim 1, wherein the plurality of viewing directions supported by the plurality of single-view images of the multi-view image forms a solid angle in relation to a viewer of the wearable device. 7. The method of claim 1, wherein the display image comprises a left view display image and a right view display image that form a stereoscopic image. 8. The method of claim 1, wherein a cinema display image depicts a first proper subset of visual objects in a plurality of visual objects in a 3D image space; wherein the display image represents a device display image that depicts one or more second proper subsets of visual objects in the plurality of visual objects in the 3D image space; and wherein the cinema display image is concurrently rendered on a cinema display for viewing by a viewer while the display image is rendered on the device display of the wearable device for viewing by the same viewer. 9. The method of claim 8, wherein the cinema display image depicting the first proper subset of visual objects in the plurality of visual objects in the 3D image space is received in a cinema image layer of a multi-layer multi-view video signal, and wherein the image set used to construct the display image is received in one or more device image layers of the multi-layer multi-view video signal. 10. The method of claim 8, further comprising:
receiving timing information associated with the cinema display image; using the timing information to synchronize rendering the cinema display image and the device display image. 11. The method of claim 1, wherein the display image is constructed by interpolating single-view images in the image set based on the spatial direction of the wearable device. 12. The method of claim 1, wherein the image set includes a subset of relatively low resolution images each of which covers a relatively large field of view and a subset of relatively high resolution images each of which covers a relatively small focus region of a viewer. 13. The method of claim 1, wherein the display image is constructed by the wearable device. 14. The method of claim 1, wherein the display image is constructed by a video streaming server that streams the display image to the wearable device. 15. The method of claim 1, wherein the image set is transmitted from a video streaming server to the wearable device. 16. The method of claim 1, wherein the plurality of single-view images is received locally from a storage device accessible to the wearable device. 17. A method, comprising:
determining a spatial direction of a wearable device, the spatial direction of the wearable device representing an actual viewing direction of the wearable device at a first time point; receiving a set of two or more single-view images corresponding to a set of two or more viewing directions at the first time point, the spatial direction of the wearable device being used to select, from a multi-view image comprising a plurality of single-view images, single-view images into the set of two or more single-view images, each single-view image in the plurality of single-view images in the multi-view image (a) corresponding to a respective viewing direction in a plurality of viewing directions and (b) representing a view of the multi-view image from the respective viewing direction; constructing a display image based at least in part on the spatial direction of the wearable device and the set of two or more single-view images corresponding to the set of two or more viewing directions, the display image being rendered on a device display of the wearable device, the display image representing a single-view image as viewed from the actual viewing direction of the wearable device at the first time point. 18. An apparatus performing the method as recited in claim 17. 19. A system performing the method as recited in claim 17. 20. A non-transitory computer readable storage medium, storing software instructions, which when executed by one or more processors cause performance of the method recited in claim 17. 21. A computing device comprising one or more processors and one or more storage media, storing a set of instructions, which when executed by one or more processors cause performance of the method recited in claim 17. | A spatial direction of a wearable device that represents an actual viewing direction of the wearable device is determined. 
The spatial direction of the wearable device is used to select, from a multi-view image comprising single-view images, a set of single-view images. A display image is caused to be rendered on a device display of the wearable device. The display image represents a single-view image as viewed from the actual viewing direction of the wearable device. The display image is constructed based on the spatial direction of the wearable device and the set of single-view images. 1. A method, comprising:
determining a spatial direction of a wearable device, the spatial direction of the wearable device representing an actual viewing direction of the wearable device at a first time point; using the spatial direction of the wearable device that represents the actual viewing direction of the wearable device to select, from a multi-view image comprising a plurality of single-view images, a set of two or more single-view images corresponding to a set of two or more viewing directions at the first time point, each single-view image in the plurality of single-view images in the multi-view image (a) corresponding to a respective viewing direction in a plurality of viewing directions and (b) representing a view of the multi-view image from the respective viewing direction; causing a display image to be rendered on a device display of the wearable device, the display image representing a single-view image as viewed from the actual viewing direction of the wearable device at the first time point, the display image being constructed based at least in part on the spatial direction of the wearable device and the set of two or more single-view images corresponding to the set of two or more viewing directions. 2. The method of claim 1, wherein the plurality of single-view images of the multi-view image is received with depth information for the plurality of single-view images, and wherein the display images are constructed based further on the depth information. 3. The method of claim 1, wherein the spatial direction of the wearable device is determined based on one or more spatial coordinates of the wearable device, and wherein the one or more spatial coordinates of the wearable device comprise one of: one or more rotational coordinates only, or a combination of translational coordinates and rotational coordinates. 4. 
The method of claim 1, wherein single-view images in the set of two or more single-view images are selected based further on a spatial position of the wearable device. 5. The method of claim 1, wherein the spatial direction of the wearable device is determined in relation to a reference coordinate system of a 3D space in which the wearable device resides. 6. The method of claim 1, wherein the plurality of viewing directions supported by the plurality of single-view images of the multi-view image forms a solid angle in relation to a viewer of the wearable device. 7. The method of claim 1, wherein the display image comprises a left view display image and a right view display image that form a stereoscopic image. 8. The method of claim 1, wherein a cinema display image depicts a first proper subset of visual objects in a plurality of visual objects in a 3D image space; wherein the display image represents a device display image that depicts one or more second proper subsets of visual objects in the plurality of visual objects in the 3D image space; and wherein the cinema display image is concurrently rendered on a cinema display for viewing by a viewer while the display image is rendered on the device display of the wearable device for viewing by the same viewer. 9. The method of claim 8, wherein the cinema display image depicting the first proper subset of visual objects in the plurality of visual objects in the 3D image space is received in a cinema image layer of a multi-layer multi-view video signal, and wherein the image set used to construct the display image is received in one or more device image layers of the multi-layer multi-view video signal. 10. The method of claim 8, further comprising:
receiving timing information associated with the cinema display image; using the timing information to synchronize rendering the cinema display image and the device display image. 11. The method of claim 1, wherein the display image is constructed by interpolating single-view images in the image set based on the spatial direction of the wearable device. 12. The method of claim 1, wherein the image set includes a subset of relatively low resolution images each of which covers a relatively large field of view and a subset of relatively high resolution images each of which covers a relatively small focus region of a viewer. 13. The method of claim 1, wherein the display image is constructed by the wearable device. 14. The method of claim 1, wherein the display image is constructed by a video streaming server that streams the display image to the wearable device. 15. The method of claim 1, wherein the image set is transmitted from a video streaming server to the wearable device. 16. The method of claim 1, wherein the plurality of single-view images is received locally from a storage device accessible to the wearable device. 17. A method, comprising:
determining a spatial direction of a wearable device, the spatial direction of the wearable device representing an actual viewing direction of the wearable device at a first time point; receiving a set of two or more single-view images corresponding to a set of two or more viewing directions at the first time point, the spatial direction of the wearable device being used to select, from a multi-view image comprising a plurality of single-view images, single-view images into the set of two or more single-view images, each single-view image in the plurality of single-view images in the multi-view image (a) corresponding to a respective viewing direction in a plurality of viewing directions and (b) representing a view of the multi-view image from the respective viewing direction; constructing a display image based at least in part on the spatial direction of the wearable device and the set of two or more single-view images corresponding to the set of two or more viewing directions, the display image being rendered on a device display of the wearable device, the display image representing a single-view image as viewed from the actual viewing direction of the wearable device at the first time point. 18. An apparatus performing the method as recited in claim 17. 19. A system performing the method as recited in claim 17. 20. A non-transitory computer readable storage medium, storing software instructions, which when executed by one or more processors cause performance of the method recited in claim 17. 21. A computing device comprising one or more processors and one or more storage media, storing a set of instructions, which when executed by one or more processors cause performance of the method recited in claim 17. | 2,600 |
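The selection and interpolation recited in claims 1, 11, and 17 of this record (choosing the single-view images whose viewing directions are nearest the wearable device's spatial direction, then interpolating them into a display image) might look like the sketch below. The 1-D angles, list-based images, and linear blend are illustrative assumptions; the claims do not specify a direction representation or an interpolation method.

```python
# Sketch: pick the set of single-view images nearest the wearable device's
# actual viewing direction (claim 1), then construct a display image by
# weighted interpolation between two bracketing views (claim 11).

def select_view_set(device_direction_deg, view_directions_deg, k=2):
    """Select the k viewing directions closest to the device's direction
    (the 'set of two or more single-view images')."""
    ranked = sorted(view_directions_deg,
                    key=lambda d: abs(d - device_direction_deg))
    return sorted(ranked[:k])

def interpolate_views(device_direction_deg, dir_a, img_a, dir_b, img_b):
    """Linearly blend two bracketing single-view images based on where the
    device direction falls between their viewing directions."""
    if dir_a == dir_b:
        return list(img_a)
    w = (device_direction_deg - dir_a) / (dir_b - dir_a)
    return [(1 - w) * a + w * b for a, b in zip(img_a, img_b)]

views = [0.0, 30.0, 60.0, 90.0]
chosen = select_view_set(40.0, views)                          # -> [30.0, 60.0]
display = interpolate_views(40.0, 30.0, [0.0, 0.0], 60.0, [3.0, 3.0])
print(chosen, display)                                         # weight w = 1/3
```

A production system would operate on pixel arrays with depth information (claim 2) rather than scalar lists, but the selection-then-interpolation shape of the computation is the same.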
11,095 | 11,095 | 16,783,109 | 2,628 | A display device includes a display panel including a pixel electrically connected to a feedback line, a sensor electrically connected to the feedback line, the sensor being configured to measure an impedance of the pixel in response to a first control signal, and to measure a driving current flowing through the pixel in response to a second control signal, and a timing controller configured to selectively generate the first control signal and the second control signal based on an aging time of the display panel. | 1. A display device comprising:
a display panel comprising a pixel electrically connected to a feedback line; and a sensor electrically connected to the feedback line and the pixel, the sensor being configured to measure an impedance of the pixel in a first sensing condition and to measure a driving current flowing through the pixel in a second sensing condition. 2. The display device of claim 1, wherein the first sensing condition indicates when an aging time of the display panel is less than a reference time corresponding to a saturation time point of an impedance variation of the pixel, and
wherein the second sensing condition indicates when the aging time is greater than the reference time. 3. The display device of claim 1, wherein the first sensing condition indicates when input data that comprises a grayscale value corresponding to the pixel is less than, or equal to, a reference grayscale value corresponding to an unstable current-voltage characteristic of the pixel, and
wherein the second sensing condition indicates when the input data is greater than the reference grayscale value. 4. The display device of claim 1, wherein the sensor is further configured to provide a first reference voltage to the feedback line in the first sensing condition, and to measure the impedance of the pixel by integrating a first current that is fed back through the feedback line according to the first reference voltage, and
wherein the first reference voltage is lower than, or equal to, a threshold voltage of an organic light emitting diode of the pixel. 5. The display device of claim 4, wherein the sensor is further configured to discharge a parasitic capacitor of the organic light emitting diode by providing a low power voltage to the feedback line before the first reference voltage is provided to the feedback line. 6. The display device of claim 1, wherein the sensor is further configured to provide a second reference voltage to the feedback line in the second sensing condition, and to measure the driving current by integrating a second current that is fed back through the feedback line according to the second reference voltage, and
wherein the second reference voltage is greater than, or equal to, a threshold voltage of an organic light emitting diode of the pixel. 7. The display device of claim 1, wherein the pixel comprises:
an organic light emitting diode comprising a cathode electrically connected to a low power voltage; and a sensing transistor electrically connected between an anode of the organic light emitting diode and the feedback line. 8. The display device of claim 7, wherein the sensor comprises:
an amplifier comprising:
a first input terminal electrically connected to the feedback line;
a second input terminal configured to receive a reference voltage; and
an output terminal;
a capacitor electrically connected between the first input terminal of the amplifier and the output terminal of the amplifier; and a switch electrically connected in parallel to the capacitor, the switch being configured to be turned off based on a switch control signal. 9. The display device of claim 8,
wherein a first sensing control signal is generated to control the sensing transistor, and a first switch control signal is generated to control the switch in the first sensing condition, wherein the first sensing control signal has a first turn-on voltage to turn on the sensing transistor in a first sensing period, and wherein the first switch control signal has a second turn-off voltage to turn off the switch in the first sensing period. 10. The display device of claim 9,
wherein a second sensing control signal is generated to control the sensing transistor, and a second switch control signal is generated to control the switch in the second sensing condition, wherein the second sensing control signal has the first turn-on voltage in a second sensing period, and wherein the second switch control signal has a second turn-on voltage to turn on the switch in a reset period, and has the second turn-off voltage in an integration period, the second sensing period comprising the reset period and the integration period. 11. The display device of claim 1, wherein an amount of pixel degradation of the pixel is calculated based on the impedance of the pixel or the driving current. 12. The display device of claim 11, wherein the impedance variation is calculated based on the impedance, and the amount of pixel degradation corresponding to the impedance variation is obtained by using a first degradation curve that represents a correlation between the impedance variation and the amount of pixel degradation. | A display device includes a display panel including a pixel electrically connected to a feedback line, a sensor electrically connected to the feedback line, the sensor being configured to measure an impedance of the pixel in response to a first control signal, and to measure a driving current flowing through the pixel in response to a second control signal, and a timing controller configured to selectively generate the first control signal and the second control signal based on an aging time of the display panel. 1. A display device comprising:
a display panel comprising a pixel electrically connected to a feedback line; and a sensor electrically connected to the feedback line and the pixel, the sensor being configured to measure an impedance of the pixel in a first sensing condition and to measure a driving current flowing through the pixel in a second sensing condition. 2. The display device of claim 1, wherein the first sensing condition indicates when an aging time of the display panel is less than a reference time corresponding to a saturation time point of an impedance variation of the pixel, and
wherein the second sensing condition indicates when the aging time is greater than the reference time. 3. The display device of claim 1, wherein the first sensing condition indicates when input data that comprises a grayscale value corresponding to the pixel is less than, or equal to, a reference grayscale value corresponding to an unstable current-voltage characteristic of the pixel, and
wherein the second sensing condition indicates when the input data is greater than the reference grayscale value. 4. The display device of claim 1, wherein the sensor is further configured to provide a first reference voltage to the feedback line in the first sensing condition, and to measure the impedance of the pixel by integrating a first current that is fed back through the feedback line according to the first reference voltage, and
wherein the first reference voltage is lower than, or equal to, a threshold voltage of an organic light emitting diode of the pixel. 5. The display device of claim 4, wherein the sensor is further configured to discharge a parasitic capacitor of the organic light emitting diode by providing a low power voltage to the feedback line before the first reference voltage is provided to the feedback line. 6. The display device of claim 1, wherein the sensor is further configured to provide a second reference voltage to the feedback line in the second sensing condition, and to measure the driving current by integrating a second current that is fed back through the feedback line according to the second reference voltage, and
wherein the second reference voltage is greater than, or equal to, a threshold voltage of an organic light emitting diode of the pixel. 7. The display device of claim 1, wherein the pixel comprises:
an organic light emitting diode comprising a cathode electrically connected to a low power voltage; and a sensing transistor electrically connected between an anode of the organic light emitting diode and the feedback line. 8. The display device of claim 7, wherein the sensor comprises:
an amplifier comprising:
a first input terminal electrically connected to the feedback line;
a second input terminal configured to receive a reference voltage; and
an output terminal;
a capacitor electrically connected between the first input terminal of the amplifier and the output terminal of the amplifier; and a switch electrically connected in parallel to the capacitor, the switch being configured to be turned off based on a switch control signal. 9. The display device of claim 8,
wherein a first sensing control signal is generated to control the sensing transistor, and a first switch control signal is generated to control the switch in the first sensing condition, wherein the first sensing control signal has a first turn-on voltage to turn on the sensing transistor in a first sensing period, and wherein the first switch control signal has a second turn-off voltage to turn off the switch in the first sensing period. 10. The display device of claim 9,
wherein a second sensing control signal is generated to control the sensing transistor, and a second switch control signal is generated to control the switch in the second sensing condition, wherein the second sensing control signal has the first turn-on voltage in a second sensing period, and wherein the second switch control signal has a second turn-on voltage to turn on the switch in a reset period, and has the second turn-off voltage in an integration period, the second sensing period comprising the reset period and the integration period. 11. The display device of claim 1, wherein an amount of pixel degradation of the pixel is calculated based on the impedance of the pixel or the driving current. 12. The display device of claim 11, wherein the impedance variation is calculated based on the impedance, and the amount of pixel degradation corresponding to the impedance variation is obtained by using a first degradation curve that represents a correlation between the impedance variation and the amount of pixel degradation. | 2,600 |
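The mode selection in claims 1-3 of this record (impedance sensing while the panel is young or the grayscale is low, driving-current sensing otherwise) can be sketched as below. Treating claims 2 and 3 as alternative triggers combined with "or" is one interpretation, and the reference values are hypothetical; the claims give no concrete thresholds.

```python
# Sketch of the two sensing conditions: measure pixel impedance before the
# impedance variation saturates (claim 2) or when input data is at a low
# grayscale with an unstable current-voltage characteristic (claim 3), and
# measure the driving current otherwise. Threshold values are assumptions.

REFERENCE_AGING_HOURS = 500   # assumed saturation point of impedance variation
REFERENCE_GRAYSCALE = 32      # assumed upper bound of the unstable I-V region

def select_sensing_mode(aging_hours, grayscale):
    """Return which quantity the sensor should measure for this pixel."""
    if aging_hours < REFERENCE_AGING_HOURS or grayscale <= REFERENCE_GRAYSCALE:
        return "impedance"        # first sensing condition
    return "driving_current"      # second sensing condition

print(select_sensing_mode(100, 200))   # young panel -> impedance
print(select_sensing_mode(800, 200))   # aged panel, bright pixel -> driving_current
print(select_sensing_mode(800, 16))    # aged panel, dim pixel -> impedance
```

Either measurement then feeds the degradation estimate of claims 11-12, e.g. by looking the impedance variation up on a stored degradation curve.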
11,096 | 11,096 | 16,359,050 | 2,689 | Systems and methods are provided and include a communication gateway of a control module. The communication gateway establishes a wireless communication connection with a user device. A plurality of sensors are configured to, in response to the user device being connected to the communication gateway, communicate signal information about the wireless communication connection to the control module. A reference data generator module determines a first probability value based on the signal information. The first probability value indicates a probability that a user of the user device will enter a vehicle. A passive entry/passive start (PEPS) system is configured to, in response to the first probability value being greater than at least one threshold probability, activate a vehicle function associated with the at least one threshold probability. | 1. A system comprising:
a communication gateway of a control module of a vehicle, wherein the communication gateway is configured to establish a wireless communication connection with a user device, the control module including at least one processor that is configured to execute instructions stored in a nontransitory memory; a plurality of sensors that are configured to, in response to the user device being connected to the communication gateway, communicate signal information about the wireless communication connection to the control module; a reference data generator module that is implemented by the at least one processor of the control module, wherein the reference data generator module is configured to determine a first probability value based on the signal information, and the first probability value indicates a probability that a user of the user device will enter the vehicle; and a passive entry/passive start (PEPS) system that is configured to, in response to the first probability value being greater than at least one threshold probability, activate a vehicle function associated with the at least one threshold probability. 2. The system of claim 1, wherein the reference data generator module is configured to determine the first probability value is greater than the at least one threshold probability based on a probability curve, and the probability curve is generated by the reference data generator module and is based on at least one entry of a reference data store that is implemented by the nontransitory memory. 3. The system of claim 2, wherein the reference data generator module is configured to generate a first entry of the at least one entry, and the first entry is based on the signal information and corresponds to the first probability value. 4. 
The system of claim 3, wherein each of the at least one entry is associated with a measurement event and includes information indicating whether the user of the user device enters the vehicle, signal information of the measurement event, and location data of the measurement event. 5. The system of claim 2, wherein the probability curve is based on at least one dynamic probability value, and the at least one dynamic probability value indicates the probability that the user of the user device will enter the vehicle when the user of the user device is within a threshold distance of the vehicle. 6. The system of claim 5, wherein the at least one dynamic probability value is configured to update in response to the reference data generator module generating a new entry of the at least one entry. 7. The system of claim 2, wherein the probability curve is a continuous curve. 8. The system of claim 1, wherein the vehicle function includes at least one of unlocking a door of the vehicle, unlocking a trunk of the vehicle, starting the vehicle, activating a heating system of the vehicle, activating an air conditioning system of the vehicle, and activating a lighting system of the vehicle. 9. The system of claim 1, wherein the wireless communication connection is a Bluetooth low energy (BLE) communication connection. 10. The system of claim 1, wherein the signal information includes at least one of a signal strength, a time of arrival, a time difference of arrival, an angle of arrival, and a round trip time of flight of a two-way ranging communication signal between the set of the plurality of sensors and the user device. 11. A method comprising:
establishing, with a communication gateway of a control module in a vehicle, a wireless communication connection with a user device, wherein the control module includes at least one processor that is configured to execute instructions stored in a nontransitory memory; communicating, in response to establishing the wireless communication connection, signal information about the wireless communication connection from a plurality of sensors to the control module; determining, using a reference data generator module that is implemented by the at least one processor of the control module, a first probability value based on the signal information, wherein the first probability value indicates a probability that a user of the user device will enter the vehicle; and in response to the first probability value being greater than at least one threshold probability, activating, using a passive entry/passive start (PEPS) system, a vehicle function associated with the at least one threshold probability. 12. The method of claim 11, wherein determining the first probability value is greater than the at least one threshold probability is based on a probability curve, and the probability curve is generated by the reference data generator module and is based on at least one entry of a reference data store that is implemented by the nontransitory memory. 13. The method of claim 12, further comprising generating a first entry of the at least one entry, wherein the first entry is based on the signal information and corresponds to the first probability value. 14. The method of claim 13, wherein each of the at least one entry is associated with a measurement event and includes information indicating whether the user of the user device enters the vehicle, signal information of the measurement event, and location data of the measurement event. 15.
The method of claim 12, wherein the probability curve is based on at least one dynamic probability value, wherein the at least one dynamic probability value indicates the probability that the user of the user device will enter the vehicle when the user of the user device is within a threshold distance of the vehicle. 16. The method of claim 15, further comprising updating the at least one dynamic probability value in response to the reference data generator module generating a new entry of the at least one entry. 17. The method of claim 12, wherein the probability curve is a continuous curve. 18. The method of claim 11, wherein the activating the vehicle function includes at least one of unlocking a door of the vehicle, unlocking a trunk of the vehicle, starting the vehicle, activating a heating system of the vehicle, activating an air conditioning system of the vehicle, and activating a lighting system of the vehicle. 19. The method of claim 11, wherein the wireless communication connection is a Bluetooth low energy (BLE) communication connection. 20. The method of claim 11, wherein the signal information further includes at least one of a signal strength, a time of arrival, a time difference of arrival, an angle of arrival, and a round trip time of flight of a two-way ranging communication signal between the set of the plurality of sensors and the user device. | Systems and methods are provided and include a communication gateway of a control module. The communication gateway establishes a wireless communication connection with a user device. A plurality of sensors are configured to, in response to the user device being connected to the communication gateway, communicate signal information about the wireless communication connection to the control module. A reference data generator module determines a first probability value based on the signal information. The first probability value indicates a probability that a user of the user device will enter a vehicle. 
A passive entry/passive start (PEPS) system is configured to, in response to the first probability value being greater than at least one threshold probability, activate a vehicle function associated with the at least one threshold probability. 1. A system comprising:
a communication gateway of a control module of a vehicle, wherein the communication gateway is configured to establish a wireless communication connection with a user device, the control module including at least one processor that is configured to execute instructions stored in a nontransitory memory; a plurality of sensors that are configured to, in response to the user device being connected to the communication gateway, communicate signal information about the wireless communication connection to the control module; a reference data generator module that is implemented by the at least one processor of the control module, wherein the reference data generator module is configured to determine a first probability value based on the signal information, and the first probability value indicates a probability that a user of the user device will enter the vehicle; and a passive entry/passive start (PEPS) system that is configured to, in response to the first probability value being greater than at least one threshold probability, activate a vehicle function associated with the at least one threshold probability. 2. The system of claim 1, wherein the reference data generator module is configured to determine the first probability value is greater than the at least one threshold probability based on a probability curve, and the probability curve is generated by the reference data generator module and is based on at least one entry of a reference data store that is implemented by the nontransitory memory. 3. The system of claim 2, wherein the reference data generator module is configured to generate a first entry of the at least one entry, and the first entry is based on the signal information and corresponds to the first probability value. 4. 
The system of claim 3, wherein each of the at least one entry is associated with a measurement event and includes information indicating whether the user of the user device enters the vehicle, signal information of the measurement event, and location data of the measurement event. 5. The system of claim 2, wherein the probability curve is based on at least one dynamic probability value, and the at least one dynamic probability value indicates the probability that the user of the user device will enter the vehicle when the user of the user device is within a threshold distance of the vehicle. 6. The system of claim 5, wherein the at least one dynamic probability value is configured to update in response to the reference data generator module generating a new entry of the at least one entry. 7. The system of claim 2, wherein the probability curve is a continuous curve. 8. The system of claim 1, wherein the vehicle function includes at least one of unlocking a door of the vehicle, unlocking a trunk of the vehicle, starting the vehicle, activating a heating system of the vehicle, activating an air conditioning system of the vehicle, and activating a lighting system of the vehicle. 9. The system of claim 1, wherein the wireless communication connection is a Bluetooth low energy (BLE) communication connection. 10. The system of claim 1, wherein the signal information includes at least one of a signal strength, a time of arrival, a time difference of arrival, an angle of arrival, and a round trip time of flight of a two-way ranging communication signal between the set of the plurality of sensors and the user device. 11. A method comprising:
establishing, with a communication gateway of a control module in a vehicle, a wireless communication connection with a user device, wherein the control module includes at least one processor that is configured to execute instructions stored in a nontransitory memory; communicating, in response to establishing the wireless communication connection, signal information about the wireless communication connection from a plurality of sensors to the control module; determining, using a reference data generator module that is implemented by the at least one processor of the control module, a first probability value based on the signal information, wherein the first probability value indicates a probability that a user of the user device will enter the vehicle; and in response to the first probability value being greater than at least one threshold probability, activating, using a passive entry/passive start (PEPS) system, a vehicle function associated with the at least one threshold probability. 12. The method of claim 11, wherein determining the first probability value is greater than the at least one threshold probability is based on a probability curve, and the probability curve is generated by the reference data generator module and is based on at least one entry of a reference data store that is implemented by the nontransitory memory. 13. The method of claim 12, further comprising generating a first entry of the at least one entry, wherein the first entry is based on the signal information and corresponds to the first probability value. 14. The method of claim 13, wherein each of the at least one entry is associated with a measurement event and includes information indicating whether the user of the user device enters the vehicle, signal information of the measurement event, and location data of the measurement event. 15. 
The method of claim 12, wherein the probability curve is based on at least one dynamic probability value, wherein the at least one dynamic probability value indicates the probability that the user of the user device will enter the vehicle when the user of the user device is within a threshold distance of the vehicle. 16. The method of claim 15, further comprising updating the at least one dynamic probability value in response to the reference data generator module generating a new entry of the at least one entry. 17. The method of claim 12, wherein the probability curve is a continuous curve. 18. The method of claim 11, wherein the activating the vehicle function includes at least one of unlocking a door of the vehicle, unlocking a trunk of the vehicle, starting the vehicle, activating a heating system of the vehicle, activating an air conditioning system of the vehicle, and activating a lighting system of the vehicle. 19. The method of claim 11, wherein the wireless communication connection is a Bluetooth low energy (BLE) communication connection. 20. The method of claim 11, wherein the signal information further includes at least one of a signal strength, a time of arrival, a time difference of arrival, an angle of arrival, and a round trip time of flight of a two-way ranging communication signal between the set of the plurality of sensors and the user device. | 2,600
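The PEPS record above (application 16,298,062's neighbor row, claims 1 and 11) describes a threshold-gated activation scheme: a probability of entry is estimated from sensor signal information, and a vehicle function fires when the probability exceeds its associated threshold. The following sketch illustrates only that gating logic; the class name, the averaging-based probability model, and all threshold values are hypothetical stand-ins, not taken from the patent.

```python
# Illustrative sketch of threshold-gated PEPS activation (claims 1 and 11).
# The probability model here is a hypothetical placeholder for the patent's
# reference-data-driven probability curve.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class PepsSystem:
    # Each entry pairs a threshold probability with the vehicle function it
    # gates (e.g. unlock doors at 0.5, start climate control at 0.8).
    thresholds: List[Tuple[float, str]]
    activated: List[str] = field(default_factory=list)

    def estimate_probability(self, signal_strengths: List[float]) -> float:
        # Hypothetical stand-in for the probability curve: stronger aggregate
        # signal from the sensors -> higher probability the user will enter.
        mean = sum(signal_strengths) / len(signal_strengths)
        return max(0.0, min(1.0, mean / 100.0))

    def on_measurement(self, signal_strengths: List[float]) -> List[str]:
        # Activate every not-yet-activated function whose threshold the
        # current probability value exceeds.
        p = self.estimate_probability(signal_strengths)
        fired = [fn for thr, fn in self.thresholds
                 if p > thr and fn not in self.activated]
        self.activated.extend(fired)
        return fired


peps = PepsSystem(thresholds=[(0.5, "unlock_doors"), (0.8, "start_climate")])
print(peps.on_measurement([40.0, 60.0, 55.0]))  # p ~ 0.52 -> ['unlock_doors']
print(peps.on_measurement([90.0, 95.0, 85.0]))  # p = 0.9 -> ['start_climate']
```

In the patent the curve itself is built from stored measurement-event entries (claims 2-6); here that feedback loop is omitted to keep the gating step isolated.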
11,097 | 11,097 | 16,310,072 | 2,633 | A radio network sends downlink signaling to a user equipment (UE) that triggers an enhanced uplink beam selection protocol, based on quality of the UE's uplink signaling the network receives according to a basic uplink beam selection protocol. In response the UE transmits pre-defined signaling such as uplink beam reference signals (U-BRS) with uplink beams according to the downlink signaling. The network measures and selects one or more of those uplink beams for the UE to use for sending uplink data, and notifies this selection to the UE. In various embodiments the basic uplink beam selection protocol is based on uplink-downlink reciprocity, the downlink triggering signaling is dynamic and further selects a subset of uplink beams, and multiple UEs can be triggered in common signaling where blind decoding by the UEs is enabled via a scrambling ID for this enhanced uplink beam selection protocol purpose. | 1-36. (canceled) 37. An apparatus comprising:
at least one processor and at least one memory storing a computer program, wherein the at least one processor is configured with the at least one memory and the computer program to cause the apparatus to at least:
in response to receiving downlink triggering signaling, transmit pre-defined uplink signaling with uplink beams according to the downlink triggering signaling;
receive a reply to the pre-defined uplink signaling that identifies one or more of the uplink beams; and
send uplink data on the identified one or more uplink beams. 38. The apparatus according to claim 36, wherein the downlink triggering signaling is received after the apparatus transmits uplink data selected according to a basic uplink beam selection protocol that comprises reciprocity wherein a user equipment's beam for uplink data is selected based on a network's beam for downlink data. 39. The apparatus according to claim 36, wherein the downlink triggering signaling selects the uplink beams as one or more subsets from among at least two predefined subsets of all possible user equipment beams for uplink data. 40. The apparatus according to claim 39, wherein:
at least one of the predefined subsets defines multiple orthogonal beams; and at least one other of the predefined subsets defines multiple spatially adjacent beams. 41. The apparatus according to claim 36, wherein the apparatus is a user equipment, and the downlink triggering signaling is scrambled with a scrambling identity that is assigned to the user equipment. 42. The apparatus according to claim 36, wherein the pre-defined uplink signaling comprises an uplink beam reference signal. 43. The apparatus according to claim 36, wherein the pre-defined uplink signaling is sent in a predefined subframe and multiplexed with pre-defined uplink signaling from multiple other user equipments that are similarly triggered to send respective pre-defined uplink signaling. 44. The apparatus according to claim 36, wherein the apparatus is a user equipment operating with a 5G mmWave radio access technology. 45. A method comprising:
in response to receiving downlink triggering signaling, transmitting pre-defined uplink signaling with uplink beams according to the downlink triggering signaling; receiving a reply to the pre-defined uplink signaling that identifies one or more of the uplink beams; and thereafter sending uplink data on the identified one or more uplink beams. 46. The method according to claim 45, wherein the pre-defined uplink signaling comprises an uplink beam reference signal. 47. The method according to claim 45, wherein the pre-defined uplink signaling is sent in a predefined subframe and multiplexed with pre-defined uplink signaling from multiple other user equipments that are similarly triggered to send respective pre-defined uplink signaling. 48. The method according to claim 45, wherein the method is performed by a user equipment operating with a 5G mmWave radio access technology. 49. An apparatus comprising:
at least one processor and at least one memory storing a computer program, wherein the at least one processor is configured with the at least one memory and the computer program to cause the apparatus to at least: based on quality of uplink signaling received from a user equipment according to a basic uplink beam selection protocol, send downlink signaling to the user equipment that triggers an enhanced uplink beam selection protocol; receive pre-defined signaling with uplink beams from the user equipment according to the downlink signaling; select one or more of the uplink beams for the user equipment to use for sending uplink data; and notify the user equipment of the selection. 50. The apparatus according to claim 49, wherein the basic uplink beam selection protocol comprises reciprocity wherein the user equipment's beam for uplink data is selected based on the apparatus' beam for downlink data to the user equipment. 51. The apparatus according to claim 49, wherein the downlink signaling that triggers the enhanced uplink beam selection protocol further selects one or more subsets from among at least two predefined subsets of all possible user equipment uplink beams. 52. The apparatus according to claim 51, wherein:
at least one of the predefined subsets defines multiple orthogonal beams; and at least one other of the predefined subsets defines multiple spatially adjacent beams. 53. The apparatus according to claim 49, wherein the downlink signaling is scrambled with a scrambling identity to enable the user equipment to blind decode the downlink signaling using the scrambling identity. 54. The apparatus according to claim 49, wherein the pre-defined signaling comprises an uplink beam reference signal. 55. The apparatus according to claim 49, wherein pre-defined signaling from multiple user equipments that are triggered for the enhanced uplink beam selection protocol are received multiplexed in a predefined subframe. 56. The apparatus according to claim 49, wherein the apparatus is a network radio access node or components thereof operating with a 5G mmWave radio access technology. | A radio network sends downlink signaling to a user equipment (UE) that triggers an enhanced uplink beam selection protocol, based on quality of the UE's uplink signaling the network receives according to a basic uplink beam selection protocol. In response the UE transmits pre-defined signaling such as uplink beam reference signals (U-BRS) with uplink beams according to the downlink signaling. The network measures and selects one or more of those uplink beams for the UE to use for sending uplink data, and notifies this selection to the UE. In various embodiments the basic uplink beam selection protocol is based on uplink-downlink reciprocity, the downlink triggering signaling is dynamic and further selects a subset of uplink beams, and multiple UEs can be triggered in common signaling where blind decoding by the UEs is enabled via a scrambling ID for this enhanced uplink beam selection protocol purpose. 1-36. (canceled) 37. An apparatus comprising:
at least one processor and at least one memory storing a computer program, wherein the at least one processor is configured with the at least one memory and the computer program to cause the apparatus to at least:
in response to receiving downlink triggering signaling, transmit pre-defined uplink signaling with uplink beams according to the downlink triggering signaling;
receive a reply to the pre-defined uplink signaling that identifies one or more of the uplink beams; and
send uplink data on the identified one or more uplink beams. 38. The apparatus according to claim 36, wherein the downlink triggering signaling is received after the apparatus transmits uplink data selected according to a basic uplink beam selection protocol that comprises reciprocity wherein a user equipment's beam for uplink data is selected based on a network's beam for downlink data. 39. The apparatus according to claim 36, wherein the downlink triggering signaling selects the uplink beams as one or more subsets from among at least two predefined subsets of all possible user equipment beams for uplink data. 40. The apparatus according to claim 39, wherein:
at least one of the predefined subsets defines multiple orthogonal beams; and at least one other of the predefined subsets defines multiple spatially adjacent beams. 41. The apparatus according to claim 36, wherein the apparatus is a user equipment, and the downlink triggering signaling is scrambled with a scrambling identity that is assigned to the user equipment. 42. The apparatus according to claim 36, wherein the pre-defined uplink signaling comprises an uplink beam reference signal. 43. The apparatus according to claim 36, wherein the pre-defined uplink signaling is sent in a predefined subframe and multiplexed with pre-defined uplink signaling from multiple other user equipments that are similarly triggered to send respective pre-defined uplink signaling. 44. The apparatus according to claim 36, wherein the apparatus is a user equipment operating with a 5G mmWave radio access technology. 45. A method comprising:
in response to receiving downlink triggering signaling, transmitting pre-defined uplink signaling with uplink beams according to the downlink triggering signaling; receiving a reply to the pre-defined uplink signaling that identifies one or more of the uplink beams; and thereafter sending uplink data on the identified one or more uplink beams. 46. The method according to claim 45, wherein the pre-defined uplink signaling comprises an uplink beam reference signal. 47. The method according to claim 45, wherein the pre-defined uplink signaling is sent in a predefined subframe and multiplexed with pre-defined uplink signaling from multiple other user equipments that are similarly triggered to send respective pre-defined uplink signaling. 48. The method according to claim 45, wherein the method is performed by a user equipment operating with a 5G mmWave radio access technology. 49. An apparatus comprising:
at least one processor and at least one memory storing a computer program, wherein the at least one processor is configured with the at least one memory and the computer program to cause the apparatus to at least: based on quality of uplink signaling received from a user equipment according to a basic uplink beam selection protocol, send downlink signaling to the user equipment that triggers an enhanced uplink beam selection protocol; receive pre-defined signaling with uplink beams from the user equipment according to the downlink signaling; select one or more of the uplink beams for the user equipment to use for sending uplink data; and notify the user equipment of the selection. 50. The apparatus according to claim 49, wherein the basic uplink beam selection protocol comprises reciprocity wherein the user equipment's beam for uplink data is selected based on the apparatus' beam for downlink data to the user equipment. 51. The apparatus according to claim 49, wherein the downlink signaling that triggers the enhanced uplink beam selection protocol further selects one or more subsets from among at least two predefined subsets of all possible user equipment uplink beams. 52. The apparatus according to claim 51, wherein:
at least one of the predefined subsets defines multiple orthogonal beams; and at least one other of the predefined subsets defines multiple spatially adjacent beams. 53. The apparatus according to claim 49, wherein the downlink signaling is scrambled with a scrambling identity to enable the user equipment to blind decode the downlink signaling using the scrambling identity. 54. The apparatus according to claim 49, wherein the pre-defined signaling comprises an uplink beam reference signal. 55. The apparatus according to claim 49, wherein pre-defined signaling from multiple user equipments that are triggered for the enhanced uplink beam selection protocol are received multiplexed in a predefined subframe. 56. The apparatus according to claim 49, wherein the apparatus is a network radio access node or components thereof operating with a 5G mmWave radio access technology. | 2,600
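The network-side flow of record 11,097 (claim 49) has three steps: trigger the enhanced protocol when basic (reciprocity-based) uplink quality degrades, measure the UE's pre-defined signaling per uplink beam, and select the best beam(s) to notify back. The sketch below illustrates that decision flow only; the dBm threshold, beam indices, and function names are hypothetical and do not come from the patent.

```python
# Illustrative sketch of the network-side enhanced uplink beam selection
# flow (claim 49). All numeric values and names are hypothetical.
from typing import Dict, List

QUALITY_THRESHOLD_DBM = -90.0  # hypothetical trigger point for step 1


def needs_enhanced_selection(basic_uplink_quality_dbm: float) -> bool:
    # Step 1: based on uplink quality under the basic protocol, decide
    # whether to send the downlink signaling that triggers the UE.
    return basic_uplink_quality_dbm < QUALITY_THRESHOLD_DBM


def select_beams(measured: Dict[int, float], n_best: int = 1) -> List[int]:
    # Steps 2-3: rank the measured pre-defined signaling (e.g. U-BRS) the
    # UE transmitted on each beam, and pick the strongest beam(s) to
    # notify back to the UE for sending uplink data.
    ranked = sorted(measured, key=lambda beam: measured[beam], reverse=True)
    return ranked[:n_best]


# A UE whose reciprocity-selected beam has degraded below the threshold:
if needs_enhanced_selection(-97.0):
    u_brs_measurements = {0: -95.0, 1: -82.0, 2: -88.0, 3: -91.0}  # beam -> dBm
    print(select_beams(u_brs_measurements, n_best=2))  # [1, 2]
```

The claims also cover restricting the UE to predefined beam subsets (orthogonal or spatially adjacent) and scrambling the trigger for blind decoding; those aspects are outside this sketch.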
11,098 | 11,098 | 16,383,489 | 2,613 | An electronic device may have input-output devices such as sensors, displays, wireless circuitry, and other electronic components mounted within a housing. The housing may have opposing front and rear walls. A display may be formed on a front side of the device and may be overlapped by a front housing wall such as a glass layer. Sensors and other components may be formed on a rear side of the device and may be overlapped by a rear housing wall. The rear housing wall may have a glass portion or other transparent structure through which projectors project images onto nearby surfaces and through which image sensors and other optical sensors receive light. The housing may be supported by a stand. An electrical component in the stand may interact with an electronic device on the stand. Wireless circuitry in an external item may wirelessly couple to wireless circuitry within the housing. | 1. A computer, comprising:
a housing having a front housing wall and an opposing rear housing wall; a stand configured to support the housing above a support surface; a display configured to display an image through the front housing wall; and a magnetic structure configured to attract an external item to the housing to align a first wireless signal structure in an interior portion of the housing with a second wireless signal structure in the external item. 2. The computer defined in claim 1 wherein the front housing wall comprises a first glass layer that overlaps the display, wherein the rear housing wall comprises an opposing second glass layer that covers a rear surface of the housing, wherein the magnetic structure is configured to attach the external item to the second glass layer, and wherein the first wireless signal structure comprises a coil that is inductively coupled to the second wireless signal structure through the second glass layer. 3. The computer defined in claim 1 wherein the housing comprises glass and wherein the first wireless signal structure is configured to receive wireless power from the second wireless signal structure through the glass. 4. The computer defined in claim 1 wherein the housing comprises glass and wherein the first wireless signal structure is configured to transmit wireless power to the second wireless signal structure through the glass. 5. The computer defined in claim 1 wherein the rear housing wall comprises a transparent layer, the computer further comprising:
an image sensor that receives light through the transparent layer; and
a projector that projects through the transparent layer. 6. The computer defined in claim 1 wherein the computer further comprises:
a gaze tracking sensor;
an image sensor; and
control circuitry configured to display the image on the display using information gathered with the image sensor and information from the gaze tracking sensor. 7. The computer defined in claim 1 further comprising first and second projectors that are configured to project respective first and second images onto surfaces located respectively on left and right sides of the rear housing wall. 8. The computer defined in claim 1 further comprising a sidewall formed from a transparent material that is coupled between the front housing wall and the rear housing wall. 9. The computer defined in claim 8 further comprising an array of pixels overlapped by the sidewall and configured to emit light through the sidewall. 10. A computer, comprising:
a housing; a display mounted within the housing; a projector configured to project an image onto a surface adjacent to the housing; an input device configured to gather input; and control circuitry configured to move a displayed object from the display to the image projected onto the surface based on the input. 11. The computer defined in claim 10 further comprising a shutter, wherein the shutter is interposed between a glass portion of the housing and the projector and wherein the control circuitry is configured to place the shutter in a transparent state when the projector is projecting the image onto the surface. 12. The computer defined in claim 10 further comprising a stand that is configured to support the housing, wherein a glass portion of the housing covers a rear surface of the housing and wherein the display is configured to display content through an opposing front surface of the housing. 13. The computer defined in claim 10 wherein the housing comprises a rear glass wall and wherein the projector is configured to project the image through the rear glass wall. 14. The computer defined in claim 10 wherein the housing comprises a rear glass wall having a portion forming a lens element. 15. The computer defined in claim 10 further comprising a stand configured to support the housing on a support surface, wherein the projector is configured to project the image onto the support surface. 16. A computer, comprising:
a housing having opposing front and rear sides; a display on the front side of the housing; an input device configured to gather input; an image sensor on the rear side of the housing; a gaze tracking sensor; and control circuitry configured to display an image captured with the image sensor on the display using information from the gaze tracking sensor. 17. The computer defined in claim 16 further comprising a stand configured to support the housing on a support surface, wherein the housing comprises a glass wall on the rear side. 18. The computer defined in claim 16 further comprising:
a stand configured to support the housing on a support surface; and
a projector configured to project onto the support surface. 19. The computer defined in claim 16 wherein the front side of the housing has a front layer of glass and a front metal layer attached to the front layer of glass and wherein the rear side of the housing has a rear layer of glass and a rear metal layer attached to the rear layer of glass. 20. The computer defined in claim 16 further comprising:
a stand configured to support the housing, wherein the stand has a glass planar portion; and
an electronic component overlapped by the glass planar portion of the stand, wherein the electronic component comprises an electronic component selected from the group consisting of: a pixel array, a wireless communications circuit, and a wireless power circuit. 21. The computer defined in claim 16 wherein the image sensor comprises a three-dimensional image sensor configured to gather three-dimensional shape information on a real-world object and wherein the computer further comprises a projector configured to project an image onto the real-world object based on the gathered three-dimensional shape information. 22. The computer defined in claim 16 wherein the housing has transparent sidewalls and wherein the display has pixels on the front side and pixels under the transparent sidewalls. | An electronic device may have input-output devices such as sensors, displays, wireless circuitry, and other electronic components mounted within a housing. The housing may have opposing front and rear walls. A display may be formed on a front side of the device and may be overlapped by a front housing wall such as a glass layer. Sensors and other components may be formed on a rear side of the device and may be overlapped by a rear housing wall. The rear housing wall may have a glass portion or other transparent structure through which projectors project images onto nearby surfaces and through which image sensors and other optical sensors receive light. The housing may be supported by a stand. An electrical component in the stand may interact with an electronic device on the stand. Wireless circuitry in an external item may wirelessly couple to wireless circuitry within the housing. 1. A computer, comprising:
a housing having a front housing wall and an opposing rear housing wall; a stand configured to support the housing above a support surface; a display configured to display an image through the front housing wall; and a magnetic structure configured to attract an external item to the housing to align a first wireless signal structure in an interior portion of the housing with a second wireless signal structure in the external item. 2. The computer defined in claim 1 wherein the front housing wall comprises a first glass layer that overlaps the display, wherein the rear housing wall comprises an opposing second glass layer that covers a rear surface of the housing, wherein the magnetic structure is configured to attach the external item to the second glass layer, and wherein the first wireless signal structure comprises a coil that is inductively coupled to the second wireless signal structure through the second glass layer. 3. The computer defined in claim 1 wherein the housing comprises glass and wherein the first wireless signal structure is configured to receive wireless power from the second wireless signal structure through the glass. 4. The computer defined in claim 1 wherein the housing comprises glass and wherein the first wireless signal structure is configured to transmit wireless power to the second wireless signal structure through the glass. 5. The computer defined in claim 1 wherein the rear housing wall comprises a transparent layer, the computer further comprising:
an image sensor that receives light through the transparent layer; and
a projector that projects through the transparent layer. 6. The computer defined in claim 1 wherein the computer further comprises:
a gaze tracking sensor;
an image sensor; and
control circuitry configured to display the image on the display using information gathered with the image sensor and information from the gaze tracking sensor. 7. The computer defined in claim 1 further comprising first and second projectors that are configured to project respective first and second images onto surfaces located respectively on left and right sides of the rear housing wall. 8. The computer defined in claim 1 further comprising a sidewall formed from a transparent material that is coupled between the front housing wall and the rear housing wall. 9. The computer defined in claim 8 further comprising an array of pixels overlapped by the sidewall and configured to emit light through the sidewall. 10. A computer, comprising:
a housing; a display mounted within the housing; a projector configured to project an image onto a surface adjacent to the housing; an input device configured to gather input; and control circuitry configured to move a displayed object from the display to the image projected onto the surface based on the input. 11. The computer defined in claim 10 further comprising a shutter, wherein the shutter is interposed between a glass portion of the housing and the projector and wherein the control circuitry is configured to place the shutter in a transparent state when the projector is projecting the image onto the surface. 12. The computer defined in claim 10 further comprising a stand that is configured to support the housing, wherein a glass portion of the housing covers a rear surface of the housing and wherein the display is configured to display content through an opposing front surface of the housing. 13. The computer defined in claim 10 wherein the housing comprises a rear glass wall and wherein the projector is configured to project the image through the rear glass wall. 14. The computer defined in claim 10 wherein the housing comprises a rear glass wall having a portion forming a lens element. 15. The computer defined in claim 10 further comprising a stand configured to support the housing on a support surface, wherein the projector is configured to project the image onto the support surface. 16. A computer, comprising:
a housing having opposing front and rear sides; a display on the front side of the housing; an input device configured to gather input; an image sensor on the rear side of the housing; a gaze tracking sensor; and control circuitry configured to display an image captured with the image sensor on the display using information from the gaze tracking sensor. 17. The computer defined in claim 16 further comprising a stand configured to support the housing on a support surface, wherein the housing comprises a glass wall on the rear side. 18. The computer defined in claim 16 further comprising:
a stand configured to support the housing on a support surface; and
a projector configured to project onto the support surface. 19. The computer defined in claim 16 wherein the front side of the housing has a front layer of glass and a front metal layer attached to the front layer of glass and wherein the rear side of the housing has a rear layer of glass and a rear metal layer attached to the rear layer of glass. 20. The computer defined in claim 16 further comprising:
a stand configured to support the housing, wherein the stand has a glass planar portion; and
an electronic component overlapped by the glass planar portion of the stand, wherein the electronic component comprises an electronic component selected from the group consisting of: a pixel array, a wireless communications circuit, and a wireless power circuit. 21. The computer defined in claim 16 wherein the image sensor comprises a three-dimensional image sensor configured to gather three-dimensional shape information on a real-world object and wherein the computer further comprising a projector configured to project an image onto the real-world object based on the gathered three-dimensional shape information. 22. The computer defined in claim 16 wherein the housing has transparent sidewalls and wherein the display has pixels on the front side and pixels under the transparent sidewalls. | 2,600 |
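Claim 10 above recites control circuitry that moves a displayed object from the built-in display to the projector image based on user input. As a minimal sketch of that hand-off logic (all class and member names here are hypothetical, not from the patent):

```python
class ProjectedDesktop:
    """Sketch of claim 10: control circuitry that moves a displayed
    object between the built-in display and the projected image."""

    def __init__(self):
        self.on_display = {"window-1"}  # objects shown on the built-in display
        self.on_projection = set()      # objects shown in the projected image

    def handle_input(self, obj, target):
        # Move the object to the requested surface, if it is not already there.
        if target == "projection" and obj in self.on_display:
            self.on_display.remove(obj)
            self.on_projection.add(obj)
        elif target == "display" and obj in self.on_projection:
            self.on_projection.remove(obj)
            self.on_display.add(obj)
```

A drag gesture ending past the display edge, for example, could call `handle_input(obj, "projection")` to continue the object in the projected image.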
11,099 | 11,099 | 16,715,039 | 2,633 | Methods and apparatus are described to automatically re-enable monitoring of a bypassed security sensor by a security system control device. | 1. A method performed by a security system control device in a security system, the security system comprising the security system control device and one or more security sensors coupled to the security system control device, the method comprising:
ignoring alarm signals received from a first security sensor of the one or more security sensors after the first security sensor has been bypassed; receiving a command to change a mode of operation of the security system; and in response to receiving the command to change the operating mode of the security system, processing future alarm signals received from the first security sensor. 2. The method of claim 1, wherein a first mode of operation is an armed-home mode of operation and a second mode of operation is an armed-away mode of operation, wherein receiving a command to change a mode of operation of the security system comprises receiving a command to change the mode of operation from the armed-home mode of operation to the armed-away mode of operation. 3. The method of claim 1, wherein a first mode of operation is an armed-away mode of operation and a second mode of operation is a disarmed mode of operation, wherein receiving a command to change a mode of operation of the security system comprises receiving a command to change the mode of operation from the armed-away mode of operation to the disarmed mode of operation. 4. The method of claim 1, wherein processing future alarm signals comprises causing a siren to sound in response to receiving a first alarm signal from the first security sensor after the mode of operation has been changed. 5. The method of claim 1, wherein beginning processing future alarm signals comprises sending a system alarm signal to a remote monitoring station in response to receiving a first alarm signal from the security sensor after the mode of operation has been changed. 6. The method of claim 1, further comprising:
in response to receiving the command to change the mode of operation of the security system, transmitting a message to a remote location indicating that the security sensor has been bypassed. 7. The method of claim 1, further comprising:
in response to receiving the command to change the mode of operation of the security system, providing an indication that the security sensor has changed status from being bypassed to being monitored. 8. The method of claim 1, wherein the command to change the mode of operation of the security system is wirelessly received from a keypad. 9. The method of claim 1, wherein the command to change the mode of operation of the security system is received from a wireless communication device. 10. A security system control device used in a security system, the security system comprising the security system control device and one or more security sensors coupled to the security system control device, the security system control device comprising:
a receiver for receiving alarm signals from the one or more security sensors; a memory for storing processor-executable instructions; a processor, coupled to the receiver and the memory, for executing the processor-executable instructions that causes the security system control device to:
ignore alarm signals received from a first security sensor of the one or more security sensors after the first security sensor has been bypassed;
receive a command to change a mode of operation of the security system; and
in response to receiving the command to change the operating mode of the security system, process, by the processor, future alarm signals received from the first security sensor via the receiver. 11. The security system control device of claim 10, wherein a first mode of operation is an armed-home mode of operation and a second mode of operation is an armed-away mode of operation, wherein the processor-executable instructions that cause the security system control device to receive a command to change a mode of operation of the security system comprises processor-executable instructions that cause the security system control device to receive a command to change the mode of operation from the armed-home mode of operation to the armed-away mode of operation. 12. The security system control device of claim 10, wherein a first mode of operation is an armed-away mode of operation and a second mode of operation is a disarmed mode of operation, wherein the processor-executable instructions that cause the security system control device to receive a command to change a mode of operation of the security system comprises processor-executable instructions that cause the security system control device to receive a command to change the mode of operation from the armed-away mode of operation to the disarmed mode of operation. 13. The security system control device of claim 10, wherein the processor-executable instructions that cause the security system control device to process future alarm signals comprises processor-executable instructions that causes the security system control device to cause a siren to sound in response to receiving a first alarm signal from the first security sensor after the mode of operation has been changed. 14. 
The security system control device of claim 10, wherein the processor-executable instructions that cause the security system control device to beginning processing future alarm signals comprises processor-executable instructions that causes the security system control device to send a system alarm signal to a remote monitoring station in response to receiving a first alarm signal from the security sensor after the mode of operation has been changed. 15. The security system control device of claim 10, comprising further instructions that causes the security system control device to:
in response to receiving the command to change the mode of operation of the security system, transmit a message to a remote location indicating that the security sensor has been bypassed. 16. The security system control device of claim 10, comprising further instructions that causes the security system control device to:
in response to receiving the command to change the mode of operation of the security system, provide an indication that the security sensor has changed status from being bypassed to being monitored. 17. The security system control device of claim 10, wherein the processor-executable instructions that cause the security system control device to receive the command to change the mode of operation of the security system comprises processor-executable instructions that causes the security system control device to wirelessly receive the command from a keypad. 18. The security system control device of claim 10, wherein the processor-executable instructions that cause the security system control device to receive the command to change the mode of operation of the security system comprises processor-executable instructions that causes the security system control device to wirelessly receive the command from a wireless communication device. | 2,600 |
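The behavior recited in claims 1–9 of this row amounts to a small state machine: alarm signals from a bypassed sensor are ignored until a mode-change command arrives, at which point monitoring is re-enabled and future alarms are processed. A minimal sketch, with all names hypothetical:

```python
from enum import Enum

class Mode(Enum):
    DISARMED = "disarmed"
    ARMED_HOME = "armed-home"
    ARMED_AWAY = "armed-away"

class SecuritySystemControl:
    """Sketch of claims 1-9: bypassed sensors are ignored until the
    system's mode of operation changes, which re-enables monitoring."""

    def __init__(self):
        self.mode = Mode.DISARMED
        self.bypassed = set()  # sensors whose alarm signals are ignored
        self.triggered = []    # alarms that were acted upon (siren, station)

    def bypass(self, sensor_id):
        self.bypassed.add(sensor_id)

    def change_mode(self, new_mode):
        # Claim 1: changing the operating mode clears all bypasses,
        # so future alarm signals from those sensors are processed.
        self.mode = new_mode
        self.bypassed.clear()

    def on_alarm(self, sensor_id):
        if sensor_id in self.bypassed:
            return False  # ignored while the sensor is bypassed
        self.triggered.append(sensor_id)  # e.g. sound siren, notify station
        return True
```

For example, bypassing a door sensor, then issuing `change_mode(Mode.ARMED_AWAY)`, causes the next alarm from that sensor to be processed rather than ignored, matching claim 2.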