Column              Type           Min     Max
Unnamed: 0          int64          0       350k
level_0             int64          0       351k
ApplicationNumber   int64          9.75M   96.1M
ArtUnit             int64          1.6k    3.99k
Abstract            stringlengths  1       8.37k
Claims              stringlengths  3       292k
abstract-claims     stringlengths  68      293k
TechCenter          int64          1.6k    3.9k

(For the stringlengths columns, Min and Max are text lengths rather than values.)
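A minimal sketch of how a table with this schema could be inspected programmatically, assuming the rows live in a pandas DataFrame; the file name patents.parquet and the three checks are illustrative assumptions, not taken from the source:

```python
import pandas as pd

df = pd.read_parquet("patents.parquet")  # hypothetical path

# The schema above: two int64 index columns, the string Abstract/Claims
# columns, and the numeric ArtUnit/TechCenter codes.
print(df.dtypes)

# "abstract-claims" appears to be the plain concatenation of the two text
# columns, which is easy to verify (or to regenerate):
print((df["Abstract"] + df["Claims"]).equals(df["abstract-claims"]))

# In the rows shown here, TechCenter is the ArtUnit truncated to the
# hundreds (e.g. ArtUnit 2612 -> TechCenter 2600):
print((df["ArtUnit"] // 100 * 100 == df["TechCenter"]).all())
```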
Unnamed: 0 = 10,600 | level_0 = 10,600 | ApplicationNumber = 13,725,424 | ArtUnit = 2,612
Abstract: A blend buffer has a pre-determined plurality of locations, each with a set of registers. The locations are allocatable to pixels. The blend buffer has a first write port and a second write port. The first write port couples with a texture read unit and the second write port couples with a blending unit. The blending unit also interfaces with a read port of the blend buffer. The texture read unit receives texture coordinates from a texture coordinate calculator. The blending unit is operable to interface with the texture coordinate calculator. The blending unit is operable to perform write-only transactions of pixel data to locations of a render target that correspond to respective locations in the blend buffer, once after completion of processing the pixels for which data is being written.
Claims: 1. A 3-D graphics system, comprising: a frame buffer; a blend buffer, provided in a memory distinct from a memory in which the frame buffer exists, the blend buffer organized to contain a pre-determined number of locations, with each location comprising a set of registers and comprising a first write port, a second write port and at least one read port, wherein locations of the plurality are assigned to pixels of one or more primitives; a texture read unit coupled for reading from a texture cache and coupled to the first write port of the blend buffer, the texture read unit operable to receive calculated texture coordinates from a texture coordinate calculator, and use the received calculated texture coordinates in reading texture data and providing that texture data for storage in the blend buffer as a current set of textures; and a blending unit coupled to the second write port of the blend buffer, to the at least one read port of the blend buffer, and to the texture coordinate calculator, the blending unit operable to perform polygon walking for each pixel having a location assigned in the blend buffer, for each texture of the current set of textures, and to write the pixels from the blend buffer to the frame buffer as write-only transactions on the frame buffer. 2. The 3-D graphics system of claim 1, further comprising a hardware semaphore operable to stall write transactions to the blend buffer, from the first write port and the second write port, which would overwrite valid data in the blend buffer. 3. The 3-D graphics system of claim 2, wherein each register in the set of registers for each location of the blend buffer is associated with a valid flag used by the hardware semaphore to determine validity of data in the blend buffer. 4. The 3-D graphics system of claim 1, wherein each location of the blend buffer is operable to store coordinates of a pixel in a render target to which the data from that location will be written, wherein the coordinates in each location can be non-sequential and the blending unit is operable to perform random writes to non-sequential locations of the frame buffer in order to write the pixels from the blend buffer to the frame buffer.
abstract-claims: (concatenation of the Abstract and Claims fields above)
TechCenter = 2,600
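The blend-buffer bookkeeping in claims 1-4 (a fixed set of locations, per-register valid flags, a semaphore that stalls overwrites, and a one-shot write-only flush to possibly non-sequential render-target coordinates) can be mirrored in a small software model. A minimal sketch, with invented names (BlendBuffer, write, flush); the patent describes hardware units, so this is an illustration of the mechanism, not the patented design:

```python
class BlendBuffer:
    def __init__(self, num_locations: int, registers_per_location: int):
        # Claim 1: a pre-determined number of locations, each a set of registers.
        self.locations = [
            {
                "regs": [None] * registers_per_location,
                "valid": [False] * registers_per_location,  # claim 3: per-register valid flags
                "coord": None,  # claim 4: render-target (x, y), possibly non-sequential
            }
            for _ in range(num_locations)
        ]

    def write(self, loc: int, reg: int, value) -> None:
        """Models either write port. The hardware semaphore of claim 2 stalls
        writes that would overwrite valid data; this sketch raises instead."""
        entry = self.locations[loc]
        if entry["valid"][reg]:
            raise RuntimeError("semaphore stall: register still holds valid data")
        entry["regs"][reg] = value
        entry["valid"][reg] = True

    def flush(self, loc: int, frame_buffer: dict) -> None:
        """One write-only transaction to the frame buffer, performed once after
        processing of the pixel completes; the location is then reusable."""
        entry = self.locations[loc]
        frame_buffer[entry["coord"]] = list(entry["regs"])  # random write (claim 4)
        entry["valid"] = [False] * len(entry["valid"])
```

A real pipeline would stall the writing unit rather than raise, but the valid-flag test is the same check the hardware semaphore of claims 2 and 3 applies.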
Unnamed: 0 = 10,601 | level_0 = 10,601 | ApplicationNumber = 14,573,820 | ArtUnit = 2,652
Abstract: Agents of a contact center process work items for clients of the contact center. Agents may, at some point, experience “burnout.” Detecting early signs and automatically mitigating the burnout allows agents to maintain interest in their work, and avoids the expense of training new agents and the lost experience of burnt-out agents. Agents may be monitored for single events and/or a pattern of events indicating burnout. Automatic detection of, and response to, burnout allows agents showing signs of burnout to be given more interesting, less difficult, or otherwise more favorable working conditions. Feedback monitoring determines whether the mitigation efforts are successful. If not, and an agent is burnt out and likely to quit, measures may be automatically launched to prevent the burnt-out agent from “infecting” other agents.
Claims: 1. A system, comprising: a memory operable to store accessible data and instructions; a network interface that interconnects the server to network components via a communication network; and a processor performing: accessing, via the network interface, an endpoint of an agent; receiving from the endpoint, a first action of the agent; determining whether the first action is associated with burnout; and upon determining the first action is associated with burnout, automatically modifying a work assignment of the agent, wherein the modification is a burnout mitigation modification. 2. The system of claim 1, wherein the processor further performs: monitoring, after the modification of the work assignment of the agent, the endpoint for a second action; determining whether the second action is associated with burnout; and upon determining the second action is associated with burnout, automatically modifying a work assignment of the agent, wherein the modification is a terminating mitigation modification. 3. The system of claim 2, wherein the terminating mitigation modification is the assignment of at least one work item to the agent selected to mitigate burnout with respect to at least one other agent not associated with the endpoint. 4. The system of claim 2, wherein the processor further performs storing indicia of the second action and indicia of the burnout mitigation modification and an associated indicia of the failure of the burnout mitigation modification. 5. The system of claim 1, wherein the burnout mitigation modification is a modification of the work assigned to the agent to provide the agent with at least one of escalated actions, variety of work items, more positive sentiment work items, fewer negative sentiment work items, and reduced workload. 6. The system of claim 1, wherein the processor further performs: monitoring, after the modification of the work assignment of the agent, the endpoint for a second action; determining whether the second action is associated with burnout; and upon determining the second action is not associated with burnout, automatically modifying a work assignment of the agent, wherein the modification is a normalization modification negating, at least in part, the burnout mitigation modification. 7. The system of claim 6, wherein the processor further performs storing indicia of the second action and indicia of the burnout mitigation modification and an associated indicia of the success of the burnout mitigation modification. 8. The system of claim 1, wherein the processor further performs signaling, via the network interface, a supervisor terminal associated with a supervisor of the agent, wherein the signal is an indicia of burnout. 9. The system of claim 1, wherein the first action is at least one of speech of the agent and non-speech vocalization. 10. The system of claim 1, wherein the first action is at least one of speech of the agent, non-speech vocalization, communication from the agent to a supervisor, communication from the agent to a consultative agent, questionnaire response, rank, acceptance of overtime, acceptance of shift changes, acceptance of work duty changes, typing rate, and typing accuracy. 11. The system of claim 1, wherein the processor determines whether the first action is associated with burnout upon determining a trend of actions, comprising the first action, indicates burnout. 12. 
A processor, performing operations comprising: accessing, via a network interface, an endpoint of an agent; receiving from the endpoint, a first action of the agent; determining whether the first action is associated with burnout; and upon determining the first action is associated with burnout, automatically causing a modification of a work assignment of the agent, wherein the modification is a burnout mitigation modification. 13. The processor of claim 12, performing operations further comprising: monitoring, after the modification of the work assignment of the agent, the endpoint for a second action; determining whether the second action is associated with burnout; and upon determining the second action is associated with burnout, automatically modifying a work assignment of the agent, wherein the modification is a terminating mitigation modification. 14. The processor of claim 13, wherein the terminating mitigation modification is the assignment of at least one work item to the agent selected to mitigate burnout with respect to at least one other agent not associated with the endpoint. 15. The processor of claim 12, wherein the processor further performs storing indicia of the second action and indicia of the burnout mitigation modification and an associated indicia of the failure of the burnout mitigation modification. 16. The processor of claim 12, wherein the first action is at least one of speech of the agent, non-speech vocalization, communication from the agent to a supervisor, communication from the agent to a consultative agent, questionnaire response, rank, acceptance of overtime, acceptance of shift changes, acceptance of work duty changes, facial expression, typing rate, typing force, and typing accuracy. 17. A processor, comprising: means to store accessible data and processor executable instructions; means to interconnect to network components; and means to process data, including: accessing, via the interconnect means, an endpoint of an agent; receiving from the endpoint, a first action of the agent; determining whether the first action is associated with burnout; and upon determining the first action is associated with burnout, automatically modifying a work assignment of the agent, wherein the modification is a burnout mitigation modification. 18. The processor of claim 17, wherein the processing means further comprises means to determine whether the first action is associated with burnout upon determining a trend of actions, comprising the first action, indicates burnout. 19. The system of claim 17, wherein the processor means further comprises means to perform signaling, via the interconnect means, a supervisor terminal associated with a supervisor of the agent, wherein the signal is an indicia of burnout. 20. The system of claim 17, wherein the first action is at least one of speech of the agent and non-speech vocalization.
abstract-claims: (concatenation of the Abstract and Claims fields above)
TechCenter = 2,600
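The mitigation logic of claims 1, 2 and 6 amounts to a small state machine: mitigate on the first detection, then use feedback monitoring to either escalate to a terminating modification or roll the mitigation back. A minimal sketch under stated assumptions: Agent, handle_action and the modification labels are invented, and a boolean stands in for whatever burnout classifier (single event or trend, per claim 11) the system actually uses:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    modifications: list = field(default_factory=list)

    def apply(self, modification: str) -> None:
        self.modifications.append(modification)

def handle_action(agent: Agent, is_burnout: bool, mitigated: bool) -> str:
    """One monitoring step: mitigate on first detection, then use feedback
    to either escalate (claim 2) or roll the mitigation back (claim 6)."""
    if not mitigated:
        if is_burnout:
            agent.apply("burnout mitigation")      # claim 1: more favorable work
            return "mitigated"
        return "normal"
    if is_burnout:
        agent.apply("terminating mitigation")      # claim 2: mitigation failed
        return "terminated"
    agent.apply("normalization")                   # claim 6: mitigation succeeded
    return "normalized"
```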
Unnamed: 0 = 10,602 | level_0 = 10,602 | ApplicationNumber = 15,927,587 | ArtUnit = 2,685
Abstract: A universal controlling device is provided with one or more buttons which, when activated in a set-up mode, serve to initiate a rapid configuration of the universal controlling device, adapting it to communicate with an intended target appliance.
Claims: 1. A method for configuring a universal controlling device to exchange communications with an intended target device via an RF communications channel, comprising: receiving into the universal controlling device while in a setup mode of operation of the universal controlling device a selection of a one of a plurality of quick setup input elements of the universal controlling device wherein each of the plurality of quick setup input elements is linked to a different one of a plurality of RF communication protocols usable by the universal controlling device to exchange communications with a device type of the intended target device; and in response to the selection of the one of the plurality of quick setup input elements of the universal controlling device, causing the universal controlling device to exit the setup mode of operation of the universal controlling device whereupon the universal controlling device will have configured itself to use the one of the plurality of RF communication protocols that was linked to the selected one of the plurality of quick setup input elements when the universal controlling device is subsequently operated in an operating mode of the universal controlling device in which the universal controlling device is intended to exchange communications with the intended target device. 2. The method as recited in claim 1, wherein the device type of the intended target device comprises a set top box type. 3. The method as recited in claim 1, causing the universal controlling device to use an IR protocol when the universal controlling device is placed into an operating mode of the universal controlling device in which the universal controlling device is intended to transmit communications to a device of a device type other than the device type of the intended target device. 4. The method as recited in claim 1, comprising receiving into the universal controlling device while in the setup mode of operation of the universal controlling device data indicative of the device type of the intended target device. 5. The method as recited in claim 4, comprising using the data indicative of the device type to link each of the plurality of quick setup input elements to the different one of the plurality of RF communication protocols usable to exchange communications with the device type of the intended target device. 6. The method as recited in claim 1, wherein each of the plurality of quick setup input elements is linked to the different one of the plurality of RF communication protocols usable to exchange communications with the device type of the intended target device at a time of manufacture of the universal controlling device. 7. The method as recited in claim 1, wherein each of the plurality of quick setup input elements is linked to the different one of the plurality of RF communication protocols usable to exchange communications with the device type of the intended target device at a time prior to deployment of the universal controlling device. 8. The method as recited in claim 1, comprising causing the universal controlling device to enter into the setup mode of operation of the universal controlling device in response to an activation of a setup input element of the universal controlling device. 9. 
The method as recited in claim 1, comprising causing the universal controlling device to enter into the setup mode of operation of the universal controlling device in response to an activation of a setup input element of the universal controlling device for a predetermined period of time. 10. The method as recited in claim 1, comprising causing the universal controlling device to enter into the setup mode of operation of the universal controlling device in response to a simultaneous activation of a setup input element of the universal controlling device and a one of the quick setup input elements. 11. The method as recited in claim 1, comprising causing the universal controlling device to enter into the setup mode of operation of the universal controlling device in response to a simultaneous activation of a setup input element of the universal controlling device and a one of the quick setup input elements for a predetermined period of time. 12. A universal controlling device, comprising: an RF communications circuit; a processing unit in communication with the RF communications circuit for causing the RF communications circuit to exchange RF communications with an intended target device; and a key matrix having a plurality of quick setup input elements wherein each of the plurality of quick setup input elements is linked to a different one of a plurality of RF communication protocols usable by the processing unit in communication with the RF communications circuit to exchange RF communications with a device type of the intended target device; wherein, while the universal controlling device is in a setup mode of operation of the universal controlling device, a selection of a one of the plurality of quick setup input elements causes the universal controlling device to configure itself to use the one of the plurality of RF communication protocols linked to the selected one of the plurality of quick setup input elements when the universal controlling device is subsequently used in an operating mode of the universal controlling device to exchange RF communications with the intended target device. 13. The universal controlling device as recited in claim 12, wherein the device type of the intended target device comprises a set top box type. 14. The universal controlling device as recited in claim 12, wherein the universal controlling device comprises a plurality of device mode input elements and wherein an activation of a device mode input element indicates to the universal controlling device the device type of the intended target device. 15. The universal controlling device as recited in claim 14, comprising using the activation of the device mode input element to select a one of a plurality of RF communication protocol groups and creating a linkage between the plurality of quick setup input elements and different ones of a plurality of RF communication protocols in the selected one of the plurality of RF communication protocol groups. 16. The universal controlling device as recited in claim 12, wherein each of the plurality of quick setup input elements is linked to the different one of the plurality of RF communication protocols usable to exchange communications with a device type of the intended target device at a time of manufacture of the universal controlling device. 17. 
The universal controlling device as recited in claim 12, wherein each of the plurality of quick setup input elements is linked to the different one of the plurality of RF communication protocols usable to exchange communications with a device type of the intended target device at a time prior to deployment of the universal controlling device. 18. The universal controlling device as recited in claim 12, wherein the universal controlling device comprises a setup input element and the universal controlling device is caused to enter into the setup mode of operation of the universal controlling device in response to an activation of the setup input element. 19. The universal controlling device as recited in claim 12, wherein the universal controlling device comprises a setup input element and the universal controlling device is caused to enter into the setup mode of operation of the universal controlling device in response to an activation of the setup input element for a predetermined period of time. 20. The universal controlling device as recited in claim 12, wherein the universal controlling device comprises a setup input element and the universal controlling device is caused to enter into the setup mode of operation of the universal controlling device in response to a simultaneous activation of the setup input element and a one of the quick setup input elements. 21. The universal controlling device as recited in claim 12, wherein the universal controlling device comprises a setup input element and the universal controlling device is caused to enter into the setup mode of operation of the universal controlling device in response to a simultaneous activation of the setup input element and a one of the quick setup input elements for a predetermined period of time.
abstract-claims: (concatenation of the Abstract and Claims fields above)
TechCenter = 2,600
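Claims 1, 14 and 15 describe a two-level linkage: a device-mode selection picks a group of RF communication protocols, and a quick-setup key picks one protocol within that group while simultaneously exiting setup mode. A minimal sketch of that lookup; the protocol names and the group table are invented for illustration and are not from the patent:

```python
RF_PROTOCOL_GROUPS = {
    # hypothetical: one protocol group per device type (claim 15)
    "set top box": ["rf_protocol_a", "rf_protocol_b", "rf_protocol_c"],
    "tv":          ["rf_protocol_b", "rf_protocol_d"],
}

class UniversalRemote:
    def __init__(self) -> None:
        self.setup_mode = False
        self.device_type = None
        self.active_protocol = None

    def enter_setup(self, device_type: str) -> None:
        # Claims 8-11: entered via a (long or simultaneous) setup-key press.
        self.setup_mode = True
        self.device_type = device_type

    def press_quick_setup(self, button_index: int) -> None:
        # Claim 1: one key press both selects the linked RF protocol and
        # exits the setup mode.
        if not self.setup_mode:
            return
        group = RF_PROTOCOL_GROUPS[self.device_type]
        self.active_protocol = group[button_index]
        self.setup_mode = False
```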
Unnamed: 0 = 10,603 | level_0 = 10,603 | ApplicationNumber = 14,334,981 | ArtUnit = 2,674
Abstract: A method for securing electronic transactions includes associating a mobile electronic device with a first user. A first computer system retrievably stores registration data relating to the first user, including a device identifier that is unique to the mobile electronic device. A security application that supports in-application push notifications is installed on the mobile electronic device. The first computer system sends a push notification to the mobile electronic device, the push notification prompting the first user to provide a confirmation reply via a user interface of the security application for activating the mobile electronic device as a security token. The mobile electronic device is activated as a security token for the first user in response to receiving at the first computer system, from the mobile electronic device, the confirmation reply from the first user.
Claims: 1. A method comprising: associating a mobile electronic device with a first user; retrievably storing, by a first computer system, registration data relating to the first user and including a device identifier that is unique to the mobile electronic device associated with the first user; sending, by the first computer system, a push notification to the mobile electronic device, the push notification prompting the first user to provide a confirmation reply via a user interface of a security application for activating the mobile electronic device as a security token; and activating the mobile electronic device as a security token for the first user in response to receiving at the first computer system, from the mobile electronic device, the confirmation reply from the first user. 2. The method of claim 1 wherein retrievably storing the registration data includes retrievably storing first authentication data for use in authenticating the first user to the first computer system. 3. The method of claim 2 wherein the push notification is for prompting the first user to provide second authentication data, and further comprising receiving from the mobile electronic device the second authentication data at the first computer system. 4. The method of claim 2 wherein the push notification is for prompting the first user to provide second authentication data including a biometric input, and further comprising receiving from the mobile electronic device the second authentication data at the first computer system. 5. The method of claim 3 wherein activating the mobile electronic device as a security token for the first user comprises assigning the security token to one of a plurality of different security levels in dependence upon a result of comparing the second authentication data to the first authentication data. 6. The method of claim 1 wherein the mobile electronic device is a smartphone. 7. The method of claim 1 comprising: providing from the first user to a second computer system an electronic transaction request; prior to completing the requested electronic transaction, sending an authorization request from the second computer system to the first computer system; sending, by the first computer system, a push notification to the mobile electronic device, the push notification prompting the first user to provide a response for authorizing the requested electronic transaction; receiving, from the mobile electronic device, the response at the first computer system; in response to receiving the response at the first computer system, providing to the second computer system an authorization message; and in response to receiving the authorization message at the second computer system, completing the electronic transaction for the first user. 8. The method of claim 1 comprising: associating the security token with a specific authorized service, the service for being authenticated in reliance upon the security token. 9. The method of claim 1 wherein the security token comprises tokenization data uniquely associated with the smartphone such that copying of the tokenization data to another smartphone does not result in a valid token. 10. 
A method comprising: registering by a first system a first user, comprising retrievably storing authentication data for use in authenticating the first user to the first system; registering by a second system the first user, comprising associating a uniquely identifiable mobile electronic device with the first user; requesting by the first user to the first system an electronic transaction requiring authentication of the first user by the first system; authenticating the first user by the first system based on the retrievably stored authentication data and based on data provided by the first user in response to an authentication challenge by the first system; subsequent to authenticating the first user, requesting by the first system to the second system a secondary authentication of the first user; sending from the second system to the uniquely identifiable mobile electronic device a push notification prompting the first user to provide a secondary authentication response via the uniquely identifiable mobile electronic device; receiving by the second system from the uniquely identifiable mobile electronic device the secondary authentication response provided by the first user; providing the secondary authentication of the first user from the second system to the first system based on the secondary authentication response; and subsequent to receiving the secondary authentication of the first user, performing by the first system the requested electronic transaction for the first user. 11. The method of claim 10 wherein associating a uniquely identifiable mobile electronic device with the first user comprises installing a security application on said device. 12. The method of claim 10 wherein the secondary authentication response provided by the first user comprises at least one of a password and a username. 13. The method of claim 10 wherein the secondary authentication response provided by the first user comprises biometric data. 14. A method comprising: associating a mobile electronic device with a first user; installing on the mobile electronic device a security application that supports in-application push notifications; registering, by a security computer, the mobile electronic device as a security token for use by the first user for authorizing electronic transactions; receiving at the security computer, from a first transaction system, a first request for authorization to complete a first electronic transaction; receiving at the security computer, from a second transaction system, a second request for authorization to complete a second electronic transaction; sending from the security computer to the mobile electronic device a first push notification prompting the first user to provide a first response authorizing the first electronic transaction; sending from the security computer to the mobile electronic device a second push notification prompting the first user to provide a second response authorizing the second electronic transaction; and providing from the security computer: a first authorization to the first transaction system in dependence upon receiving the first response from the first user authorizing the first electronic transaction; and a second authorization to the second transaction system in dependence upon receiving the second response from the first user authorizing the second electronic transaction. 15. 
The method of claim 14 wherein the first response from the first user comprises first authentication information required for a first security level, and the second response from the first user comprises second authentication information required for a second security level different than the first security level. 16. The method of claim 14 wherein the first transaction system is associated with a first entity and the second transaction system is associated with a second entity different than the first entity. 17. The method according to claim 14 wherein the first transaction system authenticates the first user prior to the security computer providing the first authorization. 18. The method according to claim 14 wherein the first transaction system relates to a first service and the second transaction system relates to a second different service. 19. A method comprising: associating a mobile electronic device with a first user; installing on the mobile electronic device a security application that supports in-application push notifications; registering, by a first computer system, the mobile electronic device as a security token for use by the first user for authorizing electronic transactions; receiving an electronic transaction request from the first user, the electronic transaction request associated with a security level of a plurality of different security levels; transmitting via at least a push notification a request for N responses each including different authentication information, wherein the number N is greater than 1 and is determined based on the security level that is associated with the electronic transaction request; and in dependence upon receiving at the first computer system an expected response from the first user for each of the N responses, via the mobile electronic device, authorizing the electronic transaction by the first computer system. 20. A method according to claim 19 wherein the different authentication information comprises multi-factor authentication information.
abstract-claims: (concatenation of the Abstract and Claims fields above)
2,600
10,604
10,604
15,798,629
2,653
An apparatus for airborne-sound-based condition monitoring of a device includes at least one microphone assigned to each component of a plurality of components of the device that is to be monitored.
1. An apparatus for airborne-sound-based condition monitoring of a device comprising: at least one microphone assigned to each component of a plurality of components of the device that is to be monitored. 2. The apparatus according to claim 1, wherein: the at least one microphone has a directional characteristic or is a directional microphone, and the directional microphone includes a rotatable joint or a swiveling joint. 3. The apparatus according to claim 1, further comprising: a primary housing located at a distance from the device. 4. The apparatus according to claim 3, wherein: the at least one microphone is arranged on the primary housing, and/or the at least one microphone is located at another distance from the primary housing. 5. The apparatus according to claim 1, wherein the at least one microphone is arranged in the immediate vicinity or in the airborne-sound near-field of the corresponding component of the plurality of components to which the at least one microphone is assigned. 6. The apparatus according to claim 1, wherein: the at least one microphone includes at least two microphones arranged approximately along a beam that originates from the corresponding component of the plurality of components to which the at least two microphones are assigned. 7. The apparatus according to claim 1, wherein: the at least one microphone is arranged approximately in one plane, and the apparatus is configured to determine the corresponding component of the plurality of components to which the at least one microphone is assigned from transit time differences of a sound signal. 8. The apparatus according to claim 1, further comprising: an energy-harvesting apparatus fastened to the device and configured to convert vibrations or heat of the device into electrical energy. 9. The apparatus according to claim 4, further comprising: a decomposed structure, in which the at least one microphone is configured as a satellite microphone with integrated electronics. 10. The apparatus according to claim 9, wherein the satellite microphone is implemented with energy independence and includes a battery, an accumulator, and/or an energy-harvesting apparatus configured to convert vibrations or heat of the device into electrical energy to power the satellite microphone. 11. The apparatus according to claim 10, wherein: an electronic design of the primary housing or of a primary device and of the satellite microphone are of identical construction, and the apparatus further includes a wireless data transmission device and a further sensor. 12. The apparatus according to claim 1, wherein the airborne-sound-based condition is cavitation, insufficient suction, incorrect installation of the plurality of components, or a deviating speed of rotation. 13. A system comprising: a device including a plurality of components; and an apparatus for airborne-sound-based condition monitoring of the device, the apparatus comprising at least one microphone assigned to each component of the plurality of components of the device. 14. The system according to claim 13, wherein: the device includes a mobile working machine, a powertrain, a hydrostatic gearbox, one or a plurality of hydrostatic displacement machines, an electrical machine, a combustion engine, a hydrodynamic machine, a mechanical gearbox, or a hydraulic control unit, and the plurality of components includes a hydrostatic displacement machine, an auxiliary aggregate of a combustion engine, a drive shaft, a roller bearing, a piston, a tire, or a valve. 15. 
A method for airborne-sound-based condition monitoring of a device including a plurality of components, comprising: assigning at least one microphone to each component of the plurality of components of the device; and evaluating sound signals of the assigned at least one microphone. 16. The method according to claim 15, further comprising: initializing through generating a sound signal at at least one component of the plurality of components. 17. The method according to claim 15, further comprising: automatically activating the condition monitoring through a trigger. 18. The method according to claim 15, further comprising: minimizing interfering noises during the condition monitoring; or isolating the sound signals from the interfering noises during the condition monitoring. 19. The method according to claim 15, further comprising: calculating or estimating a remaining service life of at least one component of the plurality of components. 20. The method according to claim 15, further comprising: localizing at least one component of the plurality of components with a beam-forming method.
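Claim 7 identifies the sounding component from transit-time differences at microphones arranged in one plane, and claim 20 mentions beam-forming localization. The sketch below illustrates the transit-time idea only, under assumed geometry: given measured inter-microphone delays, pick the monitored component whose known position best explains them. The positions, the speed of sound, and the least-squares matching are our assumptions, not the patented evaluation method.

```python
# Illustrative component identification from transit-time differences (cf. claim 7).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def expected_delays(component_pos, mic_positions):
    # Propagation delay from a component to each microphone, relative to mic 0.
    d = np.linalg.norm(mic_positions - component_pos, axis=1) / SPEED_OF_SOUND
    return d - d[0]

def identify_component(measured_delays, component_positions, mic_positions):
    # Return the index of the component whose geometry best explains the delays.
    errors = [
        np.sum((expected_delays(c, mic_positions) - measured_delays) ** 2)
        for c in component_positions
    ]
    return int(np.argmin(errors))

mics = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])   # coplanar microphones
components = np.array([[2.0, 1.0], [1.0, 3.0]])          # e.g., a pump and a valve
measured = expected_delays(components[1], mics)           # simulate an event at component 1
print(identify_component(measured, components, mics))     # -> 1
```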
2,600
10,605
10,605
13,753,171
2,652
Systems and methods for identifying participants present in a communication session are disclosed. More particularly, the identification of participants using a shared communication endpoint is enabled. Identification can include receiving information from a first participant that identifies a second participant. The identification of the second participant can include receiving a selection of the second participant from a list of expected conference participants presented to the first participant through a communication device associated with the first participant, after the first participant has been registered as a participant in the communication session.
1. A communication system, comprising: a first shared communication device; memory; a processor; a conference application stored on the memory and executed by the processor, wherein the conference application is operable to: register a first user of the first shared communication device, wherein information identifying the first user is provided to the conference application by one of the first user and a first user communication device associated with the first user; receive information identifying a second user of the first shared communication device, wherein the information identifying the second user of the first shared communication device is received from the first user of the first shared communication device. 2. The system of claim 1, further comprising: a second shared communication device, wherein the information identifying the first and second users is output by the second shared communication device. 3. The system of claim 2, wherein the conference application is further operable to: register a third user of the second shared communication device, wherein the information identifying the third user is provided to the conference application by one of the first registered user, the second registered user, and a fourth registered user. 4. The system of claim 3, wherein the information identifying the third user is provided to the conference application by a fourth registered user, wherein the fourth registered user is a user of the second shared communication device and is registered with the conference application. 5. The system of claim 1, wherein the information identifying the first user is provided to the conference application by the first user communication device. 6. The system of claim 5, wherein the information identifying the second user is received from the first user device. 7. The system of claim 6, wherein the first shared communication device is a communication endpoint for a real time communication carried over a first communication channel, wherein the memory and the processor are provided as part of a communication feature server, and wherein the first user device is in communication with the communication feature server using a second communication channel that is different than the first communication channel. 8. The system of claim 7, wherein the first user device includes an application that places the first user device in communication with the communication feature server over the second communication channel, and wherein the first user device is not an endpoint for the real time communication carried over the first communication channel. 9. A method for identifying a party using a shared communication endpoint that is participating in a communication session, comprising: receiving information identifying a first user of a first shared communication device at a conference application; receiving information from the identified first user identifying a second user at the conference application, wherein the second user is a user of one of the first shared communication endpoint and a second communication endpoint; modifying a roster maintained by the conference application to include the information identifying the first and second users as active participants in a first communication session, wherein the first shared communication device is an endpoint for the first communication session. 10. 
The method of claim 9, wherein the second user is a user of the first shared communication device, wherein the information identifying the second user is received from the first user, and wherein the information identifying the second user received from the first user of the first shared communication device informs the conference application that the first shared communication device is shared by multiple users. 11. The method of claim 10, wherein the information identifying the first user is received from a first user device, and wherein the information identifying the second user of the first shared communication device is received from the first user device. 12. The method of claim 11, further comprising: registering the first and second users of the first shared communication device with the conference application as being present in the first communication session. 13. The method of claim 12, wherein the first user is registered after receiving presence information for the first user, wherein the received presence information confirms that the first user is at a location of the first shared communication device, and wherein the second user is a user of the first shared communication device. 14. The method of claim 12, further comprising: receiving additional information identifying the second user from a third user, and wherein the second user is registered after receiving the information identifying the second user from the first and third users. 15. The method of claim 14, wherein the third user is one of a user of the first shared communication device and a user of a second shared communication device. 16. The method of claim 11, wherein the second user is identified by name on a display of the first user device, and wherein the information identifying the second user received from the first user device is sent from the first user device to the conference application in response to a selection of the information identifying the second user by the first user. 17. The method of claim 16, further comprising: after receiving the information identifying the second user, the conference application prompting the first user to confirm the continued participation of at least one of the first user and second user in the first communication session. 18. The method of claim 17, further comprising: receiving information at the conference application indicating that one of the first and second users is no longer a participant in the communication session; and modifying the roster to remove that one of the first and second users. 19. 
A tangible computer readable medium having stored thereon computer executable instructions, the computer executable instructions causing a processor to execute a method for identifying a user of a shared communication endpoint during a communication session, the computer readable instructions comprising: instructions to receive information identifying at least a first participant in the communication session, wherein the information identifying the first participant indicates that the first participant is using a first shared communication endpoint, and wherein the information identifying the first participant is received from a first user device that is different than the first shared communication endpoint; instructions to receive information identifying at least a second participant in the communication session, wherein the information identifying the second participant indicates that the second participant is using the first shared communication endpoint, and wherein the information identifying the second participant includes information received from the first participant through the first user device; instructions to include the first and second participants in a roster of participants in the communication session. 20. The tangible computer readable medium of claim 19, wherein the instructions further include: instructions to provide a list of potential participants to a plurality of user devices, wherein the first user device and a second user device are included in the plurality of user devices; instructions to assign a first confidence rating to the information identifying the second participant received from the first user device; instructions to assign a second confidence rating to information identifying the second participant received from a second user device, wherein a confidence rating assigned to information is based on at least one of: a determined location of a user device providing the information, an identified owner of a user device providing the information, or a prompt provided to one of the first and second participants.
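Claims 19 and 20 amount to a roster that records how each participant at a shared endpoint was identified and with what confidence, where confidence depends on, for example, the reporting device's location and its registered owner. The snippet below is one illustrative way to represent that; the data structures and the scoring weights are assumptions of ours, not the claimed method.

```python
# Illustrative roster with confidence-rated identifications (cf. claims 19-20).
from dataclasses import dataclass, field

@dataclass
class Identification:
    participant: str
    reported_by_device: str
    confidence: float  # 0.0 - 1.0

@dataclass
class Roster:
    entries: dict = field(default_factory=dict)  # participant -> best Identification

    def add(self, ident: Identification):
        # Keep the highest-confidence identification seen for each participant.
        current = self.entries.get(ident.participant)
        if current is None or ident.confidence > current.confidence:
            self.entries[ident.participant] = ident

def rate_confidence(device_at_endpoint_location: bool, device_owner_is_reporter: bool) -> float:
    # Assumed heuristic weights; the claims only name the factors, not the scoring.
    score = 0.5
    if device_at_endpoint_location:
        score += 0.3  # device confirmed at the shared endpoint's location
    if device_owner_is_reporter:
        score += 0.2  # report came from the device's registered owner
    return min(score, 1.0)

roster = Roster()
roster.add(Identification("Alice", "alice-phone", rate_confidence(True, True)))
roster.add(Identification("Bob", "alice-phone", rate_confidence(True, False)))  # Alice identifies Bob
print({p: i.confidence for p, i in roster.entries.items()})
```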
2,600
10,606
10,606
15,716,730
2,655
A portable electronic device includes a baseband integrated circuit configured to generate communication data and control signals. The portable electronic device also includes an optical path configured to be coupled to the baseband integrated circuit to transmit the data signals from the baseband integrated circuit. The portable electronic device additionally includes a radiohead configured to be coupled to the optical path to receive the data signals transmitted along the optical path from the baseband integrated circuit.
1. A portable electronic device, comprising: a baseband integrated circuit configured to generate communication data and control signals; an optical path configured to be coupled to the baseband integrated circuit to transmit the data signals from the baseband integrated circuit; and a radiohead configured to be coupled to the optical path to receive the data signals transmitted along the optical path from the baseband integrated circuit. 2. The portable electronic device of claim 1, wherein the optical path is configured to transmit the control signals from the baseband integrated circuit to the radiohead. 3. The portable electronic device of claim 1, comprising a control line configured to transmit the control signals from the baseband integrated circuit to the radiohead. 4. The portable electronic device of claim 3, wherein the control line comprises a metal conductor. 5. The portable electronic device of claim 1, comprising an optical interface to couple the baseband integrated circuit to the optical path. 6. The portable electronic device of claim 5, wherein the optical interface is housed within the baseband integrated circuit. 7. The portable electronic device of claim 1, wherein the radiohead is configured to transmit received communication data signals to the baseband integrated circuit via the optical path. 8. An electronic device, comprising: an enclosure configured to house electronic circuitry of the electronic device; a radiohead configured to generate communication signals; a transmission line configured to be coupled to the radiohead to transmit the communication signals from the radiohead, wherein the transmission line is less than or equal to approximately 10 millimeters in length; and an antenna configured to be coupled to the transmission line to receive the communication signals from the radiohead. 9. The electronic device of claim 8, wherein the radiohead is disposed proximate to an outer edge of the enclosure. 10. The electronic device of claim 8, comprising a baseband integrated circuit configured to generate communication data and control signals. 11. The electronic device of claim 10, comprising a control line coupled between the radiohead and the baseband integrated circuit, wherein the baseband integrated circuit is configured to send the control signals to the radiohead via the control line. 12. The electronic device of claim 10, comprising an optical path configured to be coupled to the baseband integrated circuit to transmit the data signals from the baseband integrated circuit. 13. The electronic device of claim 12, comprising a second radiohead configured to be coupled to the optical path to receive second data signals transmitted along the optical path from the baseband integrated circuit, wherein the second radiohead is configured to generate second communication signals based on the second data signals. 14. The electronic device of claim 12, comprising a second radiohead and a second optical path configured to be coupled to the second radiohead and the baseband integrated circuit, wherein the second radiohead is configured to receive second data signals transmitted along the second optical path from the baseband integrated circuit, wherein the second radiohead is configured to generate second communication signals based on the second data signals. 15. The electronic device of claim 14, wherein the optical path and the second optical path are coupled to the baseband integrated circuit via separate interfaces. 16. 
The electronic device of claim 14, wherein the radiohead is configured to generate the communication signals at a first radio frequency, wherein the second radiohead is configured to generate the second communication signals at a second radio frequency. 17. The electronic device of claim 12, wherein the baseband integrated circuit is configured to transmit the control signals to the radiohead via the optical path. 18. The electronic device of claim 17, wherein the optical path comprises a multimode plastic optical fiber cable. 19. A method, comprising: generating communication data and control signals via a baseband integrated circuit of a portable electronic device; transmitting the data signals from the baseband integrated circuit of the portable electronic device to a radiohead of the portable electronic device across a fiber optic path; and receiving the data signals at the radiohead of the portable electronic device. 20. The method of claim 19, comprising transmitting the control signals from the baseband integrated circuit of the portable electronic device to the radiohead of the portable electronic device across the fiber optic path.
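These claims partition the radio into a baseband IC that feeds one or more radioheads over optical paths for data, with control carried either on the same fiber (claim 2) or on a separate metal control line (claims 3 and 4), and a short (approximately 10 mm or less) transmission line from radiohead to antenna. The toy model below just makes that routing concrete; every class and payload name is illustrative, not part of the claimed device.

```python
# Toy model of the claimed data/control partitioning; names are illustrative only.
class OpticalPath:
    def carry(self, payload):
        return ("optical", payload)

class ControlLine:
    def carry(self, payload):
        return ("electrical", payload)  # e.g., a metal conductor per claim 4

class Radiohead:
    def __init__(self):
        self.log = []
    def receive(self, medium, payload):
        self.log.append((medium, payload))

class BasebandIC:
    def __init__(self, data_path, control_path, radiohead):
        self.data_path, self.control_path, self.radiohead = data_path, control_path, radiohead
    def send_data(self, data):
        # Data signals travel the optical path to the radiohead.
        self.radiohead.receive(*self.data_path.carry(data))
    def send_control(self, command):
        # Control signals travel the separate control line (claim 3 variant).
        self.radiohead.receive(*self.control_path.carry(command))

rh = Radiohead()
bb = BasebandIC(OpticalPath(), ControlLine(), rh)
bb.send_data(b"I/Q samples")
bb.send_control("tune:2.4GHz")
print(rh.log)  # [('optical', b'I/Q samples'), ('electrical', 'tune:2.4GHz')]
```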
2,600
10,607
10,607
15,737,695
2,626
An LED display device includes a first display unit including a plurality of LED elements, a second display unit including a measurement LED element equivalent to the LED element included in the first display unit, a luminance measurement unit to measure a luminance of the measurement LED element, a luminance decrease rate storage unit to store therein a luminance decrease rate of the measurement LED element, an accumulated lighting time estimation unit to estimate an accumulated lighting time at intervals of a predetermined time, an average duty ratio storage unit to store therein an average duty ratio obtained by dividing the accumulated lighting time which is estimated, by an accumulated operating time, and a luminance correction coefficient calculation unit to obtain a luminance decrease rate with reference to the luminance decrease rate storage unit and the average duty ratio storage unit and calculate a luminance correction coefficient from the luminance decrease rate.
1-10. (canceled) 11. An LED display device, comprising: a first display which comprises a plurality of LED elements; a first driver to drive said plurality of LED elements of said first display on the basis of a video signal received from a video signal processor; a second display which comprises at least one measurement LED element equivalent to one of said plurality of LED elements included in said first display; a second driver to drive said measurement LED element of said second display; a luminance measurer to measure a luminance of said measurement LED element; a luminance decrease rate storage to store therein a relation between a lighting time of said measurement LED element and a luminance decrease rate of said measurement LED element based on a measurement result of said luminance measurer; an accumulated lighting time estimator to estimate an accumulated lighting time at intervals of a predetermined time for each of said plurality of LED elements; an average duty ratio storage to store therein an average duty ratio obtained by dividing said accumulated lighting time which is estimated, by an accumulated operating time for said plurality of LED elements; a luminance correction coefficient calculator to obtain a luminance decrease rate with reference to said luminance decrease rate storage and said average duty ratio storage for each of said plurality of LED elements and calculate a luminance correction coefficient from said luminance decrease rate; and a luminance corrector to control said first driver to correct a luminance of each of said plurality of LED elements on the basis of said luminance correction coefficient, wherein said accumulated lighting time estimator calculates a first accumulated lighting time up to a time traced back by said predetermined time from the present time for each of said plurality of LED elements, by multiplying an accumulated operating time of said LED display device up to said time traced back by said predetermined time from said present time by an average duty ratio of the LED element, said accumulated lighting time estimator calculates a second accumulated lighting time from said predetermined time ago up to said present time for each of said plurality of LED elements, by multiplying an accumulated operating time of said LED display device from said predetermined time ago up to said present time by a duty ratio of the LED element at said present time, said accumulated lighting time estimator calculates an accumulated lighting time up to said present time for each of said plurality of LED elements, by adding said first accumulated lighting time and said second accumulated lighting time, said accumulated lighting time estimator updates said average duty ratio for each of said plurality of LED elements by a value obtained by dividing said accumulated lighting time up to said present time by said accumulated operating time of said LED display device up to said present time and stores said average duty ratio into said average duty ratio storage, and said accumulated lighting time estimator performs calculation of said accumulated lighting time up to said present time and update of said average duty ratio at intervals of said predetermined time for each of said plurality of LED elements. 12. 
The LED display device according to claim 11, wherein said accumulated lighting time estimator calculates said second accumulated lighting time by multiplying said accumulated operating time by an addition average of a duty ratio at said present time and said average duty ratio of said LED element, instead of calculating said second accumulated lighting time by multiplying said accumulated operating time by a duty ratio of the LED element at said present time. 13. The LED display device according to claim 11, wherein said luminance correction coefficient calculator calculates a luminance correction coefficient for each of all said plurality of LED elements with a luminance decrease rate of one of said plurality of LED elements, which has the largest luminance decrease rate, used as a reference. 14. The LED display device according to claim 12, wherein said luminance correction coefficient calculator calculates a luminance correction coefficient for each of all said plurality of LED elements with a luminance decrease rate of one of said plurality of LED elements, which has the largest luminance decrease rate, used as a reference. 15. The LED display device according to claim 11, wherein said second driver drives said measurement LED element with a duty ratio of 100% during a period while said LED display device operates. 16. The LED display device according to claim 12, wherein said second driver drives said measurement LED element with a duty ratio of 100% during a period while said LED display device operates. 17. A luminance correction method for an LED display device, wherein said LED display device comprises: a first display which comprises a plurality of LED elements; a first driver to drive said first display on the basis of a video signal received from a video signal processor; a second display which comprises at least one measurement LED element equivalent to one of said plurality of LED elements included in said first display; a second driver to drive said second display; a luminance measurer; a luminance decrease rate storage; an accumulated lighting time estimator; an average duty ratio storage; a luminance correction coefficient calculator; and a luminance corrector, said luminance correction method comprising: (a) measuring a luminance of said measurement LED element by said luminance measurer; (b) storing a relation between a lighting time of said measurement LED element and a luminance decrease rate of said measurement LED element based on a measurement result of said luminance measurer into said luminance decrease rate storage; (c) estimating an accumulated lighting time at intervals of a predetermined time for each of said plurality of LED elements by said accumulated lighting time estimator; (d) storing an average duty ratio obtained by dividing said accumulated lighting time which is estimated, by an accumulated operating time for said plurality of LED elements into said average duty ratio storage; (e) obtaining a luminance decrease rate with reference to said luminance decrease rate storage and said average duty ratio storage for each of said plurality of LED elements and calculating a luminance correction coefficient from said luminance decrease rate by said luminance correction coefficient calculator; and (f) controlling said first driver to correct a luminance of each of said plurality of LED elements on the basis of said luminance correction coefficient by said luminance corrector, said operation (c) comprises: (c1) calculating a first accumulated lighting time up 
to a time traced back by said predetermined time from the present time for each of said plurality of LED elements, by multiplying an accumulated operating time of said LED display device up to said time traced back by said predetermined time from said present time by an average duty ratio of the LED element by said accumulated lighting time estimator; (c2) calculating a second accumulated lighting time from said predetermined time ago up to said present time for each of said plurality of LED elements, by multiplying an accumulated operating time of said LED display device from said predetermined time ago up to said present time by a duty ratio of the LED element at said present time by said accumulated lighting time estimator; (c3) calculating an accumulated lighting time up to said present time for each of said plurality of LED elements, by adding said first accumulated lighting time and said second accumulated lighting time by said accumulated lighting time estimator after said operations (c1) and (c2); and (c4) updating said average duty ratio for each of said plurality of LED elements by a value obtained by dividing said accumulated lighting time up to said present time by said accumulated operating time of said LED display device up to said present time and storing said average duty ratio into said average duty ratio storage by said accumulated lighting time estimator after said operation (c3), and said accumulated lighting time estimator repeats said operations (c1), (c2), (c3), and (c4) at intervals of said predetermined time. 18. The luminance correction method according to claim 17, wherein said operation (c2) comprises calculating said second accumulated lighting time by multiplying said accumulated operating time by an addition average of a duty ratio at said present time and said average duty ratio of said LED element, instead of calculating said second accumulated lighting time by multiplying said accumulated operating time by a duty ratio of the LED element at said present time. 19. The luminance correction method according to claim 17, wherein said luminance correction coefficient calculator calculates a luminance correction coefficient for each of all said plurality of LED elements with a luminance decrease rate of one of said plurality of LED elements, which has the largest luminance decrease rate, used as a reference in said operation (e). 20. The luminance correction method according to claim 18, wherein said luminance correction coefficient calculator calculates a luminance correction coefficient for each of all said plurality of LED elements with a luminance decrease rate of one of said plurality of LED elements, which has the largest luminance decrease rate, used as a reference in said operation (e). 21. The luminance correction method according to claim 17, wherein said second driver drives said measurement LED element with a duty ratio of 100% during a period while said LED display device operates. 22. The luminance correction method according to claim 18, wherein said second driver drives said measurement LED element with a duty ratio of 100% during a period while said LED display device operates.
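Claim 11's bookkeeping reduces to simple arithmetic per LED element: reconstruct the lighting time up to the previous update as accumulated operating time multiplied by the stored average duty ratio, add the new interval at the current duty ratio (or, per claims 12 and 18, at the arithmetic mean of the current and average duty ratios), and re-derive the average duty ratio. A worked sketch follows; the variable names are ours, and the claim-13 normalization at the end is one plausible reading of "largest luminance decrease rate used as a reference", not a formula given in the claims.

```python
# Worked sketch of the per-element update in claim 11 (steps c1-c4).
def update_element(avg_duty, prev_operating_h, interval_h, current_duty, use_mean=False):
    duty = (current_duty + avg_duty) / 2 if use_mean else current_duty  # claims 12/18 variant
    first = prev_operating_h * avg_duty          # (c1) lighting time up to the last update
    second = interval_h * duty                   # (c2) lighting time over the new interval
    accumulated = first + second                 # (c3) total accumulated lighting time
    new_avg = accumulated / (prev_operating_h + interval_h)  # (c4) updated average duty ratio
    return accumulated, new_avg

# Example: 1000 h of operation at an average duty of 0.40, then 10 h at duty 0.80.
lit, avg = update_element(0.40, 1000.0, 10.0, 0.80)
print(lit, avg)  # 408.0 hours lit, new average duty ~0.4040

# Claim 13: correct every element relative to the most-degraded one (assumed normalization).
decreases = {"led_a": 0.10, "led_b": 0.25}  # illustrative luminance decrease rates
worst = max(decreases.values())
coeffs = {k: (1 - worst) / (1 - v) for k, v in decreases.items()}
print(coeffs)  # the most-degraded element gets coefficient 1.0; others are dimmed to match
```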
An LED display device includes a first display unit including a plurality of LED elements, a second display unit including a measurement LED element equivalent to the LED element included in the first display unit, a luminance measurement unit to measure a luminance of the measurement LED element, a luminance decrease rate storage unit to store therein a luminance decrease rate of the measurement LED element, an accumulated lighting time estimation unit to estimate an accumulated lighting time at intervals of a predetermined time, an average duty ratio storage unit to store therein an average duty ratio obtained by dividing the accumulated lighting time which is estimated, by an accumulated operating time, and a luminance correction coefficient calculation unit to obtain a luminance decrease rate with reference to the luminance decrease rate storage unit and the average duty ratio storage unit and calculate a luminance correction coefficient from the luminance decrease rate.1-10. (canceled) 11. An LED display device, comprising: a first display which comprises a plurality of LED elements; a first driver to drive said plurality of LED elements of said first display on the basis of a video signal received from a video signal processor; a second display which comprises at least one measurement LED element equivalent to one of said plurality of LED elements included in said first display; a second driver to drive said measurement LED element of said second display; a luminance measurer to measure a luminance of said measurement LED element; a luminance decrease rate storage to store therein a relation between a lighting time of said measurement LED element and a luminance decrease rate of said measurement LED element based on a measurement result of said luminance measurer; an accumulated lighting time estimator to estimate an accumulated lighting time at intervals of a predetermined time for each of said plurality of LED elements; an average duty ratio storage to store therein an average duty ratio obtained by dividing said accumulated lighting time which is estimated, by an accumulated operating time for said plurality of LED elements; a luminance correction coefficient calculator to obtain a luminance decrease rate with reference to said luminance decrease rate storage and said average duty ratio storage for each of said plurality of LED elements and calculate a luminance correction coefficient from said luminance decrease rate; and a luminance corrector to control said first driver to correct a luminance of each of said plurality of LED elements on the basis of said luminance correction coefficient, wherein said accumulated lighting time estimator calculates a first accumulated lighting time up to a time traced back by said predetermined time from the present time for each of said plurality of LED elements, by multiplying an accumulated operating time of said LED display device up to said time traced back by said predetermined time from said present time by an average duty ratio of the LED element, said accumulated lighting time estimator calculates a second accumulated lighting time from said predetermined time ago up to said present time for each of said plurality of LED elements, by multiplying an accumulated operating time of said LED display device from said predetermined time ago up to said present time by a duty ratio of the LED element at said present time, said accumulated lighting time estimator calculates an accumulated lighting time up to said present time for each of said 
plurality of LED elements, by adding said first accumulated lighting time and said second accumulated lighting time, said accumulated lighting time estimator updates said average duty ratio for each of said plurality of LED elements by a value obtained by dividing said accumulated lighting time up to said present time by said accumulated operating time of said LED display device up to said present time and stores said average duty ratio into said average duty ratio storage, and said accumulated lighting time estimator performs calculation of said accumulated lighting time up to said present time and update of said average duty ratio at intervals of said predetermined time for each of said plurality of LED elements. 12. The LED display device according to claim 11, wherein said accumulated lighting time estimator calculates said second accumulated lighting time by multiplying said accumulated operating time by an addition average of a duty ratio at said present time and said average duty ratio of said LED element, instead of calculating said second accumulated lighting time by multiplying said accumulated operating time by a duty ratio of the LED element at said present time. 13. The LED display device according to claim 11, wherein said luminance correction coefficient calculator calculates a luminance correction coefficient for each of all said plurality of LED elements with a luminance decrease rate of one of said plurality of LED elements, which has the largest luminance decrease rate, used as a reference. 14. The LED display device according to claim 12, wherein said luminance correction coefficient calculator calculates a luminance correction coefficient for each of all said plurality of LED elements with a luminance decrease rate of one of said plurality of LED elements, which has the largest luminance decrease rate, used as a reference. 15. The LED display device according to claim 11, wherein said second driver drives said measurement LED element with a duty ratio of 100% during a period while said LED display device operates. 16. The LED display device according to claim 12, wherein said second driver drives said measurement LED element with a duty ratio of 100% during a period while said LED display device operates. 17. 
A luminance correction method for an LED display device, wherein said LED display device comprises: a first display which comprises a plurality of LED elements; a first driver to drive said first display on the basis of a video signal received from a video signal processor; a second display which comprises at least one measurement LED element equivalent to one of said plurality of LED elements included in said first display; a second driver to drive said second display; a luminance measurer; a luminance decrease rate storage; an accumulated lighting time estimator; an average duty ratio storage; a luminance correction coefficient calculator; and a luminance corrector, said luminance correction method comprising: (a) measuring a luminance of said measurement LED element by said luminance measurer; (b) storing a relation between a lighting time of said measurement LED element and a luminance decrease rate of said measurement LED element based on a measurement result of said luminance measurer into said luminance decrease rate storage; (c) estimating an accumulated lighting time at intervals of a predetermined time for each of said plurality of LED elements by said accumulated lighting time estimator; (d) storing an average duty ratio obtained by dividing said accumulated lighting time which is estimated, by an accumulated operating time for said plurality of LED elements into said average duty ratio storage; (e) obtaining a luminance decrease rate with reference to said luminance decrease rate storage and said average duty ratio storage for each of said plurality of LED elements and calculating a luminance correction coefficient from said luminance decrease rate by said luminance correction coefficient calculator; and (f) controlling said first driver to correct a luminance of each of said plurality of LED elements on the basis of said luminance correction coefficient by said luminance corrector, said operation (c) comprises: (c1) calculating a first accumulated lighting time up to a time traced back by said predetermined time from the present time for each of said plurality of LED elements, by multiplying an accumulated operating time of said LED display device up to said time traced back by said predetermined time from said present time by an average duty ratio of the LED element by said accumulated lighting time estimator; (c2) calculating a second accumulated lighting time from said predetermined time ago up to said present time for each of said plurality of LED elements, by multiplying an accumulated operating time of said LED display device from said predetermined time ago up to said present time by a duty ratio of the LED element at said present time by said accumulated lighting time estimator; (c3) calculating an accumulated lighting time up to said present time for each of said plurality of LED elements, by adding said first accumulated lighting time and said second accumulated lighting time by said accumulated lighting time estimator after said operations (c1) and (c2); and (c4) updating said average duty ratio for each of said plurality of LED elements by a value obtained by dividing said accumulated lighting time up to said present time by said accumulated operating time of said LED display device up to said present time and storing said average duty ratio into said average duty ratio storage by said accumulated lighting time estimator after said operation (c3), and said accumulated lighting time estimator repeats said operations (c1), (c2), (c3), and (c4) at intervals of said 
predetermined time. 18. The luminance correction method according to claim 17, wherein said operation (c2) comprises calculating said second accumulated lighting time by multiplying said accumulated operating time by an addition average of a duty ratio at said present time and said average duty ratio of said LED element, instead of calculating said second accumulated lighting time by multiplying said accumulated operating time by a duty ratio of the LED element at said present time. 19. The luminance correction method according to claim 17, wherein said luminance correction coefficient calculator calculates a luminance correction coefficient for each of all said plurality of LED elements with a luminance decrease rate of one of said plurality of LED elements, which has the largest luminance decrease rate, used as a reference in said operation (e). 20. The luminance correction method according to claim 18, wherein said luminance correction coefficient calculator calculates a luminance correction coefficient for each of all said plurality of LED elements with a luminance decrease rate of one of said plurality of LED elements, which has the largest luminance decrease rate, used as a reference in said operation (e). 21. The luminance correction method according to claim 17, wherein said second driver drives said measurement LED element with a duty ratio of 100% during a period while said LED display device operates. 22. The luminance correction method according to claim 18, wherein said second driver drives said measurement LED element with a duty ratio of 100% during a period while said LED display device operates.
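The accumulated-lighting-time bookkeeping in claims 11 and 17 reduces to a little arithmetic per LED element per update interval. Below is a minimal Python sketch of operations (c1) through (c4), including the claim 12/18 variant that substitutes the addition average of the present and stored duty ratios; the function and variable names are illustrative assumptions, not taken from the application.

```python
# Hypothetical sketch of operations (c1)-(c4) of claim 17; names are illustrative.

def update_accumulated_lighting_time(
    avg_duty: float,    # stored average duty ratio of one LED element
    t_op_prev: float,   # accumulated device operating time up to (now - dt)
    t_op_now: float,    # accumulated device operating time up to now
    duty_now: float,    # duty ratio of the element at the present time
    use_addition_average: bool = False,  # claim 12/18 variant
) -> tuple[float, float]:
    """Return (accumulated lighting time up to now, updated average duty ratio)."""
    # (c1) lighting time up to the point traced back by the predetermined time
    t_light_prev = t_op_prev * avg_duty
    # (c2) lighting time for the most recent interval; claims 12/18 replace the
    # present duty ratio with the mean of the present and average duty ratios
    duty_recent = (duty_now + avg_duty) / 2 if use_addition_average else duty_now
    t_light_recent = (t_op_now - t_op_prev) * duty_recent
    # (c3) total accumulated lighting time up to the present time
    t_light_now = t_light_prev + t_light_recent
    # (c4) updated average duty ratio, stored back into the average duty ratio storage
    new_avg_duty = t_light_now / t_op_now if t_op_now > 0 else 0.0
    return t_light_now, new_avg_duty
```

Because (c4) divides by the total operating time, the stored average duty ratio stays consistent with the running estimate, so the next interval's (c1) term needs no per-interval history beyond the two stored values.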
2,600
10,608
10,608
13,801,801
2,683
Media rendering system including a remote control device and associated docking station. The remote control device interfaces with a remote server to stream media content for local and/or external playback. The remote control device may interface with a docking station to play back rendered media on one or more entertainment appliances. The portable device preferably has standard remote control capability in order to enable advanced features and functions for media playback.
1. A method for causing an appliance to play streaming media, comprising: causing a playing of a media stream on a portable electronic device; placing the portable electronic device into communication with the appliance; and in response to the portable electronic device being placed into communication with the appliance, causing the appliance to automatically select a media input mode, stopping the playing of the media stream on the portable electronic device, and commencing a playing of the media stream on the appliance via use of the media input mode that was automatically selected. 2. The method as recited in claim 1, wherein, in response to the portable electronic device being placed into communication with the appliance, the media stream is caused to be routed from the portable electronic device to the appliance to commence the playing of the media stream on the appliance via use of the media input mode that was automatically selected. 3. The method as recited in claim 2, wherein the portable electronic device is placed into communication with the appliance by being placed into a docking station integrated into the appliance. 4. The method as recited in claim 2, wherein the portable electronic device is adapted to command functional operations of the appliance. 5. The method as recited in claim 1, wherein the appliance comprises a television. 6. The method as recited in claim 1, comprising causing the media stream to be accessed from a server device via a wide area network. 7. The method as recited in claim 1, comprising causing the media stream to be accessed from a server device via a local area network.
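Claim 1 describes a handoff protocol: docking the portable device triggers automatic input selection on the appliance, local playback stops, and the same stream resumes on the appliance. A minimal sketch of that sequence follows, with hypothetical device and appliance classes standing in for any real docking or rendering API.

```python
# Hypothetical sketch of the claim 1 handoff; classes are illustrative stand-ins.

class Appliance:
    def __init__(self):
        self.input_mode = "tuner"
        self.stream = None

    def select_media_input(self):
        # claim 1: the appliance automatically selects a media input mode
        self.input_mode = "media"

    def play(self, stream):
        self.stream = stream


class PortableDevice:
    def __init__(self, stream):
        self.stream = stream
        self.playing = True

    def dock(self, appliance: Appliance):
        """Runs when the device is placed into communication with the appliance."""
        appliance.select_media_input()   # automatic input selection
        self.playing = False             # stop local playback
        appliance.play(self.stream)      # route and commence the stream on the appliance
```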
2,600
10,609
10,609
14,721,793
2,654
There is provided a media player device for use with a text medium having a plurality of pages each having a plurality of words, the media player device comprising an audio output; a memory storing a text medium application software; and a processor configured to execute the text medium application software to receive a first signal emitted from a wireless communication element embedded in the text medium, the first signal including a text medium ID uniquely identifying the text medium, and play an audio, via the audio output and in response to receiving the text medium ID, wherein the audio corresponds to the text medium ID and the audio pronounces the plurality of words in a same sequence appearing in each of the plurality of pages.
1. A media player device for use with a text medium having a plurality of words, the media player device comprising: an audio output; a memory storing a text medium application software; and a processor configured to execute the text medium application software to: receive a first signal emitted from a wireless communication element of the text medium, the first signal including a text medium ID uniquely identifying the text medium; and play an audio, via the audio output and in response to receiving the text medium ID, wherein the audio corresponds to the text medium ID and the audio pronounces the plurality of words in a same sequence appearing in the text medium. 2. The media player device of claim 1, wherein the memory includes the text medium ID, and wherein after receiving the text medium ID from the wireless communication element of the text medium, the processor is further configured to: identify the audio stored in the memory and corresponding to the text medium ID. 3. The media player device of claim 1, wherein after receiving the text medium ID from the wireless communication element of the text medium, the processor is further configured to: transmit the text medium ID to a server over a network; and receive, from the server and in response to transmitting the text medium ID, the audio corresponding to the text medium ID. 4. The media player device of claim 1, wherein prior to playing the audio, the processor is further configured to: receive a second signal emitted from a second wireless communication element embedded in a second text medium, wherein the second signal includes a second text medium ID uniquely identifying the second text medium. 5. The media player device of claim 4, wherein after receiving the first signal and the second signal, the processor is further configured to: detect a proximity of each of the text medium and the second text medium; determine, based on the proximity of each of the text medium and the second text medium, that the text medium is a closest text medium to the media player device; and wherein the processor plays the audio corresponding to the text medium ID in response to further determining that the text medium is the closest text medium to the media player device. 6. The media player device of claim 1, wherein the processor is further configured to: create a bookmark location in the audio corresponding to the text medium ID, the bookmark location corresponding to a position of a last played word in the audio; and store the bookmark location in the memory. 7. The media player device of claim 6, wherein before playing the audio corresponding to the text medium ID, the processor is configured to: locate, in response to receiving the text medium ID, the bookmark location in the memory; and continue playing the audio corresponding to the text medium ID from the bookmark location. 8. The media player device of claim 1 further comprising a display, wherein the processor is further configured to: display, in response to receiving the text medium ID, a visual content corresponding to the text medium ID on the display. 9. The media player device of claim 1, wherein the first signal is one of a radio frequency identification (RFID) signal, a near field communication (NFC) signal, a Bluetooth® (BT) signal, and a Bluetooth® low energy (BLE) signal. 10. 
The media player device of claim 1, wherein the media player device is connected to a wireless network having at least a second media player device connected to the wireless network, the second media player device playing the audio corresponding to the text medium ID, and wherein after receiving the first signal, the processor is further configured to: receive, via the wireless network, a playback location in the audio corresponding to the text medium ID from the second media player device, the playback location corresponding to a current location of the audio corresponding to the text medium ID being played by the second media player device; and play the audio corresponding to the text medium ID from the playback location. 11. A method for use with a media player device having an audio output, a memory and a processor, the method comprising: receiving, using the processor, a first signal emitted from a wireless communication element embedded in a text medium having a plurality of pages each having a plurality of words, the first signal including a text medium ID uniquely identifying the text medium; and playing, using the processor, an audio, via the audio output and in response to receiving the text medium ID, wherein the audio corresponds to the text medium ID and the audio pronounces the plurality of words in a same sequence appearing in the text medium. 12. The method of claim 11, wherein the memory includes the text medium ID, and wherein after receiving the text medium ID from the wireless communication element of the text medium, the method further comprises: identifying, using the processor, the audio stored in the memory and corresponding to the text medium ID. 13. The method of claim 11, wherein after receiving the text medium ID from the wireless communication element of the text medium, the method further comprises: transmitting, using the processor, the text medium ID to a server over a network; and receiving, from the server and in response to transmitting the text medium ID, the audio corresponding to the text medium ID. 14. The method of claim 11, wherein prior to playing the audio, the method further comprises: receiving, using the processor, a second signal emitted from a second wireless communication element embedded in a second text medium, wherein the second signal includes a second text medium ID uniquely identifying the second text medium. 15. The method of claim 14, wherein after receiving the first signal and the second signal, the method further comprises: detecting, using the processor, a proximity of each of the text medium and the second text medium; determining, using the processor, based on the proximity of each of the text medium and the second text medium, that the text medium is a closest text medium to the media player device; and wherein the processor plays the audio corresponding to the text medium ID in response to further determining that the text medium is the closest text medium to the media player device. 16. The method of claim 11, further comprising: creating, using the processor, a bookmark location in the audio corresponding to the text medium ID, the bookmark location corresponding to a position of a last played word in the audio; and storing the bookmark location in the memory. 17. 
The method of claim 16, wherein, before playing the audio corresponding to the text medium ID, the method further comprises: locating, using the processor, in response to receiving the text medium ID, the bookmark location in the memory; and continuing to play, using the processor, the audio corresponding to the text medium ID from the bookmark location. 18. The method of claim 11, wherein the media player device further comprises a display, the method further comprising: displaying, in response to receiving the text medium ID, a visual content corresponding to the text medium ID on the display. 19. The method of claim 11, wherein the media player device is connected to a wireless network having at least a second media player device connected to the wireless network, the second media player device playing the audio corresponding to the text medium ID, and wherein after receiving the first signal, the method further comprises: receiving, using the processor, via the wireless network, a playback location in the audio corresponding to the text medium ID from the second media player device, the playback location corresponding to a current location of the audio corresponding to the text medium ID being played by the second media player device; and playing, using the processor, the audio corresponding to the text medium ID from the playback location. 20. A media player device for use with a text medium appearing in a plurality of pages each having a plurality of words, the media player device comprising: an audio output; a memory storing a text medium application software; and a processor configured to execute the text medium application software to: obtain a text medium ID uniquely identifying the text medium; and play an audio, via the audio output and in response to receiving the text medium ID, wherein the audio corresponds to the text medium ID and the audio pronounces the plurality of words in a same sequence appearing in each of the plurality of pages.
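Claims 5 through 7 combine two small pieces of logic: choosing the closest of several detected text media and resuming its audio from a stored bookmark. The sketch below assumes proximity is estimated from received signal strength (the claims leave the proximity measure open); the identifiers and data layout are hypothetical.

```python
# Hypothetical sketch of claims 5-7; the RSSI proximity measure is an assumption.

def choose_closest(signals: dict[str, float]) -> str:
    """signals maps text_medium_id -> proximity estimate (e.g., RSSI in dBm);
    the highest value is treated as the closest text medium."""
    return max(signals, key=signals.get)

def start_position(text_medium_id: str, bookmarks: dict[str, float]) -> float:
    # claims 6-7: resume from the stored last-played position, else from the start
    return bookmarks.get(text_medium_id, 0.0)

# Example: two embedded tags answered; the stronger one wins and playback
# resumes from its saved bookmark.
signals = {"book-0042": -48.0, "book-0317": -71.0}
bookmarks = {"book-0042": 123.5}   # seconds into the audio
medium = choose_closest(signals)
offset = start_position(medium, bookmarks)
```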
2,600
10,610
10,610
16,170,322
2,691
A wearable interface device, such as a head-mounted display, provides an augmented reality and/or display system and may be used in conjunction with medical devices and the performance of medical treatments, particularly a dialysis machine and a dialysis treatment. The wearable interface device may be worn by a user, such as a health care practitioner (HCP), in connection with remotely monitoring and/or controlling the dialysis machine during the dialysis treatment. The HCP may receive alerts and/or other information concerning the dialysis treatment from the dialysis machine that are displayed on the wearable interface device and may use the wearable interface device to control the dialysis machine via the exchange of wireless signals with the dialysis machine. The wearable interface device may recognize commands from the HCP, such as gestures, to provide non-contact operation of the wearable interface device and remote control of the dialysis machine by the HCP.
1. A method of remotely interfacing with a medical device, comprising: providing a head-mounted wearable interface device that enables remote interfacing with the medical device by a user wearing the wearable interface device; wirelessly exchanging signals between the wearable interface device and the medical device, wherein the signals correspond to a treatment performed using the medical device; processing at least one of the signals at the wearable interface device to generate information corresponding to the treatment performed using the medical device; displaying the information on a screen of the wearable interface device; and augmenting a real view through the wearable interface device with information corresponding to the treatment performed using the medical device. 2. The method according to claim 1, wherein the medical device includes a dialysis machine. 3. The method according to claim 1, further comprising: recognizing, at the wearable interface device, at least one non-contact command input by the user. 4. The method according to claim 3, wherein the non-contact command is used to remotely control the medical device during the treatment. 5. The method according to claim 4, wherein the non-contact command includes a command to remotely control the medical device during the treatment by modifying at least one parameter of the medical device from a position in which the wearable interface device is out of a visual line-of-sight of the medical device. 6. The method according to claim 1, wherein the information displayed on the screen of the wearable interface device includes dialysis treatment information. 7. The method according to claim 6, wherein the dialysis treatment information includes an alert concerning the dialysis treatment. 8. (canceled) 9. A non-transitory computer-readable medium storing software that remotely interfaces with a medical device, the software comprising: executable code that operates a head-mounted wearable interface device that enables remote interfacing with the medical device by a user wearing the wearable interface device; executable code that wirelessly exchanges signals between the wearable interface device and the medical device, wherein the signals correspond to a treatment performed using the medical device; executable code that processes at least one of the signals at the wearable interface device to generate information corresponding to the treatment performed using the medical device; executable code that displays the information on a screen of the wearable interface device; and executable code that augments a real view through the wearable interface device with information corresponding to the treatment performed using the medical device. 10. The non-transitory computer-readable medium according to claim 9, wherein the medical device includes a dialysis machine. 11. The non-transitory computer-readable medium according to claim 9, wherein the software further comprises: executable code that recognizes, at the wearable interface device, at least one non-contact command input by the user. 12. The non-transitory computer-readable medium according to claim 11, wherein the non-contact command is used to remotely control the medical device during the treatment. 13. 
The non-transitory computer-readable medium according to claim 12, wherein the non-contact command includes a command to remotely control the medical device during the treatment by modifying at least one parameter of the medical device from a position in which the wearable interface device is out of a visual line-of-sight of the medical device. 14. The non-transitory computer-readable medium according to claim 9, wherein the information displayed on the screen of the wearable interface device includes dialysis treatment information. 15. The non-transitory computer-readable medium according to claim 14, wherein the dialysis treatment information includes an alert concerning the dialysis treatment. 16. (canceled) 17. A system for enabling remote interfacing with a dialysis machine, comprising: at least one sensor of the dialysis machine that receives and transmits signals corresponding to a dialysis treatment performed by the dialysis machine; a head-mounted wearable interface device that is worn by a user and that is wirelessly coupled to the at least one sensor, wherein the wearable interface device includes: at least one processor that processes received signals into information corresponding to the dialysis treatment; and at least one screen that displays an augmented view through the wearable interface device that includes a real view and the information corresponding to the dialysis treatment. 18. The system according to claim 17, wherein the wearable interface device further includes: a camera that captures an image being viewed using the wearable interface device. 19. The system according to claim 17, wherein the wearable interface device controls the dialysis machine when the wearable interface device is out of a visual line-of-sight with the dialysis machine during the dialysis treatment. 20. The system according to claim 17, wherein the wearable interface device includes a non-transitory computer readable medium storing software that enables control of the dialysis machine during the dialysis treatment using at least one dialysis treatment screen displayed on the head-mounted wearable interface device. 21. The method according to claim 1, wherein the head-mounted wearable interface device includes a camera. 22. The method according to claim 21, wherein an image that has been captured by the camera is displayed on the screen of the head-mounted wearable interface device. 23. The method according to claim 22, wherein the image includes a medication, and wherein the head-mounted wearable interface device performs one or more of: (i) displaying information documenting the medication, or (ii) displaying an alert concerning the medication. 24. The method according to claim 1, wherein the real view through the head-mounted wearable interface device is augmented with information corresponding to the medical device performing the treatment. 25. The method according to claim 1, wherein the real view through the head-mounted wearable interface device is augmented with information corresponding to a patient undergoing the treatment. 26. The non-transitory computer-readable medium according to claim 9, wherein the head-mounted wearable interface device includes a camera. 27. The non-transitory computer-readable medium according to claim 26, wherein an image that has been captured by the camera is displayed on the screen of the head-mounted wearable interface device. 28. 
The non-transitory computer-readable medium according to claim 27, wherein the image includes a medication, and wherein the head-mounted wearable interface device performs one or more of: (i) displaying information documenting the medication, or (ii) displaying an alert concerning the medication. 29. The non-transitory computer-readable medium according to claim 9, wherein the real view through the head-mounted wearable interface device is augmented with information corresponding to the medical device performing the treatment. 30. The non-transitory computer-readable medium according to claim 9, wherein the real view through the head-mounted wearable interface device is augmented with information corresponding to a patient undergoing the treatment. 31. The system according to claim 17, wherein the head-mounted wearable interface device further includes at least one command recognition component that recognizes a non-contact command input by the user to the head-mounted wearable interface device. 32. The system according to claim 18, wherein an image that has been captured by the camera is displayed on the screen of the head-mounted wearable interface device. 33. The system according to claim 32, wherein the image includes a medication, and wherein the head-mounted wearable interface device performs one or more of: (i) displaying information documenting the medication, or (ii) displaying an alert concerning the medication. 34. The system according to claim 17, wherein, in the augmented view, the real view through the head-mounted wearable interface device is augmented with information corresponding to the dialysis machine performing the dialysis treatment. 35. The system according to claim 17, wherein, in the augmented view, the real view through the head-mounted wearable interface device is augmented with information corresponding to a patient undergoing the dialysis treatment.
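The core data path in claim 1, receiving a treatment signal and turning it into overlay text (with the claim 7 alert case), can be suggested in a few lines. The field names and the alert criterion below are assumptions for illustration only.

```python
# Hypothetical sketch of the signal-to-display path in claims 1 and 7;
# the signal structure is an illustrative assumption.

def render_overlay(signal: dict) -> list[str]:
    """Turn one wirelessly received treatment signal into overlay lines."""
    lines = [f"{signal['parameter']}: {signal['value']} {signal['unit']}"]
    # claim 7: surface an alert concerning the treatment when one is present
    if signal.get("alert"):
        lines.append(f"ALERT: {signal['alert']}")
    return lines

print(render_overlay(
    {"parameter": "venous pressure", "value": 182, "unit": "mmHg",
     "alert": "pressure above configured limit"}
))
```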
2,600
10,611
10,611
14,552,729
2,652
A computing system for managing a plurality of resources in an enterprise is disclosed. The computing system includes a monitoring module for monitoring one or more attributes of at least one communication session between at least one customer and at least one resource of the plurality of resources. The system further includes a database for storing the one or more monitored attributes. The system further includes a computing module for computing a contact time parameter for each of the at least one resource of the plurality of resources based on the one or more stored attributes. The system further includes an agent selection module for selecting at least one resource from the plurality of resources based on the computed contact time parameter. The system further includes a routing module for routing the at least one communication session to the at least one selected resource.
1. A computing system for managing a plurality of resources in an enterprise, the system comprising: a processor; a computer-readable medium coupled to the processor, the medium comprising one or more computer readable instructions, the processor executing the one or more computer readable instructions to: monitor attributes of at least one communication session between at least one customer and at least one resource of the plurality of resources; store the monitored attributes in a database; compute a contact time parameter for each of the at least one resource of the plurality of resources based on the stored monitored attributes; select a resource from the plurality of resources based on the computed contact time parameter; and route a communication session for an incoming contact to the selected resource. 2. The system of claim 1, wherein the processor further extracts at least one keyword from the at least one communication session. 3. The system of claim 2, wherein the extracted keyword is stored in the database. 4. The system of claim 1, wherein the processor further calculates a score for each resource of the plurality of resources based on the computed contact time parameter. 5. The system of claim 1, wherein the processor further ranks the computed contact time parameter based on the calculated score. 6. The system of claim 5, wherein the contact time parameter for each of the plurality of resources is ranked based on at least one business strategy. 7. The system of claim 5, wherein the processor further selects the at least one resource based on the ranks of each of the computed contact time parameters associated with the plurality of the resources. 8. The system of claim 1, wherein the processor further enables a supervisor to manually select the at least one resource from the plurality of resources. 9. The system of claim 1, wherein the at least one resource is one of a reserve agent, an agent, a supervisor, or a Subject Matter Expert (SME). 10. A computer-implemented method for managing a plurality of resources in an enterprise, the method comprising: monitoring, by a processor, one or more attributes of at least one communication session between at least one customer and at least one resource of the plurality of resources; storing, by the processor, the one or more monitored attributes in at least one database; computing, by the processor, a contact time parameter for each of the at least one resource of the plurality of resources based on the one or more stored monitored attributes; selecting, by the processor, a resource from the plurality of resources based on the computed contact time parameter; and routing, by the processor, a communication session for an incoming contact to the selected resource. 11. The method of claim 10, wherein the processor further extracts at least one keyword from the at least one communication session. 12. The method of claim 11, further comprising storing the extracted keyword in the at least one database. 13. The method of claim 10, wherein the processor further calculates a score for each resource of the plurality of resources based on the computed contact time parameter. 14. The method of claim 10, wherein the processor further ranks the computed contact time parameter based on the calculated score. 15. The method of claim 14, wherein the processor further selects the at least one resource based on the rank of each of the computed contact time parameters. 16. 
The method of claim 10, wherein the processor further enables a supervisor to manually select the at least one resource from the plurality of resources. 17. The method of claim 10, wherein the at least one resource is one of a reserve agent, an agent, a supervisor, or a Subject Matter Expert (SME). 18. A computer-implemented method for selecting at least one resource for at least one communication session in an enterprise, the method comprising: monitoring, by a processor, one or more attributes of the at least one communication session between at least one customer and the at least one resource of a plurality of resources; storing, by the processor, the one or more monitored attributes in at least one database; computing, by the processor, a contact time parameter for each of the at least one resource of the plurality of resources based on the one or more stored monitored attributes; selecting, by the processor, at least one resource from the plurality of resources based on the computed contact time parameter, wherein the at least one resource is selected by ranking the computed contact time parameter; and routing, by the processor, at least one communication session for an incoming contact to the at least one selected resource to provide customer service. 19. The method of claim 18, wherein the processor further calculates a score for each resource of the plurality of resources based on the computed contact time parameter. 20. The method of claim 18, wherein the at least one resource is one of a reserve agent, an agent, a supervisor, or a Subject Matter Expert (SME).
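Claims 10 and 18 describe the routing loop: derive a contact time parameter per resource from stored session attributes, rank the resources, and route the incoming contact to the selected one. The claims do not fix a formula for the parameter, so the sketch below assumes a mean handling time and ranks ascending; all names are hypothetical.

```python
# Hypothetical sketch of the selection path in claims 10 and 18; the
# mean-duration "contact time parameter" is an assumed formula.

from statistics import mean

def contact_time_parameter(sessions: list[dict]) -> float:
    """Mean handled-session duration, in seconds, for one resource."""
    return mean(s["duration_s"] for s in sessions) if sessions else float("inf")

def select_resource(monitored: dict[str, list[dict]]) -> str:
    """monitored maps resource_id -> stored session attributes for that resource."""
    ranked = sorted(monitored, key=lambda r: contact_time_parameter(monitored[r]))
    return ranked[0]   # e.g., route to the fastest-handling resource

monitored = {
    "agent-7":  [{"duration_s": 240}, {"duration_s": 300}],
    "agent-12": [{"duration_s": 180}, {"duration_s": 210}],
}
print(select_resource(monitored))   # agent-12
```

Ranking ascending favors shorter handling times; under claim 6 the sort key could instead encode whatever business strategy the enterprise configures.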
2,600
10,612
10,612
11,039,057
2,674
An apparatus ( 200 ) has a document receiver ( 201 ) to receive a document and an image scanner ( 202 ) to facilitate capturing an image of that document. A data compressor ( 204 ) compresses the document image data and a transmitter ( 206 ) forwards the document image data to a predetermined processing center ( 302 ). These actions preferably occur in automated fashion in response to a trigger ( 209 ), thereby avoiding a need for a local user to interface with the apparatus much beyond simply presenting the document to the apparatus. Depending upon preference, encryption and post-capture image processing can also be provided. In a preferred approach, a local user output ( 212 ) provides status or other information to a local user.
1. A method comprising: detecting a need to effect an image capture and forward process; automatically capturing a document as an image to provide a captured document image; automatically compressing the captured document image to provide a compressed image; automatically forwarding the compressed image to a predetermined processing center. 2. The method of claim 1 wherein detecting a need to effect an image capture and forward process further comprises detecting the document. 3. The method of claim 1 wherein detecting a need to effect an image capture and forward process further comprises detecting assertion of a user-assertable control interface. 4. The method of claim 1 wherein automatically capturing a document as an image further comprises using at least one of: an automatic document feed scanner; a flatbed scanner. 5. The method of claim 1 and further comprising providing information to a local user regarding at least one of detecting the need to capture an image, automatically capturing the image, automatically compressing the image, automatically forwarding the compressed image, network status, batch status, and error conditions. 6. The method of claim 5 wherein providing information to a local user further comprises providing audible information to a local user. 7. The method of claim 5 wherein providing information to a local user further comprises providing graphic information to a local user. 8. The method of claim 7 wherein providing graphic information to a local user further comprises providing graphic information to a local user using at least one of a liquid crystal display and at least one indicator light. 9. The method of claim 1 wherein automatically forwarding the compressed image to a predetermined processing center further comprises using mobile telephony to forward the compressed image. 10. The method of claim 9 wherein using mobile telephony to forward the compressed image further comprises automatically dialing a telephone number as corresponds to a predetermined Internet service provider. 11. The method of claim 1 wherein: automatically compressing the image to provide a compressed image further comprises automatically encrypting the compressed image to provide an encrypted compressed image; and automatically forwarding the compressed image to a predetermined processing center further comprises automatically forwarding the encrypted compressed image to a predetermined processing center. 12. The method of claim 1 wherein automatically capturing a document as an image to provide a captured document image further comprises effecting a post-capture process comprising at least one of: image enhancement; character recognition; shape recognition; pattern recognition; document identification. 13. The method of claim 1 and further comprising: storing information regarding a plurality of captured document images to provide buffered information; and wherein automatically forwarding the compressed image to a predetermined processing center further comprises automatically batch forwarding the buffered information to the predetermined processing center. 14. 
An apparatus comprising: a document receiver; an image scanner operably coupled to the document receiver and having a scanned document image output; a data compressor having an input operably coupled to the scanned document image output and having a compressed scanned document image output; a memory having a address for a predetermined processing center stored therein; a transmitter operably coupled to the memory and having an input operably coupled to receive the compressed scanned document image output; a trigger operably coupled to the image scanner, the data compressor, and the transmitter; such that a single actuation of the trigger causes automated operation of the image scanner, the data compressor, and the transmitter wherein a document is automatically scanned, compressed, and transmitted to the predetermined processing center. 15. The apparatus of claim 14 wherein the document receiver comprises one of: an automatic document feeder; a flatbed document receiver. 16. The apparatus of claim 14 wherein the trigger comprises a user assertable control interface, such that assertion of the user assertable control interface causes automatic scanning, compression, and transmission of a scanned image of the document to the predetermined processing center. 17. The apparatus of claim 14 wherein the trigger comprises a part of the document receiver, such that placement of the document in the document receiver causes automatic scanning, compression, and transmission of a scanned image of the document to the predetermined processing center. 18. The apparatus of claim 14 and further comprising an encryption unit having an input operably coupled to the scanned document image output and having an encrypted scanned document image output that is operably coupled to the input of the transmitter. 19. The apparatus of claim 14 wherein the transmitter comprises at least one of a mobile telephone interface, a wide area network interface, and a local area network interface. 20. The apparatus of claim 14 and further comprising a housing to substantially receive the document receiver, the image scanner, the data compressor, the memory, and the transmitter. 21. The apparatus of claim 14 and further comprising control means for responding to the trigger and for causing automated sequential operation of the image scanner, the data compressor, and the transmitter to effect automated provision of a scanned, compressed image of the document to the predetermined processing center. 22. The apparatus of claim 14 and further comprising an image processor having an input operably coupled to the image scanner and having an output operably coupled to the data compressor, wherein the image processor comprises at least one of: an image enhancer; a character recognition unit; a shape recognition unit; a pattern recognition unit; a document identification unit. 23. The apparatus of claim 14 and further comprising: a buffer memory operably coupled to receive compressed scanned document images and to provide a batch comprising at least one compressed scanned document image to the transmitter to facilitate batch transmissions. 24. The apparatus of claim 14 wherein the memory further has at least one configurable property parameter stored therein. 25. The apparatus of claim 24 wherein the at least one configurable property parameter comprises at least one of: an image acquisition source parameter; an image transfer physical layer parameter; an image transfer transport layer parameter; an image transfer session layer parameter.
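As a rough illustration of the claimed single-trigger capture-and-forward sequence, the following minimal Python sketch chains the scanner, compressor, optional encryption unit, and transmitter stages behind one trigger call. The stub classes, the processing-center URL, and the use of zlib are assumptions made for illustration only; the claims do not specify any particular compression scheme, encryption, or transport.

```python
# Minimal sketch of the single-trigger scan -> compress -> forward sequence
# of claims 1 and 14. Scanner/transmitter stubs and the center URL are
# invented stand-ins, not an API taken from the patent.
import zlib

PROCESSING_CENTER = "https://center.example.com/upload"  # assumed stored address

class StubScanner:
    def capture(self) -> bytes:
        return b"\x00" * 1024  # placeholder for raw scanned image data

class StubTransmitter:
    def send(self, destination: str, payload: bytes) -> None:
        print(f"sent {len(payload)} bytes to {destination}")

def on_trigger(scanner, transmitter, encrypt=None):
    """One trigger actuation drives the whole automated sequence."""
    image = scanner.capture()                  # image scanner stage
    data = zlib.compress(image)                # data compressor stage
    if encrypt is not None:                    # optional encryption unit (claim 18)
        data = encrypt(data)
    transmitter.send(PROCESSING_CENTER, data)  # transmitter stage

on_trigger(StubScanner(), StubTransmitter())
```

The point of the sketch is that the local user supplies nothing beyond the document itself: every stage after the trigger runs without further interaction.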
2,600
10,613
10,613
13,897,656
2,621
An aspect provides a method, including: receiving a first input from a device component indicating positional orientation of an information handling device; receiving a second input from a device component indicating that the information handling device is undergoing movement; and disabling, responsive to the first input and the second input, at least a portion of a touch input surface of the information handling device. Other aspects are described and claimed.
1. A method, comprising: receiving a first input from a device component indicating positional orientation of an information handling device; receiving a second input from a device component indicating that the information handling device is undergoing movement; and disabling, responsive to the first input and the second input, at least a portion of a touch input surface of the information handling device. 2. The method of claim 1, wherein: the device component comprises a first and a second device component; the first component comprises a gyroscope; and the second component comprises an accelerometer. 3. The method of claim 1, further comprising: receiving input from a proximity sensor, wherein the input from the proximity sensor comprises input associated with an inadvertent input condition; and disabling, responsive to the input from the proximity sensor, at least a portion of a touch input surface of the information handling device associated with the proximity sensor input. 4. The method of claim 3, wherein the inadvertent input condition comprises holding the information handling device such that a surface of the information handling device is proximate to an arm of the user. 5. The method of claim 4, wherein the surface of the information handling device is a back surface of the information handling device. 6. The method of claim 4, wherein the surface of the information handling device is a front surface of the information handling device including the touch input surface. 7. The method of claim 1, wherein disabling comprises disabling the entire touch input surface of the information handling device. 8. The method of claim 1, wherein the at least a portion of a touch input surface of the information handling device comprises one or more portions of the touch input surface pre-associated with an inadvertent input condition. 9. The method of claim 8, wherein the one or more portions of the touch input surface pre-associated with an inadvertent input condition comprise an edge of the touch input surface. 10. An information handling device, comprising: one or more processors; a memory device storing instructions accessible to the one or more processors, the instructions being executable by the one or more processors to: receive a first input from a device component indicating positional orientation of the information handling device; receive a second input from a device component indicating that the information handling device is undergoing movement; and disable, responsive to the first input and the second input, at least a portion of a touch input surface of the information handling device. 11. The information handling device of claim 10, wherein: the device component comprises a first and a second device component; the first component comprises a gyroscope; and the second component comprises an accelerometer. 12. The information handling device of claim 10, further comprising: a proximity sensor, wherein the instructions further comprise instructions being executable by the one or more processors to: receive input from the proximity sensor, wherein the input from the proximity sensor comprises input associated with an inadvertent input condition; and disable, responsive to the input from the proximity sensor, at least a portion of a touch input surface of the information handling device associated with the proximity sensor input. 13. 
The information handling device of claim 12, wherein the inadvertent input condition comprises holding the information handling device such that a surface of the information handling device is proximate to an arm of the user. 14. The information handling device of claim 13, wherein the surface of the information handling device is a back surface of the information handling device. 15. The information handling device of claim 13, wherein the surface of the information handling device is a front surface of the information handling device including the touch input surface. 16. The information handling device of claim 10, wherein disabling comprises disabling the entire touch input surface of the information handling device. 17. The information handling device of claim 10, wherein the at least a portion of a touch input surface of the information handling device comprises one or more portions of the touch input surface pre-associated with an inadvertent input condition. 18. The information handling device of claim 17, wherein the one or more portions of the touch input surface pre-associated with an inadvertent input condition comprise an edge of the touch input surface. 19. A computer program product, comprising: a storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to receive a first input from a device component indicating positional orientation of an information handling device; computer readable program code configured to receive a second input from a device component indicating that the information handling device is undergoing movement; and computer readable program code configured to disable, responsive to the first input and the second input, at least a portion of a touch input surface of the information handling device.
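To make the two-sensor gating of claim 1 concrete, here is a small Python sketch that disables touch input only when an orientation input and a movement input both indicate an inadvertent-input condition. The threshold values and the tilt/acceleration representation are assumptions for illustration; the claims leave the sensor fusion unspecified.

```python
# Sketch of the claim 1 gating: disable (part of) the touch surface only
# when both the orientation input and the movement input fire. Thresholds
# are invented for illustration.
VERTICAL_TILT_DEG = 60.0  # assumed gyroscope-derived orientation threshold
MOVEMENT_G = 1.2          # assumed accelerometer magnitude threshold

def should_disable_touch(tilt_deg: float, accel_g: float) -> bool:
    held_upright = tilt_deg >= VERTICAL_TILT_DEG  # first input: orientation
    moving = accel_g >= MOVEMENT_G                # second input: movement
    return held_upright and moving                # disable only on both

# Walking while carrying the device upright trips both conditions.
assert should_disable_touch(75.0, 1.5) is True
assert should_disable_touch(75.0, 0.9) is False  # stationary: keep touch on
```

Requiring both inputs is what keeps the device usable: either signal alone (tilted but still, or jostled but flat on a table) leaves the touch surface enabled.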
2,600
10,614
10,614
15,706,715
2,688
A system for providing security to a fleet of vehicles, the system comprising: a plurality of modules, each module configured to monitor messages propagating in an in-vehicle network of a vehicle comprised in the fleet; a memory having data characterizing messages, and software executable to: identify an anomaly in communications over the in-vehicle communication network; and instruct a communication interface, configured to support communication with an entity external to the vehicle, to transmit monitoring data responsive to the messages; and a processor configured to execute the software in the memory; and a data monitoring and processing hub external to the vehicles comprised in the fleet and operable to receive transmission of monitoring data from the plurality of modules.
1. A system for providing security to a fleet of vehicles, the system comprising: a plurality of modules, each module configured to monitor messages propagating in an in-vehicle network of a vehicle comprised in the fleet, the in-vehicle network having a bus and at least one node connected to the bus, each module comprising: at least one communication port connectable to a portion of the in-vehicle network, via which the module receives and transmits messages; a memory having data characterizing messages that the at least one node transmits and receives during normal operation of the node, and software executable to: identify, responsive to the data characterizing messages and messages received from the in-vehicle network, an anomaly in communications over the in-vehicle communication network; and instruct a communication interface, configured to support communication with an entity external to the vehicle, to transmit monitoring data responsive to the received messages; and a processor configured to execute the software in the memory; and a data monitoring and processing hub external to the vehicles comprised in the fleet and operable to receive transmission of monitoring data from the plurality of modules. 2. The system according to claim 1, wherein the communication interface is comprised in a module of the plurality of modules. 3. The system according to claim 1, wherein the communication interface is comprised in a node connected to the bus of an in-vehicle network. 4. The system according to claim 1, wherein the hub is operable to process the monitoring data it receives from the plurality of modules to determine if at least a portion of the vehicles comprised in the fleet is under threat of an imminent cyber attack, is under a cyber attack, or has vulnerability to a cyber attack. 5. The system according to claim 4, wherein the hub is operable to transmit information to configure one or more modules in one or more vehicles comprised in the fleet to engage the cyber attack, responsive to the determination of if at least a portion of the vehicles comprised in the fleet is under threat of an imminent cyber attack, is under a cyber attack, or has vulnerability to a cyber attack. 6. The system according to claim 4, wherein the hub is operable to provide a user interface that displays information regarding health of the fleet with respect to cyber attacks. 7. The system according to claim 6, wherein the user interface of the hub is operable to display a distribution of anomalous messages detected by at least a portion of the plurality of modules in a specified timeframe. 8. The system according to claim 6, wherein the user interface of the hub is operable to display a distribution of anomalous messages detected by at least a portion of the plurality of modules in a specified geographical area. 9. The system according to claim 8, wherein the distribution is displayed as a heat map. 10. The system according to claim 1, wherein the data comprises a state feature vector representing a state of the vehicle. 11. The system according to claim 10, wherein the software is executable to change the state feature vector responsive to identifying an anomaly in communications over the in-vehicle communication network. 12. The system according to claim 1, wherein the software is executable to raise an alert responsive to identifying an anomaly in communications over the in-vehicle communication network. 13. 
The system according to claim 1, wherein the software is executable to identify if at least one of the messages received via the at least one communication port is anomalous. 14. The system according to claim 13, wherein the monitoring data comprises data relevant to tracking performance of the module, responsive to identifying at least one anomalous message. 15. The system according to claim 14, wherein the monitoring data comprises information regarding one or more anomalous messages identified by the module. 16. The system according to claim 14, wherein the hub is operable to track the performance of one or more of the plurality of modules. 17. The system according to claim 16, wherein the tracking of performance comprises determining how frequently one or more of the plurality of modules generates false positives or false negatives in identifying messages as anomalous messages. 18. The system according to claim 1 wherein the module is configured to transmit the monitoring data or a portion thereof based on a request that the module receives from the hub. 19. The system according to claim 18 wherein the transmission of the monitoring data or portion thereof is subject to authenticating the request. 20. The system according to claim 19 wherein the module is configured to stop transmitting the monitoring data or portion thereof in response to a communication from the hub.
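A minimal Python sketch of the per-module flow in claim 1 follows: compare bus traffic against data characterizing normal messages, and forward monitoring data for anomalies to the external hub. The whitelist-of-IDs model, message representation, and hub callback are all invented simplifications; a real in-vehicle module would model CAN identifiers, timing, and payloads in far more detail.

```python
# Sketch of a module's anomaly check plus hub reporting (claim 1). The
# "normal traffic" model here is just an assumed whitelist of message IDs.
NORMAL_IDS = {0x100, 0x1A0, 0x2F0}  # assumed per-node normal-operation IDs

def is_anomalous(msg_id: int) -> bool:
    # Data characterizing messages seen during normal operation of the node.
    return msg_id not in NORMAL_IDS

def monitor(bus_messages, hub_send):
    for msg_id in bus_messages:
        if is_anomalous(msg_id):
            # Transmit monitoring data to the external hub via the
            # vehicle's communication interface.
            hub_send({"event": "anomaly", "id": hex(msg_id)})

monitor([0x100, 0x7FF, 0x1A0], hub_send=print)  # 0x7FF is flagged
```

Aggregated across a fleet, reports of this shape are what would let the hub build the time- and geography-based anomaly distributions of claims 7 through 9.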
2,600
10,615
10,615
13,772,182
2,616
Attributes of graphics objects are processed in a plurality of graphics processing pipelines. A streaming multiprocessor (SM) retrieves a first set of parameters associated with a set of graphics objects from a first set of buffers. The SM performs a first set of operations on the first set of parameters according to a first phase of processing to produce a second set of parameters stored in a second set of buffers. The SM performs a second set of operations on the second set of parameters according to a second phase of processing to produce a third set of parameters stored in a third set of buffers. One advantage of the disclosed techniques is that work is redistributed from a first phase to a second phase of graphics processing without having to copy the attributes to and retrieve the attributes from the cache or system memory, resulting in reduced power consumption.
1. A method for processing attributes of graphics objects in a plurality of graphics processing pipelines, the method comprising: retrieving a first set of parameters associated with a set of graphics objects from a first set of buffers; performing a first set of operations on the first set of parameters according to a first phase of processing to produce a second set of parameters; storing the second set of parameters in a second set of buffers; performing a second set of operations on the second set of parameters according to a second phase of processing to produce a third set of parameters; and storing the third set of parameters in a third set of buffers. 2. The method of claim 1 further comprising: allocating space for the first set of buffers within a first portion of a shared memory; and allocating space for the second set of buffers within a second portion of the shared memory. 3. The method of claim 2, further comprising: deallocating space for the first set of buffers; allocating space for a fourth set of buffers within the first portion of the shared memory; retrieving a fourth set of parameters associated with the set of graphics objects from the fourth set of buffers; performing the first set of operations on the fourth set of parameters according to the first phase of processing to produce a fourth set of parameters; and storing the fourth set of parameters in the second set of buffers. 4. The method of claim 2 further comprising: deallocating the space for the first set of buffers; and allocating space for the third set of buffers within the first portion of the shared memory. 5. The method of claim 4, further comprising: deallocating the space for the third set of buffers; allocating space for a fourth set of buffers within the second portion of the shared memory; performing the second set of operations on the second set of parameters according to the second phase of processing to produce a fourth set of parameters; and storing the fourth set of parameters in the fourth set of buffers. 6. The method of claim 2, wherein the first portion of the shared memory and the second portion of the shared memory have a fixed size. 7. The method of claim 1, further comprising redistributing the second set of parameters across the plurality of graphics processing pipelines prior to performing the second set of operations. 8. The method of claim 1, further comprising transferring contents of the third set of buffers to a later stage in a first graphics processing pipeline included in the plurality of graphics processing pipelines. 9. The method of claim 1, wherein performing the first set of operations comprises: processing the first set of parameters with a vertex shader program executed by a vertex processing unit to produce vertex output data; and specifying the second set of parameters as the vertex output data. 10. The method of claim 9, wherein performing the second set of operations comprises: processing the second set of parameters with a geometry shader program executed by a geometry processing unit to produce geometry output data; and specifying the third set of parameters as the geometry output data. 11. 
The method of claim 1, wherein performing the first set of operations comprises: processing the first set of parameters with a vertex shader program executed by a vertex processing unit to produce vertex output data; storing the vertex output data in the first set of buffers; processing the stored vertex output data with a tessellation initialization shader program executed by a tessellation initialization processing unit to produce tessellation initialization output data; and specifying the second set of parameters as the tessellation initialization output data. 12. The method of claim 11, wherein performing the second set of operations comprises: processing the second set of parameters with a tessellation shader program executed by a tessellation processing unit to produce tessellation output data; storing the tessellation output data in the third set of buffers; processing the stored tessellation output data with a geometry shader program executed by a geometry processing unit to produce geometry output data; and specifying the third set of parameters as the geometry output data. 13. A subsystem comprising: a streaming multiprocessor configured to redistribute attributes of graphics objects in a graphics processing pipeline between a first processing phase and a second processing phase by performing the steps of: retrieving a first set of parameters associated with a set of graphics objects from a first set of buffers; performing a first set of operations on the first set of parameters according to a first phase of processing to produce a second set of parameters; storing the second set of parameters in a second set of buffers; performing a second set of operations on the second set of parameters according to a second phase of processing to produce a third set of parameters; and storing the third set of parameters in a third set of buffers. 14. The subsystem of claim 13 wherein the streaming multiprocessor is further configured to perform the steps of: allocating space for the first set of buffers within a first portion of a shared memory; and allocating space for the second set of buffers within a second portion of the shared memory. 15. The subsystem of claim 14 wherein the streaming multiprocessor is further configured to perform the steps of: deallocating the space for the first set of buffers; and allocating space for the third set of buffers within the first portion of the shared memory. 16. The subsystem of claim 14, wherein the first portion of the shared memory and the second portion of the shared memory have a fixed size. 17. The subsystem of claim 13, wherein the streaming multiprocessor is further configured to perform the step of redistributing the second set of parameters across the plurality of graphics processing pipelines prior to performing the second set of operations. 18. The subsystem of claim 13, wherein the streaming multiprocessor is further configured to perform the step of transferring contents of the third set of buffers to a later stage in a first graphics processing pipeline included in the plurality of graphics processing pipelines. 19. The subsystem of claim 13, wherein performing the first set of operations comprises: processing the first set of parameters with a vertex shader program executed by a vertex processing unit to produce vertex output data; and specifying the second set of parameters as the vertex output data. 20. 
The subsystem of claim 19, wherein performing the second set of operations comprises: processing the second set of parameters with a geometry shader program executed by a geometry processing unit to produce geometry output data; and specifying the third set of parameters as the geometry output data. 21. The subsystem of claim 13, wherein performing the first set of operations comprises: processing the first set of parameters with a vertex shader program executed by a vertex processing unit to produce vertex output data; storing the vertex output data in the first set of buffers; processing the stored vertex output data with a tessellation initialization shader program executed by a tessellation initialization processing unit to produce tessellation initialization output data; and specifying the second set of parameters as the tessellation initialization output data. 22. The subsystem of claim 21, wherein performing the second set of operations comprises: processing the second set of parameters with a tessellation shader program executed by a tessellation processing unit to produce tessellation output data; storing the tessellation output data in the third set of buffers; processing the stored tessellation output data with a geometry shader program executed by a geometry processing unit to produce geometry output data; and specifying the third set of parameters as the geometry output data. 23. A computing device comprising: a subsystem that includes a streaming multiprocessor configured to redistribute attributes of graphics objects in a graphics processing pipeline between a first processing phase and a second processing phase by performing the steps of: retrieving a first set of parameters associated with a set of graphics objects from a first set of buffers; performing a first set of operations on the first set of parameters according to a first phase of processing to produce a second set of parameters; storing the second set of parameters in a second set of buffers; performing a second set of operations on the second set of parameters according to a second phase of processing to produce a third set of parameters; and storing the third set of parameters in a third set of buffers.
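The buffer-reuse scheme of claims 2 through 5 can be illustrated with a toy Python model: two fixed regions of a shared memory, where the first-phase input region is deallocated after phase one and reused to hold third-phase output, so attributes never round-trip through cache or system memory. The dict standing in for on-chip shared memory and the arithmetic standing in for shader work are assumptions for illustration only.

```python
# Toy model of phase-to-phase buffer reuse (claims 2-5): two fixed-size
# portions of a shared memory; the first-phase input region is freed and
# reused for the second phase's output. The "shader" math is a placeholder.
SHARED = {"region_a": None, "region_b": None}  # two fixed-size portions

def phase_one(params):
    SHARED["region_a"] = params                   # first set of buffers
    SHARED["region_b"] = [p * 2 for p in params]  # second set (phase-1 output)
    SHARED["region_a"] = None                     # deallocate first set

def phase_two():
    second = SHARED["region_b"]
    SHARED["region_a"] = [p + 1 for p in second]  # third set reuses region_a
    return SHARED["region_a"]

phase_one([1.0, 2.0, 3.0])
print(phase_two())  # parameters stayed in shared memory between phases
```

Keeping both working sets resident in the same fixed-size memory is what the abstract credits with the power savings: the redistribution between phases happens without external memory traffic.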
2,600
10,616
10,616
15,634,484
2,688
Embodiments relate generally to systems and methods for assessing cognitive function limitations. A system may comprise a sensor module, wherein the sensor module comprises a gas detector; and a device, wherein the device is configured to: obtain data from the sensor module, wherein the data comprises environmental data; calculate a cognitive availability value based on the environmental data; and display the cognitive availability value on a user interface of the device.
1. A system for assessing cognitive function limitations, the system comprising: a sensor module; and a computing device, comprising a processor; wherein the sensor module is configured to detect a plurality of different types of data for a surrounding environment associated with a user of the computing device and transmit the data to the computing, device, and the computing device is configured to: obtain the data from the sensor module, wherein the data for the surrounding environment comprises environmental data relating to a plurality of factors affecting cognitive availability: calculate a total cognitive availability value based on the environmental data, wherein using the data relating to each factor, a cognitive availability value is generated for each factor, wherein the total cognitive availability factor is each factor, and wherein the total cognitive availability value is a single value indicative of effects of the plurality of factors on the user's cognitive availability; and display the cognitive availability value to the user on a user interface of the device. 2. The system of claim 1, wherein the sensor module and the device comprise a plurality of sensors comprising a gas detector, a clock, a humidity sensor, a temperature sensor, a microphone, a camera, a motion detector, and an ambient light sensor; and wherein the processor uses the data from each of the plurality of sensors to calculate the cognitive availability value for the factor associated with each sensor. 3. The system of claim 2, wherein the data comprises temperature data, humidity data, and time. 4. The system of claim 3, wherein the data further comprises microphone data, camera data, motion detector data, and ambient light sensor data, and wherein the camera data comprises data indicative of the user's pupil position. 5. The system of claim 4, wherein the ambient light sensor is configured to measure intensity and/or color of light. 6. The system of claim 4, wherein the device is further configured to display, in the user interface, an alertness icon based on the cognitive availability value. 7. The system of claim 1, wherein the sensor module is connected to the device by a universal serial bus cable or wherein the sensor module is embedded within the device. 8. The system of claim 1, wherein, when availability value, the device weighs each cognitive availability value relating to one of the plurality of factors. 9. The system of claim 8, wherein the device is further configured to store a user profile indicative of an amount for weighing of each of the plurality of factors based on individual user sensitivity to the plurality of factors and to adjust weighing of each cognitive availability value relating to one of the plurality of factors accordingly. 10. The system of claim 9, wherein the device is further configured to allow storing of a cognitive availability value for a previously performed task and to provide a recommendation for environmental conditions based on the stored cognitive availability value for the previously performed task. 11. 
A method for assessing cognitive function limitations, the method comprising: providing a sensor module connected to a computing device: obtaining sensor data for a surrounding environment associated with a user, with the device, from the sensor module, wherein the sensor data for the surrounding environment comprises environmental data relating to a plurality of factors affecting cognitive availability; calculating a total cognitive availability value based on the environmental data, wherein using the data relating to each factor, a cognitive availability value is generated for each factor, wherein the total cognitive availability value is generated by combining all of the cognitive availability values relating to each factor, and wherein the total cognitive availability value is a single value indicative of effects of die plurality of factors on cognitive availability; and displaying the cognitive availability value to the user on a user interface of the device. 12. The method of claim 11, further comprising allowing, with the device, weighting each cognitive availability value relating to one of the plurality of factors for a calculation of the total cognitive availability value; wherein the sensor data is from a plurality of sensors comprising a gas detector, a clock, a humidity sensor, a temperature sensor, a microphone, a camera, a motion detector, a clock, and an ambient light sensor. 13. The method of claim 12, further comprising adjusting the weighting of each cognitive availability value relating to one of the plurality based on a preference or a cognitive availability value of a previously performed task. 14. The method of claim 13, further comprising recommending, with the device, environmental conditions based on the preference or the cognitive availability value of the previously performed task. 15. A method for assessing cognitive function limitations, the method comprising: providing a sensor module connected to a device; obtaining data for a surrounding environment associated with a user with the device; allowing a selection of the data for the surrounding environment relating to a plurality of factors affecting cognitive availability, with the device, based on a preference or a previously calculated cognitive availability value for a previously performed task; calculating a cognitive availability value based on the selection of the data, wherein the cognitive availability value is a single value indicative of an amount of reduction of the user's cognitive availability in the surrounding environment versus the user's cognitive availability in an ideal environment, accounting for the plurality of factors; and displaying the cognitive availability value to the user on a user interface of the device. 16. The method of claim 15, further comprising allowing, with the device, weighting of the data from each of a gas detector, a clock, a humidity sensor, a temperature sensor, a microphone, a camera, a motion detector, a clock, and an ambient light sensor. 17. The method of claim 16, further allowing, with the device, an addition of data from an additional sensor for a cognitive availability value calculation. 18. The method of claim 17, further comprising allowing, with the device, adjustment of a weight of the data for a calculation of the cognitive availability value. 19. The method of claim 15, further comprising displaying, in the user interface, an alertness icon based on the cognitive availability value. 20. 
The method of claim 15, further comprising allowing storage of a profile, wherein the profile comprises weighted data.
Embodiments relate generally to systems and methods for assessing cognitive function limitations. A system may comprise a sensor module, wherein the sensor module comprises a gas detector; and a device, wherein the device is configured to: obtain data from the sensor module, wherein the data comprises environmental data; calculate a cognitive availability value based on the environmental data; and display the cognitive availability value on a user interface of the device.1. A system for assessing cognitive function limitations, the system comprising: a sensor module; and a computing device, comprising a processor; wherein the sensor module is configured to detect a plurality of different types of data for a surrounding environment associated with a user of the computing device and transmit the data to the computing device, and the computing device is configured to: obtain the data from the sensor module, wherein the data for the surrounding environment comprises environmental data relating to a plurality of factors affecting cognitive availability; calculate a total cognitive availability value based on the environmental data, wherein using the data relating to each factor, a cognitive availability value is generated for each factor, wherein the total cognitive availability value is generated by combining all of the cognitive availability values relating to each factor, and wherein the total cognitive availability value is a single value indicative of effects of the plurality of factors on the user's cognitive availability; and display the cognitive availability value to the user on a user interface of the device. 2. The system of claim 1, wherein the sensor module and the device comprise a plurality of sensors comprising a gas detector, a clock, a humidity sensor, a temperature sensor, a microphone, a camera, a motion detector, and an ambient light sensor; and wherein the processor uses the data from each of the plurality of sensors to calculate the cognitive availability value for the factor associated with each sensor. 3. The system of claim 2, wherein the data comprises temperature data, humidity data, and time. 4. The system of claim 3, wherein the data further comprises microphone data, camera data, motion detector data, and ambient light sensor data, and wherein the camera data comprises data indicative of the user's pupil position. 5. The system of claim 4, wherein the ambient light sensor is configured to measure intensity and/or color of light. 6. The system of claim 4, wherein the device is further configured to display, in the user interface, an alertness icon based on the cognitive availability value. 7. The system of claim 1, wherein the sensor module is connected to the device by a universal serial bus cable or wherein the sensor module is embedded within the device. 8. The system of claim 1, wherein, when calculating the total cognitive availability value, the device weighs each cognitive availability value relating to one of the plurality of factors. 9. The system of claim 8, wherein the device is further configured to store a user profile indicative of an amount for weighing of each of the plurality of factors based on individual user sensitivity to the plurality of factors and to adjust weighing of each cognitive availability value relating to one of the plurality of factors accordingly. 10. 
The system of claim 9, wherein the device is further configured to allow storing of a cognitive availability value for a previously performed task and to provide a recommendation for environmental conditions based on the stored cognitive availability value for the previously performed task. 11. A method for assessing cognitive function limitations, the method comprising: providing a sensor module connected to a computing device; obtaining sensor data for a surrounding environment associated with a user, with the device, from the sensor module, wherein the sensor data for the surrounding environment comprises environmental data relating to a plurality of factors affecting cognitive availability; calculating a total cognitive availability value based on the environmental data, wherein using the data relating to each factor, a cognitive availability value is generated for each factor, wherein the total cognitive availability value is generated by combining all of the cognitive availability values relating to each factor, and wherein the total cognitive availability value is a single value indicative of effects of the plurality of factors on cognitive availability; and displaying the cognitive availability value to the user on a user interface of the device. 12. The method of claim 11, further comprising allowing, with the device, weighting each cognitive availability value relating to one of the plurality of factors for a calculation of the total cognitive availability value; wherein the sensor data is from a plurality of sensors comprising a gas detector, a clock, a humidity sensor, a temperature sensor, a microphone, a camera, a motion detector, and an ambient light sensor. 13. The method of claim 12, further comprising adjusting the weighting of each cognitive availability value relating to one of the plurality of factors based on a preference or a cognitive availability value of a previously performed task. 14. The method of claim 13, further comprising recommending, with the device, environmental conditions based on the preference or the cognitive availability value of the previously performed task. 15. A method for assessing cognitive function limitations, the method comprising: providing a sensor module connected to a device; obtaining data for a surrounding environment associated with a user with the device; allowing a selection of the data for the surrounding environment relating to a plurality of factors affecting cognitive availability, with the device, based on a preference or a previously calculated cognitive availability value for a previously performed task; calculating a cognitive availability value based on the selection of the data, wherein the cognitive availability value is a single value indicative of an amount of reduction of the user's cognitive availability in the surrounding environment versus the user's cognitive availability in an ideal environment, accounting for the plurality of factors; and displaying the cognitive availability value to the user on a user interface of the device. 16. The method of claim 15, further comprising allowing, with the device, weighting of the data from each of a gas detector, a clock, a humidity sensor, a temperature sensor, a microphone, a camera, a motion detector, and an ambient light sensor. 17. The method of claim 16, further comprising allowing, with the device, an addition of data from an additional sensor for a cognitive availability value calculation. 18. 
The method of claim 17, further comprising allowing, with the device, adjustment of a weight of the data for a calculation of the cognitive availability value. 19. The method of claim 15, further comprising displaying, in the user interface, an alertness icon based on the cognitive availability value. 20. The method of claim 15, further comprising allowing storage of a profile, wherein the profile comprises weighted data.
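The weighted combination described in claims 1, 8, and 9 above can be pictured with a short sketch. This is only an illustration of one plausible reading of those claims: the factor names, the 0-to-1 scale, and the normalized weighted average are assumptions, not details taken from the application.

```python
# Illustrative sketch of the per-factor weighting in claims 1, 8 and 9.
# Factor names, the 0..1 scale, and the averaging rule are assumptions.

# Per-factor cognitive availability values, each derived from its sensor
# (1.0 = no reduction in cognitive availability, 0.0 = fully degraded).
factor_values = {
    "temperature": 0.90,  # from the temperature sensor
    "humidity": 0.80,     # from the humidity sensor
    "noise": 0.60,        # from the microphone
    "light": 0.95,        # from the ambient light sensor
    "air_quality": 0.70,  # from the gas detector
}

# Stored user profile (claim 9): per-factor weights reflecting individual
# sensitivity, e.g. this user is especially sensitive to noise.
profile_weights = {
    "temperature": 1.0,
    "humidity": 0.5,
    "noise": 2.0,
    "light": 1.0,
    "air_quality": 1.5,
}

def total_cognitive_availability(values, weights):
    """Combine per-factor values into the single total value of claim 1."""
    total_weight = sum(weights[f] for f in values)
    return sum(values[f] * weights[f] for f in values) / total_weight

# A single value the device could display on its user interface.
print(f"total: {total_cognitive_availability(factor_values, profile_weights):.2f}")
```

Adjusting the profile weights (claims 9 and 13) changes how strongly each factor pulls the displayed total down, which is the behavior the claims attribute to user-specific sensitivity.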
2,600
10,617
10,617
15,289,102
2,626
This application is directed to detecting touch events on a touch sensing surface coupled to a capacitive sense array and one or more force electrodes. The capacitive sense array includes a plurality of sense electrodes configured to provide a plurality of capacitive sense signals. The force electrodes are configured to provide one or more force signals. A subset of sense electrodes are determined to be associated with a candidate touch based on the capacitive sense signals and their associated baseline values. When force associated with the force signals is below a predetermined force threshold, the candidate touch is determined as an invalid touch (e.g., coverage by a water drop on an area of the touch sensing surface corresponding to the subset of sense electrodes). Baseline values are adjusted for the subset of sense electrodes for determining subsequent touch events associated with the subset of sense electrodes.
1. A method of detecting touch events on a touch sensing surface coupled to a capacitive sense array, comprising: at a processing device coupled to a capacitive sense array and one or more force electrodes associated with the capacitive sense array, wherein the capacitive sense array includes a plurality of sense electrodes: obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals and a plurality of baseline values, determining that a subset of sense electrodes are associated with a candidate touch, wherein each of the plurality of baseline values is associated with at least one of the plurality of sense electrodes; obtaining one or more force signals from the one or more force electrodes; in accordance with a determination that force associated with the one or more force signals is below a predetermined force threshold: determining that the candidate touch is not a valid touch; and for each of the subset of sense electrodes of the capacitive sense array, adjusting a respective baseline value for determining subsequent touch events associated with the respective sense electrode according to one of the plurality of capacitive sense signals associated with the respective sense electrode. 2. The method of claim 1, further comprising: in accordance with the plurality of capacitive sense signals and the adjusted baseline values of the subset of sense electrodes, determining a subsequent touch event on the touch sense surface, including filtering a subset of the plurality of capacitive sense signals according to the baseline values associated with the plurality of sense electrodes. 3. The method of claim 2, wherein the candidate touch is associated with a water drop covering an area of the touch sensing surface corresponding to the subset of sense electrodes, and determining the subsequent touch event on the touch sense surface further comprises: in accordance with the adjusted baseline values of the subset of sense electrodes, detecting the subsequent touch that at least partially overlaps with the area of the touch sensing surface corresponding to the subset of sense electrodes that is covered by the water drop. 4. The method of claim 3, wherein the subsequent touch overlaps with the area of the touch sensing surface corresponding to the subset of sense electrodes that is covered by the water drop. 5. The method of claim 4, wherein the subsequent touch has a touch area that is smaller than the area of the touch sensing surface covered by the water drop, and the subsequent touch is entirely contained within the water drop. 6. The method of claim 1, further comprising determining a subsequent touch event on the touch sense surface, including: in accordance with the adjusted baseline values, detecting the subsequent touch that does not overlap with the area of the touch sensing surface corresponding to the subset of sense electrodes that is covered in the water drop, wherein no touch is detected at the area of the touch sensing surface in accordance with the adjusted baseline values of the subset of sense electrodes. 7. The method of claim 1, wherein the water drop has an irregular shape. 8. 
The method of claim 1, wherein determining that the candidate touch is not a valid touch further comprises: identifying the candidate touch as a water drop or hovering of an object based on at least one of a duration of the candidate touch and a signal strength of a subset of capacitive sense signals corresponding to the candidate touch. 9. The method of claim 1, wherein the water drop has a maximum diameter that is less than one centimeter. 10. The method of claim 1, wherein the plurality of baseline values are configured to be continuously adjusted based on the one or more force signals. 11. The method of claim 1, wherein the plurality of baseline values are configured to be periodically adjusted based on the one or more force signals. 12. A processing device, comprising: a processing core; a capacitance sense circuit; and memory storing one or more programs configured for execution by the processing core, the one or more programs comprising instructions for: obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals and a plurality of baseline values, determining that a subset of sense electrodes are associated with a candidate touch, wherein each of the plurality of baseline values is associated with at least one of the plurality of sense electrodes; obtaining one or more force signals from the one or more force electrodes; in accordance with a determination that force associated with the one or more force signals is below a predetermined force threshold: determining that the candidate touch is not a valid touch; and for each of the subset of sense electrodes of the capacitive sense array, adjusting a respective baseline value for determining subsequent touch events associated with the respective sense electrode according to one of the plurality of capacitive sense signals associated with the respective sense electrode. 13. The processing device of claim 12, wherein the one or more programs further comprise instructions for: in accordance with the plurality of capacitive sense signals and the adjusted baseline values of the subset of sense electrodes, determining a subsequent touch event on the touch sense surface, including filtering a subset of the plurality of capacitive sense signals according to the baseline values associated with the plurality of sense electrodes. 14. 
An electronic system, comprising: a capacitive sense array coupled to a touch sensing surface; one or more force electrodes; and a processing device coupled to the capacitive sense array and the one or more force electrodes, wherein the processing device is configured to: obtain a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals and a plurality of baseline values, determine that a subset of sense electrodes are associated with a candidate touch, wherein each of the plurality of baseline values is associated with at least one of the plurality of sense electrodes; obtain one or more force signals from the one or more force electrodes; in accordance with a determination that force associated with the one or more force signals is below a predetermined force threshold: determine that the candidate touch is not a valid touch; and for each of the subset of sense electrodes of the capacitive sense array, adjust a respective baseline value for determining subsequent touch events associated with the respective sense electrode according to one of the plurality of capacitive sense signals associated with the respective sense electrode. 15. The electronic system of claim 14, wherein the processing device further comprises: a touch sense circuit configured to obtain the plurality of capacitive sense signals from the plurality of sense electrodes; a force sense circuit configured to obtain the one or more force signals from the one or more force electrodes; and a controller, coupled to the force sense circuit and the capacitive sense circuit, the controller being configured to synchronize the plurality of capacitive sense signals and obtain the one or more force signals for further processing. 16. The electronic system of claim 14, wherein the processing device is further configured to: in accordance with the plurality of capacitive sense signals and the adjusted baseline values of the subset of sense electrodes, determine a subsequent touch event on the touch sense surface, including filtering a subset of the plurality of capacitive sense signals according to the baseline values associated with the plurality of sense electrodes. 17. The electronic system of claim 16, wherein the candidate touch is associated with a water drop covering an area of the touch sensing surface corresponding to the subset of sense electrodes, and the processing device is configured to determine the subsequent touch event on the touch sense surface by: in accordance with the adjusted baseline values of the subset of sense electrodes, detecting the subsequent touch that at least partially overlaps with the area of the touch sensing surface corresponding to the subset of sense electrodes that was covered by the water drop. 18. The electronic system of claim 17, wherein the subsequent touch overlaps with the area of the touch sensing surface corresponding to the subset of sense electrodes that was covered by the water drop. 19. The electronic system of claim 18, wherein the candidate touch is associated with hovering of a conductive object or a stylus above the area of the touch sensing surface corresponding to the subset of sense electrodes. 20. 
The electronic system of claim 14, wherein the processing device is configured to obtain the plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array by at least one of: measuring the plurality of capacitive sense signals based on self-capacitances of the plurality of sense electrodes with respect to a ground; and measuring the plurality of capacitive sense signals based on mutual capacitances of the plurality of sense electrodes.
This application is directed to detecting touch events on a touch sensing surface coupled to a capacitive sense array and one or more force electrodes. The capacitive sense array includes a plurality of sense electrodes configured to provide a plurality of capacitive sense signals. The force electrodes are configured to provide one or more force signals. A subset of sense electrodes are determined to be associated with a candidate touch based on the capacitive sense signals and their associated baseline values. When force associated with the force signals is below a predetermined force threshold, the candidate touch is determined as an invalid touch (e.g., coverage by a water drop on an area of the touch sensing surface corresponding to the subset of sense electrodes). Baseline values are adjusted for the subset of sense electrodes for determining subsequent touch events associated with the subset of sense electrodes.1. A method of detecting touch events on a touch sensing surface coupled to a capacitive sense array, comprising: at a processing device coupled to a capacitive sense array and one or more force electrodes associated with the capacitive sense array, wherein the capacitive sense array includes a plurality of sense electrodes: obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals and a plurality of baseline values, determining that a subset of sense electrodes are associated with a candidate touch, wherein each of the plurality of baseline values is associated with at least one of the plurality of sense electrodes; obtaining one or more force signals from the one or more force electrodes; in accordance with a determination that force associated with the one or more force signals is below a predetermined force threshold: determining that the candidate touch is not a valid touch; and for each of the subset of sense electrodes of the capacitive sense array, adjusting a respective baseline value for determining subsequent touch events associated with the respective sense electrode according to one of the plurality of capacitive sense signals associated with the respective sense electrode. 2. The method of claim 1, further comprising: in accordance with the plurality of capacitive sense signals and the adjusted baseline values of the subset of sense electrodes, determining a subsequent touch event on the touch sense surface, including filtering a subset of the plurality of capacitive sense signals according to the baseline values associated with the plurality of sense electrodes. 3. The method of claim 2, wherein the candidate touch is associated with a water drop covering an area of the touch sensing surface corresponding to the subset of sense electrodes, and determining the subsequent touch event on the touch sense surface further comprises: in accordance with the adjusted baseline values of the subset of sense electrodes, detecting the subsequent touch that at least partially overlaps with the area of the touch sensing surface corresponding to the subset of sense electrodes that is covered by the water drop. 4. The method of claim 3, wherein the subsequent touch overlaps with the area of the touch sensing surface corresponding to the subset of sense electrodes that is covered by the water drop. 5. 
The method of claim 4, wherein the subsequent touch has a touch area that is smaller than the area of the touch sensing surface covered by the water drop, and the subsequent touch is entirely contained within the water drop. 6. The method of claim 1, further comprising determining a subsequent touch event on the touch sense surface, including: in accordance with the adjusted baseline values, detecting the subsequent touch that does not overlap with the area of the touch sensing surface corresponding to the subset of sense electrodes that is covered in the water drop, wherein no touch is detected at the area of the touch sensing surface in accordance with the adjusted baseline values of the subset of sense electrodes. 7. The method of claim 1, wherein the water drop has an irregular shape. 8. The method of claim 1, wherein determining that the candidate touch is not a valid touch further comprises: identifying the candidate touch as a water drop or hovering of an object based on at least one of a duration of the candidate touch and a signal strength of a subset of capacitive sense signals corresponding to the candidate touch. 9. The method of claim 1, wherein the water drop has a maximum diameter that is less than one centimeter. 10. The method of claim 1, wherein the plurality of baseline values are configured to be continuously adjusted based on the one or more force signals. 11. The method of claim 1, wherein the plurality of baseline values are configured to be periodically adjusted based on the one or more force signals. 12. A processing device, comprising: a processing core; a capacitance sense circuit; and memory storing one or more programs configured for execution by the processing core, the one or more programs comprising instructions for: obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals and a plurality of baseline values, determining that a subset of sense electrodes are associated with a candidate touch, wherein each of the plurality of baseline values is associated with at least one of the plurality of sense electrodes; obtaining one or more force signals from the one or more force electrodes; in accordance with a determination that force associated with the one or more force signals is below a predetermined force threshold: determining that the candidate touch is not a valid touch; and for each of the subset of sense electrodes of the capacitive sense array, adjusting a respective baseline value for determining subsequent touch events associated with the respective sense electrode according to one of the plurality of capacitive sense signals associated with the respective sense electrode. 13. The processing device of claim 12, wherein the one or more programs further comprise instructions for: in accordance with the plurality of capacitive sense signals and the adjusted baseline values of the subset of sense electrodes, determining a subsequent touch event on the touch sense surface, including filtering a subset of the plurality of capacitive sense signals according to the baseline values associated with the plurality of sense electrodes. 14. 
An electronic system, comprising: a capacitive sense array coupled to a touch sensing surface; one or more force electrodes; and a processing device coupled to the capacitive sense array and the one or more force electrodes, wherein the processing device is configured to: obtain a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals and a plurality of baseline values, determine that a subset of sense electrodes are associated with a candidate touch, wherein each of the plurality of baseline values is associated with at least one of the plurality of sense electrodes; obtain one or more force signals from the one or more force electrodes; in accordance with a determination that force associated with the one or more force signals is below a predetermined force threshold: determine that the candidate touch is not a valid touch; and for each of the subset of sense electrodes of the capacitive sense array, adjust a respective baseline value for determining subsequent touch events associated with the respective sense electrode according to one of the plurality of capacitive sense signals associated with the respective sense electrode. 15. The electronic system of claim 14, wherein the processing device further comprises: a touch sense circuit configured to obtain the plurality of capacitive sense signals from the plurality of sense electrodes; a force sense circuit configured to obtain the one or more force signals from the one or more force electrodes; and a controller, coupled to the force sense circuit and the capacitive sense circuit, the controller being configured to synchronize the plurality of capacitive sense signals and obtain the one or more force signals for further processing. 16. The electronic system of claim 14, wherein the processing device is further configured to: in accordance with the plurality of capacitive sense signals and the adjusted baseline values of the subset of sense electrodes, determine a subsequent touch event on the touch sense surface, including filtering a subset of the plurality of capacitive sense signals according to the baseline values associated with the plurality of sense electrodes. 17. The electronic system of claim 16, wherein the candidate touch is associated with a water drop covering an area of the touch sensing surface corresponding to the subset of sense electrodes, and the processing device is configured to determine the subsequent touch event on the touch sense surface by: in accordance with the adjusted baseline values of the subset of sense electrodes, detecting the subsequent touch that at least partially overlaps with the area of the touch sensing surface corresponding to the subset of sense electrodes that was covered by the water drop. 18. The electronic system of claim 17, wherein the subsequent touch overlaps with the area of the touch sensing surface corresponding to the subset of sense electrodes that was covered by the water drop. 19. The electronic system of claim 18, wherein the candidate touch is associated with hovering of a conductive object or a stylus above the area of the touch sensing surface corresponding to the subset of sense electrodes. 20. 
The electronic system of claim 14, wherein the processing device is configured to obtain the plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array by at least one of: measuring the plurality of capacitive sense signals based on self-capacitances of the plurality of sense electrodes with respect to a ground; and measuring the plurality of capacitive sense signals based on mutual capacitances of the plurality of sense electrodes.
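The force-gated rejection and baseline re-anchoring recited in claim 1 follow a simple control flow: detect a candidate touch from count deltas, reject it when the measured force is below the threshold, and re-anchor the baselines of only the affected electrodes. Below is a minimal sketch of that flow; the count units, thresholds, and data layout are assumptions for illustration, not values from the application.

```python
# Sketch of the force-gated candidate-touch decision of claim 1.
# Signal units, thresholds, and the dict layout are illustrative assumptions.

TOUCH_DELTA = 30       # counts above baseline that mark a candidate touch
FORCE_THRESHOLD = 5    # minimum force reading for a valid (pressed) touch

baselines = {e: 100 for e in range(16)}  # one baseline per sense electrode

def process_scan(raw_counts, force_reading):
    """raw_counts: {electrode: capacitive count}. Returns valid-touch electrodes."""
    candidates = [e for e, c in raw_counts.items()
                  if c - baselines[e] > TOUCH_DELTA]
    if candidates and force_reading < FORCE_THRESHOLD:
        # Not a valid touch (e.g., a water drop): re-anchor the baselines of
        # only the affected electrodes to the current counts, so subsequent
        # touches over this area are judged against the new level.
        for e in candidates:
            baselines[e] = raw_counts[e]
        return []
    return candidates  # sufficient force confirms a real touch

# A scan with elevated counts on electrodes 4 and 5 but negligible force is
# rejected as a water drop, and those two baselines are re-anchored.
scan = {e: (140 if e in (4, 5) else 100) for e in range(16)}
print(process_scan(scan, force_reading=2))  # -> []
```

With the re-anchored baselines, a later finger press inside the wetted area still produces a fresh count delta and is detected, matching the subsequent-touch behavior of claims 3 through 5.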
2,600
10,618
10,618
14,634,304
2,685
An embodiment of a device includes an image-capture sensor, a determiner, and a notifier. The image-capture sensor is configured to be located on a subject having a body portion and to capture data representative of an image of an object. The determiner is configured to determine, in response to the data, whether the body portion may contact the object. The notifier is configured to warn, or otherwise notify, the subject in response to the determiner determining that the body portion may contact the object. Such a device (e.g., attached to, or part of, a shoe) may be useful to warn a subject of a potential collision between an object (e.g., stairs, furniture, door jamb, curb, toy) and a body part (e.g., foot, toes) in which the subject has lost feeling, the ability to feel pain, or proprioception. And such a warning may help the subject to avoid inadvertently and repeatedly injuring the body part.
1. A device, comprising: an image-capture sensor configured to be located on a subject having a body portion and to capture data representative of an image of an object; a determiner configured to determine, in response to the data, whether the body portion may contact the object; and a notifier configured to notify the subject in response to the determiner determining that the body portion may contact the object. 2. The device of claim 1 wherein the image-capture sensor includes a pixel array. 3. The device of claim 1 wherein the image-capture sensor includes a camera. 4. The device of claim 1 wherein the image-capture sensor includes a video camera. 5. A device, comprising: means for capturing information that is related to an image of an object, the means for capturing attachable to a subject having a body portion; means for determining, in response to the information, whether the body portion may contact the object; and means for notifying the subject in response to the means for determining that the body portion may contact the object. 6. (canceled) 7. A method, comprising: capturing, with an image-capture sensor configured to be located on a subject having a body portion, data representative of an image of an object; determining, in response to the data, whether the body portion of the subject may contact the object; and notifying the subject in response to determining that the body portion may contact the object. 8.-15. (canceled) 16. The device of claim 1 wherein the image-capture sensor is configured to capture data representative of a region of space adjacent to the body portion. 17. The device of claim 1 wherein the image-capture sensor is configured to capture data representative of a region of a surface that is supporting the subject. 18. The device of claim 1 wherein the image-capture sensor is configured to capture data representative of a region of a space toward which the subject is moving. 19. The device of claim 1 wherein the image-capture sensor is configured to capture data representative of a region of a space in front of the subject. 20.-39. (canceled) 40. The method of claim 7 wherein the determining includes: determining a movement of the body portion relative to the object; and determining whether the body portion may contact the object in response to the movement. 41. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a trajectory along which the body portion has moved relative to the object. 42. The method of claim 7 wherein the determining includes determining whether the body portion may collide with the object. 43. The method of claim 7 wherein the determining includes determining with a determiner that is remote from the image-capture sensor. 44. The method of claim 7 wherein the determining includes determining with a determiner that is remote from the subject. 45. The method of claim 7 wherein the notifying the subject includes notifying the subject with a notifier that is remote from the image-capture sensor. 46. The method of claim 7 wherein the notifying the subject includes notifying the subject with a notifier that is remote from the subject. 47. The method of claim 7 wherein the notifying the subject includes generating a notification. 48. The method of claim 7 wherein the notifying the subject includes notifying another subject. 49.-50. (canceled) 51. The device of claim 1 wherein the image-capture sensor includes a personal computing device. 52. 
The device of claim 1 wherein the image-capture sensor is configured to generate data representing multiple images of the object. 53.-55. (canceled) 56. The device of claim 1, further including: a housing; and wherein at least the image-capture sensor, determiner, and notifier are disposed in the housing. 57.-59. (canceled) 60. The device of claim 1 wherein the image includes a light image. 61. The device of claim 1 wherein the image includes a thermal image. 62. The device of claim 1 wherein the image-capture sensor is configured to generate information representing a sound image of the object. 63.-64. (canceled) 65. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a speed of the body portion relative to the object. 66. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a direction in which the body portion is moving relative to the object. 67. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a location of the body portion relative to the object. 68. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a rate at which a speed of the body portion relative to the object is changing. 69.-70. (canceled) 71. The method of claim 7 wherein the data representing an image of an object includes data regarding light emitted or reflected from the object. 72.-75. (canceled)
An embodiment of a device includes an image-capture sensor, a determiner, and a notifier. The image-capture sensor is configured to be located on a subject having a body portion and to capture data representative of an image of an object. The determiner is configured to determine, in response to the data, whether the body portion may contact the object. The notifier is configured to warn, or otherwise notify, the subject in response to the determiner determining that the body portion may contact the object. Such a device (e.g., attached to, or part of, a shoe) may be useful to warn a subject of a potential collision between an object (e.g., stairs, furniture, door jamb, curb, toy) and a body part (e.g., foot, toes) in which the subject has lost feeling, the ability to feel pain, or proprioception. And such a warning may help the subject to avoid inadvertently and repeatedly injuring the body part.1. A device, comprising: an image-capture sensor configured to be located on a subject having a body portion and to capture data representative of an image of an object; a determiner configured to determine, in response to the data, whether the body portion may contact the object; and a notifier configured to notify the subject in response to the determiner determining that the body portion may contact the object. 2. The device of claim 1 wherein the image-capture sensor includes a pixel array. 3. The device of claim 1 wherein the image-capture sensor includes a camera. 4. The device of claim 1 wherein the image-capture sensor includes a video camera. 5. A device, comprising: means for capturing information that is related to an image of an object, the means for capturing attachable to a subject having a body portion; means for determining, in response to the information, whether the body portion may contact the object; and means for notifying the subject in response to the means for determining that the body portion may contact the object. 6. (canceled) 7. A method, comprising: capturing, with an image-capture sensor configured to be located on a subject having a body portion, data representative of an image of an object; determining, in response to the data, whether the body portion of the subject may contact the object; and notifying the subject in response to determining that the body portion may contact the object. 8.-15. (canceled) 16. The device of claim 1 wherein the image-capture sensor is configured to capture data representative of a region of space adjacent to the body portion. 17. The device of claim 1 wherein the image-capture sensor is configured to capture data representative of a region of a surface that is supporting the subject. 18. The device of claim 1 wherein the image-capture sensor is configured to capture data representative of a region of a space toward which the subject is moving. 19. The device of claim 1 wherein the image-capture sensor is configured to capture data representative of a region of a space in front of the subject. 20.-39. (canceled) 40. The method of claim 7 wherein the determining includes: determining a movement of the body portion relative to the object; and determining whether the body portion may contact the object in response to the movement. 41. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a trajectory along which the body portion has moved relative to the object. 42. 
The method of claim 7 wherein the determining includes determining whether the body portion may collide with the object. 43. The method of claim 7 wherein the determining includes determining with a determiner that is remote from the image-capture sensor. 44. The method of claim 7 wherein the determining includes determining with a determiner that is remote from the subject. 45. The method of claim 7 wherein the notifying the subject includes notifying the subject with a notifier that is remote from the image-capture sensor. 46. The method of claim 7 wherein the notifying the subject includes notifying the subject with a notifier that is remote from the subject. 47. The method of claim 7 wherein the notifying the subject includes generating a notification. 48. The method of claim 7 wherein the notifying the subject includes notifying another subject. 49.-50. (canceled) 51. The device of claim 1 wherein the image-capture sensor includes a personal computing device. 52. The device of claim 1 wherein the image-capture sensor is configured to generate data representing multiple images of the object. 53.-55. (canceled) 56. The device of claim 1, further including: a housing; and wherein at least the image-capture sensor, determiner, and notifier are disposed in the housing. 57.-59. (canceled) 60. The device of claim 1 wherein the image includes a light image. 61. The device of claim 1 wherein the image includes a thermal image. 62. The device of claim 1 wherein the image-capture sensor is configured to generate information representing a sound image of the object. 63.-64. (canceled) 65. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a speed of the body portion relative to the object. 66. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a direction in which the body portion is moving relative to the object. 67. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a location of the body portion relative to the object. 68. The method of claim 7 wherein the determining includes determining whether the body portion may contact the object in response to a rate at which a speed of the body portion relative to the object is changing. 69.-70. (canceled) 71. The method of claim 7 wherein the data representing an image of an object includes data regarding light emitted or reflected from the object. 72.-75. (canceled)
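Claims 40-42 and 65-68 determine possible contact from the body portion's motion relative to the object: its speed, direction, position, and the rate at which the speed changes. One plausible realization is a time-to-contact check over successive image frames; the one-dimensional constant-velocity model and the warning horizon below are assumptions for illustration, not the application's stated method.

```python
# Sketch of a contact determination from relative motion (claims 40-42, 65-68).
# The 1-D constant-velocity model and the warning horizon are assumptions.

WARN_SECONDS = 0.5  # notify if contact is predicted within this horizon

def may_contact(distance_m: float, closing_speed_mps: float) -> bool:
    """distance_m: body portion to object, estimated from the image data.
    closing_speed_mps: positive when the gap is shrinking, estimated from
    the multiple images of claim 52."""
    if closing_speed_mps <= 0:
        return False  # moving apart or holding position
    return distance_m / closing_speed_mps < WARN_SECONDS

# Example: a foot 0.2 m from a stair edge, closing at 0.6 m/s -> warn.
if may_contact(0.2, 0.6):
    print("notify subject: body portion may contact the object")
```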
2,600
10,619
10,619
16,009,034
2,631
A UE may receive an indication of a beam pair link (BPL), wherein the BPL comprises a base station (BS) transmit beam and a corresponding UE receive beam. The UE may tag the BPL based on the UE receive beam. The UE may take one or more actions associated with the tagged BPL.
1. A method for wireless communication by a user equipment (UE), comprising: receiving an indication of a beam pair link (BPL), wherein the BPL comprises a base station (BS) transmit beam and a corresponding UE receive beam; tagging the BPL based on the UE receive beam; and taking one or more actions associated with the tagged BPL. 2. The method of claim 1, wherein taking the one or more actions comprises: transmitting, to the BS, an indication of the tagged BPL. 3. The method of claim 1, wherein taking the one or more actions comprises: receiving signaling in accordance with the BPL. 4. The method of claim 1, wherein taking the one or more actions comprises: receiving a downlink transmission indicating beam refinement of the BS transmit beam of the tagged BPL; during the refinement, receiving signaling, transmitted from one or more neighboring beams of the BS transmit beam, using the UE receive beam; determining a signal quality associated with transmissions from one or more of the neighboring beams of the BS transmit beam; and indicating to the BS a recommended BS transmit beam corresponding to the UE receive beam of the tagged BPL based, at least in part, on the determined signal quality. 5. The method of claim 1, wherein taking the one or more actions comprises: receiving a downlink transmission indicating beam refinement of the UE receive beam of the tagged BPL; during the refinement, receiving signaling from the BS transmit beam via one or more receive beams neighboring the corresponding UE receive beam of the BPL; determining a signal quality associated with one or more of the neighboring beams of the UE receive beam; and updating the UE receive beam corresponding to the BS transmit beam of the tagged BPL based, at least in part, on the determined signal quality. 6. The method of claim 5, further comprising: determining whether a different tag is needed in response to the updated UE receive beam; in response to determining a different tag is needed, computing the different tag; indicating the different tag to the BS; and assigning the different tag to the updated UE receive beam and BS transmit beam. 7. The method of claim 6, wherein the different tag comprises one of: a new tag or a currently-used tag. 8. The method of claim 1, wherein taking one or more actions associated with the tagged BPL comprises: transmitting, to the BS, an indication of the tagged BPL in response to at least one of: a new BPL or an established BPL sharing a same UE receive beam with the new BPL. 9. The method of claim 1, wherein taking one or more actions associated with the tagged BPL comprises: receiving, from the BS, a message to remove a tag and its current association to one or more BPLs; and in response to the message, making the removed tag available for assignment to one or more new BPLs. 10. A method for wireless communication by a base station (BS), comprising: transmitting an indication of a beam pair link (BPL), wherein the BPL comprises a BS transmit beam and a corresponding user equipment (UE) receive beam; receiving an indication of a tag assigned to the BPL based on the UE receive beam; and taking one or more actions associated with the tagged BPL. 11. The method of claim 10, wherein receiving the indication of the tag comprises: receiving, from the UE, an indication of the tagged BPL. 12. The method of claim 10, wherein taking the one or more actions comprises: transmitting signaling in accordance with the BPL. 13. The method of claim 12, wherein the tag comprises a beam indication. 14. 
The method of claim 10, wherein taking the one or more actions comprises: transmitting a downlink assignment indicating beam refinement of the BS transmit beam of the tagged BPL; during the refinement, transmitting signaling, using one or more neighboring beams of the BS transmit beam; and receiving a recommendation for an updated BS transmit beam corresponding to the UE receive beam of the tagged BPL, wherein the updated BS transmit beam and the corresponding UE receive beam are assigned the tag. 15. The method of claim 10, wherein taking the one or more actions comprises: transmitting a downlink assignment indicating beam refinement of the UE receive beam of the tagged BPL; during the refinement, transmitting signaling using the BS transmit beam; and receiving an updated tag, which may be a new or old tag corresponding to the BS transmit beam of the tagged BPL, wherein the updated UE receive beam and the corresponding BS transmit beam are assigned one of the tag or an updated tag. 16. The method of claim 15, further comprising: transmitting an indication for the updated tag in response to the updated UE receive beam; and receiving the updated tag assigned to the updated UE receive beam and BS transmit beam. 17. The method of claim 10, wherein taking one or more actions associated with the tagged BPL comprises: receiving an indication of the tagged BPL in response to at least one of: a new BPL or an established BPL sharing a same UE receive beam with the new BPL. 18. The method of claim 10, wherein taking the one or more actions associated with the tagged BPL comprises: signaling to the UE removal of a tag and its current association to one or more BPLs, wherein the removed tag is available for future assignment to one or more new BPLs. 19. An apparatus for wireless communication by a user equipment (UE), comprising: means for receiving an indication of a beam pair link (BPL), wherein the BPL comprises a base station (BS) transmit beam and a corresponding UE receive beam; means for tagging the BPL based on the UE receive beam; and means for taking one or more actions associated with the tagged BPL. 20. The apparatus of claim 19, wherein the means for taking the one or more actions comprises: means for transmitting, to the BS, an indication of the tagged BPL. 21. The apparatus of claim 19, wherein the means for taking the one or more actions comprises: means for receiving signaling in accordance with the BPL. 22. The apparatus of claim 19, wherein the means for taking the one or more actions comprises: means for receiving a downlink transmission indicating beam refinement of the BS transmit beam of the tagged BPL; during the refinement, means for receiving signaling, transmitted from one or more neighboring beams of the BS transmit beam, using the UE receive beam; means for determining a signal quality associated with transmissions from one or more of the neighboring beams of the BS transmit beam; and means for indicating to the BS a recommended BS transmit beam corresponding to the UE receive beam of the tagged BPL based, at least in part, on the determined signal quality. 23. 
The apparatus of claim 22, wherein the means for taking the one or more actions comprises: means for receiving a downlink transmission indicating beam refinement of the UE receive beam of the tagged BPL; during the refinement, means for receiving signaling from the BS transmit beam via one or more receive beams neighboring the corresponding UE receive beam of the BPL; means for determining a signal quality associated with one or more of the neighboring beams of the UE receive beam; and means for updating the UE receive beam corresponding to the BS transmit beam of the tagged BPL based, at least in part, on the determined signal quality. 24. The apparatus of claim 23, further comprising: means for determining whether a different tag is needed in response to the updated UE receive beam; in response to determining a different tag is needed, means for computing the different tag; means for indicating the different tag to the BS; and means for assigning the different tag to the updated UE receive beam and BS transmit beam. 25. The apparatus of claim 24, wherein the different tag comprises one of: a new tag or a currently-used tag. 26. An apparatus for wireless communication by a base station (BS), comprising: means for transmitting an indication of a beam pair link (BPL), wherein the BPL comprises a BS transmit beam and a corresponding user equipment (UE) receive beam; means for receiving an indication of a tag assigned to the BPL based on the UE receive beam; and means for taking one or more actions associated with the tagged BPL. 27. The apparatus of claim 26, wherein the means for taking the one or more actions comprises: means for transmitting a downlink assignment indicating beam refinement of the BS transmit beam of the tagged BPL; during the refinement, means for transmitting signaling, using one or more neighboring beams of the BS transmit beam; and means for receiving a recommendation for an updated BS transmit beam corresponding to the UE receive beam of the tagged BPL, wherein the updated BS transmit beam and the corresponding UE receive beam are assigned the tag. 28. The apparatus of claim 26, wherein the means for taking the one or more actions comprises: means for transmitting a downlink assignment indicating beam refinement of the UE receive beam of the tagged BPL; during the refinement, means for transmitting signaling using the BS transmit beam; and means for receiving an updated tag, which may be a new or old tag corresponding to the BS transmit beam of the tagged BPL, wherein the updated UE receive beam and the corresponding BS transmit beam are assigned one of the tag or an updated tag. 29. The apparatus of claim 28, further comprising: means for transmitting an indication for the updated tag in response to the updated UE receive beam; and means for receiving the updated tag assigned to the updated UE receive beam and BS transmit beam. 30. The apparatus of claim 26, wherein the means for taking the one or more actions associated with the tagged BPL comprises: means for signaling to the UE removal of a tag and its current association to one or more BPLs, wherein the removed tag is available for future assignment to one or more new BPLs.
A UE may receive an indication of a beam pair link (BPL), wherein the BPL comprises a base station (BS) transmit beam and a corresponding UE receive beam. The UE may tag the BPL based on the UE receive beam. The UE may take one or more actions associated with the tagged BPL.1. A method for wireless communication by a user equipment (UE), comprising: receiving an indication of a beam pair link (BPL), wherein the BPL comprises a base station (BS) transmit beam and a corresponding UE receive beam; tagging the BPL based on the UE receive beam; and taking one or more actions associated with the tagged BPL. 2. The method of claim 1, wherein taking the one or more actions comprises: transmitting, to the BS, an indication of the tagged BPL. 3. The method of claim 1, wherein taking the one or more actions comprises: receiving signaling in accordance with the BPL. 4. The method of claim 1, wherein taking the one or more actions comprises: receiving a downlink transmission indicating beam refinement of the BS transmit beam of the tagged BPL; during the refinement, receiving signaling, transmitted from one or more neighboring beams of the BS transmit beam, using the UE receive beam; determining a signal quality associated with transmissions from one or more of the neighboring beams of the BS transmit beam; and indicating to the BS a recommended BS transmit beam corresponding to the UE receive beam of the tagged BPL based, at least in part, on the determined signal quality. 5. The method of claim 1, wherein taking the one or more actions comprises: receiving a downlink transmission indicating beam refinement of the UE receive beam of the tagged BPL; during the refinement, receiving signaling from the BS transmit beam via one or more receive beams neighboring the corresponding UE receive beam of the BPL; determining a signal quality associated with one or more of the neighboring beams of the UE receive beam; and updating the UE receive beam corresponding to the BS transmit beam of the tagged BPL based, at least in part, on the determined signal quality. 6. The method of claim 5, further comprising: determining whether a different tag is needed in response to the updated UE receive beam; in response to determining a different tag is needed, computing the different tag; indicating the different tag to the BS; and assigning the different tag to the updated UE receive beam and BS transmit beam. 7. The method of claim 6, wherein the different tag comprises one of: a new tag or a currently-used tag. 8. The method of claim 1, wherein taking one or more actions associated with the tagged BPL comprises: transmitting, to the BS, an indication of the tagged BPL in response to at least one of: a new BPL or an established BPL sharing a same UE receive beam with the new BPL. 9. The method of claim 1, wherein taking one or more actions associated with the tagged BPL comprises: receiving, from the BS, a message to remove a tag and its current association to one or more BPLs; and in response to the message, making the removed tag available for assignment to one or more new BPLs. 10. A method for wireless communication by a base station (BS), comprising: transmitting an indication of a beam pair link (BPL), wherein the BPL comprises a BS transmit beam and a corresponding user equipment (UE) receive beam; receiving an indication of a tag assigned to the BPL based on the UE receive beam; and taking one or more actions associated with the tagged BPL. 11. 
The method of claim 10, wherein receiving the indication of the tag comprises: receiving, from the UE, an indication of the tagged BPL. 12. The method of claim 10, wherein taking the one or more actions comprises: transmitting signaling in accordance with the BPL. 13. The method of claim 12, wherein the tag comprises a beam indication. 14. The method of claim 10, wherein taking the one or more actions comprises: transmitting a downlink assignment indicating beam refinement of the BS transmit beam of the tagged BPL; during the refinement, transmitting signaling, using one or more neighboring beams of the BS transmit beam; and receiving a recommendation for an updated BS transmit beam corresponding to the UE receive beam of the tagged BPL, wherein the updated BS transmit beam and the corresponding UE receive beam are assigned the tag. 15. The method of claim 10, wherein taking the one or more actions comprises: transmitting a downlink assignment indicating beam refinement of the UE receive beam of the tagged BPL; during the refinement, transmitting signaling using the BS transmit beam; and receiving an updated tag, which may be a new or old tag corresponding to the BS transmit beam of the tagged BPL, wherein the updated UE receive beam and the corresponding BS transmit beam are assigned one of the tag or an updated tag. 16. The method of claim 15, further comprising: transmitting an indication for the updated tag in response to the updated UE receive beam; and receiving the updated tag assigned to the updated UE receive beam and BS transmit beam. 17. The method of claim 10, wherein taking one or more actions associated with the tagged BPL comprises: receiving an indication of the tagged BPL in response to at least one of: a new BPL or an established BPL sharing a same UE receive beam with the new BPL. 18. The method of claim 10, wherein taking the one or more actions associated with the tagged BPL comprises: signaling to the UE removal of a tag and its current association to one or more BPLs, wherein the removed tag is available for future assignment to one or more new BPLs. 19. An apparatus for wireless communication by a user equipment (UE), comprising: means for receiving an indication of a beam pair link (BPL), wherein the BPL comprises a base station (BS) transmit beam and a corresponding UE receive beam; means for tagging the BPL based on the UE receive beam; and means for taking one or more actions associated with the tagged BPL. 20. The apparatus of claim 19, wherein the means for taking the one or more actions comprises: means for transmitting, to the BS, an indication of the tagged BPL. 21. The apparatus of claim 19, wherein the means for taking the one or more actions comprises: means for receiving signaling in accordance with the BPL. 22. The apparatus of claim 19, wherein the means for taking the one or more actions comprises: means for receiving a downlink transmission indicating beam refinement of the BS transmit beam of the tagged BPL; during the refinement, means for receiving signaling, transmitted from one or more neighboring beams of the BS transmit beam, using the UE receive beam; means for determining a signal quality associated with transmissions from one or more of the neighboring beams of the BS transmit beam; and means for indicating to the BS a recommended BS transmit beam corresponding to the UE receive beam of the tagged BPL based, at least in part, on the determined signal quality. 23. 
The apparatus of claim 22, wherein the means for taking the one or more actions comprises: means for receiving a downlink transmission indicating beam refinement of the UE receive beam of the tagged BPL; during the refinement, means for receiving signaling from the BS transmit beam via one or more receive beams neighboring the corresponding UE receive beam of the BPL; means for determining a signal quality associated with one or more of the neighboring beams of the UE receive beam; and means for updating the UE receive beam corresponding to the BS transmit beam of the tagged BPL based, at least in part, on the determined signal quality. 24. The apparatus of claim 23, further comprising: means for determining whether a different tag is needed in response to the updated UE receive beam; in response to determining a different tag is needed, means for computing the different tag; means for indicating the different tag to the BS; and means for assigning the different tag to the updated UE receive beam and BS transmit beam. 25. The apparatus of claim 24, wherein the different tag comprises one of: a new tag or a currently-used tag. 26. An apparatus for wireless communication by a base station (BS), comprising: means for transmitting an indication of a beam pair link (BPL), wherein the BPL comprises a BS transmit beam and a corresponding user equipment (UE) receive beam; means for receiving an indication of a tag assigned to the BPL based on the UE receive beam; and means for taking one or more actions associated with the tagged BPL. 27. The apparatus of claim 26, wherein the means for taking the one or more actions comprises: means for transmitting a downlink assignment indicating beam refinement of the BS transmit beam of the tagged BPL; during the refinement, means for transmitting signaling, using one or more neighboring beams of the BS transmit beam; and means for receiving a recommendation for an updated BS transmit beam corresponding to the UE receive beam of the tagged BPL, wherein the updated BS transmit beam and the corresponding UE receive beam are assigned the tag. 28. The apparatus of claim 26, wherein the means for taking the one or more actions comprises: means for transmitting a downlink assignment indicating beam refinement of the UE receive beam of the tagged BPL; during the refinement, means for transmitting signaling using the BS transmit beam; and means for receiving an updated tag, which may be a new or old tag corresponding to the BS transmit beam of the tagged BPL, wherein the updated UE receive beam and the corresponding BS transmit beam are assigned one of the tag or an updated tag. 29. The apparatus of claim 28, further comprising: means for transmitting an indication for the updated tag in response to the updated UE receive beam; and means for receiving the updated tag assigned to the updated UE receive beam and BS transmit beam. 30. The apparatus of claim 26, wherein the means for taking the one or more actions associated with the tagged BPL comprises: means for signaling to the UE removal of a tag and its current association to one or more BPLs, wherein the removed tag is available for future assignment to one or more new BPLs.
2,600
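The UE-side bookkeeping claimed in the record above reduces to a small state machine: tags are derived from the UE receive beam (so BPLs sharing a receive beam share a tag, per claim 8), refinement of the receive beam may trigger a new or reused tag (claims 5-7), and a BS-signaled removal frees the tag for new BPLs (claim 9). A minimal Python sketch of that logic, assuming integer beam identifiers; the class name `BplTagTable` and its data layout are invented for illustration, not taken from the claims:

```python
class BplTagTable:
    """Toy UE-side tag table for beam pair links (BPLs).

    A BPL is a (BS transmit beam, UE receive beam) pair, but the tag is
    derived from the UE receive beam alone, so BPLs that share a receive
    beam share a tag.
    """

    def __init__(self):
        self._tag_by_rx_beam = {}  # UE receive beam id -> tag
        self._free_tags = []       # tags released by the BS, reusable
        self._next_tag = 0

    def tag_bpl(self, ue_rx_beam):
        """Assign (or look up) the tag for a BPL with this receive beam."""
        if ue_rx_beam not in self._tag_by_rx_beam:
            if self._free_tags:
                tag = self._free_tags.pop()
            else:
                tag = self._next_tag
                self._next_tag += 1
            self._tag_by_rx_beam[ue_rx_beam] = tag
        return self._tag_by_rx_beam[ue_rx_beam]

    def refine_rx_beam(self, old_rx_beam, new_rx_beam):
        """After receive-beam refinement, decide whether a different tag is
        needed (claims 5-7): the old tag is kept only if the beam is unchanged."""
        if new_rx_beam == old_rx_beam:
            return self._tag_by_rx_beam[old_rx_beam]  # currently-used tag
        return self.tag_bpl(new_rx_beam)              # new (or recycled) tag

    def remove_tag(self, ue_rx_beam):
        """BS-signaled tag removal (claim 9): the tag becomes reusable."""
        self._free_tags.append(self._tag_by_rx_beam.pop(ue_rx_beam))


table = BplTagTable()
assert table.tag_bpl(ue_rx_beam=3) == table.tag_bpl(ue_rx_beam=3)  # shared tag
print(table.refine_rx_beam(3, 7))  # refinement moved the beam: different tag
```

Two BPLs whose transmit beams differ but whose receive beam is the same would report the same tag, which is exactly the sharing condition that claim 8 keys on.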
10,620
10,620
16,034,132
2,689
Methods and systems for managing a premises are disclosed. The premises may comprise a premises management device. The premises management device may be in communication with a security system. The premises management device may manage power consumption by maintaining or disabling components associated with the premises management device.
1. A method comprising: determining, by a premises management device located at a premises, an indication of a failure of a power supply located at the premises, wherein the premises management device comprises a plurality of components; maintaining, based at least in part on the indication of the failure of the power supply, operations associated with a first component of the plurality of components; and disabling, based at least in part on the indication of the failure of the power supply, operations associated with a second component, of the plurality of components, associated with a lower priority than the first component. 2. The method of claim 1, wherein the first component comprises a premises safety component and the second component comprises a premises content component. 3. The method of claim 1, wherein the plurality of components comprise one or more of an application, a process, an interface, or an element associated with the premises management device. 4. The method of claim 1, wherein maintaining operations associated with the first component comprises one or more of maintaining access of the first component to a backup power supply, maintaining communication with the first component, or maintaining functionality of the first component. 5. The method of claim 1, wherein disabling operations associated with the second component comprises one or more of preventing access of the second component to a backup power supply, disabling communication with the second component, or disabling functionality of the second component. 6. The method of claim 1, wherein one or more of the operations associated with the first component or the operations associated with the second component comprise controlling of a premises device located at the premises. 7. The method of claim 1, wherein the indication of the failure of the power supply comprises an indication of a duration of the failure of the power supply. 8. A method comprising: determining, by a premises management device located at a premises, an indication of a failure of a power supply located at the premises, wherein the premises management device comprises a plurality of components; and disabling, based at least in part on the indication of the failure of the power supply and priority information associated with one or more of the plurality of components, operations associated with a component, of the plurality of components, associated with a lower priority than at least one other component of the plurality of components. 9. The method of claim 8, further comprising maintaining, based at least in part on the indication of the failure, operations associated with one or more of the at least one other component of the plurality of components. 10. The method of claim 8, wherein the plurality of components comprise one or more of an application, a process, an interface, or an element associated with the premises management device. 11. The method of claim 8, wherein the operations associated with the component comprise controlling a premises device located at the premises. 12. The method of claim 8, wherein disabling operations associated with the component comprises one or more of preventing access of the component to a backup power supply, disabling communication with the component, or disabling functionality of the component. 13. The method of claim 8, wherein the indication of the failure of the power supply comprises an indication of a duration of the failure of the power supply. 14. 
The method of claim 8, wherein the premises management device comprises one or more of a gateway device or an interface device. 15. A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: determine an indication of a failure of a power supply located at a premises; maintain, based at least in part on the indication of the failure of the power supply, operations associated with a first component of a plurality of components; and disable, based at least in part on the indication of the failure of the power supply, operations associated with a second component, of the plurality of components, associated with a lower priority than the first component. 16. The device of claim 15, wherein the first component comprises a premises safety component and the second component comprises a premises content component. 17. The device of claim 15, wherein the plurality of components comprise one or more of an application, a process, an interface, or an element associated with the device. 18. The device of claim 15, wherein the instructions that, when executed by the one or more processors, cause the device to maintain operations associated with the first component comprise instructions that, when executed by the one or more processors, cause the device to one or more of maintain access of the first component to a backup power supply, maintain communication with the first component, or maintain functionality of the first component. 19. The device of claim 15, wherein the instructions that, when executed by the one or more processors, cause the device to disable operations associated with the second component comprise instructions that, when executed by the one or more processors, cause the device to one or more of prevent access of the second component to a backup power supply, disable communication with the second component, or disable functionality of the second component. 20. A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: determine an indication of a failure of a power supply located at a premises; and disable, based at least in part on the indication of the failure of the power supply and priority information associated with one or more of a plurality of components of the device, operations associated with a component, of the plurality of components, associated with a lower priority than at least one other component of the plurality of components. 21. The device of claim 20, wherein the instructions, when executed by the one or more processors, further cause the device to maintain, based at least in part on the indication of the failure, operations associated with one or more of the at least one other component of the plurality of components. 22. The device of claim 20, wherein the plurality of components comprise one or more of an application, a process, an interface, or an element associated with the device. 23. The device of claim 20, wherein the operations associated with the component comprise controlling a premises device located at the premises. 24. 
The device of claim 20, wherein the instructions that, when executed by the one or more processors, cause the device to disable operations associated with the component comprise instructions that, when executed by the one or more processors, cause the device to one or more of prevent access of the component to a backup power supply, disable communication with the component, or disable functionality of the component. 25. The device of claim 20, wherein the indication of the failure of the power supply comprises an indication of a duration of the failure of the power supply.
Methods and systems for managing a premises are disclosed. The premises may comprise a premises management device. The premises management device may be in communication with a security system. The premises management device may manage power consumption by maintaining or disabling components associated with the premises management device.1. A method comprising: determining, by a premises management device located at a premises, an indication of a failure of a power supply located at the premises, wherein the premises management device comprises a plurality of components; maintaining, based at least in part on the indication of the failure of the power supply, operations associated with a first component of the plurality of components; and disabling, based at least in part on the indication of the failure of the power supply, operations associated with a second component, of the plurality of components, associated with a lower priority than the first component. 2. The method of claim 1, wherein the first component comprises a premises safety component and the second component comprises a premises content component. 3. The method of claim 1, wherein the plurality of components comprise one or more of an application, a process, an interface, or an element associated with the premises management device. 4. The method of claim 1, wherein maintaining operations associated with the first component comprises one or more of maintaining access of the first component to a backup power supply, maintaining communication with the first component, or maintaining functionality of the first component. 5. The method of claim 1, wherein disabling operations associated with the second component comprises one or more of preventing access of the second component to a backup power supply, disabling communication with the second component, or disabling functionality of the second component. 6. The method of claim 1, wherein one or more of the operations associated with the first component or the operations associated with the second component comprise controlling of a premises device located at the premises. 7. The method of claim 1, wherein the indication of the failure of the power supply comprises an indication of a duration of the failure of the power supply. 8. A method comprising: determining, by a premises management device located at a premises, an indication of a failure of a power supply located at the premises, wherein the premises management device comprises a plurality of components; and disabling, based at least in part on the indication of the failure of the power supply and priority information associated with one or more of the plurality of components, operations associated with a component, of the plurality of components, associated with a lower priority than at least one other component of the plurality of components. 9. The method of claim 8, further comprising maintaining, based at least in part on the indication of the failure, operations associated with one or more of the at least one other component of the plurality of components. 10. The method of claim 8, wherein the plurality of components comprise one or more of an application, a process, an interface, or an element associated with the premises management device. 11. The method of claim 8, wherein the operations associated with the component comprise controlling a premises device located at the premises. 12. 
The method of claim 8, wherein disabling operations associated with the component comprises one or more of preventing access of the component to a backup power supply, disabling communication with the component, or disabling functionality of the component. 13. The method of claim 8, wherein the indication of the failure of the power supply comprises an indication of a duration of the failure of the power supply. 14. The method of claim 8, wherein the premises management device comprises one or more of a gateway device or an interface device. 15. A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: determine an indication of a failure of a power supply located at a premises; maintain, based at least in part on the indication of the failure of the power supply, operations associated with a first component of a plurality of components; and disable, based at least in part on the indication of the failure of the power supply, operations associated with a second component, of the plurality of components, associated with a lower priority than the first component. 16. The device of claim 15, wherein the first component comprises a premises safety component and the second component comprises a premises content component. 17. The device of claim 15, wherein the plurality of components comprise one or more of an application, a process, an interface, or an element associated with the device. 18. The device of claim 15, wherein the instructions that, when executed by the one or more processors, cause the device to maintain operations associated with the first component comprise instructions that, when executed by the one or more processors, cause the device to one or more of maintain access of the first component to a backup power supply, maintain communication with the first component, or maintain functionality of the first component. 19. The device of claim 15, wherein the instructions that, when executed by the one or more processors, cause the device to disable operations associated with the second component comprise instructions that, when executed by the one or more processors, cause the device to one or more of prevent access of the second component to a backup power supply, disable communication with the second component, or disable functionality of the second component. 20. A device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to: determine an indication of a failure of a power supply located at a premises; and disable, based at least in part on the indication of the failure of the power supply and priority information associated with one or more of a plurality of components of the device, operations associated with a component, of the plurality of components, associated with a lower priority than at least one other component of the plurality of components. 21. The device of claim 20, wherein the instructions, when executed by the one or more processors, further cause the device to maintain, based at least in part on the indication of the failure, operations associated with one or more of the at least one other component of the plurality of components. 22. The device of claim 20, wherein the plurality of components comprise one or more of an application, a process, an interface, or an element associated with the device. 23. 
The device of claim 20, wherein the operations associated with the component comprise controlling a premises device located at the premises. 24. The device of claim 20, wherein the instructions that, when executed by the one or more processors, cause the device to disable operations associated with the component comprise instructions that, when executed by the one or more processors, cause the device to one or more of prevent access of the component to a backup power supply, disable communication with the component, or disable functionality of the component. 25. The device of claim 20, wherein the indication of the failure of the power supply comprises an indication of a duration of the failure of the power supply.
2,600
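Stripped of claim language, the method in the record above is a priority cut: when the premises management device sees a power-supply failure, components above a priority line keep running (e.g., on backup power) and those below it are disabled. A short sketch of that rule; the numeric priorities and component names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    priority: int      # higher value = more important to keep alive
    enabled: bool = True

def handle_power_failure(components, min_priority):
    """On an indication of a power-supply failure, maintain operations of
    components at or above `min_priority` and disable the rest, mirroring
    the maintain/disable split in the claims. In this toy model, "disable"
    stands for cutting the component off from the backup supply."""
    for comp in components:
        comp.enabled = comp.priority >= min_priority

components = [
    Component("door/fire sensor interface", priority=10),  # premises safety
    Component("video-on-demand client", priority=2),       # premises content
    Component("thermostat control", priority=5),
]
handle_power_failure(components, min_priority=5)
for c in components:
    print(f"{c.name}: {'maintained' if c.enabled else 'disabled'}")
```

A fuller model might key the threshold on the indicated duration of the outage (claim 7), disabling progressively more components the longer the failure is expected to last.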
10,621
10,621
15,527,850
2,642
This invention relates to a method in a cellular communications network having a plurality of base stations, and to a base station for said cellular communications network, wherein each base station is configured to use a frequency band of one of a hierarchy of spectrum levels, the method comprising: a first base station using a frequency band of a first spectrum level of the hierarchy of spectrum levels; determining an interference level in the network; determining whether the interference level indicates that the first base station's performance is sub-optimal; and, if so, the first base station reconfiguring to use a frequency band of a second spectrum level of the hierarchy of spectrum levels, the second spectrum level having a different number of frequency bands than the first spectrum level.
1. A method in a cellular communications network having a plurality of base stations, wherein each base station stores data representing a hierarchy of spectrum levels each defining one or more frequency bands in which the base station may operate, wherein a lower order level of the hierarchy of spectrum levels includes a greater number of frequency bands than a higher order level, the method comprising the steps of: a first base station using a frequency band of a first spectrum level of the hierarchy of spectrum levels; determining an interference level in the network; determining whether the interference level meets a first threshold indicating that the first base station's performance is sub-optimal; and, if so, the first base station reconfiguring to use a frequency band of a second spectrum level of the hierarchy of spectrum levels, the second spectrum level having a different number of frequency bands than the first spectrum level. 2. A method as claimed in claim 1, wherein the determined interference level is above the first threshold, and the second spectrum level is a lower order level having a greater number of frequency bands than the first spectrum level. 3. A method as claimed in claim 1, wherein the determined interference level indicates that interference is below the first threshold, and the second spectrum level is a higher order level having fewer frequency bands than the first spectrum level. 4. A method as claimed in claim 3, wherein the determined interference level indicates that the interference is below the first threshold and below a second threshold, wherein the second threshold is less than the first threshold, the method further comprising the steps of: the first base station assessing an interference level of a first frequency band in the second spectrum level; and the first base station reconfiguring to use the first frequency band of the second spectrum level, wherein the second spectrum level is a higher order level having fewer frequency bands than the first spectrum level. 5. A method as claimed in claim 1, further comprising the steps of: the first base station assessing an interference level of a first frequency band in the second spectrum level; and the first base station reconfiguring to use the first frequency band of the second spectrum level. 6. A method as claimed in claim 1, wherein a second base station has an overlapping coverage area with the first base station, the method further comprising the steps of: the second base station reconfiguring to use a frequency band of the second spectrum level. 7. A method as claimed in claim 1, wherein the step of determining whether the interference level indicates that performance is sub-optimal includes determining whether a previous reconfiguration improved performance. 8. A non-transitory computer-readable storage medium storing a computer program or suite of computer programs which upon execution by a computer system performs the method of claim 1. 9. 
A base station for a cellular communications network, the base station comprising: a transceiver for communicating with a User Equipment (UE); a memory storing data representing a hierarchy of spectrum levels each defining one or more frequency bands in which the base station may operate, wherein a lower order level of the hierarchy of spectrum levels includes a greater number of frequency bands than a higher order level; and a processor adapted to configure communications between the transceiver and the UE, wherein, in response to an interference level in the network meeting a first threshold indicating that the base station's performance is sub-optimal, the processor is adapted to reconfigure the transceiver from using a frequency band of a first level of the hierarchy of spectrum levels to using a frequency band of a second level of the hierarchy of spectrum levels, the second spectrum level having a different number of frequency bands than the first spectrum level. 10. A base station as claimed in claim 9, wherein the network interference level is above the first threshold, and the second spectrum level is a lower order level having a greater number of frequency bands than the first spectrum level. 11. A base station as claimed in claim 9, wherein the network interference level is below the first threshold, and the second spectrum level is a higher order level having fewer frequency bands than the first spectrum level. 12. A base station as claimed in claim 9, wherein the network interference level is below the first threshold and below a second threshold, wherein the second threshold is less than the first threshold, and the processor is further adapted to assess an interference level of a first frequency band in the second spectrum level, and to reconfigure the transceiver to use the first frequency band of the second spectrum level, wherein the second spectrum level is a higher order level having fewer frequency bands than the first spectrum level. 13. A base station as claimed in claim 9, wherein the processor is adapted to determine the interference level in the network and to determine whether the interference level indicates that the base station's performance is sub-optimal. 14. A base station as claimed in claim 13, wherein the processor is further adapted to assess an interference level of a first frequency band in the second spectrum level, and to reconfigure the transceiver to use the first frequency band of the second spectrum level. 15. A base station as claimed in claim 9, wherein the processor is adapted to cause the transceiver to send a message to a second base station, the second base station having an overlapping coverage area with the base station, indicating that the second base station should reconfigure to use a frequency band of the second spectrum level. 16. A base station as claimed in claim 9, being a Home evolved Node B. 17. A cellular communications network comprising a base station as claimed in claim 9.
This invention relates to a method in a cellular communications network having a plurality of base stations, and to a base station for said cellular communications network, wherein each base station is configured to use a frequency band of one of a hierarchy of spectrum levels, the method comprising: a first base station using a frequency band of a first spectrum level of the hierarchy of spectrum levels; determining an interference level in the network; determining whether the interference level indicates that the first base station's performance is sub-optimal; and, if so, the first base station reconfiguring to use a frequency band of a second spectrum level of the hierarchy of spectrum levels, the second spectrum level having a different number of frequency bands than the first spectrum level.1. A method in a cellular communications network having a plurality of base stations, wherein each base station stores data representing a hierarchy of spectrum levels each defining one or more frequency bands in which the base station may operate, wherein a lower order level of the hierarchy of spectrum levels includes a greater number of frequency bands than a higher order level, the method comprising the steps of: a first base station using a frequency band of a first spectrum level of the hierarchy of spectrum levels; determining an interference level in the network; determining whether the interference level meets a first threshold indicating that the first base station's performance is sub-optimal; and, if so, the first base station reconfiguring to use a frequency band of a second spectrum level of the hierarchy of spectrum levels, the second spectrum level having a different number of frequency bands than the first spectrum level. 2. A method as claimed in claim 1, wherein the determined interference level is above the first threshold, and the second spectrum level is a lower order level having a greater number of frequency bands than the first spectrum level. 3. A method as claimed in claim 1, wherein the determined interference level indicates that interference is below the first threshold, and the second spectrum level is a higher order level having fewer frequency bands than the first spectrum level. 4. A method as claimed in claim 3, wherein the determined interference level indicates that the interference is below the first threshold and below a second threshold, wherein the second threshold is less than the first threshold, the method further comprising the steps of: the first base station assessing an interference level of a first frequency band in the second spectrum level; and the first base station reconfiguring to use the first frequency band of the second spectrum level, wherein the second spectrum level is a higher order level having fewer frequency bands than the first spectrum level. 5. A method as claimed in claim 1, further comprising the steps of: the first base station assessing an interference level of a first frequency band in the second spectrum level; and the first base station reconfiguring to use the first frequency band of the second spectrum level. 6. A method as claimed in claim 1, wherein a second base station has an overlapping coverage area with the first base station, the method further comprising the steps of: the second base station reconfiguring to use a frequency band of the second spectrum level. 7. 
A method as claimed in claim 1, wherein the step of determining whether the interference level indicates that performance is sub-optimal includes determining whether a previous reconfiguration improved performance. 8. A non-transitory computer-readable storage medium storing a computer program or suite of computer programs which upon execution by a computer system performs the method of claim 1. 9. A base station for a cellular communications network, the base station comprising: a transceiver for communicating with a User Equipment (UE); a memory storing data representing a hierarchy of spectrum levels each defining one or more frequency bands in which the base station may operate, wherein a lower order level of the hierarchy of spectrum levels includes a greater number of frequency bands than a higher order level; and a processor adapted to configure communications between the transceiver and the UE, wherein, in response to an interference level in the network meeting a first threshold indicating that the base station's performance is sub-optimal, the processor is adapted to reconfigure the transceiver from using a frequency band of a first level of the hierarchy of spectrum levels to using a frequency band of a second level of the hierarchy of spectrum levels, the second spectrum level having a different number of frequency bands than the first spectrum level. 10. A base station as claimed in claim 9, wherein the network interference level is above the first threshold, and the second spectrum level is a lower order level having a greater number of frequency bands than the first spectrum level. 11. A base station as claimed in claim 9, wherein the network interference level is below the first threshold, and the second spectrum level is a higher order level having fewer frequency bands than the first spectrum level. 12. A base station as claimed in claim 9, wherein the network interference level is below the first threshold and below a second threshold, wherein the second threshold is less than the first threshold, and the processor is further adapted to assess an interference level of a first frequency band in the second spectrum level, and to reconfigure the transceiver to use the first frequency band of the second spectrum level, wherein the second spectrum level is a higher order level having fewer frequency bands than the first spectrum level. 13. A base station as claimed in claim 9, wherein the processor is adapted to determine the interference level in the network and to determine whether the interference level indicates that the base station's performance is sub-optimal. 14. A base station as claimed in claim 13, wherein the processor is further adapted to assess an interference level of a first frequency band in the second spectrum level, and to reconfigure the transceiver to use the first frequency band of the second spectrum level. 15. A base station as claimed in claim 9, wherein the processor is adapted to cause the transceiver to send a message to a second base station, the second base station having an overlapping coverage area with the base station, indicating that the second base station should reconfigure to use a frequency band of the second spectrum level. 16. A base station as claimed in claim 9, being a Home evolved Node B. 17. A cellular communications network comprising a base station as claimed in claim 9.
2,600
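The reconfiguration logic in the claims above is a two-threshold walk over a spectrum hierarchy: interference above the first threshold pushes the base station to a lower-order level (more, narrower bands to choose from), while interference below a second, lower threshold lets it climb back to a higher-order level (fewer, wider bands). A sketch under those assumptions; the level structure, band labels, and threshold values are invented for illustration:

```python
# Hierarchy of spectrum levels: lower-order levels contain more bands.
SPECTRUM_LEVELS = [
    ["F0"],                             # highest order: one wide band
    ["F0a", "F0b"],                     # middle level: two bands
    ["F0a1", "F0a2", "F0b1", "F0b2"],   # lowest order: four bands
]
FIRST_THRESHOLD = 0.7   # above this, performance is taken as sub-optimal
SECOND_THRESHOLD = 0.2  # below this, it is safe to consolidate upward

def reconfigure(level, interference, band_interference):
    """Return the new (level, band) for a base station.

    `band_interference` stands in for the per-band assessment the claims
    describe the base station performing before picking a band in the new
    level; here it is just a lookup of measured interference per band.
    """
    if interference > FIRST_THRESHOLD and level < len(SPECTRUM_LEVELS) - 1:
        level += 1   # drop to a lower-order level: more bands to dodge into
    elif interference < SECOND_THRESHOLD and level > 0:
        level -= 1   # rise to a higher-order level: fewer, wider bands
    # Pick the quietest available band in the (possibly new) level.
    band = min(SPECTRUM_LEVELS[level], key=lambda b: band_interference.get(b, 0.5))
    return level, band

# Heavy interference at level 0 drops the cell to level 1 and picks the
# quieter of its two bands according to the (made-up) measurements.
print(reconfigure(0, 0.9, {"F0a": 0.6, "F0b": 0.1}))  # -> (1, 'F0b')
```

Keeping a hysteresis gap between the two thresholds, as the claims do, prevents a cell from oscillating between adjacent levels when interference hovers near a single boundary.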
10,622
10,622
14,762,180
2,611
The present invention relates to a method and an arrangement for providing a 3D model of an environment. The method comprises the steps of forming a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges, wherein each node is associated to a 3D coordinate in a geographical coordinate system, determining for a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty and associating the determined mesh uncertainty to the corresponding node and/or surface and/or edge.
1. A method for providing a 3D model of an environment, comprising the steps of: forming a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges; determining for each of a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty representing a measure of the reliability of the mesh modelling the environment in that node and/or surface and/or edge; and associating the determined mesh uncertainty to the corresponding node and/or surface and/or edge, wherein each node of the mesh is associated to a 3D coordinate in a geographical coordinate system and wherein the mesh uncertainty represents a measure of the reliability of the 3D coordinate in the geographical coordinate system. 2. A method according to claim 1, wherein the mesh uncertainty is based on the geometry of the mesh. 3. A method according to claim 2, wherein the determination of the mesh uncertainty related to the geometry of the mesh is based on the level of detail of the mesh. 4. A method according to claim 3, wherein the step of forming the mesh comprises forming a hierarchical mesh having a plurality of selectable levels of detail, each level associated to an uncertainty, and wherein the determination of the mesh uncertainty is based on the level of detail of the selected level of the mesh. 5. A method according to claim 1, comprising the step of determining the geometry of the mesh locally at the nodes/surfaces/edges, wherein the determination of the mesh uncertainty of a specific node/surface/edge is based on the determined local geometry. 6. A method according to claim 1, wherein a plurality of the nodes and/or edges and/or surfaces of the mesh are associated to an attribute, said attribute comprising texture information. 7. A method according to claim 6, wherein the attribute further comprises a texture uncertainty measure. 8. A method according to claim 1, wherein the step of forming the mesh comprises providing a plurality of distance measurements to each area or point in the environment from a plurality of geographically known positions using a distance determining device, and providing the 3D model for each area or point based on the plurality of distance measurements. 9. A method according to claim 1, wherein the step of forming the mesh comprises dividing the environment into a plurality of areas or points, providing for each area or point a plurality of geo-referenced image sets, wherein each image comprises the area or point, performing for each area or point image stereo processing on each image set so as to provide a plurality of 3D sub-models for that area or point, providing the 3D model for each area or point based on the plurality of 3D sub-models, and forming a composed 3D model of the environment based on the 3D models related to the different areas or points. 10. A method according to claim 8, wherein the step of determining the mesh uncertainty comprises determining the mesh uncertainty based on a spread in the measurements. 11. A method according to claim 9, comprising the step of determining the number of images available for a certain area or point of the environment and comprising the step of determining the mesh uncertainty based on the number of images available for the certain area or point. 12. 
A method according to claim 9, comprising the step of determining the spatial spread of the images available and comprising the step of determining the mesh uncertainty based on the spatial spread of the images available. 13. A method according to claim 9, comprising the step of determining a measure related to the spread in the sub-models related to a plurality of areas or points, wherein the determination of the mesh uncertainty of a specific node/surface/edge is based on the determined value related to the spread in at least one point/area corresponding to that node/surface/edge. 14. A method according to claim 1, wherein the step of providing the mesh uncertainty comprises providing a value for the uncertainty in at least two directions. 15. A method according to claim 1, wherein the mesh uncertainty comprises at least one value related to a distance. 16. A method according to claim 1, wherein the mesh uncertainty comprises at least one probability value. 17. A method according to claim 1, comprising the step of visualizing the mesh uncertainty level in the mesh. 18. Computer program comprising a program code for providing a 3D model of an environment, comprising the steps of: forming a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges; determining for each of a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty representing a measure of the reliability of the mesh modelling the environment in that node and/or surface and/or edge; and associating the determined mesh uncertainty to the corresponding node and/or surface and/or edge, wherein each node of the mesh is associated to a 3D coordinate in a geographical coordinate system and wherein the mesh uncertainty represents a measure of the reliability of the 3D coordinate in the geographical coordinate system. 19. Computer program product comprising a program code stored on a computer readable medium for providing a 3D model of an environment, comprising the steps of: forming a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges; determining for each of a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty representing a measure of the reliability of the mesh modelling the environment in that node and/or surface and/or edge; and associating the determined mesh uncertainty to the corresponding node and/or surface and/or edge, wherein each node of the mesh is associated to a 3D coordinate in a geographical coordinate system and wherein the mesh uncertainty represents a measure of the reliability of the 3D coordinate in the geographical coordinate system. 20. 
An arrangement for providing a 3D model of an environment comprising a memory or having means for access to a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges, wherein each node is associated to a 3D coordinate in a geographical coordinate system, and a processing unit arranged to determine for each of a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty representing a measure of the reliability of the 3D coordinate in the geographical coordinate system in that node and/or surface and/or edge and to associate the determined mesh uncertainty to the corresponding node and/or surface and/or edge. 21. An arrangement according to claim 20, wherein the processing unit is further arranged to form the mesh. 22. An arrangement according to claim 20, further comprising output means arranged to present selected information related to the mesh and the associated mesh uncertainty.
The present invention relates to a method and an arrangement for providing a 3D model of an environment. The method comprises the steps of forming a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges, wherein each node is associated to a 3D coordinate in a geographical coordinate system, determining for a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty and associating the determined mesh uncertainty to the corresponding node and/or surface and/or edge.1. A method for providing a 3D model of an environment, comprising the steps of: forming a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges; determining for each of a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty representing a measure of the reliability of the mesh modelling the environment in that node and/or surface and/or edge; and associating the determined mesh uncertainty to the corresponding node and/or surface and/or edge, wherein each node of the mesh is associated to a 3D coordinate in a geographical coordinate system and wherein the mesh uncertainty represents a measure of the reliability of the 3D coordinate in the geographical coordinate system. 2. A method according to claim 1, wherein the mesh uncertainty is based on the geometry of the mesh. 3. A method according to claim 2, wherein the determination of the mesh uncertainty related to the geometry of the mesh is based on the level of detail of the mesh. 4. A method according to claim 3, wherein the step of forming the mesh comprises forming a hierarchical mesh having a plurality of selectable levels of detail, each level associated to an uncertainty, and wherein the determination of the mesh uncertainty is based on the level of detail of the selected level of the mesh. 5. A method according to claim 1, comprising the step of determining the geometry of the mesh locally at the nodes/surfaces/edges, wherein the determination of the mesh uncertainty of a specific node/surface/edge is based on the determined local geometry. 6. A method according to claim 1, wherein a plurality of the nodes and/or edges and/or surfaces of the mesh are associated to an attribute, said attribute comprising texture information. 7. A method according to claim 6, wherein the attribute further comprises a texture uncertainty measure. 8. A method according to claim 1, wherein the step of forming the mesh comprises providing a plurality of distance measurements to each area or point in the environment from a plurality of geographically known positions using a distance determining device, and providing the 3D model for each area or point based on the plurality of distance measurements. 9. A method according to claim 1, wherein the step of forming the mesh comprises dividing the environment into a plurality of areas or points, providing for each area or point a plurality of geo-referenced image sets, wherein each image comprises the area or point, performing for each area or point image stereo processing on each image set so as to provide a plurality of 3D sub-models for that area or point, providing the 3D model for each area or point based on the plurality of 3D sub-models, and forming a composed 3D model of the environment based on the 3D models related to the different areas or points. 10. 
A method according to claim 8, wherein the step of determining the mesh uncertainty comprises determining the mesh uncertainty based on a spread in the measurements. 11. A method according to claim 9, comprising the step of determining the number of images available for a certain area or point of the environment and comprising the step of determining the mesh uncertainty based on the number of images available for the certain area or point. 12. A method according to claim 9, comprising the step of determining the spatial spread of the images available and comprising the step of determining the mesh uncertainty based on the spatial spread of the images available. 13. A method according to claim 9, comprising the step of determining a measure related to the spread in the sub-models related to a plurality of areas or points, wherein the determination of the mesh uncertainty of a specific node/surface/edge is based on the determined value related to the spread in at least one point/area corresponding to that node/surface/edge. 14. A method according to claim 1, wherein the step of providing the mesh uncertainty comprises providing a value for the uncertainty in at least two directions. 15. A method according to claim 1, wherein the mesh uncertainty comprises at least one value related to a distance. 16. A method according to claim 1, wherein the mesh uncertainty comprises at least one probability value. 17. A method according to claim 1, comprising the step of visualizing the mesh uncertainty level in the mesh. 18. Computer program comprising a program code for providing a 3D model of an environment, comprising the steps of: forming a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges; determining for each of a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty representing a measure of the reliability of the mesh modelling the environment in that node and/or surface and/or edge; and associating the determined mesh uncertainty to the corresponding node and/or surface and/or edge, wherein each node of the mesh is associated to a 3D coordinate in a geographical coordinate system and wherein the mesh uncertainty represents a measure of the reliability of the 3D coordinate in the geographical coordinate system. 19. Computer program product comprising a program code stored on a computer readable medium for providing a 3D model of an environment, comprising the steps of: forming a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges; determining for each of a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty representing a measure of the reliability of the mesh modelling the environment in that node and/or surface and/or edge; and associating the determined mesh uncertainty to the corresponding node and/or surface and/or edge, wherein each node of the mesh is associated to a 3D coordinate in a geographical coordinate system and wherein the mesh uncertainty represents a measure of the reliability of the 3D coordinate in the geographical coordinate system. 20. 
An arrangement for providing a 3D model of an environment comprising a memory or having means for access to a mesh modelling the environment in three dimensions, said mesh comprising nodes interconnected by edges and having surfaces bordered by the edges, wherein each node is associated to a 3D coordinate in a geographical coordinate system, and a processing unit arranged to determine for each of a plurality of the nodes and/or surfaces and/or edges in the mesh a mesh uncertainty representing a measure of the reliability of the 3D coordinate in the geographical coordinate system in that node and/or surface and/or edge and to associate the determined mesh uncertainty to the corresponding node and/or surface and/or edge. 21. An arrangement according to claim 20, wherein the processing unit is further arranged to form the mesh. 22. An arrangement according to claim 20, further comprising output means arranged to present selected information related to the mesh and the associated mesh uncertainty.
2,600
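Claims 9, 11 and 13 of the record above tie the per-node mesh uncertainty to the stereo sub-models: fewer contributing images and a larger spread among the 3D estimates both mean a less reliable 3D coordinate. The claims do not fix a formula, so the following is only one plausible reading, a standard-error-like statistic over the sub-model positions:

```python
import math

def node_uncertainty(sub_model_points):
    """Uncertainty of one mesh node from its stereo sub-model estimates.

    `sub_model_points` is a list of (x, y, z) coordinates for the same node,
    one per geo-referenced image set (claim 9). The returned value grows
    with the spread of the estimates (claim 13) and shrinks as more images
    contribute (claim 11). Units follow the geographical coordinate system.
    """
    n = len(sub_model_points)
    if n < 2:
        return float("inf")  # a single estimate carries no spread information
    centroid = [sum(coord) / n for coord in zip(*sub_model_points)]
    # Sample variance of the estimates about their centroid ...
    var = sum(
        sum((p[i] - centroid[i]) ** 2 for i in range(3))
        for p in sub_model_points
    ) / (n - 1)
    # ... expressed as a standard deviation, shrunk by sqrt(n) like a
    # standard error, so extra images reduce the reported uncertainty.
    return math.sqrt(var) / math.sqrt(n)

# Three stereo sub-models that nearly agree give a small uncertainty:
print(node_uncertainty([(10.0, 5.0, 2.0), (10.1, 5.0, 2.1), (9.9, 4.9, 2.0)]))
```

The directional uncertainty of claim 14 would follow the same pattern with a per-axis spread (or a full covariance matrix) instead of a single scalar.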
10,623
10,623
15,167,913
2,649
The described features generally relate to receiving one or more positioning signals at a satellite terminal during installation of the satellite terminal at a customer premises, and providing position-based access to a satellite communications system based on a satellite terminal installation position determined from the received positioning signals. The determined installation position of the satellite terminal may then be employed for various network access techniques, such as providing access to the satellite communications system, providing position-based content, or restricting content via the satellite communications system based on the determined installation position. In some examples, the determined installation position of the satellite terminal may be used to approximate a propagation delay between the satellite terminal and various devices of the satellite communications system, such as a serving satellite and/or a serving gateway, to improve device synchronization and radio frequency spectrum resource utilization.
1. A method for use in a satellite communications system, comprising: receiving a plurality of positioning signals for a satellite terminal during installation of the satellite terminal at a customer premises; determining, at the satellite terminal, a first installation position of the satellite terminal based on the plurality of positioning signals; determining whether the satellite terminal has rights to access the satellite communications system at the customer premises based at least in part on the determined first installation position of the satellite terminal; and in response to determining the satellite terminal has rights to access the satellite communications system, permitting communications between the satellite terminal and a target satellite of the satellite communications system. 2. The method of claim 1, wherein the plurality of positioning signals are received from a plurality of positioning satellites. 3. The method of claim 2, wherein the plurality of positioning satellites are included in a global navigation satellite system constellation. 4. The method of claim 1, further comprising: receiving a plurality of subsequent positioning signals at the satellite terminal; determining, at the satellite terminal, a second installation position of the satellite terminal based on the plurality of subsequent positioning signals; and initiating a terminal setup procedure based on a difference between the determined second installation position of the satellite terminal and the determined first installation position of the satellite terminal exceeding a threshold. 5. The method of claim 1, wherein determining whether the satellite terminal has rights to access the satellite communications system comprises: identifying a geographic region of the satellite terminal based at least in part on the determined first installation position of the satellite terminal; and determining whether the satellite terminal has rights to access the satellite communications system based at least in part on the determined first installation position of the satellite terminal and the identified geographic region. 6. The method of claim 1, further comprising: restricting access to content based at least in part on the determined first installation position of the satellite terminal. 7. The method of claim 1, further comprising: receiving position-specific content at the satellite terminal based at least in part on the determined first installation position of the satellite terminal. 8. The method of claim 1, wherein permitting communications between the satellite terminal and the target satellite comprises: determining a signal timing offset based at least in part on the determined first installation position of the satellite terminal; and permitting communications based at least in part on the determined signal timing offset. 9. The method of claim 8, wherein determining the signal timing offset comprises: estimating a distance between the satellite terminal and the target satellite, the estimating based at least in part on the determined first installation position of the satellite terminal; and determining the signal timing offset based at least in part on the estimated distance. 10. The method of claim 8, wherein determining the signal timing offset comprises: determining, at the satellite terminal, a position of the target satellite; and determining the signal timing offset based at least in part on the determined position of the target satellite. 11. 
The method of claim 10, wherein determining the position of the target satellite comprises: determining the position of the target satellite based at least in part on one or more of a value stored at the satellite terminal, or a signal received from the target satellite, or a combination thereof. 12. The method of claim 8, further comprising: receiving information associated with a contention slot of the target satellite; wherein permitting communications between the satellite terminal and the target satellite comprises: determining a transmission timing based at least in part on the received information and the determined signal timing offset; and transmitting, based at least in part on the determined transmission timing, a network entry signal to the target satellite. 13. The method of claim 8, further comprising: receiving a subsequent positioning signal at the satellite terminal; and adjusting the signal timing offset based at least in part on the subsequent positioning signal. 14. The method of claim 8, further comprising: receiving an adjustment signal from the target satellite; and adjusting the signal timing offset based at least in part on the received adjustment signal. 15. A satellite terminal, comprising: a positioning signal receiver; a communication signal transceiver; a processor; memory in electronic communication with the processor; and instructions stored in the memory and executable by the processor to cause the satellite terminal to: receive a plurality of positioning signals at the positioning signal receiver during installation of the satellite terminal at a customer premises; determine a first installation position of the satellite terminal based on the plurality of positioning signals; determine whether the satellite terminal has rights to access a satellite communications system at the customer premises based at least in part on the determined first installation position of the satellite terminal; and permit communications between the satellite terminal and a target satellite of the satellite communications system. 16. The satellite terminal of claim 15, wherein the instructions are executable by the processor to cause the satellite terminal to: receive the plurality of positioning signals from a plurality of positioning satellites. 17. The satellite terminal of claim 15, wherein the instructions are executable by the processor to cause the satellite terminal to: receive one or more subsequent positioning signals at the satellite terminal; determine, at the satellite terminal, a second installation position of the satellite terminal based on the one or more subsequent positioning signals; and initiate a terminal setup procedure based on a difference between the determined second installation position of the satellite terminal and the determined first installation position of the satellite terminal exceeding a threshold. 18. The satellite terminal of claim 15, wherein the instructions that cause the satellite terminal to permit communications between the satellite terminal and the target satellite of the satellite communications system comprise instructions to: determine a signal timing offset based at least in part on the determined first installation position; and permit communications based at least in part on the determined signal timing offset. 19. 
A satellite terminal, comprising: means for receiving a plurality of positioning signals at the satellite terminal during installation of the satellite terminal at a customer premises; means for determining, at the satellite terminal, an installation position of the satellite terminal based on the plurality of positioning signals; means for determining whether the satellite terminal has rights to access a satellite communications system at the customer premises based at least in part on the determined installation position; and means for permitting communications between the satellite terminal and a target satellite of the satellite communications system. 20. The satellite terminal of claim 19, wherein the means for permitting communications between the satellite terminal and the target satellite comprises: means for determining a signal timing offset based at least in part on the determined installation position; and means for permitting communications based at least in part on the determined signal timing offset.
The described features generally relate to receiving one or more positioning signals at a satellite terminal during installation of the satellite terminal at a customer premises, and providing position-based access to a satellite communications system based on a satellite terminal installation position determined from the received positioning signals. The determined installation position of the satellite terminal may then be employed for various network access techniques, such as providing access to the satellite communications system, providing position-based content, or restricting content via the satellite communications system based on the determined installation position. In some examples, the determined installation position of the satellite terminal may be used to approximate a propagation delay between the satellite terminal and various devices of the satellite communications system, such as a serving satellite and/or a serving gateway, to improve device synchronization and radio frequency spectrum resource utilization.1. A method for use in a satellite communications system, comprising: receiving a plurality of positioning signals for a satellite terminal during installation of the satellite terminal at a customer premises; determining, at the satellite terminal, a first installation position of the satellite terminal based on the plurality of positioning signals; determining whether the satellite terminal has rights to access the satellite communications system at the customer premises based at least in part on the determined first installation position of the satellite terminal; and in response to determining the satellite terminal has rights to access the satellite communications system, permitting communications between the satellite terminal and a target satellite of the satellite communications system. 2. The method of claim 1, wherein the plurality of positioning signals are received from a plurality of positioning satellites. 3. The method of claim 2, wherein the plurality of positioning satellites are included in a global navigation satellite system constellation. 4. The method of claim 1, further comprising: receiving a plurality of subsequent positioning signals at the satellite terminal; determining, at the satellite terminal, a second installation position of the satellite terminal based on the plurality of subsequent positioning signals; and initiating a terminal setup procedure based on a difference between the determined second installation position of the satellite terminal and the determined first installation position of the satellite terminal exceeding a threshold. 5. The method of claim 1, wherein determining whether the satellite terminal has rights to access the satellite communications system comprises: identifying a geographic region of the satellite terminal based at least in part on the determined first installation position of the satellite terminal; and determining whether the satellite terminal has rights to access the satellite communications system based at least in part on the determined first installation position of the satellite terminal and the identified geographic region. 6. The method of claim 1, further comprising: restricting access to content based at least in part on the determined first installation position of the satellite terminal. 7. The method of claim 1, further comprising: receiving position-specific content at the satellite terminal based at least in part on the determined first installation position of the satellite terminal. 8. 
The method of claim 1, wherein permitting communications between the satellite terminal and the target satellite comprises: determining a signal timing offset based at least in part on the determined first installation position of the satellite terminal; and permitting communications based at least in part on the determined signal timing offset. 9. The method of claim 8, wherein determining the signal timing offset comprises: estimating a distance between the satellite terminal and the target satellite, the estimating based at least in part on the determined first installation position of the satellite terminal; and determining the signal timing offset based at least in part on the estimated distance. 10. The method of claim 8, wherein determining the signal timing offset comprises: determining, at the satellite terminal, a position of the target satellite; and determining the signal timing offset based at least in part on the determined position of the target satellite. 11. The method of claim 10, wherein determining the position of the target satellite comprises: determining the position of the target satellite based at least in part on one or more of a value stored at the satellite terminal, or a signal received from the target satellite, or a combination thereof. 12. The method of claim 8, further comprising: receiving information associated with a contention slot of the target satellite; wherein permitting communications between the satellite terminal and the target satellite comprises: determining a transmission timing based at least in part on the received information and the determined signal timing offset; and transmitting, based at least in part on the determined transmission timing, a network entry signal to the target satellite. 13. The method of claim 8, further comprising: receiving a subsequent positioning signal at the satellite terminal; and adjusting the signal timing offset based at least in part on the subsequent positioning signal. 14. The method of claim 8, further comprising: receiving an adjustment signal from the target satellite; and adjusting the signal timing offset based at least in part on the received adjustment signal. 15. A satellite terminal, comprising: a positioning signal receiver; a communication signal transceiver; a processor; memory in electronic communication with the processor; and instructions stored in the memory and executable by the processor to cause the satellite terminal to: receive a plurality of positioning signals at the positioning signal receiver during installation of the satellite terminal at a customer premises; determine a first installation position of the satellite terminal based on the plurality of positioning signals; determine whether the satellite terminal has rights to access a satellite communications system at the customer premises based at least in part on the determined first installation position of the satellite terminal; and permit communications between the satellite terminal and a target satellite of the satellite communications system. 16. The satellite terminal of claim 15, wherein the instructions are executable by the processor to cause the satellite terminal to: receive the plurality of positioning signals from a plurality of positioning satellites. 17. 
The satellite terminal of claim 15, wherein the instructions are executable by the processor to cause the satellite terminal to: receive one or more subsequent positioning signals at the satellite terminal; determine, at the satellite terminal, a second installation position of the satellite terminal based on the one or more subsequent positioning signals; and initiate a terminal setup procedure based on a difference between the determined second installation position of the satellite terminal and the determined first installation position of the satellite terminal exceeding a threshold. 18. The satellite terminal of claim 15, wherein the instructions that cause the satellite terminal to permit communications between the satellite terminal and the target satellite of the satellite communications system comprise instructions to: determine a signal timing offset based at least in part on the determined first installation position; and permit communications based at least in part on the determined signal timing offset. 19. A satellite terminal, comprising: means for receiving a plurality of positioning signals at the satellite terminal during installation of the satellite terminal at a customer premises; means for determining, at the satellite terminal, an installation position of the satellite terminal based on the plurality of positioning signals; means for determining whether the satellite terminal has rights to access a satellite communications system at the customer premises based at least in part on the determined installation position; and means for permitting communications between the satellite terminal and a target satellite of the satellite communications system. 20. The satellite terminal of claim 19, wherein the means for permitting communications between the satellite terminal and the target satellite comprises: means for determining a signal timing offset based at least in part on the determined installation position; and means for permitting communications based at least in part on the determined signal timing offset.
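Editor's note: claims 8 and 9 tie the signal timing offset to the distance implied by the fixed installation position; the one-way propagation delay is simply that distance divided by the speed of light. The Python sketch below only illustrates that arithmetic; the ECEF coordinates, function names, and units are hypothetical, not the patent's implementation.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def propagation_delay_s(terminal_ecef, satellite_ecef):
    """One-way delay implied by the installation position (claims 8-9):
    Euclidean distance between the two ECEF positions divided by c."""
    return math.dist(terminal_ecef, satellite_ecef) / C

# Hypothetical positions in meters: terminal on the equator, GEO satellite
# directly overhead at ~35,786 km altitude (both made up for illustration).
terminal = (6_371_000.0, 0.0, 0.0)
satellite = (42_157_000.0, 0.0, 0.0)

offset = propagation_delay_s(terminal, satellite)
print(f"signal timing offset ≈ {offset * 1e3:.1f} ms")  # ≈ 119.4 ms
```

For a geostationary satellite this yields the familiar ~120 ms one-way delay, which a terminal could use to pre-compensate its transmission timing into a contention slot, as in claim 12.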
2,600
10,624
10,624
14,752,128
2,655
A system matches text-to-speech (TTS) or other output to a quality of an input spoken utterance. The system uses trained models to detect a speech quality and generates an indicator of the speech quality. The speech quality may be determined from audio or non-audio data. The indicator is sent to downstream components of the system such as a command processor or TTS system. The output of the system is then determined using the indicator of speech quality, thus customizing an output of the system to the manner in which the utterance was spoken.
1. A computer-implemented method for processing a whispered utterance and responding in whispered synthesized speech, the method comprising: receiving input audio data comprising an input utterance; processing the input audio data with at least one trained model to determine that the input utterance was whispered; performing automatic speech recognition (ASR) on the input audio data to determine input text corresponding to the input utterance; performing natural language understanding processing on the input text to identify a query; determining content responding to the query based on the input utterance being whispered; and causing the content to be output. 2. The computer-implemented method of claim 1, further comprising: performing text-to-speech (TTS) processing on output text based on a speech quality indicator to generate output audio data, wherein the output audio data comprises synthesized speech responding to the query, wherein the synthesized speech is configured to sound like a whispered voice, wherein performing TTS processing further comprises: performing unit selection using a voice corpus to select a plurality of stored audio data segments of recorded whispered speech, the stored audio data segments corresponding to the output text; and concatenating the plurality of stored audio segments to determine the output audio data. 3. The computer-implemented method of claim 1, wherein the trained model comprises a support vector machine (SVM) configured to process audio feature vectors to determine that speech associated with the audio feature vectors has a resonance below a resonance threshold and has a volume below a volume threshold. 4. A computer-implemented method comprising: determining an input speech quality corresponding to input audio data; performing automatic speech recognition on the input audio data to determine input text; determining content based on the input text and the input speech quality; and causing the content to be output. 5. The computer-implemented method of claim 4, wherein determining the input speech quality comprises processing the input audio data using at least one trained classifier configured to classify the audio data as either corresponding to the speech quality or not corresponding to the speech quality. 6. The computer-implemented method of claim 4, further comprising: performing natural language understanding processing on the input text to identify a search query; and processing the query with a search engine to obtain a search result; wherein determining the content comprises selecting, based on the input speech quality, a portion of the search result as the content. 7. The computer-implemented method of claim 4, further comprising determining the input speech quality indicates that the audio data corresponds to whispered speech. 8. The computer-implemented method of claim 7, wherein determining the input speech quality comprises processing the input audio data with a trained classifier configured to process audio feature vectors to determine that the input audio data has a resonance below a resonance threshold and has a volume below a volume threshold. 9. The computer-implemented method of claim 8, further comprising processing input non-audio data to determine the input speech quality. 10. 
The computer-implemented method of claim 9, wherein processing the input non-audio data comprises: receiving light data from a light sensor; determining that the light data is below a light threshold; and inputting an indication that the light data is below the light threshold into the trained classifier. 11. The computer-implemented method of claim 7, further comprising: performing text-to-speech (TTS) processing on output text to generate output audio data, wherein the TTS processing is based on the input speech quality, and wherein performing TTS processing further comprises: performing unit selection using a voice corpus to select a plurality of stored audio data segments of recorded whispered speech, the stored audio data segments corresponding to the output text; and concatenating the plurality of stored audio segments to determine the output audio data, wherein the output audio data corresponds to an output utterance that responds to the query in a whispered voice. 12. The computer-implemented method of claim 11, further comprising selecting the output text from a plurality of prepared text samples based on the speech quality. 13. A computing system comprising: at least one processor; a memory including instructions operable to be executed by the at least one processor to cause the system to perform a set of actions comprising: determining an input speech quality corresponding to input audio data; performing automatic speech recognition on the input audio data to determine input text; determining content based on the input text and the input speech quality; and causing the content to be output. 14. The computing system of claim 13, wherein determining the input speech quality comprises processing the input audio data using at least one trained classifier configured to classify the audio data as either corresponding to the speech quality or not corresponding to the speech quality. 15. The computing system of claim 13, the set of actions further comprising: performing natural language understanding processing on the input text to identify a search query; and processing the query with a search engine to obtain a search result; wherein determining the content comprises selecting, based on the input speech quality, a portion of the search result as the content. 16. The computing system of claim 13, the set of actions further comprising determining the input speech quality indicates that the audio data corresponds to whispered speech. 17. The computing system of claim 16, wherein determining the input speech quality comprises processing the input audio data with a trained classifier configured to process audio feature vectors to determine that the input audio data has a resonance below a resonance threshold and has a volume below a volume threshold. 18. The computing system of claim 17, the set of actions further comprising processing input non-audio data to determine the input speech quality. 19. The computing system of claim 18, wherein processing the input non-audio data comprises: receiving light data from a light sensor; determining that the light data is below a light threshold; and inputting an indication that the light data is below the light threshold into the trained classifier. 20. 
The computing system of claim 16, the set of actions further comprising: performing text-to-speech (TTS) processing on output text to generate output audio data, wherein the TTS processing is based on the input speech quality, and wherein performing TTS processing further comprises: performing unit selection using a voice corpus to select a plurality of stored audio data segments of recorded whispered speech, the stored audio data segments corresponding to the output text; and concatenating the plurality of stored audio segments to determine the output audio data, wherein the output audio data corresponds to an output utterance that responds to the query in a whispered voice. 21. The computing system of claim 20, the set of actions further comprising selecting the output text from a plurality of prepared text samples based on the speech quality. 22. The computer-implemented method of claim 4, further comprising: performing natural language understanding processing on the input text to identify a query; determining first content and second content that are responsive to the query; and selecting the first content as the content for output based on the input speech quality. 23. The computer-implemented method of claim 4, further comprising: performing natural language understanding processing on the input text to determine the input text corresponds to a request to play music; determining first music content and second music content that are responsive to the request; determining that the first music content includes an audio quality corresponding to the input speech quality; and selecting the first music content as the content for output.
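Editor's note: claims 3, 8, and 10 describe a trained classifier (e.g., an SVM) over audio feature vectors, keyed to a resonance threshold and a volume threshold, optionally fused with a non-audio light-sensor reading. The sketch below is a rule-based stand-in for that trained model, showing only the two thresholds; every threshold, feature definition, and name is invented for illustration.

```python
import numpy as np

VOLUME_THRESHOLD = 0.02      # hypothetical RMS level (full scale = 1.0)
RESONANCE_THRESHOLD = 5.0    # hypothetical low-band energy level
LIGHT_THRESHOLD = 10.0       # hypothetical lux reading for a dark room

def audio_features(samples, rate):
    """Crude stand-ins for the claimed feature vector: overall RMS volume and
    low-frequency 'resonance' energy (whispering lacks voiced low-band energy)."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    resonance = float(spectrum[(freqs >= 80) & (freqs <= 300)].sum())
    return rms, resonance

def looks_whispered(samples, rate, light_level=None):
    """Threshold rule in place of the claimed trained classifier (claims 3, 8):
    whispered iff both resonance and volume fall below their thresholds. A dark
    light reading (claim 10) is fused in as a supporting non-audio feature."""
    rms, resonance = audio_features(samples, rate)
    verdict = rms < VOLUME_THRESHOLD and resonance < RESONANCE_THRESHOLD
    if light_level is not None and light_level < LIGHT_THRESHOLD:
        # Dark room: relax the volume requirement slightly (illustrative fusion).
        verdict = verdict or (rms < 2 * VOLUME_THRESHOLD and resonance < RESONANCE_THRESHOLD)
    return verdict

# Example: 0.5 s of quiet, unvoiced noise should classify as whispered.
rate = 16_000
quiet = 0.005 * np.random.default_rng(0).standard_normal(rate // 2)
print(looks_whispered(quiet, rate, light_level=3.0))  # True
```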
A system matches text-to-speech (TTS) or other output to a quality of an input spoken utterance. The system uses trained models to detect a speech quality and generates an indicator of the speech quality. The speech quality may be determined from audio or non-audio data. The indicator is sent to downstream components of the system such as a command processor or TTS system. The output of the system is then determined using the indicator of speech quality, thus customizing an output of the system to the manner in which the utterance was spoken.1. A computer-implemented method for processing a whispered utterance and responding in whispered synthesized speech, the method comprising: receiving input audio data comprising an input utterance; processing the input audio data with at least one trained model to determine that the input utterance was whispered; performing automatic speech recognition (ASR) on the input audio data to determine input text corresponding to the input utterance; performing natural language understanding processing on the input text to identify a query; determining content responding to the query based on the input utterance being whispered; and causing the content to be output. 2. The computer-implemented method of claim 1, further comprising: performing text-to-speech (TTS) processing on output text based on a speech quality indicator to generate output audio data, wherein the output audio data comprises synthesized speech responding to the query, wherein the synthesized speech is configured to sound like a whispered voice, wherein performing TTS processing further comprises: performing unit selection using a voice corpus to select a plurality of stored audio data segments of recorded whispered speech, the stored audio data segments corresponding to the output text; and concatenating the plurality of stored audio segments to determine the output audio data. 3. The computer-implemented method of claim 1, wherein the trained model comprises a support vector machine (SVM) configured to process audio feature vectors to determine that speech associated with the audio feature vectors has a resonance below a resonance threshold and has a volume below a volume threshold. 4. A computer-implemented method comprising: determining an input speech quality corresponding to input audio data; performing automatic speech recognition on the input audio data to determine input text; determining content based on the input text and the input speech quality; and causing the content to be output. 5. The computer-implemented method of claim 4, wherein determining the input speech quality comprises processing the input audio data using at least one trained classifier configured to classify the audio data as either corresponding to the speech quality or not corresponding to the speech quality. 6. The computer-implemented method of claim 4, further comprising: performing natural language understanding processing on the input text to identify a search query; and processing the query with a search engine to obtain a search result; wherein determining the content comprises selecting, based on the input speech quality, a portion of the search result as the content. 7. The computer-implemented method of claim 4, further comprising determining the input speech quality indicates that the audio data corresponds to whispered speech. 8. 
The computer-implemented method of claim 7, wherein determining the input speech quality comprises processing the input audio data with a trained classifier configured to process audio feature vectors to determine that the input audio data has a resonance below a resonance threshold and has a volume below a volume threshold. 9. The computer-implemented method of claim 8, further comprising processing input non-audio data to determine the input speech quality. 10. The computer-implemented method of claim 9, wherein processing the input non-audio data comprises: receiving light data from a light sensor; determining that the light data is below a light threshold; and inputting an indication that the light data is below the light threshold into the trained classifier. 11. The computer-implemented method of claim 7, further comprising: performing text-to-speech (TTS) processing on output text to generate output audio data, wherein the TTS processing is based on the input speech quality, and wherein performing TTS processing further comprises: performing unit selection using a voice corpus to select a plurality of stored audio data segments of recorded whispered speech, the stored audio data segments corresponding to the output text; and concatenating the plurality of stored audio segments to determine the output audio data, wherein the output audio data corresponds to an output utterance that responds to the query in a whispered voice. 12. The computer-implemented method of claim 11, further comprising selecting the output text from a plurality of prepared text samples based on the speech quality. 13. A computing system comprising: at least one processor; a memory including instructions operable to be executed by the at least one processor to cause the system to perform a set of actions comprising: determining an input speech quality corresponding to input audio data; performing automatic speech recognition on the input audio data to determine input text; determining content based on the input text and the input speech quality; and causing the content to be output. 14. The computing system of claim 13, wherein determining the input speech quality comprises processing the input audio data using at least one trained classifier configured to classify the audio data as either corresponding to the speech quality or not corresponding to the speech quality. 15. The computing system of claim 13, the set of actions further comprising: performing natural language understanding processing on the input text to identify a search query; and processing the query with a search engine to obtain a search result; wherein determining the content comprises selecting, based on the input speech quality, a portion of the search result as the content. 16. The computing system of claim 13, the set of actions further comprising determining the input speech quality indicates that the audio data corresponds to whispered speech. 17. The computing system of claim 16, wherein determining the input speech quality comprises processing the input audio data with a trained classifier configured to process audio feature vectors to determine that the input audio data has a resonance below a resonance threshold and has a volume below a volume threshold. 18. The computing system of claim 17, the set of actions further comprising processing input non-audio data to determine the input speech quality. 19. 
The computing system of claim 18, wherein processing the input non-audio data comprises: receiving light data from a light sensor; determining that the light data is below a light threshold; and inputting an indication that the light data is below the light threshold into the trained classifier. 20. The computing system of claim 16, the set of actions further comprising: performing text-to-speech (TTS) processing on output text to generate output audio data, wherein the TTS processing is based on the input speech quality, and wherein performing TTS processing further comprises: performing unit selection using a voice corpus to select a plurality of stored audio data segments of recorded whispered speech, the stored audio data segments corresponding to the output text; and concatenating the plurality of stored audio segments to determine the output audio data, wherein the output audio data corresponds to an output utterance that responds to the query in a whispered voice. 21. The computing system of claim 20, the set of actions further comprising selecting the output text from a plurality of prepared text samples based on the speech quality. 22. The computer-implemented method of claim 4, further comprising: performing natural language understanding processing on the input text to identify a query; determining first content and second content that are responsive to the query; and selecting the first content as the content for output based on the input speech quality. 23. The computer-implemented method of claim 4, further comprising: performing natural language understanding processing on the input text to determine the input text corresponds to a request to play music; determining first music content and second music content that are responsive to the request; determining that the first music content includes an audio quality corresponding to the input speech quality; and selecting the first music content as the content for output.
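Editor's note: claims 2, 11, and 20 synthesize the whispered response by unit selection over a corpus of recorded whispered speech followed by concatenation. A production unit-selection engine ranks candidate units by target and join costs; the toy sketch below (hypothetical corpus, names, and silent placeholder audio) shows only the select-then-concatenate shape.

```python
import numpy as np

# Hypothetical corpus: unit label -> recorded whispered-speech samples.
# Zero arrays stand in for real recordings here.
WHISPER_CORPUS = {
    "HH": np.zeros(160), "AH": np.zeros(320), "L": np.zeros(240), "OW": np.zeros(400),
}

def unit_select(units, corpus):
    """Pick one stored whispered segment per requested unit. A real engine
    would score candidates with target/join costs; this just looks them up."""
    return [corpus[u] for u in units]

def synthesize_whispered(units, corpus=WHISPER_CORPUS):
    """Concatenate the selected whispered segments into output audio data."""
    return np.concatenate(unit_select(units, corpus))

output_audio = synthesize_whispered(["HH", "AH", "L", "OW"])
print(output_audio.shape)  # (1120,) samples of (placeholder) whispered output
```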
2,600
10,625
10,625
14,523,828
2,646
The technology disclosed relates to determining positional information about an object of interest. In particular, it includes conducting scanning of a field of interest with an emission from a transmission area according to an ordered scan pattern. The emission can be received to form a signal based upon at least one salient property (e.g., intensity, amplitude, frequency, polarization, phase, or other detectable feature) of the emission varying with time at the object of interest. Synchronization information about the ordered scan pattern can be derived from a source (e.g., a second signal broadcast separately, a social media share, or combinations thereof). A correspondence between at least one characteristic of the signal and the synchronization information can be established. Positional information can be determined based at least in part upon the correspondence.
1. A method of locating a mobile device relative to one or more emitter arrays, the method including: receiving a first swept first beam-formed signal in a mobile device, wherein the first beam-formed signal is associated with time related data that varies according to the sweep of the first beam-formed signal; trapping at least a first received value of the time related data corresponding to receipt of a maximum amplitude of the first beam-formed signal during the sweep of the first beam-formed signal; and returning the trapped first time related data value. 2. The method of claim 1, applied to 2D triangulation, further including: receiving a second swept beam-formed signal in the mobile device, wherein the second beam-formed signal is associated with time related data that varies according to the sweep of a second beam-formed signal; trapping at least a second received value of the time related data corresponding to receipt of a maximum amplitude of the second beam-formed signal during the sweep of the second beam-formed signal; and returning the trapped second time related data value for triangulation of a position at which the first and second beam-formed signals were received. 3. The method of claim 2, further including distinguishing the first and second beam-formed signals based on different respective frequencies. 4. The method of claim 2, further including distinguishing the first and second beam-formed signals based on different respective polarizations. 5. The method of claim 2, further including distinguishing the first and second beam-formed signals based on amplitude modulation. 6. The method of claim 2, wherein: the first and second beam-formed signals have different mixes of frequencies; the mixes of frequencies produce first and second beats of amplitude due to respective frequency differences; and the beats of amplitude encode time related data. 7. The method of claim 2, wherein the time related data comprises a series of time stamps. 8. The method of claim 2, wherein the time related data comprises a clock signal. 9. The method of claim 2, wherein the time related data comprises a periodic frequency chirp. 10. The method of claim 2, further including transmitting the time related data via a separate channel. 11. The method of claim 2, further including transmitting the time related data via a sub channel. 12. The method of claim 2, applied to 3D triangulation, wherein sweep patterns of the first and second beam-formed signals define respective planes that are not parallel. 13. The method of claim 2, wherein combined sweep patterns of the first and second beam-formed signals define a toroidal emission pattern. 14. The method of claim 2, further including receiving a calculated position based at least in part on the returned first and second received values of the time related data. 15. The method of claim 2, wherein the first and second beam-formed signals are swept in opposing directions. 16. The method of claim 2, further including receiving geo-relevant information within an indoor facility responsive to returning the trapped first and second time related data values. 17. 
A method of locating a mobile device relative to one or more emitter arrays, the method including: forming a first swept first beam-formed signal directed towards a mobile device, wherein the first beam-formed signal is associated with time related data that varies according to the sweep of the first beam-formed signal; receiving from the mobile device at least a first received value of the time related data corresponding to receipt of a maximum amplitude at the mobile device of the first beam-formed signal during the sweep of the first beam-formed signal; and calculating an angle of arrival between emitters used to form the first swept beam-formed signal and the mobile device. 18. The method of claim 17, applied to 2D triangulation, further including: forming a second swept second beam-formed signal directed towards the mobile device using a second set of emitters separated from a first set of emitters used to form the first swept first beam-formed signal, wherein the second beam-formed signal is associated with time related data that varies according to the sweep of the second beam-formed signal; receiving from the mobile device at least a second received value of the time related data corresponding to receipt of a maximum amplitude at the mobile device of the second beam-formed signal during the sweep of the second beam-formed signal; and triangulating a position of the mobile device at which the first and second beam-formed signals were received. 19. The method of claim 18, further including tracking positions of the mobile device over a period of time. 20. The method of claim 18, further including determining based upon positions of the mobile device and data describing points of interest, a set of points of interest visited by a user of the mobile device. 21. The method of claim 18, further including providing to third parties at the points of interest, a set of identifiers associated with users of mobile devices determined to have visited the points of interest of the third parties. 22. A method of locating a mobile device relative to one or more emitter arrays, the method including: receiving a first swept first beam-formed signal in a mobile device, wherein the first beam-formed signal is associated with time related data that varies according to the sweep of the first beam-formed signal; receiving a second swept beam-formed signal in the mobile device, wherein the second beam-formed signal is associated with time related data that varies according to the sweep of the second beam-formed signal; trapping at least a first received value of the time related data corresponding to receipt of a maximum amplitude of the first beam-formed signal during the sweep of the first beam-formed signal; trapping at least a second received value of the time related data corresponding to receipt of a maximum amplitude of the second beam-formed signal during the sweep of the beam-formed signal; receiving data describing source location and timing of each of the first and second beam-formed signals; and calculating a position of the mobile device at least relative to the source locations. 23. 
A method of distinguishing among mobile devices in a three-dimensional (3D) sensory space, the method including: transmitting from a pair of phased arrays in known locations swept beam-formed signals, wherein each beam-formed signal is associated with respective time related data that varies during sweeping of the respective beam-formed signals; receiving from a mobile device one or more reports of respective time related data values captured when the respective beam-formed signals were directed towards the mobile device; triangulating a location of the mobile device based on the reports of respective time related data values; in a three-dimensional (3D) field of view, using the triangulated location of the mobile device to distinguish the mobile device from other objects with similar appearances in the field of view; and interpreting gestures associated with the distinguished mobile device or user of the mobile device. 24. A method of locating a mobile device relative to one or more emitter arrays, the method including: emitting two or more mixed signals from a first pair of two or more emitters to produce a first scan that sweeps a field of interest, wherein: each mixed signal is a sum of at least two different frequencies that beat against each other and produce a beat characteristic; the emission from the first pair of emitters causes a first combined signal to sweep the field of interest; and the first combined signal is associated with time related data that varies according to a current direction of the sweep; receiving from a mobile device a first value of the time related data corresponding to a time when the direction of the sweep by the first combined signal was directed at the mobile device; and calculating a first angle relative to the first pair of emitters and the mobile device based at least in part on the first value. 25. The method of claim 24, applied to 2D triangulation, further including: emitting two or more mixed signals from a second pair of two or more emitters to produce a second scan that sweeps a field of interest, wherein: each mixed signal is a sum of at least two different frequencies that beat against each other and produce a beat characteristic; the emission from the second pair of emitters causes a second combined signal to sweep the field of interest; and the second combined signal is associated with time related data that varies according to a current direction of the sweep; receiving from the mobile device a second value of the time related data corresponding to a time when the direction of the sweep by the second combined signal was directed at the mobile device; and calculating a second angle relative to the second pair of emitters and the mobile device based at least in part on the second value for triangulation of a position at which the first and second combined signals were received. 26. The method of claim 24, wherein the combined signals are pulsed over a variety of phase angles. 27. The method of claim 24, further including applying a frequency filter to distinguish the two or more mixed signals.
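Editor's note: claims 1, 2, 17, and 18 reduce localization to two steps: map the trapped time-of-maximum back to a beam direction using the known sweep timing, then intersect the bearings from two separated emitter arrays. Below is a minimal 2D sketch of that geometry, assuming a linear sweep at constant angular rate and known emitter positions; every name and parameter is illustrative, not the patent's implementation.

```python
import math

def bearing_from_trapped_time(t_trapped, t_sweep_start, sweep_rate, start_angle):
    """Map the trapped time-of-maximum back to a beam direction (claims 1, 17),
    assuming the beam sweeps linearly at sweep_rate rad/s from start_angle."""
    return start_angle + sweep_rate * (t_trapped - t_sweep_start)

def triangulate_2d(p1, bearing1, p2, bearing2):
    """Intersect the two bearing rays from emitter arrays at p1 and p2
    (claims 2, 18): solve p1 + s*d1 = p2 + t*d2 for the crossing point."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    det = -d1[0] * d2[1] + d1[1] * d2[0]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique position")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    s = (-rx * d2[1] + ry * d2[0]) / det
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])

# Hypothetical setup: emitters 10 m apart, sweeps of pi rad/s starting at t = 1.0 s.
a1 = bearing_from_trapped_time(1.25, 1.0, math.pi, 0.0)   # pi/4
a2 = bearing_from_trapped_time(1.75, 1.0, math.pi, 0.0)   # 3*pi/4
print(triangulate_2d((0.0, 0.0), a1, (10.0, 0.0), a2))    # ≈ (5.0, 5.0)
```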
The technology disclosed relates to determining positional information about an object of interest. In particular, it includes conducting scanning of a field of interest with an emission from a transmission area according to an ordered scan pattern. The emission can be received to form a signal based upon at least one salient property (e.g., intensity, amplitude, frequency, polarization, phase, or other detectable feature) of the emission varying with time at the object of interest. Synchronization information about the ordered scan pattern can be derived from a source (e.g., a second signal broadcast separately, a social media share, or combinations thereof). A correspondence between at least one characteristic of the signal and the synchronization information can be established. Positional information can be determined based at least in part upon the correspondence.1. A method of locating a mobile device relative to one or more emitter arrays, the method including: receiving a first swept first beam-formed signal in a mobile device, wherein the first beam-formed signal is associated with time related data that varies according to the sweep of the first beam-formed signal; trapping at least a first received value of the time related data corresponding to receipt of a maximum amplitude of the first beam-formed signal during the sweep of the first beam-formed signal; and returning the trapped first time related data value. 2. The method of claim 1, applied to 2D triangulation, further including: receiving a second swept beam-formed signal in the mobile device, wherein the second beam-formed signal is associated with time related data that varies according to the sweep of a second beam-formed signal; trapping at least a second received value of the time related data corresponding to receipt of a maximum amplitude of the second beam-formed signal during the sweep of the second beam-formed signal; and returning the trapped second time related data value for triangulation of a position at which the first and second beam-formed signals were received. 3. The method of claim 2, further including distinguishing the first and second beam-formed signals based on different respective frequencies. 4. The method of claim 2, further including distinguishing the first and second beam-formed signals based on different respective polarizations. 5. The method of claim 2, further including distinguishing the first and second beam-formed signals based on amplitude modulation. 6. The method of claim 2, wherein: the first and second beam-formed signals have different mixes of frequencies; the mixes of frequencies produce first and second beats of amplitude due to respective frequency differences; and the beats of amplitude encode time related data. 7. The method of claim 2, wherein the time related data comprises a series of time stamps. 8. The method of claim 2, wherein the time related data comprises a clock signal. 9. The method of claim 2, wherein the time related data comprises a periodic frequency chirp. 10. The method of claim 2, further including transmitting the time related data via a separate channel. 11. The method of claim 2, further including transmitting the time related data via a sub channel. 12. The method of claim 2, applied to 3D triangulation, wherein sweep patterns of the first and second beam-formed signals define respective planes that are not parallel. 13. 
The method of claim 2, wherein combined sweep patterns of the first and second beam-formed signals define a toroidal emission pattern. 14. The method of claim 2, further including receiving a calculated position based at least in part on the returned first and second received values of the time related data. 15. The method of claim 2, wherein the first and second beam-formed signals are swept in opposing directions. 16. The method of claim 2, further including receiving geo-relevant information within an indoor facility responsive to returning the trapped first and second time related data values. 17. A method of locating a mobile device relative to one or more emitter arrays, the method including: forming a first swept first beam-formed signal directed towards a mobile device, wherein the first beam-formed signal is associated with time related data that varies according to the sweep of the first beam-formed signal; receiving from the mobile device at least a first received value of the time related data corresponding to receipt of a maximum amplitude at the mobile device of the first beam-formed signal during the sweep of the first beam-formed signal; and calculating an angle of arrival between emitters used to form the first swept beam-formed signal and the mobile device. 18. The method of claim 17, applied to 2D triangulation, further including: forming a second swept second beam-formed signal directed towards the mobile device using a second set of emitters separated from a first set of emitters used to form the first swept first beam-formed signal, wherein the second beam-formed signal is associated with time related data that varies according to the sweep of the second beam-formed signal; receiving from the mobile device at least a second received value of the time related data corresponding to receipt of a maximum amplitude at the mobile device of the second beam-formed signal during the sweep of the second beam-formed signal; and triangulating a position of the mobile device at which the first and second beam-formed signals were received. 19. The method of claim 18, further including tracking positions of the mobile device over a period of time. 20. The method of claim 18, further including determining based upon positions of the mobile device and data describing points of interest, a set of points of interest visited by a user of the mobile device. 21. The method of claim 18, further including providing to third parties at the points of interest, a set of identifiers associated with users of mobile devices determined to have visited the points of interest of the third parties. 22. 
A method of locating a mobile device relative to one or more emitter arrays, the method including: receiving a first swept first beam-formed signal in a mobile device, wherein the first beam-formed signal is associated with time related data that varies according to the sweep of the first beam-formed signal; receiving a second swept beam-formed signal in the mobile device, wherein the second beam-formed signal is associated with time related data that varies according to the sweep of the second beam-formed signal; trapping at least a first received value of the time related data corresponding to receipt of a maximum amplitude of the first beam-formed signal during the sweep of the first beam-formed signal; trapping at least a second received value of the time related data corresponding to receipt of a maximum amplitude of the second beam-formed signal during the sweep of the beam-formed signal; receiving data describing source location and timing of each of the first and second beam-formed signals; and calculating a position of the mobile device at least relative to the source locations. 23. A method of distinguishing among mobile devices in a three-dimensional (3D) sensory space, the method including: transmitting from a pair of phased arrays in known locations swept beam-formed signals, wherein each beam-formed signal is associated with respective time related data that varies during sweeping of the respective beam-formed signals; receiving from a mobile device one or more reports of respective time related data values captured when the respective beam-formed signals were directed towards the mobile device; triangulating a location of the mobile device based on the reports of respective time related data values; in a three-dimensional (3D) field of view, using the triangulated location of the mobile device to distinguish the mobile device from other objects with similar appearances in the field of view; and interpreting gestures associated with the distinguished mobile device or user of the mobile device. 24. A method of locating a mobile device relative to one or more emitter arrays, the method including: emitting two or more mixed signals from a first pair of two or more emitters to produce a first scan that sweeps a field of interest, wherein: each mixed signal is a sum of at least two different frequencies that beat against each other and produce a beat characteristic; the emission from the first pair of emitters causes a first combined signal to sweep the field of interest; and the first combined signal is associated with time related data that varies according to a current direction of the sweep; receiving from a mobile device a first value of the time related data corresponding to a time when the direction of the sweep by the first combined signal was directed at the mobile device; and calculating a first angle relative to the first pair of emitters and the mobile device based at least in part on the first value. 25. 
The method of claim 24, applied to 2D triangulation, further including: emitting two or more mixed signals from a second pair of two or more emitters to produce a second scan that sweeps a field of interest, wherein: each mixed signal is a sum of at least two different frequencies that beat against each other and produce a beat characteristic; the emission from the second pair of emitters causes a second combined signal to sweep the field of interest; and the second combined signal is associated with time related data that varies according to a current direction of the sweep; receiving from the mobile device a second value of the time related data corresponding to a time when the direction of the sweep by the second combined signal was directed at the mobile device; and calculating a second angle relative to the second pair of emitters and the mobile device based at least in part on the second value for triangulation of a position at which the first and second combined signals were received. 26. The method of claim 24, wherein the combined signals are pulsed over a variety of phase angles. 27. The method of claim 24, further including applying a frequency filter to distinguish the two or more mixed signals.
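Editor's note: claims 24-27 rely on the identity that a sum of two tones beats at the difference frequency: sin(2πf₁t) + sin(2πf₂t) = 2 cos(π(f₁−f₂)t) sin(π(f₁+f₂)t), so the amplitude envelope pulses at |f₁−f₂| and can carry the time related data. A small numerical sketch follows; the tone frequencies, sample rate, and envelope method are arbitrary choices for illustration.

```python
import numpy as np

def mixed_signal(f1, f2, t):
    """Sum of two tones; algebraically 2*cos(pi*(f1-f2)*t)*sin(pi*(f1+f2)*t),
    so the amplitude envelope beats at |f1 - f2| (claims 24-27)."""
    return np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

rate = 48_000                              # arbitrary sample rate
t = np.arange(rate) / rate                 # one second of time stamps
sig = mixed_signal(1_000.0, 1_004.0, t)    # hypothetical emitter tones, 4 Hz beat

# Crude envelope: rectify and low-pass with a 10 ms moving average.
kernel = np.ones(480) / 480
envelope = np.convolve(np.abs(sig), kernel, mode="same")
beats = ((envelope[:-1] < 0.1) & (envelope[1:] >= 0.1)).sum()
print(beats)  # ≈ 4 envelope nulls per second, i.e. the 4 Hz beat characteristic
```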
2,600
10,626
10,626
15,802,563
2,625
A touch sensitive processing apparatus is used to detect at least one object approximating or touching a touch screen and is configured for iteratively executing the following steps: having a driving circuit simultaneously sending a driving signal to two or more first electrodes, wherein at least one of the two or more first electrodes intersects with multiple second electrodes to form multiple intersection areas, the other of the two or more first electrodes intersects with multiple third electrodes to form multiple intersection areas; and having a sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional sensing information and having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information.
1. A touch sensitive processing apparatus configured to connect to a touch screen and used to detect at least one object approximating or touching the touch screen, wherein the touch screen comprises a plurality of first electrodes being parallel to a first axis, a plurality of second electrodes being parallel to a second axis, and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas, the touch sensitive processing apparatus comprising: a driving circuit, connecting to the first electrodes, respectively; a sensing circuit, connecting to the second electrodes and the third electrodes, respectively; and a processor, configured to connect to the driving circuit and the sensing circuit, the processor configured for iteratively executing the following steps: having the driving circuit simultaneously sending a driving signal to two or more first electrodes, wherein at least one of the two or more first electrodes intersects with the second electrodes to form the multiple intersection areas, the other of the two or more first electrodes intersects with the third electrodes to form the multiple intersection areas; and having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 2. The touch sensitive processing apparatus of claim 1, wherein the processor is further configured for: executing iteratively the following steps when all of the first electrodes intersecting with the second electrodes to form multiple intersection areas have been sent the driving signal: having the driving circuit sending the driving signal to one of the electrodes that has not been sent the driving signal; and having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 3. The touch sensitive processing apparatus of claim 2, wherein the processor is further configured for: piecing together all of the one-dimensional sensing information, with respect to the order of the first electrodes, into a two-dimensional sensing information when all of the first electrodes have been sent the driving signal; and detecting at least one object approximating or touching the touch screen according to the two-dimensional sensing information. 4. The touch sensitive processing apparatus of claim 1, wherein the two or more first electrodes used for sending the driving signal in the iteratively executed steps are selected in order according to their positions on the touch screen. 5. The touch sensitive processing apparatus of claim 1, wherein the two or more first electrodes used for sending the driving signal in the iteratively executed steps are selected at random. 6. 
The touch sensitive processing apparatus of claim 1, wherein the processor is further configured for: executing the following steps before executing the iterative steps: having the driving circuit sending the driving signal to all of the first electrodes; having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if at least one object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if at least one object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and executing the iterative steps when the at least one object approximating or touching at least one of the second electrodes and at least one of the third electrodes is determined. 7. The touch sensitive processing apparatus of claim 1, wherein intervals between each two of the first electrodes are equivalent. 8. The touch sensitive processing apparatus of claim 1, wherein a number of the second electrodes equals a number of the third electrodes, an axial direction of each of the second electrodes is the same as that of one of the third electrodes. 9. The touch sensitive processing apparatus of claim 1, wherein the first axis is parallel to an axial direction of pixel-refreshing of the touch screen. 10. The touch sensitive processing apparatus of claim 1, wherein the second electrodes connect to the touch sensitive processing apparatus via a first side of the touch screen, the third electrodes connect to the touch sensitive processing apparatus via a second side of the touch screen, wherein the first side is parallel to the second side. 11. The touch sensitive processing apparatus of claim 1, wherein the touch screen is an in-cell touch LCD screen, the first electrodes are the common electrodes of the in-cell touch LCD screen. 12. 
A touch sensitive processing method adaptive to a touch sensitive processing apparatus configured to connect to a touch screen and used to detect at least one object approximating or touching the touch screen, wherein the touch screen comprises a plurality of first electrodes being parallel to a first axis, a plurality of second electrodes being parallel to a second axis, and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas, the touch sensitive processing method comprising: executing iteratively the following steps: having a driving circuit simultaneously sending a driving signal to two or more first electrodes, wherein at least one of the two or more first electrodes intersects with the second electrodes to form the multiple intersection areas, the other of the two or more first electrodes intersects with the third electrodes to form the multiple intersection areas; and having a sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 13. The touch sensitive processing method of claim 12, further comprising: executing iteratively the following steps when all of the first electrodes intersecting with the second electrodes to form multiple intersection areas have been sent the driving signal: having the driving circuit sending the driving signal to one of the electrodes that has not been sent the driving signal; and having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 14. The touch sensitive processing method of claim 13, further comprising: piecing together all of the one-dimensional sensing information, with respect to the order of the first electrodes, into a two-dimensional sensing information when all of the first electrodes have been sent the driving signal; and detecting at least one object approximating or touching the touch screen according to the two-dimensional sensing information. 15. The touch sensitive processing method of claim 12, wherein the two or more first electrodes used for sending the driving signal in the iteratively executed steps are selected in order according to their positions on the touch screen. 16. The touch sensitive processing method of claim 12, wherein the two or more first electrodes used for sending the driving signal in the iteratively executed steps are selected at random. 17. 
The touch sensitive processing method of claim 12, further comprising: executing the following steps before executing the iterative steps: having the driving circuit sending the driving signal to all of the first electrodes; having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if at least one object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if at least one object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and executing the iterative steps when the at least one object approximating or touching at least one of the second electrodes and at least one of the third electrodes is determined. 18. The touch sensitive processing method of claim 12, wherein intervals between each two of the first electrodes are equivalent. 19. The touch sensitive processing method of claim 12, wherein a number of the second electrodes equals a number of the third electrodes, an axial direction of each of the second electrodes is the same as that of one of the third electrodes. 20. The touch sensitive processing method of claim 12, wherein the first axis is parallel to an axial direction of pixel-refreshing of the touch screen. 21. The touch sensitive processing method of claim 12, wherein the second electrodes connect to the touch sensitive processing apparatus via a first side of the touch screen, the third electrodes connect to the touch sensitive processing apparatus via a second side of the touch screen, wherein the first side is parallel to the second side. 22. The touch sensitive processing method of claim 12, wherein the touch screen is an in-cell touch LCD screen, the first electrodes are the common electrodes of the in-cell touch LCD screen. 23. 
An electronic apparatus used to detect at least one object approximating or touching a touch screen, the electronic apparatus comprising: the touch screen, comprising: a plurality of first electrodes being parallel to a first axis; a plurality of second electrodes being parallel to a second axis; and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas; and a touch sensitive processing apparatus, comprising: a driving circuit, connecting to the first electrodes, respectively; a sensing circuit, connecting to the second electrodes and the third electrodes, respectively; and a processor, configured to connect to the driving circuit and the sensing circuit, the processor configured for iteratively executing the following steps: having the driving circuit simultaneously sending a driving signal to two or more first electrodes, wherein at least one of the two or more first electrodes intersects with the second electrodes to form the multiple intersection areas, the other of the two or more first electrodes intersects with the third electrodes to form the multiple intersection areas; and having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 24. A touch sensitive processing apparatus configured to connect to a touch screen and used to detect at least one object approximating or touching the touch screen, wherein the touch screen comprises a plurality of first electrodes being parallel to a first axis, a plurality of second electrodes being parallel to a second axis, and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas, the touch sensitive processing apparatus comprising: a driving circuit, connecting to the first electrodes, respectively; a sensing circuit, connecting to the second electrodes and the third electrodes, respectively; and a processor, configured to connect to the driving circuit and the sensing circuit, the processor configured for iteratively executing the following steps: having the driving circuit sending a driving signal to all of the first electrodes; having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if any object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if any object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and reporting to a host that there is no approaching object when no object approximating or touching at least one of the second electrodes and at least one of the third electrodes is determined. 25. 
A touch sensitive processing method adapted to a touch sensitive processing apparatus configured to connect to a touch screen and used to detect at least one object approximating or touching the touch screen, wherein the touch screen comprises a plurality of first electrodes being parallel to a first axis, a plurality of second electrodes being parallel to a second axis, and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas, the touch sensitive processing method comprising: having a driving circuit sending a driving signal to all of the first electrodes; having a sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if any object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if any object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and reporting to a host that there is no approaching object when no object is determined to be approximating or touching at least one of the second electrodes and at least one of the third electrodes. 26. An electronic apparatus used to detect at least one object approximating or touching a touch screen, comprising: the touch screen, comprising: a plurality of first electrodes being parallel to a first axis; a plurality of second electrodes being parallel to a second axis; and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas; and a touch sensitive processing apparatus, comprising: a driving circuit, connecting to the first electrodes, respectively; a sensing circuit, connecting to the second electrodes and the third electrodes, respectively; and a processor, configured to connect to the driving circuit and the sensing circuit, the processor configured for iteratively executing the following steps: having the driving circuit sending a driving signal to all of the first electrodes; having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if any object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if any object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and reporting to a host that there is no approaching object when no object is determined to be approximating or touching at least one of the second electrodes and at least one of the third electrodes.
A touch sensitive processing apparatus is used to detect at least one object approximating or touching a touch screen and is configured for iteratively executing the following steps: having a driving circuit simultaneously sending a driving signal to two or more first electrodes, wherein at least one of the two or more first electrodes intersects with multiple second electrodes to form multiple intersection areas, the other of the two or more first electrodes intersects with multiple third electrodes to form multiple intersection areas; and having a sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional sensing information and having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information.1. A touch sensitive processing apparatus configured to connect to a touch screen and used to detect at least one object approximating or touching the touch screen, wherein the touch screen comprises a plurality of first electrodes being parallel to a first axis, a plurality of second electrodes being parallel to a second axis, and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas, the touch sensitive processing apparatus comprising: a driving circuit, connecting to the first electrodes, respectively; a sensing circuit, connecting to the second electrodes and the third electrodes, respectively; and a processor, configured to connect to the driving circuit and the sensing circuit, the processor configured for iteratively executing the following steps: having the driving circuit simultaneously sending a driving signal to two or more first electrodes, wherein at least one of the two or more first electrodes intersects with the second electrodes to form the multiple intersection areas, the other of the two or more first electrodes intersects with the third electrodes to form the multiple intersection areas; and having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 2. The touch sensitive processing apparatus of claim 1, wherein the processor is further configured for: executing iteratively the following steps when all of the first electrodes intersecting with the second electrodes to form multiple intersection areas have been sent the driving signal: having the driving circuit sending the driving signal to one of the first electrodes that has not been sent the driving signal; and having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 3. The touch sensitive processing apparatus of claim 2, wherein the processor is further configured for: piecing together all of the one-dimensional sensing information, with respect to the order of the first electrodes, into a two-dimensional sensing information when all of the first electrodes have been sent the driving signal; and detecting at least one object approximating or touching the touch screen according to the two-dimensional sensing information. 4. 
The touch sensitive processing apparatus of claim 1, wherein the two or more first electrodes used for sending the driving signal in the iteratively executed steps are selected in order according to the positions on the touch screen. 5. The touch sensitive processing apparatus of claim 1, wherein the two or more first electrodes used for sending the driving signal in the iteratively executed steps are selected at random. 6. The touch sensitive processing apparatus of claim 1, wherein the processor is further configured for: executing the following steps before executing the iterative steps: having the driving circuit sending the driving signal to all of the first electrodes; having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if at least one object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if at least one object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and executing the iterative steps when at least one object is determined to be approximating or touching at least one of the second electrodes and at least one of the third electrodes. 7. The touch sensitive processing apparatus of claim 1, wherein the intervals between each two of the first electrodes are equal. 8. The touch sensitive processing apparatus of claim 1, wherein a number of the second electrodes equals a number of the third electrodes, and an axial direction of each of the second electrodes is the same as that of one of the third electrodes. 9. The touch sensitive processing apparatus of claim 1, wherein the first axis is parallel to an axial direction of pixel-refreshing of the touch screen. 10. The touch sensitive processing apparatus of claim 1, wherein the second electrodes connect to the touch sensitive processing apparatus via a first side of the touch screen, and the third electrodes connect to the touch sensitive processing apparatus via a second side of the touch screen, wherein the first side is parallel to the second side. 11. The touch sensitive processing apparatus of claim 1, wherein the touch screen is an in-cell touch LCD screen, and the first electrodes are the common electrodes of the in-cell touch LCD screen. 12. 
A touch sensitive processing method adapted to a touch sensitive processing apparatus configured to connect to a touch screen and used to detect at least one object approximating or touching the touch screen, wherein the touch screen comprises a plurality of first electrodes being parallel to a first axis, a plurality of second electrodes being parallel to a second axis, and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas, the touch sensitive processing method comprising: executing iteratively the following steps: having a driving circuit simultaneously sending a driving signal to two or more first electrodes, wherein at least one of the two or more first electrodes intersects with the second electrodes to form the multiple intersection areas, the other of the two or more first electrodes intersects with the third electrodes to form the multiple intersection areas; and having a sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 13. The touch sensitive processing method of claim 12, further comprising: executing iteratively the following steps when all of the first electrodes intersecting with the second electrodes to form multiple intersection areas have been sent the driving signal: having the driving circuit sending the driving signal to one of the first electrodes that has not been sent the driving signal; and having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 14. The touch sensitive processing method of claim 13, further comprising: piecing together all of the one-dimensional sensing information, with respect to the order of the first electrodes, into a two-dimensional sensing information when all of the first electrodes have been sent the driving signal; and detecting at least one object approximating or touching the touch screen according to the two-dimensional sensing information. 15. The touch sensitive processing method of claim 12, wherein the two or more first electrodes used for sending the driving signal in the iteratively executed steps are selected in order according to the positions on the touch screen. 16. The touch sensitive processing method of claim 12, wherein the two or more first electrodes used for sending the driving signal in the iteratively executed steps are selected at random. 17. 
The touch sensitive processing method of claim 12, further comprising: executing the following steps before executing the iterative steps: having the driving circuit sending the driving signal to all of the first electrodes; having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if at least one object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if at least one object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and executing the iterative steps when at least one object is determined to be approximating or touching at least one of the second electrodes and at least one of the third electrodes. 18. The touch sensitive processing method of claim 12, wherein the intervals between each two of the first electrodes are equal. 19. The touch sensitive processing method of claim 12, wherein a number of the second electrodes equals a number of the third electrodes, and an axial direction of each of the second electrodes is the same as that of one of the third electrodes. 20. The touch sensitive processing method of claim 12, wherein the first axis is parallel to an axial direction of pixel-refreshing of the touch screen. 21. The touch sensitive processing method of claim 12, wherein the second electrodes connect to the touch sensitive processing apparatus via a first side of the touch screen, and the third electrodes connect to the touch sensitive processing apparatus via a second side of the touch screen, wherein the first side is parallel to the second side. 22. The touch sensitive processing method of claim 12, wherein the touch screen is an in-cell touch LCD screen, and the first electrodes are the common electrodes of the in-cell touch LCD screen. 23. 
An electronic apparatus used to detect at least one object approximating or touching a touch screen, the electronic apparatus comprising: the touch screen, comprising: a plurality of first electrodes being parallel to a first axis; a plurality of second electrodes being parallel to a second axis; and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas; and a touch sensitive processing apparatus, comprising: a driving circuit, connecting to the first electrodes, respectively; a sensing circuit, connecting to the second electrodes and the third electrodes, respectively; and a processor, configured to connect to the driving circuit and the sensing circuit, the processor configured for iteratively executing the following steps: having the driving circuit simultaneously sending a driving signal to two or more first electrodes, wherein at least one of the two or more first electrodes intersects with the second electrodes to form the multiple intersection areas, the other of the two or more first electrodes intersects with the third electrodes to form the multiple intersection areas; and having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate another one-dimensional sensing information. 24. A touch sensitive processing apparatus configured to connect to a touch screen and used to detect at least one object approximating or touching the touch screen, wherein the touch screen comprises a plurality of first electrodes being parallel to a first axis, a plurality of second electrodes being parallel to a second axis, and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas, the touch sensitive processing apparatus comprising: a driving circuit, connecting to the first electrodes, respectively; a sensing circuit, connecting to the second electrodes and the third electrodes, respectively; and a processor, configured to connect to the driving circuit and the sensing circuit, the processor configured for iteratively executing the following steps: having the driving circuit sending a driving signal to all of the first electrodes; having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if any object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if any object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and reporting to a host that there is no approaching object when no object is determined to be approximating or touching at least one of the second electrodes and at least one of the third electrodes. 25. 
A touch sensitive processing method adapted to a touch sensitive processing apparatus configured to connect to a touch screen and used to detect at least one object approximating or touching the touch screen, wherein the touch screen comprises a plurality of first electrodes being parallel to a first axis, a plurality of second electrodes being parallel to a second axis, and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas, the touch sensitive processing method comprising: having a driving circuit sending a driving signal to all of the first electrodes; having a sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if any object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if any object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and reporting to a host that there is no approaching object when no object is determined to be approximating or touching at least one of the second electrodes and at least one of the third electrodes. 26. An electronic apparatus used to detect at least one object approximating or touching a touch screen, comprising: the touch screen, comprising: a plurality of first electrodes being parallel to a first axis; a plurality of second electrodes being parallel to a second axis; and a plurality of third electrodes being parallel to the second axis, wherein each of the first electrodes is arranged to be spanned on the touch screen and intersects with the second electrodes or the third electrodes to form multiple intersection areas; and a touch sensitive processing apparatus, comprising: a driving circuit, connecting to the first electrodes, respectively; a sensing circuit, connecting to the second electrodes and the third electrodes, respectively; and a processor, configured to connect to the driving circuit and the sensing circuit, the processor configured for iteratively executing the following steps: having the driving circuit sending a driving signal to all of the first electrodes; having the sensing circuit simultaneously sensing the driving signal via the second electrodes to generate a one-dimensional first half screen sensing information, having the sensing circuit simultaneously sensing the driving signal via the third electrodes to generate a one-dimensional second half screen sensing information; determining if any object is approximating or touching at least one of the second electrodes according to the one-dimensional first half screen sensing information; determining if any object is approximating or touching at least one of the third electrodes according to the one-dimensional second half screen sensing information; and reporting to a host that there is no approaching object when no object is determined to be approximating or touching at least one of the second electrodes and at least one of the third electrodes.
2,600
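For readers tracing the claim language above, the following is a minimal sketch of the claimed scan loop, not the applicant's firmware; the helper sense() and every name in it are hypothetical stand-ins. It drives one first electrode from each half of the screen per iteration and senses both half-screen column groups simultaneously (claim 1), drives any leftover first electrodes alone once one half is exhausted (claim 2), and pieces the one-dimensional readouts together in first-electrode order into a two-dimensional sensing information (claim 3).

def dual_half_scan(rows_a, rows_b, cols_2nd, cols_3rd, sense):
    # rows_a: first electrodes crossing the second electrodes (one half).
    # rows_b: first electrodes crossing the third electrodes (other half).
    readout = {}
    # Claim 1's iterative steps: two first electrodes driven simultaneously,
    # both half-screen column groups sensed simultaneously.
    for ra, rb in zip(rows_a, rows_b):
        readout[ra] = sense(ra, cols_2nd)   # one-dimensional sensing information
        readout[rb] = sense(rb, cols_3rd)   # another one-dimensional sensing information
    # Claim 2: drive the remaining first electrodes one at a time.
    for r in rows_a[len(rows_b):] + rows_b[len(rows_a):]:
        group = cols_2nd if r in rows_a else cols_3rd
        readout[r] = sense(r, group)
    # Claim 3: piece the rows together in first-electrode order.
    return [readout[r] for r in sorted(readout)]

# Toy stand-in for the driving/sensing circuits: one touch at row 2, column 1.
def sense(driven_row, col_group):
    return [1.0 if (driven_row, c) == (2, 1) else 0.0 for c in col_group]

image = dual_half_scan([0, 1, 2], [3, 4, 5], [0, 1, 2], [0, 1, 2], sense)
assert image[2][1] == 1.0   # the stitched 2-D map shows the touch

Driving two first electrodes per step roughly halves the number of drive iterations relative to scanning every first electrode individually, which appears to be the point of splitting the sense electrodes into second and third groups; the full-screen broadcast pre-check of claims 6 and 17 would run once before this loop to decide whether a scan is needed at all.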
10,627
10,627
16,122,866
2,684
A location confirmation platform for confirming a released arrestee's location, and a property entry status system for establishing a virtual boundary around a property in which an individual is scheduled or authorized to appear or occupy, and for polling and confirming the individual's presence within the boundary.
1. A system for confirming the location of an incarcerated individual, comprising a mobile device, a global positioning system receiver, a microprocessor and a wireless communication transceiver coupled to the global positioning system receiver; a user interface, comprising a display and a physical characteristic input for recording an attribute of the incarcerated individual; and a storage medium, wherein the mobile device is programmed to send device location and physical characteristic data to a remote system and receive a request for the mobile device location confirmation. 2. The system of claim 1, wherein the physical characteristic input is a fingerprint sensor for recording a fingerprint of the incarcerated individual. 3. The system of claim 1 further comprising a breath analyzer. 4. The system of claim 1, wherein the physical characteristic input is a digital camera for recording a representation of a facial feature of the incarcerated individual. 5. The system of claim 1, wherein the physical characteristic input is a microphone for recording a representation of a voice recording of the incarcerated individual. 6. A computer-implemented method for computerized location confirmation of an incarcerated individual, comprising executing on a processor the steps of: presenting on a computerized user interface a graphical representation of a location of an incarcerated individual; receiving an electronic communication containing physical characteristic data of an incarcerated individual and source location data; retrieving from a storage medium associated with the processor a client file containing personal feature profile data of the incarcerated individual; performing a first comparison of the received physical characteristic data to the retrieved personal feature profile data; performing a second comparison of the received source location data to a predefined location; and updating the client file according to a result obtained from the first comparison and the second comparison, wherein the receiving step comprises receiving physical characteristic data and source location data from a mobile device. 7. The method of claim 6, wherein the physical characteristic data is a representation of a fingerprint of the incarcerated individual. 8. The method of claim 6, wherein the physical characteristic data is a representation of a facial feature of the incarcerated individual. 9. The method of claim 6, wherein the physical characteristic data is a representation of a voice recording of the incarcerated individual. 10. The method of claim 6, further comprising sending a computer generated alert to a remote receiver indicating an actual location of the incarcerated person. 11. The method of claim 6, wherein the graphical representation of a location of an incarcerated individual includes a representation of a virtual boundary of an authorized location of the individual. 12. The system of claim 6, further comprising a processor that presents on a computerized user interface a graphical representation of a location of an incarcerated individual.
A location confirmation platform for confirming a released arrestee's location, and a property entry status system for establishing a virtual boundary around a property in which an individual is scheduled or authorized to appear or occupy, and for polling and confirming the individual's presence within the boundary.1. A system for confirming the location of an incarcerated individual, comprising a mobile device, a global positioning system receiver, a microprocessor and a wireless communication transceiver coupled to the global positioning system receiver; a user interface, comprising a display and a physical characteristic input for recording an attribute of the incarcerated individual; and a storage medium, wherein the mobile device is programmed to send device location and physical characteristic data to a remote system and receive a request for the mobile device location confirmation. 2. The system of claim 1, wherein the physical characteristic input is a fingerprint sensor for recording a fingerprint of the incarcerated individual. 3. The system of claim 1 further comprising a breath analyzer. 4. The system of claim 1, wherein the physical characteristic input is a digital camera for recording a representation of a facial feature of the incarcerated individual. 5. The system of claim 1, wherein the physical characteristic input is a microphone for recording a representation of a voice recording of the incarcerated individual. 6. A computer-implemented method for computerized location confirmation of an incarcerated individual, comprising executing on a processor the steps of: presenting on a computerized user interface a graphical representation of a location of an incarcerated individual; receiving an electronic communication containing physical characteristic data of an incarcerated individual and source location data; retrieving from a storage medium associated with the processor a client file containing personal feature profile data of the incarcerated individual; performing a first comparison of the received physical characteristic data to the retrieved personal feature profile data; performing a second comparison of the received source location data to a predefined location; and updating the client file according to a result obtained from the first comparison and the second comparison, wherein the receiving step comprises receiving physical characteristic data and source location data from a mobile device. 7. The method of claim 6, wherein the physical characteristic data is a representation of a fingerprint of the incarcerated individual. 8. The method of claim 6, wherein the physical characteristic data is a representation of a facial feature of the incarcerated individual. 9. The method of claim 6, wherein the physical characteristic data is a representation of a voice recording of the incarcerated individual. 10. The method of claim 6, further comprising sending a computer generated alert to a remote receiver indicating an actual location of the incarcerated person. 11. The method of claim 6, wherein the graphical representation of a location of an incarcerated individual includes a representation of a virtual boundary of an authorized location of the individual. 12. The system of claim 6, further comprising a processor that presents on a computerized user interface a graphical representation of a location of an incarcerated individual.
2,600
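The method of claim 6 above pairs a biometric check with a geofence check before updating the client file. The sketch below is one possible reading, not the disclosed implementation: the circular boundary model, the match() callback, and all names are assumptions for illustration.

from math import hypot

def confirm_location(client_file, characteristic, source_location, match):
    # First comparison: received physical characteristic data (fingerprint,
    # facial feature, or voice recording per claims 7-9) against the stored
    # personal feature profile data.
    identity_ok = match(characteristic, client_file["profile"])
    # Second comparison: received source location data against the predefined
    # location, modeled here as a circular virtual boundary (center + radius).
    (cx, cy), radius = client_file["boundary"]
    x, y = source_location
    inside = hypot(x - cx, y - cy) <= radius
    # Update the client file according to the results of both comparisons.
    client_file["last_check"] = {"identity_confirmed": identity_ok,
                                 "within_boundary": inside}
    return identity_ok and inside

# Usage with a trivial matcher standing in for a real biometric comparison.
record = {"profile": "template-A", "boundary": ((0.0, 0.0), 5.0)}
ok = confirm_location(record, "template-A", (1.0, 2.0),
                      match=lambda a, b: a == b)
assert ok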
10,628
10,628
15,036,283
2,643
A radio node (12) is configured for use in a wireless communication system (10) in which system information is transmitted in parts. The radio node (12) in this regard generates explicit signaling that is associated with a first part (16) of system information and that indicates a sequence (24) with which a second part (20) of system information is to be demodulated or descrambled. The radio node (12) transmits the explicit signaling over a signaling channel (25). Correspondingly, a wireless communication device (14) receives system information for the system (10) in parts. The device (14) receives the first part (16) over a first channel (18). The device (14) also receives the explicit signaling over the signaling channel (25). The device (14) further receives the second part (20) over a second channel (22), by demodulating or descrambling the second part (20) using the sequence (24) indicated by the explicit signaling.
1-34. (canceled) 35. A method implemented by a wireless communication device for receiving system information for a wireless communication system in parts, the method comprising: receiving, over a first channel, a first part of system information; receiving, over a signaling channel, explicit signaling that is associated with the first part and that indicates a sequence with which the wireless communication device is to demodulate or descramble a second part of system information; and receiving the second part of system information over a second channel, by demodulating or descrambling the second part using the indicated sequence. 36. The method of claim 35, further comprising accessing the wireless communication system using both the first and second parts of system information. 37. The method of claim 35, wherein the second part of system information includes an access information table containing multiple configurations for accessing the wireless communication system, wherein the multiple configurations are respectively indexed by different indices, and wherein the method further comprises: receiving a system signature signal indicating one of multiple different possible system signatures for the wireless communication system; receiving the first part of system information based on the system signature signal; determining an index into the access information table included in the second part of system information, based on the system signature signal; and accessing the wireless communication system using a configuration indexed in the access information table by the determined index. 38. The method of claim 37, further comprising transmitting the first part of system information over a first channel different than a second channel over which the second part of system information is transmitted. 39. The method of claim 35, wherein the explicit signaling is included in the first part of system information and is transmitted over the same channel as the first part of system information. 40. The method of claim 35, wherein the explicit signaling is excluded from the first part of system information, wherein the explicit signaling is configured to be demodulated based on the same sequence with which the first part of system information is configured to be demodulated. 41. The method of claim 35, wherein the first part of system information is to be demodulated or descrambled using a different sequence than that with which the second part of system information is to be demodulated or descrambled. 42. The method of claim 35, wherein the sequence indicated by the explicit signaling distinguishes the second part of system information from one or more other second parts of system information that are receivable using one or more other respective sequences for demodulating or descrambling. 43. The method of claim 35, wherein multiple different possible second parts of system information each include a type of table accessible using an index obtained by the wireless communication device, and wherein the sequence indicated by the explicit signaling distinguishes the second part of system information as including the table of said type targeted by the index. 44. The method of claim 43, wherein the type of table is an access information table which contains multiple configurations for accessing the wireless communication system, wherein the multiple configurations are respectively indexed by different indices. 45. 
The method of claim 35, wherein the second part of system information comprises a common access information table, C-AIT, that includes multiple access information configurations respectively indexed by different system signature indices, SSIs, wherein one or more of the access information configurations comprise a configuration for initially accessing the wireless communication system, and wherein the first part of system information comprises a system signature block, SSB, that is associated with an SSI. 46. The method of claim 35, wherein the explicit signaling indicates a demodulation reference signal sequence. 47. The method of claim 35, wherein the explicit signaling indicates a scrambling code sequence. 48. The method of claim 35, wherein the explicit signaling indicates a synchronization signal sequence. 49. The method of claim 35, wherein the explicit signaling further indicates one or more of: information about a resource size in the time domain and/or frequency domain of a second channel over which the second part of system information is transmitted; a modulation and coding scheme of a second channel over which the second part of system information is transmitted; and an antenna configuration for a second channel over which the second part of system information is transmitted. 50. The method of claim 35, wherein the first part of system information includes a first type of system information, wherein different first parts of system information that indicate different system information of the first type are respectively transmitted in different areas, and wherein the second part of system information is common information transmitted jointly in the different areas in which the different first parts of system information are respectively transmitted. 51. The method of claim 35, wherein the first part of system information is transmitted more frequently than the second part of system information. 52. The method of claim 35, wherein the second part of system information includes initial access information required for a wireless communication device to initially access the wireless communication system. 53. The method of claim 35, wherein the wireless communication system is a Long Term Evolution system, and wherein the first part of system information comprises a master information block and the second part of system information comprises a system information block. 54. A method implemented by a radio node configured for use in a wireless communication system in which system information is transmitted in parts, the method comprising: generating explicit signaling that is associated with a first part of system information and that indicates a sequence with which a second part of system information is to be demodulated or descrambled; and transmitting the explicit signaling over a signaling channel. 55. 
The method of claim 54, wherein the second part of system information includes an access information table containing multiple configurations for accessing the wireless communication system, wherein the multiple configurations are respectively indexed by different indices, and wherein the method further comprises: receiving a system signature signal indicating one of multiple different possible system signatures for the wireless communication system; receiving the first part of system information based on the system signature signal; determining an index into the access information table included in the second part of system information, based on the system signature signal; and accessing the wireless communication system using a configuration indexed in the access information table by the determined index. 56. The method of claim 55, further comprising transmitting the first part of system information over a first channel different than a second channel over which the second part of system information is transmitted. 57. The method of claim 54, wherein the explicit signaling is included in the first part of system information and is transmitted over the same channel as the first part of system information. 58. The method of claim 54, wherein the explicit signaling is excluded from the first part of system information, wherein the explicit signaling is configured to be demodulated based on the same sequence with which the first part of system information is configured to be demodulated. 59. The method of claim 54, wherein the first part of system information is to be demodulated or descrambled using a different sequence than that with which the second part of system information is to be demodulated or descrambled. 60. The method of claim 54, wherein the sequence indicated by the explicit signaling distinguishes the second part of system information from one or more other second parts of system information that are receivable using one or more other respective sequences for demodulating or descrambling. 61. The method of claim 54, wherein multiple different possible second parts of system information each include a type of table accessible using an index obtained by the wireless communication device, and wherein the sequence indicated by the explicit signaling distinguishes the second part of system information as including the table of said type targeted by the index. 62. The method of claim 61, wherein the type of table is an access information table which contains multiple configurations for accessing the wireless communication system, wherein the multiple configurations are respectively indexed by different indices. 63. 
The method of claim 54, wherein the explicit signaling further indicates one or more of: information about a resource size in the time domain and/or frequency domain of a second channel over which the second part of system information is transmitted; a modulation and coding scheme of a second channel over which the second part of system information is transmitted; and an antenna configuration for a second channel over which the second part of system information is transmitted. 68. The method of claim 54, wherein the first part of system information includes a first type of system information, wherein different first parts of system information that indicate different system information of the first type are respectively transmitted in different areas, and wherein the second part of system information is common information transmitted jointly in the different areas in which the different first parts of system information are respectively transmitted. 69. The method of claim 54, wherein the first part of system information is transmitted more frequently than the second part of system information. 70. The method of claim 54, wherein the second part of system information includes initial access information required for a wireless communication device to initially access the wireless communication system. 71. The method of claim 54, wherein the wireless communication system is a Long Term Evolution system, and wherein the first part of system information comprises a master information block and the second part of system information comprises a system information block. 72. A wireless communication device for receiving system information for a wireless communication system in parts, the wireless communication device comprising: a processor and a memory, the memory containing instructions executable by the processor whereby the wireless communication device is configured to: receive, over a first channel, a first part of system information; receive, over a signaling channel, explicit signaling that is associated with the first part and that indicates a sequence with which the wireless communication device is to demodulate or descramble a second part of system information; and receive the second part of system information over a second channel, by demodulating or descrambling the second part using the indicated sequence. 73. The wireless communication device of claim 72, wherein the second part of system information comprises a common access information table, C-AIT, that includes multiple access information configurations respectively indexed by different system signature indices, SSIs, wherein one or more of the access information configurations comprise a configuration for initially accessing the wireless communication system, and wherein the first part of system information comprises a system signature block, SSB, that is associated with an SSI. 74. A radio node for use in a wireless communication system in which system information is transmitted in parts, the radio node comprising: a processor and a memory, the memory containing instructions executable by the processor whereby the radio node is configured to: generate explicit signaling that is associated with a first part of system information and that indicates a sequence with which a second part of system information is to be demodulated or descrambled; and transmit the explicit signaling over a signaling channel. 75. 
The radio node of claim 74, wherein the second part of system information comprises a common access information table, C-AIT, that includes multiple access information configurations respectively indexed by different system signature indices, SSIs, wherein one or more of the access information configurations comprise a configuration for initially accessing the wireless communication system, and wherein the first part of system information comprises a system signature block, SSB, that is associated with an SSI. 76. A computer program product stored on a non-transitory computer readable medium and comprising instructions that, when executed by processing circuitry of a wireless communication device, cause the wireless communication device to receive system information for a wireless communication system in parts by: receiving, over a first channel, a first part of system information; receiving, over a signaling channel, explicit signaling that is associated with the first part and that indicates a sequence with which the wireless communication device is to demodulate or descramble a second part of system information; and receiving the second part of system information over a second channel, by demodulating or descrambling the second part using the indicated sequence. 77. A computer program product stored on a non-transitory computer readable medium and comprising instructions that, when executed by processing circuitry of a radio node configured for use in a wireless communication system in which system information is transmitted in parts, cause the radio node to: generate explicit signaling that is associated with a first part of system information and that indicates a sequence with which a second part of system information is to be demodulated or descrambled; and transmit the explicit signaling over a signaling channel.
A radio node (12) is configured for use in a wireless communication system (10) in which system information is transmitted in parts. The radio node (12) in this regard generates explicit signaling that is associated with a first part (16) of system information and that indicates a sequence (24) with which a second part (20) of system information is to be demodulated or descrambled. The radio node (12) transmits the explicit signaling over a signaling channel (25). Correspondingly, a wireless communication device (14) receives system information for the system (10) in parts. The device (14) receives the first part (16) over a first channel (18). The device (14) also receives the explicit signaling over the signaling channel (25). The device (14) further receives the second part (20) over a second channel (22), by demodulating or descrambling the second part (20) using the sequence (24) indicated by the explicit signaling.1-34. (canceled) 35. A method implemented by a wireless communication device for receiving system information for a wireless communication system in parts, the method comprising: receiving, over a first channel, a first part of system information; receiving, over a signaling channel, explicit signaling that is associated with the first part and that indicates a sequence with which the wireless communication device is to demodulate or descramble a second part of system information; and receiving the second part of system information over a second channel, by demodulating or descrambling the second part using the indicated sequence. 36. The method of claim 35, further comprising accessing the wireless communication system using both the first and second parts of system information. 37. The method of claim 35, wherein the second part of system information includes an access information table containing multiple configurations for accessing the wireless communication system, wherein the multiple configurations are respectively indexed by different indices, and wherein the method further comprises: receiving a system signature signal indicating one of multiple different possible system signatures for the wireless communication system; receiving the first part of system information based on the system signature signal; determining an index into the access information table included in the second part of system information, based on the system signature signal; and accessing the wireless communication system using a configuration indexed in the access information table by the determined index. 38. The method of claim 37, further comprising transmitting the first part of system information over a first channel different than a second channel over which the second part of system information is transmitted. 39. The method of claim 35, wherein the explicit signaling is included in the first part of system information and is transmitted over the same channel as the first part of system information. 40. The method of claim 35, wherein the explicit signaling is excluded from the first part of system information, wherein the explicit signaling is configured to be demodulated based on the same sequence with which the first part of system information is configured to be demodulated. 41. The method of claim 35, wherein the first part of system information is to be demodulated or descrambled using a different sequence than that with which the second part of system information is to be demodulated or descrambled. 42. 
The method of claim 35, wherein the sequence indicated by the explicit signaling distinguishes the second part of system information from one or more other second parts of system information that are receivable using one or more other respective sequences for demodulating or descrambling. 43. The method of claim 35, wherein multiple different possible second parts of system information each include a type of table accessible using an index obtained by the wireless communication device, and wherein the sequence indicated by the explicit signaling distinguishes the second part of system information as including the table of said type targeted by the index. 44. The method of claim 43, wherein the type of table is an access information table which contains multiple configurations for accessing the wireless communication system, wherein the multiple configurations are respectively indexed by different indices. 45. The method of claim 35, wherein the second part of system information comprises a common access information table, C-AIT, that includes multiple access information configurations respectively indexed by different system signature indices, SSIs, wherein one or more of the access information configurations comprise a configuration for initially accessing the wireless communication system, and wherein the first part of system information comprises a system signature block, SSB, that is associated with an SSI. 46. The method of claim 35, wherein the explicit signaling indicates a demodulation reference signal sequence. 47. The method of claim 35, wherein the explicit signaling indicates a scrambling code sequence. 48. The method of claim 35, wherein the explicit signaling indicates a synchronization signal sequence. 49. The method of claim 35, wherein the explicit signaling further indicates one or more of: information about a resource size in the time domain and/or frequency domain of a second channel over which the second part of system information is transmitted; a modulation and coding scheme of a second channel over which the second part of system information is transmitted; and an antenna configuration for a second channel over which the second part of system information is transmitted. 50. The method of claim 35, wherein the first part of system information includes a first type of system information, wherein different first parts of system information that indicate different system information of the first type are respectively transmitted in different areas, and wherein the second part of system information is common information transmitted jointly in the different areas in which the different first parts of system information are respectively transmitted. 51. The method of claim 35, wherein the first part of system information is transmitted more frequently than the second part of system information. 52. The method of claim 35, wherein the second part of system information includes initial access information required for a wireless communication device to initially access the wireless communication system. 53. The method of claim 35, wherein the wireless communication system is a Long Term Evolution system, and wherein the first part of system information comprises a master information block and the second part of system information comprises a system information block. 54. 
A method implemented by a radio node configured for use in a wireless communication system in which system information is transmitted in parts, the method comprising: generating explicit signaling that is associated with a first part of system information and that indicates a sequence with which a second part of system information is to be demodulated or descrambled; and transmitting the explicit signaling over a signaling channel. 55. The method of claim 54, wherein the second part of system information includes an access information table containing multiple configurations for accessing the wireless communication system, wherein the multiple configurations are respectively indexed by different indices, and wherein the method further comprises: receiving a system signature signal indicating one of multiple different possible system signatures for the wireless communication system; receiving the first part of system information based on the system signature signal; determining an index into the access information table included in the second part of system information, based on the system signature signal; and accessing the wireless communication system using a configuration indexed in the access information table by the determined index. 56. The method of claim 55, further comprising transmitting the first part of system information over a first channel different than a second channel over which the second part of system information is transmitted. 57. The method of claim 54, wherein the explicit signaling is included in the first part of system information and is transmitted over the same channel as the first part of system information. 58. The method of claim 54, wherein the explicit signaling is excluded from the first part of system information, wherein the explicit signaling is configured to be demodulated based on the same sequence with which the first part of system information is configured to be demodulated. 59. The method of claim 54, wherein the first part of system information is to be demodulated or descrambled using a different sequence than that with which the second part of system information is to be demodulated or descrambled. 60. The method of claim 54, wherein the sequence indicated by the explicit signaling distinguishes the second part of system information from one or more other second parts of system information that are receivable using one or more other respective sequences for demodulating or descrambling. 61. The method of claim 54, wherein multiple different possible second parts of system information each include a type of table accessible using an index obtained by the wireless communication device, and wherein the sequence indicated by the explicit signaling distinguishes the second part of system information as including the table of said type targeted by the index. 62. The method of claim 61, wherein the type of table is an access information table which contains multiple configurations for accessing the wireless communication system, wherein the multiple configurations are respectively indexed by different indices. 63. 
The method of claim 54, wherein the second part of system information comprises a common access information table, C-AIT, that includes multiple access information configurations respectively indexed by different system signature indices, SSIs, wherein one or more of the access information configurations comprise a configuration for initially accessing the wireless communication system, and wherein the first part of system information comprises a system signature block, SSB, that is associated with an SSI. 64. The method of claim 54, wherein the explicit signaling indicates a demodulation reference signal sequence. 65. The method of claim 54, wherein the explicit signaling indicates a scrambling code sequence. 66. The method of claim 54, wherein the explicit signaling indicates a synchronization signal sequence. 67. The method of claim 54, wherein the explicit signaling further indicates one or more of: information about a resource size in the time domain and/or frequency domain of a second channel over which the second part of system information is transmitted; a modulation and coding scheme of a second channel over which the second part of system information is transmitted; and an antenna configuration for a second channel over which the second part of system information is transmitted. 68. The method of claim 54, wherein the first part of system information includes a first type of system information, wherein different first parts of system information that indicate different system information of the first type are respectively transmitted in different areas, and wherein the second part of system information is common information transmitted jointly in the different areas in which the different first parts of system information are respectively transmitted. 69. The method of claim 54, wherein the first part of system information is transmitted more frequently than the second part of system information. 70. The method of claim 54, wherein the second part of system information includes initial access information required for a wireless communication device to initially access the wireless communication system. 71. The method of claim 54, wherein the wireless communication system is a Long Term Evolution system, and wherein the first part of system information comprises a master information block and the second part of system information comprises a system information block. 72. A wireless communication device for receiving system information for a wireless communication system in parts, the wireless communication device comprising: a processor and a memory, the memory containing instructions executable by the processor whereby the wireless communication device is configured to: receive, over a first channel, a first part of system information; receive, over a signaling channel, explicit signaling that is associated with the first part and that indicates a sequence with which the wireless communication device is to demodulate or descramble a second part of system information; and receive the second part of system information over a second channel, by demodulating or descrambling the second part using the indicated sequence. 73. 
The wireless communication device of claim 72, wherein the second part of system information comprises a common access information table, C-AIT, that includes multiple access information configurations respectively indexed by different system signature indices, SSIs, wherein one or more of the access information configurations comprise a configuration for initially accessing the wireless communication system, and wherein the first part of system information comprises a system signature block, SSB, that is associated with an SSI. 74. A radio node for use in a wireless communication system in which system information is transmitted in parts, the radio node comprising: a processor and a memory, the memory containing instructions executable by the processor whereby the radio node is configured to: generate explicit signaling that is associated with a first part of system information and that indicates a sequence with which a second part of system information is to be demodulated or descrambled; and transmit the explicit signaling over a signaling channel. 75. The radio node of claim 74, wherein the second part of system information comprises a common access information table, C-AIT, that includes multiple access information configurations respectively indexed by different system signature indices, SSIs, wherein one or more of the access information configurations comprise a configuration for initially accessing the wireless communication system, and wherein the first part of system information comprises a system signature block, SSB, that is associated with an SSI. 76. A computer program product stored on a non-transitory computer readable medium and comprising instructions that, when executed by processing circuitry of a wireless communication device, cause the wireless communication device to receive system information for a wireless communication system in parts by: receiving, over a first channel, a first part of system information; receiving, over a signaling channel, explicit signaling that is associated with the first part and that indicates a sequence with which the wireless communication device is to demodulate or descramble a second part of system information; and receiving the second part of system information over a second channel, by demodulating or descrambling the second part using the indicated sequence. 77. A computer program product stored on a non-transitory computer readable medium and comprising instructions that, when executed by processing circuitry of a radio node configured for use in a wireless communication system in which system information is transmitted in parts, cause the radio node to: generate explicit signaling that is associated with a first part of system information and that indicates a sequence with which a second part of system information is to be demodulated or descrambled; and transmit the explicit signaling over a signaling channel.
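The two-part system information scheme claimed above lends itself to a short illustration. The Python sketch below models the receive side: the explicit signaling associated with the first part (e.g., an SSB) names the sequence with which the second part (e.g., a C-AIT) is descrambled, and an SSI then indexes the recovered table. The toy sequence generator, the XOR descrambling model, and all names (AccessConfig, receive_c_ait, select_access_config) are illustrative assumptions, not the claimed implementation.

```python
# Minimal sketch, assuming XOR-style descrambling keyed by the signaled
# sequence ID and a C-AIT modeled as a dict keyed by SSI. Illustrative only.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class AccessConfig:
    preamble_format: int      # hypothetical fields; the claims do not
    tx_power_dbm: float       # enumerate the configuration contents


def scrambling_sequence(seq_id: int, length: int) -> List[int]:
    # Toy pseudo-random bit sequence keyed by the signaled sequence ID.
    state = seq_id or 1
    out = []
    for _ in range(length):
        state = (1103515245 * state + 12345) % (1 << 31)
        out.append(state & 1)
    return out


def descramble(bits: List[int], seq: List[int]) -> List[int]:
    # Descrambling modeled as a bitwise XOR with the indicated sequence.
    return [b ^ s for b, s in zip(bits, seq)]


def receive_c_ait(raw_bits: List[int], signaled_seq_id: int) -> List[int]:
    # The explicit signaling tells the device which sequence to apply to
    # the second part of system information.
    return descramble(raw_bits, scrambling_sequence(signaled_seq_id, len(raw_bits)))


def select_access_config(c_ait: Dict[int, AccessConfig], ssi: int) -> AccessConfig:
    # The SSI derived from the system signature signal indexes the C-AIT.
    return c_ait[ssi]


c_ait = {0: AccessConfig(0, 23.0), 1: AccessConfig(1, 20.0)}
print(select_access_config(c_ait, ssi=1))
```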
2,600
10,629
10,629
15,429,898
2,652
Various embodiments of a hearing assistance device and a method of forming such a device are disclosed. The device includes a housing having a shell and a frame disposed at least partially within the shell. An inner surface of the shell and at least a portion of the frame define a void. Further, an indentation hardness value of the frame is greater than an indentation hardness value of the shell. The hearing assistance device also includes hearing assistance components that are disposed at least partially within the void.
1. A hearing assistance device, comprising: a housing comprising a shell and a frame disposed at least partially within the shell, wherein an inner surface of the shell and at least a portion of the frame define a void, and further wherein an indentation hardness value of the frame is greater than an indentation hardness value of the shell; and hearing assistance components disposed at least partially within the void. 2. The device of claim 1, wherein a first component of the hearing assistance components is disposed on the frame. 3. The device of claim 2, wherein a second component of the hearing assistance components is disposed on the inner surface of the shell. 4. The device of claim 1, wherein the frame is attached to the inner surface of the shell. 5. The device of claim 1, wherein a first region of the inner surface of the shell is spaced apart from the frame and a second region of the inner surface of the shell is in contact with the frame. 6. The device of claim 5, wherein the first region of the inner surface of the shell is adapted to collapse into the void when the hearing assistance device is inserted into an ear canal of a wearer. 7. The device of claim 1, wherein the shell extends between a first end of the shell and a second end of the shell, wherein an outlet port is disposed at the first end of the shell. 8. The device of claim 7, further comprising a faceplate connected to the frame, wherein the faceplate is disposed at the second end of the shell. 9. The device of claim 7, wherein the shell has a length along a longitudinal axis that extends between the first and second ends, wherein the frame has a length along the longitudinal axis that is less than the length of the shell. 10. The device of claim 1, wherein the shell comprises silicone. 11. The device of claim 1, wherein the indentation hardness value of the shell is at least about 20 Shore A and no greater than about 70 Shore A. 12. The device of claim 11, wherein the indentation hardness value of the frame is at least about 50 Shore D and no greater than about 60 Rockwell C. 13. The device of claim 1, wherein the shell comprises an average thickness of at least about 0.1 mm and no greater than about 10 mm. 14. A method of forming a hearing assistance device comprising a housing and hearing assistance components disposed at least partially within the housing, comprising: forming the housing, wherein forming the housing comprises forming a shell and forming a frame; disposing at least one hearing assistance component of the hearing assistance components on the frame; and inserting the frame into the shell such that a void is defined between an inner surface of the shell and at least a portion of the frame. 15. The method of claim 14, further comprising attaching the frame to the inner surface of the shell. 16. The method of claim 14, wherein forming the shell comprises injection molding the shell. 17. The method of claim 14, wherein forming the frame comprises 3D printing the frame. 18. The method of claim 14, wherein forming the shell and forming the frame comprises 3D printing the shell and frame such that the shell and frame are integral. 19. The method of claim 14, further comprising connecting a faceplate to the frame and to an opening in the shell through which the frame is inserted. 20. The method of claim 14, wherein inserting the frame into the shell comprises inserting the frame into a slot formed in the inner surface of the shell.
Various embodiments of a hearing assistance device and a method of forming such a device are disclosed. The device includes a housing having a shell and a frame disposed at least partially within the shell. An inner surface of the shell and at least a portion of the frame define a void. Further, an indentation hardness value of the frame is greater than an indentation hardness value of the shell. The hearing assistance device also includes hearing assistance components that are disposed at least partially within the void.1. A hearing assistance device, comprising: a housing comprising a shell and a frame disposed at least partially within the shell, wherein an inner surface of the shell and at least a portion of the frame define a void, and further wherein an indentation hardness value of the frame is greater than an indentation hardness value of the shell; and hearing assistance components disposed at least partially within the void. 2. The device of claim 1, wherein a first component of the hearing assistance components is disposed on the frame. 3. The device of claim 2, wherein a second component of the hearing assistance components is disposed on the inner surface of the shell. 4. The device of claim 1, wherein the frame is attached to the inner surface of the shell. 5. The device of claim 1, wherein a first region of the inner surface of the shell is spaced apart from the frame and a second region of the inner surface of the shell is in contact with the frame. 6. The device of claim 5, wherein the first region of the inner surface of the shell is adapted to collapse into the void when the hearing assistance device is inserted into an ear canal of a wearer. 7. The device of claim 1, wherein the shell extends between a first end of the shell and a second end of the shell, wherein an outlet port is disposed at the first end of the shell. 8. The device of claim 7, further comprising a faceplate connected to the frame, wherein the faceplate is disposed at the second end of the shell. 9. The device of claim 7, wherein the shell has a length along a longitudinal axis that extends between the first and second ends, wherein the frame has a length along the longitudinal axis that is less than the length of the shell. 10. The device of claim 1, wherein the shell comprises silicone. 11. The device of claim 1, wherein the indentation hardness value of the shell is at least about 20 Shore A and no greater than about 70 Shore A. 12. The device of claim 11, wherein the indentation hardness value of the frame is at least about 50 Shore D and no greater than about 60 Rockwell C. 13. The device of claim 1, wherein the shell comprises an average thickness of at least about 0.1 mm and no greater than about 10 mm. 14. A method of forming a hearing assistance device comprising a housing and hearing assistance components disposed at least partially within the housing, comprising: forming the housing, wherein forming the housing comprises forming a shell and forming a frame; disposing at least one hearing assistance component of the hearing assistance components on the frame; and inserting the frame into the shell such that a void is defined between an inner surface of the shell and at least a portion of the frame. 15. The method of claim 14, further comprising attaching the frame to the inner surface of the shell. 16. The method of claim 14, wherein forming the shell comprises injection molding the shell. 17. The method of claim 14, wherein forming the frame comprises 3D printing the frame. 18. 
The method of claim 14, wherein forming the shell and forming the frame comprises 3D printing the shell and frame such that the shell and frame are integral. 19. The method of claim 14, further comprising connecting a faceplate to the frame and to an opening in the shell through which the frame is inserted. 20. The method of claim 14, wherein inserting the frame into the shell comprises inserting the frame into a slot formed in the inner surface of the shell.
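Claims 11 through 13 bound the shell and frame by numeric hardness and thickness ranges. As a minimal sketch, assuming a simple design-rule check is wanted, the Python below range-checks candidate values. Note that the claims state frame hardness on the Shore D and Rockwell C scales, which are not directly comparable to the shell's Shore A scale, so no cross-scale conversion is attempted and the Rockwell C upper bound is left unchecked. Function names and the example values are hypothetical.

```python
# Range checks derived from claims 11-13; all numeric inputs are examples.

def shell_ok(shore_a: float, thickness_mm: float) -> bool:
    # Claim 11: shell hardness at least about 20 and no greater than about
    # 70 Shore A. Claim 13: average shell thickness at least about 0.1 mm
    # and no greater than about 10 mm.
    return 20.0 <= shore_a <= 70.0 and 0.1 <= thickness_mm <= 10.0


def frame_ok(shore_d: float) -> bool:
    # Claim 12: frame hardness at least about 50 Shore D. The claimed upper
    # bound is given on the Rockwell C scale and is not checked here.
    return shore_d >= 50.0


print(shell_ok(40.0, 1.5), frame_ok(62.0))  # True True
```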
2,600
10,630
10,630
13,777,804
2,621
A system includes at least one lighting device, e.g., at least one LED luminaire, and a control circuit configured to control a spectral output produced by the at least one lighting device responsive to environmental information about an area illuminated by the at least one lighting device. The control circuit may be configured to control a color temperature of the illumination responsive to the environmental information. In some embodiments, the control circuit may be configured to lower the color temperature of the illumination responsive to the environmental information indicating a level of reflected light and/or a weather condition, such as precipitation, correlated with the presence or likely presence of glare.
1. A system comprising: at least one lighting device; and a control circuit configured to control a spectral output produced by the at least one lighting device responsive to environmental information about an area illuminated by the at least one lighting device. 2. The system of claim 1, wherein the control circuit is configured to control a color temperature of the illumination responsive to the environmental information. 3. The system of claim 2, wherein the control circuit is configured to lower the color temperature of the illumination responsive to the environmental information indicating a condition causing glare. 4. The system of claim 3, wherein the control circuit is configured to lower the color temperature of the illumination responsive to the environmental information indicating precipitation. 5. The system of claim 1, wherein the environmental information comprises reflected light information and/or weather information. 6. The system of claim 1, wherein the environmental information is provided by at least one environmental sensor positioned proximate the at least one lighting device. 7. The system of claim 6, wherein the at least one environmental sensor comprises at least one light sensor and/or at least one weather sensor. 8. The system of claim 1, wherein the environmental information is provided by a weather monitoring system. 9. The system of claim 1, wherein the at least one lighting device comprises at least one outdoor luminaire. 10. The system of claim 1, wherein the at least one lighting device comprises a headlight of a vehicle and wherein the environmental information is provided by at least one environmental sensor positioned on the vehicle. 11. The system of claim 1, wherein the at least one lighting device comprises at least one LED lighting device. 12. A system comprising: at least one LED luminaire; at least one environmental sensor configured to sense a glare-correlated environmental characteristic of an area illuminated by the at least one LED luminaire; and a control circuit operatively coupled to the at least one LED luminaire and to the at least one environmental sensor and configured to control a color temperature of illumination produced by the at least one LED luminaire responsive to the sensed environmental characteristic. 13. The system of claim 12, wherein the at least one environmental sensor comprises at least one reflected light sensor. 14. The system of claim 12, wherein the at least one environmental sensor comprises at least one weather sensor. 15. The system of claim 14, wherein the at least one weather sensor comprises a precipitation sensor. 16. The system of claim 12, wherein the at least one environmental sensor is positioned on a structure that supports the at least one LED luminaire. 17. The system of claim 12, wherein the at least one LED luminaire comprises a plurality of spaced-apart LED luminaires, wherein the at least one environmental sensor comprises at least one environmental sensor configured to sense a glare-correlated environmental characteristic of an area collectively illuminated by the plurality of LED luminaires, and wherein the control circuit is operatively coupled to the plurality of LED luminaires and configured to collectively control a color temperature of illumination produced by the plurality of LED luminaires. 18. 
A method of operating a lighting system, the method comprising: providing glare-correlated environmental information pertaining to an area illuminated by the lighting system; and controlling a spectral output of at least one lighting device of the lighting system responsive to the glare-correlated environmental information. 19. The method of claim 18, wherein controlling a spectral output of at least one lighting device of the lighting system responsive to the glare-correlated environmental information comprises controlling a color temperature of illumination produced by the at least one lighting device responsive to the environmental information. 20. The method of claim 18, wherein the environmental information comprises reflected light information and/or weather information. 21. The method of claim 18, wherein providing glare-correlated environmental information pertaining to an area illuminated by the lighting system comprises providing the environmental information from at least one environmental sensor positioned proximate the at least one lighting device. 22. The method of claim 21, wherein the at least one environmental sensor comprises at least one light sensor and/or at least one weather sensor. 23. The method of claim 18, wherein providing glare-correlated environmental information pertaining to an area illuminated by the lighting system comprises obtaining the environmental information from a weather monitoring system. 24. The method of claim 18, wherein the at least one lighting device comprises at least one outdoor luminaire. 25. The method of claim 18, wherein the at least one lighting device comprises a headlight of a vehicle. 26. The method of claim 18, wherein the at least one lighting device comprises at least one LED lighting device.
A system includes at least one lighting device, e.g., at least one LED luminaire, and a control circuit configured to control a spectral output produced by the at least one lighting device responsive to environmental information about an area illuminated by the at least one lighting device. The control circuit may be configured to control a color temperature of the illumination responsive to the environmental information. In some embodiments, the control circuit may be configured to lower the color temperature of the illumination responsive to the environmental information indicating a level of reflected light and/or a weather condition, such as precipitation, correlated with the presence or likely presence of glare.1. A system comprising: at least one lighting device; and a control circuit configured to control a spectral output produced by the at least one lighting device responsive to environmental information about an area illuminated by the at least one lighting device. 2. The system of claim 1, wherein the control circuit is configured to control a color temperature of the illumination responsive to the environmental information. 3. The system of claim 2, wherein the control circuit is configured to lower the color temperature of the illumination responsive to the environmental information indicating a condition causing glare. 4. The system of claim 3, wherein the control circuit is configured to lower the color temperature of the illumination responsive to the environmental information indicating precipitation. 5. The system of claim 1, wherein the environmental information comprises reflected light information and/or weather information. 6. The system of claim 1, wherein the environmental information is provided by at least one environmental sensor positioned proximate the at least one lighting device. 7. The system of claim 6, wherein the at least one environmental sensor comprises at least one light sensor and/or at least one weather sensor. 8. The system of claim 1, wherein the environmental information is provided by a weather monitoring system. 9. The system of claim 1, wherein the at least one lighting device comprises at least one outdoor luminaire. 10. The system of claim 1, wherein the at least one lighting device comprises a headlight of a vehicle and wherein the environmental information is provided by at least one environmental sensor positioned on the vehicle. 11. The system of claim 1, wherein the at least one lighting device comprises at least one LED lighting device. 12. A system comprising: at least one LED luminaire; at least one environmental sensor configured to sense a glare-correlated environmental characteristic of an area illuminated by the at least one LED luminaire; and a control circuit operatively coupled to the at least one LED luminaire and to the at least one environmental sensor and configured to control a color temperature of illumination produced by the at least one LED luminaire responsive to the sensed environmental characteristic. 13. The system of claim 12, wherein the at least one environmental sensor comprises at least one reflected light sensor. 14. The system of claim 12, wherein the at least one environmental sensor comprises at least one weather sensor. 15. The system of claim 14, wherein the at least one weather sensor comprises a precipitation sensor. 16. The system of claim 12, wherein the at least one environmental sensor is positioned on a structure that supports the at least one LED luminaire. 17. 
The system of claim 12, wherein the at least one LED luminaire comprises a plurality of spaced-apart LED luminaires, wherein the at least one environmental sensor comprises at least one environmental sensor configured to sense a glare-correlated environmental characteristic of an area collectively illuminated by the plurality of LED luminaires, and wherein the control circuit is operatively coupled to the plurality of LED luminaires and configured to collectively control a color temperature of illumination produced by the plurality of LED luminaires. 18. A method of operating a lighting system, the method comprising: providing glare-correlated environmental information pertaining to an area illuminated by the lighting system; and controlling a spectral output of at least one lighting device of the lighting system responsive to the glare-correlated environmental information. 19. The method of claim 18, wherein controlling a spectral output of at least one lighting device of the lighting system responsive to the glare-correlated environmental information comprises controlling a color temperature of illumination produced by the at least one lighting device responsive to the environmental information. 20. The method of claim 18, wherein the environmental information comprises reflected light information and/or weather information. 21. The method of claim 18, wherein providing glare-correlated environmental information pertaining to an area illuminated by the lighting system comprises providing the environmental information from at least one environmental sensor positioned proximate the at least one lighting device. 22. The method of claim 21, wherein the at least one environmental sensor comprises at least one light sensor and/or at least one weather sensor. 23. The method of claim 18, wherein providing glare-correlated environmental information pertaining to an area illuminated by the lighting system comprises obtaining the environmental information from a weather monitoring system. 24. The method of claim 18, wherein the at least one lighting device comprises at least one outdoor luminaire. 25. The method of claim 18, wherein the at least one lighting device comprises a headlight of a vehicle. 26. The method of claim 18, wherein the at least one lighting device comprises at least one LED lighting device.
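Claims 1 through 4 and 18 through 19 describe lowering the color temperature of the emitted illumination when environmental information (reflected light and/or precipitation) indicates a glare-prone condition. A minimal sketch of that control rule follows; the lux threshold and the two CCT set points (3000 K warm, 5000 K cool) are assumptions for illustration, since the claims leave the actual values and spectra unspecified.

```python
# Sketch of the claimed control behavior: pick a warmer (lower) correlated
# color temperature when glare is likely. All numeric values are assumed.

def target_color_temperature_k(reflected_lux: float,
                               precipitation: bool,
                               glare_lux_threshold: float = 500.0) -> int:
    # Glare is inferred from precipitation or a high reflected-light level,
    # mirroring the reflected light / weather information of claims 3-5.
    glare_likely = precipitation or reflected_lux >= glare_lux_threshold
    return 3000 if glare_likely else 5000


print(target_color_temperature_k(120.0, precipitation=False))  # 5000
print(target_color_temperature_k(800.0, precipitation=False))  # 3000
print(target_color_temperature_k(120.0, precipitation=True))   # 3000
```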
2,600
10,631
10,631
16,139,906
2,600
A child safety seat and temperature warning system includes a child restraint including a seat and a backrest that is attached to and extends upwardly from the seat. A processor is mounted in the child restraint. A weight sensor is mounted within the seat and detects when a weight is positioned on the seat. A heat sensor is mounted in the child restraint and detects an ambient temperature adjacent to the child restraint. The weight sensor and the heat sensor are electrically coupled to the processor. A vehicle includes an alarm system that is in communication with the processor. The vehicle alarm system is turned on when the vehicle alarm system receives a warning signal. The warning signal is sent by the heat sensor to the processor when the weight sensor detects a weight and the heat sensor detects an unsuitable temperature which is greater than an upper threshold temperature.
1. A child safety seat and temperature warning system, said system comprising: a child restraint including a seat and a backrest being attached to and extending upwardly from said seat; a processor being mounted in said child restraint; a weight sensor being mounted within said seat and detecting when a weight is positioned on said seat; a heat sensor being mounted in said backrest of said child restraint and detecting an ambient temperature adjacent to said child restraint, said weight sensor and said heat sensor being operationally coupled to said processor; a vehicle including a dashboard and an alarm system, said processor being in communication with said alarm system; said vehicle alarm system emitting sound and flashing light when said vehicle alarm system receives a warning signal, said warning signal being sent by said heat sensor to said processor when said weight sensor detects a weight and said heat sensor detects an unsuitable temperature being greater than an upper threshold temperature; a cellular remote device; and a call box, said call box being operationally coupled to said processor, said call box being programmable to initiate a telecommunication to said cellular remote device, said telecommunication including transmitting a temperature within said vehicle as detected by said heat sensor, said cellular remote device displaying the temperature within said vehicle as detected by said heat sensor upon said cellular remote device receiving said telecommunication, wherein said cellular remote device is configured to display the temperature within said vehicle as detected by said heat sensor without said telecommunication being responded to by a user. 2. The child safety seat and temperature warning system according to claim 1, wherein: said alarm system includes: a transmitter configured to transmit a cellular emergency signal; a global positioning system providing a location to said transmitter such that the location is transmitted by said transmitter. 3. The child safety seat and temperature warning system according to claim 2, wherein said transmitter transmits said cellular emergency signal when said warning signal is received by said alarm system. 4. The child safety seat and temperature warning system according to claim 2, wherein said alarm system includes a timer for measuring a time elapsed of said alarm system receiving said warning signal, said transmitter transmitting said cellular emergency signal when said time elapsed is greater than a threshold time. 5. The child safety seat and temperature warning system according to claim 4, further including an override switch being mounted in said vehicle and being electrically coupled to said alarm system, said transmitter transmitting said cellular emergency signal when said override switch is actuated. 6. The child safety seat and temperature warning system according to claim 2, further including an override switch being mounted in said vehicle and being electrically coupled to said alarm system, said transmitter transmitting said cellular emergency signal when said override switch is actuated. 7. The child safety seat and temperature warning system according to claim 1, wherein said warning signal is sent when said heat sensor detects an unsuitable temperature being less than a lower threshold temperature. 8. The child safety seat and temperature warning system according to claim 1, wherein said call box is positioned on said dashboard of said vehicle. 9. 
The child safety seat and temperature warning system according to claim 2, wherein said call box is distinct from and extrinsic to said vehicle alarm system. 10. The child safety seat and temperature warning system of claim 1, further comprising said processor activating said call box to initiate said telecommunication a predetermined length of time after said alarm system receives said warning signal. 11. A child safety seat and temperature warning system, said system comprising: a child restraint including a seat and a backrest being attached to and extending upwardly from said seat; a processor being mounted in said child restraint; a weight sensor being mounted within said seat and detecting when a weight is positioned on said seat; a heat sensor being mounted in said backrest of said child restraint and detecting an ambient temperature adjacent to said child restraint, said weight sensor and said heat sensor being electrically coupled to said processor; a vehicle including a dashboard and an alarm system, said processor being in communication with said alarm system, said alarm system including: a transmitter configured to transmit a cellular emergency signal; a global positioning system providing a location to said transmitter such that the location is transmitted by said transmitter; said vehicle alarm system emitting sound and flashing light when said vehicle alarm system receives a warning signal, said warning signal being sent by said heat sensor to said processor when said weight sensor detects a weight and said heat sensor detects an unsuitable temperature being greater than an upper threshold temperature or less than a lower threshold temperature; said alarm system including a timer for measuring a time elapsed of said alarm system receiving said warning signal; said transmitter transmitting said cellular emergency signal when said time elapsed is greater than a threshold time; an override switch being mounted in said vehicle and being electrically coupled to said alarm system, said transmitter transmitting said cellular emergency signal when said override switch is actuated; a cellular remote device; and a call box, said call box being positioned on said dashboard of said vehicle, said call box being operationally coupled to said processor, said call box being programmable to initiate a telecommunication to said cellular remote device, said telecommunication including transmitting a temperature within said vehicle as detected by said heat sensor, said cellular remote device displaying the temperature within said vehicle as detected by said heat sensor upon said cellular remote device receiving said telecommunication, wherein said cellular remote device is configured to display the temperature within said vehicle as detected by said heat sensor without said telecommunication being responded to by a user, said call box being distinct from and extrinsic to said vehicle alarm system, said processor activating said call box to initiate said telecommunication a predetermined length of time after said alarm system receives said warning signal.
A child safety seat and temperature warning system includes a child restraint including a seat and a backrest that is attached to and extends upwardly from the seat. A processor is mounted in the child restraint. A weight sensor is mounted within the seat and detects when a weight is positioned on the seat. A heat sensor is mounted in the child restraint and detects an ambient temperature adjacent to the child restraint. The weight sensor and the heat sensor are electrically coupled to the processor. A vehicle includes an alarm system that is in communication with the processor. The vehicle alarm system is turned on when the vehicle alarm system receives a warning signal. The warning signal is sent by the heat sensor to the processor when the weight sensor detects a weight and the heat sensor detects an unsuitable temperature which is greater than an upper threshold temperature.1. A child safety seat and temperature warning system, said system comprising: a child restraint including a seat and a backrest being attached to and extending upwardly from said seat; a processor being mounted in said child restraint; a weight sensor being mounted within said seat and detecting when a weight is positioned on said seat; a heat sensor being mounted in said backrest of said child restraint and detecting an ambient temperature adjacent to said child restraint, said weight sensor and said heat sensor being operationally coupled to said processor; a vehicle including a dashboard and an alarm system, said processor being in communication with said alarm system; said vehicle alarm system emitting sound and flashing light when said vehicle alarm system receives a warning signal, said warning signal being sent by said heat sensor to said processor when said weight sensor detects a weight and said heat sensor detects an unsuitable temperature being greater than an upper threshold temperature; a cellular remote device; and a call box, said call box being operationally coupled to said processor, said call box being programmable to initiate a telecommunication to said cellular remote device, said telecommunication including transmitting a temperature within said vehicle as detected by said heat sensor, said cellular remote device displaying the temperature within said vehicle as detected by said heat sensor upon said cellular remote device receiving said telecommunication, wherein said cellular remote device is configured to display the temperature within said vehicle as detected by said heat sensor without said telecommunication being responded to by a user. 2. The child safety seat and temperature warning system according to claim 1, wherein: said alarm system includes: a transmitter configured to transmit a cellular emergency signal; a global positioning system providing a location to said transmitter such that the location is transmitted by said transmitter. 3. The child safety seat and temperature warning system according to claim 2, wherein said transmitter transmits said cellular emergency signal when said warning signal is received by said alarm system. 4. The child safety seat and temperature warning system according to claim 2, wherein said alarm system includes a timer for measuring a time elapsed of said alarm system receiving said warning signal, said transmitter transmitting said cellular emergency signal when said time elapsed is greater than a threshold time. 5. 
The child safety seat and temperature warning system according to claim 4, further including an override switch being mounted in said vehicle and being electrically coupled to said alarm system, said transmitter transmitting said cellular emergency signal when said override switch is actuated. 6. The child safety seat and temperature warning system according to claim 2, further including an override switch being mounted in said vehicle and being electrically coupled to said alarm system, said transmitter transmitting said cellular emergency signal when said override switch is actuated. 7. The child safety seat and temperature warning system according to claim 1, wherein said warning signal is sent when said heat sensor detects an unsuitable temperature being less than a lower threshold temperature. 8. The child safety seat and temperature warning system according to claim 1, wherein said call box is positioned on said dashboard of said vehicle. 9. The child safety seat and temperature warning system according to claim 2, wherein said call box is distinct from and extrinsic to said vehicle alarm system. 10. The child safety seat and temperature warning system of claim 1, further comprising said processor activating said call box to initiate said telecommunication a predetermined length of time after said alarm system receives said warning signal. 11. A child safety seat and temperature warning system, said system comprising: a child restraint including a seat and a backrest being attached to and extending upwardly from said seat; a processor being mounted in said child restraint; a weight sensor being mounted within said seat and detecting when a weight is positioned on said seat; a heat sensor being mounted in said backrest of said child restraint and detecting an ambient temperature adjacent to said child restraint, said weight sensor and said heat sensor being electrically coupled to said processor; a vehicle including a dashboard and an alarm system, said processor being in communication with said alarm system, said alarm system including: a transmitter configured to transmit a cellular emergency signal; a global positioning system providing a location to said transmitter such that the location is transmitted by said transmitter; said vehicle alarm system emitting sound and flashing light when said vehicle alarm system receives a warning signal, said warning signal being sent by said heat sensor to said processor when said weight sensor detects a weight and said heat sensor detects an unsuitable temperature being greater than an upper threshold temperature or less than a lower threshold temperature; said alarm system including a timer for measuring a time elapsed of said alarm system receiving said warning signal; said transmitter transmitting said cellular emergency signal when said time elapsed is greater than a threshold time; an override switch being mounted in said vehicle and being electrically coupled to said alarm system, said transmitter transmitting said cellular emergency signal when said override switch is actuated; a cellular remote device; and a call box, said call box being positioned on said dashboard of said vehicle, said call box being operationally coupled to said processor, said call box being programmable to initiate a telecommunication to said cellular remote device, said telecommunication including transmitting a temperature within said vehicle as detected by said heat sensor, said cellular remote device displaying the temperature within said vehicle as detected by said heat sensor upon said cellular remote device receiving said telecommunication, wherein said cellular remote device is configured to display the temperature within said vehicle as detected by said heat sensor without said telecommunication being responded to by a user, said call box being distinct from and extrinsic to said vehicle alarm system, said processor activating said call box to initiate said telecommunication a predetermined length of time after said alarm system receives said warning signal.
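The warning and escalation logic of claims 1 through 6 reduces to two predicates: occupancy combined with an out-of-band temperature triggers the warning, and either an elapsed-time threshold or the override switch triggers the cellular emergency signal. The Python sketch below assumes illustrative thresholds (a 2 kg occupancy weight, a 5-35 degC band, 120 s escalation); none of these numbers appear in the claims.

```python
# Sketch of the claimed warning/escalation conditions with assumed values.

def warning_active(weight_kg: float, temp_c: float,
                   upper_c: float = 35.0, lower_c: float = 5.0,
                   min_weight_kg: float = 2.0) -> bool:
    # Claim 1 / claim 7: seat occupied (weight sensor) AND temperature
    # above the upper threshold or below the lower threshold.
    occupied = weight_kg >= min_weight_kg
    unsuitable = temp_c > upper_c or temp_c < lower_c
    return occupied and unsuitable


def escalate(elapsed_s: float, override_pressed: bool,
             threshold_s: float = 120.0) -> bool:
    # Claim 4: emergency signal once the warning has persisted past the
    # threshold time; claims 5-6: the override switch triggers it directly.
    return override_pressed or elapsed_s > threshold_s


print(warning_active(8.0, 41.0))        # True: occupied, too hot
print(escalate(45.0, override_pressed=False))   # False: not yet escalated
print(escalate(45.0, override_pressed=True))    # True: manual override
```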
2,600
10,632
10,632
15,054,817
2,623
An apparatus and method for providing a screen mirroring service. The apparatus includes a communication unit for connecting to an external device; and a control unit for generating a plurality of screen data, transmitting one of the plurality of screen data to the external device, and controlling the screen data being transmitted to the external device according to a control signal received from the external device.
1. An electronic device for providing a screen mirroring service, the device comprising: a communication unit configured to connect to an external device; and a control unit configured to generate a plurality of screen data, transmit one of the plurality of screen data to the external device, and control the screen data being transmitted to the external device according to a control signal received from the external device. 2. The electronic device of claim 1, further comprising a display unit displaying another one of the plurality of screen data. 3. The electronic device of claim 2, wherein the plurality of screen data are screen data each corresponding to different respective functions executed by the electronic device. 4. The electronic device of claim 2, wherein the plurality of screen data are different screen data each corresponding to execution of a same function in the electronic device. 5. The electronic device of claim 1, wherein the plurality of screen data comprises a first screen data and a second screen data. 6. The electronic device of claim 5, wherein the first screen data and the second screen data have different resolutions. 7. The electronic device of claim 5, wherein the first screen data has image pixel coordinates of a first range, and the second screen data has image pixel coordinates of a second range, and at least a part of the image pixel coordinates of the second range is the same as at least a part of the image pixel coordinates of the first range. 8. The electronic device of claim 5, wherein the first screen data is related to a first application program, and the second screen data is related to a second application program. 9. The electronic device of claim 5, wherein the first screen data is related to a first operation of a first application program, and the second screen data is related to a second operation of the first application program. 10. The electronic device of claim 5, wherein the first screen data comprises media data provided through a first application program, and the second screen data comprises a user interface of the first application program. 11. A method for providing a screen mirroring service in an electronic device, the method comprising: connecting to an external device; generating a plurality of screen data; transmitting one of the plurality of screen data to the external device; and controlling the screen data being transmitted to the external device according to a control signal received from the external device. 12. The method of claim 11, further comprising displaying another one of the plurality of screen data. 13. The method of claim 12, wherein the plurality of screen data are screen data each corresponding to different respective functions executed by the electronic device. 14. The method of claim 12, wherein the plurality of screen data are different screen data each corresponding to execution of a same function in the electronic device. 15. The method of claim 11, wherein the plurality of screen data comprises a first screen data and a second screen data. 16. The method of claim 15, wherein the first screen data and the second screen data have different resolutions. 17. 
The method of claim 15, wherein the first screen data has image pixel coordinates of a first range, and the second screen data has image pixel coordinates of a second range, and at least a part of the image pixel coordinates of the second range is the same as at least a part of the image pixel coordinates of the first range. 18. The method of claim 15, wherein the first screen data is related to a first application program, and the second screen data is related to a second application program. 19. The method of claim 15, wherein the first screen data is related to a first operation of a first application program, and the second screen data is related to a second operation of the first application program. 20. The method of claim 15, wherein the first screen data comprises media data provided through a first application program, and the second screen data comprises a user interface of the first application program.
An apparatus and method for providing a screen mirroring service. The apparatus includes a communication unit for connecting to an external device; and a control unit for generating a plurality of screen data, transmitting one of the plurality of screen data to the external device, and controlling the screen data being transmitted to the external device according to a control signal received from the external device.1. An electronic device for providing a screen mirroring service, the device comprising: a communication unit configured to connect to an external device; and a control unit configured to generate a plurality of screen data, transmit one of the plurality of screen data to the external device, and control the screen data being transmitted to the external device according to a control signal received from the external device. 2. The electronic device of claim 1, further comprising a display unit displaying another one of the plurality of screen data. 3. The electronic device of claim 2, wherein the plurality of screen data are screen data each corresponding to different respective functions executed by the electronic device. 4. The electronic device of claim 2, wherein the plurality of screen data are different screen data each corresponding to execution of a same function in the electronic device. 5. The electronic device of claim 1, wherein the plurality of screen data comprises a first screen data and a second screen data. 6. The electronic device of claim 5, wherein the first screen data and the second screen data have different resolutions. 7. The electronic device of claim 5, wherein the first screen data has image pixel coordinates of a first range, and the second screen data has image pixel coordinates of a second range, and at least a part of the image pixel coordinates of the second range is the same as at least a part of the image pixel coordinates of the first range. 8. The electronic device of claim 5, wherein the first screen data is related to a first application program, and the second screen data is related to a second application program. 9. The electronic device of claim 5, wherein the first screen data is related to a first operation of a first application program, and the second screen data is related to a second operation of the first application program. 10. The electronic device of claim 5, wherein the first screen data comprises media data provided through a first application program, and the second screen data comprises a user interface of the first application program. 11. A method for providing a screen mirroring service in an electronic device, the method comprising: connecting to an external device; generating a plurality of screen data; transmitting one of the plurality of screen data to the external device; and controlling the screen data being transmitted to the external device according to a control signal received from the external device. 12. The method of claim 11, further comprising displaying another one of the plurality of screen data. 13. The method of claim 12, wherein the plurality of screen data are screen data each corresponding to different respective functions executed by the electronic device. 14. The method of claim 12, wherein the plurality of screen data are different screen data each corresponding to execution of a same function in the electronic device. 15. The method of claim 11, wherein the plurality of screen data comprises a first screen data and a second screen data. 16. 
The method of claim 15, wherein the first screen data and the second screen data have different resolutions. 17. The method of claim 15, wherein the first screen data has image pixel coordinates of a first range, and the second screen data has image pixel coordinates of a second range, and at least a part of the image pixel coordinates of the second range is the same as at least a part of the image pixel coordinates of the first range. 18. The method of claim 15, wherein the first screen data is related to a first application program, and the second screen data is related to a second application program. 19. The method of claim 15, wherein the first screen data is related to a first operation of a first application program, and the second screen data is related to a second operation of the first application program. 20. The method of claim 15, wherein the first screen data comprises media data provided through a first application program, and the second screen data comprises a user interface of the first application program.
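Claims 1 through 2 and 11 through 12 describe a device that maintains several screen buffers, displays one locally, streams another to the external device, and switches the streamed buffer when a control signal arrives. A minimal sketch of that selection logic follows; the buffer keys, payload bytes, and control-signal format are placeholders, not the claimed protocol.

```python
# Sketch of the dual-buffer mirroring control described in the claims.
# Buffer names ("media", "ui") and contents are illustrative assumptions.

class MirroringController:
    def __init__(self, screens: dict):
        self.screens = screens            # plurality of screen data
        self.local_key = "ui"             # shown on the device's own display
        self.remote_key = "media"         # transmitted to the external device

    def frame_for_external_device(self) -> bytes:
        # One of the plurality of screen data is transmitted (claim 1).
        return self.screens[self.remote_key]

    def handle_control_signal(self, requested_key: str) -> None:
        # The external device's control signal selects which screen data
        # it receives (claims 1 and 11).
        if requested_key in self.screens:
            self.remote_key = requested_key


ctrl = MirroringController({"media": b"<video frame>", "ui": b"<ui frame>"})
ctrl.handle_control_signal("ui")
print(ctrl.frame_for_external_device())  # b'<ui frame>'
```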
2,600
10,633
10,633
15,788,630
2,661
A method, apparatus, system, and computer program product provide the ability to extract level information and reference grid information from point cloud data. Point cloud data is obtained and organized into a three-dimensional structure of voxels. Potential boundary points are filtered from the boundary cells. Level information is extracted from a Z-axis histogram of the voxels positioned along the Z-axis of the three-dimensional voxel structure and further refined. Reference grid information is extracted from an X-axis histogram of the voxels positioned along the X-axis of the three-dimensional voxel structure and a Y-axis histogram of the voxels positioned along the Y-axis of the three-dimensional voxel structure and further refined.
1. A computer-implemented method for extracting floor plan information for a building interior from point cloud data, comprising: (a) obtaining point cloud data, of the building interior, comprising laser scanning points of a building; (b) organizing the point cloud data into a three-dimensional structure of voxels, the three-dimensional structure consisting of an X-axis, Y-axis, and Z-axis, wherein each voxel represents a value on a regular grid in three-dimensional space; (c) accumulating non-empty voxels along the Z-axis to generate a Z-axis histogram; (d) extracting level information from the Z-axis histogram, wherein peaks of the Z-axis histogram identify a location of floors and ceilings of the building interior; (e) segmenting the building interior by level and processing each level separately; (f) for each level: (1) finding one or more straight walls by examining, for each non-empty voxel, neighboring voxels, and retaining voxels that form a vertical plane; (2) projecting remaining voxels onto a 2D XY plane; (3) accumulating non-empty voxels in each row of the 2D XY plane to form a Y-histogram along the Y-axis; (4) accumulating non-empty voxels in each column of the 2D XY plane to form an X-histogram along the X-axis; and (5) generating a reference grid of a floor plan based on peaks from the X-histogram and the Y-histogram. 2. The computer-implemented method of claim 1 further comprising: estimating a principal axis direction of the point cloud data; and transforming the point cloud data by adjusting the principal axis direction, wherein after transforming, the building stands upright. 3. The computer-implemented method of claim 1 further comprising refining the extracted level information by plane sweeping to detect a density variation inside of a voxel dimension range. 4. The computer-implemented method of claim 1 further comprising: for each level, removing points representing horizontal objects by removing points that cover an area greater than a region threshold. 5. The computer-implemented method of claim 4 further comprising: re-computing the Z-axis histogram after the points representing horizontal objects have been removed; and filtering out points corresponding to histogram values in the Z-axis histogram above a maximum value. 6. The computer-implemented method of claim 1 wherein a point xi in the Z-axis histogram is identified as a peak if: i) xi is greater than or equal to the mean m of neighboring points around xi; ii) an absolute value of xi minus m is greater than or equal to the standard deviation s of the neighboring points around the peak xi multiplied by a value h predefined by the user; iii) s is greater than or equal to a standard deviation threshold τs; and iv) xi is greater than or equal to an area threshold a. 7. The computer-implemented method of claim 1 wherein: the X-axis histogram comprises voxels along the X-axis that contain at least one laser scanning point and the Y-axis histogram comprises voxels along the Y-axis that contain at least one laser scanning point; and a peak in the X-axis histogram or Y-axis histogram identifies a rough wall location on the reference grid. 8. The computer-implemented method of claim 1 further comprising refining the extracted reference grid information by line sweeping. 9. 
A system for extracting floor plan information for a building interior from point cloud data in a computer system comprising: (a) a computer having a memory; and (b) an application executing on the computer, wherein the application: (1) obtains point cloud data, of the building interior, comprising laser scanning points of a building; (2) organizes the point cloud data into a three-dimensional structure of voxels, the three-dimensional structure consisting of an X-axis, Y-axis, and Z-axis, wherein each voxel represents a value on a regular grid in three-dimensional space; (3) accumulates non-empty voxels along the Z-axis to generate a Z-axis histogram; (4) extracts level information from the Z-axis histogram, wherein peaks of the Z-axis histogram identify a location of floors and ceilings of the building interior; (5) segments the building interior by level and processes each level separately; (6) for each level: (i) finds one or more straight walls by examining, for each non-empty voxel, neighboring voxels, and retaining voxels that form a vertical plane; (ii) projects remaining voxels onto a 2D XY plane; (iii) accumulates non-empty voxels in each row of the 2D XY plane to form a Y-histogram along the Y-axis; (iv) accumulates non-empty voxels in each column of the 2D XY plane to form an X-histogram along the X-axis; and (v) generates a reference grid of a floor plan based on peaks from the X-histogram and the Y-histogram. 10. The system of claim 9 wherein the application further: estimates a principal axis direction of the point cloud data; and transforms the point cloud data by adjusting the principal axis direction, wherein after transforming, the building stands upright. 11. The system of claim 9 wherein the application further refines the extracted level information by plane sweeping to detect a density variation inside of a voxel dimension range. 12. The system of claim 9 wherein the application further: for each level, removes points representing horizontal objects by removing points that cover an area greater than a region threshold. 13. The system of claim 12 wherein the application further: re-computes the Z-axis histogram after the points representing horizontal objects have been removed; and filters out points corresponding to histogram values in the Z-axis histogram above a maximum value. 14. The system of claim 9 wherein a point xi in the Z-axis histogram is identified as a peak if: i) xi is greater than or equal to the mean m of neighboring points around xi; ii) an absolute value of xi minus m is greater than or equal to the standard deviation s of the neighboring points around the peak xi multiplied by a value h predefined by the user; iii) s is greater than or equal to a standard deviation threshold τs; and iv) xi is greater than or equal to an area threshold a. 15. The system of claim 9 wherein: the X-axis histogram comprises voxels along the X-axis that contain at least one laser scanning point and the Y-axis histogram comprises voxels along the Y-axis that contain at least one laser scanning point; and a peak in the X-axis histogram or Y-axis histogram identifies a rough wall location on the reference grid. 16. The system of claim 9 wherein the application further refines the extracted reference grid information by line sweeping.
A method, apparatus, system, and computer program product provide the ability to extract level information and reference grid information from point cloud data. Point cloud data is obtained and organized into a three-dimensional structure of voxels. Potential boundary points are filtered from the boundary cells. Level information is extracted from a Z-axis histogram of the voxels positioned along the Z-axis of the three-dimensional voxel structure and further refined. Reference grid information is extracted from an X-axis histogram of the voxels positioned along the X-axis of the three-dimensional voxel structure and a Y-axis histogram of the voxels positioned along the Y-axis of the three-dimensional voxel structure and further refined.1. A computer-implemented method for extracting floor plan information for a building interior from point cloud data, comprising: (a) obtaining point cloud data, of the building interior, comprising laser scanning points of a building; (b) organizing the point cloud data into a three-dimensional structure of voxels, the three-dimensional structure consisting of an X-axis, Y-axis, and Z-axis, wherein each voxel represents a value on a regular grid in three-dimensional space; (c) accumulating non-empty voxels along the Z-axis to generate a Z-axis histogram; (d) extracting level information from the Z-axis histogram, wherein peaks of the Z-axis histogram identify a location of floors and ceilings of the building interior; (e) segmenting the building interior by level and processing each level separately; (f) for each level: (1) finding one or more straight walls by examining, for each non-empty voxel, neighboring voxels, and retaining voxels that form a vertical plane; (2) projecting remaining voxels onto a 2D XY plane; (3) accumulating non-empty voxels in each row of the 2D XY plane to form a Y-histogram along the Y-axis; (4) accumulating non-empty voxels in each column of the 2D XY plane to form an X-histogram along the X-axis; and (5) generating a reference grid of a floor plan based on peaks from the X-histogram and the Y-histogram. 2. The computer-implemented method of claim 1 further comprising: estimating a principal axis direction of the point cloud data; and transforming the point cloud data by adjusting the principal axis direction, wherein after transforming, the building stands upright. 3. The computer-implemented method of claim 1 further comprising refining the extracted level information by plane sweeping to detect a density variation inside of a voxel dimension range. 4. The computer-implemented method of claim 1 further comprising: for each level, removing points representing horizontal objects by removing points that cover an area greater than a region threshold. 5. The computer-implemented method of claim 4 further comprising: re-computing the Z-axis histogram after the points representing horizontal objects have been removed; and filtering out points corresponding to histogram values in the Z-axis histogram above a maximum value. 6. The computer-implemented method of claim 1 wherein a point xi in the Z-axis histogram is identified as a peak if: i) xi is greater than or equal to the mean m of neighboring points around xi; ii) an absolute value of xi minus m is greater than or equal to the standard deviation s of the neighboring points around the peak xi multiplied by a value h predefined by the user; iii) s is greater than or equal to a standard deviation threshold τs; and iv) xi is greater than or equal to an area threshold a. 7. 
The computer-implemented method of claim 1 wherein: the X-axis histogram comprises voxels along the X-axis that contain at least one laser scanning point and the Y-axis histogram comprises voxels along the Y-axis that contain at least one laser scanning point; and a peak in the X-axis histogram or Y-axis histogram identifies a rough wall location on the reference grid. 8. The computer-implemented method of claim 1 further comprising refining the extracted reference grid information by line sweeping. 9. A system for extracting floor plan information for a building interior from point cloud data in a computer system comprising: (a) a computer having a memory; and (b) an application executing on the computer, wherein the application: (1) obtains point cloud data, of the building interior, comprising laser scanning points of a building; (2) organizes the point cloud data into a three-dimensional structure of voxels, the three-dimensional structure consisting of an X-axis, Y-axis, and Z-axis, wherein each voxel represents a value on a regular grid in three-dimensional space; (3) accumulates non-empty voxels along the Z-axis to generate a Z-axis histogram; (4) extracts level information from the Z-axis histogram, wherein peaks of the Z-axis histogram identify a location of floors and ceilings of the building interior; (5) segments the building interior by level and processes each level separately; (6) for each level: (i) finds one or more straight walls by examining, for each non-empty voxel, neighboring voxels, and retaining voxels that form a vertical plane; (ii) projects remaining voxels onto a 2D XY plane; (iii) accumulates non-empty voxels in each row of the 2D XY plane to form a Y-histogram along the Y-axis; (iv) accumulates non-empty voxels in each column of the 2D XY plane to form an X-histogram along the X-axis; and (v) generates a reference grid of a floor plan based on peaks from the X-histogram and the Y-histogram. 10. The system of claim 9 wherein the application further: estimates a principal axis direction of the point cloud data; and transforms the point cloud data by adjusting the principal axis direction, wherein after transforming, the building stands upright. 11. The system of claim 9 wherein the application further refines the extracted level information by plane sweeping to detect a density variation inside of a voxel dimension range. 12. The system of claim 9 wherein the application further: for each level, removes points representing horizontal objects by removing points that cover an area greater than a region threshold. 13. The system of claim 12 wherein the application further: re-computes the Z-axis histogram after the points representing horizontal objects have been removed; and filters out points corresponding to histogram values in the Z-axis histogram above a maximum value. 14. The system of claim 9 wherein a point xi in the Z-axis histogram is identified as a peak if: i) xi is greater than or equal to the mean m of neighboring points around xi; ii) an absolute value of xi minus m is greater than or equal to the standard deviation s of the neighboring points around the peak xi multiplied by a value h predefined by the user; iii) s is greater than or equal to a standard deviation threshold τs; and iv) xi is greater than or equal to an area threshold a. 15. 
The system of claim 9 wherein: the X-axis histogram comprises voxels along the X-axis that contain at least one laser scanning point and the Y-axis histogram comprises voxels along the Y-axis that contain at least one laser scanning point; and a peak in the X-axis histogram or Y-axis histogram identifies a rough wall location on the reference grid. 16. The system of claim 9 wherein the application further refines the extracted reference grid information by line sweeping.
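The level-extraction pipeline claimed above (voxelize the cloud, accumulate non-empty voxels into a Z-axis histogram, then test each bin against its neighborhood per claims 6 and 14) is concrete enough to sketch in code. The Python below is a minimal illustration under assumptions, not the patented implementation: the function names, the neighborhood half-width w, and the default values for h, the threshold τs, and the area threshold a are all invented for the example.

```python
import numpy as np

def z_histogram(points, voxel_size=0.1):
    """Voxelize a point cloud (N x 3 array) and count non-empty voxels
    per Z slice: the accumulation step of claims 3 and 9(3)."""
    occupied = np.unique(np.floor(points / voxel_size).astype(int), axis=0)
    z = occupied[:, 2] - occupied[:, 2].min()
    return np.bincount(z)

def is_peak(hist, i, h=2.0, tau_s=1.0, a=50, w=5):
    """Peak test of claims 6/14: x_i is a peak if x_i >= the mean m of its
    neighbors, |x_i - m| >= h * s, s >= tau_s, and x_i >= a. The defaults
    and the window half-width w are illustrative assumptions."""
    lo, hi = max(0, i - w), min(len(hist), i + w + 1)
    neighbors = np.delete(hist[lo:hi].astype(float), i - lo)
    m, s = neighbors.mean(), neighbors.std()
    return bool(hist[i] >= m and abs(hist[i] - m) >= h * s
                and s >= tau_s and hist[i] >= a)

pts = np.random.rand(10000, 3) * [10.0, 8.0, 3.0]  # synthetic stand-in cloud
hist = z_histogram(pts)
levels = [i for i in range(len(hist)) if is_peak(hist, i)]
```

On a real scan the surviving peaks sit at floor and ceiling heights, which is the segmentation point used by claims 1(d)-(e) and 9(4)-(5); on the uniform synthetic cloud above the test will typically find none.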
2,600
10,634
10,634
14,696,444
2,652
A method of voice call diversion includes detecting an incoming voice call communication from a calling device, identifying an alternate communication option, providing the alternate communication option to the calling device, detecting a selection of the alternate communication option from the calling device, and diverting the incoming voice call communication so as to utilize the selected alternate communication option.
1. A method of voice call diversion, comprising: detecting an incoming voice call communication from a calling device; identifying an alternate communication option; providing the alternate communication option to the calling device; detecting a selection of the alternate communication option from the calling device; and diverting the incoming voice call communication so as to utilize the selected alternate communication option. 2. The method of claim 1, wherein detecting the incoming voice call communication comprises placing the incoming voice call communication in a queue. 3. The method of claim 1, wherein identifying the alternate communication option comprises identifying one or more non-voice communication options. 4. The method of claim 3, wherein providing the alternate communication option to the calling device comprises sending a notification to the calling device of the one or more non-voice communication options. 5. The method of claim 4, wherein detecting a selection of the alternate communication option comprises detecting a selection from the one or more non-voice communication options. 6. The method of claim 1, wherein diverting the incoming voice call so as to utilize the selected alternate communication option comprises converting a communication format of the incoming voice call communication to a format of the selected alternate communication option. 7. The method of claim 6, wherein the alternate communication option is a text message communication option. 8. A computer program product comprising non-transitory computer program instructions that when executed by a processor cause the processor to perform the method according to claim 1. 9. A system for voice call diversion, comprising: a communication system for detecting an incoming voice call communication from a calling device; and a call diversion module configured to: identify an alternate communication option; provide the alternate communication option to the calling device; detect a selection of the alternate communication option; and divert the incoming voice call communication so as to utilize the selected alternate communication option. 10. The system of claim 9, wherein upon detecting the incoming voice call communication, the system is configured to place the incoming voice call communication in a queue. 11. The system of claim 9, wherein the call diversion module is configured to identify one or more alternate non-voice communication options. 12. The system of claim 11, wherein the call diversion module is configured to provide the alternate communication option to the calling device by sending a notification to the calling device of the one or more non-voice communication options. 13. The system of claim 12, wherein the call diversion module is configured to detect a selection from the one or more alternate non-voice communication options. 14. The system of claim 9, wherein the call diversion module is configured to divert the incoming voice call so as to utilize the selected alternate communication option by converting a communication format of the incoming voice call communication to a format of the selected alternate communication option. 15. The system of claim 14, wherein the alternate communication option is a text message communication option.
A method of voice call diversion includes detecting an incoming voice call communication from a calling device, identifying an alternate communication option, providing the alternate communication option to the calling device, detecting a selection of the alternate communication option from the calling device, and diverting the incoming voice call communication so as to utilize the selected alternate communication option.1. A method of voice call diversion, comprising: detecting an incoming voice call communication from a calling device; identifying an alternate communication option; providing the alternate communication option to the calling device; detecting a selection of the alternate communication option from the calling device; and diverting the incoming voice call communication so as to utilize the selected alternate communication option. 2. The method of claim 1, wherein detecting the incoming voice call communication comprises placing the incoming voice call communication in a queue. 3. The method of claim 1, wherein identifying the alternate communication option comprises identifying one or more non-voice communication options. 4. The method of claim 3, wherein providing the alternate communication option to the calling device comprises sending a notification to the calling device of the one or more non-voice communication options. 5. The method of claim 4, wherein detecting a selection of the alternate communication option comprises detecting a selection from the one or more non-voice communication options. 6. The method of claim 1, wherein diverting the incoming voice call so as to utilize the selected alternate communication option comprises converting a communication format of the incoming voice call communication to a format of the selected alternate communication option. 7. The method of claim 6, wherein the alternate communication option is a text message communication option. 8. A computer program product comprising non-transitory computer program instructions that when executed by a processor cause the processor to perform the method according to claim 1. 9. A system for voice call diversion, comprising: a communication system for detecting an incoming voice call communication from a calling device; and a call diversion module configured to: identify an alternate communication option; provide the alternate communication option to the calling device; detect a selection of the alternate communication option; and divert the incoming voice call communication so as to utilize the selected alternate communication option. 10. The system of claim 9, wherein upon detecting the incoming voice call communication, the system is configured to place the incoming voice call communication in a queue. 11. The system of claim 9, wherein the call diversion module is configured to identify one or more alternate non-voice communication options. 12. The system of claim 11, wherein the call diversion module is configured to provide the alternate communication option to the calling device by sending a notification to the calling device of the one or more non-voice communication options. 13. The system of claim 12, wherein the call diversion module is configured to detect a selection from the one or more alternate non-voice communication options. 14. 
The system of claim 9, wherein the call diversion module is configured to divert the incoming voice call so as to utilize the selected alternate communication option by converting a communication format of the incoming voice call communication to a format of the selected alternate communication option. 15. The system of claim 14, wherein the alternate communication option is a text message communication option.
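The claim set above describes a simple control flow: queue the incoming call, offer non-voice options, and divert on selection. A toy sketch of that flow follows; the class and method names, the option list, and the print-based notification are invented for illustration, since the patent names no API.

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Call:
    caller_id: str
    offered: list = field(default_factory=list)

class CallDiverter:
    """Minimal sketch of the claimed flow; nothing here is patented code."""
    def __init__(self):
        self.queue = Queue()

    def on_incoming(self, call):
        self.queue.put(call)                         # claim 2: place call in a queue
        call.offered = ["text_message", "callback"]  # claim 3: non-voice options
        self.notify(call, call.offered)              # claim 4: notify the caller

    def notify(self, call, options):
        print(f"to {call.caller_id}: reply with one of {options}")

    def on_selection(self, call, choice):            # claim 5: detect selection
        if choice == "text_message":                 # claims 6-7: convert format
            print(f"diverting {call.caller_id} to a text-message session")

d = CallDiverter()
c = Call("alice")
d.on_incoming(c)
d.on_selection(c, "text_message")
```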
2,600
10,635
10,635
15,888,112
2,611
Embodiments of the present disclosure relate to techniques for providing an augmented reality experience for virtual desktops. In particular, certain embodiments relate to acquiring, by a computing device, one or more images from a client device and determining, by the computing device, that the one or more images contain an artifact to be augmented. Further, certain embodiments involve acquiring, by the computing device, a screen buffer from a virtual desktop or application running on it and applying, by the computing device, a geometric transformation to the screen buffer. Further, certain embodiments relate to augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images. Further, certain embodiments relate to providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience.
1. A method for providing an augmented reality experience, comprising: acquiring, by a computing device, one or more images from a client device separate from the computing device; determining, by the computing device, that the one or more images contain an artifact to be augmented; acquiring, by the computing device, a screen buffer from an application located on a host machine separate from the computing device; applying, by the computing device, a geometric transformation to the screen buffer; augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images; providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience. 2. The method of claim 1, further comprising: prior to determining that the one or more images contain an artifact to be augmented, pre-processing, by the computing device, the one or more images. 3. The method of claim 2, wherein the pre-processing of the one or more images comprises at least one of: resizing; noise removal; and smoothing. 4. The method of claim 1, wherein the geometric transformation involves the nearest neighbor interpolation technique. 5. The method of claim 1, wherein the determining that the one or more images contain an artifact to be augmented is based on homography. 6. The method of claim 5, wherein the acquiring of one or more images from the client device comprises: acquiring, by the computing device, the one or more images at a specified frame rate and resolution. 7. The method of claim 1, wherein the augmenting of the one or more images further comprises: generating, by the computing device, a mask for the screen buffer; and inserting, by the computing device, the screen buffer onto the artifact using the mask. 8. The method of claim 1, wherein the method is performed with the assistance of a virtualized graphical processing unit (GPU). 9. The method of claim 1, wherein the user of the client device interacts with the augmented reality experience in order to perform at least one of the following actions on a virtual machine: suspending, starting, stopping, taking a snapshot, taking a screenshot, triggering migration, creating templates, and sharing. 10. The method of claim 1, wherein user session recording is used to record both a virtual desktop session screen and the augmented reality experience, either separately or simultaneously. 11. A non-transitory computer readable medium comprising instructions to be executed in a computer system, wherein the instructions when executed in the computer system perform a method for providing an augmented reality experience, the method comprising: acquiring, by a computing device, one or more images from a client device separate from the computing device; determining, by the computing device, that the one or more images contain an artifact to be augmented; acquiring, by the computing device, a screen buffer from an application located on a host machine separate from the computing device; applying, by the computing device, a geometric transformation to the screen buffer; augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images; providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience. 12. 
The non-transitory computer readable medium of claim 11, further comprising: prior to determining that the one or more images contain an artifact to be augmented, pre-processing, by the computing device, the one or more images. 13. The non-transitory computer readable medium of claim 12, wherein the pre-processing of the one or more images comprises at least one of: resizing; noise removal; and smoothing. 14. The non-transitory computer readable medium of claim 11, wherein the geometric transformation involves the nearest neighbor interpolation technique. 15. The non-transitory computer readable medium of claim 11, wherein the determining that the one or more images contain an artifact to be augmented is based on homography. 16. The non-transitory computer readable medium of claim 11, wherein the acquiring of one or more images from the client device comprises: acquiring, by the computing device, the one or more images at a specified frame rate and resolution. 17. The non-transitory computer readable medium of claim 11, wherein the augmenting of the one or more images further comprises: generating, by the computing device, a mask for the screen buffer; and inserting, by the computing device, the screen buffer onto the artifact using the mask. 18. The non-transitory computer readable medium of claim 11, wherein the method is performed with the assistance of a virtualized graphical processing unit (GPU). 19. The non-transitory computer readable medium of claim 11, wherein the user of the client device interacts with the augmented reality experience in order to perform at least one of the following actions on a virtual machine: suspending, starting, stopping, taking a snapshot, taking a screenshot, triggering migration, creating templates, and sharing. 20. The non-transitory computer readable medium of claim 11, wherein user session recording is used to record both a virtual desktop session screen and the augmented reality experience, either separately or simultaneously. 21. A computer system, wherein system software for the computer system is programmed to execute a method for providing an augmented reality experience, the method comprising: acquiring, by a computing device, one or more images from a client device separate from the computing device; determining, by the computing device, that the one or more images contain an artifact to be augmented; acquiring, by the computing device, a screen buffer from an application located on a host machine separate from the computing device; applying, by the computing device, a geometric transformation to the screen buffer; augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images; providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience. 22. The computer system of claim 21, further comprising: prior to determining that the one or more images contain an artifact to be augmented, pre-processing, by the computing device, the one or more images. 23. The computer system of claim 22, wherein the pre-processing of the one or more images comprises at least one of: resizing; noise removal; and smoothing. 24. The computer system of claim 21, wherein the geometric transformation involves the nearest neighbor interpolation technique. 25. 
The computer system of claim 21, wherein the determining that the one or more images contain an artifact to be augmented is based on homography. 26. The computer system of claim 21, wherein the acquiring of one or more images from the client device comprises: acquiring, by the computing device, the one or more images at a specified frame rate and resolution. 27. The computer system of claim 21, wherein the augmenting of the one or more images further comprises: generating, by the computing device, a mask for the screen buffer; and inserting, by the computing device, the screen buffer onto the artifact using the mask. 28. The computer system of claim 21, wherein the method is performed with the assistance of a virtualized graphical processing unit (GPU). 29. The computer system of claim 21, wherein the user of the client device interacts with the augmented reality experience in order to perform at least one of the following actions on a virtual machine: suspending, starting, stopping, taking a snapshot, taking a screenshot, triggering migration, creating templates, and sharing. 30. The computer system of claim 21, wherein user session recording is used to record both a virtual desktop session screen and the augmented reality experience, either separately or simultaneously.
Embodiments of the present disclosure relate to techniques for providing an augmented reality experience for virtual desktops. In particular, certain embodiments relate to acquiring, by a computing device, one or more images from a client device and determining, by the computing device, that the one or more images contain an artifact to be augmented. Further, certain embodiments involve acquiring, by the computing device, a screen buffer from a virtual desktop or application running on it and applying, by the computing device, a geometric transformation to the screen buffer. Further, certain embodiments relate to augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images. Further, certain embodiments relate to providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience.1. A method for providing an augmented reality experience, comprising: acquiring, by a computing device, one or more images from a client device separate from the computing device; determining, by the computing device, that the one or more images contain an artifact to be augmented; acquiring, by the computing device, a screen buffer from an application located on a host machine separate from the computing device; applying, by the computing device, a geometric transformation to the screen buffer; augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images; providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience. 2. The method of claim 1, further comprising: prior to determining that the one or more images contain an artifact to be augmented, pre-processing, by the computing device, the one or more images. 3. The method of claim 2, wherein the pre-processing of the one or more images comprises at least one of: resizing; noise removal; and smoothing. 4. The method of claim 1, wherein the geometric transformation involves the nearest neighbor interpolation technique. 5. The method of claim 1, wherein the determining that the one or more images contain an artifact to be augmented is based on homography. 6. The method of claim 5, wherein the acquiring of one or more images from the client device comprises: acquiring, by the computing device, the one or more images at a specified frame rate and resolution. 7. The method of claim 1, wherein the augmenting of the one or more images further comprises: generating, by the computing device, a mask for the screen buffer; and inserting, by the computing device, the screen buffer onto the artifact using the mask. 8. The method of claim 1, wherein the method is performed with the assistance of a virtualized graphical processing unit (GPU). 9. The method of claim 1, wherein the user of the client device interacts with the augmented reality experience in order to perform at least one of the following actions on a virtual machine: suspending, starting, stopping, taking a snapshot, taking a screenshot, triggering migration, creating templates, and sharing. 10. The method of claim 1, wherein user session recording is used to record both a virtual desktop session screen and the augmented reality experience, either separately or simultaneously. 11. 
A non-transitory computer readable medium comprising instructions to be executed in a computer system, wherein the instructions when executed in the computer system perform a method for providing an augmented reality experience, the method comprising: acquiring, by a computing device, one or more images from a client device separate from the computing device; determining, by the computing device, that the one or more images contain an artifact to be augmented; acquiring, by the computing device, a screen buffer from an application located on a host machine separate from the computing device; applying, by the computing device, a geometric transformation to the screen buffer; augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images; providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience. 12. The non-transitory computer readable medium of claim 11, further comprising: prior to determining that the one or more images contain an artifact to be augmented, pre-processing, by the computing device, the one or more images. 13. The non-transitory computer readable medium of claim 12, wherein the pre-processing of the one or more images comprises at least one of: resizing; noise removal; and smoothing. 14. The non-transitory computer readable medium of claim 11, wherein the geometric transformation involves the nearest neighbor interpolation technique. 15. The non-transitory computer readable medium of claim 11, wherein the determining that the one or more images contain an artifact to be augmented is based on homography. 16. The non-transitory computer readable medium of claim 11, wherein the acquiring of one or more images from the client device comprises: acquiring, by the computing device, the one or more images at a specified frame rate and resolution. 17. The non-transitory computer readable medium of claim 11, wherein the augmenting of the one or more images further comprises: generating, by the computing device, a mask for the screen buffer; and inserting, by the computing device, the screen buffer onto the artifact using the mask. 18. The non-transitory computer readable medium of claim 11, wherein the method is performed with the assistance of a virtualized graphical processing unit (GPU). 19. The non-transitory computer readable medium of claim 11, wherein the user of the client device interacts with the augmented reality experience in order to perform at least one of the following actions on a virtual machine: suspending, starting, stopping, taking a snapshot, taking a screenshot, triggering migration, creating templates, and sharing. 20. The non-transitory computer readable medium of claim 11, wherein user session recording is used to record both a virtual desktop session screen and the augmented reality experience, either separately or simultaneously. 21. 
A computer system, wherein system software for the computer system is programmed to execute a method for providing an augmented reality experience, the method comprising: acquiring, by a computing device, one or more images from a client device separate from the computing device; determining, by the computing device, that the one or more images contain an artifact to be augmented; acquiring, by the computing device, a screen buffer from an application located on a host machine separate from the computing device; applying, by the computing device, a geometric transformation to the screen buffer; augmenting, by the computing device, the one or more images by inserting the screen buffer onto the artifact, resulting in one or more augmented images; providing, by the computing device, the one or more augmented images to the client device in order to provide a user of the client device with the augmented reality experience. 22. The computer system of claim 21, further comprising: prior to determining that the one or more images contain an artifact to be augmented, pre-processing, by the computing device, the one or more images. 23. The computer system of claim 22, wherein the pre-processing of the one or more images comprises at least one of: resizing; noise removal; and smoothing. 24. The computer system of claim 21, wherein the geometric transformation involves the nearest neighbor interpolation technique. 25. The computer system of claim 21, wherein the determining that the one or more images contain an artifact to be augmented is based on homography. 26. The computer system of claim 21, wherein the acquiring of one or more images from the client device comprises: acquiring, by the computing device, the one or more images at a specified frame rate and resolution. 27. The computer system of claim 21, wherein the augmenting of the one or more images further comprises: generating, by the computing device, a mask for the screen buffer; and inserting, by the computing device, the screen buffer onto the artifact using the mask. 28. The computer system of claim 21, wherein the method is performed with the assistance of a virtualized graphical processing unit (GPU). 29. The computer system of claim 21, wherein the user of the client device interacts with the augmented reality experience in order to perform at least one of the following actions on a virtual machine: suspending, starting, stopping, taking a snapshot, taking a screenshot, triggering migration, creating templates, and sharing. 30. The computer system of claim 21, wherein user session recording is used to record both a virtual desktop session screen and the augmented reality experience, either separately or simultaneously.
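Claims 1, 4, 5 and 7 above (and their medium/system counterparts) together outline a standard projective-compositing step: detect the artifact, warp the screen buffer with a geometric transformation, and insert it under a mask. A hedged sketch using OpenCV follows; the `quad` interface (four artifact corners, e.g. recovered by the homography-based detection of claim 5) and the function name are assumptions, not the patented code.

```python
import cv2
import numpy as np

def composite_screen(frame, screen, quad):
    """Warp the screen buffer onto the artifact quad and insert it with a
    mask (claims 1 and 7); nearest-neighbor interpolation mirrors claim 4.
    `quad` is an assumed interface: four (x, y) artifact corners in the
    camera frame, e.g. from homography-based detection (claim 5)."""
    h, w = screen.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(quad))   # geometric transform
    size = (frame.shape[1], frame.shape[0])
    warped = cv2.warpPerspective(screen, H, size, flags=cv2.INTER_NEAREST)
    # Warp an all-white image the same way to get the insertion mask.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, size)
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]                         # masked insertion
    return out
```

In a real pipeline `frame` would be a camera image from the client device and `screen` the remoted desktop's framebuffer; both are plain H x W x 3 arrays here.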
2,600
10,636
10,636
15,869,320
2,649
A system includes a processor configured to determine that a vehicle is in a predefined power-preservation state. The processor is also configured to determine a lowest transmit-power available cellular band for vehicle telematics services, responsive to the power-preservation state. The processor is further configured to disable all bands other than the lowest-power available band and use the lowest-power available band for vehicle communication as long as the power-preservation state persists.
1. A system comprising: a processor configured to: determine that a vehicle is in a predefined power-preservation state; responsive to the power-preservation state, determine a lowest transmit-power available cellular band for vehicle telematics services; disable all bands other than the lowest-power available band; and use the lowest-power available band for vehicle communication as long as the power-preservation state persists. 2. The system of claim 1, wherein the predefined power-preservation state includes the vehicle being parked. 3. The system of claim 1, wherein the vehicle includes an electric powered vehicle and wherein the predefined power-preservation state includes the vehicle being parked but not charging. 4. The system of claim 1, wherein the power-preservation state includes a vehicle electric power resource falling below a predetermined threshold. 5. The system of claim 1, wherein the processor is configured to re-enable disabled bands when the power-preservation state ceases. 6. The system of claim 1, wherein the processor is configured to designate bands as available bands when the bands have transmission capacity to service predefined vehicle needs, and to disable bands lacking transmission capacity. 7. The system of claim 1, wherein the processor is configured to: receive a transmit request over the lowest-power available band, for a file transfer; determine that the file transfer request is suited for processing over a currently disabled band; and disable the lowest-power band and enable the currently disabled band, for the duration of the file transfer, responsive to the determination. 8. The system of claim 7, wherein the determination that the file transfer request is suited for processing over a currently disabled band is based on an expected size of the file transfer request. 9. The system of claim 7, wherein the determination that the file transfer request is suited for processing over a currently disabled band is based on an expected duration of the file transfer request over the currently disabled band compared to the lowest-power available band. 10. A system comprising: a processor configured to: receive a file transfer request over a currently enabled cellular band, at a vehicle in a predefined power-preservation state; determine a characteristic of the request predefining the request for processing over a faster available band than the currently enabled band; and responsive to the determination, enable the faster available band for processing the request and disable the currently enabled band, until handling of the request is completed. 11. The system of claim 10, wherein the predefined power-preservation state includes the vehicle being parked. 12. The system of claim 10, wherein the vehicle includes an electric powered vehicle and wherein the predefined power-preservation state includes the vehicle being parked but not charging. 13. The system of claim 10, wherein the power-preservation state includes a vehicle electric power resource falling below a predetermined threshold. 14. The system of claim 10, wherein the predefined characteristic includes a file size threshold. 15. The system of claim 14, wherein the faster band includes a band for which the processor determines that handling the request would use less aggregate power than handling the request over the currently enabled band. 16. The system of claim 10, wherein the predefined characteristic includes a vehicle update identifier associated with the request. 17. 
The system of claim 16, wherein the faster band includes the fastest available band, regardless of power consumption for handling the request. 18. The system of claim 17, wherein the faster band includes the fastest available band, regardless of power consumption for handling the request, provided that the processor determines that sufficient power reserves remain to complete handling of the request using the fastest available band. 19. The system of claim 10, wherein the predefined characteristic includes the processor determining that aggregate power usage for request handling over the faster available band would be less than aggregate power usage for request handling over the currently enabled band. 20. A computer-implemented method comprising: responsive to detecting a vehicle power-preservation state, disabling all available cellular band connections except for a currently available cellular band connection determined to require the lowest power usage of all the available cellular band connections.
A system includes a processor configured to determine that a vehicle is in a predefined power-preservation state. The processor is also configured to determine a lowest transmit-power available cellular band for vehicle telematics services, responsive to the power-preservation state. The processor is further configured to disable all bands other than the lowest-power available band and use the lowest-power available band for vehicle communication as long as the power-preservation state persists.1. A system comprising: a processor configured to: determine that a vehicle is in a predefined power-preservation state; responsive to the power-preservation state, determine a lowest transmit-power available cellular band for vehicle telematics services; disable all bands other than the lowest-power available band; and use the lowest-power available band for vehicle communication as long as the power-preservation state persists. 2. The system of claim 1, wherein the predefined power-preservation state includes the vehicle being parked. 3. The system of claim 1, wherein the vehicle includes an electric powered vehicle and wherein the predefined power-preservation state includes the vehicle being parked but not charging. 4. The system of claim 1, wherein the power-preservation state includes a vehicle electric power resource falling below a predetermined threshold. 5. The system of claim 1, wherein the processor is configured to re-enable disabled bands when the power-preservation state ceases. 6. The system of claim 1, wherein the processor is configured to designate bands as available bands when the bands have transmission capacity to service predefined vehicle needs, and to disable bands lacking transmission capacity. 7. The system of claim 1, wherein the processor is configured to: receive a transmit request over the lowest-power available band, for a file transfer; determine that the file transfer request is suited for processing over a currently disabled band; and disable the lowest-power band and enable the currently disabled band, for the duration of the file transfer, responsive to the determination. 8. The system of claim 7, wherein the determination that the file transfer request is suited for processing over a currently disabled band is based on an expected size of the file transfer request. 9. The system of claim 7, wherein the determination that the file transfer request is suited for processing over a currently disabled band is based on an expected duration of the file transfer request over the currently disabled band compared to the lowest-power available band. 10. A system comprising: a processor configured to: receive a file transfer request over a currently enabled cellular band, at a vehicle in a predefined power-preservation state; determine a characteristic of the request predefining the request for processing over a faster available band than the currently enabled band; and responsive to the determination, enable the faster available band for processing the request and disable the currently enabled band, until handling of the request is completed. 11. The system of claim 10, wherein the predefined power-preservation state includes the vehicle being parked. 12. The system of claim 10, wherein the vehicle includes an electric powered vehicle and wherein the predefined power-preservation state includes the vehicle being parked but not charging. 13. 
The system of claim 10, wherein the power-preservation state includes a vehicle electric power resource falling below a predetermined threshold. 14. The system of claim 10, wherein the predefined characteristic includes a file size threshold. 15. The system of claim 14, wherein the faster band includes a band for which the processor determines that handling the request would use less aggregate power than handling the request over the currently enabled band. 16. The system of claim 10, wherein the predefined characteristic includes a vehicle update identifier associated with the request. 17. The system of claim 16, wherein the faster band includes the fastest available band, regardless of power consumption for handling the request. 18. The system of claim 17, wherein the faster band includes the fastest available band, regardless of power consumption for handling the request, provided that the processor determines that sufficient power reserves remain to complete handling of the request using the fastest available band. 19. The system of claim 10, wherein the predefined characteristic includes the processor determining that aggregate power usage for request handling over the faster available band would be less than aggregate power usage for request handling over the currently enabled band. 20. A computer-implemented method comprising: responsive to detecting a vehicle power-preservation state, disabling all available cellular band connections except for a currently available cellular band connection determined to require the lowest power usage of all the available cellular band connections.
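The band-selection policy claimed above reduces to a small optimization: idle in the power-preservation state on the lowest transmit-power band, but switch to a faster band for a transfer when its aggregate energy (power times expected duration, per claims 9, 15 and 19) is lower. A minimal sketch, assuming a `bands` table of (transmit power in watts, throughput in bytes per second) that the patent does not specify:

```python
def pick_band(bands, file_size=None):
    """With no pending transfer, pick the lowest transmit-power band
    (claim 1). For a transfer of file_size bytes, pick the band with the
    lowest aggregate energy, power * (size / rate) (claims 15/19)."""
    lowest = min(bands, key=lambda b: bands[b][0])
    if file_size is None:
        return lowest
    def energy(band):
        power, rate = bands[band]
        return power * (file_size / rate)
    return min(bands, key=energy)

# Illustrative numbers only; real figures depend on modem and network.
bands = {"LTE-B12": (0.20, 2e6), "LTE-B4": (0.25, 10e6)}
print(pick_band(bands))                  # idle: lowest-power band, LTE-B12
print(pick_band(bands, file_size=5e8))   # large transfer: faster band wins
```

The second call illustrates claims 7-9: a large file makes the nominally hungrier band cheaper in aggregate, so the system would temporarily enable it for the duration of the transfer.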
2,600
10,637
10,637
14,548,607
2,627
One embodiment provides a method, including: receiving, from an input device, user input; detecting, at an input device, a modifier key input comprising input from a single modifier key location; determining, using a processor, a location of a cursor; and modifying, using a processor, at least one character associated with the location of a cursor based upon the modifier key input. Other aspects are described and claimed.
1. A method, comprising: receiving, from an input device, user input; detecting, at an input device, a modifier key input comprising input from a single modifier key location; determining, using a processor, a location of a cursor; and modifying, using a processor, at least one character associated with the location of a cursor based upon the modifier key input. 2. The method of claim 1, further comprising identifying an application in which the user input was received and wherein the modifying is further based upon the application identified. 3. The method of claim 1, further comprising: detecting a second modifier key input, wherein the second modifier key input comprises input from the single modifier key location; and modifying the at least one character associated with the location of a cursor based upon the second modifier key input. 4. The method of claim 1, further comprising: detecting a second modifier key input, wherein the second modifier key input comprises input from a different modifier key location; and modifying the at least one character associated with the location of a cursor based upon the second modifier key input. 5. The method of claim 1, wherein the modifying comprises modifying at least one character located after the location of a cursor. 6. The method of claim 1, wherein the modifying comprises modifying at least one character located before the location of a cursor. 7. The method of claim 1, wherein the modifier key input comprises multiple instances of input from a single modifier key location. 8. The method of claim 1, wherein the modifying comprises modifying the at least one character in a manner associated with a key function associated with the single modifier key location. 9. The method of claim 1, wherein the modifying comprises modifying the at least one character in a manner configured by a user. 10. The method of claim 1, wherein the location of a cursor comprises a selection of a plurality of characters. 11. The method of claim 1, wherein the modifier key input comprises an input of predetermined duration. 12. An information handling device, comprising: a processor; at least one input device operatively coupled to the processor; a memory device that stores instructions executable by the processor to: receive, from one of the at least one input device, user input; detect, at one of the at least one input device, a modifier key input comprising input from a single modifier key location; determine a location of a cursor; and modify at least one character associated with the location of a cursor based upon the modifier key input. 13. The information handling device of claim 12, wherein the instructions are further executable by the processor to identify an application in which the user input was received and wherein to modify is further based upon the application identified. 14. The information handling device of claim 12, wherein the instructions are further executable by the processor to: detect a second modifier key input, wherein the second modifier key input comprises input from the single modifier key location; and modify the at least one character associated with the location of a cursor based upon the second modifier key input. 15. 
The information handling device of claim 12, wherein the instructions are further executable by the processor to: detect a second modifier key input, wherein the second modifier key input comprises input from a different modifier key location; and modify the at least one character associated with the location of a cursor based upon the second modifier key input. 16. The information handling device of claim 12, wherein to modify comprises modifying at least one character located at a location selected from the group consisting of: after the location of a cursor and before the location of a cursor. 17. The information handling device of claim 12, wherein the modifier key input comprises multiple instances of input from a single modifier key location. 18. The information handling device of claim 12, wherein to modify comprises modifying the at least one character in a manner associated with a key function associated with the single modifier key location. 19. The information handling device of claim 12, wherein to modify comprises modifying the at least one character in a manner configured by a user. 20. The information handling device of claim 12, wherein the location of a cursor comprises a selection of a plurality of characters. 21. A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that receives, from an input device, user input; code that detects, at an input device, a modifier key input comprising input from a single modifier key; code that determines, using a processor, a location of a cursor; and code that modifies, using a processor, at least one character associated with the location of a cursor based upon the modifier key input.
One embodiment provides a method, including: receiving, from an input device, user input; detecting, at an input device, a modifier key input comprising input from a single modifier key location; determining, using a processor, a location of a cursor; and modifying, using a processor, at least one character associated with the location of a cursor based upon the modifier key input. Other aspects are described and claimed.1. A method, comprising: receiving, from an input device, user input; detecting, at an input device, a modifier key input comprising input from a single modifier key location; determining, using a processor, a location of a cursor; and modifying, using a processor, at least one character associated with the location of a cursor based upon the modifier key input. 2. The method of claim 1, further comprising identifying an application in which the user input was received and wherein the modifying is further based upon the application identified. 3. The method of claim 1, further comprising: detecting a second modifier key input, wherein the second modifier key input comprises input from the single modifier key location; and modifying the at least one character associated with the location of a cursor based upon the second modifier key input. 4. The method of claim 1, further comprising: detecting a second modifier key input, wherein the second modifier key input comprises input from a different modifier key location; and modifying the at least one character associated with the location of a cursor based upon the second modifier key input. 5. The method of claim 1, wherein the modifying comprises modifying at least one character located after the location of a cursor. 6. The method of claim 1, wherein the modifying comprises modifying at least one character located before the location of a cursor. 7. The method of claim 1, wherein the modifier key input comprises multiple instances of input from a single modifier key location. 8. The method of claim 1, wherein the modifying comprises modifying the at least one character in a manner associated with a key function associated with the single modifier key location. 9. The method of claim 1, wherein the modifying comprises modifying the at least one character in a manner configured by a user. 10. The method of claim 1, wherein the location of a cursor comprises a selection of a plurality of characters. 11. The method of claim 1, wherein the modifier key input comprises an input of predetermined duration. 12. An information handling device, comprising: a processor; at least one input device operatively coupled to the processor; a memory device that stores instructions executable by the processor to: receive, from one of the at least one input device, user input; detect, at one of the at least one input device, a modifier key input comprising input from a single modifier key location; determine a location of a cursor; and modify at least one character associated with the location of a cursor based upon the modifier key input. 13. The information handling device of claim 12, wherein the instructions are further executable by the processor to identify an application in which the user input was received and wherein to modify is further based upon the application identified. 14. 
The information handling device of claim 12, wherein the instructions are further executable by the processor to: detect a second modifier key input, wherein the second modifier key input comprises input from the single modifier key location; and modify the at least one character associated with the location of a cursor based upon the second modifier key input. 15. The information handling device of claim 12, wherein the instructions are further executable by the processor to: detect a second modifier key input, wherein the second modifier key input comprises input from a different modifier key location; and modify the at least one character associated with the location of a cursor based upon the second modifier key input. 16. The information handling device of claim 12, wherein to modify comprises modifying at least one character located at a location selected from the group consisting of: after the location of a cursor and before the location of a cursor. 17. The information handling device of claim 12, wherein the modifier key input comprises multiple instances of input from a single modifier key location. 18. The information handling device of claim 12, wherein to modify comprises modifying the at least one character in a manner associated with a key function associated with the single modifier key location. 19. The information handling device of claim 12, wherein to modify comprises modifying the at least one character in a manner configured by a user. 20. The information handling device of claim 12, wherein the location of a cursor comprises a selection of a plurality of characters. 21. A product, comprising: a storage device having code stored therewith, the code being executable by a processor and comprising: code that receives, from an input device, user input; code that detects, at an input device, a modifier key input comprising input from a single modifier key; code that determines, using a processor, a location of a cursor; and code that modifies, using a processor, at least one character associated with the location of a cursor based upon the modifier key input.
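The core behavior claimed above, a lone tap on a modifier key editing the character(s) at the cursor rather than modifying a concurrent keystroke, can be sketched in a few lines. The modifier-to-action mapping below is an assumption; the claims leave the modification key-dependent and user-configurable (claims 8-9), and the character count covers the multi-character selection of claim 10.

```python
def apply_modifier(text, cursor, modifier, count=1):
    """Modify `count` characters after the cursor according to which
    modifier key was tapped alone (claims 1, 5, 7-8). The shift->upper,
    ctrl->lower mapping is purely illustrative."""
    actions = {"shift": str.upper, "ctrl": str.lower}
    span = text[cursor:cursor + count]
    return text[:cursor] + actions[modifier](span) + text[cursor + count:]

print(apply_modifier("hello world", 6, "shift", count=5))  # -> "hello WORLD"
```

Modifying text before the cursor (claim 6) would be the mirror image, slicing `text[cursor - count:cursor]` instead.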
2,600
10,638
10,638
15,537,371
2,644
A method for transmitting an IP data packet to an IP address associated with a host name is described. A first service message of a Short Message Service is transmitted to a Short Message Service gateway server. The first service message includes a host name resolution request for a host name. A second service message of the Short Message Service is received from the Short Message Service gateway server. The second service message includes an IP address associated with the host name. An IP data packet is transmitted to the IP address associated with the host name.
1-11. (canceled) 12. A method comprising: transmitting a first service message of a Short Message Service to a Short Message Service gateway server, the first service message comprising a host name resolution request for a host name; receiving, from the Short Message Service gateway server, a second service message of the Short Message Service, the second service message comprising an IP address associated with the host name; and transmitting an IP data packet to the IP address associated with the host name. 13. The method of claim 12, wherein the first service message is sent by an application, and wherein the first and the second service messages contain a code identifying the application. 14. The method of claim 12, further comprising encoding the first service message prior to transmitting the first service message. 15. The method of claim 14, wherein encoding the first service message comprises encrypting the first service message. 16. The method of claim 14, wherein encoding the first service message comprises performing data compression of the first service message. 17. A computer-program product that stores software code that can be executed on a computer device, the software code including instructions for implementing the method of claim 12. 18. A computer-implemented device comprising: a communication interface configured to transmit an IP data packet to an IP address associated with a host name; a processor; and a memory coupled to the processor and storing software code that can be executed on a computer device, the software code including instructions for implementing a method comprising: transmitting a first service message of a Short Message Service to a Short Message Service gateway server, the first service message comprising a host name resolution request for the host name; receiving from the Short Message Service gateway server a second service message of the Short Message Service, the second service message comprising an IP address associated with the host name; and transmitting an IP data packet to the IP address associated with the host name. 19. The device of claim 18, wherein the device is incorporated in a Universal Integrated Circuit Card. 20. The device of claim 18, wherein the device comprises an embedded Universal Integrated Circuit Card. 21. The device of claim 18, wherein the memory comprises a Subscriber Identity Module application that stores an application for implementing the method. 22. The device of claim 18, further comprising a further processor and a further memory, wherein the further memory stores an application, which, when executed on the further processor, communicates with the processor. 23. A method for resolving a host name, the method comprising: receiving a first service message of a Short Message Service from a mobile device, the first service message comprising a host name resolution request for the host name; determining by use of a local database and/or a remote Domain Name System server an IP address associated with the host name; and transmitting a second service message of the Short Message Service to the mobile device, the second service message comprising the IP address associated with the host name. 24. The method of claim 23, wherein the first service message is received from an application, and wherein the first and the second service messages contain a code identifying the application. 25. The method of claim 23, further comprising encoding the second service message prior to transmitting the second service message. 26. 
The method of claim 25, wherein encoding the second service message comprises encrypting the second service message. 27. The method of claim 25, wherein encoding the second service message comprises performing data compression of the second service message. 28. A computer-program product that stores software code that can be executed on a computer device, the software code including instructions for implementing the method of claim 23. 29. A host name resolution server comprising: a processor; and a memory coupled to the processor and storing software code that can be executed on a computer device, the software code including instructions for implementing a method comprising: receiving a first service message of a Short Message Service from a mobile device, the first service message comprising a host name resolution request for a host name; determining by use of a local database and/or a Domain Name System server an IP address associated with the host name; and transmitting a second service message of the Short Message Service to the mobile device, the second service message comprising the IP address associated with the host name. 30. The server of claim 29, further comprising the local database, wherein determining the IP address associated with the host name comprises using the local database to determine the IP address associated with the host name. 31. The server of claim 29, wherein the Domain Name System server is implemented as software executed by the processor, wherein determining the IP address associated with the host name comprises determining the IP address associated with the host name by use of the Domain Name System server. 32. The server of claim 29, wherein determining the IP address associated with the host name comprises determining the IP address associated with the host name by use of a remote Domain Name System server.
A method for transmitting an IP data packet to an IP address associated with a host name is described. A first service message of a Short Message Service is transmitted to a Short Message Service gateway server. The first service message includes a host name resolution request for a host name. A second service message of the Short Message Service is received from the Short Message Service gateway server. The second service message includes an IP address associated with the host name. An IP data packet is transmitted to the IP address associated with the host name.1-11. (canceled) 12. A method comprising: transmitting a first service message of a Short Message Service to a Short Message Service gateway server, the first service message comprising a host name resolution request for a host name; receiving, from the Short Message Service gateway server, a second service message of the Short Message Service, the second service message comprising an IP address associated with the host name; and transmitting an IP data packet to the IP address associated with the host name. 13. The method of claim 12, wherein the first service message is sent by an application, and wherein the first and the second service messages contain a code identifying the application. 14. The method of claim 12, further comprising encoding the first service message prior to transmitting the first service message. 15. The method of claim 14, wherein encoding the first service message comprises encrypting the first service message. 16. The method of claim 14, wherein encoding the first service message comprises performing data compression of the first service message. 17. A computer-program product that stores software code that can be executed on a computer device, the software code including instructions for implementing the method of claim 12. 18. A computer-implemented device comprising: a communication interface configured to transmit an IP data packet to an IP address associated with a host name; a processor; and a memory coupled to the processor and storing software code that can be executed on a computer device, the software code including instructions for implementing a method comprising: transmitting a first service message of a Short Message Service to a Short Message Service gateway server, the first service message comprising a host name resolution request for the host name; receiving from the Short Message Service gateway server a second service message of the Short Message Service, the second service message comprising an IP address associated with the host name; and transmitting an IP data packet to the IP address associated with the host name. 19. The device of claim 18, wherein the device is incorporated in a Universal Integrated Circuit Card. 20. The device of claim 18, wherein the device comprises an embedded Universal Integrated Circuit Card. 21. The device of claim 18, wherein the memory comprises a Subscriber Identity Module application that stores an application for implementing the method. 22. The device of claim 18, further comprising a further processor and a further memory, wherein the further memory stores an application, which, when executed on the further processor, communicates with the processor. 23. 
A method for resolving a host name, the method comprising: receiving a first service message of a Short Message Service from a mobile device, the first service message comprising a host name resolution request for the host name; determining by use of a local database and/or a remote Domain Name System server an IP address associated with the host name; and transmitting a second service message of the Short Message Service to the mobile device, the second service message comprising the IP address associated with the host name. 24. The method of claim 23, wherein the first service message is received from an application, and wherein the first and the second service messages contain a code identifying the application. 25. The method of claim 23, further comprising encoding the second service message prior to transmitting the second service message. 26. The method of claim 25, wherein encoding the second service message comprises encrypting the second service message. 27. The method of claim 25, wherein encoding the second service message comprises performing data compression of the second service message. 28. A computer-program product that stores software code that can be executed on a computer device, the software code including instructions for implementing the method of claim 23. 29. A host name resolution server comprising: a processor; and a memory coupled to the processor and storing software code that can be executed on a computer device, the software code including instructions for implementing a method comprising: receiving a first service message of a Short Message Service from a mobile device, the first service message comprising a host name resolution request for a host name; determining by use of a local database and/or a Domain Name System server an IP address associated with the host name; and transmitting a second service message of the Short Message Service to the mobile device, the second service message comprising the IP address associated with the host name. 30. The server of claim 29, further comprising the local database, wherein determining the IP address associated with the host name comprises using the local database to determine the IP address associated with the host name. 31. The server of claim 29, wherein the Domain Name System server is implemented as software executed by the processor, wherein determining the IP address associated with the host name comprises determining the IP address associated with the host name by use of the Domain Name System server. 32. The server of claim 29, wherein determining the IP address associated with the host name comprises determining the IP address associated with the host name by use of a remote Domain Name System server.
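For reference, claims 23-32 above describe a gateway-side flow: receive a first Short Message Service message carrying a host name, resolve it against a local database and/or a Domain Name System server, and answer with a second service message carrying the IP address. Below is a minimal sketch of that flow in Python; the handler name, the LOCAL_DB cache, and the send_sms callback are illustrative assumptions not taken from the claims, and the DNS fallback uses the standard library rather than any apparatus disclosed here.

```python
import socket

LOCAL_DB = {"example.com": "93.184.216.34"}  # hypothetical local database

def handle_resolution_request(sender, message, send_sms):
    """Handle a first service message carrying a host name resolution request."""
    host_name = message.strip()
    ip = LOCAL_DB.get(host_name)                  # try the local database first
    if ip is None:
        try:
            ip = socket.gethostbyname(host_name)  # fall back to a DNS lookup
        except socket.gaierror:
            ip = ""                               # unresolved: empty reply
    send_sms(to=sender, body=ip)                  # second service message with the IP

# Example with a stub transport in place of a real SMS gateway:
handle_resolution_request("+15550100", "example.com",
                          send_sms=lambda to, body: print(to, body))
```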
2,600
10,639
10,639
14,290,850
2,625
An electronic device with a display and a touch-sensitive surface displays a user interface with a plurality of content units, where the content units are arranged along a first axis in the user interface, and a respective content unit is associated with corresponding metadata. The device detects a contact on the touch-sensitive surface and a first movement of the contact. In response to detecting the first movement of the contact, the device moves a first set of one or more of the content units perpendicular to the first axis in the user interface in accordance with the first movement, and for one or more respective content units in the first set of content units, the device displays metadata for the respective content unit adjacent to the respective content unit that was not displayed immediately prior to detecting the first movement of the contact.
1. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display and a touch-sensitive surface, cause the device to: display a user interface with a plurality of content units, wherein: the plurality of content units are arranged along a first axis in the user interface; and a respective content unit is associated with corresponding metadata; detect a contact on the touch-sensitive surface and detect a first movement of the contact; and in response to detecting the first movement of the contact: move a first set of one or more of the content units perpendicular to the first axis in the user interface in accordance with the first movement; and for one or more respective content units in the first set of content units, display metadata for the respective content unit adjacent to the respective content unit that was not displayed immediately prior to detecting the first movement of the contact. 2. The computer readable storage medium of claim 1, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: in response to detecting the first movement of the contact, move the first set of one or more of the content units parallel to the first axis in the user interface in accordance with the first movement. 3. The computer readable storage medium of claim 1, wherein: a respective content unit of the first set of content units corresponds to respective metadata; and moving the first set of content units perpendicular to the first axis includes revealing the respective metadata at a location that was previously occupied by the respective content unit. 4. The computer readable storage medium of claim 1, wherein: the plurality of content units includes a second set of one or more content units displayed on the display; and moving the first set of content units perpendicular to the first axis includes moving the first set of content units without moving the second set of content units perpendicular to the first axis. 5. The computer readable storage medium of claim 4, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: in response to detecting the first movement of the contact, display metadata for one or more of the content units in the second set of content units and one or more of the content units in the first set of content units. 6. The computer readable storage medium of claim 4, wherein: immediately prior to detecting the first movement of the contact: one or more of the first set of content units are arranged within a first region of the display; and one or more of the second set of content units are arranged within a second region of the display that is offset from the first region in a direction perpendicular to the first axis; and after detecting the first movement of the contact: one or more of the first set of content units and the second set of content units are arranged within the second region of the display; and metadata for the displayed content units is displayed within the first region of the display. 7. 
The computer readable storage medium of claim 1, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: while the metadata for the respective content unit is displayed, detect liftoff of the contact; and in response to detecting liftoff of the contact, cease to display the metadata for the respective content unit. 8. The computer readable storage medium of claim 1, wherein the metadata for the respective content unit includes one or more of: a time that corresponds to the respective content unit; a date that corresponds to the respective content unit; a read status that corresponds to the respective content unit; a size of the respective content unit; a distance that corresponds to the respective content unit; an author of the respective content unit; a duration of the respective content unit; a security setting of the respective content unit; and a privacy status of the respective content unit. 9. The computer readable storage medium of claim 1, wherein: the plurality of content units are electronic messages; the respective content unit is a respective electronic message; and the metadata for the respective electronic message includes a time at which the electronic message was sent or received. 10. The computer readable storage medium of claim 1, wherein: the user interface includes a plurality of messages in a conversation between a first user and a second user; and the first set of content units that moves perpendicular to the first axis in the user interface includes messages sent by the first user and excludes messages sent by the second user. 11. The computer readable storage medium of claim 1, wherein: the plurality of content units are representations of digital photos; the respective content unit is a representation of a respective digital photo; and the metadata for the representation of the respective digital photo includes image capture data that is indicative of camera settings that were used to capture the respective digital photo. 12. The computer readable storage medium of claim 1, wherein: the plurality of content units are steps in turn by turn directions; the respective content unit is a respective step in the turn by turn directions; and the metadata for the respective step includes an estimate of the time it will take to complete the respective step. 13. The computer readable storage medium of claim 1, wherein: the first set of content units have a color determined based on a color gradient that changes from a first color to a second color along the first axis; and the computer readable storage medium further includes instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: detect a change in orientation of the device; and in response to detecting the change in orientation of the device, adjust the gradient in accordance with the change in orientation of the device. 14. 
The computer readable storage medium of claim 1, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: detect a second movement of a contact on the touch-sensitive surface, wherein the second movement includes a respective component of movement that corresponds to movement that is parallel to the first axis; and in response to detecting the second movement of the contact: determine a magnitude of the respective component of movement of the contact on the touch-sensitive surface; and move a first content unit in the plurality of content units parallel to the first axis by a first amount that is proportional to the magnitude of the respective component of movement of the contact. 15. The computer readable storage medium of claim 14, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: in response to detecting the second movement of the contact: while moving the first content unit, move a second content unit that is adjacent to the first content unit along the first axis by a second amount that is less than the first amount; and move a third content unit that is adjacent to the second content unit along the first axis by a third amount that is less than the second amount. 16. A method, comprising: at an electronic device with a touch-sensitive surface and a display: displaying a user interface with a plurality of content units, wherein: the plurality of content units are arranged along a first axis in the user interface; and a respective content unit is associated with corresponding metadata; detecting a contact on the touch-sensitive surface and detecting a first movement of the contact; and in response to detecting the first movement of the contact: moving a first set of one or more of the content units perpendicular to the first axis in the user interface in accordance with the first movement; and for one or more respective content units in the first set of content units, displaying metadata for the respective content unit adjacent to the respective content unit that was not displayed immediately prior to detecting the first movement of the contact. 17. An electronic device, comprising: a display; a touch-sensitive surface; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: displaying a user interface with a plurality of content units, wherein: the plurality of content units are arranged along a first axis in the user interface; and a respective content unit is associated with corresponding metadata; detecting a contact on the touch-sensitive surface and detecting a first movement of the contact; and in response to detecting the first movement of the contact: moving a first set of one or more of the content units perpendicular to the first axis in the user interface in accordance with the first movement; and for one or more respective content units in the first set of content units, displaying metadata for the respective content unit adjacent to the respective content unit that was not displayed immediately prior to detecting the first movement of the contact.
An electronic device with a display and a touch-sensitive surface displays a user interface with a plurality of content units, where the content units are arranged along a first axis in the user interface, and a respective content unit is associated with corresponding metadata. The device detects a contact on the touch-sensitive surface and a first movement of the contact. In response to detecting the first movement of the contact, the device moves a first set of one or more of the content units perpendicular to the first axis in the user interface in accordance with the first movement, and for one or more respective content units in the first set of content units, the device displays metadata for the respective content unit adjacent to the respective content unit that was not displayed immediately prior to detecting the first movement of the contact.1. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display and a touch-sensitive surface, cause the device to: display a user interface with a plurality of content units, wherein: the plurality of content units are arranged along a first axis in the user interface; and a respective content unit is associated with corresponding metadata; detect a contact on the touch-sensitive surface and detect a first movement of the contact; and in response to detecting the first movement of the contact: move a first set of one or more of the content units perpendicular to the first axis in the user interface in accordance with the first movement; and for one or more respective content units in the first set of content units, display metadata for the respective content unit adjacent to the respective content unit that was not displayed immediately prior to detecting the first movement of the contact. 2. The computer readable storage medium of claim 1, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: in response to detecting the first movement of the contact, move the first set of one or more of the content units parallel to the first axis in the user interface in accordance with the first movement. 3. The computer readable storage medium of claim 1, wherein: a respective content unit of the first set of content units corresponds to respective metadata; and moving the first set of content units perpendicular to the first axis includes revealing the respective metadata at a location that was previously occupied by the respective content unit. 4. The computer readable storage medium of claim 1, wherein: the plurality of content units includes a second set of one or more content units displayed on the display; and moving the first set of content units perpendicular to the first axis includes moving the first set of content units without moving the second set of content units perpendicular to the first axis. 5. The computer readable storage medium of claim 4, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: in response to detecting the first movement of the contact, display metadata for one or more of the content units in the second set of content units and one or more of the content units in the first set of content units. 6. 
The computer readable storage medium of claim 4, wherein: immediately prior to detecting the first movement of the contact: one or more of the first set of content units are arranged within a first region of the display; and one or more of the second set of content units are arranged within a second region of the display that is offset from the first region in a direction perpendicular to the first axis; and after detecting the first movement of the contact: one or more of the first set of content units and the second set of content units are arranged within the second region of the display; and metadata for the displayed content units is displayed within the first region of the display. 7. The computer readable storage medium of claim 1, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: while the metadata for the respective content unit is displayed, detect liftoff of the contact; and in response to detecting liftoff of the contact, cease to display the metadata for the respective content unit. 8. The computer readable storage medium of claim 1, wherein the metadata for the respective content unit includes one or more of: a time that corresponds to the respective content unit; a date that corresponds to the respective content unit; a read status that corresponds to the respective content unit; a size of the respective content unit; a distance that corresponds to the respective content unit; an author of the respective content unit; a duration of the respective content unit; a security setting of the respective content unit; and a privacy status of the respective content unit. 9. The computer readable storage medium of claim 1, wherein: the plurality of content units are electronic messages; the respective content unit is a respective electronic message; and the metadata for the respective electronic message includes a time at which the electronic message was sent or received. 10. The computer readable storage medium of claim 1, wherein: the user interface includes a plurality of messages in a conversation between a first user and a second user; and the first set of content units that moves perpendicular to the first axis in the user interface includes messages sent by the first user and excludes messages sent by the second user. 11. The computer readable storage medium of claim 1, wherein: the plurality of content units are representations of digital photos; the respective content unit is a representation of a respective digital photo; and the metadata for the representation of the respective digital photo includes image capture data that is indicative of camera settings that were used to capture the respective digital photo. 12. The computer readable storage medium of claim 1, wherein: the plurality of content units are steps in turn by turn directions; the respective content unit is a respective step in the turn by turn directions; and the metadata for the respective step includes an estimate of the time it will take to complete the respective step. 13. 
The computer readable storage medium of claim 1, wherein: the first set of content units have a color determined based on a color gradient that changes from a first color to a second color along the first axis; and the computer readable storage medium further includes instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: detect a change in orientation of the device; and in response to detecting the change in orientation of the device, adjust the gradient in accordance with the change in orientation of the device. 14. The computer readable storage medium of claim 1, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: detect a second movement of a contact on the touch-sensitive surface, wherein the second movement includes a respective component of movement that corresponds to movement that is parallel to the first axis; and in response to detecting the second movement of the contact: determine a magnitude of the respective component of movement of the contact on the touch-sensitive surface; and move a first content unit in the plurality of content units parallel to the first axis by a first amount that is proportional to the magnitude of the respective component of movement of the contact. 15. The computer readable storage medium of claim 14, including instructions, which when executed by the electronic device with the display and the touch-sensitive surface, cause the device to: in response to detecting the second movement of the contact: while moving the first content unit, move a second content unit that is adjacent to the first content unit along the first axis by a second amount that is less than the first amount; and move a third content unit that is adjacent to the second content unit along the first axis by a third amount that is less than the second amount. 16. A method, comprising: at an electronic device with a touch-sensitive surface and a display: displaying a user interface with a plurality of content units, wherein: the plurality of content units are arranged along a first axis in the user interface; and a respective content unit is associated with corresponding metadata; detecting a contact on the touch-sensitive surface and detecting a first movement of the contact; and in response to detecting the first movement of the contact: moving a first set of one or more of the content units perpendicular to the first axis in the user interface in accordance with the first movement; and for one or more respective content units in the first set of content units, displaying metadata for the respective content unit adjacent to the respective content unit that was not displayed immediately prior to detecting the first movement of the contact. 17. 
An electronic device, comprising: a display; a touch-sensitive surface; one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: displaying a user interface with a plurality of content units, wherein: the plurality of content units are arranged along a first axis in the user interface; and a respective content unit is associated with corresponding metadata; detecting a contact on the touch-sensitive surface and detecting a first movement of the contact; and in response to detecting the first movement of the contact: moving a first set of one or more of the content units perpendicular to the first axis in the user interface in accordance with the first movement; and for one or more respective content units in the first set of content units, displaying metadata for the respective content unit adjacent to the respective content unit that was not displayed immediately prior to detecting the first movement of the contact.
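The claims above describe a gesture response in which a perpendicular drag shifts a set of content units and reveals per-unit metadata that was hidden before the movement. The following is a minimal sketch of that behavior, assuming a simple data model (ContentUnit, the on_first_movement/on_liftoff handlers) that is illustrative rather than taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ContentUnit:
    offset: float = 0.0            # displacement perpendicular to the first axis
    metadata: str = ""
    metadata_visible: bool = False

def on_first_movement(first_set, dx_perpendicular):
    """Move the first set perpendicular to the first axis and reveal metadata
    that was not displayed immediately prior to the movement (claim 1)."""
    for unit in first_set:
        unit.offset += dx_perpendicular   # move in accordance with the movement
        unit.metadata_visible = True      # metadata shown adjacent to the unit

def on_liftoff(first_set):
    """On liftoff of the contact, cease to display the metadata (claim 7)."""
    for unit in first_set:
        unit.metadata_visible = False

# Usage with hypothetical message timestamps as metadata:
messages = [ContentUnit(metadata="9:41 AM"), ContentUnit(metadata="9:42 AM")]
on_first_movement(messages, dx_perpendicular=-40.0)
assert all(m.metadata_visible for m in messages)
on_liftoff(messages)
```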
2,600
10,640
10,640
15,421,579
2,616
A graphics processing system comprising: a tiling unit configured to tile a first view of a scene into a plurality of tiles and generate a list of primitives associated with each tile; a processing unit configured to identify a first subset of the tiles that are each associated with at least a predetermined number of primitives in dependence on the list; and a rendering unit configured to render to a render target each of the identified tiles.
1. A graphics processing system comprising: a tiling unit configured to tile a first view of a scene into a plurality of tiles and generate a list of primitives associated with each tile; a processing unit configured to identify a first subset of the tiles that are each associated with at least a predetermined number of primitives in dependence on the list; and a rendering unit configured to render to a render target each of the identified tiles and not render tiles that are not identified by the processing unit. 2. A system as claimed in claim 1, further comprising memory and a memory management unit configured to allocate a portion of the memory to each of the identified tiles. 3. A system as claimed in claim 2, wherein the rendering unit is configured to store data resulting from the render of each identified tile at the allocated portion of memory for that tile. 4. A system as claimed in claim 2, wherein the memory management unit is further configured to not allocate a portion of memory for each of the plurality of tiles that are not identified by the processing unit. 5. A system as claimed in claim 4, wherein the rendering unit is further configured to, for a subsequent render, access memory locations associated with the tiles that are not identified and the memory management unit is further configured to return a predefined value in response to the access. 6. A system as claimed in claim 1, wherein the number of tiles identified in the first subset is less than the number of tiles the scene is tiled into. 7. A system as claimed in claim 1, wherein the tiling unit is configured to generate the list of primitives associated with each tile by determining which primitives are located at least partially within that tile. 8. A system as claimed in claim 1, wherein: the processing unit is configured to identify a second subset of the tiles that are associated with parts of the scene that are visible in a second view; and the rendering unit is configured to render each of the tiles that are identified in both the first and second subset. 9. A system as claimed in claim 2, wherein: the processing unit is configured to identify a second subset of the tiles that are associated with parts of the scene that are visible in a second view; and the memory management unit is configured to allocate a portion of the memory to each of the tiles identified in both the first and second subset. 10. A graphics processing system comprising: memory for storing data; a tiling unit configured to tile a first view of a scene into a plurality of tiles; a rendering unit configured to render each tile that is associated with at least a predetermined number of primitives and output render data resulting from the render of that tile; and a memory management unit configured to: detect the render data output for each rendered tile and allocate a portion of memory for that rendered tile; store the render data for each rendered tile at the portion of memory allocated for that tile; and not allocate a portion of memory for each of the plurality of tiles that are not associated with at least the predetermined number of primitives. 11. A system as claimed in claim 10, wherein the rendering unit is further configured to, for a subsequent render, access the memory to read data associated with tiles that are not associated with at least the predetermined number of primitives and the memory management unit is further configured to return a predefined value in response to said access. 12. 
A system as claimed in claim 10, wherein the rendering unit is further configured to not output data for tiles that are not associated with at least the predetermined number of primitives. 13. A system as claimed in claim 10, further comprising a processing unit configured to identify a subset of the tiles that are associated with parts of the scene that are visible in a second view, wherein the rendering unit is configured to render each of the tiles that are identified in the subset and associated with at least the predetermined number of primitives. 14. A system as claimed in claim 1, wherein the render target is a texture. 15. A system as claimed in claim 10, wherein the render data is data for a texture. 16. A system as claimed in claim 14, wherein the rendering unit is configured to apply the texture to the scene in a subsequent render of the scene. 17. A system as claimed in claim 16, wherein the texture is applied to a second view of the scene, the second view being different to the first view. 18. A system as claimed in claim 14, wherein the texture is a shadow map. 19. A system as claimed in claim 1, wherein the predetermined number is equal to or greater than one. 20. A graphics processing method comprising: tiling a first view of a scene into a plurality of tiles; generating a list of primitives associated with each tile; identifying a first subset of the tiles that are each associated with at least a predetermined number of primitives in dependence on the list; and rendering to a render target each of the identified tiles and not rendering tiles that are not identified.
A graphics processing system comprising: a tiling unit configured to tile a first view of a scene into a plurality of tiles and generate a list of primitives associated with each tile; a processing unit configured to identify a first subset of the tiles that are each associated with at least a predetermined number of primitives in dependence on the list; and a rendering unit configured to render to a render target each of the identified tiles.1. A graphics processing system comprising: a tiling unit configured to tile a first view of a scene into a plurality of tiles and generate a list of primitives associated with each tile; a processing unit configured to identify a first subset of the tiles that are each associated with at least a predetermined number of primitives in dependence on the list; and a rendering unit configured to render to a render target each of the identified tiles and not render tiles that are not identified by the processing unit. 2. A system as claimed in claim 1, further comprising memory and a memory management unit configured to allocate a portion of the memory to each of the identified tiles. 3. A system as claimed in claim 2, wherein the rendering unit is configured to store data resulting from the render of each identified tile at the allocated portion of memory for that tile. 4. A system as claimed in claim 2, wherein the memory management unit is further configured to not allocate a portion of memory for each of the plurality of tiles that are not identified by the processing unit. 5. A system as claimed in claim 4, wherein the rendering unit is further configured to, for a subsequent render, access memory locations associated with the tiles that are not identified and the memory management unit is further configured to return a predefined value in response to the access. 6. A system as claimed in claim 1, wherein the number of tiles identified in the first subset is less than the number of tiles the scene is tiled into. 7. A system as claimed in claim 1, wherein the tiling unit is configured to generate the list of primitives associated with each tile by determining which primitives are located at least partially within that tile. 8. A system as claimed in claim 1, wherein: the processing unit is configured to identify a second subset of the tiles that are associated with parts of the scene that are visible in a second view; and the rendering unit is configured to render each of the tiles that are identified in both the first and second subset. 9. A system as claimed in claim 2, wherein: the processing unit is configured to identify a second subset of the tiles that are associated with parts of the scene that are visible in a second view; and the memory management unit is configured to allocate a portion of the memory to each of the tiles identified in both the first and second subset. 10. 
A graphics processing system comprising: memory for storing data; a tiling unit configured to tile a first view of a scene into a plurality of tiles; a rendering unit configured to render each tile that is associated with at least a predetermined number of primitives and output render data resulting from the render of that tile; and a memory management unit configured to: detect the render data output for each rendered tile and allocate a portion of memory for that rendered tile; store the render data for each rendered tile at the portion of memory allocated for that tile; and not allocate a portion of memory for each of the plurality of tiles that are not associated with at least the predetermined number of primitives. 11. A system as claimed in claim 10, wherein the rendering unit is further configured to, for a subsequent render, access the memory to read data associated with tiles that are not associated with at least the predetermined number of primitives and the memory management unit is further configured to return a predefined value in response to said access. 12. A system as claimed in claim 10, wherein the rendering unit is further configured to not output data for tiles that are not associated with at least the predetermined number of primitives. 13. A system as claimed in claim 10, further comprising a processing unit configured to identify a subset of the tiles that are associated with parts of the scene that are visible in a second view, wherein the rendering unit is configured to render each of the tiles that are identified in the subset and associated with at least the predetermined number of primitives. 14. A system as claimed in claim 1, wherein the render target is a texture. 15. A system as claimed in claim 10, wherein the render data is data for a texture. 16. A system as claimed in claim 14, wherein the rendering unit is configured to apply the texture to the scene in a subsequent render of the scene. 17. A system as claimed in claim 16, wherein the texture is applied to a second view of the scene, the second view being different to the first view. 18. A system as claimed in claim 14, wherein the texture is a shadow map. 19. A system as claimed in claim 1, wherein the predetermined number is equal to or greater than one. 20. A graphics processing method comprising: tiling a first view of a scene into a plurality of tiles; generating a list of primitives associated with each tile; identifying a first subset of the tiles that are each associated with at least a predetermined number of primitives in dependence on the list; and rendering to a render target each of the identified tiles and not rendering tiles that are not identified.
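The claims above describe binning primitives into tiles, identifying the subset of tiles meeting a primitive-count threshold, rendering only those tiles, and returning a predefined value when an unallocated tile is later read. Below is a minimal Python sketch of that selection logic; the data shapes (a primitive as a dict with a "bbox", the render_tile callback) are illustrative assumptions, not the disclosed hardware units.

```python
from collections import defaultdict

PREDEFINED_VALUE = 0  # returned when reading a tile that was never allocated

def tile_and_render(primitives, tile_size, threshold, render_tile):
    """Bin primitives into tiles, then render only tiles meeting the threshold."""
    per_tile = defaultdict(list)
    for prim in primitives:
        x0, y0, x1, y1 = prim["bbox"]     # primitive's screen-space bounding box
        for tx in range(int(x0) // tile_size, int(x1) // tile_size + 1):
            for ty in range(int(y0) // tile_size, int(y1) // tile_size + 1):
                per_tile[(tx, ty)].append(prim)   # at least partially within tile
    render_target = {}                    # memory allocated only for rendered tiles
    for tile, prims in per_tile.items():
        if len(prims) >= threshold:       # the identified first subset
            render_target[tile] = render_tile(tile, prims)
    return render_target

def read_tile(render_target, tile):
    """A subsequent render reading an unallocated tile gets the predefined value."""
    return render_target.get(tile, PREDEFINED_VALUE)

# Usage with a stub renderer:
target = tile_and_render([{"bbox": (0, 0, 40, 40)}], tile_size=32,
                         threshold=1, render_tile=lambda t, p: len(p))
assert read_tile(target, (9, 9)) == PREDEFINED_VALUE
```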
2,600
10,641
10,641
15,905,520
2,663
The present approach relates to detection of image artifacts symptomatic of needed calibration and/or failing hardware with no or limited human intervention, such as using machine learning. Detection of image artifacts can occur as part of normal imaging system operation and/or as part of a quality assessment of a newly manufactured or already installed system. Detection of image artifacts can adapt or learn as new scans are acquired using supervised or semi-supervised learning. Assessment of system imaging performance, in recently manufactured systems as well as the installed base, can be performed reliably and automatically.
1. A neural network configured to identify serviceable issues related to the operation of an imaging system, the neural network comprising: an input layer configured to receive images generated by imaging systems; two or more hidden layers configured to receive the images from the input layer and to generate a respective segmented image for each image, wherein the segmented images comprise at least one segment corresponding to image artifacts; and an output layer configured to provide an output based on the segmented images. 2. The neural network of claim 1, wherein the output comprises an indication of a hardware or system component issue related to an image artifact identified in a respective segmented image. 3. The neural network of claim 1, wherein the output comprises a ranked list of service operations based on their likelihood of resolving an identified image artifact issue. 4. The neural network of claim 1, wherein the output comprises a probability assessment of the types of artifacts present in a corresponding input image. 5. The neural network of claim 1, wherein the output comprises a service call recommendation or appointment in response to an image artifact identified in a respective segmented image. 6. The neural network of claim 1, wherein the respective segmented images are segmented into background, tissue or phantom, and artifacts. 7. The neural network of claim 1, comprising training or refining the neural network using semi-supervised learning, wherein an image data set used for semi-supervised learning is derived from both an installed base of imaging systems and a manufacturing base of imaging systems. 8. The neural network of claim 1, wherein the images received by the input layer are derived from both an installed base of imaging systems and a manufacturing base of imaging systems. 9. A method for diagnosing imaging system issues, comprising: receiving as an input at an input layer of a trained neural network an image generated by an imaging system; processing the image via one or more layers of the trained neural network, wherein processing the image comprises at least segmenting the image to derive a segment corresponding to image artifacts; and outputting at an output layer of the trained neural network an output based on the segment corresponding to image artifacts. 10. The method of claim 9, wherein the imaging system is installed at a customer site or is undergoing evaluation after manufacture but prior to installation. 11. The method of claim 9, wherein the output comprises an indication of a hardware or system component issue related to an image artifact identified in the segment corresponding to image artifacts. 12. The method of claim 9, wherein the output comprises a ranked list of service operations based on their likelihood of resolving an image artifact identified in the segment corresponding to image artifacts. 13. The method of claim 9, wherein the output comprises a probability assessment of the types of artifacts present in the image. 14. The method of claim 9, wherein the output comprises a service call recommendation or appointment in response to an image artifact identified in the segment corresponding to image artifacts. 15. The method of claim 9, wherein processing the image comprises segmenting the image into background, tissue or phantom, and artifact segments. 16. 
The method of claim 9, comprising refining the trained neural network over time using semi-supervised learning, wherein training images used for semi-supervised learning are derived from both an installed base of imaging systems and a manufacturing base of imaging systems. 17. One or more non-transitory computer-readable media encoding processor-executable routines, wherein the routines, when executed by a processor, cause acts to be performed comprising: receiving as an input at an input layer of a trained neural network an image generated by an imaging system; processing the image via one or more layers of the trained neural network, wherein processing the image comprises at least segmenting the image to derive a segment corresponding to image artifacts; and outputting at an output layer of the trained neural network an output based on the segment corresponding to image artifacts. 18. The one or more non-transitory computer-readable media of claim 17, wherein the output comprises an indication of a hardware or system component issue related to an image artifact identified in the segment corresponding to image artifacts. 19. The one or more non-transitory computer-readable media of claim 17, wherein the output comprises a ranked list of service operations based on their likelihood of resolving an image artifact identified in the segment corresponding to image artifacts. 20. The one or more non-transitory computer-readable media of claim 17, wherein the output comprises a service call recommendation or appointment in response to an image artifact identified in the segment corresponding to image artifacts.
The present approach relates to detection of image artifacts symptomatic of needed calibration and/or failing hardware with no or limited human intervention, such as using machine learning. Detection of image artifacts can occur as part of normal imaging system operation and/or as part of a quality assessment of a newly manufactured or already installed system. Detection of image artifacts can adapt or learn as new scans are acquired using supervised or semi-supervised learning. Assessment of system imaging performance, in recently manufactured systems as well as the installed base, can be performed reliably and automatically.1. A neural network configured to identify serviceable issues related to the operation of an imaging system, the neural network comprising: an input layer configured to receive images generated by imaging systems; two or more hidden layers configured to receive the images from the input layer and to generate a respective segmented image for each image, wherein the segmented images comprise at least one segment corresponding to image artifacts; and an output layer configured to provide an output based on the segmented images. 2. The neural network of claim 1, wherein the output comprises an indication of a hardware or system component issue related to an image artifact identified in a respective segmented image. 3. The neural network of claim 1, wherein the output comprises a ranked list of service operations based on their likelihood of resolving an identified image artifact issue. 4. The neural network of claim 1, wherein the output comprises a probability assessment of the types of artifacts present in a corresponding input image. 5. The neural network of claim 1, wherein the output comprises a service call recommendation or appointment in response to an image artifact identified in a respective segmented image. 6. The neural network of claim 1, wherein the respective segmented images are segmented into background, tissue or phantom, and artifacts. 7. The neural network of claim 1, comprising training or refining the neural network using semi-supervised learning, wherein an image data set used for semi-supervised learning is derived from both an installed base of imaging systems and a manufacturing base of imaging systems. 8. The neural network of claim 1, wherein the images received by the input layer are derived from both an installed base of imaging systems and a manufacturing base of imaging systems. 9. A method for diagnosing imaging system issues, comprising: receiving as an input at an input layer of a trained neural network an image generated by an imaging system; processing the image via one or more layers of the trained neural network, wherein processing the image comprises at least segmenting the image to derive a segment corresponding to image artifacts; and outputting at an output layer of the trained neural network an output based on the segment corresponding to image artifacts. 10. The method of claim 9, wherein the imaging system is installed at a customer site or is undergoing evaluation after manufacture but prior to installation. 11. The method of claim 9, wherein the output comprises an indication of a hardware or system component issue related to an image artifact identified in the segment corresponding to image artifacts. 12. The method of claim 9, wherein the output comprises a ranked list of service operations based on their likelihood of resolving an image artifact identified in the segment corresponding to image artifacts. 13. 
The method of claim 9, wherein the output comprises a probability assessment of the types of artifacts present in the image. 14. The method of claim 9, wherein the output comprises a service call recommendation or appointment in response to an image artifact identified in the segment corresponding to image artifacts. 15. The method of claim 9, wherein processing the image comprises segmenting the image into background, tissue or phantom, and artifact segments. 16. The method of claim 9, comprising refining the trained neural network over time using semi-supervised learning, wherein training images used for semi-supervised learning are derived from both an installed base of imaging systems and a manufacturing base of imaging systems. 17. One or more non-transitory computer-readable media encoding processor-executable routines, wherein the routines, when executed by a processor, cause acts to be performed comprising: receiving as an input at an input layer of a trained neural network an image generated by an imaging system; processing the image via one or more layers of the trained neural network, wherein processing the image comprises at least segmenting the image to derive a segment corresponding to image artifacts; and outputting at an output layer of the trained neural network an output based on the segment corresponding to image artifacts. 18. The one or more non-transitory computer-readable media of claim 17, wherein the output comprises an indication of a hardware or system component issue related to an image artifact identified in the segment corresponding to image artifacts. 19. The one or more non-transitory computer-readable media of claim 17, wherein the output comprises a ranked list of service operations based on their likelihood of resolving an image artifact identified in the segment corresponding to image artifacts. 20. The one or more non-transitory computer-readable media of claim 17, wherein the output comprises a service call recommendation or appointment in response to an image artifact identified in the segment corresponding to image artifacts.
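The claims above describe a network with an input layer, two or more hidden layers, and an output layer that segments each image into background, tissue/phantom, and artifact regions. The following is a minimal sketch assuming PyTorch; the specific architecture (layer widths, kernel sizes) and the artifact_fraction helper are illustrative assumptions, not the disclosed design.

```python
import torch.nn as nn

class ArtifactSegmenter(nn.Module):
    """Input layer, hidden layers, and an output layer producing per-pixel
    scores for three segments: background, tissue/phantom, artifact."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.input_layer = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.hidden = nn.Sequential(      # "two or more hidden layers"
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.output_layer = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, image):             # image: (N, 1, H, W)
        return self.output_layer(self.hidden(self.input_layer(image)))

def artifact_fraction(logits):
    """Share of pixels labeled as artifact (class 2); a downstream rule could
    map this to a service recommendation or a ranked list of service operations."""
    return (logits.argmax(dim=1) == 2).float().mean().item()
```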
2,600
10,642
10,642
13,985,610
2,674
A device and method for the control and management of the printing parameters of a printing machine, particularly with a plurality of consecutive printing processes, is presented. The device and method include at least one video camera associable with a printing and conversion assembly and movable along a motorized guide in order to acquire a print medium. An image processor is functionally associated with the video camera to search, recognize and measure on the image of the print medium acquired by the video camera the printing values of at least one reference register mark reproduced on the print medium. The image processor is functionally connected to a unit for the actuation of the printing and conversion assembly to correct and restabilize the printing values according to the difference between the printing values measured by the image processor and the desired theoretical printing values.
1-8. (canceled) 9. A device for the control and management of the printing parameters of a printing machine, particularly with a plurality of consecutive printing processes, the device comprising: at least one video camera associable with a printing and conversion assembly and movable along a motorized guide in order to acquire a print medium; and image processing means functionally associated with the at least one video camera to search, recognize and measure, on the image of the print medium acquired by the at least one video camera, the printing values of at least one reference register mark reproduced on the print medium, the image processing means being functionally connected to means for actuation of the printing and conversion assembly to correct and restabilize the printing values according to the difference between the printing values measured by the image processing means and the desired theoretical printing values. 10. The device according to claim 9, further comprising means for lighting a viewing field of the at least one video camera, which are adapted to generate a constant and diffuse light throughout the exposure time. 11. The device according to claim 9, wherein the at least one video camera has a minimum resolution of 400×400 pixels per square centimeter. 12. The device according to claim 10, wherein the viewing field of the at least one video camera has a maximum size of 16×12 mm. 13. The device according to claim 9, wherein the at least one video camera is arrangeable at a maximum distance of 50 mm from the printing and conversion assembly. 14. The device according to claim 10, wherein the lighting means comprise ultraviolet ray lamps. 15. A method for the control and management of the printing parameters of a printing machine, particularly with a plurality of consecutive printing processes, by means of the device according to claim 9, the method comprising: acquiring a print medium by means of the at least one video camera; searching and recognizing, by the image processing means, within the image of the print medium acquired by the at least one video camera, at least one reference register mark reproduced on the print medium; measuring, on the image of the print medium acquired by the at least one video camera, the printing values of the at least one reference register mark reproduced on the print medium; calculating, by the image processing means, the difference between the printing values measured by the image processing means and the desired theoretical printing values; and correcting and/or restabilizing the printing values through the intervention of the image processing means on the means for the actuation of the printing and conversion assembly according to the calculated difference. 16. The method according to claim 15, further comprising lighting the viewing field of the at least one video camera with a constant and diffuse light by the lighting means throughout the exposure time.
A device and method for the control and management of the printing parameters of a printing machine, particularly with a plurality of consecutive printing processes, is presented. The device and method include at least one video camera associable with a printing and conversion assembly and movable along a motorized guide in order to acquire a print medium. An image processor is functionally associated with the video camera to search, recognize and measure on the image of the print medium acquired by the video camera the printing values of at least one reference register mark reproduced on the print medium. The image processor is functionally connected to a unit for the actuation of the printing and conversion assembly to correct and restabilize the printing values according to the difference between the printing values measured by the image processor and the desired theoretical printing values.1-8. (canceled) 9. A device for the control and management of the printing parameters of a printing machine, particularly with a plurality of consecutive printing processes, the device comprising: at least one video camera associable with a printing and conversion assembly and movable along a motorized guide in order to acquire a print medium; and image processing means functionally associated with the at least one video camera to search, recognize and measure, on the image of the print medium acquired by the at least one video camera, the printing values of at least one reference register mark reproduced on the print medium, the image processing means being functionally connected to means for actuation of the printing and conversion assembly to correct and restabilize the printing values according to the difference between the printing values measured by the image processing means and the desired theoretical printing values. 10. The device according to claim 9, further comprising means for lighting a viewing field of the at least one video camera, which are adapted to generate a constant and diffuse light throughout the exposure time. 11. The device according to claim 9, wherein the at least one video camera has a minimum resolution of 400×400 pixels per square centimeter. 12. The device according to claim 10, wherein the viewing field of the at least one video camera has a maximum size of 16×12 mm. 13. The device according to claim 9, wherein the at least one video camera is arrangeable at a maximum distance of 50 mm from the printing and conversion assembly. 14. The device according to claim 10, wherein the lighting means comprise ultraviolet ray lamps. 15. 
A method for the control and management of the printing parameters of a printing machine, particularly with a plurality of consecutive printing processes, by means of the device according to claim 9, the method comprising: acquiring a print medium by means of the at least one video camera; searching and recognizing, by the image processing means, within the image of the print medium acquired by the at least one video camera, at least one reference register mark reproduced on the print medium; measuring, on the image of the print medium acquired by the at least one video camera, the printing values of the at least one reference register mark reproduced on the print medium; calculating, by the image processing means, the difference between the printing values measured by the image processing means and the desired theoretical printing values; and correcting and/or restabilizing the printing values through the intervention of the image processing means on the means for the actuation of the printing and conversion assembly according to the calculated difference. 16. The method according to claim 15, further comprising lighting the viewing field of the at least one video camera with a constant and diffuse light by the lighting means throughout the exposure time.
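Method claim 15 above is a closed control loop: acquire an image of the print medium, measure the register mark, compute the difference from the theoretical values, and drive the actuation means with that difference. Below is a minimal Python sketch of one cycle of that loop; all callable names (acquire_image, find_register_mark, actuate) are illustrative assumptions standing in for the camera, the image processing means, and the actuation means.

```python
def control_step(acquire_image, find_register_mark, actuate, target_xy):
    """One control cycle: acquire, measure the register mark, correct."""
    image = acquire_image()                  # acquire the print medium
    measured_xy = find_register_mark(image)  # search, recognize and measure
    if measured_xy is None:
        return                               # mark not recognized; skip this cycle
    dx = target_xy[0] - measured_xy[0]       # difference from theoretical values
    dy = target_xy[1] - measured_xy[1]
    actuate(dx, dy)                          # correct/restabilize printing values
```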
2,600
10,643
10,643
14,546,362
2,674
A display apparatus is provided. The display apparatus includes a communicator configured to communicate with a voice recognition apparatus that recognizes an uttered voice of a user, an input unit configured to receive the uttered voice of the user, a display unit configured to receive voice recognition result information about the uttered voice of the user received from the voice recognition apparatus and display the voice recognition result information, and a processor configured to, when the display apparatus is turned on, perform an access to the voice recognition apparatus by transmitting access request information to the voice recognition apparatus, and when the uttered voice is inputted through the input unit, transmit voice information on the uttered voice to the voice recognition apparatus through the communicator.
1. A display apparatus comprising: a communicator configured to communicate with a voice recognition apparatus that recognizes an uttered voice of a user; an input unit configured to receive the uttered voice of the user; a display unit configured to receive voice recognition result information about the uttered voice of the user received from the voice recognition apparatus and display the voice recognition result information; and a processor configured to, when the display apparatus is turned on, perform an access to the voice recognition apparatus by transmitting access request information to the voice recognition apparatus, and when the uttered voice is inputted through the input unit, transmit voice information on the uttered voice to the voice recognition apparatus through the communicator. 2. The display apparatus of claim 1, wherein when the display apparatus is turned on and a voice recognition-related application is initialized, the processor performs an access to the voice recognition apparatus. 3. The display apparatus of claim 1, wherein when a predetermined event occurs within a first threshold time while the access to the voice recognition apparatus is maintained, the processor activates a voice recognition mode for recognizing the uttered voice of the user, and wherein the event comprises at least one of a first event in which a user command for operating in a voice recognition mode is received, a second event in which motion information is received from a remote control apparatus, and a third event in which an image regarding a motion of the user is inputted. 4. The display apparatus of claim 3, wherein when at least one event from among the first to third events does not occur within the first threshold time, the processor transmits dummy data for maintaining the access to the voice recognition apparatus. 5. The display apparatus of claim 3, wherein when the uttered voice of the user is not inputted within the second threshold time while the voice recognition mode is activated, the processor transmits the dummy data for maintaining the access to the voice recognition apparatus. 6. The display apparatus of claim 3, wherein when the uttered voice of the user is not inputted within the second threshold time while the voice recognition mode is activated, the processor deactivates the voice recognition mode. 7. The display apparatus of claim 6, wherein the first threshold time is a duration in which the access to the voice recognition apparatus is maintained, and wherein the second threshold time is a duration in which the access to the voice recognition apparatus is maintained and the voice recognition mode operates in an activated mode. 8. The display apparatus of claim 1, wherein when a control command to turn off the display apparatus is input, the processor disconnects the access to the voice recognition apparatus. 9. A method of controlling a display apparatus, the method comprising: performing an access to a voice recognition apparatus that recognizes an uttered voice of a user; when the uttered voice of the user is input, transmitting voice information about the input uttered voice of the user to the voice recognition apparatus; and receiving voice recognition result information about the uttered voice received from the voice recognition apparatus and displaying the voice recognition result information. 10. 
The method of claim 9, wherein when the display apparatus is turned on and a voice recognition-related application is initialized, the performing the access comprises performing an access to the voice recognition apparatus. 11. The method of claim 9, further comprising: activating a voice recognition mode for recognizing an uttered voice of a user when a predetermined event occurs within the first threshold time while the access to the voice recognition apparatus is maintained, wherein the event comprises a first event in which a user command for operating in a voice recognition mode is received, a second event in which motion information is received from a remote control apparatus, and a third event in which an image regarding a motion of the user is inputted. 12. The method of claim 11, wherein when at least one event from among the first to third events does not occur, the maintaining the access comprises transmitting dummy data for maintaining the access to the voice recognition apparatus. 13. The method of claim 11, wherein when the uttered voice of the user is not inputted within the second threshold time while the voice recognition mode is activated, the maintaining the access comprises transmitting dummy data for maintaining the access to the voice recognition apparatus. 14. The method of claim 11, further comprising: deactivating the voice recognition mode when the uttered voice of the user is not inputted within the second threshold time while the voice recognition mode is activated. 15. The method of claim 14, wherein the first threshold time is a duration in which the access to the voice recognition apparatus is maintained, and wherein the second threshold time is a duration in which the access to the voice recognition apparatus is maintained and the voice recognition mode operates in an activated status. 16. The method of claim 10, further comprising: when a control command to turn off the display apparatus is input, disconnecting the access to the voice recognition apparatus. 17. A non-transitory computer readable recording medium storing a program which is executed to perform a method of controlling a display apparatus, the method comprising: performing an access to a voice recognition apparatus that recognizes an uttered voice of a user when the display apparatus is turned on; transmitting voice information of the inputted uttered voice to the voice recognition apparatus when the uttered voice of the user is inputted; and receiving and displaying recognition result information on the uttered voice received from the voice recognition apparatus.
A display apparatus is provided. The display apparatus includes a communicator configured to communicate with a voice recognition apparatus that recognizes an uttered voice of a user, an input unit configured to receive the uttered voice of the user, a display unit configured to receive voice recognition result information about the uttered voice of the user received from the voice recognition apparatus and display the voice recognition result information, and a processor configured to, when the display apparatus is turned on, perform an access to the voice recognition apparatus by transmitting access request information to the voice recognition apparatus, and when the uttered voice is inputted through the input unit, transmit voice information on the uttered voice to the voice recognition apparatus through the communicator.1. A display apparatus comprising: a communicator configured to communicate with a voice recognition apparatus that recognizes an uttered voice of a user; an input unit configured to receive the uttered voice of the user; a display unit configured to receive voice recognition result information about the uttered voice of the user received from the voice recognition apparatus and display the voice recognition result information; and a processor configured to, when the display apparatus is turned on, perform an access to the voice recognition apparatus by transmitting access request information to the voice recognition apparatus, and when the uttered voice is inputted through the input unit, transmit voice information on the uttered voice to the voice recognition apparatus through the communicator. 2. The display apparatus of claim 1, wherein when the display apparatus is turned on and a voice recognition-related application is initialized, the processor performs an access to the voice recognition apparatus. 3. The display apparatus of claim 1, wherein when a predetermined event occurs within a first threshold time while the access to the voice recognition apparatus is maintained, the processor activates a voice recognition mode for recognizing the uttered voice of the user, and wherein the event comprises at least one of a first event in which a user command for operating in a voice recognition mode is received, a second event in which motion information is received from a remote control apparatus, and a third event in which an image regarding a motion of the user is inputted. 4. The display apparatus of claim 3, wherein when at least one event from among the first to third events does not occur within the first threshold time, the processor transmits dummy data for maintaining the access to the voice recognition apparatus. 5. The display apparatus of claim 3, wherein when the uttered voice of the user is not inputted within the second threshold time while the voice recognition mode is activated, the processor transmits the dummy data for maintaining the access to the voice recognition apparatus. 6. The display apparatus of claim 3, wherein when the uttered voice of the user is not inputted within the second threshold time while the voice recognition mode is activated, the processor deactivates the voice recognition mode. 7. The display apparatus of claim 6, wherein the first threshold time is a duration in which the access to the voice recognition apparatus is maintained, and wherein the second threshold time is a duration in which the access to the voice recognition apparatus is maintained and the voice recognition mode operates in an activated mode. 8. 
The display apparatus of claim 1, wherein when a control command to turn off the display apparatus is input, the processor disconnects the access to the voice recognition apparatus. 9. A method of controlling a display apparatus, the method comprising: performing an access to a voice recognition apparatus that recognizes an uttered voice of a user; when the uttered voice of the user is input, transmitting voice information about the input uttered voice of the user to the voice recognition apparatus; and receiving voice recognition result information about the uttered voice received from the voice recognition apparatus and displaying the voice recognition result information. 10. The method of claim 9, wherein when the display apparatus is turned on and a voice recognition-related application is initialized, the performing the access comprises performing an access to the voice recognition apparatus. 11. The method of claim 9, further comprising: activating a voice recognition mode for recognizing an uttered voice of a user when a predetermined event occurs within the first threshold time while the access to the voice recognition apparatus is maintained, wherein the event comprises a first event in which a user command for operating in a voice recognition mode is received, a second event in which motion information is received from a remote control apparatus, and a third event in which an image regarding a motion of the user is inputted. 12. The method of claim 11, wherein when at least one event from among the first to third events does not occur, the maintaining the access comprises transmitting dummy data for maintaining the access to the voice recognition apparatus. 13. The method of claim 11, wherein when the uttered voice of the user is not inputted within the second threshold time while the voice recognition mode is activated, the maintaining the access comprises transmitting dummy data for maintaining the access to the voice recognition apparatus. 14. The method of claim 11, further comprising: deactivating the voice recognition mode when the uttered voice of the user is not inputted within the second threshold time while the voice recognition mode is activated. 15. The method of claim 14, wherein the first threshold time is a duration in which the access to the voice recognition apparatus is maintained, and wherein the second threshold time is a duration in which the access to the voice recognition apparatus is maintained and the voice recognition mode operates in an activated status. 16. The method of claim 10, further comprising: when a control command to turn off the display apparatus is input, disconnecting the access to the voice recognition apparatus. 17. A non-transitory computer readable recording medium storing a program which is executed to perform a method of controlling a display apparatus, the method comprising: performing an access to a voice recognition apparatus that recognizes an uttered voice of a user when the display apparatus is turned on; transmitting voice information of the inputted uttered voice to the voice recognition apparatus when the uttered voice of the user is inputted; and receiving and displaying recognition result information on the uttered voice received from the voice recognition apparatus.
2,600
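The access-maintenance scheme in the display-apparatus record above (claims 1 and 3-8) amounts to a small keepalive state machine: transmit an access request on power-on, activate the recognition mode on one of three events, pad idle intervals with dummy data so the access survives, and disconnect on power-off. The following Python sketch is illustrative only; the class name, byte-level messages, transport interface, and both threshold values are assumptions, not anything the application specifies.

```python
# Hypothetical sketch of the keepalive logic in claims 1 and 3-8;
# every name and constant here is illustrative, not from the application.
import time

FIRST_THRESHOLD = 30.0   # seconds an idle access waits before a keepalive (assumed)
SECOND_THRESHOLD = 10.0  # seconds the active mode waits for an utterance (assumed)

class VoiceRecognitionClient:
    def __init__(self, transport):
        self.transport = transport       # any object with send(bytes) -- assumed interface
        self.connected = False
        self.mode_active = False
        self.last_event = time.monotonic()

    def on_power_on(self):
        """Claim 1: transmit access request information when the apparatus turns on."""
        self.transport.send(b"ACCESS_REQUEST")
        self.connected = True

    def on_event(self, event):
        """Claim 3: a user command, remote-control motion, or camera-motion event
        within the first threshold time activates the voice recognition mode."""
        if event in ("user_command", "remote_motion", "camera_motion"):
            self.mode_active = True
            self.last_event = time.monotonic()

    def tick(self):
        """Claims 4-6: send dummy data to keep the access alive, and deactivate
        the mode when no utterance arrives within the second threshold time."""
        idle = time.monotonic() - self.last_event
        if not self.mode_active and idle >= FIRST_THRESHOLD:
            self.transport.send(b"DUMMY")    # keepalive instead of dropping the access
            self.last_event = time.monotonic()
        elif self.mode_active and idle >= SECOND_THRESHOLD:
            self.mode_active = False         # claim 6: mode off, access stays up

    def on_utterance(self, pcm_bytes):
        """Claim 1: forward voice information while the mode is active."""
        if self.connected and self.mode_active:
            self.transport.send(pcm_bytes)
            self.last_event = time.monotonic()

    def on_power_off(self):
        """Claim 8: disconnect the access when the apparatus is turned off."""
        self.transport.send(b"DISCONNECT")
        self.connected = False
```

Claim 7's distinction falls out of the two constants: the first threshold bounds how long the bare access idles, while the second bounds how long the activated mode waits for speech.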
10,644
10,644
14,639,919
2,651
Audio loudness adjustment techniques are described. In one or more implementations, primary and secondary sound data originating as part of an audio signal is adjusted. For example, a loudness of the sound data is adjusted. To do so, the loudness, which indicates a sound intensity of the primary and secondary sound data, is determined. Adjustments are then computed for at least a portion of the audio signal based on a target dynamic range parameter, which defines a desired difference between the loudness of the primary and secondary sound data respectively. Based on the computed adjustments, a variety of actions may be performed, such as applying the adjustments to the audio signal to generate an adjusted audio signal in which the primary and secondary sound data substantially have the desired loudness difference. Further, a preview of the adjusted audio signal may be updated in real-time for display in a user interface.
1. In a digital audio environment to adjust primary and secondary sound data originating as part of an audio signal by one or more computing devices, a method comprising: determining loudness of the audio signal by the one or more computing devices, the loudness indicating a sound intensity of the primary and secondary sound data; computing adjustments to the loudness by the one or more computing devices for at least a portion of the audio signal based on a target dynamic range parameter that defines a desired difference between the loudness of the primary and secondary sound data respectively; and applying the computed adjustments by the one or more computing devices to the audio signal to generate an adjusted audio signal in which the primary and secondary sound data substantially have the desired difference in the loudness. 2. A method as described in claim 1, further comprising receiving an input to specify the target dynamic range parameter, the adjustments to the loudness being computed responsive to receiving the input. 3. A method as described in claim 2, wherein the input to specify the target dynamic range parameter is received via a single user interface element. 4. A method as described in claim 2, wherein the input is received via a user interface that includes waveform representations that represent the audio signal and a preview of the adjusted audio signal. 5. A method as described in claim 4, further comprising generating the user interface for display, including generating the waveform representation of the preview substantially in real-time, the waveform representation of the preview being updated as the input to specify the target dynamic range parameter is received. 6. A method as described in claim 5, wherein the waveform representation of the preview is generated prior to applying the computed adjustments to the audio signal to generate the adjusted audio signal. 7. A method as described in claim 1, wherein the adjustments result in the loudness of at least one of the primary or secondary data being substantially leveled over the audio signal. 8. A method as described in claim 1, wherein the adjustments result in the loudness of at least one of the primary or secondary data being amplified over the audio signal. 9. A method as described in claim 8, wherein an increase of the target dynamic range parameter increases the desired difference between the loudness of the primary and secondary sound data, and the adjustments are configured to adjust the loudness of the portion to result in the primary and secondary sound data substantially having the increased desired difference in the loudness. 10. A method as described in claim 1, wherein the primary data corresponds to speech, the secondary data corresponds to background noise, and the target dynamic range parameter defines the desired difference between the loudness of the speech and the loudness of the background noise. 11. 
In a digital audio environment to adjust primary and secondary sound data originating as part of an audio signal and to display a preview of adjusted sound data by one or more computing devices, a method comprising: generating a graphical user interface for display that includes: a first waveform representation configured to represent an unadulterated version of the audio signal; and a second waveform representation configured to represent an adjusted version of the audio signal that is adjustable based on input received via one or more user interface elements; and responsive to receiving input via one of the user interface elements to change a target dynamic range parameter that defines a desired difference in loudness between the primary and secondary sound data respectively, updating the second waveform representation to reflect adjustments to the loudness computed according to the input to change the target dynamic range parameter. 12. A method as described in claim 11, further comprising computing the adjustments to the loudness to result in the primary and secondary sound data having the desired difference in the loudness. 13. A method as described in claim 11, wherein the user interface element to adjust the target dynamic range parameter comprises a slider that enables the target dynamic range parameter to be increased or decreased. 14. A method as described in claim 11, wherein the one or more user interface elements include separate amplification and leveling user interface elements, the amplification user interface element enabling amplification adjustments to be made to the primary and secondary sound data, the leveling user interface element enabling leveling adjustments to be made to the primary and secondary sound data, and the input received via the one user interface element to adjust the target dynamic range parameter effective to make both the amplification and the leveling adjustments to the primary and secondary sound data independent of inputs received via the amplification and leveling user interface elements. 15. A method as described in claim 11, wherein the second waveform representation is updated for display in the user interface without generating the adjusted version of the audio signal. 16. A method as described in claim 11, further comprising: receiving additional input via the one or more user interface elements to apply the computed adjustments to the audio signal; and generating the adjusted version of the audio signal by adjusting the audio signal in accordance with the computed adjustments. 17. A method as described in claim 11, further comprising outputting the adjusted version of the audio signal via an audio output device. 18. 
A system implemented in a digital audio environment to adjust primary and secondary sound data originating as part of an audio signal, the system comprising: a loudness adjustment module, implemented at least partially in hardware, to: change a target dynamic range parameter that defines a desired difference between a loudness of the primary and the secondary sound data responsive to receiving input via a user interface to make the change; and compute adjustments to the loudness for at least a portion of the audio signal responsive to receipt of the input and to result in the primary and secondary sound data substantially having the desired difference in the loudness; and a display device to display via the user interface a preview of a new audio signal that reflects application of the computed loudness adjustments to the audio signal. 19. A system as described in claim 18, wherein the preview of the new audio signal comprises a waveform representation of the new audio signal. 20. A system as described in claim 18, wherein the preview of the new audio signal is updated for display substantially in real-time in conjunction with receiving the input to change the target dynamic range parameter.
Audio loudness adjustment techniques are described. In one or more implementations, primary and secondary sound data originating as part of an audio signal is adjusted. For example, a loudness of the sound data is adjusted. To do so, the loudness, which indicates a sound intensity of the primary and secondary sound data, is determined. Adjustments are then computed for at least a portion of the audio signal based on a target dynamic range parameter, which defines a desired difference between the loudness of the primary and secondary sound data respectively. Based on the computed adjustments, a variety of actions may be performed, such as applying the adjustments to the audio signal to generate an adjusted audio signal in which the primary and secondary sound data substantially have the desired loudness difference. Further, a preview of the adjusted audio signal may be updated in real-time for display in a user interface.1. In a digital audio environment to adjust primary and secondary sound data originating as part of an audio signal by one or more computing devices, a method comprising: determining loudness of the audio signal by the one or more computing devices, the loudness indicating a sound intensity of the primary and secondary sound data; computing adjustments to the loudness by the one or more computing devices for at least a portion of the audio signal based on a target dynamic range parameter that defines a desired difference between the loudness of the primary and secondary sound data respectively; and applying the computed adjustments by the one or more computing devices to the audio signal to generate an adjusted audio signal in which the primary and secondary sound data substantially have the desired difference in the loudness. 2. A method as described in claim 1, further comprising receiving an input to specify the target dynamic range parameter, the adjustments to the loudness being computed responsive to receiving the input. 3. A method as described in claim 2, wherein the input to specify the target dynamic range parameter is received via a single user interface element. 4. A method as described in claim 2, wherein the input is received via a user interface that includes waveform representations that represent the audio signal and a preview of the adjusted audio signal. 5. A method as described in claim 4, further comprising generating the user interface for display, including generating the waveform representation of the preview substantially in real-time, the waveform representation of the preview being updated as the input to specify the target dynamic range parameter is received. 6. A method as described in claim 5, wherein the waveform representation of the preview is generated prior to applying the computed adjustments to the audio signal to generate the adjusted audio signal. 7. A method as described in claim 1, wherein the adjustments result in the loudness of at least one of the primary or secondary data being substantially leveled over the audio signal. 8. A method as described in claim 1, wherein the adjustments result in the loudness of at least one of the primary or secondary data being amplified over the audio signal. 9. 
A method as described in claim 8, wherein an increase of the target dynamic range parameter increases the desired difference between the loudness of the primary and secondary sound data, and the adjustments are configured to adjust the loudness of the portion to result in the primary and secondary sound data substantially having the increased desired difference in the loudness. 10. A method as described in claim 1, wherein the primary data corresponds to speech, the secondary data corresponds to background noise, and the target dynamic range parameter defines the desired difference between the loudness of the speech and the loudness of the background noise. 11. In a digital audio environment to adjust primary and secondary sound data originating as part of an audio signal and to display a preview of adjusted sound data by one or more computing devices, a method comprising: generating a graphical user interface for display that includes: a first waveform representation configured to represent an unadulterated version of the audio signal; and a second waveform representation configured to represent an adjusted version of the audio signal that is adjustable based on input received via one or more user interface elements; and responsive to receiving input via one of the user interface elements to change a target dynamic range parameter that defines a desired difference in loudness between the primary and secondary sound data respectively, updating the second waveform representation to reflect adjustments to the loudness computed according to the input to change the target dynamic range parameter. 12. A method as described in claim 11, further comprising computing the adjustments to the loudness to result in the primary and secondary sound data having the desired difference in the loudness. 13. A method as described in claim 11, wherein the user interface element to adjust the target dynamic range parameter comprises a slider that enables the target dynamic range parameter to be increased or decreased. 14. A method as described in claim 11, wherein the one or more user interface elements include separate amplification and leveling user interface elements, the amplification user interface element enabling amplification adjustments to be made to the primary and secondary sound data, the leveling user interface element enabling leveling adjustments to be made to the primary and secondary sound data, and the input received via the one user interface element to adjust the target dynamic range parameter effective to make both the amplification and the leveling adjustments to the primary and secondary sound data independent of inputs received via the amplification and leveling user interface elements. 15. A method as described in claim 11, wherein the second waveform representation is updated for display in the user interface without generating the adjusted version of the audio signal. 16. A method as described in claim 11, further comprising: receiving additional input via the one or more user interface elements to apply the computed adjustments to the audio signal; and generating the adjusted version of the audio signal by adjusting the audio signal in accordance with the computed adjustments. 17. A method as described in claim 11, further comprising outputting the adjusted version of the audio signal via an audio output device. 18. 
A system implemented in a digital audio environment to adjust primary and secondary sound data originating as part of an audio signal, the system comprising: a loudness adjustment module, implemented at least partially in hardware, to: change a target dynamic range parameter that defines a desired difference between a loudness of the primary and the secondary sound data responsive to receiving input via a user interface to make the change; and compute adjustments to the loudness for at least a portion of the audio signal responsive to receipt of the input and to result in the primary and secondary sound data substantially having the desired difference in the loudness; and a display device to display via the user interface a preview of a new audio signal that reflects application of the computed loudness adjustments to the audio signal. 19. A system as described in claim 18, wherein the preview of the new audio signal comprises a waveform representation of the new audio signal. 20. A system as described in claim 18, wherein the preview of the new audio signal is updated for display substantially in real-time in conjunction with receiving the input to change the target dynamic range parameter.
2,600
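Claim 1 of the loudness record above reduces to a concrete computation: measure the loudness of the primary and secondary sound data, compare their difference to the target dynamic range parameter, and derive gains that close the gap. The sketch below uses RMS level in dB as a stand-in loudness measure and splits the correction evenly between the two parts; both choices, like all names and values here, are assumptions rather than the application's actual method.

```python
# Illustrative gain computation for claim 1; the dB loudness measure and the
# 50/50 split of the correction are assumptions, not the patented technique.
import numpy as np

def loudness_db(x):
    """RMS level of a segment in dB, standing in for a true loudness model."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

def compute_gains(primary, secondary, target_range_db):
    """Return (primary_gain_db, secondary_gain_db) so the adjusted parts differ
    by target_range_db, boosting the primary and cutting the secondary equally."""
    current = loudness_db(primary) - loudness_db(secondary)
    error = target_range_db - current
    return +error / 2.0, -error / 2.0

# Toy example: a 220 Hz tone as "speech" and white noise as "background",
# roughly 3 dB apart, pushed to a 20 dB target dynamic range.
t = np.linspace(0.0, 1.0, 48_000)
speech = 0.1 * np.sin(2 * np.pi * 220 * t)
noise = 0.05 * np.random.default_rng(0).standard_normal(t.size)

g_p, g_s = compute_gains(speech, noise, target_range_db=20.0)
adjusted_speech = speech * 10 ** (g_p / 20)
adjusted_noise = noise * 10 ** (g_s / 20)
print(round(loudness_db(adjusted_speech) - loudness_db(adjusted_noise), 1))  # 20.0
```

The real-time preview of claims 4-6 and 15 follows naturally from this shape: the gains can be computed without rendering the adjusted signal, so a waveform preview can be redrawn on every slider change and the signal itself generated only when the user commits.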
10,645
10,645
15,179,409
2,668
Methods and systems for performing image analytics using graphical reporting associated with clinical images. One system includes at least one data source and a server. The server includes an electronic processor and an interface for communicating with the data source. The electronic processor is configured to receive training information from the at least one data source over the interface. The training information includes a plurality of images and graphical reporting associated with each of the plurality of images. Each graphical reporting includes a graphical marker designating a portion of one of the plurality of images and diagnostic information associated with the portion of the one of the plurality of images. The electronic processor is also configured to perform machine learning to develop a model using the training information. The model is used to automatically analyze an image.
1. A system for performing image analytics using graphical reporting associated with clinical images, the system comprising: at least one data source; and a server including an electronic processor and an interface for communicating with the data source, the electronic processor configured to receive training information from the at least one data source over the interface, the training information including a plurality of images and graphical reporting associated with each of the plurality of images, each graphical reporting including a graphical marker designating a portion of one of the plurality of images and diagnostic information associated with the portion of the one of the plurality of images, and perform machine learning to develop a model using the training information, wherein the model is used to automatically analyze an image. 2. The system of claim 1, wherein the graphical marker includes a one-dimensional graphical marker, a two-dimensional graphical marker, or a three-dimensional graphical marker. 3. The system of claim 1, wherein the graphical marker was manually generated by a diagnosing physician. 4. The system of claim 1, wherein the graphical marker was automatically generated by a computer system. 5. The system of claim 1, wherein the diagnostic information includes a classification of a plurality of categories. 6. The system of claim 1, wherein the diagnostic information includes a normal category, an abnormal category, and an indeterminate category. 7. The system of claim 1, wherein the diagnostic information includes a probability. 8. The system of claim 1, wherein the diagnostic information includes a diagnosis. 9. The system of claim 1, wherein the diagnostic information includes an anatomical structure. 10. The system of claim 1, wherein the diagnostic information includes a measurement or a location. 11. The system of claim 1, wherein the diagnostic information includes morphological characteristics of an anomaly within the portion. 12. The system of claim 1, wherein the training information further includes a Digital Imaging and Communications in Medicine (DICOM) header file associated with at least one of the plurality of images. 13. The system of claim 1, wherein the training information further includes procedure information associated with at least one of the plurality of images. 14. The system of claim 1, wherein the training information further includes patient information associated with at least one of the plurality of images. 15. The system of claim 14, wherein the electronic processor is configured to receive the patient information from a hospital information system (HIS) or an electronic medical record (EMR). 16. The system of claim 1, wherein the training information further includes a report associated with at least one of the plurality of images. 17. The system of claim 16, wherein the report includes an order, a DICOM structured radiology report, a pathology report, or a test result. 18. The system of claim 1, wherein the training information further includes an imaging property of a first image included in the plurality of images, the imaging property including at least one selected from a group consisting of a first indication of whether a contrast agent was used, a second indication of whether a radioactive isotope was used, a time after an agent or isotope was introduced, an orientation, and an image acquisition parameter. 19. 
The system of claim 1, wherein the at least one data source includes a picture archiving and communications system (PACS), a vendor neutral archive, a radiology information system (RIS), an electronic medical record (EMR), a hospital information system (HIS), an image study ordering system, or a computer assisted detection (CAD) system. 20. The system of claim 1, wherein the electronic processor is configured to perform the machine learning to develop the model using the training information by evaluating a first portion designated by a first graphical marker within a first image included in the plurality of images on a second image included in the plurality of images, the second image acquired under a different imaging condition than the first image. 21. The system of claim 20, wherein the different imaging condition includes at least one selected from a group consisting of a different image within the same exam, a different exam, a different time after injection of an agent or isotope, a different magnetic resonance imaging (MRI) acquisition parameter, and a different radiation parameter. 22. The system of claim 1, wherein the electronic processor is configured to perform the machine learning to develop the model using the training information by comparing an anatomical position associated with a first image included in the plurality of images with an anatomical position in a second plurality of images acquired using different image acquisition properties than the first image. 23. The system of claim 1, wherein the electronic processor is configured to perform the machine learning to develop the model using the training information by obtaining a clinical result associated with a first image included in the plurality of images, performing a comparison of the clinical result and the diagnostic information associated with the first image, assigning a weight to the diagnostic information based on the comparison, and developing the model using the training information based on the weight. 24. The system of claim 23, wherein the clinical result includes at least one image. 25. The system of claim 23, wherein the clinical result includes at least one laboratory result. 26. The system of claim 1, further comprising updating the model based on a clinical result associated with a first image included in the plurality of images. 27. The system of claim 26, wherein the clinical result includes at least one image. 28. The system of claim 26, wherein the clinical result includes a laboratory result. 29. The system of claim 1, further comprising updating the model based on feedback designating a correctness of the training information. 30. The system of claim 1, wherein the training information includes a data record specifying coordinates of a first graphical marker of a first image included in the plurality of images. 31. The system of claim 1, wherein the training information includes a first image included in the plurality of images, wherein a first graphical marker is superimposed on the first image and wherein the electronic processor is configured to automatically detect the first graphical marker within the first image.
Methods and systems for performing image analytics using graphical reporting associated with clinical images. One system includes at least one data source and a server. The server includes an electronic processor and an interface for communicating with the data source. The electronic processor is configured to receive training information from the at least one data source over the interface. The training information includes a plurality of images and graphical reporting associated with each of the plurality of images. Each graphical reporting includes a graphical marker designating a portion of one of the plurality of images and diagnostic information associated with the portion of the one of the plurality of images. The electronic processor is also configured to perform machine learning to develop a model using the training information. The model is used to automatically analyze an image.1. A system for performing image analytics using graphical reporting associated with clinical images, the system comprising: at least one data source; and a server including an electronic processor and an interface for communicating with the data source, the electronic processor configured to receive training information from the at least one data source over the interface, the training information including a plurality of images and graphical reporting associated with each of the plurality of images, each graphical reporting including a graphical marker designating a portion of one of the plurality of images and diagnostic information associated with the portion of the one of the plurality of images, and perform machine learning to develop a model using the training information, wherein the model is used to automatically analyze an image. 2. The system of claim 1, wherein the graphical marker includes a one-dimensional graphical marker, a two-dimensional graphical marker, or a three-dimensional graphical marker. 3. The system of claim 1, wherein the graphical marker was manually generated by a diagnosing physician. 4. The system of claim 1, wherein the graphical marker was automatically generated by a computer system. 5. The system of claim 1, wherein the diagnostic information includes a classification of a plurality of categories. 6. The system of claim 1, wherein the diagnostic information includes a normal category, an abnormal category, and an indeterminate category. 7. The system of claim 1, wherein the diagnostic information includes a probability. 8. The system of claim 1, wherein the diagnostic information includes a diagnosis. 9. The system of claim 1, wherein the diagnostic information includes an anatomical structure. 10. The system of claim 1, wherein the diagnostic information includes a measurement or a location. 11. The system of claim 1, wherein the diagnostic information includes morphological characteristics of an anomaly within the portion. 12. The system of claim 1, wherein the training information further includes a Digital Imaging and Communications in Medicine (DICOM) header file associated with at least one of the plurality of images. 13. The system of claim 1, wherein the training information further includes procedure information associated with at least one of the plurality of images. 14. The system of claim 1, wherein the training information further includes patient information associated with at least one of the plurality of images. 15. 
The system of claim 14, wherein the electronic processor is configured to receive the patient information from a hospital information system (HIS) or an electronic medical record (EMR). 16. The system of claim 1, wherein the training information further includes a report associated with at least one of the plurality of images. 17. The system of claim 16, wherein the report includes an order, a DICOM structured radiology report, a pathology report, or a test result. 18. The system of claim 1, wherein the training information further includes an imaging property of a first image included in the plurality of images, the imaging property including at least one selected from a group consisting of a first indication of whether a contrast agent was used, a second indication of whether a radioactive isotope was used, a time after an agent or isotope was introduced, an orientation, and an image acquisition parameter. 19. The system of claim 1, wherein the at least one data source includes a picture archiving and communications system (PACS), a vendor neutral archive, a radiology information system (RIS), an electronic medical record (EMR), a hospital information system (HIS), an image study ordering system, or a computer assisted detection (CAD) system. 20. The system of claim 1, wherein the electronic processor is configured to perform the machine learning to develop the model using the training information by evaluating a first portion designated by a first graphical marker within a first image included in the plurality of images on a second image included in the plurality of images, the second image acquired under a different imaging condition than the first image. 21. The system of claim 20, wherein the different imaging condition includes at least one selected from a group consisting of a different image within the same exam, a different exam, a different time after injection of an agent or isotope, a different magnetic resonance imaging (MRI) acquisition parameter, and a different radiation parameter. 22. The system of claim 1, wherein the electronic processor is configured to perform the machine learning to develop the model using the training information by comparing an anatomical position associated with a first image included in the plurality of images with an anatomical position in a second plurality of images acquired using different image acquisition properties than the first image. 23. The system of claim 1, wherein the electronic processor is configured to perform the machine learning to develop the model using the training information by obtaining a clinical result associated with a first image included in the plurality of images, performing a comparison of the clinical result and the diagnostic information associated with the first image, assigning a weight to the diagnostic information based on the comparison, and developing the model using the training information based on the weight. 24. The system of claim 23, wherein the clinical result includes at least one image. 25. The system of claim 23, wherein the clinical result includes at least one laboratory result. 26. The system of claim 1, further comprising updating the model based on a clinical result associated with a first image included in the plurality of images. 27. The system of claim 26, wherein the clinical result includes at least one image. 28. The system of claim 26, wherein the clinical result includes a laboratory result. 29. 
The system of claim 1, further comprising updating the model based on feedback designating a correctness of the training information. 30. The system of claim 1, wherein the training information includes a data record specifying coordinates of a first graphical marker of a first image included in the plurality of images. 31. The system of claim 1, wherein the training information includes a first image included in the plurality of images, wherein a first graphical marker is superimposed on the first image and wherein the electronic processor is configured to automatically detect the first graphical marker within the first image.
2,600
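The training setup in the image-analytics record above (claims 1, 2, and 23) pairs each graphical marker with the region it designates and the diagnostic information attached to it, optionally weighting the example by agreement with a later clinical result. A minimal sketch, assuming a bounding-box marker format, a dict of 2-D image arrays, and an invented weighting rule:

```python
# Hypothetical assembly of the weighted training set of claims 1 and 23;
# the record layout, box format, and weight values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GraphicalReport:
    image_id: str
    marker: tuple    # (x, y, w, h) region outlined by the reader: claim 2's 2-D marker
    diagnosis: str   # diagnostic information tied to that region: claim 1

def example_weight(report, clinical_results):
    """Claim 23: weight an example by whether its label agrees with a later
    clinical result (biopsy, lab test, follow-up image); values are invented."""
    confirmed = clinical_results.get(report.image_id)
    if confirmed is None:
        return 1.0                        # no follow-up: neutral weight
    return 2.0 if confirmed == report.diagnosis else 0.25

def build_training_set(reports, images, clinical_results):
    """Crop each marked region and pair it with its label and weight;
    images maps image_id to a 2-D pixel array that supports slice indexing."""
    dataset = []
    for r in reports:
        x, y, w, h = r.marker
        patch = images[r.image_id][y:y + h, x:x + w]
        dataset.append((patch, r.diagnosis, example_weight(r, clinical_results)))
    return dataset
```

Down-weighting rather than discarding contradicted examples is one plausible reading of claim 23's "assigning a weight to the diagnostic information based on the comparison"; the claim itself does not fix the weighting function.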
10,646
10,646
15,303,648
2,628
A method comprising causing display of information on a head mounted display that is worn by a user, receiving eye movement information associated with the user, receiving head movement information associated with the user, determining that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display, and decreasing prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display is disclosed.
1-20. (canceled) 21. An apparatus, comprising: at least one processor; at least one memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: causation of display of information on a head mounted display that is worn by a user; receipt of eye movement information associated with the user; receipt of head movement information associated with the user; determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display; and decrease of prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display. 22. The apparatus of claim 21, wherein the decrease of prominence of the information on the head mounted display comprises causation of increase of visual permeability of, at least part of, the head mounted display. 23. The apparatus of claim 21, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a head deviation direction is opposite to an eye deviation direction. 24. The apparatus of claim 21, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a magnitude of the eye movement is proportional to a magnitude of the head movement. 25. The apparatus of claim 21, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform: receipt of other eye movement information associated with the user; receipt of other head movement information associated with the user; determination that the other eye movement information and the other head movement information are consistent with the user viewing the information on the head mounted display; and increase of prominence of the information on the head mounted display based, at least in part, on the determination that the other eye movement information and the other head movement information are consistent with the user viewing the information on the head mounted display. 26. The apparatus of claim 25, wherein receipt of the other eye movement information is performed subsequent to the decrease of prominence of the information on the head mounted display. 27. The apparatus of claim 21, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform: determination that the eye movement information and the head movement information are consistent with the user viewing the information on the head mounted display; and retention of prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are consistent with the user viewing the information on the head mounted display. 28. The apparatus of claim 27, wherein the retention of the prominence of the information on the head mounted display is performed prior to the decrease of prominence of the information on the head mounted display. 29. 
The apparatus of claim 21, wherein the apparatus comprises a display. 30. A method comprising: causing display of information on a head mounted display that is worn by a user; receiving eye movement information associated with the user; receiving head movement information associated with the user; determining that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display; and decreasing prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display. 31. The method of claim 30, wherein the decrease of prominence of the information on the head mounted display comprises causation of increase of visual permeability of, at least part of, the head mounted display. 32. The method of claim 30, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a head deviation direction is opposite to an eye deviation direction. 33. The method of claim 30, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a magnitude of the eye movement is proportional to a magnitude of the head movement. 34. The method of claim 30, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform: receipt of other eye movement information associated with the user; receipt of other head movement information associated with the user; determination that the other eye movement information and the other head movement information are consistent with the user viewing the information on the head mounted display; and increase of prominence of the information on the head mounted display based, at least in part, on the determination that the other eye movement information and the other head movement information are consistent with the user viewing the information on the head mounted display. 35. The method of claim 34, wherein receipt of the other eye movement information is performed subsequent to the decrease of prominence of the information on the head mounted display. 36. The method of claim 30, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform: determination that the eye movement information and the head movement information are consistent with the user viewing the information on the head mounted display; and retention of prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are consistent with the user viewing the information on the head mounted display. 37. 
At least one computer-readable medium encoded with instructions that, when executed by a processor, perform: causation of display of information on a head mounted display that is worn by a user; receipt of eye movement information associated with the user; receipt of head movement information associated with the user; determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display; and decrease of prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display. 38. The medium of claim 37, wherein the decrease of prominence of the information on the head mounted display comprises causation of increase of visual permeability of, at least part of, the head mounted display. 39. The medium of claim 37, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a head deviation direction is opposite to an eye deviation direction. 40. The medium of claim 37, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a magnitude of the eye movement is proportional to a magnitude of the head movement.
A method comprising causing display of information on a head mounted display that is worn by a user, receiving eye movement information associated with the user, receiving head movement information associated with the user, determining that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display, and decreasing prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display is disclosed.1-20. (canceled) 21. An apparatus, comprising: at least one processor; at least one memory including computer program code, the memory and the computer program code configured to, working with the processor, cause the apparatus to perform at least the following: causation of display of information on a head mounted display that is worn by a user; receipt of eye movement information associated with the user; receipt of head movement information associated with the user; determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display; and decrease of prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display. 22. The apparatus of claim 21, wherein the decrease of prominence of the information on the head mounted display comprises causation of increase of visual permeability of, at least part of, the head mounted display. 23. The apparatus of claim 21, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a head deviation direction is opposite to an eye deviation direction. 24. The apparatus of claim 21, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a magnitude of the eye movement is proportional to a magnitude of the head movement. 25. The apparatus of claim 21, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform: receipt of other eye movement information associated with the user; receipt of other head movement information associated with the user; determination that the other eye movement information and the other head movement information are consistent with the user viewing the information on the head mounted display; and increase of prominence of the information on the head mounted display based, at least in part, on the determination that the other eye movement information and the other head movement information are consistent with the user viewing the information on the head mounted display. 26. The apparatus of claim 25, wherein receipt of the other eye movement information is performed subsequent to the decrease of prominence of the information on the head mounted display. 27. 
The apparatus of claim 21, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform: determination that the eye movement information and the head movement information are consistent with the user viewing the information on the head mounted display; and retention of prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are consistent with the user viewing the information on the head mounted display. 28. The apparatus of claim 27, wherein the retention of the prominence of the information on the head mounted display is performed prior to the decrease of prominence of the information on the head mounted display. 29. The apparatus of claim 21, wherein the apparatus comprises a display. 30. A method comprising: causing display of information on a head mounted display that is worn by a user; receiving eye movement information associated with the user; receiving head movement information associated with the user; determining that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display; and decreasing prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display. 31. The method of claim 30, wherein the decrease of prominence of the information on the head mounted display comprises causation of increase of visual permeability of, at least part of, the head mounted display. 32. The method of claim 30, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a head deviation direction is opposite to an eye deviation direction. 33. The method of claim 30, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a magnitude of the eye movement is proportional to a magnitude of the head movement. 34. The method of claim 30, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform: receipt of other eye movement information associated with the user; receipt of other head movement information associated with the user; determination that the other eye movement information and the other head movement information are consistent with the user viewing the information on the head mounted display; and increase of prominence of the information on the head mounted display based, at least in part, on the determination that the other eye movement information and the other head movement information are consistent with the user viewing the information on the head mounted display. 35. The method of claim 34, wherein receipt of the other eye movement information is performed subsequent to the decrease of prominence of the information on the head mounted display. 36. 
The method of claim 30, wherein the memory includes computer program code configured to, working with the processor, cause the apparatus to perform: determination that the eye movement information and the head movement information are consistent with the user viewing the information on the head mounted display; and retention of prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are consistent with the user viewing the information on the head mounted display. 37. At least one computer-readable medium encoded with instructions that, when executed by a processor, perform: causation of display of information on a head mounted display that is worn by a user; receipt of eye movement information associated with the user; receipt of head movement information associated with the user; determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display; and decrease of prominence of the information on the head mounted display based, at least in part, on the determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display. 38. The medium of claim 37, wherein the decrease of prominence of the information on the head mounted display comprises causation of increase of visual permeability of, at least part of, the head mounted display. 39. The medium of claim 37, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a head deviation direction is opposite to an eye deviation direction. 40. The medium of claim 37, wherein determination that the eye movement information and the head movement information are inconsistent with the user viewing the information on the head mounted display comprises determination that a magnitude of the eye movement is proportional to a magnitude of the head movement.
2,600
10,647
10,647
15,833,172
2,647
Remote triggering of a communication through a computing device is discussed herein. In the context of home security devices or a personal emergency response system (PERS) (e.g., a local device), the local device can provide an alert to a user equipment (e.g., a remote device). The remote device can respond to the alert by instructing the local device to initiate a communication between the local device, the remote device, and a network device (e.g., implemented as a public-safety answering point (PSAP)), wherein a location associated with the network device is based at least in part on a location of the local device. The alerts and communications can be directed or routed by a computing device (e.g., implemented as a communication server). A communication identifier can be associated with the local device and the remote device to allow for a communication to be reestablished in the event of an interruption.
1. A system comprising: one or more processors; a memory; and one or more components stored in the memory and executable by the one or more processors to perform operations comprising: transmitting an alert associated with a computing device to a user equipment, the alert indicative of a security event associated with an environment in which the computing device is installed; receiving, from the user equipment, a request to initiate a first communication based at least in part on the alert; initiating the first communication between the computing device and the user equipment; initiating a second communication between the computing device and a network device associated with a public-safety answering point, wherein a first location associated with the network device is based at least in part on a second location of the computing device; and transmitting data associated with the alert to at least one of the user equipment or the network device. 2. The system of claim 1, wherein the data associated with the alert comprises at least one of first audio data or first image data, and wherein the operations further comprise: transmitting, by the computing device, at least one of second audio data or second image data to the user equipment prior to initiating the first communication between the computing device and the user equipment. 3. The system of claim 1, wherein the operations further comprise: transmitting an invitation to the user equipment to receive one or more alerts from the computing device; receiving, from the user equipment, an indication of an acceptance of the invitation; and associating, based at least in part on the indication, a communication identifier between the user equipment and the computing device. 4. The system of claim 3, wherein the operations further comprise: determining that the second communication has ended; receiving, from the network device, a request to initiate a third communication between the network device and the computing device; and initiating, based at least in part on the request and based at least in part on the communication identifier, the third communication between the network device, the computing device, and the user equipment. 5. The system of claim 1, wherein the first communication and the second communication collectively form a conference call between the computing device, the user equipment, and the network device. 6. A system comprising: one or more processors; a memory; and one or more components stored in the memory and executable by the one or more processors to perform operations comprising: transmitting data associated with a computing device to a user equipment; receiving, from the user equipment, an indication based at least in part on the data; and initiating a communication between at least the user equipment and a network device, wherein a first location associated with the network device is based at least in part on a second location of the computing device. 7. The system of claim 6, wherein the computing device is a personal emergency response system (PERS). 8. The system of claim 6, wherein the computing device is a home security device. 9. The system of claim 6, wherein the indication includes a request to initiate the communication based at least in part on the data. 10. The system of claim 6, wherein the operations further comprise: associating a communication identifier between the user equipment and the computing device. 11. 
The system of claim 10, wherein the communication is a first communication, and wherein the operations further comprise: determining that the first communication has ended; receiving, from the network device, a request to initiate a second communication between the network device and at least one of a native number or the communication identifier associated with the computing device; and initiating, based at least in part on the request, the second communication between at least the network device and the user equipment. 12. The system of claim 6, wherein the operations further comprise: transmitting the indication to the computing device, wherein the communication is a cellular communication initiated by the computing device further based at least in part on the indication. 13. The system of claim 6, wherein the operations further comprise: receiving location information associated with the computing device; and determining the second location based at least in part on the location information. 14. The system of claim 6, wherein the operations further comprise: receiving identity information associated with the computing device; determining that the user equipment is associated with the computing device based at least in part on the identity information; and transmitting the data to the user equipment based at least in part on determining that the user equipment is associated with the computing device. 15. A processor-implemented method comprising: transmitting an alert associated with a computing device to a user equipment; receiving, from the user equipment, an indication based at least in part on the alert; and initiating a communication between the user equipment and a network device, wherein a first location associated with the network device is based at least in part on a second location of the computing device. 16. The processor-implemented method of claim 15, wherein the processor-implemented method further comprises: receiving the alert from the computing device, wherein the indication includes a request to initiate the communication based at least in part on the alert. 17. The processor-implemented method of claim 15, wherein the indication is a first indication, and wherein the processor-implemented method further comprises: transmitting an invitation to the user equipment to receive one or more alerts from the computing device; receiving, from the user equipment, a second indication of an acceptance of the invitation; and associating, based at least in part on the second indication, a communication identifier between the user equipment and the computing device. 18. The processor-implemented method of claim 17, wherein the communication is a first communication, and wherein the processor-implemented method further comprises: determining that the first communication has ended; receiving, from the network device, a request to initiate a second communication; and initiating, based at least in part on the request, the second communication between the network device and the user equipment. 19. The processor-implemented method of claim 15, wherein the processor-implemented method further comprises: transmitting the indication to the computing device, wherein the communication is a cellular communication initiated by the computing device further based at least in part on the indication. 20. 
The processor-implemented method of claim 15, wherein the processor-implemented method further comprises: receiving identity information associated with the computing device; determining that the user equipment is associated with the computing device based at least in part on the identity information; and transmitting the alert to the user equipment based at least in part on determining that the user equipment is associated with the computing device.
Remote triggering of a communication through a computing device is discussed herein. In the context of home security devices or a personal emergency response system (PERS) (e.g., a local device), the local device can provide an alert to a user equipment (e.g., a remote device). The remote device can respond to the alert by instructing the local device to initiate a communication between the local device, the remote device, and a network device (e.g., implemented as a public-safety answering point (PSAP)), wherein a location associated with the network device is based at least in part on a location of the local device. The alerts and communications can be directed or routed by a computing device (e.g., implemented as a communication server). A communication identifier can be associated with the local device and the remote device to allow for a communication to be reestablished in the event of an interruption.1. A system comprising: one or more processors; a memory; and one or more components stored in the memory and executable by the one or more processors to perform operations comprising: transmitting an alert associated with a computing device to a user equipment, the alert indicative of a security event associated with an environment in which the computing device is installed; receiving, from the user equipment, a request to initiate a first communication based at least in part on the alert; initiating the first communication between the computing device and the user equipment; initiating a second communication between the computing device and a network device associated with a public-safety answering point, wherein a first location associated with the network device is based at least in part on a second location of the computing device; and transmitting data associated with the alert to at least one of the user equipment or the network device. 2. The system of claim 1, wherein the data associated with the alert comprises at least one of first audio data or first image data, and wherein the operations further comprise: transmitting, by the computing device, at least one of second audio data or second image data to the user equipment prior to initiating the first communication between the computing device and the user equipment. 3. The system of claim 1, wherein the operations further comprise: transmitting an invitation to the user equipment to receive one or more alerts from the computing device; receiving, from the user equipment, an indication of an acceptance of the invitation; and associating, based at least in part on the indication, a communication identifier between the user equipment and the computing device. 4. The system of claim 3, wherein the operations further comprise: determining that the second communication has ended; receiving, from the network device, a request to initiate a third communication between the network device and the computing device; and initiating, based at least in part on the request and based at least in part on the communication identifier, the third communication between the network device, the computing device, and the user equipment. 5. The system of claim 1, wherein the first communication and the second communication collectively form a conference call between the computing device, the user equipment, and the network device. 6. 
A system comprising: one or more processors; a memory; and one or more components stored in the memory and executable by the one or more processors to perform operations comprising: transmitting data associated with a computing device to a user equipment; receiving, from the user equipment, an indication based at least in part on the data; and initiating a communication between at least the user equipment and a network device, wherein a first location associated with the network device is based at least in part on a second location of the computing device. 7. The system of claim 6, wherein the computing device is a personal emergency response system (PERS). 8. The system of claim 6, wherein the computing device is a home security device. 9. The system of claim 6, wherein the indication includes a request to initiate the communication based at least in part on the data. 10. The system of claim 6, wherein the operations further comprise: associating a communication identifier between the user equipment and the computing device. 11. The system of claim 10, wherein the communication is a first communication, and wherein the operations further comprise: determining that the first communication has ended; receiving, from the network device, a request to initiate a second communication between the network device and at least one of a native number or the communication identifier associated with the computing device; and initiating, based at least in part on the request, the second communication between at least the network device and the user equipment. 12. The system of claim 6, wherein the operations further comprise: transmitting the indication to the computing device, wherein the communication is a cellular communication initiated by the computing device further based at least in part on the indication. 13. The system of claim 6, wherein the operations further comprise: receiving location information associated with the computing device; and determining the second location based at least in part on the location information. 14. The system of claim 6, wherein the operations further comprise: receiving identity information associated with the computing device; determining that the user equipment is associated with the computing device based at least in part on the identity information; and transmitting the data to the user equipment based at least in part on determining that the user equipment is associated with the computing device. 15. A processor-implemented method comprising: transmitting an alert associated with a computing device to a user equipment; receiving, from the user equipment, an indication based at least in part on the alert; and initiating a communication between the user equipment and a network device, wherein a first location associated with the network device is based at least in part on a second location of the computing device. 16. The processor-implemented method of claim 15, wherein the processor-implemented method further comprises: receiving the alert from the computing device, wherein the indication includes a request to initiate the communication based at least in part on the alert. 17. 
The processor-implemented method of claim 15, wherein the indication is a first indication, and wherein the processor-implemented method further comprises: transmitting an invitation to the user equipment to receive one or more alerts from the computing device; receiving, from the user equipment, a second indication of an acceptance of the invitation; and associating, based at least in part on the second indication, a communication identifier between the user equipment and the computing device. 18. The processor-implemented method of claim 17, wherein the communication is a first communication, and wherein the processor-implemented method further comprises: determining that the first communication has ended; receiving, from the network device, a request to initiate a second communication; and initiating, based at least in part on the request, the second communication between the network device and the user equipment. 19. The processor-implemented method of claim 15, wherein the processor-implemented method further comprises: transmitting the indication to the computing device, wherein the communication is a cellular communication initiated by the computing device further based at least in part on the indication. 20. The processor-implemented method of claim 15, wherein the processor-implemented method further comprises: receiving identity information associated with the computing device; determining that the user equipment is associated with the computing device based at least in part on the identity information; and transmitting the alert to the user equipment based at least in part on determining that the user equipment is associated with the computing device.
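Claims 3-4 and 10-11 hinge on a communication identifier that pairs the local device with its user equipment so an interrupted call can be re-established by the PSAP. The sketch below shows one minimal way such bookkeeping could look on the communication server; the class, method names, and endpoint strings are hypothetical, not the application's implementation.

```python
import uuid

class CommunicationServer:
    """Assumed design sketch of the communication-identifier bookkeeping
    described in the claims: a proxy identifier ties a local device to its
    paired user equipment, so a dropped call can be re-bridged on callback."""

    def __init__(self):
        self.pairings = {}  # communication_id -> (local_device, user_equipment)

    def pair(self, local_device, user_equipment):
        # Issued once the user equipment accepts the invitation (claim 3).
        comm_id = str(uuid.uuid4())
        self.pairings[comm_id] = (local_device, user_equipment)
        return comm_id

    def reestablish(self, comm_id):
        # The PSAP calls back the communication identifier after an
        # interruption; the server bridges both original parties back in.
        local_device, user_equipment = self.pairings[comm_id]
        return {"bridge": [local_device, user_equipment, "psap"]}

server = CommunicationServer()
cid = server.pair("pers-unit-7", "ue-555-0100")  # hypothetical endpoints
print(server.reestablish(cid))
```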
2,600
10,648
10,648
15,636,359
2,611
Methods and systems for controlling a view of a virtual camera in a virtual world. A view of a user viewing a virtual world may be controlled or changed while accounting for the user's head position. For example, a virtual camera may be wrapped in a container such that rotation of the container causes rotation of the virtual camera relative to a global coordinate system. Based on a position of a head-mounted display, an initial virtual camera rotation angle relative to a global coordinate system of the virtual world may be identified. An indication to change the view to a particular direction may be received. A desired rotation angle relative to the global coordinate system for a view to correspond to the particular direction is then determined. The container is then rotated by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle.
1. A method for controlling a view of a virtual camera in a virtual world, the method comprising: based on a position of a head-mounted display (HMD), identifying an initial virtual camera rotation angle, relative to a global coordinate system of the virtual world, wherein the virtual camera is wrapped in a container such that rotation of the container causes rotation of the virtual camera relative to the global coordinate system; receiving an indication to change the view to a particular direction; identifying a desired rotation angle, relative to the global coordinate system, for a view to correspond to the particular direction; and rotating the container by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle. 2. The method of claim 1, wherein rotating the container causes a final virtual camera rotation angle to be equivalent to the desired rotation angle relative to the global coordinate system. 3. The method of claim 1, further comprising: displaying, on a display screen of the HMD, the view of the virtual camera according to the initial virtual camera angle; based on receiving an indication to change the view to a particular direction, fading the screen to black; and fading the screen from black to display the view of the virtual camera according to the final virtual camera rotation angle. 4. The method of claim 1, wherein receiving the indication comprises receiving a selection of a locomotion marker. 5. The method of claim 1, further comprising displaying a selectable locomotion marker, wherein the display of the selectable locomotion marker indicates the particular direction. 6. The method of claim 1, wherein the rotation angle comprises at least one of a pitch angle, a yaw angle, or a roll angle. 7. The method of claim 1, further comprising determining the rotation value by: subtracting the initial virtual camera rotation angle from 360 degrees to generate an intermediate value; and adding the intermediate value to the desired rotation angle to generate the rotation value. 8. A system comprising: a head mounted display (HMD); at least one processor operatively connected to the HMD; and a memory storing instructions that, when executed by the at least one processor, perform a set of operations comprising: based on a position of the HMD, identifying an initial virtual camera rotation angle of a virtual camera, relative to a global coordinate system of the virtual world, wherein the virtual camera is wrapped in a container such that rotation of the container causes rotation of the virtual camera relative to the global coordinate system; receiving an indication to change the view to a particular direction; identifying a desired rotation angle, relative to the global coordinate system, for a view to correspond to the particular direction; and rotating the container by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle. 9. The system of claim 8, wherein rotating the container causes a final virtual camera rotation angle to be equivalent to the desired rotation angle relative to the global coordinate system. 10. 
The system of claim 8, wherein the operations further comprise: displaying, on a display screen of the HMD, the view of the virtual camera according to the initial virtual camera angle; based on receiving an indication to change the view to a particular direction, fading the screen to black; and fading the screen from black to display the view of the virtual camera according to the final virtual camera rotation angle. 11. The system of claim 8, wherein receiving the indication comprises receiving a selection of a locomotion marker. 12. The system of claim 8, wherein the operations further comprise displaying a selectable locomotion marker, wherein the display of the selectable locomotion marker indicates the particular direction. 13. The system of claim 8, wherein the rotation angle comprises at least one of a pitch angle, a yaw angle, or a roll angle. 14. The system of claim 8, wherein the operations further comprise determining the rotation value by: subtracting the initial virtual camera rotation angle from 360 degrees to generate an intermediate value; and adding the intermediate value to the desired rotation angle to generate the rotation value. 15. A method for controlling a view of a virtual camera in a virtual world, the method comprising: based on a position of a head-mounted display (HMD), identifying an initial virtual camera rotation angle of a virtual camera, wherein the virtual camera is associated with a container such that rotation of the container causes rotation of the virtual camera relative to a global coordinate system; displaying, on a display screen of the HMD, the view of the virtual camera according to the initial virtual camera angle; displaying, on the display screen of the HMD, a locomotion marker; receiving a selection of the locomotion marker; identifying a particular direction for a view corresponding to the selected locomotion marker; identifying a desired rotation angle for a view to correspond to the particular direction; and rotating the container by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle. 16. The method of claim 15, wherein rotating the container causes a final virtual camera rotation angle to be equivalent to the desired rotation angle relative to the global coordinate system. 17. The method of claim 15, wherein the rotation angle comprises at least one of a pitch angle, a yaw angle, or a roll angle. 18. The method of claim 15, further comprising determining the rotation value by: subtracting the initial virtual camera rotation angle from 360 degrees to generate an intermediate value; and adding the intermediate value to the desired rotation angle to generate the rotation value. 19. The method of claim 15, further comprising determining whether a view of the user is within a predetermined area, and wherein displaying the locomotion marker is based on the view of the user being within the predetermined area. 20. The method of claim 15, further comprising displaying a snap zone and a view direction marker.
Methods and systems for controlling a view of a virtual camera in a virtual world. A view of a user viewing a virtual world may be controlled or changed while accounting for the user's head position. For example, a virtual camera may be wrapped in a container such that rotation of the container causes rotation of the virtual camera relative to a global coordinate system. Based on a position of a head-mounted display, an initial virtual camera rotation angle relative to a global coordinate system of the virtual world may be identified. An indication to change the view to a particular direction may be received. A desired rotation angle relative to the global coordinate system for a view to correspond to the particular direction is then determined. The container is then rotated by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle.1. A method for controlling a view of a virtual camera in a virtual world, the method comprising: based on a position of a head-mounted display (HMD), identifying an initial virtual camera rotation angle, relative to a global coordinate system of the virtual world, wherein the virtual camera is wrapped in a container such that rotation of the container causes rotation of the virtual camera relative to the global coordinate system; receiving an indication to change the view to a particular direction; identifying a desired rotation angle, relative to the global coordinate system, for a view to correspond to the particular direction; and rotating the container by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle. 2. The method of claim 1, wherein rotating the container causes a final virtual camera rotation angle to be equivalent to the desired rotation angle relative to the global coordinate system. 3. The method of claim 1, further comprising: displaying, on a display screen of the HMD, the view of the virtual camera according to the initial virtual camera angle; based on receiving an indication to change the view to a particular direction, fading the screen to black; and fading the screen from black to display the view of the virtual camera according to the final virtual camera rotation angle. 4. The method of claim 1, wherein receiving the indication comprises receiving a selection of a locomotion marker. 5. The method of claim 1, further comprising displaying a selectable locomotion marker, wherein the display of the selectable locomotion marker indicates the particular direction. 6. The method of claim 1, wherein the rotation angle comprises at least one of a pitch angle, a yaw angle, or a roll angle. 7. The method of claim 1, further comprising determining the rotation value by: subtracting the initial virtual camera rotation angle from 360 degrees to generate an intermediate value; and adding the intermediate value to the desired rotation angle to generate the rotation value. 8. 
A system comprising: a head mounted display (HMD); at least one processor operatively connected to the HMD; and a memory storing instructions that, when executed by the at least one processor, perform a set of operations comprising: based on a position of the HMD, identifying an initial virtual camera rotation angle of a virtual camera, relative to a global coordinate system of the virtual world, wherein the virtual camera is wrapped in a container such that rotation of the container causes rotation of the virtual camera relative to the global coordinate system; receiving an indication to change the view to a particular direction; identifying a desired rotation angle, relative to the global coordinate system, for a view to correspond to the particular direction; and rotating the container by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle. 9. The system of claim 8, wherein rotating the container causes a final virtual camera rotation angle to be equivalent to the desired rotation angle relative to the global coordinate system. 10. The system of claim 8, wherein the operations further comprise: displaying, on a display screen of the HMD, the view of the virtual camera according to the initial virtual camera angle; based on receiving an indication to change the view to a particular direction, fading the screen to black; and fading the screen from black to display the view of the virtual camera according to the final virtual camera rotation angle. 11. The system of claim 8, wherein receiving the indication comprises receiving a selection of a locomotion marker. 12. The system of claim 8, wherein the operations further comprise displaying a selectable locomotion marker, wherein the display of the selectable locomotion marker indicates the particular direction. 13. The system of claim 8, wherein the rotation angle comprises at least one of a pitch angle, a yaw angle, or a roll angle. 14. The system of claim 8, wherein the operations further comprise determining the rotation value by: subtracting the initial virtual camera rotation angle from 360 degrees to generate an intermediate value; and adding the intermediate value to the desired rotation angle to generate the rotation value. 15. A method for controlling a view of a virtual camera in a virtual world, the method comprising: based on a position of a head-mounted display (HMD), identifying an initial virtual camera rotation angle of a virtual camera, wherein the virtual camera is associated with a container such that rotation of the container causes rotation of the virtual camera relative to a global coordinate system; displaying, on a display screen of the HMD, the view of the virtual camera according to the initial virtual camera angle; displaying, on the display screen of the HMD, a locomotion marker; receiving a selection of the locomotion marker; identifying a particular direction for a view corresponding to the selected locomotion marker; identifying a desired rotation angle for a view to correspond to the particular direction; and rotating the container by a rotation value based at least on both the desired rotation angle and the initial virtual camera rotation angle. 16. The method of claim 15, wherein rotating the container causes a final virtual camera rotation angle to be equivalent to the desired rotation angle relative to the global coordinate system. 17. 
The method of claim 15, wherein the rotation angle comprises at least one of a pitch angle, a yaw angle, or a roll angle. 18. The method of claim 15, further comprising determining the rotation value by: subtracting the initial virtual camera rotation angle from 360 degrees to generate an intermediate value; and adding the intermediate value to the desired rotation angle to generate the rotation value. 19. The method of claim 15, further comprising determining whether a view of the user is within a predetermined area, and wherein displaying the locomotion marker is based on the view of the user being within the predetermined area. 20. The method of claim 15, further comprising displaying a snap zone and a view direction marker.
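Claims 7, 14, and 18 spell out the rotation-value arithmetic: subtract the initial camera angle from 360 degrees, then add the desired angle. A minimal Python sketch of that recipe follows; the mod-360 normalization and the function name are assumptions added for readability and do not change the resulting orientation.

```python
def container_rotation(initial_deg: float, desired_deg: float) -> float:
    """Rotation to apply to the container wrapping the virtual camera,
    per the claimed recipe: (360 - initial angle) + desired angle."""
    intermediate = 360.0 - initial_deg           # cancels the camera's current yaw
    return (intermediate + desired_deg) % 360.0  # net camera angle equals desired

# The user's head contributes 30 degrees of yaw; the view should end at 90.
# Rotating the container by 60 degrees leaves camera = 30 + 60 = 90 degrees.
print(container_rotation(30.0, 90.0))  # 60.0
```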
2,600
10,649
10,649
15,673,691
2,659
Techniques are described for automatically analyzing received values to determine their semantic meaning and apply one or more formatting modifications and/or emphases to the received values based on the determined semantic meaning. In one example, a value to be formatted based on a semantic context associated with at least two portions of the received value is received. In response, semantic rules associated with the received value are identified. The received value is semantically processed using the semantic rules, where processing includes identifying at least two portions of the value corresponding to their contexts. At least one formatting rule is determined as associated with the two or more semantic contexts, each formatting rule associated with a particular context. The formatting rules are applied to the corresponding portions of the received value associated with their semantic contexts to generate a modified version of the received value, which is then provided for presentation.
1. A computer-implemented method performed by one or more processors, the method comprising: receiving a value to be formatted based on a semantic context associated with at least two portions of the received value; in response to receiving the value to be formatted, automatically and without user input: identifying at least one semantic rule associated with the received value; semantically processing the received value using the at least one identified semantic rule, wherein semantically processing the received value comprises identifying at least two portions of the received value corresponding to two or more semantic contexts defined by the at least one semantic rule; determining at least one formatting rule from a plurality of formatting rules associated with at least one of the two or more identified semantic contexts, each formatting rule of the plurality of formatting rules associated with a particular semantic context; applying each of the at least one identified formatting rules to the at least two portions of the received value associated with the two or more identified semantic contexts to generate a modified version of the received value based on the at least one applied formatting rules; and providing the modified version of the received value for presentation. 2. The method of claim 1, wherein the received value is of a particular data type, and wherein the at least one semantic rule is associated with the particular data type. 3. The method of claim 2, further comprising, prior to identifying the at least one semantic rule associated with the received value, analyzing the received value to identify the particular data type of the received value. 4. The method of claim 3, wherein the particular data type is one of a plurality of data types, and wherein each of the plurality of data types is associated with a different set of semantic rules. 5. The method of claim 4, where each particular data type is associated with a particular set of formatting rules. 6. The method of claim 3, wherein analyzing the received value to identify the particular data type of the received value includes identifying metadata received from a calling entity associated with the received value, the metadata identifying the particular data type of the received value. 7. The method of claim 1, wherein the modified version of the received value comprises an image of a formatted version of the received value. 8. The method of claim 1, wherein the modified version of the received value comprises a formatted string with the at least one identified formatting rules applied to the corresponding at least two portions of the received value. 9. The method of claim 1, wherein the modified version of the received value includes metadata identifying each portion of the received value to be formatted and an identification of at least one formatting-related modification to be performed on each of the identified portions to be formatted. 10. The method of claim 9, wherein the received value is received from a calling application, wherein providing the modified version of the received value for presentation includes providing the received value and the metadata associated with the modified version of the received value to the calling application, and wherein the application of the formatting-related modifications included in the metadata are performed at runtime by the calling application. 11. 
The method of claim 1, wherein the received value comprises one of an International Standard Book Number (ISBN), a phone number, a social security number (SSN), a bank account number, an Internet Protocol version 6 (IPv6) address, a computer identification, and addressing or routing information. 12. A system comprising: at least one processor; and a memory communicatively coupled to the at least one processor, the memory storing instructions which, when executed, cause the at least one processor to perform operations comprising: receiving a value to be formatted based on a semantic context associated with at least two portions of the received value; in response to receiving the value to be formatted, automatically and without user input: identifying at least one semantic rule associated with the received value; semantically processing the received value using the at least one identified semantic rule, wherein semantically processing the received value comprises identifying at least two portions of the received value corresponding to two or more semantic contexts defined by the at least one semantic rule; determining at least one formatting rule from a plurality of formatting rules associated with at least one of the two or more identified semantic contexts, each formatting rule of the plurality of formatting rules associated with a particular semantic context; applying each of the at least one identified formatting rules to the at least two portions of the received value associated with the two or more identified semantic contexts to generate a modified version of the received value based on the at least one applied formatting rules; and providing the modified version of the received value for presentation. 13. The system of claim 12, wherein the received value is of a particular data type, and wherein the at least one semantic rule is associated with the particular data type. 14. The system of claim 13, the operations further comprising, prior to identifying the at least one semantic rule associated with the received value, analyzing the received value to identify the particular data type of the received value. 15. The system of claim 14, wherein the particular data type is one of a plurality of data types, and wherein each of the plurality of data types is associated with a different set of semantic rules, and where each particular data type is associated with a particular set of formatting rules, and wherein analyzing the received value to identify the particular data type of the received value includes identifying metadata received from a calling entity associated with the received value, the metadata identifying the particular data type of the received value. 16. The system of claim 12, wherein the modified version of the received value comprises a formatted string with the at least one identified formatting rules applied to the corresponding at least two portions of the received value. 17. 
The system of claim 12, wherein the modified version of the received value includes metadata identifying each portion of the received value to be formatted and an identification of at least one formatting-related modification to be performed on each of the identified portions to be formatted, wherein the received value is received from a calling application, wherein providing the modified version of the received value for presentation includes providing the received value and the metadata associated with the modified version of the received value to the calling application, and wherein the application of the formatting-related modifications included in the metadata are performed at runtime by the calling application. 18. A non-transitory computer-readable medium storing instructions which, when executed, cause at least one processor to perform operations comprising: receiving a value to be formatted based on a semantic context associated with at least two portions of the received value; in response to receiving the value to be formatted, automatically and without user input: identifying at least one semantic rule associated with the received value; semantically processing the received value using the at least one identified semantic rule, wherein semantically processing the received value comprises identifying at least two portions of the received value corresponding to two or more semantic contexts defined by the at least one semantic rule; determining at least one formatting rule from a plurality of formatting rules associated with at least one of the two or more identified semantic contexts, each formatting rule of the plurality of formatting rules associated with a particular semantic context; applying each of the at least one identified formatting rules to the at least two portions of the received value associated with the two or more identified semantic contexts to generate a modified version of the received value based on the at least one applied formatting rules; and providing the modified version of the received value for presentation. 19. The computer-readable medium of claim 18, wherein the received value is of a particular data type, and wherein the at least one semantic rule is associated with the particular data type, the operations further comprising, prior to identifying the at least one semantic rule associated with the received value, analyzing the received value to identify the particular data type of the received value. 20. The computer-readable medium of claim 19, wherein the particular data type is one of a plurality of data types, and wherein each of the plurality of data types is associated with a different set of semantic rules, and where each particular data type is associated with a particular set of formatting rules, and wherein analyzing the received value to identify the particular data type of the received value includes identifying metadata received from a calling entity associated with the received value, the metadata identifying the particular data type of the received value.
Techniques are described for automatically analyzing received values to determine their semantic meaning and apply one or more formatting modifications and/or emphases to the received values based on the determined semantic meaning. In one example, a value to be formatted based on a semantic context associated with at least two portions of the received value is received. In response, semantic rules associated with the received value are identified. The received value is semantically processed using the semantic rules, where processing includes identifying at least two portions of the value corresponding to their contexts. At least one formatting rule is determined as associated with the two or more semantic contexts, each formatting rule associated with a particular context. The formatting rules are applied to the corresponding portions of the received value associated with their semantic contexts to generate a modified version of the received value, which is then provided for presentation.1. A computer-implemented method performed by one or more processors, the method comprising: receiving a value to be formatted based on a semantic context associated with at least two portions of the received value; in response to receiving the value to be formatted, automatically and without user input: identifying at least one semantic rule associated with the received value; semantically processing the received value using the at least one identified semantic rule, wherein semantically processing the received value comprises identifying at least two portions of the received value corresponding to two or more semantic contexts defined by the at least one semantic rule; determining at least one formatting rule from a plurality of formatting rules associated with at least one of the two or more identified semantic contexts, each formatting rule of the plurality of formatting rules associated with a particular semantic context; applying each of the at least one identified formatting rules to the at least two portions of the received value associated with the two or more identified semantic contexts to generate a modified version of the received value based on the at least one applied formatting rules; and providing the modified version of the received value for presentation. 2. The method of claim 1, wherein the received value is of a particular data type, and wherein the at least one semantic rule is associated with the particular data type. 3. The method of claim 2, further comprising, prior to identifying the at least one semantic rule associated with the received value, analyzing the received value to identify the particular data type of the received value. 4. The method of claim 3, wherein the particular data type is one of a plurality of data types, and wherein each of the plurality of data types is associated with a different set of semantic rules. 5. The method of claim 4, where each particular data type is associated with a particular set of formatting rules. 6. The method of claim 3, wherein analyzing the received value to identify the particular data type of the received value includes identifying metadata received from a calling entity associated with the received value, the metadata identifying the particular data type of the received value. 7. The method of claim 1, wherein the modified version of the received value comprises an image of a formatted version of the received value. 8. 
The method of claim 1, wherein the modified version of the received value comprises a formatted string with the at least one identified formatting rules applied to the corresponding at least two portions of the received value. 9. The method of claim 1, wherein the modified version of the received value includes metadata identifying each portion of the received value to be formatted and an identification of at least one formatting-related modification to be performed on each of the identified portions to be formatted. 10. The method of claim 9, wherein the received value is received from a calling application, wherein providing the modified version of the received value for presentation includes providing the received value and the metadata associated with the modified version of the received value to the calling application, and wherein the application of the formatting-related modifications included in the metadata are performed at runtime by the calling application. 11. The method of claim 1, wherein the received value comprises one of an International Standard Book Number (ISBN), a phone number, a social security number (SSN), a bank account number, an Internet Protocol version 6 (IPv6) address, a computer identification, and addressing or routing information. 12. A system comprising: at least one processor; and a memory communicatively coupled to the at least one processor, the memory storing instructions which, when executed, cause the at least one processor to perform operations comprising: receiving a value to be formatted based on a semantic context associated with at least two portions of the received value; in response to receiving the value to be formatted, automatically and without user input: identifying at least one semantic rule associated with the received value; semantically processing the received value using the at least one identified semantic rule, wherein semantically processing the received value comprises identifying at least two portions of the received value corresponding to two or more semantic contexts defined by the at least one semantic rule; determining at least one formatting rule from a plurality of formatting rules associated with at least one of the two or more identified semantic contexts, each formatting rule of the plurality of formatting rules associated with a particular semantic context; applying each of the at least one identified formatting rules to the at least two portions of the received value associated with the two or more identified semantic contexts to generate a modified version of the received value based on the at least one applied formatting rules; and providing the modified version of the received value for presentation. 13. The system of claim 12, wherein the received value is of a particular data type, and wherein the at least one semantic rule is associated with the particular data type. 14. The system of claim 13, the operations further comprising, prior to identifying the at least one semantic rule associated with the received value, analyzing the received value to identify the particular data type of the received value. 15. 
The system of claim 14, wherein the particular data type is one of a plurality of data types, and wherein each of the plurality of data types is associated with a different set of semantic rules, and where each particular data type is associated with a particular set of formatting rules, and wherein analyzing the received value to identify the particular data type of the received value includes identifying metadata received from a calling entity associated with the received value, the metadata identifying the particular data type of the received value. 16. The system of claim 12, wherein the modified version of the received value comprises a formatted string with the at least one identified formatting rules applied to the corresponding at least two portions of the received value. 17. The system of claim 12, wherein the modified version of the received value includes metadata identifying each portion of the received value to be formatted and an identification of at least one formatting-related modification to be performed on each of the identified portions to be formatted, wherein the received value is received from a calling application, wherein providing the modified version of the received value for presentation includes providing the received value and the metadata associated with the modified version of the received value to the calling application, and wherein the application of the formatting-related modifications included in the metadata are performed at runtime by the calling application. 18. A non-transitory computer-readable medium storing instructions which, when executed, cause at least one processor to perform operations comprising: receiving a value to be formatted based on a semantic context associated with at least two portions of the received value; in response to receiving the value to be formatted, automatically and without user input: identifying at least one semantic rule associated with the received value; semantically processing the received value using the at least one identified semantic rule, wherein semantically processing the received value comprises identifying at least two portions of the received value corresponding to two or more semantic contexts defined by the at least one semantic rule; determining at least one formatting rule from a plurality of formatting rules associated with at least one of the two or more identified semantic contexts, each formatting rule of the plurality of formatting rules associated with a particular semantic context; applying each of the at least one identified formatting rules to the at least two portions of the received value associated with the two or more identified semantic contexts to generate a modified version of the received value based on the at least one applied formatting rules; and providing the modified version of the received value for presentation. 19. The computer-readable medium of claim 18, wherein the received value is of a particular data type, and wherein the at least one semantic rule is associated with the particular data type, the operations further comprising, prior to identifying the at least one semantic rule associated with the received value, analyzing the received value to identify the particular data type of the received value. 20. 
The computer-readable medium of claim 19, wherein the particular data type is one of a plurality of data types, and wherein each of the plurality of data types is associated with a different set of semantic rules, and where each particular data type is associated with a particular set of formatting rules, and wherein analyzing the received value to identify the particular data type of the received value includes identifying metadata received from a calling entity associated with the received value, the metadata identifying the particular data type of the received value.
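The claims describe a pipeline: select semantic rules for the value's data type, split the value into context-tagged portions, then apply the formatting rule registered for each context. The Python sketch below illustrates that flow under assumed rules for a US-style phone number; the rule tables, regular expression, and function name are illustrative inventions, not the application's implementation.

```python
import re

# Illustrative rule set (assumed, not from the patent): a US phone number
# split into semantic portions, each with its own formatting rule.
SEMANTIC_RULES = {
    "phone": re.compile(r"^(?P<area>\d{3})(?P<exchange>\d{3})(?P<line>\d{4})$"),
}
FORMATTING_RULES = {
    "area":     lambda s: f"({s})",   # parenthesize the area code
    "exchange": lambda s: f" {s}",    # space before the exchange
    "line":     lambda s: f"-{s}",    # hyphen before the line number
}

def format_value(value: str, data_type: str) -> str:
    """Sketch of the claimed pipeline: pick the semantic rule for the data
    type, split the value into context-tagged portions, then apply the
    formatting rule registered for each semantic context."""
    match = SEMANTIC_RULES[data_type].match(value)
    if match is None:
        return value  # no semantic contexts recognized; leave unmodified
    return "".join(FORMATTING_RULES[ctx](part)
                   for ctx, part in match.groupdict().items())

print(format_value("5550123456", "phone"))  # (555) 012-3456
```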
2,600
10,650
10,650
16,213,453
2,631
One aspect of the present invention includes a bi-phase communication receiver system. The system includes an analog-to-digital converter (ADC) configured to sample a bi-phase modulation signal to generate digital samples of the bi-phase modulation signal. The system also includes a bi-phase signal decoder configured to decode the bi-phase modulation signal based on the digital samples. The system further includes a preamble detector comprising a digital filter configured to evaluate the digital samples to generate an output and to detect a preamble of the bi-phase modulation signal for decoding the bi-phase modulation signal based on the output.
1. A bi-phase communication receiver comprising: a preamble detector configured to detect a preamble of a bi-phase modulation signal and generate a synchronization signal based on the detected preamble; and a bi-phase decoder coupled to receive the synchronization signal, the bi-phase decoder configured to decode the bi-phase modulation signal aligned with the synchronization signal. 2. The bi-phase communication receiver of claim 1, wherein the preamble detector includes: a first filter configured with a first set of tap weights associated with a preamble bit-period of the bi-phase modulation signal, the first filter configured to generate a first filter output signal by applying the first set of tap weights on the bi-phase modulation signal; a second filter configured with a second set of tap weights associated with a preamble logic transition of the bi-phase modulation signal, the second filter configured to generate a second filter output signal by applying the second set of tap weights on the first filter output signal; and a preamble comparator coupled to the second filter, the preamble comparator configured to generate the synchronization signal based on the second filter output signal. 3. The bi-phase communication receiver of claim 2, wherein the first filter includes a first finite impulse response (FIR) filter, and the second filter includes a second FIR filter. 4. The bi-phase communication receiver of claim 2, wherein the preamble detector includes a threshold generator configured to generate a threshold signal associated with a moving average of the bi-phase modulation signal, and the preamble comparator is coupled to the threshold generator and configured to generate the synchronization signal based on the second filter output signal and the threshold signal. 5. The bi-phase communication receiver of claim 4, wherein the threshold generator includes an infinite impulse response (IIR) filter. 6. The bi-phase communication receiver of claim 2, further comprising: an input terminal; and an analog-to-digital converter (ADC) coupled to the input terminal, and configured to generate the bi-phase modulation signal. 7. The bi-phase communication receiver of claim 6, wherein: the preamble detector includes a synchronization controller configured to generate a sampling rate signal based on the second filter output signal; and the ADC is configured to generate the bi-phase modulation signal by sampling an analog input signal received from the input terminal, and the ADC is configured to adjust a sampling rate of the analog input signal based on the sampling rate signal. 8. The bi-phase communication receiver of claim 1, further comprising: first and second reception channels; and a channel selection controller configured to select only one of the first or second reception channel for providing the bi-phase modulation signal. 9. 
The bi-phase communication receiver of claim 8, wherein the preamble detector includes: a first filter configured with a first set of tap weights associated with a preamble bit-period of the bi-phase modulation signal, the first filter configured to generate a first filter output signal by applying the first set of tap weights on the first reception channel; a second filter configured with a second set of tap weights associated with the preamble bit-period of the bi-phase modulation signal, the second filter configured to generate a second filter output signal by applying the second set of tap weights on the second reception channel; and a third filter configured with a third set of tap weights associated with a preamble logic transition of the bi-phase modulation signal, the third filter configured to generate a third filter output signal by applying the third set of tap weights on a selected one of the first or second filter output signal. 10. The bi-phase communication receiver of claim 9, wherein the channel selection controller includes: a channel comparator configured to generate a comparison signal based on a relative amplitude between the first and second filter output signals; and a channel multiplexer configured to select one of the first or second filter output signal based on the comparison signal, and configured to provide the selected one of the first or second filter output signal to the third filter. 11. A wireless charger comprising: a transmission circuit configured to transmit power wirelessly; and a bi-phase receiver coupled to receive a bi-phase modulation signal derived from the transmission circuit, the bi-phase receiver including: a preamble detector configured to detect a preamble of the bi-phase modulation signal and generate a synchronization signal based on the detected preamble; and a bi-phase decoder coupled to receive the synchronization signal, the bi-phase decoder configured to decode the bi-phase modulation signal aligned with the synchronization signal. 12. The wireless charger of claim 11, wherein the preamble detector includes: a first filter configured with a first set of tap weights associated with a preamble bit-period of the bi-phase modulation signal, the first filter configured to generate a first filter output signal by applying the first set of tap weights on the bi-phase modulation signal; a second filter configured with a second set of tap weights associated with a preamble logic transition of the bi-phase modulation signal, the second filter configured to generate a second filter output signal by applying the second set of tap weights on the first filter output signal; and a preamble comparator coupled to the second filter, the preamble comparator configured to generate the synchronization signal based on the second filter output signal. 13. The wireless charger of claim 12, wherein the preamble detector includes a threshold generator configured to generate a threshold signal associated with a moving average of the bi-phase modulation signal, and the preamble comparator is coupled to the threshold generator and configured to generate the synchronization signal based on the second filter output signal and the threshold signal. 14. 
The wireless charger of claim 12, wherein: the bi-phase receiver includes an analog-to-digital converter (ADC) coupled to the transmission circuit, and configured to generate the bi-phase modulation signal; the preamble detector includes a synchronization controller configured to generate a sampling rate signal based on the second filter output signal; and the ADC is configured to generate the bi-phase modulation signal by sampling an analog input signal received from the transmission circuit, and the ADC is configured to adjust a sampling rate of the analog input signal based on the sampling rate signal. 15. The wireless charger of claim 11, further comprising: first and second reception channels; and a channel selection controller configured to select only one of the first or second reception channel for providing the bi-phase modulation signal. 16. The wireless charger of claim 15, wherein the preamble detector includes: a first filter configured with a first set of tap weights associated with a preamble bit-period of the bi-phase modulation signal, the first filter configured to generate a first filter output signal by applying the first set of tap weights on the first reception channel; a second filter configured with a second set of tap weights associated with the preamble bit-period of the bi-phase modulation signal, the second filter configured to generate a second filter output signal by applying the second set of tap weights on the second reception channel; and a third filter configured with a third set of tap weights associated with a preamble logic transition of the bi-phase modulation signal, the third filter configured to generate a third filter output signal by applying the third set of tap weights on a selected one of the first or second filter output signal. 17. The wireless charger of claim 16, wherein the channel selection controller includes: a channel comparator configured to generate a comparison signal based on a relative amplitude between the first and second filter output signals; and a channel multiplexer configured to select one of the first or second filter output signal based on the comparison signal, and configured to provide the selected one of the first or second filter output signal to the third filter. 18. A wireless charger comprising: a power transmission circuit; and a bi-phase receiver including: a digitized channel associated with the power transmission circuit; a preamble detector having a detector input coupled to the digitized channel, and a detector output; and a bi-phase decoder having a synchronization input coupled to the detector output of the preamble detector, a bi-phase modulation input coupled to the digitized channel, and a demodulation output. 19. The wireless charger of claim 18, wherein: the bi-phase receiver includes an analog-to-digital converter (ADC) coupled to the power transmission circuit, and configured to deliver a bi-phase modulation signal at the digitized channel; the preamble detector is configured to detect a preamble of the bi-phase modulation signal and generate a synchronization signal based on the detected preamble; and the bi-phase decoder is configured to decode the bi-phase modulation signal aligned with the synchronization signal. 20. 
The wireless charger of claim 18, wherein the bi-phase receiver includes: first and second reception channels coupled to the power transmission circuit; and a channel selection controller configured to selectively couple only one of the first or second reception channel to the digitized channel.
One aspect of the present invention includes a bi-phase communication receiver system. The system includes an analog-to-digital converter (ADC) configured to sample a bi-phase modulation signal to generate digital samples of the bi-phase modulation signal. The system also includes a bi-phase signal decoder configured to decode the bi-phase modulation signal based on the digital samples. The system further includes a preamble detector comprising a digital filter configured to evaluate the digital samples to generate an output and to detect a preamble of the bi-phase modulation signal for decoding the bi-phase modulation signal based on the output.1. A bi-phase communication receiver comprising: a preamble detector configured to detect a preamble of a bi-phase modulation signal and generate a synchronization signal based on the detected preamble; and a bi-phase decoder coupled to receive the synchronization signal, the bi-phase decoder configured to decode the bi-phase modulation signal aligned with the synchronization signal. 2. The bi-phase communication receiver of claim 1, wherein the preamble detector includes: a first filter configured with a first set of tap weights associated with a preamble bit-period of the bi-phase modulation signal, the first filter configured to generate a first filter output signal by applying the first set of tap weights on the bi-phase modulation signal; a second filter configured with a second set of tap weights associated with a preamble logic transition of the bi-phase modulation signal, the second filter configured to generate a second filter output signal by applying the second set of tap weights on the first filter output signal; and a preamble comparator coupled to the second filter, the preamble comparator configured to generate the synchronization signal based on the second filter output signal. 3. The bi-phase communication receiver of claim 2, wherein the first filter includes a first finite impulse response (FIR) filter, and the second filter includes a second FIR filter. 4. The bi-phase communication receiver of claim 2, wherein the preamble detector includes a threshold generator configured to generate a threshold signal associated with a moving average of the bi-phase modulation signal, and the preamble comparator is coupled to the threshold generator and configured to generate the synchronization signal based on the second filter output signal and the threshold signal. 5. The bi-phase communication receiver of claim 4, wherein the threshold generator includes an infinite impulse response (IIR) filter. 6. The bi-phase communication receiver of claim 2, further comprising: an input terminal; and an analog-to-digital converter (ADC) coupled to the input terminal, and configured to generate the bi-phase modulation signal. 7. The bi-phase communication receiver of claim 6, wherein: the preamble detector includes a synchronization controller configured to generate a sampling rate signal based on the second filter output signal; and the ADC is configured to generate the bi-phase modulation signal by sampling an analog input signal received from the input terminal, and the ADC is configured to adjust a sampling rate of the analog input signal based on the sampling rate signal. 8. The bi-phase communication receiver of claim 1, further comprising: first and second reception channels; and a channel selection controller configured to select only one of the first or second reception channel for providing the bi-phase modulation signal. 9. 
The bi-phase communication receiver of claim 8, wherein the preamble detector includes: a first filter configured with a first set of tap weights associated with a preamble bit-period of the bi-phase modulation signal, the first filter configured to generate a first filter output signal by applying the first set of tap weights on the first reception channel; a second filter configured with a second set of tap weights associated with the preamble bit-period of the bi-phase modulation signal, the second filter configured to generate a second filter output signal by applying the second set of tap weights on the second reception channel; and a third filter configured with a third set of tap weights associated with a preamble logic transition of the bi-phase modulation signal, the third filter configured to generate a third filter output signal by applying the third set of tap weights on a selected one of the first or second filter output signal. 10. The bi-phase communication receiver of claim 9, wherein the channel selection controller includes: a channel comparator configured to generate a comparison signal based on a relative amplitude between the first and second filter output signals; and a channel multiplexer configured to select one of the first or second filter output signal based on the comparison signal, and configured to provide the selected one of the first or second filter output signal to the third filter. 11. A wireless charger comprising: a transmission circuit configured to transmit power wirelessly; and a bi-phase receiver coupled to receive a bi-phase modulation signal derived from the transmission circuit, the bi-phase receiver including: a preamble detector configured to detect a preamble of the bi-phase modulation signal and generate a synchronization signal based on the detected preamble; and a bi-phase decoder coupled to receive the synchronization signal, the bi-phase decoder configured to decode the bi-phase modulation signal aligned with the synchronization signal. 12. The wireless charger of claim 11, wherein the preamble detector includes: a first filter configured with a first set of tap weights associated with a preamble bit-period of the bi-phase modulation signal, the first filter configured to generate a first filter output signal by applying the first set of tap weights on the bi-phase modulation signal; a second filter configured with a second set of tap weights associated with a preamble logic transition of the bi-phase modulation signal, the second filter configured to generate a second filter output signal by applying the second set of tap weights on the first filter output signal; and a preamble comparator coupled to the second filter, the preamble comparator configured to generate the synchronization signal based on the second filter output signal. 13. The wireless charger of claim 12, wherein the preamble detector includes a threshold generator configured to generate a threshold signal associated with a moving average of the bi-phase modulation signal, and the preamble comparator is coupled to the threshold generator and configured to generate the synchronization signal based on the second filter output signal and the threshold signal. 14. 
The wireless charger of claim 12, wherein: the bi-phase receiver includes an analog-to-digital converter (ADC) coupled to the transmission circuit, and configured to generate the bi-phase modulation signal; the preamble detector includes a synchronization controller configured to generate a sampling rate signal based on the second filter output signal; and the ADC is configured to generate the bi-phase modulation signal by sampling an analog input signal received from the transmission circuit, and the ADC is configured to adjust a sampling rate of the analog input signal based on the sampling rate signal. 15. The wireless charger of claim 11, further comprising: first and second reception channels; and a channel selection controller configured to select only one of the first or second reception channel for providing the bi-phase modulation signal. 16. The wireless charger of claim 15, wherein the preamble detector includes: a first filter configured with a first set of tap weights associated with a preamble bit-period of the bi-phase modulation signal, the first filter configured to generate a first filter output signal by applying the first set of tap weights on the first reception channel; a second filter configured with a second set of tap weights associated with the preamble bit-period of the bi-phase modulation signal, the second filter configured to generate a second filter output signal by applying the second set of tap weights on the second reception channel; and a third filter configured with a third set of tap weights associated with a preamble logic transition of the bi-phase modulation signal, the third filter configured to generate a third filter output signal by applying the third set of tap weights on a selected one of the first or second filter output signal. 17. The wireless charger of claim 16, wherein the channel selection controller includes: a channel comparator configured to generate a comparison signal based on a relative amplitude between the first and second filter output signals; and a channel multiplexer configured to select one of the first or second filter output signal based on the comparison signal, and configured to provide the selected one of the first or second filter output signal to the third filter. 18. A wireless charger comprising: a power transmission circuit; and a bi-phase receiver including: a digitized channel associated with the power transmission circuit; a preamble detector having a detector input coupled to the digitized channel, and a detector output; and a bi-phase decoder having a synchronization input coupled to the detector output of the preamble detector, a bi-phase modulation input coupled to the digitized channel, and a demodulation output. 19. The wireless charger of claim 18, wherein: the bi-phase receiver includes an analog-to-digital converter (ADC) coupled to the power transmission circuit, and configured to deliver a bi-phase modulation signal at the digitized channel; the preamble detector is configured to detect a preamble of the bi-phase modulation signal and generate a synchronization signal based on the detected preamble; and the bi-phase decoder is configured to decode the bi-phase modulation signal aligned with the synchronization signal. 20. 
The wireless charger of claim 18, wherein the bi-phase receiver includes: first and second reception channels coupled to the power transmission circuit; and a channel selection controller configured to selectively couple only one of the first or second reception channel to the digitized channel.
2,600
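The preamble detector recited in claims 2 through 5 of the preceding record reduces to two cascaded matched filters followed by an adaptive threshold comparison. The NumPy sketch below shows that structure under stated assumptions: the samples-per-bit figure, tap shapes, IIR smoothing coefficient, and comparison margin are illustrative choices, not values from the application.

```python
import numpy as np

# Assumed parameter (not from the application): ADC samples per preamble bit-period.
SAMPLES_PER_BIT = 8

def detect_preamble(x, alpha=0.01, scale=4.0):
    # First filter: tap weights matched to one preamble bit-period (a moving correlator).
    h1 = np.ones(SAMPLES_PER_BIT) / SAMPLES_PER_BIT
    y1 = np.convolve(x, h1, mode="same")  # first filter output signal

    # Second filter: tap weights matched to a preamble logic transition
    # (a +1/-1 edge spanning two bit-periods).
    h2 = np.concatenate([np.ones(SAMPLES_PER_BIT), -np.ones(SAMPLES_PER_BIT)])
    y2 = np.convolve(y1, h2, mode="same")  # second filter output signal

    # Threshold generator: one-pole IIR tracking a moving average, per claim 5.
    threshold = np.empty(len(x), dtype=float)
    acc = 0.0
    for n, v in enumerate(np.abs(y2)):
        acc += alpha * (v - acc)
        threshold[n] = acc

    # Preamble comparator: assert sync where the transition-filter output
    # clears the adaptive threshold; `scale` is an assumed margin.
    sync = np.abs(y2) > scale * threshold
    return y2, threshold, sync
```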
10,651
10,651
15,845,364
2,653
An earphone device includes a housing having a driver unit, and a sound guide tube mounted on a front surface of the housing to protrude from the front surface, in which the sound guide tube is disposed at a position deviated from a center position of the housing.
1. (canceled) 2. An earphone device comprising: a driver unit; a housing; a sound guide tube mounted on a front portion of the housing to protrude from the front portion, wherein the sound guide tube is at a position deviated from a center position of the housing, and wherein the position of the front portion of the housing is fixed with respect to the rest of the housing; and a cushion member interposed between the driver unit and an opening that connects an interior of the housing to an exterior of the housing, wherein a center line of the housing and a center line of the sound guide tube meet at a point behind the driver unit. 3. The earphone device of claim 2, wherein the sound guide tube includes a sound guide tip, and wherein a distance between a center point of the sound guide tip and the center line of the housing is no less than 3 mm and no more than 7 mm. 4. The earphone device according to claim 2, wherein an angle between the center line of the housing and the center line of the sound guide tube is no less than 10° and no more than 60°. 5. The earphone device according to claim 2, wherein the sound guide tube has a tip made of flexible material. 6. The earphone device according to claim 2, wherein the opening is at the front portion of the housing. 7. The earphone device according to claim 2, wherein the front portion of the housing is fixed onto a rear portion of the housing, and further comprising: a projecting portion projecting from the rear portion of the housing. 8. The earphone device according to claim 7, wherein the projecting portion includes a cylindrical shape. 9. The earphone device according to claim 7, wherein the projecting portion is a cord retainer configured to retain a cord. 10. The earphone device according to claim 7, wherein at least a part of the projecting portion is located outside of a cavum conchae when the earphone device is worn by a user. 11. The earphone device according to claim 2, further comprising: an annular part provided around a periphery of the housing. 12. The earphone device according to claim 2, wherein the front portion of the housing is fixed onto a rear portion of the housing, and further comprising: an annular part provided at a connecting portion of the front portion and the rear portion. 13. The earphone device according to claim 2, wherein the earphone device is configured to generate sound corresponding to an audio signal supplied from a music player. 14. The earphone device of claim 2, further comprising: a cord connected to the driver unit. 15. The earphone device of claim 2, further comprising: an earpiece mounted on a tip portion of the sound guide tube, the earpiece being formed using a flexible material. 16. The earphone device of claim 15, wherein the earpiece has a flange profile that deforms corresponding to a profile of an external auditory meatus. 17. The earphone device of claim 15, wherein, when the earpiece is fixed to an external auditory meatus, the earpiece deforms corresponding to a profile of the external auditory meatus and is closely attached to the external auditory meatus.
An earphone device includes a housing having a driver unit, and a sound guide tube mounted on a front surface of the housing to protrude from the front surface, in which the sound guide tube is disposed at a position deviated from a center position of the housing.1. (canceled) 2. An earphone device comprising: a driver unit; a housing; a sound guide tube mounted on a front portion of the housing to protrude from the front portion, wherein the sound guide tube is at a position deviated from a center position of the housing, and wherein the position of the front portion of the housing is fixed with respect to the rest of the housing; and a cushion member interposed between the driver unit and an opening that connects an interior of the housing to an exterior of the housing, wherein a center line of the housing and a center line of the sound guide tube meet at a point behind the driver unit. 3. The earphone device of claim 2, wherein the sound guide tube includes a sound guide tip, and wherein a distance between a center point of the sound guide tip and the center line of the housing is no less than 3 mm and no more than 7 mm. 4. The earphone device according to claim 2, wherein an angle between the center line of the housing and the center line of the sound guide tube is no less than 10° and no more than 60°. 5. The earphone device according to claim 2, wherein the sound guide tube has a tip made of flexible material. 6. The earphone device according to claim 2, wherein the opening is at the front portion of the housing. 7. The earphone device according to claim 2, wherein the front portion of the housing is fixed onto a rear portion of the housing, and further comprising: a projecting portion projecting from the rear portion of the housing. 8. The earphone device according to claim 7, wherein the projecting portion includes a cylindrical shape. 9. The earphone device according to claim 7, wherein the projecting portion is a cord retainer configured to retain a cord. 10. The earphone device according to claim 7, wherein at least a part of the projecting portion is located outside of a cavum conchae when the earphone device is worn by a user. 11. The earphone device according to claim 2, further comprising: an annular part provided around a periphery of the housing. 12. The earphone device according to claim 2, wherein the front portion of the housing is fixed onto a rear portion of the housing, and further comprising: an annular part provided at a connecting portion of the front portion and the rear portion. 13. The earphone device according to claim 2, wherein the earphone device is configured to generate sound corresponding to an audio signal supplied from a music player. 14. The earphone device of claim 2, further comprising: a cord connected to the driver unit. 15. The earphone device of claim 2, further comprising: an earpiece mounted on a tip portion of the sound guide tube, the earpiece being formed using a flexible material. 16. The earphone device of claim 15, wherein the earpiece has a flange profile that deforms corresponding to a profile of an external auditory meatus. 17. The earphone device of claim 15, wherein, when the earpiece is fixed to an external auditory meatus, the earpiece deforms corresponding to a profile of the external auditory meatus and is closely attached to the external auditory meatus.
2,600
10,652
10,652
14,959,337
2,625
Disclosed is a touch screen device for transmitting button manipulation information based on a touch pen without using a separate wireless communication module. The touch screen device includes a touch screen including a plurality of touch electrodes, a touch driving circuit applying a touch electrode driving signal to the plurality of touch electrodes, and a touch pen receiving the touch electrode driving signal applied to the plurality of touch electrodes and transmitting a pen output signal to the touch screen in response to the received touch electrode driving signal. The touch pen includes at least one button, and when a user manipulates the at least one button, the touch pen adjusts the pen output signal.
1. A touch screen device comprising: a touch screen including a plurality of touch electrodes; a touch driving circuit applying a touch electrode driving signal to the plurality of touch electrodes; and a touch pen receiving the touch electrode driving signal applied to the plurality of touch electrodes and transmitting a pen output signal to the touch screen in response to the received touch electrode driving signal, wherein the touch pen comprises at least one button, and when a user manipulates the at least one button, the touch pen adjusts the pen output signal. 2. The touch screen device of claim 1, wherein the touch pen comprises: a conductive tip, a portion of the conductive tip protruding to one side of a housing; a switching unit connected to the conductive tip; a receiver amplifying, processing, and outputting the touch electrode driving signal received through the switching unit; a signal processor analyzing a signal supplied from the receiver to output a synchronization signal, and outputting a pen output variable signal for adjusting the pen output signal according to button manipulation of the user; and a driver supplying the pen output signal, synchronized with the touch electrode driving signal, to the switching unit in response to the synchronization signal and adjusting the pen output signal in response to the pen output variable signal. 3. The touch screen device of claim 2, wherein the signal processor controls the driver to invert a phase of the pen output signal at every specific period when the user manipulates the at least one button. 4. The touch screen device of claim 3, wherein the signal processor controls the driver to adjust the specific period according to a kind of the at least one button or predefined button manipulation. 5. The touch screen device of claim 2, wherein the signal processor controls the driver to adjust amplitude of the pen output signal when the user manipulates the at least one button. 6. The touch screen device of claim 5, wherein the signal processor controls the driver to adjust the amplitude according to a kind of the at least one button or predefined button manipulation. 7. The touch screen device of claim 2, wherein the driver generates the pen output signal including N number of pulses in normal driving, and the signal processor controls the driver to adjust the number of pulses included in the pen output signal when the user manipulates the at least one button. 8. The touch screen device of claim 7, wherein the signal processor controls the driver to generate the pen output signal, including M number of pulses different from the N number of pulses, according to a kind of the at least one button or predefined button manipulation.
Disclosed is a touch screen device for transmitting button manipulation information based on a touch pen without using a separate wireless communication module. The touch screen device includes a touch screen including a plurality of touch electrodes, a touch driving circuit applying a touch electrode driving signal to the plurality of touch electrodes, and a touch pen receiving the touch electrode driving signal applied to the plurality of touch electrodes and transmitting a pen output signal to the touch screen in response to the received touch electrode driving signal. The touch pen includes at least one button, and when a user manipulates the at least one button, the touch pen adjusts the pen output signal.1. A touch screen device comprising: a touch screen including a plurality of touch electrodes; a touch driving circuit applying a touch electrode driving signal to the plurality of touch electrodes; and a touch pen receiving the touch electrode driving signal applied to the plurality of touch electrodes and transmitting a pen output signal to the touch screen in response to the received touch electrode driving signal, wherein the touch pen comprises at least one button, and when a user manipulates the at least one button, the touch pen adjusts the pen output signal. 2. The touch screen device of claim 1, wherein the touch pen comprises: a conductive tip, a portion of the conductive tip protruding to one side of a housing; a switching unit connected to the conductive tip; a receiver amplifying, processing, and outputting the touch electrode driving signal received through the switching unit; a signal processor analyzing a signal supplied from the receiver to output a synchronization signal, and outputting a pen output variable signal for adjusting the pen output signal according to button manipulation of the user; and a driver supplying the pen output signal, synchronized with the touch electrode driving signal, to the switching unit in response to the synchronization signal and adjusting the pen output signal in response to the pen output variable signal. 3. The touch screen device of claim 2, wherein the signal processor controls the driver to invert a phase of the pen output signal at every specific period when the user manipulates the at least one button. 4. The touch screen device of claim 3, wherein the signal processor controls the driver to adjust the specific period according to a kind of the at least one button or predefined button manipulation. 5. The touch screen device of claim 2, wherein the signal processor controls the driver to adjust amplitude of the pen output signal when the user manipulates the at least one button. 6. The touch screen device of claim 5, wherein the signal processor controls the driver to adjust the amplitude according to a kind of the at least one button or predefined button manipulation. 7. The touch screen device of claim 2, wherein the driver generates the pen output signal including N number of pulses in normal driving, and the signal processor controls the driver to adjust the number of pulses included in the pen output signal when the user manipulates the at least one button. 8. The touch screen device of claim 7, wherein the signal processor controls the driver to generate the pen output signal, including M number of pulses different from the N number of pulses, according to a kind of the at least one button or predefined button manipulation.
2,600
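Claims 3 through 8 of the preceding record give the touch pen three ways to signal button state back through the same electrodes: inverting the phase of the pen output signal at a specific period, changing its amplitude, or emitting M pulses instead of the normal N. A minimal sketch of two of those options follows; the pulse counts, inversion period, and samples-per-pulse value are assumptions, not figures from the application.

```python
import numpy as np

SAMPLES_PER_PULSE = 16  # assumed samples per output pulse

def pen_output(button_pressed, n_pulses=8, m_pulses=12, invert_period=2):
    # Claims 7/8: N pulses in normal driving, M pulses while a button is held.
    pulses = m_pulses if button_pressed else n_pulses
    t = np.arange(SAMPLES_PER_PULSE)
    # Square pulse; the half-sample offset avoids sign(0) at the edges.
    one_pulse = np.sign(np.sin(2 * np.pi * (t + 0.5) / SAMPLES_PER_PULSE))
    burst = []
    for k in range(pulses):
        p = one_pulse.copy()
        # Claims 3/4: invert the phase every `invert_period` pulses while pressed.
        if button_pressed and (k // invert_period) % 2 == 1:
            p = -p
        burst.append(p)
    return np.concatenate(burst)
```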
10,653
10,653
15,599,072
2,657
An information transmission device includes a processor, and a memory storing instructions. The processor executes the stored instructions to recognize a phrase of voice represented by a voice signal, specify an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases, and transmit the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item.
1. An information transmission device comprising: a processor; and a memory storing instructions, the processor executing the stored instructions to: recognize a phrase of voice represented by a voice signal; specify an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases; and transmit the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item. 2. The information transmission device according to claim 1, wherein the processor executes the stored instructions to refer to registration information in which a plurality of identification information items correspond to a plurality of phrases having been registered in advance, and in a case where any one of the plurality of phrases having been registered in the registration information is recognized, specify the identification information item corresponding to the phrase from the registration information. 3. The information transmission device according to claim 1, wherein the processor executes the stored instructions to recognize the phrase of the voice represented by the voice signal supplied to a sound emission device from a sound collection device installed in a moving body that moves while accommodating the terminal device. 4. The information transmission device according to claim 1, wherein the processor executes the stored instructions to transmit the content representing the recognized phrase and the identification information item corresponding to the phrase to a distribution device that provides, to the terminal device, the content corresponding to the identification information item requested by the terminal device. 5. The information transmission device according to claim 1, wherein the processor executes the stored instructions to transmit the identification information item by a sound communication in which sound is used as a transmission medium. 6. The information transmission device according to claim 1, wherein the voice signal contains a guide voice to be used for voice guide given from a manager to a user. 7. The information transmission device according to claim 6, wherein the content contains representation of information obtained by translating pronunciation content of the guide voice into another language. 8. The information transmission device according to claim 7, wherein the processor executes the stored instructions to transmit the identification information item after recognizing the phrase and before an end of reproduction of the guide voice. 9. The information transmission device according to claim 7, wherein the processor executes the stored instructions to transmit the identification information item after an end of reproduction of the guide voice. 10. An information transmission method comprising: recognizing a phrase of voice represented by a voice signal; specifying an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases; and transmitting the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item. 11. 
A guide system comprising: an information transmission device comprising: a processor; and a memory storing instructions, the processor executing the stored instructions to: recognize a phrase of voice represented by a voice signal; specify an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases; and transmit the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item; and a voice guide device, wherein the voice guide device includes a sound collection device and a sound emission device for collecting a sound signal and for emitting sound, respectively, and the information transmission device recognizes the phrase of the voice represented by the voice signal supplied from the sound collection device to the sound emission device. 12. A communication system comprising: a guide system comprising an information transmission device comprising: a processor; and a memory storing instructions, the processor executing the stored instructions to: recognize a phrase of voice represented by a voice signal; specify an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases; and transmit the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item; and a voice guide device, wherein the voice guide device includes a sound collection device and a sound emission device for collecting a sound signal and for emitting sound, respectively, and the information transmission device recognizes the phrase of the voice represented by the voice signal supplied from the sound collection device to the sound emission device; and a terminal device, wherein the terminal device includes a receiver for receiving the identification information item from the information transmission device and a reproduction device for reproducing the content corresponding to the identification information item.
An information transmission device includes a processor, and a memory storing instructions. The processor executes the stored instructions to recognize a phrase of voice represented by a voice signal, specify an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases, and transmit the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item.1. An information transmission device comprising: a processor; and a memory storing instructions, the processor executing the stored instructions to: recognize a phrase of voice represented by a voice signal; specify an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases; and transmit the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item. 2. The information transmission device according to claim 1, wherein the processor executes the stored instructions to refer to registration information in which a plurality of identification information items correspond to a plurality of phrases having been registered in advance, and in a case where any one of the plurality of phrases having been registered in the registration information is recognized, specify the identification information item corresponding to the phrase from the registration information. 3. The information transmission device according to claim 1, wherein the processor executes the stored instructions to recognize the phrase of the voice represented by the voice signal supplied to a sound emission device from a sound collection device installed in a moving body that moves while accommodating the terminal device. 4. The information transmission device according to claim 1, wherein the processor executes the stored instructions to transmit the content representing the recognized phrase and the identification information item corresponding to the phrase to a distribution device that provides, to the terminal device, the content corresponding to the identification information item requested by the terminal device. 5. The information transmission device according to claim 1, wherein the processor executes the stored instructions to transmit the identification information item by a sound communication in which sound is used as a transmission medium. 6. The information transmission device according to claim 1, wherein the voice signal contains a guide voice to be used for voice guide given from a manager to a user. 7. The information transmission device according to claim 6, wherein the content contains representation of information obtained by translating pronunciation content of the guide voice into another language. 8. The information transmission device according to claim 7, wherein the processor executes the stored instructions to transmit the identification information item after recognizing the phrase and before an end of reproduction of the guide voice. 9. The information transmission device according to claim 7, wherein the processor executes the stored instructions to transmit the identification information item after an end of reproduction of the guide voice. 10. 
An information transmission method comprising: recognizing a phrase of voice represented by a voice signal; specifying an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases; and transmitting the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item. 11. A guide system comprising: an information transmission device comprising: a processor; and a memory storing instructions, the processor executing the stored instructions to: recognize a phrase of voice represented by a voice signal; specify an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases; and transmit the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item; and a voice guide device, wherein the voice guide device includes a sound collection device and a sound emission device for collecting a sound signal and for emitting sound, respectively, and the information transmission device recognizes the phrase of the voice represented by the voice signal supplied from the sound collection device to the sound emission device. 12. A communication system comprising: a guide system comprising an information transmission device comprising: a processor; and a memory storing instructions, the processor executing the stored instructions to: recognize a phrase of voice represented by a voice signal; specify an identification information item corresponding to the recognized phrase from a plurality of identification information items corresponding to mutually different phrases; and transmit the specified identification information item to a terminal device capable of reproducing a content represented by the identification information item; and a voice guide device, wherein the voice guide device includes a sound collection device and a sound emission device for collecting a sound signal and for emitting sound, respectively, and the information transmission device recognizes the phrase of the voice represented by the voice signal supplied from the sound collection device to the sound emission device; and a terminal device, wherein the terminal device includes a receiver for receiving the identification information item from the information transmission device and a reproduction device for reproducing the content corresponding to the identification information item.
2,600
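Claims 1 and 2 of the preceding record amount to a lookup from pre-registered phrases to identification information items. A minimal sketch follows, assuming an external speech recognizer supplies the phrase and a separate transmitter handles the sound communication of claim 5; the table entries and function names are hypothetical placeholders.

```python
# Registration information of claim 2: phrases registered in advance, each
# mapped to an identification information item. Entries are invented examples.
REGISTRATION = {
    "the doors are closing": "ID-0001",
    "next stop is the terminal": "ID-0002",
}

def specify_and_transmit(recognized_phrase, transmit):
    """Specify the identification information item for a recognized phrase
    and hand it to a transmitter (claim 1). `recognized_phrase` is assumed
    to come from an external recognizer; `transmit` stands in for the
    sound-based transmission of claim 5."""
    item = REGISTRATION.get(recognized_phrase.strip().lower())
    if item is not None:  # only phrases registered in advance are specified
        transmit(item)
    return item

# Usage: specify_and_transmit("The doors are closing", print) prints "ID-0001".
```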
10,654
10,654
15,986,489
2,683
A system and method for enabling setup of a controlling device capable of controlling a plurality of appliances provides an interactive instruction set and associated programming which is downloadable to a controllable appliance having an associated display, such as an Internet-enabled television. The programming is accessible by the controllable appliance and is configured to appropriately display interactive instructions from the interactive instruction set to a user during a user-initiated setup procedure for setting up the controlling device to communicate commands to another controllable device (e.g., a DVD, VCR, DVR, etc.).
1. A non-transitory, computer readable media having instructions stored thereon, the instructions, when executed by a processing device of a first controllable appliance, performing steps for configuring a controlling device to command at least one functional operation of a second controllable appliance, the steps comprising: causing a plurality of interactive prompts to be displayed in a display device associated with the first controllable appliance wherein the plurality of interactive prompts are used to elicit one or more communications from the controlling device to identify at least a brand of the second controllable appliance; in response to at least the brand of the second controllable appliance being identified via the one or more communications, repeatedly causing a code data associated with at least the identified brand of the second controllable appliance to be provisioned on the controlling device and causing a further interactive prompt to be displayed in the display device wherein the further interactive prompt is used to elicit a further communication from the controlling device to confirm that a transmission of a command from the controlling device to the second controllable appliance, created via use of a currently provisioned code data, caused the second controllable appliance to perform an observable functional operation; and in response to the first controllable appliance receiving a positive confirmation that the transmission of the command from the controlling device to the second controllable appliance caused the second controllable appliance to perform the observable functional operation, causing the repeated provisioning of code data on the controlling device to be stopped whereupon the controlling device is configured to use the currently provisioned code data that caused the second controllable appliance to perform the observable functional operation when the controlling device is subsequently utilized to transmit a command to control the functional operation of the second controllable appliance. 2. The non-transitory, computer readable media as recited in claim 1, wherein the instructions cause the first controllable appliance to repeatedly transmit a communication having a pointer to the controlling device whereupon the controlling device will repeatedly provision itself with a code data that is stored in a memory of the controlling device as indicated by the pointer received from the first controllable appliance. 3. The non-transitory, computer readable media as recited in claim 1, wherein the instructions cause the first controllable appliance to repeatedly transmit a communication having a code data to the controlling device whereupon the controlling device will repeatedly provision itself with the code data that is received from the first controllable appliance. 4. The non-transitory, computer readable media as recited in claim 1, wherein the plurality of interactive prompts comprise alphanumeric indicators which are selectable via the one or more communications received by the first controllable appliance from the controlling device. 5. The non-transitory, computer readable media as recited in claim 4, wherein the instructions function to auto-complete at least the brand and model of the second controllable appliance in response to a partial completion of at least the brand of the second controllable appliance via the one or more communications.
A system and method for enabling setup of a controlling device capable of controlling a plurality of appliances provides an interactive instruction set and associated programming which is downloadable to a controllable appliance having an associated display, such as an Internet-enabled television. The programming is accessible by the controllable appliance and is configured to appropriately display interactive instructions from the interactive instruction set to a user during a user-initiated setup procedure for setting up the controlling device to communicate commands to another controllable device (e.g., a DVD, VCR, DVR, etc.).1. A non-transitory, computer readable media having instructions stored thereon, the instructions, when executed by a processing device of a first controllable appliance, performing steps for configuring a controlling device to command at least one functional operation of a second controllable appliance, the steps comprising: causing a plurality of interactive prompts to be displayed in a display device associated with the first controllable appliance wherein the plurality of interactive prompts are used to elicit one or more communications from the controlling device to identify at least a brand of the second controllable appliance; in response to at least the brand of the second controllable appliance being identified via the one or more communications, repeatedly causing a code data associated with at least the identified brand of the second controllable appliance to be provisioned on the controlling device and causing a further interactive prompt to be displayed in the display device wherein the further interactive prompt is used to elicit a further communication from the controlling device to confirm that a transmission of a command from the controlling device to the second controllable appliance, created via use of a currently provisioned code data, caused the second controllable appliance to perform an observable functional operation; and in response to the first controllable appliance receiving a positive confirmation that the transmission of the command from the controlling device to the second controllable appliance caused the second controllable appliance to perform the observable functional operation, causing the repeated provisioning of code data on the controlling device to be stopped whereupon the controlling device is configured to use the currently provisioned code data that caused the second controllable appliance to perform the observable functional operation when the controlling device is subsequently utilized to transmit a command to control the functional operation of the second controllable appliance. 2. The non-transitory, computer readable media as recited in claim 1, wherein the instructions cause the first controllable appliance to repeatedly transmit a communication having a pointer to the controlling device whereupon the controlling device will repeatedly provision itself with a code data that is stored in a memory of the controlling device as indicated by the pointer received from the first controllable appliance. 3. The non-transitory, computer readable media as recited in claim 1, wherein the instructions cause the first controllable appliance to repeatedly transmit a communication having a code data to the controlling device whereupon the controlling device will repeatedly provision itself with the code data that is received from the first controllable appliance. 4. 
The non-transitory, computer readable media as recited in claim 1, wherein the plurality of interactive prompts comprise alphanumeric indicators which are selectable via the one or more communications received by the first controllable appliance from the controlling device. 5. The non-transitory, computer readable media as recited in claim 4, wherein the instructions function to auto-complete at least the brand and model of the second controllable appliance in response to a partial completion of at least the brand of the second controllable appliance via the one or more communications.
2,600
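Claim 1 of the preceding record describes a trial loop: provision one candidate code data set at a time, prompt the user for confirmation, and stop at the first positive result. A sketch of that control flow follows; the three callables are placeholders of my own naming, since the application does not specify an API.

```python
def configure_controlling_device(code_data_for_brand, provision, user_confirmed):
    """Trial loop implied by claim 1, as a sketch.

    `code_data_for_brand` would hold the code data associated with the
    identified brand; `provision(code)` pushes one code data set (or, per
    claims 2-3, a pointer to one) to the controlling device; and
    `user_confirmed()` models the interactive prompt asking whether the
    test command produced an observable functional operation."""
    for code in code_data_for_brand:
        provision(code)          # repeatedly provision code data (claim 1)
        if user_confirmed():     # positive confirmation from the prompt
            return code          # stop; this code data stays configured
    return None                  # candidates exhausted without confirmation
```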
10,655
10,655
16,568,858
2,656
A method for encoding an audio signal uses one or more algorithms operating on a processor to filter the audio signal into two output signals, wherein each output signal has a sampling rate that is equal to a sampling rate of the audio signal, and wherein one of the output signals includes high frequency data. One or more algorithms operating on the processor window the high frequency data by selecting a set of the high frequency data, determine a set of linear predictive coding (LPC) coefficients for the windowed data, generate energy scale values for the windowed data, and generate an encoded high frequency bitstream.
1. A method for encoding an audio signal, comprising: using one or more algorithms operating on a processor to filter an input audio signal into two output signals, wherein each output signal has a sampling rate that is equal to a sampling rate of the input audio signal, and wherein one of the output signals includes high frequency data; using one or more algorithms operating on the processor to window the high frequency data by selecting a set of the high frequency data and windowing the selected high frequency data in the time domain; using one or more algorithms operating on the processor to determine a set of linear predictive coding (LPC) coefficients for the windowed data; using one or more algorithms operating on the processor to generate energy scale values for the windowed data; and using one or more algorithms operating on the processor to generate an encoded high frequency bitstream that is transmitted separately from low frequency data. 2. The method of claim 1, further comprising using one or more algorithms operating on the processor to detect a position and an amplitude for each of a plurality of peak values from the windowed data using the determined LPC coefficients. 3. The method of claim 2, further comprising using one or more algorithms operating on the processor to remove the peak values from the windowed data to generate peak-removed windowed data. 4. The method of claim 3, further comprising using one or more algorithms operating on the processor to generate energy scale values for the peak-removed windowed data. 5. The method of claim 4, further comprising using one or more algorithms operating on the processor to encode the position and the amplitude for each of the peak values, the determined LPC coefficients, and the energy scale values. 6. A method for decoding data, comprising: decoding an encoded high frequency audio signal bitstream that is transmitted separately from low frequency data and encoded spectral parameters of the encoded high frequency audio signal bitstream using one or more algorithms operating on a processor, wherein the encoded spectral parameters include linear predictive coding (LPC) coefficients and energy scale values; generating a windowed noise signal corresponding to the energy scale values using one or more algorithms operating on the processor; and reconstructing a decoded high frequency signal from the windowed noise signal using the LPC coefficients, using one or more algorithms operating on the processor. 7. The method of claim 6, wherein the encoded spectral parameters include at least one peak value of the encoded high frequency audio signal bitstream. 8. The method of claim 7, further comprising generating an impulse response of a peak value of the encoded high frequency audio signal bitstream, using one or more algorithms operating on the processor. 9. The method of claim 8, further comprising adding the impulse response to the windowed noise signal, using one or more algorithms operating on the processor. 10. 
An apparatus for encoding an audio signal, comprising: a computer-usable non-transitory storage resource, and a processor communicatively coupled to the storage resource, wherein the processor is configured to: filter an input audio signal into two output signals, wherein each output signal has a sampling rate that is equal to a sampling rate of the input audio signal, and wherein one of the output signals includes high frequency data; window the high frequency data by selecting a set of the high frequency data and windowing the selected high frequency data in the time domain; determine a set of linear predictive coding (LPC) coefficients for the windowed data; generate energy scale values for the windowed data; and generate an encoded high frequency bitstream that is transmitted separately from low frequency data. 11. The apparatus of claim 10, wherein the processor is further configured to detect a position and an amplitude for each of a plurality of peak values from the windowed data using the determined LPC coefficients. 12. The apparatus of claim 11, wherein the processor is further configured to remove the peak values from the windowed data to generate peak-removed windowed data. 13. The apparatus of claim 12, wherein the processor is further configured to generate energy scale values for the peak-removed windowed data. 14. The apparatus of claim 13, wherein the processor is further configured to encode the position and the amplitude for each of the peak values, the determined LPC coefficients, and the energy scale values. 15. An apparatus for decoding an encoded high frequency audio signal bitstream, comprising: a computer-usable non-transitory storage resource, and a processor communicatively coupled to the storage resource, wherein the processor is configured to: decode the encoded high frequency audio signal bitstream that is transmitted separately from low frequency data and encoded spectral parameters of the encoded high frequency audio signal bitstream, wherein the encoded spectral parameters include linear predictive coding (LPC) coefficients and energy scale values; generate a windowed noise signal corresponding to the energy scale values; and reconstruct a decoded high frequency signal from the windowed noise signal using the LPC coefficients. 16. The apparatus of claim 15, wherein the encoded spectral parameters include at least one peak value of the encoded high frequency audio signal bitstream. 17. The apparatus of claim 16, wherein the processor is further configured to generate an impulse response of a peak value of the encoded high frequency audio signal bitstream. 18. The apparatus of claim 17, wherein the processor is further configured to add the impulse response of the peak values to the windowed noise signal, using one or more algorithms operating on the processor. 19. The method of claim 1, wherein the energy scale values are generated by performing a fast Fourier transform on the windowed data to generate an output and then by multiplying each frequency bin of the output with its complex conjugate. 20. The method of claim 3, wherein the energy scale values are generated by performing a fast Fourier transform on the peak-removed windowed data to generate an output and then by multiplying each frequency bin of the output with its complex conjugate.
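The encoding side of the preceding claims chains time-domain windowing with LPC analysis. A sketch of that step follows, using the standard autocorrelation method; the Hann window and model order are my assumptions, as the claims do not fix either choice.

```python
import numpy as np

def lpc_coefficients(high_band_frame, order=8):
    """One conventional way to realize the 'window the high frequency data'
    and 'determine a set of LPC coefficients' steps of claims 1 and 10."""
    # Windowing the selected high frequency data in the time domain.
    w = high_band_frame * np.hanning(len(high_band_frame))
    # Autocorrelation lags 0..order of the windowed data.
    r = np.array([np.dot(w[: len(w) - k], w[k:]) for k in range(order + 1)])
    # Solve the Toeplitz normal equations R a = r[1:] for the predictor taps.
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])
```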
A method for encoding an audio signal uses one or more algorithms operating on a processor to filter the audio signal into two output signals, wherein each output signal has a sampling rate that is equal to a sampling rate of the audio signal, and wherein one of the output signals includes high frequency data. One or more algorithms operating on the processor window the high frequency data by selecting a set of the high frequency data, determine a set of linear predictive coding (LPC) coefficients for the windowed data, generate energy scale values for the windowed data, and generate an encoded high frequency bitstream.1. A method for encoding an audio signal, comprising: using one or more algorithms operating on a processor to filter an input audio signal into two output signals, wherein each output signal has a sampling rate that is equal to a sampling rate of the input audio signal, and wherein one of the output signals includes high frequency data; using one or more algorithms operating on the processor to window the high frequency data by selecting a set of the high frequency data and windowing the selected high frequency data in the time domain; using one or more algorithms operating on the processor to determine a set of linear predictive coding (LPC) coefficients for the windowed data; using one or more algorithms operating on the processor to generate energy scale values for the windowed data; and using one or more algorithms operating on the processor to generate an encoded high frequency bitstream that is transmitted separately from low frequency data. 2. The method of claim 1, further comprising using one or more algorithms operating on the processor to detect a position and an amplitude for each of a plurality of peak values from the windowed data using the determined LPC coefficients. 3. The method of claim 2, further comprising using one or more algorithms operating on the processor to remove the peak values from the windowed data to generate peak-removed windowed data. 4. The method of claim 3, further comprising using one or more algorithms operating on the processor to generate energy scale values for the peak-removed windowed data. 5. The method of claim 4, further comprising using one or more algorithms operating on the processor to encode the position and the amplitude for each of the peak values, the determined LPC coefficients, and the energy scale values. 6. A method for decoding data, comprising: decoding an encoded high frequency audio signal bitstream that is transmitted separately from low frequency data and encoded spectral parameters of the encoded high frequency audio signal bitstream using one or more algorithms operating on a processor, wherein the encoded spectral parameters include linear predictive coding (LPC) coefficients and energy scale values; generating a windowed noise signal corresponding to the energy scale values using one or more algorithms operating on the processor; and reconstructing a decoded high frequency signal from the windowed noise signal using the LPC coefficients, using one or more algorithms operating on the processor. 7. The method of claim 6, wherein the encoded spectral parameters include at least one peak value of the encoded high frequency audio signal bitstream. 8. 
The method of claim 7, further comprising generating an impulse response of a peak value of the encoded high frequency audio signal bitstream, using one or more algorithms operating on the processor. 9. The method of claim 8 further comprising adding the impulse response to the windowed noise signal, using one or more algorithms operating on the processor. 10. An apparatus for encoding an audio signal, comprising: a computer-usable non-transitory storage resource, and a processor communicatively coupled to the storage resource, wherein the processor is configured to: filter an input audio signal into two output signals, wherein each output signal has a sampling rate that is equal to a sampling rate of the input audio signal, and wherein one of the output signals includes high frequency data; window the high frequency data by selecting a set of the high frequency data and windowing the selected high frequency data in the time domain; determine a set of linear predictive coding (LPC) coefficients for the windowed data; generate energy scale values for the windowed data; and generate an encoded high frequency bitstream that is transmitted separately from low frequency data. 11. The apparatus of claim 10, wherein the processor is further configured to detect a position and an amplitude for each of a plurality of peak values from the windowed data using the determined LPC coefficients. 12. The apparatus of claim 11, wherein the processor is further configured to remove the peak values from the windowed data to generate peak-removed windowed data. 13. The apparatus of claim 12, wherein the processor is further configured to generate energy scale values for the peak-removed windowed data. 14. The apparatus of claim 13, wherein the processor is further configured to encode the position and the amplitude for each of the peak values, the determined LPC coefficients, and the energy scale values. 15. An apparatus for decoding an encoded high frequency audio signal bitstream, comprising: a computer-usable non-transitory storage resource, and a processor communicatively coupled to the storage resource, wherein the processor is configured to: decode the encoded high frequency audio signal bitstream that is transmitted separately from low frequency data and encoded spectral parameters of the encoded high frequency audio signal bitstream, wherein the encoded spectral parameters include linear predictive coding (LPC) coefficients and energy scale values; generate a windowed noise signal corresponding to the energy scale values; and reconstruct a decoded high frequency signal from the windowed noise signal using the LPC coefficients. 16. The apparatus of claim 15, wherein the encoded spectral parameters include at least one peak value of the encoded high frequency audio signal bitstream. 17. The apparatus of claim 16, wherein the processor is further configured to generate an impulse response of a peak value of the encoded high frequency audio signal bitstream. 18. The apparatus of claim 17, wherein the processor is further configured to add the impulse response of the peak values to the windowed noise signal, using one or more algorithms operating on the processor. 19. The method of claim 1 wherein the energy scale values are generated by performing a fast Fourier Transform on the windowed data to generate an output and then by multiplying each frequency bin of the output with its complex conjugate. 20.
The method of claim 3 wherein the energy scale values are generated by performing a fast Fourier Transform on the peak-removed windowed data to generate an output and then by multiplying each frequency bin of the output with its complex conjugate.
2,600
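Claims 19 and 20 of the record above describe the energy scale computation concretely: take a fast Fourier transform of the (peak-removed) windowed data, then multiply each frequency bin by its complex conjugate. The sketch below is a minimal, hypothetical Python rendering of that step; the band count and the per-band averaging are assumptions, not part of the claims.

    import numpy as np

    def energy_scale_values(windowed, n_bands=16):
        # FFT of the windowed high-frequency data (claims 19/20).
        spectrum = np.fft.rfft(windowed)
        # Multiplying each bin by its complex conjugate yields the
        # per-bin energy (the product is real; .real drops the 0j part).
        bin_energy = (spectrum * np.conj(spectrum)).real
        # Hypothetical reduction: average bins into a small set of
        # band energies to serve as the "energy scale values".
        bands = np.array_split(bin_energy, n_bands)
        return np.array([band.mean() for band in bands])

    # Toy usage: energy scale values for 1024 samples of noise.
    print(energy_scale_values(np.random.randn(1024)))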
10,656
10,656
15,500,672
2,627
A system includes an image capturing device, a user input device, and a processor coupled to the image capturing device and user input device. The processor includes instructions for capturing a data image with the image capturing device. The data image includes a signal from the user input device. The processor further includes instructions for deactivating the signal from the user input device and, after deactivating the signal from the user input device, capturing an ambient image. The processor further includes instructions for subtracting the ambient image from the data image and determining a position of the user input device in a three-dimensional space using a result of the subtracting.
1. A method comprising: capturing a data image, the data image comprising a signal from a user input device; deactivating the signal from the user input device; after deactivating the signal from the user input device, capturing an ambient image; subtracting, using a processor, the ambient image from the data image; and determining a position of the user input device in a three-dimensional space using a product of said subtracting. 2. The method of claim 1 further comprising displaying a position of the user input device. 3. The method of claim 1 wherein capturing a data image and capturing an ambient image comprise capturing data and ambient images of a work space. 4. The method of claim 3 wherein the data image and the ambient image are captured by a camera positioned with a fixed optical geometry relative to the work space. 5. The method of claim 1 wherein the data and ambient images are two-dimensional images. 6. The method of claim 1 wherein: the signal from the user input device comprises an infrared signal; and the data image and the ambient image are captured by an infrared camera. 7. The method of claim 1 wherein the user input device is a stylus. 8. A system comprising: an image capturing device; a user input device comprising a retroreflective material comprising a first retroreflective pattern and a second retroreflective pattern, wherein the first pattern is different from the second pattern; and a processor coupled to the image capturing device and user input device, the processor comprising instructions for capturing a data image with the image capturing device, the data image comprising a signal from the user input device; deactivating the signal from the user input device; after deactivating the signal from the user input device, capturing an ambient image; subtracting the ambient image from the data image; and determining a position of the user input device in a three-dimensional space using a product of said subtracting. 9. The system of claim 8 wherein the image capturing device comprises an infrared camera and a depth sensor. 10. The system of claim 8 wherein the instructions for capturing a data image and capturing an ambient image comprise instructions for capturing a data image and an ambient image of a work space. 11. The system of claim 10 wherein the image capturing device is positioned with a fixed optical geometry relative to the work space. 12. The system of claim 9 further comprising a projector for displaying the position of the user input device, wherein optics associated with the projector are coaxially aligned with optics associated with the image capturing device. 13. The system of claim 12 wherein the optics associated with the projector and the optics associated with the image capturing device have nearly identical chief ray angles. 14. A non-transitory computer readable medium encoded with instructions executable by a processor to: capture a first image, the first image comprising an infrared signal from a stylus; deactivate the infrared signal from the stylus; capture a second image after deactivating the infrared signal from the stylus; use a processor to subtract the second image from the first image; and determine a position of the stylus in a three-dimensional space. 15. The non-transitory computer readable medium of claim 14, wherein the instructions are further executable by the processor to display the position of the stylus on a display device.
A system includes an image capturing device, a user input device, and a processor coupled to the image capturing device and user input device. The processor includes instructions for capturing a data image with the image capturing device. The data image includes a signal from the user input device. The processor further includes instructions for deactivating the signal from the user input device and, after deactivating the signal from the user input device, capturing an ambient image. The processor further includes instructions for subtracting the ambient image from the data image and determining a position of the user input device in a three-dimensional space using a result of the subtracting.1. A method comprising: capturing a data image, the data image comprising a signal from a user input device; deactivating the signal from the user input device; after deactivating the signal from the user input device, capturing an ambient image; subtracting, using a processor, the ambient image from the data image; and determining a position of the user input device in a three-dimensional space using a product of said subtracting. 2. The method of claim 1 further comprising displaying a position of the user input device. 3. The method of claim 1 wherein capturing a data image and capturing an ambient image comprise capturing data and ambient images of a work space. 4. The method of claim 3 wherein the data image and the ambient image are captured by a camera positioned with a fixed optical geometry relative to the work space. 5. The method of claim 1 wherein the data and ambient images are two-dimensional images. 6. The method of claim 1 wherein: the signal from the user input device comprises an infrared signal; and the data image and the ambient image are captured by an infrared camera. 7. The method of claim 1 wherein the user input device is a stylus. 8. A system comprising: an image capturing device; a user input device comprising a retroreflective material comprising a first retroreflective pattern and a second retroreflective pattern, wherein the first pattern is different from the second pattern; and a processor coupled to the image capturing device and user input device, the processor comprising instructions for capturing a data image with the image capturing device, the data image comprising a signal from the user input device; deactivating the signal from the user input device; after deactivating the signal from the user input device, capturing an ambient image; subtracting the ambient image from the data image; and determining a position of the user input device in a three-dimensional space using a product of said subtracting. 9. The system of claim 8 wherein the image capturing device comprises an infrared camera and a depth sensor. 10. The system of claim 8 wherein the instructions for capturing a data image and capturing an ambient image comprise instructions for capturing a data image and an ambient image of a work space. 11. The system of claim 10 wherein the image capturing device is positioned with a fixed optical geometry relative to the work space. 12. The system of claim 9 further comprising a projector for displaying the position of the user input device, wherein optics associated with the projector are coaxially aligned with optics associated with the image capturing device. 13. The system of claim 12 wherein the optics associated with the projector and the optics associated with the image capturing device have nearly identical chief ray angles. 14.
A non-transitory computer readable medium encoded with instructions executable by a processor to: capture a first image, the first image comprising an infrared signal from a stylus; deactivate the infrared signal from the stylus; capture a second image after deactivating the infrared signal from the stylus; use a processor to subtract the second image from the first image; and determine a position of the stylus in a three-dimensional space. 15. The non-transitory computer readable medium of claim 14, wherein the instructions are further executable by the processor to display the position of the stylus on a display device.
2,600
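The claims of the record above turn on a simple frame-differencing step: capture one frame with the stylus signal on and one with it off, subtract, and take the residual as the stylus location. The Python sketch below illustrates that idea under stated assumptions; the brightness threshold, the single-peak assumption, and the function name are hypothetical, and the third coordinate would come from the depth sensor recited in claim 9.

    import numpy as np

    def locate_stylus(data_image, ambient_image, threshold=30):
        # Subtract the ambient frame (signal off) from the data frame
        # (signal on); ambient light cancels, the IR signal remains.
        diff = data_image.astype(np.int32) - ambient_image.astype(np.int32)
        diff = np.clip(diff, 0, None)
        if diff.max() < threshold:
            return None  # no stylus signal detected this frame
        # Hypothetical: treat the brightest residual pixel as the 2D
        # stylus position; a depth sensor would supply the third axis.
        y, x = np.unravel_index(np.argmax(diff), diff.shape)
        return x, y

    # Toy usage with an 8-bit frame pair.
    ambient = np.zeros((480, 640), dtype=np.uint8)
    data = ambient.copy()
    data[120, 300] = 255
    print(locate_stylus(data, ambient))  # (300, 120)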
10,657
10,657
16,347,905
2,683
A method is presented for controlling a lock configured to control access to a restricted physical space, the method being performed in a lock controller. There is a respective active space associated with each lock. The method comprises the steps of: receiving an activation signal from an activation device, the activation signal being based on a portable key device being located within the active space associated with the lock, based on a first indication of position of the portable key device obtained using a first positioning procedure; obtaining an indication that the portable key device is granted access to the lock; determining a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determining intent to open based on the second indication of position; and transmitting an unlock signal to the lock associated with the lock controller.
1. A method for controlling a lock configured to control access to a restricted physical space, the method being performed in a lock controller connected to the lock, the lock being one of a plurality of locks, wherein there is a respective active space associated with each one of the plurality of locks, the method being performed by the lock controller and comprising the steps of: entering a sleep state, in which the lock controller is unable to receive an activation signal; entering a communication state, in which the lock controller is able to receive an activation signal; receiving, while in the communication state, an activation signal from an activation device, the activation signal being based on a portable key device being located within the active space associated with the lock, based on a first indication of position of the portable key device obtained from a first positioning procedure; obtaining an indication that the portable key device is granted access to the lock; determining a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determining intent to open based on the second indication of position; and transmitting an unlock signal to the lock associated with the lock controller. 2. The method according to claim 1, wherein the indication that the portable key device is granted access forms part of the activation signal. 3. The method according to claim 1, wherein the step of obtaining the indication that the portable key device is granted access to the lock comprises determining access based on communication between the lock controller and the portable key device to authenticate the portable key device. 4. The method according to claim 1, wherein in the step of determining intent to open, a threshold of determining intent is based on the identity of the portable key devices. 5. The method according to claim 4, wherein in the step of determining intent to open, a threshold of determining intent is based on historic data associated with the portable key devices. 6. The method according to claim 1, wherein in the step of determining intent to open, a threshold of determining intent is based on time. 7. The method according to claim 1, wherein in the step of determining intent to open, a threshold of determining intent is based on statistics of previously determined intent and corresponding opening of a barrier associated with the lock. 8. The method according to claim 1, further comprising the step of: detecting, using the second positioning procedure, how many portable key devices pass through a physical barrier associated with the lock. 9. The method according to claim 1, further comprising the step of: determining whether there is strong intent to open, wherein the step of transmitting an unlock signal is performed when there is strong intent. 10.
A lock controller for controlling a lock configured to control access to a restricted physical space, the lock being one of a plurality of locks, wherein there is a respective active space associated with each one of the plurality of locks, the lock controller comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the lock controller to: enter a sleep state, in which the lock controller is unable to receive an activation signal; enter a communication state, in which the lock controller is able to receive an activation signal; receive, while in the communication state, an activation signal from an activation device, the activation signal being based on a portable key device being located within the active space associated with the lock, based on a first indication of position of the portable key device obtained from a first positioning procedure; obtain an indication that the portable key device is granted access to the lock; determine a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determine intent to open based on the second indication of position; and transmit an unlock signal to the lock associated with the lock controller. 11. The lock controller according to claim 10, wherein the instructions to determine intent to open comprise instructions that, when executed by the processor, cause the lock controller to use a threshold of determining intent based on the identity of the portable key devices. 12. The lock controller according to claim 10, wherein the instructions to determine intent to open comprise instructions that, when executed by the processor, cause the lock controller to use a threshold of determining intent based on historic data associated with the portable key devices. 13. A computer program for controlling a lock configured to control access to a restricted physical space, the computer program being executed in a lock controller connected to the lock, the lock being one of a plurality of locks, wherein there is a respective active space associated with each one of the plurality of locks, the computer program comprising computer program code which, when run on a lock controller, causes the lock controller to: enter a sleep state, in which the lock controller is unable to receive an activation signal; enter a communication state, in which the lock controller is able to receive an activation signal; receive, while in the communication state, an activation signal from an activation device, the activation signal being based on a portable key device being located within the active space associated with the lock, based on a first indication of position of the portable key device, obtained from a first positioning procedure; obtain an indication that the portable key device is granted access to the lock; determine a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determine intent to open based on the second indication of position; and transmit an unlock signal to the lock associated with the lock controller. 14. A computer program product comprising a computer program according to claim 13 and a computer readable means on which the computer program is stored. 15.
An access control system for controlling a lock configured to control access to a restricted physical space, the lock being one of a plurality of locks, wherein there is a respective active space associated with each one of the plurality of locks, the access control system comprising an activation device comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the activation device to: determine a first indication of position of the portable key device using a first positioning procedure; determine when the portable key device is located within the active space associated with the lock, based on the first indication of position; transmit an activation signal to the lock controller associated with the lock of the active space, when the portable key device is located within the active space associated with the lock; wherein the access control system further comprises a plurality of lock controllers, each one of which comprises: a processor; and a memory storing instructions that, when executed by the processor, cause the lock controller to: enter a sleep state, in which the lock controller is unable to receive an activation signal; enter a communication state, in which the lock controller is able to receive an activation signal; receive, while in the communication state, an activation signal from the activation device; obtain an indication that the portable key device is granted access to the lock; determine a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determine intent to open based on the second indication of position; and transmit an unlock signal to the lock associated with the lock controller.
A method is presented for controlling a lock configured to control access to a restricted physical space, the method being performed in a lock controller. There is a respective active space associated with each lock. The method comprises the steps of: receiving an activation signal from an activation device, the activation signal being based on a portable key device being located within the active space associated with the lock, based on a first indication of position of the portable key device obtained using a first positioning procedure; obtaining an indication that the portable key device is granted access to the lock; determining a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determining intent to open based on the second indication of position; and transmitting an unlock signal to the lock associated with the lock controller.1. A method for controlling a lock configured to control access to a restricted physical space, the method being performed in a lock controller connected to the lock, the lock being one of a plurality of locks, wherein there is a respective active space associated with each one of the plurality of locks, the method being performed by the lock controller and comprising the steps of: entering a sleep state, in which the lock controller is unable to receive an activation signal; entering a communication state, in which the lock controller is able to receive an activation signal; receiving, while in the communication state, an activation signal from an activation device, the activation signal being based on a portable key device being located within the active space associated with the lock, based on a first indication of position of the portable key device obtained from a first positioning procedure; obtaining an indication that the portable key device is granted access to the lock; determining a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determining intent to open based on the second indication of position; and transmitting an unlock signal to the lock associated with the lock controller. 2. The method according to claim 1, wherein the indication that the portable key device is granted access forms part of the activation signal. 3. The method according to claim 1, wherein the step of obtaining the indication that the portable key device is granted access to the lock comprises determining access based on communication between the lock controller and the portable key device to authenticate the portable key device. 4. The method according to claim 1, wherein in the step of determining intent to open, a threshold of determining intent is based on the identity of the portable key devices. 5. The method according to claim 4, wherein in the step of determining intent to open, a threshold of determining intent is based on historic data associated with the portable key devices. 6. The method according to claim 1, wherein in the step of determining intent to open, a threshold of determining intent is based on time. 7. The method according to claim 1, wherein in the step of determining intent to open, a threshold of determining intent is based on statistics of previously determined intent and corresponding opening of a barrier associated with the lock. 8.
The method according to claim 1, further comprising the step of: detecting, using the second positioning procedure, how many portable key devices pass through a physical barrier associated with the lock. 9. The method according to claim 1, further comprising the step of: determining whether there is strong intent to open, wherein the step of transmitting an unlock signal is performed when there is strong intent. 10. A lock controller for controlling a lock configured to control access to a restricted physical space, the lock being one of a plurality of locks, wherein there is a respective active space associated with each one of the plurality of locks, the lock controller comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the lock controller to: enter a sleep state, in which the lock controller is unable to receive an activation signal; enter a communication state, in which the lock controller is able to receive an activation signal; receive, while in the communication state, an activation signal from an activation device, the activation signal being based on a portable key device being located within the active space associated with the lock, based on a first indication of position of the portable key device obtained from a first positioning procedure; obtain an indication that the portable key device is granted access to the lock; determine a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determine intent to open based on the second indication of position; and transmit an unlock signal to the lock associated with the lock controller. 11. The lock controller according to claim 10, wherein the instructions to determine intent to open comprise instructions that, when executed by the processor, cause the lock controller to use a threshold of determining intent based on the identity of the portable key devices. 12. The lock controller according to claim 10, wherein the instructions to determine intent to open comprise instructions that, when executed by the processor, cause the lock controller to use a threshold of determining intent based on historic data associated with the portable key devices. 13.
A computer program for controlling a lock configured to control access to a restricted physical space, the computer program being executed in a lock controller connected to the lock, the lock being one of a plurality of locks, wherein there is a respective active space associated with each one of the plurality of locks, the computer program comprising computer program code which, when run on a lock controller, causes the lock controller to: enter a sleep state, in which the lock controller is unable to receive an activation signal; enter a communication state, in which the lock controller is able to receive an activation signal; receive, while in the communication state, an activation signal from an activation device, the activation signal being based on a portable key device being located within the active space associated with the lock, based on a first indication of position of the portable key device, obtained from a first positioning procedure; obtain an indication that the portable key device is granted access to the lock; determine a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determine intent to open based on the second indication of position; and transmit an unlock signal to the lock associated with the lock controller. 14. A computer program product comprising a computer program according to claim 13 and a computer readable means on which the computer program is stored. 15. An access control system for controlling a lock configured to control access to a restricted physical space, the lock being one of a plurality of locks, wherein there is a respective active space associated with each one of the plurality of locks, the access control system comprising an activation device comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the activation device to: determine a first indication of position of the portable key device using a first positioning procedure; determine when the portable key device is located within the active space associated with the lock, based on the first indication of position; transmit an activation signal to the lock controller associated with the lock of the active space, when the portable key device is located within the active space associated with the lock; wherein the access control system further comprises a plurality of lock controllers, each one of which comprises: a processor; and a memory storing instructions that, when executed by the processor, cause the lock controller to: enter a sleep state, in which the lock controller is unable to receive an activation signal; enter a communication state, in which the lock controller is able to receive an activation signal; receive, while in the communication state, an activation signal from the activation device; obtain an indication that the portable key device is granted access to the lock; determine a second indication of position of the portable key device using a second positioning procedure, wherein the second positioning procedure is more accurate than the first positioning procedure; determine intent to open based on the second indication of position; and transmit an unlock signal to the lock associated with the lock controller.
2,600
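The lock-controller flow recited in the claims above (sleep/communication states, coarse positioning to trigger activation, access check, fine positioning, intent test, unlock) is easy to see as a small decision routine. The Python sketch below is a hypothetical rendering; the distance thresholds, field names, and access list are assumptions, not taken from the claims.

    from dataclasses import dataclass

    @dataclass
    class KeyDevice:
        key_id: str
        coarse_m: float  # first, less accurate position estimate
        fine_m: float    # second, more accurate position estimate

    GRANTED = {"key-123"}   # hypothetical access list
    ACTIVE_SPACE_M = 3.0    # radius of the lock's active space (assumed)
    INTENT_M = 0.5          # intent-to-open distance threshold (assumed)

    def on_activation(key: KeyDevice) -> bool:
        """Return True when the controller should send an unlock signal."""
        # The activation signal implies the first positioning procedure
        # already placed the key inside the active space; re-check it.
        if key.coarse_m > ACTIVE_SPACE_M:
            return False
        # Indication that the key device is granted access to the lock.
        if key.key_id not in GRANTED:
            return False
        # The second, more accurate positioning procedure drives the
        # intent-to-open decision.
        return key.fine_m <= INTENT_M

    print(on_activation(KeyDevice("key-123", coarse_m=2.1, fine_m=0.3)))  # True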
10,658
10,658
15,144,618
2,658
Systems and processes for intelligent device identification are provided. In one example process, audio input may be sampled with a microphone at each of two or more of a plurality of electronic devices. A first electronic device of the plurality of electronic devices for determining a task associated with the sampled audio input may be identified. The process may determine the task based on the sampled audio input with the first electronic device and identify a second electronic device of the plurality of electronic devices for performing the task. The task may be performed with the second electronic device. The second electronic device is not the first electronic device in some examples.
1. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: receive, at a first electronic device, data indicative of a task, wherein the task is associated with an audio input; determine, with the first electronic device, whether the first electronic device or a second electronic device is to perform the task; in accordance with a determination that the task is associated with the first electronic device, perform the task with the first electronic device; and in accordance with a determination that the task is associated with the second electronic device, cause data indicative of the task to be transmitted to the second electronic device. 2. The non-transitory computer-readable storage medium of claim 1, wherein causing the data indicative of the task to be provided to a second electronic device comprises: providing data indicative of the task to a third electronic device. 3. The non-transitory computer-readable storage medium of claim 1, wherein receiving comprises: receiving, at a microphone of the first electronic device, the audio input; and in response to receipt of the audio input, generating, with the first electronic device, the data indicative of the task based at least in part on the received audio input. 4. The non-transitory computer-readable storage medium of claim 1, wherein receiving comprises: receiving, at a microphone of the first electronic device, the audio input; transmitting data representing the received audio input to one or more servers; and receiving from the one or more servers the data indicative of the task, wherein the task was determined by the one or more servers based on the data representing the received audio input. 5. The non-transitory computer-readable storage medium of claim 1, wherein determining comprises: determining whether the task is associated with the first electronic device or the second electronic device in accordance with a plurality of prioritization rules. 6. The non-transitory computer-readable storage medium of claim 5, wherein determining whether the task is associated with the first electronic device or the second electronic device in accordance with a plurality of prioritization rules comprises: determining whether the task is associated with the first electronic device or the second electronic device based on a display capability of the first electronic device, mobility of the first electronic device, or a combination thereof. 7. The non-transitory computer-readable storage medium of claim 5, wherein determining whether the task is associated with the first electronic device or the second electronic device in accordance with a plurality of prioritization rules comprises: determining whether the task is associated with the first electronic device or the second electronic device based on prior use of the first and second electronic devices. 8. The non-transitory computer-readable storage medium of claim 1, wherein performing the task with the first electronic device comprises performing a first task associated with a first user and wherein the method further comprises: performing, with the first electronic device, a second task associated with a second user different from the first user. 9.
The non-transitory computer-readable storage medium of claim 1, wherein performing the task with the first electronic device comprises: authenticating a user associated with the first electronic device; and responsive to authenticating the user, performing the task. 10. The non-transitory computer-readable storage medium of claim 9, wherein authenticating the user comprises receiving a biometric input. 11. A method of identifying an electronic device from a plurality of electronic devices for performing a task, the method comprising: receiving, at a first electronic device, data indicative of a task, wherein the task is associated with an audio input; determining, with the first electronic device, whether the first electronic device or a second electronic device is to perform the task; in accordance with a determination that the task is associated with the first electronic device, performing the task with the first electronic device; and in accordance with a determination that the task is associated with the second electronic device, causing data indicative of the task to be transmitted to the second electronic device. 12. An electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, at a first electronic device, data indicative of a task, wherein the task is associated with an audio input; determining, with the first electronic device, whether the first electronic device or a second electronic device is to perform the task; in accordance with a determination that the task is associated with the first electronic device, performing the task with the first electronic device; and in accordance with a determination that the task is associated with the second electronic device, causing data indicative of the task to be transmitted to the second electronic device.
Systems and processes for intelligent device identification are provided. In one example process, audio input may be sampled with a microphone at each of two or more of a plurality of electronic devices. A first electronic device of the plurality of electronic devices for determining a task associated with the sampled audio input may be identified. The process may determine the task based on the sampled audio input with the first electronic device and identify a second electronic device of the plurality of electronic devices for performing the task. The task may be performed with the second electronic device. The second electronic device is not the first electronic device in some examples.1. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to: receive, at a first electronic device, data indicative of a task, wherein the task is associated with an audio input; determine, with the first electronic device, whether the first electronic device or a second electronic device is to perform the task; in accordance with a determination that the task is associated with the first electronic device, perform the task with the first electronic device; and in accordance with a determination that the task is associated with the second electronic device, cause data indicative of the task to be transmitted to the second electronic device. 2. The non-transitory computer-readable storage medium of claim 1, wherein causing the data indicative of the task to be provided to a second electronic device comprises: providing data indicative of the task to a third electronic device. 3. The non-transitory computer-readable storage medium of claim 1, wherein receiving comprises: receiving, at a microphone of the first electronic device, the audio input; and in response to receipt of the audio input, generating, with the first electronic device, the data indicative of the task based at least in part on the received audio input. 4. The non-transitory computer-readable storage medium of claim 1, wherein receiving comprises: receiving, at a microphone of the first electronic device, the audio input; transmitting data representing the received audio input to one or more servers; and receiving from the one or more servers the data indicative of the task, wherein the task was determined by the one or more servers based on the data representing the received audio input. 5. The non-transitory computer-readable storage medium of claim 1, wherein determining comprises: determining whether the task is associated with the first electronic device or the second electronic device in accordance with a plurality of prioritization rules. 6. The non-transitory computer-readable storage medium of claim 5, wherein determining whether the task is associated with the first electronic device or the second electronic device in accordance with a plurality of prioritization rules comprises: determining whether the task is associated with the first electronic device or the second electronic device based on a display capability of the first electronic device, mobility of the first electronic device, or a combination thereof. 7.
The non-transitory computer-readable storage medium of claim 5, wherein determining whether the task is associated with the first electronic device or the second electronic device in accordance with a plurality of prioritization rules comprises: determining whether the task is associated with the first electronic device or the second electronic device based on prior use of the first and second electronic devices. 8. The non-transitory computer-readable storage medium of claim 1, wherein performing the task with the first electronic device comprises performing a first task associated with a first user and wherein the method further comprises: performing, with the first electronic device, a second task associated with a second user different from the first user. 9. The non-transitory computer-readable storage medium of claim 1, wherein performing the task with the first electronic device comprises: authenticating a user associated with the first electronic device; and responsive to authenticating the user, performing the task. 10. The non-transitory computer-readable storage medium of claim 9, wherein authenticating the user comprises receiving a biometric input. 11. A method of identifying an electronic device from a plurality of electronic devices for performing a task, the method comprising: receiving, at a first electronic device, data indicative of a task, wherein the task is associated with an audio input; determining, with the first electronic device, whether the first electronic device or a second electronic device is to perform the task; in accordance with a determination that the task is associated with the first electronic device, performing the task with the first electronic device; and in accordance with a determination that the task is associated with the second electronic device, causing data indicative of the task to be transmitted to the second electronic device. 12. An electronic device, comprising: one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving, at a first electronic device, data indicative of a task, wherein the task is associated with an audio input; determining, with the first electronic device, whether the first electronic device or a second electronic device is to perform the task; in accordance with a determination that the task is associated with the first electronic device, performing the task with the first electronic device; and in accordance with a determination that the task is associated with the second electronic device, causing data indicative of the task to be transmitted to the second electronic device.
2,600
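Claims 5 through 7 of the record above describe choosing between devices with ordered prioritization rules (display capability, mobility, prior use). The sketch below shows one plausible way to express such rules in Python; the specific rule order, the preference for stationary devices, and the tie-break on usage history are illustrative assumptions only.

    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        has_display: bool
        is_mobile: bool
        recent_uses: int  # stand-in for "prior use" history

    def pick_device(task_needs_display: bool, first: Device, second: Device) -> Device:
        # Apply prioritization rules in order; stop when one device remains.
        rules = (
            lambda d: d.has_display or not task_needs_display,  # display capability
            lambda d: not d.is_mobile,                          # assumed: prefer stationary
        )
        candidates = [first, second]
        for rule in rules:
            passing = [d for d in candidates if rule(d)]
            if len(passing) == 1:
                return passing[0]
            if passing:
                candidates = passing
        # Tie-break on prior use of the two devices.
        return max(candidates, key=lambda d: d.recent_uses)

    phone = Device("phone", has_display=True, is_mobile=True, recent_uses=12)
    tv = Device("tv", has_display=True, is_mobile=False, recent_uses=3)
    print(pick_device(True, phone, tv).name)  # "tv" under these toy rules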
10,659
10,659
16,033,778
2,685
A device (10) for the alarm server (100) includes an interface (12) for communication with one or more alarm sources (200) and with one or more alarm generators (300). A memory (14) stores information relating to alarms. A control module (16) is configured to control the one or more interfaces (12) and the memory (14), to receive an indication relating to an alarm from an alarm source (200) via the one or more interfaces (12), to store information relating to the alarm in the memory (14), to receive an alarm request from an alarm generator (300) via the one or more interfaces, and to provide information relating to the alarm for the alarm generator (300) upon receipt of the alarm request if information relating to the alarm is stored in the memory (14).
1. An alarm server device comprising: one or more interfaces for communication with one or more alarm sources and with one or more alarm generators; a memory for storing information relating to one or more alarms; and a control module configured to control the one or more interfaces and the memory, configured to receive an indication relating to an alarm via the one or more interfaces from an alarm source and to store the information relating to the alarm in the memory and configured to receive an alarm request from an alarm generator via the one or more interfaces and to provide the alarm for the alarm generator upon receipt of the alarm request if information relating to the alarm is stored in the memory. 2. An alarm server device in accordance with claim 1, wherein: the indication relating to an alarm comprises an indication of an alarm source that indicates a state of an alarm of the alarm source; and the control module is configured to store status information relating to a state of alarm of the alarm source in the memory. 3. An alarm server device in accordance with claim 1, wherein the control module is configured to provide or confirm the information relating to the alarm to the alarm generator according to a pull method for the alarm generator. 4. An alarm server device in accordance with claim 1, wherein the control module is configured to request the indication relating to the alarm at the alarm source according to a pull method for the alarm source or to receive the indication relating to the alarm according to a push method for the alarm source or both to request the indication relating to the alarm at the alarm source according to a pull method for the alarm source and also to receive the indication relating to the alarm according to a push method for the alarm source. 5. An alarm server device in accordance with claim 1, wherein the control module is configured to store one or more components of a group comprising: an identification of the alarm source; properties of the alarm; a category of the alarm; a location of the alarm; a priority of the alarm in the memory associatively with the indication, and to forward it to the alarm generator with confirmation of the alarm. 6. An alarm server device in accordance with claim 1, wherein the control module is configured to trigger a forwarding to additional alarm generators or to trigger a local alarm at the alarm server or to trigger both a forwarding to additional alarm generators and a local alarm at the alarm server, after confirmation of the presence of an alarm to the alarm generator if the alarm generator fails to confirm an acknowledgment of a presence of an alarm by a user after a predefined time. 7.
An alarm source device comprising: one or more interfaces for communication with an alarm server; a monitoring device for monitoring a patient parameter or monitoring a status of a medical device or monitoring both a patient parameter and a status of a medical device and for providing an alarm indication if the patient parameter or status meets a predefined condition; and a control module configured to control the one or more interfaces and the monitoring device and configured to forward an alarm indication provided by the monitoring device via the one or more interfaces to the alarm server and configured to trigger a local alarm at the alarm source after receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after a predefined time that the alarm indication has been received by an alarm generator. 8. An alarm source device in accordance with claim 7, wherein the monitoring device is configured to monitor the patient parameter taking into account at least one sensor signal of a medical device or taking into account at least one operating parameter of an actuator of a medical device relative to the predefined condition or taking into account at least one sensor signal of a medical device and at least one operating parameter of an actuator of a medical device relative to the predefined condition. 9. An alarm source device in accordance with claim 7, wherein: the control module is configured to provide the alarm indication via an alarm for the alarm server upon request by the alarm server according to a pull method for the alarm source; or to provide the alarm indication relating to the alarm to the alarm server according to a push method for the alarm source; or the control module is configured to provide the alarm indication via an alarm for the alarm server upon request by the alarm server according to a pull method for the alarm source and to provide the alarm indication relating to the alarm to the alarm server according to a push method for the alarm source. 10. An alarm source device in accordance with claim 7, wherein the predefined time corresponds to a second predefined time; the control module is configured to trigger a local alarm at the alarm source upon receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after another predefined time, corresponding to a first predefined time, that the alarm indication has been received by the alarm server. 11. An alarm source device in accordance with claim 10, wherein the control module is configured to trigger a local alarm at the alarm source upon receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after a third predefined time that the alarm indication has been acknowledged by a user. 12. An alarm source device in accordance with claim 11, wherein the control module is configured to trigger a local alarm at the alarm source upon receipt of an alarm indication from the monitoring device if no acknowledgment of the alarm indication by a user has been received at the alarm source after a fourth predefined time and the patient parameter still continues to meet the predefined condition. 13. An alarm source device in accordance with claim 12, wherein the control module is configured to adapt at least one of the predefined times according to the priority of the alarm. 14. 
An alarm generator device comprising: one or more interfaces for communication with an alarm server; an output device for outputting information relating to an alarm to a user; and a control module configured to control the one or more interfaces and the output device and configured to poll an alarm server via the one or more interfaces about a presence of an alarm and to receive information relating to the presence of an alarm at the alarm server on request and configured to signal the presence of the alarm to the user via the output device upon receipt of the information relating to the presence of an alarm from the alarm server. 15. An alarm generator device in accordance with claim 14, wherein the control module is configured to poll the alarm server about a presence of an alarm according to a pull method for the alarm generator device. 16. An alarm generator device in accordance with claim 14, wherein the control module is configured to poll the alarm server about the presence of alarms at regular, configurable time intervals, based on an event, or as a function of an operating state of the alarm generator device. 17. An alarm generator device in accordance with claim 14, wherein the control module is configured to trigger a local alarm at the alarm generator device upon receipt of the information or after confirmation of the presence of an alarm from the alarm server if no acknowledgment of the presence of an alarm by a user was obtained after a predefined time. 18. An alarm generator device in accordance with claim 17, wherein the control module is configured to adapt the predefined time according to a priority of the alarm. 19. An alarm system comprising: an alarm server device comprising one or more interfaces for communication with one or more alarm sources and with one or more alarm generators, a memory for storing information relating to one or more alarms, and an alarm server control module configured to control the one or more interfaces and the memory, configured to receive an indication relating to an alarm via the one or more interfaces from an alarm source and to store information relating to the alarm in the memory and configured to receive an alarm request from an alarm generator via the one or more interfaces and to provide the alarm for the alarm generator upon receipt of the alarm request if information relating to the alarm is stored in the memory; an alarm source device comprising one or more interfaces for communication with the alarm server, a monitoring device for monitoring a patient parameter or monitoring a status of a medical device or monitoring both a patient parameter and a status of a medical device and for providing an alarm indication if the patient parameter or status meets a predefined condition, and an alarm source control module configured to control the one or more interfaces and the monitoring device and configured to forward an alarm indication provided by the monitoring device via the one or more interfaces to the alarm server and configured to trigger a local alarm at the alarm source after receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after a predefined time that the alarm indication has been received by an alarm generator; and an alarm generator device comprising one or more interfaces for communication with the alarm server, an output device for outputting information relating to an alarm to a user, and an alarm generator control module configured to control the one or more
interfaces and the output device and configured to poll the alarm server via the one or more interfaces about a presence of an alarm and to receive information relating to the presence of an alarm at the alarm server on request and configured to signal the presence of the alarm to the user via the output device upon receipt of the information relating to the presence of an alarm from the alarm server. 20. An alarm server process comprising the steps of: providing an alarm server device comprising one or more interfaces for communication with one or more alarm sources and with one or more alarm generators, a memory for storing information relating to one or more alarms, and a control module configured to control the one or more interfaces and the memory, configured to receive an indication relating to an alarm via the one or more interfaces from an alarm source and to store information relating to the alarm in the memory and configured to receive an alarm request from an alarm generator via the one or more interfaces and to provide the alarm for the alarm generator upon receipt of the alarm request if information relating to the alarm is stored in the memory; receiving an indication of an alarm from an alarm source; storing information relating to the alarm; receiving an alarm request from an alarm generator; and supplying information relating to the alarm to the alarm generator if information relating to the alarm is stored. 21. An alarm server process according to claim 20, further comprising providing a computer program with a program code for executing the steps of receiving an indication of an alarm, storing information relating to the alarm, receiving an alarm request from an alarm generator; and supplying information relating to the alarm to the alarm generator, wherein the program code is executed on a computer, a processor or a programmable hardware component. 22. An alarm source process comprising the steps of: providing an alarm source device comprising one or more interfaces for communication with an alarm server, a monitoring device for monitoring a patient parameter or monitoring a status of a medical device or monitoring both a patient parameter and a status of a medical device and for providing an alarm indication if the patient parameter or status meets a predefined condition, and a control module configured to control the one or more interfaces and the monitoring device and configured to forward an alarm indication provided by the monitoring device via the one or more interfaces to the alarm server and configured to trigger a local alarm at the alarm source after receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after a predefined time that the alarm indication has been received by an alarm generator; monitoring a patient parameter or monitoring a status of a medical device or monitoring a patient parameter and a status of a medical device; providing an alarm indication if the patient parameter or medical device status meets a predefined condition; forwarding a provided alarm indication to an alarm server; and triggering a local alarm if no confirmation of the alarm server was received after a predefined time that the alarm indication has been received by an alarm generator. 23. 
An alarm source process according to claim 22, further comprising providing a computer program with a program code for executing the steps of monitoring a patient parameter or monitoring a status of a medical device or monitoring a patient parameter and a status of a medical device, providing an alarm indication, forwarding a provided alarm indication and triggering a local alarm, wherein the program code is executed on a computer, a processor or a programmable hardware component. 24. An alarm generator process comprising: providing an alarm generator device comprising one or more interfaces for communication with an alarm server, an output device for outputting information relating to an alarm to a user, and a control module configured to control the one or more interfaces and the output device and configured to poll an alarm server via the one or more interfaces about a presence of an alarm and to receive information relating to the presence of an alarm at the alarm server on request and configured to signal the presence of the alarm to the user via the output device upon receipt of the information relating to the presence of an alarm from the alarm server; polling for a presence of an alarm at an alarm server; receiving information relating to a presence of an alarm on request from the alarm server; and signaling a presence of an alarm to a user. 25. An alarm generator process according to claim 24, further comprising providing a computer program with a program code for executing the steps of polling for a presence of an alarm at an alarm server, receiving information relating to a presence of an alarm, and signaling a presence of an alarm to a user, wherein the program code is executed on a computer, a processor or a programmable hardware component.
A device (10) for an alarm server (100) includes an interface (12) for communication with one or more alarm sources (200) and with one or more alarm generators (300). A memory (14) stores information relating to alarms. A control module (16) is configured to control the one or more interfaces (12) and the memory (14) and receive an indication relating to an alarm from an alarm source (200) via the one or more interfaces (12) and to store information relating to the alarm in the memory (14) and to receive an alarm request from an alarm generator (300) via the one or more interfaces and to provide information relating to the alarm for the alarm generator (300) upon receipt of the alarm request if information relating to the alarm is stored in the memory (14).1. An alarm server device comprising: one or more interfaces for communication with one or more alarm sources and with one or more alarm generators; a memory for storing information relating to one or more alarms; and a control module configured to control the one or more interfaces and the memory, configured to receive an indication relating to an alarm via the one or more interfaces from an alarm source and to store the information relating to the alarm in the memory and configured to receive an alarm request from an alarm generator via the one or more interfaces and to provide the alarm for the alarm generator upon receipt of the alarm request if information relating to the alarm is stored in the memory. 2. An alarm server device in accordance with claim 1, wherein: the indication relating to an alarm comprises an indication of an alarm source that indicates a state of an alarm of the alarm source; and the control module is configured to store status information relating to a state of alarm of the alarm source in the memory. 3. An alarm server device in accordance with claim 1, wherein the control module is configured to provide or confirm the information relating to the alarm to the alarm generator according to a pull method for the alarm generator. 4. An alarm server device in accordance with claim 1, wherein the control module is configured to request the indication relating to the alarm at the alarm source according to a pull method for the alarm source or to receive the indication relating to the alarm according to a push method for the alarm source or both to request the indication relating to the alarm at the alarm source according to a pull method for the alarm source and also to receive the indication relating to the alarm according to a push method for the alarm source. 5. An alarm server device in accordance with claim 1, wherein the control module is configured to store one or more components of a group comprising: an identification of the alarm source; properties of the alarm; a category of the alarm; a location of the alarm; a priority of the alarm in the memory associatively with the indication, and to forward it to the alarm generator with confirmation of the alarm. 6. An alarm server device in accordance with claim 1, wherein the control module is configured to trigger a forwarding to additional alarm generators or to trigger a local alarm at the alarm server or to trigger both a forwarding to additional alarm generators and a local alarm at the alarm server, after confirmation of the presence of an alarm to the alarm generator if the alarm generator fails to confirm an acknowledgment of a presence of an alarm by a user after a predefined time. 7. 
An alarm source device comprising: one or more interfaces for communication with an alarm server; a monitoring device for monitoring a patient parameter or monitoring a status of a medical device or monitoring both a patient parameter and a status of a medical device and for providing an alarm indication if the patient parameter or status meets a predefined condition; and a control module configured to control the one or more interfaces and the monitoring device and configured to forward an alarm indication provided by the monitoring device via the one or more interfaces to the alarm server and configured to trigger a local alarm at the alarm source after receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after a predefined time that the alarm indication has been received by an alarm generator. 8. An alarm source device in accordance with claim 7, wherein the monitoring device is configured to monitor the patient parameter taking into account at least one sensor signal of a medical device or taking into account at least one operating parameter of an actuator of a medical device relative to the predefined condition or taking into account at least one sensor signal of a medical device and at least one operating parameter of an actuator of a medical device relative to the predefined condition. 9. An alarm source device in accordance with claim 7, wherein: the control module is configured to provide the alarm indication via an alarm for the alarm server upon request by the alarm server according to a pull method for the alarm source; or to provide the alarm indication relating to the alarm to the alarm server according to a push method for the alarm source; or the control module is configured to provide the alarm indication via an alarm for the alarm server upon request by the alarm server according to a pull method for the alarm source and to provide the alarm indication relating to the alarm to the alarm server according to a push method for the alarm source. 10. An alarm source device in accordance with claim 7, wherein the predefined time corresponds to a second predefined time; the control module is configured to trigger a local alarm at the alarm source upon receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after another predefined time, corresponding to a first predefined time, that the alarm indication has been received by the alarm server. 11. An alarm source device in accordance with claim 10, wherein the control module is configured to trigger a local alarm at the alarm source upon receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after a third predefined time that the alarm indication has been acknowledged by a user. 12. An alarm source device in accordance with claim 11, wherein the control module is configured to trigger a local alarm at the alarm source upon receipt of an alarm indication from the monitoring device if no acknowledgment of the alarm indication by a user has been received at the alarm source after a fourth predefined time and the patient parameter still continues to meet the predefined condition. 13. An alarm source device in accordance with claim 12, wherein the control module is configured to adapt at least one of the predefined times according to the priority of the alarm. 14. 
An alarm generator device comprising: one or more interfaces for communication with an alarm server; an output device for outputting information relating to an alarm to a user; and a control module configured to control the one or more interfaces and the output device and configured to poll an alarm server via the one or more interfaces about a presence of an alarm and to receive information relating to the presence of an alarm at the alarm server on request and configured to signal the presence of the alarm to the user via the output device upon receipt of the information relating to the presence of an alarm from the alarm server. 15. An alarm generator device in accordance with claim 14, wherein the control module is configured to poll the alarm server about a presence of an alarm according to a pull method for the alarm generator device. 16. An alarm generator device in accordance with claim 14, wherein the control module is configured to poll the alarm server about the presence of alarms at regular, configurable time intervals, based on an event, or as a function of an operating state of the alarm generator device. 17. An alarm generator device in accordance with claim 14, wherein the control module is configured to trigger a local alarm at the alarm generator device upon receipt of the information or after confirmation of the presence of an alarm from the alarm server if no acknowledgment of the presence of an alarm by a user was obtained after a predefined time. 18. An alarm generator device in accordance with claim 17, wherein the control module is configured to adapt the predefined time according to a priority of the alarm. 19. An alarm system comprising: an alarm server device comprising one or more interfaces for communication with one or more alarm sources and with one or more alarm generators, a memory for storing information relating to one or more alarms, and an alarm server control module configured to control the one or more interfaces and the memory, configured to receive an indication relating to an alarm via the one or more interfaces from an alarm source and to store information relating to the alarm in the memory and configured to receive an alarm request from an alarm generator via the one or more interfaces and to provide the alarm for the alarm generator upon receipt of the alarm request if information relating to the alarm is stored in the memory; an alarm source device comprising one or more interfaces for communication with the alarm server, a monitoring device for monitoring a patient parameter or monitoring a status of a medical device or monitoring both a patient parameter and a status of a medical device and for providing an alarm indication if the patient parameter or status meets a predefined condition, and an alarm source control module configured to control the one or more interfaces and the monitoring device and configured to forward an alarm indication provided by the monitoring device via the one or more interfaces to the alarm server and configured to trigger a local alarm at the alarm source after receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after a predefined time that the alarm indication has been received by an alarm generator; and an alarm generator device comprising one or more interfaces for communication with the alarm server, an output device for outputting information relating to an alarm to a user, and an alarm generator control module configured to control the one or more 
interfaces and the output device and configured to poll the alarm server via the one or more interfaces about a presence of an alarm and to receive information relating to the presence of an alarm at the alarm server on request and configured to signal the presence of the alarm to the user via the output device upon receipt of the information relating to the presence of an alarm from the alarm server. 20. An alarm server process comprising the steps of: providing an alarm server device comprising one or more interfaces for communication with one or more alarm sources and with one or more alarm generators, a memory for storing information relating to one or more alarms, and a control module configured to control the one or more interfaces and the memory, configured to receive an indication relating to an alarm via the one or more interfaces from an alarm source and to store information relating to the alarm in the memory and configured to receive an alarm request from an alarm generator via the one or more interfaces and to provide the alarm for the alarm generator upon receipt of the alarm request if information relating to the alarm is stored in the memory; receiving an indication of an alarm from an alarm source; storing information relating to the alarm; receiving an alarm request from an alarm generator; and supplying information relating to the alarm to the alarm generator if information relating to the alarm is stored. 21. An alarm server process according to claim 20, further comprising providing a computer program with a program code for executing the steps of receiving an indication of an alarm, storing information relating to the alarm, receiving an alarm request from an alarm generator, and supplying information relating to the alarm to the alarm generator, wherein the program code is executed on a computer, a processor or a programmable hardware component. 22. An alarm source process comprising the steps of: providing an alarm source device comprising one or more interfaces for communication with an alarm server, a monitoring device for monitoring a patient parameter or monitoring a status of a medical device or monitoring both a patient parameter and a status of a medical device and for providing an alarm indication if the patient parameter or status meets a predefined condition, and a control module configured to control the one or more interfaces and the monitoring device and configured to forward an alarm indication provided by the monitoring device via the one or more interfaces to the alarm server and configured to trigger a local alarm at the alarm source after receipt of an alarm indication from the monitoring device if no confirmation was received from the alarm server after a predefined time that the alarm indication has been received by an alarm generator; monitoring a patient parameter or monitoring a status of a medical device or monitoring a patient parameter and a status of a medical device; providing an alarm indication if the patient parameter or medical device status meets a predefined condition; forwarding a provided alarm indication to an alarm server; and triggering a local alarm if no confirmation was received from the alarm server after a predefined time that the alarm indication has been received by an alarm generator. 23. 
An alarm source process according to claim 22, further comprising providing a computer program with a program code for executing the steps of monitoring a patient parameter or monitoring a status of a medical device or monitoring a patient parameter and a status of a medical device, providing an alarm indication, forwarding a provided alarm indication and triggering a local alarm, wherein the program code is executed on a computer, a processor or a programmable hardware component. 24. An alarm generator process comprising: providing an alarm generator device comprising one or more interfaces for communication with an alarm server, an output device for outputting information relating to an alarm to a user, and a control module configured to control the one or more interfaces and the output device and configured to poll an alarm server via the one or more interfaces about a presence of an alarm and to receive information relating to the presence of an alarm at the alarm server on request and configured to signal the presence of the alarm to the user via the output device upon receipt of the information relating to the presence of an alarm from the alarm server; polling for a presence of an alarm at an alarm server; receiving information relating to a presence of an alarm on request from the alarm server; and signaling a presence of an alarm to a user. 25. An alarm generator process according to claim 24, further comprising providing a computer program with a program code for executing the steps of polling for a presence of an alarm at an alarm server, receiving information relating to a presence of an alarm, and signaling a presence of an alarm to a user, wherein the program code is executed on a computer, a processor or a programmable hardware component.
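The pull-model flow in the claims above — store the indication at the server, deliver it when a generator polls, and fall back to a local alarm when no confirmation arrives in time — can be outlined in a few lines. The following is a minimal Python sketch under stated assumptions: the class names, the in-memory store, and the timeout constant are illustrative, not taken from the application.

```python
import time

class AlarmServer:
    """Minimal in-memory alarm server: stores indications from alarm
    sources and hands them out when an alarm generator polls (pull model)."""

    def __init__(self):
        self._alarms = {}  # alarm_id -> stored information about the alarm

    def receive_indication(self, alarm_id, source_id, priority):
        # "receiving an indication of an alarm ... storing information
        # relating to the alarm" (claim 20).
        self._alarms[alarm_id] = {
            "source": source_id,
            "priority": priority,
            "received_at": time.time(),
            "delivered": False,
        }

    def handle_poll(self):
        # "receiving an alarm request ... supplying information relating to
        # the alarm ... if information relating to the alarm is stored".
        pending = {k: v for k, v in self._alarms.items() if not v["delivered"]}
        for info in pending.values():
            info["delivered"] = True
        return pending


class AlarmSource:
    """Alarm source with the local fallback of claims 7 and 22: if the
    expected confirmation does not arrive within a predefined time, the
    source alarms locally so the alarm is never silently lost."""

    # Hypothetical timeout value; the claims only require that predefined
    # times exist and (claim 13) may be adapted to the alarm priority.
    T_GENERATOR_ACK = 5.0

    def __init__(self, server):
        self.server = server

    def raise_alarm(self, alarm_id, priority, generator_confirmed):
        self.server.receive_indication(alarm_id, "source-1", priority)
        # generator_confirmed stands in for the asynchronous confirmation a
        # real device would wait T_GENERATOR_ACK seconds for.
        if not generator_confirmed:
            self.trigger_local_alarm(alarm_id)

    def trigger_local_alarm(self, alarm_id):
        print(f"LOCAL ALARM at source: {alarm_id}")


server = AlarmServer()
source = AlarmSource(server)
source.raise_alarm("spo2-low", priority=1, generator_confirmed=False)  # local fallback fires
print(server.handle_poll())  # what a polling alarm generator would receive
```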
2,600
10,660
10,660
14,890,143
2,643
A method, apparatus and computer program product are provided to facilitate interworking a plurality of cellular radio access networks and wireless local area networks (WLANs). In the context of a method, a set of information is maintained for a plurality of different cellular radio access networks that defines a relative priority of selection among the cellular radio access networks. The method also includes maintaining wireless access selection information defining a relative priority of selection among one or more WLANs relative to the cellular radio access networks. The method also permits the relative priority of the WLANs to be modified without modification of the set of information. For example, the relative priority of the WLANs may be modified in a manner specific to the user equipment or specific to a cell. The method may also include causing notification to be provided to user equipment of the set of information and the wireless access selection information.
1-48. (canceled) 49. A method comprising: maintaining a set of information for a plurality of different cellular radio access networks that defines a relative priority of selection among the cellular radio access networks; maintaining wireless access selection information defining a relative priority of selection among one or more wireless local area networks (WLANs) relative to the cellular radio access networks; permitting the relative priority of the WLANs to be modified without modification of the set of information; and causing notification to be provided to user equipment of at least some of the set of information and the wireless access selection information. 50. A method according to claim 49 wherein the set of information includes an indication of the WLANs. 51. A method according to claim 49 wherein the set of information is independent of any indication of the WLANs. 52. A method according to claim 49 wherein permitting the relative priority of the WLANs to be modified comprises permitting the relative priority of the WLANs to be modified in a manner specific to the user equipment, specific to a subscribed user class, specific to a group of users, specific to a cell, specific to a usage profile, specific to a traffic type or specific to one or more active sessions. 53. A method according to claim 49 wherein causing notification to be provided comprises causing signaling of the wireless access selection information in an information element independent of the set of information. 54. A method according to claim 53 wherein causing notification to be provided comprises causing notification to be provided in a connected state prior to a transition to an idle state. 55. A method according to claim 49 wherein the notification provided to the user equipment causes the user equipment to search, detect and select among one or more WLANs according to information maintained by the user equipment while the user equipment maintains a connected state to a cellular radio access network. 56. A method according to claim 49 wherein the notification provided to the user equipment causes the user equipment to search, detect and select one or more WLANs for a conditional handover with the handover being executed in an instance in which a handover criterion is satisfied and with the user equipment continuing to be serviced by a cellular radio access network in an instance in which the handover criterion is not satisfied. 57. An apparatus comprising: at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the processor, cause the apparatus to at least: maintain a set of information for a plurality of different cellular radio access networks that defines a relative priority of selection among the cellular radio access networks; maintain wireless access selection information defining a relative priority of selection among one or more wireless local area networks (WLANs) relative to the cellular radio access networks; permit the relative priority of the WLANs to be modified without modification of the set of information; and cause notification to be provided to user equipment of at least some of the set of information and the wireless access selection information. 58. An apparatus according to claim 57 wherein the set of information includes an indication of the WLANs. 59. An apparatus according to claim 57 wherein the set of information is independent of any indication of the WLANs. 60. 
An apparatus according to claim 57 wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to permit the relative priority of the WLANs to be modified by permitting the relative priority of the WLANs to be modified in a manner specific to the user equipment, specific to a subscribed user class, specific to a group of users, specific to a cell, specific to a usage profile, specific to a traffic type or specific to one or more active sessions. 61. An apparatus according to claim 57 wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to cause notification to be provided by causing signaling of the wireless access selection information in an information element independent of the set of information. 62. An apparatus according to claim 61 wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to cause notification to be provided by causing notification to be provided in a connected state prior to a transition to an idle state. 63. An apparatus according to claim 57 wherein the notification provided to the user equipment causes the user equipment to search, detect and select among one or more WLANs according to information maintained by the user equipment while the user equipment maintains a connected state to a cellular radio access network. 64. An apparatus according to claim 57 wherein the notification provided to the user equipment causes the user equipment to search, detect and select among one or more WLANs for a conditional handover with the handover being executed in an instance in which a handover criterion is satisfied and with the user equipment continuing to be serviced by a cellular radio access network in an instance in which the handover criterion is not satisfied. 65. An apparatus comprising: at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the processor, cause the apparatus to at least: provide access to at least some of a set of information for a plurality of different cellular radio access networks that defines a relative priority of selection among the cellular radio access networks and to wireless access selection information defining a relative priority of selection among one or more wireless local area networks (WLANs) relative to the cellular radio access networks; and select a respective one of the cellular radio access networks or the WLANs based upon the relative priorities. 66. An apparatus according to claim 65 wherein the set of information includes an indication of the WLANs. 67. An apparatus according to claim 65 wherein the set of information is independent of any indication of the WLANs. 68. An apparatus according to claim 65 wherein the at least one memory and the computer program code are further configured to, with the processor, cause the apparatus to provide information that permits the relative priority of the WLANs to be modified without modification of the set of information. 69. An apparatus according to claim 65 wherein the at least one memory and the computer program code are further configured to, with the processor, cause the apparatus, in an instance in which one of the WLANs is to be selected, to determine the respective WLAN to be selected based upon a set of preferences, independent of the set of information and the wireless access selection information. 70. 
An apparatus according to claim 69 wherein the at least one memory and the computer program code are further configured to, with the processor, cause the apparatus to determine a respective set of preferences from among a plurality of sets of preferences to govern selection of the respective WLAN. 71. An apparatus according to claim 69 wherein the at least one memory and the computer program code are further configured to, with the processor, cause the apparatus to camp on a cellular radio access network based upon the set of information while selecting the respective WLAN. 72. An apparatus according to claim 69 wherein the set of preferences is provided by a management object, an access network discovery and selection object, a hotspot object, a home network object, a visited network object, a device management object, a vendor specific object or an organization specific object. 73. An apparatus according to claim 69 wherein the set of preferences is based upon a plugin object, WLAN preferences, a selection history or a definition present in the subscriber identification module or an access network certification stamp as a software configuration.
A method, apparatus and computer program product are provided to facilitate interworking a plurality of cellular radio access networks and wireless local area networks (WLANs). In the context of a method, a set of information is maintained for a plurality of different cellular radio access networks that defines a relative priority of selection among the cellular radio access networks. The method also includes maintaining wireless access selection information defining a relative priority of selection among one or more WLANs relative to the cellular radio access networks. The method also permits the relative priority of the WLANs to be modified without modification of the set of information. For example, the relative priority of the WLANs may be modified in a manner specific to the user equipment or specific to a cell. The method may also include causing notification to be provided to user equipment of the set of information and the wireless access selection information.1-48. (canceled) 49. A method comprising: maintaining a set of information for a plurality of different cellular radio access networks that defines a relative priority of selection among the cellular radio access networks; maintaining wireless access selection information defining a relative priority of selection among one or more wireless local area networks (WLANs) relative to the cellular radio access networks; permitting the relative priority of the WLANs to be modified without modification of the set of information; and causing notification to be provided to user equipment of at least some of the set of information and the wireless access selection information. 50. A method according to claim 49 wherein the set of information includes an indication of the WLANs. 51. A method according to claim 49 wherein the set of information is independent of any indication of the WLANs. 52. A method according to claim 49 wherein permitting the relative priority of the WLANs to be modified comprises permitting the relative priority of the WLANs to be modified in a manner specific to the user equipment, specific to a subscribed user class, specific to a group of users, specific to a cell, specific to a usage profile, specific to a traffic type or specific to one or more active sessions. 53. A method according to claim 49 wherein causing notification to be provided comprises causing signaling of the wireless access selection information in an information element independent of the set of information. 54. A method according to claim 53 wherein causing notification to be provided comprises causing notification to be provided in a connected state prior to a transition to an idle state. 55. A method according to claim 49 wherein the notification provided to the user equipment causes the user equipment to search, detect and select among one or more WLANs according to information maintained by the user equipment while the user equipment maintains a connected state to a cellular radio access network. 56. A method according to claim 49 wherein the notification provided to the user equipment causes the user equipment to search, detect and select one or more WLANs for a conditional handover with the handover being executed in an instance in which a handover criterion is satisfied and with the user equipment continuing to be serviced by a cellular radio access network in an instance in which the handover criterion is not satisfied. 57. 
An apparatus comprising: at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the processor, cause the apparatus to at least: maintain a set of information for a plurality of different cellular radio access networks that defines a relative priority of selection among the cellular radio access networks; maintain wireless access selection information defining a relative priority of selection among one or more wireless local area networks (WLANs) relative to the cellular radio access networks; permit the relative priority of the WLANs to be modified without modification of the set of information; and cause notification to be provided to user equipment of at least some of the set of information and the wireless access selection information. 58. An apparatus according to claim 57 wherein the set of information includes an indication of the WLANs. 59. An apparatus according to claim 57 wherein the set of information is independent of any indication of the WLANs. 60. An apparatus according to claim 57 wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to permit the relative priority of the WLANs to be modified by permitting the relative priority of the WLANs to be modified in a manner specific to the user equipment, specific to a subscribed user class, specific to a group of users, specific to a cell, specific to a usage profile, specific to a traffic type or specific to one or more active sessions. 61. An apparatus according to claim 57 wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to cause notification to be provided by causing signaling of the wireless access selection information in an information element independent of the set of information. 62. An apparatus according to claim 61 wherein the at least one memory and the computer program code are configured to, with the processor, cause the apparatus to cause notification to be provided by causing notification to be provided in a connected state prior to a transition to an idle state. 63. An apparatus according to claim 57 wherein the notification provided to the user equipment causes the user equipment to search, detect and select among one or more WLANs according to information maintained by the user equipment while the user equipment maintains a connected state to a cellular radio access network. 64. An apparatus according to claim 57 wherein the notification provided to the user equipment causes the user equipment to search, detect and select among one or more WLANs for a conditional handover with the handover being executed in an instance in which a handover criterion is satisfied and with the user equipment continuing to be serviced by a cellular radio access network in an instance in which the handover criterion is not satisfied. 65. 
An apparatus comprising: at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the processor, cause the apparatus to at least: provide access to at least some of a set of information for a plurality of different cellular radio access networks that defines a relative priority of selection among the cellular radio access networks and to wireless access selection information defining a relative priority of selection among one or more wireless local area networks (WLANs) relative to the cellular radio access networks; and select a respective one of the cellular radio access networks or the WLANs based upon the relative priorities. 66. An apparatus according to claim 65 wherein the set of information includes an indication of the WLANs. 67. An apparatus according to claim 65 wherein the set of information is independent of any indication of the WLANs. 68. An apparatus according to claim 65 wherein the at least one memory and the computer program code are further configured to, with the processor, cause the apparatus to provide information that permits the relative priority of the WLANs to be modified without modification of the set of information. 69. An apparatus according to claim 65 wherein the at least one memory and the computer program code are further configured to, with the processor, cause the apparatus, in an instance in which one of the WLANs is to be selected, to determine the respective WLAN to be selected based upon a set of preferences, independent of the set of information and the wireless access selection information. 70. An apparatus according to claim 69 wherein the at least one memory and the computer program code are further configured to, with the processor, cause the apparatus to determine a respective set of preferences from among a plurality of sets of preferences to govern selection of the respective WLAN. 71. An apparatus according to claim 69 wherein the at least one memory and the computer program code are further configured to, with the processor, cause the apparatus to camp on a cellular radio access network based upon the set of information while selecting the respective WLAN. 72. An apparatus according to claim 69 wherein the set of preferences is provided by a management object, an access network discovery and selection object, a hotspot object, a home network object, a visited network object, a device management object, a vendor specific object or an organization specific object. 73. An apparatus according to claim 69 wherein the set of preferences is based upon a plugin object, WLAN preferences, a selection history or a definition present in the subscriber identification module or an access network certification stamp as a software configuration.
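The separation these claims insist on — one priority set for the cellular networks, a second, independently modifiable one for the WLANs — amounts to keeping two lists and selecting the highest-priority available candidate across both. Below is a minimal Python sketch of that idea, assuming numeric priorities where a lower value is preferred; the names AccessNetwork, adjust_wlan_priority, and select_network are illustrative only and not from the application.

```python
from dataclasses import dataclass

@dataclass
class AccessNetwork:
    name: str
    kind: str        # "cellular" or "wlan"
    priority: int    # lower value = preferred
    available: bool

# The cellular priority list (the "set of information") and the WLAN
# selection information are kept separate, so WLAN priorities can be
# changed without touching the cellular set (claim 49).
cellular_priorities = [
    AccessNetwork("LTE-A", "cellular", 2, True),
    AccessNetwork("UMTS", "cellular", 3, True),
]
wlan_priorities = [
    AccessNetwork("office-wlan", "wlan", 1, True),
]

def adjust_wlan_priority(wlans, name, new_priority):
    # A per-UE or per-cell adjustment (claim 52) touches only the WLAN list.
    for net in wlans:
        if net.name == name:
            net.priority = new_priority

def select_network(cellular, wlans):
    # Pick the highest-priority available network across both lists.
    candidates = [n for n in cellular + wlans if n.available]
    return min(candidates, key=lambda n: n.priority) if candidates else None

print(select_network(cellular_priorities, wlan_priorities).name)  # office-wlan
adjust_wlan_priority(wlan_priorities, "office-wlan", 9)           # demote the WLAN only
print(select_network(cellular_priorities, wlan_priorities).name)  # LTE-A
```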
2,600
10,661
10,661
15,889,256
2,661
A scanning apparatus is disposed on a vehicle and includes an external rearview assembly having a housing and an electro-optic element. The electro-optic element includes a first substrate comprising a first surface and a second surface. The electro-optic element also includes a second substrate comprising a third surface and a fourth surface, wherein the first substrate and the second substrate define a cavity. An electro-optic medium is contained in the cavity. An image sensor is disposed on the housing and directed outward, the image sensor configured to capture biometric data from an individual that is processed by a controller to unlock a door of the vehicle.
1. A scanning apparatus disposed on a vehicle comprising: an external rearview assembly comprising: a housing; an electro-optic element comprising: a first substrate comprising a first surface and a second surface; a second substrate comprising a third surface and a fourth surface, wherein the first substrate and the second substrate define a cavity; an electro-optic medium contained in the cavity; and an image sensor disposed on the housing and directed outward, the image sensor configured to capture biometric data from an individual that is processed by a controller to unlock a door of said vehicle. 2. The scanning apparatus of claim 1, further comprising: a display disposed in the external rearview assembly behind the electro-optic element. 3. The scanning apparatus of claim 2, wherein the captured biometric data is presented on the display. 4. The scanning apparatus of claim 1, wherein the image sensor includes a lens disposed on a sail panel of the housing. 5. The scanning apparatus of claim 1, wherein the captured biometric data corresponds to distinct eye characteristics of the individual. 6. The scanning apparatus of claim 1, further comprising: a light source configured to emit light in a near infrared (NIR) range to illuminate an eye of the individual. 7. The scanning apparatus of claim 6, wherein the light source is disposed behind the electro-optic element. 8. The scanning apparatus of claim 1, further comprising: an indicator configured to show an operation state of the image sensor. 9. The scanning apparatus of claim 1, further comprising: a secondary image sensor disposed on the door of said vehicle. 10. A scanning apparatus disposed on a vehicle comprising: an external rearview assembly having a dimmable electro-optic element; a light source configured to emit light in a near infrared (NIR) range to illuminate a face of an individual; an image sensor operably coupled with the electro-optic element, the image sensor configured to capture biometric data from the individual, wherein the captured biometric data is processed by a controller to unlock a door of said vehicle; and a display disposed in the external rearview assembly behind the electro-optic element, and wherein the captured biometric data is presented on the display. 11. The scanning apparatus of claim 10, wherein the image sensor is disposed proximate a window of the door. 12. The scanning apparatus of claim 10, wherein the captured biometric data corresponds to distinct eye characteristics of the individual. 13. The scanning apparatus of claim 10, wherein the light source is disposed behind the electro-optic element. 14. The scanning apparatus of claim 10, further comprising: an indicator configured to show an operation state of the image sensor. 15. A scanning apparatus disposed on a vehicle comprising: an external rearview assembly having a dimmable electro-optic element; an image sensor operably coupled with and spaced from the electro-optic element, the image sensor configured to capture biometric data from an individual that is processed by a controller to unlock a door of said vehicle; and a light source configured to emit light in a near infrared (NIR) range to illuminate an eye of the individual. 16. The scanning apparatus of claim 15, wherein the captured biometric data corresponds to distinct eye characteristics of the individual. 17. The scanning apparatus of claim 15, wherein the light source is disposed behind the electro-optic element. 18. 
The scanning apparatus of claim 15, further comprising: an indicator configured to show an operation state of the image sensor. 19. The scanning apparatus of claim 15, wherein the image sensor is disposed proximate a window of the door.
A scanning apparatus is disposed on a vehicle and includes an external rearview assembly having a housing and an electro-optic element. The electro-optic element includes a first substrate comprising a first surface and a second surface. The electro-optic element also includes a second substrate comprising a third surface and a fourth surface, wherein the first substrate and the second substrate define a cavity. An electro-optic medium is contained in the cavity. An image sensor is disposed on the housing and directed outward, the image sensor configured to capture biometric data from an individual that is processed by a controller to unlock a door of the vehicle.1. A scanning apparatus disposed on a vehicle comprising: an external rearview assembly comprising: a housing; an electro-optic element comprising: a first substrate comprising a first surface and a second surface; a second substrate comprising a third surface and a fourth surface, wherein the first substrate and the second substrate define a cavity; an electro-optic medium contained in the cavity; and an image sensor disposed on the housing and directed outward, the image sensor configured to capture biometric data from an individual that is processed by a controller to unlock a door of said vehicle. 2. The scanning apparatus of claim 1, further comprising: a display disposed in the external rearview assembly behind the electro-optic element. 3. The scanning apparatus of claim 2, wherein the captured biometric data is presented on the display. 4. The scanning apparatus of claim 1, wherein the image sensor includes a lens disposed on a sail panel of the housing. 5. The scanning apparatus of claim 1, wherein the captured biometric data corresponds to distinct eye characteristics of the individual. 6. The scanning apparatus of claim 1, further comprising: a light source configured to emit light in a near infrared (NIR) range to illuminate an eye of the individual. 7. The scanning apparatus of claim 6, wherein the light source is disposed behind the electro-optic element. 8. The scanning apparatus of claim 1, further comprising: an indicator configured to show an operation state of the image sensor. 9. The scanning apparatus of claim 1, further comprising: a secondary image sensor disposed on the door of said vehicle. 10. A scanning apparatus disposed on a vehicle comprising: an external rearview assembly having a dimmable electro-optic element; a light source configured to emit light in a near infrared (NIR) range to illuminate a face of an individual; an image sensor operably coupled with the electro-optic element, the image sensor configured to capture biometric data from the individual, wherein the captured biometric data is processed by a controller to unlock a door of said vehicle; and a display disposed in the external rearview assembly behind the electro-optic element, and wherein the captured biometric data is presented on the display. 11. The scanning apparatus of claim 10, wherein the image sensor is disposed proximate a window of the door. 12. The scanning apparatus of claim 10, wherein the captured biometric data corresponds to distinct eye characteristics of the individual. 13. The scanning apparatus of claim 10, wherein the light source is disposed behind the electro-optic element. 14. The scanning apparatus of claim 10, further comprising: an indicator configured to show an operation state of the image sensor. 15. 
A scanning apparatus disposed on a vehicle comprising: an external rearview assembly having a dimmable electro-optic element; an image sensor operably coupled with and spaced from the electro-optic element, the image sensor configured to capture biometric data from an individual that is processed by a controller to unlock a door of said vehicle; and a light source configured to emit light in a near infrared (NIR) range to illuminate an eye of the individual. 16. The scanning apparatus of claim 15, wherein the captured biometric data corresponds to distinct eye characteristics of the individual. 17. The scanning apparatus of claim 15, wherein the light source is disposed behind the electro-optic element. 18. The scanning apparatus of claim 15, further comprising: an indicator configured to show an operation state of the image sensor. 19. The scanning apparatus of claim 15, wherein the image sensor is disposed proximate a window of the door.
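The controller flow implied by these claims — capture biometric data at the mirror, compare it against enrolled data, unlock on a match — can be outlined as below. This Python sketch is purely illustrative: real iris or face matching scores similarity against enrolled templates rather than comparing exact hashes, which are used here only to keep the example self-contained, and the store and function names are assumptions rather than anything from the application.

```python
import hashlib

# Hypothetical enrolled-template store keyed by user id. The claims only
# require that captured biometric data be "processed by a controller to
# unlock a door", so this exact-match scheme is a stand-in.
ENROLLED = {"driver-1": hashlib.sha256(b"enrolled-iris-sample").hexdigest()}

def extract_template(sensor_frame: bytes) -> str:
    # Stand-in for NIR image capture plus feature extraction from the
    # outward-directed image sensor; here we simply hash the raw frame.
    return hashlib.sha256(sensor_frame).hexdigest()

def controller_unlock(sensor_frame: bytes) -> bool:
    template = extract_template(sensor_frame)
    if template in ENROLLED.values():
        print("match: unlocking door")
        return True
    print("no match: door stays locked")
    return False

controller_unlock(b"enrolled-iris-sample")  # match -> unlock
controller_unlock(b"unknown-face")          # no match -> locked
```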
2,600
10,662
10,662
16,056,221
2,667
Images that are associated with an identification of a tracking target of a patient to receive radiation treatment may be received. The images may be sorted into a sequence based on a motion of the patient. The sorted images may be provided via a graphical user interface. The sequence of the sorted images that are based on the motion of the patient may be provided.
1. A method comprising: receiving a plurality of images that are associated with an identification of a tracking target of a patient to receive radiation treatment; sorting the plurality of images into a sequence based on a motion of the patient; providing, via a graphical user interface (GUI), the sorted plurality of images; providing, by a processing device, the sequence of the sorted plurality of images that are based on the motion of the patient; and providing a visual indicator on each of the sorted images to represent the tracking target of the patient that has been identified. 2. The method of claim 1, wherein the motion comprises a respiratory motion of the patient. 3. The method of claim 2, further comprising: receiving an indication that an image of the sorted plurality of images corresponds to a false positive during the providing of the sequence, wherein the false positive is based on a corresponding visual indicator of the image being at a position that deviates from a path associated with other visual indicators of the plurality of images. 4. The method of claim 1, wherein the motion comprises a respiratory motion of the patient. 5. The method of claim 4, wherein the sorting of the plurality of images into the sequence based on the motion of the patient is in view of a phase and an amplitude associated with the respiratory motion of the patient when each of the plurality of images was taken of the patient. 6. The method of claim 1, further comprising: receiving an input selection via the GUI to provide the sorted plurality of images in the sequence that is based on the motion of the patient, wherein the providing of the sequence is in response to the input selection. 7. The method of claim 1, wherein the plurality of images are x-ray images of the patient. 8. A system comprising: a memory to store a plurality of images that are associated with an identification of a tracking target of a patient to receive radiation treatment; and a processing device operatively coupled with the memory to: receive the plurality of images; sort the plurality of images into a sequence based on a motion of the patient; provide, via a graphical user interface (GUI), the sorted plurality of images; provide the sequence of the sorted plurality of images that are based on the motion of the patient; and provide a visual indicator on each of the sorted images to represent the tracking target of the patient that has been identified. 9. The system of claim 8, wherein the motion comprises a respiratory motion of the patient. 10. The system of claim 9, wherein the processing device is further to: receive an indication that an image of the sorted plurality of images corresponds to a false positive during the providing of the sequence, wherein the false positive is based on a corresponding visual indicator of the image being at a position that deviates from a path associated with other visual indicators of the plurality of images. 11. The system of claim 8, wherein the motion comprises a respiratory motion of the patient. 12. The system of claim 11, wherein the sorting of the plurality of images into the sequence based on the motion of the patient is in view of a phase and an amplitude associated with the respiratory motion of the patient when each of the plurality of images was taken of the patient. 13. 
The system of claim 8, wherein the processing device is further to: receive an input selection via the graphical user interface to provide the sorted plurality of images in the sequence that is based on the motion of the patient, wherein the providing of the sequence is in response to the input selection. 14. The system of claim 8, wherein the plurality of images are x-ray images of the patient. 15. A non-transitory computer readable medium comprising instructions that, when executed by a processing device, cause the processing device to: receive a plurality of images that are associated with an identification of a tracking target of a patient to receive radiation treatment; sort the plurality of images into a sequence based on a motion of the patient; provide, via a graphical user interface (GUI), the sorted plurality of images; provide, by the processing device, the sequence of the sorted plurality of images that are based on the motion of the patient; and provide a visual indicator on each of the sorted images to represent the tracking target of the patient that has been identified. 16. The non-transitory computer readable medium of claim 15, wherein the motion comprises a respiratory motion of the patient. 17. The non-transitory computer readable medium of claim 16, wherein the processing device is further to: receive an indication that an image of the sorted plurality of images corresponds to a false positive during the providing of the sequence, wherein the false positive is based on a corresponding visual indicator of the image being at a position that deviates from a path associated with other visual indicators of the plurality of images. 18. The non-transitory computer readable medium of claim 15, wherein the sorting of the plurality of images into the sequence based on the motion of the patient is in view of a phase and an amplitude associated with the respiratory motion of the patient when each of the plurality of images was taken of the patient. 19. The non-transitory computer readable medium of claim 15, wherein the processing device is further to: receive an input selection via the GUI to provide the sorted plurality of images in the sequence that is based on the motion of the patient, wherein the providing of the sequence is in response to the input selection. 20. The non-transitory computer readable medium of claim 15, wherein the plurality of images are x-ray images of the patient.
Images that are associated with an identification of a tracking target of a patient to receive radiation treatment may be received. The images may be sorted into a sequence based on a motion of the patient. The sorted images may be provided via a graphical user interface. The sequence of the sorted images that are based on the motion of the patient may be provided.1. A method comprising: receiving a plurality of images that are associated with an identification of a tracking target of a patient to receive radiation treatment; sorting the plurality of images into a sequence based on a motion of the patient; providing, via a graphical user interface (GUI), the sorted plurality of images; providing, by a processing device, the sequence of the sorted plurality of images that are based on the motion of the patient; and providing a visual indicator on each of the sorted images to represent the tracking target of the patient that has been identified. 2. The method of claim 1, wherein the motion comprises a respiratory motion of the patient. 3. The method of claim 2, further comprising: receiving an indication that an image of the sorted plurality of images corresponds to a false positive during the providing of the sequence, wherein the false positive is based on a corresponding visual indicator of the image being at a position that deviates from a path associated with other visual indicators of the plurality of images. 4. The method of claim 1, wherein the motion comprises a respiratory motion of the patient. 5. The method of claim 4, wherein the sorting of the plurality of images into the sequence based on the motion of the patient is in view of a phase and an amplitude associated with the respiratory motion of the patient when each of the plurality of images was taken of the patient. 6. The method of claim 1, further comprising: receiving an input selection via the GUI to provide the sorted plurality of images in the sequence that is based on the motion of the patient, wherein the providing of the sequence is in response to the input selection. 7. The method of claim 1, wherein the plurality of images are x-ray images of the patient. 8. A system comprising: a memory to store a plurality of images that are associated with an identification of a tracking target of a patient to receive radiation treatment; and a processing device operatively coupled with the memory to: receive the plurality of images; sort the plurality of images into a sequence based on a motion of the patient; provide, via a graphical user interface (GUI), the sorted plurality of images; provide the sequence of the sorted plurality of images that are based on the motion of the patient; and provide a visual indicator on each of the sorted images to represent the tracking target of the patient that has been identified. 9. The system of claim 8, wherein the motion comprises a respiratory motion of the patient. 10. The system of claim 9, wherein the processing device is further to: receive an indication that an image of the sorted plurality of images corresponds to a false positive during the providing of the sequence, wherein the false positive is based on a corresponding visual indicator of the image being at a position that deviates from a path associated with other visual indicators of the plurality of images. 11. The system of claim 8, wherein the motion comprises a respiratory motion of the patient. 12. 
The system of claim 11, wherein the sorting of the plurality of images into the sequence based on the motion of the patient is in view of a phase and an amplitude associated with the respiratory motion of the patient when each of the plurality of images was taken of the patient. 13. The system of claim 8, wherein the processing device is further to: receive an input selection via the graphical user interface to provide the sorted plurality of images in the sequence that is based on the motion of the patient, wherein the providing of the sequence is in response to the input selection. 14. The system of claim 8, wherein the plurality of images are x-ray images of the patient. 15. A non-transitory computer readable medium comprising instructions that, when executed by a processing device, cause the processing device to: receive a plurality of images that are associated with an identification of a tracking target of a patient to receive radiation treatment; sort the plurality of images into a sequence based on a motion of the patient; provide, via a graphical user interface (GUI), the sorted plurality of images; provide, by the processing device, the sequence of the sorted plurality of images that are based on the motion of the patient; and provide a visual indicator on each of the sorted images to represent the tracking target of the patient that has been identified. 16. The non-transitory computer readable medium of claim 15, wherein the motion comprises a respiratory motion of the patient. 17. The non-transitory computer readable medium of claim 16, wherein the processing device is further to: receive an indication that an image of the sorted plurality of images corresponds to a false positive during the providing of the sequence, wherein the false positive is based on a corresponding visual indicator of the image being at a position that deviates from a path associated with other visual indicators of the plurality of images. 18. The non-transitory computer readable medium of claim 15, wherein the sorting of the plurality of images into the sequence based on the motion of the patient is in view of a phase and an amplitude associated with the respiratory motion of the patient when each of the plurality of images was taken of the patient. 19. The non-transitory computer readable medium of claim 15, wherein the processing device is further to: receive an input selection via the GUI to provide the sorted plurality of images in the sequence that is based on the motion of the patient, wherein the providing of the sequence is in response to the input selection. 20. The non-transitory computer readable medium of claim 15, wherein the plurality of images are x-ray images of the patient.
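Sorting by respiratory state, as in claims 5, 12 and 18, reduces to ordering images by a (phase, amplitude) key; the false-positive review of claims 3, 10 and 17 then falls out as a check for target positions that jump off the path traced by their neighbors in the sorted sequence. A minimal Python sketch follows; the field names and the max_jump threshold are illustrative assumptions, not values from the application.

```python
from dataclasses import dataclass

@dataclass
class XRayImage:
    image_id: int
    phase: float      # respiratory phase in [0, 1) when the image was taken
    amplitude: float  # breathing amplitude at acquisition time
    target_xy: tuple  # identified tracking-target position (for the overlay)

def sort_by_respiration(images):
    # Order the images in view of the phase and amplitude associated with
    # the respiratory motion at acquisition time (claims 5/12/18).
    return sorted(images, key=lambda im: (im.phase, im.amplitude))

def flag_false_positives(sorted_images, max_jump=25.0):
    # A target position that deviates sharply from its neighbors along the
    # sorted sequence is a candidate false positive (claims 3/10/17).
    flagged = []
    for prev, cur in zip(sorted_images, sorted_images[1:]):
        dx = cur.target_xy[0] - prev.target_xy[0]
        dy = cur.target_xy[1] - prev.target_xy[1]
        if (dx * dx + dy * dy) ** 0.5 > max_jump:
            flagged.append(cur.image_id)
    return flagged

images = [
    XRayImage(1, 0.8, 4.0, (100, 50)),
    XRayImage(2, 0.1, 1.0, (98, 48)),
    XRayImage(3, 0.4, 2.5, (180, 120)),  # outlier -> likely false positive
]
ordered = sort_by_respiration(images)
print([im.image_id for im in ordered])  # [2, 3, 1]
print(flag_false_positives(ordered))    # [3, 1]: jumps to and from the outlier both exceed the threshold
```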
2,600
10,663
10,663
15,837,658
2,647
A content sharing device for use in controlling content replicated from a video source device on a large common display screen, the content sharing device comprising a housing, a processor supported by the housing, at least first and second control buttons supported by the housing and linked to the processor, a cable linked to the processor and having a distal end opposite the processor, the cable including a connector at the distal end for linking to the video source device, wherein, the processor is programmed to enable content from the video source device linked to the connector to be replicated in a first field on the common display screen upon selection of the first control button and to enable content from the video source device linked to the connector to be replicated in a second field adjacent to and separate from the first field upon selection of the second control button.
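The two-button control described here maps each button press to one presentation field on the common screen, replacing whatever source was shown there. Below is a minimal Python sketch of that routing logic; the class and method names are illustrative assumptions, and a physical device would of course move video frames over a cable or switcher rather than strings.

```python
class SharingSwitch:
    """Routes replicated content from source devices to fields (presentation
    spaces) on a common display, mimicking the two-button device described
    above: the first button targets the first field, the second the second."""

    def __init__(self, num_fields=2):
        self.fields = [None] * num_fields  # field index -> source id

    def button_pressed(self, source_id, field_index):
        # Replicate the pressing source's screen in the matching field;
        # any source previously shown there is simply replaced.
        self.fields[field_index] = source_id
        self._render()

    def _render(self):
        for i, src in enumerate(self.fields):
            print(f"field {i + 1}: {src if src is not None else '(empty)'}")

switch = SharingSwitch()
switch.button_pressed("laptop-A", 0)  # laptop A takes the first field
switch.button_pressed("laptop-B", 1)  # laptop B shares alongside in the second field
```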
1. A method of selecting content from a content source to be displayed in at least one common presentation space that is presented by at least one common display screen in a collaborative workspace, the method for use with a plurality of content source devices, each content source device including a device display screen and presenting content on the device display screen, the method comprising the steps of: providing a separate control interface for each of the plurality of device display screens, each control interface including at least one selectable control; and for each of the control interfaces, when a selectable control is selected, replicating the content from the device display screen in the presentation space associated with the selected control; and wherein, each source device provides content to the common device display screen upon selection of an associated selectable control via one of wireless transmission of the content from the source device and transmission through a cable linked to the source device. 2. The method of claim 1 wherein the at least one common display screen presents a plurality of common presentation spaces, each control interface including a separately selectable control for each of the common presentation spaces. 3. The method of claim 2 wherein the at least one common display screen includes a single display screen that presents a plurality of presentation spaces. 4. The method of claim 2 wherein the common presentation spaces are arranged in a first pattern and wherein the separately selectable controls are arranged in a second pattern that mirrors the first pattern. 5. The method of claim 1 wherein at least a subset of the source devices provides content to the common device display screen via wireless transmission. 6. The method of claim 5 wherein at least a subset of the source devices provides content to the common device display screen via a cable linked to the source device. 7. The method of claim 1 wherein at least a subset of the source devices provides content to the common device display screen via a cable linked to the source device. 8. The method of claim 1 further including a switching device that includes at least one input and at least one output, the switching device receiving content from the source devices via the at least one input and providing the content to the common display screen via the at least one output. 9. The method of claim 1 wherein each of at least a subset of the control interfaces includes a mechanical control subassembly including a housing and at least one selectable control button mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device. 10. The method of claim 9 wherein at least a subset of the mechanical control subassemblies includes a second cable that links to a switching device that is in turn linked to the at least one common display screen. 11. The method of claim 1 wherein each of at least a subset of the control interfaces includes a virtual control interface including at least one selectable control that is presented on the display of an associated source device. 12. 
The method of claim 1 wherein each of the control interfaces includes one of: (i) a mechanical control subassembly including a housing and at least one selectable control button mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device; and (ii) a virtual control interface including at least one selectable control that is presented on the display of an associated source device. 13. The method of claim 12 wherein each control interface includes a plurality of selectable control buttons that form a button pattern and wherein each of the button patterns on each of the control interfaces is similar. 14. The method of claim 13 wherein, when content from a source device is presented in one of the common presentation spaces, the control button associated with the sharing source device and the common presentation space in which the content is shared is visually distinguished from the other control buttons. 15. A method of selecting information from a content source to be displayed in at least first and second common presentation spaces that are presented by at least one common display screen in a collaborative workspace, the method for use with a plurality of content source devices, each content source device including a device display screen and presenting content on the device display screen, the method comprising the steps of: providing a separate control interface for each of the plurality of device display screens, each control interface including a separate selectable control for each of the common presentation spaces; and for each of the control interfaces, when a selectable control is selected, replicating the content from the device display screen in the presentation space associated with the selected control; and wherein, each of the control interfaces includes one of: (i) a mechanical control subassembly including a housing and at least first and second selectable control buttons mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device; and (ii) a virtual control interface including at least first and second selectable controls that are presented on the display of an associated source device. 16. The method of claim 15 wherein, each source device provides content to the common device display screen upon selection of an associated selectable control via one of wireless transmission of the content from the source device and transmission through a cable linked to the source device. 17. The method of claim 15 wherein the at least one common display screen includes a single display screen that presents a plurality of presentation spaces. 18. The method of claim 15 wherein the common presentation spaces are arranged in a first pattern and wherein the separately selectable controls are arranged in a second pattern that mirrors the first pattern. 19. The method of claim 15 wherein at least a subset of the source devices provides content to the common device display screen via wireless transmission. 20. The method of claim 19 wherein at least a subset of the source devices provides content to the common device display screen via a cable linking the source device to a switcher device that is linked to the at least one common display screen. 21. 
The method of claim 15 wherein at least a subset of the source devices provides content to the common device display screen via a cable linking the source device to a switcher device that is linked to the at least one common display screen. 22. The method of claim 15 further including a switching device that includes at least one input and at least one output, the switching device receiving content from the source devices via the at least one input and providing the content to the common display screen via the at least one output. 23. The method of claim 15 wherein the system includes at least some mechanical control subassemblies, and at least a subset of the mechanical control subassemblies includes a second cable that links to a switching device that is in turn linked to the at least one common display screen. 24. The method of claim 1 wherein each of at least a subset of the control interfaces includes a virtual control interface including at least one selectable control that is presented on the display of an associated source device. 25. A system of controlling information from content sources to be displayed in at least one common presentation space that is presented by at least one common display screen in a collaborative workspace, the system for use with a plurality of content source devices, each content source device including a device display screen and presenting content on the device display screen, the system comprising: a separate control interface for each of the plurality of device display screens, each control interface including at least one selectable control; and a processor programmed to perform the steps of, for each of the control interfaces, when a selectable control is selected, replicating the content from the device display screen in the presentation space associated with the selected control; and wherein, each source device provides content to the common device display screen upon selection of an associated selectable control via one of wireless transmission of the content from the source device and transmission through a cable linked to the source device. 26. The system of claim 25 wherein each of the control interfaces includes one of: (i) a mechanical control subassembly including a housing and at least one selectable control button mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device; and (ii) a virtual control interface including at least one selectable control that is presented on the display of an associated source device. 27. 
A system for selecting information from content sources to be displayed in at least first and second common presentation spaces that are presented by at least one common display screen in a collaborative workspace, the system for use with a plurality of content source devices, each content source device including a device display screen and presenting content on the device display screen, the system comprising: a separate control interface for each of the plurality of device display screens, each control interface including a separate selectable control for each of the common presentation spaces; and a processor programmed to perform the step of, for each of the control interfaces, when a selectable control is selected, replicating the content from the device display screen in the presentation space associated with the selected control; and wherein, each of the control interfaces includes one of: (i) a mechanical control subassembly including a housing and at least first and second selectable control buttons mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device; and (ii) a virtual control interface including at least first and second selectable controls that are presented on the display of an associated source device. 28. The system of claim 27 wherein, each source device provides content to the common device display screen upon selection of an associated selectable control via one of wireless transmission of the content from the source device and transmission through a cable linked to the source device.
A content sharing device for use in controlling content replicated from a video source device on a large common display screen, the content sharing device comprising a housing, a processor supported by the housing, at least first and second control buttons supported by the housing and linked to the processor, a cable linked to the processor and having a distal end opposite the processor, the cable including a connector at the distal end for linking to the video source device, wherein, the processor is programmed to enable content from the video source device linked to the connector to be replicated in a first field on the common display screen upon selection of the first control button and to enable content from the video source device linked to the connector to be replicated in a second field adjacent to and separate from the first field upon selection of the second control button.1. A method of selecting content from a content source to be displayed in at least one common presentation space that is presented by at least one common display screen in a collaborative workspace, the method for use with a plurality of content source devices, each content source device including a device display screen and presenting content on the device display screen, the method comprising the steps of: providing a separate control interface for each of the plurality of device display screens, each control interface including at least one selectable control; and for each of the control interfaces, when a selectable control is selected, replicating the content from the device display screen in the presentation space associated with the selected control; and wherein, each source device provides content to the common device display screen upon selection of an associated selectable control via one of wireless transmission of the content from the source device and transmission through a cable linked to the source device. 2. The method of claim 1 wherein the at least one common display screen presents a plurality of common presentation spaces, each control interface including a separately selectable control for each of the common presentation spaces. 3. The method of claim 2 wherein the at least one common display screen includes a single display screen that presents a plurality of presentation spaces. 4. The method of claim 2 wherein the common presentation spaces are arranged in a first pattern and wherein the separately selectable controls are arranged in a second pattern that mirrors the first pattern. 5. The method of claim 1 wherein at least a subset of the source devices provides content to the common device display screen via wireless transmission. 6. The method of claim 5 wherein at least a subset of the source devices provides content to the common device display screen via a cable linked to the source device. 7. The method of claim 1 wherein at least a subset of the source devices provides content to the common device display screen via a cable linked to the source device. 8. The method of claim 1 further including a switching device that includes at least one input and at least one output, the switching device receiving content from the source devices via the at least one input and providing the content to the common display screen via the at least one output. 9. 
The method of claim 1 wherein each of at least a subset of the control interfaces includes a mechanical control subassembly including a housing and at least one selectable control button mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device. 10. The method of claim 9 wherein at least a subset of the mechanical control subassemblies includes a second cable that links to a switching device that is in turn linked to the at least one common display screen. 11. The method of claim 1 wherein each of at least a subset of the control interfaces includes a virtual control interface including at least one selectable control that is presented on the display of an associated source device. 12. The method of claim 1 wherein each of the control interfaces includes one of: (i) a mechanical control subassembly including a housing and at least one selectable control button mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device; and (ii) a virtual control interface including at least one selectable control that is presented on the display of an associated source device. 13. The method of claim 12 wherein each control interface includes a plurality of selectable control buttons that form a button pattern and wherein each of the button patterns on each of the control interfaces is similar. 14. The method of claim 13 wherein, when content from a source device is presented in one of the common presentation spaces, the control button associated with the sharing source device and the common presentation space in which the content is shared is visually distinguished from the other control buttons. 15. A method of selecting information from a content source to be displayed in at least first and second common presentation spaces that are presented by at least one common display screen in a collaborative workspace, the method for use with a plurality of content source devices, each content source device including a device display screen and presenting content on the device display screen, the method comprising the steps of: providing a separate control interface for each of the plurality of device display screens, each control interface including a separate selectable control for each of the common presentation spaces; and for each of the control interfaces, when a selectable control is selected, replicating the content from the device display screen in the presentation space associated with the selected control; and wherein, each of the control interfaces includes one of: (i) a mechanical control subassembly including a housing and at least first and second selectable control buttons mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device; and (ii) a virtual control interface including at least first and second selectable controls that are presented on the display of an associated source device. 16. 
The method of claim 15 wherein, each source device provides content to the common device display screen upon selection of an associated selectable control via one of wireless transmission of the content from the source device and transmission through a cable linked to the source device. 17. The method of claim 15 wherein the at least one common display screen includes a single display screen that presents a plurality of presentation spaces. 18. The method of claim 15 wherein the common presentation spaces are arranged in a first pattern and wherein the separately selectable controls are arranged in a second pattern that mirrors the first pattern. 19. The method of claim 15 wherein at least a subset of the source devices provides content to the common device display screen via wireless transmission. 20. The method of claim 19 wherein at least a subset of the source devices provides content to the common device display screen via a cable linking the source device to a switcher device that is linked to the at least one common display screen. 21. The method of claim 15 wherein at least a subset of the source devices provides content to the common device display screen via a cable linking the source device to a switcher device that is linked to the at least one common display screen. 22. The method of claim 15 further including a switching device that includes at least one input and at least one output, the switching device receiving content from the source devices via the at least one input and providing the content to the common display screen via the at least one output. 23. The method of claim 15 wherein the system includes at least some mechanical control subassemblies, and at least a subset of the mechanical control subassemblies includes a second cable that links to a switching device that is in turn linked to the at least one common display screen. 24. The method of claim 1 wherein each of at least a subset of the control interfaces includes a virtual control interface including at least one selectable control that is presented on the display of an associated source device. 25. A system of controlling information from content sources to be displayed in at least one common presentation space that is presented by at least one common display screen in a collaborative workspace, the system for use with a plurality of content source devices, each content source device including a device display screen and presenting content on the device display screen, the system comprising: a separate control interface for each of the plurality of device display screens, each control interface including at least one selectable control; and a processor programmed to perform the steps of, for each of the control interfaces, when a selectable control is selected, replicating the content from the device display screen in the presentation space associated with the selected control; and wherein, each source device provides content to the common device display screen upon selection of an associated selectable control via one of wireless transmission of the content from the source device and transmission through a cable linked to the source device. 26. 
The system of claim 25 wherein each of the control interfaces includes one of: (i) a mechanical control subassembly including a housing and at least one selectable control button mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device; and (ii) a virtual control interface including at least one selectable control that is presented on the display of an associated source device. 27. A system for selecting information from content sources to be displayed in at least first and second common presentation spaces that are presented by at least one common display screen in a collaborative workspace, the system for use with a plurality of content source devices, each content source device including a device display screen and presenting content on the device display screen, the system comprising: a separate control interface for each of the plurality of device display screens, each control interface including a separate selectable control for each of the common presentation spaces; and a processor programmed to perform the step of, for each of the control interfaces, when a selectable control is selected, replicating the content from the device display screen in the presentation space associated with the selected control; and wherein, each of the control interfaces includes one of: (i) a mechanical control subassembly including a housing and at least first and second selectable control buttons mounted within the housing, each control interface also including a first cable that extends from the housing to a distal end that includes a plug for connecting the control interface to a source device; and (ii) a virtual control interface including at least first and second selectable controls that are presented on the display of an associated source device. 28. The system of claim 27 wherein, each source device provides content to the common device display screen upon selection of an associated selectable control via one of wireless transmission of the content from the source device and transmission through a cable linked to the source device.
2,600
10,664
10,664
16,118,882
2,637
A system comprising an optical receiver for multi-wavelength-channel optical communication, an optical source of spontaneous emission light and a tunable optical filter connected to receive the light at an input. The tunable optical filter can have a filter spectrum with spectral passbands separated by spectral notches. The system also includes an optical fiber link connecting an output of the optical filter to the optical receiver for multi-wavelength-channel optical communication. The receiver can be configured to make a measurement indicative of an optical power level in at least one of the notches or to make measurements of optical power levels and at least one of the passbands and at least one of the notches in response to the optical source transmitting the filtered light to the optical fiber link. Another embodiment includes an apparatus comprising an optical test module including a source of spontaneous emission light and an optical filter connected to receive the spontaneous emission light from the source.
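Editor's illustration (not from the application): one plausible way the receiver could combine a passband reading with a notch reading. Because the notches carry only noise, the notch power approximates the noise floor between channels, and subtracting it from an adjacent passband reading leaves an estimate of signal power. The function name, the watt units, and the single passband/notch pairing are assumptions for this sketch.

```python
import math

def estimate_osnr_db(passband_power_w: float, notch_power_w: float) -> float:
    """Estimate a signal-to-noise figure from one passband and one notch reading."""
    if notch_power_w <= 0 or passband_power_w <= notch_power_w:
        raise ValueError("passband power must exceed the notch (noise) power")
    signal_w = passband_power_w - notch_power_w  # remove the sampled noise floor
    return 10.0 * math.log10(signal_w / notch_power_w)

# Example: 100 uW measured in a passband and 2 uW of noise in the adjacent
# notch give 10*log10(98/2), roughly 16.9 dB.
print(round(estimate_osnr_db(100e-6, 2e-6), 1))
```

A transmitter could then raise the symbol rate or power on channels whose estimated ratio comfortably exceeds a target and lower it elsewhere, which is the kind of per-channel setting claims 2-5 describe.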
1. A system, comprising: an optical receiver for multi-wavelength-channel optical communication; an optical source of spontaneous emission light and a tunable optical filter connected to receive the light at an input, the tunable optical filter having a filter spectrum with spectral passbands separated by spectral notches; and an optical fiber link connecting an output of the optical filter to the optical receiver for multi-wavelength-channel optical communication, wherein the optical receiver is configured to make a measurement indicative of an optical power level in at least one of the notches or to make measurements of optical power levels in at least one of said passbands and at least one of said notches in response to the optical source transmitting the filtered light to the optical fiber link. 2. The system of claim 1, further including an optical transmitter configured to set one or more modulation parameters and/or an optical signal power of at least one signal-bearing optical wavelength channel based on the made measurement or made measurements. 3. A system, comprising: an optical receiver for multi-wavelength-channel optical communication; an optical source of spontaneous emission light and a tunable optical filter connected to receive the light at an input, the tunable optical filter having a filter spectrum with spectral passbands separated by spectral notches; an optical fiber link connecting an output of the optical filter to the optical receiver for multi-wavelength-channel optical communication, wherein the optical receiver is configured to make a measurement indicative of an optical power level in at least one of the notches or to make measurements of optical power levels in at least one of said passbands and at least one of said notches in response to the optical source transmitting the filtered light to the optical fiber link; and an optical transmitter configured to set one or more modulation parameters and/or an optical signal power of at least one signal-bearing optical wavelength channel based on the made measurement or made measurements, wherein the optical transmitter is configured to set a symbol rate of at least one signal-bearing optical wavelength channel based on the measurement or measurements. 4. The system of claim 3, wherein the optical transmitter is capable of setting one or more power shaping parameters and/or symbol rate of at least one signal-bearing optical wavelength channel such that different signal-bearing optical wavelength channels have substantially different signal to noise ratios than each other as measured at the optical receiver. 5. The system of claim 4, wherein the optical transmitter is configured to perform the setting such that the signal to noise ratios of the signal-bearing optical wavelength channels vary by more than about 1 dB over the channels. 6. The system of claim 1, wherein the tunable optical filter is configured to adapt the filtering of the filtered light based on the made measurement or made measurements. 7. The system of claim 1, wherein the system is part of a terrestrial or a submarine optical point-to-point or mesh network system. 8. 
An apparatus, comprising: an optical test module including a source of spontaneous emission light and an optical filter connected to receive said spontaneous emission light from the source, the module capable of transmitting filtered spontaneous emission light from the optical filter to a first end of an optical fiber link, the filter having a filtering spectrum with optical passbands separated by optical notches, and wherein the optical test module is configured to determine an optical transmission characteristic of said optical fiber link based on a measurement, at a second end of the optical fiber link, indicative of an optical power received in one or more of the notches or indicative of an optical power received in one or more of said passbands over an optical power received in one or more of the notches received at the second end of the optical fiber link. 9. The apparatus of claim 8, wherein the optical test module is capable of determining modulation parameters and/or an optical signal power of an optical transmitter such that the optical transmitter transmits different signal-bearing optical wavelength channels to the second end of said optical fiber link with substantially different signal to noise ratios thereat. 10. The apparatus of claim 9, wherein the signal to noise ratios of the signal-bearing optical wavelength channels vary by more than 1 dB over the channels. 11. An apparatus, comprising: an optical test module including a source of spontaneous emission light and an optical filter connected to receive said spontaneous emission light from the source, the module capable of transmitting filtered spontaneous emission light from the optical filter to a first end of an optical fiber link, the filter having a filtering spectrum with optical passbands separated by optical notches, and wherein the optical test module is configured to determine an optical transmission characteristic of said optical fiber link based on a measurement, at a second end of the optical fiber link, indicative of an optical power received in one or more of the notches or indicative of an optical power received in one or more of said passbands over an optical power received in one or more of the notches received at the second end of the optical fiber link, and the optical test module includes an optical interleaver or a cascade of optical interleavers configured to receive the spontaneous emission light and provide a pattern of wavelength bands to the filter. 12. The apparatus of claim 11, wherein the pattern of the wavelength bands includes an approximately periodic series of bands of about 50 percent spectral-duty cycle or less. 13. The apparatus of claim 12, wherein the optical test module includes an optical spectrum analyzer configured to measure the filtered spontaneous emission light transmitted to the optical fiber link. 14. The apparatus of claim 8, further including a control module having an electronic digital processing unit configured to read and execute instructions stored in non-transient computer readable memory of the module, the instructions for performing steps of a method to cause the optical test module to generate control signals to change the optical output from one or more of an optical transmitter, the optical source or the tunable optical filter or an adjustable gain equalization filter in the optical fiber link. 15. 
The apparatus of claim 14, wherein the control module is configured to receive signals indicative of the made measurement or measurements, and, based on the made measurement, cause the unit to change one or more of the control signals to alter the optical output from one or more of the optical transmitter, the optical source, the tunable optical filter or the adjustable gain equalization filter. 16. The apparatus of claim 14, wherein the change in optical output caused by the control signals includes changing at least one of an optical modulation parameter, an optical transmission power, and a filtering characteristic of at least one data-bearing optical wavelength channel. 17. The apparatus of claim 9, wherein the optical transmitter is configured to set a symbol rate of at least one signal-bearing optical wavelength channel based on the measurement or measurements.
A system comprising an optical receiver for multi-wavelength-channel optical communication, an optical source of spontaneous emission light and a tunable optical filter connected to receive the light at an input. The tunable optical filter can have a filter spectrum with spectral passbands separated by spectral notches. The system also includes an optical fiber link connecting an output of the optical filter to the optical receiver for multi-wavelength-channel optical communication. The receiver can be configured to make a measurement indicative of an optical power level in at least one of the notches or to make measurements of optical power levels in at least one of the passbands and at least one of the notches in response to the optical source transmitting the filtered light to the optical fiber link. Another embodiment includes an apparatus comprising an optical test module including a source of spontaneous emission light and an optical filter connected to receive the spontaneous emission light from the source.1. A system, comprising: an optical receiver for multi-wavelength-channel optical communication; an optical source of spontaneous emission light and a tunable optical filter connected to receive the light at an input, the tunable optical filter having a filter spectrum with spectral passbands separated by spectral notches; and an optical fiber link connecting an output of the optical filter to the optical receiver for multi-wavelength-channel optical communication, wherein the optical receiver is configured to make a measurement indicative of an optical power level in at least one of the notches or to make measurements of optical power levels in at least one of said passbands and at least one of said notches in response to the optical source transmitting the filtered light to the optical fiber link. 2. The system of claim 1, further including an optical transmitter configured to set one or more modulation parameters and/or an optical signal power of at least one signal-bearing optical wavelength channel based on the made measurement or made measurements. 3. A system, comprising: an optical receiver for multi-wavelength-channel optical communication; an optical source of spontaneous emission light and a tunable optical filter connected to receive the light at an input, the tunable optical filter having a filter spectrum with spectral passbands separated by spectral notches; an optical fiber link connecting an output of the optical filter to the optical receiver for multi-wavelength-channel optical communication, wherein the optical receiver is configured to make a measurement indicative of an optical power level in at least one of the notches or to make measurements of optical power levels in at least one of said passbands and at least one of said notches in response to the optical source transmitting the filtered light to the optical fiber link; and an optical transmitter configured to set one or more modulation parameters and/or an optical signal power of at least one signal-bearing optical wavelength channel based on the made measurement or made measurements, wherein the optical transmitter is configured to set a symbol rate of at least one signal-bearing optical wavelength channel based on the measurement or measurements. 4. 
The system of claim 3, wherein the optical transmitter is capable of setting one or more power shaping parameters and/or symbol rate of at least one signal-bearing optical wavelength channel such that different signal-bearing optical wavelength channels have substantially different signal to noise ratios than each other as measured at the optical receiver. 5. The system of claim 4, wherein the optical transmitter is configured to perform the setting such that the signal to noise ratios of the signal-bearing optical wavelength channels vary by more than about 1 dB over the channels. 6. The system of claim 1, wherein the tunable optical filter is configured to adapt the filtering of the filtered light based on the made measurement or made measurements. 7. The system of claim 1, wherein the system is part of a terrestrial or a submarine optical point-to-point or mesh network system. 8. An apparatus, comprising: an optical test module including a source of spontaneous emission light and an optical filter connected to receive said spontaneous emission light from the source, the module capable of transmitting filtered spontaneous emission light from the optical filter to a first end of an optical fiber link, the filter having a filtering spectrum with optical passbands separated by optical notches, and wherein the optical test module is configured to determine an optical transmission characteristic of said optical fiber link based on a measurement, at a second end of the optical fiber link, indicative of an optical power received in one or more of the notches or indicative of an optical power received in one or more of said passbands over an optical power received in one or more of the notches received at the second end of the optical fiber link. 9. The apparatus of claim 8, wherein the optical test module is capable of determining modulation parameters and/or an optical signal power of an optical transmitter such that the optical transmitter transmits different signal-bearing optical wavelength channels to the second end of said optical fiber link with substantially different signal to noise ratios thereat. 10. The apparatus of claim 9, wherein the signal to noise ratios of the signal-bearing optical wavelength channels vary by more than 1 dB over the channels. 11. An apparatus, comprising: an optical test module including a source of spontaneous emission light and an optical filter connected to receive said spontaneous emission light from the source, the module capable of transmitting filtered spontaneous emission light from the optical filter to a first end of an optical fiber link, the filter having a filtering spectrum with optical passbands separated by optical notches, and wherein the optical test module is configured to determine an optical transmission characteristic of said optical fiber link based on a measurement, at a second end of the optical fiber link, indicative of an optical power received in one or more of the notches or indicative of an optical power received in one or more of said passbands over an optical power received in one or more of the notches received at the second end of the optical fiber link, and the optical test module includes an optical interleaver or a cascade of optical interleavers configured to receive the spontaneous emission light and provide a pattern of wavelength bands to the filter. 12. 
The apparatus of claim 11, wherein the pattern of the wavelength bands includes an approximately periodic series of bands of about 50 percent spectral-duty cycle or less. 13. The apparatus of claim 12, wherein the optical test module includes an optical spectrum analyzer configured to measure the filtered spontaneous emission light transmitted to the optical fiber link. 14. The apparatus of claim 8, further including a control module having an electronic digital processing unit configured to read and execute instructions stored in non-transient computer readable memory of the module, the instructions for performing steps of a method to cause the optical test module to generate control signals to change the optical output from one or more of an optical transmitter, the optical source or the tunable optical filter or an adjustable gain equalization filter in the optical fiber link. 15. The apparatus of claim 14, wherein the control module is configured to receive signals indicative of the made measurement or measurements, and, based on the made measurement, cause the unit to change one or more of the control signals to alter the optical output from one or more of the optical transmitter, the optical source, the tunable optical filter or the adjustable gain equalization filter. 16. The apparatus of claim 14, wherein the change in optical output caused by the control signals includes changing at least one of an optical modulation parameter, an optical transmission power, and a filtering characteristic of at least one data-bearing optical wavelength channel. 17. The apparatus of claim 9, wherein the optical transmitter is configured to set a symbol rate of at least one signal-bearing optical wavelength channel based on the measurement or measurements.
2,600
10,665
10,665
15,864,690
2,643
A mobile terminal and a method for a mobile terminal are provided. The mobile terminal includes a display unit, a controller to separate a main screen and a sub screen from video data, the main screen to be displayed on an external display device and the sub screen to be displayed on the display unit, and an interface to wirelessly transmit the main screen to the external display device for display while the sub screen is displayed on the display unit.
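Editor's illustration (not from the application): a Python sketch of the separate-and-route step, with hypothetical frame types and callbacks standing in for the controller, the wireless LAN interface, and the display unit.

```python
from dataclasses import dataclass

@dataclass
class VideoData:
    main_frame: bytes  # e.g. the video itself, destined for the external display
    sub_frame: bytes   # e.g. playback controls or information related to the main screen

def route(video: VideoData, wlan_send, local_display) -> None:
    """Separate the main and sub screens and route each to its display."""
    wlan_send(video.main_frame)     # wireless LAN interface -> external display device
    local_display(video.sub_frame)  # shown on the terminal's own display unit meanwhile

route(
    VideoData(main_frame=b"<movie frame>", sub_frame=b"<playback controls>"),
    wlan_send=lambda frame: print("WLAN ->", frame),
    local_display=lambda frame: print("local ->", frame),
)
```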
1. A mobile terminal comprising: a display unit; a controller to separate a main screen and a sub screen from video data, the main screen to be displayed on an external display device and the sub screen to be displayed on the display unit; and an interface to wirelessly transmit the main screen to the external display device for display while the sub screen is displayed on the display unit. 2. The mobile terminal of claim 1, wherein the sub screen displays information related to the main screen being displayed on the external display device. 3. The mobile terminal of claim 1, wherein the interface is a wireless Local Area Network (LAN) interface. 4. A method for a mobile terminal, the method comprising: processing video data in the mobile terminal; separating a main screen and a sub screen from the video data, the main screen to be displayed on an external display device and the sub screen to be displayed on the mobile terminal; and wirelessly transmitting the main screen to the external display device for display while displaying the sub screen on the mobile terminal. 5. The method of claim 4, wherein the sub screen displays information related to the main screen being displayed on the external display device. 6. The method of claim 4, wherein the main screen is transmitted to the external display device via a wireless Local Area Network (LAN).
A mobile terminal and a method for a mobile terminal are provided. The mobile terminal includes a display unit, a controller to separate a main screen and a sub screen from video data, the main screen to be displayed on an external display device and the sub screen to be displayed on the display unit, and an interface to wirelessly transmit the main screen to the external display device for display while the sub screen is displayed on the display unit.1. A mobile terminal comprising: a display unit; a controller to separate a main screen and a sub screen from video data, the main screen to be displayed on an external display device and the sub screen to be displayed on the display unit; and an interface to wirelessly transmit the main screen to the external display device for display while the sub screen is displayed on the display unit. 2. The mobile terminal of claim 1, wherein the sub screen displays information related to the main screen being displayed on the external display device. 3. The mobile terminal of claim 1, wherein the interface is a wireless Local Area Network (LAN) interface. 4. A method for a mobile terminal, the method comprising: processing video data in the mobile terminal; separating a main screen and a sub screen from the video data, the main screen to be displayed on an external display device and the sub screen to be displayed on the mobile terminal; and wirelessly transmitting the main screen to the external display device for display while displaying the sub screen on the mobile terminal. 5. The method of claim 4, wherein the sub screen displays information related to the main screen being displayed on the external display device. 6. The method of claim 4, wherein the main screen is transmitted to the external display device via a wireless Local Area Network (LAN).
2,600
10,666
10,666
14,991,976
2,644
A system for providing roadside and emergency assistance to a vehicle includes a vehicle unit with several connectivity options. A user interface unit permits a user to request assistance and communicate with an emergency dispatcher and/or service provider. A server receives requests for assistance from the vehicle unit and relays information between the vehicle unit and a dispatcher or service provider to provide communication between the driver of the vehicle and the dispatcher or service provider. Alternatively, such as in an emergency (e.g. crash) situation, the server directly requests assistance to be sent to the vehicle.
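Editor's illustration (not from the application): a Python sketch of the connectivity fallback suggested by claims 4 and 5, where the vehicle unit retries the assistance request over a second wireless protocol when the first is unavailable. The transport names, payload fields, and send() signatures are assumptions for the sketch.

```python
def send_assistance_request(request: dict, transports: list) -> str:
    """Try each wireless transport in order; fall back if one is unavailable."""
    for transport in transports:
        try:
            transport["send"](request)
            return transport["name"]
        except ConnectionError:
            continue  # protocol unavailable; automatically try the next one
    raise RuntimeError("no wireless transport available")

def cellular_send(request):
    raise ConnectionError("no cellular coverage")  # simulate an outage

def wifi_send(request):
    print("sent via Wi-Fi:", request)

used = send_assistance_request(
    {"location": (44.98, -93.27), "diagnostics": {"dtc": ["P0301"]}},
    [{"name": "cellular", "send": cellular_send},
     {"name": "wifi", "send": wifi_send}],
)
print("delivered over", used)
```

The payload also carries on-board diagnostic information alongside the vehicle location, mirroring claim 3.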
1. A roadside assistance system comprising: a vehicle unit installed in a vehicle, the vehicle unit including a processor and at least one wireless communication circuit; a user interface device including a touch-screen display, a processor, and a communication interface for communicating with the vehicle unit, the user interface device permitting a user to initiate an assistance request in which a location of the vehicle and the assistance request are transmitted to a remote server. 2. The system of claim 1 wherein the at least one wireless communication circuit of the vehicle unit communicates the assistance request wirelessly to the remote server. 3. The system of claim 2 wherein the vehicle unit transmits vehicle diagnostic information from an on-board diagnostics system with the assistance request. 4. The system of claim 3 wherein the at least one communication circuit of the vehicle unit is capable of transmitting the assistance request via a plurality of wireless protocols. 5. The system of claim 4 wherein the vehicle unit automatically transmits the assistance request via a second one of the plurality of protocols if a first one of the plurality of protocols is unavailable. 6. The system of claim 3 wherein the at least one communication circuit of the vehicle unit is capable of transmitting the assistance request via a mobile device. 7. The system of claim 2 wherein the vehicle unit is programmed to automatically communicate with an assistance provider directly if the remote server is unavailable. 8. The system of claim 1 further including the remote server, wherein the remote server provides a telephone number of the vehicle to an assistance provider in response to the assistance request. 9. The system of claim 1 wherein the remote server is a first remote server, the system including a second remote server, the first remote server sending the second remote server the assistance request, the second remote server selecting an assistance provider based upon the assistance request. 10. A method of requesting assistance from a vehicle including the steps of: a) generating an assistance request from a mobile device; b) the mobile device communicating with a vehicle unit installed in the vehicle; and c) transmitting information and the assistance request based upon said steps a) and b). 11. The method of claim 10 wherein said step c) further includes the step of transmitting the request directly to a service provider. 12. The method of claim 11 wherein said step c) further includes transmitting vehicle diagnostic information with the assistance request.
A system for providing roadside and emergency assistance to a vehicle includes a vehicle unit with several connectivity options. A user interface unit permits a user to request assistance and communicate with an emergency dispatcher and/or service provider. A server receives requests for assistance from the vehicle unit and relays information between the vehicle unit and a dispatcher or service provider to provide communication between the driver of the vehicle and the dispatcher or service provider. Alternatively, such as in an emergency (e.g. crash) situation, the server directly requests assistance to be sent to the vehicle.1. A roadside assistance system comprising: a vehicle unit installed in a vehicle, the vehicle unit including a processor and at least one wireless communication circuit; a user interface device including a touch-screen display, a processor, and a communication interface for communicating with the vehicle unit, the user interface device permitting a user to initiate an assistance request in which a location of the vehicle and the assistance request are transmitted to a remote server. 2. The system of claim 1 wherein the at least one wireless communication circuit of the vehicle unit communicates the assistance request wirelessly to the remote server. 3. The system of claim 2 wherein the vehicle unit transmits vehicle diagnostic information from an on-board diagnostics system with the assistance request. 4. The system of claim 3 wherein the at least one communication circuit of the vehicle unit is capable of transmitting the assistance request via a plurality of wireless protocols. 5. The system of claim 4 wherein the vehicle unit automatically transmits the assistance request via a second one of the plurality of protocols if a first one of the plurality of protocols is unavailable. 6. The system of claim 3 wherein the at least one communication circuit of the vehicle unit is capable of transmitting the assistance request via a mobile device. 7. The system of claim 2 wherein the vehicle unit is programmed to automatically communicate with an assistance provider directly if the remote server is unavailable. 8. The system of claim 1 further including the remote server, wherein the remote server provides a telephone number of the vehicle to an assistance provider in response to the assistance request. 9. The system of claim 1 wherein the remote server is a first remote server, the system including a second remote server, the first remote server sending the second remote server the assistance request, the second remote server selecting an assistance provider based upon the assistance request. 10. A method of requesting assistance from a vehicle including the steps of: a) generating an assistance request from a mobile device; b) the mobile device communicating with a vehicle unit installed in the vehicle; and c) transmitting information and the assistance request based upon said steps a) and b). 11. The method of claim 10 wherein said step c) further includes the step of transmitting the request directly to a service provider. 12. The method of claim 11 wherein said step c) further includes transmitting vehicle diagnostic information with the assistance request.
2,600
10,667
10,667
15,719,885
2,621
Medical devices may have the ability to connect through a secure gateway to a network, including both local and external networks. According to the described system, a connection component of the medical device may include a wireless connection dongle system using a wireless adapter, such as a dongle, that is inserted into and/or otherwise coupled to the medical device and that transmits or casts information wirelessly, such as via real-time streaming, to a separate receiving display. The communication may be facilitated by another dongle inserted into and/or otherwise coupled to the receiving display that receives the cast display screen. This transmitted display casting capability allows the medical device, such as a peritoneal dialysis machine, to be connected to other display devices, duplicating the screen of the medical device on one or more larger or more easily accessible displays via secure one-way communication.
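Editor's illustration (not from the application): a Python sketch of the secure-pairing and one-way constraints. The CastSender/CastReceiver classes and the pair identifier are hypothetical; a real implementation would authenticate the pairing cryptographically rather than by comparing plain identifiers.

```python
class CastSender:
    """Models the dongle coupled to the medical device's originating display."""

    def __init__(self, pair_id):
        self.pair_id = pair_id

    def cast(self, frame):
        return {"pair_id": self.pair_id, "frame": frame}


class CastReceiver:
    """Models the dongle coupled to the receiving display."""

    def __init__(self, pair_id):
        self.pair_id = pair_id

    def receive(self, packet):
        # Accept frames only from the securely paired sender.
        if packet["pair_id"] != self.pair_id:
            raise PermissionError("not paired")
        return packet["frame"]

    def send_back(self, command):
        # One-way channel: the receiving display can never control or
        # change the medical device's originating display.
        raise PermissionError("channel is receive-only")


tx, rx = CastSender("pair-42"), CastReceiver("pair-42")
print(rx.receive(tx.cast("screen v1")))  # duplicate screen shown on the receiver
```

Making the return path fail unconditionally, rather than merely unused, reflects the stated requirement that the casting system prevents any control of the originating display.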
1. A medical system, comprising: a medical device having an originating display; a receiving display; and a transmitted display casting system that wirelessly transmits a screen of the originating display that is received at the receiving display via a one-way wireless communication channel, wherein the transmitted display casting system includes a first wireless dongle coupled to the medical device that casts the screen of the originating display to a second wireless dongle coupled to the receiving display via the one-way wireless communication channel, wherein the first wireless dongle is securely paired with the second wireless dongle, wherein the receiving display displays a duplicate screen of the screen of the originating display, and wherein the transmitted display casting system prevents the receiving display from controlling or changing the originating display of the medical device. 2. The medical system of claim 1, wherein the medical device is a dialysis machine. 3. (canceled) 4. The medical system of claim 1, wherein the first wireless dongle and the second wireless dongle are joined to a same wireless local area network. 5. (canceled) 6. (canceled) 7. The medical system of claim 1, wherein the transmitted display casting system uses a wireless gateway device providing a local area network. 8. The medical system of claim 7, wherein the wireless gateway device is disposed in a same home as is disposed the medical device. 9. The medical system of claim 1, wherein the one-way wireless communication channel is provided via a local area network connection. 10. The medical system of claim 1, wherein the one-way wireless communication channel is provided over the Internet via a network infrastructure. 11. The medical system of claim 1, wherein the receiving display has a display screen that is larger than a display screen of the originating display. 12. A transmitted display casting system for a medical device, comprising: a first wireless dongle that casts a screen of an originating display of the medical device via a one-way wireless communication channel; a second wireless dongle that receives the screen of the originating display over the one-way wireless communication channel; and a communication pairing component that securely pairs the first wireless dongle and the second wireless dongle for casting the screen of the originating display securely over the one-way wireless communication channel, wherein the second wireless dongle enables display on a receiving display of a duplicate screen of the screen of the originating display, and wherein the transmitted display casting system prevents the receiving display from controlling or changing the originating display of the medical device. 13. The transmitted display casting system of claim 12, wherein the medical device is a dialysis machine. 14. (canceled) 15. The transmitted display casting system of claim 12, wherein the communication pairing component includes electronics and software distributed between the first wireless dongle and the second wireless dongle, wherein the electronics and software establish the one-way communication channel between the first wireless dongle and the second wireless dongle. 16. The transmitted display casting system of claim 12, wherein the communication pairing component includes a wireless gateway device providing a local area network, and wherein the first wireless dongle and the second wireless dongle are joined to the local area network. 17. 
The transmitted display casting system of claim 12, wherein the communication pairing component includes a component of a connected health system, wherein the second wireless dongle is disposed at a location remote from the medical device, and wherein the first wireless dongle casts the originating display screen of the medical device to the second wireless dongle via the one-way wireless communication channel through the connected health system. 18. The transmitted display casting system of claim 12, wherein the one-way communication channel is provided via a local area network connection. 19. The transmitted display casting system of claim 12, wherein the one-way wireless communication channel is provided over the Internet via a network infrastructure to the receiving display. 20. The transmitted display casting system of claim 12, further comprising: the receiving display coupled to the second wireless dongle.
Medical devices may have the ability to connect through a secure gateway to a network, including both local and external networks. According to the described system, a connection component of the medical device may include a wireless connection dongle system using a wireless adapter, such as a dongle, that is inserted into and/or otherwise coupled to the medical device and that transmits or casts information wirelessly, such as via real-time streaming, to a separate receiving display. The communication may be facilitated by another dongle inserted into and/or otherwise coupled to the receiving display that receives the cast display screen. This transmitted display casting capability allows the medical device, such as a peritoneal dialysis machine, to be connected to other display devices, duplicating the screen of the medical device on one or more larger or more easily accessible displays via secure one-way communication.1. A medical system, comprising: a medical device having an originating display; a receiving display; and a transmitted display casting system that wirelessly transmits a screen of the originating display that is received at the receiving display via a one-way wireless communication channel, wherein the transmitted display casting system includes a first wireless dongle coupled to the medical device that casts the screen of the originating display to a second wireless dongle coupled to the receiving display via the one-way wireless communication channel, wherein the first wireless dongle is securely paired with the second wireless dongle, wherein the receiving display displays a duplicate screen of the screen of the originating display, and wherein the transmitted display casting system prevents the receiving display from controlling or changing the originating display of the medical device. 2. The medical system of claim 1, wherein the medical device is a dialysis machine. 3. (canceled) 4. The medical system of claim 1, wherein the first wireless dongle and the second wireless dongle are joined to a same wireless local area network. 5. (canceled) 6. (canceled) 7. The medical system of claim 1, wherein the transmitted display casting system uses a wireless gateway device providing a local area network. 8. The medical system of claim 7, wherein the wireless gateway device is disposed in a same home as is disposed the medical device. 9. The medical system of claim 1, wherein the one-way wireless communication channel is provided via a local area network connection. 10. The medical system of claim 1, wherein the one-way wireless communication channel is provided over the Internet via a network infrastructure. 11. The medical system of claim 1, wherein the receiving display has a display screen that is larger than a display screen of the originating display. 12. 
A transmitted display casting system for a medical device, comprising: a first wireless dongle that casts a screen of an originating display of the medical device via a one-way wireless communication channel; a second wireless dongle that receives the screen of the originating display over the one-way wireless communication channel; and a communication pairing component that securely pairs the first wireless dongle and the second wireless dongle for casting the screen of the originating display securely over the one-way wireless communication channel, wherein the second wireless dongle enables display on a receiving display of a duplicate screen of the screen of the originating display, and wherein the transmitted display casting system prevents the receiving display from controlling or changing the originating display of the medical device. 13. The transmitted display casting system of claim 12, wherein the medical device is a dialysis machine. 14. (canceled) 15. The transmitted display casting system of claim 12, wherein the communication pairing component includes electronics and software distributed between the first wireless dongle and the second wireless dongle, wherein the electronics and software establish the one-way communication channel between the first wireless dongle and the second wireless dongle. 16. The transmitted display casting system of claim 12, wherein the communication pairing component includes a wireless gateway device providing a local area network, and wherein the first wireless dongle and the second wireless dongle are joined to the local area network. 17. The transmitted display casting system of claim 12, wherein the communication pairing component includes a component of a connected health system, wherein the second wireless dongle is disposed at a location remote from the medical device, and wherein the first wireless dongle casts the originating display screen of the medical device to the second wireless dongle via the one-way wireless communication channel through the connected health system. 18. The transmitted display casting system of claim 12, wherein the one-way communication channel is provided via a local area network connection. 19. The transmitted display casting system of claim 12, wherein the one-way wireless communication channel is provided over the Internet via a network infrastructure to the receiving display. 20. The transmitted display casting system of claim 12, further comprising: the receiving display coupled to the second wireless dongle.
2,600
10,668
10,668
15,386,472
2,689
The present invention discloses a tag identification method and apparatus, and relates to the field of communications network technologies. The method and apparatus reduce the number of steps a device host in an NFC terminal must perform to determine the format of a tag, so that the tag can be processed quickly. In the embodiments of the present invention, a Near Field Communication (NFC) controller reads a type of a tag; the NFC controller determines, according to the type of the tag, whether a format of the tag is the NFC data exchange format (NDEF); and the NFC controller sends a notification message to a device host when the NFC controller determines that the format of the tag is the NDEF, where the notification message indicates that the format of the tag is the NDEF. The solutions provided in the embodiments of the present invention are applicable to identifying a tag.
1. A tag identification method, comprising: reading, by a Near Field Communication (NFC) controller, a type of a tag; performing, by the NFC controller, NFC data exchange format (NDEF) detection on the tag according to the type of the tag; and sending, by the NFC controller, a notification message to a device host when the NDEF detection indicates a presence of an NDEF message in the tag, wherein the notification message comprises the presence of the NDEF message in the tag. 2. The method according to claim 1, wherein the performing, by the NFC controller, the NDEF detection on the tag according to the type of the tag comprises: when the type of the tag is Type 1, performing, by the NFC controller, the NDEF detection on the tag according to header read-only memory HR0 in the tag. 3. The method according to claim 1, wherein the performing, by the NFC controller, the NDEF detection on the tag according to the type of the tag comprises: when the type of the tag is Type 2, performing, by the NFC controller, the NDEF detection on the tag according to a capability container (CC) in the tag. 4. The method according to claim 1, wherein the performing, by the NFC controller, the NDEF detection on the tag according to the type of the tag comprises: when the type of the tag is Type 3, performing, by the NFC controller, the NDEF detection on the tag according to System Code in the tag. 5. The method according to claim 1, wherein the performing, by the NFC controller, the NDEF detection on the tag according to the type of the tag comprises: when the type of the tag is Type 4, performing, by the NFC controller, the NDEF detection on the tag according to a file identifier of a capability container (CC) file in the tag. 6. The method according to claim 1, further comprising: receiving, by the NFC controller via an NDEF radio frequency interface, a read/write command sent by the device host; converting, by the NFC controller, the read/write command into an NDEF read/write command; and performing, by the NFC controller, data reading/writing on the tag according to the NDEF read/write command. 7. A tag identification apparatus, comprising: a Near Field Communication (NFC) controller; and a device host coupled to the NFC controller, wherein the NFC controller is configured to: read a type of a tag; perform NFC data exchange format (NDEF) detection on the tag according to the type of the tag; and send a notification message to the device host when the NDEF detection indicates a presence of an NDEF message in the tag, wherein the notification message comprises the presence of the NDEF message in the tag. 8. The apparatus according to claim 7, wherein when the type of the tag is Type 1, the NFC controller is configured to perform the NDEF detection on the tag according to header read-only memory HR0 in the tag. 9. The apparatus according to claim 7, wherein when the type of the tag is Type 2, the NFC controller is configured to perform the NDEF detection on the tag according to a capability container (CC) in the tag. 10. The apparatus according to claim 7, wherein when the type of the tag is Type 3, the NFC controller is configured to perform the NDEF detection on the tag according to System Code in the tag. 11. The apparatus according to claim 7, wherein when the type of the tag is Type 4, the NFC controller is configured to perform the NDEF detection on the tag according to a file identifier of a capability container (CC) file in the tag. 12. 
The apparatus according to claim 7, wherein the NFC controller is configured to: receive a read/write command via an NDEF radio frequency interface, wherein the read/write command is sent by the device host; convert the read/write command into an NDEF read/write command; and perform data reading/writing on the tag according to the NDEF read/write command.
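Claims 2 through 5 amount to a dispatch on the NFC Forum tag type, with a different NDEF probe per type, and claim 1 has the controller notify the device host only when the probe succeeds. The sketch below models that flow; the Tag structure, the notify_host hook, and the exact constants tested in each branch are assumptions for illustration (the normative values live in the NFC Forum tag-type specifications).

```python
# Sketch of the controller-side dispatch; constants are illustrative.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tag:
    tag_type: int          # 1..4, per the NFC Forum tag types
    hr0: int = 0           # header ROM byte (Type 1)
    cc: bytes = b""        # capability container (Type 2)
    system_code: int = 0   # Type 3
    cc_file_id: int = 0    # file identifier of the CC file (Type 4)


def detect_ndef(tag: Tag) -> bool:
    # Each branch mirrors one dependent claim (2, 3, 4, 5).
    if tag.tag_type == 1:
        return (tag.hr0 & 0xF0) == 0x10                # claim 2: inspect HR0
    if tag.tag_type == 2:
        return len(tag.cc) >= 4 and tag.cc[0] == 0xE1  # claim 3: inspect CC
    if tag.tag_type == 3:
        return tag.system_code == 0x12FC               # claim 4: System Code
    if tag.tag_type == 4:
        return tag.cc_file_id == 0xE103                # claim 5: CC file id
    return False


def on_tag(tag: Tag, notify_host: Callable[[dict], None]) -> None:
    # Claim 1: the host is told the result instead of re-probing the tag
    # itself, which is what shortens the host-side processing.
    if detect_ndef(tag):
        notify_host({"ndef_present": True, "tag_type": tag.tag_type})
```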
2,600
10,669
10,669
14,871,012
2,625
A multi-finger touch (e.g., a simultaneous touch of two or more touch-sensitive regions of a touchpad or touchscreen by three or more objects) causes a touch-sensitive device to perform a function (e.g., open an application control program, open a device control menu, and/or interrupt a startup process and perform a function other than launching the operating system in the normal mode).
1. A touch-sensitive device, comprising: memory; a touch-sensitive input including touch-sensitive regions; and a processor that performs a function in response to a simultaneous touch of two or more of the touch-sensitive regions by three or more objects. 2. The device of claim 1, wherein the touch-sensitive regions are invisible. 3. The device of claim 1, wherein, in response to a touch of the touch-sensitive input by a single object, the processor first determines whether the single object breaks contact with the touch-sensitive input before performing a second function in response to the touch by the single object. 4. The device of claim 1, wherein the processor opens an application control program or a device control menu. 5. The device of claim 4, wherein the touch-sensitive device is configured to run a plurality of software applications and the application control program or device control menu is opened in response to the simultaneous touch regardless of the software application running on the touch-sensitive device. 6. The device of claim 1, wherein the function is selected based on a software application running on the touch-sensitive device. 7. A graphical user interface on a device having memory, a touch-sensitive input, and a processor, the graphical user interface comprising: a plurality of touch-sensitive regions, wherein a function is performed in response to a simultaneous touch of two or more of the plurality of touch-sensitive regions by three or more objects. 8. The graphical user interface of claim 7, wherein the touch-sensitive regions are invisible. 9. The graphical user interface of claim 7, wherein, in response to a touch of the touch-sensitive input by a single object, the processor first determines whether the single object breaks contact with the touch-sensitive input before performing a second function in response to the touch by the single object. 10. The graphical user interface of claim 7, wherein the function is an application control program or a device control menu. 11. The graphical user interface of claim 10, wherein the touch-sensitive device is configured to run a plurality of software applications and the application control program or device control menu is opened in response to the simultaneous touch regardless of the software application running on the touch-sensitive device. 12. The graphical user interface of claim 7, wherein the function is selected based on a software application running on the touch-sensitive device. 13. A method implemented by a device having memory, a touch-sensitive input, and a processor, the method comprising: outputting, by the touch-sensitive input, a plurality of touch-sensitive regions; and performing a function, by the device, in response to a simultaneous touch of two or more of the touch-sensitive regions by three or more objects. 14. The method of claim 13, wherein the touch-sensitive regions are invisible. 15. The method of claim 13, further comprising: determining whether a single object touches the touch-sensitive input; determining whether the object no longer touches the touch-sensitive input; and in response to a determination that the single object no longer touches the touch-sensitive input, performing a second function in response to the single object. 16. The method of claim 13, wherein the function is an application control program or a device control menu. 17. 
The method of claim 16, wherein the application control program or device control menu is opened regardless of a software application running on the touch-sensitive device. 18. The method of claim 13, further comprising: selecting the function based on a software application running on the touch-sensitive device. 19. A non-transitory computer readable storage medium (CRSM) storing instructions that, when executed by a processor, cause a device having a touch-sensitive input to: output a plurality of touch-sensitive regions via the touch-sensitive input; and perform a function in response to a simultaneous touch of two or more of the touch-sensitive regions by three or more objects. 20. The CRSM of claim 19, wherein the touch-sensitive regions are invisible. 21. The CRSM of claim 19, further comprising a display that outputs an image indicative of the boundaries of the touch-sensitive regions. 22. The CRSM of claim 19, wherein the function is an application control program or a device control menu. 23. The CRSM of claim 22, wherein the application control program or device control menu is opened regardless of a software application running on the touch-sensitive device. 24. The CRSM of claim 19, further comprising instructions that cause the device to: select the function based on a software application running on the touch-sensitive device.
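As a concrete reading of the trigger condition shared by claims 1, 7, 13, and 19, the sketch below fires the function only when three or more simultaneous contacts fall on at least two distinct touch-sensitive regions. The rectangular region test and the contact format are assumptions for illustration; a real touch stack would work from its own hit-testing data.

```python
# Sketch of the ">= 3 objects on >= 2 regions" trigger; geometry assumed.
from typing import Callable, Iterable, Optional, Tuple

Contact = Tuple[float, float]               # (x, y) of one touching object
Region = Tuple[float, float, float, float]  # (x0, y0, x1, y1) rectangle


def region_of(contact: Contact, regions: list[Region]) -> Optional[int]:
    """Return the index of the region containing the contact, if any."""
    x, y = contact
    for i, (x0, y0, x1, y1) in enumerate(regions):
        if x0 <= x <= x1 and y0 <= y <= y1:
            return i
    return None


def maybe_trigger(contacts: Iterable[Contact],
                  regions: list[Region],
                  function: Callable[[], None]) -> bool:
    contacts = list(contacts)
    touched = {region_of(c, regions) for c in contacts} - {None}
    # Claim 1: three or more objects simultaneously touching two or more
    # of the touch-sensitive regions.
    if len(contacts) >= 3 and len(touched) >= 2:
        function()
        return True
    return False
```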
2,600
10,670
10,670
16,038,706
2,689
A device includes a signaling means, a motion sensor, and embedded logic for activating or controlling the signaling means in response to a sensed motion. The device may be used as a toy, and may be shaped like a play ball or as a handheld unit. It may be powered from a battery that is chargeable directly from an AC power source, contactlessly by using induction, or by converting harvested kinetic energy into electrical energy. The embedded logic may activate or control the signaling means, predictably or randomly, in response to the sensed acceleration magnitude or direction, such as by sensing the crossing of a preset threshold or by sensing the peak value. The signaling means may include a numeric display for displaying a value associated with the count of the number of times the threshold has been exceeded or with the peak magnitude of the sensed acceleration.
1. A device comprising: an accelerometer for producing an output signal responsive to the device acceleration; a visible light emitter for emitting a visible light signaling a first status to a person; an electric motor for effecting a physical movement; a software and a processor for executing the software, the processor coupled to receive the output signal from the accelerometer and to control the visible light emitter; a rechargeable battery connected to power the device; a battery charger connected for induction-based contactless charging of the rechargeable battery; and a single handheld enclosure for housing the accelerometer, the electric motor, the processor, the rechargeable battery, and the battery charger. 2. The device according to claim 1, wherein the battery charger comprises, or consists of, an induction coil for inductively receiving AC power when in an electromagnetic field, and for charging the rechargeable battery from the received AC power. 3. The device according to claim 1, wherein the physical movement is associated with the device image, theme, or shape. 4. The device according to claim 1, wherein the accelerometer comprises, consists of, uses, or is based on, a piezoelectric, piezoresistive, capacitive, Micro-Electro-Mechanical Systems (MEMS), or electromechanical accelerometer. 5. The device according to claim 1, wherein the accelerometer produces the output signal in response to an absolute acceleration or to a relative-to-freefall acceleration of the enclosure, or wherein the output signal is responsive to the magnitude or the direction of the device acceleration, and wherein the accelerometer is a single-axis, two-axis, or a three-axis accelerometer. 6. The device according to claim 1, further operative to sense or measure the device mechanical orientation, vibration, shock, or falling, based on, or using, the output signal. 7. The device according to claim 1, wherein the accelerometer comprises, consists of, uses, or is based on, a piezoelectric accelerometer that utilizes a piezoelectric effect and comprises, consists of, uses, or is based on, piezoceramics or a single crystal or quartz. 8. The device according to claim 1, wherein the visible light emitter consists of, comprises, is based on, or uses, a semiconductor component, an incandescent lamp, or fluorescent lamp, and wherein the semiconductor component consists of, or comprises, a single color Light Emitting Diode (LED) or a multi-color LED. 9. The device according to claim 1, wherein the visible light emitter is operative to illuminate in multiple colors, and wherein the first status is indicated by changing between colors. 10. The device according to claim 1, further comprising an additional visible light emitter attached to the single enclosure and coupled to the processor for emitting a visible light indicating a second status to the person, and wherein the first status is indicated by steadiness, blinking, intensity level, duty-cycle, or flashing, of the illumination of the visible light emitter. 11. The device according to claim 1, wherein the visible light emitter consists of, or comprises, a numerical display for displaying one or more digits representing a number, and wherein the numerical display comprises, consists of, or uses, a seven-segment display. 12. 
The device according to claim 1, wherein the visible light emitter consists of, or comprises, an alphanumeric display for displaying characters, numbers, letters, or symbols, and wherein the visible light emitter consists of, or comprises, a flat-panel digital display for displaying graphical or text information. 13. The device according to claim 12, wherein the digital display is based on, comprises, or uses, a Liquid Crystal Display (LCD), a Thin-Film Transistor (TFT), or a Field Emission Display (FED) display. 14. The device according to claim 1, further operative as a toy for the amusement of a person or a pet, wherein the single enclosure is configured, dimensioned, formed, or structured as a toy. 15. The device according to claim 1, wherein the single enclosure comprises, or is shaped as, two substantially circular plates attached to both ends of a rod, or wherein the single enclosure is substantially sphere shaped. 16. The device according to claim 15, wherein the single enclosure is ball-shaped. 17. The device according to claim 15, wherein the single enclosure is cylinder, half-sphere, prolate-spheroid, football, or ovoid shaped. 18. The device according to claim 1, wherein the single enclosure is substantially cylinder, cone, pyramid, or torus shaped, or wherein the single enclosure is substantially box-shaped having a rectangular, square, elongated, or oval, horizontal or vertical cross-section. 19. The device according to claim 1, further comprising a sensor coupled to the processor and having an output responsive to a physical phenomenon, and wherein the activating or the controlling of the emitted light or of the electric motor is in response to the sensor output. 20. The device according to claim 19, wherein the sensor is an electric sensor that responds to an electrical characteristic or electrical phenomenon quantity in an electrical circuit. 21. The device according to claim 20, wherein the electric sensor consists of, comprises, or is based on, a voltage or current sensor. 22. The device according to claim 19, wherein the sensor is a light sensor. 23. The device according to claim 22, wherein the light sensor consists of, comprises, or is based on, a photocell. 24. The device according to claim 19, wherein the sensor is a force sensor. 25. The device according to claim 24, wherein the force sensor consists of, comprises, or is based on, a pressure sensor. 26. The device according to claim 1, further operative to sense or measure the device tilt angle based on the output signal, and wherein the visible light emitter or the electric motor is activated or controlled in response to the sensed or measured device tilt angle. 27. The device according to claim 1, further comprising a counter coupled to the accelerometer for counting a number of events based on the output signal, and wherein events are occurrences when the magnitude of the output signal crosses an acceleration threshold. 28. The device according to claim 27, wherein the visible light emitter or the electric motor is activated or controlled in response to the counted number. 29. The device according to claim 27, wherein the counter is an electromechanical counter or a mechanical counter, or wherein the counter is a software-based counter included in the software. 30. 
The device according to claim 1, further comprising a peak-detector and a storage, respectively for detecting and storing a peak value of the sensed acceleration, and wherein the device is further operative to activate or control the visible light emitter or the electric motor in response to the peak value. 31. The device according to claim 1, further comprising a timer for measuring a time interval, wherein the visible light emitter or the electric motor is activated or controlled to indicate to the person in response to a measured time interval between two events. 32. The device according to claim 1, wherein the handheld enclosure further comprises the visible light emitter. 33. The device according to claim 1, wherein the processor is further coupled to the accelerometer, to the electric motor, and to the visible light emitter, for activating or controlling the emitted light or the electric motor in response to the output signal. 34. A device comprising: a sensor for producing an output signal responsive to a physical phenomenon; a visible light emitter for emitting a visible light signaling a first status to a person; an electric motor for effecting a physical movement; a software and a processor for executing the software, the processor coupled to control the visible light emitter in response to the output signal from the sensor; a rechargeable battery connected to power the device; a battery charger connected for induction-based contactless charging of the rechargeable battery; and a single handheld enclosure for housing the sensor, the electric motor, the processor, the rechargeable battery, and the battery charger. 35. The device according to claim 34, wherein the battery charger comprises, or consists of, an induction coil for inductively receiving AC power when in an electromagnetic field, and for charging the rechargeable battery from the received AC power. 36. The device according to claim 34, wherein the physical movement is associated with the device image, theme, or shape. 37. The device according to claim 34, wherein the sensor consists of, or comprises, an accelerometer attached to the single enclosure for producing an output signal responsive to the device acceleration. 38. The device according to claim 37, wherein the accelerometer comprises, consists of, uses, or is based on, a piezoelectric, piezoresistive, capacitive, Micro-Electro-Mechanical Systems (MEMS), or electromechanical accelerometer. 39. The device according to claim 37, wherein the accelerometer produces the output signal in response to an absolute acceleration or to a relative-to-freefall acceleration of the enclosure, or wherein the output signal is responsive to the magnitude or the direction of the device acceleration, and wherein the accelerometer is a single-axis, two-axis, or a three-axis accelerometer. 40. 
The device according to claim 37, further operative to sense or measure the device mechanical orientation, vibration, shock, or falling, based on, or using, the output signal. 41. The device according to claim 37, wherein the accelerometer comprises, consists of, uses, or is based on, a piezoelectric accelerometer that utilizes a piezoelectric effect and comprises, consists of, uses, or is based on, piezoceramics or a single crystal or quartz. 42. The device according to claim 34, wherein the visible light emitter consists of, comprises, is based on, or uses, a semiconductor component, an incandescent lamp, or fluorescent lamp, and wherein the semiconductor component consists of, or comprises, a single color Light Emitting Diode (LED) or a multi-color LED. 43. The device according to claim 34, wherein the visible light emitter is operative to illuminate in multiple colors, and wherein the first status is indicated by changing between colors. 44. The device according to claim 34, further comprising an additional visible light emitter attached to the single enclosure and coupled to the processor for emitting a visible light indicating a second status to the person, and wherein the first status is indicated by steadiness, blinking, intensity level, duty-cycle, or flashing, of the illumination of the visible light emitter. 45. The device according to claim 34, wherein the visible light emitter consists of, or comprises, a numerical display for displaying one or more digits representing a number, and wherein the numerical display comprises, consists of, or uses, a seven-segment display. 46. The device according to claim 34, wherein the visible light emitter consists of, or comprises, an alphanumeric display for displaying characters, numbers, letters, or symbols, and wherein the visible light emitter consists of, or comprises, a flat-panel digital display for displaying graphical or text information. 47. The device according to claim 46, wherein the digital display is based on, comprises, or uses, a Liquid Crystal Display (LCD), a Thin-Film Transistor (TFT), or a Field Emission Display (FED) display. 48. The device according to claim 34, further operative as a toy for the amusement of a person or a pet, wherein the single enclosure is configured, dimensioned, formed, or structured as a toy. 49. The device according to claim 34, wherein the single enclosure comprises, or is shaped as, two substantially circular plates attached to both ends of a rod, or wherein the single enclosure is substantially sphere shaped. 50. The device according to claim 49, wherein the single enclosure is ball-shaped. 51. The device according to claim 49, wherein the single enclosure is cylinder, half-sphere, prolate-spheroid, football, or ovoid shaped. 52. The device according to claim 34, wherein the single enclosure is substantially cylinder, cone, pyramid, or torus shaped, or wherein the single enclosure is substantially box-shaped having a rectangular, square, elongated, or oval, horizontal or vertical cross-section. 53. The device according to claim 34, wherein the activating or the controlling of the emitted light or of the electric motor is in response to the sensor output. 54. The device according to claim 34, wherein the sensor is an electric sensor that responds to an electrical characteristic or electrical phenomenon quantity in an electrical circuit. 55. The device according to claim 54, wherein the electric sensor consists of, comprises, or is based on, a voltage or current sensor. 56. 
The device according to claim 34, further operative to sense or measure the device tilt angle based on the output signal, and wherein the visible light emitter or the electric motor is activated or controlled in response to the sensed or measured device tilt angle. 61. The device according to claim 34, further comprising a counter coupled to the accelerometer for counting a number of events based on the output signal, and wherein events are occurrences when the magnitude of the output signal cross an acceleration threshold. 62. The device according to claim 61, wherein the visible light emitter or the electric motor is activated or controlled in response to the counted number. 63. The device according to claim 61, wherein the counter is an electromechanical counter or a mechanical counter, or wherein the counter is software-based counter included in the software. 64. The device according to claim 34, further comprising a peak-detector and a storage, respectively for detecting and storing a peak value of the sensed acceleration, and wherein the device is further operative to activate or control the visible light emitter or the electric motor in response to the peak value. 65. The device according to claim 34, further comprising a timer for measuring a time interval, wherein the visible light emitter or the electric motor is activated or controlled to indicate to the person in response to a measured time interval between two events. 66. The device according to claim 34, wherein the handheld enclosure further comprises the visible light emitter. 67. The device according to claim 34, wherein the processor is further coupled to the accelerometer, to the electric motor, and to the visible light emitter, for activating or controlling the electric motor in response to the output signal.
2,600
10,671
10,671
15,829,332
2,685
An anti-theft device for protecting a portable electronic device from theft or unauthorized removal in a retail environment is provided. The anti-theft device includes a shroud configured to at least partially receive and engage a portable electronic device. The anti-theft device also includes a dock configured to releasably engage the shroud. The dock is configured to engage the shroud in a locked configuration so as to prevent removal of the shroud and the portable electronic device from the dock and to disengage the shroud in an unlocked configuration so as to allow the shroud and the portable electronic device to be removed from the dock.
1-20. (canceled) 21. An anti-theft device for securing a portable electronic device from unauthorized removal or theft, the anti-theft device comprising: a shroud configured to at least partially receive and engage a portable electronic device; and a dock configured to releasably engage the shroud, the dock configured to engage the shroud in a locked configuration so as to prevent removal of the shroud and the portable electronic device from the dock and to disengage the shroud in an unlocked configuration so as to allow the shroud and the portable electronic device to be removed from the dock, wherein the dock is configured to disengage the shroud in response to receiving a command at the portable electronic device. 22. The anti-theft device of claim 21, further comprising a mobile payment interface pivotably coupled to the shroud for moving between an open position and a closed position, the mobile payment interface configured to operably couple to a mobile payment device. 23. The anti-theft device of claim 21, wherein the shroud comprises a wireless communications interface configured to communicate and be paired with the portable electronic device. 24. The anti-theft device of claim 21, wherein the dock is configured to communicate power and/or data signals with the portable electronic device. 25. The anti-theft device of claim 21, further comprising at least one adapter configured to releasably engage an input port on the portable electronic device, wherein the shroud is configured to at least partially receive and engage the portable electronic device for establishing electrical communication with the at least one adapter, and wherein the dock is configured to be in electrical communication with the portable electronic device via the at least one adapter. 26. The anti-theft device of claim 21, further comprising a mobile payment interface coupled to the shroud, the mobile payment interface configured to operably couple to a mobile payment device. 27. The anti-theft device of claim 26, wherein the mobile payment interface is removably attached to the shroud. 28. The anti-theft device of claim 26, wherein the mobile payment interface is attached to the shroud via a hinge for moving between an open position and a closed position relative to the shroud. 29. The anti-theft device of claim 26, wherein the dock is configured to transfer power to both the shroud and the mobile payment interface when the shroud is engaged with the dock. 30. The anti-theft device of claim 26, wherein the dock is configured to communicate power and/or data signals with the portable electronic device and the mobile payment device. 31. The anti-theft device of claim 21, wherein the shroud is configured to rotate relative to the dock about three axes of rotation. 32. The anti-theft device of claim 21, wherein the shroud is configured to be removably attached to the portable electronic device. 33. The anti-theft device of claim 21, wherein the dock is configured to disengage the shroud in response to communication with an electronic key. 34. The anti-theft device of claim 21, wherein the dock comprises an alarm configured to be activated in response to unauthorized removal of the portable electronic device and/or the shroud from the dock. 35. The anti-theft device of claim 21, wherein the dock comprises an alarm configured to be activated in response to unauthorized removal of the dock from a support surface. 36. 
The anti-theft device of claim 21, wherein the dock is configured to disengage the shroud in response to receiving a command at the portable electronic device via a software application on the portable electronic device. 37. A method for securing a portable electronic device from unauthorized removal or theft, the method comprising: positioning a portable electronic device within a shroud; positioning the shroud and the portable electronic device on a dock such that the shroud and the portable electronic device are locked to the dock; and inputting a command at the portable electronic device to cause the dock to unlock the shroud from the dock. 38. The method of claim 37, further comprising moving a mobile payment interface coupled to the shroud to an open position. 39. The method of claim 37, further comprising wirelessly pairing the shroud with the portable electronic device. 40. The method of claim 37, further comprising operably coupling a mobile payment device to a mobile payment interface coupled to the shroud for communicating power and/or data signals with the dock.
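The lock/unlock behavior of claims 21 and 33-37 is essentially a small state machine: locked by default, unlocked only on a command from the paired portable device or an electronic key, and alarming on unauthorized removal. A minimal sketch, with all class and method names assumed for illustration:

```python
# Sketch of the dock's lock state machine; names are illustrative.
class Dock:
    def __init__(self, paired_device_id: str, key_id: str):
        self.paired_device_id = paired_device_id
        self.key_id = key_id
        self.locked = True    # claim 21: locked configuration by default
        self.alarm = False

    def request_unlock(self, source_id: str) -> bool:
        # Claims 33 and 36/37: only the paired device's app or the
        # electronic key may move the dock to the unlocked configuration.
        if source_id in (self.paired_device_id, self.key_id):
            self.locked = False
            return True
        return False

    def on_shroud_removed(self) -> None:
        # Claim 34: removal while still locked is unauthorized.
        if self.locked:
            self.alarm = True
```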
An anti-theft device for protecting a portable electronic device from theft or unauthorized removal in a retail environment is provided. The anti-theft device includes a shroud configured to at least partially receive and engage a portable electronic device. The anti-theft device also includes a dock configured to releasably engage the shroud. The dock is configured to engage the shroud in a locked configuration so as to prevent removal of the shroud and the portable electronic device from the dock and to disengage the shroud in an unlocked configuration so as to allow the shroud and the portable electronic device to be removed from the dock.1-20. (canceled) 21. An anti-theft device for securing a portable electronic device from unauthorized removal or theft, the anti-theft device comprising: a shroud configured to at least partially receive and engage a portable electronic device; and a dock configured to releasably engage the shroud, the dock configured to engage the shroud in a locked configuration so as to prevent removal of the shroud and the portable electronic device from the dock and to disengage the shroud in an unlocked configuration so as to allow the shroud and the portable electronic device to be removed from the dock, wherein the dock is configured to disengage the shroud in response to receiving a command at the portable electronic device. 22. The anti-theft device of claim 21, further comprising a mobile payment interface pivotably coupled to the shroud for moving between an open position and a closed position, the mobile payment interface configured to operably couple to a mobile payment device. 23. The anti-theft device of claim 21, wherein the shroud comprises a wireless communications interface configured to communicate and be paired with the portable electronic device. 24. The anti-theft device of claim 21, wherein the dock is configured to communicate power and/or data signals with the portable electronic device. 25. The anti-theft device of claim 21, further comprising at least one adapter configured to releasably engage an input port on the portable electronic device, wherein the shroud is configured to at least partially receive and engage the portable electronic device for establishing electrical communication with the at least one adapter, and wherein the dock is configured to be in electrical communication with the portable electronic device via the at least one adapter. 26. The anti-theft device of claim 21, further comprising a mobile payment interface coupled to the shroud, the mobile payment interface configured to operably couple to a mobile payment device. 27. The anti-theft device of claim 26, wherein the mobile payment interface is removably attached to the shroud. 28. The anti-theft device of claim 26, wherein the mobile payment interface is attached to the shroud via a hinge for moving between an open position and a closed position relative to the shroud. 29. The anti-theft device of claim 26, wherein the dock is configured to transfer power to both the shroud and the mobile payment interface when the shroud is engaged with the dock. 30. The anti-theft device of claim 26, wherein the dock is configured to communicate power and/or data signals with the portable electronic device and the mobile payment device. 31. The anti-theft device of claim 21, wherein the shroud is configured to rotate relative to the dock about three axes of rotation. 32. 
The anti-theft device of claim 21, wherein the shroud is configured to be removably attached to the portable electronic device. 33. The anti-theft device of claim 21, wherein the dock is configured to disengage the shroud in response to communication with an electronic key. 34. The anti-theft device of claim 21, wherein the dock comprises an alarm configured to be activated in response to unauthorized removal of the portable electronic device and/or the shroud from the dock. 35. The anti-theft device of claim 21, wherein the dock comprises an alarm configured to be activated in response to unauthorized removal of the dock from a support surface. 36. The anti-theft device of claim 21, wherein the dock is configured to disengage the shroud in response to receiving a command at the portable electronic device via a software application on the portable electronic device. 37. A method for securing a portable electronic device from unauthorized removal or theft, the method comprising: positioning a portable electronic device within a shroud; positioning the shroud and the portable electronic device on a dock such that the shroud and the portable electronic device are locked to the dock; and inputting a command at the portable electronic device to cause the dock to unlock the shroud from the dock. 38. The method of claim 37, further comprising moving a mobile payment interface coupled to the shroud to an open position. 39. The method of claim 37, further comprising wirelessly pairing the shroud with the portable electronic device. 40. The method of claim 37, further comprising operably coupling a mobile payment device to a mobile payment interface coupled to the shroud for communicating power and/or data signals with the dock.
2,600
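Editor's note: the unlock-by-command behavior in claims 21 and 33-37 above amounts to a small state machine. The Python sketch below is illustrative only; the class, method, and command-source names are assumptions rather than anything recited in the application.

class Dock:
    # Toy model of the dock's locked/unlocked configurations (claim 21).
    def __init__(self):
        self.locked = False      # shroud engagement state
        self.alarm_armed = True  # claims 34-35: alarm support

    def engage_shroud(self):
        # Claim 37: positioning the shroud on the dock locks it.
        self.locked = True

    def handle_unlock(self, source):
        # Claim 36: a command entered via a software application on the
        # portable electronic device unlocks; claim 33 also allows an
        # electronic key.
        if source in ("device_app", "electronic_key"):
            self.locked = False

    def shroud_removed(self):
        # Claim 34: removal while still locked is unauthorized.
        if self.locked and self.alarm_armed:
            print("ALARM: unauthorized removal")

dock = Dock()
dock.engage_shroud()              # locked configuration
dock.handle_unlock("device_app")  # unlocked configuration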
10,672
10,672
15,203,928
2,674
A display operation device has a display-integrated operation panel with a display screen, on which a monochrome copy start key and a color copy start key are displayed. When black-toner-out or color-toner-out occurs, another UI component, such as a black-toner-out key or a color-toner-out key, is displayed on the monochrome copy start key and the color copy start key. Such a UI component is displayed at the display position of the monochrome copy start key and the color copy start key, superposed on these keys.
1. (canceled) 2: A display operation device, comprising: a display-integrated operation panel including a display screen; and processor circuitry; wherein said processor circuitry is configured and/or programmed to: display a first user interface (UI) component on said display screen; activate, responsive to a user operation of said first UI component displayed on said display screen, a process associated with said first UI component; and responsive to activation of said process, disable a user operation of said first UI component. 3: The display operation device according to claim 2, wherein said processor circuitry is configured and/or programmed to display, responsive to activation of said process, a second UI component on said display screen and thereby to disable a user operation of said first UI component. 4: The display operation device according to claim 3, wherein said second UI component is displayed replacing said first UI component. 5: The display operation device according to claim 3, wherein said second UI component is displayed superposed on said first UI component. 6: The display operation device according to claim 3, wherein said second UI component is displayed in a color different from said first UI component. 7: The display operation device according to claim 3, wherein said first UI component is a key that instructs a start of said process; and said second UI component is a key that instructs another process related to said process. 8: The display operation device according to claim 3, wherein said second UI component is a key that instructs cancelling or a stop of said process. 9: The display operation device according to claim 3, wherein if an event hindering said process is occurring when said second UI component is to be displayed, said second UI component includes a message related to said event hindering said process. 10: The display operation device according to claim 9, wherein said message relates to a cause of or a measure against said event hindering said process. 11: The display operation device according to claim 9, mounted on an image processing apparatus being capable of forming an image on a recording medium using toner and/or transmitting image data through a communication device, wherein said event hindering said process is a trouble related to any of said toner, said recording medium and a transmission destination. 12: A method of receiving a user operation using a display-integrated operation panel including a display screen, comprising the steps of: displaying a first UI component on said display screen; activating, responsive to a user operation of said first UI component displayed on said display screen at said step of displaying said first UI component, a process associated with said first UI component; and responsive to activation of said process, disabling a user operation of said first UI component. 13: An image processing apparatus, comprising: the display operation device according to claim 2; an image forming device forming image data; and an image processing unit, connected to said display operation device and said image forming device, for processing the image data formed by said image forming device, based on an instruction from said display operation device.
A display operation device has a display-integrated operation panel with a display screen, on which a monochrome copy start key and a color copy start key are displayed. When black-toner-out or color-toner-out occurs, another UI component, such as a black-toner-out key or a color-toner-out key, is displayed on the monochrome copy start key and the color copy start key. Such a UI component is displayed at the display position of the monochrome copy start key and the color copy start key, superposed on these keys.1. (canceled) 2: A display operation device, comprising: a display-integrated operation panel including a display screen; and processor circuitry; wherein said processor circuitry is configured and/or programmed to: display a first user interface (UI) component on said display screen; activate, responsive to a user operation of said first UI component displayed on said display screen, a process associated with said first UI component; and responsive to activation of said process, disable a user operation of said first UI component. 3: The display operation device according to claim 2, wherein said processor circuitry is configured and/or programmed to display, responsive to activation of said process, a second UI component on said display screen and thereby to disable a user operation of said first UI component. 4: The display operation device according to claim 3, wherein said second UI component is displayed replacing said first UI component. 5: The display operation device according to claim 3, wherein said second UI component is displayed superposed on said first UI component. 6: The display operation device according to claim 3, wherein said second UI component is displayed in a color different from said first UI component. 7: The display operation device according to claim 3, wherein said first UI component is a key that instructs a start of said process; and said second UI component is a key that instructs another process related to said process. 8: The display operation device according to claim 3, wherein said second UI component is a key that instructs cancelling or a stop of said process. 9: The display operation device according to claim 3, wherein if an event hindering said process is occurring when said second UI component is to be displayed, said second UI component includes a message related to said event hindering said process. 10: The display operation device according to claim 9, wherein said message relates to a cause of or a measure against said event hindering said process. 11: The display operation device according to claim 9, mounted on an image processing apparatus being capable of forming an image on a recording medium using toner and/or transmitting image data through a communication device, wherein said event hindering said process is a trouble related to any of said toner, said recording medium and a transmission destination. 12: A method of receiving a user operation using a display-integrated operation panel including a display screen, comprising the steps of: displaying a first UI component on said display screen; activating, responsive to a user operation of said first UI component displayed on said display screen at said step of displaying said first UI component, a process associated with said first UI component; and responsive to activation of said process, disabling a user operation of said first UI component. 
13: An image processing apparatus, comprising: the display operation device according to claim 2; an image forming device forming image data; and an image processing unit, connected to said display operation device and said image forming device, for processing the image data formed by said image forming device, based on an instruction from said display operation device.
2,600
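Editor's note: claims 2-6 above describe a concrete interaction pattern: once a process starts (or an event such as toner-out hinders it), a second UI component is drawn at the first component's display position, which disables operation of the first. A minimal Python sketch under assumed widget semantics, purely illustrative:

class Key:
    def __init__(self, label, x, y, color):
        self.label, self.x, self.y, self.color = label, x, y, color
        self.enabled = True

class Panel:
    def __init__(self):
        self.stack = []   # topmost component at a position receives taps

    def display(self, key):
        self.stack.append(key)

    def superpose(self, first, second):
        # Claim 5: the second component is displayed superposed on the
        # first, at the first component's display position; the first is
        # thereby disabled (claim 2).
        second.x, second.y = first.x, first.y
        first.enabled = False
        self.display(second)

panel = Panel()
start = Key("Color copy start", 10, 20, "green")
panel.display(start)
# Color toner runs out: show a differently colored toner-out key (claim 6)
# carrying a message about the hindering event (claim 9).
panel.superpose(start, Key("Color toner out: replace cartridge", 0, 0, "red"))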
10,673
10,673
15,856,803
2,636
A method comprising: receiving a multiplexed optical signal comprising optical channels for N wavelengths, N being a positive integer greater than or equal to two; separating the multiplexed optical signal into a first multiplexed light and a second multiplexed light, each of the first multiplexed light and the second multiplexed light having the same polarization, and each of the first multiplexed light and the second multiplexed light having N wavelengths; separating the first multiplexed light into a plurality of first lights, each having a different wavelength; separating the second multiplexed light into a plurality of second lights simultaneous with separating the first multiplexed light into the first lights, each of the second lights having a different wavelength; and generating optical parameters for each optical channel in the N wavelengths using the first light and the second light for each wavelength.
1. A method comprising: receiving a multiplexed optical signal comprising optical channels for N wavelengths, N being a positive integer greater than or equal to two; separating the multiplexed optical signal into a first multiplexed light and a second multiplexed light, each of the first multiplexed light and the second multiplexed light having the same polarization, and each of the first multiplexed light and the second multiplexed light having N wavelengths; separating the first multiplexed light into a plurality of first lights, each having a different wavelength; separating the second multiplexed light into a plurality of second lights simultaneous with separating the first multiplexed light into the first lights, each of the second lights having a different wavelength; and generating optical parameters for each optical channel in the N wavelengths using the first light and the second light for each wavelength. 2. The method of claim 1, wherein generating the optical parameters for each optical channel in the N wavelengths using the first light and the second light for each wavelength comprises: splitting the first light into a first light first part and a first light second part for each wavelength; splitting the second light into a second light first part and a second light second part for each wavelength; generating a first one of the optical parameters using the first light first part and the second light first part for each wavelength; passing the first light second part and the second light second part through a 90° optical hybrid; and generating a second one of the optical parameters and a third one of the optical parameters for each optical channel in the N wavelengths using outputs of the 90° optical hybrid. 3. The method of claim 2, wherein the second one of the optical parameters and the third one of the optical parameters for each optical channel in the N wavelengths is generated simultaneous with generating the first one of the optical parameters. 4. The method of claim 1, wherein generating the optical parameters for each optical channel in the N wavelengths using the first light and the second light for each wavelength comprises: splitting the first light into a first light first part, a first light second part, and a first light third part for each wavelength; splitting the second light into a second light first part, a second light second part, and a second light third part for each wavelength; generating a first one of the optical parameters using the first light first part and the second light first part for each wavelength; phase shifting the second light second part; combining the first light second part and the phase shifted second light second part; generating a second one of the optical parameters for each optical channel in the N wavelengths using the combination of the first light second part and the phase shifted second light second part; combining the first light third part and the second light third part; and generating a third one of the optical parameters for each optical channel in the N wavelengths using the combination of the first light third part and the second light third part. 5. The method of claim 4, wherein the second one of the optical parameters and the third one of the optical parameters for each optical channel in the N wavelengths is generated simultaneous with generating the first one of the optical parameters. 6. 
The method of claim 4, wherein phase shifting the second light second part comprises phase shifting the second light second part −90°. 7. The method of claim 1, wherein the optical parameters are Stokes parameters S1, S2, and S3. 8. An apparatus comprising: a polarization splitter and rotator (PSR) configured to: receive a multiplexed optical signal comprising optical channels for N wavelengths, N being a positive integer greater than or equal to two; and separate the multiplexed optical signal into a first multiplexed light and a second multiplexed light, each of the first multiplexed light and the second multiplexed light having the same polarization and each of the first multiplexed light and the second multiplexed light having N wavelengths; a first wavelength separator coupled to the PSR and configured to separate the first multiplexed light into the N separate first lights each having a different wavelength; a second wavelength separator coupled to the PSR in parallel with the first wavelength separator, the second wavelength separator configured to separate the second multiplexed light into the N separate second lights each having a different wavelength; and N optical parameter generators configured in parallel to the first wavelength separator and the second wavelength separator and configured to generate optical parameters for the optical channels for the N wavelengths. 9. The apparatus of claim 8, wherein the PSR, the first wavelength separator, the second wavelength separator, and the N optical parameter generators are all based on waveguides and integrated onto a single optical chip. 10. The apparatus of claim 8, wherein the PSR receives the multiplexed optical signal at a PSR input port, the N optical parameter generators each comprise at least one balanced photodetector (BPD) comprising a BPD input port for the first light at each wavelength and a BPD input port for the second light at each wavelength, and a difference in a first light travel time between a time it takes the first light to travel from the PSR input port to the BPD input port for the first light and a time it takes the second light to travel from the PSR input port to the BPD input port for the second light is no more than 2 picoseconds. 11. The apparatus of claim 8, wherein the N optical parameter generators each comprise: a first optical splitter coupled to the first wavelength separator and the second wavelength separator; a second optical splitter coupled to the first wavelength separator and the second wavelength separator; a first balanced photodetector (BPD) coupled to the first optical splitter and the second optical splitter; a 90° optical hybrid coupled to the first optical splitter and the second optical splitter; a second BPD coupled to the 90° optical hybrid; and a third BPD coupled to the 90° optical hybrid, and wherein the first BPD, the second BPD, and the third BPD are configured to simultaneously generate the optical parameters for the optical channels for the N wavelengths. 12. The apparatus of claim 11, wherein the optical parameters are Stokes parameters S1, S2, and S3. 13. 
The apparatus of claim 8, wherein the N optical parameter generators each comprise: a first optical splitter coupled to the first wavelength separator and the second wavelength separator; a second optical splitter coupled to the first wavelength separator and the second wavelength separator; a first balanced photodetector (BPD) coupled to the first optical splitter and the second optical splitter; a phase shifter coupled to the second optical splitter; a first optical coupler coupled to the first optical splitter and the phase shifter; a second optical coupler coupled to the first optical splitter and the second optical splitter; a second BPD coupled to the first optical coupler; and a third BPD coupled to the second optical coupler, and wherein the first BPD, the second BPD, and the third BPD are configured to simultaneously generate the optical parameters for the optical channels for the N wavelengths. 14. The apparatus of claim 13, wherein the optical parameters are Stokes parameters S1, S2, and S3. 15. An apparatus comprising: a polarization splitter and rotator (PSR) comprising a PSR input port configured to receive a multiplexed optical signal, the multiplexed optical signal comprising optical channels for N wavelengths, and N being a positive integer greater than or equal to two; a first wavelength separator coupled to the PSR; a second wavelength separator coupled to the PSR in parallel with the first wavelength separator; and a first optical parameter generator coupled to the first wavelength separator and the second wavelength separator and configured to generate optical parameters for a first wavelength, the first optical parameter generator comprising at least one balanced photodetector (BPD) comprising a BPD input port for a first light and a BPD input port for a second light, wherein a difference in a first light travel time between a time it takes the first light to travel from the PSR input port to the BPD input port for the first light and a time it takes the second light to travel from the PSR input port to the BPD input port for the second light is no more than 2 picoseconds. 16. The apparatus of claim 15, wherein the PSR, the first wavelength separator, the second wavelength separator, and the N optical parameter generators are all based on waveguides and integrated onto a single optical chip. 17. The apparatus of claim 15, wherein the first optical parameter generator comprises: a first optical splitter coupled to the first wavelength separator and the second wavelength separator; a second optical splitter coupled to the first wavelength separator and the second wavelength separator; a first BPD coupled to the first optical splitter and the second optical splitter; a 90° optical hybrid coupled to the first optical splitter and the second optical splitter; a second BPD coupled to the 90° optical hybrid; and a third BPD coupled to the 90° optical hybrid, and wherein the first BPD, the second BPD, and the third BPD are configured to simultaneously generate the optical parameters for the optical channel for the first wavelength. 18. 
The apparatus of claim 15, wherein the first optical parameter generator comprises: a first optical splitter coupled to the first wavelength separator and the second wavelength separator; a second optical splitter coupled to the first wavelength separator and the second wavelength separator; a first BPD coupled to the first optical splitter and the second optical splitter; a phase shifter coupled to the second optical splitter; a first optical coupler coupled to the first optical splitter and the phase shifter; a second optical coupler coupled to the first optical splitter and the second optical splitter; a second BPD coupled to the first optical coupler; and a third BPD coupled to the second optical coupler, and wherein the first BPD, the second BPD, and the third BPD are configured to simultaneously generate the optical parameters for the optical channels for the N wavelengths. 19. The apparatus of claim 15, wherein the first wavelength separator and the second wavelength separator each comprise an arrayed waveguide grating, a micro-ring resonator, or a Mach-Zehnder interferometer. 20. The apparatus of claim 15, wherein the optical parameters are Stokes parameters.
A method comprising: receiving a multiplexed optical signal comprising optical channels for N wavelengths, N being a positive integer greater than or equal to two; separating the multiplexed optical signal into a first multiplexed light and a second multiplexed light, each of the first multiplexed light and the second multiplexed light having the same polarization, and each of the first multiplexed light and the second multiplexed light having N wavelengths; separating the first multiplexed light into a plurality of first lights, each having a different wavelength; separating the second multiplexed light into a plurality of second lights simultaneous with separating the first multiplexed light into the first lights, each of the second lights having a different wavelength; and generating optical parameters for each optical channel in the N wavelengths using the first light and the second light for each wavelength.1. A method comprising: receiving a multiplexed optical signal comprising optical channels for N wavelengths, N being a positive integer greater than or equal to two; separating the multiplexed optical signal into a first multiplexed light and a second multiplexed light, each of the first multiplexed light and the second multiplexed light having the same polarization, and each of the first multiplexed light and the second multiplexed light having N wavelengths; separating the first multiplexed light into a plurality of first lights, each having a different wavelength; separating the second multiplexed light into a plurality of second lights simultaneous with separating the first multiplexed light into the first lights, each of the second lights having a different wavelength; and generating optical parameters for each optical channel in the N wavelengths using the first light and the second light for each wavelength. 2. The method of claim 1, wherein generating the optical parameters for each optical channel in the N wavelengths using the first light and the second light for each wavelength comprises: splitting the first light into a first light first part and a first light second part for each wavelength; splitting the second light into a second light first part and a second light second part for each wavelength; generating a first one of the optical parameters using the first light first part and the second light first part for each wavelength; passing the first light second part and the second light second part through a 90° optical hybrid; and generating a second one of the optical parameters and a third one of the optical parameters for each optical channel in the N wavelengths using outputs of the 90° optical hybrid. 3. The method of claim 2, wherein the second one of the optical parameters and the third one of the optical parameters for each optical channel in the N wavelengths is generated simultaneous with generating the first one of the optical parameters. 4. 
The method of claim 1, wherein generating the optical parameters for each optical channel in the N wavelengths using the first light and the second light for each wavelength comprises: splitting the first light into a first light first part, a first light second part, and a first light third part for each wavelength; splitting the second light into a second light first part, a second light second part, and a second light third part for each wavelength; generating a first one of the optical parameters using the first light first part and the second light first part for each wavelength; phase shifting the second light second part; combining the first light second part and the phase shifted second light second part; generating a second one of the optical parameters for each optical channel in the N wavelengths using the combination of the first light second part and the phase shifted second light second part; combining the first light third part and the second light third part; and generating a third one of the optical parameters for each optical channel in the N wavelengths using the combination of the first light third part and the second light third part. 5. The method of claim 4, wherein the second one of the optical parameters and the third one of the optical parameters for each optical channel in the N wavelengths is generated simultaneous with generating the first one of the optical parameters. 6. The method of claim 4, wherein phase shifting the second light second part comprises phase shifting the second light second part −90°. 7. The method of claim 1, wherein the optical parameters are Stokes parameters S1, S2, and S3. 8. An apparatus comprising: a polarization splitter and rotator (PSR) configured to: receive a multiplexed optical signal comprising optical channels for N wavelengths, N being a positive integer greater than or equal to two; and separate the multiplexed optical signal into a first multiplexed light and a second multiplexed light, each of the first multiplexed light and the second multiplexed light having the same polarization and each of the first multiplexed light and the second multiplexed light having N wavelengths; a first wavelength separator coupled to the PSR and configured to separate the first multiplexed light into the N separate first lights each having a different wavelength; a second wavelength separator coupled to the PSR in parallel with the first wavelength separator, the second wavelength separator configured to separate the second multiplexed light into the N separate second lights each having a different wavelength; and N optical parameter generators configured in parallel to the first wavelength separator and the second wavelength separator and configured to generate optical parameters for the optical channels for the N wavelengths. 9. The apparatus of claim 8, wherein the PSR, the first wavelength separator, the second wavelength separator, and the N optical parameter generators are all based on waveguides and integrated onto a single optical chip. 10. 
The apparatus of claim 8, wherein the PSR receives the multiplexed optical signal at a PSR input port, the N optical parameter generators each comprise at least one balanced photodetector (BPD) comprising a BPD input port for the first light at each wavelength and a BPD input port for the second light at each wavelength, and a difference in a first light travel time between a time it takes the first light to travel from the PSR input port to the BPD input port for the first light and a time it takes the second light to travel from the PSR input port to the BPD input port for the second light is no more than 2 picoseconds. 11. The apparatus of claim 8, wherein the N optical parameter generators each comprise: a first optical splitter coupled to the first wavelength separator and the second wavelength separator; a second optical splitter coupled to the first wavelength separator and the second wavelength separator; a first balanced photodetector (BPD) coupled to the first optical splitter and the second optical splitter; a 90° optical hybrid coupled to the first optical splitter and the second optical splitter; a second BPD coupled to the 90° optical hybrid; and a third BPD coupled to the 90° optical hybrid, and wherein the first BPD, the second BPD, and the third BPD are configured to simultaneously generate the optical parameters for the optical channels for the N wavelengths. 12. The apparatus of claim 11, wherein the optical parameters are Stokes parameters S1, S2, and S3. 13. The apparatus of claim 8, wherein the N optical parameter generators each comprise: a first optical splitter coupled to the first wavelength separator and the second wavelength separator; a second optical splitter coupled to the first wavelength separator and the second wavelength separator; a first balanced photodetector (BPD) coupled to the first optical splitter and the second optical splitter; a phase shifter coupled to the second optical splitter; a first optical coupler coupled to the first optical splitter and the phase shifter; a second optical coupler coupled to the first optical splitter and the second optical splitter; a second BPD coupled to the first optical coupler; and a third BPD coupled to the second optical coupler, and wherein the first BPD, the second BPD, and the third BPD are configured to simultaneously generate the optical parameters for the optical channels for the N wavelengths. 14. The apparatus of claim 13, wherein the optical parameters are Stokes parameters S1, S2, and S3. 15. 
An apparatus comprising: a polarization splitter and rotator (PSR) comprising a PSR input port configured to receive a multiplexed optical signal, the multiplexed optical signal comprising optical channels for N wavelengths, and N being a positive integer greater than or equal to two; a first wavelength separator coupled to the PSR; a second wavelength separator coupled to the PSR in parallel with the first wavelength separator; and a first optical parameter generator coupled to the first wavelength separator and the second wavelength separator and configured to generate optical parameters for a first wavelength, the first optical parameter generator comprising at least one balanced photodetector (BPD) comprising a BPD input port for a first light and a BPD input port for a second light, wherein a difference in a first light travel time between a time it takes the first light to travel from the PSR input port to the BPD input port for the first light and a time it takes the second light to travel from the PSR input port to the BPD input port for the second light is no more than 2 picoseconds. 16. The apparatus of claim 15, wherein the PSR, the first wavelength separator, the second wavelength separator, and the N optical parameter generators are all based on waveguides and integrated onto a single optical chip. 17. The apparatus of claim 15, wherein the first optical parameter generator comprises: a first optical splitter coupled to the first wavelength separator and the second wavelength separator; a second optical splitter coupled to the first wavelength separator and the second wavelength separator; a first BPD coupled to the first optical splitter and the second optical splitter; a 90° optical hybrid coupled to the first optical splitter and the second optical splitter; a second BPD coupled to the 90° optical hybrid; and a third BPD coupled to the 90° optical hybrid, and wherein the first BPD, the second BPD, and the third BPD are configured to simultaneously generate the optical parameters for the optical channel for the first wavelength. 18. The apparatus of claim 15, wherein the first optical parameter generator comprises: a first optical splitter coupled to the first wavelength separator and the second wavelength separator; a second optical splitter coupled to the first wavelength separator and the second wavelength separator; a first BPD coupled to the first optical splitter and the second optical splitter; a phase shifter coupled to the second optical splitter; a first optical coupler coupled to the first optical splitter and the phase shifter; a second optical coupler coupled to the first optical splitter and the second optical splitter; a second BPD coupled to the first optical coupler; and a third BPD coupled to the second optical coupler, and wherein the first BPD, the second BPD, and the third BPD are configured to simultaneously generate the optical parameters for the optical channels for the N wavelengths. 19. The apparatus of claim 15, wherein the first wavelength separator and the second wavelength separator each comprise an arrayed waveguide grating, a micro-ring resonator, or a Mach-Zehnder interferometer. 20. The apparatus of claim 15, wherein the optical parameters are Stokes parameters.
2,600
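Editor's note: the per-channel parameter generation in the claims above maps onto the standard Stokes relations for two same-polarization field amplitudes a and b of one wavelength channel: S1 = |a|^2 − |b|^2, S2 = 2 Re(a b*), S3 = ±2 Im(a b*). The Python sketch below models one claim-13-style generator (splitters, a −90° phase shifter, couplers, balanced photodetectors); the signs depend on coupler and BPD wiring conventions that the claims do not fix, so treat this as one plausible convention rather than the patent's definitive implementation.

import numpy as np

def stokes_from_channel(a: complex, b: complex):
    # Stokes parameters for one wavelength from the first light (a) and
    # second light (b) delivered by the two wavelength separators.
    s1 = abs(a) ** 2 - abs(b) ** 2                 # first BPD: direct detection
    # 3 dB coupler combining a and b, then balanced detection -> 2*Re(a*conj(b)):
    p, q = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
    s2 = abs(p) ** 2 - abs(q) ** 2
    # Claim 6: phase shift the second light by -90 degrees, combine, detect.
    # This yields -2*Im(a*conj(b)); the BPD wiring sets the overall sign.
    b_shift = b * np.exp(-1j * np.pi / 2)
    r, s = (a + b_shift) / np.sqrt(2), (a - b_shift) / np.sqrt(2)
    s3 = abs(r) ** 2 - abs(s) ** 2
    return s1, s2, s3

print(stokes_from_channel(1 + 0j, 1 + 0j))  # ~(0, 2, 0): 45-degree linear light
print(stokes_from_channel(1 + 0j, 0 + 1j))  # ~(0, 0, 2): circular light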
10,674
10,674
15,966,369
2,632
Provided is a frame configuration usable for both SISO transmission and MISO and/or MIMO transmission. A frame configurator of a transmission device configures a frame by gathering data for SISO and configures a frame by gathering data for MISO and/or MIMO, thereby improving the reception performance (detection performance) of a reception device.
1. A transmission device for transmitting OFDM symbols, the transmission device comprising: a transmission signal generator generating a transmission signal of a frame including multiple subframes containing OFDM symbols, and a transmitter transmitting the transmission signal generated by the transmission signal generator, wherein the subframe includes a subframe boundary symbol and a data symbol, the subframe boundary symbol is the leading OFDM symbol of the subframe and/or the trailing OFDM symbol of the subframe, and the number of pilots in the subframe boundary symbol is more than that in the data symbol, and whether or not the subframe boundary symbol is provided in each subframe is selected independently for each subframe. 2. A transmission method for transmitting OFDM symbols, the transmission method comprising: a transmission signal generating step of generating a transmission signal of a frame including multiple subframes containing OFDM symbols, and a transmission step of transmitting the transmission signal generated by the transmission signal generating step, wherein the subframe includes a subframe boundary symbol and a data symbol, the subframe boundary symbol is the leading OFDM symbol of the subframe and/or the trailing OFDM symbol of the subframe, and the number of pilots in the subframe boundary symbol is more than that in the data symbol, and whether or not the subframe boundary symbol is provided in each subframe is selected independently for each subframe. 3. A reception device comprising: a receiver receiving, from a transmission device for transmitting OFDM symbols, a signal of a frame including multiple subframes containing OFDM symbols, each of the subframes including a subframe boundary symbol and a data symbol, the subframe boundary symbol being the leading OFDM symbol of the subframe and/or the trailing OFDM symbol of the subframe, the number of pilots in the subframe boundary symbol being more than that in the data symbol, and whether or not the subframe boundary symbol is provided in each subframe being selected independently for each subframe, and a demodulator decoding the OFDM symbols contained in each subframe of the signal received by the receiver by using the subframe boundary symbol in each subframe. 4. A reception method for use by a reception device, comprising: a receiving step of receiving, from a transmission device for transmitting OFDM symbols, a signal of a frame including multiple subframes containing OFDM symbols, the subframe including a subframe boundary symbol and a data symbol, the subframe boundary symbol being the leading OFDM symbol of the subframe and/or the trailing OFDM symbol of the subframe, the number of pilots in the subframe boundary symbol being more than that in the data symbol, and whether or not the subframe boundary symbol is provided in each subframe being selected independently for each subframe, and a demodulating step of decoding OFDM symbols contained in the signal received in the receiving step by using the subframe boundary symbol in each subframe.
Provided is a frame configuration usable for both SISO transmission and MISO and/or MIMO transmission. A frame configurator of a transmission device configures a frame by gathering data for SISO and configures a frame by gathering data for MISO and/or MIMO, thereby improving the reception performance (detection performance) of a reception device.1. A transmission device for transmitting OFDM symbols, the transmission device comprising: a transmission signal generator generating a transmission signal of a frame including multiple subframes containing OFDM symbols, and a transmitter transmitting the transmission signal generated by the transmission signal generator, wherein the subframe includes a subframe boundary symbol and a data symbol, the subframe boundary symbol is the leading OFDM symbol of the subframe and/or the trailing OFDM symbol of the subframe, and the number of pilots in the subframe boundary symbol is more than that in the data symbol, and whether or not the subframe boundary symbol is provided in each subframe is selected independently for each subframe. 2. A transmission method for transmitting OFDM symbols, the transmission method comprising: a transmission signal generating step of generating a transmission signal of a frame including multiple subframes containing OFDM symbols, and a transmission step of transmitting the transmission signal generated by the transmission signal generating step, wherein the subframe includes a subframe boundary symbol and a data symbol, the subframe boundary symbol is the leading OFDM symbol of the subframe and/or the trailing OFDM symbol of the subframe, and the number of pilots in the subframe boundary symbol is more than that in the data symbol, and whether or not the subframe boundary symbol is provided in each subframe is selected independently for each subframe. 3. A reception device comprising: a receiver receiving, from a transmission device for transmitting OFDM symbols, a signal of a frame including multiple subframes containing OFDM symbols, each of the subframes including a subframe boundary symbol and a data symbol, the subframe boundary symbol being the leading OFDM symbol of the subframe and/or the trailing OFDM symbol of the subframe, the number of pilots in the subframe boundary symbol being more than that in the data symbol, and whether or not the subframe boundary symbol is provided in each subframe being selected independently for each subframe, and a demodulator decoding the OFDM symbols contained in each subframe of the signal received by the receiver by using the subframe boundary symbol in each subframe. 4. A reception method for use by a reception device, comprising: a receiving step of receiving, from a transmission device for transmitting OFDM symbols, a signal of a frame including multiple subframes containing OFDM symbols, the subframe including a subframe boundary symbol and a data symbol, the subframe boundary symbol being the leading OFDM symbol of the subframe and/or the trailing OFDM symbol of the subframe, the number of pilots in the subframe boundary symbol being more than that in the data symbol, and whether or not the subframe boundary symbol is provided in each subframe being selected independently for each subframe, and a demodulating step of decoding OFDM symbols contained in the signal received in the receiving step by using the subframe boundary symbol in each subframe.
2,600
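Editor's note: a small numpy sketch of the frame structure recited in the claims above, in which a subframe may carry a pilot-dense boundary symbol as its leading and/or trailing OFDM symbol. The pilot spacings used here are invented for illustration; the claims only require that the boundary symbol contain more pilots than a data symbol and that its presence be selectable independently per subframe.

import numpy as np

def pilot_mask(n_carriers, spacing):
    # Boolean mask of pilot subcarrier positions within one OFDM symbol.
    mask = np.zeros(n_carriers, dtype=bool)
    mask[::spacing] = True
    return mask

def build_subframe(n_symbols, n_carriers, head_boundary, tail_boundary):
    # head_boundary / tail_boundary model the independent per-subframe
    # selection of whether a boundary symbol is provided (claim 1).
    data = pilot_mask(n_carriers, spacing=12)     # sparse pilots (assumed)
    boundary = pilot_mask(n_carriers, spacing=3)  # denser pilots (assumed)
    symbols = [data.copy() for _ in range(n_symbols)]
    if head_boundary:
        symbols[0] = boundary.copy()              # leading OFDM symbol
    if tail_boundary:
        symbols[-1] = boundary.copy()             # trailing OFDM symbol
    return np.stack(symbols)

sf = build_subframe(8, 48, head_boundary=True, tail_boundary=False)
print(sf.sum(axis=1))  # pilots per symbol: the boundary symbol has more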
10,675
10,675
15,455,890
2,651
Embodiments of the present disclosure provide systems and methods for perspective shifting in a video conferencing session. In one exemplary method, a video stream may be generated. A foreground element may be identified in a frame of the video stream and distinguished from a background element of the frame. Data may be received representing a viewing condition at a terminal that will display the generated video stream. The frame of the video stream may be modified based on the received data to shift the foreground element relative to the background element. The modified video stream may be displayed at the displaying terminal.
1. A method comprising: generating, by a first terminal, a video stream; distinguishing a foreground element in a frame of the video stream from a background element of the frame; responsive to information representing a viewing condition at a second terminal, modifying the frame of the video stream by shifting the foreground element relative to the background element; and transmitting the modified video frame to the second terminal. 2. The method of claim 1, wherein the modifying the frame of the video stream by shifting the foreground element relative to the background element is further responsive to depth information representing a relative depth between the foreground element and the background element in the frame of the video stream. 3. The method of claim 1, wherein the information representing the viewing condition at the second terminal comprises an indication of a change in relative positions of the second terminal and an operator of the second terminal. 4. The method of claim 1, wherein the information representing the viewing condition at the second terminal comprises a relative rotation of the second terminal with respect to an operator of the second terminal. 5. The method of claim 1, wherein: the information representing the viewing condition at the second terminal comprises at least one of: a side-to-side movement of the second terminal with respect to an operator of the second terminal and a rotation of the second terminal about a longitudinal axis of the second terminal; and the shift of the foreground element relative to the background element comprises a horizontal shift of the foreground element relative to the background element. 6. The method of claim 1, wherein the information representing the viewing condition at the second terminal comprises an input to a touch-sensitive display of the second terminal. 7. The method of claim 1, further comprising deriving new content for a background element of the frame revealed by the shifting from content of other frame(s) of the video stream. 8. The method of claim 7, wherein: the video stream is captured by two or more cameras; and the portion of the background element without image content is filled in, at least in part, with image content captured from the two or more cameras. 9. The method of claim 7, further comprising: estimating an amount of background content that can be derived for the frame based on identified background content for other frame(s) of the video stream; and determining a distance of the shifting based on the estimated amount of background content that can be derived. 10. The method of claim 2, wherein the depth information is determined based at least in part on data from a depth sensor associated with the first terminal. 11. The method of claim 2, wherein the depth information is determined from analysis of the video stream. 12. The method of claim 1, wherein the video stream is captured in part by a camera. 13. The method of claim 1, wherein the video stream includes synthetic video data at least in part. 14. A method comprising: receiving, by a terminal, a video stream; distinguishing a foreground element in a frame of the video stream from a background element of the frame; responsive to information representing a viewing condition at the terminal, modifying the frame of the video stream by shifting the foreground element relative to the background element; and displaying the modified video frame. 15. 
The method of claim 14, wherein the viewing condition information is entered at a motion sensor of the terminal. 16. The method of claim 14, wherein the viewing condition information is entered at a camera of the terminal. 17. The method of claim 14, wherein the viewing condition information is entered at a touch screen of the terminal. 18. The method of claim 14, wherein the video stream is received as coded video data and the method comprises decoding by the terminal the coded video data. 19. The method of claim 18, wherein the distinguishing is performed, at least in part, with reference to metadata provided with the coded video data identifying depth of content elements of the video stream. 20. An apparatus comprising: an image source generating a video sequence; an image analyzer, responsive to depth indicators associated with the video sequence, to parse frames of the video sequence into foreground and background content elements; a compositor, responsive to data indicating a viewing condition at a display terminal, to shift a foreground element of a frame with respect to a background element of the same frame; and a transmitter to transmit the shifted frame to the display terminal. 21. The apparatus of claim 20, wherein the compositor derives new content for a background element of the frame revealed by the shifting from content of other frame(s) of the video stream. 22. The apparatus of claim 20, wherein the compositor: estimates an amount of background content that can be derived for the frame based on identified background content for other frame(s) of the video stream; and determines a distance of the shifting based on the estimated amount of background content that can be derived. 23. The apparatus of claim 20, further comprising a depth sensor, wherein the depth information is determined based at least in part on data from the depth sensor. 24. The apparatus of claim 20, wherein the depth information is determined from analysis performed by the image analyzer. 25. The apparatus of claim 20, wherein the image source includes a camera. 26. The apparatus of claim 20, wherein the image source includes an application executing on a computer. 27. A computer-readable medium having instructions that, when executed by a processor, effectuate operations comprising: generating, by a first terminal, a video stream; distinguishing a foreground element in a frame of the video stream from a background element of the frame; receiving an operator input from a second terminal; responsive to information representing a viewing condition at a second terminal, modifying the frame of the video stream by shifting the foreground element relative to the background element; and transmitting the modified video frame to the second terminal. 28. An apparatus comprising: a receiver to receive a video stream from a network; an image analyzer, responsive to depth indicators associated with the video stream, to parse frames of the video stream into foreground and background content elements; a compositor, responsive to data indicating a viewing condition at a display, to shift a foreground element of a frame with respect to a background element of the same frame; and the display displaying video data output by the compositor. 29. The apparatus of claim 28, further comprising a position sensor associated with the display, wherein the viewing condition information is entered at the position sensor. 30. 
The apparatus of claim 28, further comprising a camera associated with the display, wherein the viewing condition information is entered at the camera. 31. The apparatus of claim 28, wherein the display is a touch-screen display, wherein the viewing condition information is entered at the touch screen.
Embodiments of the present disclosure provide systems and methods for perspective shifting in a video conferencing session. In one exemplary method, a video stream may be generated. A foreground element may be identified in a frame of the video stream and distinguished from a background element of the frame. Data may be received representing a viewing condition at a terminal that will display the generated video stream. The frame of the video stream may be modified based on the received data to shift the foreground element relative to the background element. The modified video stream may be displayed at the displaying terminal.1. A method comprising: generating, by a first terminal, a video stream; distinguishing a foreground element in a frame of the video stream from a background element of the frame; responsive to information representing a viewing condition at a second terminal, modifying the frame of the video stream by shifting the foreground element relative to the background element; and transmitting the modified video frame to the second terminal. 2. The method of claim 1, wherein the modifying the frame of the video stream by shifting the foreground element relative to the background element is further responsive to depth information representing a relative depth between the foreground element and the background element in the frame of the video stream. 3. The method of claim 1, wherein the information representing the viewing condition at the second terminal comprises an indication of a change in relative positions of the second terminal and an operator of the second terminal. 4. The method of claim 1, wherein the information representing the viewing condition at the second terminal comprises a relative rotation of the second terminal with respect to an operator of the second terminal. 5. The method of claim 1, wherein: the information representing the viewing condition at the second terminal comprises at least one of: a side-to-side movement of the second terminal with respect to an operator of the second terminal and a rotation of the second terminal about a longitudinal axis of the second terminal; and the shift of the foreground element relative to the background element comprises a horizontal shift of the foreground element relative to the background element. 6. The method of claim 1, wherein the information representing the viewing condition at the second terminal comprises an input to a touch-sensitive display of the second terminal. 7. The method of claim 1, further comprising deriving new content for a background element of the frame revealed by the shifting from content of other frame(s) of the video stream. 8. The method of claim 7, wherein: the video stream is captured by two or more cameras; and the portion of the background element without image content is filled in, at least in part, with image content captured from the two or more cameras. 9. The method of claim 7, further comprising: estimating an amount of background content that can be derived for the frame based on identified background content for other frame(s) of the video stream; and determining a distance of the shifting based on the estimated amount of background content that can be derived. 10. The method of claim 2, wherein the depth information is determined based at least in part on data from a depth sensor associated with the first terminal. 11. The method of claim 2, wherein the depth information is determined from analysis of the video stream. 12. 
The method of claim 1, wherein the video stream is captured in part by a camera. 13. The method of claim 1, wherein the video stream includes synthetic video data at least in part. 14. A method comprising: receiving, by a terminal, a video stream; distinguishing a foreground element in a frame of the video stream from a background element of the frame; responsive to information representing a viewing condition at the terminal, modifying the frame of the video stream by shifting the foreground element relative to the background element; and displaying the modified video frame. 15. The method of claim 14, wherein the viewing condition information is entered at a motion sensor of the terminal. 16. The method of claim 14, wherein the viewing condition information is entered at a camera of the terminal. 17. The method of claim 14, wherein the viewing condition information is entered at a touch screen of the terminal. 18. The method of claim 14, wherein the video stream is received as coded video data and the method comprises decoding by the terminal the coded video data. 19. The method of claim 18, wherein the distinguishing is performed, at least in part, with reference to metadata provided with the coded video data identifying depth of content elements of the video stream. 20. An apparatus comprising: an image source generating a video sequence; an image analyzer, responsive to depth indicators associated with the video sequence, to parse frames of the video sequence into foreground and background content elements; a compositor, responsive to data indicating a viewing condition at a display terminal, to shift a foreground element of a frame with respect to a background element of the same frame; and a transmitter to transmit the shifted frame to the display terminal. 21. The apparatus of claim 20, wherein the compositor derives new content for a background element of the frame revealed by the shifting from content of other frame(s) of the video stream. 22. The apparatus of claim 20, wherein the compositor: estimates an amount of background content that can be derived for the frame based on identified background content for other frame(s) of the video stream; and determines a distance of the shifting based on the estimated amount of background content that can be derived. 23. The apparatus of claim 20, further comprising a depth sensor, wherein the depth information is determined based at least in part on data from the depth sensor. 24. The apparatus of claim 20, wherein the depth information is determined from analysis performed by the image analyzer. 25. The apparatus of claim 20, wherein the image source includes a camera. 26. The apparatus of claim 20, wherein the image source includes an application executing on a computer. 27. A computer-readable medium having instructions that, when executed by a processor, effectuate operations comprising: generating, by a first terminal, a video stream; distinguishing a foreground element in a frame of the video stream from a background element of the frame; receiving an operator input from a second terminal; responsive to information representing a viewing condition at a second terminal, modifying the frame of the video stream by shifting the foreground element relative to the background element; and transmitting the modified video frame to the second terminal. 28. 
An apparatus comprising: a receiver to receive a video stream from a network; an image analyzer, responsive to depth indicators associated with the video stream, to parse frames of the video stream into foreground and background content elements; a compositor, responsive to data indicating a viewing condition at a display, to shift a foreground element of a frame with respect to a background element of the same frame; and the display displaying video data output by the compositor. 29. The apparatus of claim 28, further comprising a position sensor associated with the display, wherein the viewing condition information is entered at the position sensor. 30. The apparatus of claim 28, further comprising a camera associated with the display, wherein the viewing condition information is entered at the camera. 31. The apparatus of claim 28, wherein the display is a touch-screen display, wherein the viewing condition information is entered at the touch screen.
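The parallax effect described in this record (claims 1, 5, and 20) amounts to translating a foreground layer relative to its background by a distance derived from the viewing condition. The sketch below is illustrative only, not the claimed implementation; the pre-separated layer and mask inputs, the [-1, 1] tilt signal, and the max_shift_px parameter are all assumptions, and the inpainting of newly revealed background (claims 7-9) is deliberately omitted.

# Illustrative sketch only -- not the patented implementation. Assumes a frame
# already separated into foreground/background layers plus a binary mask, and
# a viewing-condition signal normalized to [-1, 1] (e.g., from a motion sensor).
import numpy as np

def shift_foreground(foreground, background, mask, tilt, max_shift_px=24):
    """Shift the foreground layer horizontally relative to the background.

    foreground, background: HxWx3 uint8 arrays of the same size
    mask: HxW bool array, True where the foreground element is
    tilt: float in [-1, 1] representing the side-to-side viewing condition
    """
    dx = int(round(tilt * max_shift_px))          # shift distance from viewing condition
    shifted_fg = np.roll(foreground, dx, axis=1)  # horizontal shift of the layer
    shifted_mask = np.roll(mask, dx, axis=1)      # wrap-around at the border is ignored here
    out = background.copy()                       # revealed background stays as-is; a real
                                                  # system would derive it from other frames
    out[shifted_mask] = shifted_fg[shifted_mask]  # composite shifted foreground over background
    return out

Bounding max_shift_px by how much background content can actually be recovered from other frames would mirror the shift-distance limit that claims 9 and 22 describe.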
2,600
10,676
10,676
15,289,082
2,626
This application is directed to detecting touch events on a touch sensing surface coupled to a capacitive sense array and one or more force sensors. The capacitive sense array includes a plurality of sense electrodes configured to provide a plurality of capacitive sense signals. The force sensors are configured to provide one or more force signals. In accordance with the plurality of capacitive sense signals, one or more candidate touches are determined on the touch sensing surface, and used to determine an expected force shape. An actual force shape caused on the touch sensing surface by the candidate touches is determined from the one or more force signals. The actual and expected force shapes are compared to determine magnitudes of force associated with each of the one or more touch candidates, thereby determining whether each of the one or more candidate touches is a valid touch.
1. A method of detecting touch events on a touch sensing surface coupled to a capacitive sense array, comprising: at a processing device coupled to a capacitive sense array and one or more force electrodes, wherein the capacitive sense array includes a plurality of sense electrodes: obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals, determining one or more candidate touches on the touch sensing surface; determining an expected force shape based on the one or more candidate touches; obtaining one or more force signals from the one or more force electrodes; in accordance with the one or more force signals, determining an actual force shape caused on the touch sensing surface by the one or more candidate touches; and comparing the actual force shape and the expected force shape to determine a magnitude of force associated with each of the one or more touch candidates, including determining whether each of the one or more candidate touches is a valid touch. 2. The method of claim 1, wherein the expected force shape includes an array of expected values associated with force applied on the one or more force electrodes. 3. The method of claim 1, wherein: the expected force shape includes an expected force shape matrix that combines a plurality of computed force shape matrices based on a plurality of force weight factors; each of the plurality of computed force shape matrices represents one of the one or more candidate touches; each of the plurality of computed force shape matrices is associated with one of the plurality of force weight factors; and each force weight factor represents the magnitude of force associated with one of the one or more candidate touches. 4. The method of claim 3, wherein comparing the actual force shape and the expected force shape further comprises: calculating an error vector from the actual force shape and the expected force shape; and adjusting the plurality of force weight factors to minimize the magnitude of the error vector or to reduce the magnitude of the error vector below an error threshold. 5. The method of claim 3, wherein each of the plurality of computed force shape matrices is computed based on a calibration table, and the calibration table includes a plurality of calibrated force shape matrices, each representing one of a plurality of predetermined calibration sampled points that are distributed on the touch sensing surface. 6. The method of claim 3, wherein each of the plurality of computed force shape matrices combines a subset of the calibrated force shape matrices corresponding to a subset of predetermined calibration sampled points that surround a respective touch candidate. 7. The method of claim 6, wherein the subset of the calibrated force shape matrices are combined based on distances between the first touch and the predetermined calibration sampled points. 8. The method of claim 3, wherein a first candidate touch matches a first predetermined calibration sampled point, and a computed force shape matrix corresponding to the first candidate touch is represented by a calibrated force shape matrix corresponding to the first predetermined calibration sampled point. 9. The method of claim 3, wherein the plurality of predetermined calibration sampled points has a resolution that is distinct from that of the one or more force electrodes. 10. 
The method of claim 1, wherein determining whether each of the one or more touches is a valid touch further comprises: identifying at least one of the one or more candidate touches that is impacted by an unwanted factor other than a touch event, and determining that at least one of the one or more candidate touches is not a valid touch, wherein the unwanted factor includes at least one of: a water drop disposed on the touch sensing surface, common mode noise, and hovering of a stylus or a conductive passive object. 11. The method of claim 1, wherein the one or more candidate touches include a first touch, and determining whether each of the one or more touches is a valid touch further comprises: in accordance with the comparison between the force shape and the capacitive touch profile, determining that the first touch is not a valid touch. 12. The method of claim 11, wherein the first touch is caused by a stylus or a conductive passive object that hovers above an area of the touch sensing surface corresponding to the first touch. 13. The method of claim 1, wherein determining the force shape further comprises determining, according to the force shape, that a second number of touches are applied on the touch sensing surface, and determining one or more candidate touches on the touch sensing surface further includes determining that one or more candidate touches associated with the capacitive touch profile include a first number of candidate touches, wherein the first number is distinct from the second number. 14. The method of claim 1, wherein the one or more force electrodes are disposed below the capacitive sense array and separated from the capacitive sense array. 15. A processing device, comprising: a processing core; a capacitance sense circuit; and memory storing one or more programs configured for execution by the processing core, the one or more programs comprising instructions for: obtaining a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals, determining one or more candidate touches on the touch sensing surface; determining an expected force shape based on the one or more candidate touches; obtaining one or more force signals from the one or more force electrodes; in accordance with the one or more force signals, determining an actual force shape caused on the touch sensing surface by the one or more candidate touches; and comparing the actual force shape and the expected force shape to determine a magnitude of force associated with each of the one or more touch candidates, including determining whether each of the one or more candidate touches is a valid touch. 16. The processing device of claim 15, wherein the expected force shape includes an array of expected values associated with force applied on the one or more force electrodes. 17. The processing device of claim 15, wherein: the expected force shape includes an expected force shape matrix that combines a plurality of computed force shape matrices based on a plurality of force weight factors; each of the plurality of computed force shape matrices represents one of the one or more candidate touches; each of the plurality of computed force shape matrices is associated with one of the plurality of force weight factors; and each force weight factor represents the magnitude of force associated with one of the one or more candidate touches. 18. 
An electronic system, comprising: a capacitive sense array coupled to a touch sensing surface; one or more force electrodes; and a processing device coupled to the capacitive sense array and the one or more force electrodes, wherein the processing device is configured to: obtain a plurality of capacitive sense signals from the plurality of sense electrodes of the capacitive sense array; in accordance with the plurality of capacitive sense signals, determine one or more candidate touches on the touch sensing surface; determine an expected force shape based on the one or more candidate touches; obtain one or more force signals from the one or more force electrodes; in accordance with the one or more force signals, determine an actual force shape caused on the touch sensing surface by the one or more candidate touches; and compare the actual force shape and the expected force shape to determine a magnitude of force associated with each of the one or more touch candidates, including determining whether each of the one or more candidate touches is a valid touch. 19. The electronic system of claim 18, wherein the processing device is configured to determine whether each of the one or more touches is a valid touch by: identifying at least one of the one or more candidate touches that is impacted by an unwanted factor other than a touch event, and determining that at least one of the one or more candidate touches is not a valid touch, wherein the unwanted factor includes at least one of: a water drop disposed on the touch sensing surface, common mode noise, and hovering of a stylus or a conductive passive object. 20. The electronic system of claim 18, wherein the processing device is configured to determine the force shape by: determining, according to the force shape, that a second number of touches are applied on the touch sensing surface, and determining one or more candidate touches on the touch sensing surface further includes determining that one or more candidate touches associated with the capacitive touch profile include a first number of candidate touches, wherein the first number is distinct from the second number.
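Claims 3 and 4 describe combining per-candidate computed force shape matrices with weight factors and adjusting those weights to minimize an error vector against the actual force shape. That is an ordinary linear least-squares problem, so a minimal sketch follows; the matrix shapes, the non-negativity clamp, and the force_threshold used to reject invalid candidates (hovering stylus, water drop) are assumptions, not details from the application.

# Illustrative sketch only, not the claimed circuit. Assumes per-candidate
# computed force shape matrices (e.g., interpolated from a calibration table)
# and an actual force shape measured at the force electrodes.
import numpy as np

def estimate_force_weights(computed_shapes, actual_shape, force_threshold=0.05):
    """Solve for the force weight factors that best explain the actual shape.

    computed_shapes: list of k (m x n) arrays, one per candidate touch
    actual_shape:    (m x n) array derived from the force electrode signals
    Returns (weights, valid): valid[i] is False for candidates whose fitted
    force magnitude falls below force_threshold (hypothetical cutoff).
    """
    A = np.stack([s.ravel() for s in computed_shapes], axis=1)  # (m*n, k) design matrix
    b = actual_shape.ravel()                                    # (m*n,) observed shape
    # Least squares minimizes the magnitude of the error vector b - A @ w,
    # matching the "adjust weights to minimize the error vector" step of claim 4.
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    w = np.clip(w, 0.0, None)        # a touch cannot apply negative force
    valid = w >= force_threshold     # candidates with negligible force are not valid touches
    return w, valid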
2,600
10,677
10,677
15,670,999
2,683
Systems and methods for reporting account and sensor configuration information to a central system are disclosed. A security, monitoring, and automation (SMA) system may operate one or more sensors according to sensor configuration information. The sensor configuration information may be used by a remote monitoring system to send notification messages based on sensor events from the SMA system. The sensor configuration information may be transmitted to the remote monitoring system via another remote computing system. The computing system may format the sensor configuration information to a format compatible with the remote monitoring system and transmit the formatted sensor configuration information to the remote monitoring system.
1. A method comprising: receiving, by a computing system and from a controller of a security, monitoring, and automation (SMA) system comprising a plurality of sensors, sensor configuration information associated with one or more of the plurality of sensors, wherein the SMA system and the controller are located at a premises, and wherein the computing system is located external to the premises; determining, based on the sensor configuration information, a remote monitoring system from a plurality of remote monitoring systems, wherein the plurality of remote monitoring systems are located remote from the computing system and external to the premises; generating, based on the sensor configuration information, formatted sensor configuration information, wherein the formatted sensor configuration information is in a format compatible with the determined remote monitoring system of the plurality of remote monitoring systems; and transmitting, to the determined remote monitoring system of the plurality of remote monitoring systems, the formatted sensor configuration information. 2. The method of claim 1, further comprising: storing, based on an account identifier associated with a user of the SMA system, the sensor configuration information. 3. The method of claim 2, further comprising: generating, based on the account identifier, a formatted account identifier, wherein the formatted account identifier is in a format compatible with the determined remote monitoring system; and transmitting, to the determined remote monitoring system and in association with the transmitting the formatted sensor configuration information, the formatted account identifier. 4. The method of claim 1, wherein the transmitting the formatted sensor configuration information is via a web service. 5. The method of claim 1, wherein the sensor configuration information comprises one or more sensor identifiers and one or more zones associated with the one or more sensor identifiers. 6. The method of claim 1, further comprising: receiving account configuration information associated with the SMA system; generating, based on the account configuration information, formatted account configuration information, wherein the formatted account configuration information is in a format compatible with the determined remote monitoring system; and transmitting, to the determined remote monitoring system, the formatted account configuration information. 7. The method of claim 6, wherein the account configuration information comprises an account identifier, one or more account configuration parameters, and an SMA controller identifier associated with the account identifier. 8. The method of claim 7, wherein the one or more account configuration parameters comprise at least one of account passwords, account secret words, and emergency contact information. 9. The method of claim 7, further comprising: storing the account configuration information, wherein the storing the account configuration information comprises: if the account identifier of the account configuration information is not previously stored, creating a new record associated with the account identifier; and if the account identifier of the account configuration information is previously stored, modifying a previously stored record associated with the account identifier. 10. 
The method of claim 9, further comprising: if the account configuration information is for an account identifier not previously stored, modifying the account configuration information to comprise an account creation parameter usable by the determined remote monitoring system. 11. A system comprising: a controller of a security, monitoring, and automation (SMA) system comprising a plurality of sensors, wherein the controller is configured to transmit sensor configuration information associated with one or more of the plurality of sensors, and wherein the SMA system and the controller are located at a premises; and a computing system located external to the premises and configured to: receive, from the controller, the sensor configuration information; determine, based on the sensor configuration information, a remote monitoring system from a plurality of remote monitoring systems, wherein the plurality of remote monitoring systems are located remote from the computing system and external to the premises; generate, based on the sensor configuration information, formatted sensor configuration information, wherein the formatted sensor configuration information is in a format compatible with the determined remote monitoring system of the plurality of remote monitoring systems; and transmit, to the determined remote monitoring system of the plurality of remote monitoring systems, the formatted sensor configuration information. 12. The system of claim 11, wherein the computing system is further configured to: generate, based on an account identifier, a formatted account identifier associated with a user of the SMA system, wherein the formatted account identifier is in a format compatible with the determined remote monitoring system; and transmit, to the determined remote monitoring system and in association with the transmitting the formatted sensor configuration information, the formatted account identifier. 13. The system of claim 11, wherein the sensor configuration information comprises: a plurality of sensor identifiers each associated with a corresponding sensor of the plurality of sensors of the SMA system; and zone data indicating an association of a zone of a plurality of zones with a corresponding sensor identifier of the plurality of sensor identifiers. 14. The system of claim 11, wherein the computing system is further configured to: receive account configuration information; generate, based on the account configuration information, formatted account configuration information, wherein the formatted account configuration information is in a format compatible with the determined remote monitoring system; and transmit, to the determined remote monitoring system, the formatted account configuration information. 15. The system of claim 14, wherein the account configuration information comprises an account identifier, one or more account configuration parameters, and an SMA controller identifier associated with the account identifier, wherein the one or more account configuration parameters comprise at least one of account passwords, account secret words, and emergency contact information. 16. 
The system of claim 15, wherein the computing system is further configured to: store the account configuration information, wherein the storing the account configuration information comprises: if the account identifier of the account configuration information is not previously stored, create a new record associated with the account identifier; and if the account identifier of the account configuration information is previously stored, modify a previously stored record associated with the account identifier. 17. An apparatus comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the apparatus to: receive, from a controller of a security, monitoring, and automation (SMA) system comprising a plurality of sensors, sensor configuration information associated with one or more of the plurality of sensors, wherein the SMA system and the controller are located at a premises; determine, based on the sensor configuration information, a remote monitoring system from a plurality of remote monitoring systems, wherein the plurality of remote monitoring systems are located external to the premises; generate, based on the sensor configuration information, formatted sensor configuration information, wherein the formatted sensor configuration information is in a format compatible with the determined remote monitoring system of the plurality of remote monitoring systems; and transmit, to the determined remote monitoring system of the plurality of remote monitoring systems, the formatted sensor configuration information. 18. The apparatus of claim 17, wherein the instructions, when executed by the one or more processors, further cause the apparatus to: generate, based on an account identifier, a formatted account identifier, wherein the formatted account identifier is in a format compatible with the determined remote monitoring system, and wherein the account identifier is associated with a user of the SMA system; and transmit, to the determined remote monitoring system and in association with the transmitting the formatted sensor configuration information, the formatted account identifier. 19. The apparatus of claim 18, wherein the instructions, when executed by the one or more processors, further cause the apparatus to: receive account configuration information; generate, based on the account configuration information, formatted account configuration information, wherein the formatted account configuration information is in a format compatible with the determined remote monitoring system; and transmit, to the determined remote monitoring system, the formatted account configuration information. 20. The apparatus of claim 19, wherein the instructions, when executed by the one or more processors, further cause the apparatus to: store the account configuration information, wherein the storing the account configuration information comprises: if the account identifier of the account configuration information is not previously stored, create a new record associated with the account identifier; and if the account identifier of the account configuration information is previously stored, modify a previously stored record associated with the account identifier.
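Claims 1 and 11 reduce to three steps: pick a target monitoring system, translate the controller's sensor records into that system's format, and send. The sketch below shows only the determination and translation steps; every field name (monitoring_system, sensor_id, zone) and both target formats (cms_a, cms_b) are hypothetical, since the claims do not specify any wire format.

# Minimal sketch of the format-translation step, with invented field names.
FORMATTERS = {
    # one formatter per supported remote monitoring system (both hypothetical)
    "cms_a": lambda s: {"zone": s["zone"], "sensorId": s["sensor_id"]},
    "cms_b": lambda s: {"z": s["zone"], "id": s["sensor_id"].upper()},
}

def route_and_format(sensor_config, account_id):
    """Determine a remote monitoring system, then reformat the sensor records."""
    target = sensor_config.get("monitoring_system", "cms_a")  # determination step
    fmt = FORMATTERS[target]
    payload = {
        "account": account_id,                                # formatted account identifier
        "sensors": [fmt(s) for s in sensor_config["sensors"]],
    }
    return target, payload

# Usage:
# target, payload = route_and_format(
#     {"monitoring_system": "cms_b",
#      "sensors": [{"sensor_id": "d1", "zone": 3}]},
#     account_id="acct-42")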
2,600
10,678
10,678
14,166,197
2,674
An embodiment provides a method, including: obtaining, using a processor, contextual information relating to an information handling device; adjusting, using a processor, an automated speech recognition engine using the contextual information; receiving, at an audio receiver of the information handling device, user speech input; and providing, using a processor, recognized speech based on the user speech input received and the contextual information adjustment to the automated speech recognition engine. Other aspects are described and claimed.
1. A method, comprising: obtaining, using a processor, contextual information relating to an information handling device; adjusting, using a processor, an automated speech recognition engine using the contextual information; receiving, at an audio receiver of the information handling device, user speech input; and providing, using a processor, recognized speech based on the user speech input received and the contextual information adjustment to the automated speech recognition engine. 2. The method of claim 1, wherein said adjusting comprises selecting a knowledge domain based on the contextual information. 3. The method of claim 1, wherein said adjusting comprises selecting a lexicon based on the contextual information. 4. The method of claim 1, wherein said adjusting comprises weighting one or more words based on the contextual information. 5. The method of claim 1, further comprising committing a predetermined action matching the recognized speech; wherein said adjusting comprises adjusting the matching between recognized speech and a predetermined action based on the contextual information. 6. The method of claim 1, further comprising: providing a communication to the user including estimated recognized speech based on the user speech input and the contextual information adjustment to the automated speech recognition engine; and committing a predetermined action matching the recognized speech. 7. The method of claim 6, further comprising receiving user input associated with the communication; wherein said committing a predetermined action matching the recognized speech proceeds responsive thereto. 8. The method of claim 1, wherein said contextual information is selected from the group consisting of contextual information relating to user interaction with the information handling device, contextual information relating to running applications of the information handling device, contextual information relating to received stimulus of the information handling device, and contextual information relating to sensed environment of the information handling device. 9. The method of claim 1, wherein the contextual information is derived from the information handling device. 10. The method of claim 1, wherein the contextual information is transferrable to another information handling device. 11. An information handling device, comprising: an audio receiver; a processor operatively coupled to the audio receiver; and a memory device storing instructions executable by the processor to: obtain contextual information relating to the information handling device; adjust an automated speech recognition engine using the contextual information; receive, at the audio receiver, user speech input; and provide recognized speech based on the user speech input received and the contextual information adjustment to the automated speech recognition engine. 12. The information handling device of claim 11, wherein to adjust comprises selecting a knowledge domain based on the contextual information. 13. The information handling device of claim 11, wherein to adjust comprises selecting a lexicon based on the contextual information. 14. The information handling device of claim 11, wherein to adjust comprises weighting one or more words based on the contextual information. 15. 
The information handling device of claim 11, wherein the instructions are further executable by the processor to commit a predetermined action matching the recognized speech; wherein to adjust comprises adjusting the matching between recognized speech and a predetermined action based on the contextual information. 16. The information handling device of claim 11, wherein the instructions are further executable by the processor to: provide a communication to the user including estimated recognized speech based on the user speech input and the contextual information adjustment to the automated speech recognition engine; and commit a predetermined action matching the recognized speech. 17. The information handling device of claim 16, wherein the instructions are further executable by the processor to receive user input associated with the communication; wherein committing of a predetermined action matching the recognized speech proceeds responsive thereto. 18. The information handling device of claim 11, wherein said contextual information is selected from the group consisting of contextual information relating to user interaction with the information handling device, contextual information relating to running applications of the information handling device, contextual information relating to received stimulus of the information handling device, and contextual information relating to sensed environment of the information handling device. 19. The information handling device of claim 11, wherein the contextual information is derived from the information handling device. 20. A program product, comprising: a storage medium comprising device readable program code, the code being executable by a processor and comprising: code that obtains, using a processor, contextual information relating to an information handling device; code that adjusts, using a processor, an automated speech recognition engine using the contextual information; code that receives, at an audio receiver of the information handling device, user speech input; and code that provides, using a processor, recognized speech based on the user speech input received and the contextual information adjustment to the automated speech recognition engine.
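One concrete way to realize "weighting one or more words based on the contextual information" (claims 4 and 14) is to rescore a recognizer's hypothesis list with context-derived word boosts. The toy below assumes a (text, score) hypothesis list and a boost parameter; production engines expose this differently (language-model biasing, custom vocabularies), so nothing here reflects a specific ASR API.

# A toy rescoring sketch; the hypothesis-list interface is an assumption.
def rescore_with_context(hypotheses, context_words, boost=2.0):
    """Re-weight ASR hypotheses using contextual information.

    hypotheses:    list of (text, score) pairs from the recognizer
    context_words: words suggested by device context (running apps,
                   sensed environment, recent user interaction)
    """
    def adjusted(text, score):
        hits = sum(w in text.lower().split() for w in context_words)
        return score + boost * hits   # weight words matching the context
    return max(hypotheses, key=lambda h: adjusted(*h))

# Usage: with a navigation app in the foreground, "turn left" outranks an
# acoustically similar hypothesis despite a lower raw score:
best = rescore_with_context(
    [("turn left", 0.40), ("turnip left", 0.42)],
    context_words={"turn", "left", "route"})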
2,600
10,679
10,679
15,252,895
2,625
An apparatus and a method for controlling a touch input in a portable terminal are provided. In the method, a touch pad is activated when a request for a touch activation mode is received. Whether a touch input is received from the touch pad is determined. The touch input is compared with patterns of a pattern table of a corresponding application, wherein the patterns of the pattern table are mapped to an operation control command of the corresponding application. When an input pattern of the touch input matches a pattern registered in the pattern table, an operation of the corresponding application that is currently running is controlled to perform a function corresponding to the matched pattern.
1. A portable terminal apparatus, the apparatus comprising: a touch display configured to receive a touch input and display data; a memory configured to store a plurality of user defined characters with associated operations and instructions for execution; and at least one processor executing the instructions stored in the memory, the at least one processor configured to: receive, via the touch display, an input of a drawing of a character if the touch display is in a state where receiving of a touch input is enabled and displaying is disabled, determine an operation associated with the character, and execute the determined operation. 2. The apparatus of claim 1, wherein the operation is determined by: recognizing the character input by a user; comparing the character to the plurality of user defined characters; and providing the operation associated with a matching comparison between the character and one of the plurality of user defined characters. 3. The apparatus of claim 1, wherein the operation is one of an application, an application sub-function, an applet or a group of applications. 4. The apparatus of claim 1, wherein the touch display is further configured to receive a command to set the touch display to the state where displaying is disabled and the receiving of the touch input is enabled, and wherein the at least one processor, when the command is received, is further configured to: control to set the touch display to the state where displaying is disabled, and control to set the touch display to the state where the receiving of the touch input is enabled. 5. The apparatus of claim 1, wherein, when receiving the input in the form of the character, the at least one processor is further configured to: control to set the touch display to a state where displaying is enabled, and control to drive the touch display to display a representation of the received character. 6. The apparatus of claim 1, wherein the at least one processor is further configured to: enter a setup mode of the portable terminal, receive an input in the form of a new character, receive an operation associated with the new character, and store in the memory the association between the new character and the operation in the plurality of user defined characters. 7. A touch input method in a portable terminal including a touch display, the method comprising: receiving, when the touch display is in a state where receiving a touch input is enabled and displaying is disabled, an input in the form of a character; determining an operation associated with the character; and executing the determined operation. 8. The method of claim 7, wherein the determining of the operation further comprises: recognizing the character input by a user; comparing the character to a plurality of user defined characters; and providing the operation associated with a matching comparison between the character and one of the plurality of user defined characters. 9. The method of claim 7, wherein the determined operation is one of an application, an application sub-function, an applet or a group of applications. 10. The method of claim 7, further comprising: receiving, via a user input, a command to set the touch display to the state where the receiving of the touch input is enabled and displaying is disabled; setting, based upon receiving the command, the touch display to a state where displaying is disabled; and setting, based upon receiving the command, the touch display to a state where the receiving of the touch input is enabled. 11. 
The method of claim 7, wherein the receiving of the input in the form of the character further comprises: setting the touch display to a state where displaying is enabled; and driving the touch display to display a representation of the received character. 12. The method of claim 7, further comprising: entering a setup mode of the portable terminal; receiving an input in the form of a new character; receiving an operation associated with the new character; and storing the association between the new character and the operation in a plurality of user defined characters. 13. A portable terminal apparatus, the apparatus comprising: a touch display, the touch display having a plurality of states; a memory configured to store a plurality of user defined characters with associated operations and instructions for execution; and at least one processor executing the instructions stored in the memory, the at least one processor configured to: receive, via the touch display, an input of a drawing of a character if the touch display is in a state where a receiving function is enabled and a displaying function is disabled, determine an operation associated with the character, and execute the determined operation. 14. The apparatus of claim 13, wherein the operation is determined by: recognizing the character input by a user; comparing the character to the plurality of user defined characters; and providing the operation associated with a matching comparison between the character and one of the plurality of user defined characters. 15. The apparatus of claim 13, wherein the operation is one of an application, an application sub-function, an applet or a group of applications. 16. The apparatus of claim 13, wherein the touch display is further configured to receive a command to set the touch display to the state where the displaying function is disabled and the receiving function is enabled, and wherein the at least one processor, when the command is received, is further configured to: control to set the touch display to the state where the displaying function is disabled, and control to set the touch display to the state where the receiving function is enabled. 17. The apparatus of claim 13, wherein, when receiving the input in the form of the character, the at least one processor is further configured to: control to set the touch display to a state where the displaying function is enabled, and control to drive the touch display to display a representation of the received character. 18. The apparatus of claim 13, wherein the at least one processor is further configured to: enter a setup mode of the portable terminal, receive an input in the form of a new character, receive an operation associated with the new character, and store in the memory the association between the new character and the operation in the plurality of user defined characters.
An apparatus and a method for controlling a touch input in a portable terminal are provided. In the method, a touch pad is activated when a request for a touch activation mode is received. Whether a touch input is received from the touch pad is determined. The touch input is compared with patterns of a pattern table of a corresponding application, wherein the patterns of the pattern table are mapped to an operation control command of the corresponding application. When an input pattern of the touch input matches with a pattern registered in the pattern table, an operation of the corresponding application that is currently running is controlled to perform a function corresponding to the matched pattern.1. A portable terminal apparatus, the apparatus comprising: a touch display configured to receive a touch input and display data; a memory configured to store a plurality of user defined characters with associated operations and instructions for execution; and at least one processor executing the instructions stored in the memory, the at least one processor configured to: receive, via the touch display, an input of a drawing of a character if the touch display is in a state where receiving of a touch input is enabled and displaying is disabled, determine an operation associated with the character, and execute the determined operation. 2. The apparatus of claim 1, wherein the operation is determined by: recognizing the character input by a user; comparing the character to the plurality of user defined characters; and providing the operation associated with a matching comparison between the character and one of the plurality of user defined characters. 3. The apparatus of claim 1, wherein the operation is one of an application, an application sub-function, an applet or a group of applications. 4. The apparatus of claim 1, wherein the touch display is further configured to receive a command to set the touch display to the state where displaying is disabled and the receiving of the touch input is enabled, and wherein the at least one processor, when the command is received, is further configured to: control to set the touch display to the state where displaying is disabled, and control to set the touch display to the state where the receiving of the touch input is enabled. 5. The apparatus of claim 1, wherein, when receiving the input in the form of the character, the at least one processor is further configured to: control to set the touch display to a state where displaying is enabled, and control to drive the touch display to display a representation of the received character. 6. The apparatus of claim 1, wherein the at least one processor is further configured to: enter a setup mode of the portable terminal, receive an input in the form of a new character, receive an operation associated with the new character, and store in the memory the association between the new character and the operation in the plurality of user defined characters. 7. A touch input method in a portable terminal including a touch display, the method comprising: receiving, when the touch display is in a state where receiving a touch input is enabled and displaying is disabled, an input in the form of a character; determining an operation associated with the character; and executing the determined operation. 8. 
The method of claim 7, wherein the determining of the operation further comprises: recognizing the character input by a user; comparing the character to a plurality of user defined characters; and providing the operation associated with a matching comparison between the character and one of the plurality of user defined characters. 9. The method of claim 7, wherein the determined operation is one of an application, an application sub-function, an applet or a group of applications. 10. The method of claim 7, further comprising: receiving, via a user input, a command to set the touch display to the state where the receiving of the touch input is enabled and displaying is disabled; setting, based upon receiving the command, the touch display to a state where displaying is disabled; and setting, based upon receiving the command, the touch display to a state where the receiving of the touch input is enabled. 11. The method of claim 7, wherein the receiving of the input in the form of the character further comprises: setting the touch display to a state where displaying is enabled; and driving the touch display to display a representation of the received character. 12. The method of claim 7, further comprising: entering a setup mode of the portable terminal; receiving an input in the form of a new character; receiving an operation associated with the new character; and storing the association between the new character and the operation in a plurality of user defined characters. 13. A portable terminal apparatus, the apparatus comprising: a touch display, the touch display having a plurality of states; a memory configured to store a plurality of user defined characters with associated operations and instructions for execution; and at least one processor executing the instructions stored in the memory, the at least one processor configured to: receive, via the touch display, an input of a drawing of a character if the touch display is in a state where a receiving function is enabled and a displaying function is disabled, determine an operation associated with the character, and execute the determined operation. 14. The apparatus of claim 13, wherein the operation is determined by: recognizing the character input by a user; comparing the character to the plurality of user defined characters; and providing the operation associated with a matching comparison between the character and one of the plurality of user defined characters. 15. The apparatus of claim 13, wherein the operation is one of an application, an application sub-function, an applet or a group of applications. 16. The apparatus of claim 13, wherein the touch display is further configured to receive a command to set the touch display to the state where the displaying function is disabled and the receiving function is enabled, and wherein the at least one processor, when the command is received, is further configured to: control to set the touch display to the state where the displaying function is disabled, and control to set the touch display to the state where the receiving function is enabled. 17. The apparatus of claim 13, wherein, when receiving the input in the form of the character, the at least one processor is further configured to: control to set the touch display to a state where the displaying function is enabled, and control to drive the touch display to display a representation of the received character. 18. 
The apparatus of claim 13, wherein the at least one processor is further configured to: enter a setup mode of the portable terminal, receive an input in the form of a new character, receive an operation associated with the new character, and store in the memory the association between the new character and the operation in the plurality of user defined characters.
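For illustration, a minimal Python sketch of the character-to-operation mapping recited in claims 1, 2 and 6 above; every name here (register_character, handle_drawn_character, display_on) is hypothetical, since the claims do not specify an implementation:

user_defined_characters = {}  # glyph -> callable operation (the claim 1 memory)

def register_character(glyph, operation):
    # Setup mode (claim 6): associate a new character with an operation.
    user_defined_characters[glyph] = operation

def handle_drawn_character(glyph, display_on):
    # Claims 1-2: act only while displaying is disabled but touch input is
    # enabled; look up the recognized glyph among the user defined characters
    # and execute the associated operation on a match.
    if display_on:
        return  # the normal UI handles touches while the display is on
    operation = user_defined_characters.get(glyph)
    if operation is not None:
        operation()  # execute the determined operation

# Usage: drawing 'C' on the dark screen launches a stub camera operation.
register_character("C", lambda: print("launching camera"))
handle_drawn_character("C", display_on=False)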
2,600
10,680
10,680
15,865,581
2,667
Systems and methods are provided for performing medical imaging analysis. Input medical imaging data is received for performing a particular one of a plurality of medical imaging analyses. An output that provides a result of the particular medical imaging analysis on the input medical imaging data is generated using a neural network trained to perform the plurality of medical imaging analyses. The neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses. The generated output is outputted for performing the particular medical imaging analysis.
1. A method for performing medical imaging analysis, comprising: receiving input medical imaging data for performing a particular one of a plurality of medical imaging analyses; generating an output that provides a result of the particular medical imaging analysis on the input medical imaging data using a neural network trained to perform the plurality of medical imaging analyses, wherein the neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses; and outputting the generated output for performing the particular medical imaging analysis. 2. The method of claim 1, wherein each of the plurality of medical imaging analyses is associated with at least one of a different modality, anatomy, and task. 3. The method of claim 2, wherein the task comprises at least one of detection, recognition, segmentation, and registration. 4. The method of claim 1, further comprising: training the neural network by learning a set of weights for each node of the neural network, the weights in the set of weights for each node having a hierarchical relationship such that a weight at a top level of the hierarchical relationship is associated with each of the plurality of medical imaging analyses and weights at a bottom level of the hierarchical relationship are each associated with a respective one of the plurality of medical imaging analyses. 5. The method of claim 4, wherein the set of weights for each node comprises: a hypernet weight comprising the weight at the top level of the hierarchical relationship; one or more ultranet weights each associated with a modality and one or more ultranet weights each associated with an anatomy; one or more supernet weights each associated with a modality and an anatomy; and a plurality of target network weights comprising the weights at the bottom level of the hierarchical relationship. 6. The method of claim 4, wherein learning a set of weights for each node of the neural network comprises: cascading weights at a higher level of the hierarchical relationship to learn weights at a lower level of the hierarchical relationship associated with a same modality and/or anatomy. 7. The method of claim 4, wherein learning a set of weights for each node of the neural network comprises: combining weights for a first node in the neural network that are associated with at least one of a same modality, anatomy, and task to form a combined weight; and learning weights for a second node in the neural network using the combined weight. 8. The method of claim 4, wherein training the neural network comprises: training the neural network using datasets of training medical imaging data, each of the datasets being associated with a respective one of the plurality of medical imaging analyses and used to train a target network representing a branch of the neural network for performing the respective one of the plurality of medical imaging analyses. 9. The method of claim 8, wherein the datasets of training medical imaging data include input training medical imaging data, the method further comprising: generating output training medical imaging data corresponding to the input training medical imaging data using multi-task learning, the multi-task learning trained based on a relationship learned using an image as an input and an output. 10. 
An apparatus for performing medical imaging analysis, comprising: means for receiving input medical imaging data for performing a particular one of a plurality of medical imaging analyses; means for generating an output that provides a result of the particular medical imaging analysis on the input medical imaging data using a neural network trained to perform the plurality of medical imaging analyses, wherein the neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses; and means for outputting the generated output for performing the particular medical imaging analysis. 11. The apparatus of claim 10, wherein each of the plurality of medical imaging analyses is associated with at least one of a different modality, anatomy, and task. 12. The apparatus of claim 11, wherein the task comprises at least one of detection, recognition, segmentation, and registration. 13. The apparatus of claim 10, further comprising: means for training the neural network by learning a set of weights for each node of the neural network, the weights in the set of weights for each node having a hierarchical relationship such that a weight at a top level of the hierarchical relationship is associated with each of the plurality of medical imaging analyses and weights at a bottom level of the hierarchical relationship are each associated with a respective one of the plurality of medical imaging analyses. 14. The apparatus of claim 13, wherein the set of weights for each node comprises: a hypernet weight comprising the weight at the top level of the hierarchical relationship; one or more ultranet weights each associated with a modality and one or more ultranet weights each associated with an anatomy; one or more supernet weights each associated with a modality and an anatomy; and a plurality of target network weights comprising the weights at the bottom level of the hierarchical relationship. 15. A non-transitory computer readable medium storing computer program instructions for performing medical imaging analysis, the computer program instructions when executed by a processor cause the processor to perform operations comprising: receiving input medical imaging data for performing a particular one of a plurality of medical imaging analyses; generating an output that provides a result of the particular medical imaging analysis on the input medical imaging data using a neural network trained to perform the plurality of medical imaging analyses, wherein the neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses; and outputting the generated output for performing the particular medical imaging analysis. 16. The non-transitory computer readable medium of claim 15, the operations further comprising: training the neural network by learning a set of weights for each node of the neural network, the weights in the set of weights for each node having a hierarchical relationship such that a weight at a top level of the hierarchical relationship is associated with each of the plurality of medical imaging analyses and weights at a bottom level of the hierarchical relationship are each associated with a respective one of the plurality of medical imaging analyses. 17. 
The non-transitory computer readable medium of claim 16, wherein learning a set of weights for each node of the neural network comprises: cascading weights at a higher level of the hierarchical relationship to learn weights at a lower level of the hierarchical relationship associated with a same modality and/or anatomy. 18. The non-transitory computer readable medium of claim 16, wherein learning a set of weights for each node of the neural network comprises: combining weights for a first node in the neural network that are associated with at least one of a same modality, anatomy, and task to form a combined weight; and learning weights for a second node in the neural network using the combined weight. 19. The non-transitory computer readable medium of claim 16, wherein training the neural network comprises: training the neural network using datasets of training medical imaging data, each of the datasets being associated with a respective one of the plurality of medical imaging analyses and used to train a target network representing a branch of the neural network for performing the respective one of the plurality of medical imaging analyses. 20. The non-transitory computer readable medium of claim 19, wherein the datasets of training medical imaging data include input training medical imaging data, the operations further comprising: generating output training medical imaging data corresponding to the input training medical imaging data using multi-task learning, the multi-task learning trained based on a relationship learned using an image as an input and an output.
Systems and methods are provided for performing medical imaging analysis. Input medical imaging data is received for performing a particular one of a plurality of medical imaging analyses. An output that provides a result of the particular medical imaging analysis on the input medical imaging data is generated using a neural network trained to perform the plurality of medical imaging analyses. The neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses. The generated output is outputted for performing the particular medical imaging analysis.1. A method for performing medical imaging analysis, comprising: receiving input medical imaging data for performing a particular one of a plurality of medical imaging analyses; generating an output that provides a result of the particular medical imaging analysis on the input medical imaging data using a neural network trained to perform the plurality of medical imaging analyses, wherein the neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses; and outputting the generated output for performing the particular medical imaging analysis. 2. The method of claim 1, wherein each of the plurality of medical imaging analyses is associated with at least one of a different modality, anatomy, and task. 3. The method of claim 2, wherein the task comprises at least one of detection, recognition, segmentation, and registration. 4. The method of claim 1, further comprising: training the neural network by learning a set of weights for each node of the neural network, the weights in the set of weights for each node having a hierarchical relationship such that a weight at a top level of the hierarchical relationship is associated with each of the plurality of medical imaging analyses and weights at a bottom level of the hierarchical relationship are each associated with a respective one of the plurality of medical imaging analyses. 5. The method of claim 4, wherein the set of weights for each node comprises: a hypernet weight comprising the weight at the top level of the hierarchical relationship; one or more ultranet weights each associated with a modality and one or more ultranet weights each associated with an anatomy; one or more supernet weights each associated with a modality and an anatomy; and a plurality of target network weights comprising the weights at the bottom level of the hierarchical relationship. 6. The method of claim 4, wherein learning a set of weights for each node of the neural network comprises: cascading weights at a higher level of the hierarchical relationship to learn weights at a lower level of the hierarchical relationship associated with a same modality and/or anatomy. 7. The method of claim 4, wherein learning a set of weights for each node of the neural network comprises: combining weights for a first node in the neural network that are associated with at least one of a same modality, anatomy, and task to form a combined weight; and learning weights for a second node in the neural network using the combined weight. 8. 
The method of claim 4, wherein training the neural network comprises: training the neural network using datasets of training medical imaging data, each of the datasets being associated with a respective one of the plurality of medical imaging analyses and used to train a target network representing a branch of the neural network for performing the respective one of the plurality of medical imaging analyses. 9. The method of claim 8, wherein the datasets of training medical imaging data include input training medical imaging data, the method further comprising: generating output training medical imaging data corresponding to the input training medical imaging data using multi-task learning, the multi-task learning trained based on a relationship learned using an image as an input and an output. 10. An apparatus for performing medical imaging analysis, comprising: means for receiving input medical imaging data for performing a particular one of a plurality of medical imaging analyses; means for generating an output that provides a result of the particular medical imaging analysis on the input medical imaging data using a neural network trained to perform the plurality of medical imaging analyses, wherein the neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses; and means for outputting the generated output for performing the particular medical imaging analysis. 11. The apparatus of claim 10, wherein each of the plurality of medical imaging analyses is associated with at least one of a different modality, anatomy, and task. 12. The apparatus of claim 11, wherein the task comprises at least one of detection, recognition, segmentation, and registration. 13. The apparatus of claim 10, further comprising: means for training the neural network by learning a set of weights for each node of the neural network, the weights in the set of weights for each node having a hierarchical relationship such that a weight at a top level of the hierarchical relationship is associated with each of the plurality of medical imaging analyses and weights at a bottom level of the hierarchical relationship are each associated with a respective one of the plurality of medical imaging analyses. 14. The apparatus of claim 13, wherein the set of weights for each node comprises: a hypernet weight comprising the weight at the top level of the hierarchical relationship; one or more ultranet weights each associated with a modality and one or more ultranet weights each associated with an anatomy; one or more supernet weights each associated with a modality and an anatomy; and a plurality of target network weights comprising the weights at the bottom level of the hierarchical relationship. 15. 
A non-transitory computer readable medium storing computer program instructions for performing medical imaging analysis, the computer program instructions when executed by a processor cause the processor to perform operations comprising: receiving input medical imaging data for performing a particular one of a plurality of medical imaging analyses; generating an output that provides a result of the particular medical imaging analysis on the input medical imaging data using a neural network trained to perform the plurality of medical imaging analyses, wherein the neural network is trained by learning one or more weights associated with the particular medical imaging analysis using one or more weights associated with a different one of the plurality of medical imaging analyses; and outputting the generated output for performing the particular medical imaging analysis. 16. The non-transitory computer readable medium of claim 15, the operations further comprising: training the neural network by learning a set of weights for each node of the neural network, the weights in the set of weights for each node having a hierarchical relationship such that a weight at a top level of the hierarchical relationship is associated with each of the plurality of medical imaging analyses and weights at a bottom level of the hierarchical relationship are each associated with a respective one of the plurality of medical imaging analyses. 17. The non-transitory computer readable medium of claim 16, wherein learning a set of weights for each node of the neural network comprises: cascading weights at a higher level of the hierarchical relationship to learn weights at a lower level of the hierarchical relationship associated with a same modality and/or anatomy. 18. The non-transitory computer readable medium of claim 16, wherein learning a set of weights for each node of the neural network comprises: combining weights for a first node in the neural network that are associated with at least one of a same modality, anatomy, and task to form a combined weight; and learning weights for a second node in the neural network using the combined weight. 19. The non-transitory computer readable medium of claim 16, wherein training the neural network comprises: training the neural network using datasets of training medical imaging data, each of the datasets being associated with a respective one of the plurality of medical imaging analyses and used to train a target network representing a branch of the neural network for performing the respective one of the plurality of medical imaging analyses. 20. The non-transitory computer readable medium of claim 19, wherein the datasets of training medical imaging data include input training medical imaging data, the operations further comprising: generating output training medical imaging data corresponding to the input training medical imaging data using multi-task learning, the multi-task learning trained based on a relationship learned using an image as an input and an output.
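As a rough illustration of the hierarchical weight relationship in claims 4-6 above, the following Python/NumPy sketch cascades a shared top-level weight through modality- and anatomy-level weights down to a task-specific target weight for one node; the additive combination rule and all names (hypernet_w, ultranet_w, supernet_w, target_w) are assumptions of this sketch, not the patented training procedure:

import numpy as np

rng = np.random.default_rng(0)
shape = (16, 16)  # illustrative weight shape for a single node

hypernet_w = rng.normal(size=shape)                     # top level, shared by all analyses
ultranet_w = {"CT": rng.normal(size=shape),             # one per modality
              "liver": rng.normal(size=shape)}          # one per anatomy
supernet_w = {("CT", "liver"): rng.normal(size=shape)}  # modality + anatomy
target_w = {("CT", "liver", "segmentation"): rng.normal(size=shape)}  # bottom level

def effective_weight(modality, anatomy, task):
    # Cascade weights from higher levels of the hierarchy into the weight
    # used for one specific medical imaging analysis (cf. claim 6).
    w = hypernet_w.copy()
    w += ultranet_w[modality] + ultranet_w[anatomy]
    w += supernet_w[(modality, anatomy)]
    w += target_w[(modality, anatomy, task)]
    return w

w = effective_weight("CT", "liver", "segmentation")  # weight for one analysis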
2,600
10,681
10,681
14,230,170
2,641
The invention provides a computer-based method for logging a user mobile device onto a server computer system, including: registering a unique identifier of a user mobile device; receiving a first message from the user mobile device; detecting an Internet Protocol address associated with the user mobile device; receiving the unique identifier corresponding to the Internet Protocol address; and transmitting to the user mobile device a second message.
1. A computer-based method comprising: performing a first determination at a server computer system, the first determination comparing login information with registered user data stored in a database supported by the server computer system to determine whether the login information is received from a user computer system associated with a user; in response to determining that the login information is received from the user computer system associated with the user, performing a second determination, the second determination comparing the login information with the registered user data stored in the database supported by the server computer system to determine whether the user is a registered user; and in response to a favorable comparison of the login information and the registered user data, providing the user access, via the user computer system, to a registered user area of a web site supported by the server computer system. 2. The computer-based method of claim 1, wherein the registered user data includes user information associated with a particular user, the user information including a unique identifier of a user mobile device associated with the user. 3. The computer-based method of claim 2, wherein the unique identifier is a unique phone number of the user mobile device. 4. The computer-based method of claim 2, wherein providing the user access to the registered user area of the web site comprises transmitting a user-specific homepage associated with the particular user to the user computer system, the user-specific homepage providing functionality that allows the user to select media files to be transmitted to the user mobile device. 5. The computer-based method of claim 4, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user is authorized to access. 6. The computer-based method of claim 4, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user has previously accessed. 7. The computer-based method of claim 1, further comprising: in response to determining that the login information is received from a computer system unassociated with the user, providing the user restricted access to an unregistered user area of the web site, the unregistered area configured to display a general homepage providing functionality that allows the user to select media files to be transmitted to the user mobile device upon receipt of a unique identifier of the user mobile device at the server computer system. 8. The computer-based method of claim 7, further comprising: in response to receiving registration information from the user computer system via the unregistered user area of the web site, storing the registration information as registered user data and providing the user access to the registered user area of the web site. 9. 
A non-transitory computer readable medium tangibly embodying a program of instructions configured to be executed by at least one processor, the program of instructions comprising: at least one instruction to perform a first determination at a server computer system, the first determination comparing login information with registered user data stored in a database supported by the server computer system to determine whether the login information is received from a user computer system associated with a user; at least one instruction to perform a second determination in response to determining that the login information is received from the user computer system associated with the user, the second determination comparing the login information with the registered user data stored in the database to determine whether the user is a registered user; and at least one instruction to provide the user access to a registered user area of a web site supported by the server computer system in response to a favorable comparison of the login information and the registered user data, wherein the access is provided via the user computer system. 10. The computer readable medium of claim 9, wherein the registered user data includes user information associated with a particular user, the user information including a unique identifier of a user mobile device associated with the user. 11. The computer readable medium of claim 10, wherein the unique identifier is a unique phone number of the user mobile device. 12. The computer readable medium of claim 10, wherein providing the user access to the registered user area of the web site comprises transmitting a user-specific homepage associated with the particular user to the user computer system, the user-specific homepage providing functionality that allows the user to select media files to be transmitted to the user mobile device. 13. The computer readable medium of claim 12, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user is authorized to access. 14. The computer readable medium of claim 12, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user has previously accessed. 15. The computer readable medium of claim 9, further comprising: at least one instruction to provide the user restricted access to an unregistered user area of the web site in response to the first determination indicating that the login information is received from a computer system unassociated with the user, wherein the unregistered area is configured to display a general homepage providing functionality that allows the user to select media files to be transmitted to the user mobile device upon receipt of a unique identifier of the user mobile device at the server computer system. 16. The computer readable medium of claim 15, further comprising: at least one instruction to store registration information as registered user data and provide the user access to the registered user area of the web site, in response to receiving the registration information from the user computer system via the unregistered user area of the web site. 17. 
A system comprising: at least one processor; memory coupled to the at least one processor; and a program of instructions stored in the memory and configured to be executed by the processor, the program of instructions including: at least one instruction to perform a first determination at a server computer system, the first determination comparing login information with registered user data stored in a database supported by the server computer system to determine whether the login information is received from a user computer system associated with a user; at least one instruction to perform a second determination in response to determining that the login information is received from the user computer system associated with the user, the second determination comparing the login information with the registered user data stored in the database to determine whether the user is a registered user; and at least one instruction to provide the user access to a registered user area of a web site supported by the server computer system in response to a favorable comparison of the login information and the registered user data, wherein the access is provided via the user computer system. 18. The system of claim 17, wherein the registered user data includes user information associated with a particular user, the user information including a unique identifier of a user mobile device associated with the user. 19. The system of claim 18, wherein providing access to the registered user area of the web site comprises transmitting, to the user computer system, a user-specific homepage associated with the particular user, the user-specific homepage providing functionality that allows the user to select links corresponding to media files the particular user is authorized to access, and wherein selection of the links causes media files to be transmitted to the user mobile device. 20. The system of claim 18, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user has previously accessed.
The invention provides a computer-based method for logging a user mobile device onto a server computer system, including: registering a unique identifier of a user mobile device; receiving a first message from the user mobile device; detecting an Internet Protocol address associated with the user mobile device; receiving the unique identifier corresponding to the Internet Protocol address; and transmitting to the user mobile device a second message.1. A computer-based method comprising: performing a first determination at a server computer system, the first determination comparing login information with registered user data stored in a database supported by the server computer system to determine whether the login information is received from a user computer system associated with a user; in response to determining that the login information is received from the user computer system associated with the user, performing a second determination, the second determination comparing the login information with the registered user data stored in the database supported by the server computer system to determine whether the user is a registered user; and in response to a favorable comparison of the login information and the registered user data, providing the user access, via the user computer system, to a registered user area of a web site supported by the server computer system. 2. The computer-based method of claim 1, wherein the registered user data includes user information associated with a particular user, the user information including a unique identifier of a user mobile device associated with the user. 3. The computer-based method of claim 2, wherein the unique identifier is a unique phone number of the user mobile device. 4. The computer-based method of claim 2, wherein providing the user access to the registered user area of the web site comprises transmitting a user-specific homepage associated with the particular user to the user computer system, the user-specific homepage providing functionality that allows the user to select media files to be transmitted to the user mobile device. 5. The computer-based method of claim 4, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user is authorized to access. 6. The computer-based method of claim 4, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user has previously accessed. 7. The computer-based method of claim 1, further comprising: in response to determining that the login information is received from a computer system unassociated with the user, providing the user restricted access to an unregistered user area of the web site, the unregistered area configured to display a general homepage providing functionality that allows the user to select media files to be transmitted to the user mobile device upon receipt of a unique identifier of the user mobile device at the server computer system. 8. The computer-based method of claim 7, further comprising: in response to receiving registration information from the user computer system via the unregistered user area of the web site, storing the registration information as registered user data and providing the user access to the registered user area of the web site. 9. 
A non-transitory computer readable medium tangibly embodying a program of instructions configured to be executed by at least one processor, the program of instructions comprising: at least one instruction to perform a first determination at a server computer system, the first determination comparing login information with registered user data stored in a database supported by the server computer system to determine whether the login information is received from a user computer system associated with a user; at least one instruction to perform a second determination in response to determining that the login information is received from the user computer system associated with the user, the second determination comparing the login information with the registered user data stored in the database to determine whether the user is a registered user; and at least one instruction to provide the user access to a registered user area of a web site supported by the server computer system in response to a favorable comparison of the login information and the registered user data, wherein the access is provided via the user computer system. 10. The computer readable medium of claim 9, wherein the registered user data includes user information associated with a particular user, the user information including a unique identifier of a user mobile device associated with the user. 11. The computer readable medium of claim 10, wherein the unique identifier is a unique phone number of the user mobile device. 12. The computer readable medium of claim 10, wherein providing the user access to the registered user area of the web site comprises transmitting a user-specific homepage associated with the particular user to the user computer system, the user-specific homepage providing functionality that allows the user to select media files to be transmitted to the user mobile device. 13. The computer readable medium of claim 12, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user is authorized to access. 14. The computer readable medium of claim 12, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user has previously accessed. 15. The computer readable medium of claim 9, further comprising: at least one instruction to provide the user restricted access to an unregistered user area of the web site in response to the first determination indicating that the login information is received from a computer system unassociated with the user, wherein the unregistered area is configured to display a general homepage providing functionality that allows the user to select media files to be transmitted to the user mobile device upon receipt of a unique identifier of the user mobile device at the server computer system. 16. The computer readable medium of claim 15, further comprising: at least one instruction to store registration information as registered user data and provide the user access to the registered user area of the web site, in response to receiving the registration information from the user computer system via the unregistered user area of the web site. 17. 
A system comprising: at least one processor; memory coupled to the at least one processor; and a program of instructions stored in the memory and configured to be executed by the processor, the program of instructions including: at least one instruction to perform a first determination at a server computer system, the first determination comparing login information with registered user data stored in a database supported by the server computer system to determine whether the login information is received from a user computer system associated with a user; at least one instruction to perform a second determination in response to determining that the login information is received from the user computer system associated with the user, the second determination comparing the login information with the registered user data stored in the database to determine whether the user is a registered user; and at least one instruction to provide the user access to a registered user area of a web site supported by the server computer system in response to a favorable comparison of the login information and the registered user data, wherein the access is provided via the user computer system. 18. The system of claim 17, wherein the registered user data includes user information associated with a particular user, the user information including a unique identifier of a user mobile device associated with the user. 19. The system of claim 18, wherein providing access to the registered user area of the web site comprises transmitting, to the user computer system, a user-specific homepage associated with the particular user, the user-specific homepage providing functionality that allows the user to select links corresponding to media files the particular user is authorized to access, and wherein selection of the links causes media files to be transmitted to the user mobile device. 20. The system of claim 18, wherein the user-specific homepage includes links to a file database corresponding to media files the particular user has previously accessed.
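A compact Python sketch may help make claim 1's two determinations concrete: the first comparison checks whether the login arrives from a computer system associated with the user, the second whether the user is registered, and the access level follows from both. The data layout and every name here (registered_users, known_systems, mobile_id) are invented for illustration:

registered_users = {
    # username -> registered user data, including the unique identifier
    # (here a phone number, cf. claim 3) of the user's mobile device.
    "alice": {"password": "s3cret",
              "known_systems": {"10.0.0.5"},
              "mobile_id": "+1-555-0100"},
}

def login(username, password, system_address):
    user = registered_users.get(username)
    # First determination: login from a system associated with the user?
    from_known_system = user is not None and system_address in user["known_systems"]
    # Second determination: is the user a registered user?
    is_registered = user is not None and user["password"] == password
    if from_known_system and is_registered:
        return "registered_user_area"    # user-specific homepage (claim 4)
    return "unregistered_user_area"      # restricted general homepage (claim 7)

print(login("alice", "s3cret", "10.0.0.5"))  # -> registered_user_area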
2,600
10,682
10,682
15,641,095
2,683
A networked radio frequency identification system includes a plurality of radio frequency identification (RFID) tag readers, a computer in signal communication with the RFID tag readers over a network, and a software module for storage on and operable by the computer that localizes RFID tags based on information received from the RFID tag readers using a network model having endpoints and oriented links. In an additional example, at least one of the RFID tag readers includes an adjustable configuration setting selected from RF signal strength, antenna gain, antenna polarization, and antenna orientation. In a further aspect, the system localizes RFID tags based on hierarchical threshold limit calculations. In an additional aspect, the system controls a locking device associated with an access point based on localization of an authorized RFID tag at the access point and reception of additional authorizing information from an input device.
1. A radio frequency system comprising: a first RF tag comprising a radio frequency transmitter logic coupled to a first antenna having a first set of radiating elements forming a radiation pattern; a second RF tag comprising a radio frequency receiver logic coupled to a second antenna having a second set of radiating elements forming a radiation pattern; a sensor; a computer system coupled with the first and the second RF tags; a plurality of semantic attributes stored in the computer system; the computer system being configured to determine a location for the second RF tag based on a signal received by the second RF tag from the first RF tag; the computer system further being configured to infer a first semantic attribute from among a plurality of semantic attributes based on an input from the sensor and the determined location for the second RF tag; and wherein the computer system instructs at least one of the first RF tag and the second RF tag to adjust the radiation pattern of the first RF tag or the radiation pattern of the second RF tag based on the first semantic attribute. 2. The system of claim 1, wherein the computer system is configured to cause the first RF tag to adjust one or more parameters of the first RF tag, the one or more parameters selected from power level, antenna polarization, gain, radiation pattern and orientation. 3. The system of claim 1, wherein the computer system is configured to cause the second RF tag to tune one or more parameters of the second RF tag dynamically based on the angle of arrival, phase, signal strength and signal polarization. 4. The system of claim 1, wherein the computer system is configured to cause the second RF tag to orient the antenna of the second RF tag dynamically based on the angle of arrival, phase, signal strength and signal polarization. 5. The system of claim 1, wherein the semantic attribute expires after an interval of time and wherein the computer system is further configured to cause the at least one of the first RF tag and the second RF tag to re-adjust the radiation pattern based on the expiration of the first semantic attribute. 6. The system of claim 1, wherein the computer system further is configured with a plurality of time management rules, at least one of the time management rules associating a time interval and a presence of the second RF tag at one or more of the location based endpoints with the first semantic attribute, the computer system inferring the first semantic attribute based on the application of the at least one time management rule and a comparison of the time interval with a determined time associated with the second RF tag in relation to the determined location for the second RF tag. 7. The system of claim 1, wherein the computer system is configured to infer a composite semantic attribute based on the first and a second semantic attribute. 8. The system of claim 1, wherein the first semantic attribute is a composite semantic attribute. 9. The system of claim 5, wherein the time interval is defined as one of a plurality of predefined semantic intervals. 10. The system of claim 1, wherein the second RF tag includes an energy harvesting component. 11. 
A radio frequency system comprising: a first radio frequency tag and a second radio frequency tag; wherein the first radio frequency tag includes a radio frequency transmitter logic and a first antenna coupled to the radio frequency transmitter logic, wherein the first antenna comprises a first set of radiating elements; wherein the second radio frequency tag includes a radio frequency receiver logic and a second antenna coupled to the radio frequency receiver logic, wherein the second antenna comprises a second set of radiating elements; a sensor; a computer system coupled with the first and the second radio frequency tags; the computer system storing a plurality of semantic attributes; the computer system being configured to determine a location for the second radio frequency tag based on a signal received by the second radio frequency tag from the first radio frequency tag; wherein the computer system is configured to infer a first semantic attribute from among a plurality of semantic attributes based on an input from the sensor and the determined location for the second radio frequency tag; and wherein the computer system instructs at least one of the first and second radio frequency tags to tune specific radiating elements from among the first antenna or the second antenna based on the first semantic attribute. 12. The system of claim 11, wherein the tuning includes adjusting parameters selected from power level, polarization, gain, radiation pattern and orientation. 13. The system of claim 11, wherein the second radio frequency tag tunes the second antenna radiating elements dynamically based on the angle of arrival, phase, signal strength and signal polarization. 14. The system of claim 11, wherein the second radio frequency tag orients the second antenna dynamically based on the angle of arrival, phase, signal strength and signal polarization. 15. The system of claim 11, wherein the semantic attribute expires after an interval of time and the at least one of the first and second radio frequency tags re-tunes the first or the second radiating elements based on the expiration of the first semantic attribute. 16. The system of claim 15, wherein the interval of time is defined as one of a plurality of predefined semantic intervals. 17. The system of claim 11, wherein the computer system further is configured with a plurality of time management rules, at least one of the time management rules associating a predefined time interval and a presence of the second radio frequency tag at one or more of the location based endpoints with the first semantic attribute, the computer system inferring the first semantic attribute based on the application of the at least one time management rule and a comparison of the predefined time interval with a determined time associated with the second radio frequency tag in relation to the determined location for the second radio frequency tag. 18. The system of claim 11, wherein the computer system infers a composite semantic attribute based on the first semantic attribute and a second semantic attribute. 19. The system of claim 11, wherein the first semantic attribute is a composite semantic attribute. 20. The system of claim 11, wherein the second radio frequency tag includes an energy harvesting component. 21. The system of claim 11, wherein the RF tag comprises a card. 22. The system of claim 11, wherein the RF tag comprises a module. 23. The system of claim 11, wherein the RF tag comprises an electronic device. 24. 
The system of claim 11, wherein the RF tag comprises a non-passive RF device. 25. The system of claim 11, wherein the RF tag is embedded within an electronic device.
A networked radio frequency identification system includes a plurality of radio frequency identification (RFID) tag readers, a computer in signal communication with the RFID tag readers over a network, and a software module for storage on and operable by the computer that localizes RFID tags based on information received from the RFID tag readers using a network model having endpoints and oriented links. In an additional example, at least one of the RFID tag readers includes an adjustable configuration setting selected from RF signal strength, antenna gain, antenna polarization, and antenna orientation. In a further aspect, the system localizes RFID tags based on hierarchical threshold limit calculations. In an additional aspect, the system controls a locking device associated with an access point based on localization of an authorized RFID tag at the access point and reception of additional authorizing information from an input device.1. A radio frequency system comprising: a first RF tag comprising a radio frequency transmitter logic coupled to a first antenna having a first set of radiating elements forming a radiation pattern; a second RF tag comprising a radio frequency receiver logic coupled to a second antenna having a second set of radiating elements forming a radiation pattern; a sensor; a computer system coupled with the first and the second RF tags; a plurality of semantic attributes stored in the computer system; the computer system being configured to determine a location for the second RF tag based on a signal received by the second RF tag from the first RF tag; the computer system further being configured to infer a first semantic attribute from among a plurality of semantic attributes based on an input from the sensor and the determined location for the second RF tag; and wherein the computer system instructs at least one of the first RF tag and the second RF tag to adjust the radiation pattern of the first RF tag or the radiation pattern of the second RF tag based on the first semantic attribute. 2. The system of claim 1, wherein the computer system is configured to cause the first RF tag to adjust one or more parameters of the first RF tag, the one or more parameters selected from power level, antenna polarization, gain, radiation pattern and orientation. 3. The system of claim 1, wherein the computer system is configured to cause the second RF tag to tune one or more parameters of the second RF tag dynamically based on the angle of arrival, phase, signal strength and signal polarization. 4. The system of claim 1, wherein the computer system is configured to cause the second RF tag to orient the antenna of the second RF tag dynamically based on the angle of arrival, phase, signal strength and signal polarization. 5. The system of claim 1, wherein the semantic attribute expires after an interval of time and wherein the computer system is further configured to cause the at least one of the first RF tag and the second RF tag to re-adjust the radiation pattern based on the expiration of the first semantic attribute. 6. 
The system of claim 1, wherein the computer system further is configured with a plurality of time management rules, at least one of the time management rules associating a time interval and a presence of the second RF tag at one or more of the location based endpoints with the first semantic attribute, the computer system inferring the first semantic attribute based on the application of the at least one time management rule and a comparison of the time interval with a determined time associated with the second RF tag in relation to the determined location for the second RF tag. 7. The system of claim 1, wherein the computer system is configured to infer a composite semantic attribute based on the first and a second semantic attribute. 8. The system of claim 1, wherein the first semantic attribute is a composite semantic attribute. 9. The system of claim 5, wherein the time interval is defined as one of a plurality of predefined semantic intervals. 10. The system of claim 1, wherein the second RF tag includes an energy harvesting component. 11. A radio frequency system comprising: a first radio frequency tag and a second radio frequency tag; wherein the first radio frequency tag includes a radio frequency transmitter logic and a first antenna coupled to the radio frequency transmitter logic, wherein the first antenna comprises a first set of radiating elements; wherein the second radio frequency tag includes a radio frequency receiver logic and a second antenna coupled to the radio frequency receiver logic, wherein the second antenna comprises a second set of radiating elements; a sensor; a computer system coupled with the first and the second radio frequency tags; the computer system storing a plurality of semantic attributes; the computer system being configured to determine a location for the second radio frequency tag based on a signal received by the second radio frequency tag from the first radio frequency tag; wherein the computer system is configured to infer a first semantic attribute from among a plurality of semantic attributes based on an input from the sensor and the determined location for the second radio frequency tag; and wherein the computer system instructs at least one of the first and second radio frequency tags to tune specific radiating elements from among the first antenna or the second antenna based on the first semantic attribute. 12. The system of claim 11, wherein the tuning includes adjusting parameters selected from power level, polarization, gain, radiation pattern and orientation. 13. The system of claim 11, wherein the second radio frequency tag tunes the second antenna radiating elements dynamically based on the angle of arrival, phase, signal strength and signal polarization. 14. The system of claim 11, wherein the second radio frequency tag orients the second antenna dynamically based on the angle of arrival, phase, signal strength and signal polarization. 15. The system of claim 11, wherein the semantic attribute expires after an interval of time and the at least one of the first and second radio frequency tags re-tunes the first or the second radiating elements based on the expiration of the first semantic attribute. 16. The system of claim 15, wherein the interval of time is defined as one of a plurality of predefined semantic intervals. 17. 
The system of claim 11, wherein the computer system is further configured with a plurality of time management rules, at least one of the time management rules associating a predefined time interval and a presence of the second radio frequency tag at one or more of the location-based endpoints with the first semantic attribute, the computer system inferring the first semantic attribute based on the application of the at least one time management rule and a comparison of the predefined time interval with a determined time associated with the second radio frequency tag in relation to the determined location for the second radio frequency tag. 18. The system of claim 11, wherein the computer system infers a composite semantic attribute based on the first semantic attribute and a second semantic attribute. 19. The system of claim 11, wherein the first semantic attribute is a composite semantic attribute. 20. The system of claim 11, wherein the second radio frequency tag includes an energy harvesting component. 21. The system of claim 11, wherein the RF tag comprises a card. 22. The system of claim 11, wherein the RF tag comprises a module. 23. The system of claim 11, wherein the RF tag comprises an electronic device. 24. The system of claim 11, wherein the RF tag comprises a non-passive RF device. 25. The system of claim 11, wherein the RF tag is embedded within an electronic device.
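The semantic-attribute claims above describe a concrete control loop: infer an attribute from a sensor input plus a localized tag position, then instruct a tag to adjust its RF parameters until the attribute expires. A minimal sketch of that loop follows, assuming a hypothetical rule table and parameter profiles; the names (Rule, infer_attribute, select_profile) and all values are mine, not the patent's.

```python
# Illustrative sketch only: rule table, names, and RF parameter profiles
# below are hypothetical, not taken from the patent.
import time
from dataclasses import dataclass

@dataclass
class Rule:
    sensor_value: str   # triggering sensor input, e.g. "door_open"
    location: str       # determined location of the second RF tag
    attribute: str      # semantic attribute to infer
    ttl_s: float        # attribute expires after this interval (claim 5)

RULES = [
    Rule("door_open", "dock_3", "loading_in_progress", ttl_s=300.0),
    Rule("motion", "aisle_7", "occupied", ttl_s=60.0),
]

def infer_attribute(sensor_value, tag_location, now=None):
    """Infer a semantic attribute from a sensor input plus a tag location."""
    now = time.time() if now is None else now
    for r in RULES:
        if r.sensor_value == sensor_value and r.location == tag_location:
            return r.attribute, now + r.ttl_s   # attribute and its expiry
    return None

def select_profile(attribute):
    """Map a semantic attribute to tunable RF parameters (claim-2 style)."""
    profiles = {
        "loading_in_progress": {"power_dbm": 20, "gain_db": 6,
                                "polarization": "circular"},
        "occupied": {"power_dbm": 10, "gain_db": 3, "polarization": "linear"},
    }
    return profiles.get(attribute)  # None -> leave the radiation pattern alone

inferred = infer_attribute("door_open", "dock_3")
if inferred:
    attribute, expiry = inferred
    print(attribute, select_profile(attribute))
```

On expiry of the attribute (claim 5 / claim 15), a real controller would re-run infer_attribute and re-issue select_profile to restore or re-tune the radiation pattern.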
2,600
10,683
10,683
16,500,851
2,667
An image processing system and related method. The system comprises an input interface (IN) configured for receiving an input image. A filter (FIL) of the system filters said input image to obtain a structure image from said input image, said structure image including a range of image values. A range identifier (RID) of the system identifies, based on an image histogram for the structure image, an image value sub-range within said range, the sub-range being associated with a region of interest. The system outputs, through an output interface (OUT), a specification for said image value sub-range. In addition or instead, a mask image for the region of interest or for a region of low information is output.
1. An image processing system, comprising: an input interface configured for receiving an input image; a filter configured to filter said input image to obtain a structure image from said input image, said structure image including a range of image values; a range identifier configured to identify, based on an image histogram for said structure image, an image value sub-range within said range, said sub-range being associated with a region of interest; an output interface for outputting at least one of a specification for said image value sub-range, a mask image associated with the sub-range and configured to indicate the region of interest, and a complementary mask image associated with the complement of said sub-range, and configured to indicate the complement of said region of interest, and an image value range evaluator configured to compute a respective weight for the image values outside the sub-range, said weight determining a contribution of the respective image values in a visualization of the input image, wherein said weight measures a separation between at least two classes, one corresponding to the region of interest and the at least one other corresponding to the background or to at least one radio-opaque object. 2. The system of claim 1, comprising a histogram former configured to form said image histogram by forming said image histogram from image values in the structure image or a histogram transformer, the histogram former configured to form an intermediate image histogram for image values in the structure image and the histogram transformer configured to transform said intermediate image histogram into said image histogram, or to transform the input image into an intermediate image and to form the histogram from said intermediate image. 3. The system of claim 1, wherein said histogram transformer is configured to apply an area preserving interpolation when transforming the intermediate image histogram. 4. The system of claim 1, further comprising an image renderer configured to render a visualization on a display unit of said input image based on the mask for the region of interest. 5. The system of claim 1, wherein an image renderer is configured to render a visualization on a display unit of said input image whilst a contribution of the image value inside a region of low information for a contrast and/or brightness adaptation is according to said weight. 6. The system of claim 5, wherein the image renderer is configured to render a visualization on a display unit of said mask for the complement of the region of interest, with a visualization scheme that represents the weight computed by the image value range evaluator. 7. The system of claim 1, wherein the range identifier is configured to identify said sub-range by fitting a statistical mixture model to the image histogram or to the transformed image histogram. 8. The system of claim 7, wherein the statistical mixture model includes at least two components corresponding to the at least two classes. 9. The system of claim 8, wherein one of the components corresponds to the background or to the at least one radio-opaque object whilst the at least one other component corresponds to the region of interest including one or more anatomical structures of interest. 10. The system of claim 9, wherein the statistical mixture model includes at least three components, wherein the at least one further component corresponds to an edge structure. 11. 
A method for image processing, comprising: receiving an input image; filtering said input image to obtain a structure image from said input image, said structure image including a range of image values at different image locations; identifying, based on an image histogram for said structure image, an image value sub-range within said range, said sub-range being associated with a region of interest; and outputting at least one of a specification for said image value sub-range, a mask image associated with the sub-range and configured to indicate the region of interest, and a complementary mask image associated with the complement of said sub-range, and indicating the complement of said region of interest; and computing a respective weight for the image values outside the sub-range, said weight determining a contribution of the respective image values in a visualization of the input image, wherein said weight measures a separation between at least two classes, one corresponding to the region of interest and the at least one other corresponding to the background or to at least one radio-opaque object. 12-14. (canceled) 15. A non-transitory computer-readable medium having one or more executable instructions stored thereon which, when executed by at least one processor, cause the at least one processor to perform a method for image processing, the method comprising: receiving an input image; filtering said input image to obtain a structure image from said input image, said structure image including a range of image values at different image locations; identifying, based on an image histogram for said structure image, an image value sub-range within said range, said sub-range being associated with a region of interest; and outputting at least one of a specification for said image value sub-range, a mask image associated with the sub-range and configured to indicate the region of interest, and a complementary mask image associated with the complement of said sub-range, and configured to indicate the complement of said region of interest.
An image processing system and related method. The system comprises an input interface (IN) configured for receiving an input image. A filter (FIL) of the system filters said input image to obtain a structure image from said input image, said structure image including a range of image values. A range identifier (RID) of the system identifies, based on an image histogram for the structure image, an image value sub-range within said range, the sub-range being associated with a region of interest. The system outputs, through an output interface (OUT), a specification for said image value sub-range. In addition or instead, a mask image for the region of interest or for a region of low information is output.1. An image processing system, comprising: an input interface configured for receiving an input image; a filter configured to filter said input image to obtain a structure image from said input image, said structure image including a range of image values; a range identifier configured to identify, based on an image histogram for said structure image, an image value sub-range within said range, said sub-range being associated with a region of interest; an output interface for outputting at least one of a specification for said image value sub-range, a mask image associated with the sub-range and configured to indicate the region of interest, and a complementary mask image associated with the complement of said sub-range, and configured to indicate the complement of said region of interest, and an image value range evaluator configured to compute a respective weight for the image values outside the sub-range, said weight determining a contribution of the respective image values in a visualization of the input image, wherein said weight measures a separation between at least two classes, one corresponding to the region of interest and the at least one other corresponding to the background or to at least one radio-opaque object. 2. The system of claim 1, comprising a histogram former configured to form said image histogram by forming said image histogram from image values in the structure image or a histogram transformer, the histogram former configured to form an intermediate image histogram for image values in the structure image and the histogram transformer configured to transform said intermediate image histogram into said image histogram, or to transform the input image into an intermediate image and to form the histogram from said intermediate image. 3. The system of claim 1, wherein said histogram transformer is configured to apply an area preserving interpolation when transforming the intermediate image histogram. 4. The system of claim 1, further comprising an image renderer configured to render a visualization on a display unit of said input image based on the mask for the region of interest. 5. The system of claim 1, wherein an image renderer is configured to render a visualization on a display unit of said input image whilst a contribution of the image value inside a region of low information for a contrast and/or brightness adaptation is according to said weight. 6. The system of claim 5, wherein the image renderer is configured to render a visualization on a display unit of said mask for the complement of the region of interest, with a visualization scheme that represents the weight computed by the image value range evaluator. 7. 
The system of claim 1, wherein the range identifier is configured to identify said sub-range by fitting a statistical mixture model to the image histogram or to the transformed image histogram. 8. The system of claim 7, wherein the statistical mixture model includes at least two components corresponding to the at least two classes. 9. The system of claim 8, wherein one of the components corresponds to the background or to the at least one radio-opaque object whilst the at least one other component corresponds to the region of interest including one or more anatomical structures of interest. 10. The system of claim 9, wherein the statistical mixture model includes at least three components, wherein the at least one further component corresponds to an edge structure. 11. A method for image processing, comprising: receiving an input image; filtering said input image to obtain a structure image from said input image, said structure image including a range of image values at different image locations; identifying, based on an image histogram for said structure image, an image value sub-range within said range, said sub-range being associated with a region of interest; and outputting at least one of a specification for said image value sub-range, a mask image associated with the sub-range and configured to indicate the region of interest, and a complementary mask image associated with the complement of said sub-range, and indicating the complement of said region of interest; and computing a respective weight for the image values outside the sub-range, said weight determining a contribution of the respective image values in a visualization of the input image, wherein said weight measures a separation between at least two classes, one corresponding to the region of interest and the at least one other corresponding to the background or to at least one radio-opaque object. 12-14. (canceled) 15. A non-transitory computer-readable medium having one or more executable instructions stored thereon which, when executed by at least one processor, cause the at least one processor to perform a method for image processing, the method comprising: receiving an input image; filtering said input image to obtain a structure image from said input image, said structure image including a range of image values at different image locations; identifying, based on an image histogram for said structure image, an image value sub-range within said range, said sub-range being associated with a region of interest; and outputting at least one of a specification for said image value sub-range, a mask image associated with the sub-range and configured to indicate the region of interest, and a complementary mask image associated with the complement of said sub-range, and configured to indicate the complement of said region of interest.
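Claim 1 above turns on identifying an image-value sub-range from a histogram using a weight that "measures a separation between at least two classes" (background or radio-opaque objects versus the region of interest). As an illustrative stand-in for the claimed estimator (claim 7 fits a statistical mixture model, which may behave differently), the sketch below maximizes Otsu's between-class variance over the histogram and returns the sub-range plus the corresponding mask image.

```python
# Illustrative sketch: find an image-value sub-range by maximizing a
# two-class separation measure (Otsu's between-class variance) over the
# image histogram, then derive the ROI mask image. Not the patent's exact
# estimator; a simple stand-in under that assumption.
import numpy as np

def identify_subrange(image, bins=256):
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist / hist.sum()                 # normalized histogram
    w0 = np.cumsum(p)                     # class-0 probability per threshold
    mids = 0.5 * (edges[:-1] + edges[1:])
    mu = np.cumsum(p * mids)              # cumulative first moment
    mu_t = mu[-1]
    # Between-class variance at every candidate threshold; this plays the
    # role of the "weight" measuring separation between the two classes.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    k = int(np.nanargmax(sigma_b))
    lo, hi = edges[k + 1], edges[-1]      # sub-range above the threshold
    mask = image >= lo                    # mask image indicating the ROI
    return (lo, hi), mask

img = np.random.default_rng(0).gamma(2.0, 40.0, size=(64, 64))
(sub_lo, sub_hi), roi_mask = identify_subrange(img)
print(sub_lo, sub_hi, roi_mask.mean())
```

The complementary mask of claim 1 is simply `~roi_mask`, and the per-value weight outside the sub-range could be read off `sigma_b` rather than thresholded, if a soft contribution is wanted for the contrast/brightness adaptation of claim 5.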
2,600
10,684
10,684
15,623,969
2,611
A method and apparatus for visualizing energy flows. In one embodiment, the method comprises (I) obtaining a plurality of measured energy flow values for a plurality of energy flows between a plurality of energy sources and a plurality of energy sinks, wherein at least one measured energy flow value is a measurement of energy flow from an energy source to two or more energy sinks; (II) computing a plurality of energy flow values based on the measured energy flow values and a set of energy priority allocation rules, wherein each computed energy flow value of the plurality of energy flow values represents energy flow between an energy source of the plurality of energy sources and an energy sink of the plurality of energy sinks; and (III) generating a display image representing at least one computed energy flow value of the plurality of energy flow values.
1. A method for visualizing energy flows, comprising: obtaining a plurality of measured energy flow values for a plurality of energy flows between a plurality of energy sources and a plurality of energy sinks, wherein at least one measured energy flow value of the plurality of measured energy flow values is a measurement of energy flow from an energy source of the plurality of energy sources to two or more energy sinks of the plurality of energy sinks; computing a plurality of energy flow values based on the measured energy flow values and a set of energy priority allocation rules, wherein each computed energy flow value of the plurality of energy flow values represents energy flow between an energy source of the plurality of energy sources and an energy sink of the plurality of energy sinks; and generating a display image representing at least one computed energy flow value of the plurality of energy flow values. 2. The method of claim 1, wherein the plurality of measured energy flow values comprises (i) a measurement of energy production by at least one distributed energy resource (DER) generator, (ii) a measurement of total consumption from a power grid, (iii) a measurement of charge for at least one energy storage device of the DER that is charging, and (iv) a measurement of discharge for at least one energy storage device of the DER that is discharging. 3. The method of claim 1, wherein the plurality of computed energy flow values comprises (i) a value representing energy flow from at least one distributed energy resource (DER) generator to a power grid; (ii) a value representing energy flow from the at least one DER generator to a first at least one energy storage device of the DER, (iii) a value representing energy flow from the at least one DER generator to at least one load, (iv) a value representing energy flow from a second at least one energy storage device to the at least one load, (v) a value representing energy flow from the second at least one energy storage device to the power grid, (vi) a value representing energy flow from the power grid to the at least one load, and (vii) a value representing energy flow from the power grid to the first at least one energy storage device. 4. The method of claim 1, wherein the set of energy priority allocation rules defines a priority for each energy sink of the plurality of energy sinks to receive energy generated by the plurality of energy sources. 5. The method of claim 4, wherein the set of energy priority allocation rules defines priorities for (i) energy derived from at least one distributed energy resource (DER) generator to be used first by at least one load, followed by a first at least one energy storage device of the DER, followed by a power grid, and (ii) energy derived from a second at least one energy storage device of the DER to be used first by the at least one load, followed by the power grid. 6. The method of claim 1, wherein the display image depicts amounts of energy flow from at least one energy source of the plurality of energy sources to each of at least two energy sinks of the plurality of energy sinks. 7. The method of claim 1, wherein the display image depicts amounts of energy flow to at least one energy sink of the plurality of energy sinks from each of at least two energy sources of the plurality of energy sources. 8. 
Apparatus for visualizing energy flows, comprising: a controller comprising at least one processor and an energy flow visualization module for: obtaining a plurality of measured energy flow values for a plurality of energy flows between a plurality of energy sources and a plurality of energy sinks, wherein at least one measured energy flow value of the plurality of measured energy flow values is a measurement of energy flow from an energy source of the plurality of energy sources to two or more energy sinks of the plurality of energy sinks; computing a plurality of energy flow values based on the measured energy flow values and a set of energy priority allocation rules, wherein each computed energy flow value of the plurality of energy flow values represents energy flow between an energy source of the plurality of energy sources and an energy sink of the plurality of energy sinks; and generating a display image representing at least one computed energy flow value of the plurality of energy flow values. 9. The apparatus of claim 8, wherein the plurality of measured energy flow values comprises (i) a measurement of energy production by at least one distributed energy resource (DER) generator, (ii) a measurement of total consumption from a power grid, (iii) a measurement of charge for at least one energy storage device of the DER that is charging, and (iv) a measurement of discharge for at least one energy storage device of the DER that is discharging. 10. The apparatus of claim 8, wherein the plurality of computed energy flow values comprises (i) a value representing energy flow from at least one distributed energy resource (DER) generator to a power grid; (ii) a value representing energy flow from the at least one DER generator to a first at least one energy storage device of the DER, (iii) a value representing energy flow from the at least one DER generator to at least one load, (iv) a value representing energy flow from a second at least one energy storage device to the at least one load, (v) a value representing energy flow from the second at least one energy storage device to the power grid, (vi) a value representing energy flow from the power grid to the at least one load, and (vii) a value representing energy flow from the power grid to the first at least one energy storage device. 11. The apparatus of claim 8, wherein the set of energy priority allocation rules defines a priority for each energy sink of the plurality of energy sinks to receive energy generated by the plurality of energy sources. 12. The apparatus of claim 11, wherein the set of energy priority allocation rules defines priorities for (i) energy derived from at least one distributed energy resource (DER) generator to be used first by at least one load, followed by a first at least one energy storage device of the DER, followed by a power grid, and (ii) energy derived from a second at least one energy storage device of the DER to be used first by the at least one load, followed by the power grid. 13. The apparatus of claim 8, wherein the display image depicts amounts of energy flow from at least one energy source of the plurality of energy sources to each of at least two energy sinks of the plurality of energy sinks. 14. The apparatus of claim 8, wherein the display image depicts amounts of energy flow to at least one energy sink of the plurality of energy sinks from each of at least two energy sources of the plurality of energy sources. 15. 
A computer readable medium comprising a program that, when executed by a processor, performs a method for visualizing energy flows, the method comprising: obtaining a plurality of measured energy flow values for a plurality of energy flows between a plurality of energy sources and a plurality of energy sinks, wherein at least one measured energy flow value of the plurality of measured energy flow values is a measurement of energy flow from an energy source of the plurality of energy sources to two or more energy sinks of the plurality of energy sinks; computing a plurality of energy flow values based on the measured energy flow values and a set of energy priority allocation rules, wherein each computed energy flow value of the plurality of energy flow values represents energy flow between an energy source of the plurality of energy sources and an energy sink of the plurality of energy sinks; and generating a display image representing at least one computed energy flow value of the plurality of energy flow values. 16. The computer readable medium of claim 15, wherein the plurality of measured energy flow values comprises (i) a measurement of energy production by at least one distributed energy resource (DER) generator, (ii) a measurement of total consumption from a power grid, (iii) a measurement of charge for at least one energy storage device of the DER that is charging, and (iv) a measurement of discharge for at least one energy storage device of the DER that is discharging. 17. The computer readable medium of claim 15, wherein the plurality of computed energy flow values comprises (i) a value representing energy flow from at least one distributed energy resource (DER) generator to a power grid; (ii) a value representing energy flow from the at least one DER generator to a first at least one energy storage device of the DER, (iii) a value representing energy flow from the at least one DER generator to at least one load, (iv) a value representing energy flow from a second at least one energy storage device to the at least one load, (v) a value representing energy flow from the second at least one energy storage device to the power grid, (vi) a value representing energy flow from the power grid to the at least one load, and (vii) a value representing energy flow from the power grid to the first at least one energy storage device. 18. The computer readable medium of claim 15, wherein the set of energy priority allocation rules defines priorities for (i) energy derived from at least one distributed energy resource (DER) generator to be used first by at least one load, followed by a first at least one energy storage device of the DER, followed by a power grid, and (ii) energy derived from a second at least one energy storage device of the DER to be used first by the at least one load, followed by the power grid. 19. The computer readable medium of claim 15, wherein the display image depicts amounts of energy flow from at least one energy source of the plurality of energy sources to each of at least two energy sinks of the plurality of energy sinks. 20. The computer readable medium of claim 15, wherein the display image depicts amounts of energy flow to at least one energy sink of the plurality of energy sinks from each of at least two energy sources of the plurality of energy sources.
A method and apparatus for visualizing energy flows. In one embodiment, the method comprises (I) obtaining a plurality of measured energy flow values for a plurality of energy flows between a plurality of energy sources and a plurality of energy sinks, wherein at least one measured energy flow value is a measurement of energy flow from an energy source to two or more energy sinks; (II) computing a plurality of energy flow values based on the measured energy flow values and a set of energy priority allocation rules, wherein each computed energy flow value of the plurality of energy flow values represents energy flow between an energy source of the plurality of energy sources and an energy sink of the plurality of energy sinks; and (III) generating a display image representing at least one computed energy flow value of the plurality of energy flow values.1. A method for visualizing energy flows, comprising: obtaining a plurality of measured energy flow values for a plurality of energy flows between a plurality of energy sources and a plurality of energy sinks, wherein at least one measured energy flow value of the plurality of measured energy flow values is a measurement of energy flow from an energy source of the plurality of energy sources to two or more energy sinks of the plurality of energy sinks; computing a plurality of energy flow values based on the measured energy flow values and a set of energy priority allocation rules, wherein each computed energy flow value of the plurality of energy flow values represents energy flow between an energy source of the plurality of energy sources and an energy sink of the plurality of energy sinks; and generating a display image representing at least one computed energy flow value of the plurality of energy flow values. 2. The method of claim 1, wherein the plurality of measured energy flow values comprises (i) a measurement of energy production by at least one distributed energy resource (DER) generator, (ii) a measurement of total consumption from a power grid, (iii) a measurement of charge for at least one energy storage device of the DER that is charging, and (iv) a measurement of discharge for at least one energy storage device of the DER that is discharging. 3. The method of claim 1, wherein the plurality of computed energy flow values comprises (i) a value representing energy flow from at least one distributed energy resource (DER) generator to a power grid; (ii) a value representing energy flow from the at least one DER generator to a first at least one energy storage device of the DER, (iii) a value representing energy flow from the at least one DER generator to at least one load, (iv) a value representing energy flow from a second at least one energy storage device to the at least one load, (v) a value representing energy flow from the second at least one energy storage device to the power grid, (vi) a value representing energy flow from the power grid to the at least one load, and (vii) a value representing energy flow from the power grid to the first at least one energy storage device. 4. The method of claim 1, wherein the set of energy priority allocation rules defines a priority for each energy sink of the plurality of energy sinks to receive energy generated by the plurality of energy sources. 5. 
The method of claim 4, wherein the set of energy priority allocation rules defines priorities for (i) energy derived from at least one distributed energy resource (DER) generator to be used first by at least one load, followed by a first at least one energy storage device of the DER, followed by a power grid, and (ii) energy derived from a second at least one energy storage device of the DER to be used first by the at least one load, followed by the power grid. 6. The method of claim 1, wherein the display image depicts amounts of energy flow from at least one energy source of the plurality of energy sources to each of at least two energy sinks of the plurality of energy sinks. 7. The method of claim 1, wherein the display image depicts amounts of energy flow to at least one energy sink of the plurality of energy sinks from each of at least two energy sources of the plurality of energy sources. 8. Apparatus for visualizing energy flows, comprising: a controller comprising at least one processor and an energy flow visualization module for: obtaining a plurality of measured energy flow values for a plurality of energy flows between a plurality of energy sources and a plurality of energy sinks, wherein at least one measured energy flow value of the plurality of measured energy flow values is a measurement of energy flow from an energy source of the plurality of energy sources to two or more energy sinks of the plurality of energy sinks; computing a plurality of energy flow values based on the measured energy flow values and a set of energy priority allocation rules, wherein each computed energy flow value of the plurality of energy flow values represents energy flow between an energy source of the plurality of energy sources and an energy sink of the plurality of energy sinks; and generating a display image representing at least one computed energy flow value of the plurality of energy flow values. 9. The apparatus of claim 8, wherein the plurality of measured energy flow values comprises (i) a measurement of energy production by at least one distributed energy resource (DER) generator, (ii) a measurement of total consumption from a power grid, (iii) a measurement of charge for at least one energy storage device of the DER that is charging, and (iv) a measurement of discharge for at least one energy storage device of the DER that is discharging. 10. The apparatus of claim 8, wherein the plurality of computed energy flow values comprises (i) a value representing energy flow from at least one distributed energy resource (DER) generator to a power grid; (ii) a value representing energy flow from the at least one DER generator to a first at least one energy storage device of the DER, (iii) a value representing energy flow from the at least one DER generator to at least one load, (iv) a value representing energy flow from a second at least one energy storage device to the at least one load, (v) a value representing energy flow from the second at least one energy storage device to the power grid, (vi) a value representing energy flow from the power grid to the at least one load, and (vii) a value representing energy flow from the power grid to the first at least one energy storage device. 11. The apparatus of claim 8, wherein the set of energy priority allocation rules defines a priority for each energy sink of the plurality of energy sinks to receive energy generated by the plurality of energy sources. 12. 
The apparatus of claim 11, wherein the set of energy priority allocation rules defines priorities for (i) energy derived from at least one distributed energy resource (DER) generator to be used first by at least one load, followed by a first at least one energy storage device of the DER, followed by a power grid, and (ii) energy derived from a second at least one energy storage device of the DER to be used first by the at least one load, followed by the power grid. 13. The apparatus of claim 8, wherein the display image depicts amounts of energy flow from at least one energy source of the plurality of energy sources to each of at least two energy sinks of the plurality of energy sinks. 14. The apparatus of claim 8, wherein the display image depicts amounts of energy flow to at least one energy sink of the plurality of energy sinks from each of at least two energy sources of the plurality of energy sources. 15. A computer readable medium comprising a program that, when executed by a processor, performs a method for visualizing energy flows, the method comprising: obtaining a plurality of measured energy flow values for a plurality of energy flows between a plurality of energy sources and a plurality of energy sinks, wherein at least one measured energy flow value of the plurality of measured energy flow values is a measurement of energy flow from an energy source of the plurality of energy sources to two or more energy sinks of the plurality of energy sinks; computing a plurality of energy flow values based on the measured energy flow values and a set of energy priority allocation rules, wherein each computed energy flow value of the plurality of energy flow values represents energy flow between an energy source of the plurality of energy sources and an energy sink of the plurality of energy sinks; and generating a display image representing at least one computed energy flow value of the plurality of energy flow values. 16. The computer readable medium of claim 15, wherein the plurality of measured energy flow values comprises (i) a measurement of energy production by at least one distributed energy resource (DER) generator, (ii) a measurement of total consumption from a power grid, (iii) a measurement of charge for at least one energy storage device of the DER that is charging, and (iv) a measurement of discharge for at least one energy storage device of the DER that is discharging. 17. The computer readable medium of claim 15, wherein the plurality of computed energy flow values comprises (i) a value representing energy flow from at least one distributed energy resource (DER) generator to a power grid; (ii) a value representing energy flow from the at least one DER generator to a first at least one energy storage device of the DER, (iii) a value representing energy flow from the at least one DER generator to at least one load, (iv) a value representing energy flow from a second at least one energy storage device to the at least one load, (v) a value representing energy flow from the second at least one energy storage device to the power grid, (vi) a value representing energy flow from the power grid to the at least one load, and (vii) a value representing energy flow from the power grid to the first at least one energy storage device. 18. 
The computer readable medium of claim 15, wherein the set of energy priority allocation rules defines priorities for (i) energy derived from at least one distributed energy resource (DER) generator to be used first by at least one load, followed by a first at least one energy storage device of the DER, followed by a power grid, and (ii) energy derived from a second at least one energy storage device of the DER to be used first by the at least one load, followed by the power grid. 19. The computer readable medium of claim 15, wherein the display image depicts amounts of energy flow from at least one energy source of the plurality of energy sources to each of at least two energy sinks of the plurality of energy sinks. 20. The computer readable medium of claim 15, wherein the display image depicts amounts of energy flow to at least one energy sink of the plurality of energy sinks from each of at least two energy sources of the plurality of energy sources.
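Claims 1-5 above compute per-source, per-sink flows from a handful of aggregate measurements by applying priority allocation rules. A minimal greedy allocator under the claim-5 ordering (DER generation serves the load first, then storage charging, then grid export; storage discharge serves the load first, then the grid) might look as follows; the function and argument names are my assumptions, not the patent's terminology.

```python
# Illustrative sketch: decompose the four measured values of claim 2 into
# the seven computed flows of claim 3 using greedy priority rules
# (claim-5 ordering). All identifiers are hypothetical.
def compute_flows(gen, load, charge, discharge):
    flows = {}
    g = gen
    flows["gen->load"] = min(g, load); g -= flows["gen->load"]       # rule (i), 1st
    flows["gen->storage"] = min(g, charge); g -= flows["gen->storage"]  # rule (i), 2nd
    flows["gen->grid"] = g                                           # rule (i), last
    residual_load = load - flows["gen->load"]
    d = discharge
    flows["storage->load"] = min(d, residual_load); d -= flows["storage->load"]  # rule (ii)
    flows["storage->grid"] = d
    flows["grid->load"] = residual_load - flows["storage->load"]     # grid covers the rest
    flows["grid->storage"] = charge - flows["gen->storage"]
    return flows

# e.g. 5 kW generation, 3 kW load, 4 kW commanded charge, no discharge:
# generation covers the load (3), partially charges storage (2), exports
# nothing; the grid supplies the remaining 2 kW of charging.
print(compute_flows(gen=5.0, load=3.0, charge=4.0, discharge=0.0))
```

Each returned value then maps directly onto one arrow of the display image recited in the generating step.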
2,600
10,685
10,685
15,713,544
2,626
The present disclosure generally relates to receiving a user input corresponding to a rotation of a rotatable input mechanism and, in accordance with the user input, adjusting a brightness level of a display screen during a brightening configuration session.
1. An electronic device, comprising: a display screen; an accelerometer; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a user input at the electronic device, wherein the user input is registered by the accelerometer; in accordance with a determination that the user input is received while a quiet mode is active, maintaining the display screen in an inactive state; in accordance with a determination that the user input is received while the quiet mode is inactive, activating, in response to the user input, the display screen from the inactive state; receiving an alert at the electronic device; in accordance with a determination that the alert is received while the quiet mode is active, foregoing output of an audible notification corresponding to the alert; and in accordance with a determination that the alert is received while the quiet mode is inactive, outputting the audible notification. 2. The electronic device of claim 1, the one or more programs further including instructions for: in accordance with a determination that the user input is received while the quiet mode is active and the user input is a physical input on a surface of the electronic device, activating, in response to the user input, the display screen from the inactive state. 3. The electronic device of claim 2, wherein the physical input comprises a touch input on the display screen and activating the display screen comprises turning on the display screen. 4. The electronic device of claim 2, wherein the electronic device includes a hardware button and the physical input comprises a depression of the hardware button, further wherein activating the display screen comprises turning on the display screen. 5. The electronic device of claim 2, wherein the electronic device includes a rotatable input mechanism and the physical input comprises a rotation of the rotatable input mechanism. 6. The electronic device of claim 5, wherein activating the display screen comprises increasing a brightness level of the display screen toward a predetermined brightness level at a rate that varies in accordance with the rotational velocity of the rotation at the rotatable input mechanism. 7. The electronic device of claim 1, the one or more programs further including instructions for: in accordance with the determination that the user input is received while the quiet mode is active, maintaining the quiet mode as active. 8. The electronic device of claim 1, wherein the user input and the alert are not received simultaneously by the electronic device. 9. The electronic device of claim 1, the one or more programs further including instructions for: receiving a user request to activate the quiet mode from an inactive state, wherein activating the quiet mode further comprises activating a silent mode at the electronic device. 10. The electronic device of claim 9, wherein the user request comprises a touch input on a quiet mode affordance of a control panel displayed on the display screen. 11. The electronic device of claim 10, the one or more programs further including instructions for: in response to the touch input on the quiet mode affordance, highlighting display of the quiet mode affordance and a silent mode affordance displayed on the control panel on the display screen. 12. 
The electronic device of claim 11, the one or more programs further including instructions for: receiving a second user request to deactivate the quiet mode, wherein the second user request comprises a second touch input on the highlighted quiet mode affordance; and in response to the second user request, deactivating the quiet mode and the silent mode from the active state to the inactive state and removing highlighting of the displayed quiet mode affordance and the silent mode affordance. 13. The electronic device of claim 9, the one or more programs further including instructions for: in response to the user request to activate the quiet mode, displaying an instruction screen having instructions, a confirmation affordance, and a cancellation affordance; in accordance with a determination that the confirmation affordance is selected, removing display of the instruction screen and activating the quiet mode; and in accordance with a determination that the cancellation affordance is selected, removing display of the instruction screen and foregoing activation of the quiet mode. 14. The electronic device of claim 1, wherein activating the display screen comprises displaying a user interface having a quiet mode icon that indicates the quiet mode is active. 15. A method, comprising: at an electronic device with an accelerometer and a display screen: receiving a user input at the electronic device, wherein the user input is registered by the accelerometer; in accordance with a determination that the user input is received while a quiet mode is active, maintaining the display screen in an inactive state; in accordance with a determination that the user input is received while the quiet mode is inactive, activating, in response to the user input, the display screen from the inactive state; receiving an alert at the electronic device; in accordance with a determination that the alert is received while the quiet mode is active, foregoing output of an audible notification corresponding to the alert; and in accordance with a determination that the alert is received while the quiet mode is inactive, outputting the audible notification. 16. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display screen and an accelerometer, cause the device to: receive a user input at the electronic device, wherein the user input is registered by the accelerometer; in accordance with a determination that the user input is received while a quiet mode is active, maintain the display screen in an inactive state; in accordance with a determination that the user input is received while the quiet mode is inactive, activate, in response to the user input, the display screen from the inactive state; receive an alert at the electronic device; in accordance with a determination that the alert is received while the quiet mode is active, forego output of an audible notification corresponding to the alert; and in accordance with a determination that the alert is received while the quiet mode is inactive, output the audible notification.
The present disclosure generally relates to receiving a user input corresponding to a rotation of a rotatable input mechanism and, in accordance with the user input, adjusting a brightness level of a display screen during a brightening configuration session.1. An electronic device, comprising: a display screen; an accelerometer; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: receiving a user input at the electronic device, wherein the user input is registered by the accelerometer; in accordance with a determination that the user input is received while a quiet mode is active, maintaining the display screen in an inactive state; in accordance with a determination that the user input is received while the quiet mode is inactive, activating, in response to the user input, the display screen from the inactive state; receiving an alert at the electronic device; in accordance with a determination that the alert is received while the quiet mode is active, foregoing output of an audible notification corresponding to the alert; and in accordance with a determination that the alert is received while the quiet mode is inactive, outputting the audible notification. 2. The electronic device of claim 1, the one or more programs further including instructions for: in accordance with a determination that the user input is received while the quiet mode is active and the user input is a physical input on a surface of the electronic device, activating, in response to the user input, the display screen from the inactive state. 3. The electronic device of claim 2, wherein the physical input comprises a touch input on the display screen and activating the display screen comprises turning on the display screen. 4. The electronic device of claim 2, wherein the electronic device includes a hardware button and the physical input comprises a depression of the hardware button, further wherein activating the display screen comprises turning on the display screen. 5. The electronic device of claim 2, wherein the electronic device includes a rotatable input mechanism and the physical input comprises a rotation of the rotatable input mechanism. 6. The electronic device of claim 5, wherein activating the display screen comprises increasing a brightness level of the display screen toward a predetermined brightness level at a rate that varies in accordance with the rotational velocity of the rotation at the rotatable input mechanism. 7. The electronic device of claim 1, the one or more programs further including instructions for: in accordance with the determination that the user input is received while the quiet mode is active, maintaining the quiet mode as active. 8. The electronic device of claim 1, wherein the user input and the alert are not received simultaneously by the electronic device. 9. The electronic device of claim 1, the one or more programs further including instructions for: receiving a user request to activate the quiet mode from an inactive state, wherein activating the quiet mode further comprises activating a silent mode at the electronic device. 10. The electronic device of claim 9, wherein the user request comprises a touch input on a quiet mode affordance of a control panel displayed on the display screen. 11. 
The electronic device of claim 10, the one or more programs further including instructions for: in response to the touch input on the quiet mode affordance, highlighting display of the quiet mode affordance and a silent mode affordance displayed on the control panel on the display screen. 12. The electronic device of claim 11, the one or more programs further including instructions for: receiving a second user request to deactivate the quiet mode, wherein the second user request comprises a second touch input on the highlighted quiet mode affordance; and in response to the second user request, deactivating the quiet mode and the silent mode from the active state to the inactive state and removing highlighting of the displayed quiet mode affordance and the silent mode affordance. 13. The electronic device of claim 9, the one or more programs further including instructions for: in response to the user request to activate the quiet mode, displaying an instruction screen having instructions, a confirmation affordance, and a cancellation affordance; in accordance with a determination that the confirmation affordance is selected, removing display of the instruction screen and activating the quiet mode; and in accordance with a determination that the cancellation affordance is selected, removing display of the instruction screen and foregoing activation of the quiet mode. 14. The electronic device of claim 1, wherein activating the display screen comprises displaying a user interface having a quiet mode icon that indicates the quiet mode is active. 15. A method, comprising: at an electronic device with an accelerometer and a display screen: receiving a user input at the electronic device, wherein the user input is registered by the accelerometer; in accordance with a determination that the user input is received while a quiet mode is active, maintaining the display screen in an inactive state; in accordance with a determination that the user input is received while the quiet mode is inactive, activating, in response to the user input, the display screen from the inactive state; receiving an alert at the electronic device; in accordance with a determination that the alert is received while the quiet mode is active, foregoing output of an audible notification corresponding to the alert; and in accordance with a determination that the alert is received while the quiet mode is inactive, outputting the audible notification. 16. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display screen and an accelerometer, cause the device to: receive a user input at the electronic device, wherein the user input is registered by the accelerometer; in accordance with a determination that the user input is received while a quiet mode is active, maintain the display screen in an inactive state; in accordance with a determination that the user input is received while the quiet mode is inactive, activate, in response to the user input, the display screen from the inactive state; receive an alert at the electronic device; in accordance with a determination that the alert is received while the quiet mode is active, forego output of an audible notification corresponding to the alert; and in accordance with a determination that the alert is received while the quiet mode is inactive, output the audible notification.
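The device claims above reduce to a small decision table: an accelerometer-registered input (e.g. a wrist raise) wakes the display only when quiet mode is inactive, a physical input (touch, hardware button, crown rotation) wakes it regardless (claim 2), and an alert is audible only when quiet mode is inactive. A sketch of that table follows; the enum and function names are mine, not the patent's.

```python
# Illustrative sketch of the claim-1/claim-2 decision table; identifiers
# are hypothetical.
from enum import Enum, auto

class Input(Enum):
    ACCELEROMETER = auto()   # e.g. registered wrist raise
    PHYSICAL = auto()        # touch, hardware button, rotatable input

def handle_input(kind: Input, quiet_mode: bool) -> str:
    # Claim 1: accelerometer input in quiet mode leaves the display off.
    if kind is Input.ACCELEROMETER and quiet_mode:
        return "display stays inactive"
    # Claim 1 (quiet mode off) and claim 2 (physical input in quiet mode).
    return "display activated"

def handle_alert(quiet_mode: bool) -> str:
    # Claim 1: forego the audible notification while quiet mode is active.
    return "no audible notification" if quiet_mode else "audible notification"

assert handle_input(Input.ACCELEROMETER, quiet_mode=True) == "display stays inactive"
assert handle_input(Input.PHYSICAL, quiet_mode=True) == "display activated"
assert handle_alert(quiet_mode=False) == "audible notification"
```

Per claim 7, note that waking the display via a physical input does not itself deactivate quiet mode; the mode flag stays set until the user request of claim 12.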
2,600
10,686
10,686
16,297,662
2,651
A method provides binaural sound to a listener while the listener watches a movie so sounds from the movie localize to a location of a character in the movie. Sound is convolved with head related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener who wears a wearable electronic device.
1.-20. (canceled) 21. A method that provides binaural sound to a listener watching a feature length movie in virtual reality (VR), the method comprising: displaying, with a head mounted display (HMD) worn by the listener, a VR person seated in a VR seat in a VR movie theater watching the feature length movie on a VR movie screen; determining a distance and an angle from the VR seat where the VR person is seated to the VR movie screen; and processing, with one or more processors and based on the distance and the angle, sound from the feature length movie so the sound externally localizes to the listener as the binaural sound with a sound localization point (SLP) in empty space at the VR movie screen. 22. The method of claim 21 further comprising: processing, with the one or more processors and based on the distance and the angle, the sound such that a first sound externally localizes as the binaural sound with a first SLP in empty space at a first location on the VR movie screen and a second sound externally localizes as the binaural sound with a second SLP in empty space at a second location on the VR movie screen. 23. The method of claim 21 further comprising: determining a position of the VR seat with respect to an image that appears on the VR movie screen during the feature length movie; and selecting, based on the position of the VR seat with respect to the image that appears on the VR movie screen, head-related transfer functions (HRTFs) to process the sound so the sound originates to the listener from a SLP at the image that appears on the VR movie screen. 24. The method of claim 21 further comprising: tracking, with the HMD, a head orientation of the listener with respect to an image that appears on the VR movie screen during the feature length movie; and processing, with the one or more processors and based on the head orientation, the sound so the SLP of the sound follows the image as the image moves across the VR movie screen. 25. The method of claim 21 further comprising: processing, with the one or more processors, a voice of a character appearing on the VR movie screen such that the voice of the character originates from a location where the character is displayed on the VR movie screen such that from a point-of-view of the listener the voice of the character externally localizes as the binaural sound to the location of the character on the VR movie screen. 26. The method of claim 21 further comprising: processing, with the one or more processors, a voice of a character appearing on the VR movie screen as the character moves across the VR movie screen such that a SLP of the voice of the character moves so the listener continues to hear the voice of the character from a location of the character as the character moves across the VR movie screen. 27. The method of claim 21 further comprising: enabling the listener to be immersed in a space of the feature length movie by processing, with the one or more processors, the sound from the feature length movie so SLPs of the sound occur at locations around a head of the listener as if the listener were at a location in a scene of the feature length movie. 28. The method of claim 21 further comprising: enabling the listener to be immersed in a space of the feature length movie by processing, with the one or more processors, the sound from the feature length movie so the listener hears the sounds from a point-of-view of a character in a scene of the feature length movie. 29. 
A non-transitory computer-readable storage medium that stores instructions that one or more electronic devices execute as a method that provides three-dimensional (3D) sound to a listener watching a feature length movie in virtual reality (VR), the method comprising: displaying, with a head mounted display (HMD) worn by the listener, the listener seated in a VR seat in a VR movie theater that plays the feature length movie on a VR movie screen; determining an angle from the VR seat where the listener is seated to the VR movie screen; and processing, based on the angle, sound from the feature length movie so the sound externally localizes to the listener as the 3D sound with a sound localization point (SLP) in empty space at the VR movie screen. 30. The non-transitory computer-readable storage medium of claim 29 further comprising: determining a distance from the VR seat where the listener is seated to the VR movie screen; and automatically adjusting a volume of the sound based on the distance. 31. The non-transitory computer-readable storage medium of claim 29 further comprising: determining head orientations of the listener with respect to the VR movie screen; calculating, from the head orientations, an azimuth angle between a line-of-sight of the listener and a location of a character on the VR movie screen; and processing, based on the azimuth angle between the line-of-sight of the listener and the location where the character is on the VR movie screen, a voice of the character with head-related transfer functions (HRTFs) so the voice of the character externally localizes as the 3D sound in empty space to a location of the character on the VR movie screen. 32. The non-transitory computer-readable storage medium of claim 29 further comprising: determining a size and shape of the VR movie theater where the listener watches the feature length movie; and processing the sound with room impulse responses (RIRs) based on the size and the shape of the VR movie theater. 33. The non-transitory computer-readable storage medium of claim 29 further comprising: providing the feature length movie to multiple listeners seated at different locations in the VR movie theater with different angles to the VR movie screen; and processing the sound differently for each of the multiple listeners watching the feature length movie in the VR movie theater based on the different locations and the different angles where each of the multiple listeners are seated in the VR movie theater. 34. The non-transitory computer-readable storage medium of claim 29 further comprising: tracking head movements of the listener with respect to a location of a character on the VR movie screen of the feature length movie; and changing, based on the head movements with respect to the location of the character on the VR movie screen, transfer functions processing a voice of the character so the voice of the character continues to emanate from a SLP at the location on the VR movie screen where the character is located while the head movements of the listener change. 35. The non-transitory computer-readable storage medium of claim 29 further comprising: distinguishing a voice of a narrator in the feature length movie from a voice of a character in the feature length movie by providing the voice of the narrator in stereo sound that internally localizes to the listener and the voice of the character as the 3D sound that externally localizes to the listener. 36. 
The non-transitory computer-readable storage medium of claim 29 further comprising: processing sound from the feature length movie so some of the sound externally localizes to the listener as the 3D sound with the SLP in empty space at the VR movie screen and some of the sound externally localizes to the listener as 3D sound with a SLP in empty space behind a head of the listener. 37. A method that provides binaural sound to a listener watching a movie in virtual reality (VR), the method comprising: displaying, with a head mounted display (HMD) worn by the listener, the listener at a VR seat in a VR movie theater playing the movie on a VR movie screen; tracking, with the HMD, head orientations of the listener seated in the VR seat with respect to the VR movie screen; and processing, with a digital signal processor (DSP) in the HMD and based on the head orientations, head-related transfer functions (HRTFs) with sound of the movie so the sound externally localizes as the binaural sound in empty space at the VR movie screen. 38. The method of claim 37 further comprising: determining an angle from the VR seat where the listener is seated to the VR movie screen; and selecting, based on the angle and the head orientations, the HRTFs so the sound externally localizes as the binaural sound in empty space at the VR movie screen. 39. The method of claim 37 further comprising: receiving, at the HMD and from the listener, selection of a character in the movie; and processing, with the DSP, the sound of the movie so the listener hears the sound from a point-of-view of the character as if the listener were at locations of the character in the movie as the character moves about in scenes in the movie. 40. The method of claim 37 further comprising: providing, with the HMD, the listener with a list of different characters in the movie that are available as audial points-of-view such that when the listener selects one of the different characters then the listener hears the sound from a point-of-view of the one of the different characters that the listener selected.
A method provides binaural sound to a listener while the listener watches a movie so sounds from the movie localize to a location of a character in the movie. Sound is convolved with head related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener who wears a wearable electronic device.1.-20. (canceled) 21. A method that provides binaural sound to a listener watching a feature length movie in virtual reality (VR), the method comprising: displaying, with a head mounted display (HMD) worn by the listener, a VR person seated in a VR seat in a VR movie theater watching the feature length movie on a VR movie screen; determining a distance and an angle from the VR seat where the VR person is seated to the VR movie screen; and processing, with one or more processors and based on the distance and the angle, sound from the feature length movie so the sound externally localizes to the listener as the binaural sound with a sound localization point (SLP) in empty space at the VR movie screen. 22. The method of claim 21 further comprising: processing, with the one or more processors and based on the distance and the angle, the sound such that a first sound externally localizes as the binaural sound with a first SLP in empty space at a first location on the VR movie screen and a second sound externally localizes as the binaural sound with a second SLP in empty space at a second location on the VR movie screen. 23. The method of claim 21 further comprising: determining a position of the VR seat with respect to an image that appears on the VR movie screen during the feature length movie; and selecting, based on the position of the VR seat with respect to the image that appears on the VR movie screen, head-related transfer functions (HRTFs) to process the sound so the sound originates to the listener from a SLP at the image that appears on the VR movie screen. 24. The method of claim 21 further comprising: tracking, with the HMD, a head orientation of the listener with respect to an image that appears on the VR movie screen during the feature length movie; and processing, with the one or more processors and based on the head orientation, the sound so the SLP of the sound follows the image as the image moves across the VR movie screen. 25. The method of claim 21 further comprising: processing, with the one or more processors, a voice of a character appearing on the VR movie screen such that the voice of the character originates from a location where the character is displayed on the VR movie screen such that from a point-of-view of the listener the voice of the character externally localizes as the binaural sound to the location of the character on the VR movie screen. 26. The method of claim 21 further comprising: processing, with the one or more processors, a voice of a character appearing on the VR movie screen as the character moves across the VR movie screen such that a SLP of the voice of the character moves so the listener continues to hear the voice of the character from a location of the character as the character moves across the VR movie screen. 27. The method of claim 21 further comprising: enabling the listener to be immersed in a space of the feature length movie by processing, with the one or more processors, the sound from the feature length movie so SLPs of the sound occur at locations around a head of the listener as if the listener were at a location in a scene of the feature length movie. 28. 
The method of claim 21 further comprising: enabling the listener to be immersed in a space of the feature length movie by processing, with the one or more processors, the sound from the feature length movie so the listener hears the sound from a point-of-view of a character in a scene of the feature length movie. 29. A non-transitory computer-readable storage medium that stores instructions that one or more electronic devices execute as a method that provides three-dimensional (3D) sound to a listener watching a feature length movie in virtual reality (VR), the method comprising: displaying, with a head mounted display (HMD) worn by the listener, the listener seated in a VR seat in a VR movie theater that plays the feature length movie on a VR movie screen; determining an angle from the VR seat where the listener is seated to the VR movie screen; and processing, based on the angle, sound from the feature length movie so the sound externally localizes to the listener as the 3D sound with a sound localization point (SLP) in empty space at the VR movie screen. 30. The non-transitory computer-readable storage medium of claim 29 further comprising: determining a distance from the VR seat where the listener is seated to the VR movie screen; and automatically adjusting a volume of the sound based on the distance. 31. The non-transitory computer-readable storage medium of claim 29 further comprising: determining head orientations of the listener with respect to the VR movie screen; calculating, from the head orientations, an azimuth angle between a line-of-sight of the listener and a location of a character on the VR movie screen; and processing, based on the azimuth angle between the line-of-sight of the listener and the location where the character is on the VR movie screen, a voice of the character with head-related transfer functions (HRTFs) so the voice of the character externally localizes as the 3D sound in empty space to a location of the character on the VR movie screen. 32. The non-transitory computer-readable storage medium of claim 29 further comprising: determining a size and shape of the VR movie theater where the listener watches the feature length movie; and processing the sound with room impulse responses (RIRs) based on the size and the shape of the VR movie theater. 33. The non-transitory computer-readable storage medium of claim 29 further comprising: providing the feature length movie to multiple listeners seated at different locations in the VR movie theater with different angles to the VR movie screen; and processing the sound differently for each of the multiple listeners watching the feature length movie in the VR movie theater based on the different locations and the different angles where each of the multiple listeners is seated in the VR movie theater. 34. The non-transitory computer-readable storage medium of claim 29 further comprising: tracking head movements of the listener with respect to a location of a character on the VR movie screen of the feature length movie; and changing, based on the head movements with respect to the location of the character on the VR movie screen, transfer functions processing a voice of the character so the voice of the character continues to emanate from a SLP at the location on the VR movie screen where the character is located while the head movements of the listener change. 35. 
The non-transitory computer-readable storage medium of claim 29 further comprising: distinguishing a voice of a narrator in the feature length movie from a voice of a character in the feature length movie by providing the voice of the narrator in stereo sound that internally localizes to the listener and the voice of the character as the 3D sound that externally localizes to the listener. 36. The non-transitory computer-readable storage medium of claim 29 further comprising: processing sound from the feature length movie so some of the sound externally localizes to the listener as the 3D sound with the SLP in empty space at the VR movie screen and some of the sound externally localizes to the listener as 3D sound with a SLP in empty space behind a head of the listener. 37. A method that provides binaural sound to a listener watching a movie in virtual reality (VR), the method comprising: displaying, with a head mounted display (HMD) worn by the listener, the listener at a VR seat in a VR movie theater playing the movie on a VR movie screen; tracking, with the HMD, head orientations of the listener seated in the VR seat with respect to the VR movie screen; and processing, with a digital signal processor (DSP) in the HMD and based on the head orientations, head-related transfer functions (HRTFs) with sound of the movie so the sound externally localizes as the binaural sound in empty space at the VR movie screen. 38. The method of claim 37 further comprising: determining an angle from the VR seat where the listener is seated to the VR movie screen; and selecting, based on the angle and the head orientations, the HRTFs so the sound externally localizes as the binaural sound in empty space at the VR movie screen. 39. The method of claim 37 further comprising: receiving, at the HMD and from the listener, selection of a character in the movie; and processing, with the DSP, the sound of the movie so the listener hears the sound from a point-of-view of the character as if the listener were at locations of the character in the movie as the character moves about in scenes in the movie. 40. The method of claim 37 further comprising: providing, with the HMD, the listener with a list of different characters in the movie that are available as audial points-of-view such that when the listener selects one of the different characters then the listener hears the sound from a point-of-view of the one of the different characters that the listener selected.
2,600
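The claims in this record turn on selecting HRTFs from the seat-to-screen angle and convolving the movie audio with them. The following Python is a minimal illustrative sketch of that idea, not the applicant's implementation: it assumes NumPy/SciPy are available, and the HRTF bank, sample rate, and filter lengths are placeholder assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, azimuth_deg, hrtf_bank):
    """Convolve a mono movie-sound stream with the HRTF pair whose
    measured azimuth lies nearest the seat-to-screen angle."""
    # HRTF sets are sampled at discrete azimuths; pick the nearest one.
    nearest = min(hrtf_bank, key=lambda az: abs(az - azimuth_deg))
    left_ir, right_ir = hrtf_bank[nearest]
    # One convolution per ear yields the externally localizing pair.
    return fftconvolve(mono, left_ir), fftconvolve(mono, right_ir)

# Placeholder data: 1 s of noise and random 128-tap "HRTFs" every 5 degrees.
fs = 48_000
mono = np.random.randn(fs)
bank = {az: (np.random.randn(128), np.random.randn(128))
        for az in range(-90, 91, 5)}
left, right = binaural_render(mono, azimuth_deg=30.0, hrtf_bank=bank)
```

Re-running the selection as the tracked head orientation changes is what keeps the SLP pinned to the screen or to a moving character, as the dependent claims describe.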
10,687
10,687
15,688,294
2,616
Technologies for selectively augmenting communications transmitted by a communication device include a communication device configured to acquire new user environment information relating to the environment of the user if such new user environment information becomes available. The communication device is further configured to create one or more user environment indicators based on the new user environment information, to display the one or more created user environment indicators via a display of the communication device, and to include the created user environment indicator in a communication to be transmitted by the communication device if the created user environment indicator is selected for inclusion in the communication.
1. A communication device for selectively augmenting communications, the communication device comprising: a display, a user environment information acquiring module to acquire user environment information relating to an environment of a user of the communication device, a user environment indicator creation module to create one or more user environment indicators based on acquired user environment information, a user environment indicator display module to display the one or more created user environment indicators via the display, and a user environment indicator selection module to include a selected one or ones of the one or more displayed user environment indicators in a communication to be transmitted by the communication device.
Technologies for selectively augmenting communications transmitted by a communication device include a communication device configured to acquire new user environment information relating to the environment of the user if such new user environment information becomes available. The communication device is further configured to create one or more user environment indicators based on the new user environment information, to display the one or more created user environment indicators via a display of the communication device, and to include the created user environment indicator in a communication to be transmitted by the communication device if the created user environment indicator is selected for inclusion in the communication.1. A communication device for selectively augmenting communications, the communication device comprising: a display, a user environment information acquiring module to acquire user environment information relating to an environment of a user of the communication device, a user environment indicator creation module to create one or more user environment indicators based on acquired user environment information, a user environment indicator display module to display the one or more created user environment indicators via the display, and a user environment indicator selection module to include a selected one or ones of the one or more displayed user environment indicators in a communication to be transmitted by the communication device.
2,600
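Claim 1 of this record is a pipeline of four modules: acquire environment information, create indicators from it, display them, and include only the user-selected indicators in an outgoing communication. Below is a minimal behavioral sketch of that chain; all class and method names are invented for illustration and do not come from the application.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentIndicator:
    label: str          # e.g. "rainy", "on a train"
    selected: bool = False

class CommunicationAugmenter:
    """Behavioral sketch of the claimed four-module chain."""
    def __init__(self):
        self.indicators = []

    def acquire(self, environment_info):
        # Acquiring + creation modules: one indicator per new piece of info.
        self.indicators = [EnvironmentIndicator(label) for label in environment_info]
        return self.indicators          # the display module would render these

    def compose(self, message, chosen_labels):
        # Selection module: include only the indicators the user picked.
        for ind in self.indicators:
            ind.selected = ind.label in chosen_labels
        tags = " ".join(f"[{i.label}]" for i in self.indicators if i.selected)
        return f"{message} {tags}".strip()

aug = CommunicationAugmenter()
aug.acquire(["rainy", "on a train"])
print(aug.compose("Running late!", {"rainy"}))   # -> Running late! [rainy]
```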
10,688
10,688
15,659,790
2,626
A pixel circuit includes a liquid crystal capacitor, a memory circuit, a driving circuit, a mode-switching circuit, and a control circuit. The memory circuit is configured to store a status signal. The driving circuit includes a first terminal configured to receive a data voltage and a second terminal electrically coupled to a first terminal of the liquid crystal capacitor, and the driving circuit is configured to be selectively ON or OFF according to a scan signal. The mode-switching circuit is configured to be selectively ON or OFF according to a mode-switching signal. The control circuit is electrically coupled to the mode-switching circuit at a first node, and is configured to control the voltage level of the first node corresponding to the status signal, and output a display voltage to the liquid crystal capacitor via the mode-switching circuit when the mode-switching circuit is ON.
1. A pixel circuit, comprising: a liquid crystal capacitor, comprising a first liquid crystal terminal; a memory circuit, for storing a status signal; a driving circuit, comprising a first terminal for receiving a data voltage and a second terminal electrically connecting to the first liquid crystal terminal of the liquid crystal capacitor, wherein the driving circuit is controlled according to a scan signal; a mode-switching circuit, for receiving and controlled by a mode-switching signal, wherein said mode-switching circuit comprises a first node; and a control circuit, electrically connecting to the first node, controlling a voltage level of the first node corresponding to the status signal, and outputting a display voltage to the liquid crystal capacitor via the mode-switching circuit when the mode-switching circuit is ON. 2. The pixel circuit according to claim 1, wherein the control circuit comprises: a first transistor, comprising: a first terminal, for receiving a drive voltage; and a second terminal, electrically connecting to the first node; and a first control terminal, for receiving the status signal; and a second transistor, comprising: a third terminal, electrically connected to the first node; and a fourth terminal, electrically connected to a second terminal of the liquid crystal capacitor; and a second control terminal, for receiving an inverted phase signal having a phase inverted from that of the status signal. 3. The pixel circuit according to claim 1, wherein the driving circuit comprises: a third transistor, comprising: a fifth terminal, electrically connected to a data line, for receiving the data voltage; a sixth terminal; and a third control terminal, electrically connecting to a scan line, for receiving the scan signal; and a fourth transistor, comprising: a seventh terminal, electrically connecting to the sixth terminal of the third transistor; an eighth terminal, electrically connecting to the first liquid crystal terminal; and a fourth control terminal, electrically connecting to the scan line, for receiving the scan signal. 4. The pixel circuit according to claim 3, wherein the mode-switching circuit comprises: a fifth transistor, comprising: a ninth terminal, electrically connecting to the fifth terminal of the third transistor; a tenth terminal, electrically connecting to the memory circuit; and a fifth control terminal, for receiving the mode-switching signal; and a sixth transistor, comprising: an eleventh terminal, electrically connecting to the first liquid crystal terminal; a twelfth terminal, electrically connecting to the first node; and a sixth control terminal, for receiving the mode-switching signal. 5. The pixel circuit according to claim 2, wherein the memory circuit comprises: a first inverter, comprising: a first input terminal, electrically connecting to the first control terminal of the first transistor; and a first output terminal, electrically connected to the second control terminal of the second transistor, for providing the inverted phase signal; and a second inverter, comprising: a second input terminal, electrically connected to the first output terminal of the first inverter; and a second output terminal, electrically connected to the first input terminal of the first inverter. 6. 
The pixel circuit according to claim 5, wherein the memory circuit further comprises a seventh transistor comprising a thirteenth terminal and a fourteenth terminal, the thirteenth terminal of the seventh transistor electrically connects to the first input terminal of the first inverter, and the fourteenth terminal of the seventh transistor electrically connects to the second output terminal of the second inverter. 7. The pixel circuit according to claim 5, wherein the memory circuit further comprises a resistor, and the resistor electrically connects between the first input terminal of the first inverter and the second output terminal of the second inverter. 8. The pixel circuit according to claim 2, wherein the driving circuit comprises: a third transistor, comprising: a fifth terminal, electrically connecting to a data line, for receiving the data voltage; a sixth terminal; and a third control terminal, electrically connecting to a scan line, for receiving the scan signal; and the mode-switching circuit comprises: a fourth transistor, comprising: a seventh terminal, electrically connecting to the sixth terminal of the third transistor; an eighth terminal, electrically connecting to the memory circuit; and a fourth control terminal, for receiving the mode-switching signal; and a fifth transistor, comprising: a ninth terminal, electrically connecting to the first liquid crystal terminal of the liquid crystal capacitor; a tenth terminal, electrically connecting to the first node; and a fifth control terminal, for receiving a second mode-switching signal. 9. The pixel circuit according to claim 1, wherein when the pixel circuit operates in a first mode, the mode-switching circuit is OFF, the liquid crystal capacitor receives the data voltage via the driving circuit; and wherein when the pixel circuit operates in a second mode, the mode-switching circuit is ON, and the liquid crystal capacitor receives the display voltage via the mode-switching circuit and the control circuit. 10. The pixel circuit according to claim 9, wherein the liquid crystal capacitor further comprises a second liquid crystal terminal with a second liquid crystal terminal voltage, and the first liquid crystal terminal has a first liquid crystal terminal voltage; wherein when the pixel circuit operates in the second mode and the status signal is at a first level, the first liquid crystal terminal voltage is different from the second liquid crystal terminal voltage; and wherein when the status signal is at a second level, the first liquid crystal terminal voltage is the same as the second liquid crystal terminal voltage. 11. The pixel circuit according to claim 9, wherein when the pixel circuit switches from the first mode to the second mode and the mode-switching circuit is ON, the memory circuit stores the status signal according to the data voltage. 12. The pixel circuit according to claim 9, wherein when the pixel circuit switches from the first mode to the second mode and the driving circuit is ON according to the scan signal, the memory circuit updates the stored status signal according to the data voltage.
A pixel circuit includes a liquid crystal capacitor, a memory circuit, a driving circuit, a mode-switching circuit, and a control circuit. The memory circuit is configured to store a status signal. The driving circuit includes a first terminal configured to receive a data voltage and a second terminal electrically coupled to a first terminal of the liquid crystal capacitor, and the driving circuit is configured to be selectively ON or OFF according to a scan signal. The mode-switching circuit is configured to be selectively ON or OFF according to a mode-switching signal. The control circuit is electrically coupled to the mode-switching circuit at a first node, and is configured to control the voltage level of the first node corresponding to the status signal, and output a display voltage to the liquid crystal capacitor via the mode-switching circuit when the mode-switching circuit is ON.1. A pixel circuit, comprising: a liquid crystal capacitor, comprising a first liquid crystal terminal; a memory circuit, for storing a status signal; a driving circuit, comprising a first terminal for receiving a data voltage and a second terminal electrically connecting to the first liquid crystal terminal of the liquid crystal capacitor, wherein the driving circuit is controlled according to a scan signal; a mode-switching circuit, for receiving and controlled by a mode-switching signal, wherein said mode-switching circuit comprises a first node; and a control circuit, electrically connecting to the first node, controlling a voltage level of the first node corresponding to the status signal, and outputting a display voltage to the liquid crystal capacitor via the mode-switching circuit when the mode-switching circuit is ON. 2. The pixel circuit according to claim 1, wherein the control circuit comprises: a first transistor, comprising: a first terminal, for receiving a drive voltage; and a second terminal, electrically connecting to the first node; and a first control terminal, for receiving the status signal; and a second transistor, comprising: a third terminal, electrically connected to the first node; and a fourth terminal, electrically connected to a second terminal of the liquid crystal capacitor; and a second control terminal, for receiving an inverted phase signal having a phase inverted from that of the status signal. 3. The pixel circuit according to claim 1, wherein the driving circuit comprises: a third transistor, comprising: a fifth terminal, electrically connected to a data line, for receiving the data voltage; a sixth terminal; and a third control terminal, electrically connecting to a scan line, for receiving the scan signal; and a fourth transistor, comprising: a seventh terminal, electrically connecting to the sixth terminal of the third transistor; an eighth terminal, electrically connecting to the first liquid crystal terminal; and a fourth control terminal, electrically connecting to the scan line, for receiving the scan signal. 4. 
The pixel circuit according to claim 3, wherein the mode-switching circuit comprises: a fifth transistor, comprising: a ninth terminal, electrically connecting to the fifth terminal of the third transistor; a tenth terminal, electrically connecting to the memory circuit; and a fifth control terminal, for receiving the mode-switching signal; and a sixth transistor, comprising: an eleventh terminal, electrically connecting to the first liquid crystal terminal; a twelfth terminal, electrically connecting to the first node; and a sixth control terminal, for receiving the mode-switching signal. 5. The pixel circuit according to claim 2, wherein the memory circuit comprises: a first inverter, comprising: a first input terminal, electrically connecting to the first control terminal of the first transistor; and a first output terminal, electrically connected to the second control terminal of the second transistor, for providing the inverted phase signal; and a second inverter, comprising: a second input terminal, electrically connected to the first output terminal of the first inverter; and a second output terminal, electrically connected to the first input terminal of the first inverter. 6. The pixel circuit according to claim 5, wherein the memory circuit further comprises a seventh transistor comprising a thirteenth terminal and a fourteenth terminal, the thirteenth terminal of the seventh transistor electrically connects to the first input terminal of the first inverter, and the fourteenth terminal of the seventh transistor electrically connects to the second output terminal of the second inverter. 7. The pixel circuit according to claim 5, wherein the memory circuit further comprises a resistor, and the resistor electrically connects between the first input terminal of the first inverter and the second output terminal of the second inverter. 8. The pixel circuit according to claim 2, wherein the driving circuit comprises: a third transistor, comprising: a fifth terminal, electrically connecting to a data line, for receiving the data voltage; a sixth terminal; and a third control terminal, electrically connecting to a scan line, for receiving the scan signal; and the mode-switching circuit comprises: a fourth transistor, comprising: a seventh terminal, electrically connecting to the sixth terminal of the third transistor; an eighth terminal, electrically connecting to the memory circuit; and a fourth control terminal, for receiving the mode-switching signal; and a fifth transistor, comprising: a ninth terminal, electrically connecting to the first liquid crystal terminal of the liquid crystal capacitor; a tenth terminal, electrically connecting to the first node; and a fifth control terminal, for receiving a second mode-switching signal. 9. The pixel circuit according to claim 1, wherein when the pixel circuit operates in a first mode, the mode-switching circuit is OFF, the liquid crystal capacitor receives the data voltage via the driving circuit; and wherein when the pixel circuit operates in a second mode, the mode-switching circuit is ON, and the liquid crystal capacitor receives the display voltage via the mode-switching circuit and the control circuit. 10. 
The pixel circuit according to claim 9, wherein the liquid crystal capacitor further comprises a second liquid crystal terminal with a second liquid crystal terminal voltage, and the first liquid crystal terminal has a first liquid crystal terminal voltage; wherein when the pixel circuit operates in the second mode and the status signal is at a first level, the first liquid crystal terminal voltage is different from the second liquid crystal terminal voltage; and wherein when the status signal is at a second level, the first liquid crystal terminal voltage is the same as the second liquid crystal terminal voltage. 11. The pixel circuit according to claim 9, wherein when the pixel circuit switches from the first mode to the second mode and the mode-switching circuit is ON, the memory circuit stores the status signal according to the data voltage. 12. The pixel circuit according to claim 9, wherein when the pixel circuit switches from the first mode to the second mode and the driving circuit is ON according to the scan signal, the memory circuit updates the stored status signal according to the data voltage.
2,600
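Claims 9 and 10 of this record describe two drive paths: in the first mode the capacitor takes the scanned data voltage, and in the second (memory) mode the stored status bit steers the control circuit to output either a drive level or the common (second-terminal) voltage. The Python below is only a behavioral sketch of that selection logic; the mode names and voltage values are assumed for illustration, not taken from the specification.

```python
def pixel_voltage(mode, data_voltage, drive_voltage, common_voltage, status):
    """Behavioral model of the claimed pixel's two drive paths.

    mode: "normal" (mode-switching circuit OFF, driving circuit scans
    in the data voltage) or "memory" (mode-switching circuit ON, the
    control circuit picks the display voltage from the stored status).
    """
    if mode == "normal":
        # Claim 9: mode-switching circuit OFF, capacitor takes the data voltage.
        return data_voltage
    # Claim 10: in memory mode the status bit selects between a drive level
    # (terminal voltages differ, pixel driven) and the common voltage
    # (terminal voltages equal, pixel idle).
    return drive_voltage if status else common_voltage

assert pixel_voltage("normal", 3.3, 5.0, 0.0, status=1) == 3.3
assert pixel_voltage("memory", 3.3, 5.0, 0.0, status=0) == 0.0
```

The point of the memory mode is that the scan and data lines can stop toggling while the in-pixel memory holds the displayed state, which is how such circuits cut refresh power for static images.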
10,689
10,689
14,505,052
2,631
An OFDM (orthogonal frequency division multiplexing) transmitter includes an inverse fast Fourier transform circuit, which, in operation, generates, based on digital input data, a complex time-varying digital signal having real and imaginary components; and a multiplexer adapted to generate a time-multiplexed digital signal by time-multiplexing one or more of the real components with one or more of the imaginary components.
1. A device, comprising: an inverse fast Fourier transform (IFFT) circuit, which, in operation, generates a complex time-varying digital signal having real and imaginary components based on a digital input signal; and a multiplexer, which, in operation, time-multiplexes one or more of the real components of the complex time-varying digital signal with one or more of the imaginary components of the complex time-varying digital signal to generate a time-multiplexed digital signal. 2. The device of claim 1, comprising: data mapping circuitry, which, in operation, generates N input values based on the digital input signal, wherein the IFFT circuit, in operation, performs an N-point transform on the N input values to generate N parallel output values, where N is an integer greater than or equal to three. 3. The device of claim 2 wherein the multiplexer, in operation, converts N parallel output values into serial data of the time-multiplexed digital signal. 4. The device of claim 1, comprising: a digital to analog converter, which, in operation, converts the time-multiplexed digital signal into a time-multiplexed analog signal; and clipping circuitry, which, in operation, renders positive the time-multiplexed analog signal to generate an analog orthogonal frequency division multiplexing (OFDM) output signal for transmission. 5. The device of claim 4 wherein, in operation, the clipping circuitry flips one or more negative values of the time-multiplexed analog signal to make them positive and time-multiplexes one or more positive values of the time-multiplexed analog signal with the one or more flipped values. 6. A device, comprising: a demultiplexer, which, in operation, demultiplexes a time-multiplexed digital signal to extract one or more real components and one or more imaginary components and combines the extracted real and imaginary components to generate a complex time-varying digital signal; and a fast Fourier transform (FFT) circuit, which, in operation, generates a plurality of frequency components representing transmitted data based on the complex time-varying digital signal. 7. The device of claim 6 wherein the demultiplexer, in operation, generates N parallel output values based on the time-multiplexed digital signal, where N is an integer equal to or greater than three, and the device comprises: demapping circuitry, which, in operation, generates an output data signal based on the N parallel output values. 8. The device of claim 6 wherein the FFT circuit, in operation, performs an N-point transform to generate N frequency components representing the transmitted data. 9. The device of claim 6, comprising an analog to digital converter, which, in operation, generates the time-multiplexed digital signal based on an analog orthogonal frequency division multiplexing (OFDM) signal. 10. The device of claim 9, comprising: a photosensor, which, in operation, converts an optical signal into the analog OFDM signal. 11. 
A system, comprising: an orthogonal frequency division multiplexing (OFDM) transmission circuit including: an inverse fast Fourier transform (IFFT) circuit configured to generate complex time-varying digital signals having real and imaginary components based on digital input signals; a multiplexer configured to time-multiplex one or more of real components with one or more of imaginary components of complex time-varying digital signals, generating time-multiplexed digital signals; a digital to analog converter configured to convert time-multiplexed digital signals into time-multiplexed analog signals; and clipping circuitry configured to generate clipped signals based on time-multiplexed analog signals; and an optical transmitter configured to intensity modulate signals generated by the clipping circuitry. 12. The system of claim 11 wherein the output signals of the clipping circuitry are OFDM signals. 13. The system of claim 11 wherein the OFDM transmission circuit comprises: data mapping circuitry configured to generate N input values based on an input data signal, wherein the IFFT circuit is configured to perform an N-point transform on the N input values to generate N parallel output values, where N is an integer greater than or equal to three. 14. The system of claim 13 wherein the multiplexer is configured to convert N parallel output values into serial data of a time-multiplexed digital signal. 15. The system of claim 11, comprising: an optical receiver including: a photosensor configured to convert intensity modulated signals into time-multiplexed analog signals; an analog to digital converter configured to convert time-multiplexed analog signals into time-multiplexed digital signals; a demultiplexer configured to demultiplex time-multiplexed digital signals to extract real components and imaginary components and combine the extracted real and imaginary components to generate complex time-varying digital signals; and a fast Fourier transform (FFT) circuit configured to generate frequency components representing transmitted data based on complex time-varying digital signals. 16. The system of claim 15, comprising: an optical transmission channel configured to link the optical transmitter and optical receiver. 17. A method, comprising: generating, using an inverse fast Fourier transform (IFFT) circuit, a complex time-varying digital signal having real and imaginary components based on digital input data; and generating, using a multiplexer, a time-multiplexed digital signal by time-multiplexing one or more of the real components with one or more of the imaginary components of the complex time-varying digital signal. 18. The method of claim 17, comprising: generating, using a data mapper, N input values based on an input data signal; and performing, using the IFFT circuit, an N-point transform on the N input values, generating N parallel output values, where N is an integer greater than or equal to three. 19. The method of claim 18, comprising: converting, using the multiplexer, the N parallel output values into serial data of the time-multiplexed digital signal. 20. The method of claim 17, comprising: converting the time-multiplexed digital signal into a time-multiplexed analog signal; and clipping the time-multiplexed analog signal. 21. 
A method, comprising: extracting, using a demultiplexer, one or more real components and one or more imaginary components of a time-multiplexed digital signal; combining, using the demultiplexer, the extracted real and imaginary components, generating a complex time-varying digital signal; and generating, using a fast Fourier transform (FFT) circuit, a plurality of frequency components representing data based on the complex time-varying digital signal. 22. The method of claim 21 wherein the generating the plurality of frequency components includes generating N parallel output values, where N is an integer equal to or greater than three, and the method comprises: generating, using demapping circuitry, an output data signal based on the N parallel output values. 23. The method of claim 21, comprising performing, using the FFT circuit, an N-point transform to generate N frequency components representing the data.
An OFDM (orthogonal frequency division multiplexing) transmitter includes an inverse fast Fourier transform circuit, which, in operation, generates, based on digital input data, a complex time-varying digital signal having real and imaginary components; and a multiplexer adapted to generate a time-multiplexed digital signal by time-multiplexing one or more of the real components with one or more of the imaginary components.1. A device, comprising: an inverse fast Fourier transform (IFFT) circuit, which, in operation, generates a complex time-varying digital signal having real and imaginary components based on a digital input signal; and a multiplexer, which, in operation, time-multiplexes one or more of the real components of the complex time-varying digital signal with one or more of the imaginary components of the complex time-varying digital signal to generate a time-multiplexed digital signal. 2. The device of claim 1, comprising: data mapping circuitry, which, in operation, generates N input values based on the digital input signal, wherein the IFFT circuit, in operation, performs an N-point transform on the N input values to generate N parallel output values, where N is an integer greater than or equal to three. 3. The device of claim 2 wherein the multiplexer, in operation, converts N parallel output values into serial data of the time-multiplexed digital signal. 4. The device of claim 1, comprising: a digital to analog converter, which, in operation, converts the time-multiplexed digital signal into a time-multiplexed analog signal; and clipping circuitry, which, in operation, renders positive the time-multiplexed analog signal to generate an analog orthogonal frequency division multiplexing (OFDM) output signal for transmission. 5. The device of claim 4 wherein, in operation, the clipping circuitry flips one or more negative values of the time-multiplexed analog signal to make them positive and time-multiplexes one or more positive values of the time-multiplexed analog signal with the one or more flipped values. 6. A device, comprising: a demultiplexer, which, in operation, demultiplexes a time-multiplexed digital signal to extract one or more real components and one or more imaginary components and combines the extracted real and imaginary components to generate a complex time-varying digital signal; and a fast Fourier transform (FFT) circuit, which, in operation, generates a plurality of frequency components representing transmitted data based on the complex time-varying digital signal. 7. The device of claim 6 wherein the demultiplexer, in operation, generates N parallel output values based on the time-multiplexed digital signal, where N is an integer equal to or greater than three, and the device comprises: demapping circuitry, which, in operation, generates an output data signal based on the N parallel output values. 8. The device of claim 6 wherein the FFT circuit, in operation, performs an N-point transform to generate N frequency components representing the transmitted data. 9. The device of claim 6, comprising an analog to digital converter, which, in operation, generates the time-multiplexed digital signal based on an analog orthogonal frequency division multiplexing (OFDM) signal. 10. The device of claim 9, comprising: a photosensor, which, in operation, converts an optical signal into the analog OFDM signal. 11. 
A system, comprising: an orthogonal frequency division multiplexing (OFDM) transmission circuit including: an inverse fast Fourier transform (IFFT) circuit configured to generate complex time-varying digital signals having real and imaginary components based on digital input signals; a multiplexer configured to time-multiplex one or more of real components with one or more of imaginary components of complex time-varying digital signals, generating time-multiplexed digital signals; a digital to analog converter configured to convert time-multiplexed digital signals into time-multiplexed analog signals; and clipping circuitry configured to generate clipped signals based on time-multiplexed analog signals; and an optical transmitter configured to intensity modulate signals generated by the clipping circuitry. 12. The system of claim 11 wherein the output signals of the clipping circuitry are OFDM signals. 13. The system of claim 11 wherein the OFDM transmission circuit comprises: data mapping circuitry configured to generate N input values based on an input data signal, wherein the IFFT circuit is configured to perform an N-point transform on the N input values to generate N parallel output values, where N is an integer greater than or equal to three. 14. The system of claim 13 wherein the multiplexer is configured to convert N parallel output values into serial data of a time-multiplexed digital signal. 15. The system of claim 11, comprising: an optical receiver including: a photosensor configured to convert intensity modulated signals into time-multiplexed analog signals; an analog to digital converter configured to convert time-multiplexed analog signals into time-multiplexed digital signals; a demultiplexer configured to demultiplex time-multiplexed digital signals to extract real components and imaginary components and combine the extracted real and imaginary components to generate complex time-varying digital signals; and a fast Fourier transform (FFT) circuit configured to generate frequency components representing transmitted data based on complex time-varying digital signals. 16. The system of claim 15, comprising: an optical transmission channel configured to link the optical transmitter and optical receiver. 17. A method, comprising: generating, using an inverse fast Fourier transform (IFFT) circuit, a complex time-varying digital signal having real and imaginary components based on digital input data; and generating, using a multiplexer, a time-multiplexed digital signal by time-multiplexing one or more of the real components with one or more of the imaginary components of the complex time-varying digital signal. 18. The method of claim 17, comprising: generating, using a data mapper, N input values based on an input data signal; and performing, using the IFFT circuit, an N-point transform on the N input values, generating N parallel output values, where N is an integer greater than or equal to three. 19. The method of claim 18, comprising: converting, using the multiplexer, the N parallel output values into serial data of the time-multiplexed digital signal. 20. The method of claim 17, comprising: converting the time-multiplexed digital signal into a time-multiplexed analog signal; and clipping the time-multiplexed analog signal. 21. 
A method, comprising: extracting, using a demultiplexer, one or more real components and one or more imaginary components of a time-multiplexed digital signal; combining, using the demultiplexer, the extracted real and imaginary components, generating a complex time-varying digital signal; and generating, using a fast Fourier transform (FFT) circuit, a plurality of frequency components representing data based on the complex time-varying digital signal. 22. The method of claim 21 wherein the generating the plurality of frequency components includes generating N parallel output values, where N is an integer equal to or greater than three, and the method comprises: generating, using demapping circuitry, an output data signal based on the N parallel output values. 23. The method of claim 21, comprising performing, using the FFT circuit, an N-point transform to generate N frequency components representing the data.
2,600
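The transmit chain claimed in this record reduces to: N-point IFFT, interleave the real and imaginary samples into one real serial stream, then clip for intensity modulation; the receive chain reverses it. The NumPy sketch below illustrates that round trip without clipping (claim 5's sign-flip step is shown as an option, but how a receiver undoes the flip is outside this sketch, and the 4-QAM mapping is a placeholder assumption).

```python
import numpy as np

def transmit(symbols, clip=False):
    """IFFT, then time-multiplex real/imaginary parts into one real stream."""
    x = np.fft.ifft(symbols)            # complex time-varying signal
    serial = np.empty(2 * len(x))
    serial[0::2] = x.real               # multiplexer: interleave the real
    serial[1::2] = x.imag               # and imaginary components in time
    if clip:
        # Claim 5's clipping flips negative samples positive so an optical
        # carrier can be intensity modulated; sign recovery is not modeled.
        serial = np.abs(serial)
    return serial

def receive(serial):
    """Demultiplex, recombine into a complex signal, and FFT back."""
    x = serial[0::2] + 1j * serial[1::2]
    return np.fft.fft(x)                # N frequency components

symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])   # 4-QAM, N = 4
assert np.allclose(receive(transmit(symbols)), symbols)
```

Interleaving real and imaginary parts is what lets a single real-valued DAC/ADC path carry the complex IFFT output, at the cost of doubling the serial sample rate.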
10,690
10,690
15,616,883
2,654
Participants can control a number of aspects of a virtual reality session. A participant of the session can control the position of an object, such as an avatar. Spectators do not have control over aspects of a session. For instance, spectators cannot control the position of objects or change properties of objects within a virtual environment. In some configurations, the position of a spectator's viewing area is based on the position of an object that is controlled by a participant. In some embodiments, a spectator's viewing area can follow a participant's position but the spectator can look in any direction from that position. By following the participant's position, spectators can follow the action of a session yet have the freedom to control the direction of their viewing area to enhance their viewing experience. Customized spatial audio is also generated for the spectator based on the direction of their viewing area.
1. A computing device, comprising: a processor; a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the computing device to receive session data defining a virtual reality environment comprising a participant object, the session data allowing a participant to provide a participant input for controlling a location of the participant object and a direction of the participant object, generate a first view for display to the participant, the first view originating from the location of the participant object controlled by the participant, wherein a direction of the first view is based on the direction of the participant object, generate a spectator view for display on a computing device associated with a spectator, the spectator view originating from the location of the participant object controlled by the participant, the session data allowing the spectator to provide a spectator input for controlling a direction of the spectator view, and generate a spectator audio output signal of a stream, wherein the spectator audio output signal causes an output device to emanate an audio output of the stream from a speaker object location positioned relative to the spectator, the speaker object location based on the direction of the spectator view and the location of an audio object relative to the location of the participant object. 2. The computing device of claim 1, wherein the direction of the spectator view is independent of the direction of the participant object, wherein the instructions further cause the computing device to generate a participant audio output signal of the stream based on the location of the participant object, the direction of the participant object, and the location of the audio object. 3. The computing device of claim 1, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with an Ambisonics-based technology, wherein the spectator output signal defines at least one sound field modeling the location of the audio object associated with the stream, wherein the data defining the sound field can be interpreted by the computing device associated with the spectator for causing the output device to emanate the audio output of the stream from the speaker object location. 4. The computing device of claim 1, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with a channel-based audio technology, wherein the spectator audio output signal causes the stream to render to an output device in a channel-based audio format. 5. The computing device of claim 1, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with an object-based technology, wherein the spectator output signal defines the location of the audio object associated with the stream, the location defined in a three-dimensional coordinate system. 6. 
The computing device of claim 1, wherein the output device comprises one or more speakers in communication with the computing device, wherein the instructions further cause the computing device to transmit the spectator audio output to the computing device, wherein the spectator audio output causes the output device in communication with the computing device to emanate the audio output of the stream from the speaker object location positioned relative to the spectator, the speaker object location modeling the direction of the spectator view and the location of the audio object relative to the location of the participant object. 7. The computing device of claim 1, wherein the session data is received from a participant computing device, wherein the session data comprises a 360 canvas, and data indicating a direction of the stream, wherein one or more effects applied to the audio output are based on the data indicating the direction of the stream. 8. A system, comprising: a processor; a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the system to receive session data defining a virtual reality environment comprising a participant object, the session data allowing a participant to provide a participant input for controlling a location of the participant object and a direction of the participant object, generate a first view for display to the participant, the first view originating from the location of the participant object controlled by the participant, wherein a direction of the first view is based on the direction of the participant object, generate a spectator view for display on a computing device associated with a spectator, the spectator view originating from the location of the participant object controlled by the participant, the session data allowing the spectator to provide a spectator input for controlling a direction of the spectator view, and generate a spectator audio output signal of a stream based on the direction of the spectator view, a location of an audio object associated with the stream, and the location of the participant object. 9. The system of claim 8, wherein the direction of the spectator view is independent of the direction of the participant object, wherein the instructions further cause the system to generate a participant audio output signal of the stream based on the location of the participant object, the direction of the participant object, and the location of the audio object. 10. The system of claim 8, wherein generating the spectator audio output signal comprises generating the audio output signal defining an Ambisonics representation. 11. The system of claim 8, wherein the spectator audio output signal is processed in accordance with a channel-based audio technology. 12. The system of claim 8, wherein the spectator audio output signal is processed in accordance with an object-based technology. 13. The system of claim 8, wherein the output device comprises one or more speakers in communication with the computing device, wherein the instructions further cause the system to transmit the spectator audio output to the computing device, wherein the spectator audio output causes the output device in communication with the computing device to emanate the audio output of the stream from the speaker object location positioned relative to the spectator, the speaker object location modeling the direction of the spectator view and the location of the audio object relative to the location of the participant object. 14. 
The system of claim 8, wherein the session data is received from a participant computing device, wherein the session data comprises a 360 canvas, and data indicating a direction of the stream, wherein one or more effects applied to the audio output are based on the data indicating the direction of the stream. 15. A computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by one or more processors of a system, cause the one or more processors of the system to: receive session data defining a virtual reality environment comprising a participant object, the session data allowing a participant to provide a participant input for controlling a location of the participant object and a direction of the participant object, generate a first view for display to the participant, the first view originating from the location of the participant object controlled by the participant, wherein a direction of the first view is based on the direction of the participant object, generate a spectator view for display on a computing device associated with a spectator, the spectator view originating from the location of the participant object controlled by the participant, the session data allowing the spectator to provide a spectator input for controlling a direction of the spectator view, and generate a spectator audio output signal of a stream, wherein the spectator audio output signal causes an output device to emanate an audio output of the stream from a speaker object location positioned relative to the spectator, the speaker object location modeling the direction of the spectator view and the location of an audio object relative to the location of the participant object. 16. The computer-readable storage medium of claim 15, wherein the direction of the spectator view is independent of the direction of the participant object, wherein the instructions further cause the system to generate a participant audio output signal of the stream based on the location of the participant object, the direction of the participant object, and the location of the audio object. 17. The computer-readable storage medium of claim 15, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with an Ambisonics-based technology, wherein the spectator output signal defines at least one sound field modeling the location of the audio object associated with the stream, wherein the data defining the sound field can be interpreted by the computing device associated with the spectator for causing the output device to emanate the audio output of the stream from the speaker object location. 18. The computer-readable storage medium of claim 15, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with a channel-based audio technology, wherein the spectator audio output signal causes the stream to render to an output device in a channel-based audio format. 19. The computer-readable storage medium of claim 15, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with an object-based technology, wherein the spectator output signal defines the location of the audio object associated with the stream, the location defined in a three-dimensional coordinate system. 20. 
The computer-readable storage medium of claim 15, wherein the output device comprises one or more speakers in communication with the computing device, wherein the instructions further cause the system to transmit the spectator audio output to the computing device, wherein the spectator audio output causes the output device in communication with the computing device to emanate the audio output of the stream from the speaker object location positioned relative to the spectator, the speaker object location modeling the direction of the spectator view and the location of the audio object relative to the location of the participant object.
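A recurring element across these claims is the speaker object location: the audio object's position relative to the participant, re-expressed in the spectator's independently controlled view direction. The Python below is a small 2-D geometric sketch of that idea, not the applicant's rendering pipeline; the yaw-rotation convention and coordinate frame are assumed for illustration.

```python
import numpy as np

def speaker_object_location(audio_obj, participant_pos, spectator_yaw_rad):
    """Where a spectator-side renderer would place the sound source: the
    audio object's offset from the participant, rotated into the frame
    defined by the spectator's view direction (2-D yaw only, for brevity)."""
    offset = np.asarray(audio_obj, dtype=float) - np.asarray(participant_pos, dtype=float)
    c, s = np.cos(-spectator_yaw_rad), np.sin(-spectator_yaw_rad)
    rot = np.array([[c, -s],
                    [s,  c]])           # rotate world offset into view frame
    return rot @ offset

# An audio object 2 m ahead of the participant; a spectator who has turned
# 90 degrees left hears it off to one side rather than straight ahead.
print(speaker_object_location([0.0, 2.0], [0.0, 0.0], np.pi / 2))
```

Because the spectator view originates at the participant's location but rotates freely, only the rotation changes per spectator; the offset itself is shared, which is what lets one session feed many differently oriented spectators.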
Participants can control a number of aspects of a virtual reality session. A participant of the session can control the position of an object, such as an avatar. Spectators do not have control over aspects of a session. For instance, spectators cannot control the position of objects or change properties of objects within a virtual environment. In some configurations, the position of a spectator's viewing area is based on the position of an object that is controlled by a participant. In some embodiments, a spectator's viewing area can follow a participant's position but the spectator can look in any direction from that position. By following the participant's position, spectators can follow the action of a session yet have the freedom to control the direction of their viewing area to enhance their viewing experience. Customized spatial audio is also generated for the spectator based on the direction of their viewing area.1. A computing device, comprising: a processor; a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the computing device to receive session data defining a virtual reality environment comprising a participant object, the session data allowing a participant to provide a participant input for controlling a location of the participant object and a direction of the participant object, generate a first view for display to the participant, the first view originating from the location of the participant object controlled by the participant, wherein a direction of the first view is based on the direction of the participant object, generate a spectator view for display on a computing device associated with a spectator, the spectator view originating from the location of the participant object controlled by the participant, the session data allowing the spectator to provide a spectator input for controlling a direction of the spectator view, and generate a spectator audio output signal of a stream, wherein the spectator audio output signal causes an output device to emanate an audio output of the stream from a speaker object location positioned relative to the spectator, the speaker object location based on the direction of the spectator view and the location of an audio object relative to the location of the participant object. 2. The computing device of claim 1, wherein the direction of the spectator view is independent of the direction of the participant object, wherein the instructions further cause the computing device to generate a participant audio output signal of the stream based on the location of the participant object, the direction of the participant object, and the location of the audio object. 3. The computing device of claim 1, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with an Ambisonics-based technology, wherein the spectator output signal defines at least one sound field modeling the location of the audio object associated with the stream, wherein the data defining the sound field can be interpreted by the computing device associated with the spectator for causing the output device to emanate the audio output of the stream from the speaker object location. 4. The computing device of claim 1, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with a channel-based audio technology, wherein the spectator audio output signal causes the stream to render to an output device in a channel-based audio format. 5. 
The computing device of claim 1, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with an object-based technology, wherein the spectator output signal defines the location of the audio object associated with the stream, the location defined in a three-dimensional coordinate system. 6. The computing device of claim 1, wherein the output device comprises one or more speakers in communication with the computing device, wherein the instructions further cause the computing device to transmit the spectator audio output to the computing device, wherein the spectator audio output causes the output device in communication with the computing device to emanate the audio output of the stream from the speaker object location positioned relative to the spectator, the speaker object location modeling the direction of the spectator view and the location of the audio object relative to the location of the participant object. 7. The computing device of claim 1, wherein the session data is received from a participant computing device, wherein the session data comprises a 360 canvas, and data indicating a direction of the stream, wherein one or more effects applied to the audio output are based on the data indicating the direction of the stream. 8. A system, comprising: a processor; a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the system to receive session data defining a virtual reality environment comprising a participant object, the session data allowing a participant to provide a participant input for controlling a location of the participant object and a direction of the participant object, generate a first view for display to the participant, the first view originating from the location of the participant object controlled by the participant, wherein a direction of the first view is based on the direction of the participant object, generate a spectator view for display on a computing device associated with a spectator, the spectator view originating from the location of the participant object controlled by the participant, the session data allowing the spectator to provide a spectator input for controlling a direction of the spectator view, and generate a spectator audio output signal of a stream based on the direction of the spectator view, a location of an audio object associated with the stream, and the location of the participant object. 9. The system of claim 8, wherein the direction of the spectator view is independent of the direction of the participant object, wherein the instructions further cause the system to generate a participant audio output signal of the stream based on the location of the participant object, the direction of the participant object, and the location of the audio object. 10. The system of claim 8, wherein generating the spectator audio output signal comprises generating the audio output signal defining an Ambisonics representation. 11. The system of claim 8, wherein generating the spectator audio output signal is processed in accordance with a channel-based audio technology. 12. The system of claim 8, wherein generating the spectator audio output signal is processed in accordance with an object-based technology. 13. 
The system of claim 8, wherein the output device comprises one or more speakers in communication with the computing device, wherein the instructions further cause the system to transmit the spectator audio output to the computing device, wherein the spectator audio output causes the output device in communication with the computing device to emanate the audio output of the stream from the speaker object location positioned relative to the spectator, the speaker object location modeling the direction of the spectator view and the location of the audio object relative to the location of the participant object. 14. The system of claim 8, wherein the session data is received from a participant computing device, wherein the session data comprises a 360 canvas, and data indicating a direction of the stream, wherein one or more effects applied to the audio output are based on the data indicating the direction of the stream. 15. A computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by one or more processors of a system, cause the one or more processors of the system to: receive session data defining a virtual reality environment comprising a participant object, the session data allowing a participant to provide a participant input for controlling a location of the participant object and a direction of the participant object, generate a first view for display to the participant, the first view originating from the location of the participant object controlled by the participant, wherein a direction of the first view is based on the direction of the participant object, generate a spectator view for display on a computing device associated with a spectator, the spectator view originating from the location of the participant object controlled by the participant, the session data allowing the spectator to provide a spectator input for controlling a direction of the spectator view, and generate a spectator audio output signal of a stream, wherein the spectator audio output signal causes an output device to emanate an audio output of the stream from a speaker object location positioned relative to the spectator, the speaker object location modeling the direction of the spectator view and the location of an audio object relative to the location of the participant object. 16. The computer-readable storage medium of claim 15, wherein the direction of the spectator view is independent of the direction of the participant object, wherein the instructions further cause the system to generate a participant audio output signal of the stream based on the location of the participant object, the direction of the participant object, and the location of the audio object. 17. The computer-readable storage medium of claim 15, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with an Ambisonics-based technology, wherein the spectator output signal defines at least one sound field modeling the location of the audio object associated with the stream, wherein the data defining the sound field can be interpreted by the computing device associated with the spectator for causing the output device to emanate the audio output of the stream from the speaker object location. 18. 
The computer-readable storage medium of claim 15, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with a channel-based audio technology, wherein the spectator audio output signal causes the stream to render to an output device in a channel-based audio format. 19. The computer-readable storage medium of claim 15, wherein generating the spectator audio output signal comprises configuring the spectator audio output in accordance with an object-based technology, wherein the spectator output signal defines the location of the audio object associated with the stream, the location defined in a three-dimensional coordinate system. 20. The computer-readable storage medium of claim 15, wherein the output device comprises one or more speakers in communication with the computing device, wherein the instructions further cause the system to transmit the spectator audio output to the computing device, wherein the spectator audio output causes the output device in communication with the computing device to emanate the audio output of the stream from the speaker object location positioned relative to the spectator, the speaker object location modeling the direction of the spectator view and the location of the audio object relative to the location of the participant object.
2,600
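The spectator-audio record above turns on one geometric step: placing the audio object relative to the spectator by combining the object's offset from the participant object with the spectator's independently controlled view direction. Below is a minimal Python sketch of that step; the function names, the yaw-only rotation, and the coordinate convention (negative z ahead) are illustrative assumptions, not anything the claims prescribe.

```python
import math

def rotate_yaw(v, yaw):
    """Rotate a 3-D vector about the vertical (y) axis by `yaw` radians."""
    x, y, z = v
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x + s * z, y, -s * x + c * z)

def speaker_object_location(audio_obj_pos, participant_pos, spectator_yaw):
    """Place the audio object relative to the spectator (hypothetical helper).

    The offset of the audio object from the participant object is rotated
    into the spectator's view frame, so the sound pans correctly as the
    spectator turns their view independently of the participant.
    """
    offset = tuple(a - p for a, p in zip(audio_obj_pos, participant_pos))
    # Undo the spectator's view rotation: a source the spectator is facing
    # should land dead ahead in the output frame.
    return rotate_yaw(offset, -spectator_yaw)

# Example: source 2 m to the participant's left; spectator looking 90° left
# (positive yaw, assumed convention) hears it straight ahead: (0.0, 0.0, -2.0).
print(speaker_object_location((-2.0, 0.0, 0.0), (0.0, 0.0, 0.0), math.pi / 2))
```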
10,691
10,691
15,663,206
2,691
Systems and methods for authentication code entry in touch-sensitive screen enabled devices are disclosed. In one embodiment, a method for entering data to a data entry device comprising at least one computer processor and a touch-sensitive screen may include (1) the touch-sensitive screen displaying an input interface; (2) the touch-sensitive screen sensing a first input comprising at least one finger touch; (3) the touch-sensitive screen sensing a release of the first input; (4) the at least one computer processor determining a number of finger touches in the first input; and (5) the at least one computer processor using the number of finger touches in the first input to identify at least a first portion of a value in an authentication code.
1. A method for entering data to a data entry device comprising at least one computer processor, a memory, and a touch-sensitive screen, comprising: the touch-sensitive screen providing an input interface comprising a plurality of virtual keys; the touch-sensitive screen sensing a first touch on the touch-sensitive screen; the touch-sensitive screen sensing a release of the first touch and a location of the first touch at the time of release; the at least one computer processor determining a corresponding virtual key based on the location of the first touch at the time of release; the touch-sensitive screen receiving an entry gesture; and the at least one computer processor identifying the corresponding virtual key as a value in an authentication code. 2. The method of claim 1, wherein the entry gesture is received at any position on the touch-sensitive screen. 3. The method of claim 1, wherein the entry gesture comprises a double tap or a checkmark-shaped touch. 4. The method of claim 1, further comprising: the at least one computer processor causing feedback to be provided in response to the touch-sensitive screen being touched in a touch-sensitive area of the touch-sensitive screen. 5. The method of claim 4, wherein the feedback is haptic feedback. 6. The method of claim 4, wherein the feedback is a sound. 7. The method of claim 5, wherein the feedback changes to indicate proximity to a first virtual key of the plurality of virtual keys. 8. The method of claim 1, further comprising: the at least one computer processor causing feedback to be provided in response to one of the plurality of virtual keys being touched. 9. The method of claim 8, wherein the first touch traverses more than one of the plurality of virtual keys; and the at least one computer processor causes feedback to be provided as the first touch traverses from a first virtual key to a second virtual key. 10. The method of claim 8, wherein the feedback is haptic feedback. 11. The method of claim 8, wherein the feedback is a sound. 12. The method of claim 1, wherein a first subset of the plurality of virtual keys are aligned along a first edge of the touch-sensitive screen, and a second subset of the plurality of virtual keys are aligned along a second edge of the touch-sensitive screen, and the first subset of virtual keys and second subset of virtual keys comprise virtual keys that represent values of 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. 13. The method of claim 1, wherein the data entry device comprises a point of sale device. 14. A data entry device comprising: a touch-sensitive screen; a memory; and at least one computer processor; wherein: the touch-sensitive screen provides an input interface comprising a plurality of virtual keys; the touch-sensitive screen senses a first touch on the touch-sensitive screen; the touch-sensitive screen senses a release of the first touch and a location of the first touch at the time of release; the at least one computer processor determines a corresponding virtual key based on the location of the first touch at the time of release; the touch-sensitive screen receives an entry gesture; and the at least one computer processor identifies the corresponding virtual key as a value in an authentication code. 15. The data entry device of claim 14, wherein the entry gesture is received at any position on the touch-sensitive screen. 16. The data entry device of claim 14, wherein the entry gesture comprises a double tap or a checkmark-shaped touch. 17. 
The data entry device of claim 14, wherein the data entry device provides feedback in response to the touch-sensitive screen being touched in a touch-sensitive area of the touch-sensitive screen. 18. The data entry device of claim 17, wherein the feedback comprises haptic feedback. 19. The data entry device of claim 14, further comprising an audio output device that provides audible feedback in response to the touch-sensitive screen being touched. 20. The data entry device of claim 14, wherein a first subset of the plurality of virtual keys are aligned along a first edge of the touch-sensitive screen, and a second subset of the plurality of virtual keys are aligned along a second edge of the touch-sensitive screen, and the first subset of virtual keys and second subset of virtual keys comprise virtual keys that represent values of 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. 21. The data entry device of claim 14, further comprising: a bezel surrounding the touch-sensitive screen. 22. The data entry device of claim 21, wherein the bezel is raised relative to a surface of the touch-sensitive screen. 23. The data entry device of claim 21, wherein the bezel comprises at least one orientation mark. 24. The data entry device of claim 23, wherein at least one of the virtual keys represents a value, and at least one of the orientation marks comprises the value. 25. The data entry device of claim 14, wherein the data entry device comprises a point of sale device. 26. The data entry device of claim 14, wherein at least one of the at least one computer processor comprises a touch-sensitive screen controller.
Systems and methods for authentication code entry in touch-sensitive screen enabled devices are disclosed. In one embodiment, a method for entering data to a data entry device comprising at least one computer processor and a touch-sensitive screen may include (1) the touch-sensitive screen displaying an input interface; (2) the touch-sensitive screen sensing a first input comprising at least one finger touch; (3) the touch-sensitive screen sensing a release of the first input; (4) the at least one computer processor determining a number of finger touches in the first input; and (5) the at least one computer processor using the number of finger touches in the first input to identify at least a first portion of a value in an authentication code.1. A method for entering data to a data entry device comprising at least one computer processor, a memory, and a touch-sensitive screen, comprising: the touch-sensitive screen providing an input interface comprising a plurality of virtual keys; the touch-sensitive screen sensing a first touch on the touch-sensitive screen; the touch-sensitive screen sensing a release of the first touch and a location of the first touch at the time of release; the at least one computer processor determining a corresponding virtual key based on the location of the first touch at the time of release; the touch-sensitive screen receiving an entry gesture; and the at least one computer processor identifying the corresponding virtual key as a value in an authentication code. 2. The method of claim 1, wherein the entry gesture is received at any position on the touch-sensitive screen. 3. The method of claim 1, wherein the entry gesture comprises a double tap or a checkmark-shaped touch. 4. The method of claim 1, further comprising: the at least one computer processor causing feedback to be provided in response to the touch-sensitive screen being touched in a touch-sensitive area of the touch-sensitive screen. 5. The method of claim 4, wherein the feedback is haptic feedback. 6. The method of claim 4, wherein the feedback is a sound. 7. The method of claim 5, wherein the feedback changes to indicate proximity to a first virtual key of the plurality of virtual keys. 8. The method of claim 1, further comprising: the at least one computer processor causing feedback to be provided in response to one of the plurality of virtual keys being touched. 9. The method of claim 8, wherein the first touch traverses more than one of the plurality of virtual keys; and the at least one computer processor causes feedback to be provided as the first touch traverses from a first virtual key to a second virtual key. 10. The method of claim 8, wherein the feedback is haptic feedback. 11. The method of claim 8, wherein the feedback is a sound. 12. The method of claim 1, wherein a first subset of the plurality of virtual keys are aligned along a first edge of the touch-sensitive screen, and a second subset of the plurality of virtual keys are aligned along a second edge of the touch-sensitive screen, and the first subset of virtual keys and second subset of virtual keys comprise virtual keys that represent values of 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. 13. The method of claim 1, wherein the data entry device comprises a point of sale device. 14. 
A data entry device comprising: a touch-sensitive screen; a memory; and at least one computer processor; wherein: the touch-sensitive screen provides an input interface comprising a plurality of virtual keys; the touch-sensitive screen senses a first touch on the touch-sensitive screen; the touch-sensitive screen senses a release of the first touch and a location of the first touch at the time of release; the at least one computer processor determines a corresponding virtual key based on the location of the first touch at the time of release; the touch-sensitive screen receives an entry gesture; and the at least one computer processor identifies the corresponding virtual key as a value in an authentication code. 15. The data entry device of claim 14, wherein the entry gesture is received at any position on the touch-sensitive screen. 16. The data entry device of claim 14, wherein the entry gesture comprises a double tap or a checkmark-shaped touch. 17. The data entry device of claim 14, wherein the data entry device provides feedback in response to the touch-sensitive screen being touched in a touch-sensitive area of the touch-sensitive screen. 18. The data entry device of claim 17, wherein the feedback comprises haptic feedback. 19. The data entry device of claim 14, further comprising an audio output device that provides audible feedback in response to the touch-sensitive screen being touched. 20. The data entry device of claim 14, wherein a first subset of the plurality of virtual keys are aligned along a first edge of the touch-sensitive screen, and a second subset of the plurality of virtual keys are aligned along a second edge of the touch-sensitive screen, and the first subset of virtual keys and second subset of virtual keys comprise virtual keys that represent values of 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. 21. The data entry device of claim 14, further comprising: a bezel surrounding the touch-sensitive screen. 22. The data entry device of claim 21, wherein the bezel is raised relative to a surface of the touch-sensitive screen. 23. The data entry device of claim 21, wherein the bezel comprises at least one orientation mark. 24. The data entry device of claim 23, wherein at least one of the virtual keys represents a value, and at least one of the orientation marks comprises the value. 25. The data entry device of claim 14, wherein the data entry device comprises a point of sale device. 26. The data entry device of claim 14, wherein at least one of the at least one computer processor comprises a touch-sensitive screen controller.
2,600
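The touch-entry claims above describe a two-step selection: a candidate key is chosen by the release location of a touch, and the value is committed only by a separate entry gesture that may land anywhere on the screen. A small Python sketch under assumed layout parameters (digits split between two edge-aligned key columns, a double tap as the entry gesture); all names, dimensions, and the event format are invented for illustration.

```python
# Hypothetical layout: digits 0-4 along the left edge, 5-9 along the
# right edge, each key occupying an equal vertical band.
SCREEN_W, SCREEN_H = 480, 800
KEYS_PER_EDGE = 5

def key_at(x, y):
    """Return the digit whose virtual key contains the release point (x, y),
    or None if the point is not inside an edge-aligned key band."""
    band = min(int(y / (SCREEN_H / KEYS_PER_EDGE)), KEYS_PER_EDGE - 1)
    if x < SCREEN_W * 0.2:           # left-edge keys: 0..4
        return band
    if x > SCREEN_W * 0.8:           # right-edge keys: 5..9
        return 5 + band
    return None                      # middle of the screen: no key

def enter_code(events):
    """Consume (kind, x, y) touch events; a 'release' selects a candidate
    key and a subsequent 'double_tap' entry gesture commits it."""
    code, candidate = [], None
    for kind, x, y in events:
        if kind == "release":
            candidate = key_at(x, y)
        elif kind == "double_tap" and candidate is not None:
            code.append(candidate)   # the gesture may land anywhere on screen
            candidate = None
    return code

print(enter_code([("release", 10, 50), ("double_tap", 240, 400),
                  ("release", 470, 790), ("double_tap", 240, 400)]))  # [0, 9]
```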
10,692
10,692
14,493,550
2,642
Techniques for location tracking, location utilization, and dissemination and management of location information are disclosed. As a location monitoring system, one embodiment includes at least a plurality of mobile computing devices supported by a wireless network, and a computing device coupled to a wired network (e.g., the Internet) that couples to the wireless network. Each of the mobile computing devices is associated with and proximate to an object whose location is being monitored. The computing device stores the locations of each of the mobile computing devices or the objects proximate thereto, and enables only authorized users to obtain access to the locations via the wired network.
1. A location monitoring system for managing access to location information of a plurality of mobile electronic devices supported by at least one wireless network, each of the mobile electronic devices being associated with and proximate to a corresponding object, said location monitoring system comprising: a computing device operatively connectable to the wireless network, said computing device storing locations of said mobile electronic devices, and said computing device managing authorization for access to the locations of said mobile electronic devices via one or more networks, wherein in managing authorization for access to the locations of said mobile electronic devices, said computing device (i) receives, from a requestor, a request to view the location pertaining to a particular one of the objects; (ii) determines whether the requestor is authorized to receive the location of the particular one of the objects; and (iii) permits access or delivery of the location pertaining to the particular one of the objects to the requestor provided that it is determined that the requestor is authorized to receive the location. 2. A location monitoring system as recited in claim 1, wherein said computing device alerts the mobile electronic device corresponding to the particular one of the objects that the location of the mobile electronic device corresponding to the particular one of the objects is being monitored. 3. A location monitoring system for managing location information pertaining to a plurality of mobile computing devices supported by a wireless network, each of the mobile computing devices being associated with and proximate to a corresponding object, said location monitoring system comprising: a computer configured to store locations of each of the mobile computing devices, and said computer enabling authorized users to access the locations of the mobile computing devices, wherein said computer determines whether notification should be sent to an authorized user based on the location of the mobile computing device corresponding to an object, and wherein said computer sends an electronic notification to the authorized user when it has been determined that notification should be sent to the authorized user. 4. A location monitoring system as recited in claim 3, wherein said computer determines whether the location of the mobile computing device corresponding to the object is at one or more of one or more predetermined notification locations, and generates the electronic notification when it is determined that the location of the mobile computing device is at one or more of the one or more predetermined notification locations.
Techniques for location tracking, location utilization, and dissemination and management of location information are disclosed. As a location monitoring system, one embodiment includes at least a plurality of mobile computing devices supported by a wireless network, and a computing device coupled to a wired network (e.g., the Internet) that couples to the wireless network. Each of the mobile computing devices is associated with and proximate to an object whose location is being monitored. The computing device stores the locations of each of the mobile computing devices or the objects proximate thereto, and enables only authorized users to obtain access to the locations via the wired network.1. A location monitoring system for managing access to location information of a plurality of mobile electronic devices supported by at least one wireless network, each of the mobile electronic devices being associated with and proximate to a corresponding object, said location monitoring system comprising: a computing device operatively connectable to the wireless network, said computing device storing locations of said mobile electronic devices, and said computing device managing authorization for access to the locations of said mobile electronic devices via one or more networks, wherein in managing authorization for access to the locations of said mobile electronic devices, said computing device (i) receives, from a requestor, a request to view the location pertaining to a particular one of the objects; (ii) determines whether the requestor is authorized to receive the location of the particular one of the objects; and (iii) permits access or delivery of the location pertaining to the particular one of the objects to the requestor provided that it is determined that the requestor is authorized to receive the location. 2. A location monitoring system as recited in claim 1, wherein said computing device alerts the mobile electronic device corresponding to the particular one of the objects that the location of the mobile electronic device corresponding to the particular one of the objects is being monitored. 3. A location monitoring system for managing location information pertaining to a plurality of mobile computing devices supported by a wireless network, each of the mobile computing devices being associated with and proximate to a corresponding object, said location monitoring system comprising: a computer configured to store locations of each of the mobile computing devices, and said computer enabling authorized users to access the locations of the mobile computing devices, wherein said computer determines whether notification should be sent to an authorized user based on the location of the mobile computing device corresponding to an object, and wherein said computer sends an electronic notification to the authorized user when it has been determined that notification should be sent to the authorized user. 4. A location monitoring system as recited in claim 3, wherein said computer determines whether the location of the mobile computing device corresponding to the object is at one or more of one or more predetermined notification locations, and generates the electronic notification when it is determined that the location of the mobile computing device is at one or more of the one or more predetermined notification locations.
2,600
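Two behaviors recur in the location-monitoring claims above: releasing a stored location only to an authorized requestor, and deciding that a notification is due when a monitored device is at a predetermined notification location. A toy Python sketch of both checks; the in-memory dictionaries, names, and the flat-distance geofence test are stand-ins for whatever database and geometry a real system would use.

```python
from math import hypot

# Hypothetical in-memory stores standing in for the server's database.
LOCATIONS = {"device-1": (40.7128, -74.0060)}                # object -> last fix
AUTHORIZED = {"device-1": {"alice"}}                         # object -> viewers
NOTIFY_ZONES = {"device-1": [((40.7130, -74.0055), 0.001)]}  # (center, radius)

def get_location(requestor, obj_id):
    """Release a location only to users authorized for that object."""
    if requestor not in AUTHORIZED.get(obj_id, set()):
        raise PermissionError(f"{requestor} may not view {obj_id}")
    return LOCATIONS[obj_id]

def notification_due(obj_id):
    """True if the object's current fix falls inside any predetermined
    notification location, i.e. an electronic notification should be sent."""
    lat, lon = LOCATIONS[obj_id]
    return any(hypot(lat - clat, lon - clon) <= radius
               for (clat, clon), radius in NOTIFY_ZONES.get(obj_id, []))

print(get_location("alice", "device-1"))   # permitted requestor gets the fix
print(notification_due("device-1"))        # True: the device is in the zone
```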
10,693
10,693
16,098,101
2,672
A method is described in which sets of corresponding colorimetric values are obtained. In a set, a colorimetric value is associated with a device color. A number of device colors are selected according to the color differences between colorimetric values associated with the same device color. A color calibration pattern is elaborated for a printer to be calibrated using the selected device colors and calibration is performed using the sets of colorimetric values.
1. A method comprising: obtaining a plurality of sets of corresponding colorimetric values, the corresponding colorimetric values being associated with the same device color, selecting S device colors according to the color differences between corresponding colorimetric values of the plurality of sets, elaborating a color calibration pattern for a printer using the S device colors, printing, using the printer, the color calibration pattern elaborated using the selected S device colors, elaborating a set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern, calibrating the printer using the printed color calibration pattern elaborated using the selected S device colors and the elaborated set of colorimetric values. 2. A method in accordance with claim 1, wherein obtaining the plurality of sets of corresponding colorimetric values comprises printing a plurality of color calibration patterns each printed under different printing conditions and each pattern comprising corresponding elements, the corresponding elements being associated with the same device color, and measuring the colorimetric value of each element of each printed pattern to obtain the plurality of sets of corresponding colorimetric values. 3. A method in accordance with claim 1, wherein selecting S device colors comprises: calculating a statistical parameter for each device color according to the color differences between corresponding colorimetric values of the plurality of sets, ranking the device colors according to their statistical parameter, selecting the S device colors according to their rank. 4. A method in accordance with claim 3, wherein calculating a statistical parameter comprises computing the color difference between each colorimetric value of each set of colorimetric values with the corresponding colorimetric values of all the other sets of colorimetric values. 5. A method in accordance with claim 3, wherein calculating a statistical parameter comprises calculating, for each device color, the standard deviation between all the corresponding colorimetric values. 6. A method in accordance with claim 1, wherein obtaining a plurality of sets of corresponding colorimetric values comprises storing, in the printer, the plurality of sets of corresponding colorimetric values. 7. A method in accordance with claim 1, wherein the printed color calibration pattern elaborated using the selected S device colors comprises elements each associated with a selected device color, and calibrating the printer comprises measuring the colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors. 8. A method in accordance with claim 7, wherein elaborating the set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern comprises identifying a set of colorimetric values among the plurality of sets of corresponding colorimetric values having colorimetric values associated with the selected S device colors which are the closest to the measured colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors. 9. 
A method in accordance with claim 7, wherein elaborating the set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern comprises: identifying T sets of colorimetric values among the plurality of sets of colorimetric values, the T sets of colorimetric values being the sets having colorimetric values associated with the selected S device colors which are the closest to the measured colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors, calculating colorimetric values of the elaborated set using a statistical calculation of convex weights and the T sets of colorimetric values. 10. A method in accordance with claim 1, wherein calibrating the printer is performed using a one-dimensional look-up table method, an N-dimensional look-up table method or an adjustment of Neugebauer Primary area coverages in a HANS printer imaging pipeline. 11. A device comprising a storage and a processor, the storage comprising executable instructions to: obtain a plurality of sets of colorimetric values, each colorimetric value in a set of colorimetric values corresponding to a device color, determine S device colors according to the color differences between corresponding colorimetric values of the plurality of sets, elaborate a color calibration pattern for a printer using the S device colors, print, using the printer, the color calibration pattern elaborated using the selected S device colors, elaborate a set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern, calibrate the printer using the printed color calibration pattern elaborated using the selected S device colors and the elaborated set of colorimetric values. 12. A printer comprising a storage and a processor, the storage comprising a plurality of sets of colorimetric values, each colorimetric value in a set of colorimetric values corresponding to a device color, and executable instructions to: select S device colors according to the color differences between corresponding colorimetric values of the plurality of sets, elaborate a color calibration pattern for a printer using the S device colors, print, using the printer, the color calibration pattern elaborated using the selected S device colors, elaborate a set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern, calibrate the printer using the printed color calibration pattern elaborated using the selected S device colors and the elaborated set of colorimetric values. 13. A printer in accordance with claim 12, wherein the instructions to elaborate the set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern comprise instructions to identify a set of colorimetric values among the plurality of sets of corresponding colorimetric values having colorimetric values associated with the selected S device colors which are the closest to the measured colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors. 14. 
A printer in accordance with claim 12, wherein the instructions to elaborate the set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern comprise instructions to identify T sets of colorimetric values among the plurality of sets of colorimetric values, the T sets of colorimetric values being the sets having colorimetric values associated with the selected S device colors which are the closest to the measured colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors, calculate colorimetric values of the elaborated set using a statistical calculation of convex weights and the T sets of colorimetric values. 15. A printer in accordance with claim 12, wherein the executable instructions further comprise instructions to perform a one-dimensional look-up table method or an N-dimensional look-up table method using the plurality of sets of colorimetric values.
A method is described in which sets of corresponding colorimetric values are obtained. In a set, a colorimetric value is associated with a device color. A number of device colors are selected according to the color differences between colorimetric values associated with the same device color. A color calibration pattern is elaborated for a printer to be calibrated using the selected device colors and calibration is performed using the sets of colorimetric values.1. A method comprising: obtaining a plurality of sets of corresponding colorimetric values, the corresponding colorimetric values being associated with the same device color, selecting S device colors according to the color differences between corresponding colorimetric values of the plurality of sets, elaborating a color calibration pattern for a printer using the S device colors, printing, using the printer, the color calibration pattern elaborated using the selected S device colors, elaborating a set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern, calibrating the printer using the printed color calibration pattern elaborated using the selected S device colors and the elaborated set of colorimetric values. 2. A method in accordance with claim 1, wherein obtaining the plurality of sets of corresponding colorimetric values comprises printing a plurality of color calibration patterns each printed under different printing conditions and each pattern comprising corresponding elements, the corresponding elements being associated with the same device color, and measuring the colorimetric value of each element of each printed pattern to obtain the plurality of sets of corresponding colorimetric values. 3. A method in accordance with claim 1, wherein selecting S device colors comprises: calculating a statistical parameter for each device color according to the color differences between corresponding colorimetric values of the plurality of sets, ranking the device colors according to their statistical parameter, selecting the S device colors according to their rank. 4. A method in accordance with claim 3, wherein calculating a statistical parameter comprises computing the color difference between each colorimetric value of each set of colorimetric values with the corresponding colorimetric values of all the other sets of colorimetric values. 5. A method in accordance with claim 3, wherein calculating a statistical parameter comprises calculating, for each device color, the standard deviation between all the corresponding colorimetric values. 6. A method in accordance with claim 1, wherein obtaining a plurality of sets of corresponding colorimetric values comprises storing, in the printer, the plurality of sets of corresponding colorimetric values. 7. A method in accordance with claim 1, wherein the printed color calibration pattern elaborated using the selected S device colors comprises elements each associated with a selected device color, and calibrating the printer comprises measuring the colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors. 8. 
A method in accordance with claim 7, wherein elaborating the set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern comprises identifying a set of colorimetric values among the plurality of sets of corresponding colorimetric values having colorimetric values associated with the selected S device colors which are the closest to the measured colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors. 9. A method in accordance with claim 7, wherein elaborating the set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern comprises: identifying T sets of colorimetric values among the plurality of sets of colorimetric values, the T sets of colorimetric values being the sets having colorimetric values associated with the selected S device colors which are the closest to the measured colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors, calculating colorimetric values of the elaborated set using a statistical calculation of convex weights and the T sets of colorimetric values. 10. A method in accordance with claim 1, wherein calibrating the printer is performed using a one-dimensional look-up table method, an N-dimensional look-up table method or an adjustment of Neugebauer Primary area coverages in a HANS printer imaging pipeline. 11. A device comprising a storage and a processor, the storage comprising executable instructions to: obtain a plurality of sets of colorimetric values, each colorimetric value in a set of colorimetric values corresponding to a device color, determine S device colors according to the color differences between corresponding colorimetric values of the plurality of sets, elaborate a color calibration pattern for a printer using the S device colors, print, using the printer, the color calibration pattern elaborated using the selected S device colors, elaborate a set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern, calibrate the printer using the printed color calibration pattern elaborated using the selected S device colors and the elaborated set of colorimetric values. 12. A printer comprising a storage and a processor, the storage comprising a plurality of sets of colorimetric values, each colorimetric value in a set of colorimetric values corresponding to a device color, and executable instructions to: select S device colors according to the color differences between corresponding colorimetric values of the plurality of sets, elaborate a color calibration pattern for a printer using the S device colors, print, using the printer, the color calibration pattern elaborated using the selected S device colors, elaborate a set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern, calibrate the printer using the printed color calibration pattern elaborated using the selected S device colors and the elaborated set of colorimetric values. 13. 
A printer in accordance with claim 12, wherein the instructions to elaborate the set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern comprise instructions to identify a set of colorimetric values among the plurality of sets of corresponding colorimetric values having colorimetric values associated with the selected S device colors which are the closest to the measured colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors. 14. A printer in accordance with claim 12, wherein the instructions to elaborate the set of colorimetric values from the plurality of sets of colorimetric values and adjusted to the printed color calibration pattern comprise instructions to identify T sets of colorimetric values among the plurality of sets of colorimetric values, the T sets of colorimetric values being the sets having colorimetric values associated with the selected S device colors which are the closest to the measured colorimetric value of each element of the printed color calibration pattern elaborated using the selected S device colors, calculate colorimetric values of the elaborated set using a statistical calculation of convex weights and the T sets of colorimetric values. 15. A printer in accordance with claim 12, wherein the executable instructions further comprise instructions to perform a one-dimensional look-up table method or an N-dimensional look-up table method using the plurality of sets of colorimetric values.
2,600
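The calibration method above hinges on ranking device colors by how much their corresponding colorimetric values disagree across the printed pattern sets, then keeping the S most variable ones for the calibration pattern. A small Python sketch of that selection step, using the mean pairwise CIE76 color difference as the statistical parameter (the claims also allow, e.g., a standard deviation); the measurement data and names are invented for illustration.

```python
import statistics

def delta_e(a, b):
    """Euclidean CIELAB color difference (CIE76) between two L*a*b* triples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_device_colors(sets_of_lab, s):
    """Pick the S device colors whose measurements vary most across sets.

    `sets_of_lab` maps each device color to its corresponding L*a*b*
    measurements, one per printed calibration pattern / printing condition.
    """
    spread = {}
    for color, labs in sets_of_lab.items():
        diffs = [delta_e(labs[i], labs[j])
                 for i in range(len(labs)) for j in range(i + 1, len(labs))]
        spread[color] = statistics.mean(diffs) if diffs else 0.0
    # Rank device colors by their statistical parameter; keep the top S.
    return sorted(spread, key=spread.get, reverse=True)[:s]

measurements = {
    "cyan":    [(54.0, -37.0, -50.1), (53.2, -36.4, -49.0), (55.1, -38.0, -51.2)],
    "magenta": [(48.0, 74.0, -3.0),   (48.1, 74.2, -2.9),   (47.9, 73.8, -3.1)],
}
print(select_device_colors(measurements, 1))  # ['cyan'] -- it varies most
```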
10,694
10,694
14,723,065
2,612
A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a projection of the device user's gaze with a location in a mixed or virtual reality environment. When a projected gaze ray is visibly rendered on other HMD devices (where all the devices are operatively coupled), users of those devices can see what the user is looking at in the environment. In multi-user settings, each HMD device user can see each other's projected gaze rays which can facilitate collaboration in a commonly-shared and experienced mixed or virtual reality environment. The gaze projection can be used much like a finger to point at an object, or to indicate a location on a surface with precision and accuracy.
1. One or more computer-readable memories storing computer-executable instructions which, when executed by one or more processors in a local head mounted display (HMD) device located in a physical environment, perform: using data from a sensor package incorporated into the HMD device to dynamically perform head tracking of the user within the physical environment; responsively to the head tracking, determining a field of view of a mixed reality or virtual reality environment that is renderable by the local HMD device, the field of view being variable depending at least in part on a pose of the user's head in the physical environment; receiving data from a remote HMD device including origin and intercept coordinates of a gaze ray that is projected from an origin at a view position of the remote HMD device and terminates at an intercept at a point of intersection between the projected ray and the mixed reality or virtual reality environment; and visibly rendering a gaze ray on the local HMD device using the received data within the field of view. 2. The one or more computer-readable memories of claim 1 further including rendering a cursor within the field of view at the intercept coordinate. 3. The one or more computer-readable memories of claim 1 further including highlighting an object or an adjoining area that is coincident with the intercept coordinate, the highlighting including one of lighting effect, animation, or marker. 4. The one or more computer-readable memories of claim 3 in which the object is one of real object or virtual object. 5. The one or more computer-readable memories of claim 1 further including rendering an avatar to represent a user of the remote HMD device, at least a portion of the avatar being coincident with the origin coordinate. 6. The one or more computer-readable memories of claim 1 in which the intercept coordinate is at an intersection between the projected gaze ray and a surface in the mixed reality or virtual reality environment that is closest to the local HMD device. 7. The one or more computer-readable memories of claim 1 further including operatively coupling the local HMD device and the remote HMD device over a network. 8. The one or more computer-readable memories of claim 1 further including receiving state data from the remote HMD device, the state data describing operations of the remote HMD device. 9. The one or more computer-readable memories of claim 1 further including controlling an appearance of the visibly rendered gaze ray on the local HMD device based on user input. 10. The one or more computer-readable memories of claim 1 further including visibly rendering multiple gaze rays in which each gaze ray is associated with an avatar of a different user of a respective one of a plurality of remote HMD devices. 11. The one or more computer-readable memories of claim 10 further including visibly rendering the multiple gaze rays in which each ray is rendered in a manner to uniquely identify its associated user. 12. 
A local head mounted display (HMD) device operable by a local user in a physical environment, comprising: one or more processors; a display for rendering a mixed reality or virtual reality environment to the user, a field of view of the mixed reality or virtual reality environment being variable depending at least in part on a pose of the user's head in the physical environment; a sensor package; and one or more memory devices storing computer-readable instructions which, when executed by the one or more processors, perform a method comprising the steps of: performing head tracking of the user within the physical environment using the sensor package, dynamically tracking an origin at the local user's view position of the mixed reality or virtual reality environment responsively to the head tracking, locating an intercept at an intersection between a ray projected from an origin at the view position and a point in the mixed reality or virtual reality environment within a current field of view, and sharing coordinates for the origin and intercept with a remote HMD device over a network, the remote HMD device being configured to visibly render a gaze ray using the coordinates to indicate to a remote user where the local user is looking in the mixed reality or virtual reality environment. 13. The HMD device of claim 12 further including a user interface (UI) and operating the HMD device to enable or disable the sharing responsively to a user input to the UI. 14. The HMD device of claim 12 further including sharing new coordinates for the origin and intercept as the view position changes. 15. The HMD device of claim 12 further including tracking the local user's gaze direction and sharing new coordinates for the origin and intercept as the gaze direction changes. 16. A method performed by a head mounted display (HMD) device that supports rendering of a mixed reality or virtual reality environment, comprising: obtaining sensor data describing a real world physical environment adjoining a user of the HMD device; tracking the user's head in the physical environment using the sensor data to determine a view position of the mixed reality or virtual reality environment; projecting a gaze ray outward from an origin at the view position; identifying an intersection between the projected gaze ray and the mixed reality or virtual reality environment; and transmitting the origin of the projected gaze ray and the identified intersection to a remote service or remote device. 17. The method of claim 16 in which the sensor data includes depth data and further including generating the sensor data using a depth sensor and applying surface reconstruction techniques to reconstruct the physical environment geometry. 18. The method of claim 16 further including generating depth data using depth-from-stereo imaging analyses. 19. The method of claim 16 further including exposing a user interface (UI) for receiving user input, the UI providing user controls or supporting gesture recognition or voice recognition. 20. The method of claim 16 further including projecting the gaze ray along a gaze direction of the user and using one or more inward facing sensors located in the HMD device to determine the gaze direction.
A head mounted display (HMD) device operating in a real world physical environment is configured with a sensor package that enables determination of an intersection of a projection of the device user's gaze with a location in a mixed or virtual reality environment. When a projected gaze ray is visibly rendered on other HMD devices (where all the devices are operatively coupled), users of those devices can see what the user is looking at in the environment. In multi-user settings, each HMD device user can see each other's projected gaze rays which can facilitate collaboration in a commonly-shared and experienced mixed or virtual reality environment. The gaze projection can be used much like a finger to point at an object, or to indicate a location on a surface with precision and accuracy.1. One or more computer-readable memories storing computer-executable instructions which, when executed by one or more processors in a local head mounted display (HMD) device located in a physical environment, perform: using data from a sensor package incorporated into the HMD device to dynamically perform head tracking of the user within the physical environment; responsively to the head tracking, determining a field of view of a mixed reality or virtual reality environment that is renderable by the local HMD device, the field of view being variable depending at least in part on a pose of the user's head in the physical environment; receiving data from a remote HMD device including origin and intercept coordinates of a gaze ray that is projected from an origin at a view position of the remote HMD device and terminates at an intercept at a point of intersection between the projected ray and the mixed reality or virtual reality environment; and visibly rendering a gaze ray on the local HMD device using the received data within the field of view. 2. The one or more computer-readable memories of claim 1 further including rendering a cursor within the field of view at the intercept coordinate. 3. The one or more computer-readable memories of claim 1 further including highlighting an object or an adjoining area that is coincident with the intercept coordinate, the highlighting including one of lighting effect, animation, or marker. 4. The one or more computer-readable memories of claim 3 in which the object is one of real object or virtual object. 5. The one or more computer-readable memories of claim 1 further including rendering an avatar to represent a user of the remote HMD device, at least a portion of the avatar being coincident with the origin coordinate. 6. The one or more computer-readable memories of claim 1 in which the intercept coordinate is at an intersection between the projected gaze ray and a surface in the mixed reality or virtual reality environment that is closest to the local HMD device. 7. The one or more computer-readable memories of claim 1 further including operatively coupling the local HMD device and the remote HMD device over a network. 8. The one or more computer-readable memories of claim 1 further including receiving state data from the remote HMD device, the state data describing operations of the remote HMD device. 9. The one or more computer-readable memories of claim 1 further including controlling an appearance of the visibly rendered gaze ray on the local HMD device based on user input. 10. 
The one or more computer-readable memories of claim 1 further including visibly rendering multiple gaze rays in which each gaze ray is associated with an avatar of a different user of a respective one of a plurality of remote HMD devices. 11. The one or more computer-readable memories of claim 10 further including visibly rendering the multiple gaze rays in which each ray is rendered in a manner to uniquely identify its associated user. 12. A local head mounted display (HMD) device operable by a local user in a physical environment, comprising: one or more processors; a display for rendering a mixed reality or virtual reality environment to the user, a field of view of the mixed reality or virtual reality environment being variable depending at least in part on a pose of the user's head in the physical environment; a sensor package; and one or more memory devices storing computer-readable instructions which, when executed by the one or more processors, perform a method comprising the steps of: performing head tracking of the user within the physical environment using the sensor package, dynamically tracking an origin at the local user's view position of the mixed reality or virtual reality environment responsively to the head tracking, locating an intercept at an intersection between a ray projected from an origin at the view position and a point in the mixed reality or virtual reality environment within a current field of view, and sharing coordinates for the origin and intercept with a remote HMD device over a network, the remote HMD device being configured to visibly render a gaze ray using the coordinates to indicate to a remote user where the local user is looking in the mixed reality or virtual reality environment. 13. The HMD device of claim 12 further including a user interface (UI) and operating the HMD device to enable or disable the sharing responsively to a user input to the UI. 14. The HMD device of claim 12 further including sharing new coordinates for the origin and intercept as the view position changes. 15. The HMD device of claim 12 further including tracking the local user's gaze direction and sharing new coordinates for the origin and intercept as the gaze direction changes. 16. A method performed by a head mounted display (HMD) device that supports rendering of a mixed reality or virtual reality environment, comprising: obtaining sensor data describing a real world physical environment adjoining a user of the HMD device; tracking the user's head in the physical environment using the sensor data to determine a view position of the mixed reality or virtual reality environment; projecting a gaze ray outward from an origin at the view position; identifying an intersection between the projected gaze ray and the mixed reality or virtual reality environment; and transmitting the origin of the projected gaze ray and the identified intersection to a remote service or remote device. 17. The method of claim 16 in which the sensor data includes depth data and further including generating the sensor data using a depth sensor and applying surface reconstruction techniques to reconstruct the physical environment geometry. 18. The method of claim 16 further including generating depth data using depth-from-stereo imaging analyses. 19. The method of claim 16 further including exposing a user interface (UI) for receiving user input, the UI providing user controls or supporting gesture recognition or voice recognition. 20. 
The method of claim 16 further including projecting the gaze ray along a gaze direction of the user and using one or more inward facing sensors located in the HMD device to determine the gaze direction.
2,600
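The gaze-sharing record above reduces to three steps: project a ray from the view position, find its nearest intersection with the environment geometry, and transmit the origin and intercept coordinates so remote devices can render the ray. A minimal Python sketch with infinite planes standing in for the surface-reconstructed geometry; the names and the JSON payload shape are assumptions for illustration, not the patent's wire format.

```python
import json

def intersect_planes(origin, direction, planes):
    """Return the nearest positive intersection of a gaze ray with a set of
    planes, each given as (normal, offset) with points p satisfying n.p = d;
    a crude stand-in for reconstructed environment surfaces."""
    best = None
    for n, d in planes:
        denom = sum(a * b for a, b in zip(n, direction))
        if abs(denom) < 1e-9:
            continue                           # ray parallel to this plane
        t = (d - sum(a * b for a, b in zip(n, origin))) / denom
        if t > 0 and (best is None or t < best):
            best = t                           # keep the closest surface
    if best is None:
        return None
    return tuple(o + best * v for o, v in zip(origin, direction))

origin = (0.0, 1.6, 0.0)                       # view position at head height
direction = (0.0, 0.0, -1.0)                   # gaze straight ahead
walls = [((0, 0, 1), -3.0), ((0, 0, 1), -8.0)] # two walls; nearest one wins
intercept = intersect_planes(origin, direction, walls)
# Share origin and intercept; remote HMDs render the gaze ray from them.
print(json.dumps({"origin": origin, "intercept": intercept}))
```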
10,695
10,695
15,941,726
2,643
Embodiments help to provide a cross-device security scheme for an audio device and a master device to which it is tethered (e.g., a smartphone). An example security scheme provides flexible mechanisms for locking and unlocking the audio device and the device to which it is tethered. For instance, an example security scheme may include: (a) an unlock sync feature that unlocks the audio device and keeps the audio device unlocked whenever the master device is unlocked, (b) a separate audio device unlock process that unlocks the audio device only (without unlocking the master device), and (c) an on-head detection process that, in at least some scenarios, locks the audio device in response to a determination that the audio device is not being worn.
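The three features in this abstract compose into one small state machine, sketched below in Python before the claims. This is a toy model under stated assumptions: all class and method names are invented, and the policy details (when on-head detection re-locks) follow only the abstract's "in at least some scenarios" reading.

```python
class AudioDeviceLock:
    """Toy model of the cross-device locking scheme described above:
    (a) unlock-sync mirrors the master's unlocked state, (b) the audio
    device can be unlocked on its own, and (c) on-head detection re-locks
    it once it is taken off while the master is locked."""

    def __init__(self, unlock_sync=True):
        self.unlock_sync = unlock_sync
        self.master_unlocked = False
        self.audio_unlocked = False
        self.on_head = False

    def master_unlock(self):
        self.master_unlocked = True
        if self.unlock_sync:
            self.audio_unlocked = True       # (a) unlock-sync feature

    def master_lock(self):
        self.master_unlocked = False
        if not self.on_head:
            self.audio_unlocked = False      # (c) lock when not being worn

    def audio_unlock(self, credentials_ok):
        if credentials_ok:
            self.audio_unlocked = True       # (b) independent audio unlock

    def doffed(self):
        self.on_head = False
        if not self.master_unlocked:
            self.audio_unlocked = False      # re-lock once taken off

dev = AudioDeviceLock()
dev.on_head = True
dev.master_unlock(); dev.master_lock()       # stays unlocked while worn
print(dev.audio_unlocked)                    # True
dev.doffed()
print(dev.audio_unlocked)                    # False: locked when removed
```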
1. An audio device comprising: a communication interface configured for communication with a master computing device; at least one microphone; at least one processor; and program instructions stored on a non-transitory computer-readable medium and executable by the at least one processor to: determine that the master computing device is in an unlocked state, and responsively keep the audio device in an unlocked state so long as the master computing device is unlocked; and while the master computing device is in a locked state, implement an audio device unlock process, wherein the audio device unlock process provides a mechanism to unlock the audio device independently from the master computing device. 2. The audio device of claim 1, wherein the audio device unlock process comprises: receiving audio data via the at least one microphone, wherein the audio data comprises speech; and authenticating a user based on the speech. 3. The audio device of claim 1, further comprising at least one touchpad, wherein the audio device unlock process authenticates a user based on touch data received via the touchpad. 4. The audio device of claim 1, further comprising at least one sensor that is operable to determine whether or not the audio device is being worn. 5. The audio device of claim 4, further comprising program instructions stored on the non-transitory computer-readable medium and executable by the at least one processor to analyze sensor data from the at least one sensor to determine whether the audio device is being worn. 6. The audio device of claim 4, further comprising program instructions stored on the non-transitory computer-readable medium and executable by the at least one processor to, while the audio device is unlocked, detect a transition of the master computing device from an unlocked state to a locked state and responsively: enable an on-head detection feature; and while the on-head detection feature is enabled: (a) keep the audio device unlocked so long as the audio device is determined to be worn, and (b) lock the audio device in response to a determination that the audio device is not being worn. 7. A system comprising: a communication interface configured for communication with a wearable audio device; a graphic display; at least one processor; and program instructions stored on a non-transitory computer-readable medium and executable by the at least one processor to provide a graphical user-interface on the graphic display, wherein the graphical user-interface provides a plurality of interface elements to adjust lock-screen parameters of the wearable audio device, and wherein the plurality of interface elements comprises: an interface element for adjustment of an unlock-sync feature, wherein enabling the unlock-sync feature causes the audio device to operate in an unlocked state whenever the master device is in an unlocked state, and wherein disabling the unlock-sync feature allows the audio device to operate in a locked state when the master device is in an unlocked state; and an interface element for selection of an audio device unlock process, wherein the selected audio device unlock process provides a mechanism to unlock the wearable audio device, independent from whether the master device is in the locked state or the unlocked state. 8. 
The system of claim 7, further comprising program instructions stored on a non-transitory computer-readable medium and executable by the at least one processor to receive a communication from the audio device that is indicative of whether or not the audio device is being worn, wherein the communication comprises or is based on sensor data generated by at least one sensor of the audio device. 9. The system of claim 7, further comprising: a data interface configured to receive sensor data that is indicative of whether or not the audio device is being worn; and an on-head detection module comprising program instructions stored on a non-transitory computer-readable medium and executable by the at least one processor to: analyze the sensor data to determine whether or not the audio device is being worn; and in response to a determination that the audio device is not being worn, lock the audio device. 10. The system of claim 9, further comprising program instructions stored on a non-transitory computer-readable medium and executable by the at least one processor to: while the master device is in a locked state, allow the audio device to remain in an unlocked state so long as the sensor data indicates that the audio device is being worn. 11. The system of claim 9, further comprising program instructions stored on a non-transitory computer-readable medium and executable by the at least one processor to: disable the on-head detection module when the master device is in an unlocked state; and enable the on-head detection module in response to an instruction to lock the master device. 12. The system of claim 7, further comprising program instructions stored on the non-transitory computer-readable medium and executable by the at least one processor to: receive, via the at least one interface element for adjustment of an unlock-sync feature, first input data indicating to enable the unlock-sync feature; in response to the first input data, enable the unlock-sync feature; receive, via the at least one interface element for adjustment of an unlock-sync feature, second input data indicating to disable the unlock-sync feature; and in response to the second input data, disable the unlock-sync feature. 13. The system of claim 7, further comprising program instructions stored on the non-transitory computer-readable medium and executable by the at least one processor to: while the unlock-sync feature is enabled and the master device is in a locked state, receive authentication input data corresponding to an unlock process for the master device; verify the authentication input data; and in response to verification of the authentication input data, unlock both the master device and the audio device. 14. The system of claim 7, further comprising program instructions stored on the non-transitory computer-readable medium and executable by the at least one processor to: while the unlock-sync feature is disabled, and while both the master device and the audio device are locked, receive input data indicating an audio device unlock request; and in response to the input data indicating the audio device unlock request, initiate the selected audio device unlock process. 15. 
The system of claim 14, wherein the selected audio device unlock process comprises: receiving input data indicating the audio device unlock request via an interface component of the master device or the audio device; in response to the input data indicating the audio device unlock request, enabling at least one microphone on at least one of the master device and the audio device; receiving audio authentication input data via the at least one microphone; verifying the audio authentication input data; and in response to verifying the audio authentication input data, unlocking the audio device. 16. The system of claim 15, wherein verifying the audio authentication input data comprises carrying out a voiceprint process on the audio authentication input data to determine if a voiceprint of the audio authentication input data matches a voiceprint stored in an account that is associated with at least one of the audio device or the master device. 17. The system of claim 15, wherein verifying the audio authentication input data comprises: applying a speech-to-text process to the audio authentication input data to determine a speech segment; and determining that the speech segment matches a password specified by an account that is associated with at least one of the audio device or the master device. 18. The system of claim 7, further comprising program instructions stored on the non-transitory computer-readable medium and executable by the at least one processor to determine whether or not the audio device is being worn. 19. The system of claim 18, further comprising program instructions stored on the non-transitory computer-readable medium and executable by the at least one processor to condition the audio device unlock process upon a determination that the audio device is being worn, such that the audio device cannot be unlocked unless the audio device is being worn. 20. The system of claim 18, further comprising program instructions stored on the non-transitory computer-readable medium and executable by the at least one processor to override the unlock-sync feature in response to a determination that the audio device is not being worn, such that the audio device is locked regardless of whether the master device is locked or unlocked.
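Claims 16 and 17 give two alternative ways to verify the audio authentication input: a voiceprint match against an enrolled account, or a speech-to-text transcript matched against an account password. A hedged sketch; the embedding inputs, the cosine measure, and the 0.85 threshold are illustrative stand-ins for whatever recognizer a real implementation uses:

```python
import numpy as np

VOICEPRINT_THRESHOLD = 0.85   # illustrative; a real system would tune this

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_by_voiceprint(probe_embedding, enrolled_embedding):
    """Claim 16: match against a voiceprint stored in the associated account."""
    return cosine_similarity(probe_embedding, enrolled_embedding) >= VOICEPRINT_THRESHOLD

def verify_by_spoken_password(transcript, account_password):
    """Claim 17: the speech-to-text segment must match the account's password."""
    return transcript.strip().lower() == account_password.strip().lower()

enrolled = np.array([0.2, 0.9, 0.4])     # embedding enrolled with the account
probe = np.array([0.25, 0.85, 0.45])     # embedding of the unlock utterance
print(verify_by_voiceprint(probe, enrolled))                      # True
print(verify_by_spoken_password("Open Sesame ", "open sesame"))   # True
```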
2,600
10,696
10,696
13,982,918
2,642
An apparatus for providing management of measurement and/or failure reporting may include a processor and memory storing executable computer code causing the apparatus to at least perform operations including receiving a handover command message, from a source cell, indicating an identity of a target cell in response to a decision to hand over the apparatus from the source cell to the target cell. The memory and the computer code are further configured to, with the processor, cause the apparatus to enable provision of at least one generated report to the target cell. The provision of the at least one generated report to the target cell is based at least in part on analyzing data associated with the identity of the target cell. Corresponding methods and computer program products are also provided.
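Per claim 30 below, the "identity" being analyzed is a cell global identifier (CGI). In E-UTRAN a CGI concatenates a PLMN identity (MCC plus MNC) with a cell identity, so the analysis in claims 33 and 38 amounts to splitting those fields. A minimal sketch; the message layout is deliberately simplified and the field names are ours, not ASN.1:

```python
from dataclasses import dataclass

@dataclass
class CellGlobalId:
    mcc: str       # mobile country code, e.g. "244"
    mnc: str       # mobile network code, e.g. "05"
    cell_id: int   # E-UTRAN cell identity

    @property
    def plmn(self) -> str:
        # Claims 33/38: the PLMN of the target cell falls out of the CGI.
        return self.mcc + self.mnc

@dataclass
class HandoverCommand:
    target_cgi: CellGlobalId

def plmn_of_target(cmd: HandoverCommand) -> str:
    return cmd.target_cgi.plmn

cmd = HandoverCommand(CellGlobalId("244", "05", 0x0ABCDEF))
assert plmn_of_target(cmd) == "24405"
```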
1-28. (canceled) 29. A method comprising: receiving a handover command message, from a source cell, indicating an identity of a target cell in response to a decision to handover an apparatus from the source cell to the target cell; and enabling provision of at least one generated report to the target cell based in part on analyzing data associated with the identity of the target cell. 30. The method of claim 29, wherein the identity comprises at least one cell global identifier identifying the target cell and a location of the target cell. 31. The method of claim 29, further comprising: detecting a failure in connectivity with the target cell after a successful handover of the apparatus from the source cell to the target cell; establishing a connection with a different cell in response to the failure; and enabling provision of an indication of the failure to the different cell in response to the established connection, the indication of the failure comprises data identifying the target cell in which the failure occurred and the data identifying the target cell is determined based in part on analyzing the identity of the target cell in the received handover command message. 32. The method of claim 29, further comprising: determining that the handover failed at a time subsequent to receiving the handover command message; establishing a connection with a different cell in response to determining that the handover failed; and enabling provision of an indication, specifying the failure of the handover, to the different cell in response to the established connection, the indication comprising data identifying that the handover failed in the target cell and the data identifying the target cell is determined based in part on analyzing the identity of the target cell in the received handover command message. 33. The method of claim 30, further comprising: analyzing information associated with the cell global identifier of the received handover command message to identify a public land mobile network of the target cell in response to determining that the handover is successfully completed; and directing sending of an indication of availability of one or more items of minimization of drive test measurement data to the target cell, the items of minimization of drive test measurement data are previously obtained in the source cell. 34. An apparatus comprising: at least one processor; and at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive a handover command message, from a source cell, indicating an identity of a target cell in response to a decision to handover the apparatus from the source cell to the target cell; and enable provision of at least one generated report to the target cell based in part on analyzing data associated with the identity of the target cell. 35. The apparatus of claim 34, wherein the identity comprises at least one cell global identifier identifying the target cell and a location of the target cell. 36. 
The apparatus of claim 34, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: detect a failure in connectivity with the target cell after a successful handover of the apparatus from the source cell to the target cell; establish a connection with a different cell in response to the failure; and enable provision of an indication of the failure to the different cell in response to the established connection, the indication of the failure comprises data identifying the target cell in which the failure occurred and the data identifying the target cell is determined based in part on analyzing the identity of the target cell in the received handover command message. 37. The apparatus of claim 34, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: determine that the handover failed at a time subsequent to receiving the handover command message; establish a connection with a different cell in response to determining that the handover failed; and enable provision of an indication, specifying the failure of the handover, to the different cell in response to the established connection, the indication comprising data identifying that the handover failed in the target cell and the data identifying the target cell is determined based in part on analyzing the identity of the target cell in the received handover command message. 38. The apparatus of claim 35, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: analyze information associated with the cell global identifier of the received handover command message to identify a public land mobile network of the target cell in response to determining that the handover is successfully completed; and direct sending of an indication of availability of one or more items of minimization of drive test measurement data to the target cell, the items of minimization of drive test measurement data are previously obtained in the source cell. 39. The apparatus of claim 38, wherein the information associated with the cell global identifier comprises an identifier identifying the public land mobile network. 40. The apparatus of claim 38, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: direct sending of the items of minimization of drive test measurement data to the identified public land mobile network of the target cell in response to receipt of a request from the target cell. 41. A computer program product comprising at least one computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions comprising: program code instructions configured to cause receipt of a handover command message, from a source cell, indicating an identity of a target cell in response to a decision to handover an apparatus from the source cell to the target cell; and program code instructions configured to enable provision of at least one generated report to the target cell based in part on analyzing data associated with the identity of the target cell. 42. 
A method comprising: receiving an identity of a target cell from an apparatus of the target cell; generating a handover command message comprising data indicating an identity of the target cell in response to a decision to handover a device from a source cell to the target cell; and enabling provision of the handover command message to the device to enable the device to provide at least one generated report to the target cell based at least in part on analyzing data associated with the identity of the target cell. 43. The method of claim 42, wherein prior to receiving the identity of the target cell, the method further comprises: directing sending of a generated handover request to the target cell, the handover request comprising information requesting handover of the device from the source cell to the target cell. 44. The method of claim 42, wherein receiving the identity further comprises receiving the identity from the apparatus during one or more neighbor relation communications between the source cell and the target cell. 45. An apparatus comprising: at least one processor; and at least one memory including computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive an identity of a target cell from another apparatus of the target cell; generate a handover command message comprising data indicating an identity of the target cell in response to a decision to handover a device from a source cell to the target cell; and enable provision of the handover command message to the device to enable the device to provide at least one generated report to the target cell based at least in part on analyzing data associated with the identity of the target cell. 46. The apparatus of claim 45, wherein prior to receiving the identity of the target cell, the memory and computer program code are configured to, with the processor, cause the apparatus to: direct sending of a generated handover request to the target cell, the handover request comprising information requesting handover of the device from the source cell to the target cell. 47. The apparatus of claim 45, wherein the memory and computer program code are configured to, with the processor, cause the apparatus to: receive the identity by receiving the identity from the other apparatus during one or more neighbor relation communications between the source cell and the target cell. 48. A computer program product comprising at least one computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions comprising: program code instructions configured to receive an identity of a target cell from another apparatus of the target cell; program code instructions configured to generate a handover command message comprising data indicating an identity of the target cell in response to a decision to handover a device from a source cell to the target cell; and program code instructions configured to enable provision of the handover command message to the device to enable the device to provide at least one generated report to the target cell based at least in part on analyzing data associated with the identity of the target cell.
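Claims 31-32 (and their apparatus twins 36-37) both end in an indication that quotes the target cell identity remembered from the handover command rather than anything re-measured. A self-contained sketch of that UE-side bookkeeping; the dictionary layout is illustrative only:

```python
def build_failure_indication(target_plmn, target_cell_id, handover_succeeded):
    """Indication sent to the newly joined cell (claims 31-32 / 36-37)."""
    return {
        # The failed cell is identified from the stored handover command,
        # not re-measured; that is why the command carries the target identity.
        "failed_cell_plmn": target_plmn,
        "failed_cell_id": target_cell_id,
        "failure_type": ("connectivity-after-handover" if handover_succeeded
                         else "handover-failure"),
    }

# Claim 32 path: the handover itself failed, so the device reports to whichever
# cell it reconnected to, quoting the target identity from the command.
print(build_failure_indication("24405", 0x0ABCDEF, handover_succeeded=False))
```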
2,600
10,697
10,697
16,010,355
2,643
A platform generates a current path of a venue attendee based on the venue attendee's recorded location data. The platform also generates a suggested path for the venue attendee from the venue attendee's location to a point of interest identified in the attendee's itinerary. The platform identifies a dissimilarity between the current path of the venue attendee and the suggested path for the venue attendee and, based on identification of this dissimilarity, sends an alert to a venue staff member who is near the venue attendee.
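Claim 2 below makes the dissimilarity test concrete: compare the heading of the current path with the heading of the suggested path and flag divergence beyond a preset angle. A minimal sketch, treating each path's heading as the direction of its most recent segment; the 45-degree threshold is an arbitrary illustrative value:

```python
import math

def heading(path):
    """Heading of a path's most recent segment, in radians; path is [(x, y), ...]."""
    (x0, y0), (x1, y1) = path[-2], path[-1]
    return math.atan2(y1 - y0, x1 - x0)

def paths_dissimilar(current_path, suggested_path, max_angle_deg=45.0):
    """Claim 2: flag when the headings diverge by more than a preset angle."""
    diff = abs(heading(current_path) - heading(suggested_path))
    diff = min(diff, 2 * math.pi - diff)     # wrap the difference into [0, pi]
    return math.degrees(diff) > max_angle_deg

current = [(0.0, 0.0), (1.0, 0.1)]           # attendee drifting east
suggested = [(0.0, 0.0), (0.0, 1.0)]         # itinerary says head north
print(paths_dissimilar(current, suggested))  # True -> alert a nearby staff member
```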
1. A method for assisting a venue staff member in providing guidance within a predetermined venue area, the method comprising: receiving, from a mobile device associated with the venue staff member, a location of the mobile device associated with the venue staff member; receiving, from a mobile device associated with a venue attendee, a location of the mobile device associated with the venue attendee; retrieving an itinerary associated with the venue attendee, the itinerary identifying a point of interest; generating a suggested path for the venue attendee from the location of the mobile device associated with the venue attendee to a location of the point of interest identified by the itinerary; generating a current path of the venue attendee based on the location of the mobile device associated with the venue attendee and one or more past locations of the mobile device associated with the venue attendee; identifying a dissimilarity between the suggested path for the venue attendee and the current path of the venue attendee; and sending an alert to the mobile device associated with the venue staff member automatically in response to identifying the dissimilarity, the alert identifying at least the venue attendee and the point of interest. 2. The method of claim 1, wherein identifying the dissimilarity between the suggested path for the venue attendee and the current path of the venue attendee includes identifying that an angle between a heading of the suggested path for the venue attendee and a heading of the current path of the venue attendee is greater than a predetermined angle value. 3. The method of claim 1, further comprising: receiving a selection of the point of interest from the mobile device associated with the venue attendee; and generating the itinerary associated with the venue attendee. 4. The method of claim 1, further comprising: retrieving profile information identifying one or more traits of the venue attendee; retrieving point of interest information identifying one or more traits of each of a plurality of points of interest located within the predetermined venue area, the plurality of points of interest including the point of interest; generating a recommendation recommending the point of interest to the venue attendee based on a comparison between the profile information and the point of interest information; and generating the itinerary associated with the venue attendee. 5. The method of claim 4, wherein the comparison between the profile information and the point of interest information indicates that the location of the point of interest appears in a location history associated with the venue attendee at least a predetermined number of times. 6. The method of claim 4, wherein the comparison between the profile information and the point of interest information indicates that the location of the point of interest is missing from a location history associated with the venue attendee. 7. The method of claim 4, wherein the comparison between the profile information and the point of interest information indicates that the venue attendee previously provided a rating of the point of interest, wherein the rating exceeds a predetermined rating value. 8. The method of claim 4, wherein the comparison between the profile information and the point of interest information indicates that the point of interest provides a type of food that the venue attendee prefers. 9. 
The method of claim 4, wherein the comparison between the profile information and the point of interest information indicates that the point of interest lacks a type of food that the venue attendee dislikes. 10. The method of claim 1, further comprising identifying, based on an estimated movement speed of the venue attendee, that the venue attendee must begin heading toward a location of the point of interest within a predetermined period of time for the venue attendee to arrive at the location of the point of interest at a particular time, wherein the particular time is associated with the point of interest in the itinerary. 11. The method of claim 1, further comprising: receiving, from a mobile device associated with a second venue staff member, a location of the mobile device associated with the second venue staff member; and selecting the mobile device associated with the venue staff member to be sent the alert rather than the mobile device associated with the second venue staff member based on the location of the mobile device associated with the venue staff member being closer than the location of the mobile device associated with the second venue staff member to the location of the mobile device associated with the venue attendee. 12. The method of claim 1, wherein identifying the dissimilarity between the suggested path for the venue attendee and the current path of the venue attendee includes identifying that the current path leads more toward a location of a second point of interest than toward the location of the point of interest identified by the itinerary. 13. The method of claim 1, wherein the venue attendee is in a group, and wherein the itinerary associated with the venue attendee is also associated with one or more other venue attendees in the group. 14. A system for assisting a venue staff member in providing guidance within a predetermined venue area, the system comprising: a communication transceiver that receives a location of a mobile device associated with the venue staff member and a location of a mobile device associated with a venue attendee and that sends an alert to the mobile device associated with the venue staff member; a memory that stores instructions and an itinerary associated with the venue attendee; and a processor, wherein execution of the instructions by the processor causes the processor to: generate a suggested path for the venue attendee from the location of the mobile device associated with the venue attendee to a location of a point of interest identified by the itinerary, generate a current path of the venue attendee based on the location of the mobile device associated with the venue attendee and one or more past locations of the mobile device associated with the venue attendee, identify a dissimilarity between the suggested path for the venue attendee and the current path of the venue attendee, and generate the alert automatically in response to identifying the dissimilarity. 15. The system of claim 14, wherein the communication transceiver receives the location of the mobile device associated with the venue attendee from the mobile device associated with the venue attendee. 16. The system of claim 14, wherein the communication transceiver receives the location of the mobile device associated with the venue staff member from the mobile device associated with the venue staff member. 17. The system of claim 14, wherein the communication transceiver receives the itinerary associated with the venue attendee. 18. 
A method for assisting a venue staff member in providing guidance within a predetermined venue area, the method comprising: receiving a location of a mobile device associated with the venue staff member; receiving a location of a mobile device associated with a venue attendee; generating a suggested path for the venue attendee from the location of the mobile device associated with the venue attendee to a location of a point of interest identified in an itinerary associated with the venue attendee; generating a current path of the venue attendee based on the location of a mobile device associated with the venue attendee and one or more past locations of the mobile device associated with the venue attendee; identifying a dissimilarity between the suggested path for the venue attendee and the current path of the venue attendee; and sending an alert identifying the venue attendee to the mobile device associated with the venue staff member automatically in response to identifying the dissimilarity. 19. The method of claim 18, further comprising identifying that the location of the mobile device associated with the venue attendee is closer to the location of the mobile device associated with the venue staff member than it is to any of a plurality of locations of other mobile devices associated with other venue staff members before sending the alert to the mobile device associated with the venue staff member. 20. The method of claim 18, wherein the alert also identifies the point of interest identified in the itinerary associated with the venue attendee.
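Claims 11 and 19 route the alert to whichever staff device is closest to the attendee's device. A sketch of that selection on planar coordinates; a deployed system would presumably use geodesic distance on latitude/longitude, and the names here are ours:

```python
import math

def nearest_staff(attendee_xy, staff_locations):
    """Claims 11/19: pick the staff device closest to the attendee's device.

    staff_locations maps a staff id to an (x, y) position.
    """
    def dist(xy):
        return math.hypot(xy[0] - attendee_xy[0], xy[1] - attendee_xy[1])
    return min(staff_locations, key=lambda sid: dist(staff_locations[sid]))

staff = {"staff-A": (10.0, 2.0), "staff-B": (3.0, 4.0)}
print(nearest_staff((2.0, 2.0), staff))   # staff-B receives the alert
```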
2,600
10,698
10,698
16,361,784
2,683
A sub for a wired pipe system includes a body that includes an outer surface and a pin end, and a first transmission device located in or near the pin end. The sub also includes a communication collar that at least partially surrounds the outer surface and that is rotatable relative to the body, a second transmission device in electrical communication with the first transmission device and a transmission line that electrically connects the first and second transmission devices and that passes at least partially through the body. The sub further includes a third transmission device located in the communication collar in communication with the second transmission device. In the disclosed sub, the first, second and third transmission devices are all of the same type.
1. A sub for a wired pipe system, the sub comprising: a body including an outer surface that defines an inner bore through which a fluid passes during drilling; a first transmission device (34) located within the outer surface and at an end of the body and outside of the inner bore such that the fluid passes through the first transmission device during drilling; a communication collar that at least partially surrounds the outer surface and that is rotatable relative to the body; a second transmission device in electrical communication with the first transmission device and located on the outer surface of the body; a transmission line that electrically connects the first and second transmission devices and that passes at least partially through the body; and a third transmission device located in the communication collar in communication with the second transmission device; wherein the first, second and third transmission devices are all of the same type, wherein, in operation, signals are received from or provided to a surface unit from the communication collar through a communication line. 2. The sub of claim 1, wherein the first, second and third transmission devices are all selected from one of: a capacitive coupler, and a resonant coupler. 3. The sub of claim 1, further comprising: a wireless transmitter in electrical communication with the third transmission device. 4. The sub of claim 1, further comprising: an outer adapter coupled to the outer surface that contains the second transmission device. 5. The sub of claim 4, wherein the outer adapter rotates relative to the communication collar. 6. The sub of claim 1, wherein the communication collar includes a collar body and an output terminal. 7. The sub of claim 6, wherein the collar body includes a communication line that carries signals between the third transmission device and the output terminal. 8. The sub of claim 7, wherein the output terminal includes a wireless transmitter. 9. The sub of claim 8, wherein the communication collar includes: a retaining ring that surrounds the outer surface; and a first bearing disposed about the outer surface and held in place at least partially by the retaining ring. 10. The sub of claim 9, wherein the communication collar further includes a second bearing; wherein the collar body is at least partially disposed between the first and second bearings. 11. The sub of claim 10, further comprising: an outer adapter at least partially disposed between the first and second bearings.
A sub for a wired pipe system includes a body that includes an outer surface and a pin end, and a first transmission device located in or near the pin end. The sub also includes a communication collar that at least partially surrounds the outer surface and that is rotatable relative to the body, a second transmission device in electrical communication with the first transmission device and a transmission line that electrically connects the first and second transmission devices and that passes at least partially through the body. The sub further includes a third transmission device located in the communication collar in communication with the second transmission device. In the disclosed sub, the first, second and third transmission devices are all of the same type.1. A sub for a wired pipe system, the sub comprising: a body including an outer surface that defines an inner bore through which a fluid passes during drilling; a first transmission device (34) located within the outer surface and at an end of the body and outside of the inner bore such that the fluid passes through the first transmission device during drilling; a communication collar that at least partially surrounds the outer surface and that is rotatable relative to the body; a second transmission device in electrical communication with the first transmission device and located on the outer surface of the body; a transmission line that electrically connects the first and second transmission devices and that passes at least partially through the body; and a third transmission device located in the communication collar in communication with the second transmission device; wherein the first, second and third transmission devices are all of the same type, wherein, in operation, signals are received from or provided to a surface unit from the communication collar through a communication line. 2. The sub of claim 1, wherein the first, second and third transmission devices are all selected from one of: a capacitive coupler, and a resonant coupler. 3. The sub of claim 1, further comprising: a wireless transmitter in electrical communication with the third transmission device. 4. The sub of claim 1, further comprising: an outer adapter coupled to the outer surface that contains the second transmission device. 5. The sub of claim 4, wherein the outer adapter rotates relative to the communication collar. 6. The sub of claim 1, wherein the communication collar includes a collar body and an output terminal. 7. The sub of claim 6, wherein the collar body includes a communication line that carries signals between the third transmission device and the output terminal. 8. The sub of claim 7, wherein the output terminal includes a wireless transmitter. 9. The sub of claim 8, wherein the communication collar includes: a retaining ring that surrounds the outer surface; and a first bearing disposed about the outer surface and held in place at least partially by the retaining ring. 10. The sub of claim 9, wherein the communication collar further includes a second bearing; wherein the collar body is at least partially disposed between the first and second bearings. 11. The sub of claim 10, further comprising: an outer adapter at least partially disposed between the first and second bearings.
2,600
10,699
10,699
15,077,569
2,663
A method is disclosed for generating sinograms by sampling a plurality of transducers acoustically coupled with the surface of a volume of tissue over a period of time after a light pulse at one wavelength, and after another light pulse at a different wavelength, and for processing those sinograms, reconstructing at least two optoacoustic images from the two sinograms, processing the two optoacoustic images to generate two envelope images and generating a parametric map from information in the two envelope images. In an embodiment, motion and tracking are determined to align the envelope images. In an embodiment, at least a second parametric map is produced from information in the same two envelope images. In an embodiment, an ultrasound image is also acquired, and the parametric map is coregistered with and overlaid upon the ultrasound image, and then displayed.
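The abstract compresses a multi-stage pipeline into one sentence: per-wavelength sinogram processing, reconstruction, envelope formation, then a parametric map from the pair. A minimal sketch of that flow, assuming synthetic channel data, a SciPy bandpass in place of the claimed processing steps, a Hilbert-magnitude envelope, and a ratio-style map; the disclosed reconstruction itself is not reproduced here:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Sketch of the abstract's two-wavelength flow. The 40 MHz sampling rate,
# the 0.5-6 MHz passband, the Hilbert-magnitude envelope and the ratio-style
# parametric map are all assumptions for illustration; the patent's actual
# reconstruction step is replaced by a channel-wise identity.

def bandpass(sino, fs=40e6, lo=0.5e6, hi=6e6):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sino, axis=1)     # filter each channel's time series

def envelope(sino):
    return np.abs(hilbert(sino, axis=1))    # complex magnitude per channel

def parametric_map(env_a, env_b, eps=1e-9):
    return env_a / (env_b + eps)            # per-pixel dual-wavelength ratio

rng = np.random.default_rng(0)
sino_w1 = rng.standard_normal((128, 2048))  # channels x samples, wavelength 1
sino_w2 = rng.standard_normal((128, 2048))  # channels x samples, wavelength 2
pmap = parametric_map(envelope(bandpass(sino_w1)), envelope(bandpass(sino_w2)))
print(pmap.shape)                           # (128, 2048)
```

In a real system the ratio would be taken on reconstructed image pixels rather than on raw channel data, but the ordering of the stages is the point of the sketch.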
1. A method comprising the steps of: generating a plurality of sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms, the processing comprising one or more steps selected from the group consisting of: mitigating anomalous channels, mitigating common mode stripes, bandpass filtering, mitigating the system transfer function, normalization of dynamic range, normalization of energy, removal of interframe persistent artifacts, compensating for hardware time gain compensation, performing sub-band acoustic compensation and applying a transform operator; performing image reconstruction based upon the at least two processed sinograms to generate at least two optoacoustic images, the image reconstruction comprising one or more steps selected from the group consisting of: extract quadrature, sub-band acoustic compensation, and reconstruction; performing image post processing on the at least two optoacoustic images to generate at least two envelope images, the image post processing comprising one or more steps selected from the group consisting of: remove interframe persistent artifact, fluence compensation and complex magnitude; and generating at least one parametric map based upon the information contained in the at least two envelope images. 2. The method of claim 1, wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned envelope images; and performing parametric calculations on the at least two aligned envelope images to produce the at least one parametric map. 3. The method of claim 2, further comprising: performing additional parametric calculations on the at least two aligned envelope images to produce an additional parametric map. 4. The method of claim 3, further comprising: generating a third parametric map, the third parametric map reflective of a combination of information in the at least one parametric map and the additional parametric map. 5. The method of claim 4, further comprising: receiving an ultrasound image representing at least a portion of the volume; and coregistering and overlaying the ultrasound image with at least one selected from the group of: the at least one parametric map, the additional parametric map and the third parametric map; and displaying the coregistered image on the optoacoustic imaging system. 6. 
A method comprising the steps of: generating a plurality of multi-channel sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two multi-channel sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms; performing image reconstruction based upon the at least two processed sinograms to generate at least two optoacoustic images; performing image post processing on the at least two optoacoustic images to generate at least two post-processed images, the image post processing comprising interframe persistent artifact removal; and generating at least one parametric map based upon the information contained in the at least two post-processed images. 7. The method of claim 6, further comprising: wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned post-processed images; and performing parametric calculations on the at least two aligned post-processed images to produce the at least one parametric map; the method further comprising: performing additional parametric calculations on the at least two aligned post-processed images to produce an additional parametric map; and generating a third parametric map, the third parametric map reflective of a combination of information in the at least one parametric map and the additional parametric map. 8. The method of claim 7, further comprising: receiving an ultrasound image representing at least a portion of the volume; and coregistering and overlaying the ultrasound image with the third parametric map; and displaying the coregistered image on the optoacoustic imaging system. 9. A method comprising the steps of: generating a plurality of multi-channel sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two multi-channel sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms; performing image reconstruction based upon the at least two processed sinograms to generate at least two optoacoustic images; performing image post processing on the at least two optoacoustic images to generate at least two post-processed images, the image post processing comprising fluence compensation; and generating at least one parametric map based upon the information contained in the at least two post-processed images. 10. 
The method of claim 9, further comprising: wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned post-processed images; and performing parametric calculations on the at least two aligned post-processed images to produce the at least one parametric map; the method further comprising: performing additional parametric calculations on the at least two aligned post-processed images to produce an additional parametric map; and generating a third parametric map, the third parametric map reflective of a combination of information in the at least one parametric map and the additional parametric map. 11. The method of claim 10, further comprising: receiving an ultrasound image representing at least a portion of the volume; and coregistering and overlaying the ultrasound image with the third parametric map; and displaying the coregistered image on the optoacoustic imaging system. 12. A method comprising the steps of: generating a plurality of multi-channel sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two multi-channel sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms; extracting quadrature from each of the at least two processed sinograms thus providing real and imaginary components of each of the at least two processed sinograms; performing image reconstruction based upon each of the real and imaginary components of the at least two processed sinograms to generate at least four optoacoustic images; performing image post processing on the at least four optoacoustic images to generate at least two envelope images, the image post processing comprising complex magnitude; and generating at least one parametric map based upon the information contained in the at least two envelope images. 13. The method of claim 12, further comprising: wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned envelope images; and performing parametric calculations on the at least two aligned envelope images to produce the at least one parametric map; the method further comprising: performing additional parametric calculations on the at least two aligned envelope images to produce an additional parametric map; and generating a third parametric map, the third parametric map reflective of a combination of information in the at least one parametric map and the additional parametric map. 14. The method of claim 13, further comprising: receiving an ultrasound image representing at least a portion of the volume; coregistering and overlaying the ultrasound image with the third parametric map; and, displaying the coregistered image on the optoacoustic imaging system. 15. 
A method comprising the steps of: generating a plurality of multi-channel sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two multi-channel sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms; performing image reconstruction based upon the at least two processed sinograms to generate at least two optoacoustic images; performing image post processing on the at least two optoacoustic images to generate at least two post-processed images, the image post processing comprising fluence compensation; and generating at least one parametric map based upon the information contained in the at least two post-processed images; wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned post-processed images; and performing parametric calculations on the at least two aligned post-processed images to produce the at least one parametric map. 16. The method of claim 15, wherein fluence compensation comprises: determining a common fluence curve, which common fluence curve is a function of parameters comprising a depth parameter and an additional parameter; determining a value for the additional parameter, which value influences the common fluence curve; determining a first wavelength specific fluence curve for a first of the at least two predominant wavelengths, which first wavelength specific fluence curve is a function of parameters comprising the depth parameter and a first wavelength specific parameter; determining a value for the first wavelength specific parameter, which value influences the first wavelength specific fluence curve; applying an overall fluence normalization based on both the common fluence curve and the first wavelength specific fluence curve to a first of the at least two optoacoustic images to compute a first fluence compensated image; determining a second wavelength specific fluence curve for a second of the at least two predominant wavelengths, which second wavelength specific fluence curve is a function of parameters comprising the depth parameter and a second wavelength specific parameter; determining a value for the second wavelength specific parameter; and, applying a second overall fluence normalization based on the common fluence curve and the second wavelength specific fluence curve to a second of the at least two optoacoustic images to compute a second fluence compensated image. 17. The method of claim 16, wherein fluence compensation further comprises determining a region of interest of the volume, which region of interest comprises a depth measure, wherein the depth measure is used in forming a dependent parameter in at least one of the steps consisting of: determining of the value for the additional parameter, determining of the value for the first wavelength specific parameter and determining of the value for the second wavelength specific parameter. 18. 
The method of claim 16, wherein at least one of the common fluence curve, the first wavelength specific fluence curve or the second wavelength specific fluence curve is computed by computing a statistical feature whose value varies with depth in an image, wherein the selected fluence curve is based on the value of the statistical feature as a function of depth.
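Claims 12-14 above fix an ordering — quadrature extraction, per-component reconstruction, then complex magnitude — and claims 2, 7, 10 and 13 require motion determination to align the per-wavelength images before the parametric calculation, without specifying a tracking method. A sketch under stated assumptions: an identity placeholder stands in for the disclosed reconstruction, and FFT phase correlation stands in for the unspecified motion tracking.

```python
import numpy as np
from scipy.signal import hilbert

def extract_quadrature(sino):
    """Per-channel analytic signal: real (in-phase) and imaginary
    (quadrature) components, as recited in claim 12."""
    analytic = hilbert(sino, axis=1)
    return analytic.real, analytic.imag

def reconstruct(component):
    """Placeholder: the disclosed reconstruction is not reproduced here."""
    return component

def envelope_image(sino):
    """Reconstruct each component separately, then take the complex
    magnitude -- the ordering claims 12-14 emphasize."""
    re, im = extract_quadrature(sino)
    return np.hypot(reconstruct(re), reconstruct(im))

def estimate_shift(ref, mov):
    """FFT phase correlation: a stand-in for 'determining motion and
    tracking'; returns the integer (row, col) shift of mov versus ref."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

def align(ref, mov):
    dy, dx = estimate_shift(ref, mov)
    return np.roll(mov, shift=(-dy, -dx), axis=(0, 1))

rng = np.random.default_rng(0)
env_a = envelope_image(rng.standard_normal((64, 512)))
env_b = np.roll(env_a, shift=(2, 0), axis=(0, 1))   # simulate tissue motion
aligned_b = align(env_a, env_b)
print(np.allclose(aligned_b, env_a))                # True: shift recovered
pmap = env_a / (aligned_b + 1e-9)                   # one possible parametric calculation
```

A deployed tracker would likely handle sub-pixel and non-rigid motion; the global-shift version is only meant to make the claimed ordering concrete.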
A method is disclosed for generating sinograms by sampling a plurality of transducers acoustically coupled with the surface of a volume of tissue over a period of time after a light pulse at one wavelength, and after another light pulse at a different wavelength, and for processing those sinograms, reconstructing at least two optoacoustic images from the two sinograms, processing the two optoacoustic images to generate two envelope images and generating a parametric map from information in the two envelope images. In an embodiment, motion and tracking are determined to align the envelope images. In an embodiment, at least a second parametric map is produced from information in the same two envelope images. In an embodiment, an ultrasound image is also acquired, and the parametric map is coregistered with and overlaid upon the ultrasound image, and then displayed.1. A method comprising the steps of: generating a plurality of sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms, the processing comprising one or more steps selected from the group consisting of: mitigating anomalous channels, mitigating common mode stripes, bandpass filtering, mitigating the system transfer function, normalization of dynamic range, normalization of energy, removal of interframe persistent artifacts, compensating for hardware time gain compensation, performing sub-band acoustic compensation and applying a transform operator; performing image reconstruction based upon the at least two processed sinograms to generate at least two optoacoustic images, the image reconstruction comprising one or more steps selected from the group consisting of: extract quadrature, sub-band acoustic compensation, and reconstruction; performing image post processing on the at least two optoacoustic images to generate at least two envelope images, the image post processing comprising one or more steps selected from the group consisting of: remove interframe persistent artifact, fluence compensation and complex magnitude; and generating at least one parametric map based upon the information contained in the at least two envelope images. 2. The method of claim 1, wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned envelope images; and performing parametric calculations on the at least two aligned envelope images to produce the at least one parametric map. 3. The method of claim 2, further comprising: performing additional parametric calculations on the at least two aligned envelope images to produce an additional parametric map. 4. The method of claim 3, further comprising: generating a third parametric map, the third parametric map reflective of a combination of information in the at least one parametric map and the additional parametric map. 5. 
The method of claim 4, further comprising: receiving an ultrasound image representing at least a portion of the volume; and coregistering and overlaying the ultrasound image with at least one selected from the group of: the at least one parametric map, the additional parametric map and the third parametric map; and displaying the coregistered image on the optoacoustic imaging system. 6. A method comprising the steps of: generating a plurality of multi-channel sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two multi-channel sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms; performing image reconstruction based upon the at least two processed sinograms to generate at least two optoacoustic images; performing image post processing on the at least two optoacoustic images to generate at least two post-processed images, the image post processing comprising interframe persistent artifact removal; and generating at least one parametric map based upon the information contained in the at least two post-processed images. 7. The method of claim 6, further comprising: wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned post-processed images; and performing parametric calculations on the at least two aligned post-processed images to produce the at least one parametric map; the method further comprising: performing additional parametric calculations on the at least two aligned post-processed images to produce an additional parametric map; and generating a third parametric map, the third parametric map reflective of a combination of information in the at least one parametric map and the additional parametric map. 8. The method of claim 7, further comprising: receiving an ultrasound image representing at least a portion of the volume; and coregistering and overlaying the ultrasound image with the third parametric map; and displaying the coregistered image on the optoacoustic imaging system. 9. A method comprising the steps of: generating a plurality of multi-channel sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two multi-channel sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms; performing image reconstruction based upon the at least two processed sinograms to generate at least two optoacoustic images; performing image post processing on the at least two optoacoustic images to generate at least two post-processed images, the image post processing comprising fluence compensation; and generating at least one parametric map based upon the information contained in the at least two post-processed images. 10. 
The method of claim 9, further comprising: wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned post-processed images; and performing parametric calculations on the at least two aligned post-processed images to produce the at least one parametric map; the method further comprising: performing additional parametric calculations on the at least two aligned post-processed images to produce an additional parametric map; and generating a third parametric map, the third parametric map reflective of a combination of information in the at least one parametric map and the additional parametric map. 11. The method of claim 10, further comprising: receiving an ultrasound image representing at least a portion of the volume; and coregistering and overlaying the ultrasound image with the third parametric map; and displaying the coregistered image on the optoacoustic imaging system. 12. A method comprising the steps of: generating a plurality of multi-channel sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two multi-channel sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms; extracting quadrature from each of the at least two processed sinograms thus providing real and imaginary components of each of the at least two processed sinograms; performing image reconstruction based upon each of the real and imaginary components of the at least two processed sinograms to generate at least four optoacoustic images; performing image post processing on the at least four optoacoustic images to generate at least two envelope images, the image post processing comprising complex magnitude; and generating at least one parametric map based upon the information contained in the at least two envelope images. 13. The method of claim 12, further comprising: wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned envelope images; and performing parametric calculations on the at least two aligned envelope images to produce the at least one parametric map; the method further comprising: performing additional parametric calculations on the at least two aligned envelope images to produce an additional parametric map; and generating a third parametric map, the third parametric map reflective of a combination of information in the at least one parametric map and the additional parametric map. 14. The method of claim 13, further comprising: receiving an ultrasound image representing at least a portion of the volume; coregistering and overlaying the ultrasound image with the third parametric map; and, displaying the coregistered image on the optoacoustic imaging system. 15. 
A method comprising the steps of: generating a plurality of multi-channel sinograms, each being generated by sampling a plurality of transducers acoustically coupled with a surface of a volume for a predetermined period of time after a pulse of light having a predominant wavelength selected from at least two different predominant wavelengths, each transducer being associated with a channel in an optoacoustic imaging system; processing at least two multi-channel sinograms, each corresponding to a different one of the at least two different predominant wavelengths to create at least two processed sinograms; performing image reconstruction based upon the at least two processed sinograms to generate at least two optoacoustic images; performing image post processing on the at least two optoacoustic images to generate at least two post-processed images, the image post processing comprising fluence compensation; and generating at least one parametric map based upon the information contained in the at least two post-processed images; wherein the step of generating at least one parametric map comprises: determining motion and tracking to produce at least two aligned post-processed images; and performing parametric calculations on the at least two aligned post-processed images to produce the at least one parametric map. 16. The method of claim 15, wherein fluence compensation comprises: determining a common fluence curve, which common fluence curve is a function of parameters comprising a depth parameter and an additional parameter; determining a value for the additional parameter, which value influences the common fluence curve; determining a first wavelength specific fluence curve for a first of the at least two predominant wavelengths, which first wavelength specific fluence curve is a function of parameters comprising the depth parameter and a first wavelength specific parameter; determining a value for the first wavelength specific parameter, which value influences the first wavelength specific fluence curve; applying an overall fluence normalization based on both the common fluence curve and the first wavelength specific fluence curve to a first of the at least two optoacoustic images to compute a first fluence compensated image; determining a second wavelength specific fluence curve for a second of the at least two predominant wavelengths, which second wavelength specific fluence curve is a function of parameters comprising the depth parameter and a second wavelength specific parameter; determining a value for the second wavelength specific parameter; and, applying a second overall fluence normalization based on the common fluence curve and the second wavelength specific fluence curve to a second of the at least two optoacoustic images to compute a second fluence compensated image. 17. The method of claim 16, wherein fluence compensation further comprises determining a region of interest of the volume, which region of interest comprises a depth measure, wherein the depth measure is used in forming a dependent parameter in at least one of the steps consisting of: determining of the value for the additional parameter, determining of the value for the first wavelength specific parameter and determining of the value for the second wavelength specific parameter. 18. 
The method of claim 16, wherein at least one of the common fluence curve, the first wavelength specific fluence curve or the second wavelength specific fluence curve is computed by computing a statistical feature whose value varies with depth in an image, wherein the selected fluence curve is based on the value of the statistical feature as a function of depth.
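Claims 16-18 leave the functional form of the fluence curves open: only a common curve, a wavelength-specific curve, and an overall normalization are required, with claim 18 allowing the curve to come from a depth-varying statistic. A minimal sketch, assuming exponential curves in depth; the mu values and the per-depth-median statistic are invented for illustration:

```python
import numpy as np

def fluence_curve(depth_m, mu):
    """Assumed exponential depth dependence; claim 16 only requires a curve
    parameterized by depth plus one additional parameter."""
    return np.exp(-mu * depth_m)

def compensate(image, depth_m, mu_common, mu_wavelength):
    """Overall normalization from the common curve combined with the
    wavelength-specific curve, in the spirit of claim 16."""
    curve = fluence_curve(depth_m, mu_common) * fluence_curve(depth_m, mu_wavelength)
    return image / curve[:, None]               # image rows indexed by depth

def statistical_curve(image):
    """Claim 18 flavor: a statistic whose value varies with depth (here a
    per-depth median) used directly as the fluence curve."""
    med = np.median(np.abs(image), axis=1)
    return med / med.max()

depth = np.linspace(0.0, 0.04, 256)             # 0-4 cm of tissue
rng = np.random.default_rng(1)
raw = (1.0 + rng.random((256, 128))) * fluence_curve(depth, 70.0)[:, None]
flat = compensate(raw, depth, mu_common=50.0, mu_wavelength=20.0)
print(flat.mean(axis=1)[0], flat.mean(axis=1)[-1])  # comparable at top and bottom
```

Splitting the attenuation into a common factor and a per-wavelength factor, as the claim does, keeps the two wavelength images on a shared depth scale so that the later parametric ratio is not dominated by depth-dependent illumination loss.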
2,600