| Unnamed: 0 (int64, 0–350k) | level_0 (int64, 0–351k) | ApplicationNumber (int64, 9.75M–96.1M) | ArtUnit (int64, 1.6k–3.99k) | Abstract (string, lengths 1–8.37k) | Claims (string, lengths 3–292k) | abstract-claims (string, lengths 68–293k) | TechCenter (int64, 1.6k–3.9k) |
|---|---|---|---|---|---|---|---|
8,600 | 8,600 | 14,106,499 | 2,441 | The present invention discloses a method, an apparatus, and a system for acquiring an object. The method includes: receiving a request of a client, where the request includes identifier information of a target object requested by the client; determining, according to the identifier information of the target object, whether the target object has one or more associated objects; when the target object has one or more associated objects, adding association indication information to the target object, and sending the target object to the client; after finishing sending the target object, sending verification information to the client, so that the client verifies the associated objects of the target object according to the verification information; determining a target associated object from the associated objects according to verification result information of the client; and sending the target associated object to the client. | 1. A method for acquiring an object, the method comprising:
receiving a request of a client, wherein the request comprises identifier information of a target object requested by the client; determining, according to the identifier information of the target object, whether the target object has one or more associated objects; when the target object has one or more associated objects, adding association indication information to the target object, and sending the target object to the client, so that the client continues to wait for receiving data after receiving the target object; after finishing sending the target object, sending verification information to the client, so that the client verifies the associated objects of the target object according to the verification information; determining, according to verification result information of the client, a target associated object from the associated objects; and sending the target associated object to the client. 2. The method according to claim 1, wherein determining, according to the identifier information of the target object, whether the target object has one or more associated objects comprises:
transferring the request to an upper layer application, so that the upper layer application determines, according to the identifier information of the target object, whether to invoke a predetermined interface; and determining, according to an invocation situation of the predetermined interface, whether the target object has one or more associated objects. 3. The method according to claim 1, wherein determining, according to the identifier information of the target object, whether the target object has one or more associated objects comprises:
querying, according to the identifier information of the target object, for an association table used to store information of associated objects of the target object, and determining, according to a query result, whether the target object has one or more associated objects. 4. The method according to claim 1, wherein the association indication information is located in a last frame of the target object. 5. The method according to claim 1, wherein the verification information comprises address information and relative expiry time of the associated objects. 6. The method according to claim 1, wherein:
the verification result information comprises address information of an associated object that does not need to be sent; and determining, according to verification result information of the client, a target associated object from the associated objects comprises:
removing, according to the verification result information, the associated object that does not need to be sent to the client from the associated objects, and determining a remaining associated object as a target associated object. 7. The method according to claim 1, wherein:
the verification result information comprises address information of an associated object that needs to be sent; and determining, according to verification result information of the client, a target associated object from the associated objects comprises:
confirming, according to the address information of the associated object that needs to be sent, accuracy of the verification result information, and determining a target associated object from the associated objects according to a confirmation result. 8. A method for acquiring an object, the method comprising:
sending a request to a server, wherein the request comprises identifier information of a requested target object; receiving a target object which is sent by the server after the server receives the request, wherein the target object carries association indication information, wherein the association indication information is used to indicate that the target object has one or more associated objects; receiving verification information of the associated objects which is sent by the server; verifying the associated objects according to the verification information, and sending verification result information to the server, so that the server determines a target associated object according to the verification result information; and receiving the target associated object sent by the server. 9. The method according to claim 8, wherein the verification information of the associated objects comprises address information and relative expiry time of the associated objects; and
verifying the associated objects according to the verification information comprises:
determining, according to the address information of the associated objects, whether a corresponding associated object is temporarily stored locally, and
if a corresponding associated object is temporarily stored locally and a sum of existence time of the corresponding associated object and relative expiry time of the corresponding associated object is greater than a current time, determining that the server does not need to send the corresponding associated object; or
if a corresponding associated object is temporarily stored locally and a sum of existence time of the corresponding associated object and relative expiry time of the corresponding associated object is less than a current time, determining that the server needs to send the corresponding associated object. 10. The method according to claim 8, wherein the verification result information comprises:
address information of a corresponding associated object that does not need to be sent by the server or address information of a corresponding associated object that needs to be sent by the server. 11. A server, comprising:
a receiving unit, configured to receive a request of a client, wherein the request comprises identifier information of a target object requested by the client; a judging unit, configured to determine, according to the identifier information of the target object received by the receiving unit, whether the target object has one or more associated objects; a sending unit, configured to: when the judging unit determines that the target object has one or more associated objects, add association indication information to the target object, and send the target object to the client, so that the client continues to wait for receiving data after receiving the target object; and after finishing sending the target object, send verification information to the client, so that the client verifies the associated objects of the target object according to the verification information; and a selecting unit, configured to determine a target associated object from the associated objects according to verification result information of the client received by the receiving unit; wherein: the sending unit is further configured to send the target associated object to the client according to a processing result of the selecting unit. 12. The server according to claim 11, wherein the judging unit comprises:
a transferring module, configured to transfer the request to an upper layer application, so that the upper layer application determines, according to the identifier information of the target object, whether to invoke a predetermined interface; and a processing module, configured to determine, when the upper layer application invokes the predetermined interface, that the target object has one or more associated objects, and determine, when the upper layer application does not invoke the predetermined interface, that the target object does not have associated objects, wherein, when the upper layer application determines, according to the identifier information of the target object, that there are associated objects, the upper layer application invokes the predetermined interface; and when the upper layer application determines, according to the identifier information of the target object, that there is no associated object, the upper layer application does not invoke the predetermined interface. 13. The server according to claim 11, wherein the judging unit comprises:
a querying module, configured to query, according to the identifier information of the target object, for an association table used to store information of associated objects of the target object; and a processing module, configured to determine, according to a query result of the querying module, whether the target object has one or more associated objects. 14. The server according to claim 11, wherein the selecting unit is configured to:
when the verification result information comprises address information of an associated object that does not need to be sent, remove, according to the verification result information, the associated object that does not need to be sent to the client from the associated objects, and determine a remaining associated object as a target associated object; or, when the verification result information comprises address information of an associated object that needs to be sent, confirm, according to the address information of an associated object that needs to be sent, accuracy of the verification result information, and determine a target associated object from the associated objects according to a confirmation result. 15. A client, comprising:
a sending unit, configured to send a request to a server, wherein the request comprises identifier information of a requested target object; a receiving unit, configured to receive a target object which is sent by the server after the server receives the request, wherein the target object carries association indication information, wherein the association indication information is used to indicate that the target object has one or more associated objects; and receive verification information of the associated objects sent by the server; a processing unit, configured to verify the associated objects according to the verification information received by the receiving unit; wherein:
the sending unit is further configured to send verification result information to the server according to a verification result of the processing unit, so that the server determines a target associated object according to the verification result information, and
the receiving unit is further configured to receive the target associated object sent by the server. 16. The client according to claim 15, wherein the verification information of the associated objects comprises address information and relative expiry time of the associated objects; and
the processing unit comprises:
a detecting module, configured to determine, according to the address information of the associated objects, whether the client temporarily stores a corresponding associated object,
a comparing module, configured to: when the detecting module determines that a corresponding associated object is temporarily stored, compare a sum of existence time of the corresponding associated object and relative expiry time of the corresponding associated object with a current time, and
a determining module, configured to: when the comparing module determines that the sum of the existence time of the corresponding associated object and the relative expiry time of the corresponding associated object is greater than the current time, determine that the server does not need to send the corresponding associated object; and when the comparing module determines that the sum of the existence time of the corresponding associated object and the relative expiry time of the corresponding associated object is less than the current time, determine that the server needs to send the corresponding associated object. 17. The client according to claim 15, wherein the verification result information comprises:
address information of a corresponding associated object that does not need to be sent by the server or address information of a corresponding associated object that needs to be sent by the server. 18. A system for acquiring an object, comprising:
a server and a client, wherein: the server is configured to receive a request of the client, wherein the request comprises identifier information of a target object requested by the client; determine, according to the identifier information of the target object, whether the target object has one or more associated objects; when determining that the target object has one or more associated objects, add association indication information to the target object, and send the target object to the client; and after finishing sending the target object, send verification information to the client; the client is configured to receive the target object and the verification information which are sent by the server, verify the associated objects according to the received verification information, and send verification result information to the server according to a verification result; and the server is further configured to determine a target associated object according to the verification result information and send the target associated object to the client. 19. A server, comprising a processor and a non-transitory processor-readable memory, the processor and the memory being connected through a bus, the memory being configured to store executable program code; the processor being configured to read the executable program code stored in the memory so as to:
receive a request of a client, wherein the request comprises identifier information of a target object requested by the client; determine, according to the identifier information of the target object, whether the target object has one or more associated objects; when the target object has one or more associated objects, add association indication information to the target object, and send the target object to the client, so that the client continues to wait for receiving data after receiving the target object; send verification information to the client, so that the client verifies the associated objects of the target object according to the verification information; determine, according to verification result information of the client, a target associated object from the associated objects; and send the target associated object to the client. 20. A client, comprising a processor and a non-transitory processor-readable memory, the processor and the memory being connected through a bus, the memory being configured to store executable program code; the processor being configured to read the executable program code stored in the memory so as to:
send a request to a server, wherein the request comprises identifier information of a requested target object; receive a target object which is sent by the server after the server receives the request, wherein the target object carries association indication information, wherein the association indication information is used to indicate that the target object has one or more associated objects; receive verification information of the associated objects which is sent by the server; verify the associated objects according to the verification information, and send verification result information to the server, so that the server determines a target associated object according to the verification result information; and receive the target associated object sent by the server. | | 2,400 |
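The cache-freshness rule in claims 9 and 16 above (the server need not resend an associated object while the sum of its existence time and its relative expiry time exceeds the current time) can be sketched as follows. This is a minimal illustration, not the patented implementation; the function name, the cache layout, and the reading of "existence time" as the local storage timestamp are all assumptions.

```python
import time


def needs_resend(cache: dict, address: str, now: float = None) -> bool:
    """Return True if the server must resend the object at `address`.

    `cache` maps an object's address to (stored_at, relative_expiry),
    where stored_at is the timestamp at which the object was stored
    locally (assumed meaning of "existence time") and relative_expiry
    is its TTL in seconds. Per claims 9 and 16, the object is still
    fresh while stored_at + relative_expiry > now.
    """
    now = time.time() if now is None else now
    entry = cache.get(address)
    if entry is None:  # not temporarily stored locally at all
        return True
    stored_at, relative_expiry = entry
    return stored_at + relative_expiry <= now  # expired -> resend


# Usage: an object cached at t=1000 with a 300 s relative expiry
# is fresh at t=1100 but stale at t=1400.
cache = {"/img/logo.png": (1_000.0, 300.0)}
print(needs_resend(cache, "/img/logo.png", now=1_100.0))  # False (fresh)
print(needs_resend(cache, "/img/logo.png", now=1_400.0))  # True (expired)
print(needs_resend(cache, "/missing.png", now=1_100.0))   # True (not cached)
```

The client would report the addresses for which `needs_resend` is True (or False) as the verification result information, and the server would derive the target associated objects from that list.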
8,601 | 8,601 | 14,091,533 | 2,459 | Systems, methods and procedures are described for ascertaining tethering of a device in a communication network. In one implementation, a wireless device provides Internet connectivity to a computing device using a wireline or wireless transmission medium. In one arrangement, the wireless device, in the tethered arrangement with the computing device, may provide receiving and sending of data communications capability to the computing device. In one implementation, the communication entity that is hosting the network may ascertain that tethering is occurring by analyzing communications generated or passing through the computing device. | 1. A computer implemented method, comprising:
receiving resolution information from a wireless device; and ascertaining that the wireless device is performing tethering based on the received resolution information. 2. The method of claim 1, further comprising comparing the received resolution information to a known screen resolution of the wireless device. 3. The method of claim 2, wherein the act of comparing includes retrieving the known screen resolution of the wireless device from a database. 4. The method of claim 1, further comprising transmitting a notification to the wireless device, the notification indicating observance of tethering being performed by the wireless device. 5. A server device comprising:
one or more processors; and computer-executable instructions which, when executed by the one or more processors, perform operations including:
receiving resolution information from a wireless device;
ascertaining that the wireless device is performing tethering based on the received resolution information. 6. The server device according to claim 5, wherein the operations further include comparing the received resolution information to a known screen resolution of the wireless device. 7. The server device according to claim 6, wherein the operations further include retrieving the known screen resolution of the wireless device from a database coupled to the server device. 8. A computer-implemented method comprising:
analyzing a data packet received from a mobile device; detecting an indicator from the analyzing of the data packet; and ascertaining that the mobile device is performing tethering based on the detected indicator. 9. The method to ascertain tethering according to claim 8, wherein the act of ascertaining includes ascertaining that the mobile device is performing tethering based on the detected indicator showing that the data packet is incompatible with the mobile device. 10. The method to ascertain tethering according to claim 8, wherein the indicator indicates operating system information, the operating system information being incompatible with the mobile device. 11. A non-transitory computer-readable medium having stored thereon a plurality of instructions, the plurality of instructions including instructions which, when executed by a processor, cause the processor to perform operations comprising:
analyzing a data packet received from a mobile device; detecting an indicator from the analyzing of the data packet; and ascertaining that the mobile device is performing tethering based on the detected indicator. 10. The non-transitory computer-readable medium of claim 11, wherein the analyzing includes examining a header of the data packet. 11. The non-transitory computer-readable medium of claim 11, wherein the analyzing includes examining the data packet for operating system information. 12. The non-transitory computer-readable medium of claim 11, wherein the act of ascertaining includes ascertaining that the mobile device is performing tethering based on the detected indicator showing that the data packet is incompatible with the mobile device. 13. The non-transitory computer-readable medium of claim 11, wherein the indicator indicates operating system information, the operating system information being incompatible with the mobile device. 14. A computer implemented method, comprising:
receiving an indicator from a wireless device; and ascertaining the wireless device is performing tethering based on the indicator, the indicator being associated with a universal resource locator (URL) that is not expected from the wireless device. 15. The method to ascertain tethering according to claim 14, wherein the act of ascertaining ascertains the wireless device is performing tethering based on a determination that the detected URL was generated by a device other than the wireless device. 16. The method to ascertain tethering according to claim 14, wherein the act of ascertaining includes at least cross-referencing the received URL against one or more stored URL formats that are compatible with the wireless device. 17. The method to ascertain tethering according to claim 14, further comprising receiving at least one packet from the wireless device, the at least one packet having the indicator embedded therein. 18. An apparatus, comprising:
a processor configured to:
receive an indicator from a wireless device; and
ascertain the wireless device is performing tethering based on the indicator, the indicator being associated with a universal resource locator (URL) that is not expected from the wireless device. 19. The apparatus of claim 18, wherein the processor is further configured to cross reference the URL against one or more stored URL formats that are compatible with the wireless device. 20. The apparatus of claim 18, wherein the indicator is embedded in a packet provided by the wireless device. | Systems, methods and procedures are described for ascertaining tethering of a device in a communication network. In one implementation, a wireless device provides Internet connectivity to a computing device using a wireline or wireless transmission medium. In one arrangement, the wireless device, in the tethered arrangement with the computing device, may provide receiving and sending of data communications capability to the computing device. In one implementation, the communication entity that is hosting the network may ascertain that tethering is occurring by analyzing communications generated or passing through the computing device. 1. A computer implemented method, comprising:
receiving resolution information from a wireless device; and ascertaining that the wireless device is performing tethering based on the received resolution information. 2. The method of claim 1, further comprising comparing the received resolution information to a known screen resolution of the wireless device. 3. The method of claim 2, wherein the act of comparing includes retrieving the known screen resolution of the wireless device from a database. 4. The method of claim 1, further comprising transmitting a notification to the wireless device, the notification indicating observance of tethering being performed by the wireless device. 5. A server device comprising:
one or more processors; and computer-executable instructions which, when executed by the one or more processors, perform operations including:
receiving resolution information from a wireless device;
ascertaining that the wireless device is performing tethering based on the received resolution information. 6. The server device according to claim 5, wherein the operations further include comparing the received resolution information to a known screen resolution of the wireless device. 7. The server device according to claim 6, wherein the operations further include retrieving the known screen resolution of the wireless device from a database coupled to the server device. 8. A computer-implemented method comprising:
analyzing a data packet received from a mobile device; detecting an indicator from the analyzing of the data packet; and ascertaining that the mobile device is performing tethering based on the detected indicator. 9. The method to ascertain tethering according to claim 8, wherein the act of ascertaining includes ascertaining that the mobile device is performing tethering based on the detected indicator showing that the data packet is incompatible with the mobile device. 10. The method to ascertain tethering according to claim 8, wherein the indicator indicates operating system information, the operating system information being incompatible with the mobile device. 11. A non-transitory computer-readable medium having stored thereon a plurality of instructions, the plurality of instructions including instructions which, when executed by a processor, cause the processor to perform operations comprising:
analyzing a data packet received from a mobile device; detecting an indicator from the analyzing of the data packet; and ascertaining that the mobile device is performing tethering based on the detected indicator. 10. The non-transitory computer-readable medium of claim 11, wherein the analyzing includes examining a header of the data packet. 11. The non-transitory computer-readable medium of claim 11, wherein the analyzing includes examining the data packet for operating system information. 12. The non-transitory computer-readable medium of claim 11, wherein the act of ascertaining includes ascertaining that the mobile device is performing tethering based on the detected indicator showing that the data packet is incompatible with the mobile device. 13. The non-transitory computer-readable medium of claim 11, wherein the indicator indicates operating system information, the operating system information being incompatible with the mobile device. 14. A computer implemented method, comprising:
receiving an indicator from a wireless device; and ascertaining the wireless device is performing tethering based on the indicator, the indicator being associated with a universal resource locator (URL) that is not expected from the wireless device. 15. The method to ascertain tethering according to claim 14, wherein the act of ascertaining ascertains the wireless device is performing tethering based on a determination that the detected URL was generated by a device other than the wireless device. 16. The method to ascertain tethering according to claim 14, wherein the act of ascertaining includes at least cross-referencing the received URL against one or more stored URL formats that are compatible with the wireless device. 17. The method to ascertain tethering according to claim 14, further comprising receiving at least one packet from the wireless device, the at least one packet having the indicator embedded therein. 18. An apparatus, comprising:
a processor configured to:
receive an indicator from a wireless device; and
ascertain the wireless device is performing tethering based on the indicator, the indicator being associated with a universal resource locator (URL) that is not expected from the wireless device. 19. The apparatus of claim 18, wherein the processor is further configured to cross reference the URL against one or more stored URL formats that are compatible with the wireless device. 20. The apparatus of claim 18, wherein the indicator is embedded in a packet provided by the wireless device. | 2,400 |
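The resolution-comparison detection of claims 1-3 (and the server-device claims 5-7) reduces to a small predicate: retrieve the known screen resolution of the wireless device from a database and compare it with the reported one. The sketch below is illustrative only; the device database, device id, and function name are assumptions, not elements of the patent.

```python
# Hypothetical known-resolution database, keyed by device id (assumed schema).
KNOWN_RESOLUTIONS = {"device-123": (1080, 2340)}

def is_tethering(device_id, reported_resolution):
    """Ascertain tethering when the reported resolution differs from the
    known screen resolution of the wireless device (claims 1-3 sketch)."""
    known = KNOWN_RESOLUTIONS.get(device_id)
    # A mismatch suggests the traffic originated on a tethered computer
    # (e.g. a laptop-shaped resolution) rather than on the phone itself.
    return known is not None and reported_resolution != known

print(is_tethering("device-123", (1920, 1080)))  # laptop-like resolution -> True
print(is_tethering("device-123", (1080, 2340)))  # matches the phone     -> False
```

The same pattern generalizes to the other claimed indicators (packet headers, operating-system strings, unexpected URL formats): each is a lookup of expected values for the device followed by a compatibility comparison.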
8,602 | 8,602 | 14,654,660 | 2,461 | Methods, systems, and devices are described for discontinuous transmission and/or discontinuous reception in time division duplex (TDD) systems that may have data transmission formats dynamically reconfigured. An initial uplink-downlink (UL-DL) configuration for TDD communication between an eNB and user equipment (UE) may be established. This initial UL-DL configuration may be reconfigured to a different UL-DL configuration for one or more UEs in communication with the eNB. When a UE switches to discontinuous reception (DRX) mode, it may monitor control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference UL-DL configuration irrespective of any reconfiguration of UL-DL configuration for a particular UE. In some aspects, a UE that is operating in a UL-DL reconfiguration mode may, upon entering DRX mode, autonomously discontinue operating in the UL-DL re-configuration mode. | 1. A method of wireless communication performed by a user equipment (UE) in time-division duplex (TDD) communication with an eNB, comprising:
determining a first uplink-downlink (UL-DL) configuration for TDD communication with the eNB; and receiving a reconfiguration message to change the first UL-DL configuration to a second UL-DL configuration to be used for TDD communication with the eNB. 2. The method of claim 1, further comprising:
switching to a discontinuous reception (DRX) mode; monitoring control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference TDD UL-DL configuration irrespective of the changed UL-DL configuration; and changing back to the first UL-DL configuration when DRX mode is active, the first UL-DL configuration being an initial UL-DL configuration. 3. The method of claim 2, further comprising:
switching out of the DRX mode; and determining a new UL-DL configuration to be used for communications with the eNB. 4. The method of claim 3, wherein determining the new UL-DL configuration comprises:
receiving an indication to switch to a dynamic reconfiguration mode and a timing to initiate the switch from the eNB; and receiving the new UL-DL configuration for a subsequent radio frame. 5. The method of claim 4, wherein the indication to switch to the dynamic reconfiguration mode and the timing to initiate the switch are received via one or more of Layer 1 (L1), Medium Access Control (MAC), or Radio Resource Control (RRC) signaling. 6. The method of claim 3, wherein the switching out of the DRX mode comprises:
receiving control information from the eNB during a DRX active period. 7. The method of claim 3, wherein the switching out of the DRX mode comprises:
determining that data is to be sent to the eNB; and transmitting an indication to the eNB that data is to be sent from the UE. 8. The method of claim 1, wherein the reference TDD UL-DL configuration is the first UL-DL configuration, the first UL-DL configuration being an initial UL-DL configuration. 9. The method of claim 8, wherein the initial UL-DL configuration is received in a system information block Type1 (SIB1). 10. The method of claim 1, wherein the reference TDD UL-DL configuration is different than the first UL-DL configuration, the first UL-DL configuration being an initial UL-DL configuration. 11. The method of claim 10, wherein the reference TDD UL-DL configuration is received in a Radio Resource Control message to the UE. 12. A wireless communication user equipment (UE) apparatus configured to operate using one of multiple time-division duplex (TDD) uplink-downlink (UL-DL) configurations, comprising:
means for determining a first uplink-downlink (UL-DL) configuration for TDD communication with an eNB; and means for receiving a reconfiguration message to change the first UL-DL configuration to a second UL-DL configuration to be used for TDD communication with the eNB. 13. The apparatus of claim 12, further comprising:
means for switching to a discontinuous reception (DRX) mode; means for monitoring control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference TDD UL-DL configuration irrespective of the changed UL-DL configuration; and means for changing back to the first UL-DL configuration when DRX mode is active, the first UL-DL configuration being an initial UL-DL configuration. 14. The apparatus of claim 13, further comprising:
means for switching out of the DRX mode; and means for determining a new UL-DL configuration to be used for communications with the eNB. 15. The apparatus of claim 14, wherein the means for determining the new UL-DL configuration comprises:
means for receiving an indication to switch to a dynamic reconfiguration mode and a timing to initiate the switch from the eNB; and means for receiving the new UL-DL configuration for a subsequent radio frame. 16. The apparatus of claim 15, wherein the indication to switch to the dynamic reconfiguration mode and the timing to initiate the switch are received via one or more of Layer 1 (L1), Medium Access Control (MAC), or Radio Resource Control (RRC) signaling. 17. The apparatus of claim 14, wherein the means for switching out of the DRX mode comprises:
means for receiving control information from the eNB during a DRX active period. 18. The apparatus of claim 14, wherein the means for switching out of the DRX mode comprises:
means for determining that data is to be sent to the eNB; and means for transmitting an indication to the eNB that data is to be sent from the UE. 19. A wireless communication user equipment (UE) apparatus configured to operate using one of multiple time-division duplex (TDD) uplink-downlink (UL-DL) configurations, comprising:
at least one processor configured to: determine a first uplink-downlink (UL-DL) configuration for TDD communication with an eNB; and receive a reconfiguration message to change the first UL-DL configuration to a second UL-DL configuration to be used for TDD communication with the eNB. 20. The apparatus of claim 19, wherein the at least one processor is further configured to:
switch to a discontinuous reception (DRX) mode; monitor control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference TDD UL-DL configuration irrespective of the changed UL-DL configuration; switch out of the DRX mode; and determine a new UL-DL configuration to be used for communications with the eNB. 21. The apparatus of claim 20, wherein the at least one processor is further configured to:
switch out of the DRX mode; and determine a new UL-DL configuration to be used for communications with the eNB. 22. The apparatus of claim 21, wherein the determine the new UL-DL configuration comprises:
receive an indication to switch to a dynamic reconfiguration mode and a timing to initiate the switch from the eNB; and receive the new UL-DL configuration for a subsequent radio frame. 23. The apparatus of claim 22, wherein the indication to switch to the dynamic reconfiguration mode and the timing to initiate the switch are received via one or more of Layer 1 (L1), Medium Access Control (MAC), or Radio Resource Control (RRC) signaling. 24. The apparatus of claim 21, wherein the switch out of the DRX mode comprises: receive control information from the eNB during a DRX active period. 25. The apparatus of claim 21, wherein the switch out of the DRX mode comprises:
determine that data is to be sent to the eNB; and transmit an indication to the eNB that data is to be sent from the UE. 26. A computer program product, comprising a non-transitory computer-readable medium comprising:
instructions for causing a computer to determine a first uplink-downlink (UL-DL) configuration for TDD communication with the eNB; and instructions for causing the computer to receive a reconfiguration message to change the first UL-DL configuration to a second UL-DL configuration to be used for TDD communication with the eNB. 27. A computer program product of claim 26, wherein the computer-readable medium further comprises:
instructions for causing the computer to switch to a discontinuous reception (DRX) mode; instructions for causing the computer to monitor control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference TDD UL-DL configuration irrespective of the changed UL-DL configuration; and instructions for causing the computer to change back to the first UL-DL configuration when DRX mode is active, the first UL-DL configuration being an initial UL-DL configuration. 28. A computer program product of claim 27, wherein the computer-readable medium further comprises:
instructions for causing the computer to switch out of the DRX mode; and instructions for causing the computer to determine a new UL-DL configuration to be used for communications with the eNB. 29. A computer program product of claim 28, wherein the computer-readable medium further comprises:
instructions for causing the computer to receive an indication to switch to a dynamic reconfiguration mode and a timing to initiate the switch from the eNB; and instructions for causing the computer to receive the new UL-DL configuration for a subsequent radio frame. 30. A computer program product of claim 29, wherein the indication to switch to the dynamic reconfiguration mode and the timing to initiate the switch are received via one or more of Layer 1 (L1), Medium Access Control (MAC), or Radio Resource Control (RRC) signaling. | Methods, systems, and devices are described for discontinuous transmission and/or discontinuous reception in time division duplex (TDD) systems that may have data transmission formats dynamically reconfigured. An initial uplink-downlink (UL-DL) configuration for TDD communication between an eNB and user equipment (UE) may be established. This initial UL-DL configuration may be reconfigured to a different UL-DL configuration for one or more UEs in communication with the eNB. When a UE switches to discontinuous reception (DRX) mode, it may monitor control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference UL-DL configuration irrespective of any reconfiguration of UL-DL configuration for a particular UE. In some aspects, a UE that is operating in a UL-DL reconfiguration mode may, upon entering DRX mode, autonomously discontinue operating in the UL-DL re-configuration mode. 1. A method of wireless communication performed by a user equipment (UE) in time-division duplex (TDD) communication with an eNB, comprising:
determining a first uplink-downlink (UL-DL) configuration for TDD communication with the eNB; and receiving a reconfiguration message to change the first UL-DL configuration to a second UL-DL configuration to be used for TDD communication with the eNB. 2. The method of claim 1, further comprising:
switching to a discontinuous reception (DRX) mode; monitoring control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference TDD UL-DL configuration irrespective of the changed UL-DL configuration; and changing back to the first UL-DL configuration when DRX mode is active, the first UL-DL configuration being an initial UL-DL configuration. 3. The method of claim 2, further comprising:
switching out of the DRX mode; and determining a new UL-DL configuration to be used for communications with the eNB. 4. The method of claim 3, wherein determining the new UL-DL configuration comprises:
receiving an indication to switch to a dynamic reconfiguration mode and a timing to initiate the switch from the eNB; and receiving the new UL-DL configuration for a subsequent radio frame. 5. The method of claim 4, wherein the indication to switch to the dynamic reconfiguration mode and the timing to initiate the switch are received via one or more of Layer 1 (L1), Medium Access Control (MAC), or Radio Resource Control (RRC) signaling. 6. The method of claim 3, wherein the switching out of the DRX mode comprises:
receiving control information from the eNB during a DRX active period. 7. The method of claim 3, wherein the switching out of the DRX mode comprises:
determining that data is to be sent to the eNB; and transmitting an indication to the eNB that data is to be sent from the UE. 8. The method of claim 1, wherein the reference TDD UL-DL configuration is the first UL-DL configuration, the first UL-DL configuration being an initial UL-DL configuration. 9. The method of claim 8, wherein the initial UL-DL configuration is received in a system information block Type1 (SIB1). 10. The method of claim 1, wherein the reference TDD UL-DL configuration is different than the first UL-DL configuration, the first UL-DL configuration being an initial UL-DL configuration. 11. The method of claim 10, wherein the reference TDD UL-DL configuration is received in a Radio Resource Control message to the UE. 12. A wireless communication user equipment (UE) apparatus configured to operate using one of multiple time-division duplex (TDD) uplink-downlink (UL-DL) configurations, comprising:
means for determining a first uplink-downlink (UL-DL) configuration for TDD communication with an eNB; and means for receiving a reconfiguration message to change the first UL-DL configuration to a second UL-DL configuration to be used for TDD communication with the eNB. 13. The apparatus of claim 12, further comprising:
means for switching to a discontinuous reception (DRX) mode; means for monitoring control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference TDD UL-DL configuration irrespective of the changed UL-DL configuration; and means for changing back to the first UL-DL configuration when DRX mode is active, the first UL-DL configuration being an initial UL-DL configuration. 14. The apparatus of claim 13, further comprising:
means for switching out of the DRX mode; and means for determining a new UL-DL configuration to be used for communications with the eNB. 15. The apparatus of claim 14, wherein the means for determining the new UL-DL configuration comprises:
means for receiving an indication to switch to a dynamic reconfiguration mode and a timing to initiate the switch from the eNB; and means for receiving the new UL-DL configuration for a subsequent radio frame. 16. The apparatus of claim 15, wherein the indication to switch to the dynamic reconfiguration mode and the timing to initiate the switch are received via one or more of Layer 1 (L1), Medium Access Control (MAC), or Radio Resource Control (RRC) signaling. 17. The apparatus of claim 14, wherein the means for switching out of the DRX mode comprises:
means for receiving control information from the eNB during a DRX active period. 18. The apparatus of claim 14, wherein the means for switching out of the DRX mode comprises:
means for determining that data is to be sent to the eNB; and means for transmitting an indication to the eNB that data is to be sent from the UE. 19. A wireless communication user equipment (UE) apparatus configured to operate using one of multiple time-division duplex (TDD) uplink-downlink (UL-DL) configurations, comprising:
at least one processor configured to: determine a first uplink-downlink (UL-DL) configuration for TDD communication with an eNB; and receive a reconfiguration message to change the first UL-DL configuration to a second UL-DL configuration to be used for TDD communication with the eNB. 20. The apparatus of claim 19, wherein the at least one processor is further configured to:
switch to a discontinuous reception (DRX) mode; monitor control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference TDD UL-DL configuration irrespective of the changed UL-DL configuration; switch out of the DRX mode; and determine a new UL-DL configuration to be used for communications with the eNB. 21. The apparatus of claim 20, wherein the at least one processor is further configured to:
switch out of the DRX mode; and determine a new UL-DL configuration to be used for communications with the eNB. 22. The apparatus of claim 21, wherein the determine the new UL-DL configuration comprises:
receive an indication to switch to a dynamic reconfiguration mode and a timing to initiate the switch from the eNB; and receive the new UL-DL configuration for a subsequent radio frame. 23. The apparatus of claim 22, wherein the indication to switch to the dynamic reconfiguration mode and the timing to initiate the switch are received via one or more of Layer 1 (L1), Medium Access Control (MAC), or Radio Resource Control (RRC) signaling. 24. The apparatus of claim 21, wherein the switch out of the DRX mode comprises: receive control information from the eNB during a DRX active period. 25. The apparatus of claim 21, wherein the switch out of the DRX mode comprises:
determine that data is to be sent to the eNB; and transmit an indication to the eNB that data is to be sent from the UE. 26. A computer program product, comprising a non-transitory computer-readable medium comprising:
instructions for causing a computer to determine a first uplink-downlink (UL-DL) configuration for TDD communication with the eNB; and instructions for causing the computer to receive a reconfiguration message to change the first UL-DL configuration to a second UL-DL configuration to be used for TDD communication with the eNB. 27. A computer program product of claim 26, wherein the computer-readable medium further comprises:
instructions for causing the computer to switch to a discontinuous reception (DRX) mode; instructions for causing the computer to monitor control information from the eNB during DRX on periods, a frequency of the DRX on periods based on a reference TDD UL-DL configuration irrespective of the changed UL-DL configuration; and instructions for causing the computer to change back to the first UL-DL configuration when DRX mode is active, the first UL-DL configuration being an initial UL-DL configuration. 28. A computer program product of claim 27, wherein the computer-readable medium further comprises:
instructions for causing the computer to switch out of the DRX mode; and instructions for causing the computer to determine a new UL-DL configuration to be used for communications with the eNB. 29. A computer program product of claim 28, wherein the computer-readable medium further comprises:
instructions for causing the computer to receive an indication to switch to a dynamic reconfiguration mode and a timing to initiate the switch from the eNB; and instructions for causing the computer to receive the new UL-DL configuration for a subsequent radio frame. 30. A computer program product of claim 29, wherein the indication to switch to the dynamic reconfiguration mode and the timing to initiate the switch are received via one or more of Layer 1 (L1), Medium Access Control (MAC), or Radio Resource Control (RRC) signaling. | 2,400 |
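The DRX behavior claimed above (monitor control information during DRX on periods according to a *reference* UL-DL configuration, irrespective of any later reconfiguration) can be illustrated against the standard LTE TDD subframe patterns. The patterns below follow 3GPP TS 36.211 (Table 4.2-2); the function name and the idea of returning a list of monitored subframe indices are assumptions made for this sketch, not the patented implementation.

```python
# LTE TDD UL-DL patterns per radio frame (D=downlink, S=special, U=uplink),
# from 3GPP TS 36.211 Table 4.2-2 (configurations 0-2 shown).
TDD_CONFIGS = {
    0: "DSUUUDSUUU",
    1: "DSUUDDSUUD",
    2: "DSUDDDSUDD",
}

def drx_monitored_subframes(reference_config):
    """Subframes a UE in DRX monitors for control information: the
    downlink/special subframes of the reference configuration, kept
    fixed even if the serving UL-DL configuration is reconfigured."""
    pattern = TDD_CONFIGS[reference_config]
    return [i for i, sf in enumerate(pattern) if sf in "DS"]

print(drx_monitored_subframes(1))  # prints [0, 1, 4, 5, 6, 9]
```

Pinning the DRX on-period schedule to a reference configuration is what keeps the eNB and the UE aligned on wake-up times even when the eNB dynamically reconfigures the UL-DL split for other traffic.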
8,603 | 8,603 | 15,543,731 | 2,419 | There are provided mechanisms for providing configuration for uplink transmission to a wireless device. A method is performed by a network node. The method comprises transmitting a message comprising configuration for uplink transmission with short TTI operation. The method comprises transmitting a fast grant comprising scheduling of an uplink short TTI for the wireless device. | 1. A method for providing configuration for uplink transmission to a wireless device, the method being performed by a network node, the method comprising:
transmitting a message comprising configuration for uplink transmission with short TTI operation; and transmitting a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 2. The method according to claim 1, wherein the uplink transmission, to which the configuration and the scheduling relate, is to be performed in a short TTI frequency band. 3. The method according to claim 1, further comprising:
receiving a data transmission from the wireless device on a Physical Uplink Shared Channel, PUSCH, for short TTI operation according to the fast grant. 4. A method for receiving configuration for uplink transmission from a network node, the method being performed by a wireless device, the method comprising:
receiving, from the network node, a message comprising configuration for uplink transmission with short TTI operation; and receiving, from the network node, a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 5. The method according to claim 4, wherein the uplink transmission, to which the configuration and the scheduling relate, is to be performed in a short TTI frequency band. 6. The method according to claim 4, further comprising:
performing a data transmission on a Physical Uplink Shared Channel, PUSCH, for short TTI operation according to the fast grant. 7. The method according to claim 1, wherein the configuration defines positions of reference symbols and data symbols for short TTI operation. 8. The method according to claim 7, wherein the reference symbols are uplink demodulation reference signals, DMRS. 9. The method according to claim 7, wherein according to the configuration, the reference symbols are positioned at symbols 3 and 10 in each subframe. 10. The method according to claim 7, wherein according to the configuration, each TTI for short TTI operation comprises at most one reference symbol. 11. The method according to claim 7, wherein according to the configuration, the reference symbols are positioned either first or last in each TTI for short TTI operation that comprises a reference symbol. 12. The method according to claim 7, wherein according to the configuration, reference symbols for different TTI for short TTI operation are placed on a common symbol. 13. The method according to claim 7, wherein according to the configuration, all TTIs for short TTI operation are slot contained. 14. The method according to claim 1, wherein the configuration is signaled by a short TTI configuration index. 15. The method according to claim 1, wherein the configuration is signaled by a first parameter indicating length of the short TTI and a second parameter indicating positions of reference symbols and data symbols for short TTI operation. 16. The method according to claim 1, wherein the configuration further specifies a downlink TTI frequency band length. 17. The method according to claim 1, wherein the configuration is fixed for each subframe. 18. The method according to claim 1, wherein at least two TTIs for short TTI operation have mutually different lengths within one subframe. 19. 
The method according to claim 1, wherein at least two wireless devices for short TTI operation have mutually different lengths of TTI frequency bands within one subframe. 20. The method according to claim 1, wherein the message is a radio resource control message. 21. The method according to claim 1, wherein the message is a slow grant. 22. The method according to claim 21, wherein the fast grant is transmitted more frequently than the slow grant. 23. The method according to claim 21, wherein the slow grant is transmitted once per subframe or less frequently than once per subframe. 24. The method according to claim 1, wherein the fast grant is transmitted more frequently than once per subframe, such as on a per symbol basis. 25. The method according to claim 1, wherein the fast grant is part of the message comprising said configuration. 26. The method according to claim 1, wherein the fast grant is part of a message different from the message comprising said configuration. 27. (canceled) 28. A network node for providing configuration for uplink transmission to a wireless device, the network node comprising:
processing circuitry; and a computer program product storing instructions that, when executed by the processing circuitry, cause the network node to:
transmit a message comprising configuration for uplink transmission with short TTI operation; and
transmit a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 29. (canceled) 30. (canceled) 31. A wireless device for receiving configuration for uplink transmission from a network node, the wireless device comprising:
processing circuitry; and a computer program product storing instructions that, when executed by the processing circuitry, cause the wireless device to:
receive, from the network node, a message comprising configuration for uplink transmission with short TTI operation; and
receive, from the network node, a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 32. (canceled) 33. (canceled) 34. A computer program for receiving configuration for uplink transmission from a network node, the computer program comprising computer code which, when run on processing circuitry of a wireless device, causes the wireless device to:
receive, from the network node, a message comprising configuration for uplink transmission with short TTI operation; and receive, from the network node, a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 35. (canceled) | There are provided mechanisms for providing configuration for uplink transmission to a wireless device. A method is performed by a network node. The method comprises transmitting a message comprising configuration for uplink transmission with short TTI operation. The method comprises transmitting a fast grant comprising scheduling of an uplink short TTI for the wireless device. 1. A method for providing configuration for uplink transmission to a wireless device, the method being performed by a network node, the method comprising:
transmitting a message comprising configuration for uplink transmission with short TTI operation; and transmitting a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 2. The method according to claim 1, wherein the uplink transmission, to which the configuration and the scheduling relate, is to be performed in a short TTI frequency band. 3. The method according to claim 1, further comprising:
receiving a data transmission from the wireless device on a Physical Uplink Shared Channel, PUSCH, for short TTI operation according to the fast grant. 4. A method for receiving configuration for uplink transmission from a network node, the method being performed by a wireless device, the method comprising:
receiving, from the network node, a message comprising configuration for uplink transmission with short TTI operation; and receiving, from the network node, a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 5. The method according to claim 4, wherein the uplink transmission, to which the configuration and the scheduling relate, is to be performed in a short TTI frequency band. 6. The method according to claim 4, further comprising:
performing a data transmission on a Physical Uplink Shared Channel, PUSCH, for short TTI operation according to the fast grant. 7. The method according to claim 1, wherein the configuration defines positions of reference symbols and data symbols for short TTI operation. 8. The method according to claim 7, wherein the reference symbols are uplink demodulation reference signals, DMRS. 9. The method according to claim 7, wherein, according to the configuration, the reference symbols are positioned at symbols 3 and 10 in each subframe. 10. The method according to claim 7, wherein, according to the configuration, each TTI for short TTI operation comprises at most one reference symbol. 11. The method according to claim 7, wherein, according to the configuration, the reference symbols are positioned either first or last in each TTI for short TTI operation that comprises a reference symbol. 12. The method according to claim 7, wherein, according to the configuration, reference symbols for different TTIs for short TTI operation are placed on a common symbol. 13. The method according to claim 7, wherein, according to the configuration, all TTIs for short TTI operation are slot-contained. 14. The method according to claim 1, wherein the configuration is signaled by a short TTI configuration index. 15. The method according to claim 1, wherein the configuration is signaled by a first parameter indicating length of the short TTI and a second parameter indicating positions of reference symbols and data symbols for short TTI operation. 16. The method according to claim 1, wherein the configuration further specifies a downlink TTI frequency band length. 17. The method according to claim 1, wherein the configuration is fixed for each subframe. 18. The method according to claim 1, wherein at least two TTIs for short TTI operation have mutually different lengths within one subframe. 19. 
The method according to claim 1, wherein at least two wireless devices for short TTI operation have mutually different lengths of TTI frequency bands within one subframe. 20. The method according to claim 1, wherein the message is a radio resource control message. 21. The method according to claim 1, wherein the message is a slow grant. 22. The method according to claim 21, wherein the fast grant is transmitted more frequently than the slow grant. 23. The method according to claim 21, wherein the slow grant is transmitted once per subframe or less frequently than once per subframe. 24. The method according to claim 1, wherein the fast grant is transmitted more frequently than once per subframe, such as on a per symbol basis. 25. The method according to claim 1, wherein the fast grant is part of the message comprising said configuration. 26. The method according to claim 1, wherein the fast grant is part of a message different from the message comprising said configuration. 27. (canceled) 28. A network node for providing configuration for uplink transmission to a wireless device, the network node comprising:
processing circuitry; and a computer program product storing instructions that, when executed by the processing circuitry, cause the network node to:
transmit a message comprising configuration for uplink transmission with short TTI operation; and
transmit a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 29. (canceled) 30. (canceled) 31. A wireless device for receiving configuration for uplink transmission from a network node, the wireless device comprising:
processing circuitry; and a computer program product storing instructions that, when executed by the processing circuitry, cause the wireless device to:
receive, from the network node, a message comprising configuration for uplink transmission with short TTI operation; and
receive, from the network node, a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 32. (canceled) 33. (canceled) 34. A computer program for receiving configuration for uplink transmission from a network node, the computer program comprising computer code which, when run on processing circuitry of a wireless device, causes the wireless device to:
receive, from the network node, a message comprising configuration for uplink transmission with short TTI operation; and receive, from the network node, a fast grant comprising scheduling of an uplink short TTI transmission for the wireless device. 35. (canceled) | 2,400 |
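The configuration constraints recited in claims 9-13 above (DMRS at symbols 3 and 10, at most one reference symbol per short TTI, reference symbols first or last in a TTI, all TTIs slot-contained) can be checked mechanically. The following Python sketch is illustrative only: the 14-symbol subframe, the two-slot split, the example TTI pattern, and the names `DMRS_SYMBOLS` and `valid_short_tti` are assumptions for demonstration, not a layout mandated by the claims.

```python
# Illustrative sketch: symbol indices and the TTI partition below are
# assumed for demonstration; the claims do not mandate this exact layout.

DMRS_SYMBOLS = {3, 10}               # claim 9: reference symbols at symbols 3 and 10
SLOTS = (range(0, 7), range(7, 14))  # two slots of a 14-symbol subframe

def valid_short_tti(tti_symbols):
    """Check one short TTI (a contiguous run of symbol indices) against
    the configuration constraints recited in claims 10, 11 and 13."""
    refs = [s for s in tti_symbols if s in DMRS_SYMBOLS]
    if len(refs) > 1:                # claim 10: at most one reference symbol per TTI
        return False
    if refs and refs[0] not in (tti_symbols[0], tti_symbols[-1]):
        return False                 # claim 11: reference symbol first or last
    # claim 13: the whole TTI must fit inside a single slot
    return any(all(s in slot for s in tti_symbols) for slot in SLOTS)

# A hypothetical pattern mixing 2- and 3-symbol TTIs (claim 18 allows
# mutually different lengths within one subframe):
ttis = [[0, 1], [2, 3], [4, 5, 6], [7, 8], [9, 10], [11, 12, 13]]
assert all(valid_short_tti(t) for t in ttis)
```

A TTI such as `[6, 7]` fails the check because it straddles the slot boundary, matching the slot-containment requirement of claim 13.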
8,604 | 8,604 | 14,423,171 | 2,415 | Disclosed are a network node, a communication device and methods therein, for scheduling of uplink resources in multiple time instances and transmitting in accordance with the scheduled UL resources. An indicator included in an UL grant comprising an UL resource allocation indicates that, for at least one UL time instance of the multiple time instances, at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. | 1. A method performed by a network node for scheduling of uplink, UL, resources in multiple time instances, the method comprising:
including an indicator in an UL grant comprising an UL resource allocation, the indicator indicating that, for at least one UL time instance of the multiple time instances, at least one UL resource is excluded from or added to the UL resource allocation of the UL grant; transmitting the UL grant, including the indicator, to at least one communication device; and receiving, from the at least one communication device, an uplink transmission in accordance with the transmitted UL grant. 2. The method according to claim 1, wherein said indicator indicates that, for the at least one UL time instance, the at least one UL resource corresponding to at least one resource block, RB, is excluded from the resource blocks of the UL resource allocation of the UL grant. 3. The method according to claim 1, wherein said indicator indicates that, for the at least one UL time instance, the at least one UL resource corresponding to at least one resource block, RB, is added to the resource blocks of the UL resource allocation of the UL grant. 4. The method according to claim 1, where the at least one UL time instance is a first UL subframe following the time instance during which the UL grant including the indicator is transmitted. 5. The method according to claim 1, wherein the indicator indicates that, for a set of UL time instances comprising the at least one UL time instance, the at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 6. The method according to claim 1, wherein the indicator is a bit field comprising at least one bit and wherein the bit field points at the at least one UL time instance of the multiple time instances. 7. The method according to claim 1, wherein the indicator indicates a set of said UL resources configured by higher-layer signaling such as a radio resource control message. 8. 
The method according to claim 7, wherein the set of said UL resources is indicated by an index in the UL grant indicating that, for the at least one UL time instance of the multiple time instances, the at least one UL resource of the set of resources is excluded from, or added to, the UL resource allocation of the UL grant. 9. A network node adapted to schedule uplink, UL, resources in multiple time instances, the network node comprising a processor and a memory, the memory comprising instructions executable by the processor whereby the network node is operative to:
include an indicator in an UL grant comprising an UL resource allocation, the indicator indicating that, for at least one UL time instance of the multiple time instances, at least one UL resource is excluded from or added to the UL resource allocation of the UL grant; transmit the UL grant, including the indicator, to at least one communication device; and receive, from the at least one communication device, an uplink transmission in accordance with the transmitted UL grant. 10. The network node according to claim 9, wherein said indicator indicates that, for the at least one UL time instance, the at least one UL resource corresponding to at least one resource block, RB, is excluded from the resource blocks of the UL resource allocation of the UL grant. 11. The network node according to claim 9, wherein said indicator indicates that, for the at least one UL time instance, the at least one UL resource corresponding to at least one resource block, RB, is added to the resource blocks of the UL resource allocation of the UL grant. 12. The network node according to claim 9, where the at least one UL time instance is a first UL subframe following the time instance during which the UL grant including the indicator is transmitted. 13. The network node according to claim 9, wherein the indicator indicates that, for a set of UL time instances comprising the at least one UL time instance, the at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 14. The network node according to claim 9, wherein the indicator is a bit field comprising at least one bit and wherein the bit field points at the at least one UL time instance of the multiple time instances. 15. The network node according to claim 9, wherein the indicator indicates a set of said UL resources configured by higher-layer signaling such as a radio resource control message. 16. 
The network node according to claim 15, wherein the set of said UL resources is indicated by an index in the UL grant indicating that, for the at least one UL time instance of the multiple time instances, the at least one UL resource of the set of resources is excluded from, or added to, the UL resource allocation of the UL grant. 17. A method performed by a communication device, the method comprising:
receiving, from a network node, an uplink, UL, grant comprising an UL resource allocation and further comprising an indicator indicating that, for at least one UL time instance of multiple time instances, at least one UL resource is excluded from or added to the UL resource allocation of the UL grant; decoding the UL grant and the indicator in the UL grant; determining, from the UL grant, the UL resource allocation and determining, from the indicator, the at least one UL resource; and transmitting uplink data to the network node in accordance with the determined UL resource allocation and the determined at least one UL resource. 18. The method according to claim 17, wherein, if the indicator indicates that, for the at least one UL time instance, the at least one UL resource is excluded from the UL resource allocation of the UL grant, the method comprises refraining from transmitting uplink data to the network node on the excluded UL resource. 19. The method according to claim 17, wherein, if the indicator indicates that, for the at least one UL time instance, the at least one UL resource is added to the UL resource allocation of the UL grant, the method comprises transmitting uplink data to the network node on the UL resources of the resource allocation and also on the added UL resource. 20. The method according to claim 17, where the at least one UL time instance is a first UL subframe following the time instance during which the UL grant including the indicator is received. 21. The method according to claim 17, wherein the indicator indicates that, for a set of UL time instances comprising the at least one UL time instance, the at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 22. The method according to claim 17, wherein the indicator is a bit field comprising at least one bit and wherein the bit field points at the at least one UL time instance of the multiple time instances. 23. 
The method according to claim 17, wherein the indicator indicates a set of said UL resources configured by higher-layer signaling such as a radio resource control message received from the network node. 24. The method according to claim 23, wherein the set of said UL resources is indicated by an index in the UL grant indicating that, for the at least one UL time instance of the multiple time instances, the at least one UL resource of the set of resources is excluded from, or added to, the UL resource allocation of the UL grant. 25. A communication device comprising a processor and a memory, the memory comprising instructions executable by the processor whereby the communication device is operative to:
receive, from a network node, an uplink, UL, grant comprising an UL resource allocation and further comprising an indicator indicating that, for at least one UL time instance of multiple time instances, at least one UL resource is excluded from or added to the UL resource allocation of the UL grant; decode the UL grant and the indicator in the UL grant; determine, from the UL grant, the UL resource allocation and determine, from the indicator, the at least one UL resource; and transmit uplink data to the network node in accordance with the determined UL resource allocation and the determined at least one UL resource. 26. The communication device according to claim 25, wherein, if the indicator indicates that, for the at least one UL time instance, the at least one UL resource is excluded from the UL resource allocation of the UL grant, the communication device is operative to refrain from transmitting uplink data to the network node using the excluded UL resource. 27. The communication device according to claim 25, wherein, if the indicator indicates that, for the at least one UL time instance, the at least one UL resource is added to the UL resource allocation of the UL grant, the communication device is operative to transmit uplink data to the network node using the UL resources of the resource allocation and also using the added UL resource. 28. The communication device according to claim 25, where the at least one UL time instance is a first UL subframe following the time instance during which the UL grant including the indicator is received. 29. The communication device according to claim 25, wherein the indicator indicates that, for a set of UL time instances comprising the at least one UL time instance, the at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 30. 
The communication device according to claim 25, wherein the indicator is a bit field comprising at least one bit and wherein the bit field points at the at least one UL time instance of the multiple time instances. 31. The communication device according to claim 25, wherein the indicator indicates a set of said UL resources configured by higher-layer signaling such as a radio resource control message received from the network node. 32. The communication device according to claim 31, wherein the set of said UL resources is indicated by an index in the UL grant indicating that, for the at least one UL time instance of the multiple time instances, the at least one UL resource of the set of resources is excluded from, or added to, the UL resource allocation of the UL grant. 33. (canceled) | Disclosed are a network node, a communication device and methods therein, for scheduling of uplink resources in multiple time instances and transmitting in accordance with the scheduled UL resources. An indicator included in an UL grant comprising an UL resource allocation indicates that, for at least one UL time instance of the multiple time instances, at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 1. A method performed by a network node for scheduling of uplink, UL, resources in multiple time instances, the method comprising:
including an indicator in an UL grant comprising an UL resource allocation, the indicator indicating that, for at least one UL time instance of the multiple time instances, at least one UL resource is excluded from or added to the UL resource allocation of the UL grant; transmitting the UL grant, including the indicator, to at least one communication device; and receiving, from the at least one communication device, an uplink transmission in accordance with the transmitted UL grant. 2. The method according to claim 1, wherein said indicator indicates that, for the at least one UL time instance, the at least one UL resource corresponding to at least one resource block, RB, is excluded from the resource blocks of the UL resource allocation of the UL grant. 3. The method according to claim 1, wherein said indicator indicates that, for the at least one UL time instance, the at least one UL resource corresponding to at least one resource block, RB, is added to the resource blocks of the UL resource allocation of the UL grant. 4. The method according to claim 1, where the at least one UL time instance is a first UL subframe following the time instance during which the UL grant including the indicator is transmitted. 5. The method according to claim 1, wherein the indicator indicates that, for a set of UL time instances comprising the at least one UL time instance, the at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 6. The method according to claim 1, wherein the indicator is a bit field comprising at least one bit and wherein the bit field points at the at least one UL time instance of the multiple time instances. 7. The method according to claim 1, wherein the indicator indicates a set of said UL resources configured by higher-layer signaling such as a radio resource control message. 8. 
The method according to claim 7, wherein the set of said UL resources is indicated by an index in the UL grant indicating that, for the at least one UL time instance of the multiple time instances, the at least one UL resource of the set of resources is excluded from, or added to, the UL resource allocation of the UL grant. 9. A network node adapted to schedule uplink, UL, resources in multiple time instances, the network node comprising a processor and a memory, the memory comprising instructions executable by the processor whereby the network node is operative to:
include an indicator in an UL grant comprising an UL resource allocation, the indicator indicating that, for at least one UL time instance of the multiple time instances, at least one UL resource is excluded from or added to the UL resource allocation of the UL grant; transmit the UL grant, including the indicator, to at least one communication device; and receive, from the at least one communication device, an uplink transmission in accordance with the transmitted UL grant. 10. The network node according to claim 9, wherein said indicator indicates that, for the at least one UL time instance, the at least one UL resource corresponding to at least one resource block, RB, is excluded from the resource blocks of the UL resource allocation of the UL grant. 11. The network node according to claim 9, wherein said indicator indicates that, for the at least one UL time instance, the at least one UL resource corresponding to at least one resource block, RB, is added to the resource blocks of the UL resource allocation of the UL grant. 12. The network node according to claim 9, where the at least one UL time instance is a first UL subframe following the time instance during which the UL grant including the indicator is transmitted. 13. The network node according to claim 9, wherein the indicator indicates that, for a set of UL time instances comprising the at least one UL time instance, the at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 14. The network node according to claim 9, wherein the indicator is a bit field comprising at least one bit and wherein the bit field points at the at least one UL time instance of the multiple time instances. 15. The network node according to claim 9, wherein the indicator indicates a set of said UL resources configured by higher-layer signaling such as a radio resource control message. 16. 
The network node according to claim 15, wherein the set of said UL resources is indicated by an index in the UL grant indicating that, for the at least one UL time instance of the multiple time instances, the at least one UL resource of the set of resources is excluded from, or added to, the UL resource allocation of the UL grant. 17. A method performed by a communication device, the method comprising:
receiving, from a network node, an uplink, UL, grant comprising an UL resource allocation and further comprising an indicator indicating that, for at least one UL time instance of multiple time instances, at least one UL resource is excluded from or added to the UL resource allocation of the UL grant; decoding the UL grant and the indicator in the UL grant; determining, from the UL grant, the UL resource allocation and determining, from the indicator, the at least one UL resource; and transmitting uplink data to the network node in accordance with the determined UL resource allocation and the determined at least one UL resource. 18. The method according to claim 17, wherein, if the indicator indicates that, for the at least one UL time instance, the at least one UL resource is excluded from the UL resource allocation of the UL grant, the method comprises refraining from transmitting uplink data to the network node on the excluded UL resource. 19. The method according to claim 17, wherein, if the indicator indicates that, for the at least one UL time instance, the at least one UL resource is added to the UL resource allocation of the UL grant, the method comprises transmitting uplink data to the network node on the UL resources of the resource allocation and also on the added UL resource. 20. The method according to claim 17, where the at least one UL time instance is a first UL subframe following the time instance during which the UL grant including the indicator is received. 21. The method according to claim 17, wherein the indicator indicates that, for a set of UL time instances comprising the at least one UL time instance, the at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 22. The method according to claim 17, wherein the indicator is a bit field comprising at least one bit and wherein the bit field points at the at least one UL time instance of the multiple time instances. 23. 
The method according to claim 17, wherein the indicator indicates a set of said UL resources configured by higher-layer signaling such as a radio resource control message received from the network node. 24. The method according to claim 23, wherein the set of said UL resources is indicated by an index in the UL grant indicating that, for the at least one UL time instance of the multiple time instances, the at least one UL resource of the set of resources is excluded from, or added to, the UL resource allocation of the UL grant. 25. A communication device comprising a processor and a memory, the memory comprising instructions executable by the processor whereby the communication device is operative to:
receive, from a network node, an uplink, UL, grant comprising an UL resource allocation and further comprising an indicator indicating that, for at least one UL time instance of multiple time instances, at least one UL resource is excluded from or added to the UL resource allocation of the UL grant; decode the UL grant and the indicator in the UL grant; determine, from the UL grant, the UL resource allocation and determine, from the indicator, the at least one UL resource; and transmit uplink data to the network node in accordance with the determined UL resource allocation and the determined at least one UL resource. 26. The communication device according to claim 25, wherein, if the indicator indicates that, for the at least one UL time instance, the at least one UL resource is excluded from the UL resource allocation of the UL grant, the communication device is operative to refrain from transmitting uplink data to the network node using the excluded UL resource. 27. The communication device according to claim 25, wherein, if the indicator indicates that, for the at least one UL time instance, the at least one UL resource is added to the UL resource allocation of the UL grant, the communication device is operative to transmit uplink data to the network node using the UL resources of the resource allocation and also using the added UL resource. 28. The communication device according to claim 25, where the at least one UL time instance is a first UL subframe following the time instance during which the UL grant including the indicator is received. 29. The communication device according to claim 25, wherein the indicator indicates that, for a set of UL time instances comprising the at least one UL time instance, the at least one UL resource is excluded from, or added to, the UL resource allocation of the UL grant. 30. 
The communication device according to claim 25 wherein the indicator is a bit field comprising at least one bit and wherein the bit field points at the at least one UL time instance of the multiple time instances. 31. The communication device according to claim 25 wherein the indicator indicates a set of said UL resources configured by higher-layer signaling such as a radio resource control message received from the network node. 32. The communication device according to claim 31 wherein the set of said UL resources is indicated by an index in the UL grant indicating that for, the at least one UL time instance of the multiple time instances, the at least one UL resource of the set of resources, is excluded from, or added to, the UL resource allocation of the UL grant. 33. (canceled) | 2,400 |
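The claims above describe an indicator carried in an UL grant that excludes at least one UL resource from, or adds it to, the granted allocation for particular UL time instances. The Python sketch below assumes a simplified data model (resource-block allocations as sets of indices, the indicator as a per-time-instance mapping); `apply_indicator` and its structures are illustrative names for this sketch, not part of the claimed signaling format.

```python
# Minimal sketch under an assumed data model: the granted allocation is a
# set of resource-block (RB) indices, and the indicator maps a UL time
# instance to an ("exclude" | "add") action over a set of RBs.

def apply_indicator(granted_rbs, indicator):
    """Return the effective RB allocation per UL time instance.

    Mirrors the claim language: for at least one UL time instance, at
    least one UL resource is excluded from, or added to, the UL
    resource allocation of the UL grant."""
    effective = {}
    for t, (action, rbs) in indicator.items():
        if action == "exclude":
            effective[t] = granted_rbs - rbs   # refrain from using these RBs
        elif action == "add":
            effective[t] = granted_rbs | rbs   # transmit on these extra RBs
        else:
            raise ValueError(f"unknown action {action!r}")
    return effective

grant = {10, 11, 12, 13}                        # RBs allocated by the UL grant
ind = {0: ("exclude", {12}), 1: ("add", {14})}  # e.g. subframe 0 loses RB 12
out = apply_indicator(grant, ind)
assert out[0] == {10, 11, 13}
assert out[1] == {10, 11, 12, 13, 14}
```

In the claims the indicator may be as compact as a bit field pointing at a time instance, with the affected resource set preconfigured by higher-layer signaling; the explicit mapping here just makes the exclude/add semantics visible.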
8,605 | 8,605 | 14,921,378 | 2,483 | Inter frame candidate selection may include identifying a current block from a current input frame from an input video stream, and generating an encoded block by encoding the current block, wherein encoding the current block includes determining an inter-coding candidate motion vector. Determining the inter-coding candidate motion vector may include identifying a plurality of motion vectors, wherein the plurality of motion vectors includes a context motion vector identified from a block neighboring the current block in the current input frame, a zero valued motion vector, and an estimated motion vector based on the current block and a reference frame, determining a plurality of cost values by determining a cost value for each respective motion vector from the plurality of motion vectors, and identifying a motion vector from the plurality of motion vectors having a minimal cost value as the inter-coding candidate motion vector. | 1. A method, comprising:
identifying a current block from a current input frame from an input video stream; generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded block by encoding the current block, wherein encoding the current block includes determining an inter-coding candidate motion vector, wherein determining the inter-coding candidate motion vector includes:
identifying a plurality of motion vectors, wherein the plurality of motion vectors includes a context motion vector identified from a block neighboring the current block in the current input frame, a zero valued motion vector, and an estimated motion vector based on the current block and a reference frame,
determining a plurality of cost values by determining a cost value for each respective motion vector from the plurality of motion vectors, wherein determining the cost value for a motion vector from the plurality of motion vectors includes:
determining a distortion measurement for encoding the current block using the motion vector;
determining an estimated encoding cost for encoding the current block using the motion vector;
identifying a weighting value; and
determining the cost value as a sum of the distortion measurement and a product of the weighting value and the estimated encoding cost, and
identifying a motion vector from the plurality of motion vectors having a minimal cost value as the inter-coding candidate motion vector;
including the encoded block in an output bitstream; and storing or transmitting the output bitstream. 2. The method of claim 1, wherein identifying the plurality of motion vectors includes generating the estimated motion vector, wherein generating the estimated motion vector includes:
generating a first candidate estimated motion vector based on the current block; generating a second candidate estimated motion vector based on a first sub-block of the current block; generating a third candidate estimated motion vector based on a second sub-block of the current block; determining a first motion estimation cost value as a sum of a first penalty value and sum of absolute differences for the first candidate estimated motion vector; determining a second motion estimation cost value as a sum of a second penalty value and sum of absolute differences for the second candidate estimated motion vector and the third candidate estimated motion vector; selecting the first candidate estimated motion vector as the estimated motion vector on a condition that the second motion estimation cost value exceeds the first motion estimation cost value; and selecting the second candidate estimated motion vector as the estimated motion vector on a condition that the first motion estimation cost value exceeds the second motion estimation cost value. 3. The method of claim 1, wherein identifying the motion vector from the plurality of motion vectors having the minimal cost value as the inter-coding candidate motion vector includes:
selecting the context motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the zero valued motion vector exceeds the cost value associated with the context motion vector and the cost value associated with the estimated motion vector exceeds the cost value associated with the context motion vector; selecting the zero valued motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the context motion vector exceeds the cost value associated with the zero valued motion vector and the cost value associated with the estimated motion vector exceeds the cost value associated with the zero valued motion vector; and selecting the estimated motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the zero valued motion vector exceeds the cost value associated with the estimated motion vector and the cost value associated with the context motion vector exceeds the cost value associated with the estimated motion vector. 4. The method of claim 1, wherein determining the cost value for the motion vector from the plurality of motion vectors includes:
determining a predicted block based on the current block and the motion vector; determining a residual block as a difference between the current block and the predicted block; generating a transform block based on the residual block; and generating a quantized block based on the transform block. 5. The method of claim 4, wherein determining the distortion measurement for encoding the current block using the motion vector includes:
determining an inverse-quantized block based on the quantized block; determining a sum of squared errors between the transform block and the inverse-quantized block as the distortion measurement. 6. The method of claim 4, wherein determining the estimated encoding cost for encoding the current block using the motion vector includes:
identifying a defined scan order for the quantized block, wherein the quantized block includes a plurality of quantized transform coefficients; determining a plurality of estimated coefficient encoding costs by determining an estimated coefficient encoding cost for each respective quantized transform coefficient from the plurality of quantized transform coefficients; and determining a sum of the plurality of estimated coefficient encoding costs as the estimated encoding cost. 7. The method of claim 1, wherein encoding the current block includes determining an intra-coding candidate motion vector. 8. The method of claim 7, wherein encoding the current block includes:
determining a first sum of absolute differences for the intra-coding candidate motion vector; determining a second sum of absolute differences for the inter-coding candidate motion vector; encoding the current block using the intra-coding candidate motion vector on a condition that the second sum of absolute differences exceeds the first sum of absolute differences; and encoding the current block using the inter-coding candidate motion vector on a condition that the first sum of absolute differences exceeds the second sum of absolute differences. 9. The method of claim 1, wherein determining the plurality of cost values includes determining each cost value from the plurality of cost values in parallel. 10. The method of claim 1, wherein identifying the context motion vector includes identifying the context motion vector such that the context motion vector is a non-zero motion vector. 11. The method of claim 1, wherein identifying the context motion vector includes identifying the context motion vector from the block neighboring the current block in the current input frame on a condition that the block neighboring the current block in the current input frame is a previously coded block encoded using the context motion vector. 12. The method of claim 1, wherein determining the inter-coding candidate motion vector includes:
identifying a second context motion vector from a second block neighboring the current block in the current input frame. 13. The method of claim 12, wherein identifying the second context motion vector includes identifying the second context motion vector such that the second context motion vector is a non-zero motion vector. 14. The method of claim 13, wherein identifying the second context motion vector includes identifying the second context motion vector from the second block neighboring the current block in the current input frame on a condition that the second block neighboring the current block in the current input frame is a previously coded block encoded using the second context motion vector. 15. A method, comprising:
identifying a current block from a current input frame from an input video stream; generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded block by encoding the current block, wherein encoding the current block includes determining an inter-coding candidate motion vector, wherein determining the inter-coding candidate motion vector includes:
identifying a plurality of motion vectors, wherein the plurality of motion vectors includes a context motion vector identified from a block neighboring the current block in the current input frame, a zero valued motion vector, and an estimated motion vector based on the current block and a reference frame,
determining a plurality of cost values by determining a cost value for each respective motion vector from the plurality of motion vectors, wherein determining the cost value for a motion vector from the plurality of motion vectors includes:
determining a predicted block based on the current block and the motion vector;
determining a residual block as a difference between the current block and the predicted block;
generating a transform block based on the residual block;
generating a quantized block based on the transform block;
determining an inverse-quantized block based on the quantized block;
determining a sum of squared errors between the transform block and the inverse-quantized block as a distortion measurement for encoding the current block using the motion vector;
identifying a defined scan order for the quantized block, wherein the quantized block includes a plurality of quantized transform coefficients;
determining a plurality of estimated coefficient encoding costs by determining an estimated coefficient encoding cost for each respective quantized transform coefficient from the plurality of quantized transform coefficients;
determining a sum of the plurality of estimated coefficient encoding costs as an estimated encoding cost for encoding the current block using the motion vector;
identifying a weighting value; and
determining the cost value as a sum of the distortion measurement and a product of the weighting value and the estimated encoding cost, and
identifying a motion vector from the plurality of motion vectors having a minimal cost value as the inter-coding candidate motion vector;
including the encoded block in an output bitstream; and storing or transmitting the output bitstream. 16. The method of claim 15, wherein identifying the plurality of motion vectors includes generating the estimated motion vector, wherein generating the estimated motion vector includes:
generating a first candidate estimated motion vector based on the current block; generating a second candidate estimated motion vector based on a first sub-block of the current block; generating a third candidate estimated motion vector based on a second sub-block of the current block; determining a first motion estimation cost value as a sum of a first penalty value and sum of absolute differences for the first candidate estimated motion vector; determining a second motion estimation cost value as a sum of a second penalty value and sum of absolute differences for the second candidate estimated motion vector and the third candidate estimated motion vector; selecting the first candidate estimated motion vector as the estimated motion vector on a condition that the second motion estimation cost value exceeds the first motion estimation cost value; and selecting the second candidate estimated motion vector as the estimated motion vector on a condition that the first motion estimation cost value exceeds the second motion estimation cost value. 17. The method of claim 15, wherein identifying the motion vector from the plurality of motion vectors having the minimal cost value as the inter-coding candidate motion vector includes:
selecting the context motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the zero valued motion vector exceeds the cost value associated with the context motion vector and the cost value associated with the estimated motion vector exceeds the cost value associated with the context motion vector; selecting the zero valued motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the context motion vector exceeds the cost value associated with the zero valued motion vector and the cost value associated with the estimated motion vector exceeds the cost value associated with the zero valued motion vector; and selecting the estimated motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the zero valued motion vector exceeds the cost value associated with the estimated motion vector and the cost value associated with the context motion vector exceeds the cost value associated with the estimated motion vector. 18. The method of claim 15, wherein encoding the current block includes:
determining an intra-coding candidate motion vector; determining a first sum of absolute differences for the intra-coding candidate motion vector; determining a second sum of absolute differences for the inter-coding candidate motion vector; encoding the current block using the intra-coding candidate motion vector on a condition that the second sum of absolute differences exceeds the first sum of absolute differences; and encoding the current block using the inter-coding candidate motion vector on a condition that the first sum of absolute differences exceeds the second sum of absolute differences. 19. The method of claim 15, wherein determining the plurality of cost values includes determining each cost value from the plurality of cost values in parallel. 20. A method, comprising:
identifying a current block from a current input frame from an input video stream; generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded block by encoding the current block, wherein encoding the current block includes:
determining an inter-coding candidate motion vector, wherein determining the inter-coding candidate motion vector includes:
identifying a plurality of motion vectors, wherein the plurality of motion vectors includes a context motion vector identified from a block neighboring the current block in the current input frame, a zero valued motion vector, and an estimated motion vector based on the current block and a reference frame;
determining a plurality of cost values by determining a cost value for each respective motion vector from the plurality of motion vectors, wherein determining the cost value for a motion vector from the plurality of motion vectors includes:
determining a distortion measurement for encoding the current block using the motion vector,
determining an estimated encoding cost for encoding the current block using the motion vector,
identifying a weighting value, and
determining the cost value as a sum of the distortion measurement and a product of the weighting value and the estimated encoding cost; and
identifying a motion vector from the plurality of motion vectors having a minimal cost value as the inter-coding candidate motion vector,
determining an intra-coding candidate motion vector,
determining a first sum of absolute differences for the intra-coding candidate motion vector,
determining a second sum of absolute differences for the inter-coding candidate motion vector,
encoding the current block using the intra-coding candidate motion vector on a condition that the second sum of absolute differences exceeds the first sum of absolute differences, and
encoding the current block using the inter-coding candidate motion vector on a condition that the first sum of absolute differences exceeds the second sum of absolute differences;
including the encoded block in an output bitstream; and storing or transmitting the output bitstream. | Inter frame candidate selection may include identifying a current block from a current input frame from an input video stream, and generating an encoded block by encoding the current block, wherein encoding the current block includes determining an inter-coding candidate motion vector. Determining the inter-coding candidate motion vector may include identifying a plurality of motion vectors, wherein the plurality of motion vectors includes a context motion vector identified from a block neighboring the current block in the current input frame, a zero valued motion vector, and an estimated motion vector based on the current block and a reference frame, determining a plurality of cost values by determining a cost value for each respective motion vector from the plurality of motion vectors, and identifying a motion vector from the plurality of motion vectors having a minimal cost value as the inter-coding candidate motion vector. 1. A method, comprising:
identifying a current block from a current input frame from an input video stream; generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded block by encoding the current block, wherein encoding the current block includes determining an inter-coding candidate motion vector, wherein determining the inter-coding candidate motion vector includes:
identifying a plurality of motion vectors, wherein the plurality of motion vectors includes a context motion vector identified from a block neighboring the current block in the current input frame, a zero valued motion vector, and an estimated motion vector based on the current block and a reference frame,
determining a plurality of cost values by determining a cost value for each respective motion vector from the plurality of motion vectors, wherein determining the cost value for a motion vector from the plurality of motion vectors includes:
determining a distortion measurement for encoding the current block using the motion vector;
determining an estimated encoding cost for encoding the current block using the motion vector;
identifying a weighting value; and
determining the cost value as a sum of the distortion measurement and a product of the weighting value and the estimated encoding cost, and
identifying a motion vector from the plurality of motion vectors having a minimal cost value as the inter-coding candidate motion vector;
including the encoded block in an output bitstream; and storing or transmitting the output bitstream. 2. The method of claim 1, wherein identifying the plurality of motion vectors includes generating the estimated motion vector, wherein generating the estimated motion vector includes:
generating a first candidate estimated motion vector based on the current block; generating a second candidate estimated motion vector based on a first sub-block of the current block; generating a third candidate estimated motion vector based on a second sub-block of the current block; determining a first motion estimation cost value as a sum of a first penalty value and sum of absolute differences for the first candidate estimated motion vector; determining a second motion estimation cost value as a sum of a second penalty value and sum of absolute differences for the second candidate estimated motion vector and the third candidate estimated motion vector; selecting the first candidate estimated motion vector as the estimated motion vector on a condition that the second motion estimation cost value exceeds the first motion estimation cost value; and selecting the second candidate estimated motion vector as the estimated motion vector on a condition that the first motion estimation cost value exceeds the second motion estimation cost value. 3. The method of claim 1, wherein identifying the motion vector from the plurality of motion vectors having the minimal cost value as the inter-coding candidate motion vector includes:
selecting the context motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the zero valued motion vector exceeds the cost value associated with the context motion vector and the cost value associated with the estimated motion vector exceeds the cost value associated with the context motion vector; selecting the zero valued motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the context motion vector exceeds the cost value associated with the zero valued motion vector and the cost value associated with the estimated motion vector exceeds the cost value associated with the zero valued motion vector; and selecting the estimated motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the zero valued motion vector exceeds the cost value associated with the estimated motion vector and the cost value associated with the context motion vector exceeds the cost value associated with the estimated motion vector. 4. The method of claim 1, wherein determining the cost value for the motion vector from the plurality of motion vectors includes:
determining a predicted block based on the current block and the motion vector; determining a residual block as a difference between the current block and the predicted block; generating a transform block based on the residual block; and generating a quantized block based on the transform block. 5. The method of claim 4, wherein determining the distortion measurement for encoding the current block using the motion vector includes:
determining an inverse-quantized block based on the quantized block; determining a sum of squared errors between the transform block and the inverse-quantized block as the distortion measurement. 6. The method of claim 4, wherein determining the estimated encoding cost for encoding the current block using the motion vector includes:
identifying a defined scan order for the quantized block, wherein the quantized block includes a plurality of quantized transform coefficients; determining a plurality of estimated coefficient encoding costs by determining an estimated coefficient encoding cost for each respective quantized transform coefficient from the plurality of quantized transform coefficients; and determining a sum of the plurality of estimated coefficient encoding costs as the estimated encoding cost. 7. The method of claim 1, wherein encoding the current block includes determining an intra-coding candidate motion vector. 8. The method of claim 7, wherein encoding the current block includes:
determining a first sum of absolute differences for the intra-coding candidate motion vector; determining a second sum of absolute differences for the inter-coding candidate motion vector; encoding the current block using the intra-coding candidate motion vector on a condition that the second sum of absolute differences exceeds the first sum of absolute differences; and encoding the current block using the inter-coding candidate motion vector on a condition that the first sum of absolute differences exceeds the second sum of absolute differences. 9. The method of claim 1, wherein determining the plurality of cost values includes determining each cost value from the plurality of cost values in parallel. 10. The method of claim 1, wherein identifying the context motion vector includes identifying the context motion vector such that the context motion vector is a non-zero motion vector. 11. The method of claim 1, wherein identifying the context motion vector includes identifying the context motion vector from the block neighboring the current block in the current input frame on a condition that the block neighboring the current block in the current input frame is a previously coded block encoded using the context motion vector. 12. The method of claim 1, wherein determining the inter-coding candidate motion vector includes:
identifying a second context motion vector from a second block neighboring the current block in the current input frame. 13. The method of claim 12, wherein identifying the second context motion vector includes identifying the second context motion vector such that the second context motion vector is a non-zero motion vector. 14. The method of claim 13, wherein identifying the second context motion vector includes identifying the second context motion vector from the second block neighboring the current block in the current input frame on a condition that the second block neighboring the current block in the current input frame is a previously coded block encoded using the second context motion vector. 15. A method, comprising:
identifying a current block from a current input frame from an input video stream; generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded block by encoding the current block, wherein encoding the current block includes determining an inter-coding candidate motion vector, wherein determining the inter-coding candidate motion vector includes:
identifying a plurality of motion vectors, wherein the plurality of motion vectors includes a context motion vector identified from a block neighboring the current block in the current input frame, a zero valued motion vector, and an estimated motion vector based on the current block and a reference frame,
determining a plurality of cost values by determining a cost value for each respective motion vector from the plurality of motion vectors, wherein determining the cost value for a motion vector from the plurality of motion vectors includes:
determining a predicted block based on the current block and the motion vector;
determining a residual block as a difference between the current block and the predicted block;
generating a transform block based on the residual block;
generating a quantized block based on the transform block;
determining an inverse-quantized block based on the quantized block;
determining a sum of squared errors between the transform block and the inverse-quantized block as a distortion measurement for encoding the current block using the motion vector;
identifying a defined scan order for the quantized block, wherein the quantized block includes a plurality of quantized transform coefficients;
determining a plurality of estimated coefficient encoding costs by determining an estimated coefficient encoding cost for each respective quantized transform coefficient from the plurality of quantized transform coefficients;
determining a sum of the plurality of estimated coefficient encoding costs as an estimated encoding cost for encoding the current block using the motion vector;
identifying a weighting value; and
determining the cost value as a sum of the distortion measurement and a product of the weighting value and the estimated encoding cost, and
identifying a motion vector from the plurality of motion vectors having a minimal cost value as the inter-coding candidate motion vector;
including the encoded block in an output bitstream; and storing or transmitting the output bitstream. 16. The method of claim 15, wherein identifying the plurality of motion vectors includes generating the estimated motion vector, wherein generating the estimated motion vector includes:
generating a first candidate estimated motion vector based on the current block; generating a second candidate estimated motion vector based on a first sub-block of the current block; generating a third candidate estimated motion vector based on a second sub-block of the current block; determining a first motion estimation cost value as a sum of a first penalty value and sum of absolute differences for the first candidate estimated motion vector; determining a second motion estimation cost value as a sum of a second penalty value and sum of absolute differences for the second candidate estimated motion vector and the third candidate estimated motion vector; selecting the first candidate estimated motion vector as the estimated motion vector on a condition that the second motion estimation cost value exceeds the first motion estimation cost value; and selecting the second candidate estimated motion vector as the estimated motion vector on a condition that the first motion estimation cost value exceeds the second motion estimation cost value. 17. The method of claim 15, wherein identifying the motion vector from the plurality of motion vectors having the minimal cost value as the inter-coding candidate motion vector includes:
selecting the context motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the zero valued motion vector exceeds the cost value associated with the context motion vector and the cost value associated with the estimated motion vector exceeds the cost value associated with the context motion vector; selecting the zero valued motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the context motion vector exceeds the cost value associated with the zero valued motion vector and the cost value associated with the estimated motion vector exceeds the cost value associated with the zero valued motion vector; and selecting the estimated motion vector as the inter-coding candidate motion vector on a condition that the cost value associated with the zero valued motion vector exceeds the cost value associated with the estimated motion vector and the cost value associated with the context motion vector exceeds the cost value associated with the estimated motion vector. 18. The method of claim 15, wherein encoding the current block includes:
determining an intra-coding candidate motion vector; determining a first sum of absolute differences for the intra-coding candidate motion vector; determining a second sum of absolute differences for the inter-coding candidate motion vector; encoding the current block using the intra-coding candidate motion vector on a condition that the second sum of absolute differences exceeds the first sum of absolute differences; and encoding the current block using the inter-coding candidate motion vector on a condition that the first sum of absolute differences exceeds the second sum of absolute differences. 19. The method of claim 15, wherein determining the plurality of cost values includes determining each cost value from the plurality of cost values in parallel. 20. A method, comprising:
identifying a current block from a current input frame from an input video stream; generating, by a processor in response to instructions stored on a non-transitory computer readable medium, an encoded block by encoding the current block, wherein encoding the current block includes:
determining an inter-coding candidate motion vector, wherein determining the inter-coding candidate motion vector includes:
identifying a plurality of motion vectors, wherein the plurality of motion vectors includes a context motion vector identified from a block neighboring the current block in the current input frame, a zero valued motion vector, and an estimated motion vector based on the current block and a reference frame;
determining a plurality of cost values by determining a cost value for each respective motion vector from the plurality of motion vectors, wherein determining the cost value for a motion vector from the plurality of motion vectors includes:
determining a distortion measurement for encoding the current block using the motion vector,
determining an estimated encoding cost for encoding the current block using the motion vector,
identifying a weighting value, and
determining the cost value as a sum of the distortion measurement and a product of the weighting value and the estimated encoding cost; and
identifying a motion vector from the plurality of motion vectors having a minimal cost value as the inter-coding candidate motion vector,
determining an intra-coding candidate motion vector,
determining a first sum of absolute differences for the intra-coding candidate motion vector,
determining a second sum of absolute differences for the inter-coding candidate motion vector,
encoding the current block using the intra-coding candidate motion vector on a condition that the second sum of absolute differences exceeds the first sum of absolute differences, and
encoding the current block using the inter-coding candidate motion vector on a condition that the first sum of absolute differences exceeds the second sum of absolute differences;
including the encoded block in an output bitstream; and storing or transmitting the output bitstream. | 2,400 |
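The candidate-selection rule in claims 1, 15, and 20 above can be sketched compactly: each candidate motion vector (a context vector from a neighboring block, the zero vector, and an estimated vector from motion search) is assigned a cost equal to the distortion measurement plus a weighting value times the estimated encoding cost, and the candidate with the minimal cost becomes the inter-coding candidate. The sketch below assumes precomputed distortion and encoding-cost numbers; it stands in for, and does not implement, the transform/quantization pipeline the claims describe.

```python
# Hypothetical sketch of the weighted-cost candidate selection in claim 1.

def rd_cost(distortion, encoding_cost, weight):
    # claim 1: cost = distortion + (weighting value * estimated encoding cost)
    return distortion + weight * encoding_cost

def select_inter_candidate(candidates, weight):
    """candidates: dict mapping name -> (motion_vector, distortion, encoding_cost).

    Returns the minimal-cost motion vector, its name, and all cost values.
    """
    costs = {name: rd_cost(d, r, weight) for name, (mv, d, r) in candidates.items()}
    best = min(costs, key=costs.get)  # minimal cost value wins
    return candidates[best][0], best, costs

# Illustrative numbers (assumed, not from the patent): the context vector has
# moderate distortion and a cheap encoding cost, so it wins at this weight.
candidates = {
    "context":   ((2, -1), 120.0, 14.0),
    "zero":      ((0, 0),  200.0,  2.0),
    "estimated": ((3, -2), 110.0, 30.0),
}
mv, name, costs = select_inter_candidate(candidates, weight=4.0)
```

Because each candidate's cost is independent of the others, the three cost evaluations can run concurrently, which is what claims 9 and 19 capture with "determining each cost value from the plurality of cost values in parallel."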
8,606 | 8,606 | 13,953,576 | 2,447 | Embodiments of the present invention disclose an information transmission method, apparatus, system, and terminal. The information transmission method includes: sending, by a first mobile terminal, an information transmission instruction to a plug-in bound to the first mobile terminal, and obtaining, by the plug-in, according to the information transmission instruction, webpage information of a current web page in a browser where the plug-in is located; processing, by the plug-in, the webpage information, and displaying the processed webpage information by adopting a floating layer; and sending, by the plug-in, the webpage information in the floating layer to the first mobile terminal. The present invention can implement information transmission and improve the convenience of information transmission. | 1. An information transmission method performed at a first mobile terminal having a processor and memory for storing one or more programs to be executed by the processor, the method comprising:
initiating an information retrieval application at the first mobile terminal; detecting a predefined user operation on the first mobile terminal to retrieve information from a web browser running on a second terminal, wherein the web browser includes a plug-in bound to a user account associated with the first mobile terminal; in response to the predefined user operation, sending an information transmission instruction to the plug-in at the second terminal, wherein the plug-in is configured to process, according to the information transmission instruction, webpage information of a current web page in the web browser, and overlay the processed webpage information on top of the current web page in the web browser; receiving at least a portion of the processed webpage information from the second terminal; and displaying the received webpage information on a display of the first mobile terminal. 2. The method according to claim 1, wherein the plug-in was pre-installed in the web browser and bound to the first mobile terminal before receiving the information transmission instruction from the first mobile terminal. 3. The method according to claim 2, wherein the first mobile terminal and the plug-in in the web browser are configured to perform the following operations to be bound together:
the first mobile terminal:
obtaining ingress information of the plug-in from the second terminal, wherein the ingress information comprises address information and identification information of the plug-in;
sending the ingress information of the plug-in and the user account information associated with the first mobile terminal to a user account binding server;
receiving a first binding notification message from the user account binding server, wherein the first binding notification message comprises the ingress information of the plug-in bound to the first mobile terminal; and
the plug-in receiving a second binding notification message from the user account binding server, wherein the second binding notification message comprises the user account information associated with the first mobile terminal bound to the plug-in. 4. The method according to claim 3, wherein the second terminal provides the ingress information of the plug-in to the first mobile terminal by:
generating a 2D barcode according to the ingress information of the plug-in; and displaying the 2D barcode on a display of the second terminal so that the first mobile terminal can obtain the ingress information of the plug-in by scanning the 2D barcode. 5. The method according to claim 2, wherein, after being bound to the first mobile terminal, the plug-in is activated to receive the information transmission instruction from the first mobile terminal within a preset time window; if the information transmission instruction from the first mobile terminal is received within the preset time window, the plug-in is triggered to process the webpage information; and if the information transmission instruction from the first mobile terminal is not received within the preset time window, the plug-in is de-activated. 6. The method according to claim 1, wherein detecting the predefined user operation is one selected from the group consisting of:
detecting a gravity sensing event caused by a predefined user movement of the first mobile terminal, and sending an information transmission instruction to the plug-in according to the detected gravity sensing event; or detecting a voice control command caused by a user of the first mobile terminal, and sending an information transmission instruction to the plug-in according to the detected voice control command; or detecting a predefined key pressing event caused by a user of the first mobile terminal, and sending an information transmission instruction to the plug-in according to the detected predefined key pressing event. 7. The method according to claim 1, wherein the webpage information comprises address information of the current web page and information of images in the current web page; and the processing of the webpage information by the plug-in further includes:
selecting images having a minimum side length larger than a preset value from the current web page, or images in a preset format from the current web page; compressing the selected images to form a thumbnail image for each image; and overlaying the thumbnail images on top of the current web page. 8. The method according to claim 7, wherein the second terminal transmits at least a portion of the processed webpage information to the first mobile terminal by:
receiving a user selection of at least a subset of the thumbnail images; and sending the user-selected thumbnail images and the address information of the current web page to the first mobile terminal. 9. An information transmission method performed at a second terminal having a processor and memory for storing one or more programs to be executed by the processor, the method comprising:
activating a plug-in in a web browser at the second terminal, wherein the plug-in is bound to a user account associated with a first mobile terminal; receiving an information transmission instruction from the first mobile terminal; in response to the information transmission instruction, processing webpage information of a current web page in the web browser, and overlaying the processed webpage information on top of the current web page in the web browser; and sending at least a portion of the processed webpage information to the first mobile terminal. 10. The method according to claim 9, wherein the plug-in was pre-installed in the web browser and bound to the first mobile terminal before receiving the information transmission instruction from the first mobile terminal. 11. The method according to claim 10, wherein the first mobile terminal and the plug-in in the web browser are configured to perform the following operations to be bound together:
the first mobile terminal:
obtaining ingress information of the plug-in from the second terminal, wherein the ingress information comprises address information and identification information of the plug-in;
sending the ingress information of the plug-in and the user account information associated with the first mobile terminal to a user account binding server;
receiving a first binding notification message from the user account binding server, wherein the first binding notification message comprises the ingress information of the plug-in bound to the first mobile terminal; and
the plug-in receiving a second binding notification message from the user account binding server, wherein the second binding notification message comprises the user account information associated with the first mobile terminal bound to the plug-in. 12. The method according to claim 11, wherein the second terminal provides the ingress information of the plug-in to the first mobile terminal by:
generating a 2D barcode according to the ingress information of the plug-in; and displaying the 2D barcode on a display of the second terminal so that the first mobile terminal can obtain the ingress information of the plug-in by scanning the 2D barcode. 13. The method according to claim 10, wherein, after being bound to the first mobile terminal, the plug-in is activated to receive the information transmission instruction from the first mobile terminal within a preset time window; if the information transmission instruction from the first mobile terminal is received within the preset time window, the plug-in is triggered to process the webpage information; and if the information transmission instruction from the first mobile terminal is not received within the preset time window, the plug-in is de-activated. 14. The method according to claim 9, wherein the webpage information comprises address information of the current web page and information of images in the current web page; and the processing of the webpage information by the plug-in further includes:
selecting images having a minimum side length larger than a preset value from the current web page, or images in a preset format from the current web page; compressing the selected images to form a thumbnail image for each image; and overlaying the thumbnail images on top of the current web page. 15. The method according to claim 14, wherein the second terminal transmits at least a portion of the processed webpage information to the first mobile terminal by:
receiving a user selection of at least a subset of the thumbnail images; and sending the user-selected thumbnail images and the address information of the current web page to the first mobile terminal. 16. A first mobile terminal, comprising:
one or more processors; and memory storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the first mobile terminal to:
initiate an information retrieval application at the first mobile terminal;
detect a predefined user operation on the first mobile terminal to retrieve information from a web browser running on a second terminal, wherein the web browser includes a plug-in bound to a user account associated with the first mobile terminal;
in response to the predefined user operation, send an information transmission instruction to the plug-in at the second terminal, wherein the plug-in is configured to process, according to the information transmission instruction, webpage information of a current web page in the web browser, and overlay the processed webpage information on top of the current web page in the web browser;
receive at least a portion of the processed webpage information from the second terminal; and
display the received webpage information on a display of the first mobile terminal. 17. The first mobile terminal according to claim 16, wherein the first mobile terminal is configured to perform the following operations to be bound to the plug-in:
obtaining ingress information of the plug-in from the second terminal, wherein the ingress information comprises address information and identification information of the plug-in; sending the ingress information of the plug-in and the user account information associated with the first mobile terminal to a user account binding server; and receiving a first binding notification message from the user account binding server, wherein the first binding notification message comprises the ingress information of the plug-in bound to the first mobile terminal. 18. The first mobile terminal according to claim 16, wherein the first mobile terminal detects the predefined user operation in one manner selected from the group consisting of:
detecting a gravity sensing event caused by a predefined user movement of the first mobile terminal, and sending an information transmission instruction to the plug-in according to the detected gravity sensing event; or detecting a voice control command caused by a user of the first mobile terminal, and sending an information transmission instruction to the plug-in according to the detected voice control command; or detecting a predefined key pressing event caused by a user of the first mobile terminal, and sending an information transmission instruction to the plug-in according to the detected predefined key pressing event.

TechCenter 2,400
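One way to read the image-selection and thumbnail steps of claims 7 and 14 is sketched below. The size threshold, format list, and dict-based image representation are assumptions for illustration, not part of the claims.

```python
def select_images(images, min_side=100, allowed_formats=("jpeg", "png")):
    """Keep images whose shorter side exceeds a preset value, or whose
    format is in a preset list (one reading of claims 7 and 14).

    images: iterable of dicts with "width", "height", and "format" keys
    (a hypothetical representation of images found in the current page).
    """
    return [img for img in images
            if min(img["width"], img["height"]) > min_side
            or img["format"] in allowed_formats]

def make_thumbnail(img, max_side=64):
    """Scale the image dimensions down so the longer side equals max_side,
    standing in for the claimed compression into a thumbnail image."""
    scale = max_side / max(img["width"], img["height"])
    return {**img,
            "width": round(img["width"] * scale),
            "height": round(img["height"] * scale)}
```

The selected thumbnails would then be overlaid on the current page and, upon user selection, sent to the first mobile terminal together with the page address, as the claims recite.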
ApplicationNumber 16,372,751 | ArtUnit 2,411

A user equipment is configured to receive an extensible authentication protocol (EAP) request from a session management function (SMF) that serves as an EAP authenticator for secondary authentication of the user equipment. The secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment. The user equipment is also configured to, responsive to the EAP request, transmit an EAP response to the SMF.

1. A method of secondary authentication of a user equipment, implemented by the user equipment, wherein the method comprises:
receiving an extensible authentication protocol (EAP) request from a session management function (SMF) that serves as an EAP authenticator for secondary authentication of the user equipment, wherein the secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment; and responsive to the EAP request, transmitting an EAP response from the user equipment to the SMF. 2. The method of claim 1, wherein the SMF is configured to forward the EAP request and the EAP response between the user equipment and an EAP server that executes an EAP authentication method for the EAP authenticator. 3. The method of claim 2, wherein:
the user equipment and the SMF are configured for use in a wireless communication network that delegates the secondary authentication to a data network that comprises the EAP server and with which the user equipment requests a user plane session; the secondary authentication of the user equipment is authentication of the user equipment to establish the user plane session. 4. The method of claim 2, wherein the EAP request and the EAP response are transmitted between the SMF and the EAP server via a user plane function selected by the SMF. 5. The method of claim 1, wherein the EAP request and the EAP response are encapsulated within respective non-access stratum (NAS) protocol messages between the SMF and the UE. 6. The method of claim 1, further comprising:
transmitting a session establishment request that triggers the secondary authentication of the user equipment, wherein the session establishment request comprises a secondary identity of the user equipment used for the secondary authentication; receiving a session establishment response comprising either an EAP success message indicating success of the secondary authentication or an EAP failure message indicating failure of the secondary authentication. 7. A method of secondary authentication of a user equipment, implemented by an authentication system, wherein the method comprises:
transmitting an extensible authentication protocol (EAP) request from a session management function (SMF) to a user equipment, wherein the SMF serves as an EAP authenticator for secondary authentication of the user equipment, and the secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment; and responsive to the EAP request, receiving at the SMF an EAP response from the user equipment. 8. The method of claim 7, further comprising forwarding the EAP request and the EAP response between the user equipment and an EAP server that executes an EAP authentication method for the EAP authenticator. 9. The method of claim 8, wherein:
the user equipment and the SMF are configured for use in a wireless communication network that delegates the secondary authentication to a data network that comprises the EAP server and with which the user equipment requests a user plane session; the secondary authentication of the user equipment is authentication of the user equipment to establish the user plane session. 10. The method of claim 8, further comprising transmitting the EAP request and the EAP response between the SMF and the EAP server via a user plane function selected by the SMF. 11. The method of claim 7, wherein the EAP request and the EAP response are encapsulated within respective non-access stratum (NAS) protocol messages between the SMF and the UE. 12. The method of claim 7, further comprising:
receiving a session establishment request from the user equipment that triggers the secondary authentication of the user equipment, wherein the session establishment request comprises a secondary identity of the user equipment used for the secondary authentication; transmitting a session establishment response to the user equipment, the session establishment response comprising either an EAP success message indicating success of the secondary authentication or an EAP failure message indicating failure of the secondary authentication. 13. A user equipment comprising:
processing circuitry and memory, the memory containing instructions executable by the processing circuitry whereby the user equipment is configured to:
receive an extensible authentication protocol (EAP) request from a session management function (SMF) that serves as an EAP authenticator for secondary authentication of the user equipment, wherein the secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment; and
responsive to the EAP request, transmit an EAP response to the SMF. 14. The user equipment of claim 13, wherein the SMF is configured to forward the EAP request and the EAP response between the user equipment and an EAP server that executes an EAP authentication method for the EAP authenticator. 15. The user equipment of claim 14, wherein:
the user equipment and the SMF are configured for use in a wireless communication network that delegates the secondary authentication to a data network that comprises the EAP server and with which the user equipment requests a user plane session; the secondary authentication of the user equipment is authentication of the user equipment to establish the user plane session. 16. The user equipment of claim 14, wherein the EAP request and the EAP response are transmitted between the SMF and the EAP server via a user plane function selected by the SMF. 17. The user equipment of claim 13, wherein the EAP request and the EAP response are encapsulated within respective non-access stratum (NAS) protocol messages between the SMF and the UE. 18. The user equipment of claim 13, further configured to:
transmit a session establishment request that triggers the secondary authentication of the user equipment, wherein the session establishment request comprises a secondary identity of the user equipment used for the secondary authentication; receive a session establishment response comprising either an EAP success message indicating success of the secondary authentication or an EAP failure message indicating failure of the secondary authentication. 19. An authentication system comprising:
processing circuitry and memory, the memory containing instructions executable by the processing circuitry whereby the authentication system is configured to:
transmit an extensible authentication protocol (EAP) request from a session management function (SMF) to a user equipment, wherein the SMF serves as an EAP authenticator for secondary authentication of the user equipment, and the secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment; and
responsive to the EAP request, receive at the SMF an EAP response from the user equipment. 20. The authentication system of claim 19, further configured to forward the EAP request and the EAP response between the user equipment and an EAP server that executes an EAP authentication method for the EAP authenticator. 21. The authentication system of claim 20, wherein:
the user equipment and the SMF are configured for use in a wireless communication network that delegates the secondary authentication to a data network that comprises the EAP server and with which the user equipment requests a user plane session; the secondary authentication of the user equipment is authentication of the user equipment to establish the user plane session. 22. The authentication system of claim 20, further configured to transmit the EAP request and the EAP response between the SMF and the EAP server via a user plane function selected by the SMF. 23. The authentication system of claim 19, wherein the EAP request and the EAP response are encapsulated within respective non-access stratum (NAS) protocol messages between the SMF and the UE. 24. The authentication system of claim 19, further configured to:
receive a session establishment request from the user equipment that triggers the secondary authentication of the user equipment, wherein the session establishment request comprises a secondary identity of the user equipment used for the secondary authentication; transmit a session establishment response to the user equipment, the session establishment response comprising either an EAP success message indicating success of the secondary authentication or an EAP failure message indicating failure of the secondary authentication. 25. The authentication system of claim 19, further comprising the SMF, the EAP server, or both. | A user equipment is configured to receive an extensible authentication protocol (EAP) request from a session management function (SMF) that serves as an EAP authenticator for secondary authentication of the user equipment. The secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment. The user equipment is also configured to, responsive to the EAP request, transmit an EAP response to the SMF. 1. A method of secondary authentication of a user equipment, implemented by the user equipment, wherein the method comprises:
receiving an extensible authentication protocol (EAP) request from a session management function (SMF) that serves as an EAP authenticator for secondary authentication of the user equipment, wherein the secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment; and responsive to the EAP request, transmitting an EAP response from the user equipment to the SMF. 2. The method of claim 1, wherein the SMF is configured to forward the EAP request and the EAP response between the user equipment and an EAP server that executes an EAP authentication method for the EAP authenticator. 3. The method of claim 2, wherein:
the user equipment and the SMF are configured for use in a wireless communication network that delegates the secondary authentication to a data network that comprises the EAP server and with which the user equipment requests a user plane session; and the secondary authentication of the user equipment is authentication of the user equipment to establish the user plane session. 4. The method of claim 2, wherein the EAP request and the EAP response are transmitted between the SMF and the EAP server via a user plane function selected by the SMF. 5. The method of claim 1, wherein the EAP request and the EAP response are encapsulated within respective non-access stratum (NAS) protocol messages between the SMF and the user equipment. 6. The method of claim 1, further comprising:
transmitting a session establishment request that triggers the secondary authentication of the user equipment, wherein the session establishment request comprises a secondary identity of the user equipment used for the secondary authentication; receiving a session establishment response comprising either an EAP success message indicating success of the secondary authentication or an EAP failure message indicating failure of the secondary authentication. 7. A method of secondary authentication of a user equipment, implemented by an authentication system, wherein the method comprises:
transmitting an extensible authentication protocol (EAP) request from a session management function (SMF) to a user equipment, wherein the SMF serves as an EAP authenticator for secondary authentication of the user equipment, and the secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment; and responsive to the EAP request, receiving at the SMF an EAP response from the user equipment. 8. The method of claim 7, further comprising forwarding the EAP request and the EAP response between the user equipment and an EAP server that executes an EAP authentication method for the EAP authenticator. 9. The method of claim 8, wherein:
the user equipment and the SMF are configured for use in a wireless communication network that delegates the secondary authentication to a data network that comprises the EAP server and with which the user equipment requests a user plane session; and the secondary authentication of the user equipment is authentication of the user equipment to establish the user plane session. 10. The method of claim 8, further comprising transmitting the EAP request and the EAP response between the SMF and the EAP server via a user plane function selected by the SMF. 11. The method of claim 7, wherein the EAP request and the EAP response are encapsulated within respective non-access stratum (NAS) protocol messages between the SMF and the user equipment. 12. The method of claim 7, further comprising:
receiving a session establishment request from the user equipment that triggers the secondary authentication of the user equipment, wherein the session establishment request comprises a secondary identity of the user equipment used for the secondary authentication; transmitting a session establishment response to the user equipment, the session establishment response comprising either an EAP success message indicating success of the secondary authentication or an EAP failure message indicating failure of the secondary authentication. 13. A user equipment comprising:
processing circuitry and memory, the memory containing instructions executable by the processing circuitry whereby the user equipment is configured to:
receive an extensible authentication protocol (EAP) request from a session management function (SMF) that serves as an EAP authenticator for secondary authentication of the user equipment, wherein the secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment; and
responsive to the EAP request, transmit an EAP response to the SMF. 14. The user equipment of claim 13, wherein the SMF is configured to forward the EAP request and the EAP response between the user equipment and an EAP server that executes an EAP authentication method for the EAP authenticator. 15. The user equipment of claim 14, wherein:
the user equipment and the SMF are configured for use in a wireless communication network that delegates the secondary authentication to a data network that comprises the EAP server and with which the user equipment requests a user plane session; and the secondary authentication of the user equipment is authentication of the user equipment to establish the user plane session. 16. The user equipment of claim 14, wherein the EAP request and the EAP response are transmitted between the SMF and the EAP server via a user plane function selected by the SMF. 17. The user equipment of claim 13, wherein the EAP request and the EAP response are encapsulated within respective non-access stratum (NAS) protocol messages between the SMF and the user equipment. 18. The user equipment of claim 13, further configured to:
transmit a session establishment request that triggers the secondary authentication of the user equipment, wherein the session establishment request comprises a secondary identity of the user equipment used for the secondary authentication; receive a session establishment response comprising either an EAP success message indicating success of the secondary authentication or an EAP failure message indicating failure of the secondary authentication. 19. An authentication system comprising:
processing circuitry and memory, the memory containing instructions executable by the processing circuitry whereby the authentication system is configured to:
transmit an extensible authentication protocol (EAP) request from a session management function (SMF) to a user equipment, wherein the SMF serves as an EAP authenticator for secondary authentication of the user equipment, and the secondary authentication is authentication of the user equipment in addition to primary authentication of the user equipment; and
responsive to the EAP request, receive at the SMF an EAP response from the user equipment. 20. The authentication system of claim 19, further configured to forward the EAP request and the EAP response between the user equipment and an EAP server that executes an EAP authentication method for the EAP authenticator. 21. The authentication system of claim 20, wherein:
the user equipment and the SMF are configured for use in a wireless communication network that delegates the secondary authentication to a data network that comprises the EAP server and with which the user equipment requests a user plane session; and the secondary authentication of the user equipment is authentication of the user equipment to establish the user plane session. 22. The authentication system of claim 20, further configured to transmit the EAP request and the EAP response between the SMF and the EAP server via a user plane function selected by the SMF. 23. The authentication system of claim 19, wherein the EAP request and the EAP response are encapsulated within respective non-access stratum (NAS) protocol messages between the SMF and the user equipment. 24. The authentication system of claim 19, further configured to:
receive a session establishment request from the user equipment that triggers the secondary authentication of the user equipment, wherein the session establishment request comprises a secondary identity of the user equipment used for the secondary authentication; transmit a session establishment response to the user equipment, the session establishment response comprising either an EAP success message indicating success of the secondary authentication or an EAP failure message indicating failure of the secondary authentication. 25. The authentication system of claim 19, further comprising the SMF, the EAP server, or both. | 2,400 |
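The flow recited in the claims above — a session establishment request carrying a secondary identity triggers secondary authentication, the SMF relays EAP request/response between the UE and a data-network EAP server, and the session establishment response carries EAP-Success or EAP-Failure — can be illustrated with a minimal Python sketch. All class and message names here (`EapMessage`, `EapServer`, `Smf`, `Ue`) are hypothetical; the actual EAP method and the NAS encapsulation are elided, and nothing below is a 3GPP or real EAP-library API.

```python
# Hypothetical sketch of the claimed secondary-authentication flow: the SMF
# acts only as a pass-through EAP authenticator, and the data-network EAP
# server performs the actual verification.
from dataclasses import dataclass

@dataclass
class EapMessage:
    code: str       # "Request", "Response", "Success", or "Failure"
    payload: str

class EapServer:
    """Data-network EAP server executing the EAP authentication method."""
    def __init__(self, credentials: dict[str, str]):
        self.credentials = credentials  # secondary identity -> expected credential

    def start(self, identity: str) -> EapMessage:
        return EapMessage("Request", f"challenge-for-{identity}")

    def verify(self, identity: str, response: EapMessage) -> EapMessage:
        ok = self.credentials.get(identity) == response.payload
        return EapMessage("Success" if ok else "Failure", "")

class Ue:
    """User equipment holding a secondary credential for the data network."""
    def __init__(self, credential: str):
        self.credential = credential

    def handle_eap_request(self, request: EapMessage) -> EapMessage:
        # Responsive to the EAP request, transmit an EAP response to the SMF.
        return EapMessage("Response", self.credential)

class Smf:
    """SMF as EAP authenticator: forwards EAP traffic, never verifies itself."""
    def __init__(self, eap_server: EapServer):
        self.eap_server = eap_server

    def establish_session(self, ue: Ue, secondary_identity: str) -> dict:
        # The session establishment request (with a secondary identity)
        # triggers the secondary authentication.
        eap_request = self.eap_server.start(secondary_identity)
        # In the claims, the EAP request and response are encapsulated in NAS
        # messages between the SMF and the UE; encapsulation is elided here.
        eap_response = ue.handle_eap_request(eap_request)
        result = self.eap_server.verify(secondary_identity, eap_response)
        # The session establishment response carries EAP-Success or EAP-Failure.
        return {"session_established": result.code == "Success", "eap": result.code}

server = EapServer({"ue-1@dn.example": "s3cret"})
smf = Smf(server)
print(smf.establish_session(Ue("s3cret"), "ue-1@dn.example"))
# -> {'session_established': True, 'eap': 'Success'}
print(smf.establish_session(Ue("wrong"), "ue-1@dn.example"))
# -> {'session_established': False, 'eap': 'Failure'}
```

In a real network the challenge/response would be a full EAP method (e.g. EAP-TLS), and the SMF would reach the EAP server via a user plane function it selects; the sketch only shows the message routing and the success/failure outcomes the claims describe.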
8,608 | 8,608 | 15,666,855 | 2,426 | Systems and methods for enhanced video encoding identify patterns in sequences of raw digital video frames to extract features and identify the type of content represented by the extracted features. The system simulates many outcomes of encoding the sequence of digital video frames by using various different encoding strategies to find the relative best encoding strategy for each sequence of frames. As the encoder processes video, it passes digital video frames to a modeling system which determines whether the video, or video having that same type of content, has been previously observed by the system. The system then selectively applies a saved encoding strategy that had been determined by the system to be particularly suitable for encoding the same sequence of video frames or that same type of content. | 1. A method in a digital video encoding system, the method comprising:
receiving, by a digital video encoder of the video encoding system, a sequence of digital video frames; determining, by the video encoding system, whether an identified type of content shown throughout the sequence of video frames has been previously processed by the video encoding system; in response to the video encoding system determining that the identified type of content shown throughout the sequence of video frames has been previously processed by the video encoding system, selecting, by the video encoding system, a video encoding strategy previously used to encode one or more features associated with the identified type of content and previously saved by the video encoding system for being particularly suitable for encoding the one or more features associated with the identified type of content; and encoding, by the digital video encoder, the sequence of video frames using the selected video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the one or more features associated with the identified type of content. 2. The method of claim 1 further comprising:
before encoding the sequence of video frames using the selected video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the one or more features associated with the identified type of content:
receiving, by a machine learning module of the digital video encoding system, a plurality of different sequences of digital video frames;
detecting, by the machine learning module of the digital video encoding system, a type of content of the plurality of different sequences of digital video frames as the identified type of content;
encoding, by the machine learning module of the digital video encoding system, the plurality of different sequences of digital video frames using various different encoding strategies;
determining, by the machine learning module of the digital video encoding system, that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content;
associating, by the machine learning module of the digital video encoding system, with the identified type of content the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content;
saving in an encoding model library, by the machine learning module of the digital video encoding system, the association of the identified type of content with the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content; and
saving in the encoding model library, by the machine learning module of the digital video encoding system, the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content. 3. The method of claim 2 wherein the detecting a type of content of the plurality of different sequences of digital video frames as the identified type of content includes:
extracting various features from the plurality of different sequences of digital video frames; and
identifying the extracted various features as being the one or more features associated with the identified type of content. 4. The method of claim 3 wherein the various features are representations of physical objects throughout the plurality of different sequences of digital video frames associated with the identified type of content. 5. The method of claim 4 wherein the physical objects include one or more objects associated with sports and the identified type of content is sports. 6. The method of claim 5 wherein the physical objects include one or more of: a baseball, a bat, a basketball, a football, a football helmet, a soccer ball, a stadium, a large crowd of people. 7. The method of claim 2 wherein the determining that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content includes:
saving results of encoding the one or more features associated with the identified type of content from the encoding of the plurality of different sequences of digital video frames using the various different encoding strategies;
comparing, using predetermined criteria, the results of encoding the one or more features associated with the identified type of content from the encoding of the plurality of different sequences of digital video frames using the various different encoding strategies; and
selecting, based on the predetermined criteria, one of the various different encoding strategies used to encode the plurality of different sequences of digital video frames as the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content based on the comparison of, using the predetermined criteria, the results of encoding the one or more features associated with the identified type of content. 8. The method of claim 7 wherein the predetermined criteria are regarding measurements related to one or more of: reduction of color range, uniform or targeted sharpness reduction, frame duplication, proactive bandwidth allocation. 9. The method of claim 2 wherein the various different encoding strategies include two or more of: reducing color range, uniform or targeted sharpness reduction, frame duplication, proactive bandwidth allocation. 10. A digital video encoding system comprising:
at least one processor; at least one memory coupled to the at least one processor having computer executable instructions stored thereon that, when executed, cause the at least one processor to: receive a sequence of digital video frames; determine that the sequence of digital video frames has been previously processed by the video encoding system; in response to determining that the sequence of digital video frames has been previously processed by the video encoding system, select a video encoding strategy previously used to encode the sequence of digital video frames, the video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the sequence of digital video frames; encode the sequence of video frames using the selected video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the sequence of digital video frames; and output the sequence of video frames encoded using the selected video encoding strategy. 11. The system of claim 10 wherein the computer executable instructions, when executed, further cause the at least one processor to:
before encoding the sequence of video frames using the selected video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the sequence of video frames:
receive a plurality of sequences of digital video frames on multiple different occasions;
detect that each of the plurality of sequences of digital video frames is the same sequence of digital video frames;
encode the plurality of different sequences of digital video frames using various different encoding strategies; and
determine that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the same sequence of digital video frames;
associate the encoding strategy particularly suitable for encoding the same sequence of digital video frames with the same sequence of digital video frames;
save in an encoding model library the association of the encoding strategy particularly suitable for encoding the same sequence of digital video frames with the same sequence of digital video frames; and
save in the encoding model library the encoding strategy particularly suitable for encoding the same sequence of digital video frames. 12. The system of claim 11 wherein the detection that each of the plurality of sequences of digital video frames is the same sequence of digital video frames includes:
comparison of the plurality of different sequences of digital video frames to each other; and
determining that the plurality of sequences of digital video frames is the same sequence of digital video frames based on the comparison of the plurality of different sequences of digital video frames to each other. 13. The system of claim 12 wherein the determining that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the same sequence of digital video frames includes:
saving results of encoding the plurality of different sequences of digital video frames using various different encoding strategies;
comparing, using predetermined criteria, the results of encoding the plurality of different sequences of digital video frames using various different encoding strategies; and
selecting, based on the predetermined criteria, one of the various different encoding strategies used to encode the plurality of different sequences of digital video frames as the encoding strategy particularly suitable for encoding the same sequence of digital video frames based on the comparison of, using the predetermined criteria, the results of encoding the plurality of different sequences of digital video frames using various different encoding strategies. 14. The system of claim 13 wherein the predetermined criteria are regarding measurements related to one or more of: reduction of color range, uniform or targeted sharpness reduction, frame duplication, proactive bandwidth allocation. 15. The system of claim 11 wherein the various different encoding strategies include two or more of: reducing color range, uniform or targeted sharpness reduction, frame duplication, proactive bandwidth allocation. 16. A non-transitory computer-readable storage medium having computer-executable instructions stored thereon that, when executed, cause at least one processor to perform:
receiving, by a machine learning module of a digital video encoding system, a plurality of different sequences of digital video frames; detecting, by the machine learning module of the digital video encoding system, a type of content of the plurality of different sequences of digital video frames as an identified type of content; encoding, by the machine learning module of the digital video encoding system, the plurality of different sequences of digital video frames using various different encoding strategies; and determining, by the machine learning module of the digital video encoding system, that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content. 17. The non-transitory computer-readable storage medium of claim 16 wherein the computer-executable instructions, when executed, further cause at least one processor to perform:
associating, by the machine learning module of the digital video encoding system, with the identified type of content the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content;
saving in an encoding model library, by the machine learning module of the digital video encoding system, the association of the identified type of content with the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content; and
saving in the encoding model library, by the machine learning module of the digital video encoding system, the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content. 18. The non-transitory computer-readable storage medium of claim 17 wherein the detecting a type of content of the plurality of different sequences of digital video frames as the identified type of content includes:
extracting various features from the plurality of different sequences of digital video frames; and
identifying the extracted various features as being the one or more features associated with the identified type of content. 19. The non-transitory computer-readable storage medium of claim 18 wherein the various features are representations of physical objects throughout plurality of different sequences of digital video frames associated with the identified type of content. 20. The non-transitory computer-readable storage medium of claim 19 wherein the physical objects include one or more objects associated with a news broadcast and the identified type of content is news. 21. The non-transitory computer-readable storage medium of claim 16 wherein the determining that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content includes:
saving results of encoding the one or more features associated with the identified type of content from the encoding of the plurality of different sequences of digital video frames using the various different encoding strategies;
comparing, using predetermined criteria, the results of encoding the one or more features associated with the identified type of content from the encoding of the plurality of different sequences of digital video frames using the various different encoding strategies; and
selecting, based on the predetermined criteria, one of the various different encoding strategies used to encode the plurality of different sequences of digital video frames as the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content based on the comparison of, using the predetermined criteria, the results of encoding the one or more features associated with the identified type of content. 22. The non-transitory computer-readable storage medium of claim 16 wherein the machine learning module of the digital video encoding system uses a neural network. | Systems and methods for enhanced video encoding identify patterns in sequences of raw digital video frames to extract features and identify the type of content represented by the extracted features. The system simulates many outcomes of encoding the sequence of digital video frames by using various different encoding strategies to find the relative best encoding strategy for each sequence of frames. As the encoder processes video, it passes digital video frames to a modeling system which determines whether the video, or video having that same type of content, has been previously observed by the system. The system then selectively applies a saved encoding strategy that had been determined by the system to be particularly suitable for encoding the same sequence of video frames or that same type of content. 1. A method in a digital video encoding system, the method comprising:
receiving, by a digital video encoder of the video encoding system, a sequence of digital video frames; determining, by the video encoding system, whether an identified type of content shown throughout the sequence of video frames has been previously processed by the video encoding system; in response to the video encoding system determining that the identified type of content shown throughout the sequence of video frames has been previously processed by the video encoding system, selecting, by the video encoding system, a video encoding strategy previously used to encode one or more features associated with the identified type of content and previously saved by the video encoding system for being particularly suitable for encoding the one or more features associated with the identified type of content; and encoding, by the digital video encoder, the sequence of video frames using the selected video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the one or more features associated with the identified type of content. 2. The method of claim 1 further comprising:
before encoding the sequence of video frames using the selected video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the one or more features associated with the identified type of content:
receiving, by a machine learning module of the digital video encoding system, a plurality of different sequences of digital video frames;
detecting, by the machine learning module of the digital video encoding system, a type of content of the plurality of different sequences of digital video frames as the identified type of content;
encoding, by the machine learning module of the digital video encoding system, the plurality of different sequences of digital video frames using various different encoding strategies;
determining, by the machine learning module of the digital video encoding system, that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content;
associating, by the machine learning module of the digital video encoding system, with the identified type of content the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content;
saving in an encoding model library, by the machine learning module of the digital video encoding system, the association of the identified type of content with the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content; and
saving in the encoding model library, by the machine learning module of the digital video encoding system, the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content. 3. The method of claim 2 wherein the detecting a type of content of the plurality of different sequences of digital video frames as the identified type of content includes:
extracting various features from the plurality of different sequences of digital video frames; and
identifying the extracted various features as being the one or more features associated with the identified type of content. 4. The method of claim 3 wherein the various features are representations of physical objects throughout the plurality of different sequences of digital video frames associated with the identified type of content. 5. The method of claim 4 wherein the physical objects include one or more objects associated with sports and the identified type of content is sports. 6. The method of claim 5 wherein the physical objects include one or more of: a baseball, a bat, a basketball, a football, a football helmet, a soccer ball, a stadium, a large crowd of people. 7. The method of claim 2 wherein the determining that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content includes:
saving results of encoding the one or more features associated with the identified type of content from the encoding of the plurality of different sequences of digital video frames using the various different encoding strategies;
comparing, using predetermined criteria, the results of encoding the one or more features associated with the identified type of content from the encoding of the plurality of different sequences of digital video frames using the various different encoding strategies; and
selecting, based on the predetermined criteria, one of the various different encoding strategies used to encode the plurality of different sequences of digital video frames as the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content based on the comparison of, using the predetermined criteria, the results of encoding the one or more features associated with the identified type of content. 8. The method of claim 7 wherein the predetermined criteria are regarding measurements related to one or more of: reduction of color range, uniform or targeted sharpness reduction, frame duplication, proactive bandwidth allocation. 9. The method of claim 2 wherein the various different encoding strategies include two or more of: reducing color range, uniform or targeted sharpness reduction, frame duplication, proactive bandwidth allocation. 10. A digital video encoding system comprising:
at least one processor; at least one memory coupled to the at least one processor having computer executable instructions stored thereon that, when executed, cause the at least one processor to: receive a sequence of digital video frames; determine that the sequence of digital video frames has been previously processed by the video encoding system; in response to determining that the sequence of digital video frames has been previously processed by the video encoding system, select a video encoding strategy previously used to encode the sequence of digital video frames, the video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the sequence of digital video frames; encode the sequence of video frames using the selected video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the sequence of digital video frames; and output the sequence of video frames encoded using the selected video encoding strategy. 11. The system of claim 10 wherein the computer executable instructions, when executed, further cause the at least one processor to:
before encoding the sequence of video frames using the selected video encoding strategy previously saved by the video encoding system for being particularly suitable for encoding the sequence of video frames:
receive a plurality of sequences of digital video frames on multiple different occasions;
detect that each of the plurality of sequences of digital video frames is the same sequence of digital video frames;
encode the plurality of different sequences of digital video frames using various different encoding strategies; and
determine that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the same sequence of digital video frames;
associate the encoding strategy particularly suitable for encoding the same sequence of digital video frames with the same sequence of digital video frames;
save in an encoding library the association of the encoding strategy particularly suitable for encoding the same sequence of digital video frames with the same sequence of digital video frames; and
save in the encoding model library the encoding strategy particularly suitable for encoding the same sequence of digital video frames. 12. The system of claim 11 wherein the detection that each of the plurality of sequences of digital video frames is the same sequence of digital video frames includes:
comparison of the plurality of different sequences of digital video frames to each other; and
determining that the plurality of sequences of digital video frames is the same sequence of digital video frames based on the comparison of the plurality of different sequences of digital video frames to each other. 13. The system of claim 12 wherein the determining that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the same sequence of digital video frames includes:
saving results of encoding the plurality of different sequences of digital video frames using various different encoding strategies;
comparing, using predetermined criteria, the results of encoding the plurality of different sequences of digital video frames using various different encoding strategies; and
selecting, based on the predetermined criteria, one of the various different encoding strategies used to encode the plurality of different sequences of digital video frames as the encoding strategy particularly suitable for encoding the same sequence of digital video frames based on the comparison of, using the predetermined criteria, the results of encoding the plurality of different sequences of digital video frames using various different encoding strategies. 14. The system of claim 13 wherein the predetermined criteria are regarding measurements related to one or more of: reduction of color range, uniform or targeted sharpness reduction, frame duplication, proactive bandwidth allocation. 15. The system of claim 11 wherein the various different encoding strategies include two or more of: reducing color range, uniform or targeted sharpness reduction, frame duplication, proactive bandwidth allocation. 16. A non-transitory computer-readable storage medium having computer-executable instructions stored thereon that, when executed, cause at least one processor to perform:
receiving, by a machine learning module of a digital video encoding system, a plurality of different sequences of digital video frames; detecting, by the machine learning module of the digital video encoding system, a type of content of the plurality of different sequences of digital video frames as an identified type of content; encoding, by the machine learning module of the digital video encoding system, the plurality of different sequences of digital video frames using various different encoding strategies; and determining, by the machine learning module of the digital video encoding system, that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content. 17. The non-transitory computer-readable storage medium of claim 16 wherein the computer-executable instructions, when executed, further cause at least one processor to perform:
associating, by the machine learning module of the digital video encoding system, with the identified type of content the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content;
saving in an encoding library, by the machine learning module of the digital video encoding system, the association of the identified type of content with the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content; and
saving in the encoding model library, by the machine learning module of the digital video encoding system, the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content. 18. The non-transitory computer-readable storage medium of claim 17 wherein the detecting a type of content of the plurality of different sequences of digital video frames as the identified type of content includes:
extracting various features from the plurality of different sequences of digital video frames; and
identifying the extracted various features as being the one or more features associated with the identified type of content. 19. The non-transitory computer-readable storage medium of claim 18 wherein the various features are representations of physical objects throughout the plurality of different sequences of digital video frames associated with the identified type of content. 20. The non-transitory computer-readable storage medium of claim 19 wherein the physical objects include one or more objects associated with a news broadcast and the identified type of content is news. 21. The non-transitory computer-readable storage medium of claim 16 wherein the determining that one of the various different encoding strategies is the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content includes:
saving results of encoding the one or more features associated with the identified type of content from the encoding of the plurality of different sequences of digital video frames using the various different encoding strategies;
comparing, using predetermined criteria, the results of encoding the one or more features associated with the identified type of content from the encoding of the plurality of different sequences of digital video frames using the various different encoding strategies; and
selecting, based on the predetermined criteria, one of the various different encoding strategies used to encode the plurality of different sequences of digital video frames as the encoding strategy particularly suitable for encoding the one or more features associated with the identified type of content based on the comparison of, using the predetermined criteria, the results of encoding the one or more features associated with the identified type of content. 22. The non-transitory computer-readable storage medium of claim 16 wherein the machine learning module of the digital video encoding system uses a neural network. | 2,400 |
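The strategy-selection loop recited in claims 11-13 above (encode repeated arrivals of the same frame sequence with various different strategies, compare the results under predetermined criteria, and save the winning strategy and its association in an encoding library) can be sketched roughly as follows. All names (`Strategy`, `encode`, `score`, `select_best_strategy`) and the size/quality arithmetic are invented for illustration; the patent does not specify an implementation.

```python
# Sketch of the claimed loop: try different encoding strategies on the
# same recurring frame sequence, score each result with a predetermined
# criterion, and save the best strategy in an encoding library keyed by
# the sequence. Encoder and criterion are toy stand-ins.
from dataclasses import dataclass

@dataclass(frozen=True)
class Strategy:
    name: str
    color_range_reduction: float  # 0.0 = none, 1.0 = maximal
    sharpness_reduction: float

def encode(frames, strategy):
    """Toy encoder: returns (encoded_size, quality) for the sketch."""
    base = sum(len(f) for f in frames)
    size = base * (1.0 - 0.3 * strategy.color_range_reduction
                       - 0.2 * strategy.sharpness_reduction)
    quality = (1.0 - 0.1 * strategy.color_range_reduction
                   - 0.15 * strategy.sharpness_reduction)
    return size, quality

def score(size, quality):
    """Predetermined criterion: reward quality per encoded byte."""
    return quality / size

def select_best_strategy(frames, strategies, library, sequence_id):
    """Encode with every strategy, pick the best score, save the association."""
    best = max(strategies, key=lambda s: score(*encode(frames, s)))
    library[sequence_id] = best  # association saved in the encoding library
    return best
```

On later arrivals of the same sequence, a lookup in `library` would replace the loop, matching the "previously saved" path of claim 10.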
8,609 | 8,609 | 15,475,528 | 2,482 | Aspects of the present invention are directed towards a more natural way to interact with an intelligent personal assistant. An intelligent personal assistant comprises a camera that provides images of an area surrounding the assistant. The assistant monitors images provided by the camera to determine when a user is addressing the assistant. When voice input is received after determining that a user is addressing the assistant, the assistant understands that the voice input is intended for the assistant, and acts on the voice input to respond to the user. | 1. An intelligent personal assistant, comprising:
an audio transducer for receiving audio input from an area near the intelligent personal assistant; a memory for storing processor-executable instructions; a camera for providing images of an area around the intelligent personal assistant; a user output for providing audible responses to voice input submitted by a user; a network interface; and a processor coupled to the audio transducer, the memory, the camera, the network interface, and the user output, for executing the processor-executable instructions that cause the intelligent personal assistant to:
determine, by the processor via visual information provided by the camera, that the user is addressing the intelligent personal assistant;
after determining that the user is addressing the intelligent personal assistant, record, by the processor via the audio transducer, a first recording representative of a first audio input in the memory; and
provide, by the processor via the network interface, the first recording to a remote server for processing the first audio input after the processor determines that the user has addressed the intelligent personal assistant. 2. The intelligent personal assistant of claim 1, wherein the processor-executable instructions that cause the intelligent personal assistant to determine that the user is addressing the intelligent personal assistant comprise instructions that cause the intelligent personal assistant to:
determine, by the processor via the camera, that the user is gazing at the intelligent personal assistant. 3. The intelligent personal assistant of claim 2, wherein the processor-executable instructions that cause the processor to determine that the user is gazing at the intelligent personal assistant comprise instructions that cause the intelligent personal assistant to:
determine, by the processor via the camera, the presence of a first eye, a second eye and a mouth of the user. 4. The intelligent personal assistant of claim 1, further comprising:
a reflective, convex surface comprising an apex aligned with the camera for reflecting light from an area around the intelligent personal assistant to the camera; wherein the visual information is generated by the camera in response to receiving the reflected light from the convex surface. 5. The intelligent personal assistant of claim 4, wherein the processor-executable instructions further comprise instructions that cause the intelligent personal assistant to:
apply, by the processor, an inverse function related to a curvature of the convex surface to the visual information to eliminate visual distortion caused by curvature of the convex surface. 6. A method performed by an intelligent personal assistant for interacting with a user, comprising:
receiving, by a processor coupled to a digital camera, visual information of an area around the intelligent personal assistant from the digital camera; determining, by the processor from the visual information, that the user is addressing the intelligent personal assistant; after determining that the user is addressing the intelligent personal assistant, recording, by the processor, a first recording in the memory representative of a first audio input from an area near the intelligent personal assistant; and providing, by the processor via the network interface, the first recording to a remote server for processing the first audio input when the processor determines that the user has addressed the intelligent personal assistant. 7. The method of claim 6, wherein determining that the user is addressing the intelligent personal assistant comprises determining, by the processor via the camera, that the user is gazing at the intelligent personal assistant. 8. The method of claim 7, wherein determining that the user is gazing at the intelligent personal assistant comprises determining, by the processor via the camera, the presence of a first eye, a second eye and a mouth of the user. 9. The method of claim 6, wherein the camera receives light reflected from a reflective, convex surface, and the visual information is generated by the camera in response to receiving the reflected light from the convex surface. 10. The method of claim 9, further comprising:
applying, by the processor, an inverse function related to a curvature of the convex surface to the visual information to eliminate visual distortion caused by curvature of the convex surface. 11. An intelligent personal assistant, comprising:
an audio transducer for receiving audio input from an area near the intelligent personal assistant; a memory for storing processor-executable instructions and a wake word; a camera for providing digital images of an area around the intelligent personal assistant; a user output for providing audible responses to voice input from a user; a network interface; and a processor coupled to the audio transducer, the memory, the camera, the network interface, and the user output, for executing the processor-executable instructions that cause the intelligent personal assistant to:
receive, by the processor via the audio transducer, a first signal representative of a first voice input from the user;
determine, by the processor, that the first voice input comprises the wake word stored in the memory;
provide, by the processor via the network interface, at least a portion of the first signal to a remote server;
receive, by the processor via the network interface, a response from the remote server;
provide, by the processor via the user output, the response;
determine, by the processor via the digital images provided by the camera, that the user is addressing the intelligent personal assistant;
after determining that the user is addressing the intelligent personal assistant, receive, by the processor via the audio transducer, a second signal representative of a second voice input; and
provide, by the processor via the network interface, the second signal to the remote server for processing the second signal when the processor determines that the user has addressed the intelligent personal assistant within a predetermined time period from providing the response. 12. The intelligent personal assistant of claim 11, wherein the processor-executable instructions that cause the intelligent personal assistant to determine that the user is addressing the intelligent personal assistant comprise instructions that cause the intelligent personal assistant to:
determine, by the processor via the camera, that the user is gazing at the intelligent personal assistant. 13. The intelligent personal assistant of claim 12, wherein the processor-executable instructions that cause the processor to determine that the user is gazing at the intelligent personal assistant comprise instructions that cause the intelligent personal assistant to:
determine, by the processor via the camera, the presence of a first eye, a second eye and a mouth of the user. 14. The intelligent personal assistant of claim 11, further comprising:
a reflective, convex surface for reflecting light from an area around the intelligent personal assistant to the camera; wherein the digital images are generated by the camera in response to receiving the reflected light from the convex surface. 15. The intelligent personal assistant of claim 14, wherein the processor-executable instructions further comprise instructions that cause the intelligent personal assistant to:
apply, by the processor, an inverse function related to a curvature of the convex surface to the digital images to eliminate visual distortion caused by curvature of the convex surface. 16. A method performed by an intelligent personal assistant for interacting with a user, comprising:
receiving, by a processor coupled to an audio transducer, a first signal representative of a first voice input from the user; determining, by the processor, that the first signal comprises a wake word stored in the memory; providing, by the processor via a network interface, at least a portion of the first signal to a remote server; receiving, by the processor via the network interface, a response from the remote server; providing, by the processor to a user output, the response; receiving, by the processor via the audio transducer, a second signal representative of a second voice input from the user; determining, by the processor via images provided by the camera, that the user is addressing the intelligent personal assistant; after determining that the user is addressing the intelligent personal assistant, receiving, by the processor via the audio transducer, a second signal representative of a second voice input; and providing, by the processor via the network interface, the second signal to the remote server for processing the second signal when the processor determines that the user has addressed the intelligent personal assistant within a predetermined time period from receiving the first signal. 17. The method of claim 16, wherein determining that the user is addressing the intelligent personal assistant comprises determining, by the processor via the camera, that the user is gazing at the intelligent personal assistant. 18. The method of claim 17, wherein determining that the user is gazing at the intelligent personal assistant comprises determining, by the processor via the camera, the presence of a first eye, a second eye and a mouth of the user. 19. The method of claim 16, wherein the camera receives light reflected from a reflective, convex surface, and the images are generated by the camera in response to receiving the reflected light from the convex surface. 20. The method of claim 19, further comprising:
applying, by the processor, an inverse function related to a curvature of the convex surface to the images to eliminate visual distortion caused by curvature of the convex surface. | 2,400 |
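The gaze-gated capture flow of claims 1-3 and 6-8 above (record and forward audio only after the camera pipeline finds a first eye, a second eye and a mouth, which the claims treat as the user gazing at the assistant) might look like the following sketch. The feature labels, audio source and uploader are hypothetical stand-ins, not APIs from the patent.

```python
# Sketch: the assistant records and uploads audio only after the camera
# pipeline reports a first eye, a second eye and a mouth (treated here
# as "the user is gazing at the assistant"). All names are hypothetical.

REQUIRED = {"left_eye", "right_eye", "mouth"}

def is_addressing(detected_features):
    """Claims 3/8 heuristic: both eyes and a mouth must be present."""
    return REQUIRED <= set(detected_features)

def handle_frame(detected_features, record_audio, upload):
    """Gate the record-then-upload steps on the gaze determination."""
    if not is_addressing(detected_features):
        return None             # not addressed: do not record
    recording = record_audio()  # first recording of the first audio input
    upload(recording)           # provide recording to the remote server
    return recording
```

The wake-word path of claims 11 and 16 would simply be a second gate placed before the same record-then-upload steps.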
8,610 | 8,610 | 15,535,257 | 2,426 | The present disclosure presents an improved system and method for displaying thermographic characteristics in a broadcast. The method comprises utilizing a camera to record a broadcast, determining thermographic characteristics relative to the broadcast, and rendering graphics in a broadcast displaying said thermographic characteristics. In exemplary embodiments, thermographic characteristics are determined by sensors or lenses for one or more objects in a venue and thermographic characteristics are displayed in the broadcast. | 1. A method for displaying thermographic characteristics in a broadcast, comprising:
utilizing a camera to record a broadcast; determining thermographic characteristics relative to the broadcast; and rendering graphics in a broadcast displaying said thermographic characteristics. 2. A method in accordance with claim 1, wherein thermographic characteristics are sampled via at least one sampling point in a venue. 3. A method in accordance with claim 2, wherein thermographic characteristics are sampled at multiple sampling points and wherein thermographic characteristics are displayed in said broadcast. 4. A method in accordance with claim 3, wherein said thermographic characteristics are simulated relative to the broadcast. 5. A method in accordance with claim 1, wherein thermographic characteristics are determined from analyzation of said broadcast. 6. A method in accordance with claim 1, wherein thermographic characteristics are determined from thermographic or infrared camera equipment. 7. A method in accordance with claim 1, wherein thermographic characteristics are analyzed relative to environmental conditions. 8. A method in accordance with claim 7, wherein thermographic characteristics are analyzed relative to ground or atmospheric effects. 9. A method in accordance with claim 1, wherein thermographic characteristics of objects in the broadcast are displayed. 10. A method in accordance with claim 9, wherein said characteristics include one or more of thermographic characteristics of a ball, a vehicle and a player, with overlaying a graphic on said broadcast indicative of thermographic characteristics. 11. A method in accordance with claim 9, further comprising measuring thermographics, followed by overlaying a graphic on said broadcast indicative of thermographics. 12. A system for displaying thermographic characteristics and effects in a broadcast, comprising:
a camera configured to record a broadcast; a graphics rendering component configured to display thermographic characteristics relative to the broadcast; and an output component configured to output said rendered graphics in a broadcast displaying said thermographic characteristics. 13. A system in accordance with claim 11, wherein thermographic characteristics are sampled via at least one sampling component in a venue. 14. A system in accordance with claim 13, wherein thermographic characteristics are sampled at multiple sampling points via plural sampling components and wherein thermographic characteristics are displayed in said broadcast. 15. A system in accordance with claim 14, wherein said thermographic characteristics are simulated relative to the broadcast. 16. A system in accordance with claim 12, wherein thermographic characteristics are determined from a computer analyzation component configured to analyze said broadcast. 17. A system in accordance with claim 12, wherein thermographic characteristics are determined from thermographic or infrared camera equipment. 18. A system in accordance with claim 16, wherein thermographic characteristics are analyzed relative to environmental conditions. 19. A system in accordance with claim 18, wherein thermographic characteristics are analyzed relative to ground or atmospheric effects. 20. A system in accordance with claim 18, wherein thermographic characteristics of objects in the broadcast are displayed. 21. A system in accordance with claim 19, wherein said thermographic characteristics include one or more of characteristics of a ball, a vehicle and a player, with overlaying a graphic on said broadcast indicative of thermographic characteristics. 22. A system in accordance with claim 12, further comprising a thermographics measuring component, followed by overlaying a graphic on said broadcast indicative of thermographic characteristics. 
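A toy sketch of claims 1-3 above (sample thermographic characteristics at venue sampling points, then overlay a graphic indicative of those characteristics on the broadcast) under the simplifying assumption that a frame is a dict of pixel positions; the function name and frame model are invented for illustration.

```python
# Sketch: overlay sampled temperatures on a broadcast frame, modeling a
# frame as {(x, y): pixel} and a graphic as a formatted temperature
# string at the sampling point. Names and the frame model are invented.

def render_thermo_overlay(frame, samples):
    """samples maps sampling points (x, y) to temperatures in Celsius."""
    out = dict(frame)  # leave the recorded frame untouched
    for point, temp in samples.items():
        out[point] = f"{temp:.1f}°C"  # graphic indicative of thermographics
    return out
```

A real broadcast pipeline would rasterize the graphic into the video signal; the dict stands in for that step here.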
| The present disclosure presents an improved system and method for displaying thermographic characteristics in a broadcast. The method comprises utilizing a camera to record a broadcast, determining thermographic characteristics relative to the broadcast, and rendering graphics in a broadcast displaying said thermographic characteristics. In exemplary embodiments, thermographic characteristics are determined by sensors or lenses for one or more objects in a venue and thermographic characteristics are displayed in the broadcast.1. A method for displaying thermographic characteristics in a broadcast, comprising:
utilizing a camera to record a broadcast; determining thermographic characteristics relative to the broadcast; and rendering graphics in a broadcast displaying said thermographic characteristics. 2. A method in accordance with claim 1, wherein thermographic characteristics are sampled via at least one sampling point in a venue. 3. A method in accordance with claim 2, wherein thermographic characteristics are sampled at multiple sampling points and wherein thermographic characteristics are displayed in said broadcast. 4. A method in accordance with claim 3, wherein said thermographic characteristics are simulated relative to the broadcast. 5. A method in accordance with claim 1, wherein thermographic characteristics are determined from analyzation of said broadcast. 6. A method in accordance with claim 1, wherein thermographic characteristics are determined from thermographic or infrared camera equipment. 7. A method in accordance with claim 1, wherein thermographic characteristics are analyzed relative to environmental conditions. 8. A method in accordance with claim 7, wherein thermographic characteristics are analyzed relative to ground or atmospheric effects. 9. A method in accordance with claim 1, wherein thermographic characteristics of objects in the broadcast are displayed. 10. A method in accordance with claim 9, wherein said characteristics include one or more of thermographic characteristics of a ball, a vehicle and a player, with overlaying a graphic on said broadcast indicative of thermographic characteristics. 11. A method in accordance with claim 9, further comprising measuring thermographics, followed by overlaying a graphic on said broadcast indicative of thermographics. 12. A system for displaying thermographic characteristics and effects in a broadcast, comprising:
a camera configured to record a broadcast; a graphics rendering component configured to display thermographic characteristics relative to the broadcast; and an output component configured to output said rendered graphics in a broadcast displaying said thermographic characteristics. 13. A system in accordance with claim 11, wherein thermographic characteristics are sampled via at least one sampling component in a venue. 14. A system in accordance with claim 13, wherein thermographic characteristics are sampled at multiple sampling points via plural sampling components and wherein thermographic characteristics are displayed in said broadcast. 15. A system in accordance with claim 14, wherein said thermographic characteristics are simulated relative to the broadcast. 16. A system in accordance with claim 12, wherein thermographic characteristics are determined from a computer analyzation component configured to analyze said broadcast. 17. A system in accordance with claim 12, wherein thermographic characteristics are determined from thermographic or infrared camera equipment. 18. A system in accordance with claim 16, wherein thermographic characteristics are analyzed relative to environmental conditions. 19. A system in accordance with claim 18, wherein thermographic characteristics are analyzed relative to ground or atmospheric effects. 20. A system in accordance with claim 18, wherein thermographic characteristics of objects in the broadcast are displayed. 21. A system in accordance with claim 19, wherein said thermographic characteristics include one or more of characteristics of a ball, a vehicle and a player, with overlaying a graphic on said broadcast indicative of thermographic characteristics. 22. A system in accordance with claim 12, further comprising a thermographics measuring component, followed by overlaying a graphic on said broadcast indicative of thermographic characteristics. | 2,400 |
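The overlay step recited in the claims of the preceding record (sampling venue temperatures and rendering a graphic on the broadcast indicative of thermographic characteristics) can be sketched in a few lines. This is an illustrative assumption, not the patented implementation: the color mapping, the sample format, and the draw-command dictionary are all invented here.

```python
def temp_to_rgb(temp_c, t_min=0.0, t_max=100.0):
    """Map a temperature sample to a pseudo-color: blue at t_min, red at t_max."""
    x = max(0.0, min(1.0, (temp_c - t_min) / (t_max - t_min)))
    return (round(255 * x), 0, round(255 * (1 - x)))


def render_overlays(samples):
    """Turn (x, y, temp) venue sampling points into draw commands that a
    graphics-rendering component could composite onto the broadcast frame."""
    return [
        {"pos": (x, y), "color": temp_to_rgb(t), "label": f"{t:.0f}°C"}
        for (x, y, t) in samples
    ]
```

A real system would feed `samples` from thermographic or infrared camera equipment rather than a static list.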
8,611 | 8,611 | 15,821,920 | 2,456 | A mission-specific computer peripheral provides a portable linkable work platform, useful for establishing a collaborative electronic work group quickly, at low cost, and without professional computing expertise. The office infrastructure device (“OID”) includes data storage (for storage of system and user data files), a unique device identification code (for identification when the device is plugged into a host personal computer), and an index (for registering user data files available within the work group). When connected, user executable code within the device is accessed through the host personal computer to launch thereon a user-definable work space. The work space provides, among other office infrastructure functions, access to programming that enables sharing of personal user work and data files among the authorized member nodes of the work group. The sharing is facilitated by the index, preferably in combination with a complementary work group server integrated within the underlying OID network. | 1. An office infrastructure device, suited for connection to a host personal computer, comprising:
(a) user-accessible data storage for writing and reading work files; (b) system-support data storage distinct from said user-accessible data storage; (c) user-executable code stored in said system-support data storage that launches a user-definable work space on said host personal computer when the office infrastructure device is connected thereto, the launched user-definable work space providing access to said user-accessible data storage and to a private internet site hosting work files from said user-accessible data storage; and (d) a work file index maintained within said system-support data storage and used by the work space to track work files stored in said user-accessible data storage. 2. The office infrastructure device of claim 1, wherein said system-support data storage comprises solid-state non-volatile memory. 3. The office infrastructure device of claim 1, wherein the work-file index data tracks the name, size, and location of work files stored in said user-accessible data storage. 4. A collaborative electronic work group comprising two or more communicatively linked work group nodes, wherein each work group node comprises an office infrastructure device connected peripherally to a host personal computer, and wherein the office infrastructure device at each node comprises:
(i) user-accessible data storage for writing and reading work files, (ii) system-support data storage distinct from said user-accessible data storage, (iii) user-executable code stored in said system-support data storage that launches a user-definable work space on the connected host personal computer, the launched user-definable work space providing access to work files stored both locally in said user-accessible data storage and remotely in the user-accessible data storage at another node; and (iv) a work file index maintained within said system-support data storage and used by the work space to track said work files stored both locally in said user-accessible data storage and remotely in the user-accessible data storage at another node. 5. The collaborative electronic work group of claim 4, wherein the office infrastructure device at each node further comprises:
(v) a unique identification code used for authentication by said work space to effect said communicative linkage of said two or more work group nodes. 6. The collaborative electronic work group of claim 5, further comprising a work group server;
wherein the two or more work group nodes are communicatively linked directly or indirectly through said work group server. 7. The collaborative electronic work group of claim 6, wherein the work group server provides access to a private internet site for hosting work files shared by said two or more work group nodes. 8. A collaborative electronic work group comprising:
(a) a work group server; and (b) two or more work group nodes communicatively linked directly or indirectly through said work group server, wherein
(i) each work group node comprises an office infrastructure device connected peripherally to a host personal computer, the office infrastructure device at each node comprising user-accessible data storage, a unique device identification code, and user-executable code for effecting local acquisition of data stored in the user-accessible data storage of another of said work group nodes through said work-group server as a function of said unique device identification code; and
(ii) the work group server effecting access of said two or more work group nodes to a VPN server and a remote private internet site for hosting work files shared by two or more work group nodes. | A mission-specific computer peripheral provides a portable linkable work platform, useful for establishing a collaborative electronic work group quickly, at low cost, and without professional computing expertise. The office infrastructure device (“OID”) includes data storage (for storage of system and user data files), a unique device identification code (for identification when the device is plugged into a host personal computer), and an index (for registering user data files available within the work group). When connected, user executable code within the device is accessed through the host personal computer to launch thereon a user-definable work space. The work space provides, among other office infrastructure functions, access to programming that enables sharing of personal user work and data files among the authorized member nodes of the work group. The sharing is facilitated by the index, preferably in combination with a complementary work group server integrated within the underlying OID network.1. An office infrastructure device, suited for connection to a host personal computer, comprising:
(a) user-accessible data storage for writing and reading work files; (b) system-support data storage distinct from said user-accessible data storage; (c) user-executable code stored in said system-support data storage that launches a user-definable work space on said host personal computer when the office infrastructure device is connected thereto, the launched user-definable work space providing access to said user-accessible data storage and to a private internet site hosting work files from said user-accessible data storage; and (d) a work file index maintained within said system-support data storage and used by the work space to track work files stored in said user-accessible data storage. 2. The office infrastructure device of claim 1, wherein said system-support data storage comprises solid-state non-volatile memory. 3. The office infrastructure device of claim 1, wherein the work-file index data tracks the name, size, and location of work files stored in said user-accessible data storage. 4. A collaborative electronic work group comprising two or more communicatively linked work group nodes, wherein each work group node comprises an office infrastructure device connected peripherally to a host personal computer, and wherein the office infrastructure device at each node comprises:
(i) user-accessible data storage for writing and reading work files, (ii) system-support data storage distinct from said user-accessible data storage, (iii) user-executable code stored in said system-support data storage that launches a user-definable work space on the connected host personal computer, the launched user-definable work space providing access to work files stored both locally in said user-accessible data storage and remotely in the user-accessible data storage at another node; and (iv) a work file index maintained within said system-support data storage and used by the work space to track said work files stored both locally in said user-accessible data storage and remotely in the user-accessible data storage at another node. 5. The collaborative electronic work group of claim 4, wherein the office infrastructure device at each node further comprises:
(v) a unique identification code used for authentication by said work space to effect said communicative linkage of said two or more work group nodes. 6. The collaborative electronic work group of claim 5, further comprising a work group server;
wherein the two or more work group nodes are communicatively linked directly or indirectly through said work group server. 7. The collaborative electronic work group of claim 6, wherein the work group server provides access to a private internet site for hosting work files shared by said two or more work group nodes. 8. A collaborative electronic work group comprising:
(a) a work group server; and (b) two or more work group nodes communicatively linked directly or indirectly through said work group server, wherein
(i) each work group node comprises an office infrastructure device connected peripherally to a host personal computer, the office infrastructure device at each node comprising user-accessible data storage, a unique device identification code, and user-executable code for effecting local acquisition of data stored in the user-accessible data storage of another of said work group nodes through said work-group server as a function of said unique device identification code; and
(ii) the work group server effecting access of said two or more work group nodes to a VPN server and a remote private internet site for hosting work files shared by two or more work group nodes. | 2,400 |
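Claim 3 of the record above describes a work-file index that tracks the name, size, and location of files in user-accessible data storage, which amounts to a small lookup table. The sketch below is a minimal illustration under assumed field and method names; the patent does not specify this structure.

```python
class WorkFileIndex:
    """Tiny sketch of the claim-3 work-file index: tracks the name, size,
    and location (node id) of each work file. All identifiers are invented."""

    def __init__(self):
        self._entries = {}  # file name -> {"size": bytes, "node": node id}

    def register(self, name, size, node_id):
        """Record a work file stored in a node's user-accessible data storage."""
        self._entries[name] = {"size": size, "node": node_id}

    def locate(self, name):
        """Return the node holding `name`, or None if it is not indexed."""
        entry = self._entries.get(name)
        return entry["node"] if entry else None
```

In the claimed work group, each node's work space would consult such an index to find files held locally or at another node.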
8,612 | 8,612 | 15,057,490 | 2,438 | Disclosed are various examples for enrolling a client device and synchronizing user attributes for the client device across multiple directory services. A search request for user attributes can be sent to a first directory service with an identifier for a user account. The first directory service can query for the identifier and send back user attributes. If a global identifier is included in the attributes, another search request for user attributes can be sent to a second directory service with the global identifier. The second directory service can query for the global identifier and send back user attributes. | 1. A method, comprising:
searching a first directory service for a plurality of first user attributes based at least in part on an identifier; receiving the plurality of first user attributes from the first directory service; determining whether the plurality of first user attributes includes a global identifier; and in response to determining that the plurality of first user attributes includes the global identifier:
searching a second directory service for a plurality of second user attributes based at least in part on the global identifier; and
receiving the plurality of second user attributes from the second directory service. 2. The method of claim 1, further comprising receiving an authentication confirmation comprising the identifier from a client device associated with a user account. 3. The method of claim 2, further comprising updating a plurality of user properties corresponding to the user account based at least in part on at least one of: the plurality of first user attributes or the plurality of second user attributes. 4. The method of claim 3, further comprising scheduling a periodic query of the first directory service and the second directory service for changes to the user account, wherein updating the plurality of user properties corresponding to the user account occurs in response to the periodic query. 5. The method of claim 1, wherein the searching of the first directory service for the plurality of first user attributes is performed in response to detecting that a user account is omitted from a list of managed users. 6. The method of claim 1, wherein the global identifier is an immutable identifier. 7. The method of claim 1, further comprising
detecting a conflict between the plurality of first user attributes and the plurality of second user attributes; and resolving the conflict based at least in part on a last modified timestamp associated with a conflicting set of user attributes. 8. A non-transitory computer-readable medium embodying a program that, when executed by at least one computing device, causes the at least one computing device to at least:
search a first directory service for a plurality of first user attributes based at least in part on an identifier; receive the plurality of first user attributes from the first directory service; determine whether the plurality of first user attributes includes a global identifier; and in response to a determination that the plurality of first user attributes includes the global identifier:
searching a second directory service for a plurality of second user attributes based at least in part on the global identifier; and
receiving the plurality of second user attributes from the second directory service. 9. The non-transitory computer-readable medium of claim 8, wherein the program further causes the at least one computing device to at least receive an authentication confirmation comprising the identifier from a client device associated with a user account. 10. The non-transitory computer-readable medium of claim 9, wherein the program further causes the at least one computing device to at least update a plurality of user properties corresponding to the user account based at least in part on at least one of: the plurality of first user attributes or the plurality of second user attributes. 11. The non-transitory computer-readable medium of claim 10, wherein the program further causes the at least one computing device to at least schedule a periodic query of the first directory service and the second directory service for changes to the user account, wherein updating the plurality of user properties corresponding to the user account occurs in response to the periodic query. 12. The non-transitory computer-readable medium of claim 8, wherein the search of the first directory service for the plurality of first user attributes is in response to detecting that a user account is omitted from a list of managed users. 13. The non-transitory computer-readable medium of claim 8, wherein the global identifier is an immutable identifier. 14. The non-transitory computer-readable medium of claim 8, wherein the program further causes the at least one computing device to at least:
detect a conflict between the plurality of first user attributes and the plurality of second user attributes; and resolve the conflict based at least in part on a last modified timestamp associated with a conflicting set of user attributes. 15. A system, comprising:
a data store; at least one computing device in communication with the data store, the at least one computing device being configured to at least:
search a first directory service for a plurality of first user attributes based at least in part on an identifier;
receive the plurality of first user attributes from the first directory service;
determine whether the plurality of first user attributes includes a global identifier; and
in response to a determination that the plurality of first user attributes includes the global identifier:
searching a second directory service for a plurality of second user attributes based at least in part on the global identifier; and
receiving the plurality of second user attributes from the second directory service. 16. The system of claim 15, wherein the at least one computing device is further configured to at least receive an authentication confirmation comprising the identifier from a client device associated with a user account. 17. The system of claim 16, wherein the at least one computing device is further configured to at least update a plurality of user properties corresponding to the user account based at least in part on at least one of: the plurality of first user attributes or the plurality of second user attributes. 18. The system of claim 17, wherein the at least one computing device is further configured to at least schedule a periodic query of the first directory service and the second directory service for changes to the user account, wherein updating the plurality of user properties corresponding to the user account occurs in response to the periodic query. 19. The system of claim 15, wherein the search of the first directory service for the plurality of first user attributes is in response to detecting that a user account is omitted from a list of managed users. 20. The system of claim 15, wherein the global identifier is an immutable identifier. | Disclosed are various examples for enrolling a client device and synchronizing user attributes for the client device across multiple directory services. A search request for user attributes can be sent to a first directory service with an identifier for a user account. The first directory service can query for the identifier and send back user attributes. If a global identifier is included in the attributes, another search request for user attributes can be sent to a second directory service with the global identifier. The second directory service can query for the global identifier and send back user attributes.1. A method, comprising:
searching a first directory service for a plurality of first user attributes based at least in part on an identifier; receiving the plurality of first user attributes from the first directory service; determining whether the plurality of first user attributes includes a global identifier; and in response to determining that the plurality of first user attributes includes the global identifier:
searching a second directory service for a plurality of second user attributes based at least in part on the global identifier; and
receiving the plurality of second user attributes from the second directory service. 2. The method of claim 1, further comprising receiving an authentication confirmation comprising the identifier from a client device associated with a user account. 3. The method of claim 2, further comprising updating a plurality of user properties corresponding to the user account based at least in part on at least one of: the plurality of first user attributes or the plurality of second user attributes. 4. The method of claim 3, further comprising scheduling a periodic query of the first directory service and the second directory service for changes to the user account, wherein updating the plurality of user properties corresponding to the user account occurs in response to the periodic query. 5. The method of claim 1, wherein the searching of the first directory service for the plurality of first user attributes is performed in response to detecting that a user account is omitted from a list of managed users. 6. The method of claim 1, wherein the global identifier is an immutable identifier. 7. The method of claim 1, further comprising
detecting a conflict between the plurality of first user attributes and the plurality of second user attributes; and resolving the conflict based at least in part on a last modified timestamp associated with a conflicting set of user attributes. 8. A non-transitory computer-readable medium embodying a program that, when executed by at least one computing device, causes the at least one computing device to at least:
search a first directory service for a plurality of first user attributes based at least in part on an identifier; receive the plurality of first user attributes from the first directory service; determine whether the plurality of first user attributes includes a global identifier; and in response to a determination that the plurality of first user attributes includes the global identifier:
searching a second directory service for a plurality of second user attributes based at least in part on the global identifier; and
receiving the plurality of second user attributes from the second directory service. 9. The non-transitory computer-readable medium of claim 8, wherein the program further causes the at least one computing device to at least receive an authentication confirmation comprising the identifier from a client device associated with a user account. 10. The non-transitory computer-readable medium of claim 9, wherein the program further causes the at least one computing device to at least update a plurality of user properties corresponding to the user account based at least in part on at least one of: the plurality of first user attributes or the plurality of second user attributes. 11. The non-transitory computer-readable medium of claim 10, wherein the program further causes the at least one computing device to at least schedule a periodic query of the first directory service and the second directory service for changes to the user account, wherein updating the plurality of user properties corresponding to the user account occurs in response to the periodic query. 12. The non-transitory computer-readable medium of claim 8, wherein the search of the first directory service for the plurality of first user attributes is in response to detecting that a user account is omitted from a list of managed users. 13. The non-transitory computer-readable medium of claim 8, wherein the global identifier is an immutable identifier. 14. The non-transitory computer-readable medium of claim 8, wherein the program further causes the at least one computing device to at least:
detect a conflict between the plurality of first user attributes and the plurality of second user attributes; and resolve the conflict based at least in part on a last modified timestamp associated with a conflicting set of user attributes. 15. A system, comprising:
a data store; at least one computing device in communication with the data store, the at least one computing device being configured to at least:
search a first directory service for a plurality of first user attributes based at least in part on an identifier;
receive the plurality of first user attributes from the first directory service;
determine whether the plurality of first user attributes includes a global identifier; and
in response to a determination that the plurality of first user attributes includes the global identifier:
searching a second directory service for a plurality of second user attributes based at least in part on the global identifier; and
receiving the plurality of second user attributes from the second directory service. 16. The system of claim 15, wherein the at least one computing device is further configured to at least receive an authentication confirmation comprising the identifier from a client device associated with a user account. 17. The system of claim 16, wherein the at least one computing device is further configured to at least update a plurality of user properties corresponding to the user account based at least in part on at least one of: the plurality of first user attributes or the plurality of second user attributes. 18. The system of claim 17, wherein the at least one computing device is further configured to at least schedule a periodic query of the first directory service and the second directory service for changes to the user account, wherein updating the plurality of user properties corresponding to the user account occurs in response to the periodic query. 19. The system of claim 15, wherein the search of the first directory service for the plurality of first user attributes is in response to detecting that a user account is omitted from a list of managed users. 20. The system of claim 15, wherein the global identifier is an immutable identifier. | 2,400
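The two-stage lookup recited in claim 1 of the record above — query the first directory service by identifier, then, if the returned attributes carry a global identifier, query the second directory service with it — can be sketched as follows. Directory services are modeled as plain dicts for illustration, and letting the second service win on conflict is an assumption (claim 7 instead resolves conflicts by last-modified timestamp).

```python
def gather_user_attributes(first_dir, second_dir, identifier):
    """Two-stage attribute lookup in the spirit of claim 1.

    first_dir / second_dir stand in for directory services; a real
    deployment would issue LDAP-style search requests instead of dict gets.
    """
    attrs = dict(first_dir.get(identifier, {}))
    global_id = attrs.get("globalIdentifier")
    if global_id is not None:
        # Merge the second directory's attributes over the first set.
        # This simple last-writer-wins policy is a stand-in for the
        # timestamp-based conflict resolution of claim 7.
        attrs.update(second_dir.get(global_id, {}))
    return attrs
```

If the first lookup returns no global identifier, the second directory is never contacted, matching the conditional structure of the claim.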
8,613 | 8,613 | 15,396,599 | 2,421 | A recently-viewed stack of session objects is formed, corresponding to assets recently played on consumer premises equipment in a video content network. The session objects include linear session objects corresponding to linear ones of the assets and non-linear session objects corresponding to non-linear ones of the assets. A user selection to resume playing one of the non-linear assets is obtained (e.g., from a channel surfer application). Responsive to the user selection, a switch is made from a currently viewed program to the aforementioned one of the non-linear assets. | 1. A method comprising the steps of:
forming a recently-viewed stack of session objects corresponding to assets recently played on consumer premises equipment in a video content network, wherein said session objects comprise linear session objects corresponding to linear ones of said assets and non-linear session objects corresponding to non-linear ones of said assets; obtaining a user selection to resume playing one of said non-linear ones of said assets; and responsive to said user selection, switching from a currently viewed program to said one of said non-linear ones of said assets. 2. The method of claim 1, wherein said obtaining step comprises obtaining said user selection from a channel surfer application. 3. The method of claim 2, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; in said obtaining step, said user selection comprises a user selection to resume playing one of said video-on-demand ones of said assets; and said switching step comprises switching from said currently viewed program to said one of said video-on-demand ones of said assets. 4. The method of claim 2, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; in said obtaining step, said user selection comprises a user selection to resume playing one of said digital video recorder ones of said assets; and said switching step comprises switching from said currently viewed program to said one of said digital video recorder ones of said assets. 5. The method of claim 2, wherein, in said switching step, said currently viewed program comprises a linear program. 6. The method of claim 2, wherein, in said switching step, said currently viewed program comprises a non-linear program. 7. The method of claim 2, further comprising persisting said recently-viewed stack of session objects to non-volatile memory at an appropriate time. 8. The method of claim 7, wherein said appropriate time comprises update of said stack. 9. The method of claim 8, further comprising retrieving said recently-viewed stack of session objects from said non-volatile memory upon subsequent system re-start. 10. The method of claim 8, further comprising:
upon said user switching to a new program, obtaining a request to push a session object corresponding to said new program to said recently-viewed stack of session objects; and adding said session object corresponding to said new program to said recently-viewed stack of session objects, responsive to obtaining said request. 11. The method of claim 10, wherein said recently-viewed stack of session objects has a maximum number of entries, further comprising deleting an oldest one of said entries, responsive to obtaining said request. 12. The method of claim 10, further comprising:
responsive to obtaining said request, checking whether a duplicate of said session object corresponding to said new program already exists in said recently-viewed stack of session objects; and if said duplicate of said session object corresponding to said new program already exists in said recently-viewed stack of session objects, deleting said duplicate of said session object corresponding to said new program. 13. The method of claim 12, wherein said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets, further comprising:
assigning each of said session objects to a RecentlyViewedItemBase class; assigning each of said linear session objects to a RecentlyViewedLinearItem class; assigning each of said video-on-demand session objects to a RecentlyViewedVodItem class; and assigning each of said digital video recorder session objects to a RecentlyViewedDvrItem class; wherein a manner of said checking for said duplicate is based on membership in one of said RecentlyViewedLinearItem class, said RecentlyViewedVodItem class, and said RecentlyViewedDvrItem class. 14. An apparatus comprising:
a memory; at least one processor coupled to said memory; and a non-transitory persistent storage medium which contains instructions which, when loaded into said memory, configure said at least one processor to be operative to:
form a recently-viewed stack of session objects corresponding to assets recently played on consumer premises equipment in a video content network, wherein said session objects comprise linear session objects corresponding to linear ones of said assets and non-linear session objects corresponding to non-linear ones of said assets;
obtain a user selection to resume playing one of said non-linear ones of said assets; and
responsive to said user selection, switch from a currently viewed program to said one of said non-linear ones of said assets. 15. The apparatus of claim 14, wherein said user selection is obtained from a channel surfer application. 16. The apparatus of claim 15, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; said user selection comprises a user selection to resume playing one of said video-on-demand ones of said assets; and said switching comprises switching from said currently viewed program to said one of said video-on-demand ones of said assets. 17. The apparatus of claim 15, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; said user selection comprises a user selection to resume playing one of said digital video recorder ones of said assets; and said switching comprises switching from said currently viewed program to said one of said digital video recorder ones of said assets. 18. A non-transitory computer readable medium comprising computer executable instructions which when executed by a computer cause the computer to perform the method of:
forming a recently-viewed stack of session objects corresponding to assets recently played on consumer premises equipment in a video content network, wherein said session objects comprise linear session objects corresponding to linear ones of said assets and non-linear session objects corresponding to non-linear ones of said assets; obtaining a user selection to resume playing one of said non-linear ones of said assets; and responsive to said user selection, switching from a currently viewed program to said one of said non-linear ones of said assets. 19. The non-transitory computer readable medium of claim 18, wherein said obtaining step of said method comprises obtaining said user selection from a channel surfer application. 20. The non-transitory computer readable medium of claim 19, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; in said obtaining step of said method, said user selection comprises a user selection to resume playing one of said video-on-demand ones of said assets; and said switching step of said method comprises switching from said currently viewed program to said one of said video-on-demand ones of said assets. 21. The non-transitory computer readable medium of claim 19, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; in said obtaining step of said method, said user selection comprises a user selection to resume playing one of said digital video recorder ones of said assets; and said switching step of said method comprises switching from said currently viewed program to said one of said digital video recorder ones of said assets. | A recently-viewed stack of session objects is formed, corresponding to assets recently played on consumer premises equipment in a video content network. The session objects include linear session objects corresponding to linear ones of the assets and non-linear session objects corresponding to non-linear ones of the assets. A user selection to resume playing one of the non-linear assets is obtained (e.g., from a channel surfer application). Responsive to the user selection, a switch is made from a currently viewed program to the aforementioned one of the non-linear assets.1. A method comprising the steps of:
forming a recently-viewed stack of session objects corresponding to assets recently played on consumer premises equipment in a video content network, wherein said session objects comprise linear session objects corresponding to linear ones of said assets and non-linear session objects corresponding to non-linear ones of said assets; obtaining a user selection to resume playing one of said non-linear ones of said assets; and responsive to said user selection, switching from a currently viewed program to said one of said non-linear ones of said assets. 2. The method of claim 1, wherein said obtaining step comprises obtaining said user selection from a channel surfer application. 3. The method of claim 2, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; in said obtaining step, said user selection comprises a user selection to resume playing one of said video-on-demand ones of said assets; and said switching step comprises switching from said currently viewed program to said one of said video-on-demand ones of said assets. 4. The method of claim 2, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; in said obtaining step, said user selection comprises a user selection to resume playing one of said digital video recorder ones of said assets; and said switching step comprises switching from said currently viewed program to said one of said digital video recorder ones of said assets. 5. The method of claim 2, wherein, in said switching step, said currently viewed program comprises a linear program. 6. The method of claim 2, wherein, in said switching step, said currently viewed program comprises a non-linear program. 7. The method of claim 2, further comprising persisting said recently-viewed stack of session objects to non-volatile memory at an appropriate time. 8. The method of claim 7, wherein said appropriate time comprises update of said stack. 9. The method of claim 8, further comprising retrieving said recently-viewed stack of session objects from said non-volatile memory upon subsequent system re-start. 10. The method of claim 8, further comprising:
upon said user switching to a new program, obtaining a request to push a session object corresponding to said new program to said recently-viewed stack of session objects; and adding said session object corresponding to said new program to said recently-viewed stack of session objects, responsive to obtaining said request. 11. The method of claim 10, wherein said recently-viewed stack of session objects has a maximum number of entries, further comprising deleting an oldest one of said entries, responsive to obtaining said request. 12. The method of claim 10, further comprising:
responsive to obtaining said request, checking whether a duplicate of said session object corresponding to said new program already exists in said recently-viewed stack of session objects; and if said duplicate of said session object corresponding to said new program already exists in said recently-viewed stack of session objects, deleting said duplicate of said session object corresponding to said new program. 13. The method of claim 12, wherein said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets, further comprising:
assigning each of said session objects to a RecentlyViewedItemBase class; assigning each of said linear session objects to a RecentlyViewedLinearItem class; assigning each of said video-on-demand session objects to a RecentlyViewedVodItem class; and assigning each of said digital video recorder session objects to a RecentlyViewedDvrItem class; wherein a manner of said checking for said duplicate is based on membership in one of said RecentlyViewedLinearItem class, said RecentlyViewedVodItem class, and said RecentlyViewedDvrItem class. 14. An apparatus comprising:
a memory; at least one processor coupled to said memory; and a non-transitory persistent storage medium which contains instructions which, when loaded into said memory, configure said at least one processor to be operative to:
form a recently-viewed stack of session objects corresponding to assets recently played on consumer premises equipment in a video content network, wherein said session objects comprise linear session objects corresponding to linear ones of said assets and non-linear session objects corresponding to non-linear ones of said assets;
obtain a user selection to resume playing one of said non-linear ones of said assets; and
responsive to said user selection, switch from a currently viewed program to said one of said non-linear ones of said assets. 15. The apparatus of claim 14, wherein said user selection is obtained from a channel surfer application. 16. The apparatus of claim 15, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; said user selection comprises a user selection to resume playing one of said video-on-demand ones of said assets; and said switching comprises switching from said currently viewed program to said one of said video-on-demand ones of said assets. 17. The apparatus of claim 15, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; said user selection comprises a user selection to resume playing one of said digital video recorder ones of said assets; and said switching comprises switching from said currently viewed program to said one of said digital video recorder ones of said assets. 18. A non-transitory computer readable medium comprising computer executable instructions which when executed by a computer cause the computer to perform the method of:
forming a recently-viewed stack of session objects corresponding to assets recently played on consumer premises equipment in a video content network, wherein said session objects comprise linear session objects corresponding to linear ones of said assets and non-linear session objects corresponding to non-linear ones of said assets; obtaining a user selection to resume playing one of said non-linear ones of said assets; and responsive to said user selection, switching from a currently viewed program to said one of said non-linear ones of said assets. 19. The non-transitory computer readable medium of claim 18, wherein said obtaining step of said method comprises obtaining said user selection from a channel surfer application. 20. The non-transitory computer readable medium of claim 19, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; in said obtaining step of said method, said user selection comprises a user selection to resume playing one of said video-on-demand ones of said assets; and said switching step of said method comprises switching from said currently viewed program to said one of said video-on-demand ones of said assets. 21. The non-transitory computer readable medium of claim 19, wherein:
said non-linear session objects comprise video-on-demand session objects corresponding to video-on-demand ones of said assets and digital video recorder objects corresponding to digital video recorder ones of said assets; in said obtaining step of said method, said user selection comprises a user selection to resume playing one of said digital video recorder ones of said assets; and said switching step of said method comprises switching from said currently viewed program to said one of said digital video recorder ones of said assets. | 2,400 |
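Claims 10–13 of the recently-viewed-stack method above describe concrete, implementable behavior: push a session object on every program switch, enforce a maximum number of entries by deleting the oldest, remove a duplicate keyed on the concrete session-object class and asset, and persist on every update. A minimal Python sketch of that behavior follows; the four class names come from claim 13, while the `asset_id` field, the `max_entries` default, and the counter standing in for non-volatile persistence are illustrative assumptions, not the patent's implementation.

```python
class RecentlyViewedItemBase:
    """Base class for all session objects (claim 13)."""
    def __init__(self, asset_id):
        self.asset_id = asset_id

class RecentlyViewedLinearItem(RecentlyViewedItemBase):
    """Linear (broadcast) session object."""

class RecentlyViewedVodItem(RecentlyViewedItemBase):
    """Video-on-demand session object."""

class RecentlyViewedDvrItem(RecentlyViewedItemBase):
    """Digital video recorder session object."""

class RecentlyViewedStack:
    """Recently-viewed stack of session objects, newest entry first."""
    def __init__(self, max_entries=10):
        self.max_entries = max_entries
        self.items = []
        self.persist_count = 0  # stands in for writes to non-volatile memory

    def push(self, item):
        # Claim 12: if a duplicate already exists (same concrete class and
        # same asset), delete it before adding the new session object.
        self.items = [i for i in self.items
                      if not (type(i) is type(item) and i.asset_id == item.asset_id)]
        self.items.insert(0, item)
        # Claim 11: enforce the maximum number of entries by deleting the oldest.
        if len(self.items) > self.max_entries:
            self.items.pop()
        self._persist()

    def _persist(self):
        # Claims 7-8: persist on every stack update (stubbed as a counter here).
        self.persist_count += 1
```

Note that the duplicate check compares concrete classes (`type(i) is type(item)`), matching claim 13's statement that the manner of checking is based on membership in the linear, VOD, or DVR item class.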
8,614 | 8,614 | 14,424,661 | 2,447 | For delivering an audio-visual content to a client device, an interconnecting device interconnecting a first network to a second network, the client device being connected to the second network, an equipment adapted to provide the audio-visual content being connected to the first network, said equipment performs: receiving, from the client device, a first request for receiving the audio-visual content; transmitting a redirecting message to the client device, said redirecting message redirecting the client device toward an agent implemented in the interconnecting device. Furthermore, said agent performs: receiving, from the client device, a second request for receiving the audio-visual content; acting as a relay between said equipment and the client device. | 1. A method for delivering an audio-visual content to a client device, an interconnecting device interconnecting a first network to a second network, the client device being connected to the second network, an equipment adapted to provide the audio-visual content being connected to the first network, wherein said equipment performs:
receiving, from the client device, a first request for receiving the audio-visual content; transmitting a redirecting message to the client device, said redirecting message redirecting the client device toward an agent implemented in the interconnecting device;
and in that said agent performs:
receiving, from the client device, a second request for receiving the audio-visual content; and
acting as a relay between said equipment and the client device, in response to said second request. 2. The method according to claim 1, wherein said first and second requests are requests for receiving the audio-visual content in the form of a unicast stream, in that said equipment is adapted to provide the audio-visual content in live streaming, in that said equipment transmits the redirecting message to the client device when the audio-visual content is made available by said equipment in the form of at least one multicast stream, and wherein, when acting as a relay, said agent performs:
joining said at least one multicast stream; and converting data received in the form of the at least one multicast stream into data in the form of the unicast stream. 3. The method according to claim 2, wherein:
the redirecting message includes parameters notifying at least one multicast address and at least one associated port; the request for receiving the audio-visual content in the form of the unicast stream comprises said parameters notifying the at least one multicast address and the at least one associated port;
and wherein said agent joins at least one multicast stream corresponding to the at least one multicast address and the at least one associated port. 4. The method according to claim 3, wherein:
the redirecting message includes parameters notifying a quantity of layers made available for the audio-visual content in the form of at least one multicast stream; the request for receiving the audio-visual content in the form of the unicast stream includes said parameters notifying the quantity of layers;
and wherein said agent determines at least one multicast address and/or at least one associated port, as a function of said quantity of layers. 5. The method according to claim 4, wherein:
the redirecting message includes parameters notifying one multicast address and one associated port and notifying said quantity of layers; the request for receiving the audio-visual content in the form of the unicast stream includes said parameters notifying said one multicast address and said one associated port and notifying said quantity of layers;
and wherein said agent determines one multicast address per layer and one associated port, as a function of said quantity of layers and of said one multicast address and said one associated port included in said request. 6. The method according to claim 4, wherein:
the redirecting message includes parameters notifying one multicast address and one associated port and notifying said quantity of layers; the request for receiving the audio-visual content in the form of the unicast stream includes said parameters notifying said one multicast address and said one associated port and notifying said quantity of layers;
and wherein said agent determines one multicast address for all layers and one associated port per layer, as a function of said quantity of layers and of said one multicast address and said one associated port included in said request. 7. The method according to claim 2, wherein, plural layers being made available for the audio-visual content in the form of at least one multicast stream, said equipment being adapted to provide the audio-visual content using HyperText Transfer Protocol Live Streaming, and wherein:
the redirecting message includes parameters representative of a Uniform Resource Locator related to a playlist for the audio-visual content; the request for receiving the audio-visual content in the form of the unicast stream includes said parameters representative of the Uniform Resource Locator;
and wherein said agent performs:
requesting said playlist on the basis of said Uniform Resource Locator; receiving said playlist;
performing a parsing operation on said playlist for determining one playlist associated with each layer;
receiving one layer playlist from each joined multicast stream;
transmitting said received playlist(s) to the client device;
receiving a request from the client device, indicating a playlist associated with one layer or indicating a file of a playlist associated with one layer; and
selecting a multicast stream as a function of said indicated playlist associated with one layer or of said indicated file. 8. The method according to claim 2, wherein, plural layers being made available for the audio-visual content in the form of at least one multicast stream, the agent having joined one multicast stream corresponding to one layer, said agent performs:
detecting a need for the client device to switch from said one layer to another layer; joining a multicast stream corresponding to said another layer; and leaving the multicast stream corresponding to said one layer. 9. The method according to claim 2, wherein, plural layers being made available for the audio-visual content in the form of at least one multicast stream, the agent having joined at least two multicast streams corresponding respectively to one layer and another layer, said agent performs:
detecting a need for the client device to switch between layers; and selecting data from one multicast stream among said at least two multicast streams, as a function of said detected need. 10. The method according to claim 1, wherein said first and second requests are requests for receiving the audio-visual content in the form of a unicast stream, in that said equipment transmits the redirecting message to the client device when the audio-visual content is made available by plural sources, and wherein, when acting as a relay, said agent performs:
requesting said audio-visual content from said plural sources; and recreating a unicast stream from data received from said plural sources. 11. The method according to claim 10, wherein:
the redirecting message includes parameters notifying from which sources the audio-visual content is made available; and the request for receiving the audio-visual content in the form of the unicast stream comprises said parameters. 12. The method according to claim 10, wherein said interconnecting device is a home gateway and said plural sources are servers of said equipment and/or other home gateways. 13. The method according to claim 1, wherein the redirecting message indicates a temporary relocation of the audio-visual content. 14. A system for delivering an audio-visual content to a client device, said system comprising an equipment and an interconnecting device, said interconnecting device aiming at interconnecting a first network to a second network, the client device being connected to the second network, said equipment being adapted to provide the audio-visual content and aiming at being connected to the first network, wherein said equipment comprises:
a receiver for receiving a first request for receiving the audio-visual content; a transmitter for transmitting a redirecting message, said redirecting message aiming at redirecting the client device toward an agent implemented in the interconnecting device;
and wherein said agent comprises:
a receiver for receiving a second request for receiving the audio-visual content; and
wherein said agent is configured for acting as a relay between said equipment and the client device, in response to said second request. | For delivering an audio-visual content to a client device, an interconnecting device interconnecting a first network to a second network, the client device being connected to the second network, an equipment adapted to provide the audio-visual content being connected to the first network, said equipment performs: receiving, from the client device, a first request for receiving the audio-visual content; transmitting a redirecting message to the client device, said redirecting message redirecting the client device toward an agent implemented in the interconnecting device. Furthermore, said agent performs: receiving, from the client device, a second request for receiving the audio-visual content; acting as a relay between said equipment and the client device.1. A method for delivering an audio-visual content to a client device, an interconnecting device interconnecting a first network to a second network, the client device being connected to the second network, an equipment adapted to provide the audio-visual content being connected to the first network, wherein said equipment performs:
receiving, from the client device, a first request for receiving the audio-visual content; transmitting a redirecting message to the client device, said redirecting message redirecting the client device toward an agent implemented in the interconnecting device;
and in that said agent performs:
receiving, from the client device, a second request for receiving the audio-visual content; and
acting as a relay between said equipment and the client device, in response to said second request. 2. The method according to claim 1, wherein said first and second requests are requests for receiving the audio-visual content in the form of a unicast stream, in that said equipment is adapted to provide the audio-visual content in live streaming, in that said equipment transmits the redirecting message to the client device when the audio-visual content is made available by said equipment in the form of at least one multicast stream, and wherein, when acting as a relay, said agent performs:
joining said at least one multicast stream; and converting data received in the form of the at least one multicast stream into data in the form of the unicast stream. 3. The method according to claim 2, wherein:
the redirecting message includes parameters notifying at least one multicast address and at least one associated port; the request for receiving the audio-visual content in the form of the unicast stream comprises said parameters notifying the at least one multicast address and the at least one associated port;
and wherein said agent joins at least one multicast stream corresponding to the at least one multicast address and the at least one associated port. 4. The method according to claim 3, wherein:
the redirecting message includes parameters notifying a quantity of layers made available for the audio-visual content in the form of at least one multicast stream; the request for receiving the audio-visual content in the form of the unicast stream includes said parameters notifying the quantity of layers;
and wherein said agent determines at least one multicast address and/or at least one associated port, as a function of said quantity of layers. 5. The method according to claim 4, wherein:
the redirecting message includes parameters notifying one multicast address and one associated port and notifying said quantity of layers; the request for receiving the audio-visual content in the form of the unicast stream includes said parameters notifying said one multicast address and said one associated port and notifying said quantity of layers;
and wherein said agent determines one multicast address per layer and one associated port, as a function of said quantity of layers and of said one multicast address and said one associated port included in said request. 6. The method according to claim 4, wherein:
the redirecting message includes parameters notifying one multicast address and one associated port and notifying said quantity of layers; the request for receiving the audio-visual content in the form of the unicast stream includes said parameters notifying said one multicast address and said one associated port and notifying said quantity of layers;
and wherein said agent determines one multicast address for all layers and one associated port per layer, as a function of said quantity of layers and of said one multicast address and said one associated port included in said request. 7. The method according to claim 2, wherein, plural layers being made available for the audio-visual content in the form of at least one multicast stream, said equipment being adapted to provide the audio-visual content using HyperText Transfer Protocol Live Streaming, and wherein:
the redirecting message includes parameters representative of a Uniform Resource Locator related to a playlist for the audio-visual content; the request for receiving the audio-visual content in the form of the unicast stream includes said parameters representative of the Uniform Resource Locator;
and wherein said agent performs:
requesting said playlist on the basis of said Uniform Resource Locator; receiving said playlist;
performing a parsing operation on said playlist for determining one playlist associated with each layer;
receiving one layer playlist from each joined multicast stream;
transmitting said received playlist(s) to the client device;
receiving a request from the client device, indicating a playlist associated with one layer or indicating a file of a playlist associated with one layer; and
selecting a multicast stream as a function of said indicated playlist associated with one layer or of said indicated file. 8. The method according to claim 2, wherein, plural layers being made available for the audio-visual content in the form of at least one multicast stream, the agent having joined one multicast stream corresponding to one layer, said agent performs:
detecting a need for the client device to switch from said one layer to another layer; joining a multicast stream corresponding to said another layer; and leaving the multicast stream corresponding to said one layer. 9. The method according to claim 2, wherein, plural layers being made available for the audio-visual content in the form of at least one multicast stream, the agent having joined at least two multicast streams corresponding respectively to one layer and another layer, said agent performs:
detecting a need for the client device to switch between layers; and selecting data from one multicast stream among said at least two multicast streams, as a function of said detected need. 10. The method according to claim 1, wherein said first and second requests are requests for receiving the audio-visual content in the form of a unicast stream, in that said equipment transmits the redirecting message to the client device when the audio-visual content is made available by plural sources, and wherein, when acting as a relay, said agent performs:
requesting said audio-visual content from said plural sources; and recreating a unicast stream from data received from said plural sources. 11. The method according to claim 10, wherein:
the redirecting message includes parameters notifying from which sources the audio-visual content is made available; and the request for receiving the audio-visual content in the form of the unicast stream comprises said parameters. 12. The method according to claim 10, wherein said interconnecting device is a home gateway and said plural sources are servers of said equipment and/or other home gateways. 13. The method according to claim 1, wherein the redirecting message indicates a temporary relocation of the audio-visual content. 14. A system for delivering an audio-visual content to a client device, said system comprising an equipment and an interconnecting device, said interconnecting device aiming at interconnecting a first network to a second network, the client device being connected to the second network, said equipment being adapted to provide the audio-visual content and aiming at being connected to the first network, wherein said equipment comprises:
a receiver for receiving a first request for receiving the audio-visual content; a transmitter for transmitting a redirecting message, said redirecting message aiming at redirecting the client device toward an agent implemented in the interconnecting device;
and wherein said agent comprises:
a receiver for receiving a second request for receiving the audio-visual content; and
wherein said agent is configured for acting as a relay between said equipment and the client device, in response to said second request. | 2,400 |
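Claims 3–6 and 13 of the content-delivery method just recited can be illustrated with a short Python sketch: the equipment answers the client's first unicast request with a temporary redirect toward the gateway agent, notifying the multicast address, the associated port, and the quantity of available layers as parameters, and the agent derives per-layer multicast endpoints from those values (shown here in the claim 6 variant: one multicast address for all layers, one associated port per layer). The HTTP 307 status, the URL parameter names, and the base-port-plus-index scheme are illustrative assumptions; the claims only require that the redirecting message notify these values.

```python
from urllib.parse import urlencode

def build_redirect(agent_host, content_id, mcast_addr, mcast_port, n_layers):
    # Redirecting message of claims 4-6: notifies one multicast address,
    # one associated port, and the quantity of available layers.
    params = urlencode({"content": content_id, "maddr": mcast_addr,
                        "mport": mcast_port, "layers": n_layers})
    # Claim 13: the message indicates a *temporary* relocation of the
    # content, modeled here as an HTTP 307 redirect toward the agent.
    return 307, f"http://{agent_host}/relay?{params}"

def derive_layer_endpoints(maddr, mport, n_layers):
    # Claim 6 variant: one multicast address for all layers and one
    # associated port per layer, derived from the notified base values.
    return [(maddr, mport + i) for i in range(n_layers)]
```

The client's second request simply echoes the redirect's parameters back to the agent, which then joins the derived multicast streams and converts the received data into a unicast stream (claim 2).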
8,615 | 8,615 | 14,633,265 | 2,461 | A wireless communication method including: transmitting, from a wireless base station, a signal in a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel, and receiving, by a wireless terminal, second resource information that indicates the resource of the first control channel and that is transmitted from the wireless base station before a transmission of the second control channel from the wireless base station to the wireless terminal. | 1. A wireless communication method comprising:
transmitting, from a wireless base station, a signal in a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel; and receiving, by a wireless terminal, second resource information that indicates the resource of the first control channel and that is transmitted from the wireless base station before a transmission of the second control channel from the wireless base station to the wireless terminal. 2. The wireless communication method according to claim 1, wherein
the wireless terminal receives the second resource information, in broadcast information transmitted from the wireless base station. 3. The wireless communication method according to claim 1, wherein
the wireless terminal is in an idle mode at the time of receiving the second resource information. 4. The wireless communication method according to claim 1, wherein
the first control channel is Enhanced-Physical Downlink Control Channel (E-PDCCH) and the second control channel is Physical Downlink Control Channel (PDCCH) in the LTE standard. 5. A wireless communication system comprising:
a wireless base station configured to transmit a signal in a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel; and a wireless terminal configured to receive second resource information that indicates the resource of the first control channel and that is transmitted from the wireless base station before a transmission of the second control channel from the wireless base station to the wireless terminal. 6. The wireless communication system according to claim 5, wherein
the wireless terminal receives the second resource information, in broadcast information transmitted from the wireless base station. 7. The wireless communication system according to claim 5, wherein
the wireless terminal is in an idle mode at the time of receiving the second resource information. 8. The wireless communication system according to claim 5, wherein
the first control channel is Enhanced-Physical Downlink Control Channel (E-PDCCH) and the second control channel is Physical Downlink Control Channel (PDCCH) in the LTE standard. 9. A wireless terminal comprising:
a memory; and a processor coupled to the memory and configured to receive a signal from a wireless base station using a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel, and to receive second resource information that indicates the resource of the first control channel and that is transmitted from the wireless base station before a transmission of the second control channel from the wireless base station to the wireless terminal. 10. The wireless terminal according to claim 9, wherein
the wireless terminal receives the second resource information, in broadcast information transmitted from the wireless base station. 11. The wireless terminal according to claim 9, wherein
the wireless terminal is in an idle mode at the time of receiving the second resource information. 12. The wireless terminal according to claim 9, wherein
the first control channel is Enhanced-Physical Downlink Control Channel (E-PDCCH) and the second control channel is Physical Downlink Control Channel (PDCCH) in the LTE standard. 13. A wireless base station comprising:
a memory; and a processor coupled to the memory and configured to transmit a signal in a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel, and to transmit second resource information that indicates the resource of the first control channel before a transmission of the second control channel from the wireless base station to the wireless terminal. 14. The wireless base station according to claim 13, wherein
the wireless terminal receives the second resource information, in broadcast information transmitted from the wireless base station. 15. The wireless base station according to claim 13, wherein
the wireless terminal is in an idle mode at the time of receiving the second resource information. 16. The wireless base station according to claim 13, wherein
the first control channel is Enhanced-Physical Downlink Control Channel (E-PDCCH) and the second control channel is Physical Downlink Control Channel (PDCCH) in the LTE standard. 17. A wireless communication system comprising:
a wireless base station configured to transmit a wireless frame including a data region and a control region, the data region in which a data signal is arranged, the control region in which a control signal indicating a resource of the data signal is arranged, the data region including an extended control region in which the control signal is arranged; and a wireless terminal configured to receive a specified signal indicating a location of the extended control region in the wireless frame, the specified signal being transmitted from the wireless base station before a transmission of the control signal arranged in the control region from the wireless base station to the wireless terminal. | A wireless communication method including: transmitting, from a base station, a signal in a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel, and receiving, by a wireless terminal, second resource information that indicates the resource of the first control channel and that is transmitted from the wireless base station before a transmission of the second control channel from the wireless base station to the wireless terminal.1. A wireless communication method comprising:
transmitting, from a wireless base station, a signal in a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel; and receiving, by a wireless terminal, second resource information that indicates the resource of the first control channel and that is transmitted from the wireless base station before a transmission of the second control channel from the wireless base station to the wireless terminal. 2. The wireless communication method according to claim 1, wherein
the wireless terminal receives the second resource information, in broadcast information transmitted from the wireless base station. 3. The wireless communication method according to claim 1, wherein
the wireless terminal is in an idle mode at the time of receiving the second resource information. 4. The wireless communication method according to claim 1, wherein
the first control channel is Enhanced-Physical Downlink Control Channel (E-PDCCH) and the second control channel is Physical Downlink Control Channel (PDCCH) in the LTE standard. 5. A wireless communication system comprising:
a wireless base station configured to transmit a signal in a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel; and a wireless terminal configured to receive second resource information that indicates the resource of the first control channel and that is transmitted from the wireless base station before a transmission of the second control channel from the wireless base station to the wireless terminal. 6. The wireless communication system according to claim 5, wherein
the wireless terminal receives the second resource information, in broadcast information transmitted from the wireless base station. 7. The wireless communication system according to claim 5, wherein
the wireless terminal is in an idle mode at the time of receiving the second resource information. 8. The wireless communication system according to claim 5, wherein
the first control channel is Enhanced-Physical Downlink Control Channel (E-PDCCH) and the second control channel is Physical Downlink Control Channel (PDCCH) in the LTE standard. 9. A wireless terminal comprising:
a memory; and a processor coupled to the memory and configured to receive a signal from a wireless base station using a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel, and to receive second resource information that indicates the resource of the first control channel and that is transmitted from the wireless base station before a transmission of the second control channel from the wireless base station to the wireless terminal. 10. The wireless terminal according to claim 9, wherein
the wireless terminal receives the second resource information, in broadcast information transmitted from the wireless base station. 11. The wireless terminal according to claim 9, wherein
the wireless terminal is in an idle mode at the time of receiving the second resource information. 12. The wireless terminal according to claim 9, wherein
the first control channel is Enhanced-Physical Downlink Control Channel (E-PDCCH) and the second control channel is Physical Downlink Control Channel (PDCCH) in the LTE standard. 13. A wireless base station comprising:
a memory; and a processor coupled to the memory and configured to transmit a signal in a second control channel that is used for transmitting information indicating a resource of first resource information indicating a resource of a first control channel, and to transmit second resource information that indicates the resource of the first control channel before a transmission of the second control channel from the wireless base station to the wireless terminal. 14. The wireless base station according to claim 13, wherein
the wireless terminal receives the second resource information, in broadcast information transmitted from the wireless base station. 15. The wireless base station according to claim 13, wherein
the wireless terminal is in an idle mode at the time of receiving the second resource information. 16. The wireless base station according to claim 13, wherein
the first control channel is Enhanced-Physical Downlink Control Channel (E-PDCCH) and the second control channel is Physical Downlink Control Channel (PDCCH) in the LTE standard. 17. A wireless communication system comprising:
a wireless base station configured to transmit a wireless frame including a data region and a control region, the data region in which a data signal is arranged, the control region in which a control signal indicating a resource of the data signal is arranged, the data region including an extended control region in which the control signal is arranged; and a wireless terminal configured to receive a specified signal indicating a location of the extended control region in the wireless frame, the specified signal being transmitted from the wireless base station before a transmission of the control signal arranged in the control region from the wireless base station to the wireless terminal. | 2,400 |
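The signalling order in claims 1-4 above, where the terminal learns the E-PDCCH resource from broadcast information before any PDCCH transmission, can be illustrated with a toy model. This is only a sketch; the field names `sib_extra` and `epdcch_prbs` are invented for illustration and are not 3GPP information-element names:

```python
def broadcast_second_resource_info(epdcch_prbs):
    """Wireless base station side: place the second resource information
    (the location of the first control channel, the E-PDCCH of claim 4)
    into broadcast information, sent before any transmission on the
    second control channel (PDCCH), per claims 1 and 2."""
    return {"sib_extra": {"epdcch_prbs": list(epdcch_prbs)}}

def terminal_acquire(broadcast_info):
    """Wireless terminal side: an idle-mode terminal (claim 3) reads the
    broadcast and learns which resource blocks carry the first control
    channel, so it can monitor the E-PDCCH without first decoding the
    PDCCH."""
    return broadcast_info["sib_extra"]["epdcch_prbs"]

# End-to-end: the base station advertises PRBs 4-7, the terminal recovers them.
prbs = terminal_acquire(broadcast_second_resource_info(range(4, 8)))
```

The point of the ordering is visible in the data flow: `terminal_acquire` only consumes the broadcast, never a PDCCH message, matching the "before a transmission of the second control channel" limitation.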
8,616 | 8,616 | 15,510,692 | 2,426 | Various features described herein may be embodied in various apparatuses. An apparatus may recommend content to a viewer. The apparatus may receive a selection from the viewer of a condition to associate with the content. The condition may be included in a list of conditions displayed for the viewer. The condition may be a period of time selected by the viewer. The condition may be a weather condition selected by the viewer. The condition may be a location selected by the viewer. When the condition does not exist, the apparatus may refrain from providing the content as a recommendation for the viewer. Afterwards, the apparatus may add the content to a queue associated with the content. When the condition does exist, the apparatus may provide the content as a recommendation for the viewer. Various methods and computer-readable media may also provide various embodiments of such features. | 1. A method for recommending content to a viewer, the method comprising:
determining a condition to associate with the content; refraining from providing the content as a recommendation for the viewer when the condition does not exist; and providing the content as a recommendation for the viewer when the condition exists, wherein the recommendation is a queue associated with the condition for the content. 2. The method of claim 1, further comprising:
adding the content to the queue after refraining from providing the content as a recommendation for the viewer. 3. (canceled) 4. The method of claim 1, wherein the condition is included in a list of conditions displayed to the viewer. 5. The method of claim 1, wherein:
the condition comprises a period of time selected by the viewer; and the condition exists when a current time is during the selected period of time. 6. The method of claim 1, wherein:
the condition comprises a weather condition selected by the viewer; and the condition exists when a current weather condition near the viewer matches the selected weather condition. 7. The method of claim 1, wherein:
the condition comprises a location selected by the viewer; and the condition exists when a current location of the viewer matches the selected location. 8. An apparatus for recommending content to a viewer, the apparatus comprising:
a memory; and at least one processor coupled to the memory and configured to:
determine a condition to associate with content;
refrain from providing the content as a recommendation for the viewer when the condition does not exist; and
provide the content as a recommendation for the viewer when the condition exists, wherein the recommendation is a queue associated with the condition for the content. 9. The apparatus of claim 8, wherein the at least one processor is further configured to:
add the content to the queue after refraining from providing the content as a recommendation for the viewer. 10. (canceled) 11. The apparatus of claim 8, wherein the condition is included in a list of conditions displayed to the viewer. 12. The apparatus of claim 8, wherein:
the condition comprises a period of time selected by the viewer; and the condition exists when a current time is during the selected period of time. 13. The apparatus of claim 8, wherein:
the condition comprises a weather condition selected by the viewer; and the condition exists when a current weather condition near the viewer matches the selected weather condition. 14. The apparatus of claim 8, wherein:
the condition comprises a location selected by the viewer; and the condition exists when a current location of the viewer matches the selected location. 15. A non-transitory computer-readable medium comprising computer-executable instructions executable by a processor to:
determine a condition to associate with content; refrain from providing the content as a recommendation for the viewer when the condition does not exist; and provide the content as a recommendation for the viewer when the condition exists, wherein the recommendation is a queue associated with the condition for the content. 16. The non-transitory computer-readable medium of claim 15, wherein the non-transitory computer-readable medium further comprises computer-executable instructions executable to:
add the content to the queue after refraining from providing the content as a recommendation for the viewer. 17. (canceled) 18. The non-transitory computer-readable medium of claim 15, wherein the condition is included in a list of conditions displayed to the viewer. 19. The non-transitory computer-readable medium of claim 15, wherein:
the condition comprises a period of time selected by the viewer; and the condition exists when a current time is during the selected period of time. 20. The non-transitory computer-readable medium of claim 15, wherein:
the condition comprises a weather condition selected by the viewer; and the condition exists when a current weather condition near the viewer matches the selected weather condition. 21. The non-transitory computer-readable medium of claim 15, wherein:
the condition comprises a location selected by the viewer; and the condition exists when a current location of the viewer matches the selected location. 22. An apparatus for recommending content to a viewer, the apparatus comprising:
means for determining a condition to associate with content; means for refraining from providing the content as a recommendation for the viewer when the condition does not exist; and means for providing the content as a recommendation for the viewer when the condition exists, wherein the recommendation is a queue associated with the condition for the content. 23. The apparatus of claim 22, further comprising:
means for adding the content to the queue after refraining from providing the content as a recommendation for the viewer. 24. (canceled) 25. The apparatus of claim 22, wherein the condition is included in a list of conditions displayed to the viewer. 26. The apparatus of claim 22, wherein:
the condition comprises a period of time selected by the viewer; and the condition exists when a current time is during the selected period of time. 27. The apparatus of claim 22, wherein:
the condition comprises a weather condition selected by the viewer; and the condition exists when a current weather condition near the viewer matches the selected weather condition. 28. The apparatus of claim 22, wherein:
the condition comprises a location selected by the viewer; and the condition exists when a current location of the viewer matches the selected location. | Various features described herein may be embodied in various apparatuses. An apparatus may recommend content to a viewer. The apparatus may receive a selection from the viewer of a condition to associate with the content. The condition may be included in a list of conditions displayed for the viewer. The condition may be a period of time selected by the viewer. The condition may be a weather condition selected by the viewer. The condition may be a location selected by the viewer. When the condition does not exist, the apparatus may refrain from providing the content as a recommendation for the viewer. Afterwards, the apparatus may add the content to a queue associated with the content. When the condition does exist, the apparatus may provide the content as a recommendation for the viewer. Various methods and computer-readable media may also provide various embodiments of such features.1. A method for recommending content to a viewer, the method comprising:
determining a condition to associate with the content; refraining from providing the content as a recommendation for the viewer when the condition does not exist; and providing the content as a recommendation for the viewer when the condition exists, wherein the recommendation is a queue associated with the condition for the content. 2. The method of claim 1, further comprising:
adding the content to the queue after refraining from providing the content as a recommendation for the viewer. 3. (canceled) 4. The method of claim 1, wherein the condition is included in a list of conditions displayed to the viewer. 5. The method of claim 1, wherein:
the condition comprises a period of time selected by the viewer; and the condition exists when a current time is during the selected period of time. 6. The method of claim 1, wherein:
the condition comprises a weather condition selected by the viewer; and the condition exists when a current weather condition near the viewer matches the selected weather condition. 7. The method of claim 1, wherein:
the condition comprises a location selected by the viewer; and the condition exists when a current location of the viewer matches the selected location. 8. An apparatus for recommending content to a viewer, the apparatus comprising:
a memory; and at least one processor coupled to the memory and configured to:
determine a condition to associate with content;
refrain from providing the content as a recommendation for the viewer when the condition does not exist; and
provide the content as a recommendation for the viewer when the condition exists, wherein the recommendation is a queue associated with the condition for the content. 9. The apparatus of claim 8, wherein the at least one processor is further configured to:
add the content to the queue after refraining from providing the content as a recommendation for the viewer. 10. (canceled) 11. The apparatus of claim 8, wherein the condition is included in a list of conditions displayed to the viewer. 12. The apparatus of claim 8, wherein:
the condition comprises a period of time selected by the viewer; and the condition exists when a current time is during the selected period of time. 13. The apparatus of claim 8, wherein:
the condition comprises a weather condition selected by the viewer; and the condition exists when a current weather condition near the viewer matches the selected weather condition. 14. The apparatus of claim 8, wherein:
the condition comprises a location selected by the viewer; and the condition exists when a current location of the viewer matches the selected location. 15. A non-transitory computer-readable medium comprising computer-executable instructions executable by a processor to:
determine a condition to associate with content; refrain from providing the content as a recommendation for the viewer when the condition does not exist; and provide the content as a recommendation for the viewer when the condition exists, wherein the recommendation is a queue associated with the condition for the content. 16. The non-transitory computer-readable medium of claim 15, wherein the non-transitory computer-readable medium further comprises computer-executable instructions executable to:
add the content to the queue after refraining from providing the content as a recommendation for the viewer. 17. (canceled) 18. The non-transitory computer-readable medium of claim 15, wherein the condition is included in a list of conditions displayed to the viewer. 19. The non-transitory computer-readable medium of claim 15, wherein:
the condition comprises a period of time selected by the viewer; and the condition exists when a current time is during the selected period of time. 20. The non-transitory computer-readable medium of claim 15, wherein:
the condition comprises a weather condition selected by the viewer; and the condition exists when a current weather condition near the viewer matches the selected weather condition. 21. The non-transitory computer-readable medium of claim 15, wherein:
the condition comprises a location selected by the viewer; and the condition exists when a current location of the viewer matches the selected location. 22. An apparatus for recommending content to a viewer, the apparatus comprising:
means for determining a condition to associate with content; means for refraining from providing the content as a recommendation for the viewer when the condition does not exist; and means for providing the content as a recommendation for the viewer when the condition exists, wherein the recommendation is a queue associated with the condition for the content. 23. The apparatus of claim 22, further comprising:
means for adding the content to the queue after refraining from providing the content as a recommendation for the viewer. 24. (canceled) 25. The apparatus of claim 22, wherein the condition is included in a list of conditions displayed to the viewer. 26. The apparatus of claim 22, wherein:
the condition comprises a period of time selected by the viewer; and the condition exists when a current time is during the selected period of time. 27. The apparatus of claim 22, wherein:
the condition comprises a weather condition selected by the viewer; and the condition exists when a current weather condition near the viewer matches the selected weather condition. 28. The apparatus of claim 22, wherein:
the condition comprises a location selected by the viewer; and the condition exists when a current location of the viewer matches the selected location. | 2,400 |
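The condition-gated flow common to claims 1, 8, 15 and 22 above reduces to one decision function: check whether the viewer-selected condition exists, then either recommend or queue the content. A sketch under assumed encodings (the dict keys `kind`, `start`, `end`, `value` are hypothetical, chosen only for this illustration):

```python
from datetime import time

def recommend(content, condition, *, now=None, weather=None, location=None):
    """Return ('recommend', content) when the condition exists, otherwise
    ('queued', content): the content is withheld and added to the queue
    associated with the condition (claims 1-2)."""
    kind = condition["kind"]
    if kind == "time":        # claim 5: current time inside selected period
        exists = condition["start"] <= now <= condition["end"]
    elif kind == "weather":   # claim 6: current weather matches selection
        exists = weather == condition["value"]
    elif kind == "location":  # claim 7: current location matches selection
        exists = location == condition["value"]
    else:
        exists = False
    return ("recommend", content) if exists else ("queued", content)

# Viewer picks "evenings" from the displayed list of conditions (claim 4).
evening = {"kind": "time", "start": time(18, 0), "end": time(23, 0)}
```

With this shape, the method, apparatus, computer-readable-medium, and means-plus-function claim families all describe the same function invoked from different hosts.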
8,617 | 8,617 | 14,843,772 | 2,482 | A system is provided for determining the position of an offshore structure in a fixed reference frame by observations from a moving reference frame. The system comprises a plurality of markers located at defined positions on the structure and an imaging device, arranged to view the markers and generate image data representative of the relative positions of the markers in the moving reference frame. The imaging device may be mounted aboard a vessel. A local positioning system is arranged to determine a position of the imaging device in the fixed reference frame and generate position data. A processing unit is arranged to receive the image data from the imaging device and the position data from the local positioning system and compute the position of the structure based on the defined positions of the markers. | 1. A system for determining the position of an offshore structure in a fixed reference frame by observations from a moving reference frame, comprising
a plurality of markers located at defined positions on the structure; an imaging device, arranged to view the markers and generate image data representative of the relative positions of the markers in the moving reference frame; a local positioning system, arranged to determine a position of the imaging device in the fixed reference frame and generate position data; and a processing unit, arranged to receive the image data from the imaging device and the position data from the local positioning system and compute the position of the structure based on the defined positions of the markers. 2. The system according to claim 1, wherein at least one of the markers is removably affixed to the structure for the purpose of the position determination. 3. The system according to claim 1, wherein at least four markers are affixed to the structure for the purpose of the position determination. 4. The system according to claim 1, wherein at least one of the markers is a unique marker and the system is arranged to automatically identify the unique marker. 5. The system according to claim 1, further comprising a display, operatively connected to the processing unit and the processing unit is arranged to display a representation of the computed position of the structure on the display. 6. The system according to claim 5, wherein the imaging device is located at a first location and the display is located at a second location and the representation of the position of the structure on the display is a representation as viewed from the second location. 7. The system according to claim 1, wherein the imaging device is mounted onboard a vessel or an unmanned vehicle. 8. The system according to claim 7, further comprising a datum point fixed with respect to the vessel and located within a field of view of the imaging device. 9. 
A method of determining the position of an offshore structure in a fixed frame of reference by observations from a moving frame of reference, the structure comprising a plurality of markers positioned on the structure, the method comprising:
determining the positions of the plurality of markers with respect to a reference frame of the structure; viewing the structure with an imaging device from a moving reference frame to generate image data representative of the relative positions of the markers in the moving reference frame; determining a position of the imaging device in the fixed reference frame and generating position data; and processing the image data and the position data to compute the position of the structure based on the predetermined positions of the markers. 10. The method of claim 9, further comprising affixing a plurality of markers to the structure and surveying the structure to determine the positions of the markers in the reference frame of the structure prior to viewing the structure from the moving reference frame. 11. The method of claim 9, wherein determining a position of the imaging device takes place with a local positioning system which tracks the momentary position of the imaging device. 12. The method according to claim 9, further comprising removing at least one of the markers from the structure on completion of the position determination. 13. The method according to claim 9, further comprising
displaying a representation of the computed position of the structure on a display, and overlaying the representation of the computed position onto a representation of a desired position of the structure. 14. The method according to claim 9, wherein the imaging device is mounted onboard a vessel having a crane and the method comprises supporting the structure by the crane. 15. The method according to claim 9, further comprising moving the structure on the basis of the computed position of the structure in order to bring it toward a desired position of the structure. 16. The method according to claim 9, further comprising fixing the structure at a position that corresponds to a desired position of the structure. 17. A computer readable medium storing instructions to perform the method according to claim 9, when installed on a computer or processing device. 18. The computer readable medium according to claim 17, comprising:
an image acquisition software module for acquiring time-stamped image data; a position software module for acquiring time-stamped position data; a marker identification module for identifying individual markers within the image; a marker tracking module for actively tracking an identified marker; an image reconstruction module for transforming the image from the moving frame into the fixed frame of reference; and a GUI module that presents a display of the structure in the fixed frame of reference. 19. The computer readable medium according to claim 18, wherein the GUI module presents the display of the structure overlaid onto a desired position of the structure. 20. The computer readable medium according to claim 18, further comprising a model importation module to import a computer representation of the jacket; and a structure recognition module which detects structure within the image data based on the computer representation imported by the model importation module. | A system is provided for determining the position of an offshore structure in a fixed reference frame by observations from a moving reference frame. The system comprises a plurality of markers located at defined positions on the structure and an imaging device, arranged to view the markers and generate image data representative of the relative positions of the markers in the moving reference frame. The imaging device may be mounted aboard a vessel. A local positioning system is arranged to determine a position of the imaging device in the fixed reference frame and generate position data. A processing unit is arranged to receive the image data from the imaging device and the position data from the local positioning system and compute the position of the structure based on the defined positions of the markers.1. A system for determining the position of an offshore structure in a fixed reference frame by observations from a moving reference frame, comprising
a plurality of markers located at defined positions on the structure; an imaging device, arranged to view the markers and generate image data representative of the relative positions of the markers in the moving reference frame; a local positioning system, arranged to determine a position of the imaging device in the fixed reference frame and generate position data; and a processing unit, arranged to receive the image data from the imaging device and the position data from the local positioning system and compute the position of the structure based on the defined positions of the markers. 2. The system according to claim 1, wherein at least one of the markers is removably affixed to the structure for the purpose of the position determination. 3. The system according to claim 1, wherein at least four markers are affixed to the structure for the purpose of the position determination. 4. The system according to claim 1, wherein at least one of the markers is a unique marker and the system is arranged to automatically identify the unique marker. 5. The system according to claim 1, further comprising a display, operatively connected to the processing unit and the processing unit is arranged to display a representation of the computed position of the structure on the display. 6. The system according to claim 5, wherein the imaging device is located at a first location and the display is located at a second location and the representation of the position of the structure on the display is a representation as viewed from the second location. 7. The system according to claim 1, wherein the imaging device is mounted onboard a vessel or an unmanned vehicle. 8. The system according to claim 7, further comprising a datum point fixed with respect to the vessel and located within a field of view of the imaging device. 9. 
A method of determining the position of an offshore structure in a fixed frame of reference by observations from a moving frame of reference, the structure comprising a plurality of markers positioned on the structure, the method comprising:
determining the positions of the plurality of markers with respect to a reference frame of the structure; viewing the structure with an imaging device from a moving reference frame to generate image data representative of the relative positions of the markers in the moving reference frame; determining a position of the imaging device in the fixed reference frame and generating position data; and processing the image data and the position data to compute the position of the structure based on the predetermined positions of the markers. 10. The method of claim 9, further comprising affixing a plurality of markers to the structure and surveying the structure to determine the positions of the markers in the reference frame of the structure prior to viewing the structure from the moving reference frame. 11. The method of claim 9, wherein determining a position of the imaging device takes place with a local positioning system which tracks the momentary position of the imaging device. 12. The method according to claim 9, further comprising removing at least one of the markers from the structure on completion of the position determination. 13. The method according to claim 9, further comprising
displaying a representation of the computed position of the structure on a display, and overlaying the representation of the computed position onto a representation of a desired position of the structure. 14. The method according to claim 9, wherein the imaging device is mounted onboard a vessel having a crane and the method comprises supporting the structure by the crane. 15. The method according to claim 9, further comprising moving the structure on the basis of the computed position of the structure in order to approach it to a desired position of the structure. 16. The method according to claim 9, further comprising fixing the structure at a position that corresponds to a desired position of the structure. 17. A computer readable medium storing instructions to perform the method according to claim 9, when installed on a computer or processing device. 18. The computer readable medium according to claim 17, comprising:
an image acquisition software module for acquiring time-stamped image data; a position software module for acquiring time-stamped position data; a marker identification module for identifying individual markers within the image; a marker tracking module for actively tracking an identified marker; an image reconstruction module for transforming the image from the moving frame into the fixed frame of reference; and a GUI module that presents a display of the structure in the fixed frame of reference. 19. The computer readable medium according to claim 18, wherein the GUI module presents the display of the structure overlaid onto a desired position of the structure. 20. The computer readable medium according to claim 18, further comprising a model importation module to import a computer representation of the jacket; and a structure recognition module which detects structure within the image data based on the computer representation imported by the model importation module. | 2,400 |
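The position-determination method recited in claims 9-13 above (survey the markers in the structure's own frame, image them from a moving frame, and combine the observations with the imaging device's pose from the local positioning system) is, at its core, a rigid-transform estimation. A minimal sketch of that computation, not taken from the patent — the function names, the 2-D simplification, and the use of a Kabsch/Procrustes fit are illustrative assumptions:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Kabsch/Procrustes fit: rotation R and translation t minimising
    ||dst - (R @ src.T).T - t|| over corresponding point sets."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def structure_pose_in_fixed_frame(markers_structure, markers_camera, cam_R, cam_t):
    """markers_structure: marker coordinates surveyed in the structure's own frame;
    markers_camera: the same markers as observed in the moving (camera) frame;
    cam_R, cam_t: camera pose in the fixed frame (local positioning system)."""
    # Transform observations from the moving frame into the fixed frame,
    # then fit the structure-to-fixed-frame rigid transform.
    markers_fixed = (cam_R @ markers_camera.T).T + cam_t
    return estimate_rigid_transform(markers_structure, markers_fixed)
```

With at least three non-collinear markers (the claims suggest at least four), R and t fully determine the structure's position and heading in the fixed frame, which is what the processing unit of claim 1 computes.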
8,618 | 8,618 | 13,374,526 | 2,454 | Methods, apparatuses, computer program products, devices and systems are described that carry out accepting at least one request for personal information from a party to a transaction; evaluating the transaction; selecting a persona at least partly based on an evaluation of the transaction, wherein the persona is linked to a unique identifier that is at least partly based on a user's device-identifier data and the user's network-participation data; and presenting the persona in response to the request for personal information. | 1-34. (canceled) 35. A computer-implemented system comprising:
a persona selection agent configured to accept at least one request for personal information from a party to a transaction, wherein the persona selection agent is also configured to evaluate the transaction and select a persona at least partly based on an evaluation of the transaction, wherein the persona is linked to a unique identifier that is at least partly based on a user's device-identifier data and the user's network-participation data. 36. The system of claim 35 wherein the persona selection agent further comprises at least one of a transaction evaluation module, a transaction value analysis module, or a web page parser module. 37. The system of claim 35 wherein the persona selection agent is configured to accept a plurality of fields on an online web page as the at least one request for personal information. 38. The system of claim 35 wherein the persona selection agent is configured to evaluate a monetary value for the transaction. 39. The system of claim 35 wherein the persona selection agent further comprises a party history evaluation module. 40. The system of claim 39 wherein the party history evaluation module is configured to evaluate a transaction history about the party to the transaction. 41. The system of claim 35 wherein the persona selection agent further comprises at least one of a cost adjustment module, a condition-setting module, a unique identifier database, or a persona database. 42. The system of claim 35 wherein the persona selection agent is configured to select a persona at least partly based on an evaluation of the transaction, wherein the persona is linked to a unique identifier that is at least partly based on at least one of a user's UDID, MAC address, SIM data, IP address, or IMEI as the device-identifier data; and the user's network-participation data. 43. 
The system of claim 35 wherein the persona selection agent is configured to select a persona at least partly based on an evaluation of the transaction, wherein the persona is linked to a unique identifier that is at least partly based on a user's device-identifier data and at least one of Facebook information, Twitter information, or Gmail information as the user's network-participation data. 44. The system of claim 35 wherein the persona selection agent is configured to select at least one persona including less personal information than initially requested, at least partly based on an evaluation of the transaction, wherein the cost of the transaction is increased in exchange for the reduced personal information. 45. The system of claim 35 wherein the persona selection agent is configured to select at least one persona including more personal information than initially requested, at least partly based on an evaluation of the transaction, wherein the cost of the transaction is decreased in exchange for the additional personal information. 46. A computer-implemented system comprising:
a persona selection agent operative on a computational platform, said persona selection agent configured to accept at least one request for personal information from a party to a transaction, wherein the persona selection agent is also configured to evaluate the transaction and select a persona at least partly based on an evaluation of the transaction, wherein the persona is linked to a unique identifier that is at least partly based on a user's device-identifier data and the user's network-participation data. 47. A computer-implemented system comprising:
a persona selection agent operative on a computational platform, said persona selection agent configured to accept at least one request for personal information from a party to a transaction at a user interface, wherein the persona selection agent is also configured to evaluate the transaction and select a persona at least partly based on an evaluation of the transaction, wherein the persona is linked to a unique identifier that is at least partly based on a user's device-identifier data and the user's network-participation data. | 2,400
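Functionally, the persona selection agent of claims 35-47 maps a (request, transaction) pair to a persona keyed by an identifier derived from device-identifier and network-participation data. A minimal sketch under assumed semantics — the thresholding policy, the field names, and the choice of SHA-256 hashing are illustrative, not taken from the patent:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    fields: dict       # personal-information fields this persona reveals
    min_value: float   # only use for transactions worth at least this much

def unique_identifier(device_id: str, network_id: str) -> str:
    """Identifier based partly on device-identifier data (e.g. a MAC address)
    and partly on network-participation data (e.g. a social-network handle)."""
    return hashlib.sha256(f"{device_id}|{network_id}".encode()).hexdigest()

def select_persona(personas, transaction_value, requested_fields):
    """Evaluate the transaction's monetary value, then pick the eligible
    persona that reveals the fewest of the requested fields."""
    eligible = [p for p in personas if transaction_value >= p.min_value]
    if not eligible:
        return None
    return min(eligible,
               key=lambda p: len(set(p.fields) & set(requested_fields)))
```

The one-way hash is one plausible way to make the identifier stable across transactions without exposing the underlying device or account data; the cost-adjustment behaviour of claims 44-45 would sit on top of this as a price delta per revealed field.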
8,619 | 8,619 | 15,675,271 | 2,467 | A method and apparatus include establishing a communication connection with a communication network via an access point. As part of establishing a communication connection, information is received for defining a format of the communication connection. The format to be used as part of the communication connection includes one or more parameters which are received from the communication network when establishing the communication connection. At least one of the one or more received parameters serves to define a control channel transmission structure to be used as part of the format. A control channel including one or more control channel transmissions is then received in support of the established communication connection, based upon the control channel transmission structure defined by the at least one of the one or more received parameters. | 1. A method in a device comprising:
establishing a communication connection with a communication network via an access point including receiving information for defining a format of the communication connection, whereby the format to be used as part of the communication connection includes one or more parameters which are received from the communication network when establishing the communication connection, wherein at least one of the one or more received parameters serve to define a control channel transmission structure to be used as part of the format; and receiving a control channel including one or more control channel transmissions in support of the established communication connection, based upon the control channel transmission structure defined by the at least one of the one or more received parameters. 2. A method in accordance with claim 1, wherein the at least one of the one or more received parameters, which serve to define the control channel transmission structure to be used as part of the format includes one or more parameters, which define a periodicity value for a series of control channel transmissions and a time-frequency domain length value for each transmission of the one or more control channel transmissions. 3. A method in accordance with claim 2, wherein the periodicity value comprises an integer multiple of subframes where a subframe comprises a plurality of orthogonal frequency division multiplexing (OFDM) symbols, and the time-frequency domain length value comprises a set of OFDM symbols and a set of physical resource blocks; and wherein at least one of the set of OFDM symbols, number of OFDM symbols, the set of physical resource blocks, and number of physical resource blocks is different from a first subframe of the control channel transmission to a second subframe of the control channel transmission. 4. 
A method in accordance with claim 1, wherein the at least one of the one or more received parameters, which serve to define the control channel transmission structure to be used as part of the format includes one or more parameters, which define an OFDM subcarrier spacing value used for receiving control channel transmissions. 5. A method in accordance with claim 1, wherein the at least one of the one or more received parameters, which serve to define the control channel transmission structure to be used as part of the format includes one or more parameters, which define a cyclic prefix value associated with the control channel. 6. A method in accordance with claim 1, wherein the at least one of the one or more received parameters, which serve to define the control channel transmission structure to be used as part of the format includes one or more parameters, which indicate whether the control channel can be received based on a single antenna port or whether the control channel can be received based on multiple antenna ports. 7. A method in accordance with claim 1, wherein establishing the communication connection with the communication network via the access point includes receiving a broadcast channel while performing an initial access procedure with the communication network, wherein at least some of the information for defining the format of the communication connection including the control channel transmission structure is determined from the received broadcast channel. 8. A method in accordance with claim 7, wherein a numerology for receiving control channel transmissions is different from a numerology used for receiving the broadcast channel, wherein a numerology comprises one or more of a subcarrier spacing and a cyclic prefix length. 9. A method in accordance with claim 7, further comprising determining from the broadcast channel, a set of time-domain resources where downlink transmissions are not present. 10. 
A method in accordance with claim 7, further comprising determining from the broadcast channel, a set of time-domain resources for uplink transmission. 11. A method in accordance with claim 7, wherein receiving the broadcast channel comprises determining an OFDM subcarrier spacing value used for receiving the broadcast channel, wherein the OFDM subcarrier spacing value is determined based on the operating band in which the broadcast channel is received. 12. A method in accordance with claim 7, further comprising receiving the broadcast channel using a first set of reference signal transmissions; receiving the control channel using a second set of reference signal transmissions, wherein the first and second set of reference signal transmissions are transmitted on different antenna ports. 13. A method in accordance with claim 12, further comprising receiving the first set of reference signal transmissions within OFDM symbols comprising the broadcast channel. 14. A method in accordance with claim 1, wherein establishing the communication connection with the communication network via the access point includes receiving at least some of the information for defining a format of the communication connection including the control channel transmission structure, which is determined from one or more parameters included as part of control information received from a control channel transmission related to a target access point in anticipation of a possible handover to the target access point. 15. A method in accordance with claim 1, wherein establishing the communication connection with the communication network via the access point includes reconfiguring a previously established communication connection, where the reconfiguration allows for the format to be used as part of the communication connection to be modified, so as to define a different format having a different control channel transmission structure. 16. 
A method in accordance with claim 1, wherein the at least one of the one or more received parameters which serve to define the control channel transmission structure vary from a first time instance of the control channel transmission to a second time instance of the control channel transmission based on a predetermined pattern. 17. A method in accordance with claim 16, wherein the predetermined pattern is based on one or more of identification information determined from a received synchronization signal, and at least one of the one or more received parameters that is not varying based on the predetermined pattern. 18. A method in accordance with claim 1, further comprising receiving on the control channel information regarding default numerology for at least one of control channel scheduling data, or data transmission; wherein the default numerology comprises a default subcarrier spacing and a default cyclic prefix length. 19. A method in accordance with claim 18, wherein the default numerology for downlink transmissions from the access point to the device is different than the default numerology for uplink transmissions from the device to the access point. 20. A method in accordance with claim 1, further comprising receiving on the control channel information regarding one or more preconfigured timing relation parameters for data transmission, wherein the preconfigured timing relation parameters includes at least one from the set of a timing relation between downlink control channel and downlink data, a timing relation between downlink data and acknowledgment transmission, a timing relation between downlink control channel and uplink data, and a timing relation between uplink data and earliest uplink acknowledgment transmission or uplink transmission/retransmission scheduling for a same hybrid automatic repeat request process. 21. A user equipment in a communication network, the user equipment comprising:
a transceiver that sends and receives signals between the user equipment and a communication network entity including information for defining a format of a communication connection; a controller that identifies one or more parameters from the information for defining the format of the communication connection, wherein at least one of the one or more received parameters serve to define a control channel transmission structure; and wherein the transceiver receives a control channel as part of the communication connection, the control channel including one or more control channel transmissions in support of the communication connection, based upon the control channel transmission structure defined by the at least one of the one or more received parameters. 22. A user equipment in accordance with claim 21, wherein the at least one of the one or more parameters that serve to define a control channel transmission structure include at least one of
a) a periodicity value for a series of control channel transmissions and a time-frequency domain length value for each transmission of the one or more control channel transmissions; b) an OFDM subcarrier spacing value used for receiving control channel transmissions; c) a cyclic prefix value associated with the control channel; and d) a parameter indicating whether the control channel can be received based on a single antenna port or whether the control channel can be received based on multiple antenna ports. 23. A user equipment in accordance with claim 21, wherein the transceiver receives a broadcast channel while performing an initial access procedure with the communication network, wherein at least some of the information for defining the format of the communication connection including the control channel transmission structure is determined from the received broadcast channel. 24. A user equipment in accordance with claim 21, wherein the transceiver receives at least some of the information for defining a format of the communication connection including the control channel transmission structure, which is determined by the controller from one or more parameters included as part of control information received from a control channel transmission related to a target access point in anticipation of a possible handover to the target access point. 25. A user equipment in accordance with claim 21, wherein the controller reconfigures a previously established communication connection, where the reconfiguration allows for the format to be used as part of the communication connection to be modified, so as to define a different format having a different control channel transmission structure. | A method and apparatus include establishing a communication connection with a communication network via an access point. As part of establishing a communication connection, information is received for defining a format of the communication connection. 
The format to be used as part of the communication connection includes one or more parameters which are received from the communication network when establishing the communication connection. At least one of the one or more received parameters serves to define a control channel transmission structure to be used as part of the format. A control channel including one or more control channel transmissions is then received in support of the established communication connection, based upon the control channel transmission structure defined by the at least one of the one or more received parameters.1. A method in a device comprising:
establishing a communication connection with a communication network via an access point including receiving information for defining a format of the communication connection, whereby the format to be used as part of the communication connection includes one or more parameters which are received from the communication network when establishing the communication connection, wherein at least one of the one or more received parameters serve to define a control channel transmission structure to be used as part of the format; and receiving a control channel including one or more control channel transmissions in support of the established communication connection, based upon the control channel transmission structure defined by the at least one of the one or more received parameters. 2. A method in accordance with claim 1, wherein the at least one of the one or more received parameters, which serve to define the control channel transmission structure to be used as part of the format includes one or more parameters, which define a periodicity value for a series of control channel transmissions and a time-frequency domain length value for each transmission of the one or more control channel transmissions. 3. A method in accordance with claim 2, wherein the periodicity value comprises an integer multiple of subframes where a subframe comprises a plurality of orthogonal frequency division multiplexing (OFDM) symbols, and the time-frequency domain length value comprises a set of OFDM symbols and a set of physical resource blocks; and wherein at least one of the set of OFDM symbols, number of OFDM symbols, the set of physical resource blocks, and number of physical resource blocks is different from a first subframe of the control channel transmission to a second subframe of the control channel transmission. 4. 
A method in accordance with claim 1, wherein the at least one of the one or more received parameters, which serve to define the control channel transmission structure to be used as part of the format includes one or more parameters, which define an OFDM subcarrier spacing value used for receiving control channel transmissions. 5. A method in accordance with claim 1, wherein the at least one of the one or more received parameters, which serve to define the control channel transmission structure to be used as part of the format includes one or more parameters, which define a cyclic prefix value associated with the control channel. 6. A method in accordance with claim 1, wherein the at least one of the one or more received parameters, which serve to define the control channel transmission structure to be used as part of the format includes one or more parameters, which indicate whether the control channel can be received based on a single antenna port or whether the control channel can be received based on multiple antenna ports. 7. A method in accordance with claim 1, wherein establishing the communication connection with the communication network via the access point includes receiving a broadcast channel while performing an initial access procedure with the communication network, wherein at least some of the information for defining the format of the communication connection including the control channel transmission structure is determined from the received broadcast channel. 8. A method in accordance with claim 7, wherein a numerology for receiving control channel transmissions is different from a numerology used for receiving the broadcast channel, wherein a numerology comprises one or more of a subcarrier spacing and a cyclic prefix length. 9. A method in accordance with claim 7, further comprising determining from the broadcast channel, a set of time-domain resources where downlink transmissions are not present. 10. 
A method in accordance with claim 7, further comprising determining from the broadcast channel, a set of time-domain resources for uplink transmission. 11. A method in accordance with claim 7, wherein receiving the broadcast channel comprises determining an OFDM subcarrier spacing value used for receiving the broadcast channel, wherein the OFDM subcarrier spacing value is determined based on the operating band in which the broadcast channel is received. 12. A method in accordance with claim 7, further comprising receiving the broadcast channel using a first set of reference signal transmissions; receiving the control channel using a second set of reference signal transmissions, wherein the first and second set of reference signal transmissions are transmitted on different antenna ports. 13. A method in accordance with claim 12, further comprising receiving the first set of reference signal transmissions within OFDM symbols comprising the broadcast channel. 14. A method in accordance with claim 1, wherein establishing the communication connection with the communication network via the access point includes receiving at least some of the information for defining a format of the communication connection including the control channel transmission structure, which is determined from one or more parameters included as part of control information received from a control channel transmission related to a target access point in anticipation of a possible handover to the target access point. 15. A method in accordance with claim 1, wherein establishing the communication connection with the communication network via the access point includes reconfiguring a previously established communication connection, where the reconfiguration allows for the format to be used as part of the communication connection to be modified, so as to define a different format having a different control channel transmission structure. 16. 
A method in accordance with claim 1, wherein the at least one of the one or more received parameters which serve to define the control channel transmission structure vary from a first time instance of the control channel transmission to a second time instance of the control channel transmission based on a predetermined pattern. 17. A method in accordance with claim 16, wherein the predetermined pattern is based on one or more of identification information determined from a received synchronization signal, and at least one of the one or more received parameters that is not varying based on the predetermined pattern. 18. A method in accordance with claim 1, further comprising receiving on the control channel information regarding default numerology for at least one of control channel scheduling data, or data transmission; wherein the default numerology comprises a default subcarrier spacing and a default cyclic prefix length. 19. A method in accordance with claim 18, wherein the default numerology for downlink transmissions from the access point to the device is different than the default numerology for uplink transmissions from the device to the access point. 20. A method in accordance with claim 1, further comprising receiving on the control channel information regarding one or more preconfigured timing relation parameters for data transmission, wherein the preconfigured timing relation parameters includes at least one from the set of a timing relation between downlink control channel and downlink data, a timing relation between downlink data and acknowledgment transmission, a timing relation between downlink control channel and uplink data, and a timing relation between uplink data and earliest uplink acknowledgment transmission or uplink transmission/retransmission scheduling for a same hybrid automatic repeat request process. 21. A user equipment in a communication network, the user equipment comprising:
a transceiver that sends and receives signals between the user equipment and a communication network entity including information for defining a format of a communication connection; a controller that identifies one or more parameters from the information for defining the format of the communication connection, wherein at least one of the one or more received parameters serve to define a control channel transmission structure; and wherein the transceiver receives a control channel as part of the communication connection, the control channel including one or more control channel transmissions in support of the communication connection, based upon the control channel transmission structure defined by the at least one of the one or more received parameters. 22. A user equipment in accordance with claim 21, wherein the at least one of the one or more parameters that serve to define a control channel transmission structure include at least one of
a) a periodicity value for a series of control channel transmissions and a time-frequency domain length value for each transmission of the one or more control channel transmissions; b) an OFDM subcarrier spacing value used for receiving control channel transmissions; c) a cyclic prefix value associated with the control channel; and d) a parameter indicating whether the control channel can be received based on a single antenna port or whether the control channel can be received based on multiple antenna ports. 23. A user equipment in accordance with claim 21, wherein the transceiver receives a broadcast channel while performing an initial access procedure with the communication network, wherein at least some of the information for defining the format of the communication connection including the control channel transmission structure is determined from the received broadcast channel. 24. A user equipment in accordance with claim 21, wherein the transceiver receives at least some of the information for defining a format of the communication connection including the control channel transmission structure, which is determined by the controller from one or more parameters included as part of control information received from a control channel transmission related to a target access point in anticipation of a possible handover to the target access point. 25. A user equipment in accordance with claim 21, wherein the controller reconfigures a previously established communication connection, where the reconfiguration allows for the format to be used as part of the communication connection to be modified, so as to define a different format having a different control channel transmission structure. | 2,400 |
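The parameter set recited in claims 21 and 22 above (a periodicity and time-domain length, an OFDM subcarrier spacing, a cyclic prefix value, and single- versus multiple-antenna-port reception) can be pictured as a small configuration object from which the device derives when to monitor the control channel. The sketch below is illustrative only: the class and function names, units, and slot-based timing are assumptions, not anything the claims prescribe.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlChannelConfig:
    """Hypothetical container for the claim-21/22 parameters."""
    periodicity_slots: int        # a) periodicity of the series of control channel transmissions
    duration_symbols: int         # a) time-frequency domain length of each transmission
    subcarrier_spacing_khz: int   # b) OFDM subcarrier spacing used for reception
    cyclic_prefix: str            # c) cyclic prefix value, e.g. "normal" or "extended"
    multi_antenna_port: bool      # d) single vs. multiple antenna port reception

def monitoring_slots(cfg: ControlChannelConfig, first_slot: int, count: int) -> list[int]:
    """Slots in which the device monitors the control channel, per the periodicity."""
    return [first_slot + k * cfg.periodicity_slots for k in range(count)]

cfg = ControlChannelConfig(periodicity_slots=5, duration_symbols=2,
                           subcarrier_spacing_khz=30, cyclic_prefix="normal",
                           multi_antenna_port=False)
print(monitoring_slots(cfg, first_slot=3, count=4))  # [3, 8, 13, 18]
```

A configuration received over the broadcast channel (claim 23) or via handover-related control information (claim 24) would populate the same fields; claim 16's predetermined pattern would amount to varying one of these fields between time instances.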
8,620 | 8,620 | 14,917,931 | 2,463 | Described is an Evolved Node-B (eNB) to communicate with one or more User Equipment (UEs) on a Long Term Evolution (LTE) network operating in an unlicensed spectrum, the eNB comprising hardware processing circuitry including: an antenna; and a transmitter, coupled to the antenna, the transmitter operable to: inhibit transmission of system information to a UE when the spectrum is unlicensed. The transmitter may also be operable to: refrain from transmission of one or more synchronization signals to a UE when a spectrum is unlicensed. The transmitter may also be operable to transmit the one or more synchronization signals in frequencies away from the centers of the six PRBs of the transmission bandwidth. The transmitter may also be operable to: turn off transmitting in the unlicensed spectrum when the eNB is not servicing any UE; and turn on transmitting in the unlicensed spectrum when a UE is discovered. | 1. An Evolved Node-B (eNB) to communicate with one or more User Equipment (UEs) on a network operating in an unlicensed spectrum, the eNB comprising hardware processing circuitry including:
an antenna; and a transmitter, coupled to the antenna, the transmitter operable to:
inhibit transmission of system information to a UE when the spectrum is unlicensed; and
transmit, in a licensed spectrum, the system information for a carrier to the UE using a Physical Broadcast Channel (PBCH) on that carrier if the carrier is operating on the licensed spectrum,
wherein the carrier on the licensed spectrum can transmit the information associated with the PBCH of another carrier, which is operated on the unlicensed spectrum, to the UE. 2. The eNB of claim 1, wherein the carrier, transmitted on the licensed spectrum, is for a primary cell (PCell) associated with the UE. 3. The eNB of claim 1, wherein the carrier, transmitted on the unlicensed spectrum, is for a secondary cell (SCell) associated with the UE. 4. A User Equipment (UE) to communicate with an Evolved Node-B (eNB) on a network operating in an unlicensed spectrum, the UE comprising hardware processing circuitry including:
an antenna; and a receiver, coupled to the antenna, the receiver operable to:
receive, from the eNB, system information using a Physical Broadcast Channel (PBCH) relevant to a carrier operating on the unlicensed spectrum from another carrier operating on a licensed spectrum, wherein the carrier operating on the licensed spectrum is for a primary cell (PCell) associated with the UE; and
receive the system information from the eNB when the spectrum is licensed. 5. The UE of claim 4, wherein the carrier, operating on the unlicensed spectrum, is for a secondary cell (SCell) associated with the UE. 6. An Evolved Node-B (eNB) to communicate with one or more User Equipment (UEs) on a network operating in an unlicensed spectrum, the eNB comprising hardware processing circuitry including:
an antenna; and a transmitter, coupled to the antenna, the transmitter operable to:
refrain from transmission of one or both of Primary Synchronization Signal (PSS) or Secondary Synchronization Signal (SSS) to a UE when a spectrum is unlicensed; and
transmit, in a licensed spectrum, the one or both of PSS and SSS to the UE. 7. The eNB of claim 6, wherein the transmitter to transmit a carrier, on the licensed spectrum, which is for a primary cell (PCell) associated with the UE. 8. The eNB of claim 6, wherein the transmitter to transmit a carrier, on the unlicensed spectrum, which is for a secondary cell (SCell) associated with the UE. 9. The eNB of claim 6, wherein the transmitter to transmit a carrier operating on the unlicensed spectrum to transmit the one or both of PSS and SSS at centers of six Physical Resource Blocks (PRBs) of transmission bandwidth. 10. The eNB of claim 9 comprising logic to determine whether the centers of the six PRBs are occupied. 11. The eNB of claim 10, wherein the transmitter to transmit the PSS and SSS using unoccupied frequencies in response to determining that the centers of the six PRBs are occupied. 12. The eNB of claim 11, wherein the transmitter to transmit the PSS and SSS in frequencies away from the centers of the six PRBs of the transmission bandwidth. 13. A User Equipment (UE) to communicate with an Evolved Node-B (eNB) on a network operating in an unlicensed spectrum, the UE comprising hardware processing circuitry including:
an antenna; and a receiver, coupled to the antenna, the receiver operable to:
receive on a first carrier in a licensed spectrum one or both of Primary Synchronization Signal (PSS) or Secondary Synchronization Signal (SSS) from the eNB; and
receive on a second carrier operating on the unlicensed spectrum to infer information associated with the one or both of PSS and SSS. 14. The UE of claim 13, wherein the first and second carriers are transmitted by the same transmitter. 15. The UE of claim 13, wherein the first and second carriers are adjacent in frequency band. 16. The UE of claim 13, comprising logic which is time synchronized with time synchronization information from the first carrier. 17. The UE of claim 16, wherein the receiver to receive the one or both of PSS and SSS using unoccupied frequencies in response to determining that center frequencies of six PRBs are occupied. 18. The UE of claim 17, wherein the receiver to receive the one or both of PSS and SSS in frequencies away from the center frequencies of the six PRBs of the transmission bandwidth. 19. The UE of claim 13, wherein the first carrier, operating on the licensed spectrum, is for a primary cell (PCell) associated with the UE. 20. The UE of claim 13, wherein the second carrier, operating on the unlicensed spectrum, is for a secondary cell (SCell) associated with the UE. 21. The UE of claim 13, wherein the receiver to receive the one or both of PSS and SSS in unoccupied frequencies. 22. The UE of claim 13, wherein the receiver to receive the one or both of PSS and SSS in frequencies away from the physical resource blocks (PRBs) and near the center frequencies of transmission bandwidth. 23. A User Equipment (UE) to communicate with an Evolved Node-B (eNB) on a network operating in an unlicensed spectrum, the UE comprising hardware processing circuitry including:
an antenna; a receiver coupled to the antenna, the receiver including a sensor to detect ongoing transmission in a carrier frequency of the unlicensed spectrum; and a transmitter coupled to the antenna, the transmitter to transmit a notification, associated with the detected ongoing transmission, to the eNB via a licensed spectrum. 24. The UE of claim 23, wherein the transmitter to transmit the notification associated with the detected ongoing transmission to a scheduler associated with the eNB. 25. The UE of claim 23, wherein the licensed spectrum is an uplink spectrum. 26. The UE of claim 23, wherein the sensor is at least one of:
an energy detection based sensor; or a waveform detection based sensor. 27. The UE of claim 23, wherein the sensor to detect or sense any ongoing transmission over the unlicensed spectrum used by the eNB or the UE. 28. The UE of claim 23, wherein the eNB includes a sensor to sense or detect ongoing transmission in a carrier frequency of the unlicensed spectrum. 29. The UE of claim 28, wherein the sensor in the eNB is at least one of: an energy detection based sensor; or a waveform detection based sensor. | 2,400 |
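Claims 9 through 12 of the record above describe a simple placement rule for the synchronization signals: use the centers of six PRBs of the transmission bandwidth when they are free, and shift to unoccupied frequencies away from the center when they are not. A minimal sketch of that rule follows; the function name, PRB indexing, and nearest-first fallback ordering are illustrative assumptions, not part of the claims.

```python
def sync_signal_prbs(total_prbs: int, occupied: set[int]) -> list[int]:
    """Pick six PRBs for PSS/SSS transmission.

    Prefer the six center PRBs of the transmission bandwidth (claims 9-10);
    if any center PRB is occupied, fall back to unoccupied PRBs away from
    the center (claims 11-12).
    """
    mid = total_prbs // 2
    center = list(range(mid - 3, mid + 3))
    if not any(prb in occupied for prb in center):
        return center
    # Fallback: unoccupied PRBs outside the center region, nearest first.
    candidates = [p for p in range(total_prbs)
                  if p not in occupied and p not in center]
    candidates.sort(key=lambda p: abs(p - mid))
    return candidates[:6]

print(sync_signal_prbs(50, set()))  # [22, 23, 24, 25, 26, 27]
```

The occupancy set would come from the energy- or waveform-detection sensing described in claims 26 through 29, or from the UE's notification over the licensed spectrum in claim 23.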
8,621 | 8,621 | 15,375,142 | 2,456 | Example methods are provided for a first routing component to handle failure at a logical router in a first network. One method may comprise learning first path information associated with a first path provided by an active second routing component, and second path information associated with a second path provided by a standby second routing component. The method may also comprise in response to detecting a first egress packet destined for a second network, sending the first egress packet to the active second routing component based on the first path information. The method may further comprise in response to detecting a failure at the active second routing component and detecting a second egress packet destined for the second network, sending the second egress packet to a new active second routing component based on the second path information. | 1. A method for a first routing component to handle failure at a logical router in a first network, wherein the logical router includes the first routing component, an active second routing component and a standby second routing component, and the method comprises:
learning first path information associated with a first path provided by the active second routing component, and second path information associated with a second path provided by the standby second routing component, wherein the first path and second path connect the first routing component to a second network; in response to detecting a first egress packet that is destined for the second network, sending the first egress packet to the active second routing component over the first path based on the first path information; and in response to detecting a failure at the active second routing component,
assigning the standby second routing component to be a new active second routing component; and
in response to detecting a second egress packet that is destined for the second network, sending the second egress packet to the new active second routing component over the second path based on the second path information. 2. The method of claim 1, wherein the method further comprises:
based on at least the first path information and second path information, selecting the active second routing component and the standby second routing component from multiple second routing components. 3. The method of claim 2, wherein selecting the active second routing component and the standby second routing component comprises:
comparing multiple distance values associated with the respective multiple second routing components; and based on the comparison, selecting the active second routing component based on a lowest distance value among the distance values; and the standby second routing component based on a higher distance value compared to the lowest distance value. 4. The method of claim 1, wherein detecting the failure associated with the active second routing component comprises:
detecting that the first path is down or the active second routing component has lost physical connectivity with the second network, or both. 5. The method of claim 1, wherein learning the first path information and second path information comprises:
receiving, from a network management entity, the first path information that includes a first Media Access Control (MAC) address, an Internet Protocol (IP) address and first virtual tunnel endpoint (VTEP) information associated with the active second routing component; and receiving, from the network management entity, the second path information that includes a second MAC address, the IP address and second VTEP information associated with the standby second routing component. 6. The method of claim 5, wherein the method further comprises:
prior to sending the first egress packet, encapsulating the first egress packet with header information that includes the first MAC address, the IP address and the first VTEP information; and prior to sending the second egress packet, encapsulating the second egress packet with header information that includes the second MAC address, the IP address and the second VTEP information. 7. The method of claim 1, wherein the method further comprises:
detecting recovery of the active second routing component, being a recovered second routing component, from the failure; and replacing the new active second routing component with the recovered second routing component according to a preemptive mode. 8. The method of claim 1, wherein the method further comprises:
detecting recovery of the active second routing component, being a recovered second routing component, from the failure; updating path information associated with the recovered second routing component to decrease its distance value; and assigning the recovered second routing component as a standby second routing component according to a non-preemptive mode. 9. A non-transitory computer-readable storage medium that includes a set of instructions which, in response to execution by a processor of a computer system, cause the processor to implement a first routing component to handle failure at a logical router in a first network, wherein the logical router includes the first routing component, an active second routing component and a standby second routing component, and the method comprises:
learning first path information associated with a first path provided by the active second routing component, and second path information associated with a second path provided by the standby second routing component, wherein the first path and second path connect the first routing component to a second network; in response to detecting a first egress packet that is destined for the second network, sending the first egress packet to the active second routing component over the first path based on the first path information; and in response to detecting a failure at the active second routing component,
assigning the standby second routing component to be a new active second routing component; and
in response to detecting a second egress packet that is destined for the second network, sending the second egress packet to the new active second routing component over the second path based on the second path information. 10. The non-transitory computer-readable storage medium of claim 9, wherein the method further comprises:
based on at least the first path information and second path information, selecting the active second routing component and the standby second routing component from multiple second routing components. 11. The non-transitory computer-readable storage medium of claim 10, wherein selecting the active second routing component and the standby second routing component comprises:
comparing multiple distance values associated with the respective multiple second routing components; and based on the comparison, selecting the active second routing component based on a lowest distance value among the distance values; and the standby second routing component based on a higher distance value compared to the lowest distance value. 12. The non-transitory computer-readable storage medium of claim 9, wherein detecting the failure associated with the active second routing component comprises:
detecting that the first path is down or the active second routing component has lost physical connectivity with the second network, or both. 13. The non-transitory computer-readable storage medium of claim 9, wherein learning the first path information and second path information comprises:
receiving, from a network management entity, the first path information that includes a first Media Access Control (MAC) address, an Internet Protocol (IP) address and first virtual tunnel endpoint (VTEP) information associated with the active second routing component; and receiving, from the network management entity, the second path information that includes a second MAC address, the IP address and second VTEP information associated with the standby second routing component. 14. The non-transitory computer-readable storage medium of claim 13, wherein the method further comprises:
prior to sending the first egress packet, encapsulating the first egress packet with header information that includes the first MAC address, the IP address and the first VTEP information; and prior to sending the second egress packet, encapsulating the second egress packet with header information that includes the second MAC address, the IP address and the second VTEP information. 15. The non-transitory computer-readable storage medium of claim 9, wherein the method further comprises:
detecting recovery of the active second routing component, being a recovered second routing component, from the failure; and replacing the new active second routing component with the recovered second routing component according to a preemptive mode. 16. The non-transitory computer-readable storage medium of claim 9, wherein the method further comprises:
detecting recovery of the active second routing component, being a recovered second routing component, from the failure; updating path information associated with the recovered second routing component to decrease its distance value; and assigning the recovered second routing component as a standby second routing component according to a non-preemptive mode. 17. A computer system configured to implement a first routing component to handle failure at a logical router in a first network, wherein the logical router includes the first routing component, an active second routing component and a standby second routing component, and the computer system comprises:
a processor; and a non-transitory computer-readable medium having stored thereon instructions that, when executed by the processor, cause the processor to: learn first path information associated with a first path provided by the active second routing component, and second path information associated with a second path provided by the standby second routing component, wherein the first path and second path connect the first routing component to a second network; in response to detecting a first egress packet destined for the second network, send the first egress packet to the active second routing component over the first path based on the first path information; and in response to detecting a failure at the active second routing component,
assign the standby second routing component to be a new active second routing component; and
in response to detecting a second egress packet destined for the second network, send the second egress packet to the new active second routing component over the second path based on the second path information. 18. The computer system of claim 17, wherein the instructions further cause the processor to:
based on at least the first path information and second path information, select the active second routing component and the standby second routing component from multiple second routing components. 19. The computer system of claim 18, wherein instructions for selecting the active second routing component and the standby second routing component cause the processor to:
compare multiple distance values associated with the respective multiple second routing components; and based on the comparison, select the active second routing component based on a lowest distance value among the distance values; and the standby second routing component based on a higher distance value compared to the lowest distance value. 20. The computer system of claim 17, wherein instructions for detecting the failure associated with the active second routing component cause the processor to:
detect that the first path is down or the active second routing component has lost physical connectivity with the second network, or both. 21. The computer system of claim 17, wherein instructions for learning the first path information and second path information cause the processor to:
receive, from a network management entity, the first path information that includes a first Media Access Control (MAC) address, an Internet Protocol (IP) address and first virtual tunnel endpoint (VTEP) information associated with the active second routing component; and receive, from the network management entity, the second path information that includes a second MAC address, the IP address and second VTEP information associated with the standby second routing component. 22. The computer system of claim 21, wherein the instructions further cause the processor to:
prior to sending the first egress packet, encapsulate the first egress packet with header information that includes the first MAC address, the IP address and the first VTEP information; and prior to sending the second egress packet, encapsulate the second egress packet with header information that includes the second MAC address, the IP address and the second VTEP information. 23. The computer system of claim 17, wherein the instructions further cause the processor to:
detect recovery of the active second routing component, being a recovered second routing component, from the failure; and replace the new active second routing component with the recovered second routing component according to a preemptive mode. 24. The computer system of claim 17, wherein the instructions further cause the processor to:
detect recovery of the active second routing component, being a recovered second routing component, from the failure; update path information associated with the recovered second routing component to decrease its distance value; and assign the recovered second routing component as a standby second routing component according to a non-preemptive mode.
receiving, from a network management entity, the first path information that includes a first Media Access Control (MAC) address, an Internet Protocol (IP) address and first virtual tunnel endpoint (VTEP) information associated with the active second routing component; and receiving, from the network management entity, the second path information that includes a second MAC address, the IP address and second VTEP information associated with the standby second routing component. 6. The method of claim 5, wherein the method further comprises:
prior to sending the first egress packet, encapsulating the first egress packet with header information that includes the first MAC address, the IP address and the first VTEP information; and prior to sending the second egress packet, encapsulating the second egress packet with header information that includes the second MAC address, the IP address and the second VTEP information. 7. The method of claim 1, wherein the method further comprises:
detecting recovery of the active second routing component, being a recovered second routing component, from the failure; and replacing the new active second routing component with the recovered second routing component according to a preemptive mode. 8. The method of claim 1, wherein the method further comprises:
detecting recovery of the active second routing component, being a recovered second routing component, from the failure; updating path information associated with the recovered second routing component to decrease its distance value; and assigning the recovered second routing component as a standby second routing component according to a non-preemptive mode. 9. A non-transitory computer-readable storage medium that includes a set of instructions which, in response to execution by a processor of a computer system, cause the processor to implement a first routing component to handle failure at a logical router in a first network, wherein the logical router includes the first routing component, an active second routing component and a standby second routing component, and the method comprises:
learning first path information associated with a first path provided by the active second routing component, and second path information associated with a second path provided by the standby second routing component, wherein the first path and second path connect the first routing component to a second network; in response to detecting a first egress packet that is destined for the second network, sending the first egress packet to the active second routing component over the first path based on the first path information; and in response to detecting a failure at the active second routing component,
assigning the standby second routing component to be a new active second routing component; and
in response to detecting a second egress packet that is destined for the second network, sending the second egress packet to the new active second routing component over the second path based on the second path information. 10. The non-transitory computer-readable storage medium of claim 9, wherein the method further comprises:
based on at least the first path information and second path information, selecting the active second routing component and the standby second routing component from multiple second routing components. 11. The non-transitory computer-readable storage medium of claim 10, wherein selecting the active second routing component and the standby second routing component comprises:
comparing multiple distance values associated with the respective multiple second routing components; and based on the comparison, selecting the active second routing component based on a lowest distance value among the distance values; and the standby second routing component based on a higher distance value compared to the lowest distance value. 12. The non-transitory computer-readable storage medium of claim 9, wherein detecting the failure associated with the active second routing component comprises:
detecting that the first path is down or the active second routing component has lost physical connectivity with the second network, or both. 13. The non-transitory computer-readable storage medium of claim 9, wherein learning the first path information and second path information comprises:
receiving, from a network management entity, the first path information that includes a first Media Access Control (MAC) address, an Internet Protocol (IP) address and first virtual tunnel endpoint (VTEP) information associated with the active second routing component; and receiving, from the network management entity, the second path information that includes a second MAC address, the IP address and second VTEP information associated with the standby second routing component. 14. The non-transitory computer-readable storage medium of claim 13, wherein the method further comprises:
prior to sending the first egress packet, encapsulating the first egress packet with header information that includes the first MAC address, the IP address and the first VTEP information; and prior to sending the second egress packet, encapsulating the second egress packet with header information that includes the second MAC address, the IP address and the second VTEP information. 15. The non-transitory computer-readable storage medium of claim 9, wherein the method further comprises:
detecting recovery of the active second routing component, being a recovered second routing component, from the failure; and replacing the new active second routing component with the recovered second routing component according to a preemptive mode. 16. The non-transitory computer-readable storage medium of claim 9, wherein the method further comprises:
detecting recovery of the active second routing component, being a recovered second routing component, from the failure; updating path information associated with the recovered second routing component to decrease its distance value; and assigning the recovered second routing component as a standby second routing component according to a non-preemptive mode. 17. A computer system configured to implement a first routing component to handle failure at a logical router in a first network, wherein the logical router includes the first routing component, an active second routing component and a standby second routing component, and the computer system comprises:
a processor; and a non-transitory computer-readable medium having stored thereon instructions that, when executed by the processor, cause the processor to: learn first path information associated with a first path provided by the active second routing component, and second path information associated with a second path provided by the standby second routing component, wherein the first path and second path connect the first routing component to a second network; in response to detecting a first egress packet destined for the second network, send the first egress packet to the active second routing component over the first path based on the first path information; and in response to detecting a failure at the active second routing component,
assign the standby second routing component to be a new active second routing component; and
in response to detecting a second egress packet destined for the second network, send the second egress packet to the new active second routing component over the second path based on the second path information. 18. The computer system of claim 17, wherein the instructions further cause the processor to:
based on at least the first path information and second path information, select the active second routing component and the standby second routing component from multiple second routing components. 19. The computer system of claim 18, wherein instructions for selecting the active second routing component and the standby second routing component cause the processor to:
compare multiple distance values associated with the respective multiple second routing components; and based on the comparison, select the active second routing component based on a lowest distance value among the distance values; and the standby second routing component based on a higher distance value compared to the lowest distance value. 20. The computer system of claim 17, wherein instructions for detecting the failure associated with the active second routing component cause the processor to:
detect that the first path is down or the active second routing component has lost physical connectivity with the second network, or both. 21. The computer system of claim 17, wherein instructions for learning the first path information and second path information cause the processor to:
receive, from a network management entity, the first path information that includes a first Media Access Control (MAC) address, an Internet Protocol (IP) address and first virtual tunnel endpoint (VTEP) information associated with the active second routing component; and receive, from the network management entity, the second path information that includes a second MAC address, the IP address and second VTEP information associated with the standby second routing component. 22. The computer system of claim 21, wherein the instructions further cause the processor to:
prior to sending the first egress packet, encapsulate the first egress packet with header information that includes the first MAC address, the IP address and the first VTEP information; and prior to sending the second egress packet, encapsulate the second egress packet with header information that includes the second MAC address, the IP address and the second VTEP information. 23. The computer system of claim 17, wherein the instructions further cause the processor to:
detect recovery of the active second routing component, being a recovered second routing component, from the failure; and replace the new active second routing component with the recovered second routing component according to a preemptive mode. 24. The computer system of claim 17, wherein the instructions further cause the processor to:
detect recovery of the active second routing component, being a recovered second routing component, from the failure; update path information associated with the recovered second routing component to decrease its distance value; and assign the recovered second routing component as a standby second routing component according to a non-preemptive mode. | 2,400 |
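Read as a protocol, claims 1–8 above (mirrored in the medium and system claims) describe a small state machine: elect the active and standby second routing components by comparing distance values, fail egress traffic over to the standby, and handle recovery in either preemptive or non-preemptive mode. The Python sketch below only illustrates that logic under stated assumptions; the class, method, and attribute names are invented here for illustration and are not taken from the patent.

```python
# Illustrative sketch of the claimed failover behavior at a logical router's
# first routing component. All names here are hypothetical.

class LogicalRouterUplinks:
    def __init__(self, paths, preemptive=True):
        # paths: dict mapping second-routing-component id -> distance value
        self.paths = dict(paths)
        self.preemptive = preemptive
        self._elect()

    def _elect(self):
        # Claim 3: active = lowest distance value; standby = a higher one.
        ranked = sorted(self.paths, key=self.paths.get)
        self.active, self.standby = ranked[0], ranked[1]

    def route_egress(self, packet):
        # Claim 1: egress packets destined for the second network follow
        # the path of the currently active second routing component.
        return (self.active, packet)

    def handle_failure(self):
        # Claim 1: on failure of the active component, the standby is
        # assigned to be the new active component.
        failed = self.active
        self.active, self.standby = self.standby, None
        return failed

    def handle_recovery(self, recovered):
        if self.preemptive:
            # Claims 7/15/23: the recovered component preempts (replaces)
            # the new active component.
            self.standby = self.active
            self.active = recovered
        else:
            # Claims 8/16/24: decrease its distance value and keep the
            # recovered component as a standby.
            self.paths[recovered] -= 1
            self.standby = recovered
```

A short usage walk-through: with paths `{"sr1": 10, "sr2": 20}`, `sr1` is elected active; after `handle_failure()`, egress traffic is re-routed via `sr2`; a later `handle_recovery("sr1")` either preempts `sr2` or re-enrolls `sr1` as standby, depending on the mode.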
8,622 | 8,622 | 14,636,448 | 2,483 | An optical system includes a first relay rod of a first material, a second relay rod of a second material, different from the first material, and a lens between the first and second relay rods. | 1. An optical system, comprising:
a first relay rod of a first material; a second relay rod of a second material, different from the first material; and a lens between the first and second relay rods. 2. The optical system as claimed in claim 1, wherein the first and second materials are selected to reduce the wavefront error across the field of view of the optical system as compared with using a same material for both the first and second relay rods. 3. The optical system as claimed in claim 1, wherein the first and second relay rods have no optical power. 4. The optical system as claimed in claim 1, wherein the first and second relay rods have no optically powered surfaces. 5. The optical system as claimed in claim 1, wherein the first and second relay rods are geometrically symmetric to one another relative to a pupil or conjugate pupil of the optical system. 6. The optical system as claimed in claim 1, wherein the lens includes a first relay objective and a second relay objective. 7. The optical system as claimed in claim 6, wherein the first and second relay objectives have a same design and material. 8. The optical system as claimed in claim 1, wherein a pupil region of the system is in air. 9. The optical system as claimed in claim 1, further comprising:
a third relay rod of a third material; a fourth relay rod of a fourth material, the fourth material being different from the third material; and a lens between the third and fourth relay rods. 10. The optical system as claimed in claim 9, wherein the first and third materials are the same and the second and fourth materials are the same. 11. The optical system as claimed in claim 1, further comprising a detector to detect light from 460 nm to 850 nm. 12. An endoscope including the optical system as claimed in claim 1. 13. A kit, comprising:
a first relay rod of a first material; a second relay rod of a second material, different from the first material; and a lens to be inserted between the first and second relay rods. 14. The kit as claimed in claim 13, wherein the lens includes a first relay objective and a second relay objective. 15. The kit as claimed in claim 14, wherein the first and second relay objectives have a same design and material. 16. A method of compensating for dispersion in a relay lens system, the method comprising:
providing a first relay rod of a first material; providing a second relay rod of a second material, different from the first material; and providing a lens between the first and second relay rods. | An optical system includes a first relay rod of a first material, a second relay rod of a second material, different from the first material, and a lens between the first and second relay rods.1. An optical system, comprising:
a first relay rod of a first material; a second relay rod of a second material, different from the first material; and a lens between the first and second relay rods. 2. The optical system as claimed in claim 1, wherein the first and second materials are selected to reduce the wavefront error across the field of view of the optical system as compared with using a same material for both the first and second relay rods. 3. The optical system as claimed in claim 1, wherein the first and second relay rods have no optical power. 4. The optical system as claimed in claim 1, wherein the first and second relay rods have no optically powered surfaces. 5. The optical system as claimed in claim 1, wherein the first and second relay rods are geometrically symmetric to one another relative to a pupil or conjugate pupil of the optical system. 6. The optical system as claimed in claim 1, wherein the lens includes a first relay objective and a second relay objective. 7. The optical system as claimed in claim 6, wherein the first and second relay objectives have a same design and material. 8. The optical system as claimed in claim 1, wherein a pupil region of the system is in air. 9. The optical system as claimed in claim 1, further comprising:
a third relay rod of a third material; a fourth relay rod of a fourth material, the fourth material being different from the third material; and a lens between the third and fourth relay rods. 10. The optical system as claimed in claim 9, wherein the first and third materials are the same and the second and fourth materials are the same. 11. The optical system as claimed in claim 1, further comprising a detector to detect light from 460 nm to 850 nm. 12. An endoscope including the optical system as claimed in claim 1. 13. A kit, comprising:
a first relay rod of a first material; a second relay rod of a second material, different from the first material; and a lens to be inserted between the first and second relay rods. 14. The kit as claimed in claim 13, wherein the lens includes a first relay objective and a second relay objective. 15. The kit as claimed in claim 14, wherein the first and second relay objectives have a same design and material. 16. A method of compensating for dispersion in a relay lens system, the method comprising:
providing a first relay rod of a first material; providing a second relay rod of a second material, different from the first material; and providing a lens between the first and second relay rods. | 2,400 |
8,623 | 8,623 | 14,585,041 | 2,481 | A method for coding a reference picture set (RPS) in multi-layer coding is disclosed. In one aspect, the method may involve determining whether a current picture of video information is a discardable picture. The method may also involve refraining from including the current picture in an RPS based on the determination that the current picture is a discardable picture. The method may further involve encoding the video information based at least in part on the RPS. | 1. A method for encoding video information of a multi-layer bitstream, comprising:
determining whether a current picture of the video information is a discardable picture; and refraining from including the current picture in a reference picture set (RPS) based on the determination that the current picture is a discardable picture. 2. The method of claim 1, wherein the determining whether the current picture is a discardable picture comprises determining whether the current picture is used for inter-layer prediction or inter prediction, a discardable picture being a picture that is not used for inter-layer prediction or inter prediction. 3. The method of claim 1, wherein determining whether the current picture is a discardable picture is based at least in part on a discardable value associated with the current picture that indicates whether the current picture is used for inter-layer prediction or inter prediction. 4. The method of claim 3, wherein the discardable value is a discardable flag and wherein the discardable flag indicates that the current picture is a discardable picture when the discardable flag has a value equal to one. 5. The method of claim 1, wherein the RPS comprises an inter-layer RPS or a temporal RPS. 6. The method of claim 1, further comprising encoding the video information based at least in part on the RPS. 7. A device for encoding video information of a multi-layer bitstream, comprising:
a memory configured to store the video information; and a processor in communication with the memory and configured to:
determine whether a current picture of the video information is a discardable picture; and
refrain from including the current picture in a reference picture set (RPS) based on the determination that the current picture is a discardable picture. 8. The device of claim 7, wherein the processor is further configured to determine whether the current picture is used for inter-layer prediction or inter prediction and wherein a discardable picture is a picture that is not used for inter-layer prediction or inter prediction. 9. The device of claim 7, wherein the processor is further configured to determine whether the current picture is a discardable picture based at least in part on a discardable value associated with the current picture that indicates whether the current picture is used for inter-layer prediction or inter prediction. 10. The device of claim 9, wherein the discardable value is a discardable flag and wherein the discardable flag indicates that the current picture is a discardable picture when the discardable flag has a value equal to one. 11. The device of claim 7, wherein the RPS comprises an inter-layer RPS or a temporal RPS. 12. The device of claim 7, wherein the processor is further configured to encode the video information based at least in part on the RPS. 13. An apparatus, comprising:
means for determining whether a current picture of video information is a discardable picture; and means for refraining from including the current picture in a reference picture set (RPS) based on the determination that the current picture is a discardable picture. 14. The apparatus of claim 13, wherein the means for determining whether the current picture is a discardable picture comprises means for determining whether the current picture is used for inter-layer prediction or inter prediction, a discardable picture being a picture that is not used for inter-layer prediction or inter prediction. 15. The apparatus of claim 13, wherein the means for determining whether the current picture is a discardable picture comprises means for determining a discardable value associated with the current picture that indicates whether the current picture is used for inter-layer prediction or inter prediction. 16. The apparatus of claim 15, wherein the discardable value is a discardable flag and wherein the discardable flag indicates that the current picture is a discardable picture when the discardable flag has a value equal to one. 17. The apparatus of claim 13, wherein the RPS comprises an inter-layer RPS or a temporal RPS. 18. The apparatus of claim 13, further comprising means for encoding the video information based at least in part on the RPS. 19. A non-transitory computer readable storage medium having stored thereon instructions that, when executed, cause a processor of a device to:
determine whether a current picture of video information is a discardable picture; and refrain from including the current picture in a reference picture set (RPS) based on the determination that the current picture is a discardable picture. 20. The non-transitory computer readable storage medium of claim 19, further having stored thereon instructions that, when executed, cause the processor to determine whether the current picture is used for inter-layer prediction or inter prediction, a discardable picture being a picture that is not used for inter-layer prediction or inter prediction. 21. The non-transitory computer readable storage medium of claim 19, further having stored thereon instructions that, when executed, cause the processor to determine a discardable value associated with the current picture that indicates whether the current picture is used for inter-layer prediction or inter prediction. 22. The non-transitory computer readable storage medium of claim 21, wherein the discardable value is a discardable flag and wherein the discardable flag indicates that the current picture is a discardable picture when the discardable flag has a value equal to one. 23. The non-transitory computer readable storage medium of claim 19, wherein the RPS comprises an inter-layer RPS or a temporal RPS. 24. The non-transitory computer readable storage medium of claim 19, further having stored thereon instructions that, when executed, cause the processor to encode the video information based at least in part on the RPS. | A method for coding a reference picture set (RPS) in multi-layer coding is disclosed. In one aspect, the method may involve determining whether a current picture of video information is a discardable picture. The method may also involve refraining from including the current picture in an RPS based on the determination that the current picture is a discardable picture. 
The method may further involve encoding the video information based at least in part on the RPS. 1. A method for encoding video information of a multi-layer bitstream, comprising:
determining whether a current picture of the video information is a discardable picture; and refraining from including the current picture in a reference picture set (RPS) based on the determination that the current picture is a discardable picture. 2. The method of claim 1, wherein the determining whether the current picture is a discardable picture comprises determining whether the current picture is used for inter-layer prediction or inter prediction, a discardable picture being a picture that is not used for inter-layer prediction or inter prediction. 3. The method of claim 1, wherein determining whether the current picture is a discardable picture is based at least in part on a discardable value associated with the current picture that indicates whether the current picture is used for inter-layer prediction or inter prediction. 4. The method of claim 3, wherein the discardable value is a discardable flag and wherein the discardable flag indicates that the current picture is a discardable picture when the discardable flag has a value equal to one. 5. The method of claim 1, wherein the RPS comprises an inter-layer RPS or a temporal RPS. 6. The method of claim 1, further comprising encoding the video information based at least in part on the RPS. 7. A device for encoding video information of a multi-layer bitstream, comprising:
a memory configured to store the video information; and a processor in communication with the memory and configured to:
determine whether a current picture of the video information is a discardable picture; and
refrain from including the current picture in a reference picture set (RPS) based on the determination that the current picture is a discardable picture. 8. The device of claim 7, wherein the processor is further configured to determine whether the current picture is used for inter-layer prediction or inter prediction and wherein a discardable picture is a picture that is not used for inter-layer prediction or inter prediction. 9. The device of claim 7, wherein the processor is further configured to determine whether the current picture is a discardable picture based at least in part on a discardable value associated with the current picture that indicates whether the current picture is used for inter-layer prediction or inter prediction. 10. The device of claim 9, wherein the discardable value is a discardable flag and wherein the discardable flag indicates that the current picture is a discardable picture when the discardable flag has a value equal to one. 11. The device of claim 7, wherein the RPS comprises an inter-layer RPS or a temporal RPS. 12. The device of claim 7, wherein the processor is further configured to encode the video information based at least in part on the RPS. 13. An apparatus, comprising:
means for determining whether a current picture of video information is a discardable picture; and means for refraining from including the current picture in a reference picture set (RPS) based on the determination that the current picture is a discardable picture. 14. The apparatus of claim 13, wherein the means for determining whether the current picture is a discardable picture comprises means for determining whether the current picture is used for inter-layer prediction or inter prediction, a discardable picture being a picture that is not used for inter-layer prediction or inter prediction. 15. The apparatus of claim 13, wherein the means for determining whether the current picture is a discardable picture comprises means for determining a discardable value associated with the current picture that indicates whether the current picture is used for inter-layer prediction or inter prediction. 16. The apparatus of claim 15, wherein the discardable value is a discardable flag and wherein the discardable flag indicates that the current picture is a discardable picture when the discardable flag has a value equal to one. 17. The apparatus of claim 13, wherein the RPS comprises an inter-layer RPS or a temporal RPS. 18. The apparatus of claim 13, further comprising means for encoding the video information based at least in part on the RPS. 19. A non-transitory computer readable storage medium having stored thereon instructions that, when executed, cause a processor of a device to:
determine whether a current picture of video information is a discardable picture; and refrain from including the current picture in a reference picture set (RPS) based on the determination that the current picture is a discardable picture. 20. The non-transitory computer readable storage medium of claim 19, further having stored thereon instructions that, when executed, cause the processor to determine whether the current picture is used for inter-layer prediction or inter prediction, a discardable picture being a picture that is not used for inter-layer prediction or inter prediction. 21. The non-transitory computer readable storage medium of claim 19, further having stored thereon instructions that, when executed, cause the processor to determine a discardable value associated with the current picture that indicates whether the current picture is used for inter-layer prediction or inter prediction. 22. The non-transitory computer readable storage medium of claim 21, wherein the discardable value is a discardable flag and wherein the discardable flag indicates that the current picture is a discardable picture when the discardable flag has a value equal to one. 23. The non-transitory computer readable storage medium of claim 19, wherein the RPS comprises an inter-layer RPS or a temporal RPS. 24. The non-transitory computer readable storage medium of claim 19, further having stored thereon instructions that, when executed, cause the processor to encode the video information based at least in part on the RPS. | 2,400 |
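The RPS claims above reduce to a simple filtering rule: when building a reference picture set, skip any picture whose discardable flag equals one (i.e., a picture not used for inter-layer or inter prediction). A minimal Python sketch of that rule follows; the `Picture` type and field names are assumptions made here for illustration and are not part of any codec syntax.

```python
# Hedged sketch of the claimed RPS construction: a picture marked
# discardable is refrained from inclusion in the reference picture set.

from dataclasses import dataclass

@dataclass
class Picture:
    poc: int                # picture order count (illustrative identifier)
    discardable_flag: int   # 1 -> not used for inter/inter-layer prediction

def build_rps(pictures):
    """Return the reference picture set as a list of POCs, skipping
    discardable pictures (claims 1 and 4: flag value one means excluded)."""
    rps = []
    for pic in pictures:
        if pic.discardable_flag == 1:
            continue  # refrain from including this picture in the RPS
        rps.append(pic.poc)
    return rps
```

For example, from pictures with flags `[0, 1, 0, 1]` only the first and third POCs would enter the RPS; encoding then proceeds based at least in part on that set, per dependent claim 6.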
8,624 | 8,624 | 15,538,853 | 2,485 | An encoding apparatus for encoding an image includes: a communicator configured to receive, from a device, device information related to the device; and a processor configured to encode the image by using image information of the image and the device information, wherein the processor is further configured to process the image according to at least one of the device information and the image information, determine a non-encoding region, a block-based encoding region, and a pixel-based encoding region of the image according to at least one of the device information and the image information, perform block-based encoding on the block-based encoding region by using a quantization parameter determined according to at least one of the device information and the image information, perform pixel-based encoding on the pixel-based encoding region, generate an encoded image by entropy encoding a symbol determined by the block-based encoding or the pixel-based encoding, and generate a bitstream comprising the encoded image, region information of the block-based encoding region and the pixel-based encoding region, and quantization information of the quantization parameter, and wherein the communicator is further configured to transmit the bitstream to the device. | 1. An encoding apparatus for encoding an image, the encoding apparatus comprising:
a communicator configured to receive, from a device, device information related to the device; and a processor configured to encode the image by using image information of the image and the device information, wherein the processor is further configured to determine a non-encoding region, a block-based encoding region, and a pixel-based encoding region of the image according to at least one of the device information and the image information, perform block-based encoding on the block-based encoding region by using a quantization parameter determined according to at least one of the device information and the image information, perform pixel-based encoding on the pixel-based encoding region, generate an encoded image by entropy encoding a symbol determined by the block-based encoding or a symbol determined by the pixel-based encoding, and generate a bitstream comprising the encoded image, region information of the block-based encoding region and the pixel-based encoding region, and quantization information of the quantization parameter, and wherein the communicator is further configured to transmit the bitstream to the device. 2. The encoding apparatus of claim 1, wherein the processor is further configured to perform intra prediction or inter prediction on a coding unit of the block-based encoding region, transform residual data generated from intra prediction or inter prediction of a block-based encoding region, and quantize the transformed residual data by using the quantization parameter. 3. The encoding apparatus of claim 1, wherein the processor is further configured to perform pixel-based prediction on an adjacent pixel with respect to each of pixels of the pixel-based encoding region, match residual data generated from pixel-based prediction of a pixel-based encoding region with a repeated residual string of the residual data, and compress the residual data by using a length and position information of the residual string. 4. 
The encoding apparatus of claim 1, wherein the processor is further configured to generate an index map comprising an index corresponding to colors indicated by pixels of the pixel-based encoding region, match the index map with a repeated index string of the index map, and compress the index map by using a length of the index string and position information. 5. The encoding apparatus of claim 1,
wherein the communicator is further configured to receive display shape information about a shape of a display included in a device reproducing the image, and wherein the processor is further configured to determine a region of the image that is not displayed on the display as the non-encoding region according to the display shape information, and wherein the non-encoding region is a region that is not encoded. 6. The encoding apparatus of claim 1, wherein the processor is further configured to split the image into a plurality of regions, expand sizes of the split regions such that the split regions are split in an integer number of coding units used by a block-based encoding region, obtain an area ratio of the expanded regions with respect to unexpanded regions, and determine the regions as one of the block-based encoding regions and the pixel-based encoding regions according to the area ratio. 7. The encoding apparatus of claim 1, wherein the processor is further configured to obtain color information indicating the number of colors constituting the image, determine, according to the color information, a region in which the number of used colors is below a threshold value as the pixel-based encoding region, and determine the region in which the number of used colors is above the threshold value as the block-based encoding region. 8. The encoding apparatus of claim 1, wherein the processor is further configured to obtain pixel gradient information indicating values indicating size differences between pixel values of adjacent pixels with respect to pixels included in the image, select a maximum value from among the values indicating the size differences between the pixel values of the adjacent pixels according to the pixel gradient information, determine a region in which the maximum value is above a threshold value as the pixel-based encoding region, and determine a region in which the maximum value is below the threshold value as the block-based encoding region. 
9. The encoding apparatus of claim 1,
wherein the communicator is further configured to receive illumination information about illumination of a device reproducing the image, and wherein the processor is further configured to adjust a frame rate of the image according to a change in brightness of a display included in the device by the illumination information and determine the quantization parameter according to the change in the brightness of the display included in the device by the illumination information. 10. The encoding apparatus of claim 1,
wherein the communicator is further configured to receive power information indicating that a device reproducing the image is in a low power mode, and wherein, when the power information indicates the low power mode, the processor is further configured to reduce a frame rate of the image or convert a color format of the image and determine the quantization parameter to be greater than the quantization parameter of the low power mode. 11. A method of encoding an image, the method being performed by an encoding apparatus and comprising:
receiving, from a device, device information related to the device; obtaining image information of the image; determining a non-encoding region, a block-based encoding region, and a pixel-based encoding region of the image according to at least one of the device information and the image information; determining a quantization parameter for the block-based encoding region according to at least one of the device information and the image information; performing block-based encoding on the block-based encoding region by using the quantization parameter; performing pixel-based encoding on the pixel-based encoding region; generating an encoded image by entropy encoding a symbol determined by the block-based encoding and a symbol determined by the pixel-based encoding; and transmitting a bitstream comprising the encoded image, region information of the block-based encoding region and the pixel-based encoding region, and quantization information of the quantization parameter to the device. 12. A decoding apparatus for decoding an encoded image, the decoding apparatus comprising:
a communicator configured to transmit device information related to a device comprising the decoding apparatus to an encoding apparatus and receive, from the encoding apparatus, the encoded image generated by encoding an image and image information of the image, region information of a block-based encoding region to which block-based encoding is applied and a pixel-based encoding region to which pixel-based encoding is applied in a region of the image, and quantization information of a quantization parameter used in an encoding process of the block-based encoding region of the image; and a processor configured to entropy decode the encoded image, perform block-based decoding on a portion corresponding to the block-based encoding region of the entropy decoded image by using the quantization parameter, perform pixel-based decoding on a portion corresponding to the pixel-based encoding region of the entropy decoded image, and reconstruct an image that is to be reproduced by the device by combining the portion corresponding to the block-based encoding region of the block-based decoded image and the portion corresponding to the pixel-based encoding region of the pixel-based decoded image. 13. (canceled) 14. A non-transitory computer-readable recording medium having recorded thereon a computer program for executing the encoding method of claim 11. 15. A non-transitory computer-readable recording medium having recorded thereon a computer program for executing the decoding method of claim 13. 
| An encoding apparatus for encoding an image includes: a communicator configured to receive, from a device, device information related to the device; and a processor configured to encode the image by using image information of the image and the device information, wherein the processor is further configured to process the image according to at least one of the device information and the image information, determine a non-encoding region, a block-based encoding region, and a pixel-based encoding region of the image according to at least one of the device information and the image information, performs block-based encoding on the block-based encoding region by using a quantization parameter determined according to at least one of the device information and the image information, perform pixel-based encoding on the pixel-based encoding region, generates an encoded image by entropy encoding a symbol determined by the block-based encoding or the pixel-based encoding, and generate a bitstream comprising the encoded image, region information of the block-based encoding region and the pixel-based encoding region, and quantization information of the quantization parameter, and wherein the communicator is further configured to transmit the bitstream to the device.1. An encoding apparatus for encoding an image, the encoding apparatus comprising:
a communicator configured to receive, from a device, device information related to the device; and a processor configured to encode the image by using image information of the image and the device information, wherein the processor is further configured to determine a non-encoding region, a block-based encoding region, and a pixel-based encoding region of the image according to at least one of the device information and the image information, perform block-based encoding on the block-based encoding region by using a quantization parameter determined according to at least one of the device information and the image information, perform pixel-based encoding on the pixel-based encoding region, generate an encoded image by entropy encoding a symbol determined by the block-based encoding or a symbol determined by the pixel-based encoding, and generate a bitstream comprising the encoded image, region information of the block-based encoding region and the pixel-based encoding region, and quantization information of the quantization parameter, and wherein the communicator is further configured to transmit the bitstream to the device. 2. The encoding apparatus of claim 1, wherein the processor is further configured to perform intra prediction or inter prediction on a coding unit of the block-based encoding region, transform residual data generated from intra prediction or inter prediction of a block-based encoding region, and quantize the transformed residual data by using the quantization parameter. 3. The encoding apparatus of claim 1, wherein the processor is further configured to perform pixel-based prediction on an adjacent pixel with respect to each of pixels of the pixel-based encoding region, match residual data generated from pixel-based prediction of a pixel-based encoding region with a repeated residual string of the residual data, and compress the residual data by using a length and position information of the residual string. 4. 
The encoding apparatus of claim 1, wherein the processor is further configured to generate an index map comprising an index corresponding to colors indicated by pixels of the pixel-based encoding region, match the index map with a repeated index string of the index map, and compress the index map by using a length of the index string and position information. 5. The encoding apparatus of claim 1,
wherein the communicator is further configured to receive display shape information about a shape of a display included in a device reproducing the image, and wherein the processor is further configured to determine a region of the image that is not displayed on the display as the non-encoding region according to the display shape information, and wherein the non-encoding region is a region that is not encoded. 6. The encoding apparatus of claim 1, wherein the processor is further configured to split the image into a plurality of regions, expand sizes of the split regions such that the split regions are split in an integer number of coding units used by a block-based encoding region, obtain an area ratio of the expanded regions with respect to unexpanded regions, and determine the regions as one of the block-based encoding regions and the pixel-based encoding regions according to the area ratio. 7. The encoding apparatus of claim 1, wherein the processor is further configured to obtain color information indicating the number of colors constituting the image, determine, according to the color information, a region in which the number of used colors is below a threshold value as the pixel-based encoding region, and determine the region in which the number of used colors is above the threshold value as the block-based encoding region. 8. The encoding apparatus of claim 1, wherein the processor is further configured to obtain pixel gradient information indicating values indicating size differences between pixel values of adjacent pixels with respect to pixels included in the image, select a maximum value from among the values indicating the size differences between the pixel values of the adjacent pixels according to the pixel gradient information, determine a region in which the maximum value is above a threshold value as the pixel-based encoding region, and determine a region in which the maximum value is below the threshold value as the block-based encoding region. 
9. The encoding apparatus of claim 1,
wherein the communicator is further configured to receive illumination information about illumination of a device reproducing the image, and wherein the processor is further configured to adjust a frame rate of the image according to a change in brightness of a display included in the device by the illumination information and determine the quantization parameter according to the change in the brightness of the display included in the device by the illumination information. 10. The encoding apparatus of claim 1,
wherein the communicator is further configured to receive power information indicating that a device reproducing the image is in a low power mode, and wherein, when the power information indicates the low power mode, the processor is further configured to reduce a frame rate of the image or convert a color format of the image and determine the quantization parameter to be greater than the quantization parameter of the low power mode. 11. A method of encoding an image, the method being performed by an encoding apparatus and comprising:
receiving, from a device, device information related to the device; obtaining image information of the image; determining a non-encoding region, a block-based encoding region, and a pixel-based encoding region of the image according to at least one of the device information and the image information; determining a quantization parameter for the block-based encoding region according to at least one of the device information and the image information; performing block-based encoding on the block-based encoding region by using the quantization parameter; performing pixel-based encoding on the pixel-based encoding region; generating an encoded image by entropy encoding a symbol determined by the block-based encoding and a symbol determined by the pixel-based encoding; and transmitting a bitstream comprising the encoded image, region information of the block-based encoding region and the pixel-based encoding region, and quantization information of the quantization parameter to the device. 12. A decoding apparatus for decoding an encoded image, the decoding apparatus comprising:
a communicator configured to transmit device information related to a device comprising the decoding apparatus to an encoding apparatus and receive, from the encoding apparatus, the encoded image generated by encoding an image and image information of the image, region information of a block-based encoding region to which block-based encoding is applied and a pixel-based encoding region to which pixel-based encoding is applied in a region of the image, and quantization information of a quantization parameter used in an encoding process of the block-based encoding region of the image; and a processor configured to entropy decode the encoded image, perform block-based decoding on a portion corresponding to the block-based encoding region of the entropy decoded image by using the quantization parameter, perform pixel-based decoding on a portion corresponding to the pixel-based encoding region of the entropy decoded image, and reconstruct an image that is to be reproduced by the device by combining the portion corresponding to the block-based encoding region of the block-based decoded image and the portion corresponding to the pixel-based encoding region of the pixel-based decoded image. 13. (canceled) 14. A non-transitory computer-readable recording medium having recorded thereon a computer program for executing the encoding method of claim 11. 15. A non-transitory computer-readable recording medium having recorded thereon a computer program for executing the decoding method of claim 13. | 2,400 |
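Claims 7 and 8 of this record describe two heuristics for deciding whether a region of the image should be block-based or pixel-based encoded: one counts the distinct colours used, the other looks at the maximum difference between adjacent pixel values. A rough sketch, assuming a region is given as a flat list of pixel values; the function names and thresholds are illustrative, not from the patent:

```python
def classify_by_color_count(pixels, threshold):
    """Claim 7: a region in which the number of used colours is below the
    threshold becomes a pixel-based encoding region; otherwise block-based."""
    return "pixel-based" if len(set(pixels)) < threshold else "block-based"

def classify_by_gradient(pixels, threshold):
    """Claim 8: a region whose maximum size difference between adjacent
    pixel values is above the threshold becomes pixel-based; otherwise
    block-based."""
    max_diff = max(abs(a - b) for a, b in zip(pixels, pixels[1:]))
    return "pixel-based" if max_diff > threshold else "block-based"
```

In the claimed apparatus these decisions would feed the region information carried in the bitstream alongside the encoded image and quantization information.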
8,625 | 8,625 | 14,767,623 | 2,411 | A method of adapting operation of self-organizing network functions in a communication network comprising a big data level system and a self-organizing network system and at least one network element is provided, wherein the method comprises adapting the operation of at least one self-organizing network function by using knowledge achieved by analysis performed on the big data level. | 1.-13. (canceled) 14. A method of adapting operation of self-organizing network functions in a communication network comprising a big data system and a self-organizing network system, the method comprising:
adapting the operation of at least one self-organizing network function by using knowledge achieved by analysis performed on the big data system. 15. The method according to claim 14, wherein the adaptation of the operation is performed by adapting and/or creating at least one profile of a set of profiles. 16. The method according to claim 15, wherein the creation or updating is based on information of certain identified data entities or their relations. 17. The method according to claim 15, wherein the at least one adapted profile is used by a corresponding SON function. 18. The method according to claim 14, further comprising training the at least one self-organizing network function based on the adapting of the operation of at least one self-organizing network function. 19. The method according to claim 14, further comprising:
performing the analysis at an analytics module of the big data system. 20. The method according to claim 19, further comprising transferring the result of the analysis to the self-organizing network system. 21. The method according to claim 14, wherein data in the big data system includes minimization of drive tests (MDT). 22. The method according to claim 14, wherein data in the big data system includes location based data from user equipments (UEs). 23. The method according to claim 15, wherein data in the big data system includes data from at least one of macro, micro, pico and femto cell layers of a radio access technology (RAT). 24. The method according to claim 14, further comprising training the at least one self-organizing network function based at least on data of a network domain of the at least one self-organizing network function, and data of another network domain of another self-organizing network function. 25. The method according to claim 14, further comprising training the at least one self-organizing network function based at least on data of a cell of the at least one self-organizing network function, and data of another cell of another self-organizing network function. 26. The method according to claim 14, further comprising training the at least one self-organizing network function based at least on data of another self-organizing network function with deployment similar to the at least one self-organizing network function. 27. The method according to claim 14, further comprising verifying operation of the at least one self-organizing network function. 28. The method according to claim 27, wherein the verifying further comprises assessing aggregate behaviour of the at least one self-organizing network function after at least one instance of the at least one self-organizing network function has been executed. 29. 
The method according to claim 27, wherein the verifying further comprises checking whether a change to the at least one self-organizing network function leads to an intended change in operation of the at least one self-organizing network function. 30. The method according to claim 29, further comprising rolling back the change to the at least one self-organizing network function. 31. A big data system for a communication network comprising a self-organizing network system, wherein the big data system comprises an analytics module adapted to perform an analysis on data collected by network elements and to provide results of the analysis to a self-organizing network function. 32. A self-organizing network system for a communication network, the self-organizing network system comprising self-organizing functions, wherein the self-organizing network system is adapted to receive analysis results of an analytics module of a big data system. 33. The self-organizing network system according to claim 32, wherein the self-organizing network system is further adapted to adapt an operation of at least one of the self-organizing functions based on the received analysis results. | A method of adapting operation of self-organizing network functions in a communication network comprising a big data level system and a self-organizing network system and at least one network element is provided, wherein the method comprises adapting the operation of at least one self-organizing network function by using knowledge achieved by analysis performed on the big data level.1.-13. (canceled) 14. A method of adapting operation of self-organizing network functions in a communication network comprising a big data system and a self-organizing network system, the method comprising:
adapting the operation of at least one self-organizing network function by using knowledge achieved by analysis performed on the big data system. 15. The method according to claim 14, wherein the adaptation of the operation is performed by adapting and/or creating at least one profile of a set of profiles. 16. The method according to claim 15, wherein the creation or updating is based on information of certain identified data entities or their relations. 17. The method according to claim 15, wherein the at least one adapted profile is used by a corresponding SON function. 18. The method according to claim 14, further comprising training the at least one self-organizing network function based on the adapting of the operation of at least one self-organizing network function. 19. The method according to claim 14, further comprising:
performing the analysis at an analytics module of the big data system. 20. The method according to claim 19, further comprising transferring the result of the analysis to the self-organizing network system. 21. The method according to claim 14, wherein data in the big data system includes minimization of drive tests (MDT). 22. The method according to claim 14, wherein data in the big data system includes location based data from user equipments (UEs). 23. The method according to claim 15, wherein data in the big data system includes data from at least one of macro, micro, pico and femto cell layers of a radio access technology (RAT). 24. The method according to claim 14, further comprising training the at least one self-organizing network function based at least on data of a network domain of the at least one self-organizing network function, and data of another network domain of another self-organizing network function. 25. The method according to claim 14, further comprising training the at least one self-organizing network function based at least on data of a cell of the at least one self-organizing network function, and data of another cell of another self-organizing network function. 26. The method according to claim 14, further comprising training the at least one self-organizing network function based at least on data of another self-organizing network function with deployment similar to the at least one self-organizing network function. 27. The method according to claim 14, further comprising verifying operation of the at least one self-organizing network function. 28. The method according to claim 27, wherein the verifying further comprises assessing aggregate behaviour of the at least one self-organizing network function after at least one instance of the at least one self-organizing network function has been executed. 29. 
The method according to claim 27, wherein the verifying further comprises checking whether a change to the at least one self-organizing network function leads to an intended change in operation of the at least one self-organizing network function. 30. The method according to claim 29, further comprising rolling back the change to the at least one self-organizing network function. 31. A big data system for a communication network comprising a self-organizing network system, wherein the big data system comprises an analytics module adapted to perform an analysis on data collected by network elements and to provide results of the analysis to a self-organizing network function. 32. A self-organizing network system for a communication network, the self-organizing network system comprising self-organizing functions, wherein the self-organizing network system is adapted to receive analysis results of an analytics module of a big data system. 33. The self-organizing network system according to claim 32, wherein the self-organizing network system is further adapted to adapt an operation of at least one of the self-organizing functions based on the received analysis results. | 2,400 |
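Claims 27–30 of this record describe verifying a change to a self-organizing network (SON) function and rolling the change back when it does not lead to the intended change in operation. A minimal sketch under stated assumptions: the SON function profile is modelled as a plain dictionary, and `measure_kpi` is a caller-supplied hypothetical that returns a higher value for better network performance.

```python
def apply_change_with_verification(profile, change, measure_kpi):
    """Apply a change to a SON function profile, check whether it leads to
    the intended improvement (claim 29), and roll it back if not (claim 30)."""
    kpi_before = measure_kpi(profile)
    backup = dict(profile)          # snapshot for a possible rollback
    profile.update(change)
    if measure_kpi(profile) <= kpi_before:  # intended change not observed
        profile.clear()
        profile.update(backup)              # roll back the change (claim 30)
        return False
    return True
```

In practice the verification would assess aggregate behaviour over at least one executed instance of the function (claim 28) rather than a single KPI sample.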
8,626 | 8,626 | 12,647,192 | 2,483 | This invention provides a system and method for runtime determination (self-diagnosis) of camera miscalibration (accuracy), typically related to camera extrinsics, based on historical statistics of runtime alignment scores for objects acquired in the scene, which are defined based on matching of observed and expected image data of trained object models. This arrangement avoids a need to cease runtime operation of the vision system and/or stop the production line that is served by the vision system to diagnose if the system's camera(s) remain calibrated. Under the assumption that objects or features inspected by the vision system over time are substantially the same, the vision system accumulates statistics of part alignment results and stores intermediate results to be used as indicator of current system accuracy. For multi-camera vision systems, cross validation is illustratively employed to identify individual problematic cameras. The system and method allows for faster, less-expensive and more-straightforward diagnosis of vision system failures related to deteriorating camera calibration. | 1. A method for determining camera miscalibration in a system of at least three cameras, comprising the steps of:
a) calibrating the at least three cameras, including finding respective extrinsic calibration parameters for each of the at least three cameras; b) finding a first feature in three-dimensional space with a first plurality of the at least three cameras and determining a first measurement of the first feature; c) finding a second feature in three-dimensional space with a second plurality of the at least three cameras and determining a second measurement of the second feature; and d) comparing the first measurement with the second measurement with respect to at least one of (i) an accuracy determined during step (a), (ii) a desired system accuracy, and (iii) a known property of the first feature and the second feature. 2. The method as set forth in claim 1 wherein the first feature is either one of (a) substantially the same as the second feature and (b) different than the second feature. 3. The method as set forth in claim 2 wherein the first measurement comprises an estimated first location of the first feature and the second measurement comprises an estimated second location of the second feature. 4. The method as set forth in claim 2 wherein the first measurement comprises a score of success in finding the first feature and the second measurement comprises a score of success in finding the second feature. 5. The method as set forth in claim 1 wherein step (d) includes computing a discrepancy between the first measurement and the second measurement and comparing the discrepancy with respect to at least one of (i) the accuracy determined during step (a), (ii) the desired system accuracy, and (iii) a known property of the first feature and the second feature. 6. 
The method as set forth in claim 5 wherein the known property includes a known distance between the first feature and the second feature, and the first plurality of at least three cameras includes a first grouping of at least two cameras and the second plurality of at least three cameras includes a second grouping of at least two cameras. 7. The method as set forth in claim 1 further comprising, in response to step (d) issuing a signal indicating recalibration is required based upon a result of the step of comparing exceeding at least one of the (i) accuracy determined during step (a) and (ii) the desired system accuracy. 8. The method as set forth in claim 7 wherein the step of issuing includes generating new extrinsic calibration parameters based upon step (d) and providing the extrinsic calibration parameters to at least one of the at least three cameras so as to recalibrate the at least one of the at least three cameras. 9. The method as set forth in claim 1 wherein the accuracy determined during step (a) includes a collection of values based upon calibration residual errors. 10. The method as set forth in claim 9 further comprising providing the new extrinsic parameters so as to recalibrate at least one of the at least three cameras in accordance with step (a). 11. The method as set forth in claim 1 wherein the system of at least three cameras includes a machine vision system inspection function so as to perform runtime machine vision inspection to objects that pass through a volume space viewed by the at least three cameras. 12. The method as set forth in claim 1 wherein the desired system accuracy is based upon historical values for each of the first measurement and the second measurement. 13. The method as set forth in claim 1 wherein the desired system accuracy is based upon a predetermined threshold value. 14. The method as set forth in claim 13 wherein the threshold value is defined based upon a desired accuracy of a runtime vision system task. 15. 
The method as set forth in claim 1 further comprising providing intrinsic parameters for at least one of the at least three cameras in step (a) and recalibrating the at least one of the at least three cameras based upon new intrinsic parameters. 16. The method as set forth in claim 1 wherein the known property includes a known distance between the first feature and the second feature, and the first plurality of at least three cameras includes a first grouping of at least two cameras and the second plurality of at least three cameras includes a second grouping of at least two cameras. 17. A method for determining camera miscalibration in a system of at least three cameras, comprising the steps of:
a) calibrating the at least three cameras, including finding respective extrinsic calibration parameters for each of the at least three cameras; b) finding a first object pose in three-dimensional space with a first plurality of the at least three cameras and determining a first measurement of the first object pose; c) finding a second object pose in three-dimensional space with a second plurality of the at least three cameras and determining a second measurement of the second object pose; and d) comparing the first measurement with the second measurement with respect to at least one of (i) an accuracy determined during step (a) and (ii) a desired system accuracy. 18. The method as set forth in claim 17 wherein the first measurement is a first pose score and the second measurement is a second pose score. 19. A system for determining camera miscalibration in a system of at least three cameras, comprising:
a) at least three cameras, each including respective extrinsic calibration parameters; b) a first plurality of the at least three cameras that find a first feature in three-dimensional space and determine a first measurement of the first feature; c) a second plurality of the at least three cameras that find a second feature in three-dimensional space and determine a second measurement of the second feature; and d) a comparison process that compares the first measurement with the second measurement with respect to at least one of (i) an accuracy associated with the extrinsic calibration parameters, (ii) a desired system accuracy, and (iii) a known property of the first feature and the second feature. 20. The system as set forth in claim 19 wherein the known property includes a known distance between the first feature and the second feature, and the first plurality of at least three cameras includes a first grouping of at least two cameras and the second plurality of at least three cameras includes a second grouping of at least two cameras. 21. The system as set forth in claim 20 wherein the at least two cameras of the first grouping each differ from the at least two cameras of the second grouping. 22. A system for determining camera miscalibration in a system of at least three cameras, comprising:
a) at least three cameras, each including respective extrinsic calibration parameters; b) a first plurality of the at least three cameras that find a first object pose in three-dimensional space and determine a first measurement of the first object pose; c) a second plurality of the at least three cameras that find a second object pose in three-dimensional space and determine a second measurement of the second object pose; and d) a comparison process that compares the first measurement with the second measurement with respect to at least one of (i) an accuracy associated with the extrinsic calibration parameters and (ii) a desired system accuracy. | This invention provides a system and method for runtime determination (self-diagnosis) of camera miscalibration (accuracy), typically related to camera extrinsics, based on historical statistics of runtime alignment scores for objects acquired in the scene, which are defined based on matching of observed and expected image data of trained object models. This arrangement avoids a need to cease runtime operation of the vision system and/or stop the production line that is served by the vision system to diagnose if the system's camera(s) remain calibrated. Under the assumption that objects or features inspected by the vision system over time are substantially the same, the vision system accumulates statistics of part alignment results and stores intermediate results to be used as an indicator of current system accuracy. For multi-camera vision systems, cross validation is illustratively employed to identify individual problematic cameras. The system and method allow for faster, less-expensive and more-straightforward diagnosis of vision system failures related to deteriorating camera calibration. 1. A method for determining camera miscalibration in a system of at least three cameras, comprising the steps of:
a) calibrating the at least three cameras, including finding respective extrinsic calibration parameters for each of the at least three cameras; b) finding a first feature in three-dimensional space with a first plurality of the at least three cameras and determining a first measurement of the first feature; c) finding a second feature in three-dimensional space with a second plurality of the at least three cameras and determining a second measurement of the second feature; and d) comparing the first measurement with the second measurement with respect to at least one of (i) an accuracy determined during step (a), (ii) a desired system accuracy, and (iii) a known property of the first feature and the second feature. 2. The method as set forth in claim 1 wherein the first feature is either one of (a) substantially the same as the second feature and (b) different than the second feature. 3. The method as set forth in claim 2 wherein the first measurement comprises an estimated first location of the first feature and the second measurement comprises an estimated second location of the second feature. 4. The method as set forth in claim 2 wherein the first measurement comprises a score of success in finding the first feature and the second measurement comprises a score of success in finding the second feature. 5. The method as set forth in claim 1 wherein step (d) includes computing a discrepancy between the first measurement and the second measurement and comparing the discrepancy with respect to at least one of (i) the accuracy determined during step (a), (ii) the desired system accuracy, and (iii) a known property of the first feature and the second feature. 6. 
The method as set forth in claim 5 wherein the known property includes a known distance between the first feature and the second feature, and the first plurality of at least three cameras includes a first grouping of at least two cameras and the second plurality of at least three cameras includes a second grouping of at least two cameras. 7. The method as set forth in claim 1 further comprising, in response to step (d), issuing a signal indicating recalibration is required based upon a result of the step of comparing exceeding at least one of (i) the accuracy determined during step (a) and (ii) the desired system accuracy. 8. The method as set forth in claim 7 wherein the step of issuing includes generating new extrinsic calibration parameters based upon step (d) and providing the extrinsic calibration parameters to at least one of the at least three cameras so as to recalibrate the at least one of the at least three cameras. 9. The method as set forth in claim 1 wherein the accuracy determined during step (a) includes a collection of values based upon calibration residual errors. 10. The method as set forth in claim 9 further comprising providing the new extrinsic parameters so as to recalibrate at least one of the at least three cameras in accordance with step (a). 11. The method as set forth in claim 1 wherein the system of at least three cameras includes a machine vision system inspection function so as to perform runtime machine vision inspection on objects that pass through a volume space viewed by the at least three cameras. 12. The method as set forth in claim 1 wherein the desired system accuracy is based upon historical values for each of the first measurement and the second measurement. 13. The method as set forth in claim 1 wherein the desired system accuracy is based upon a predetermined threshold value. 14. The method as set forth in claim 13 wherein the threshold value is defined based upon a desired accuracy of a runtime vision system task. 15. 
The method as set forth in claim 1 further comprising providing intrinsic parameters for at least one of the at least three cameras in step (a) and recalibrating the at least one of the at least three cameras based upon new intrinsic parameters. 16. The method as set forth in claim 1 wherein the known property includes a known distance between the first feature and the second feature, and the first plurality of at least three cameras includes a first grouping of at least two cameras and the second plurality of at least three cameras includes a second grouping of at least two cameras. 17. A method for determining camera miscalibration in a system of at least three cameras, comprising the steps of:
a) calibrating the at least three cameras, including finding respective extrinsic calibration parameters for each of the at least three cameras; b) finding a first object pose in three-dimensional space with a first plurality of the at least three cameras and determining a first measurement of the first object pose; c) finding a second object pose in three-dimensional space with a second plurality of the at least three cameras and determining a second measurement of the second object pose; and d) comparing the first measurement with the second measurement with respect to at least one of (i) an accuracy determined during step (a) and (ii) a desired system accuracy. 18. The method as set forth in claim 17 wherein the first measurement is a first pose score and the second measurement is a second pose score. 19. A system for determining camera miscalibration in a system of at least three cameras, comprising:
a) at least three cameras, each including respective extrinsic calibration parameters; b) a first plurality of the at least three cameras that find a first feature in three-dimensional space and determine a first measurement of the first feature; c) a second plurality of the at least three cameras that find a second feature in three-dimensional space and determine a second measurement of the second feature; and d) a comparison process that compares the first measurement with the second measurement with respect to at least one of (i) an accuracy associated with the extrinsic calibration parameters, (ii) a desired system accuracy, and (iii) a known property of the first feature and the second feature. 20. The system as set forth in claim 19 wherein the known property includes a known distance between the first feature and the second feature, and the first plurality of at least three cameras includes a first grouping of at least two cameras and the second plurality of at least three cameras includes a second grouping of at least two cameras. 21. The system as set forth in claim 20 wherein the at least two cameras of the first grouping each differ from the at least two cameras of the second grouping. 22. A system for determining camera miscalibration in a system of at least three cameras, comprising:
a) at least three cameras, each including respective extrinsic calibration parameters; b) a first plurality of the at least three cameras that find a first object pose in three-dimensional space and determine a first measurement of the first object pose; c) a second plurality of the at least three cameras that find a second object pose in three-dimensional space and determine a second measurement of the second object pose; and d) a comparison process that compares the first measurement with the second measurement with respect to at least one of (i) an accuracy associated with the extrinsic calibration parameters and (ii) a desired system accuracy. | 2,400 |
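The comparison of step (d) reduces to a numeric check: two camera subgroups each estimate the same 3D feature, and a discrepancy larger than the accepted accuracy signals miscalibration. The following sketch is illustrative only; the function and variable names are hypothetical (not drawn from the patent), and the threshold stands in for either the calibration-time residual accuracy or a desired system accuracy:

```python
import math

def check_miscalibration(meas_a, meas_b, threshold):
    """Compare two 3D location estimates of the same feature, produced by
    two different camera subgroups, against an accuracy threshold."""
    # Discrepancy between the two estimated feature locations (step (d)).
    discrepancy = math.dist(meas_a, meas_b)
    # A discrepancy above the threshold suggests at least one subgroup's
    # extrinsic parameters have drifted -> signal that recalibration is needed.
    return discrepancy, discrepancy > threshold

# Subgroup A and subgroup B estimate the same feature's 3D position:
d, recalibrate = check_miscalibration((0.0, 0.0, 1.0), (0.0, 0.3, 1.0), threshold=0.1)
```

Here the 0.3-unit discrepancy exceeds the 0.1-unit threshold, so the sketch would flag the system for recalibration.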
8,627 | 8,627 | 15,013,066 | 2,455 | One or more examples provide a method of performing a REST API operation at a server computing system. The method includes receiving a request of a hypertext transfer protocol (HTTP) session from a client computing system. The request includes data for requesting performance of the REST API operation and issuance of progress updates. The method further includes sending a first part of a response of the HTTP session to the client computing system. The first part of the response acknowledges the request. The method further includes sending, while the REST API operation is performed, at least one additional part of the response to the client computing system, each additional part of the response having a progress update for the REST API operation. The method further includes sending, upon completion of the REST API operation, a final part of the response to the client computing system having a result of the REST API operation. | 1. A method of performing a representational state transfer (REST) application programming interface (API) operation at a server computing system, comprising:
receiving a request of a hypertext transfer protocol (HTTP) session from a client computing system, the request requesting performance of the REST API operation and issuance of progress updates; sending a first part of a response of the HTTP session to the client computing system, the first part of the response acknowledging the request; sending, while the REST API operation is performed, at least one additional part of the response to the client computing system, each additional part of the response having a progress update for the REST API operation; and sending, upon completion of the REST API operation, a final part of the response to the client computing system having a result of the REST API operation. 2. The method of claim 1, wherein the request comprises an HTTP method having an accept header requesting the issuance of progress updates. 3. The method of claim 2, wherein the accept header specifies a multi-purpose internet mail extensions (MIME) content type of multipart/x-mixed-replace to request the issuance of progress updates. 4. The method of claim 1, wherein the request comprises an HTTP GET method having a uniform resource identifier (URI) requesting the issuance of progress updates. 5. The method of claim 4, wherein the URI includes a query parameter to request the issuance of progress updates. 6. The method of claim 1, wherein a status line of the response includes a “202 accepted” status code and wherein the final part of the response includes a custom header having a final status code for the response. 7. The method of claim 1, wherein each of the at least one additional part of the response includes a content-type header specifying a content-type and a message body specifying content formatted in accordance with the content-type and including a progress update. 8. A computing system, comprising:
a memory configured with a representational state transfer (REST) application programming interface (API) and instructions; and a processor, coupled to the memory, configured to execute the instructions to:
receive a request of a hypertext transfer protocol (HTTP) session from a client, the request requesting performance of a REST API operation in the REST API and issuance of progress updates;
send a first part of a response of the HTTP session to the client, the first part of the response acknowledging the request;
send, while the REST API operation is performed, at least one additional part of the response to the client, each additional part of the response having a progress update for the REST API operation; and
send, upon completion of the REST API operation, a final part of the response to the client having a result of the REST API operation. 9. The computing system of claim 8, wherein the request comprises an HTTP method having an accept header requesting the issuance of progress updates. 10. The computing system of claim 9, wherein the accept header specifies a multi-purpose internet mail extensions (MIME) content type of multipart/x-mixed-replace to request the issuance of progress updates. 11. The computing system of claim 8, wherein the request comprises an HTTP GET method having a uniform resource identifier (URI) requesting the issuance of progress updates. 12. The computing system of claim 11, wherein the URI includes a query parameter to request the issuance of progress updates. 13. The computing system of claim 8, wherein a status line of the response includes a “202 accepted” status code and wherein the final part of the response includes a custom header having a final status code for the response. 14. The computing system of claim 8, wherein each of the at least one additional part of the response includes a content-type header specifying a content-type and a message body specifying content formatted in accordance with the content-type and including a progress update. 15. A non-transitory computer readable medium having instructions stored thereon that when executed by a processor cause the processor to perform a method of performing a representational state transfer (REST) application programming interface (API) operation at a server computing system, comprising:
receiving a request of a hypertext transfer protocol (HTTP) session from a client computing system, the request requesting performance of the REST API operation and issuance of progress updates; sending a first part of a response of the HTTP session to the client computing system, the first part of the response acknowledging the request; sending, while the REST API operation is performed, at least one additional part of the response to the client computing system, each additional part of the response having a progress update for the REST API operation; and sending, upon completion of the REST API operation, a final part of the response to the client computing system having a result of the REST API operation. 16. The non-transitory computer readable medium of claim 15, wherein the request comprises an HTTP method having an accept header requesting the issuance of progress updates. 17. The non-transitory computer readable medium of claim 16, wherein the accept header specifies a multi-purpose internet mail extensions (MIME) content type of multipart/x-mixed-replace to request the issuance of progress updates. 18. The non-transitory computer readable medium of claim 15, wherein the request comprises an HTTP GET method having a uniform resource identifier (URI) requesting the issuance of progress updates. 19. The non-transitory computer readable medium of claim 18, wherein the URI includes a query parameter to request the issuance of progress updates. 20. The non-transitory computer readable medium of claim 15, wherein each of the at least one additional part of the response includes a content-type header specifying a content-type and a message body specifying content formatted in accordance with the content-type and including a progress update. | One or more examples provide a method of performing a REST API operation at a server computing system. The method includes receiving a request of a hypertext transfer protocol (HTTP) session from a client computing system.
The request includes data for requesting performance of the REST API operation and issuance of progress updates. The method further includes sending a first part of a response of the HTTP session to the client computing system. The first part of the response acknowledges the request. The method further includes sending, while the REST API operation is performed, at least one additional part of the response to the client computing system, each additional part of the response having a progress update for the REST API operation. The method further includes sending, upon completion of the REST API operation, a final part of the response to the client computing system having a result of the REST API operation. 1. A method of performing a representational state transfer (REST) application programming interface (API) operation at a server computing system, comprising:
receiving a request of a hypertext transfer protocol (HTTP) session from a client computing system, the request requesting performance of the REST API operation and issuance of progress updates; sending a first part of a response of the HTTP session to the client computing system, the first part of the response acknowledging the request; sending, while the REST API operation is performed, at least one additional part of the response to the client computing system, each additional part of the response having a progress update for the REST API operation; and sending, upon completion of the REST API operation, a final part of the response to the client computing system having a result of the REST API operation. 2. The method of claim 1, wherein the request comprises an HTTP method having an accept header requesting the issuance of progress updates. 3. The method of claim 2, wherein the accept header specifies a multi-purpose internet mail extensions (MIME) content type of multipart/x-mixed-replace to request the issuance of progress updates. 4. The method of claim 1, wherein the request comprises an HTTP GET method having a uniform resource identifier (URI) requesting the issuance of progress updates. 5. The method of claim 4, wherein the URI includes a query parameter to request the issuance of progress updates. 6. The method of claim 1, wherein a status line of the response includes a “202 accepted” status code and wherein the final part of the response includes a custom header having a final status code for the response. 7. The method of claim 1, wherein each of the at least one additional part of the response includes a content-type header specifying a content-type and a message body specifying content formatted in accordance with the content-type and including a progress update. 8. A computing system, comprising:
a memory configured with a representational state transfer (REST) application programming interface (API) and instructions; and a processor, coupled to the memory, configured to execute the instructions to:
receive a request of a hypertext transfer protocol (HTTP) session from a client, the request requesting performance of a REST API operation in the REST API and issuance of progress updates;
send a first part of a response of the HTTP session to the client, the first part of the response acknowledging the request;
send, while the REST API operation is performed, at least one additional part of the response to the client, each additional part of the response having a progress update for the REST API operation; and
send, upon completion of the REST API operation, a final part of the response to the client having a result of the REST API operation. 9. The computing system of claim 8, wherein the request comprises an HTTP method having an accept header requesting the issuance of progress updates. 10. The computing system of claim 9, wherein the accept header specifies a multi-purpose internet mail extensions (MIME) content type of multipart/x-mixed-replace to request the issuance of progress updates. 11. The computing system of claim 8, wherein the request comprises an HTTP GET method having a uniform resource identifier (URI) requesting the issuance of progress updates. 12. The computing system of claim 11, wherein the URI includes a query parameter to request the issuance of progress updates. 13. The computing system of claim 8, wherein a status line of the response includes a “202 accepted” status code and wherein the final part of the response includes a custom header having a final status code for the response. 14. The computing system of claim 8, wherein each of the at least one additional part of the response includes a content-type header specifying a content-type and a message body specifying content formatted in accordance with the content-type and including a progress update. 15. A non-transitory computer readable medium having instructions stored thereon that when executed by a processor cause the processor to perform a method of performing a representational state transfer (REST) application programming interface (API) operation at a server computing system, comprising:
receiving a request of a hypertext transfer protocol (HTTP) session from a client computing system, the request requesting performance of the REST API operation and issuance of progress updates; sending a first part of a response of the HTTP session to the client computing system, the first part of the response acknowledging the request; sending, while the REST API operation is performed, at least one additional part of the response to the client computing system, each additional part of the response having a progress update for the REST API operation; and sending, upon completion of the REST API operation, a final part of the response to the client computing system having a result of the REST API operation. 16. The non-transitory computer readable medium of claim 15, wherein the request comprises an HTTP method having an accept header requesting the issuance of progress updates. 17. The non-transitory computer readable medium of claim 16, wherein the accept header specifies a multi-purpose internet mail extensions (MIME) content type of multipart/x-mixed-replace to request the issuance of progress updates. 18. The non-transitory computer readable medium of claim 15, wherein the request comprises an HTTP GET method having a uniform resource identifier (URI) requesting the issuance of progress updates. 19. The non-transitory computer readable medium of claim 18, wherein the URI includes a query parameter to request the issuance of progress updates. 20. The non-transitory computer readable medium of claim 15, wherein each of the at least one additional part of the response includes a content-type header specifying a content-type and a message body specifying content formatted in accordance with the content-type and including a progress update. | 2,400 |
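The claims describe a single HTTP response carried as a multipart/x-mixed-replace body: an acknowledging first part, one part per progress update, and a final part with the result. The sketch below is a minimal, hedged illustration of that wire format (the boundary string, part bodies, and function name are hypothetical, not taken from the claims):

```python
BOUNDARY = "progress-boundary"  # illustrative boundary string

def multipart_progress_response(updates, final_result):
    """Yield the parts of one HTTP response body of content type
    multipart/x-mixed-replace: an acknowledging part, one part per
    progress update, and a final part carrying the operation result."""
    def part(body, content_type="application/json"):
        # Each part carries its own content-type header and a message body
        # formatted in accordance with that content-type.
        return (f"--{BOUNDARY}\r\n"
                f"Content-Type: {content_type}\r\n\r\n"
                f"{body}\r\n")
    yield part('{"status": "accepted"}')   # first part: acknowledge request
    for pct in updates:                    # additional parts: progress updates
        yield part(f'{{"progress": {pct}}}')
    yield part(final_result)               # final part: operation result
    yield f"--{BOUNDARY}--\r\n"            # closing boundary ends the response

parts = list(multipart_progress_response([25, 50, 75], '{"result": "done"}'))
```

Because all parts belong to one response of one HTTP session, the client keeps reading the same connection until the closing boundary arrives, matching the claimed flow of acknowledgment, progress updates, and result.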
8,628 | 8,628 | 15,442,073 | 2,416 | A novel method for fully utilizing the multicast or broadcast capability of a physical network is provided. The method identifies segments of the network within which broadcast traffic, multicast traffic, or traffic to unknown recipients (BUM traffic) is allowed or enabled. The identified segment encompasses the parts of the network that the BUM traffic is able to reach while excluding the parts of the network that the BUM traffic is unable to reach. Each identified segment includes network nodes that are interconnected by physical network hardware that supports BUM traffic. The method identifies multiple BUM traffic segments in a given network, each of which supports its own BUM traffic. The different BUM traffic segments are interconnected by physical network hardware that does not support BUM network traffic. Each identified segment is assigned an identifier that uniquely distinguishes the identified segment from other identified segments. | 1-24. (canceled) 25. A method for identifying a network segment comprising a first network node and a plurality of additional network nodes, the method to be performed by the first network node, the first network node storing a first identifier as identifying the network segment, the method comprising:
sending, to the plurality of additional network nodes, a first message that indicates the first identifier as a network segment identifier; in response to receiving a second message, sent by a first of the additional network nodes and comprising a second identifier for identifying the network segment, continuing to use the stored first identifier to identify the network segment based on the received second identifier ranking lower than the stored first identifier; and in response to receiving a third message, sent by a second of the additional network nodes and comprising a third identifier for identifying the network segment, replacing the first identifier with the third identifier as identifying the network segment based on the received third identifier ranking higher than the stored first identifier. 26. The method of claim 25, wherein the network segment comprises network nodes with connectivity using a particular protocol. 27. The method of claim 26, wherein the first, second, and third messages are sent using the particular protocol. 28. The method of claim 27, wherein the particular protocol comprises a broadcast protocol. 29. The method of claim 27, wherein the particular protocol comprises a multicast protocol. 30. The method of claim 27, wherein the particular protocol comprises a broadcast, unknown unicast, multicast (BUM) protocol. 31. The method of claim 25, wherein the first identifier is based on a subnet address of a node in the network segment. 32. The method of claim 25, wherein the rank of an identifier is based on a media access control (MAC) address of a particular network node from which the identifier is received. 33. The method of claim 32, wherein sending the first message comprises sending the first message when a particular timer expires. 34. The method of claim 33 further comprising resetting the particular timer each time the first message is sent. 35. 
A non-transitory machine readable medium storing a program which when executed on a set of processing units of a first network node identifies a network segment comprising the first network node and a plurality of additional network nodes, the first network node storing a first identifier as identifying the network segment, the program comprising sets of instructions for:
sending, to the plurality of additional network nodes, a first message that indicates the first identifier as a network segment identifier; receiving a second message, sent by one of the additional network nodes, comprising a second identifier for identifying the network segment; when the received second identifier ranks higher than the stored first identifier, replacing the first identifier with the second identifier as identifying the network segment; and when the received second identifier ranks lower than the stored first identifier, continuing to use the stored first identifier to identify the network segment. 36. The non-transitory machine readable medium of claim 35, wherein the network segment comprises network nodes with connectivity using a particular protocol. 37. The non-transitory machine readable medium of claim 36, wherein the first and second messages are sent using the particular protocol. 38. The non-transitory machine readable medium of claim 37, wherein the particular protocol comprises a broadcast protocol. 39. The non-transitory machine readable medium of claim 37, wherein the particular protocol comprises a multicast protocol. 40. The non-transitory machine readable medium of claim 37, wherein the particular protocol comprises a broadcast, unknown unicast, multicast (BUM) protocol. 41. The non-transitory machine readable medium of claim 35, wherein the first identifier is based on a subnet address of a node in the network segment. 42. The non-transitory machine readable medium of claim 35, wherein sending the first message comprises sending the first message when a particular timer expires. 43. The non-transitory machine readable medium of claim 42 further comprising resetting the particular timer each time the first message is sent. 44. The non-transitory machine readable medium of claim 43, further comprising resetting the particular timer when the second identifier of the second message is identical to the first identifier. 
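The update rule in claims 25 and 35 is a simple highest-rank-wins election: a node keeps its stored segment identifier unless a received identifier ranks higher. A minimal sketch of that rule follows (the function name, the dictionary representation of an identifier, and ranking by an integer-modeled MAC address are illustrative assumptions, not language from the claims):

```python
def update_segment_id(stored_id, received_id, rank_key):
    """Apply the claimed rule: replace the stored segment identifier only
    when a received identifier ranks higher; otherwise keep the stored one."""
    if rank_key(received_id) > rank_key(stored_id):
        return received_id  # received identifier ranks higher -> replace
    return stored_id        # received identifier ranks lower/equal -> keep

# Illustrative ranking based on the announcing node's MAC address (claim 32),
# modeled here as an integer for simplicity.
rank = lambda ident: ident["mac"]
stored = {"seg": "A", "mac": 0x02}
kept = update_segment_id(stored, {"seg": "B", "mac": 0x01}, rank)      # lower rank: keep "A"
replaced = update_segment_id(stored, {"seg": "C", "mac": 0x05}, rank)  # higher rank: take "C"
```

Because every node applies the same deterministic rule to the same announcements, all nodes in the segment converge on the single highest-ranking identifier.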
| A novel method for fully utilizing the multicast or broadcast capability of a physical network is provided. The method identifies segments of the network within which broadcast traffic, multicast traffic, or traffic to unknown recipients (BUM traffic) is allowed or enabled. The identified segment encompasses the parts of the network that the BUM traffic is able to reach while excluding the parts of the network that the BUM traffic is unable to reach. Each identified segment includes network nodes that are interconnected by physical network hardware that supports BUM traffic. The method identifies multiple BUM traffic segments in a given network, each of which supports its own BUM traffic. The different BUM traffic segments are interconnected by physical network hardware that does not support BUM network traffic. Each identified segment is assigned an identifier that uniquely distinguishes the identified segment from other identified segments. 1-24. (canceled) 25. A method for identifying a network segment comprising a first network node and a plurality of additional network nodes, the method to be performed by the first network node, the first network node storing a first identifier as identifying the network segment, the method comprising:
sending, to the plurality of additional network nodes, a first message that indicates the first identifier as a network segment identifier; in response to receiving a second message, sent by a first of the additional network nodes and comprising a second identifier for identifying the network segment, continuing to use the stored first identifier to identify the network segment based on the received second identifier ranking lower than the stored first identifier; and in response to receiving a third message, sent by a second of the additional network nodes and comprising a third identifier for identifying the network segment, replacing the first identifier with the third identifier as identifying the network segment based on the received third identifier ranking higher than the stored first identifier. 26. The method of claim 25, wherein the network segment comprises network nodes with connectivity using a particular protocol. 27. The method of claim 26, wherein the first, second, and third messages are sent using the particular protocol. 28. The method of claim 27, wherein the particular protocol comprises a broadcast protocol. 29. The method of claim 27, wherein the particular protocol comprises a multicast protocol. 30. The method of claim 27, wherein the particular protocol comprises a broadcast, unknown unicast, multicast (BUM) protocol. 31. The method of claim 25, wherein the first identifier is based on a subnet address of a node in the network segment. 32. The method of claim 25, wherein the rank of an identifier is based on a media access control (MAC) address of a particular network node from which the identifier is received. 33. The method of claim 32, wherein sending the first message comprises sending the first message when a particular timer expires. 34. The method of claim 33 further comprising resetting the particular timer each time the first message is sent. 35. 
A non-transitory machine readable medium storing a program which when executed on a set of processing units of a first network node identifies a network segment comprising the first network node and a plurality of additional network nodes, the first network node storing a first identifier as identifying the network segment, the program comprising sets of instructions for:
sending, to the plurality of additional network nodes, a first message that indicates the first identifier as a network segment identifier; receiving a second message, sent by one of the additional network nodes, comprising a second identifier for identifying the network segment; when the received second identifier ranks higher than the stored first identifier, replacing the first identifier with the second identifier as identifying the network segment; and when the received second identifier ranks lower than the stored first identifier, continuing to use the stored first identifier to identify the network segment. 36. The non-transitory machine readable medium of claim 35, wherein the network segment comprises network nodes with connectivity using a particular protocol. 37. The non-transitory machine readable medium of claim 36, wherein the first and second messages are sent using the particular protocol. 38. The non-transitory machine readable medium of claim 37, wherein the particular protocol comprises a broadcast protocol. 39. The non-transitory machine readable medium of claim 37, wherein the particular protocol comprises a multicast protocol. 40. The non-transitory machine readable medium of claim 37, wherein the particular protocol comprises a broadcast, unknown unicast, multicast (BUM) protocol. 41. The non-transitory machine readable medium of claim 35, wherein the first identifier is based on a subnet address of a node in the network segment. 42. The non-transitory machine readable medium of claim 35, where sending the first message comprises sending the first message when a particular timer expires. 43. The non-transitory machine readable medium of claim 42 further comprising resetting the particular timer each time the first message is sent. 44. The non-transitory machine readable medium of claim 43, further comprising resetting the particular timer when the second identifier of the second message is identical to the first identifier. | 2,400 |
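The election procedure recited in claims 25 and 35 of the row above amounts to a rank-based agreement on a segment identifier: each node periodically advertises its stored identifier, adopts any higher-ranking identifier it receives, and otherwise keeps its own; claims 33, 34, and 44 add a resettable advertisement timer. A minimal sketch of that logic, assuming identifiers are directly comparable values (claim 32 suggests ranking by the advertising node's MAC address) — class and message names are illustrative, not from the claims:

```python
class SegmentNode:
    """Sketch of the segment-identifier election in claims 25/35.

    An identifier's rank is modeled as ordinary comparison on the
    identifier value; real ranking (per claim 32) may derive from a
    node's MAC address.
    """

    def __init__(self, own_identifier, advertise_interval=30):
        self.segment_id = own_identifier       # stored "first identifier"
        self.advertise_interval = advertise_interval
        self.timer = advertise_interval        # claim 33: send on expiry

    def tick(self, elapsed):
        """Advance the advertisement timer; return the message to send to
        the additional network nodes when the timer expires (claims 33-34)."""
        self.timer -= elapsed
        if self.timer <= 0:
            self.timer = self.advertise_interval   # reset each time sent
            return {"segment_id": self.segment_id}
        return None

    def on_message(self, msg):
        """Handle an identifier advertised by another node in the segment."""
        received = msg["segment_id"]
        if received > self.segment_id:
            # Higher-ranking identifier: replace the stored one (claim 25).
            self.segment_id = received
        elif received == self.segment_id:
            # Claim 44: an identical identifier also resets the timer.
            self.timer = self.advertise_interval
        # Lower-ranking identifier: continue using the stored identifier.


node = SegmentNode(own_identifier=10)
node.on_message({"segment_id": 7})    # lower rank: ignored
node.on_message({"segment_id": 42})   # higher rank: adopted
print(node.segment_id)                # -> 42
```

Because every node converges on the highest-ranking identifier it hears, all nodes of a BUM-connected segment end up agreeing on one identifier, much like root-bridge election in spanning tree protocols.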
8,629 | 8,629 | 14,612,915 | 2,483 | An exemplary camera assembly includes a camera body, a camera lens, a cradle that communicates signals between the camera body and the camera lens, and a short wave infrared sensor module selectively received within the cradle. | 1. A camera assembly comprising:
a camera body; a camera lens; a cradle that communicates signals between the camera body and the camera lens; and a short wave infrared sensor module selectively received within the cradle. 2. The camera assembly of claim 1, further comprising a display to display an image received by the short wave infrared sensor module. 3. The camera assembly of claim 2, wherein the camera body comprises the display. 4. The camera assembly of claim 3, wherein the display is a first display, and the short wave infrared sensor module comprises a second display to display an image received by the short wave infrared sensor module. 5. The camera assembly of claim 1, further comprising a relay optic within the short wave infrared sensor module. 6. The camera assembly of claim 1, wherein the cradle selectively receives other modules. 7. The camera assembly of claim 1, including an optical relay within the short wave infrared sensor module, the optical relay magnifying an image captured by the short wave infrared sensor module over a fixed distance between an organic light emitting diode display of the camera body and a sensor of the camera body. 8. The camera assembly of claim 1, wherein the short wave infrared sensor module communicates with the camera body and the camera lens through the cradle. 9. The camera assembly of claim 1, wherein the camera body and the camera lens are commercial off-the-shelf components. 10. A method of short wave infrared sensor module imaging, comprising:
communicating signals from a camera body through a cradle to control a camera lens; and selectively receiving a short wave infrared sensor module within the cradle. 11. The method of claim 10, further comprising displaying a short wave infrared sensor image on the short wave infrared sensor module. 12. The method of claim 10, further comprising displaying an image from the short wave infrared sensor module on the camera body. 13. The method of claim 10, further comprising the step of selectively replacing the short wave infrared sensor module within the cradle with another type of imaging module. 14. The method of claim 10, wherein the camera body and camera lens are commercial off-the-shelf components. | An exemplary camera assembly includes a camera body, a camera lens, a cradle that communicates signals between the camera body and the camera lens, and a short wave infrared sensor module selectively received within the cradle.1. A camera assembly comprising:
a camera body; a camera lens; a cradle that communicates signals between the camera body and the camera lens; and a short wave infrared sensor module selectively received within the cradle. 2. The camera assembly of claim 1, further comprising a display to display an image received by the short wave infrared sensor module. 3. The camera assembly of claim 2, wherein the camera body comprises the display. 4. The camera assembly of claim 3, wherein the display is a first display, and the short wave infrared sensor module comprises a second display to display an image received by the short wave infrared sensor module. 5. The camera assembly of claim 1, further comprising a relay optic within the short wave infrared sensor module. 6. The camera assembly of claim 1, wherein the cradle selectively receives other modules. 7. The camera assembly of claim 1, including an optical relay within the short wave infrared sensor module, the optical relay magnifying an image captured by the short wave infrared sensor module over a fixed distance between an organic light emitting diode display of the camera body and a sensor of the camera body. 8. The camera assembly of claim 1, wherein the short wave infrared sensor module communicates with the camera body and the camera lens through the cradle. 9. The camera assembly of claim 1, wherein the camera body and the camera lens are commercial off-the-shelf components. 10. A method of short wave infrared sensor module imaging, comprising:
communicating signals from a camera body through a cradle to control a camera lens; and selectively receiving a short wave infrared sensor module within the cradle. 11. The method of claim 10, further comprising displaying a short wave infrared sensor image on the short wave infrared sensor module. 12. The method of claim 10, further comprising displaying an image from the short wave infrared sensor module on the camera body. 13. The method of claim 10, further comprising the step of selectively replacing the short wave infrared sensor module within the cradle with another type of imaging module. 14. The method of claim 10, wherein the camera body and camera lens are commercial off-the-shelf components. | 2,400 |
8,630 | 8,630 | 15,724,897 | 2,494 | A method, system and computer program product for selecting a target hypervisor to run a migrated virtual machine. An “effective priority value,” representing the virtual machine's priority with respect to the other virtual machines running on the same hypervisor, is calculated for the virtual machine when it is running on the source hypervisor as well as if it were to run on a target hypervisor for each possible target hypervisor. The target hypervisor associated with the minimum difference in absolute value terms between the virtual machine's effective priority value calculated when it is running on the source hypervisor and its effective priority value calculated if it were to be migrated to run on a target hypervisor is selected to receive the migrating virtual machine. In this manner, the effective priority metric has enabled a target hypervisor to be chosen that most closely matches the priority environment of the source hypervisor. | 1. A method for selecting a target hypervisor to run a migrated virtual machine, the method comprising:
determining a current resource utilization of virtual machines running on a source hypervisor and two or more target hypervisors; determining an amount of a resource allocated to each virtual machine running on said source hypervisor and said two or more target hypervisors; calculating a resource allocation metric value for each virtual machine running on said source hypervisor using said current resource utilization and said amount of said resource allocated for each virtual machine running on said source hypervisor; calculating a resource allocation metric value for each virtual machine running on said two or more target hypervisors using said current resource utilization and said amount of said resource allocated for each virtual machine running on said two or more target hypervisors; calculating, by a processor, an effective priority value for a virtual machine running on said source hypervisor using said resource allocation metric values for each of said virtual machines running on said source hypervisor, wherein said effective priority value corresponds to a priority of said virtual machine with respect to other virtual machines running on a same hypervisor; calculating, by said processor, an effective priority value for said virtual machine if it were to be migrated to run on a target hypervisor for each of said two or more target hypervisors using said resource allocation metric values for each of said virtual machines running on said target hypervisor; comparing said effective priority value for said virtual machine running on said source hypervisor with its effective priority value if it were to be migrated to said target hypervisor for each of said two or more target hypervisors; generating a set of deltas representing differences in absolute value terms between said effective priority value of said virtual machine running on said source hypervisor and its effective priority value if it were to be migrated to run on said target hypervisor for each of 
said two or more target hypervisors; determining a minimum delta in said set of deltas; and selecting one of said two or more target hypervisors associated with said minimum delta to receive said virtual machine to be migrated from said source hypervisor. 2. The method as recited in claim 1 further comprising:
calculating said effective priority value for said virtual machine running on said source hypervisor using said resource allocation metric values for said virtual machines running on said source hypervisor, a number of virtual machines running on said source hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said source hypervisor. 3. The method as recited in claim 1 further comprising:
calculating said effective priority value for said virtual machine if it were to be migrated to run on said target hypervisor using said resource allocation metric values for said virtual machines running on said target hypervisor, a number of virtual machines running on said target hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said target hypervisor and said virtual machine to be migrated to run on said target hypervisor. 4. The method as recited in claim 1, wherein said source hypervisor resides on a first cloud computing node of a cloud computing environment and said two or more hypervisors reside on two or more other cloud computing nodes of said cloud computing environment. 5. The method as recited in claim 1, wherein said resource comprises one of the following:
processing unit, memory and network bandwidth. 6. A computer program product for selecting a target hypervisor to run a migrated virtual machine, the computer program product comprising a computer readable storage medium having program code embodied therewith, the program code comprising the programming instructions for:
determining a current resource utilization of virtual machines running on a source hypervisor and two or more target hypervisors; determining an amount of a resource allocated to each virtual machine running on said source hypervisor and said two or more target hypervisors; calculating a resource allocation metric value for each virtual machine running on said source hypervisor using said current resource utilization and said amount of said resource allocated for each virtual machine running on said source hypervisor; calculating a resource allocation metric value for each virtual machine running on said two or more target hypervisors using said current resource utilization and said amount of said resource allocated for each virtual machine running on said two or more target hypervisors; calculating an effective priority value for a virtual machine running on said source hypervisor using said resource allocation metric values for each of said virtual machines running on said source hypervisor, wherein said effective priority value corresponds to a priority of said virtual machine with respect to other virtual machines running on a same hypervisor; calculating an effective priority value for said virtual machine if it were to be migrated to run on a target hypervisor for each of said two or more target hypervisors using said resource allocation metric values for each of said virtual machines running on said target hypervisor; comparing said effective priority value for said virtual machine running on said source hypervisor with its effective priority value if it were to be migrated to said target hypervisor for each of said two or more target hypervisors; generating a set of deltas representing differences in absolute value terms between said effective priority value of said virtual machine running on said source hypervisor and its effective priority value if it were to be migrated to run on said target hypervisor for each of said two or more target hypervisors; 
determining a minimum delta in said set of deltas; and selecting one of said two or more target hypervisors associated with said minimum delta to receive said virtual machine to be migrated from said source hypervisor. 7. The computer program product as recited in claim 6, wherein the program code further comprises the programming instructions for:
calculating said effective priority value for said virtual machine running on said source hypervisor using said resource allocation metric values for said virtual machines running on said source hypervisor, a number of virtual machines running on said source hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said source hypervisor. 8. The computer program product as recited in claim 6, wherein the program code further comprises the programming instructions for:
calculating said effective priority value for said virtual machine if it were to be migrated to run on said target hypervisor using said resource allocation metric values for said virtual machines running on said target hypervisor, a number of virtual machines running on said target hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said target hypervisor and said virtual machine to be migrated to run on said target hypervisor. 9. The computer program product as recited in claim 6, wherein said source hypervisor resides on a first cloud computing node of a cloud computing environment and said two or more hypervisors reside on two or more other cloud computing nodes of said cloud computing environment. 10. The computer program product as recited in claim 6, wherein said resource comprises one of the following: processing unit, memory and network bandwidth. 11. A system, comprising:
a memory unit for storing a computer program for selecting a target hypervisor to run a migrated virtual machine; and a processor coupled to the memory unit, wherein the processor is configured to execute the program instructions of the computer program comprising:
determining a current resource utilization of virtual machines running on a source hypervisor and two or more target hypervisors;
determining an amount of a resource allocated to each virtual machine running on said source hypervisor and said two or more target hypervisors;
calculating a resource allocation metric value for each virtual machine running on said source hypervisor using said current resource utilization and said amount of said resource allocated for each virtual machine running on said source hypervisor;
calculating a resource allocation metric value for each virtual machine running on said two or more target hypervisors using said current resource utilization and said amount of said resource allocated for each virtual machine running on said two or more target hypervisors;
calculating an effective priority value for a virtual machine running on said source hypervisor using said resource allocation metric values for each of said virtual machines running on said source hypervisor, wherein said effective priority value corresponds to a priority of said virtual machine with respect to other virtual machines running on a same hypervisor;
calculating an effective priority value for said virtual machine if it were to be migrated to run on a target hypervisor for each of said two or more target hypervisors using said resource allocation metric values for each of said virtual machines running on said target hypervisor;
comparing said effective priority value for said virtual machine running on said source hypervisor with its effective priority value if it were to be migrated to said target hypervisor for each of said two or more target hypervisors;
generating a set of deltas representing differences in absolute value terms between said effective priority value of said virtual machine running on said source hypervisor and its effective priority value if it were to be migrated to run on said target hypervisor for each of said two or more target hypervisors;
determining a minimum delta in said set of deltas; and
selecting one of said two or more target hypervisors associated with said minimum delta to receive said virtual machine to be migrated from said source hypervisor. 12. The system as recited in claim 11, wherein the program instructions of the computer program further comprise:
calculating said effective priority value for said virtual machine running on said source hypervisor using said resource allocation metric values for said virtual machines running on said source hypervisor, a number of virtual machines running on said source hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said source hypervisor. 13. The system as recited in claim 11, wherein the program instructions of the computer program further comprise:
calculating said effective priority value for said virtual machine if it were to be migrated to run on said target hypervisor using said resource allocation metric values for said virtual machines running on said target hypervisor, a number of virtual machines running on said target hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said target hypervisor and said virtual machine to be migrated to run on said target hypervisor. 14. The system as recited in claim 11, wherein said source hypervisor resides on a first cloud computing node of a cloud computing environment and said two or more hypervisors reside on two or more other cloud computing nodes of said cloud computing environment. 15. The system as recited in claim 11, wherein said resource comprises one of the following: processing unit, memory and network bandwidth. | A method, system and computer program product for selecting a target hypervisor to run a migrated virtual machine. An “effective priority value,” representing the virtual machine's priority with respect to the other virtual machines running on the same hypervisor, is calculated for the virtual machine when it is running on the source hypervisor as well as if it were to run on a target hypervisor for each possible target hypervisor. The target hypervisor associated with the minimum difference in absolute value terms between the virtual machine's effective priority value calculated when it is running on the source hypervisor and its effective priority value calculated if it were to be migrated to run on a target hypervisor is selected to receive the migrating virtual machine. In this manner, the effective priority metric has enabled a target hypervisor to be chosen that most closely matches the priority environment of the source hypervisor.1. 
A method for selecting a target hypervisor to run a migrated virtual machine, the method comprising:
determining a current resource utilization of virtual machines running on a source hypervisor and two or more target hypervisors; determining an amount of a resource allocated to each virtual machine running on said source hypervisor and said two or more target hypervisors; calculating a resource allocation metric value for each virtual machine running on said source hypervisor using said current resource utilization and said amount of said resource allocated for each virtual machine running on said source hypervisor; calculating a resource allocation metric value for each virtual machine running on said two or more target hypervisors using said current resource utilization and said amount of said resource allocated for each virtual machine running on said two or more target hypervisors; calculating, by a processor, an effective priority value for a virtual machine running on said source hypervisor using said resource allocation metric values for each of said virtual machines running on said source hypervisor, wherein said effective priority value corresponds to a priority of said virtual machine with respect to other virtual machines running on a same hypervisor; calculating, by said processor, an effective priority value for said virtual machine if it were to be migrated to run on a target hypervisor for each of said two or more target hypervisors using said resource allocation metric values for each of said virtual machines running on said target hypervisor; comparing said effective priority value for said virtual machine running on said source hypervisor with its effective priority value if it were to be migrated to said target hypervisor for each of said two or more target hypervisors; generating a set of deltas representing differences in absolute value terms between said effective priority value of said virtual machine running on said source hypervisor and its effective priority value if it were to be migrated to run on said target hypervisor for each of 
said two or more target hypervisors; determining a minimum delta in said set of deltas; and selecting one of said two or more target hypervisors associated with said minimum delta to receive said virtual machine to be migrated from said source hypervisor. 2. The method as recited in claim 1 further comprising:
calculating said effective priority value for said virtual machine running on said source hypervisor using said resource allocation metric values for said virtual machines running on said source hypervisor, a number of virtual machines running on said source hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said source hypervisor. 3. The method as recited in claim 1 further comprising:
calculating said effective priority value for said virtual machine if it were to be migrated to run on said target hypervisor using said resource allocation metric values for said virtual machines running on said target hypervisor, a number of virtual machines running on said target hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said target hypervisor and said virtual machine to be migrated to run on said target hypervisor. 4. The method as recited in claim 1, wherein said source hypervisor resides on a first cloud computing node of a cloud computing environment and said two or more hypervisors reside on two or more other cloud computing nodes of said cloud computing environment. 5. The method as recited in claim 1, wherein said resource comprises one of the following:
processing unit, memory and network bandwidth. 6. A computer program product for selecting a target hypervisor to run a migrated virtual machine, the computer program product comprising a computer readable storage medium having program code embodied therewith, the program code comprising the programming instructions for:
determining a current resource utilization of virtual machines running on a source hypervisor and two or more target hypervisors; determining an amount of a resource allocated to each virtual machine running on said source hypervisor and said two or more target hypervisors; calculating a resource allocation metric value for each virtual machine running on said source hypervisor using said current resource utilization and said amount of said resource allocated for each virtual machine running on said source hypervisor; calculating a resource allocation metric value for each virtual machine running on said two or more target hypervisors using said current resource utilization and said amount of said resource allocated for each virtual machine running on said two or more target hypervisors; calculating an effective priority value for a virtual machine running on said source hypervisor using said resource allocation metric values for each of said virtual machines running on said source hypervisor, wherein said effective priority value corresponds to a priority of said virtual machine with respect to other virtual machines running on a same hypervisor; calculating an effective priority value for said virtual machine if it were to be migrated to run on a target hypervisor for each of said two or more target hypervisors using said resource allocation metric values for each of said virtual machines running on said target hypervisor; comparing said effective priority value for said virtual machine running on said source hypervisor with its effective priority value if it were to be migrated to said target hypervisor for each of said two or more target hypervisors; generating a set of deltas representing differences in absolute value terms between said effective priority value of said virtual machine running on said source hypervisor and its effective priority value if it were to be migrated to run on said target hypervisor for each of said two or more target hypervisors; 
determining a minimum delta in said set of deltas; and selecting one of said two or more target hypervisors associated with said minimum delta to receive said virtual machine to be migrated from said source hypervisor. 7. The computer program product as recited in claim 6, wherein the program code further comprises the programming instructions for:
calculating said effective priority value for said virtual machine running on said source hypervisor using said resource allocation metric values for said virtual machines running on said source hypervisor, a number of virtual machines running on said source hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said source hypervisor. 8. The computer program product as recited in claim 6, wherein the program code further comprises the programming instructions for:
calculating said effective priority value for said virtual machine if it were to be migrated to run on said target hypervisor using said resource allocation metric values for said virtual machines running on said target hypervisor, a number of virtual machines running on said target hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said target hypervisor and said virtual machine to be migrated to run on said target hypervisor. 9. The computer program product as recited in claim 6, wherein said source hypervisor resides on a first cloud computing node of a cloud computing environment and said two or more hypervisors reside on two or more other cloud computing nodes of said cloud computing environment. 10. The computer program product as recited in claim 6, wherein said resource comprises one of the following: processing unit, memory and network bandwidth. 11. A system, comprising:
a memory unit for storing a computer program for selecting a target hypervisor to run a migrated virtual machine; and a processor coupled to the memory unit, wherein the processor is configured to execute the program instructions of the computer program comprising:
determining a current resource utilization of virtual machines running on a source hypervisor and two or more target hypervisors;
determining an amount of a resource allocated to each virtual machine running on said source hypervisor and said two or more target hypervisors;
calculating a resource allocation metric value for each virtual machine running on said source hypervisor using said current resource utilization and said amount of said resource allocated for each virtual machine running on said source hypervisor;
calculating a resource allocation metric value for each virtual machine running on said two or more target hypervisors using said current resource utilization and said amount of said resource allocated for each virtual machine running on said two or more target hypervisors;
calculating an effective priority value for a virtual machine running on said source hypervisor using said resource allocation metric values for each of said virtual machines running on said source hypervisor, wherein said effective priority value corresponds to a priority of said virtual machine with respect to other virtual machines running on a same hypervisor;
calculating an effective priority value for said virtual machine if it were to be migrated to run on a target hypervisor for each of said two or more target hypervisors using said resource allocation metric values for each of said virtual machines running on said target hypervisor;
comparing said effective priority value for said virtual machine running on said source hypervisor with its effective priority value if it were to be migrated to said target hypervisor for each of said two or more target hypervisors;
generating a set of deltas representing differences in absolute value terms between said effective priority value of said virtual machine running on said source hypervisor and its effective priority value if it were to be migrated to run on said target hypervisor for each of said two or more target hypervisors;
determining a minimum delta in said set of deltas; and
selecting one of said two or more target hypervisors associated with said minimum delta to receive said virtual machine to be migrated from said source hypervisor. 12. The system as recited in claim 11, wherein the program instructions of the computer program further comprise:
calculating said effective priority value for said virtual machine running on said source hypervisor using said resource allocation metric values for said virtual machines running on said source hypervisor, a number of virtual machines running on said source hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said source hypervisor. 13. The system as recited in claim 11, wherein the program instructions of the computer program further comprise:
calculating said effective priority value for said virtual machine if it were to be migrated to run on said target hypervisor using said resource allocation metric values for said virtual machines running on said target hypervisor, a number of virtual machines running on said target hypervisor with a resource allocation metric value equal to and below said virtual machine's resource allocation metric value and a total number of resource allocation metric values calculated for said virtual machines running on said target hypervisor and said virtual machine to be migrated to run on said target hypervisor. 14. The system as recited in claim 11, wherein said source hypervisor resides on a first cloud computing node of a cloud computing environment and said two or more hypervisors reside on two or more other cloud computing nodes of said cloud computing environment. 15. The system as recited in claim 11, wherein said resource comprises one of the following: processing unit, memory and network bandwidth. | 2,400 |
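The target-selection procedure recited in claims 6–15 above can be sketched in Python. This is a minimal illustration, not the patent's implementation: the metric definition (current utilization divided by the allocated amount) and the rank-fraction form of the effective priority are assumptions consistent with the claim language ("a number of virtual machines ... with a resource allocation metric value equal to and below said virtual machine's ... and a total number of resource allocation metric values"), and all function names are invented for the sketch.

```python
def allocation_metric(utilization: float, allocated: float) -> float:
    """Resource allocation metric for one VM; assumed here to be the
    ratio of current resource utilization to the amount allocated."""
    return utilization / allocated

def effective_priority(vm_metric: float, all_metrics: list[float]) -> float:
    """Effective priority of a VM relative to the other VMs on the same
    hypervisor: the fraction of metric values equal to or below its own."""
    at_or_below = sum(1 for m in all_metrics if m <= vm_metric)
    return at_or_below / len(all_metrics)

def select_target(vm_metric: float,
                  source_metrics: list[float],
                  target_metrics: dict[str, list[float]]) -> str:
    """Pick the target hypervisor whose post-migration effective priority
    for the VM differs least, in absolute value, from its current one."""
    current = effective_priority(vm_metric, source_metrics)
    deltas = {}
    for name, metrics in target_metrics.items():
        # Per claim 8, the candidate target's metric list includes the
        # migrated VM itself when recomputing the effective priority.
        migrated = effective_priority(vm_metric, metrics + [vm_metric])
        deltas[name] = abs(current - migrated)
    # Claim 6: select the target associated with the minimum delta.
    return min(deltas, key=deltas.get)
```

For example, a VM whose current effective priority is 2/3 on the source would be sent to a target where it would again rank 2/3, in preference to one where it would rank 1/3.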
8,631 | 8,631 | 15,373,266 | 2,447 | A method for deploying computer networks in a network environment is disclosed. Initially, a broadcast message is transmitted by a network device within a server to a network in order to investigate the availability of the network. A determination is made as to whether or not a response message responding to the broadcast message has been received by the network device within the server. If a response message has been received by the network device, it means the network is available and the network device can be configured to connect with the network. However, if no response message has been received by the network device, it means the network is not available and the network device can be configured as a server node of the network. | 1. A method comprising:
transmitting a broadcast message by a network device within a server to a network in order to investigate the availability of said network; determining whether or not a response message responding to said broadcast message has been received by said network device within said server; in a determination that a response message has been received by said network device, concluding that said network is available and configuring said network device to connect with said network; and in a determination that no response message has been received by said network device, concluding that said network is not available and configuring said network device as a server node of said network. 2. The method of claim 1, wherein said transmitting further includes transmitting said broadcast message to a default IP address corresponding to said network. 3. The method of claim 1, wherein said configuring said network device to connect with said network further includes configuring an IP address of said network device based on said received response message. 4. The method of claim 1, wherein said configuring said network device as a server node of said network further includes initiating a client-server protocol. 5. The method of claim 4, wherein said client-server protocol is a Dynamic Host Configuration Protocol (DHCP). 6. The method of claim 1, wherein said transmitting further includes transmitting a plurality of broadcast messages by said network device to investigate the availability of a plurality of respective networks. 7. The method of claim 6, wherein said plurality of broadcast messages are transmitted to respective default IP addresses corresponding to said respective networks. 8. The method of claim 6, wherein said plurality of networks are accorded with priority settings, and said plurality of broadcast messages are transmitted via said network device according to said priority settings. 9. A server comprising:
a plurality of memories; a network device coupled to said plurality of memories; and a processor coupled to said network device, wherein said processor is configured to:
transmit a broadcast message by a network device within a server to a network in order to investigate the availability of said network;
determine whether or not a response message responding to said broadcast message has been received by said network device within said server;
conclude that said network is available and configure said network device to connect with said network, in a determination that a response message has been received by said network device; and
conclude that said network is not available and configure said network device as a server node of said network, in a determination that no response message has been received by said network device. 10. The server of claim 9, wherein said broadcast message is transmitted to a default IP address corresponding to said network. 11. The server of claim 9, wherein said network device is connected with said network by configuring an IP address of said network device based on said response message as detected. 12. The server of claim 9, wherein said network device is configured as the server node of said network by initiating a client-server protocol. 13. The server of claim 12, wherein said client-server protocol is a Dynamic Host Configuration Protocol (DHCP). 14. The server of claim 9, wherein a plurality of broadcast messages are transmitted by said network device to investigate the availability of a plurality of respective networks. 15. The server of claim 14, wherein said plurality of broadcast messages are transmitted to respective default IP addresses corresponding to said respective networks. 16. The server of claim 15, wherein said plurality of networks are accorded with priority settings, and said plurality of broadcast messages are transmitted via said network device in accordance with said priority settings. 17. A computer-readable medium having a computer program product for deploying computer networks, said computer-readable medium comprising:
program code for transmitting a broadcast message by a network device within a server to a network in order to investigate the availability of said network; program code for determining whether or not a response message responding to said broadcast message has been received by said network device within said server; program code for, in a determination that a response message has been received by said network device, concluding that said network is available and configuring said network device to connect with said network; and program code for, in a determination that no response message has been received by said network device, concluding that said network is not available and configuring said network device as a server node of said network. 18. The computer-readable medium of claim 17, wherein said program code for transmitting further includes program code for transmitting said broadcast message to a default IP address corresponding to said network. 19. The computer-readable medium of claim 17, wherein said program code for configuring said network device to connect with said network further includes said program code for configuring an IP address of said network device based on said received response message. 20. The computer-readable medium of claim 17, wherein said program code for configuring said network device as a server node of said network further includes program code for initiating a client-server protocol. | A method for deploying computer networks in a network environment is disclosed. Initially, a broadcast message is transmitted by a network device within a server to a network in order to investigate the availability of the network. A determination is made as to whether or not a response message responding to the broadcast message has been received by the network device within the server. 
If a response message has been received by the network device, it means the network is available and the network device can be configured to connect with the network. However, if no response message has been received by the network device, it means the network is not available and the network device can be configured as a server node of the network. 1. A method comprising:
transmitting a broadcast message by a network device within a server to a network in order to investigate the availability of said network; determining whether or not a response message responding to said broadcast message has been received by said network device within said server; in a determination that a response message has been received by said network device, concluding that said network is available and configuring said network device to connect with said network; and in a determination that no response message has been received by said network device, concluding that said network is not available and configuring said network device as a server node of said network. 2. The method of claim 1, wherein said transmitting further includes transmitting said broadcast message to a default IP address corresponding to said network. 3. The method of claim 1, wherein said configuring said network device to connect with said network further includes configuring an IP address of said network device based on said received response message. 4. The method of claim 1, wherein said configuring said network device as a server node of said network further includes initiating a client-server protocol. 5. The method of claim 4, wherein said client-server protocol is a Dynamic Host Configuration Protocol (DHCP). 6. The method of claim 1, wherein said transmitting further includes transmitting a plurality of broadcast messages by said network device to investigate the availability of a plurality of respective networks. 7. The method of claim 6, wherein said plurality of broadcast messages are transmitted to respective default IP addresses corresponding to said respective networks. 8. The method of claim 6, wherein said plurality of networks are accorded with priority settings, and said plurality of broadcast messages are transmitted via said network device according to said priority settings. 9. A server comprising:
a plurality of memories; a network device coupled to said plurality of memories; and a processor coupled to said network device, wherein said processor is configured to:
transmit a broadcast message by a network device within a server to a network in order to investigate the availability of said network;
determine whether or not a response message responding to said broadcast message has been received by said network device within said server;
conclude that said network is available and configure said network device to connect with said network, in a determination that a response message has been received by said network device; and
conclude that said network is not available and configure said network device as a server node of said network, in a determination that no response message has been received by said network device. 10. The server of claim 9, wherein said broadcast message is transmitted to a default IP address corresponding to said network. 11. The server of claim 9, wherein said network device is connected with said network by configuring an IP address of said network device based on said response message as detected. 12. The server of claim 9, wherein said network device is configured as the server node of said network by initiating a client-server protocol. 13. The server of claim 12, wherein said client-server protocol is a Dynamic Host Configuration Protocol (DHCP). 14. The server of claim 9, wherein a plurality of broadcast messages are transmitted by said network device to investigate the availability of a plurality of respective networks. 15. The server of claim 14, wherein said plurality of broadcast messages are transmitted to respective default IP addresses corresponding to said respective networks. 16. The server of claim 15, wherein said plurality of networks are accorded with priority settings, and said plurality of broadcast messages are transmitted via said network device in accordance with said priority settings. 17. A computer-readable medium having a computer program product for deploying computer networks, said computer-readable medium comprising:
program code for transmitting a broadcast message by a network device within a server to a network in order to investigate the availability of said network; program code for determining whether or not a response message responding to said broadcast message has been received by said network device within said server; program code for, in a determination that a response message has been received by said network device, concluding that said network is available and configuring said network device to connect with said network; and program code for, in a determination that no response message has been received by said network device, concluding that said network is not available and configuring said network device as a server node of said network. 18. The computer-readable medium of claim 17, wherein said program code for transmitting further includes program code for transmitting said broadcast message to a default IP address corresponding to said network. 19. The computer-readable medium of claim 17, wherein said program code for configuring said network device to connect with said network further includes said program code for configuring an IP address of said network device based on said received response message. 20. The computer-readable medium of claim 17, wherein said program code for configuring said network device as a server node of said network further includes program code for initiating a client-server protocol. | 2,400 |
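The probe-then-decide flow of claims 1–8 above can be sketched as follows. This is an illustrative outline only: the UDP probe payload, port, timeout, and all function names are assumptions, the real patent leaves the transport unspecified, and becoming a "server node" would in practice mean starting a DHCP service rather than returning a string.

```python
import socket

def probe_network(default_ip: str, port: int = 67, timeout: float = 2.0):
    """Send a broadcast probe toward the network's default IP and wait
    briefly for any response; return the response bytes, or None."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        try:
            s.sendto(b"DISCOVER", (default_ip, port))
            data, _addr = s.recvfrom(1024)
            return data
        except socket.timeout:
            return None

def decide_role(response) -> str:
    """A response means the network is available (join as a client);
    no response means the device becomes the network's server node."""
    return "client" if response is not None else "server"

def deploy(networks, probe=probe_network):
    """Probe (default_ip, priority) networks in priority order; connect
    as a client to the first network that answers, otherwise become the
    server node of the highest-priority network (claims 6-8)."""
    ordered = sorted(networks, key=lambda n: n[1])
    for default_ip, _priority in ordered:
        if probe(default_ip) is not None:
            return ("client", default_ip)
    return ("server", ordered[0][0])
```

Injecting `probe` as a parameter keeps the decision logic testable without touching a real network.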
8,632 | 8,632 | 15,612,331 | 2,419 | Users of a client computer having non-conventional input devices interact with a host computing platform with the same user experience as if he or she was operating the client computer natively. This is achieved by having the non-conventional input devices of the client device appear local to the applications that are running on the host platform, even though the host computing platform may not be equipped with drivers for the non-conventional input devices. | 1. A system, comprising:
a server computer; and software, executing on the server computer, to support remote access to the server computer from a client computer, the software including an operating system (OS) and an application, the software executable by the server computer to: identify input devices installed on the client computer; configure one or more queues in the OS into which events generated by the input devices installed on the client computer are injected; inject the events generated by the input devices into the queues as the events are received from the client computer; notify the application of some of the events injected into the queues; update an output of the application according to the events of which the application is notified; and transmit the updated output to the client computer, wherein the queues are each a device object of the OS and each of said device objects does not have an associated device driver. 2. The system of claim 1, wherein the queues are character devices. 3. The system of claim 2, wherein a character device is configured for each of the different input devices. 4. The system of claim 1, wherein the input devices comprise a touch-based input device and sensors. 5. The system of claim 1, wherein the application is subscribed to events generated by at least one of the input devices and the events of which the application is notified comprise events generated by said at least one of the input devices. 6. The system of claim 5, wherein the application is an application for making and receiving telephone calls and the events the application is subscribed to are generated by microphone inputs at the client device and the output that is updated includes audio from a connected telephone. 7. The system of claim 5, wherein the application is a 3D game application and the events the application is subscribed to are generated by touch-based inputs at the computing device and the output that is updated includes 3D graphics and audio. 8. 
The system of claim 1, wherein said injecting is carried out by another application running in the server computer. 9. The system of claim 1, wherein said transmitting is carried out by said another application. 10. The system of claim 1, wherein the events are received in native form along with a tag that identifies the input device that generated the event. 11. The system of claim 1, wherein the server computer is a virtual machine and the OS is a guest operating system configured to support touch-based input devices and sensors executing therein. 12. A virtualized computer system comprising a virtual machine executing in a server computer, wherein the virtual machine is remotely accessible by a computing device having input devices including a touch-based input device and sensors, the virtualized computer system comprising:
a guest operating system executing in the virtual machine configured to support the touch-based input devices and the sensors; a first guest application executing in the virtual machine that coordinates communications with the computing device that is remotely accessing the virtual machine and configured to receive events from the computing device and inject the events into one or more queues managed by the guest operating system; and a second guest application executing in the virtual machine that is launched by the computing device and controlled by the computing device, wherein the second guest application is subscribed to some of the events injected into the queues by the first guest application and generates an output in accordance therewith. 13. The system of claim 12, wherein the queues are character devices. 14. The system of claim 13, wherein each of the character devices does not have an associated device driver. 15. The system of claim 14, wherein a character device is configured for each of the different input devices. 16. The system of claim 12, wherein the second guest application is an application for making and receiving telephone calls and the events the second guest application is subscribed to are generated by microphone inputs at the computing device and the output that is generated includes audio from a connected telephone. 17. The system of claim 12, wherein the second guest application is a 3D game application and the events the second guest application is subscribed to are generated by touch-based inputs at the computing device and the output that is generated includes 3D graphics and audio. 18. The system of claim 12, wherein the first guest application is further configured to extract a tag from each event and determine the input device that is associated with the event. 19. 
A virtualized computer system comprising a virtual machine running in a server computer, wherein the virtual machine is remotely accessible by a computing device having input devices including a touch-based input device and sensors, the virtualized computer system comprising:
an operating system, executing in the computing device, configured to support the touch-based input devices and sensors; and a client terminal application, executing in the computing device, configured to coordinate communications with the virtual machine, the virtual machine being subscribed to events generated by the input devices, the client terminal application configured to transmit to the virtual machine the events in native form along with a tag that identifies the input device that generated the event. 20. The system of claim 19, wherein the sensors include any of GPS sensor, accelerometer, magnetic field sensor, orientation sensor, temperature sensor, barometric pressure sensor, and gyroscopic sensor. | Users of a client computer having non-conventional input devices interact with a host computing platform with the same user experience as if he or she was operating the client computer natively. This is achieved by having the non-conventional input devices of the client device appear local to the applications that are running on the host platform, even though the host computing platform may not be equipped with drivers for the non-conventional input devices. 1. A system, comprising:
a server computer; and software, executing on the server computer, to support remote access to the server computer from a client computer, the software including an operating system (OS) and an application, the software executable by the server computer to: identify input devices installed on the client computer; configure one or more queues in the OS into which events generated by the input devices installed on the client computer are injected; inject the events generated by the input devices into the queues as the events are received from the client computer; notify the application of some of the events injected into the queues; update an output of the application according to the events of which the application is notified; and transmit the updated output to the client computer, wherein the queues are each a device object of the OS and each of said device objects does not have an associated device driver. 2. The system of claim 1, wherein the queues are character devices. 3. The system of claim 2, wherein a character device is configured for each of the different input devices. 4. The system of claim 1, wherein the input devices comprise a touch-based input device and sensors. 5. The system of claim 1, wherein the application is subscribed to events generated by at least one of the input devices and the events of which the application is notified comprise events generated by said at least one of the input devices. 6. The system of claim 5, wherein the application is an application for making and receiving telephone calls and the events the application is subscribed to are generated by microphone inputs at the client device and the output that is updated includes audio from a connected telephone. 7. The system of claim 5, wherein the application is a 3D game application and the events the application is subscribed to are generated by touch-based inputs at the computing device and the output that is updated includes 3D graphics and audio. 8. 
The system of claim 1, wherein said injecting is carried out by another application running in the server computer. 9. The system of claim 1, wherein said transmitting is carried out by said another application. 10. The system of claim 1, wherein the events are received in native form along with a tag that identifies the input device that generated the event. 11. The system of claim 1, wherein the server computer is a virtual machine and the OS is a guest operating system configured to support touch-based input devices and sensors executing therein. 12. A virtualized computer system comprising a virtual machine executing in a server computer, wherein the virtual machine is remotely accessible by a computing device having input devices including a touch-based input device and sensors, the virtualized computer system comprising:
a guest operating system executing in the virtual machine configured to support the touch-based input devices and the sensors; a first guest application executing in the virtual machine that coordinates communications with the computing device that is remotely accessing the virtual machine and configured to receive events from the computing device and inject the events into one or more queues managed by the guest operating system; and a second guest application executing in the virtual machine that is launched by the computing device and controlled by the computing device, wherein the second guest application is subscribed to some of the events injected into the queues by the first guest application and generates an output in accordance therewith. 13. The system of claim 12, wherein the queues are character devices. 14. The system of claim 13, wherein each of the character devices does not have an associated device driver. 15. The system of claim 14, wherein a character device is configured for each of the different input devices. 16. The system of claim 12, wherein the second guest application is an application for making and receiving telephone calls and the events the second guest application is subscribed to are generated by microphone inputs at the computing device and the output that is generated includes audio from a connected telephone. 17. The system of claim 12, wherein the second guest application is a 3D game application and the events the second guest application is subscribed to are generated by touch-based inputs at the computing device and the output that is generated includes 3D graphics and audio. 18. The system of claim 12, wherein the first guest application is further configured to extract a tag from each event and determine the input device that is associated with the event. 19. 
A virtualized computer system comprising a virtual machine running in a server computer, wherein the virtual machine is remotely accessible by a computing device having input devices including a touch-based input device and sensors, the virtualized computer system comprising:
an operating system, executing in the computing device, configured to support the touch-based input devices and sensors; and a client terminal application, executing in the computing device, configured to coordinate communications with the virtual machine, the virtual machine being subscribed to events generated by the input devices, the client terminal application configured to transmit to the virtual machine the events in native form along with a tag that identifies the input device that generated the event. 20. The system of claim 19, wherein the sensors include any of GPS sensor, accelerometer, magnetic field sensor, orientation sensor, temperature sensor, barometric pressure sensor, and gyroscopic sensor. | 2,400 |
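The tagged-event routing described in the claims above (events arrive in native form with a device tag, are injected into per-device queues, and subscribed applications are notified) can be sketched in Python. The class and method names are invented for illustration; the in-memory deques merely stand in for the driverless character-device queues the claims recite.

```python
from collections import defaultdict, deque

class EventRouter:
    """Per-device event queues plus a simple subscription mechanism,
    loosely modeling claims 1 and 12-18."""

    def __init__(self):
        self.queues = defaultdict(deque)      # device tag -> queued events
        self.subscribers = defaultdict(list)  # device tag -> callbacks

    def subscribe(self, device_tag, callback):
        """Register an application callback for one input device's events."""
        self.subscribers[device_tag].append(callback)

    def inject(self, tagged_event):
        """Accept an event in native form together with the tag that
        identifies the generating input device, queue it, and notify
        every subscriber of that device."""
        tag, payload = tagged_event
        self.queues[tag].append(payload)
        for cb in self.subscribers[tag]:
            cb(payload)
```

A 3D game subscribed only to `"touch"` events would then receive touch payloads while, say, GPS events accumulate in their own queue unnoticed.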
8,633 | 8,633 | 16,100,356 | 2,424 | Techniques for generating a deterministic outcome for content viewership based on a probability model are described. Information that identifies a probability that a particular portion of a program was viewed by an individual member of a household may be accessed. A probability distribution may be generated that represents, for each of a plurality of state spaces, a probability that the particular portion of the program was viewed by the plurality of individual members of the household. Based on a simulation of the generated probability distribution, a plurality of possible viewership scenarios may be determined. One of the plurality of viewership scenarios may be selected, and a report may be generated that identifies, for the selected viewership scenario, each of the individual members of the household and an indication of whether the particular portion of the program was viewed by each of the individual members of the household. | 1. A computer-implemented method, comprising:
accessing household member data that identifies a plurality of individual members of a household; accessing information that identifies, for each of the plurality of individual members of the household, a probability that a particular portion of content was viewed by that individual member of the household; generating a plurality of state spaces, each of the plurality of state spaces comprising an indication of whether or not the particular portion of the content was viewed by each of the plurality of individual members of the household; generating a probability distribution that represents, for each of the plurality of state spaces, a probability that the particular portion of the content was viewed by the plurality of individual members of the household represented by the state space; determining, based on a simulation of the generated probability distribution, a plurality of possible viewership scenarios, each of the plurality of possible viewership scenarios corresponding to one of the state spaces; selecting one of the plurality of possible viewership scenarios; and generating a report that identifies, for the selected viewership scenario and for each of the individual members of the household, a Boolean output indicating whether or not the particular portion of the content was viewed by the individual member of the household. 2. The method of claim 1, wherein the simulation is a Monte Carlo simulation. 3. The method of claim 1, wherein each of the plurality of state spaces comprises, for each of the plurality of individual members of the household, a state value corresponding to whether or not the particular portion of the content was viewed by the individual member of the household. 4. 
The method of claim 3, wherein the state value is one of a first state value indicating that the particular portion of the content was viewed by the individual member of the household and a second state value indicating that the particular portion of the content was not viewed by the individual member of the household. 5. The method of claim 1, wherein selecting one of the plurality of possible viewership scenarios comprises randomly selecting one of the plurality of possible viewership scenarios. 6. The method of claim 1, wherein the information is based on at least one of panelist viewing data and household member data. 7. The method of claim 6, wherein accessing the information comprises determining, for each of the plurality of individual members of the household, a total number of watched minutes of the particular portion of the content by the individual member of the household and a number of continuous series of watched states of the particular portion of the content by the individual member of the household. 8. The method of claim 1, wherein accessing the information comprises accessing a table that identifies, for each of the plurality of individual members of the household, a probability that the particular portion of the content was viewed by that individual member of the household. 9. The method of claim 1, wherein generating the probability distribution comprises processing one or more of the plurality of state spaces in parallel. 10. The method of claim 1, wherein generating the plurality of state spaces comprises determining all possible intersections between the plurality of individual members of the household. 11. The method of claim 1, wherein the particular portion of the content is a one minute duration of the content. 12. The method of claim 1, wherein the content is at least one of a television program or a movie program. 13. 
A device comprising a processor and a memory, the memory storing computer executable instructions which, when executed by the processor, cause the device to perform operations comprising:
accessing household member data that identifies a plurality of individual members of a household; accessing information that identifies, for each of the plurality of individual members of the household, a probability that a particular portion of content was viewed by that individual member of the household; generating a plurality of state spaces, each of the plurality of state spaces comprising an indication of whether or not the particular portion of the content was viewed by each of the plurality of individual members of the household; generating a probability distribution that represents, for each of the plurality of state spaces, a probability that the particular portion of the content was viewed by the plurality of individual members of the household represented by the state space; determining, based on a simulation of the generated probability distribution, a plurality of possible viewership scenarios, each of the plurality of possible viewership scenarios corresponding to one of the state spaces; selecting one of the plurality of possible viewership scenarios; and generating a report that identifies, for the selected viewership scenario and for each of the individual members of the household, a Boolean output indicating whether or not the particular portion of the content was viewed by the individual member of the household. 14. The device of claim 13, wherein the simulation is a Monte Carlo simulation. 15. The device of claim 13, wherein each of the plurality of state spaces comprises, for each of the plurality of individual members of the household, a state value corresponding to whether or not the particular portion of the content was viewed by the individual member of the household. 16. 
The device of claim 15, wherein the state value is one of a first state value indicating that the particular portion of the content was viewed by the individual member of the household and a second state value indicating that the particular portion of the content was not viewed by the individual member of the household. 17. A non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by a device, cause the device to perform operations comprising:
accessing household member data that identifies a plurality of individual members of a household; accessing information that identifies, for each of the plurality of individual members of the household, a probability that a particular portion of content was viewed by that individual member of the household; generating a plurality of state spaces, each of the plurality of state spaces comprising an indication of whether or not the particular portion of the content was viewed by each of the plurality of individual members of the household; generating a probability distribution that represents, for each of the plurality of state spaces, a probability that the particular portion of the content was viewed by the plurality of individual members of the household represented by the state space; determining, based on a simulation of the generated probability distribution, a plurality of possible viewership scenarios, each of the plurality of possible viewership scenarios corresponding to one of the state spaces; selecting one of the plurality of possible viewership scenarios; and generating a report that identifies, for the selected viewership scenario and for each of the individual members of the household, a Boolean output indicating whether or not the particular portion of the content was viewed by the individual member of the household. 18. The non-transitory computer-readable storage medium of claim 17, wherein the simulation is a Monte Carlo simulation. 19. The non-transitory computer-readable storage medium of claim 17, wherein each of the plurality of state spaces comprises, for each of the plurality of individual members of the household, a state value corresponding to whether or not the particular portion of the content was viewed by the individual member of the household. 20. 
The non-transitory computer-readable storage medium of claim 19, wherein the state value is one of a first state value indicating that the particular portion of the content was viewed by the individual member of the household and a second state value indicating that the particular portion of the content was not viewed by the individual member of the household. | Techniques for generating a deterministic outcome for content viewership based on a probability model are described. Information that identifies a probability that a particular portion of a program was viewed by an individual member of a household may be accessed. A probability distribution may be generated that represents, for each of a plurality of state spaces, a probability that the particular portion of the program was viewed by the plurality of individual members of the household. Based on a simulation of the generated probability distribution, a plurality of possible viewership scenarios may be determined. One of the plurality of viewership scenarios may be selected, and a report may be generated that identifies, for the selected viewership scenario, each of the individual members of the household and an indication of whether the particular portion of the program was viewed by each of the individual members of the household.1. A computer-implemented method, comprising:
accessing household member data that identifies a plurality of individual members of a household; accessing information that identifies, for each of the plurality of individual members of the household, a probability that a particular portion of content was viewed by that individual member of the household; generating a plurality of state spaces, each of the plurality of state spaces comprising an indication of whether or not the particular portion of the content was viewed by each of the plurality of individual members of the household; generating a probability distribution that represents, for each of the plurality of state spaces, a probability that the particular portion of the content was viewed by the plurality of individual members of the household represented by the state space; determining, based on a simulation of the generated probability distribution, a plurality of possible viewership scenarios, each of the plurality of possible viewership scenarios corresponding to one of the state spaces; selecting one of the plurality of possible viewership scenarios; and generating a report that identifies, for the selected viewership scenario and for each of the individual members of the household, a Boolean output indicating whether or not the particular portion of the content was viewed by the individual member of the household. 2. The method of claim 1, wherein the simulation is a Monte Carlo simulation. 3. The method of claim 1, wherein each of the plurality of state spaces comprises, for each of the plurality of individual members of the household, a state value corresponding to whether or not the particular portion of the content was viewed by the individual member of the household. 4. 
The method of claim 3, wherein the state value is one of a first state value indicating that the particular portion of the content was viewed by the individual member of the household and a second state value indicating that the particular portion of the content was not viewed by the individual member of the household. 5. The method of claim 1, wherein selecting one of the plurality of possible viewership scenarios comprises randomly selecting one of the plurality of possible viewership scenarios. 6. The method of claim 1, wherein the information is based on at least one of panelist viewing data and household member data. 7. The method of claim 6, wherein accessing the information comprises determining, for each of the plurality of individual members of the household, a total number of watched minutes of the particular portion of the content by the individual member of the household and a number of continuous series of watched states of the particular portion of the content by the individual member of the household. 8. The method of claim 1, wherein accessing the information comprises accessing a table that identifies, for each of the plurality of individual members of the household, a probability that the particular portion of the content was viewed by that individual member of the household. 9. The method of claim 1, wherein generating the probability distribution comprises processing one or more of the plurality of state spaces in parallel. 10. The method of claim 1, wherein generating the plurality of state spaces comprises determining all possible intersections between the plurality of individual members of the household. 11. The method of claim 1, wherein the particular portion of the content is a one minute duration of the content. 12. The method of claim 1, wherein the content is at least one of a television program or a movie program. 13. 
A device comprising a processor and a memory, the memory storing computer executable instructions which, when executed by the processor, cause the device to perform operations comprising:
accessing household member data that identifies a plurality of individual members of a household; accessing information that identifies, for each of the plurality of individual members of the household, a probability that a particular portion of content was viewed by that individual member of the household; generating a plurality of state spaces, each of the plurality of state spaces comprising an indication of whether or not the particular portion of the content was viewed by each of the plurality of individual members of the household; generating a probability distribution that represents, for each of the plurality of state spaces, a probability that the particular portion of the content was viewed by the plurality of individual members of the household represented by the state space; determining, based on a simulation of the generated probability distribution, a plurality of possible viewership scenarios, each of the plurality of possible viewership scenarios corresponding to one of the state spaces; selecting one of the plurality of possible viewership scenarios; and generating a report that identifies, for the selected viewership scenario and for each of the individual members of the household, a Boolean output indicating whether or not the particular portion of the content was viewed by the individual member of the household. 14. The device of claim 13, wherein the simulation is a Monte Carlo simulation. 15. The device of claim 13, wherein each of the plurality of state spaces comprises, for each of the plurality of individual members of the household, a state value corresponding to whether or not the particular portion of the content was viewed by the individual member of the household. 16. 
The device of claim 15, wherein the state value is one of a first state value indicating that the particular portion of the content was viewed by the individual member of the household and a second state value indicating that the particular portion of the content was not viewed by the individual member of the household. 17. A non-transitory computer-readable storage medium comprising computer-executable instructions which, when executed by a device, cause the device to perform operations comprising:
accessing household member data that identifies a plurality of individual members of a household; accessing information that identifies, for each of the plurality of individual members of the household, a probability that a particular portion of content was viewed by that individual member of the household; generating a plurality of state spaces, each of the plurality of state spaces comprising an indication of whether or not the particular portion of the content was viewed by each of the plurality of individual members of the household; generating a probability distribution that represents, for each of the plurality of state spaces, a probability that the particular portion of the content was viewed by the plurality of individual members of the household represented by the state space; determining, based on a simulation of the generated probability distribution, a plurality of possible viewership scenarios, each of the plurality of possible viewership scenarios corresponding to one of the state spaces; selecting one of the plurality of possible viewership scenarios; and generating a report that identifies, for the selected viewership scenario and for each of the individual members of the household, a Boolean output indicating whether or not the particular portion of the content was viewed by the individual member of the household. 18. The non-transitory computer-readable storage medium of claim 17, wherein the simulation is a Monte Carlo simulation. 19. The non-transitory computer-readable storage medium of claim 17, wherein each of the plurality of state spaces comprises, for each of the plurality of individual members of the household, a state value corresponding to whether or not the particular portion of the content was viewed by the individual member of the household. 20. 
The non-transitory computer-readable storage medium of claim 19, wherein the state value is one of a first state value indicating that the particular portion of the content was viewed by the individual member of the household and a second state value indicating that the particular portion of the content was not viewed by the individual member of the household. | 2,400 |
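The claims of the record above describe a concrete algorithm: enumerate a state space of viewed/not-viewed combinations per household member, build a probability distribution over those states, run a Monte Carlo simulation to produce possible viewership scenarios, select one scenario, and report a Boolean per member. A minimal Python sketch is given below. It assumes members' viewing probabilities are independent when forming the joint distribution (the claims do not specify the joint model), and all names (`simulate_viewership`, the member labels) are illustrative, not from the patent.

```python
import itertools
import random

def simulate_viewership(member_probs, n_draws=10000, seed=0):
    """Sketch of the claimed flow: state spaces -> probability
    distribution -> Monte Carlo scenarios -> selected scenario ->
    Boolean report per household member."""
    members = list(member_probs)
    # One state space per viewed (True) / not-viewed (False) combination,
    # i.e. all possible intersections between household members (claim 10).
    state_spaces = list(itertools.product([False, True], repeat=len(members)))

    # Probability of each state space; independence is an assumption
    # made here for illustration only.
    def state_prob(state):
        prob = 1.0
        for member, viewed in zip(members, state):
            p = member_probs[member]
            prob *= p if viewed else 1.0 - p
        return prob

    distribution = {state: state_prob(state) for state in state_spaces}

    # Monte Carlo simulation (claim 2): draw possible viewership
    # scenarios from the generated distribution.
    rng = random.Random(seed)
    states, weights = zip(*distribution.items())
    scenarios = rng.choices(states, weights=weights, k=n_draws)

    # Randomly select one of the possible scenarios (claim 5).
    selected = rng.choice(scenarios)

    # Report: Boolean output per individual member (claim 1).
    return {member: viewed for member, viewed in zip(members, selected)}

report = simulate_viewership({"adult_1": 0.9, "adult_2": 0.4, "child": 0.1})
```

Note the deterministic outcome: although the inputs are probabilities, the selected scenario yields a hard True/False per member, which is the point of the claimed "Boolean output" report.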
8,634 | 8,634 | 15,246,057 | 2,482 | A targeting system includes a personal display system, an attitude determination unit and processing circuitry. The personal display system generates or enables a live view of a physical environment including a field element. The attitude determination unit measures an attitude of the targeting system. The processing circuitry determines a relative LOS from the targeting system to the field element based on the attitude and a geographic location of the targeting system, and a geographic location of the field element. The processing circuitry determines a relative position in the live view from its center of the field of view to the field element therein based on the relative LOS, and causes the personal display system to display a primary view that overlays and thereby augments the live view, with the primary view including an icon at the relative position that thereby overlays the field element in the live view. | 1. A targeting system for use in a physical environment including a field element, the targeting system comprising:
a personal display system configured to generate or enable a live view of the physical environment, the live view including the field element and having a field of view centered on a line of sight (LOS) of the targeting system; an attitude determination unit configured to measure an attitude of the targeting system in an azimuth and elevation that describe the LOS of the targeting system, and in a tilt of the targeting system; and processing circuitry configured to receive the attitude from the attitude determination unit, and programmed to at least:
determine a relative LOS from the targeting system to the field element based on the attitude and a geographic location of the targeting system, and a geographic location of the field element;
determine a relative position in the live view from the center of the field of view to the field element therein based on the relative LOS; and
cause the personal display system to display a primary view that overlays and thereby augments the live view, the primary view including an icon at the relative position that thereby overlays the field element in the live view. 2. The targeting system of claim 1, wherein the attitude determination unit is configured to persistently measure the attitude, and the processing circuitry is configured to persistently receive the attitude, and programmed to persistently determine the relative LOS based on the attitude, determine the relative position in the live view based on the relative LOS, and cause the personal display system to display the primary view including the icon at the relative position. 3. The targeting system of claim 1 further comprising a memory storing a classification of the field element, the classification being of a plurality of classifications associated with a respective plurality of icons,
wherein the processing circuitry is further configured to access the memory, and programmed to cause the personal display system to display the primary view including the icon associated with the classification of the field element. 4. The targeting system of claim 3, wherein the processing circuitry is programmed to cause the personal display system to display the primary view further including a notification regarding the field element in an instance in which the relative position and the center of the field of view are co-aligned, the notification being selected from a plurality of notifications based on the classification of the field element. 5. The targeting system of claim 3, wherein the field element is one of a plurality of field elements in the physical environment, and the memory stores a classification of each of the plurality of field elements,
wherein the processing circuitry is programmed to determine the relative LOS from the targeting system to each of the plurality of field elements, and wherein the processing circuitry is further programmed to identify the field element as having a relative position within the field of view based on the relative LOS from the targeting system to the field element. 6. The targeting system of claim 5, wherein the processing circuitry is further programmed to identify another field element as having a relative position outside the field of view based on the relative LOS from the targeting system to the other field element, and
wherein the processing circuitry is programmed to cause the personal display system to display the primary view that further includes an arrow indicating a turning direction from the LOS of the targeting system to the other field element. 7. The targeting system of claim 1, wherein the processing circuitry is further programmed to cause the personal display system to display a secondary view that also overlays and thereby further augments the live view, the secondary view depicting an area of the environment surrounding the targeting system, and including icons that represent the targeting system and field element. 8. The targeting system of claim 7 further comprising a memory storing a classification that identifies the field element as a target, and information that indicates a munition assigned to the target, and a minimum safe distance associated with the munition, and
wherein the processing circuitry is further configured to access the memory, and programmed to cause the personal display system to display the secondary view including an indicator that indicates the minimum safe distance relative to the target. 9. The targeting system of claim 7, wherein the secondary view is centered on the icon that represents the targeting system,
wherein the processing circuitry is further programmed to determine a distance from the targeting system to the field element based on the geographic location of the targeting system and the geographic location of the field element, and wherein the processing circuitry is programmed to cause the personal display system to display the secondary view centered on the icon that represents the targeting system, and in which the icon that represents the field element is positioned relative to the center of the secondary view based on the distance from the targeting system to the field element, and the relative LOS from the targeting system to the field element. 10. The targeting system of claim 1 further comprising a rangefinder configured to measure a range from the targeting system to a landmark in the physical environment, and
wherein the processing circuitry is further programmed to determine the geographic location of the targeting system based on the attitude of the targeting system, the range from the targeting system to the landmark, and a geographic location of the landmark. 11. A method of using a targeting system in a physical environment including a field element, the method comprising:
generating or enabling a live view of the physical environment, the live view including the field element and having a field of view centered on a line of sight (LOS) of the targeting system; measuring an attitude of the targeting system in an azimuth and elevation that describe the LOS of the targeting system, and in a tilt of the targeting system; determining a relative LOS from the targeting system to the field element based on the attitude and a geographic location of the targeting system, and a geographic location of the field element; determining a relative position in the live view from the center of the field of view to the field element therein based on the relative LOS; and displaying a primary view that overlays and thereby augments the live view, the primary view including an icon at the relative position that thereby overlays the field element in the live view. 12. The method of claim 11, wherein the measuring the attitude, determining the relative LOS, determining the relative position and displaying the primary view are performed persistently. 13. The method of claim 11, wherein the field element has a classification of a plurality of classifications associated with a respective plurality of icons, and
wherein displaying the primary view includes displaying the primary view including an icon associated with the classification of the field element. 14. The method of claim 13, wherein displaying the primary view includes displaying the primary view further including a notification regarding the field element in an instance in which the relative position and the center of the field of view are co-aligned, the notification being selected from a plurality of notifications based on the classification of the field element. 15. The method of claim 13, wherein the field element is one of a plurality of field elements in the physical environment, and each of the plurality of field elements has a classification,
wherein determining the relative LOS includes determining the relative LOS from the targeting system to each of the plurality of field elements, and wherein the method further comprises identifying the field element as having a relative position within the field of view based on the relative LOS from the targeting system to the field element. 16. The method of claim 15 further comprising identifying another field element as having a relative position outside the field of view based on the relative LOS from the targeting system to the other field element, and
wherein displaying the primary view includes displaying the primary view that further includes an arrow indicating a turning direction from the LOS of the targeting system to the other field element. 17. The method of claim 11 further comprising displaying a secondary view that also overlays and thereby further augments the live view, the secondary view depicting an area of the environment surrounding the targeting system, and including icons that represent the targeting system and field element. 18. The method of claim 17, wherein the field element has a classification that identifies the field element as a target, a munition is assigned to the target, and the munition has a minimum safe distance associated therewith, and
wherein displaying the secondary view includes displaying the secondary view including an indicator that indicates the minimum safe distance relative to the target. 19. The method of claim 17, wherein the secondary view is centered on the icon that represents the targeting system,
wherein the method further comprises determining a distance from the targeting system to the field element based on the geographic location of the targeting system and the geographic location of the field element, and wherein displaying the secondary view includes displaying the secondary view centered on the icon that represents the targeting system, and in which the icon that represents the field element is positioned relative to the center of the secondary view based on the distance from the targeting system to the field element, and the relative LOS from the targeting system to the field element. 20. The method of claim 11 further comprising:
measuring a range from the targeting system to a landmark in the physical environment; and
determining the geographic location of the targeting system based on the attitude of the targeting system, the range from the targeting system to the landmark, and a geographic location of the landmark. | A targeting system includes a personal display system, an attitude determination unit and processing circuitry. The personal display system generates or enables a live view of a physical environment including a field element. The attitude determination unit measures an attitude of the targeting system. The processing circuitry determines a relative LOS from the targeting system to the field element based on the attitude and a geographic location of the targeting system, and a geographic location of the field element. The processing circuitry determines a relative position in the live view from its center of the field of view to the field element therein based on the relative LOS, and causes the personal display system to display a primary view that overlays and thereby augments the live view, with the primary view including an icon at the relative position that thereby overlays the field element in the live view.1. A targeting system for use in a physical environment including a field element, the targeting system comprising:
a personal display system configured to generate or enable a live view of the physical environment, the live view including the field element and having a field of view centered on a line of sight (LOS) of the targeting system; an attitude determination unit configured to measure an attitude of the targeting system in an azimuth and elevation that describe the LOS of the targeting system, and in a tilt of the targeting system; and processing circuitry configured to receive the attitude from the attitude determination unit, and programmed to at least:
determine a relative LOS from the targeting system to the field element based on the attitude and a geographic location of the targeting system, and a geographic location of the field element;
determine a relative position in the live view from the center of the field of view to the field element therein based on the relative LOS; and
cause the personal display system to display a primary view that overlays and thereby augments the live view, the primary view including an icon at the relative position that thereby overlays the field element in the live view. 2. The targeting system of claim 1, wherein the attitude determination unit is configured to persistently measure the attitude, and the processing circuitry is configured to persistently receive the attitude, and programmed to persistently determine the relative LOS based on the attitude, determine the relative position in the live view based on the relative LOS, and cause the personal display system to display the primary view including the icon at the relative position. 3. The targeting system of claim 1 further comprising a memory storing a classification of the field element, the classification being of a plurality of classifications associated with a respective plurality of icons,
wherein the processing circuitry is further configured to access the memory, and programmed to cause the personal display system to display the primary view including the icon associated with the classification of the field element. 4. The targeting system of claim 3, wherein the processing circuitry is programmed to cause the personal display system to display the primary view further including a notification regarding the field element in an instance in which the relative position and the center of the field of view are co-aligned, the notification being selected from a plurality of notifications based on the classification of the field element. 5. The targeting system of claim 3, wherein the field element is one of a plurality of field elements in the physical environment, and the memory stores a classification of each of the plurality of field elements,
wherein the processing circuitry is programmed to determine the relative LOS from the targeting system to each of the plurality of field elements, and wherein the processing circuitry is further programmed to identify the field element as having a relative position within the field of view based on the relative LOS from the targeting system to the field element. 6. The targeting system of claim 5, wherein the processing circuitry is further programmed to identify another field element as having a relative position outside the field of view based on the relative LOS from the targeting system to the other field element, and
wherein the processing circuitry is programmed to cause the personal display system to display the primary view that further includes an arrow indicating a turning direction from the LOS of the targeting system to the other field element. 7. The targeting system of claim 1, wherein the processing circuitry is further programmed to cause the personal display system to display a secondary view that also overlays and thereby further augments the live view, the secondary view depicting an area of the environment surrounding the targeting system, and including icons that represent the targeting system and field element. 8. The targeting system of claim 7 further comprising a memory storing a classification that identifies the field element as a target, and information that indicates a munition assigned to the target, and a minimum safe distance associated with the munition, and
wherein the processing circuitry is further configured to access the memory, and programmed to cause the personal display system to display the secondary view including an indicator that indicates the minimum safe distance relative to the target. 9. The targeting system of claim 7, wherein the secondary view is centered on the icon that represents the targeting system,
wherein the processing circuitry is further programmed to determine a distance from the targeting system to the field element based on the geographic location of the targeting system and the geographic location of the field element, and wherein the processing circuitry is programmed to cause the personal display system to display the secondary view centered on the icon that represents the targeting system, and in which the icon that represents the field element is positioned relative to the center of the secondary view based on the distance from the targeting system to the field element, and the relative LOS from the targeting system to the field element. 10. The targeting system of claim 1 further comprising a rangefinder configured to measure a range from the targeting system to a landmark in the physical environment, and
wherein the processing circuitry is further programmed to determine the geographic location of the targeting system based on the attitude of the targeting system, the range from the targeting system to the landmark, and a geographic location of the landmark. 11. A method of using a targeting system in a physical environment including a field element, the method comprising:
generating or enabling a live view of the physical environment, the live view including the field element and having a field of view centered on a line of sight (LOS) of the targeting system; measuring an attitude of the targeting system in an azimuth and elevation that describe the LOS of the targeting system, and in a tilt of the targeting system; determining a relative LOS from the targeting system to the field element based on the attitude and a geographic location of the targeting system, and a geographic location of the field element; determining a relative position in the live view from the center of the field of view to the field element therein based on the relative LOS; and displaying a primary view that overlays and thereby augments the live view, the primary view including an icon at the relative position that thereby overlays the field element in the live view. 12. The method of claim 11, wherein the measuring the attitude, determining the relative LOS, determining the relative position and displaying the primary view are performed persistently. 13. The method of claim 11, wherein the field element has a classification of a plurality of classifications associated with a respective plurality of icons, and
wherein displaying the primary view includes displaying the primary view including an icon associated with the classification of the field element. 14. The method of claim 13, wherein displaying the primary view includes displaying the primary view further including a notification regarding the field element in an instance in which the relative position and the center of the field of view are co-aligned, the notification being selected from a plurality of notifications based on the classification of the field element. 15. The method of claim 13, wherein the field element is one of a plurality of field elements in the physical environment, and each of the plurality of field elements has a classification,
wherein determining the relative LOS includes determining the relative LOS from the targeting system to each of the plurality of field elements, and wherein the method further comprises identifying the field element as having a relative position within the field of view based on the relative LOS from the targeting system to the field element. 16. The method of claim 15 further comprising identifying another field element as having a relative position outside the field of view based on the relative LOS from the targeting system to the other field element, and
wherein displaying the primary view includes displaying the primary view that further includes an arrow indicating a turning direction from the LOS of the targeting system to the other field element. 17. The method of claim 11 further comprising displaying a secondary view that also overlays and thereby further augments the live view, the secondary view depicting an area of the environment surrounding the targeting system, and including icons that represent the targeting system and field element. 18. The method of claim 17, wherein the field element has a classification that identifies the field element as a target, a munition is assigned to the target, and the munition has a minimum safe distance associated therewith, and
wherein displaying the secondary view includes displaying the secondary view including an indicator that indicates the minimum safe distance relative to the target. 19. The method of claim 17, wherein the secondary view is centered on the icon that represents the targeting system,
wherein the method further comprises determining a distance from the targeting system to the field element based on the geographic location of the targeting system and the geographic location of the field element, and wherein displaying the secondary view includes displaying the secondary view centered on the icon that represents the targeting system, and in which the icon that represents the field element is positioned relative to the center of the secondary view based on the distance from the targeting system to the field element, and the relative LOS from the targeting system to the field element. 20. The method of claim 11 further comprising:
measuring a range from the targeting system to a landmark in the physical environment; and
determining the geographic location of the targeting system based on the attitude of the targeting system, the range from the targeting system to the landmark, and a geographic location of the landmark. | 2,400 |
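The geometry recited in claims 11 and 15 above, deriving a relative LOS from the system's attitude and the geographic locations of the system and a field element, can be sketched in a few lines. This is an illustrative approximation only: the function name, the flat-earth bearing model, and the assumption of a field element at the system's altitude are not from the patent.

```python
import math

def relative_los(attitude_az, attitude_el, own_lat, own_lon, elem_lat, elem_lon):
    """Signed (azimuth, elevation) offset, in degrees, from the targeting
    system's line of sight to a field element.

    Hypothetical sketch: uses a flat-earth bearing approximation and
    assumes the field element lies at the system's altitude."""
    # Bearing from the system to the element (0 deg = north, clockwise).
    dlat = elem_lat - own_lat
    dlon = (elem_lon - own_lon) * math.cos(math.radians(own_lat))
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360.0
    # Relative azimuth wrapped into [-180, 180); the sign gives the
    # turning direction used for the off-screen arrow of claim 16.
    d_az = (bearing - attitude_az + 180.0) % 360.0 - 180.0
    # With the same-altitude assumption, the elevation offset is just
    # the negated attitude elevation.
    d_el = -attitude_el
    return d_az, d_el
```

An element whose offsets fall inside the half-angles of the field of view would be drawn at the corresponding relative position in the primary view; co-alignment as in claim 14 is the case where both offsets are near zero.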
8,635 | 8,635 | 14,500,493 | 2,482 | An apparatus and method for generating a mid-infrared region image of a specimen are disclosed. The apparatus includes a mid-infrared region laser that generates a first light beam, and a stage adapted to carry a specimen to be scanned. An optical assembly focuses the first light beam to a point on the specimen. A first light detector measures a first intensity of light leaving the point on the specimen. A stage actuator assembly causes the specimen to move relative to the point in two dimensions. A controller forms a mid-infrared region image from the first intensity. The image can be based on reflected or transmitted light. The maximum size of the imaged area is determined by a scanning assembly that moves in a first direction relative to the stage, the stage moving in a direction orthogonal to the first direction. | 1. An apparatus comprising a MIR imaging system comprising:
a MIR laser that generates a first light beam; a stage adapted to carry a specimen to be scanned; an optical assembly that focuses said first light beam to a point on said specimen; a first light detector that measures a first intensity of light leaving said point on said specimen; a stage actuator assembly that causes said specimen to move relative to said point in two dimensions; and a controller that forms a MIR image from said first intensity, wherein said optical assembly comprises a scanning assembly having a focusing lens that focuses said first light beam to said point and a mirror that moves in a first direction relative to said stage such that said focusing lens maintains a fixed distance between said focusing lens and said stage, said stage moving in a direction orthogonal to said first direction. 2. The apparatus of claim 1 wherein said first light detector measures light reflected from said specimen. 3. The apparatus of claim 1 wherein said first light detector measures light transmitted by said specimen. 4. The apparatus of claim 1 further comprising a beam intensity detector that measures an intensity of said first light beam. 5. The apparatus of claim 4 wherein said MIR laser is a pulsed light source and said controller sums measured intensities from said first light detector only during periods in which said measured beam intensity is greater than a first threshold. 6. The apparatus of claim 5 wherein said controller determines a ratio of said measured intensity of said first light beam and said measured beam intensity to form said MIR image. 7. The apparatus of claim 1 comprising a visible light imaging station that displays a visible image of said specimen generated by illuminating said specimen with light in a visual range of light wavelengths. 8. 
The apparatus of claim 7, wherein said controller is configured to receive input from a user indicating a region in said visible image that is to be scanned by said MIR imaging system to generate a corresponding MIR image. 9. The apparatus of claim 8 wherein said MIR laser generates light in a range of wavelengths that is controlled by said controller, wherein said controller is configured to receive input from a user indicating a feature of interest in said MIR image, and wherein said controller measures light leaving said feature of interest at different wavelengths from said feature of interest to generate a spectrum characterizing said feature of interest. 10. An apparatus comprising a MIR imaging system comprising:
a pulsed MIR laser that generates a first light beam; a stage adapted to carry a specimen to be scanned; an optical assembly that focuses said first light beam to a point on said specimen; a first light detector that measures a first intensity of light leaving said point on said specimen; a stage actuator assembly that causes said specimen to move relative to said point in two dimensions; a beam intensity detector that measures an intensity of said first light beam; and a controller that forms a MIR image from said first intensity, wherein said controller sums measured intensities from said first light detector only during periods in which said measured beam intensity is greater than a first threshold. 11. The apparatus of claim 10 wherein said controller determines a ratio of said measured intensity of said first light beam and said measured beam intensity to form said MIR image. 12. The apparatus of claim 10 wherein said controller only sums measured intensities associated with pulses that generate an intensity greater than a predetermined threshold in said beam intensity detector. 13. The apparatus of claim 12 where said beam intensity detector comprises a directional attenuator that blocks light not traveling in a predetermined direction. | An apparatus and method for generating a mid-infrared region image of a specimen are disclosed. The apparatus includes a mid-infrared region laser that generates a first light beam, and a stage adapted to carry a specimen to be scanned. An optical assembly focuses the first light beam to a point on the specimen. A first light detector measures a first intensity of light leaving the point on the specimen. A stage actuator assembly causes the specimen to move relative to the point in two dimensions. A controller forms a mid-infrared region image from the first intensity. The image can be based on reflected or transmitted light. 
The maximum size of the imaged area is determined by a scanning assembly that moves in a first direction relative to the stage, the stage moving in a direction orthogonal to the first direction.1. An apparatus comprising a MIR imaging system comprising:
a MIR laser that generates a first light beam; a stage adapted to carry a specimen to be scanned; an optical assembly that focuses said first light beam to a point on said specimen; a first light detector that measures a first intensity of light leaving said point on said specimen; a stage actuator assembly that causes said specimen to move relative to said point in two dimensions; and a controller that forms a MIR image from said first intensity, wherein said optical assembly comprises a scanning assembly having a focusing lens that focuses said first light beam to said point and a mirror that moves in a first direction relative to said stage such that said focusing lens maintains a fixed distance between said focusing lens and said stage, said stage moving in a direction orthogonal to said first direction. 2. The apparatus of claim 1 wherein said first light detector measures light reflected from said specimen. 3. The apparatus of claim 1 wherein said first light detector measures light transmitted by said specimen. 4. The apparatus of claim 1 further comprising a beam intensity detector that measures an intensity of said first light beam. 5. The apparatus of claim 4 wherein said MIR laser is a pulsed light source and said controller sums measured intensities from said first light detector only during periods in which said measured beam intensity is greater than a first threshold. 6. The apparatus of claim 5 wherein said controller determines a ratio of said measured intensity of said first light beam and said measured beam intensity to form said MIR image. 7. The apparatus of claim 1 comprising a visible light imaging station that displays a visible image of said specimen generated by illuminating said specimen with light in a visual range of light wavelengths. 8. 
The apparatus of claim 7, wherein said controller is configured to receive input from a user indicating a region in said visible image that is to be scanned by said MIR imaging system to generate a corresponding MIR image. 9. The apparatus of claim 8 wherein said MIR laser generates light in a range of wavelengths that is controlled by said controller, wherein said controller is configured to receive input from a user indicating a feature of interest in said MIR image, and wherein said controller measures light leaving said feature of interest at different wavelengths from said feature of interest to generate a spectrum characterizing said feature of interest. 10. An apparatus comprising a MIR imaging system comprising:
a pulsed MIR laser that generates a first light beam; a stage adapted to carry a specimen to be scanned; an optical assembly that focuses said first light beam to a point on said specimen; a first light detector that measures a first intensity of light leaving said point on said specimen; a stage actuator assembly that causes said specimen to move relative to said point in two dimensions; a beam intensity detector that measures an intensity of said first light beam; and a controller that forms a MIR image from said first intensity, wherein said controller sums measured intensities from said first light detector only during periods in which said measured beam intensity is greater than a first threshold. 11. The apparatus of claim 10 wherein said controller determines a ratio of said measured intensity of said first light beam and said measured beam intensity to form said MIR image. 12. The apparatus of claim 10 wherein said controller only sums measured intensities associated with pulses that generate an intensity greater than a predetermined threshold in said beam intensity detector. 13. The apparatus of claim 12 where said beam intensity detector comprises a directional attenuator that blocks light not traveling in a predetermined direction. | 2,400 |
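Claims 5-6 and 10-12 above describe pulse-gated accumulation: the controller sums detector readings only while the measured beam intensity exceeds a threshold, then normalizes by the summed beam intensity to cancel pulse-to-pulse laser power variation. A minimal sketch, assuming a simple per-pixel list of paired samples (the function name and interface are illustrative):

```python
def mir_pixel_value(detector_samples, beam_samples, threshold):
    """Ratio of gated detector intensity to gated beam intensity for one
    pixel of the MIR image. Samples below the beam-intensity threshold
    (weak or missing pulses) are excluded from both sums."""
    sum_detector = 0.0
    sum_beam = 0.0
    for d, b in zip(detector_samples, beam_samples):
        if b > threshold:          # gate: keep only valid pulses
            sum_detector += d
            sum_beam += b
    if sum_beam == 0.0:
        return 0.0                 # no valid pulse for this pixel
    # The ratio removes pulse-to-pulse variation in laser power.
    return sum_detector / sum_beam
```

Repeating this per-pixel computation across the stage positions yields the MIR image formed by the controller.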
8,636 | 8,636 | 14,822,519 | 2,473 | A method and an apparatus are provided for transmitting channel state information (CSI) by a terminal in a communication system. The method includes receiving a first signal from a first serving cell; receiving a second signal from a second serving cell; calculating first CSI for the first serving cell based on the first signal; calculating second CSI for the first serving cell based on the first signal; calculating first CSI for the second serving cell based on the second signal; calculating second CSI for the second serving cell based on the second signal; transmitting the first CSI and the second CSI for the first serving cell respectively; and transmitting the first CSI and the second CSI for the second serving cell respectively. | 1. A method for transmitting channel state information (CSI) by a terminal in a communication system, the method comprising:
receiving a first signal from a first serving cell; receiving a second signal from a second serving cell; calculating first CSI for the first serving cell based on the first signal; calculating second CSI for the first serving cell based on the first signal; calculating first CSI for the second serving cell based on the second signal; calculating second CSI for the second serving cell based on the second signal; transmitting the first CSI for the first serving cell and the second CSI for the first serving cell; and transmitting the first CSI for the second serving cell and the second CSI for the second serving cell. 2. The method of claim 1, wherein the first CSI includes first matrix information, and the second CSI includes second matrix information. 3. The method of claim 1, wherein transmitting the first CSI for the first serving cell and the second CSI for the first serving cell comprises:
transmitting the first CSI for the first serving cell on a first channel according to first time information; and transmitting the second CSI for the first serving cell on the first channel according to second time information. 4. The method of claim 1, wherein transmitting the first CSI for the second serving cell and the second CSI for the second serving cell comprises:
transmitting the first CSI for the second serving cell on a second channel according to third time information; and transmitting the second CSI for the second serving cell on the second channel according to fourth time information. 5. A method for receiving channel state information (CSI) by a first base station in a communication system, the method comprising:
transmitting a signal to a terminal; and receiving, from the terminal, first CSI and second CSI for at least one serving cell of the first base station, wherein the first CSI and the second CSI for the at least one serving cell of the first base station are calculated based on the signal transmitted from the first base station, and wherein first CSI and second CSI for at least one serving cell of a second base station are calculated based on a signal transmitted from the second base station and transmitted to the second base station. 6. The method of claim 5, wherein the first CSI includes first matrix information, and the second CSI includes second matrix information. 7. The method of claim 5, wherein receiving the first CSI and the second CSI for the at least one serving cell comprises:
receiving the first CSI for the at least one cell of the first base station on a first channel according to first time information; and receiving the second CSI for the at least one cell of the first base station on the first channel according to second time information. 8. The method of claim 5, wherein the first CSI for the at least one serving cell of the second base station is transmitted to the second base station on a second channel according to third time information, and
wherein the second CSI for the at least one serving cell of the second base station is transmitted to the second base station on the second channel according to fourth time information. 9. A terminal for transmitting channel state information (CSI) in a communication system, the terminal comprising:
a transceiver; and a controller configured to:
receive, via the transceiver, a first signal from a first serving cell,
receive, via the transceiver, a second signal from a second serving cell,
calculate first CSI for the first serving cell based on the first signal,
calculate second CSI for the first serving cell based on the first signal,
calculate first CSI for the second serving cell based on the second signal,
calculate second CSI for the second serving cell based on the second signal,
transmit, via the transceiver, the first CSI for the first serving cell and the second CSI for the first serving cell, and
transmit, via the transceiver, the first CSI for the second serving cell and the second CSI for the second serving cell. 10. The terminal of claim 9, wherein the first CSI includes first matrix information, and the second CSI includes second matrix information. 11. The terminal of claim 9, wherein the controller is further configured to transmit the first CSI for the first serving cell on a first channel according to first time information, and transmit the second CSI for the first serving cell on the first channel according to second time information. 12. The terminal of claim 9, wherein the controller is further configured to transmit the first CSI for the second serving cell on a second channel according to third time information, and transmit the second CSI for the second serving cell on the second channel according to fourth time information. 13. A first base station for receiving channel state information (CSI) in a communication system, the first base station comprising:
a transceiver; and a controller configured to:
transmit, via the transceiver, a signal to a terminal, and
receive, via the transceiver, from the terminal, first CSI and second CSI for at least one serving cell of the first base station,
wherein the first CSI and the second CSI for the at least one serving cell of the first base station are calculated based on the signal transmitted from the first base station, and wherein first CSI and second CSI for the at least one serving cell of a second base station are calculated based on a signal transmitted from the second base station and transmitted to the second base station. 14. The first base station of claim 13, wherein the first CSI includes first matrix information, and the second CSI includes second matrix information. 15. The first base station of claim 14, wherein the controller is further configured to
receive the first CSI for the at least one cell of the first base station on a first channel according to first time information; and receive the second CSI for the at least one cell of the first base station on the first channel according to second time information. 16. The first base station of claim 13, wherein the first CSI for the at least one cell of the second base station is transmitted to the second base station on a second channel according to third time information, and the second CSI for the at least one cell of the second base station is transmitted to the second base station on the second channel according to fourth time information. | A method and an apparatus are provided for transmitting channel state information (CSI) by a terminal in a communication system. The method includes receiving a first signal from a first serving cell; receiving a second signal from a second serving cell; calculating first CSI for the first serving cell based on the first signal; calculating second CSI for the first serving cell based on the first signal; calculating first CSI for the second serving cell based on the second signal; calculating second CSI for the second serving cell based on the second signal; transmitting the first CSI and the second CSI for the first serving cell respectively; and transmitting the first CSI and the second CSI for the second serving cell respectively.1. A method for transmitting channel state information (CSI) by a terminal in a communication system, the method comprising:
receiving a first signal from a first serving cell; receiving a second signal from a second serving cell; calculating first CSI for the first serving cell based on the first signal; calculating second CSI for the first serving cell based on the first signal; calculating first CSI for the second serving cell based on the second signal; calculating second CSI for the second serving cell based on the second signal; transmitting the first CSI for the first serving cell and the second CSI for the first serving cell; and transmitting the first CSI for the second serving cell and the second CSI for the second serving cell. 2. The method of claim 1, wherein the first CSI includes first matrix information, and the second CSI includes second matrix information. 3. The method of claim 1, wherein transmitting the first CSI for the first serving cell and the second CSI for the first serving cell comprises:
transmitting the first CSI for the first serving cell on a first channel according to first time information; and transmitting the second CSI for the first serving cell on the first channel according to second time information. 4. The method of claim 1, wherein transmitting the first CSI for the second serving cell and the second CSI for the second serving cell comprises:
transmitting the first CSI for the second serving cell on a second channel according to third time information; and transmitting the second CSI for the second serving cell on the second channel according to fourth time information. 5. A method for receiving channel state information (CSI) by a first base station in a communication system, the method comprising:
transmitting a signal to a terminal; and receiving, from the terminal, first CSI and second CSI for at least one serving cell of the first base station, wherein the first CSI and the second CSI for the at least one serving cell of the first base station are calculated based on the signal transmitted from the first base station, and wherein first CSI and second CSI for at least one serving cell of a second base station are calculated based on a signal transmitted from the second base station and transmitted to the second base station. 6. The method of claim 5, wherein the first CSI includes first matrix information, and the second CSI includes second matrix information. 7. The method of claim 5, wherein receiving the first CSI and the second CSI for the at least one serving cell comprises:
receiving the first CSI for the at least one cell of the first base station on a first channel according to first time information; and receiving the second CSI for the at least one cell of the first base station on the first channel according to second time information. 8. The method of claim 5, wherein the first CSI for the at least one serving cell of the second base station is transmitted to the second base station on a second channel according to third time information, and
wherein the second CSI for the at least one serving cell of the second base station is transmitted to the second base station on the second channel according to fourth time information. 9. A terminal for transmitting channel state information (CSI) in a communication system, the terminal comprising:
a transceiver; and a controller configured to:
receive, via the transceiver, a first signal from a first serving cell,
receive, via the transceiver, a second signal from a second serving cell,
calculate first CSI for the first serving cell based on the first signal,
calculate second CSI for the first serving cell based on the first signal,
calculate first CSI for the second serving cell based on the second signal,
calculate second CSI for the second serving cell based on the second signal,
transmit, via the transceiver, the first CSI for the first serving cell and the second CSI for the first serving cell, and
transmit, via the transceiver, the first CSI for the second serving cell and the second CSI for the second serving cell. 10. The terminal of claim 9, wherein the first CSI includes first matrix information, and the second CSI includes second matrix information. 11. The terminal of claim 9, wherein the controller is further configured to transmit the first CSI for the first serving cell on a first channel according to first time information, and transmit the second CSI for the first serving cell on the first channel according to second time information. 12. The terminal of claim 9, wherein the controller is further configured to transmit the first CSI for the second serving cell on a second channel according to third time information, and transmit the second CSI for the second serving cell on the second channel according to fourth time information. 13. A first base station for receiving channel state information (CSI) in a communication system, the first base station comprising:
a transceiver; and a controller configured to:
transmit, via the transceiver, a signal to a terminal, and
receive, via the transceiver, from the terminal, first CSI and second CSI for at least one serving cell of the first base station,
wherein the first CSI and the second CSI for the at least one serving cell of the first base station are calculated based on the signal transmitted from the first base station, and wherein first CSI and second CSI for the at least one serving cell of a second base station are calculated based on a signal transmitted from the second base station and transmitted to the second base station. 14. The first base station of claim 13, wherein the first CSI includes first matrix information, and the second CSI includes second matrix information. 15. The first base station of claim 14, wherein the controller is further configured to
receive the first CSI for the at least one cell of the first base station on a first channel according to first time information; and receive the second CSI for the at least one cell of the first base station on the first channel according to second time information. 16. The first base station of claim 13, wherein the first CSI for the at least one cell of the second base station is transmitted to the second base station on a second channel according to third time information, and the second CSI for the at least one cell of the second base station is transmitted to the second base station on the second channel according to forth time information. | 2,400 |
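The per-cell, per-CSI-type timing in claims 3-4 and 11-12 above amounts to time-multiplexing reports on each cell's channel. Below is a toy scheduler under an assumed (offset, period) encoding of the timing information; the schedule layout and names are illustrative and not taken from any 3GPP specification.

```python
def csi_reports_for_subframe(subframe, schedule):
    """Return the (channel, cell, csi_type) reports due in a subframe.

    `schedule` maps (cell, csi_type) to (channel, offset, period), a
    hypothetical encoding of the first/second/third/fourth time
    information recited in the claims."""
    due = []
    for (cell, csi_type), (channel, offset, period) in schedule.items():
        if (subframe - offset) % period == 0:   # report instant reached
            due.append((channel, cell, csi_type))
    return due
```

For example, first and second CSI for one serving cell can share a channel but use different offsets, so the two report types never collide in time.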
8,637 | 8,637 | 15,201,949 | 2,477 | A method of function prioritization in a multi-channel voice-datalink radio is provided. The method comprises suspending transmission of one or more data messages on a communication channel when transmission of a voice message is active on the communication channel, buffering the one or more data messages while the transmission of the voice message is active, and resuming transmission of the buffered one or more data messages on the communication channel when the transmission of the voice message ends. | 1. A method of function prioritization in a multi-channel voice-datalink radio, the method comprising:
suspending transmission of one or more data messages on a communication channel when transmission of a voice message is active on the communication channel; buffering the one or more data messages while the transmission of the voice message is active; and resuming transmission of the buffered one or more data messages on the communication channel when the transmission of the voice message ends. 2. The method of claim 1, wherein the voice message is activated on the communication channel via a push-to-talk function in an aircraft. 3. The method of claim 1, wherein the transmission of one or more data messages is on a first channel frequency, and the transmission of the voice message is on a second channel frequency different than the first channel frequency. 4. The method of claim 1, wherein the communication channel operates with a carrier sense multiple access (CSMA) protocol. 5. The method of claim 1, wherein the communication channel operates using VHF data link (VDL) mode 2 protocol. 6. The method of claim 1, wherein the communication channel operates using VDL mode A protocol. 7. The method of claim 1, wherein transmission of the buffered one or more data messages is resumed from the beginning of a last message block not fully transmitted by a VDL transmitter. 8. A multi-channel voice-datalink radio system, comprising:
a multi-mode radio module comprising:
a transmitter section including at least one transmitter configured to send a radio frequency (RF) signal in a communications band to a first antenna, the RF signal comprising data or voice communications;
a receiver section including at least one receiver configured to receive an RF signal in the communications band from the first antenna; and
a processor section including at least one digital signal processor operatively coupled to the transmitter section and the receiver section; and
a digital interface unit operatively coupled to the processor section; wherein the processor section is configured to:
suspend transmission of one or more data messages on a communication channel when transmission of a voice message is active on the communication channel;
buffer the one or more data messages while the transmission of the voice message is active; and
resume transmission of the buffered one or more data messages on the communication channel when the transmission of the voice message ends. 9. The radio system of claim 8, wherein the processor section further comprises at least one memory unit and a field programmable gate array, which are operatively coupled to the digital signal processor. 10. The radio system of claim 9, wherein the transmitter is operatively coupled to the processor section through a digital to analog converter. 11. The radio system of claim 10, wherein the receiver is operatively coupled to the processor section through an analog to digital converter. 12. The radio system of claim 11, wherein the analog to digital converter outputs a digital signal to a data buffer. 13. The radio system of claim 12, wherein:
the data buffer is operatively coupled to the digital signal processor, the memory unit, and the field programmable gate array; and the data buffer is configured to hold the one or more data messages while the transmission of the voice message is active. 14. The radio system of claim 8, wherein the voice message is activated on the communication channel via a push-to-talk function in an aircraft. 15. The radio system of claim 8, wherein the transmission of one or more data messages is on a first channel frequency, and the transmission of the voice message is on a second channel frequency different than the first channel frequency. 16. The radio system of claim 8, wherein the communication channel operates with a carrier sense multiple access (CSMA) protocol. 17. The system of claim 8, wherein the communication channel operates using VHF data link (VDL) mode 2 protocol. 18. The system of claim 8, wherein the communication channel operates using VDL mode A protocol. 19. A computer program product, comprising:
a non-transitory computer readable medium including instructions executable by a processor to perform a method of function prioritization in a multi-channel voice-datalink radio, the method comprising:
suspending transmission of one or more data messages on a communication channel when transmission of a voice message is active on the communication channel;
buffering the one or more data messages while the transmission of the voice message is active; and
resuming transmission of the buffered one or more data messages on the communication channel when the transmission of the voice message ends. 20. The computer program product of claim 19, wherein the communication channel operates using VHF data link (VDL) mode 2 protocol, or VDL mode A protocol. | A method of function prioritization in a multi-channel voice-datalink radio is provided. The method comprises suspending transmission of one or more data messages on a communication channel when transmission of a voice message is active on the communication channel, buffering the one or more data messages while the transmission of the voice message is active, and resuming transmission of the buffered one or more data messages on the communication channel when the transmission of the voice message ends.1. A method of function prioritization in a multi-channel voice-datalink radio, the method comprising:
suspending transmission of one or more data messages on a communication channel when transmission of a voice message is active on the communication channel; buffering the one or more data messages while the transmission of the voice message is active; and resuming transmission of the buffered one or more data messages on the communication channel when the transmission of the voice message ends. 2. The method of claim 1, wherein the voice message is activated on the communication channel via a push-to-talk function in an aircraft. 3. The method of claim 1, wherein the transmission of one or more data messages is on a first channel frequency, and the transmission of the voice message is on a second channel frequency different than the first channel frequency. 4. The method of claim 1, wherein the communication channel operates with a carrier sense multiple access (CSMA) protocol. 5. The method of claim 1, wherein the communication channel operates using VHF data link (VDL) mode 2 protocol. 6. The method of claim 1, wherein the communication channel operates using VDL mode A protocol. 7. The method of claim 1, wherein transmission of the buffered one or more data messages is resumed from the beginning of a last message block not fully transmitted by a VDL transmitter. 8. A multi-channel voice-datalink radio system, comprising:
a multi-mode radio module comprising:
a transmitter section including at least one transmitter configured to send a radio frequency (RF) signal in a communications band to a first antenna, the RF signal comprising data or voice communications;
a receiver section including at least one receiver configured to receive an RF signal in the communications band from the first antenna; and
a processor section including at least one digital signal processor operatively coupled to the transmitter section and the receiver section; and
a digital interface unit operatively coupled to the processor section; wherein the processor section is configured to:
suspend transmission of one or more data messages on a communication channel when transmission of a voice message is active on the communication channel;
buffer the one or more data messages while the transmission of the voice message is active; and
resume transmission of the buffered one or more data messages on the communication channel when the transmission of the voice message ends. 9. The radio system of claim 8, wherein the processor section further comprises at least one memory unit and a field programmable gate array, which are operatively coupled to the digital signal processor. 10. The radio system of claim 9, wherein the transmitter is operatively coupled to the processor section through a digital to analog converter. 11. The radio system of claim 10, wherein the receiver is operatively coupled to the processor section through an analog to digital converter. 12. The radio system of claim 11, wherein the analog to digital converter outputs a digital signal to a data buffer. 13. The radio system of claim 12, wherein:
the data buffer is operatively coupled to the digital signal processor, the memory unit, and the field programmable gate array; and the data buffer is configured to hold the one or more data messages while the transmission of the voice message is active. 14. The radio system of claim 8, wherein the voice message is activated on the communication channel via a push-to-talk function in an aircraft. 15. The radio system of claim 8, wherein the transmission of one or more data messages is on a first channel frequency, and the transmission of the voice message is on a second channel frequency different than the first channel frequency. 16. The radio system of claim 8, wherein the communication channel operates with a carrier sense multiple access (CSMA) protocol. 17. The system of claim 8, wherein the communication channel operates using VHF data link (VDL) mode 2 protocol. 18. The system of claim 8, wherein the communication channel operates using VDL mode A protocol. 19. A computer program product, comprising:
a non-transitory computer readable medium including instructions executable by a processor to perform a method of function prioritization in a multi-channel voice-datalink radio, the method comprising:
suspending transmission of one or more data messages on a communication channel when transmission of a voice message is active on the communication channel;
buffering the one or more data messages while the transmission of the voice message is active; and
resuming transmission of the buffered one or more data messages on the communication channel when the transmission of the voice message ends. 20. The computer program product of claim 19, wherein the communication channel operates using VHF data link (VDL) mode 2 protocol, or VDL mode A protocol. | 2,400 |
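The suspend/buffer/resume behaviour recited in claims 1, 8, and 19 of this record can be sketched as a small simulation (a minimal illustration only; the class and method names are invented and not part of the patent):

```python
from collections import deque

class VoicePriorityDatalink:
    """Sketch of the claimed prioritization: data messages yield the
    channel while a voice message is active, are buffered, and are
    replayed in order once the voice transmission ends."""

    def __init__(self):
        self.buffer = deque()   # holds data messages while voice is active
        self.voice_active = False
        self.sent = []          # messages fully transmitted on the channel

    def transmit_data(self, message):
        if self.voice_active:
            self.buffer.append(message)   # buffer during voice (claim 1)
        else:
            self.sent.append(message)

    def start_voice(self):
        self.voice_active = True          # suspend data transmission

    def end_voice(self):
        self.voice_active = False
        while self.buffer:                # resume buffered messages in order
            self.sent.append(self.buffer.popleft())
```

In use, any data message offered while `voice_active` is set lands in the buffer and reaches `sent` only after `end_voice()`, mirroring the push-to-talk scenario of claim 14.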
8,638 | 8,638 | 15,772,235 | 2,416 | In the systems and methods described herein, the 3GPP RAN determines the thresholds that will be provided to the UE devices to facilitate steering traffic between radio networks. More specifically, an eNB of the 3GPP RAN determines the network-related parameter threshold values based, at least partially, on an indication that all or a portion of user data traffic of one or more UE devices is to be handed over from another network (e.g., WLAN). For example, based on the indication that all or a portion of user data traffic of one or more UE devices is to be handed over from another network, the eNB may determine that either more or less traffic should be offloaded to the WLAN. The eNB can modify the network-related parameter threshold values that are being sent to the one or more UE devices so that the level of traffic being offloaded can be appropriately increased or decreased. | 1. A method comprising:
receiving, at a base station of a first radio network, a setup request message requesting that the base station set up radio resources for a user equipment (UE) device, the setup request message containing an indication that user data traffic of the UE device is to be handed over from a second radio network; and setting up the requested radio resources for the UE device. 2. The method of claim 1, wherein the first radio network is a 3GPP Radio Access Network, and the setup request message is transmitted from a Mobility Management Entity. 3. The method of claim 2, wherein the indication is based at least partially on a Request Type parameter received from the UE device in a Connectivity Request message. 4. The method of claim 1, wherein the second radio network is a Wireless Local Area Network. 5. The method of claim 1, further comprising:
determining, based at least partially on the indication, at least one network-related parameter threshold value. 6. The method of claim 5, further comprising:
determining, based at least partially on a comparison between the at least one network-related parameter threshold value and a current network-related parameter value, whether to steer traffic to the first radio network or to the second radio network. 7. A mobile wireless communication device, comprising:
a transmitter configured to transmit a request for radio resources associated with a first radio network, the request containing a parameter indicating that user data traffic of the mobile wireless communication device is to be handed over from a second radio network; and a receiver configured to receive a control message establishing the requested radio resources. 8. The mobile wireless communication device of claim 7, wherein the first radio network is a 3GPP Radio Access Network, the request for radio resources is a Connectivity Request message, and the parameter is a Request Type parameter. 9. The mobile wireless communication device of claim 7, wherein the second radio network is a Wireless Local Area Network. 10. The mobile wireless communication device of claim 7, further comprising:
a controller configured to
compare at least one network-related parameter threshold value and a current network-related parameter value, and
based at least partially on the comparison, steer traffic to the first radio network or to the second radio network. 11. The mobile wireless communication device of claim 10, wherein the receiver is configured to receive the at least one network-related parameter threshold value from a base station. 12. A base station of a first radio network, the base station comprising:
a receiver configured to receive a setup request message requesting that the base station set up radio resources for a user equipment (UE) device, the setup request message containing an indication that user data traffic of the UE device is to be handed over from a second radio network; and a transmitter configured to transmit a control message establishing the requested radio resources. 13. The base station of claim 12, wherein the first radio network is a 3GPP Radio Access Network, and the setup request message is received from a Mobility Management Entity. 14. The base station of claim 13, wherein the indication is based at least partially on a Request Type parameter received from the UE device in a Connectivity Request message. 15. The base station of claim 12, wherein the second radio network is a Wireless Local Area Network. 16. The base station of claim 12, further comprising:
a controller configured to determine, based at least partially on the indication, at least one network-related parameter threshold value. 17. The base station of claim 16, wherein the transmitter is configured to transmit the at least one network-related parameter threshold value. | In the systems and methods described herein, the 3GPP RAN determines the thresholds that will be provided to the UE devices to facilitate steering traffic between radio networks. More specifically, an eNB of the 3GPP RAN determines the network-related parameter threshold values based, at least partially, on an indication that all or a portion of user data traffic of one or more UE devices is to be handed over from another network (e.g., WLAN). For example, based on the indication that all or a portion of user data traffic of one or more UE devices is to be handed over from another network, the eNB may determine that either more or less traffic should be offloaded to the WLAN. The eNB can modify the network-related parameter threshold values that are being sent to the one or more UE devices so that the level of traffic being offloaded can be appropriately increased or decreased.1. A method comprising:
receiving, at a base station of a first radio network, a setup request message requesting that the base station set up radio resources for a user equipment (UE) device, the setup request message containing an indication that user data traffic of the UE device is to be handed over from a second radio network; and setting up the requested radio resources for the UE device. 2. The method of claim 1, wherein the first radio network is a 3GPP Radio Access Network, and the setup request message is transmitted from a Mobility Management Entity. 3. The method of claim 2, wherein the indication is based at least partially on a Request Type parameter received from the UE device in a Connectivity Request message. 4. The method of claim 1, wherein the second radio network is a Wireless Local Area Network. 5. The method of claim 1, further comprising:
determining, based at least partially on the indication, at least one network-related parameter threshold value. 6. The method of claim 5, further comprising:
determining, based at least partially on a comparison between the at least one network-related parameter threshold value and a current network-related parameter value, whether to steer traffic to the first radio network or to the second radio network. 7. A mobile wireless communication device, comprising:
a transmitter configured to transmit a request for radio resources associated with a first radio network, the request containing a parameter indicating that user data traffic of the mobile wireless communication device is to be handed over from a second radio network; and a receiver configured to receive a control message establishing the requested radio resources. 8. The mobile wireless communication device of claim 7, wherein the first radio network is a 3GPP Radio Access Network, the request for radio resources is a Connectivity Request message, and the parameter is a Request Type parameter. 9. The mobile wireless communication device of claim 7, wherein the second radio network is a Wireless Local Area Network. 10. The mobile wireless communication device of claim 7, further comprising:
a controller configured to
compare at least one network-related parameter threshold value and a current network-related parameter value, and
based at least partially on the comparison, steer traffic to the first radio network or to the second radio network. 11. The mobile wireless communication device of claim 10, wherein the receiver is configured to receive the at least one network-related parameter threshold value from a base station. 12. A base station of a first radio network, the base station comprising:
a receiver configured to receive a setup request message requesting that the base station set up radio resources for a user equipment (UE) device, the setup request message containing an indication that user data traffic of the UE device is to be handed over from a second radio network; and a transmitter configured to transmit a control message establishing the requested radio resources. 13. The base station of claim 12, wherein the first radio network is a 3GPP Radio Access Network, and the setup request message is received from a Mobility Management Entity. 14. The base station of claim 13, wherein the indication is based at least partially on a Request Type parameter received from the UE device in a Connectivity Request message. 15. The base station of claim 12, wherein the second radio network is a Wireless Local Area Network. 16. The base station of claim 12, further comprising:
a controller configured to determine, based at least partially on the indication, at least one network-related parameter threshold value. 17. The base station of claim 16, wherein the transmitter is configured to transmit the at least one network-related parameter threshold value. | 2,400 |
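The threshold mechanism described in this record's abstract and claims 5-6 and 10 — the eNB adjusts network-related parameter thresholds, and the UE compares them against current measurements to steer traffic — can be sketched as follows (parameter names, comparison direction, and constants are assumptions for illustration, not taken from the patent):

```python
def steer_traffic(current, thresholds):
    """UE-side rule (claims 6/10 sketch): offload to the WLAN only when
    every measured network-related parameter meets the eNB-provided
    threshold; otherwise keep traffic on the 3GPP network."""
    meets_all = all(current[name] >= limit for name, limit in thresholds.items())
    return "WLAN" if meets_all else "3GPP"

def adjust_threshold(threshold, handover_count, target=5, step=2.0):
    """eNB-side rule (abstract sketch): if many UEs indicate traffic is
    being handed over back from the other network, raise the offload
    threshold so less traffic is steered there; otherwise lower it."""
    return threshold + step if handover_count > target else threshold - step
```

Here a stricter (higher) threshold decreases offloading, matching the abstract's statement that the eNB modifies thresholds so the offloaded traffic level "can be appropriately increased or decreased."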
8,639 | 8,639 | 14,540,408 | 2,419 | A deployable airborne sensor array system and method of use are provided herein. The system includes a tether configured to be coupled to and deployed from an aircraft and a plurality of airborne vehicles coupled to the tether. Each of the plurality of airborne vehicles includes different lift characteristics to form a three-dimensional (3D) array of airborne vehicles. Each airborne vehicle includes a sensing device configured to generate sensor data associated with a target. The system also includes a computing device configured to process the sensor data received from each of said plurality of airborne vehicles and generate an image of the target based on the sensor data. | 1. A deployable airborne sensor array system comprising:
a tether configured to be coupled to and deployed from an aircraft; a plurality of airborne vehicles coupled to said tether, each of said plurality of airborne vehicles having different lift characteristics to form a three-dimensional (3D) array of airborne vehicles, each airborne vehicle comprising a sensor device configured to generate sensor data associated with a target; and a computing device configured to: process the sensor data received from each of said plurality of airborne vehicles; and generate an image of the target based on the sensor data. 2. The system of claim 1, wherein said tether comprises a tether network having a plurality of tethers coupling together one or more of said plurality of airborne vehicles. 3. The system of claim 1, wherein said different lift characteristics include unbalanced wings on at least first and second airborne vehicles that cause the first and second airborne vehicles to respectively glide to the left and to the right of the aircraft, and further include a positive lift profile and negative lift profile on at least third and fourth airborne vehicles that cause the third and fourth airborne vehicles to respectively glide above and below the aircraft, such that the plurality of airborne vehicles establish a three-dimensional array of sensors operating coherently to capture a three-dimensional view of a target at an instant in time. 4. The system of claim 3, wherein each sensor device comprises an imaging camera and said computing device is further configured to:
aim the plurality of imaging cameras at the target; and instruct the imaging cameras to capture a two-dimensional (2D) image of the target. 5. The system of claim 4, wherein said computing device is further configured to determine a position of each of the imaging cameras relative to the target. 6. The system of claim 5, wherein said computing device is further configured to:
determine an effective pixel size for each imaging camera based on the position of each of the imaging cameras relative to the target; and generate a super-resolution image of the target using the effective pixel size for each imaging camera. 7. The system of claim 5, wherein said computing device is further configured to generate a 3D image of the target using 3D ray tracing methodology, the 3D image generated based on the position of each of the imaging cameras relative to the target. 8. The system of claim 4, wherein said computing device is further configured to:
instruct the imaging cameras to capture a two-dimensional (2D) image of the target at varying times; and interleave the captured 2D images based on the time in which each 2D image was captured to generate a high-speed video of the target. 9. The system of claim 1, wherein said computing device is further configured to:
instruct each of the sensor devices to transmit a radio frequency pulse signal toward the target; receive a radio frequency pulse return signal from each of the sensor devices; and combine the received radio frequency pulse return signals to generate an image of the target having increased azimuth and range resolution. 10. The system of claim 9, wherein the image is a 3D image. 11. A method comprising:
deploying a tether from an aircraft, the tether including a plurality of airborne vehicles coupled to the tether, each of the plurality of airborne vehicles having different lift characteristics to form a three-dimensional (3D) array of airborne vehicles; each airborne vehicle including a sensing device configured to generate sensor data associated with a target; and processing, by a computing device, sensor data associated with a target received from each of the plurality of airborne vehicles, the sensor data generated by a sensing device coupled to each airborne vehicle; and generating, by the computing device, an image of the target based on the sensor data. 12. The method of claim 11, wherein deploying a tether further comprises deploying a tether network including a plurality of tethers coupling together one or more of the plurality of airborne vehicles. 13. The method of claim 11, wherein each sensor device includes an imaging camera, said method further comprising:
aiming the plurality of imaging cameras at the target; and instructing the imaging cameras to capture a two-dimensional (2D) image of the target. 14. The method of claim 13, further comprising determining a position of each of the imaging cameras relative to the target. 15. The method of claim 14, further comprising:
determining an effective pixel size for each imaging camera based on the position of each of the imaging cameras relative to the target; and generating a super-resolution image of the target using the effective pixel size for each imaging camera. 16. The method of claim 14, further comprising generating a 3D image of the target using 3D ray tracing methodology, the 3D image generated based on the position of each of the imaging cameras relative to the target. 17. The method of claim 13, further comprising:
instructing the imaging cameras to capture a two-dimensional (2D) image of the target at varying times; and interleaving the captured 2D images based on the time in which each 2D image was captured to generate a high-speed video of the target. 18. The method of claim 11, further comprising:
instructing each of the sensing devices to transmit a radio frequency pulse signal toward the target; receiving a radio frequency pulse return signal from each of the sensing devices; and combining the received radio frequency pulse return signals to generate a 3D image of the target having increased azimuth resolution. 19. The method of claim 11, wherein deploying a tether further comprises deploying a tether including a plurality of airborne vehicles having different horizontal and vertical lift characteristics relative to one another, such that the plurality of airborne vehicles establish a three-dimensional array of sensors operating coherently to capture a three-dimensional view of a target at an instant in time. | A deployable airborne sensor array system and method of use are provided herein. The system includes a tether configured to be coupled to and deployed from an aircraft and a plurality of airborne vehicles coupled to the tether. Each of the plurality of airborne vehicles includes different lift characteristics to form a three-dimensional (3D) array of airborne vehicles. Each airborne vehicle includes a sensing device configured to generate sensor data associated with a target. The system also includes a computing device configured to process the sensor data received from each of said plurality of airborne vehicles and generate an image of the target based on the sensor data.1. A deployable airborne sensor array system comprising:
a tether configured to be coupled to and deployed from an aircraft; a plurality of airborne vehicles coupled to said tether, each of said plurality of airborne vehicles having different lift characteristics to form a three-dimensional (3D) array of airborne vehicles, each airborne vehicle comprising a sensor device configured to generate sensor data associated with a target; and a computing device configured to: process the sensor data received from each of said plurality of airborne vehicles; and generate an image of the target based on the sensor data. 2. The system of claim 1, wherein said tether comprises a tether network having a plurality of tethers coupling together one or more of said plurality of airborne vehicles. 3. The system of claim 1, wherein said different lift characteristics include unbalanced wings on at least first and second airborne vehicles that cause the first and second airborne vehicles to respectively glide to the left and to the right of the aircraft, and further include a positive lift profile and negative lift profile on at least third and fourth airborne vehicles that cause the third and fourth airborne vehicles to respectively glide above and below the aircraft, such that the plurality of airborne vehicles establish a three-dimensional array of sensors operating coherently to capture a three-dimensional view of a target at an instant in time. 4. The system of claim 3, wherein each sensor device comprises an imaging camera and said computing device is further configured to:
aim the plurality of imaging cameras at the target; and instruct the imaging cameras to capture a two-dimensional (2D) image of the target. 5. The system of claim 4, wherein said computing device is further configured to determine a position of each of the imaging cameras relative to the target. 6. The system of claim 5, wherein said computing device is further configured to:
determine an effective pixel size for each imaging camera based on the position of each of the imaging cameras relative to the target; and generate a super-resolution image of the target using the effective pixel size for each imaging camera. 7. The system of claim 5, wherein said computing device is further configured to generate a 3D image of the target using 3D ray tracing methodology, the 3D image generated based on the position of each of the imaging cameras relative to the target. 8. The system of claim 4, wherein said computing device is further configured to:
instruct the imaging cameras to capture a two-dimensional (2D) image of the target at varying times; and interleave the captured 2D images based on the time in which each 2D image was captured to generate a high-speed video of the target. 9. The system of claim 1, wherein said computing device is further configured to:
instruct each of the sensor devices to transmit a radio frequency pulse signal toward the target; receive a radio frequency pulse return signal from each of the sensor devices; and combine the received radio frequency pulse return signals to generate an image of the target having increased azimuth and range resolution. 10. The system of claim 9, wherein the image is a 3D image. 11. A method comprising:
deploying a tether from an aircraft, the tether including a plurality of airborne vehicles coupled to the tether, each of the plurality of airborne vehicles having different lift characteristics to form a three-dimensional (3D) array of airborne vehicles; each airborne vehicle including a sensing device configured to generate sensor data associated with a target; and processing, by a computing device, sensor data associated with a target received from each of the plurality of airborne vehicles, the sensor data generated by a sensing device coupled to each airborne vehicle; and generating, by the computing device, an image of the target based on the sensor data. 12. The method of claim 11, wherein deploying a tether further comprises deploying a tether network including a plurality of tethers coupling together one or more of the plurality of airborne vehicles. 13. The method of claim 11, wherein each sensor device includes an imaging camera, said method further comprising:
aiming the plurality of imaging cameras at the target; and instructing the imaging cameras to capture a two-dimensional (2D) image of the target. 14. The method of claim 13, further comprising determining a position of each of the imaging cameras relative to the target. 15. The method of claim 14, further comprising:
determining an effective pixel size for each imaging camera based on the position of each of the imaging cameras relative to the target; and generating a super-resolution image of the target using the effective pixel size for each imaging camera. 16. The method of claim 14, further comprising generating a 3D image of the target using 3D ray tracing methodology, the 3D image generated based on the position of each of the imaging cameras relative to the target. 17. The method of claim 13, further comprising:
instructing the imaging cameras to capture a two-dimensional (2D) image of the target at varying times; and interleaving the captured 2D images based on the time in which each 2D image was captured to generate a high-speed video of the target. 18. The method of claim 11, further comprising:
instructing each of the sensing devices to transmit a radio frequency pulse signal toward the target; receiving a radio frequency pulse return signal from each of the sensing devices; and combining the received radio frequency pulse return signals to generate a 3D image of the target having increased azimuth resolution. 19. The method of claim 11, wherein deploying a tether further comprises deploying a tether including a plurality of airborne vehicles having different horizontal and vertical lift characteristics relative to one another, such that the plurality of airborne vehicles establish a three-dimensional array of sensors operating coherently to capture a three-dimensional view of a target at an instant in time. | 2,400 |
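The high-speed-video step in this record (claims 8 and 17: capture 2D images at varying times from the camera array, then interleave them by capture time) can be sketched as a simple time-ordered merge (an illustrative sketch; the function name and `(timestamp, frame)` representation are assumptions):

```python
def interleave_frames(camera_streams):
    """Merge per-camera streams of (timestamp, frame) pairs into a
    single sequence ordered by capture time, yielding a higher
    effective frame rate than any single camera provides."""
    frames = [pair for stream in camera_streams for pair in stream]
    frames.sort(key=lambda pair: pair[0])   # order by capture timestamp
    return [frame for _, frame in frames]
```

With two cameras staggered by half a frame period, the merged sequence alternates between them, doubling the effective frame rate of the resulting video.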
8,640 | 8,640 | 15,832,787 | 2,456 | A system and method in a building or vehicle for an actuator operation in response to a sensor according to a control logic, the system comprising a router or a gateway communicating with a device associated with the sensor and a device associated with the actuator over in-building or in-vehicle networks, and an external Internet-connected control server associated with the control logic implementing a PID closed linear control loop and communicating with the router over external network for controlling the in-building or in-vehicle phenomenon. The sensor may be a microphone or a camera, and the system may include voice or image processing as part of the control logic. A redundancy is used by using multiple sensors or actuators, or by using multiple data paths over the building or vehicle internal or external communication. The networks may be wired or wireless, and may be BAN, PAN, LAN, WAN, or home networks. | 1. A system for use with a wireless network in a building, the system comprising first and second devices in the building and an Internet-connected server device external to the building,
the first device comprising in a first single enclosure:
a first wireless transceiver for communicating with the server device over the wireless network; and
multiple microphones for capturing human voice data, the multiple microphones are coupled to the first wireless transceiver for sending the captured human voice data to the server device,
and the second device comprising in a second single enclosure:
a second wireless transceiver for communicating with the server device over the wireless network for receiving an actuator command therefrom; and
an actuator coupled to the second wireless transceiver for being activated, operated, or controlled in response to the received actuator command,
wherein the server device is operative to receive over the Internet the captured human voice data from the first device, to process the captured human voice data using a voice processing, and to send the actuator command in response to the processing. 2. The system according to claim 1, wherein the first single enclosure is the same as the second single enclosure. 3. The system according to claim 1, wherein the voice processing by the server device comprises performing a voice recognition algorithm for identifying the voice of a specific person. 4. The system according to claim 1, wherein the first device further comprises a sensor coupled to the first wireless transceiver that outputs sensor data that responds to a physical phenomenon, wherein the first device is further operative to send to the server device via the wireless network the sensor data, and wherein the actuator command is further in response to the sensor data. 5. The system according to claim 4, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation. 6. The system according to claim 4, wherein the sensor comprises a photoelectric sensor that responds to a visible or an invisible light, the invisible light is infrared, ultraviolet, X-rays, or gamma rays, wherein the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell, or wherein the photoelectric sensor is based on Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element. 7. The system according to claim 1, wherein the actuator is directly or indirectly affecting, changing, producing, or creating a physical phenomenon. 8. 
The system according to claim 7, wherein the physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current. 9. The system according to claim 1, wherein the multiple microphones are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of the human voice impinging the microphones array. 10. The system according to claim 1, wherein each of the microphones is an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing an incident-sound-based motion of a diaphragm or a ribbon. 11. The system according to claim 1, wherein each of the microphones consists of, or comprises, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone. 12. The system according to claim 1, wherein the first device and the second device are each addressable in the wireless network or the Internet using distinct locally administered addresses or universally administered digital addresses stored in a volatile or non-volatile memory of the respective device and uniquely identifying the respective device in the wireless network or in the Internet. 13. The system according to claim 12, wherein the digital address is a MAC layer address that is MAC-48, EUI-48, or EUI-64 address type or wherein the digital address is a layer 3 address and comprises a static or dynamic IP address that is IPv4 or IPv6 type address. 14. The system according to claim 1, wherein the wireless network is a Wireless Personal Area Network (WPAN), that is according to, or based on, Bluetooth™ or IEEE 802.15.1-2005 standards. 15. The system according to claim 1, wherein the wireless network is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards. 16. 
The system according to claim 1, wherein the wireless network is a Wireless LAN (WLAN) that is according to, or based on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac. 17. The system according to claim 1, wherein the wireless network uses a wireless communication over a licensed or an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band. 18. The system according to claim 1, wherein the wireless network is a cellular telephone network, that is a Third Generation (3G) network that uses UMTS W-CDMA, UMTS HSPA, UMTS TDD, CDMA2000 1×RTT, CDMA2000 EV-DO, or GSM EDGE-Evolution, or wherein the cellular telephone network is a Fourth Generation (4G) network that uses HSPA+, Mobile WiMAX, LTE, LTE-Advanced, MBWA, or is based on IEEE 802.20-2008. 19. The system according to claim 1, wherein the first device or the second device is integrated in, is part of, or is entirely included in, an appliance that is activated or controlled in response to the actuator command. 20. The system according to claim 19, wherein the primary functionality of the appliance is associated with food storage, handling, or preparation. 21. The system according to claim 20, wherein the primary function of the appliance is heating food, and wherein the appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker. 22. The system according to claim 20, wherein the appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker. 23. The system according to claim 19, wherein the primary function of the appliance is associated with environmental control, and the appliance consists of, or is part of, an HVAC system. 24. The system according to claim 23, wherein the primary function of the appliance is associated with temperature control, and wherein the appliance is an air conditioner or a heater. 25. 
The system according to claim 19, wherein the primary function of the appliance is associated with cleaning, wherein the appliance primary function is associated with clothes cleaning and the appliance is a washing machine or a clothes dryer, or wherein the appliance is a vacuum cleaner. 26. The system according to claim 19, wherein the appliance is an answering machine, a telephone set, a home cinema system, a HiFi system, a CD or DVD player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier. 27. The system according to claim 1, wherein the actuator is an electric light source for converting electrical energy into light. 28. The system according to claim 27, wherein the electric light source emits, in response to the actuator command, a visible light for illumination or indication. 29. The system according to claim 27, wherein the electric light source emits, in response to the actuator command, a non-visible light for illumination or indication, and wherein the non-visible light is infrared, ultraviolet, X-rays, or gamma rays. 30. The system according to claim 27, wherein the electric light source consists of, or comprises, a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode. 31. The system according to claim 1, wherein the actuator is a motion actuator that causes linear or rotary motion in response to the actuator command. 32. The system according to claim 1, wherein the actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves in response to the actuator command. 33. 
The system according to claim 32, wherein the sounder comprises an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker, or wherein the sounder comprises an electric bell, a buzzer (or beeper), a chime, a whistle or a ringer. 34. The system according to claim 32, wherein the operating of the actuator in response to the actuator command comprises playing digital audio content that is pre-recorded or synthesized voice. 35. The system according to claim 32, wherein the operating of the actuator in response to the actuator command comprises simulating the voice of a human being or generating music, or wherein the operating of the actuator comprises sounding a syllable, a word, a phrase, a sentence, a short story, or a long story, using a male or female voice. 36. The system according to claim 1, wherein the first device further comprises an additional actuator coupled to the first wireless transceiver for being activated, operated, or controlled, in response to receiving an additional actuator command, and wherein the server device is operative to send the additional actuator command in response to the voice processing. | A system and method in a building or vehicle for an actuator operation in response to a sensor according to a control logic, the system comprising a router or a gateway communicating with a device associated with the sensor and a device associated with the actuator over in-building or in-vehicle networks, and an external Internet-connected control server associated with the control logic implementing a PID closed linear control loop and communicating with the router over an external network for controlling the in-building or in-vehicle phenomenon. The sensor may be a microphone or a camera, and the system may include voice or image processing as part of the control logic. 
Redundancy is provided by using multiple sensors or actuators, or by using multiple data paths over the building's or vehicle's internal or external communication. The networks may be wired or wireless, and may be BAN, PAN, LAN, WAN, or home networks. 1. A system for use with a wireless network in a building, the system comprising first and second devices in the building and an Internet-connected server device external to the building,
the first device comprising in a first single enclosure:
a first wireless transceiver for communicating with the server device over the wireless network; and
multiple microphones for capturing human voice data, the multiple microphones are coupled to the first wireless transceiver for sending the captured human voice data to the server device,
and the second device comprising in a second single enclosure:
a second wireless transceiver for communicating with the server device over the wireless network for receiving an actuator command therefrom; and
an actuator coupled to the second wireless transceiver for being activated, operated, or controlled in response to the received actuator command,
wherein the server device is operative to receive over the Internet the captured human voice data from the first device, to process the captured human voice data using a voice processing, and to send the actuator command in response to the processing. 2. The system according to claim 1, wherein the first single enclosure is the same as the second single enclosure. 3. The system according to claim 1, wherein the voice processing by the server device comprises performing a voice recognition algorithm for identifying the voice of a specific person. 4. The system according to claim 1, wherein the first device further comprises a sensor coupled to the first wireless transceiver that outputs sensor data that responds to a physical phenomenon, wherein the first device is further operative to send to the server device via the wireless network the sensor data, and wherein the actuator command is further in response to the sensor data. 5. The system according to claim 4, wherein the sensor is a thermoelectric sensor that responds to a temperature or to a temperature gradient of an object using conduction, convection, or radiation. 6. The system according to claim 4, wherein the sensor comprises a photoelectric sensor that responds to a visible or an invisible light, the invisible light is infrared, ultraviolet, X-rays, or gamma rays, wherein the photoelectric sensor is based on the photoelectric or photovoltaic effect, and consists of, or comprises, a semiconductor component that consists of, or comprises, a photodiode, a phototransistor, or a solar cell, or wherein the photoelectric sensor is based on Charge-Coupled Device (CCD) or a Complementary Metal-Oxide Semiconductor (CMOS) element. 7. The system according to claim 1, wherein the actuator is directly or indirectly affecting, changing, producing, or creating a physical phenomenon. 8. 
The system according to claim 7, wherein the physical phenomenon comprises temperature, humidity, pressure, audio, vibration, light, motion, sound, proximity, flow rate, electrical voltage, or electrical current. 9. The system according to claim 1, wherein the multiple microphones are arranged as a directional microphones array operative to estimate a number, magnitude, frequency, Direction-Of-Arrival (DOA), distance, or speed of the human voice impinging the microphones array. 10. The system according to claim 1, wherein each of the microphones is an omnidirectional, unidirectional, or bidirectional microphone that is based on sensing an incident-sound-based motion of a diaphragm or a ribbon. 11. The system according to claim 1, wherein each of the microphones consists of, or comprises, a condenser, an electret, a dynamic, a ribbon, a carbon, or a piezoelectric microphone. 12. The system according to claim 1, wherein the first device and the second device are each addressable in the wireless network or the Internet using distinct locally administered addresses or universally administered digital addresses stored in a volatile or non-volatile memory of the respective device and uniquely identifying the respective device in the wireless network or in the Internet. 13. The system according to claim 12, wherein the digital address is a MAC layer address that is MAC-48, EUI-48, or EUI-64 address type or wherein the digital address is a layer 3 address and comprises a static or dynamic IP address that is IPv4 or IPv6 type address. 14. The system according to claim 1, wherein the wireless network is a Wireless Personal Area Network (WPAN), that is according to, or based on, Bluetooth™ or IEEE 802.15.1-2005 standards. 15. The system according to claim 1, wherein the wireless network is a wireless control network that is according to, or based on, Zigbee™, IEEE 802.15.4-2003, or Z-Wave™ standards. 16. 
The system according to claim 1, wherein the wireless network is a Wireless LAN (WLAN) that is according to, or based on, IEEE 802.11-2012, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, IEEE 802.11n, or IEEE 802.11ac. 17. The system according to claim 1, wherein the wireless network uses a wireless communication over a licensed or an unlicensed radio frequency band, that is an Industrial, Scientific and Medical (ISM) radio band. 18. The system according to claim 1, wherein the wireless network is a cellular telephone network, that is a Third Generation (3G) network that uses UMTS W-CDMA, UMTS HSPA, UMTS TDD, CDMA2000 1×RTT, CDMA2000 EV-DO, or GSM EDGE-Evolution, or wherein the cellular telephone network is a Fourth Generation (4G) network that uses HSPA+, Mobile WiMAX, LTE, LTE-Advanced, MBWA, or is based on IEEE 802.20-2008. 19. The system according to claim 1, wherein the first device or the second device is integrated in, is part of, or is entirely included in, an appliance that is activated or controlled in response to the actuator command. 20. The system according to claim 19, wherein the primary functionality of the appliance is associated with food storage, handling, or preparation. 21. The system according to claim 20, wherein the primary function of the appliance is heating food, and wherein the appliance is a microwave oven, an electric mixer, a stove, an oven, or an induction cooker. 22. The system according to claim 20, wherein the appliance is a refrigerator, a freezer, a food processor, a dishwasher, a food blender, a beverage maker, a coffeemaker, or an iced-tea maker. 23. The system according to claim 19, wherein the primary function of the appliance is associated with environmental control, and the appliance consists of, or is part of, an HVAC system. 24. The system according to claim 23, wherein the primary function of the appliance is associated with temperature control, and wherein the appliance is an air conditioner or a heater. 25. 
The system according to claim 19, wherein the primary function of the appliance is associated with cleaning, wherein the appliance primary function is associated with clothes cleaning and the appliance is a washing machine or a clothes dryer, or wherein the appliance is a vacuum cleaner. 26. The system according to claim 19, wherein the appliance is an answering machine, a telephone set, a home cinema system, a HiFi system, a CD or DVD player, an electric furnace, a trash compactor, a smoke detector, a light fixture, or a dehumidifier. 27. The system according to claim 1, wherein the actuator is an electric light source for converting electrical energy into light. 28. The system according to claim 27, wherein the electric light source emits, in response to the actuator command, a visible light for illumination or indication. 29. The system according to claim 27, wherein the electric light source emits, in response to the actuator command, a non-visible light for illumination or indication, and wherein the non-visible light is infrared, ultraviolet, X-rays, or gamma rays. 30. The system according to claim 27, wherein the electric light source consists of, or comprises, a lamp, an incandescent lamp, a gas discharge lamp, a fluorescent lamp, a Solid-State Lighting (SSL), a Light Emitting Diode (LED), an Organic LED (OLED), a polymer LED (PLED), or a laser diode. 31. The system according to claim 1, wherein the actuator is a motion actuator that causes linear or rotary motion in response to the actuator command. 32. The system according to claim 1, wherein the actuator is a sounder for converting an electrical energy to omnidirectional, unidirectional, or bidirectional pattern emitted, audible or inaudible, sound waves in response to the actuator command. 33. 
The system according to claim 32, wherein the sounder comprises an electromagnetic loudspeaker, a piezoelectric speaker, an electrostatic loudspeaker (ESL), a ribbon or planar magnetic loudspeaker, or a bending wave loudspeaker, or wherein the sounder comprises an electric bell, a buzzer (or beeper), a chime, a whistle or a ringer. 34. The system according to claim 32, wherein the operating of the actuator in response to the actuator command comprises playing digital audio content that is pre-recorded or synthesized voice. 35. The system according to claim 32, wherein the operating of the actuator in response to the actuator command comprises simulating the voice of a human being or generating music, or wherein the operating of the actuator comprises sounding a syllable, a word, a phrase, a sentence, a short story, or a long story, using a male or female voice. 36. The system according to claim 1, wherein the first device further comprises an additional actuator coupled to the first wireless transceiver for being activated, operated, or controlled, in response to receiving an additional actuator command, and wherein the server device is operative to send the additional actuator command in response to the voice processing. | 2,400 |
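The first claim set above describes a capture-process-actuate loop: a first device sends captured human voice data over a wireless network to an Internet-connected server, the server performs voice processing, and a second device's actuator is activated in response to the resulting actuator command. The sketch below is only an illustration of that control flow; the function names, the keyword-matching stand-in for voice recognition, and the command strings are hypothetical and are not taken from the patent.

```python
# Illustrative sketch of the capture-process-actuate flow of claim 1.
# The recognizer stub, command strings, and light actuator are all
# hypothetical; the claims do not specify these implementation details.

def recognize(voice_data: bytes) -> str:
    """Stand-in for the server's voice processing (claim 1)."""
    return voice_data.decode("utf-8", errors="ignore").lower()

def server_process(voice_data: bytes) -> str:
    """Server: process captured voice data and produce an actuator command."""
    text = recognize(voice_data)
    if "light on" in text:
        return "ACTUATE:LIGHT:ON"   # e.g. the electric light source of claim 27
    if "light off" in text:
        return "ACTUATE:LIGHT:OFF"
    return "NOOP"

class SecondDevice:
    """Second device: actuator activated in response to the received command."""
    def __init__(self) -> None:
        self.light_on = False

    def apply(self, command: str) -> None:
        if command == "ACTUATE:LIGHT:ON":
            self.light_on = True
        elif command == "ACTUATE:LIGHT:OFF":
            self.light_on = False

# The first device captures voice and sends it to the server; the server's
# command is then delivered to the second device over the wireless network.
device = SecondDevice()
device.apply(server_process(b"turn the light on please"))
```

A real deployment would replace `recognize` with an actual voice-recognition algorithm (claim 3 contemplates identifying the voice of a specific person) and carry the messages over one of the WPAN, WLAN, or cellular transports enumerated in claims 14-18.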
8,641 | 8,641 | 15,465,328 | 2,481 | Methods and apparatus for coding video information having a plurality of video samples are disclosed. Blocks of video data are coded by an encoder based upon a quantization parameter (QP) for each block. The QP used for each block may be limited by a maximum QP value. A buffer fullness of a buffer unit may be determined that is indicative of a ratio between a number of bits currently occupied in the buffer unit and a current capacity of the buffer unit. The encoder may determine an adjustment value for the maximum QP based upon the determined buffer fullness. By dynamically adjusting the maximum QP for coding blocks of video data, distortion from quantization may be reduced while preventing the buffer unit from overflowing or emptying. | 1. An apparatus for coding video information, comprising:
a buffer unit configured to store coded video information; a hardware processor configured to:
determine a buffer fullness of the buffer unit, the buffer fullness being indicative of a ratio between a number of bits currently occupied in the buffer unit and a current capacity of the buffer unit;
determine an initial maximum quantization parameter (QP) value;
determine an adjustment value based at least in part upon the determined buffer fullness of the buffer unit;
adjust the initial maximum QP value using the determined adjustment value, wherein the adjusted maximum QP value specifies a maximum QP value that may be used to code the current block of the video information; and
code the current block of video information based on a QP value to form a video data bitstream for display or transmission, in accordance with a restriction that the QP value may not exceed the adjusted maximum QP value. 2. The apparatus of claim 1, wherein the hardware processor is further configured to set the adjustment value to a default adjustment value when the buffer fullness of the buffer unit is at a level between a higher first fullness threshold and a lower second fullness threshold, wherein the default adjustment value is greater than zero. 3. The apparatus of claim 2, wherein the hardware processor is further configured to set the adjustment value to a value higher than the default adjustment value when the buffer fullness of the buffer unit is lower than the second fullness threshold. 4. The apparatus of claim 2, wherein the hardware processor is further configured to set the adjustment value to a value lower than the default adjustment value when the buffer fullness of the buffer unit is higher than the first fullness threshold. 5. The apparatus of claim 1, wherein the hardware processor is further configured to determine a complexity value derived based at least in part upon a number of bits spent on coding a previous block of video information, and wherein the adjustment value is further based at least in part upon the determined complexity value. 6. The apparatus of claim 1, wherein the QP value is further based at least in part upon the buffer fullness of the buffer unit. 7. The apparatus of claim 1, wherein the adjustment value is further based at least in part upon a bit depth of the video information to be coded. 8. The apparatus of claim 1, wherein the adjustment value is further based at least in part upon a compressed bitrate of the video information to be coded. 9. The apparatus of claim 1, wherein the buffer unit is further configured to output bits of coded video data to the video data bitstream at a fixed rate. 10. 
A method for coding video information, comprising:
determining a buffer fullness of a buffer unit configured to store coded video information, the buffer fullness being indicative of a ratio between a number of bits currently occupied in the buffer unit and a current capacity of the buffer unit; determining an initial maximum quantization parameter (QP) value; determining an adjustment value based at least in part upon the determined buffer fullness of the buffer unit; adjusting the initial maximum QP value using the determined adjustment value, wherein the adjusted maximum QP value specifies a maximum QP value that may be used to code the current block of the video information; and coding the current block of video information based on a QP value to form a video data bitstream for display or transmission, in accordance with a restriction that the QP value may not exceed the adjusted maximum QP value. 11. The method of claim 10, further comprising setting the adjustment value to a default adjustment value when the buffer fullness of the buffer unit is at a level between a higher first fullness threshold and a lower second fullness threshold, wherein the default adjustment value is greater than zero. 12. The method of claim 11, further comprising setting the adjustment value to a value higher than the default adjustment value when the buffer fullness of the buffer unit is lower than the second fullness threshold. 13. The method of claim 11, further comprising setting the adjustment value to a value lower than the default adjustment value when the buffer fullness of the buffer unit is higher than the first fullness threshold. 14. The method of claim 10, further comprising determining a complexity value derived based at least in part upon a number of bits spent on coding a previous block of video information, and wherein the adjustment value is further based at least in part upon the determined complexity value. 15. 
The method of claim 10, wherein the QP value is further based at least in part upon the buffer fullness of the buffer unit. 16. The method of claim 10, wherein the adjustment value is further based at least in part upon a bit depth of the video information to be coded. 17. The method of claim 10, wherein the adjustment value is further based at least in part upon a compressed bitrate of the video information to be coded. 18. The method of claim 10, wherein the buffer unit is further configured to output bits of coded video data to the video data bitstream at a fixed rate. 19. An apparatus for coding video information, comprising:
a buffer means for storing coded video information; means for determining a buffer fullness of the buffer means, the buffer fullness being indicative of a ratio between a number of bits currently occupied in the buffer means and a current capacity of the buffer means; means for determining an initial maximum quantization parameter (QP) value; means for determining an adjustment value based at least in part upon the determined buffer fullness of the buffer means; means for adjusting the initial maximum QP value using the determined adjustment value, wherein the adjusted maximum QP value specifies a maximum QP value that may be used to code the current block of the video information; and means for coding the current block of video information based on a QP value to form a video data bitstream for display or transmission, in accordance with a restriction that the QP value may not exceed the adjusted maximum QP value. 20. The apparatus of claim 19, wherein the means for determining the adjustment value is configured to set the adjustment value to a default adjustment value when the buffer fullness of the buffer means is at a level between a higher first fullness threshold and a lower second fullness threshold, wherein the default adjustment value is greater than zero. 21. The apparatus of claim 20, wherein the means for determining the adjustment value is further configured to set the adjustment value to a value higher than the default adjustment value when the buffer fullness of the buffer means is lower than the second fullness threshold. 22. The apparatus of claim 20, wherein the means for determining the adjustment value is further configured to set the adjustment value to a value lower than the default adjustment value when the buffer fullness of the buffer means is higher than the first fullness threshold. 23. 
The apparatus of claim 19, further comprising means for determining a complexity value derived based at least in part upon a number of bits spent on coding a previous block of video information, and wherein the adjustment value is further based at least in part upon the determined complexity value. 24. The apparatus of claim 19, wherein the QP value is further based at least in part upon the buffer fullness of the buffer unit. 25. The apparatus of claim 19, wherein the adjustment value is further based at least in part upon a bit depth of the video information to be coded. 26. The apparatus of claim 19, wherein the adjustment value is further based at least in part upon a compressed bitrate of the video information to be coded. 27. The apparatus of claim 19, wherein the buffer means is further configured to output bits of coded video data to the video data bitstream at a fixed rate. | Methods and apparatus for coding video information having a plurality of video samples are disclosed. Blocks of video data are coded by an encoder based upon a quantization parameter (QP) for each block. The QP used for each block may be limited by a maximum QP value. A buffer fullness of a buffer unit may be determined that is indicative of a ratio between a number of bits currently occupied in the buffer unit and a current capacity of the buffer unit. The encoder may determine an adjustment value for the maximum QP based upon the determined buffer fullness. By dynamically adjusting the maximum QP for coding blocks of video data, distortion from quantization may be reduced while preventing the buffer unit from overflowing or emptying. 1. An apparatus for coding video information, comprising:
a buffer unit configured to store coded video information; a hardware processor configured to:
determine a buffer fullness of the buffer unit, the buffer fullness being indicative of a ratio between a number of bits currently occupied in the buffer unit and a current capacity of the buffer unit;
determine an initial maximum quantization parameter (QP) value;
determine an adjustment value based at least in part upon the determined buffer fullness of the buffer unit;
adjust the initial maximum QP value using the determined adjustment value, wherein the adjusted maximum QP value specifies a maximum QP value that may be used to code the current block of the video information; and
code the current block of video information based on a QP value to form a video data bitstream for display or transmission, in accordance with a restriction that the QP value may not exceed the adjusted maximum QP value. 2. The apparatus of claim 1, wherein the hardware processor is further configured to set the adjustment value to a default adjustment value when the buffer fullness of the buffer unit is at a level between a higher first fullness threshold and a lower second fullness threshold, wherein the default adjustment value is greater than zero. 3. The apparatus of claim 2, wherein the hardware processor is further configured to set the adjustment value to a value higher than the default adjustment value when the buffer fullness of the buffer unit is lower than the second fullness threshold. 4. The apparatus of claim 2, wherein the hardware processor is further configured to set the adjustment value to a value lower than the default adjustment value when the buffer fullness of the buffer unit is higher than the first fullness threshold. 5. The apparatus of claim 1, wherein the hardware processor is further configured to determine a complexity value derived based at least in part upon a number of bits spent on coding a previous block of video information, and wherein the adjustment value is further based at least in part upon the determined complexity value. 6. The apparatus of claim 1, wherein the QP value is further based at least in part upon the buffer fullness of the buffer unit. 7. The apparatus of claim 1, wherein the adjustment value is further based at least in part upon a bit depth of the video information to be coded. 8. The apparatus of claim 1, wherein the adjustment value is further based at least in part upon a compressed bitrate of the video information to be coded. 9. The apparatus of claim 1, wherein the buffer unit is further configured to output bits of coded video data to the video data bitstream at a fixed rate. 10. 
A method for coding video information, comprising:
determining a buffer fullness of a buffer unit configured to store coded video information, the buffer fullness being indicative of a ratio between a number of bits currently occupied in the buffer unit and a current capacity of the buffer unit; determining an initial maximum quantization parameter (QP) value; determining an adjustment value based at least in part upon the determined buffer fullness of the buffer unit; adjusting the initial maximum QP value using the determined adjustment value, wherein the adjusted maximum QP value specifies a maximum QP value that may be used to code the current block of the video information; and coding the current block of video information based on a QP value to form a video data bitstream for display or transmission, in accordance with a restriction that the QP value may not exceed the adjusted maximum QP value. 11. The method of claim 10, further comprising setting the adjustment value to a default adjustment value when the buffer fullness of the buffer unit is at a level between a higher first fullness threshold and a lower second fullness threshold, wherein the default adjustment value is greater than zero. 12. The method of claim 11, further comprising setting the adjustment value to a value higher than the default adjustment value when the buffer fullness of the buffer unit is lower than the second fullness threshold. 13. The method of claim 11, further comprising setting the adjustment value to a value lower than the default adjustment value when the buffer fullness of the buffer unit is higher than the first fullness threshold. 14. The method of claim 10, further comprising determining a complexity value derived based at least in part upon a number of bits spent on coding a previous block of video information, and wherein the adjustment value is further based at least in part upon the determined complexity value. 15. 
The method of claim 10, wherein the QP value is further based at least in part upon the buffer fullness of the buffer unit. 16. The method of claim 10, wherein the adjustment value is further based at least in part upon a bit depth of the video information to be coded. 17. The method of claim 10, wherein the adjustment value is further based at least in part upon a compressed bitrate of the video information to be coded. 18. The method of claim 10, wherein the buffer unit is further configured to output bits of coded video data to the video data bitstream at a fixed rate. 19. An apparatus for coding video information, comprising:
a buffer means for storing coded video information; means for determining a buffer fullness of the buffer means, the buffer fullness being indicative of a ratio between a number of bits currently occupied in the buffer means and a current capacity of the buffer means; means for determining an initial maximum quantization parameter (QP) value; means for determining an adjustment value based at least in part upon the determined buffer fullness of the buffer means; means for adjusting the initial maximum QP value using the determined adjustment value, wherein the adjusted maximum QP value specifies a maximum QP value that may be used to code the current block of the video information; and means for coding the current block of video information based on a QP value to form a video data bitstream for display or transmission, in accordance with a restriction that the QP value may not exceed the adjusted maximum QP value. 20. The apparatus of claim 19, wherein the means for determining the adjustment value is configured to set the adjustment value to a default adjustment value when the buffer fullness of the buffer means is at a level between a higher first fullness threshold and a lower second fullness threshold, wherein the default adjustment value is greater than zero. 21. The apparatus of claim 20, wherein the means for determining the adjustment value is further configured to set the adjustment value to a value higher than the default adjustment value when the buffer fullness of the buffer means is lower than the second fullness threshold. 22. The apparatus of claim 20, wherein the means for determining the adjustment value is further configured to set the adjustment value to a value lower than the default adjustment value when the buffer fullness of the buffer means is higher than the first fullness threshold. 23. 
The apparatus of claim 19, further comprising means for determining a complexity value derived based at least in part upon a number of bits spent on coding a previous block of video information, and wherein the adjustment value is further based at least in part upon the determined complexity value. 24. The apparatus of claim 19, wherein the QP value is further based at least in part upon the buffer fullness of the buffer means. 25. The apparatus of claim 19, wherein the adjustment value is further based at least in part upon a bit depth of the video information to be coded. 26. The apparatus of claim 19, wherein the adjustment value is further based at least in part upon a compressed bitrate of the video information to be coded. 27. The apparatus of claim 19, wherein the buffer means is further configured to output bits of coded video data to the video data bitstream at a fixed rate. | 2,400
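The rate-control logic recited in claims 10-13 above (a default adjustment between two fullness thresholds, a larger adjustment below the lower threshold, a smaller one above the upper threshold) can be sketched in code. This is a minimal illustration, not the patented implementation: the threshold values, the default adjustment, and the convention of subtracting the adjustment from the initial maximum QP are all assumptions not fixed by the claims.

```python
def adjusted_max_qp(bits_occupied, buffer_capacity, initial_max_qp,
                    high_threshold=0.85, low_threshold=0.15,
                    default_adjustment=2):
    """Adjust the QP ceiling based on buffer fullness (claims 10-13).

    Fullness is the ratio of occupied bits to current capacity. All
    numeric defaults, and the choice of subtracting the adjustment,
    are illustrative assumptions.
    """
    fullness = bits_occupied / buffer_capacity
    if fullness < low_threshold:
        # Buffer nearly empty: adjustment higher than default (claim 12),
        # lowering the QP ceiling so the coder spends more bits.
        adjustment = default_adjustment + 2
    elif fullness > high_threshold:
        # Buffer nearly full: adjustment lower than default (claim 13),
        # leaving the QP ceiling high so the coder can save bits.
        adjustment = default_adjustment - 1
    else:
        # Between the two thresholds: default adjustment > 0 (claim 11).
        adjustment = default_adjustment
    return initial_max_qp - adjustment


def clamp_qp(candidate_qp, max_qp):
    """Restrict any coder-chosen QP so it may not exceed the ceiling."""
    return min(candidate_qp, max_qp)
```

A complexity term derived from the bits spent on the previous block (claims 14 and 23) could be folded into the adjustment as well; it is omitted here to keep the sketch minimal.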
8,642 | 8,642 | 15,846,504 | 2,463 | A user equipment unit ( 30 ) and a base station node ( 28 ) which is configured for operation with a synchronous HARQ protocol and with capability of sending data on an E-DCH channel either (1) in a nominal mode in single transmission time intervals of a predetermined length, or (2) in an extended mode in a pseudo transmission time interval. The pseudo transmission time interval comprises a first transmission time interval in which the data is transmitted and a second transmission time interval in which the data is re-transmitted. The second transmission time interval is consecutive to the first transmission time interval, and the first transmission time interval and the second transmission time interval are each of the (same) predetermined length. | 1. A method of autonomous transmission for extended coverage performed by a user equipment (UE), the method comprising:
establishing a communication session with a base station, the communication session configured to conform with a synchronous HARQ protocol in which re-transmissions occur a fixed number of transmission time intervals after a previous transmission or re-transmission; transmitting data in a first transmission time interval; and re-transmitting the same data in a second transmission time interval; wherein the first transmission time interval and the second transmission time interval occur sequentially within two transmission time intervals of the synchronous HARQ protocol. 2. The method of claim 1 further comprising signaling to the base station that the data of the second transmission time interval is to be combined with the data of the first transmission time interval. 3. The method of claim 1 further comprising determining whether the data is to be re-transmitted, wherein the same data is re-transmitted in the second transmission time interval upon determining that the data is to be re-transmitted. 4. The method of claim 1 wherein the communication session comprises an E-DCH channel. 5. The method of claim 1 wherein the first and second transmission time intervals are part of a group of HARQ processes, the group of HARQ processes being a subset of the full set of HARQ processes. 6. A user equipment (UE) configured for autonomous transmission for extended coverage, the UE comprising:
a processor configured to establish a communication session with a base station, the communication session configured to conform with a synchronous HARQ protocol in which re-transmissions occur a fixed number of transmission time intervals after a previous transmission or re-transmission; and a transceiver configured to: transmit data in a first transmission time interval; and re-transmit the same data in a second transmission time interval; wherein the first transmission time interval and the second transmission time interval occur sequentially within two transmission time intervals of the synchronous HARQ protocol. 7. The UE of claim 6 wherein the transceiver is further configured to signal to the base station that the data of the second transmission time interval is to be combined with the data of the first transmission time interval. 8. The UE of claim 6 wherein the processor is further configured to determine whether the data is to be re-transmitted, wherein the same data is re-transmitted in the second transmission time interval upon determining that the data is to be re-transmitted. 9. The UE of claim 6 wherein the communication session comprises an E-DCH channel. 10. The UE of claim 6 wherein the first and second transmission time intervals are part of a group of HARQ processes, the group of HARQ processes being a subset of the full set of HARQ processes. | A user equipment unit ( 30 ) and a base station node ( 28 ) which is configured for operation with a synchronous HARQ protocol and with capability of sending data on an E-DCH channel either (1) in a nominal mode in single transmission time intervals of a predetermined length, or (2) in an extended mode in a pseudo transmission time interval. The pseudo transmission time interval comprises a first transmission time interval in which the data is transmitted and a second transmission time interval in which the data is re-transmitted. 
The second transmission time interval is consecutive to the first transmission time interval, and the first transmission time interval and the second transmission time interval are each of the (same) predetermined length. 1. A method of autonomous transmission for extended coverage performed by a user equipment (UE), the method comprising:
establishing a communication session with a base station, the communication session configured to conform with a synchronous HARQ protocol in which re-transmissions occur a fixed number of transmission time intervals after a previous transmission or re-transmission; transmitting data in a first transmission time interval; and re-transmitting the same data in a second transmission time interval; wherein the first transmission time interval and the second transmission time interval occur sequentially within two transmission time intervals of the synchronous HARQ protocol. 2. The method of claim 1 further comprising signaling to the base station that the data of the second transmission time interval is to be combined with the data of the first transmission time interval. 3. The method of claim 1 further comprising determining whether the data is to be re-transmitted, wherein the same data is re-transmitted in the second transmission time interval upon determining that the data is to be re-transmitted. 4. The method of claim 1 wherein the communication session comprises an E-DCH channel. 5. The method of claim 1 wherein the first and second transmission time intervals are part of a group of HARQ processes, the group of HARQ processes being a subset of the full set of HARQ processes. 6. A user equipment (UE) configured for autonomous transmission for extended coverage, the UE comprising:
a processor configured to establish a communication session with a base station, the communication session configured to conform with a synchronous HARQ protocol in which re-transmissions occur a fixed number of transmission time intervals after a previous transmission or re-transmission; and a transceiver configured to: transmit data in a first transmission time interval; and re-transmit the same data in a second transmission time interval; wherein the first transmission time interval and the second transmission time interval occur sequentially within two transmission time intervals of the synchronous HARQ protocol. 7. The UE of claim 6 wherein the transceiver is further configured to signal to the base station that the data of the second transmission time interval is to be combined with the data of the first transmission time interval. 8. The UE of claim 6 wherein the processor is further configured to determine whether the data is to be re-transmitted, wherein the same data is re-transmitted in the second transmission time interval upon determining that the data is to be re-transmitted. 9. The UE of claim 6 wherein the communication session comprises an E-DCH channel. 10. The UE of claim 6 wherein the first and second transmission time intervals are part of a group of HARQ processes, the group of HARQ processes being a subset of the full set of HARQ processes. | 2,400
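The extended-mode behavior claimed above, i.e. the same data sent in two consecutive TTIs of a synchronous HARQ protocol (a "pseudo transmission time interval"), can be sketched as a toy scheduler. The tuple format and TTI numbering are assumptions made for illustration, not part of the claims:

```python
def schedule_pseudo_tti(data_blocks, start_tti=0, extended_mode=True):
    """Lay out transmissions over TTIs.

    In extended mode each block occupies a pseudo transmission time
    interval: two consecutive TTIs of the same predetermined length,
    the first carrying the data and the second autonomously
    re-carrying the same data (claim 1). In nominal mode each block
    takes a single TTI. The (tti, block, is_retransmission) tuples
    are an illustrative representation only.
    """
    schedule = []
    tti = start_tti
    for block in data_blocks:
        schedule.append((tti, block, False))         # first transmission
        if extended_mode:
            # Autonomous re-transmission in the consecutive TTI; the
            # receiver soft-combines it with the first (claims 2 and 7).
            schedule.append((tti + 1, block, True))
            tti += 2
        else:
            tti += 1
    return schedule
```

For example, `schedule_pseudo_tti(["A", "B"])` places block A in TTIs 0 and 1 and block B in TTIs 2 and 3, so both copies of a block fall within two TTIs of the synchronous HARQ timing, as the claims require.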
8,643 | 8,643 | 14,874,337 | 2,492 | A method is provided, including establishing a plurality of context profiles for a user, at least one context profile is associated with: (i) subject areas pertinent to the at least one context profile; (ii) permissions identifying respective third parties with which personal information can be shared when the at least one context profile is active; (iii) permissions identifying what personal information can be shared with respective third parties when the at least one context profile is active; (iv) permissions identifying respective third parties that are permitted to contact the user when the at least one context profile is active; and (v) permissions identifying how respective third parties may contact the user when the at least one context profile is active; when the at least one context profile is active, operating in one of two or more modes (e.g., a regular mode or a discovery mode). | 1-20. (canceled) 21. A method for providing access to personal information of a user, comprising:
at a server system including one or more electronic devices with one or more processors and memory storing one or more programs for execution by the one or more processors:
establishing a plurality of context profiles for a user;
detecting an event associated with a request for personal information of the user;
generating a request for consent to share the personal information of the user with a third party;
sending, to the user, the request for consent to share the personal information of the user with the third party;
receiving, from the user, consent to share at least a subset of the requested personal information with the third party when at least a first context profile, of the plurality of context profiles, is active;
determining an active context profile for the user based on one or more signals indicative of the user's context;
determining whether the active context profile matches the first context profile;
in accordance with a determination that the active context profile matches the first context profile, facilitating sharing of the personal information of the user with the third party; and
in accordance with a determination that the active context profile does not match the first context profile, not facilitating sharing of the personal information of the user with the third party. 22. The method of claim 21, wherein detecting the event associated with the request for personal information of the user comprises receiving a request for the personal information of the user. 23. The method of claim 21, wherein the active context profile for the user is determined automatically without user input. 24. The method of claim 21, further comprising:
in accordance with a determination that the active context profile matches the first context profile, allowing the third party to contact the user; and in accordance with a determination that the active context profile does not match the first context profile, not allowing the third party to contact the user. 25. The method of claim 21, further comprising:
receiving a communication from the third party; in accordance with a determination that the active context profile matches the first context profile, forwarding the communication to the user; and in accordance with a determination that the active context profile does not match the first context profile, not forwarding the communication to the user. 26. The method of claim 21, further comprising:
in accordance with a determination that the active context profile matches the first context profile, allowing the third party to communicate directly with the user; and in accordance with a determination that the active context profile does not match the first context profile, not allowing the third party to communicate directly with the user. 27. The method of claim 21, wherein the plurality of context profiles is stored on a first electronic device of the one or more electronic devices, wherein the personal information of the user is stored on a second electronic device of the one or more electronic devices, the second electronic device is distinct from the first electronic device. 28. A system including one or more electronic devices, comprising:
one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
establishing a plurality of context profiles for a user;
detecting an event associated with a request for personal information of the user;
generating a request for consent to share the personal information of the user with a third party;
sending, to the user, the request for consent to share the personal information of the user with the third party;
receiving, from the user, consent to share at least a subset of the requested personal information with the third party when at least a first context profile, of the plurality of context profiles, is active;
determining an active context profile for the user based on one or more signals indicative of the user's context;
determining whether the active context profile matches the first context profile;
in accordance with a determination that the active context profile matches the first context profile, facilitating sharing of the personal information of the user with the third party; and
in accordance with a determination that the active context profile does not match the first context profile, not facilitating sharing of the personal information of the user with the third party. 29. The system of claim 28, wherein detecting the event associated with the request for personal information of the user comprises receiving a request for the personal information of the user. 30. The system of claim 28, wherein the active context profile for the user is determined automatically without user input. 31. The system of claim 28, wherein the one or more programs further include instructions for:
in accordance with a determination that the active context profile matches the first context profile, allowing the third party to contact the user; and in accordance with a determination that the active context profile does not match the first context profile, not allowing the third party to contact the user. 32. The system of claim 28, wherein the one or more programs further include instructions for:
receiving a communication from the third party; in accordance with a determination that the active context profile matches the first context profile, forwarding the communication to the user; and in accordance with a determination that the active context profile does not match the first context profile, not forwarding the communication to the user. 33. The system of claim 28, wherein the one or more programs further include instructions for:
in accordance with a determination that the active context profile matches the first context profile, allowing the third party to communicate directly with the user; and in accordance with a determination that the active context profile does not match the first context profile, not allowing the third party to communicate directly with the user. 34. The system of claim 28, wherein the plurality of context profiles is stored on a first electronic device of the one or more electronic devices, wherein the personal information of the user is stored on a second electronic device of the one or more electronic devices, the second electronic device is distinct from the first electronic device. 35. A non-transitory computer readable storage medium storing one or more programs comprising instructions, which when executed by one or more electronic devices, cause the one or more devices to:
establish a plurality of context profiles for a user; detect an event associated with a request for personal information of the user; generate a request for consent to share the personal information of the user with a third party; send, to the user, the request for consent to share the personal information of the user with the third party; receive, from the user, consent to share at least a subset of the requested personal information with the third party when at least a first context profile, of the plurality of context profiles, is active; determine an active context profile for the user based on one or more signals indicative of the user's context; determine whether the active context profile matches the first context profile; in accordance with a determination that the active context profile matches the first context profile, facilitate sharing of the personal information of the user with the third party; and in accordance with a determination that the active context profile does not match the first context profile, not facilitate sharing of the personal information of the user with the third party. 36. The computer readable storage medium of claim 35, wherein detecting the event associated with the request for personal information of the user comprises receiving a request for the personal information of the user. 37. The computer readable storage medium of claim 35, wherein the active context profile for the user is determined automatically without user input. 38. The computer readable storage medium of claim 35, further comprising instructions to:
in accordance with a determination that the active context profile matches the first context profile, allow the third party to contact the user; and in accordance with a determination that the active context profile does not match the first context profile, not allow the third party to contact the user. 39. The computer readable storage medium of claim 35, further comprising instructions to:
receive a communication from the third party; in accordance with a determination that the active context profile matches the first context profile, forward the communication to the user; and in accordance with a determination that the active context profile does not match the first context profile, not forward the communication to the user. 40. The computer readable storage medium of claim 35, wherein the plurality of context profiles is stored on a first electronic device of the one or more electronic devices, wherein the personal information of the user is stored on a second electronic device of the one or more electronic devices, the second electronic device is distinct from the first electronic device. | A method is provided, including establishing a plurality of context profiles for a user, at least one context profile is associated with: (i) subject areas pertinent to the at least one context profile; (ii) permissions identifying respective third parties with which personal information can be shared when the at least one context profile is active; (iii) permissions identifying what personal information can be shared with respective third parties when the at least one context profile is active; (iv) permissions identifying respective third parties that are permitted to contact the user when the at least one context profile is active; and (v) permissions identifying how respective third parties may contact the user when the at least one context profile is active; when the at least one context profile is active, operating in one of two or more modes (e.g., a regular mode or a discovery mode). 1-20. (canceled) 21. A method for providing access to personal information of a user, comprising:
at a server system including one or more electronic devices with one or more processors and memory storing one or more programs for execution by the one or more processors:
establishing a plurality of context profiles for a user;
detecting an event associated with a request for personal information of the user;
generating a request for consent to share the personal information of the user with a third party;
sending, to the user, the request for consent to share the personal information of the user with the third party;
receiving, from the user, consent to share at least a subset of the requested personal information with the third party when at least a first context profile, of the plurality of context profiles, is active;
determining an active context profile for the user based on one or more signals indicative of the user's context;
determining whether the active context profile matches the first context profile;
in accordance with a determination that the active context profile matches the first context profile, facilitating sharing of the personal information of the user with the third party; and
in accordance with a determination that the active context profile does not match the first context profile, not facilitating sharing of the personal information of the user with the third party. 22. The method of claim 21, wherein detecting the event associated with the request for personal information of the user comprises receiving a request for the personal information of the user. 23. The method of claim 21, wherein the active context profile for the user is determined automatically without user input. 24. The method of claim 21, further comprising:
in accordance with a determination that the active context profile matches the first context profile, allowing the third party to contact the user; and in accordance with a determination that the active context profile does not match the first context profile, not allowing the third party to contact the user. 25. The method of claim 21, further comprising:
receiving a communication from the third party; in accordance with a determination that the active context profile matches the first context profile, forwarding the communication to the user; and in accordance with a determination that the active context profile does not match the first context profile, not forwarding the communication to the user. 26. The method of claim 21, further comprising:
in accordance with a determination that the active context profile matches the first context profile, allowing the third party to communicate directly with the user; and in accordance with a determination that the active context profile does not match the first context profile, not allowing the third party to communicate directly with the user. 27. The method of claim 21, wherein the plurality of context profiles is stored on a first electronic device of the one or more electronic devices, wherein the personal information of the user is stored on a second electronic device of the one or more electronic devices, the second electronic device is distinct from the first electronic device. 28. A system including one or more electronic devices, comprising:
one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for:
establishing a plurality of context profiles for a user;
detecting an event associated with a request for personal information of the user;
generating a request for consent to share the personal information of the user with a third party;
sending, to the user, the request for consent to share the personal information of the user with the third party;
receiving, from the user, consent to share at least a subset of the requested personal information with the third party when at least a first context profile, of the plurality of context profiles, is active;
determining an active context profile for the user based on one or more signals indicative of the user's context;
determining whether the active context profile matches the first context profile;
in accordance with a determination that the active context profile matches the first context profile, facilitating sharing of the personal information of the user with the third party; and
in accordance with a determination that the active context profile does not match the first context profile, not facilitating sharing of the personal information of the user with the third party. 29. The system of claim 28, wherein detecting the event associated with the request for personal information of the user comprises receiving a request for the personal information of the user. 30. The system of claim 28, wherein the active context profile for the user is determined automatically without user input. 31. The system of claim 28, wherein the one or more programs further include instructions for:
in accordance with a determination that the active context profile matches the first context profile, allowing the third party to contact the user; and in accordance with a determination that the active context profile does not match the first context profile, not allowing the third party to contact the user. 32. The system of claim 28, wherein the one or more programs further include instructions for:
receiving a communication from the third party; in accordance with a determination that the active context profile matches the first context profile, forwarding the communication to the user; and in accordance with a determination that the active context profile does not match the first context profile, not forwarding the communication to the user. 33. The system of claim 28, wherein the one or more programs further include instructions for:
in accordance with a determination that the active context profile matches the first context profile, allowing the third party to communicate directly with the user; and in accordance with a determination that the active context profile does not match the first context profile, not allowing the third party to communicate directly with the user. 34. The system of claim 28, wherein the plurality of context profiles is stored on a first electronic device of the one or more electronic devices, wherein the personal information of the user is stored on a second electronic device of the one or more electronic devices, the second electronic device is distinct from the first electronic device. 35. A non-transitory computer readable storage medium storing one or more programs comprising instructions, which when executed by one or more electronic devices, cause the one or more devices to:
establish a plurality of context profiles for a user; detect an event associated with a request for personal information of the user; generate a request for consent to share the personal information of the user with a third party; send, to the user, the request for consent to share the personal information of the user with the third party; receive, from the user, consent to share at least a subset of the requested personal information with the third party when at least a first context profile, of the plurality of context profiles, is active; determine an active context profile for the user based on one or more signals indicative of the user's context; determine whether the active context profile matches the first context profile; in accordance with a determination that the active context profile matches the first context profile, facilitate sharing of the personal information of the user with the third party; and in accordance with a determination that the active context profile does not match the first context profile, not facilitate sharing of the personal information of the user with the third party. 36. The computer readable storage medium of claim 35, wherein detecting the event associated with the request for personal information of the user comprises receiving a request for the personal information of the user. 37. The computer readable storage medium of claim 35, wherein the active context profile for the user is determined automatically without user input. 38. The computer readable storage medium of claim 35, further comprising instructions to:
in accordance with a determination that the active context profile matches the first context profile, allow the third party to contact the user; and in accordance with a determination that the active context profile does not match the first context profile, not allow the third party to contact the user. 39. The computer readable storage medium of claim 35, further comprising instructions to:
receive a communication from the third party; in accordance with a determination that the active context profile matches the first context profile, forward the communication to the user; and in accordance with a determination that the active context profile does not match the first context profile, not forward the communication to the user. 40. The computer readable storage medium of claim 35, wherein the plurality of context profiles is stored on a first electronic device of the one or more electronic devices, wherein the personal information of the user is stored on a second electronic device of the one or more electronic devices, the second electronic device is distinct from the first electronic device. | 2,400 |
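The decision flow of claim 21 — determine the active context profile from signals indicative of the user's context, compare it with the profile under which the user consented, and facilitate sharing only on a match — might be sketched as below. The profile names, rule predicates, and fallback profile are hypothetical illustrations, not content of the claims:

```python
def determine_active_profile(signals, profile_rules, default="default"):
    """Pick the first profile whose predicate matches the context signals.

    `profile_rules` (an ordered mapping of profile name -> predicate)
    and the fallback name are illustrative assumptions; the claims only
    require that the active profile be determined from context signals,
    here without user input.
    """
    for name, rule in profile_rules.items():
        if rule(signals):
            return name
    return default


def maybe_share(signals, profile_rules, consented_profile, personal_info):
    """Facilitate sharing only when the active profile matches the
    profile under which the user consented (claim 21); otherwise
    share nothing with the third party."""
    active = determine_active_profile(signals, profile_rules)
    return personal_info if active == consented_profile else None


# Hypothetical profile rules for illustration:
rules = {
    "work":    lambda s: s.get("location") == "office",
    "leisure": lambda s: s.get("location") == "home",
}
```

The same match/no-match gate would govern whether the third party may contact or communicate directly with the user (claims 24-26); only the shared payload differs.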
8,644 | 8,644 | 15,017,636 | 2,433 | Log based analysis systems and methods for protecting computers and networks from malicious communications and malware attacks by analyzing log data obtained from client networks having network entities representing business units or customers. The system may further comprise a plurality of client asset machines, each operable to execute a security product associated with a security product vendor and log associated information of the network entities into at least one log file. The log files may be uploaded onto a log-analytics detection platform for analysis using learning algorithms operable to generate a risk factor attribute for at least one entity. | 1. A log-analytic system for identifying outbound communications to detect at least one security threat in at least one client network, said system comprising:
at least one log-analytic detection platform operable to receive a plurality of log files from said at least one client network via a communication network; at least one asset associated with said at least one client network and operable to communicate with at least one host via said communication network; and at least one network entity associated with said at least one client network operable to enable outbound communication and further log assessment attributes associated with at least one channel into at least one log file of said plurality of log files; wherein said at least one channel connects said at least one asset with at least one host and said log-analytic detection system is operable to identify said at least one channel and generate a risk factor for at least one entity associated with entities of said at least one channel. 2. The log-analytic detection system of claim 1, wherein said at least one entity associated with entities of said at least one channel is selected from a group consisting of: a channel, an asset, a host and combinations thereof. 3. The log-analytic detection system of claim 1, wherein said at least one log-analytic detection platform is operable to collect data pertaining to said at least one client network, to normalize said data and to store the normalized data into at least one entity record of a data repository. 4. A method for detecting security threats associated with at least one client network, the method for use in a system, said system comprising:
at least one network entity associated with said at least one client network and operable to enable outbound communication via a communication network; at least one asset operable to communicate with one of a plurality of hosts via said communication network; and at least one log-analytic detection platform operable to analyze a plurality of log files associated with a plurality of channels, each said plurality of channels connecting an asset with a host, and further operable to determine a risk factor for at least one entity, each of said plurality of channels being characterized by a channel identification pair comprising said asset and said host, said method for operating said at least one log-analytics detection platform in an improved manner, the method comprising:
obtaining, via said communication network, said plurality of log files from said at least one client network, each of said plurality of log files comprising at least one log record associated with at least one channel;
extracting a channel feature set for each of said plurality of channels from said plurality of log files, said channel feature set comprises data pertaining to at least one associated entity;
aggregating said channel associated features for each of said plurality of channels into at least one data repository; and
generating said risk factor for said at least one entity associated with entities of said plurality of channels, said risk factor characterized by an entity score. 5. The method of claim 4, wherein the step of obtaining further comprises:
normalizing each of said plurality of log files by mapping fields associated with said at least one log record from a third-party format into a standard format. 6. The method of claim 4, wherein the step of extracting, comprises:
matching at least one log record associated with at least one of said plurality of channels; grouping said at least one log record into a set of groups of channel associated records for at least one of said plurality of channels, each group of said set is associated with one matched channel; extracting said channel feature set from the group of channel associated records associated with each of said plurality of channels and identified by said channel identification pair, wherein said channel feature set being characterized by at least one of: data pertaining to communication behavior, data pertaining to host domain and data pertaining to host IP; and extracting, for each channel, asset associated features and host associated features and integrating into said channel feature set. 7. The method of claim 4, wherein the step of aggregating, comprises:
retrieving, from said at least one data repository, a stored channel and an associated stored channel feature set identified by said channel identification pair; joining the channel feature set with the stored channel feature set matched by said entity identification pair; computing features for at least one entity associated with the stored channel; and storing the joined channel feature set into said at least one data repository. 8. The method of claim 7, wherein the step of computing further comprises:
grouping a set of channels matched by the associated host; and computing the features of the associated host by joining the feature associated with each channel which is associated with the host. 9. The method of claim 4, wherein the step of generating, comprises:
using an entity scoring model, said entity scoring model is operable to provide said entity score for said at least one entity; classifying said at least one entity to determine said risk factor according to said entity score; and storing pertaining data of said risk factor into said at least one data repository; wherein the entity score expresses the likelihood that said at least one entity is associated with a command and control (C&C) host communication. 10. The method of claim 9, wherein said at least one log-analytic detection platform is operable to collect a plurality of classified entities and execute a supervised machine learning algorithm to determine said entity scoring model,
wherein said plurality of classified entities are selected from a group consisting of a channel, an asset, a host and combinations thereof. 11. The method of claim 9, further comprising validating said risk factor associated with said at least one entity. 12. The method of claim 9, wherein the step of generating further comprises:
creating an output list of potentially compromised client assets, if said risk factor indicates that said at least one entity is malicious, said output list comprising each of said plurality of assets communicating with said at least one entity. 13. The method of claim 4, further comprising:
creating an output incidents report comprising data pertaining to the risk factor associated with each of said plurality of channels related entities. 14. The method of claim 13, wherein said output incidents report is configured to be transmitted via said communication network to said at least one client network. 15. The method of claim 4, further comprising the step of:
creating an alert associated with a detectable security incident associated with at least one entity, said alert is configured to be transmitted via said communication network. 16. A method for detecting security threats associated with at least one client network, the method for use in a system, said system comprising:
at least one network entity associated with said at least one client network and operable to enable outbound communication via a communication network; at least one asset associated with said at least one client network and operable to communicate with at least one of a plurality of hosts via said communication network; and at least one log-analytic detection platform operable to analyze a plurality of log files and further determine a risk factor associated with at least one super-channel, said at least one super-channel is characterized by a super-channel feature set, said at least one super-channel comprises:
a set of channels, each said channel connecting an asset with a host, wherein said at least one host associated with a host-group, and wherein
each said channel being characterized by a characteristics vector and a channel identification pair,
said method for operating said at least one log-analytics detection platform in an improved manner, the method comprising:
obtaining said plurality of log files from said at least one client network, each of said plurality of log files comprising a plurality of communication records,
identifying said at least one super-channel, wherein the set of channels associated with said at least one super-channel are determined by a shared similarity;
extracting the super-channel feature set for said at least one super-channel;
aggregating the super-channel feature set for said at least one super-channel into at least one data repository; and
generating said risk factor for said at least one entity associated with entities of said at least one super-channel, said risk factor characterized by an entity score. 17. The method of claim 16, wherein the step of identifying, comprises:
identifying a set of channels having the same asset and a shared similarity into a super-channel; setting the asset of the super-channel to be the asset of each channel having said common characteristics and setting the host-group of the super-channel to include the hosts of the associated channels; and creating a new super-channel for each channel that is not grouped, where the associated host-group comprises the host of the associated channel, wherein said shared similarity is based on identity or similarity in certain characteristics or based on similarity between a combination of characteristics of the associated characteristics vector. 18. The method of claim 16, wherein the step of extracting, comprises:
extracting a set of attributes representing the associated super-channel feature set, said super-channel feature set characterized by at least one of: an identified similarity characteristics determined by said shared similarity associated with each channel of said set of channels, a communication behavior characteristics associated with at least one channel of said set of channels, a domain characteristics of at least one host of the associated host-group; and a host IP address characteristics of at least one host of the associated host-group. 19. The method of claim 16, wherein the step of aggregating, comprises:
retrieving, from said at least one data repository, a stored super-channel and an associated stored super-channel feature set matching at least one of said set of channels associated with said at least one super-channel, wherein said matching comprises an identical asset and a common host or a similarity in characteristics of the associated channels; joining the host-group associated with the at least one super-channel into the host-group associated with the stored super-channel; joining the super-channel feature set associated with the at least one super-channel into the stored super-channel feature set; computing features for at least one entity associated with the stored super-channel; and storing the joined super-channel feature set for the stored super-channel into said at least one data repository. 20. The method of claim 19, wherein the step of computing, comprises:
joining host-groups having at least one common host and updating the associated channels to relate to the joined host-group; grouping a set of super-channels associated with the same host-group; and computing the associated features of the host-group by joining the feature values associated with each super-channel associated with the host-group. 21. The method of claim 16, wherein said characteristics vector comprises data pertaining to at least one characteristic selected from a group consisting of: communication characteristics, domain name characteristics, IP address characteristics and combinations thereof,
wherein said communication characteristics comprises data associated with at least one of the path and query parts of a URL, destination IP address, sequence properties; and wherein said domain name characteristics and IP address characteristics comprises data associated with at least one of the domain and subdomain of the host, the domain registration details, IP addresses of the domain and the domain site. 22. The method of claim 16, wherein further comprising the step of merging associated host-groups based upon similarities, comprising:
determining the shared similarity of a first super-channel with a second super-channel; and merging the associated host-group of the second super-channel into the associated host-group of the first super-channel, if the characteristic vector of said first super-channel is analyzed as being similar to the characteristic vector of said second super-channel. 23. The method of claim 16, wherein said at least one log-analytic detection platform is operable to collect a plurality of classified entities and execute a supervised machine learning algorithm to determine said entity scoring model,
wherein, each of said plurality of classified entities is selected from a group consisting of a super-channel, a host-group, a channel, an asset, a host and combinations thereof. 24. The method of claim 16, wherein the step of generating, comprises:
using an entity scoring model, said entity scoring model is operable to provide said entity score for said at least one entity; classifying said at least one entity to determine said risk factor according to said entity score; and storing pertaining data of said risk factor in said at least one data repository; wherein the entity score expresses the likelihood that said at least one entity is associated with a command and control (C&C) host communication. 25. The method of claim 22, wherein step of merging, comprises:
determining the shared similarity of a first super-channel with a second super-channel such that the associated host-group comprises at least one C&C host; and merging the associated host-group of the second super-channel into the associated host-group of the first super-channel, if the first host-group comprises no C&C hosts and the second host-group comprises at least one C&C host, such that all associated hosts of the merged host-group are marked as C&C hosts. | Log based analysis systems and methods for protecting computers and networks from malicious communications and malware attacks by analyzing log data obtained from client networks having network entities representing business units or customers. The system may further comprise a plurality of client asset machines, each operable to execute a security product associated with a security product vendor and log associated information of the network entities into at least one log file. The log files may be uploaded onto a log-analytics detection platform for analysis using learning algorithms operable to generate a risk factor attribute for at least one entity.1. A log-analytic system for identifying outbound communications to detect at least one security threat in at least one client network, said system comprising:
at least one log-analytic detection platform operable to receive a plurality of log files from said at least one client network via a communication network; at least one asset associated with said at least one client network and operable to communicate with at least one host via said communication network; and at least one network entity associated with said at least one client network operable to enable outbound communication and further log assessment attributes associated with at least one channel into at least one log file of said plurality of log files; wherein said at least one channel connects said at least one asset with at least one host and said log-analytic detection system is operable to identify said at least one channel and generate a risk factor for at least one entity associated with entities of said at least one channel. 2. The log-analytic detection system of claim 1, wherein said at least one entity associated with entities of said at least one channel is selected from a group consisting of: a channel, an asset, a host and combinations thereof. 3. The log-analytic detection system of claim 1, wherein said at least one log-analytic detection platform is operable to collect data pertaining to said at least one client network, to normalize said data and to store the normalized data into at least one entity record of a data repository. 4. A method for detecting security threats associated with at least one client network, the method for use in a system, said system comprising:
at least one network entity associated with said at least one client network and operable to enable outbound communication via a communication network; at least one asset operable to communicate with one of a plurality of hosts via said communication network; and at least one log-analytic detection platform operable to analyze a plurality of log files associated with a plurality of channels, each said plurality of channels connecting an asset with a host, and further operable to determine a risk factor for at least one entity, each of said plurality of channels being characterized by a channel identification pair comprising said asset and said host, said method for operating said at least one log-analytics detection platform in an improved manner, the method comprising:
obtaining, via said communication network, said plurality of log files from said at least one client network, each of said plurality of log files comprising at least one log record associated with at least one channel;
extracting a channel feature set for each of said plurality of channels from said plurality of log files, said channel feature set comprises data pertaining to at least one associated entity;
aggregating said channel associated features for each of said plurality of channels into at least one data repository; and
generating said risk factor for said at least one entity associated with entities of said plurality of channels, said risk factor characterized by an entity score. 5. The method of claim 4, wherein the step of obtaining further comprises:
normalizing each of said plurality of log files by mapping fields associated with said at least one log record from a third-party format into a standard format. 6. The method of claim 4, wherein the step of extracting, comprises:
matching at least one log record associated with at least one of said plurality of channels; grouping said at least one log record into a set of groups of channel associated records for at least one of said plurality of channels, each group of said set is associated with one matched channel; extracting said channel feature set from the group of channel associated records associated with each of said plurality of channels and identified by said channel identification pair, wherein said channel feature set being characterized by at least one of: data pertaining to communication behavior, data pertaining to host domain and data pertaining to host IP; and extracting, for each channel, asset associated features and host associated features and integrating into said channel feature set. 7. The method of claim 4, wherein the step of aggregating, comprises:
retrieving, from said at least one data repository, a stored channel and an associated stored channel feature set identified by said channel identification pair; joining the channel feature set with the stored channel feature set matched by said entity identification pair; computing features for at least one entity associated with the stored channel; and storing the joined channel feature set into said at least one data repository. 8. The method of claim 7, wherein the step of computing further comprises:
grouping a set of channels matched by the associated host; and computing the features of the associated host by joining the feature associated with each channel which is associated with the host. 9. The method of claim 4, wherein the step of generating, comprises:
using an entity scoring model, said entity scoring model is operable to provide said entity score for said at least one entity; classifying said at least one entity to determine said risk factor according to said entity score; and storing pertaining data of said risk factor into said at least one data repository; wherein the entity score expresses the likelihood that said at least one entity is associated with a command and control (C&C) host communication. 10. The method of claim 9, wherein said at least one log-analytic detection platform is operable to collect a plurality of classified entities and execute a supervised machine learning algorithm to determine said entity scoring model,
wherein said plurality of classified entities are selected from a group consisting of a channel, an asset, a host and combinations thereof. 11. The method of claim 9, further comprising validating said risk factor associated with said at least one entity. 12. The method of claim 9, wherein the step of generating further comprises:
creating an output list of potentially compromised client assets, if said risk factor indicates that said at least one entity is malicious, said output list comprising each of said plurality of assets communicating with said at least one entity. 13. The method of claim 4, further comprising:
creating an output incidents report comprising data pertaining to the risk factor associated with each of said plurality of channels related entities. 14. The method of claim 13, wherein said output incidents report is configured to be transmitted via said communication network to said at least one client network. 15. The method of claim 4, further comprising the step of:
creating an alert associated with a detectable security incident associated with at least one entity, said alert is configured to be transmitted via said communication network. 16. A method for detecting security threats associated with at least one client network, the method for use in a system, said system comprising:
at least one network entity associated with said at least one client network and operable to enable outbound communication via a communication network; at least one asset associated with said at least one client network and operable to communicate with at least one of a plurality of hosts via said communication network; and at least one log-analytic detection platform operable to analyze a plurality of log files and further determine a risk factor associated with at least one super-channel, said at least one super-channel is characterized by a super-channel feature set, said at least one super-channel comprises:
a set of channels, each said channel connecting an asset with a host, wherein said at least one host associated with a host-group, and wherein
each said channel being characterized by a characteristics vector and a channel identification pair,
said method for operating said at least one log-analytics detection platform in an improved manner, the method comprising:
obtaining said plurality of log files from said at least one client network, each of said plurality of log files comprising a plurality of communication records,
identifying said at least one super-channel, wherein the set of channels associated with said at least one super-channel are determined by a shared similarity;
extracting the super-channel feature set for said at least one super-channel;
aggregating the super-channel feature set for said at least one super-channel into at least one data repository; and
generating said risk factor for said at least one entity associated with entities of said at least one super-channel, said risk factor characterized by an entity score. 17. The method of claim 16, wherein the step of identifying, comprises:
identifying a set of channels having the same asset and a shared similarity into a super-channel; setting the asset of the super-channel to be the asset of each channel having said common characteristics and setting the host-group of the super-channel to include the hosts of the associated channels; and creating a new super-channel for each channel that is not grouped, where the associated host-group comprises the host of the associated channel, wherein said shared similarity is based on identity or similarity in certain characteristics or based on similarity between a combination of characteristics of the associated characteristics vector. 18. The method of claim 16, wherein the step of extracting, comprises:
extracting a set of attributes representing the associated super-channel feature set, said super-channel feature set characterized by at least one of: an identified similarity characteristics determined by said shared similarity associated with each channel of said set of channels, a communication behavior characteristics associated with at least one channel of said set of channels, a domain characteristics of at least one host of the associated host-group; and a host IP address characteristics of at least one host of the associated host-group. 19. The method of claim 16, wherein the step of aggregating, comprises:
retrieving, from said at least one data repository, a stored super-channel and an associated stored super-channel feature set matching at least one of said set of channels associated with said at least one super-channel, wherein said matching comprises an identical asset and a common host or a similarity in characteristics of the associated channels; joining the host-group associated with the at least one super-channel into the host-group associated with the stored super-channel; joining the super-channel feature set associated with the at least one super-channel into the stored super-channel feature set; computing features for at least one entity associated with the stored super-channel; and storing the joined super-channel feature set for the stored super-channel into said at least one data repository. 20. The method of claim 19, wherein the step of computing, comprises:
joining host-groups having at least one common host and updating the associated channels to relate to the joined host-group; grouping a set of super-channels associated with the same host-group; and computing the associated features of the host-group by joining the feature values associated with each super-channel associated with the host-group. 21. The method of claim 16, wherein said characteristics vector comprises data pertaining to at least one characteristic selected from a group consisting of: communication characteristics, domain name characteristics, IP address characteristics and combinations thereof,
wherein said communication characteristics comprises data associated with at least one of the path and query parts of a URL, destination IP address, sequence properties; and wherein said domain name characteristics and IP address characteristics comprises data associated with at least one of the domain and subdomain of the host, the domain registration details, IP addresses of the domain and the domain site. 22. The method of claim 16, wherein further comprising the step of merging associated host-groups based upon similarities, comprising:
determining the shared similarity of a first super-channel with a second super-channel; and merging the associated host-group of the second super-channel into the associated host-group of the first super-channel, if the characteristic vector of said first super-channel is analyzed as being similar to the characteristic vector of said second super-channel. 23. The method of claim 16, wherein said at least one log-analytic detection platform is operable to collect a plurality of classified entities and execute a supervised machine learning algorithm to determine said entity scoring model,
wherein, each of said plurality of classified entities is selected from a group consisting of a super-channel, a host-group, a channel, an asset, a host and combinations thereof. 24. The method of claim 16, wherein the step of generating, comprises:
using an entity scoring model, said entity scoring model is operable to provide said entity score for said at least one entity; classifying said at least one entity to determine said risk factor according to said entity score; and storing pertaining data of said risk factor in said at least one data repository; wherein the entity score expresses the likelihood that said at least one entity is associated with a command and control (C&C) host communication. 25. The method of claim 22, wherein step of merging, comprises:
determining the shared similarity of a first super-channel with a second super-channel such that the associated host-group comprises at least one C&C host; and merging the associated host-group of the second super-channel into the associated host-group of the first super-channel, if the first host-group comprises no C&C hosts and the second host-group comprises at least one C&C host, such that all associated hosts of the merged host-group are marked as C&C hosts. | 2,400 |
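The method claims of the record above (claims 4, 6, and 9) walk through a pipeline: obtain log records, group them by the (asset, host) channel identification pair, extract a per-channel feature set, and score entities with a scoring model. The sketch below is a minimal, hypothetical illustration of that flow, not the patented implementation: the log-record fields (`asset`, `host`, `bytes`, `url`), the feature names, and the weighted-sum stand-in for the learned entity scoring model are all assumptions made for the example.

```python
from collections import defaultdict

def extract_channel_features(log_records):
    """Group log records by the (asset, host) channel identification pair
    and derive a simple feature set per channel (illustrative features)."""
    channels = defaultdict(list)
    for rec in log_records:
        channels[(rec["asset"], rec["host"])].append(rec)
    feature_sets = {}
    for pair, recs in channels.items():
        feature_sets[pair] = {
            "record_count": len(recs),
            "total_bytes": sum(r.get("bytes", 0) for r in recs),
            "distinct_urls": len({r.get("url") for r in recs}),
        }
    return feature_sets

def score_entity(features, weights):
    """Placeholder entity scoring model: a weighted sum standing in for the
    supervised model the claims describe; returns the entity score."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Hypothetical normalized log records from a client network.
logs = [
    {"asset": "a1", "host": "evil.example", "bytes": 10, "url": "/beacon"},
    {"asset": "a1", "host": "evil.example", "bytes": 10, "url": "/beacon"},
    {"asset": "a2", "host": "ok.example", "bytes": 5, "url": "/index"},
]
feats = extract_channel_features(logs)
score = score_entity(feats[("a1", "evil.example")], {"record_count": 0.5})
```

In a real deployment the weights would come from the supervised machine learning step of claim 10, and the score would be thresholded into the risk factor of claim 9.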
8,645 | 8,645 | 15,933,192 | 2,444 | Providing a video stream and metadata over channels is described. The method may include receiving a first message on a first channel of a plurality of channels, the first message encapsulating a video frame of a plurality of video frames. The method may also include generating, by a computer processing device, a second message comprising annotation metadata describing a characteristic of the video frame. The method may also include publishing the first message to a second channel of the plurality of channels and publishing the second message to a third channel of the plurality of channels. | 1. A method comprising:
receiving a first message on a first channel of a plurality of channels, the first message encapsulating a video frame of a plurality of video frames; generating, by a computer processing device, a second message comprising annotation metadata describing a characteristic of the video frame; publishing the first message to a second channel of the plurality of channels; and publishing the second message to a third channel of the plurality of channels. 2. The method of claim 1, further comprising processing the video frame to determine the characteristic of the video frame. 3. The method of claim 1, wherein generating the second message further comprises including an identification of the video frame in the message. 4. The method of claim 3, wherein the identification of the video frame includes at least one of a video stream identifier or a frame sequence number. 5. The method of claim 1, further comprising:
receiving the first message on the second channel and the second message on the third channel; modifying the second message to comprise second annotation metadata describing a second characteristic of the video frame; and publishing the modified second message to a fourth channel of the plurality of channels. 6. The method of claim 1, further comprising:
receiving the first message on the second channel and the second message on the third channel; and correlating the video frame with the annotation metadata based on a first identifier in the first message and a second identifier in the second message. 7. The method of claim 1, wherein the first message and the second message comprise JSON messages. 8. The method of claim 1, further comprising:
receiving a second video frame of the plurality of video frames on the first channel; distributing the first frame to a first frame analyzer to determine the characteristic of the video frame; distributing the second video frame to a second frame analyzer to determine a second characteristic of the second video frame; generating a third message comprising second annotation metadata describing the second characteristic of the second video frame; and publishing the third message to the third channel. 9. The method of claim 1, further comprising:
storing the first message and the second message in a memory buffer; and providing the first message and the second message to a client device in response to a request for previous messages. 10. A system, comprising:
a computer processing device programmed to perform operations to:
receive a first message on a first channel of a plurality of channels, the first message encapsulating a video frame of a plurality of video frames;
generate a second message comprising annotation data describing a characteristic of the video frame;
publish the first message to a second channel of the plurality of channels; and
publish the second message to a third channel of the plurality of channels. 11. The system of claim 10, wherein to generate the second message comprising annotation data describing a characteristic of the video frame, the computer processing device is further to process the video frame to determine the characteristic of the video frame. 12. The system of claim 10, wherein to generate the second message, the computer processing device is further to include an identification of the video frame in the message. 13. The system of claim 10, wherein the computer processing device is further to:
receive the first message on the second channel and the second message on the third channel; modify the second message to comprise second annotation metadata describing a second characteristic of the video frame; and publish the modified second message to a fourth channel of the plurality of channels. 14. The system of claim 10, wherein the processing device is further to:
receive the first message on the second channel and the second message on the third channel; and correlate the video frame with the annotation metadata based on a first identifier in the first message and a second identifier in the second message. 15. The system of claim 10, wherein the computer processing device is further to:
receive a second video frame of the plurality of video frames on the first channel; distribute the first frame to a first frame analyzer to determine the characteristic of the video frame; distribute the second video frame to a second frame analyzer to determine a second characteristic of the second video frame; generate a third message comprising second annotation metadata describing the second characteristic of the second video frame; and publish the third message to the third channel. 16. The system of claim 10, wherein the computer processing device is further to:
store the first message and the second message in a memory buffer; and provide the first message and the second message to a client device in response to a request for previous messages. 17. The system of claim 10, wherein the computer processing device is further to select the second channel from the plurality of channels in response to detecting an object in the video frame. 18. A non-transitory computer-readable medium having instructions stored thereon that, when executed by a computer processing device, cause the computer processing device to:
receive a first message on a first channel of a plurality of channels, the first message encapsulating a video frame of a plurality of video frames; generate a second message comprising annotation data describing a characteristic of the video frame; publish the first message to a second channel of the plurality of channels; and publish the second message to a third channel of the plurality of channels. 19. The non-transitory computer-readable medium of claim 18, wherein the instructions further cause the computer processing device to:
receive the first message on the second channel and the second message on the third channel; modify the second message to comprise second annotation metadata describing a second characteristic of the video frame; and publish the modified second message to a fourth channel of the plurality of channels. 20. The non-transitory computer-readable medium of claim 18, wherein the instructions further cause the computer processing device to:
receive the first message on the second channel and the second message on the third channel; and correlate the video frame with the annotation metadata based on a first identifier in the first message and a second identifier in the second message. | Providing a video stream and metadata over channels is described. The method may include receiving a first message on a first channel of a plurality of channels, the first message encapsulating a video frame of a plurality of video frames. The method may also include generating, by a computer processing device, a second message comprising annotation metadata describing a characteristic of the video frame. The method may also include publishing the first message to a second channel of the plurality of channels and publishing the second message to a third channel of the plurality of channels.1. A method comprising:
receiving a first message on a first channel of a plurality of channels, the first message encapsulating a video frame of a plurality of video frames; generating, by a computer processing device, a second message comprising annotation metadata describing a characteristic of the video frame; publishing the first message to a second channel of the plurality of channels; and publishing the second message to a third channel of the plurality of channels. 2. The method of claim 1, further comprising processing the video frame to determine the characteristic of the video frame. 3. The method of claim 1, wherein generating the second message further comprises including an identification of the video frame in the message. 4. The method of claim 3, wherein the identification of the video frame includes at least one of a video stream identifier or a frame sequence number. 5. The method of claim 1, further comprising:
receiving the first message on the second channel and the second message on the third channel; modifying the second message to comprise second annotation metadata describing a second characteristic of the video frame; and publishing the modified second message to a fourth channel of the plurality of channels. 6. The method of claim 1, further comprising:
receiving the first message on the second channel and the second message on the third channel; and correlating the video frame with the annotation metadata based on a first identifier in the first message and a second identifier in the second message. 7. The method of claim 1, wherein the first message and the second message comprise JSON messages. 8. The method of claim 1, further comprising:
receiving a second video frame of the plurality of video frames on the first channel; distributing the first frame to a first frame analyzer to determine the characteristic of the video frame; distributing the second video frame to a second frame analyzer to determine a second characteristic of the video frame; generating a third message comprising second annotation metadata describing the second characteristic of the second video frame; and publishing the third message to the third channel. 9. The method of claim 1, further comprising:
storing the first message and the second message in a memory buffer; and providing the first message and the second message to a client device in response to a request for previous messages. 10. A system, comprising:
a computer processing device programmed to perform operations to:
receive a first message on a first channel of a plurality of channels, the first message encapsulating a video frame of a plurality of video frames;
generate a second message comprising annotation data describing a characteristic of the video frame;
publish the first message to a second channel of the plurality of channels; and
publish the second message to a third channel of the plurality of channels. 11. The system of claim 10, wherein to generate the second message comprising annotation data describing a characteristic of the video frame, the computer processing device is further to process the video frame to determine the characteristic of the video frame. 12. The system of claim 10, wherein to generate the second message, the computer processing device is further to include an identification of the video frame in the message. 13. The system of claim 10, wherein the computer processing device is further to:
receive the first message on the second channel and the second message on the third channel; modify the second message to comprise second annotation metadata describing a second characteristic of the video frame; and publish the modified second message to a fourth channel of the plurality of channels. 14. The system of claim 10, wherein the processing device is further to:
receive the first message on the second channel and the second message on the third channel; and correlate the video frame with the annotation metadata based on a first identifier in the first message and a second identifier in the second message. 15. The system of claim 10, wherein the computer processing device is further to:
receive a second video frame of the plurality of video frames on the first channel; distribute the first frame to a first frame analyzer to determine the characteristic of the video frame; distribute the second video frame to a second frame analyzer to determine a second characteristic of the video frame; generate a third message comprising second annotation metadata describing the second characteristic of the second video frame; and publish the third message to the third channel. 16. The system of claim 10, wherein the computer processing device is further to:
store the first message and the second message in a memory buffer; and provide the first message and the second message to a client device in response to a request for previous messages. 17. The system of claim 10, wherein the computer processing device is further to select the second channel from the plurality of channels in response to detecting an object in the video frame. 18. A non-transitory computer-readable medium having instructions stored thereon that, when executed by a computer processing device, cause the computer processing device to:
receive a first message on a first channel of a plurality of channels, the first message encapsulating a video frame of a plurality of video frames; generate a second message comprising annotation data describing a characteristic of the video frame; publish the first message to a second channel of the plurality of channels; and publish the second message to a third channel of the plurality of channels. 19. The non-transitory computer-readable medium of claim 18, wherein the instructions further cause the computer processing device to:
receive the first message on the second channel and the second message on the third channel; modify the second message to comprise second annotation metadata describing a second characteristic of the video frame; and publish the modified second message to a fourth channel of the plurality of channels. 20. The non-transitory computer-readable medium of claim 18, wherein the instructions further cause the computer processing device to:
receive the first message on the second channel and the second message on the third channel; and correlate the video frame with the annotation metadata based on a first identifier in the first message and a second identifier in the second message. | 2,400 |
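The record above claims a pipeline in which a frame message arrives on one channel, is republished unchanged to a second channel, and an annotation message keyed to the same frame identifiers is published to a third channel, with a downstream consumer correlating the two by those identifiers. A minimal in-process sketch of that pattern follows; the channel names, message fields, and the "brightness" characteristic are illustrative assumptions, not the claimed implementation.

```python
from typing import Callable

class Bus:
    """Tiny in-process publish/subscribe bus standing in for the plurality of channels."""
    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable[[dict], None]]] = {}

    def subscribe(self, channel: str, handler: Callable[[dict], None]) -> None:
        self.subscribers.setdefault(channel, []).append(handler)

    def publish(self, channel: str, message: dict) -> None:
        for handler in self.subscribers.get(channel, []):
            handler(message)

def annotate(bus: Bus, frame_msg: dict) -> None:
    # Republish the frame message untouched on a second channel, and publish
    # annotation metadata (carrying the same frame identification) on a third.
    bus.publish("frames.out", frame_msg)
    annotation = {
        "stream_id": frame_msg["stream_id"],
        "frame_id": frame_msg["frame_id"],
        "characteristic": {"brightness": sum(frame_msg["pixels"]) / len(frame_msg["pixels"])},
    }
    bus.publish("annotations.out", annotation)

# Downstream consumer correlates frame and annotation by their identifiers.
correlated: dict[tuple, dict] = {}
def on_frame(msg: dict) -> None:
    correlated.setdefault((msg["stream_id"], msg["frame_id"]), {})["frame"] = msg
def on_annotation(msg: dict) -> None:
    correlated.setdefault((msg["stream_id"], msg["frame_id"]), {})["annotation"] = msg

bus = Bus()
bus.subscribe("frames.in", lambda m: annotate(bus, m))
bus.subscribe("frames.out", on_frame)
bus.subscribe("annotations.out", on_annotation)

bus.publish("frames.in", {"stream_id": "cam-1", "frame_id": 0, "pixels": [10, 20, 30]})
```

A real deployment would use a message broker rather than in-process callbacks, but the correlation step (claims 6 and 14) reduces to joining the two streams on the shared stream/frame identifiers exactly as above.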
8,646 | 8,646 | 15,218,762 | 2,446 | The invention provides an Ethernet bridge or router comprising a network fabric adapted to provide interconnectivity to a plurality of Ethernet ports, each of the Ethernet ports being adapted to receive and/or transmit Ethernet frames, and wherein the Ethernet bridge or router further comprises an encapsulator connected to receive Ethernet Protocol Data Units from the Ethernet ports, wherein the encapsulator is operable to generate a Fabric Protocol Data Unit from a received Ethernet Protocol Data Unit, the Fabric Protocol Data Unit comprising a header portion, and a payload portion which comprises the Ethernet Protocol Data Unit concerned, and wherein the encapsulator is operable to transform Ethernet destination address information from the Ethernet Protocol Data Unit into a routing definition for the network fabric, and to include this routing definition in the header portion of the Fabric Protocol Data Unit. Also provided is a method of data delivery across a network. | 1. A method of data delivery across a network comprising a network fabric configured to provide interconnectivity to a plurality of Ethernet ports, the method comprising:
receiving an Ethernet frame or packet at one of the plurality of Ethernet ports, the Ethernet frame or packet comprising an Ethernet Protocol Data Unit having Ethernet destination address information; generating a Fabric Protocol Data Unit from the received Ethernet Protocol Data Unit, the Fabric Protocol Data Unit comprising a header portion and a payload portion, wherein the payload portion carries the Ethernet Protocol Data Unit, which includes the Ethernet destination address information, and the header portion comprises a destination descriptor for the network fabric derived from the Ethernet destination address which identifies a complete route across the network fabric for the Fabric Protocol Data Unit, the complete route across the network fabric being identified by correlating the value of the destination descriptor to the physical location of the Ethernet ports; transmitting the Fabric Protocol Data Unit from an ingress network port of the network fabric to at least one egress network port of the network fabric using the destination descriptor and without extracting the Ethernet Protocol Data Unit, wherein the fabric comprises a plurality of switches, and wherein the Ethernet Protocol Data Unit is neither de-encapsulated nor re-encapsulated by any of the switches; at the at least one egress port, extracting the Ethernet frame or packet from the Fabric Protocol Data Unit; and delivering the Ethernet frame or packet to an Ethernet device. 2. The method of claim 1, wherein the network fabric further comprises a plurality of network ports and wherein during the step of transmitting the Fabric Protocol Data Unit the Ethernet Protocol Data Unit is neither de-encapsulated nor re-encapsulated by any of the network ports. 3. The method of claim 2, wherein during the step of transmitting the Fabric Protocol Data Unit the Ethernet Protocol Data Unit of the Fabric Protocol Data Unit is invisible to the switches of the network fabric. 4. 
The method of claim 3, wherein during the step of transmitting the Fabric Protocol Data Unit no part of the payload of the Fabric Protocol Data Unit is modified. 5. The method of claim 2, wherein each one of the network ports is allocated a destination number which is representative of the physical position of the network port on the network fabric whereby the Fabric Protocol Data Unit is transmitted across the network using algorithmic routing. 6. The method of claim 1, wherein the plurality of Ethernet ports are programmable and adapted to support automatically both Ethernet Protocol Data Units and proprietary Fabric Protocol Data Units whereby during the step of receiving an Ethernet frame the Ethernet ports automatically interpret either protocol. 7. The method of claim 1, wherein the step of generating a Fabric Protocol Data Unit implements a new protocol layer additional to the protocol layers of the Open Systems Interconnect model for Ethernet networks. 8. The method of claim 7, wherein the new protocol layer is stacked between the Physical Layer and the Data Link Layer of the Open Systems Interconnect model for Ethernet networks and provides for encapsulation of network layer Protocol Data Units and data link layer Protocol Data Units in the Fabric Protocol Data Unit. 9. The method of claim 1, further comprising the step of interrupting the generation of a Fabric Protocol Data Unit, adding one or more control tokens to the Fabric Protocol Data Unit and transmitting the Fabric Protocol Data Unit with the one or more control tokens across the network fabric. 10. The method of claim 9, further comprising the step of replacing or removing one or more control tokens previously inserted into a Fabric Protocol Data Unit. 11. The Method of claim 1 wherein extracting the Ethernet Frame comprises stripping the header from the Fabric Protocol Data Unit leaving the Ethernet Frame or Packet for delivery to the Ethernet Device. 12. 
An Ethernet bridge or router comprising a network fabric configured to provide interconnectivity to a plurality of Ethernet ports, each of the Ethernet ports being adapted to receive and/or transmit Ethernet frames, and wherein the Ethernet bridge or router further comprises software instructions for operating an encapsulator to generate a Fabric Protocol Data Unit from a received Ethernet Protocol Data Unit, the Fabric Protocol Data Unit comprising a header portion, and a payload portion which comprises the Ethernet Protocol Data Unit concerned, and wherein the encapsulator is operable to transform Ethernet destination address information from the Ethernet Protocol Data Unit into a destination descriptor for the network fabric which defines a complete route across the network fabric, and to include this destination descriptor in the header portion of the Fabric Protocol Data Unit. 13. The Ethernet bridge or router of claim 12, wherein the encapsulator is operable to transform Ethernet destination address information from the Ethernet Protocol Data Unit into the destination descriptor which defines a set of complete routes for the Fabric Protocol Data Unit through the network fabric, wherein the encapsulator is further operable to transmit the Fabric Protocol Data Units to the network fabric, such that the Fabric Protocol Data Unit is transmitted across the fabric to a selected Ethernet Port. 14. The Ethernet bridge or router of claim 13, wherein the encapsulator is adapted to allow for the payload of a Fabric Protocol Data Unit to be interrupted for the insertion of one or more control tokens. 15. The Bridge or Router of claim 13 wherein the header from the fabric Protocol Data Unit can be stripped to accommodate delivery of the Ethernet Protocol Data Units to the Ethernet port. 16. An Ethernet bridge or router, comprising:
a plurality of Ethernet ports, each of the Ethernet ports being adapted to receive and/or transmit Ethernet Protocol Data Units; a network fabric configured to provide interconnectivity between the plurality of Ethernet ports, the network fabric having a plurality of switches and a plurality of network ports; an encapsulator connected to receive the Ethernet Protocol Data Units from the Ethernet ports, the encapsulator being operable to generate a Fabric Protocol Data Unit from the received Ethernet Protocol Data Unit, the Fabric Protocol Data Unit comprising a header portion, and a payload portion which comprises the Ethernet Protocol Data Unit concerned, wherein the encapsulator is further operable to transform Ethernet destination address information from the Ethernet Protocol Data Unit into a destination descriptor which defines a complete route for the Fabric Protocol Data Unit through the network fabric, wherein the network fabric is thus capable of receiving and/or transmitting the Fabric Protocol Data Unit directly through the network fabric to an egress network port using the destination descriptor, without de-encapsulation or re-encapsulation of the Ethernet Protocol Data Unit, and wherein the encapsulator is operable to include the destination descriptor in the header portion of the Fabric Protocol Data Unit. 17. The Ethernet bridge or router of claim 16, wherein the Ethernet Protocol Data Unit of the Fabric Protocol Data Unit is invisible to the switches of the network fabric. 18. The Ethernet bridge or router of claim 17, wherein the network fabric is adapted so as not to modify any part of the payload of the Fabric Protocol Data Unit. 19. The Ethernet bridge or router of claim 16, wherein each of the network ports is allocated a destination number which is representative of the physical position of the network port on the network fabric thereby enabling algorithmic routing of the Fabric Protocol Data Unit across the network. 20. 
The Ethernet bridge or router of claim 16, wherein the plurality of Ethernet ports are programmable and adapted to support automatically both Ethernet Protocol Data Units and proprietary Fabric Protocol Data Units, the ports automatically interpreting either protocol when it is received. 21. The Ethernet bridge or router of claim 16, wherein the encapsulator implements a new protocol layer additional to the protocol layers of the Open Systems Interconnect model for Ethernet networks. 22. The Ethernet bridge or router as claimed in claim 21, wherein the new protocol layer is stacked between the Physical Layer and the Data Link Layer of the Open Systems Interconnect model for Ethernet networks and provides for encapsulation of network layer Protocol Data Units and data link layer Protocol Data Units in the Fabric Protocol Data Unit. | The invention provides an Ethernet bridge or router comprising a network fabric adapted to provide interconnectivity to a plurality of Ethernet ports, each of the Ethernet ports being adapted to receive and/or transmit Ethernet frames, and wherein the Ethernet bridge or router further comprises an encapsulator connected to receive Ethernet Protocol Data Units from the Ethernet ports, wherein the encapsulator is operable to generate a Fabric Protocol Data Unit from a received Ethernet Protocol Data Unit, the Fabric Protocol Data Unit comprising a header portion, and a payload portion which comprises the Ethernet Protocol Data Unit concerned, and wherein the encapsulator is operable to transform Ethernet destination address information from the Ethernet Protocol Data Unit into a routing definition for the network fabric, and to include this routing definition in the header portion of the Fabric Protocol Data Unit. Also provided is a method of data delivery across a network.1. A method of data delivery across a network comprising a network fabric configured to provide interconnectivity to a plurality of Ethernet ports, the method comprising:
receiving an Ethernet frame or packet at one of the plurality of Ethernet ports, the Ethernet frame or packet comprising an Ethernet Protocol Data Unit having Ethernet destination address information; generating a Fabric Protocol Data Unit from the received Ethernet Protocol Data Unit, the Fabric Protocol Data Unit comprising a header portion and a payload portion, wherein the payload portion carries the Ethernet Protocol Data Unit, which includes the Ethernet destination address information, and the header portion comprises a destination descriptor for the network fabric derived from the Ethernet destination address which identifies a complete route across the network fabric for the Fabric Protocol Data Unit, the complete route across the network fabric being identified by correlating the value of the destination descriptor to the physical location of the Ethernet ports; transmitting the Fabric Protocol Data Unit from an ingress network port of the network fabric to at least one egress network port of the network fabric using the destination descriptor and without extracting the Ethernet Protocol Data Unit, wherein the fabric comprises a plurality of switches, and wherein the Ethernet Protocol Data Unit is neither de-encapsulated nor re-encapsulated by any of the switches; at the at least one egress port, extracting the Ethernet frame or packet from the Fabric Protocol Data Unit; and delivering the Ethernet frame or packet to an Ethernet device. 2. The method of claim 1, wherein the network fabric further comprises a plurality of network ports and wherein during the step of transmitting the Fabric Protocol Data Unit the Ethernet Protocol Data Unit is neither de-encapsulated nor re-encapsulated by any of the network ports. 3. The method of claim 2, wherein during the step of transmitting the Fabric Protocol Data Unit the Ethernet Protocol Data Unit of the Fabric Protocol Data Unit is invisible to the switches of the network fabric. 4. 
The method of claim 3, wherein during the step of transmitting the Fabric Protocol Data Unit no part of the payload of the Fabric Protocol Data Unit is modified. 5. The method of claim 2, wherein each one of the network ports is allocated a destination number which is representative of the physical position of the network port on the network fabric whereby the Fabric Protocol Data Unit is transmitted across the network using algorithmic routing. 6. The method of claim 1, wherein the plurality of Ethernet ports are programmable and adapted to support automatically both Ethernet Protocol Data Units and proprietary Fabric Protocol Data Units whereby during the step of receiving an Ethernet frame the Ethernet ports automatically interpret either protocol. 7. The method of claim 1, wherein the step of generating a Fabric Protocol Data Unit implements a new protocol layer additional to the protocol layers of the Open Systems Interconnect model for Ethernet networks. 8. The method of claim 7, wherein the new protocol layer is stacked between the Physical Layer and the Data Link Layer of the Open Systems Interconnect model for Ethernet networks and provides for encapsulation of network layer Protocol Data Units and data link layer Protocol Data Units in the Fabric Protocol Data Unit. 9. The method of claim 1, further comprising the step of interrupting the generation of a Fabric Protocol Data Unit, adding one or more control tokens to the Fabric Protocol Data Unit and transmitting the Fabric Protocol Data Unit with the one or more control tokens across the network fabric. 10. The method of claim 9, further comprising the step of replacing or removing one or more control tokens previously inserted into a Fabric Protocol Data Unit. 11. The Method of claim 1 wherein extracting the Ethernet Frame comprises stripping the header from the Fabric Protocol Data Unit leaving the Ethernet Frame or Packet for delivery to the Ethernet Device. 12. 
An Ethernet bridge or router comprising a network fabric configured to provide interconnectivity to a plurality of Ethernet ports, each of the Ethernet ports being adapted to receive and/or transmit Ethernet frames, and wherein the Ethernet bridge or router further comprises software instructions for operating an encapsulator to generate a Fabric Protocol Data Unit from a received Ethernet Protocol Data Unit, the Fabric Protocol Data Unit comprising a header portion, and a payload portion which comprises the Ethernet Protocol Data Unit concerned, and wherein the encapsulator is operable to transform Ethernet destination address information from the Ethernet Protocol Data Unit into a destination descriptor for the network fabric which defines a complete route across the network fabric, and to include this destination descriptor in the header portion of the Fabric Protocol Data Unit. 13. The Ethernet bridge or router of claim 12, wherein the encapsulator is operable to transform Ethernet destination address information from the Ethernet Protocol Data Unit into the destination descriptor which defines a set of complete routes for the Fabric Protocol Data Unit through the network fabric, wherein the encapsulator is further operable to transmit the Fabric Protocol Data Units to the network fabric, such that the Fabric Protocol Data Unit is transmitted across the fabric to a selected Ethernet Port. 14. The Ethernet bridge or router of claim 13, wherein the encapsulator is adapted to allow for the payload of a Fabric Protocol Data Unit to be interrupted for the insertion of one or more control tokens. 15. The Bridge or Router of claim 13 wherein the header from the fabric Protocol Data Unit can be stripped to accommodate delivery of the Ethernet Protocol Data Units to the Ethernet port. 16. An Ethernet bridge or router, comprising:
a plurality of Ethernet ports, each of the Ethernet ports being adapted to receive and/or transmit Ethernet Protocol Data Units; a network fabric configured to provide interconnectivity between the plurality of Ethernet ports, the network fabric having a plurality of switches and a plurality of network ports; an encapsulator connected to receive the Ethernet Protocol Data Units from the Ethernet ports, the encapsulator being operable to generate a Fabric Protocol Data Unit from the received Ethernet Protocol Data Unit, the Fabric Protocol Data Unit comprising a header portion, and a payload portion which comprises the Ethernet Protocol Data Unit concerned, wherein the encapsulator is further operable to transform Ethernet destination address information from the Ethernet Protocol Data Unit into a destination descriptor which defines a complete route for the Fabric Protocol Data Unit through the network fabric, wherein the network fabric is thus capable of receiving and/or transmitting the Fabric Protocol Data Unit directly through the network fabric to an egress network port using the destination descriptor, without de-encapsulation or re-encapsulation of the Ethernet Protocol Data Unit, and wherein the encapsulator is operable to include the destination descriptor in the header portion of the Fabric Protocol Data Unit. 17. The Ethernet bridge or router of claim 16, wherein the Ethernet Protocol Data Unit of the Fabric Protocol Data Unit is invisible to the switches of the network fabric. 18. The Ethernet bridge or router of claim 17, wherein the network fabric is adapted so as not to modify any part of the payload of the Fabric Protocol Data Unit. 19. The Ethernet bridge or router of claim 16, wherein each of the network ports is allocated a destination number which is representative of the physical position of the network port on the network fabric thereby enabling algorithmic routing of the Fabric Protocol Data Unit across the network. 20. 
The Ethernet bridge or router of claim 16, wherein the plurality of Ethernet ports are programmable and adapted to support automatically both Ethernet Protocol Data Units and proprietary Fabric Protocol Data Units, the ports automatically interpreting either protocol when it is received. 21. The Ethernet bridge or router of claim 16, wherein the encapsulator implements a new protocol layer additional to the protocol layers of the Open Systems Interconnect model for Ethernet networks. 22. The Ethernet bridge or router as claimed in claim 21, wherein the new protocol layer is stacked between the Physical Layer and the Data Link Layer of the Open Systems Interconnect model for Ethernet networks and provides for encapsulation of network layer Protocol Data Units and data link layer Protocol Data Units in the Fabric Protocol Data Unit. | 2,400 |
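The record above describes encapsulating an Ethernet frame, unmodified, as the payload of a Fabric Protocol Data Unit whose header carries a destination descriptor derived from the Ethernet destination address, so that fabric switches route on the header alone and the egress port simply strips it. A hypothetical sketch of that framing; the MAC-to-port table and the 4-byte header layout are invented for illustration, not the patented format.

```python
import struct

# Assumed mapping from Ethernet destination address to egress network port
# (standing in for the destination-descriptor derivation in claim 1).
MAC_TO_EGRESS_PORT = {"aa:bb:cc:dd:ee:01": 7}

def encapsulate(eth_frame: bytes) -> bytes:
    # Derive the destination descriptor from the frame's destination MAC
    # (first 6 bytes of an Ethernet frame), then prepend a fabric header:
    # 2 bytes egress port, 2 bytes payload length, network byte order.
    dst_mac = ":".join(f"{b:02x}" for b in eth_frame[:6])
    egress_port = MAC_TO_EGRESS_PORT[dst_mac]
    header = struct.pack("!HH", egress_port, len(eth_frame))
    return header + eth_frame  # payload carried untouched

def switch_hop(fabric_pdu: bytes) -> int:
    # A fabric switch reads only the header; the encapsulated Ethernet PDU
    # is neither inspected, de-encapsulated, nor re-encapsulated in transit.
    egress_port, _ = struct.unpack("!HH", fabric_pdu[:4])
    return egress_port

def decapsulate(fabric_pdu: bytes) -> bytes:
    # At the egress port, strip the fabric header to recover the original frame.
    _, length = struct.unpack("!HH", fabric_pdu[:4])
    return fabric_pdu[4 : 4 + length]

frame = b"\xaa\xbb\xcc\xdd\xee\x01" + b"payload"
pdu = encapsulate(frame)
```

Because the header fully identifies the route, every hop is a fixed-size lookup-free forward, which is the property the claims emphasize with "algorithmic routing" keyed to physical port position.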
8,647 | 8,647 | 15,382,366 | 2,457 | A computer-implemented method, according to one embodiment, includes: receiving a request for a set of data at a first data storage tier, looking up corresponding metadata to each portion of the requested set of data, using the metadata to recall each of the portions of the requested set of data from object storage, and using the portions of the requested set of data to recompile a master object, the master object having a 1-to-1 mapping to the requested set of data. Other systems, methods, and computer program products are described in additional embodiments. | 1. A computer-implemented method, comprising:
receiving a request for a set of data at a first data storage tier; looking up corresponding metadata to each portion of the requested set of data; using the metadata to recall each of the portions of the requested set of data from object storage; and using the portions of the requested set of data to recompile a master object, the master object having a 1-to-1 mapping to the requested set of data. 2. The computer-implemented method of claim 1, wherein the requested set of data is a virtual tape library volume. 3. The computer-implemented method of claim 2, wherein the first data storage tier is a virtual tape tier, wherein the metadata is stored in a first designated portion of memory in the first data storage tier, wherein the object storage is a second designated portion of the memory. 4. The computer-implemented method of claim 1, wherein the first data storage tier is a virtual tape tier, wherein the object storage is included in a cloud-based distributed system. 5. The computer-implemented method of claim 1, wherein each portion of the requested set of data corresponds to a respective tenant identifier. 6. The computer-implemented method of claim 1, wherein looking up metadata which corresponds to each portion of the requested set of data includes:
obtaining access credentials associated with the requested set of data; and using the access credentials to perform a first authentication with an object storage. 7. The computer-implemented method of claim 6, wherein using the metadata to recall each of the portions of the requested set of data from object storage includes:
using the metadata to perform supplemental authentications with the object storage for each of the portions of the requested set of data; and receiving an object associated with each of the portions of the requested set of data. 8. The computer-implemented method of claim 1, comprising making the recompiled master object available for access. 9. A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:
receive, by the processor, a request for a set of data at a first data storage tier; look up, by the processor, corresponding metadata to each portion of the requested set of data; use, by the processor, the metadata to recall each of the portions of the requested set of data from object storage; and use, by the processor, the portions of the requested set of data to recompile a master object, the master object having a 1-to-1 mapping to the requested set of data. 10. The computer program product of claim 9, wherein the requested set of data is a virtual tape library volume. 11. The computer program product of claim 10, wherein the first data storage tier is a virtual tape tier, wherein the metadata is stored in a first designated portion of memory in the first data storage tier, wherein the object storage is a second designated portion of the memory. 12. The computer program product of claim 9, wherein the first data storage tier is a virtual tape tier, wherein the object storage is included in a cloud-based distributed system. 13. The computer program product of claim 9, wherein each portion of the requested set of data corresponds to a respective tenant identifier. 14. The computer program product of claim 9, wherein looking up metadata which corresponds to each portion of the requested set of data includes:
obtaining access credentials associated with the requested set of data; and using the access credentials to perform a first authentication with an object storage. 15. The computer program product of claim 14, wherein using the metadata to recall each of the portions of the requested set of data from object storage includes:
using the metadata to perform supplemental authentications with the object storage for each of the portions of the requested set of data; and receiving an object associated with each of the portions of the requested set of data. 16. The computer program product of claim 9, wherein the program instructions are further executable by the processor to cause the processor to make the recompiled master object available for access. 17. A system, comprising:
a processor; and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, the logic being configured to:
receive, by the processor, a request for a set of data at a first data storage tier;
look up, by the processor, corresponding metadata to each portion of the requested set of data;
use, by the processor, the metadata to recall each of the portions of the requested set of data from object storage; and
use, by the processor, the portions of the requested set of data to recompile a master object, the master object having a 1-to-1 mapping to the requested set of data. 18. The system of claim 17, wherein the requested set of data is a virtual tape library volume. 19. The system of claim 17, wherein the first data storage tier is a virtual tape tier, wherein the object storage is included in a cloud-based distributed system. 20. The system of claim 17, wherein each portion of the requested set of data corresponds to a respective tenant identifier. 21. A computer-implemented method, comprising:
using access credentials associated with a set of data to perform a first authentication with an object storage; accessing metadata which corresponds to each portion of the set of data in response to the authentication; using the metadata to perform supplemental authentications with the object storage for each of the portions of the set of data; and retrieving data associated with each of the portions of the set of data. 22. The computer-implemented method of claim 21, comprising using the retrieved data to recompile a master object in the object storage, the master object having a 1-to-1 mapping to the set of data. 23. The computer-implemented method of claim 21, wherein the object storage is included in a cloud-based distributed system. 24. The computer-implemented method of claim 21, wherein the metadata is stored in a first designated portion of memory in a virtual tape tier, wherein the object storage is a second designated portion of the memory. 25. The computer-implemented method of claim 21, wherein each portion of the set of data corresponds to a respective tenant identifier. | A computer-implemented method, according to one embodiment, includes: receiving a request for a set of data at a first data storage tier, looking up corresponding metadata to each portion of the requested set of data, using the metadata to recall each of the portions of the requested set of data from object storage, and using the portions of the requested set of data to recompile a master object, the master object having a 1-to-1 mapping to the requested set of data. Other systems, methods, and computer program products are described in additional embodiments.1. A computer-implemented method, comprising:
receiving a request for a set of data at a first data storage tier; looking up corresponding metadata to each portion of the requested set of data; using the metadata to recall each of the portions of the requested set of data from object storage; and using the portions of the requested set of data to recompile a master object, the master object having a 1-to-1 mapping to the requested set of data. 2. The computer-implemented method of claim 1, wherein the requested set of data is a virtual tape library volume. 3. The computer-implemented method of claim 2, wherein the first data storage tier is a virtual tape tier, wherein the metadata is stored in a first designated portion of memory in the first data storage tier, wherein the object storage is a second designated portion of the memory. 4. The computer-implemented method of claim 1, wherein the first data storage tier is a virtual tape tier, wherein the object storage is included in a cloud-based distributed system. 5. The computer-implemented method of claim 1, wherein each portion of the requested set of data corresponds to a respective tenant identifier. 6. The computer-implemented method of claim 1, wherein looking up metadata which corresponds to each portion of the requested set of data includes:
obtaining access credentials associated with the requested set of data; and using the access credentials to perform a first authentication with an object storage. 7. The computer-implemented method of claim 6, wherein using the metadata to recall each of the portions of the requested set of data from object storage includes:
using the metadata to perform supplemental authentications with the object storage for each of the portions of the requested set of data; and receiving an object associated with each of the portions of the requested set of data. 8. The computer-implemented method of claim 1, comprising making the recompiled master object available for access. 9. A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:
receive, by the processor, a request for a set of data at a first data storage tier, the request being; look up, by the processor, corresponding metadata to each portion of the requested set of data; use, by the processor, the metadata to recall each of the portions of the requested set of data from object storage; and use, by the processor, the portions of the requested set of data to recompile a master object, the master object having a 1-to-1 mapping to the requested set of data. 10. The computer program product of claim 9, wherein the requested set of data is a virtual tape library volume. 11. The computer program product of claim 10, wherein the first data storage tier is a virtual tape tier, wherein the metadata is stored in a first designated portion of memory in the first data storage tier, wherein the object storage is a second designated portion of the memory. 12. The computer program product of claim 9, wherein the first data storage tier is a virtual tape tier, wherein the object storage is included in a cloud-based distributed system. 13. The computer program product of claim 9, wherein each portion of the requested set of data corresponds to a respective tenant identifier. 14. The computer program product of claim 9, wherein looking up metadata which corresponds to each portion of the requested set of data includes:
obtaining access credentials associated with the requested set of data; and using the access credentials to perform a first authentication with an object storage. 15. The computer program product of claim 14, wherein using the metadata to recall each of the portions of the requested set of data from object storage includes:
using the metadata to perform supplemental authentications with the object storage for each of the portions of the requested set of data; and receiving an object associated with each of the portions of the requested set of data. 16. The computer program product of claim 9, wherein the program instructions are further executable by the processor to cause the processor to make the recompiled master object available for access. 17. A system, comprising:
a processor; and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, the logic being configured to:
receive, by the processor, a request for a set of data at a first data storage tier;
look up, by the processor, corresponding metadata to each portion of the requested set of data;
use, by the processor, the metadata to recall each of the portions of the requested set of data from object storage; and
use, by the processor, the portions of the requested set of data to recompile a master object, the master object having a 1-to-1 mapping to the requested set of data. 18. The system of claim 17, wherein the requested set of data is a virtual tape library volume. 19. The system of claim 17, wherein the first data storage tier is a virtual tape tier, wherein the object storage is included in a cloud-based distributed system. 20. The system of claim 17, wherein each portion of the requested set of data corresponds to a respective tenant identifier. 21. A computer-implemented method, comprising:
using access credentials associated with a set of data to perform a first authentication with an object storage; accessing metadata which corresponds to each portion of the set of data in response to the authentication; using the metadata to perform supplemental authentications with the object storage for each of the portions of the set of data; and retrieving data associated with each of the portions of the set of data. 22. The computer-implemented method of claim 21, comprising using the retrieved data to recompile a master object in the object storage, the master object having a 1-to-1 mapping to the set of data. 23. The computer-implemented method of claim 21, wherein the object storage is included in a cloud-based distributed system. 24. The computer-implemented method of claim 21, wherein the metadata is stored in a first designated portion of memory in a virtual tape tier, wherein the object storage is a second designated portion of the memory. 25. The computer-implemented method of claim 21, wherein each portion of the set of data corresponds to a respective tenant identifier. | 2,400 |
8,648 | 8,648 | 14,895,336 | 2,411 | A method of a wireless communication device operably connectable to a first cell of a first public land mobile network—PLMN—applying a first radio access technology—RAT, is disclosed. The method comprises detecting ( 301 ) a second cell of a second PLMN applying a second RAT and determining ( 302 ) whether a network relation exists between the first cell and the second cell. The method also comprises performing ( 304 ) a connection set up to the first cell if it is determined that a network relation exists between the first cell and the second cell, and performing ( 303 ) the connection set up to the second cell if it is determined that there does not exist a network relation between the first cell and the second cell. Also disclosed is an arrangement for a wireless communication device, a wireless communication device and a computer program product. | 1-17. (canceled) 18. A method of operating a wireless communication device operably connectable to a first cell of a first public land mobile network (PLMN) applying a first radio access technology (RAT), the method comprising:
detecting a second cell of a second PLMN applying a second RAT; determining whether a network relation exists between the first cell and the second cell; performing a connection set up to the first cell in response to determining that a network relation exists between the first cell and the second cell; and performing the connection set up to the second cell in response to determining that there does not exist a network relation between the first cell and the second cell. 19. The method of claim 18, wherein determining whether a network relation exists between the first cell and the second cell comprises determining whether the first PLMN of the first cell and the second PLMN of the second cell coincide. 20. The method of claim 18, wherein determining whether a network relation exists between the first cell and the second cell comprises determining whether there exists a backhaul connection between the first cell and the second cell. 21. The method of claim 18, wherein determining whether a network relation exists between the first cell and the second cell comprises reading first and second system information (SI), respectively associated with the first and second cells, the SI comprising information indicative of a relationship between the first PLMN and the second PLMN, wherein the relationship implies that a network connection exists between the first cell and the second cell. 22. The method of claim 18, further comprising camping on the first cell when the wireless communication device is in an idle mode. 23. The method of claim 18, further comprising, after connection set up to the first cell:
determining if a request for increased data is received; and connecting to the second cell in response to determining that a request for increased data is received. 24. The method of claim 23, wherein the increased data is at least one of an increased transmission rate or an increased transmission size. 25. The method of claim 18, further comprising storing an indication of the first PLMN associated with the first cell and an indication of the second PLMN associated with the second cell in a memory of the wireless communication device. 26. A computer program product stored in a non-transitory computer readable medium for controlling operation of a wireless communication device operably connectable to a first cell of a first public land mobile network (PLMN) applying a first radio access technology (RAT), the computer program product comprising software instructions which, when run on a processing circuit of the wireless communications device, cause the wireless communications device to:
detect a second cell of a second PLMN applying a second RAT; determine whether a network relation exists between the first cell and the second cell; perform a connection set up to the first cell in response to determining that a network relation exists between the first cell and the second cell; and perform the connection set up to the second cell in response to determining that there does not exist a network relation between the first cell and the second cell. 27. An arrangement of a wireless communication device operably connectable to a first cell of a first public land mobile network (PLMN) applying a first radio access technology (RAT), the arrangement comprising:
a processing circuit configured to function as a controller configured to:
cause detection of a second cell of a second PLMN applying a second RAT;
cause determination of whether a network relation exists between the first cell and the second cell; and
cause the wireless communication device to perform a connection set up to the first cell in response to determining that a network relation exists between the first cell and the second cell; and
cause the wireless communication device to perform the connection set up to the second cell in response to determining that there does not exist a network relation between the first cell and the second cell. 28. The arrangement of claim 27, wherein the controller is configured to cause determination of whether a network relation exists between the first cell and the second cell by causing determination of whether the first PLMN of the first cell and the second PLMN of the second cell coincide. 29. The arrangement of claim 27, wherein the controller is configured to cause determination of whether a network relation exists between the first cell and the second cell by causing determination of whether there exists a backhaul connection between the first cell and the second cell. 30. The arrangement of claim 27, wherein the controller is configured to cause determination of whether a network relation exists between the first cell and the second cell by causing reading of first and second system information (SI), respectively associated with the first and second cells, the SI comprising information indicative of a relationship between the first PLMN and the second PLMN, wherein the relationship implies that a network connection exists between the first cell and the second cell. 31. The arrangement of claim 27, wherein the controller is configured to cause the wireless communication device to camp on the first cell when the wireless communication device is in an idle mode. 32. The arrangement of claim 27, wherein the controller is configured to cause, after connection set up to the first cell:
determination of whether a request for increased data is received; and the wireless communication device to connect to the second cell in response to determining that a request for increased data was received. 33. The arrangement of claim 27, wherein the controller is configured to cause storage of an indication of the first PLMN associated with the first cell and an indication of the second PLMN associated with the second cell in a memory of the wireless communication device. 34. A wireless communication device operably connectable to a first cell of a first public land mobile network (PLMN) applying a first radio access technology (RAT), the wireless communications device comprising:
a processing circuit configured to function as a controller configured to:
cause detection of a second cell of a second PLMN applying a second RAT;
cause determination of whether a network relation exists between the first cell and the second cell; and
cause the wireless communication device to perform a connection set up to the first cell in response to determining that a network relation exists between the first cell and the second cell; and
cause the wireless communication device to perform the connection set up to the second cell in response to determining that there does not exist a network relation between the first cell and the second cell. | A method of a wireless communication device operably connectable to a first cell of a first public land mobile network—PLMN—applying a first radio access technology—RAT, is disclosed. The method comprises detecting ( 301 ) a second cell of a second PLMN applying a second RAT and determining ( 302 ) whether a network relation exists between the first cell and the second cell. The method also comprises performing ( 304 ) a connection set up to the first cell if it is determined that a network relation exists between the first cell and the second cell, and performing ( 303 ) the connection set up to the second cell if it is determined that there does not exist a network relation between the first cell and the second cell. Also disclosed is an arrangement for a wireless communication device, a wireless communication device and a computer program product.1-17. (canceled) 18. A method of operating a wireless communication device operably connectable to a first cell of a first public land mobile network (PLMN) applying a first radio access technology (RAT), the method comprising:
detecting a second cell of a second PLMN applying a second RAT; determining whether a network relation exists between the first cell and the second cell; performing a connection set up to the first cell in response to determining that a network relation exists between the first cell and the second cell; and performing the connection set up to the second cell in response to determining that there does not exist a network relation between the first cell and the second cell. 19. The method of claim 18, wherein determining whether a network relation exists between the first cell and the second cell comprises determining whether the first PLMN of the first cell and the second PLMN of the second cell coincide. 20. The method of claim 18, wherein determining whether a network relation exists between the first cell and the second cell comprises determining whether there exists a backhaul connection between the first cell and the second cell. 21. The method of claim 18, wherein determining whether a network relation exists between the first cell and the second cell comprises reading first and second system information (SI), respectively associated with the first and second cells, the SI comprising information indicative of a relationship between the first PLMN and the second PLMN, wherein the relationship implies that a network connection exists between the first cell and the second cell. 22. The method of claim 18, further comprising camping on the first cell when the wireless communication device is in an idle mode. 23. The method of claim 18, further comprising, after connection set up to the first cell:
determining if a request for increased data is received; and connecting to the second cell in response to determining that a request for increased data is received. 24. The method of claim 23, wherein the increased data is at least one of an increased transmission rate or an increased transmission size. 25. The method of claim 18, further comprising storing an indication of the first PLMN associated with the first cell and an indication of the second PLMN associated with the second cell in a memory of the wireless communication device. 26. A computer program product stored in a non-transitory computer readable medium for controlling operation of a wireless communication device operably connectable to a first cell of a first public land mobile network (PLMN) applying a first radio access technology (RAT), the computer program product comprising software instructions which, when run on a processing circuit of the wireless communications device, cause the wireless communications device to:
detect a second cell of a second PLMN applying a second RAT; determine whether a network relation exists between the first cell and the second cell; perform a connection set up to the first cell in response to determining that a network relation exists between the first cell and the second cell; and perform the connection set up to the second cell in response to determining that there does not exist a network relation between the first cell and the second cell. 27. An arrangement of a wireless communication device operably connectable to a first cell of a first public land mobile network (PLMN) applying a first radio access technology (RAT), the arrangement comprising:
a processing circuit configured to function as a controller configured to:
cause detection of a second cell of a second PLMN applying a second RAT;
cause determination of whether a network relation exists between the first cell and the second cell; and
cause the wireless communication device to perform a connection set up to the first cell in response to determining that a network relation exists between the first cell and the second cell; and
cause the wireless communication device to perform the connection set up to the second cell in response to determining that there does not exist a network relation between the first cell and the second cell. 28. The arrangement of claim 27, wherein the controller is configured to cause determination of whether a network relation exists between the first cell and the second cell by causing determination of whether the first PLMN of the first cell and the second PLMN of the second cell coincide. 29. The arrangement of claim 27, wherein the controller is configured to cause determination of whether a network relation exists between the first cell and the second cell by causing determination of whether there exists a backhaul connection between the first cell and the second cell. 30. The arrangement of claim 27, wherein the controller is configured to cause determination of whether a network relation exists between the first cell and the second cell by causing reading of first and second system information (SI), respectively associated with the first and second cells, the SI comprising information indicative of a relationship between the first PLMN and the second PLMN, wherein the relationship implies that a network connection exists between the first cell and the second cell. 31. The arrangement of claim 27, wherein the controller is configured to cause the wireless communication device to camp on the first cell when the wireless communication device is in an idle mode. 32. The arrangement of claim 27, wherein the controller is configured to cause, after connection set up to the first cell:
determination of whether a request for increased data is received; and the wireless communication device to connect to the second cell in response to determining that a request for increased data was received. 33. The arrangement of claim 27, wherein the controller is configured to cause storage of an indication of the first PLMN associated with the first cell and an indication of the second PLMN associated with the second cell in a memory of the wireless communication device. 34. A wireless communication device operably connectable to a first cell of a first public land mobile network (PLMN) applying a first radio access technology (RAT), the wireless communications device comprising:
a processing circuit configured to function as a controller configured to:
cause detection of a second cell of a second PLMN applying a second RAT;
cause determination of whether a network relation exists between the first cell and the second cell; and
cause the wireless communication device to perform a connection set up to the first cell in response to determining that a network relation exists between the first cell and the second cell; and
cause the wireless communication device to perform the connection set up to the second cell in response to determining that there does not exist a network relation between the first cell and the second cell. | 2,400 |
8,649 | 8,649 | 15,227,295 | 2,485 | Three dimensional [3D] image data and auxiliary graphical data are combined for rendering on a 3D display ( 30 ) by detecting depth values occurring in the 3D image data, and setting auxiliary depth values for the auxiliary graphical data ( 31 ) adaptively in dependence of the detected depth values. The 3D image data and the auxiliary graphical data at the auxiliary depth value are combined based on the depth values of the 3D image data. First an area of attention ( 32 ) in the 3D image data is detected. A depth pattern for the area of attention is determined, and the auxiliary depth values are set in dependence of the depth pattern. | 1. A method of combining three dimensional image data and auxiliary graphical data, the method comprising:
obtaining the three dimensional image data from an information carrier; combining the three dimensional image data and auxiliary graphical data on a display plane; obtaining shifting information from the information carrier; and shifting the three dimensional image data based on the shifting information,
wherein the shifting creates a black bar spatial area which is not occupied by the shifted three dimensional image data,
wherein the black bar spatial area is created in an area of the display plane where no image data is displayed,
wherein the shifting information comprises information signaling the presence of an allowed shifting for enabling the shifting of the three dimensional image data, wherein the shifting information comprises an offset in at least one of a horizontal direction of the display plane and a vertical direction of the display plane. 2. The method of claim 1, wherein the auxiliary graphical data is at least one of, two dimensional subtitle information, two dimensional subpicture information, three dimensional subtitle information or three dimensional subpicture information. 3. The method of claim 1, wherein the auxiliary graphical information is disposed entirely within the black bar spatial area. 4. The method of claim 1 wherein the auxiliary graphical data disposed within the black bar spatial area is padded with black background information. 5. The method as claimed in claim 1,
wherein the shifting information comprises information for arranging the location of subtitles and the black bar spatial area, wherein the subtitles and the black bar spatial area are arranged to be top-aligned, wherein the auxiliary graphical data is disposed within the black bar spatial area, wherein the black bar is disposed at the top of the display plane. 6. The method as claimed in claim 1,
wherein the shifting information comprises information for arranging the location of subtitles and the black bar spatial area, wherein the subtitles and the black bar spatial area are arranged to be bottom-aligned, wherein the auxiliary graphical data is disposed within the black bar spatial area, wherein the black bar is disposed at the bottom of the display plane. 7. An information carrier comprising:
three dimensional image data; auxiliary graphical data, wherein the three dimensional image data and the auxiliary graphical data are combined on a display plane; and shifting information, wherein the shifting information comprises information signaling the presence of an allowed shifting for enabling the shifting of the three dimensional image data; wherein the shifting information is arranged to enable creating a black bar spatial area which is not occupied by the shifted three dimensional image data, wherein the reduced or shifted three dimensional image data occupies an area of the display plane that does not contain the black bar spatial area; wherein the auxiliary graphical data is placed within the black bar spatial area, wherein the shifting information comprises an offset in at least one of a horizontal direction of the display plane and a vertical direction of the display plane. 8. The information carrier of claim 7,
wherein the shifting information comprises information for arranging the location of subtitles and the black bar spatial area, wherein the subtitles and the black bar spatial area are arranged to be top-aligned, wherein the auxiliary graphical data is disposed within the black bar spatial area, wherein the black bar is disposed at the top of the display plane. 9. The information carrier of claim 7,
wherein the shifting information comprises information for arranging the location of subtitles and the black bar spatial area, wherein the subtitles and the black bar spatial area are arranged to be bottom-aligned, wherein the auxiliary graphical data is disposed within the black bar spatial area, wherein the black bar is disposed at the bottom of the display plane. 10. A source device for combining three dimensional image data and auxiliary graphical data, the source device comprising:
a processor circuit arranged to obtain the three dimensional image data and shifting information from an information carrier; wherein the processor circuit combines the three dimensional image data and auxiliary graphical data on a display plane, wherein the processor circuit is arranged to shift the three dimensional image data to create a black bar spatial area which is not occupied by the shifted three dimensional image data, wherein the shifting information comprises information signaling the presence of an allowed shifting of the three dimensional image data, wherein the shifting of the three dimensional image data is based on the shifting information, wherein the image data is arranged to fit in the area not occupied by the black bar spatial area, wherein an overlaying of the three dimensional image data and auxiliary graphical data is arranged such that the auxiliary graphical data is placed within the black bar spatial area, wherein the shifting information comprises an offset in at least one of a horizontal direction of the display plane and a vertical direction of the display plane. 11. A source device as claimed in claim 10,
wherein the device comprises an optical disc unit for retrieving various types of image information from the information carrier, wherein the optical disc unit comprises a circuit for obtaining the shifting information from the information carrier as claimed. 12. A three dimensional display device for combining three dimensional image data and auxiliary graphical data, the three dimensional display device comprising:
a display plane; and a processor circuit arranged to obtain the three dimensional image data from an information carrier using a source device; wherein the processor circuit combines the three dimensional image data and auxiliary graphical data on the display plane, wherein the processor circuit is arranged to obtain shifting information from the information carrier using a source device, wherein the processor circuit is arranged to shift the three dimensional image data to create a black bar spatial area which is not occupied by the shifted three dimensional image data, wherein the shifting information comprises information signaling the presence of an allowed shifting of the three dimensional image data, wherein the shifting of the three dimensional image data is based on the shifting information, wherein the image data is arranged to fit in the area not occupied by the black bar spatial area, wherein an overlaying of the three dimensional image data and auxiliary graphical data is arranged such that the auxiliary graphical data is placed within the black bar spatial area, wherein the shifting information comprises an offset in at least one of a horizontal direction of the display plane and a vertical direction of the display plane. 13. A method of combining three dimensional image data and auxiliary graphical data, the method comprising:
obtaining the three dimensional image data from an information carrier; combining the three dimensional image data and auxiliary graphical data on a display plane; obtaining scaling information from the information carrier; and scaling the three dimensional image data based on the scaling information,
wherein the scaling creates a black bar spatial area which is not occupied by the scaled three dimensional image data,
wherein the black bar spatial area is created in an area of the display plane where no image data is displayed,
wherein the scaling information comprises information signaling the presence of an allowed scaling for enabling the scaling of the three dimensional image data, wherein the scaling information comprises at least one of:
a scale factor,
a scale factor applying to scaling in both an x and a y direction of the display plane. 14. An information carrier comprising:
three dimensional image data; auxiliary graphical data, wherein the three dimensional image data and the auxiliary graphical data are combined on a display plane; and scaling information, wherein the scaling information comprises information signaling the presence of an allowed scaling for enabling the scaling of the three dimensional image data; wherein the scaling information is arranged to enable creating a black bar spatial area which is not occupied by the scaled three dimensional image data, wherein the reduced three dimensional image data occupies an area of the display plane that does not contain the black bar spatial area; wherein the auxiliary graphical data is placed within the black bar spatial area, wherein the scaling information comprises at least one of:
a scale factor,
a scale factor applying to scaling in both an x and a y direction of the display plane. 15. A source device for combining three dimensional image data and auxiliary graphical data, the source device comprising:
a processor circuit arranged to obtain the three dimensional image data and scaling information from an information carrier; wherein the processor circuit combines the three dimensional image data and auxiliary graphical data on a display plane, wherein the processor circuit is arranged to scale the three dimensional image data to create a black bar spatial area which is not occupied by the scaled three dimensional image data, wherein the scaling information comprises information signaling the presence of an allowed scaling of the three dimensional image data, wherein the scaling of the three dimensional image data is based on the scaling information, wherein the image data is arranged to fit in the area not occupied by the black bar spatial area, wherein an overlaying of the three dimensional image data and auxiliary graphical data is arranged such that the auxiliary graphical data is placed within the black bar spatial area, wherein the scaling information comprises at least one of:
a scale factor,
a scale factor applying to scaling in both an x and a y direction of the display plane. 16. A three dimensional display device for combining three dimensional image data and auxiliary graphical data, the three dimensional display device comprising:
a display plane; and a processor circuit arranged to obtain the three dimensional image data from an information carrier using a source device; wherein the processor circuit combines the three dimensional image data and auxiliary graphical data on the display plane, wherein the processor circuit is arranged to obtain scaling information from the information carrier using a source device, wherein the processor circuit is arranged to scale the three dimensional image data to create a black bar spatial area which is not occupied by the scaled three dimensional image data, wherein the scaling information comprises information signaling the presence of an allowed scaling of the three dimensional image data, wherein the scaling of the three dimensional image data is based on the scaling information, wherein the image data is arranged to fit in the area not occupied by the black bar spatial area, wherein an overlaying of the three dimensional image data and auxiliary graphical data is arranged such that the auxiliary graphical data is placed within the black bar spatial area, wherein the scaling information comprises at least one of:
a scale factor,
a scale factor applying to scaling in both an x and a y direction of the display plane.
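The shift-and-overlay behavior recited in the shifting claims can be illustrated with a short sketch. This is an illustrative approximation only, not the claimed implementation: the array layout, the function name, and the bottom-aligned placement are assumptions, and a real player would apply the same shift to each stereo view using the offset signaled by the shifting information.

```python
import numpy as np

def shift_and_overlay(view, offset, subtitle):
    """Shift one view of the 3D image up by `offset` rows, creating a black
    bar spatial area at the bottom of the display plane, then bottom-align
    the auxiliary graphical data (a subtitle bitmap) inside that bar.
    Hypothetical sketch, not the claimed implementation."""
    h = view.shape[0]
    out = np.zeros_like(view)           # display plane, initially black
    out[:h - offset] = view[offset:]    # shifted image data above the bar
    bar = out[h - offset:]              # black bar spatial area (offset rows)
    sh, sw = subtitle.shape[:2]
    bar[offset - sh:, :sw] = subtitle   # bottom-aligned within the bar
    return out
```

A top-aligned variant (as in claims 5 and 8) would shift the image down instead and place both the bar and the subtitle at the top of the display plane.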
obtaining the three dimensional image data from an information carrier; combining the three dimensional image data and auxiliary graphical data on a display plane; obtaining shifting information from the information carrier; and shifting the three dimensional image data based on the shifting information,
wherein the shifting creates a black bar spatial area which is not occupied by the shifted three dimensional image data,
wherein the black bar spatial area is created in an area of the display plane where no image data is displayed,
wherein the shifting information comprises information signaling the presence of an allowed shifting for enabling the shifting of the three dimensional image data, wherein the shifting information comprises an offset in at least one of a horizontal direction of the display plane and a vertical direction of the display plane. 2. The method of claim 1, wherein the auxiliary graphical data is at least one of: two dimensional subtitle information, two dimensional subpicture information, three dimensional subtitle information, or three dimensional subpicture information. 3. The method of claim 1, wherein the auxiliary graphical information is disposed entirely within the black bar spatial area. 4. The method of claim 1, wherein the auxiliary graphical data disposed within the black bar spatial area is padded with black background information. 5. The method as claimed in claim 1,
wherein the shifting information comprises information for arranging the location of subtitles and the black bar spatial area, wherein the subtitles and the black bar spatial area are arranged to be top-aligned, wherein the auxiliary graphical data is disposed within the black bar spatial area, wherein the black bar is disposed at the top of the display plane. 6. The method as claimed in claim 1,
wherein the shifting information comprises information for arranging the location of subtitles and the black bar spatial area, wherein the subtitles and the black bar spatial area are arranged to be bottom-aligned, wherein the auxiliary graphical data is disposed within the black bar spatial area, wherein the black bar is disposed at the bottom of the display plane. 7. An information carrier comprising:
three dimensional image data; auxiliary graphical data, wherein the three dimensional image data and the auxiliary graphical data are combined on a display plane; and shifting information, wherein the shifting information comprises information signaling the presence of an allowed shifting for enabling the shifting of the three dimensional image data; wherein the shifting information is arranged to enable creating a black bar spatial area which is not occupied by the shifted three dimensional image data, wherein the reduced or shifted three dimensional image data occupies an area of the display plane that does not contain the black bar spatial area; wherein the auxiliary graphical data is placed within the black bar spatial area, wherein the shifting information comprises an offset in at least one of a horizontal direction of the display plane and a vertical direction of the display plane. 8. The information carrier of claim 7,
wherein the shifting information comprises information for arranging the location of subtitles and the black bar spatial area, wherein the subtitles and the black bar spatial area are arranged to be top-aligned, wherein the auxiliary graphical data is disposed within the black bar spatial area, wherein the black bar is disposed at the top of the display plane. 9. The information carrier of claim 7,
wherein the shifting information comprises information for arranging the location of subtitles and the black bar spatial area, wherein the subtitles and the black bar spatial area are arranged to be bottom-aligned, wherein the auxiliary graphical data is disposed within the black bar spatial area, wherein the black bar is disposed at the bottom of the display plane. 10. A source device for combining three dimensional image data and auxiliary graphical data, the source device comprising:
a processor circuit arranged to obtain the three dimensional image data and shifting information from an information carrier; wherein the processor circuit combines the three dimensional image data and auxiliary graphical data on a display plane, wherein the processor circuit is arranged to shift the three dimensional image data to create a black bar spatial area which is not occupied by the shifted three dimensional image data, wherein the shifting information comprises information signaling the presence of an allowed shifting of the three dimensional image data, wherein the shifting of the three dimensional image data is based on the shifting information, wherein the image data is arranged to fit in the area not occupied by the black bar spatial area, wherein an overlaying of the three dimensional image data and auxiliary graphical data is arranged such that the auxiliary graphical data is placed within the black bar spatial area, wherein the shifting information comprises an offset in at least one of a horizontal direction of the display plane and a vertical direction of the display plane. 11. A source device as claimed in claim 10,
wherein the device comprises an optical disc unit for retrieving various types of image information from the information carrier, wherein the optical disc unit comprises a circuit for obtaining the shifting information from the information carrier. 12. A three dimensional display device for combining three dimensional image data and auxiliary graphical data, the three dimensional display device comprising:
a display plane; and a processor circuit arranged to obtain the three dimensional image data from an information carrier using a source device; wherein the processor circuit combines the three dimensional image data and auxiliary graphical data on the display plane, wherein the processor circuit is arranged to obtain shifting information from the information carrier using a source device, wherein the processor circuit is arranged to shift the three dimensional image data to create a black bar spatial area which is not occupied by the shifted three dimensional image data, wherein the shifting information comprises information signaling the presence of an allowed shifting of the three dimensional image data, wherein the shifting of the three dimensional image data is based on the shifting information, wherein the image data is arranged to fit in the area not occupied by the black bar spatial area, wherein an overlaying of the three dimensional image data and auxiliary graphical data is arranged such that the auxiliary graphical data is placed within the black bar spatial area, wherein the shifting information comprises an offset in at least one of a horizontal direction of the display plane and a vertical direction of the display plane. 13. A method of combining three dimensional image data and auxiliary graphical data, the method comprising:
obtaining the three dimensional image data from an information carrier; combining the three dimensional image data and auxiliary graphical data on a display plane; obtaining scaling information from the information carrier; and scaling the three dimensional image data based on the scaling information,
wherein the scaling creates a black bar spatial area which is not occupied by the scaled three dimensional image data,
wherein the black bar spatial area is created in an area of the display plane where no image data is displayed,
wherein the scaling information comprises information signaling the presence of an allowed scaling for enabling the scaling of the three dimensional image data, wherein the scaling information comprises at least one of:
a scale factor,
a scale factor applying to scaling in both an x and a y direction of the display plane. 14. An information carrier comprising:
three dimensional image data; auxiliary graphical data, wherein the three dimensional image data and the auxiliary graphical data are combined on a display plane; and scaling information, wherein the scaling information comprises information signaling the presence of an allowed scaling for enabling the scaling of the three dimensional image data; wherein the scaling information is arranged to enable creating a black bar spatial area which is not occupied by the scaled three dimensional image data, wherein the reduced three dimensional image data occupies an area of the display plane that does not contain the black bar spatial area; wherein the auxiliary graphical data is placed within the black bar spatial area, wherein the scaling information comprises at least one of:
a scale factor,
a scale factor applying to scaling in both an x and a y direction of the display plane. 15. A source device for combining three dimensional image data and auxiliary graphical data, the source device comprising:
a processor circuit arranged to obtain the three dimensional image data and scaling information from an information carrier; wherein the processor circuit combines the three dimensional image data and auxiliary graphical data on a display plane, wherein the processor circuit is arranged to scale the three dimensional image data to create a black bar spatial area which is not occupied by the scaled three dimensional image data, wherein the scaling information comprises information signaling the presence of an allowed scaling of the three dimensional image data, wherein the scaling of the three dimensional image data is based on the scaling information, wherein the image data is arranged to fit in the area not occupied by the black bar spatial area, wherein an overlaying of the three dimensional image data and auxiliary graphical data is arranged such that the auxiliary graphical data is placed within the black bar spatial area, wherein the scaling information comprises at least one of:
a scale factor,
a scale factor applying to scaling in both an x and a y direction of the display plane. 16. A three dimensional display device for combining three dimensional image data and auxiliary graphical data, the three dimensional display device comprising:
a display plane; and a processor circuit arranged to obtain the three dimensional image data from an information carrier using a source device; wherein the processor circuit combines the three dimensional image data and auxiliary graphical data on the display plane, wherein the processor circuit is arranged to obtain scaling information from the information carrier using a source device, wherein the processor circuit is arranged to scale the three dimensional image data to create a black bar spatial area which is not occupied by the scaled three dimensional image data, wherein the scaling information comprises information signaling the presence of an allowed scaling of the three dimensional image data, wherein the scaling of the three dimensional image data is based on the scaling information, wherein the image data is arranged to fit in the area not occupied by the black bar spatial area, wherein an overlaying of the three dimensional image data and auxiliary graphical data is arranged such that the auxiliary graphical data is placed within the black bar spatial area, wherein the scaling information comprises at least one of:
a scale factor,
a scale factor applying to scaling in both an x and a y direction of the display plane.
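The scaling alternative in claims 13 through 16 can be sketched the same way: the image is reduced by a signaled scale factor, here applied in both the x and y directions of the display plane, and the freed area becomes the black bar spatial area. The function name and the nearest-neighbour resampling are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def scale_for_black_bar(view, factor):
    """Scale the image by `factor` in both x and y (nearest neighbour),
    leaving the rest of the display plane as a black bar spatial area
    for auxiliary graphical data. Illustrative sketch only."""
    h, w = view.shape[:2]
    nh, nw = int(h * factor), int(w * factor)
    ys = (np.arange(nh) / factor).astype(int)  # source row for each output row
    xs = (np.arange(nw) / factor).astype(int)  # source column for each output column
    out = np.zeros_like(view)                  # black display plane
    out[:nh, :nw] = view[ys][:, xs]            # scaled image data
    return out
```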
8,650 | 8,650 | 13,630,105 | 2,481 | System and method to provide contextual services, the method including: capturing a characteristic of a first person inside or within a predetermined distance of a monitored space; accessing a database of identifying characteristics of potential visitors to the monitored space; identifying the first person by use of the database, to produce an identified visitor; and providing to a second person an information related to the identified visitor. The system may include: a database of identifying characteristics of potential visitors to a monitored space; a surveillance device configured to capture a characteristic of a first person inside or within a predetermined distance of the monitored space; a processor configured to identify the first person by use of the database, to produce an identified visitor; and a communication interface configured to provide to a second person an information related to the identified visitor. | 1. A method to provide contextual services, comprising:
capturing a characteristic of a first person inside or within a predetermined distance of a monitored space; accessing a database of identifying characteristics of potential visitors to the monitored space; identifying the first person by use of the database, to produce an identified visitor; and providing to a second person an information related to the identified visitor. 2. The method of claim 1, wherein identifying the first person comprises determining a personal identification of the first person. 3. The method of claim 1, wherein identifying the first person comprises determining an organizational affiliation of the first person. 4. The method of claim 1, wherein the monitored space comprises a passageway and an adjacent area not used as a passageway. 5. The method of claim 1, wherein the information related to the identified visitor comprises a role of the identified visitor. 6. The method of claim 1, wherein the information related to the identified visitor comprises a value of the identified visitor. 7. The method of claim 1, wherein the information related to the identified visitor comprises an expected interest of the identified visitor. 8. The method of claim 1, wherein the information related to the identified visitor comprises a warning to be less candid. 9. The method of claim 1, wherein capturing the characteristic of the first person comprises capturing a voiceprint. 10. The method of claim 1, wherein capturing the characteristic of the first person comprises capturing an image. 11. The method of claim 1, wherein capturing the characteristic of the first person comprises capturing an RFID identifier. 12. The method of claim 1, wherein capturing the characteristic of the first person is controlled by a configurable policy. 13. The method of claim 1, wherein the first person is within a predetermined distance and angle from the second person. 14. A system to provide contextual services, comprising:
a database of identifying characteristics of potential visitors to a monitored space; a surveillance device configured to capture a characteristic of a first person inside or within a predetermined distance of the monitored space; a processor configured to identify the first person by use of the database, to produce an identified visitor; and a communication interface configured to provide to a second person an information related to the identified visitor. 15. The system of claim 14, wherein identifying characteristics comprises an organizational affiliation of the first person. 16. The system of claim 14, wherein the information related to the identified visitor comprises one of a role, a value and an expected interest of the identified visitor. 17. The system of claim 14, wherein the information related to the identified visitor comprises a warning to be less candid. 18. The system of claim 14, wherein operation of the surveillance device is controlled by a configurable policy. 19. The system of claim 14, wherein the first person is within a predetermined distance and angle from the second person. 20. The system of claim 14, wherein the surveillance device comprises an RFID detector. | System and method to provide contextual services, the method including: capturing a characteristic of a first person inside or within a predetermined distance of a monitored space; accessing a database of identifying characteristics of potential visitors to the monitored space; identifying the first person by use of the database, to produce an identified visitor; and providing to a second person an information related to the identified visitor. 
The system may include: a database of identifying characteristics of potential visitors to a monitored space; a surveillance device configured to capture a characteristic of a first person inside or within a predetermined distance of the monitored space; a processor configured to identify the first person by use of the database, to produce an identified visitor; and a communication interface configured to provide to a second person an information related to the identified visitor.1. A method to provide contextual services, comprising:
capturing a characteristic of a first person inside or within a predetermined distance of a monitored space; accessing a database of identifying characteristics of potential visitors to the monitored space; identifying the first person by use of the database, to produce an identified visitor; and providing to a second person an information related to the identified visitor. 2. The method of claim 1, wherein identifying the first person comprises determining a personal identification of the first person. 3. The method of claim 1, wherein identifying the first person comprises determining an organizational affiliation of the first person. 4. The method of claim 1, wherein the monitored space comprises a passageway and an adjacent area not used as a passageway. 5. The method of claim 1, wherein the information related to the identified visitor comprises a role of the identified visitor. 6. The method of claim 1, wherein the information related to the identified visitor comprises a value of the identified visitor. 7. The method of claim 1, wherein the information related to the identified visitor comprises an expected interest of the identified visitor. 8. The method of claim 1, wherein the information related to the identified visitor comprises a warning to be less candid. 9. The method of claim 1, wherein capturing the characteristic of the first person comprises capturing a voiceprint. 10. The method of claim 1, wherein capturing the characteristic of the first person comprises capturing an image. 11. The method of claim 1, wherein capturing the characteristic of the first person comprises capturing an RFID identifier. 12. The method of claim 1, wherein capturing the characteristic of the first person is controlled by a configurable policy. 13. The method of claim 1, wherein the first person is within a predetermined distance and angle from the second person. 14. A system to provide contextual services, comprising:
a database of identifying characteristics of potential visitors to a monitored space; a surveillance device configured to capture a characteristic of a first person inside or within a predetermined distance of the monitored space; a processor configured to identify the first person by use of the database, to produce an identified visitor; and a communication interface configured to provide to a second person an information related to the identified visitor. 15. The system of claim 14, wherein identifying characteristics comprises an organizational affiliation of the first person. 16. The system of claim 14, wherein the information related to the identified visitor comprises one of a role, a value and an expected interest of the identified visitor. 17. The system of claim 14, wherein the information related to the identified visitor comprises a warning to be less candid. 18. The system of claim 14, wherein operation of the surveillance device is controlled by a configurable policy. 19. The system of claim 14, wherein the first person is within a predetermined distance and angle from the second person. 20. The system of claim 14, wherein the surveillance device comprises an RFID detector. | 2,400 |
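The capture-identify-notify flow claimed above can be approximated in a few lines. The database contents, helper names, home-organization constant, and the affiliation-based candor warning below are all hypothetical illustrations, not the claimed system.

```python
# Hypothetical database of identifying characteristics of potential visitors.
VISITOR_DB = {
    "rfid:0451": {"name": "A. Vendor", "affiliation": "Acme Corp", "role": "sales"},
}

HOME_ORG = "OurCo"  # assumed organizational affiliation of the second person

def identify_visitor(characteristic):
    """Match a captured characteristic (RFID identifier, voiceprint key, ...)
    against the database, producing an identified visitor or None."""
    return VISITOR_DB.get(characteristic)

def information_for_occupant(visitor):
    """Build the information provided to the second person, including the
    role of the identified visitor and, for outsiders, a candor warning."""
    if visitor is None:
        return "unidentified person in monitored space"
    info = f"{visitor['name']} ({visitor['affiliation']}), role: {visitor['role']}"
    if visitor["affiliation"] != HOME_ORG:
        info += "; warning: be less candid"
    return info
```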
8,651 | 8,651 | 15,867,282 | 2,465 | A method and system for enforcing network topology. The method includes receiving, at a first port on a first switch, a second role associated with a second switch, where the second switch is connected to the first switch using the first port, and where the first switch is associated with a first role. The method further includes making a first determination, using the first role, the second role, and a network topology policy, that the first switch should not be connected to the second switch, and sending, in response to the first determination, a first alert to an alert recipient, where the first alert specifies that the first switch is improperly connected to the second switch. | 1.-20. (canceled) 21. A non-transitory computer readable medium comprising instructions, which, when executed by a processor, perform a method, the method comprising:
receiving, at a first port on a first switch, a second role associated with a second switch, wherein the second switch is connected to the first switch using the first port, wherein the first switch is associated with a first role; making a first determination, using the first role, the second role, and a network topology policy, that a number of actual connections between the first switch and the second switch exceeds a number of proposed connections between the first switch and the second switch, wherein the network topology policy specifies the number of proposed connections between the first switch and the second switch; and sending, in response to the first determination, an alert to an alert recipient, wherein the alert specifies that the first switch is improperly connected to the second switch. 22. The non-transitory computer readable medium of claim 21, wherein the method further comprises:
disabling, based on the first determination, the first port. 23. The non-transitory computer readable medium of claim 21, wherein the method further comprises:
receiving, at a second port on the first switch, a third role associated with a third switch, wherein the third switch is connected to the first switch using the second port; making a second determination, using the first role, the third role, and the network topology policy, that the first switch should be connected to the third switch. 24. The non-transitory computer readable medium of claim 21, wherein the second role is received by the first switch using a discovery protocol. 25. The non-transitory computer readable medium of claim 24, wherein the discovery protocol is one selected from a group consisting of link layer discovery protocol (LLDP) and Cisco discovery protocol (CDP). 26. The non-transitory computer readable medium of claim 24, wherein the second role is specified in an optional type-length-value (TLV) element in a Link Layer Discovery Protocol (LLDP) data unit (LLDPDU). 27. The non-transitory computer readable medium of claim 21, wherein the network topology policy specifies that switches associated with the first role cannot be connected to switches associated with the second role. 28. A switch, comprising:
a plurality of ports, wherein the switch is connected to a second switch using a first port of the plurality of ports and a second port of the plurality of ports; a processor; and memory comprising instructions, which, when executed by the processor, enable the switch to:
receive, at the first port, a second role associated with the second switch, wherein the switch is associated with a first role;
make a first determination, using the first role, the second role, and a network topology policy, that a number of actual connections between the switch and the second switch exceeds a number of proposed connections between the switch and the second switch, wherein the network topology policy specifies the number of proposed connections between the switch and the second switch; and
send, in response to the first determination, an alert to an alert recipient, wherein the alert specifies that the switch is improperly connected to the second switch. 29. The switch of claim 28, wherein the switch is a multi-layer switch. 30. The switch of claim 28, wherein the instructions in the memory, when executed by the processor, enable the switch to disable, based on the first determination, the first port. 31. The switch of claim 28, wherein the second role is received by the switch using a discovery protocol. 32. The switch of claim 31, wherein the discovery protocol is one selected from a group consisting of link layer discovery protocol (LLDP) and Cisco discovery protocol (CDP). 33. The switch of claim 32, wherein the second role is specified in an optional type-length-value (TLV) element in an LLDP data unit (LLDPDU). 34. The switch of claim 33, wherein the switch is associated with a network ID, wherein the network ID is associated with a network, and wherein the network ID is specified in a second optional TLV element in the LLDPDU, and wherein making the first determination further comprises using the network ID. 35. A non-transitory computer readable medium comprising instructions, which, when executed by a processor, perform a method, the method comprising:
receiving, at a first port on a first switch, a second role associated with a second switch and a second network ID associated with a second network, wherein the second switch is associated with the second network and is directly, physically connected to the first switch using the first port, wherein the first switch is associated with a first role and a first network ID, and wherein the first network ID is associated with a first network; making a determination, using the first role, the first network ID, the second role, the second network ID and a network topology policy, that the first switch should not be directly, physically connected to the second switch, wherein the network topology policy specifies that the first switch should only be directly, physically connected to network devices within the first network; and sending, in response to the determination, an alert to an alert recipient, wherein the alert specifies that the first switch is improperly connected to the second switch. 36. The non-transitory computer readable medium of claim 35, wherein the method further comprises:
disabling, based on the determination, the first port. 37. The non-transitory computer readable medium of claim 35, wherein the second role is received by the first switch using a discovery protocol. 38. A switch, comprising:
a plurality of ports; a processor; and memory comprising instructions, which when executed by the processor, enable the switch to:
receive, at a first port of the plurality of ports, a second role associated with a second switch, and a second network ID associated with a second network, wherein the second switch is associated with the second network and is directly, physically connected to the switch using the first port, wherein the switch is associated with a first role and a first network ID, and wherein the first network ID is associated with a first network;
make a determination, using the first role, the first network ID, the second role, the second network ID and a network topology policy, that the switch should not be directly, physically connected to the second switch, wherein the network topology policy specifies that the switch should only be directly, physically connected to network devices within the first network; and
send, in response to the first determination, an alert to an alert recipient, wherein the alert specifies that the switch is improperly connected to the second switch. 39. The switch of claim 38, wherein the instructions in the memory, when executed by the switch, enable the switch to disable, based on the first determination, the first port. 40. The switch of claim 38, wherein the second role is received by the switch using a discovery protocol. | A method and system for enforcing network topology. The method includes receiving, at a first port on a first switch, a second role associated with a second switch, where the second switch is connected to the first switch using the first port, and where the first switch is associated with a first role. The method further includes making a first determination, using the first role, the second role, and a network topology policy, that the first switch should not be connected to the second switch. The method also includes sending, in response to the first determination, a first alert to an alert recipient, where the first alert specifies that the first switch is improperly connected to the second switch. 1.-20. (canceled) 21. A non-transitory computer readable medium comprising instructions, which, when executed by a processor, perform a method, the method comprising:
receiving, at a first port on a first switch, a second role associated with a second switch, wherein the second switch is connected to the first switch using the first port, wherein the first switch is associated with a first role; making a first determination, using the first role, the second role, and a network topology policy, that a number of actual connections between the first switch and the second switch exceeds a number of proposed connections between the first switch and the second switch, wherein the network topology policy specifies the number of proposed connections between the first switch and the second switch; and sending, in response to the first determination, an alert to an alert recipient, wherein the alert specifies that the first switch is improperly connected to the second switch. 22. The non-transitory computer readable medium of claim 21, wherein the method further comprises:
disabling, based on the first determination, the first port. 23. The non-transitory computer readable medium of claim 21, wherein the method further comprises:
receiving, at a second port on the first switch, a third role associated with a third switch, wherein the third switch is connected to the first switch using the second port; making a second determination, using the first role, the third role, and the network topology policy, that the first switch should be connected to the third switch. 24. The non-transitory computer readable medium of claim 21, wherein the second role is received by the first switch using a discovery protocol. 25. The non-transitory computer readable medium of claim 24, wherein the discovery protocol is one selected from a group consisting of link layer discovery protocol (LLDP) and Cisco discovery protocol (CDP). 26. The non-transitory computer readable medium of claim 24, wherein the second role is specified in an optional type-length-value (TLV) element in a Link Layer Discovery Protocol (LLDP) data unit (LLDPDU). 27. The non-transitory computer readable medium of claim 21, wherein the network topology policy specifies that switches associated with the first role cannot be connected to switches associated with the second role. 28. A switch, comprising:
a plurality of ports, wherein the switch is connected to a second switch using a first port of the plurality of ports and a second port of the plurality of ports; a processor; and memory comprising instructions, which when executed by the processor, enable the switch to:
receive, at the first port, a second role associated with the second switch, wherein the switch is associated with a first role;
make a first determination, using the first role, the second role, and a network topology policy, that a number of actual connections between the switch and the second switch exceeds a number of proposed connections between the switch and the second switch, wherein the network topology policy specifies the number of proposed connections between the switch and the second switch; and
send, in response to the first determination, an alert to an alert recipient, wherein the alert specifies that the switch is improperly connected to the second switch.
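The link-validation logic recited in the claims above reduces to a small check: a switch learns a neighbor's role and network ID through a discovery protocol such as LLDP, tests the link against a network topology policy, and raises an alert (optionally disabling the port) on a violation. The sketch below is a hypothetical illustration, not the patented implementation; the names `SwitchInfo`, `Policy`, and `check_link` and the leaf/spine roles are invented for the example.

```python
# Illustrative sketch of the topology check described in the claims above:
# a switch learns a neighbor's role and network ID via a discovery protocol
# (e.g. an LLDP TLV), applies a network topology policy, and produces alerts
# when the link violates the policy. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class SwitchInfo:
    role: str        # e.g. "leaf", "spine"
    network_id: str  # identifies the network the switch belongs to

@dataclass
class Policy:
    # roles that a switch with a given role may be directly connected to
    allowed_peer_roles: dict
    # maximum number of proposed connections between a pair of switches
    max_links_between_pair: int
    same_network_only: bool = True

def check_link(local: SwitchInfo, peer: SwitchInfo,
               actual_links: int, policy: Policy) -> list:
    """Return a list of alert strings; an empty list means the link is compliant."""
    alerts = []
    # "should only be directly, physically connected to network devices
    # within the first network"
    if policy.same_network_only and local.network_id != peer.network_id:
        alerts.append("improper connection: peer is in another network")
    # role-based restriction (e.g. leaf switches may only face spine switches)
    if peer.role not in policy.allowed_peer_roles.get(local.role, set()):
        alerts.append(f"improper connection: {local.role} may not connect to {peer.role}")
    # "number of actual connections ... exceeds a number of proposed connections"
    if actual_links > policy.max_links_between_pair:
        alerts.append("improper connection: actual links exceed proposed links")
    return alerts

policy = Policy(allowed_peer_roles={"leaf": {"spine"}, "spine": {"leaf"}},
                max_links_between_pair=2)
local = SwitchInfo(role="leaf", network_id="net-1")
print(check_link(local, SwitchInfo("spine", "net-1"), 2, policy))  # [] -> compliant
print(check_link(local, SwitchInfo("leaf", "net-2"), 3, policy))   # three violations
```

Returning a list of alerts rather than a boolean mirrors the claim structure: the caller can both send each alert to an alert recipient and decide separately whether to disable the offending port.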
8,652 | 8,652 | 14,681,891 | 2,485 | In embodiments of iris acquisition using visible light imaging, a mobile device includes a front-facing camera device with a light that can be used to project visible light to illuminate a face of a user of the mobile device. An eye location module can determine that the user is wearing glasses utilizing ambient light or the projected visible light. The eye location module can determine a center point of a lens of the glasses and initiate an LED system to project near infra-red light to illuminate at least one eye of the user. The near infra-red light is projected to encompass the determined center point of the lens effective to illuminate a pupil of the eye. The eye location module can locate the pupil of the eye based on a reflection of the near infra-red light from the pupil at approximately the determined center point of the lens. | 1. A method for iris acquisition using visible light imaging, the method comprising:
determining that a user of a mobile device is wearing glasses utilizing ambient light or projected visible light; determining a center point of a lens of the glasses; projecting near infra-red light to illuminate at least one eye of the user, the near infra-red light projected to encompass the determined center point of the lens effective to illuminate a pupil of the at least one eye; and locating the pupil of the at least one eye based on a reflection of the near infra-red light from the pupil at approximately the determined center point of the lens. 2. The method as recited in claim 1, further comprising:
performing said determining the center point of the lens of the glasses approximately simultaneously with said locating the pupil of the at least one eye based on the reflection of the near infra-red light from the pupil. 3. The method as recited in claim 1, further comprising:
determining whether ambient light conditions are adequate for said determining that the user of the mobile device is wearing the glasses; and projecting the visible light to illuminate a face of the user based on a determination that the ambient light conditions are not adequate, the visible light projected with a light of a front-facing camera device that is integrated with the mobile device. 4. The method as recited in claim 1, further comprising:
locating the pupil of the at least one eye using the determined center point of the lens of the glasses as a starting point of a location search for the pupil of the at least one eye. 5. The method as recited in claim 1, further comprising:
bisecting the lens of the glasses horizontally and vertically for said determining the center point of the lens. 6. The method as recited in claim 1, further comprising:
determining the reflection of the near infra-red light from the pupil as the closest reflection point to the determined center point of the lens of the glasses. 7. The method as recited in claim 1, further comprising:
activating an IR imager to capture an image of the at least one eye of the user for iris authentication based on said locating the pupil of the at least one eye. 8. The method as recited in claim 1, further comprising:
displaying an alignment indication of the mobile device to indicate a direction to turn the mobile device for an alignment of the face of the user with respect to the mobile device for said locating the pupil of the at least one eye. 9. A mobile device, comprising:
an LED system configured to project near infra-red light to illuminate a face of a user of the mobile device; a memory and processing system to implement an eye location module that is configured to: determine that the user of the mobile device is wearing glasses; determine a center point of a lens of the glasses; initiate the LED system to project the near infra-red light to illuminate at least one eye of the user, the near infra-red light projected to encompass the determined center point of the lens effective to illuminate a pupil of the at least one eye; and locate the pupil of the at least one eye based on a reflection of the near infra-red light from the pupil at approximately the determined center point of the lens. 10. The mobile device as recited in claim 9, wherein the eye location module is configured to determine the center point of the lens of the glasses approximately simultaneously when the pupil of the at least one eye is located based on the reflection of the near infra-red light from the pupil. 11. The mobile device as recited in claim 9, wherein the eye location module is configured to:
determine from a light sensor input whether ambient light conditions are adequate for a determination of whether the user of the mobile device is wearing the glasses; and initiate projection of visible light to illuminate a face of the user based on a determination that the ambient light conditions are not adequate, the visible light projected with a light of a front-facing camera device that is integrated with the mobile device. 12. The mobile device as recited in claim 9, wherein the eye location module is configured to locate the pupil of the at least one eye using the determined center point of the lens of the glasses as a starting point of a location search for the pupil of the at least one eye. 13. The mobile device as recited in claim 9, wherein the eye location module is configured to bisect the lens of the glasses horizontally and vertically to determine the center point of the lens. 14. The mobile device as recited in claim 9, wherein the eye location module is configured to determine the reflection of the near infra-red light from the pupil as the closest reflection point to the determined center point of the lens of the glasses. 15. The mobile device as recited in claim 9, wherein the eye location module is configured to activate an IR imager to capture an image of the at least one eye of the user for iris authentication based on the pupil of the at least one eye being located. 16. The mobile device as recited in claim 9, further comprising a display device configured to display an alignment indication of a direction to turn the mobile device for alignment of the face of the user with respect to the mobile device to locate the pupil of the at least one eye. 17. A system, comprising:
a front-facing camera device with a light configured to project visible light to illuminate a face of a person; a memory and processing system to implement an eye location module that is configured to: determine that the person is wearing glasses utilizing ambient light or the projected visible light; determine a center point of a lens of the glasses; initiate an LED system to project near infra-red light to illuminate at least one eye of the person, the near infra-red light projected to encompass the determined center point of the lens effective to illuminate a pupil of the at least one eye; and locate the pupil of the at least one eye based on a reflection of the near infra-red light from the pupil at approximately the determined center point of the lens. 18. The system as recited in claim 17, wherein the eye location module is configured to bisect the lens of the glasses horizontally and vertically to determine the center point of the lens. 19. The system as recited in claim 17, wherein the eye location module is configured to determine the reflection of the near infra-red light from the pupil as the closest reflection point to the determined center point of the lens of the glasses. 20. The system as recited in claim 17, wherein the eye location module is configured to activate an IR imager to capture an image of the at least one eye of the person for iris authentication based on the pupil of the at least one eye being located.
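The pupil-location strategy in the claims above (bisect the lens bounding region horizontally and vertically to obtain a center point, then take the near infra-red reflection closest to that center as the pupil) reduces to a small geometric search. The sketch below is a hedged illustration with invented names (`lens_center`, `locate_pupil`); it assumes candidate glints have already been extracted from the IR imager frame.

```python
# Hypothetical sketch of the pupil-location strategy described in the claims
# above: the lens bounding box is bisected horizontally and vertically to get
# its center point, and the near infra-red reflection closest to that center
# is taken as the pupil location. Names and data shapes are illustrative.

def lens_center(bbox):
    """Bisect a lens bounding box (x0, y0, x1, y1) horizontally and vertically."""
    x0, y0, x1, y1 = bbox
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def locate_pupil(reflections, bbox, threshold):
    """Pick the bright NIR reflection closest to the lens center point.

    reflections: list of (x, y, intensity) candidate glints from the IR imager.
    Returns the (x, y) of the chosen reflection, or None if no candidate is
    bright enough.
    """
    cx, cy = lens_center(bbox)
    candidates = [(x, y) for x, y, i in reflections if i >= threshold]
    if not candidates:
        return None  # caller might display an alignment indication and retry
    # closest reflection point to the determined center point of the lens
    return min(candidates, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)

bbox = (40, 60, 120, 110)          # lens bounding box in image coordinates
glints = [(55, 70, 90), (82, 86, 220), (115, 105, 200)]
print(lens_center(bbox))           # (80.0, 85.0)
print(locate_pupil(glints, bbox, threshold=150))  # (82, 86), the nearest bright glint
```

Using the lens center as the starting point of the search matches claim 4's "starting point of a location search", and the `None` branch corresponds to claim 8's alignment indication when no usable reflection is found.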
8,653 | 8,653 | 15,436,303 | 2,449 | A technique is provided to determine whether a region within a web page is viewable to a user through a browser window. Often, browsers will only show part of a web page at a given time, creating a difficulty in establishing whether a region of the web page, which may be an advertisement for example, is in view. This is addressed by providing one or more test features within the region, rendering the web page, monitoring a behavioural characteristic of the test features and determining whether the region is in view based on the monitored behavioural characteristic, wherein the behavioural characteristic varies according to whether the test feature is currently being displayed. One example of a behavioural characteristic is a frame progression rate. Browsers will typically redraw elements of a web page at a higher rate if they are currently in view through the browser window, and this characteristic can therefore be used to determine whether the test feature, and thus the region, is in view. The present invention finds particular utility where the region contains an advertisement, as it allows an advertiser to discover whether the advertisement has been seen by users. | 1.-22. (cancelled) 23. A method for determining whether a region of a web page is in view, comprising:
rendering, at the client device, a test feature within the region of the web page within a browser window, the test feature having a characteristic that has a value that varies depending on whether the test feature is displayed within or outside of a viewable portion of the web page; determining, using code restricted from accessing a position of the region within the browser window, a first value for the characteristic responsive to rendering the test feature within the region; and determining, using the first value of the characteristic, whether the region is within the viewable portion of the web page. 24. The method of claim 23, wherein the code is restricted from accessing the position of the region within the browser window by operating in an iframe. 25. The method of claim 23, wherein the characteristic comprises a behavioral characteristic of an API as it relates to the test feature. 26. The method of claim 23, wherein the characteristic comprises a frame progression rate of the test feature. 27. The method of claim 23, further comprising rendering a plurality of test features within the region, each of the test features having a respective characteristic that has a value that varies depending on whether the test feature is displayed within or outside of the viewable portion of the web page, wherein:
determining, by the client device, the value of the characteristic comprises determining each respective value of the characteristic of the plurality of test features; and determining whether the region is within the viewable portion comprises determining, by comparing each respective value with a control value, a proportion of the region that is within the viewable portion of the web page. 28. The method of claim 23, the step of determining whether the region is within the viewable portion of the web page comprising comparing the first value of the characteristic to a control value. 29. The method of claim 28, wherein the first value is determined at a first time, and wherein the method further comprises determining the control value by:
positioning, at a second time, the test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the test feature within the second region; and determining the control value using the second value for the characteristic. 30. The method of claim 28, further comprising determining the control value by:
positioning a second test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the second test feature within the second region; and determining the control value using the second value for the characteristic. 31. One or more non-transitory computer-readable storage media having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
rendering a test feature within a region of a web page within a browser window, the test feature having a characteristic that has a value that varies depending on whether the test feature is displayed within or outside of a viewable portion of the web page; determining, using code restricted from accessing a position of the region within the browser window, a first value for the characteristic responsive to rendering the test feature within the region; and determining, using the first value of the characteristic, whether the region is within the viewable portion of the web page. 32. The one or more non-transitory computer-readable storage media of claim 31, wherein the code is restricted from accessing the position of the region within the browser window by operating in an iframe. 33. The one or more non-transitory computer-readable storage media of claim 31, wherein the characteristic comprises a behavioral characteristic of an API as it relates to the test feature. 34. The one or more non-transitory computer-readable storage media of claim 31, wherein the characteristic comprises a frame progression rate of the test feature. 35. The one or more non-transitory computer-readable storage media of claim 31, the operations further comprising rendering a plurality of test features within the region, each of the test features having a respective characteristic that has a value that varies depending on whether the test feature is displayed within or outside of the viewable portion of the web page, wherein:
determining, by the client device, the value of the characteristic comprises determining each respective value of the characteristic of the plurality of test features; and determining whether the region is within the viewable portion comprises determining, by comparing each respective value with a control value, a proportion of the region that is within the viewable portion of the web page. 36. The one or more non-transitory computer-readable storage media of claim 31, the step of determining whether the region is within the viewable portion of the web page comprising comparing the first value of the characteristic to a control value. 37. The one or more non-transitory computer-readable storage media of claim 36, wherein the first value is determined at a first time, and wherein the operations further comprise determining the control value by:
positioning, at a second time, the test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the test feature within the second region; and determining the control value using the second value for the characteristic. 38. The one or more non-transitory computer-readable storage media of claim 36, the operations further comprising determining the control value by:
positioning a second test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the second test feature within the second region; and determining the control value using the second value for the characteristic. 39. A method for determining whether a region of a web page is in view, comprising:
transmitting, to a client device, code that, when executed by the client device, causes the client device to:
render a test feature within the region of the web page within a browser window, the test feature having a characteristic that has a value that varies depending on whether the test feature is displayed within or outside of a viewable portion of the web page;
determine a first value for the characteristic responsive to rendering the test feature within the region; and
determine, using the first value of the characteristic, whether the region is within the viewable portion of the web page;
wherein the code is restricted from accessing a position of the region within the browser window; and
receiving, from the client device, an indication that the region is within the viewable portion of the browser window responsive to the client device executing the code and determining that the test feature is displayed within the viewable portion of the web page. 40. The method of claim 39, wherein the characteristic comprises a frame progression rate of the test feature. 41. The method of claim 39, the step of determining whether the region is within the viewable portion of the web page comprising comparing the first value of the characteristic to a control value, wherein the first value is determined at a first time, wherein the code further causes the client device to determine the control value by:
positioning, at a second time, the test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the test feature within the second region; and determining the control value using the second value for the characteristic. 42. The method of claim 39, the step of determining whether the region is within the viewable portion of the web page comprising comparing the first value of the characteristic to a control value, wherein the code further causes the client device to determine the control value by:
positioning a second test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the second test feature within the second region; and determining the control value using the second value for the characteristic. | A technique is provided to determine whether a region within a web page is viewable to a user through a browser window. Often, browsers will only show part of a web page at a given time, creating a difficulty in establishing whether a region of the web page, which may be an advertisement for example, is in view. This is addressed by providing one or more test features within the region, rendering the web page, monitoring a behavioural characteristic of the test features and determining whether the region is in view based on the monitored behavioural characteristic, wherein the behavioural characteristic varies according to whether the test feature is currently being displayed. One example of a behavioural characteristic is a frame progression rate. Browsers will typically redraw elements of a web page at a higher rate if they are currently in view through the browser window, and this characteristic can therefore be used to determine whether the test feature, and thus the region, is in view. The present invention finds particular utility where the region contains an advertisement, as it allows an advertiser to discover whether the advertisement has been seen by users. 1.-22. (cancelled) 23. A method for determining whether a region of a web page is in view, comprising:
rendering, at the client device, a test feature within the region of the web page within a browser window, the test feature having a characteristic that has a value that varies depending on whether the test feature is displayed within or outside of a viewable portion of the web page; determining, using code restricted from accessing a position of the region within the browser window, a first value for the characteristic responsive to rendering the test feature within the region; and determining, using the first value of the characteristic, whether the region is within the viewable portion of the web page. 24. The method of claim 23, wherein the code is restricted from accessing the position of the region within the browser window by operating in an iframe. 25. The method of claim 23, wherein the characteristic comprises a behavioral characteristic of an API as it relates to the test feature. 26. The method of claim 23, wherein the characteristic comprises a frame progression rate of the test feature. 27. The method of claim 23, further comprising rendering a plurality of test features within the region, each of the test features having a respective characteristic that has a value that varies depending on whether the test feature is displayed within or outside of the viewable portion of the web page, wherein:
determining, by the client device, the value of the characteristic comprises determining each respective value of the characteristic of the plurality of test features; and determining whether the region is within the viewable portion comprises determining, by comparing each respective value with the control value, a proportion of the region that is within the viewable portion of the web page. 28. The method of claim 23, the step of determining whether the region is within the viewable portion of the web page comprising comparing the first value of the characteristic to a control value. 29. The method of claim 28, wherein the first value is determined at a first time, and wherein the method further comprises determining the control value by:
positioning, at a second time, the test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the test feature within the second region; and determining the control value using the second value for the characteristic. 30. The method of claim 28, further comprising determining the control value by:
positioning a second test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the second test feature within the second region; and determining the control value using the second value for the characteristic. 31. One or more non-transitory computer-readable storage media having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
rendering a test feature within a region of a web page within a browser window, the test feature having a characteristic that has a value that varies depending on whether the test feature is displayed within or outside of a viewable portion of the web page; determining, using code restricted from accessing a position of the region within the browser window, a first value for the characteristic responsive to rendering the test feature within the region; and determining, using the first value of the characteristic, whether the region is within the viewable portion of the web page. 32. The one or more non-transitory computer-readable storage media of claim 31, wherein the code is restricted from accessing the position of the region within the browser window by operating in an iframe. 33. The one or more non-transitory computer-readable storage media of claim 31, wherein the characteristic comprises a behavioral characteristic of an API as it relates to the test feature. 34. The one or more non-transitory computer-readable storage media of claim 31, wherein the characteristic comprises a frame progression rate of the test feature. 35. The one or more non-transitory computer-readable storage media of claim 31, the operations further comprising rendering a plurality of test features within the region, each of the test features having a respective characteristic that has a value that varies depending on whether the test feature is displayed within or outside of the viewable portion of the web page, wherein:
determining, by the client device, the value of the characteristic comprises determining each respective value of the characteristic of the plurality of test features; and determining whether the region is within the viewable portion comprises determining, by comparing each respective value with the control value, a proportion of the region that is within the viewable portion of the web page. 36. The one or more non-transitory computer-readable storage media of claim 31, the step of determining whether the region is within the viewable portion of the web page comprising comparing the first value of the characteristic to a control value. 37. The one or more non-transitory computer-readable storage media of claim 36, wherein the first value is determined at a first time, and wherein the operations further comprise determining the control value by:
positioning, at a second time, the test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the test feature within the second region; and determining the control value using the second value for the characteristic. 38. The one or more non-transitory computer-readable storage media of claim 36, the operations further comprising determining the control value by:
positioning a second test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the second test feature within the second region; and determining the control value using the second value for the characteristic. 39. A method for determining whether a region of a web page is in view, comprising:
transmitting, to a client device, code that, when executed by the client device, causes the client device to:
render a test feature within the region of the web page within a browser window, the test feature having a characteristic that has a value that varies depending on whether the test feature is displayed within or outside of a viewable portion of the web page;
determine a first value for the characteristic responsive to rendering the test feature within the region; and
determine, using the first value of the characteristic, whether the region is within the viewable portion of the web page;
wherein the code is restricted from accessing a position of the region within the browser window; and
receiving, from the client device, an indication that the region is within the viewable portion of the browser window responsive to the client device executing the code and determining that the test feature is displayed within the viewable portion of the web page. 40. The method of claim 39, wherein the characteristic comprises a frame progression rate of the test feature. 41. The method of claim 39, the step of determining whether the region is within the viewable portion of the web page comprising comparing the first value of the characteristic to a control value, wherein the first value is determined at a first time, wherein the code further causes the client device to determine the control value by:
positioning, at a second time, the test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the test feature within the second region; and determining the control value using the second value for the characteristic. 42. The method of claim 39, the step of determining whether the region is within the viewable portion of the web page comprising comparing the first value of the characteristic to a control value, wherein the code further causes the client device to determine the control value by:
positioning a second test feature within a second region of the web page either known to be within the viewable portion of the web page or known to be outside of the viewable portion of the web page; measuring, using the code, a second value for the characteristic responsive to rendering the second test feature within the second region; and determining the control value using the second value for the characteristic. | 2,400 |
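The viewability claims above (claims 23-42) hinge on one measurable fact: browsers typically advance animation frames faster for content that is on-screen than for content scrolled out of view. The following TypeScript sketch models only the comparison logic of that technique; it assumes the frame progression rates have already been sampled (for example, by counting `requestAnimationFrame` callbacks over a fixed interval inside the position-restricted iframe). The function names, the `Measurement` shape, and the 0.5 tolerance are illustrative assumptions, not anything specified by the claims.

```typescript
// Hypothetical sketch of the claimed viewability check: compare a measured
// frame progression rate (frames advanced per second for a test feature)
// against a control value obtained from a region known to be in view.
// All names and thresholds are illustrative, not the patent's implementation.

interface Measurement {
  framesPerSecond: number; // measured frame progression rate of one test feature
}

// Decide whether a single test feature appears to be displayed, by comparing
// its measured rate with a control value taken from a known-in-view region.
// `tolerance` is an assumed fraction: rates of at least (tolerance * control)
// count as "in view"; throttled offscreen features fall well below this.
function isFeatureInView(
  measured: number,
  inViewControl: number,
  tolerance = 0.5,
): boolean {
  return measured >= inViewControl * tolerance;
}

// Claim 35-style extension: with several test features spread over the region,
// the fraction of features judged in view estimates the proportion of the
// region that is within the viewable portion of the page.
function proportionInView(
  measurements: Measurement[],
  inViewControl: number,
  tolerance = 0.5,
): number {
  if (measurements.length === 0) return 0;
  const inView = measurements.filter((m) =>
    isFeatureInView(m.framesPerSecond, inViewControl, tolerance),
  ).length;
  return inView / measurements.length;
}
```

The control value itself would be calibrated as claims 37-38 describe: render the same (or a second) test feature in a region known to be in view, measure its rate the same way, and use that as `inViewControl`, so the threshold adapts to the device's actual refresh behaviour rather than assuming a fixed 60 fps.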
8,654 | 8,654 | 14,430,792 | 2,458 | A method is presented of requesting and receiving content over a network for use at a device. A request for the content is sent (S1) from the device to a content server located in the network. A response is received (S2) at the device from the content server. The response provides a content readiness estimate for the requested content. The readiness estimate is an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device. A content readiness indication is presented (S3) at the device to a user of the device. The readiness indication is derived in dependence upon the readiness estimate and indicates to the user when or whether the content will be or is ready for use at the device according to the predetermined criterion. An instruction is received (S4) at the device from the user indicating when the user wishes to commence use of the content at the device. Content is received (S5) at the device from the content server and stored at least until it is required for use. Use of the content is commenced (S6) at the device according to the instruction. | 1-32. (canceled) 33. A method of requesting and receiving content over a network for use at a device, the method comprising, at the device:
(a) sending a request for the content to a content server located in the network; (b) receiving a response from the content server providing a content readiness estimate for the requested content, the readiness estimate being an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; (c) presenting a content readiness indication to a user of the device, the readiness indication being derived in dependence upon the readiness estimate and indicating to the user when or whether the content will be or is ready for use at the device according to the predetermined criterion; (d) receiving a user instruction from the user indicating when the user wishes to commence use of the content at the device; (e) receiving content from the content server and storing it at least until it is required for use; and (f) commencing use of the content according to the user instruction. 34. A method as claimed in claim 33, wherein the content readiness indication provides a visual or audible indication as to whether or not the content is ready for use at the device according to the predetermined criterion. 35. A method as claimed in claim 33, wherein the content readiness indication provides a relative or absolute time at which the content will be ready for use at the device according to the predetermined criterion. 36. A method as claimed in claim 35, further comprising presenting the user with an option of commencing use of the content at the relative or absolute time provided in the readiness indication, or at a predetermined time thereafter, and wherein the user instruction indicates to commence use of the content at the relative or absolute time provided in the readiness indication, or at a predetermined time thereafter. 37. 
A method as claimed in claim 33, further comprising presenting the user with an option of commencing use of the content regardless of whether or not the content is indicated as being ready for use according to the readiness indication and wherein the user instruction indicates to commence use of the content regardless of whether or not the content is indicated as being ready for use according to the readiness indication. 38. A method as claimed in claim 37, wherein, where at least some content has already been received, the user instruction indicates to commence use of the content immediately, or, where content has not already been received, indicates to commence use of the content upon or soon after first receipt of content. 39. A method as claimed in claim 33, comprising presenting the user with an option of downloading the entire content for later use, and wherein the user instruction indicates to download the entire content and await further instruction as to when the user wishes to commence use of the content at the device. 40. A method as claimed in claim 39, comprising presenting the user with an option to schedule a time to download the content. 41. A method as claimed in claim 33, comprising presenting the user with an option to use a higher network priority for content delivery. 42. A method as claimed in claim 33, further comprising presenting the user with different content delivery options associated with different respective costs to the user, wherein the different content delivery options include two or more of:
an option of commencing use of the content at the relative or absolute time provided in the readiness indication, or at a predetermined time thereafter, and wherein the user instruction indicates to commence use of the content at the relative or absolute time provided in the readiness indication, or at a predetermined time thereafter; an option of commencing use of the content regardless of whether or not the content is indicated as being ready for use according to the readiness indication, and wherein the user instruction indicates to commence use of the content regardless of whether or not the content is indicated as being ready for use according to the readiness indication; an option of downloading the entire content for later use, and wherein the user instruction indicates to download the entire content and await further instruction as to when the user wishes to commence use of the content at the device; an option to schedule a time to download the content; an option to use a higher network priority for content delivery. 43. A method as claimed in claim 33, comprising receiving the instruction in step (d) after presentation of the content of the readiness indication to the user in step (c). 44. A method as claimed in claim 33, comprising receiving the content readiness estimate in step (b) before receiving any content in step (e). 45. A method as claimed in claim 33, comprising commencing receipt of content in step (e) before receiving the instruction in step (d). 46. A method as claimed in claim 33, wherein the content comprises a plurality of content elements intended to be used in time sequence, and wherein the content elements are received from the content server substantially in time sequence to enable use of the content to be commenced in step (f) before the entire content has been received at the device. 47. 
A method as claimed in claim 33, wherein the user instruction indicates to commence use of the content before the entire content has been received at the device. 48. A method as claimed in claim 33, wherein the predetermined criterion is that the content is received in its entirety. 49. A method as claimed in claim 33, wherein the content comprises video or audio content, and wherein using the content comprises playing the content. 50. A method as claimed in claim 33, wherein the content readiness estimate is an estimate of when, relative to a first receipt of content at the device, the content will be so ready. 51. A method as claimed in claim 33, wherein the predetermined criterion is that the content can be used continuously or without interruption, without having to pause in order to receive further content before use can recommence. 52. A method as claimed in claim 51,
wherein the content comprises video or audio content; wherein using the content comprises playing the content; and wherein the readiness estimate relates to the time estimated as being required to buffer content at the device prior to commencement of playback to provide continuous or uninterrupted playback. 53. A method as claimed in claim 33, wherein the network is a communication network and wherein the device is a user equipment. 54. A method of delivering content over a network for use at a remote device, the method comprising, at a content server:
(a) receiving a request for content from the device; (b) sending to a delivery prediction function a request for a content readiness estimate for the requested content, the readiness estimate being an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; (c) receiving the readiness estimate from the delivery prediction function; (d) sending a response to the device providing the readiness estimate; and (e) sending the requested content to the device. 55. A method as claimed in claim 54, wherein the delivery prediction function is located at the content server. 56. A method as claimed in claim 54, comprising arranging delivery of other content to the device and other devices of the network taking account of, or to ensure accuracy of, the readiness estimate for the requested content. 57. A method as claimed in claim 54, wherein the network is a communication network and wherein the device is a user equipment. 58. A method as claimed in claim 54, wherein the content readiness estimate is an estimate of when, relative to a first receipt of content at the device, the content will be so ready. 59. A method performed at a node comprising a delivery prediction function, the method comprising:
receiving a request for a content readiness estimate from a content server; determining the readiness estimate by estimating when a content will be ready for use at a device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; and sending the readiness estimate to the content server. 60. A method as claimed in claim 59, comprising performing the estimating step taking account of data relating to historical resource consumption of the device. 61. A method as claimed in claim 59, comprising performing the estimating step taking account of throughput in the device's current network cell. 62. A method as claimed in claim 59, wherein the content readiness estimate is an estimate of when, relative to a first receipt of content at the device, the content will be so ready. 63. A method as claimed in claim 59, comprising, at least where movement of the device between network cells is determined to be possible or likely before use of the content is commenced at the device, the estimating comprising taking account of data relating to historical mobility of the device amongst a plurality of network cells visited by the device, with network conditions within those visited cells influencing the performance of the estimating. 64. A method as claimed in claim 63, comprising determining that movement between network cells is possible or likely before use of the content is commenced at the device if an estimation of when the content will be ready for use at the device based on throughput in the device's current network cell is after the total time required for use of the content by a predetermined factor or amount. 65. A device comprising:
an output port configured to send a request for a content to a content server located in a network; an input port configured to:
receive a response from the content server providing a content readiness estimate for the requested content, the readiness estimate being an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; and
receive the content from the content server and store it at least until it is required for use;
an interface controller configured to:
present a content readiness indication to a user of the device, the readiness indication being derived in dependence upon the readiness estimate and indicating to the user when or whether the content will be or is ready for use at the device according to the predetermined criterion; and
receive an instruction from the user indicating when the user wishes to commence use of the content at the device; and
a consumption controller configured to commence use of the content according to the instruction. 66. A content server for delivering content over a network for use at a remote device, the content server comprising:
an input port configured to receive a request for content from the device; and an output port configured to send to a delivery prediction function a request for a content readiness estimate for the requested content, wherein the readiness estimate is an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; and wherein the input port is further configured to receive the readiness estimate from the delivery prediction function; and wherein the output port is further configured to send a response to the device providing the readiness estimate and send the requested content to the device. 67. A node comprising a delivery prediction function, the node comprising:
an input port for receiving a request for a content readiness estimate from a content server, a content readiness estimator circuit for determining the readiness estimate by estimating when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; and an output port for sending the readiness estimate to the content server. | A method is presented of requesting and receiving content over a network for use at a device. A request for the content is sent (S1) from the device to a content server located in the network. A response is received (S2) at the device from the content server. The response provides a content readiness estimate for the requested content. The readiness estimate is an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device. A content readiness indication is presented (S3) at the device to a user of the device. The readiness indication is derived in dependence upon the readiness estimate and indicates to the user when or whether the content will be or is ready for use at the device according to the predetermined criterion. An instruction is received (S4) at the device from the user indicating when the user wishes to commence use of the content at the device. Content is received (S5) at the device from the content server and stored at least until it is required for use. Use of the content is commenced (S6) at the device according to the instruction. 1-32. (canceled) 33. A method of requesting and receiving content over a network for use at a device, the method comprising, at the device:
(a) sending a request for the content to a content server located in the network; (b) receiving a response from the content server providing a content readiness estimate for the requested content, the readiness estimate being an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; (c) presenting a content readiness indication to a user of the device, the readiness indication being derived in dependence upon the readiness estimate and indicating to the user when or whether the content will be or is ready for use at the device according to the predetermined criterion; (d) receiving a user instruction from the user indicating when the user wishes to commence use of the content at the device; (e) receiving content from the content server and storing it at least until it is required for use; and (f) commencing use of the content according to the user instruction. 34. A method as claimed in claim 33, wherein the content readiness indication provides a visual or audible indication as to whether or not the content is ready for use at the device according to the predetermined criterion. 35. A method as claimed in claim 33, wherein the content readiness indication provides a relative or absolute time at which the content will be ready for use at the device according to the predetermined criterion. 36. A method as claimed in claim 35, further comprising presenting the user with an option of commencing use of the content at the relative or absolute time provided in the readiness indication, or at a predetermined time thereafter, and wherein the user instruction indicates to commence use of the content at the relative or absolute time provided in the readiness indication, or at a predetermined time thereafter. 37. 
A method as claimed in claim 33, further comprising presenting the user with an option of commencing use of the content regardless of whether or not the content is indicated as being ready for use according to the readiness indication and wherein the user instruction indicates to commence use of the content regardless of whether or not the content is indicated as being ready for use according to the readiness indication. 38. A method as claimed in claim 37, wherein, where at least some content has already been received, the user instruction indicates to commence use of the content immediately, or, where content has not already been received, indicates to commence use of the content upon or soon after first receipt of content. 39. A method as claimed in claim 33, comprising presenting the user with an option of downloading the entire content for later use, and wherein the user instruction indicates to download the entire content and await further instruction as to when the user wishes to commence use of the content at the device. 40. A method as claimed in claim 39, comprising presenting the user with an option to schedule a time to download the content. 41. A method as claimed in claim 33, comprising presenting the user with an option to use a higher network priority for content delivery. 42. A method as claimed in claim 33, further comprising presenting the user with different content delivery options associated with different respective costs to the user, wherein the different content delivery options include two or more of:
an option of commencing use of the content at the relative or absolute time provided in the readiness indication, or at a predetermined time thereafter, and wherein the user instruction indicates to commence use of the content at the relative or absolute time provided in the readiness indication, or at a predetermined time thereafter; an option of commencing use of the content regardless of whether or not the content is indicated as being ready for use according to the readiness indication, and wherein the user instruction indicates to commence use of the content regardless of whether or not the content is indicated as being ready for use according to the readiness indication; an option of downloading the entire content for later use, and wherein the user instruction indicates to download the entire content and await further instruction as to when the user wishes to commence use of the content at the device; an option to schedule a time to download the content; an option to use a higher network priority for content delivery. 43. A method as claimed in claim 33, comprising receiving the instruction in step (d) after presentation of the content of the readiness indication to the user in step (c). 44. A method as claimed in claim 33, comprising receiving the content readiness estimate in step (b) before receiving any content in step (e). 45. A method as claimed in claim 33, comprising commencing receipt of content in step (e) before receiving the instruction in step (d). 46. A method as claimed in claim 33, wherein the content comprises a plurality of content elements intended to be used in time sequence, and wherein the content elements are received from the content server substantially in time sequence to enable use of the content to be commenced in step (f) before the entire content has been received at the device. 47. 
A method as claimed in claim 33, wherein the user instruction indicates to commence use of the content before the entire content has been received at the device. 48. A method as claimed in claim 33, wherein the predetermined criterion is that the content is received in its entirety. 49. A method as claimed in claim 33, wherein the content comprises video or audio content, and wherein using the content comprises playing the content. 50. A method as claimed in claim 33, wherein the content readiness estimate is an estimate of when, relative to a first receipt of content at the device, the content will be so ready. 51. A method as claimed in claim 33, wherein the predetermined criterion is that the content can be used continuously or without interruption, without having to pause in order to receive further content before use can recommence. 52. A method as claimed in claim 51,
wherein the content comprises video or audio content; wherein using the content comprises playing the content; and wherein the readiness estimate relates to the time estimated as being required to buffer content at the device prior to commencement of playback to provide continuous or uninterrupted playback. 53. A method as claimed in claim 33, wherein the network is a communication network and wherein the device is a user equipment. 54. A method of delivering content over a network for use at a remote device, the method comprising, at a content server:
(a) receiving a request for content from the device; (b) sending to a delivery prediction function a request for a content readiness estimate for the requested content, the readiness estimate being an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; (c) receiving the readiness estimate from the delivery prediction function; (d) sending a response to the device providing the readiness estimate; and (e) sending the requested content to the device. 55. A method as claimed in claim 54, wherein the delivery prediction function is located at the content server. 56. A method as claimed in claim 54, comprising arranging delivery of other content to the device and other devices of the network taking account of, or to ensure accuracy of, the readiness estimate for the requested content. 57. A method as claimed in claim 54, wherein the network is a communication network and wherein the device is a user equipment. 58. A method as claimed in claim 54, wherein the content readiness estimate is an estimate of when, relative to a first receipt of content at the device, the content will be so ready. 59. A method performed at a node comprising a delivery prediction function, the method comprising:
receiving a request for a content readiness estimate from a content server; determining the readiness estimate by estimating when a content will be ready for use at a device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; and sending the readiness estimate to the content server. 60. A method as claimed in claim 59, comprising performing the estimating step taking account of data relating to historical resource consumption of the device. 61. A method as claimed in claim 59, comprising performing the estimating step taking account of throughput in the device's current network cell. 62. A method as claimed in claim 59, wherein the content readiness estimate is an estimate of when, relative to a first receipt of content at the device, the content will be so ready. 63. A method as claimed in claim 59, comprising, at least where movement of the device between network cells is determined to be possible or likely before use of the content is commenced at the device, the estimating comprising taking account of data relating to historical mobility of the device amongst a plurality of network cells visited by the device, with network conditions within those visited cells influencing the performance of the estimating. 64. A method as claimed in claim 63, comprising determining that movement between network cells is possible or likely before use of the content is commenced at the device if an estimation of when the content will be ready for use at the device based on throughput in the device's current network cell is after the total time required for use of the content by a predetermined factor or amount. 65. A device comprising:
an output port configured to send a request for a content to a content server located in a network; an input port configured to:
receive a response from the content server providing a content readiness estimate for the requested content, the readiness estimate being an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; and
receive the content from the content server and store it at least until it is required for use;
an interface controller configured to:
present a content readiness indication to a user of the device, the readiness indication being derived in dependence upon the readiness estimate and indicating to the user when or whether the content will be or is ready for use at the device according to the predetermined criterion; and
receive an instruction from the user indicating when the user wishes to commence use of the content at the device; and
a consumption controller configured to commence use of the content according to the instruction. 66. A content server for delivering content over a network for use at a remote device, the content server comprising:
an input port configured to receive a request for content from the device; and an output port configured to send to a delivery prediction function a request for a content readiness estimate for the requested content, wherein the readiness estimate is an estimate of when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; and wherein the input port is further configured to receive the readiness estimate from the delivery prediction function; and wherein the output port is further configured to send a response to the device providing the readiness estimate and send the requested content to the device. 67. A node comprising a delivery prediction function, the node comprising:
an input port for receiving a request for a content readiness estimate from a content server; a content readiness estimator circuit for determining the readiness estimate by estimating when the content will be ready for use at the device according to a predetermined criterion, based on network conditions affecting delivery of the content from the content server to the device; and an output port for sending the readiness estimate to the content server. | 2,400 |
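Claims 59–64 above recite a delivery prediction function that estimates, from network conditions, when content will be ready for continuous use at the device. A minimal sketch of such an estimator, assuming constant measured throughput and a constant playback bitrate (the function name and parameters are illustrative, not taken from the claims):

```python
def readiness_estimate(content_bytes, bitrate_bps, throughput_bps, buffered_bytes=0):
    """Estimate seconds, relative to first receipt of content, until playback
    can start and run without interruption (the continuous-use criterion of
    claim 51): the smallest startup delay such that the download stays ahead
    of playback for the whole stream, assuming constant rates."""
    if throughput_bps >= bitrate_bps:
        return 0.0  # delivery outpaces consumption; ready immediately
    playback_s = content_bytes * 8 / bitrate_bps              # total playback time (s)
    download_s = (content_bytes - buffered_bytes) * 8 / throughput_bps
    # the last byte must arrive no later than the moment it is to be played
    return max(0.0, download_s - playback_s)
```

For example, a 1 MB clip encoded at 1 Mbit/s arriving over a 0.5 Mbit/s link would need roughly 8 seconds of buffering before uninterrupted playback is possible; the estimate could then be returned to the content server and surfaced to the user as the readiness indication.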
8,655 | 8,655 | 15,386,101 | 2,439 | A machine-readable medium may store instructions executable by a processing resource to access log data of an enterprise and extract time-series data of an enterprise entity from the log data. The time-series data may include measured feature values of a set of selected features over a series of time periods. The instructions may be further executable to train a predictive model specific to the enterprise entity using the time-series data, wherein the predictive model is to generate, for a particular time period, a predicted feature value for each of the selected features; access actual feature values of the enterprise entity for the particular time period; apply first-level deviation criteria to the actual feature value and the predicted feature value of each selected feature to identify deviant features of the enterprise entity; and apply second-level deviation criteria to the identified deviant features to identify the enterprise entity as behaving abnormally. | 1. A system comprising:
a prediction engine to:
extract time-series data of an enterprise entity from log data of an enterprise, wherein the time-series data of the enterprise entity includes measured feature values of a set of selected features over a series of time periods; and
train predictive models specific to the enterprise entity using the time-series data, including training a separate predictive model for each selected feature using time-series data specific to the selected feature, wherein the separate predictive model is to output a predicted feature value of the selected feature for a particular time period; and
a detection engine to:
retrieve actual feature values of the enterprise entity for the particular time period; and
apply first-level deviation criteria to the actual feature value and the predicted feature value of each selected feature to identify deviant features of the enterprise entity; and
apply second-level deviation criteria to the identified deviant features to identify the enterprise entity as behaving abnormally. 2. The system of claim 1, wherein the time-series data includes domain name service (DNS) log data and wherein the set of selected features includes:
a number of DNS queries by the enterprise entity, a percentage of DNS queries to blacklisted domains by the enterprise entity, a number of distinct domains queried by the enterprise entity, a percentage of distinct domains queried by the enterprise entity that are blacklisted domains, and a highest number of distinct queries to an individual blacklisted domain by the enterprise entity. 3. The system of claim 1, wherein the time-series data includes hypertext transfer protocol (HTTP) log data and wherein the set of selected features includes:
a total number of HTTP requests by the enterprise entity, a number of distinct domains in uniform resource locators (URLs) of the HTTP requests by the enterprise entity, a percentage of HTTP requests to blacklisted domains or blacklisted internet protocol (IP) addresses by the enterprise entity, a count of HTTP requests to access selected file types by the enterprise entity wherein the selected file types include executable files and image files, and a count of HTTP requests to blacklisted domains or blacklisted IP addresses to access the selected file types by the enterprise entity. 4. The system of claim 1, wherein the time-series data includes netflow log data and wherein the set of selected features includes:
a total number of connections by the enterprise entity, a number of connection bursts by the enterprise entity, a number of ports on which connection attempts were made to a target internet protocol (IP) address, and a number of failed connection attempts. 5. The system of claim 1, wherein the predicted feature values specify a percentile range of predicted values for each of the selected features; and
wherein the detection engine is to apply the first-level deviation criteria to the actual feature value and the predicted feature value of a particular selected feature to identify the particular selected feature as a deviant feature when the actual feature value of the particular selected feature exceeds the predicted feature value at a threshold percentile in the percentile range of predicted values for the particular selected feature. 6. The system of claim 5, wherein the detection engine is to apply different threshold percentiles to different selected features. 7. The system of claim 1, wherein the detection engine is to apply the second-level deviation criteria to identify the enterprise entity as behaving abnormally when a threshold number of the selected features are identified as deviant features. 8. The system of claim 1, wherein the enterprise entity comprises an enterprise device or an enterprise user account. 9. A method comprising:
accessing log data of an enterprise; extracting time-series data of an enterprise entity from the log data, wherein the time-series data of the enterprise entity includes measured feature values of a set of selected features over a series of consecutive time periods; training predictive models specific to the enterprise entity using the time-series data, wherein training includes, for each selected feature:
training a separate predictive model for the selected feature using time-series data specific to the selected feature, wherein the separate predictive model is to output a predicted feature value of the selected feature for a particular time period;
accessing actual feature values of the enterprise entity for the particular time period; applying deviation criteria to the actual feature values and the predicted feature values output by the predictive models; and flagging the enterprise entity as behaving abnormally based on application of the deviation criteria to the actual feature values and the predicted feature values. 10. The method of claim 9, wherein applying the deviation criteria to the actual feature values and the predicted feature values comprises:
applying first-level deviation criteria to the actual feature value and the predicted feature value of each selected feature to identify deviant features of the enterprise entity; and applying second-level deviation criteria to the identified deviant features to identify the enterprise entity as behaving abnormally. 11. The method of claim 10, wherein the predicted feature values specify a percentile range of predicted values for each of the selected features; and
wherein applying the first-level deviation criteria to the actual feature value and the predicted feature value of a particular selected feature comprises:
identifying the particular selected feature as a deviant feature when the actual feature value of the particular selected feature exceeds the predicted feature value at a threshold percentile in the percentile range of predicted values for the particular selected feature. 12. The method of claim 11, wherein applying the first-level deviation criteria comprises applying different threshold percentiles to different selected features. 13. The method of claim 10, wherein applying the second-level deviation criteria comprises:
identifying the enterprise entity as behaving abnormally when a threshold number of the selected features are identified as deviant features. 14. The method of claim 10, wherein applying the second-level deviation criteria comprises:
identifying the enterprise entity as behaving abnormally when a predetermined combination of the selected features are identified as deviant features. 15. The method of claim 9, further comprising:
providing, as inputs into the predictive model to generate the predicted values for the selected features, actual feature values of the enterprise entity from a selected subset of past time periods prior to the particular time period. 16. The method of claim 15, wherein the actual feature values from the selected subset of past time periods include:
an actual feature value from a time period immediately prior to the particular time period, an actual feature value from one day prior to the particular time period, an actual feature value from two days prior to the particular time period, an actual feature value from one week prior to the particular time period, and an actual feature value from two weeks prior to the particular time period. 17. A non-transitory machine-readable medium comprising instructions executable by a processing resource to:
access log data of an enterprise; extract time-series data of an enterprise entity from the log data, wherein the time-series data of the enterprise entity includes measured feature values of a set of selected features over a series of time periods; train a predictive model specific to the enterprise entity using the time-series data, wherein the predictive model is to generate, for a particular time period, a predicted feature value for each of the selected features; access actual feature values of the enterprise entity for the particular time period; apply first-level deviation criteria to the actual feature value and the predicted feature value of each selected feature to identify deviant features of the enterprise entity; and apply second-level deviation criteria to the identified deviant features to identify the enterprise entity as behaving abnormally. 18. The non-transitory machine-readable medium of claim 17, wherein the instructions are executable by the processing resource to apply the second-level deviation criteria to identify the enterprise entity as behaving abnormally when a threshold number of the selected features are identified as deviant features. 19. The non-transitory machine-readable medium of claim 17, wherein the instructions are further executable by the processing resource to provide, as inputs into the predictive model to generate the predicted values for the selected features, actual feature values of the enterprise entity from a selected subset of past time periods prior to the particular time period. 20. The non-transitory machine-readable medium of claim 19, wherein the actual feature values from the selected subset of past time periods include:
an actual feature value from a time period immediately prior to the particular time period; an actual feature value from one day prior to the particular time period, an actual feature value from two days prior to the particular time period, an actual feature value from one week prior to the particular time period, and an actual feature value from two weeks prior to the particular time period. | 2,400 |
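The two-level detection recited in claims 1, 5–7, and 17–18 above — per-feature percentile predictions, first-level deviation criteria applied feature by feature, then a threshold count of deviant features — can be sketched as follows. The nearest-rank percentile rule, function names, and feature names are illustrative assumptions, not details from the claims:

```python
def percentile(samples, q):
    """Nearest-rank percentile of a list of predicted sample values (0 < q <= 100)."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(len(s) * q / 100))]

def deviant_features(actual, predicted, threshold_pct=95):
    """First-level criteria: a selected feature is deviant when its actual
    value exceeds the predicted value at the threshold percentile."""
    return [f for f, val in actual.items()
            if val > percentile(predicted[f], threshold_pct)]

def is_abnormal(actual, predicted, threshold_pct=95, min_deviant=2):
    """Second-level criteria: flag the enterprise entity as behaving
    abnormally when at least `min_deviant` features are deviant."""
    return len(deviant_features(actual, predicted, threshold_pct)) >= min_deviant
```

A per-feature threshold percentile (claim 6) or a predetermined combination of deviant features (claim 14) would be straightforward variations on the same structure.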
8,656 | 8,656 | 15,833,087 | 2,425 | A sensing and alert system is installed in a plurality of electrical switchgear cabinets. Each switchgear cabinet includes at least one high-voltage circuit breaker, and each switchgear cabinet is in communication with a central programmable logic controller (PLC) configured to activate and deactivate the high-voltage circuit breaker in the corresponding switchgear cabinet. The sensing and alert system includes a time of flight (ToF) sensor arranged to capture real-time image data, a lighting module affixed to an inside portion of the switchgear cabinet, and a processor operatively coupled to the ToF sensor and to the lighting module. The sensing and alert system is affixed to an inside portion of each switchgear cabinet. The processor compares real-time image data from the ToF sensor to the stored calibration image data and transmits an alarm if the difference is greater than a predetermined amount. | 1. A sensing and alert system for a plurality of electrical switchgear cabinets, each switchgear cabinet including at least one high-voltage circuit breaker, and each switchgear cabinet in communication with a central programmable logic controller (PLC) configured to activate and deactivate the high-voltage circuit breaker in the corresponding switchgear cabinet, the sensing and alert system comprising:
a time of flight (ToF) sensor arranged to capture real-time image data; a lighting module affixed to an inside portion of the switchgear cabinet; a processor operatively coupled to the ToF sensor and to the lighting module; the sensing and alert system affixed to an inside portion of each switchgear cabinet so that the ToF sensor is in direct line-of-sight of an inside portion of a door of the cabinet, and the lighting module is positioned so as to illuminate an inside portion of the cabinet when activated; the processor configured to receive the real-time image data from the ToF sensor and compare the real-time image data with stored calibration image data, wherein if the real-time image data differs from the stored calibration image data by more than a predetermined amount, the processor transmits an alarm signal to the central PLC, and activates the lighting module; and wherein the central PLC, upon receipt of the alarm signal from the processor, deactivates the at least one high-voltage circuit breaker in the corresponding switchgear cabinet. 2. The system according to claim 1, wherein the lighting module, the processor, and the ToF sensor receive power from a 24 volt DC power source provided within each electrical cabinet. 3. The system according to claim 1, wherein the ToF sensor is a Polytec epc610 camera module. 4. The system according to claim 1, wherein the central PLC controls the high-voltage circuit breakers in a plurality of electrical switchgear cabinets. 5. The system according to claim 1, wherein after the processor turns on the lighting module of a selected electrical cabinet, the lighting module remains activated until a predetermined time after the processor determines that the real-time image data does not differ from the stored calibration image data by more than a predetermined amount. 6. 
The system according to claim 1, wherein the sensing and alert system is attached magnetically to an inside portion of the switchgear cabinet so as to comply with Underwriters Laboratory (UL) requirements for electrical switchgear cabinetry. 7. The system according to claim 1, further including two ToF sensors disposed in each electrical cabinet for redundant operation. 8. The system according to claim 1, wherein the lighting module includes a plurality of high-intensity LED lamps. 9. A sensing and alert system for a plurality of electrical switchgear cabinets, each switchgear cabinet including at least one high-voltage circuit breaker, and each cabinet in communication with a central programmable logic controller (PLC) configured to activate and deactivate the high-voltage circuit breaker in the corresponding cabinet, the sensing and alert system comprising:
a time of flight (ToF) sensor arranged to capture real-time image data; a lighting module; a processor operatively coupled to the ToF sensor and to the lighting module; the sensing and alert system affixed external to the electrical switchgear cabinet in a position proximal to the electrical switchgear cabinet and arranged so that the ToF sensor captures real-time image data in a predefined area proximal to the switchgear cabinet, and the lighting module illuminates the area proximal to the switchgear cabinet when activated; the processor configured to receive the captured real-time image data from the ToF sensor and compare the real-time image data with stored calibration image data, wherein if the real-time image data differs from the stored calibration image data by more than a predetermined amount, then the processor is configured to transmit an alarm signal to a remote monitoring station and to activate the lighting module; and wherein, the remote monitoring station, upon receipt of the alarm signal from the processor, deactivates the at least one high-voltage circuit breaker in the corresponding switchgear cabinet by sending a command to the central PLC. 10. The system according to claim 9, wherein the remote monitoring station, upon receipt of the alarm signal, monitors activity in the electrical room, and controls the PLC to deactivate selected high-voltage circuit breakers if a dangerous condition is detected. 11. The system according to claim 9, wherein the predefined area proximal to the switchgear represents a bounded zone or prohibited zone such that objects entering into the prohibited zone cause generation of the alarm signal. 12. The system according to claim 9, wherein the lighting module, the processor, and the ToF sensor receive power from a 24 volt DC power source. 13. The system according to claim 9, wherein the ToF sensor is a Polytec epc610 camera module. 14. 
The system according to claim 9, wherein the central PLC controls the high-voltage circuit breakers in a plurality of electrical switchgear cabinets. 15. The system according to claim 9, wherein after the processor turns on the lighting module, the lighting module remains activated until a predetermined time after the processor determines that the real-time image data is not different than the stored calibration image data by more than a predetermined amount. 16. The system according to claim 9, further including two low resolution ToF sensors for redundant operation. 17. The system according to claim 16, wherein each low resolution ToF sensor provides a single pixel of data. 18. The system according to claim 9, wherein the lighting module includes a plurality of high-intensity LED lamps. 19. A sensing and alert system, comprising:
a time of flight (ToF) sensor arranged to capture real-time image data; a lighting module; a processor operatively coupled to the ToF sensor and to the lighting module; the sensing and alert system affixed external to an area to be monitored and arranged so that the ToF sensor captures real-time image data in a predefined area proximal to the area to be monitored, and the lighting module illuminates the area proximal to the area to be monitored when activated; the processor configured to receive the captured real-time image data from the ToF sensor and compare the real-time image data with stored calibration image data, wherein if the real-time image data differs from the stored calibration image data by more than a predetermined amount, then the processor is configured to transmit an alarm signal to a remote monitoring station and to activate the lighting module. | A sensing and alert system is installed in a plurality of electrical switchgear cabinets. Each switchgear cabinet includes at least one high-voltage circuit breaker, and each switchgear cabinet is in communication with a central programmable logic controller (PLC) configured to activate and deactivate the high-voltage circuit breaker in the corresponding switchgear cabinet. The sensing and alert system includes a time of flight (ToF) sensor arranged to capture real-time image data, a lighting module affixed to an inside portion of the switchgear cabinet, and a processor operatively coupled to the ToF sensor and to the lighting module. The sensing and alert system is affixed to an inside portion of each switchgear cabinet. The processor compares real-time image data from the ToF sensor to the stored calibration image data and transmits an alarm if the difference is greater than a predetermined amount.1. 
A sensing and alert system for a plurality of electrical switchgear cabinets, each switchgear cabinet including at least one high-voltage circuit breaker, and each switchgear cabinet in communication with a central programmable logic controller (PLC) configured to activate and deactivate the high-voltage circuit breaker in the corresponding switchgear cabinet, the sensing and alert system comprising:
a time of flight (ToF) sensor arranged to capture real-time image data; a lighting module affixed to an inside portion of the switchgear cabinet; a processor operatively coupled to the ToF sensor and to the lighting module; the sensing and alert system affixed to an inside portion of each switchgear cabinet so that the ToF sensor is in direct line-of-sight of an inside portion of a door of the cabinet, and the lighting module is positioned so as to illuminate an inside portion of the cabinet when activated; the processor configured to receive the real-time image data from the ToF sensor and compare the real-time image data with stored calibration image data, wherein if the real-time image data differs from the stored calibration image data by more than a predetermined amount, the processor transmits an alarm signal to the central PLC, and activates the lighting module; and wherein, the central PLC, upon receipt of the alarm signal from the processor, deactivates the at least one high-voltage circuit breaker in the corresponding switchgear cabinet. 2. The system according to claim 1, wherein the lighting module, the processor, and the ToF sensor receive power from a 24 volt DC power source provided within each electrical cabinet. 3. The system according to claim 1, wherein the ToF sensor is a Polytec epc610 camera module. 4. The system according to claim 1, wherein the central PLC controls the high-voltage circuit breakers in a plurality of electrical switchgear cabinets. 5. The system according to claim 1, wherein after the processor turns on the lighting module of a selected electrical cabinet, the lighting module remains activated until a predetermined time after the processor determines that the real-time image data is not different than the stored calibration image data by more than a predetermined amount. 6. 
The system according to claim 1, wherein the sensing and alert system is attached magnetically to an inside portion of the switchgear cabinet so as to comply with Underwriters Laboratory (UL) requirements for electrical switchgear cabinetry. 7. The system according to claim 1, further including two ToF sensors disposed in each electrical cabinet for redundant operation. 8. The system according to claim 1, wherein the lighting module includes a plurality of high-intensity LED lamps. 9. A sensing and alert system for a plurality of electrical switchgear cabinets, each switchgear cabinet including at least one high-voltage circuit breaker, and each cabinet in communication with a central programmable logic controller (PLC) configured to activate and deactivate the high-voltage circuit breaker in the corresponding cabinet, the sensing and alert system comprising:
a time of flight (ToF) sensor arranged to capture real-time image data; a lighting module; a processor operatively coupled to the ToF sensor and to the lighting module; the sensing and alert system affixed external to the electrical switchgear cabinet in a position proximal to the electrical switchgear cabinet and arranged so that the ToF sensor captures real-time image data in a predefined area proximal to the switchgear cabinet, and the lighting module illuminates the area proximal to the switchgear cabinet when activated; the processor configured to receive the captured real-time image data from the ToF sensor and compare the real-time image data with stored calibration image data, wherein if the real-time image data differs from the stored calibration image data by more than a predetermined amount, then the processor is configured to transmit an alarm signal to a remote monitoring station and to activate the lighting module; and wherein, the remote monitoring station, upon receipt of the alarm signal from the processor, deactivates the at least one high-voltage circuit breaker in the corresponding switchgear cabinet by sending a command to the central PLC. 10. The system according to claim 9, wherein the remote monitoring station, upon receipt of the alarm signal, monitors activity in the electrical room, and controls the PLC to deactivate selected high-voltage circuit breakers if a dangerous condition is detected. 11. The system according to claim 9, wherein the predefined area proximal to the switchgear represents a bounded zone or prohibited zone such that objects entering into the prohibited zone cause generation of the alarm signal. 12. The system according to claim 9, wherein the lighting module, the processor, and the ToF sensor receive power from a 24 volt DC power source. 13. The system according to claim 9, wherein the ToF sensor is a Polytec epc610 camera module. 14. 
The system according to claim 9, wherein the central PLC controls the high-voltage circuit breakers in a plurality of electrical switchgear cabinets. 15. The system according to claim 9, wherein after the processor turns on the lighting module, the lighting module remains activated until a predetermined time after the processor determines that the real-time image data is not different than the stored calibration image data by more than a predetermined amount. 16. The system according to claim 9, further including two low resolution ToF sensors for redundant operation. 17. The system according to claim 16, wherein each low resolution ToF sensor provides a single pixel of data. 18. The system according to claim 9, wherein the lighting module includes a plurality of high-intensity LED lamps. 19. A sensing and alert system, comprising:
a time of flight (ToF) sensor arranged to capture real-time image data; a lighting module; a processor operatively coupled to the ToF sensor and to the lighting module; the sensing and alert system affixed external to an area to be monitored and arranged so that the ToF sensor captures real-time image data in a predefined area proximal to the area to be monitored, and the lighting module illuminates the area proximal to the area to be monitored when activated; the processor configured to receive the captured real-time image data from the ToF sensor and compare the real-time image data with stored calibration image data, wherein if the real-time image data differs from the stored calibration image data by more than a predetermined amount, then the processor is configured to transmit an alarm signal to a remote monitoring station and to activate the lighting module. | 2,400
8,657 | 8,657 | 16,112,196 | 2,473 | A method operates a field bus system, wherein the field bus system has: a gateway, which has a network connection for a network of a specified type and a fieldbus connection for a fieldbus, and a number of fieldbus nodes, wherein the fieldbus nodes are coupled to each other and to the gateway via the fieldbus for the purpose of data exchange. Addressing takes place in the network of the specified type by network addresses, wherein the network addresses have a first part which designates a destination address, and a second part which designates a port of the destination address. The method comprises the steps of: creating a bus configuration, wherein the bus configuration assigns a destination address to a port, and performing a network address translation using the gateway based on the bus configuration created. | 1. A method for operating a fieldbus system, wherein the fieldbus system comprises:
a gateway, which has a network connection for a network of a specified type and a fieldbus connection for a fieldbus, and a number of fieldbus nodes, wherein the fieldbus nodes are coupled to each other and to the gateway via the fieldbus for the purpose of data exchange, wherein addressing takes place in the network of the specified type by way of network addresses, wherein the network addresses have a first part which designates a destination address, and a second part which designates a port of the destination address, the method comprising the steps of: creating a bus configuration, wherein the bus configuration assigns a destination address to a port; and performing a network address translation using the gateway based on the bus configuration created. 2. The method as claimed in claim 1, wherein
performing the network address translation using the gateway based on the bus configuration created comprises the steps of: receiving a data packet, which is addressed to the gateway, via the network connection for the network of the specified type; extracting a port from a network address contained in the received data packet; processing the received data packet by replacing the destination address contained in the received data packet by a destination address which is assigned to the extracted port in the bus configuration; and forwarding the processed data packet via the fieldbus connection of the gateway to the fieldbus node which has the destination address of the processed data packet. 3. The method as claimed in claim 1, wherein
the network connection for the network of the specified type is an Ethernet connection. 4. The method as claimed in claim 1, wherein
the fieldbus is an EtherCAT fieldbus. 5. The method as claimed in claim 1, wherein
the destination address is an IPv4 address or IPv6 address. 6. The method as claimed in claim 2, wherein
the forwarding of the processed data packet via the fieldbus connection of the gateway to the fieldbus node that has the destination address of the processed data packet is performed by an Ethernet over EtherCAT protocol. 7. The method as claimed in claim 1, wherein
the fieldbus system further comprises: a diagnostic and/or commissioning device which is coupled to the gateway over the network of the specified type for the purpose of data exchange, wherein the diagnostic and/or commissioning device exchanges data with the fieldbus nodes via the gateway. 8. The method as claimed in claim 7, wherein
the creation of the bus configuration is performed by the diagnostic and/or commissioning device. 9. The method as claimed in claim 7, wherein
the diagnostic and/or commissioning device is configured to determine a port from a fieldbus-specific address and to send said port to the gateway as part of a data packet. 10. A gateway, comprising:
a network connection for a network of a specified type; a fieldbus connection for a fieldbus; and a control unit, which is configured to carry out a network address translation based on a bus configuration. | A method operates a field bus system, wherein the field bus system has: a gateway, which has a network connection for a network of a specified type and a fieldbus connection for a fieldbus, and a number of fieldbus nodes, wherein the fieldbus nodes are coupled to each other and to the gateway via the fieldbus for the purpose of data exchange. Addressing takes place in the network of the specified type by network addresses, wherein the network addresses have a first part which designates a destination address, and a second part which designates a port of the destination address. The method comprises the steps of: creating a bus configuration, wherein the bus configuration assigns a destination address to a port, and performing a network address translation using the gateway based on the bus configuration created.1. A method for operating a fieldbus system, wherein the fieldbus system comprises:
a gateway, which has a network connection for a network of a specified type and a fieldbus connection for a fieldbus, and a number of fieldbus nodes, wherein the fieldbus nodes are coupled to each other and to the gateway via the fieldbus for the purpose of data exchange, wherein addressing takes place in the network of the specified type by way of network addresses, wherein the network addresses have a first part which designates a destination address, and a second part which designates a port of the destination address, the method comprising the steps of: creating a bus configuration, wherein the bus configuration assigns a destination address to a port; and performing a network address translation using the gateway based on the bus configuration created. 2. The method as claimed in claim 1, wherein
performing the network address translation using the gateway based on the bus configuration created comprises the steps of: receiving a data packet, which is addressed to the gateway, via the network connection for the network of the specified type; extracting a port from a network address contained in the received data packet; processing the received data packet by replacing the destination address contained in the received data packet by a destination address which is assigned to the extracted port in the bus configuration; and forwarding the processed data packet via the fieldbus connection of the gateway to the fieldbus node which has the destination address of the processed data packet. 3. The method as claimed in claim 1, wherein
the network connection for the network of the specified type is an Ethernet connection. 4. The method as claimed in claim 1, wherein
the fieldbus is an EtherCAT fieldbus. 5. The method as claimed in claim 1, wherein
the destination address is an IPv4 address or IPv6 address. 6. The method as claimed in claim 2, wherein
the forwarding of the processed data packet via the fieldbus connection of the gateway to the fieldbus node that has the destination address of the processed data packet is performed by an Ethernet over EtherCAT protocol. 7. The method as claimed in claim 1, wherein
the fieldbus system further comprises: a diagnostic and/or commissioning device which is coupled to the gateway over the network of the specified type for the purpose of data exchange, wherein the diagnostic and/or commissioning device exchanges data with the fieldbus nodes via the gateway. 8. The method as claimed in claim 7, wherein
the creation of the bus configuration is performed by the diagnostic and/or commissioning device. 9. The method as claimed in claim 7, wherein
the diagnostic and/or commissioning device is configured to determine a port from a fieldbus-specific address and to send said port to the gateway as part of a data packet. 10. A gateway, comprising:
a network connection for a network of a specified type; a fieldbus connection for a fieldbus; and a control unit, which is configured to carry out a network address translation based on a bus configuration. | 2,400 |
8,658 | 8,658 | 15,503,068 | 2,477 | A node ( 100 ) of a cellular network sends an uplink grant ( 803 ) to a communication device ( 10 ). The uplink grant indicates uplink radio resources allocated to the communication device ( 10 ) in reoccurring time intervals. In response to detecting a need for an uplink retransmission ( 814 ) by a further communication device ( 10 ′) on at least a part of the allocated uplink radio resources in a certain one of the time intervals, the node ( 100 ) sends control information ( 808 ) to the communication device ( 10 ). The control information ( 808 ) temporarily disables utilization of at least this part of the allocated uplink radio resources in at least this certain time interval by the communication device ( 10 ). | 1-54. (canceled) 55. A method of controlling radio transmission in a cellular network, the method comprising:
a node of the cellular network sending an uplink grant to a communication device, the uplink grant indicating uplink radio resources allocated to the communication device in reoccurring time intervals; and in response to detecting a need for an uplink retransmission by a further communication device in at least a part of the allocated uplink radio resources in one of the time intervals, the node sending control information to the communication device, the control information temporarily disabling utilization of at least the part of the allocated uplink radio resources in at least the one of the time intervals by the communication device. 56. The method of claim 55, wherein the disabling of the utilization is for a configured time period. 57. The method of claim 56, wherein the control information indicates the time period. 58. The method of claim 55, wherein the control information indicates the uplink radio resources of which the utilization by the communication device is disabled. 59. The method of claim 55, further comprising the node sending further control information to the communication device, the further control information re-enabling the utilization of the allocated uplink radio resources by the communication device. 60. The method of claim 55, wherein the allocated uplink radio resources are overlapping with uplink radio resources which are allocated by a further uplink grant to the further communication device. 61. The method of claim 55, further comprising, for each of the time intervals, the node selecting between:
an active mode in which the communication device performs an uplink transmission in the allocated uplink radio resources; and an inactive mode in which the communication device performs no uplink transmission in the allocated uplink radio resources. 62. The method of claim 61, further comprising, in response to detecting no signals from the communication device in the allocated uplink radio resources, the node selecting the inactive mode. 63. A method of controlling radio transmission in a cellular network, the method comprising:
a communication device of the cellular network receiving an uplink grant from the cellular network, the uplink grant indicating uplink radio resources allocated to the communication device in reoccurring time intervals; after receiving the uplink grant, the communication device receiving control information from the cellular network; and in response to the control information, the communication device temporarily disabling utilization of at least a part of the allocated uplink radio resources in at least one of the time intervals by the communication device. 64. The method of claim 63, wherein the disabling of the utilization is for a configured time period. 65. The method of claim 64, wherein the control information indicates the time period. 66. The method of claim 63, wherein the control information indicates the uplink radio resources of which the utilization by the communication device is disabled. 67. The method of claim 63, further comprising the communication device receiving further control information from the cellular network, the further control information re-enabling utilization of the allocated uplink radio resources by the communication device. 68. The method of claim 63, wherein the allocated uplink radio resources are overlapping with uplink radio resources which are allocated by a further uplink grant to a further communication device. 69. The method of claim 63, further comprising, for each of the time intervals, the communication device selecting between:
an active mode in which the communication device performs an uplink transmission in the allocated uplink radio resources; and an inactive mode in which the communication device performs no uplink transmission in the allocated uplink radio resources. 70. The method of claim 69, further comprising:
for each of the time intervals, the communication device checking whether uplink data is available for transmission by the communication device; and in response to uplink data being available for transmission, the communication device selecting the active mode to perform an uplink transmission comprising at least a part of the uplink data. 71. The method of claim 69, further comprising:
for each of the time intervals, the communication device checking whether one or more conditions for sending a buffer status report are met; and in response to one or more of the conditions being met, the communication device selecting the active mode to send an uplink transmission comprising the buffer status report, the buffer status report indicating an amount of uplink data available for transmission by the communication device. 72. A node for a cellular network, the node comprising:
an interface for connecting to a communication device and a further communication device; and processing circuitry configured to:
send an uplink grant to a communication device via the interface, the uplink grant indicating uplink radio resources allocated to the communication device in reoccurring time intervals; and
in response to detecting a need for an uplink retransmission by the further communication device in at least a part of the allocated uplink radio resources in one of the time intervals, send control information to the communication device via the interface, the control information temporarily disabling utilization of at least the part of the allocated uplink radio resources in at least the one of the time intervals by the communication device. 73. The node of claim 72, wherein the disabling of the utilization is for a configured time period. 74. The node of claim 73, wherein the control information indicates the time period. 75. The node of claim 72, wherein the control information indicates the uplink radio resources of which the utilization by the communication device is disabled. 76. The node of claim 72, wherein the processing circuitry is configured to send further control information to the communication device via the interface, the further control information re-enabling the utilization of the allocated uplink radio resources by the communication device. 77. The node of claim 72, wherein the allocated uplink radio resources are overlapping with uplink radio resources which are allocated by a further uplink grant to the further communication device. 78. The node of claim 72, wherein the processing circuitry is configured to, for each of the time intervals, select between:
an active mode in which the communication device performs an uplink transmission in the allocated uplink radio resources; and an inactive mode in which the communication device performs no uplink transmission in the allocated uplink radio resources. 79. The node of claim 78, wherein the processing circuitry is configured to select the inactive mode in response to detecting no signals from the communication device in the allocated uplink radio resources. 80. A communication device, comprising:
an interface for connecting to a cellular network; and processing circuitry configured to:
receive, via the interface, an uplink grant from the cellular network, the uplink grant indicating uplink radio resources allocated to the communication device in reoccurring time intervals;
after receiving the uplink grant, receive control information from the cellular network via the interface; and
in response to the control information, temporarily disable utilization of at least a part of the allocated uplink radio resources in at least one of the time intervals by the communication device. 81. The communication device of claim 80, wherein the disabling of the utilization is for a configured time period. 82. The communication device of claim 81, wherein the control information indicates the time period. 83. The communication device of claim 80, wherein the control information indicates the uplink radio resources of which the utilization by the communication device is disabled. 84. The communication device of claim 80, wherein the processing circuitry is configured to receive further control information from the cellular network via the interface, the further control information re-enabling utilization of the allocated uplink radio resources by the communication device. 85. The communication device of claim 80, wherein the allocated uplink radio resources are overlapping with uplink radio resources which are allocated by a further uplink grant to a further communication device. 86. The communication device of claim 80, wherein the processing circuitry is configured to, for each of the time intervals, select between:
an active mode in which the communication device performs an uplink transmission in the allocated uplink radio resources; and an inactive mode in which the communication device performs no uplink transmission in the allocated uplink radio resources. 87. The communication device of claim 86, wherein the processing circuitry is configured to:
for each of the time intervals, check whether uplink data is available for transmission by the communication device; and in response to uplink data being available for transmission, select the active mode to perform an uplink transmission comprising at least a part of the uplink data. 88. The communication device of claim 86, wherein the processing circuitry is configured to:
for each of the time intervals, check whether one or more conditions for sending a buffer status report are met; and in response to one or more of the conditions being met, select the active mode to send an uplink transmission comprising the buffer status report, the buffer status report indicating an amount of uplink data available for transmission by the communication device. 89. A non-transitory computer readable recording medium storing a computer program product for controlling radio transmission in a cellular network, the computer program product comprising software instructions which, when run on processing circuitry of a node of the cellular network, causes the node to:
send an uplink grant to a communication device, the uplink grant indicating uplink radio resources allocated to the communication device in reoccurring time intervals; and in response to detecting a need for an uplink retransmission by a further communication device in at least a part of the allocated uplink radio resources in one of the time intervals, send control information to the communication device, the control information temporarily disabling utilization of at least the part of the allocated uplink radio resources in at least the one of the time intervals by the communication device. 90. A non-transitory computer readable recording medium storing a computer program product for controlling radio transmission in a cellular network, the computer program product comprising software instructions which, when run on processing circuitry of a communication device of the cellular network, causes the communication device to:
receive an uplink grant from the cellular network, the uplink grant indicating uplink radio resources allocated to the communication device in reoccurring time intervals; after receiving the uplink grant, receive control information from the cellular network; and in response to the control information, temporarily disable utilization of at least a part of the allocated uplink radio resources in at least one of the time intervals by the communication device. | 2,400 |
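The behavior recited in this record's claims — a standing uplink grant in reoccurring time intervals that the network can temporarily suspend so another device's retransmission can use the resources, after which the device resumes transmitting only when it has data — can be illustrated with a small sketch. This is not the patent's implementation; all names (`Device`, `ControlInfo`, the interval counter) are invented for illustration:

```python
# Sketch of the claimed grant-override behavior (invented names, assumptions only):
# the device holds a configured grant, a control message suspends it for a
# number of intervals, and it re-enables automatically when the window expires.
from dataclasses import dataclass

@dataclass
class ControlInfo:
    disable: bool
    intervals: int  # number of reoccurring time intervals affected

class Device:
    def __init__(self):
        self.granted = False
        self.disabled_for = 0  # remaining intervals in which the grant is suspended

    def receive_grant(self):
        self.granted = True

    def receive_control(self, info: ControlInfo):
        if info.disable:
            self.disabled_for = info.intervals

    def on_interval(self, has_data: bool) -> bool:
        """Return True if the device transmits in this interval."""
        if self.disabled_for > 0:
            self.disabled_for -= 1        # inactive: grant temporarily overridden
            return False
        return self.granted and has_data  # active mode only when data is buffered

dev = Device()
dev.receive_grant()
assert dev.on_interval(has_data=True)           # normal use of the grant
dev.receive_control(ControlInfo(disable=True, intervals=2))
assert not dev.on_interval(has_data=True)       # suspended for the retransmission
assert not dev.on_interval(has_data=True)
assert dev.on_interval(has_data=True)           # grant usable again after the window
```

The `on_interval` check mirrors the active/inactive mode selection of the dependent claims: inactive while the disable window runs, and otherwise active only when uplink data is available for transmission.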
8,659 | 8,659 | 15,500,820 | 2,487 | Examples described herein may include a computing system that may include a movable surface and a model acquisition engine configured to acquire three-dimensional model data representing a first object disposed on the movable surface. The computing system may also include a communication engine to send the model data to another computing system and to receive from the other computing system manipulation data associated with the model data. The computing system may further include a movement and projection engine to move the movable surface in accordance with the received manipulation data. | 1. A computing system comprising:
a movable surface; a model acquisition engine to acquire three-dimensional model data representing a first object disposed on the movable surface; a communication engine to send the model data to another computing system and to receive, from the other computing system, manipulation data associated with the model data; and a movement and projection engine to move the movable surface in accordance with the received manipulation data. 2. The computing system of claim 1, wherein the manipulation data comprises orientation data associated with an input by a user of the other computing system, and wherein the movement and projection engine is to move the movable surface based on the orientation data. 3. The computing system of claim 2, wherein:
the movable surface comprises a turn table; the orientation data comprises an angle; and the movement and projection engine is to rotate the turn table based on the angle. 4. The computing system of claim 1, wherein the manipulation data comprises applied image data associated with an input by a user of the other computing system, and wherein the movement and projection engine is to move the movable surface based on the applied image data. 5. The computing system of claim 1, further comprising a projector, wherein:
the manipulation data comprises applied image data associated with an input by a user of the other computing system; the projector is to project the applied image data onto the first object. 6. The computing system of claim 5, further comprising a second object, wherein the communication engine is further to receive from the other computing system user image data, and to display the user image data on the second object. 7. The computing system of claim 1, further comprising a camera to capture at least one image representing the first object, wherein the model acquisition engine is to acquire the three-dimensional model data based at least on the image. 8. A non-transitory machine-readable storage medium comprising instructions executable by a processing resource of a computing system comprising a display, the instructions executable to:
receive three-dimensional model data from another computing system comprising a projector and a movable surface, the three-dimensional model data being associated with an object placed on the movable surface; display the three-dimensional model data on the display; receive user input associated with the displayed three-dimensional model data, the user input requesting at least one of re-orienting the displayed three-dimensional model data and applying imagery to the displayed three-dimensional model data; and send to the other computing system manipulation data comprising at least one of orientation data associated with the user input and applied image data associated with the user input. 9. The non-transitory machine-readable storage medium of claim 8, wherein the model data comprises object orientation data associated with an orientation of the object, and wherein displaying the three-dimensional model data comprises displaying the three-dimensional model with a perspective view corresponding to the orientation of the object. 10. A method comprising:
acquiring model data representing an object disposed on a movable surface; receiving manipulation data comprising at least applied image data associated with a surface of the model data; determining a surface of the object that corresponds to the surface of the model data; and moving the movable surface based at least on a location of the surface of the object. 11. The method of claim 10, further comprising:
projecting the applied image data onto the surface of the object. 12. The method of claim 10, wherein moving the movable surface is performed by a first computing system, and wherein receiving the manipulation data comprises receiving the manipulation data by the first computing system from a second computing system. 13. The method of claim 10, wherein moving the movable surface comprises moving the movable surface to cause the surface of the object to face a user. 14. The method of claim 10, further comprising obtaining a first image representing the object, wherein acquiring the model data comprises generating the model data based at least on the first image. 15. The method of claim 14, wherein the first image represents the object in a first orientation, the method further comprising moving the movable surface to put the object in a second orientation and obtaining a second image representing the object in the second orientation. | 2,400 |
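Claims 2-3 of this record describe manipulation data carrying orientation data — an angle — that the movement and projection engine uses to rotate a turn table. That dispatch can be sketched in a few lines; the names (`TurnTable`, `apply_manipulation`, the `orientation_angle` key) are invented for illustration and are not from the patent:

```python
# Sketch (assumed names): orientation data from a remote user's input is an
# angle, and the movement engine rotates the turn table by that angle.
class TurnTable:
    def __init__(self):
        self.angle = 0.0  # current orientation in degrees

    def rotate(self, delta: float):
        self.angle = (self.angle + delta) % 360.0

def apply_manipulation(table: TurnTable, manipulation: dict):
    # Orientation data drives the turn table (claims 2-3). Applied image data
    # (claims 4-5) could likewise trigger movement so the projector reaches
    # the targeted surface; that path is omitted from this sketch.
    if "orientation_angle" in manipulation:
        table.rotate(manipulation["orientation_angle"])

t = TurnTable()
apply_manipulation(t, {"orientation_angle": 450.0})
assert t.angle == 90.0  # rotation wraps modulo a full turn
```

Wrapping modulo 360 keeps the stored orientation canonical regardless of how many full turns the remote user's input requests.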
8,660 | 8,660 | 15,804,453 | 2,419 | A processing module of a dispersed storage network determines an obfuscation method from a plurality of obfuscation methods for a data segment. The method continues with the processing module obfuscating the data segment according to the obfuscation method to produce an obfuscated data segment. The obfuscated data segment is encrypted and dispersed storage error encoded to produce a set of encoded data slices. The set of encoded data slices is then transmitted for storage in the dispersed storage network. | 1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:
receiving a data segment of a data object, wherein the data object is segmented into a plurality of data segments; determining an obfuscating method of a plurality of obfuscating methods for obfuscating the data segment; obfuscating the data segment; encrypting the data segment; dispersed storage error encoding the encrypted data segment to produce a set of encoded data slices; and transmitting the set of encoded data slices for storage in the DSN. 2. The method of claim 1 further comprising:
dispersed storage error encoding auxiliary data to produce a set of encoded auxiliary data slices; and
generating a sequence of output slices to obscure the set of encoded data slices by interspersing the set of encoded auxiliary data slices within the set of encoded data slices. 3. The method of claim 2, wherein the auxiliary data comprises at least one of:
null data; authentication information; a next pseudo random output sequencing order; a pseudo random output sequencing order identifier; a next outputting threshold; a random number generator output; an encryption key; a starting point for the pseudo random output sequencing order; a device identifier; a data identifier; a data type; a data size indicator; a priority indicator; a security indicator; or a performance indicator. 4. The method of claim 2 wherein the auxiliary data is encrypted via an all or nothing transformation and the encrypted data segment is encrypted via an all or nothing transformation. 5. The method of claim 1 wherein the obfuscating method comprises at least one of:
adding random bits to the data segment to create a new data segment;
inverting one or more bits of the data segment; and
replacing one or more bits of the received data segment with bits produced from an obfuscation calculation, wherein the obfuscation calculation is based on a portion of the data segment bits. 6. The method of claim 1 wherein the encoded data segment is encrypted via an all or nothing transformation. 7. The method of claim 1 wherein the determining an obfuscating method is based at least partially on at least one of:
one or more error coding dispersal storage function parameters;
a requester identifier (ID);
a vault lookup;
a data object name;
a data object;
a data stream;
sequence information;
a key;
a priority indicator;
a security indicator;
a command;
a predetermination;
a message;
information in a store data object message; and
a performance indicator. 8. A dispersed storage and task (DST) processing unit comprises:
at least one module, when operable within a computing device, that causes the computing device to:
receive a data segment of a data object, wherein the data object is segmented into a plurality of data segments;
determine an obfuscating method of a plurality of obfuscating methods for obfuscating the data segment;
obfuscate the data segment;
encrypt the data segment;
dispersed storage error encode the encrypted data segment to produce a set of encoded data slices; and
transmit the set of encoded data slices for storage in a distributed storage network (DSN). 9. The DST processing unit of claim 8, wherein the at least one module, when operable within a computing device, further causes the computing device to:
dispersed storage error encode auxiliary data to produce a set of encoded auxiliary data slices; and
generate a sequence of output slices to obscure the set of encoded data slices by interspersing the set of encoded auxiliary data slices within the set of encoded data slices. 10. The DST processing unit of claim 9, wherein the auxiliary data comprises at least one of:
null data; authentication information; a next pseudo random output sequencing order; a pseudo random output sequencing order identifier; a next outputting threshold; a random number generator output; an encryption key; a starting point for the pseudo random output sequencing order; a device identifier; a data identifier; a data type; a data size indicator; a priority indicator; a security indicator; and a performance indicator. 11. The DST processing unit of claim 9, wherein the at least one module, when operable within a computing device, further causes the computing device to:
encrypt the auxiliary data via an all or nothing transformation and the encrypted data segment is encrypted via an all or nothing transformation. 12. The DST processing unit of claim 8 wherein the obfuscating method comprises at least one of:
adding random bits to the data segment to create a new data segment;
inverting one or more bits of the data segment; and
replacing one or more bits of the received data segment with bits produced from an obfuscation calculation, wherein the obfuscation calculation is based on a portion of the data segment bits. 13. The DST processing unit of claim 8, wherein the encoded data segment is encrypted via an all or nothing transformation. 14. The DST processing unit of claim 8, wherein the at least one module, when operable within a computing device, further causes the computing device to:
determine the obfuscating method based at least partially on at least one of:
one or more error coding dispersal storage function parameters;
a requester identifier (ID);
a vault lookup;
a data object name;
a data object;
a data stream;
sequence information;
a key;
a priority indicator;
a security indicator;
a command;
a predetermination;
a message;
information in a store data object message; or
a performance indicator. 15. A computer readable storage medium comprises:
at least one memory section that stores operational instructions that, when executed by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), causes the one or more computing devices to:
receive a data segment of a data object, wherein the data object is segmented into a plurality of data segments;
determine an obfuscating method of a plurality of obfuscating methods for obfuscating the data segment;
obfuscate the data segment;
encrypt the data segment;
dispersed storage error encode the encrypted data segment to produce a set of encoded data slices; and
transmit the set of encoded data slices for storage in a distributed storage network (DSN). 16. The computer readable storage medium of claim 15, wherein the at least one memory section further causes the one or more computing devices to:
dispersed storage error encode auxiliary data to produce a set of encoded auxiliary data slices; and
generate a sequence of output slices to obscure the set of encoded data slices by interspersing the set of encoded auxiliary data slices within the set of encoded data slices. 17. The computer readable storage medium of claim 16 wherein the auxiliary data comprises at least one of:
null data;
authentication information;
a next pseudo random output sequencing order;
a pseudo random output sequencing order identifier;
a next outputting threshold;
a random number generator output;
an encryption key;
a starting point for the pseudo random output sequencing order;
a device identifier;
a data identifier;
a data type;
a data size indicator;
a priority indicator;
a security indicator; or
a performance indicator. 18. The computer readable storage medium of claim 15 wherein the at least one memory section further causes the one or more computing devices to:
encrypt the encoded data segment via an all or nothing transformation. 19. The computer readable storage medium of claim 15, wherein the obfuscating method comprises at least one of:
adding random bits to the data segment to create a new data segment; inverting one or more bits of the data segment; and replacing one or more bits of the received data segment with bits produced from an obfuscation calculation, wherein the obfuscation calculation is based on a portion of the data segment bits. 20. The computer readable storage medium of claim 15, wherein the at least one memory section further causes the one or more computing devices to:
determine the obfuscating method based at least partially on at least one of:
one or more error coding dispersal storage function parameters;
a requester identifier (ID);
a vault lookup;
a data object name;
a data object;
a data stream;
sequence information;
a key;
a priority indicator;
a security indicator;
a command;
a predetermination;
a message;
information in a store data object message; and
a performance indicator. | A processing module of a dispersed storage network determines an obfuscation method from a plurality of obfuscation methods for a data segment. The method continues with the processing module obfuscating the data segment according to the obfuscation method to produce an obfuscated data segment. The obfuscated data segment is encrypted and dispersed storage error encoded to produce a set of encoded data slices. The set of encoded data slices is then transmitted for storage in the dispersed storage network. 1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:
receiving a data segment of a data object, wherein the data object is segmented into a plurality of data segments; determining an obfuscating method of a plurality of obfuscating methods for obfuscating the data segment; obfuscating the data segment; encrypting the data segment; dispersed storage error encoding the encrypted data segment to produce a set of encoded data slices; and transmitting the set of encoded data slices for storage in the DSN. 2. The method of claim 1 further comprising:
dispersed storage error encoding auxiliary data to produce a set of encoded auxiliary data slices; and
generating a sequence of output slices to obscure the set of encoded data slices by interspersing the set of encoded auxiliary data slices within the set of encoded data slices. 3. The method of claim 2, wherein the auxiliary data comprises at least one of:
null data; authentication information; a next pseudo random output sequencing order; a pseudo random output sequencing order identifier; a next outputting threshold; a random number generator output; an encryption key; a starting point for the pseudo random output sequencing order; a device identifier; a data identifier; a data type; a data size indicator; a priority indicator; a security indicator; or a performance indicator. 4. The method of claim 2 wherein the auxiliary data is encrypted via an all or nothing transformation and the encrypted data segment is encrypted via an all or nothing transformation. 5. The method of claim 1 wherein the obfuscating method comprises at least one of:
adding random bits to the data segment to create a new data segment;
inverting one or more bits of the data segment; and
replacing one or more bits of the received data segment with bits produced from an obfuscation calculation, wherein the obfuscation calculation is based on a portion of the data segment bits. 6. The method of claim 1 wherein the encoded data segment is encrypted via an all or nothing transformation. 7. The method of claim 1 wherein the determining an obfuscating method is based at least partially on at least one of:
one or more error coding dispersal storage functions parameters;
a requester identifier (ID);
a vault lookup;
a data object name;
a data object;
a data stream;
sequence information;
a key;
a priority indicator;
a security indicator;
a command;
a predetermination;
a message;
information in a store data object message; and
a performance indicator. 8. A dispersed storage and task (DST) processing unit comprises:
at least one module, when operable within a computing device, that causes the computing device to:
receive a data segment of a data object, wherein the data object is segmented into a plurality of data segments;
determine an obfuscating method of a plurality of obfuscating methods for obfuscating the data segment;
obfuscate the data segment;
encrypt the data segment;
dispersed storage error encode the encrypted data segment to produce a set of encoded data slices; and
transmit the set of encoded data slices for storage in a distributed storage network (DSN). 9. The DST processing unit of claim 8, wherein the at least one module, when operable within a computing device, further causes the computing device to:
dispersed storage error encode auxiliary data to produce a set of encoded auxiliary data slices; and
generate a sequence of output slices to obscure the set of encoded data slices by interspersing the set of encoded auxiliary data slices within the set of encoded data slices. 10. The DST processing unit of claim 9, wherein the auxiliary data comprises at least one of:
null data; authentication information; a next pseudo random output sequencing order; a pseudo random output sequencing order identifier; a next outputting threshold; a random number generator output; an encryption key; a starting point for the pseudo random output sequencing order; a device identifier; a data identifier; a data type; a data size indicator; a priority indicator; a security indicator; and a performance indicator. 11. The DST processing unit of claim 9, wherein the at least one module, when operable within a computing device, further causes the computing device to:
encrypt the auxiliary data via an all or nothing transformation and the encrypted data segment is encrypted via an all or nothing transformation. 12. The DST processing unit of claim 8 wherein the obfuscating method comprises at least one of:
adding random bits to the data segment to create a new data segment;
inverting one or more bits of the data segment; and
replacing one or more bits of the received data segment with bits produced from an obfuscation calculation, wherein the obfuscation calculation is based on a portion of the data segment bits. 13. The DST processing unit of claim 8, wherein the encoded data segment is encrypted via an all or nothing transformation. 14. The DST processing unit of claim 8, wherein the at least one module, when operable within a computing device, further causes the computing device to:
determine the obfuscating method based at least partially on at least one of:
one or more error coding dispersal storage functions parameters;
a requester identifier (ID);
a vault lookup;
a data object name;
a data object;
a data stream;
sequence information;
a key;
a priority indicator;
a security indicator;
a command;
a predetermination;
a message;
information in a store data object message; or
a performance indicator. 15. A computer readable storage medium comprises:
at least one memory section that stores operational instructions that, when executed by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), causes the one or more computing devices to:
receive a data segment of a data object, wherein the data object is segmented into a plurality of data segments;
determine an obfuscating method of a plurality of obfuscating methods for obfuscating the data segment;
obfuscate the data segment;
encrypt the data segment;
dispersed storage error encode the encrypted data segment to produce a set of encoded data slices; and
transmit the set of encoded data slices for storage in a distributed storage network (DSN). 16. The computer readable storage medium of claim 15, wherein the at least one memory section further causes the one or more computing devices to:
dispersed storage error encode auxiliary data to produce a set of encoded auxiliary data slices; and
generate a sequence of output slices to obscure the set of encoded data slices by interspersing the set of encoded auxiliary data slices within the set of encoded data slices. 17. The computer readable storage medium of claim 16 wherein the auxiliary data comprises at least one of:
null data;
authentication information;
a next pseudo random output sequencing order;
a pseudo random output sequencing order identifier;
a next outputting threshold;
a random number generator output;
an encryption key;
a starting point for the pseudo random output sequencing order;
a device identifier;
a data identifier;
a data type;
a data size indicator;
a priority indicator;
a security indicator; or
a performance indicator. 18. The computer readable storage medium of claim 15 wherein the at least one memory section further causes the one or more computing devices to:
encrypt the encoded data segment via an all or nothing transformation. 19. The computer readable storage medium of claim 15, wherein the obfuscating method comprises at least one of:
adding random bits to the data segment to create a new data segment; inverting one or more bits of the data segment; and replacing one or more bits of the received data segment with bits produced from an obfuscation calculation, wherein the obfuscation calculation is based on a portion of the data segment bits. 20. The computer readable storage medium of claim 15, wherein the at least one memory section further causes the one or more computing devices to:
determine the obfuscating method based at least partially on at least one of:
one or more error coding dispersal storage functions parameters;
a requester identifier (ID);
a vault lookup;
a data object name;
a data object;
a data stream;
sequence information;
a key;
a priority indicator;
a security indicator;
a command;
a predetermination;
a message;
information in a store data object message; and
a performance indicator. | 2,400 |
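The three obfuscating methods enumerated in the claims above (adding random bits, inverting bits, and replacing bits with bits produced from an obfuscation calculation) can be sketched in Python. This is a minimal sketch, not the patent's implementation: the function names, the number of appended random bytes, and the XOR-based obfuscation calculation are illustrative assumptions the claims do not specify.

```python
import os

def add_random_bits(segment: bytes, n_bytes: int = 4) -> bytes:
    """Add random bits to the data segment to create a new data segment
    (here: append n_bytes of random padding)."""
    return segment + os.urandom(n_bytes)

def invert_bits(segment: bytes, positions=(0,)) -> bytes:
    """Invert one or more bits of the data segment
    (here: flip all eight bits of each byte at the given positions)."""
    out = bytearray(segment)
    for p in positions:
        out[p] ^= 0xFF
    return bytes(out)

def replace_bits(segment: bytes) -> bytes:
    """Replace one or more bits with bits produced from an obfuscation
    calculation based on a portion of the data segment bits
    (here: overwrite byte 0 with the XOR of the remaining bytes)."""
    out = bytearray(segment)
    calc = 0
    for b in segment[1:]:
        calc ^= b
    out[0] = calc
    return bytes(out)

seg = b"\x01\x02\x03\x04"
assert invert_bits(seg) == b"\xfe\x02\x03\x04"
assert replace_bits(seg) == b"\x05\x02\x03\x04"  # 0x02 ^ 0x03 ^ 0x04 = 0x05
assert len(add_random_bits(seg)) == len(seg) + 4
```

Per claim 1, the obfuscated segment would then be encrypted, dispersed storage error encoded into a set of encoded data slices, and transmitted for storage in the DSN.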
8,661 | 8,661 | 14,846,720 | 2,459 | A third party system includes a tracking mechanism in various content provided by the third party system. When a client device presents content provided by the third party system, the client device executes the tracking mechanism, causing the client device to communicate information identifying the content, identifying a user associated with the online system, and other information to the online system. Based on the information received from the client device, the online system selects a rule from rules provided to the online system by the third party system. The online system then performs an action included in the selected rule, allowing the advertiser to initiate various actions by the online system while including a common tracking mechanism in different content provided by the third party system. | 1. A method comprising:
receiving one or more rules from a third party system, each rule identifying an action to perform based on one or more of: information identifying content presented to a user of the online system, information identifying the user to the online system, and information identifying additional content presented to the user of the online system; receiving information from a client device presenting content from a third party system to a user, the information identifying content presented to the user by the client device, information identifying the user to the online system, and information identifying additional content presented to the user by the client device; selecting a rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device; and performing an action identified by the selected rule. 2. The method of claim 1, wherein performing the action identified by the selected rule comprises:
associating the information identifying the user to the online system with a set of users of the online system specified by the rule. 3. The method of claim 1, wherein performing the action identified by the selected rule comprises:
identifying a conversion event based on the information identifying content presented to the user by the client device and the information identifying additional content presented to the user by the client device; and storing an association between the conversion event with the information identifying the user to the online system. 4. The method of claim 3, wherein the conversion event is selected from a group consisting of: adding a product to an online shopping cart maintained by the third party system, viewing the content presented to the user, viewing the content presented to the user for at least a specified amount of time, adding the product to a list of products associated with the user by the third party system, requesting information from the third party system, subscribing to a service provided by the third party system, storing content to the third party system, storing content provided by the third party system to the client device, and any combination thereof. 5. The method of claim 3, wherein the conversion event is selected from a group consisting of: indicating a preference for the content presented to the user, sharing the content presented to the user with another user, providing a comment associated with content presented to the user, and any combination thereof. 6. The method of claim 1, wherein selecting the rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device comprises:
selecting a rule including information identifying content presented to the user of the online system matching the received information identifying content presented to the user by the client device. 7. The method of claim 1, wherein selecting the rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device comprises:
selecting a rule including information identifying content presented to the user of the online system matching the received information identifying content presented to the user by the client device and including information identifying additional content presented to the user of the online system matching received information identifying additional content presented to the user by the client device. 8. The method of claim 1, wherein the information received from the client device further includes information describing the client device and one or more of the rules include information describing one or more client devices. 9. The method of claim 8, wherein selecting the rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device comprises:
selecting a rule including information identifying content presented to the user of the online system matching the received information identifying content presented to the user by the client device, including information identifying additional content presented to the user of the online system matching received information identifying additional content presented to the user by the client device, and including information describing one or more client devices matching received information identifying the client device. 10. The method of claim 8, wherein the information describing the client device is selected from a group consisting of: an operating system executing on the client device, a type of the client device, a model of the client device, a type of network connection by the client device, an application executing on the client device that presented content from the third party system, and any combination thereof. 11. The method of claim 1, wherein the information received from the client device further includes a date and a time when the client device presented the information from the third party system and rules include dates or times. 12. The method of claim 11, wherein selecting the rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device comprises:
selecting a rule including information identifying content presented to the user of the online system matching the received information identifying content presented to the user by the client device, including information identifying additional content presented to the user of the online system matching received information identifying additional content presented to the user by the client device, and including a date or a time matching the date and the time when the client device presented the information from the third party system. 13. A method comprising:
including instructions in content provided by a third party system that, when executed by a client device presenting the content, cause the client device to:
obtain information identifying additional content presented by the client device before the content;
obtain information identifying a user presented with the content by the client device from the client device; and
transmit the information identifying the additional content, the information identifying the user, and an identifier of the content presented by the third party system to the online system; and
communicating one or more rules to the online system, each rule identifying at least one action to perform based on one or more of the information identifying the additional content, the information identifying the user, and the identifier of the content. 14. The method of claim 13, wherein a rule identifies an action to include the user in a set of users specified by the rule. 15. The method of claim 13, wherein a rule identifies an action to associate a conversion event with the user. 16. The method of claim 15, wherein the conversion event is selected from a group consisting of: adding a product to an online shopping cart maintained by the third party system, viewing the content presented to the user, viewing the content presented to the user for at least a specified amount of time, adding the product to a list of products associated with the user by the third party system, requesting information from the third party system, subscribing to a service provided by the third party system, storing content to the third party system, storing content provided by the third party system to the client device, and any combination thereof. 17. The method of claim 15, wherein the conversion event is selected from a group consisting of: indicating a preference for the content presented to the user, sharing the content presented to the user with another user, providing a comment associated with content presented to the user, and any combination thereof. 18. The method of claim 15, wherein the instructions included in the content provided by the third party system, when executed by the client device presenting the content, further cause the client device to:
transmit a date and a time when the client device presented the content provided by the third party system or information describing the client device to the online system. 19. The method of claim 18, wherein a rule identifies an action to perform based on one or more selected from a group consisting of: the information identifying the additional content, the information identifying the user, the identifier of the content, the date and the time when the client device presented the content provided by the third party system, the information describing the client device, and any combination thereof. 20. A computer program product comprising a computer readable storage medium having instructions encoded thereon that, when executed by a processor, cause the processor to:
receive one or more rules from a third party system, each rule identifying an action to perform based on one or more of: information identifying content presented to a user of the online system, information identifying the user to the online system, and information identifying additional content presented to the user of the online system; receive information from a client device presenting content from a third party system to a user, the information identifying content presented to the user by the client device, information identifying the user to the online system, and information identifying additional content presented to the user by the client device; select a rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device; and perform an action identified by the selected rule. | A third party system includes a tracking mechanism in various content provided by the third party system. When a client device presents content provided by the third party system, the client device executes the tracking mechanism, causing the client device to communicate information identifying the content, identifying a user associated with the online system, and other information to the online system. Based on the information received from the client device, the online system selects a rule from rules provided to the online system by the third party system. The online system then performs an action included in the selected rule, allowing the advertiser to initiate various actions by the online system while including a common tracking mechanism in different content provided by the third party system. 1. A method comprising:
receiving one or more rules from a third party system, each rule identifying an action to perform based on one or more of: information identifying content presented to a user of the online system, information identifying the user to the online system, and information identifying additional content presented to the user of the online system; receiving information from a client device presenting content from a third party system to a user, the information identifying content presented to the user by the client device, information identifying the user to the online system, and information identifying additional content presented to the user by the client device; selecting a rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device; and performing an action identified by the selected rule. 2. The method of claim 1, wherein performing the action identified by the selected rule comprises:
associating the information identifying the user to the online system with a set of users of the online system specified by the rule. 3. The method of claim 1, wherein performing the action identified by the selected rule comprises:
identifying a conversion event based on the information identifying content presented to the user by the client device and the information identifying additional content presented to the user by the client device; and storing an association between the conversion event with the information identifying the user to the online system. 4. The method of claim 3, wherein the conversion event is selected from a group consisting of: adding a product to an online shopping cart maintained by the third party system, viewing the content presented to the user, viewing the content presented to the user for at least a specified amount of time, adding the product to a list of products associated with the user by the third party system, requesting information from the third party system, subscribing to a service provided by the third party system, storing content to the third party system, storing content provided by the third party system to the client device, and any combination thereof. 5. The method of claim 3, wherein the conversion event is selected from a group consisting of: indicating a preference for the content presented to the user, sharing the content presented to the user with another user, providing a comment associated with content presented to the user, and any combination thereof. 6. The method of claim 1, wherein selecting the rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device comprises:
selecting a rule including information identifying content presented to the user of the online system matching the received information identifying content presented to the user by the client device. 7. The method of claim 1, wherein selecting the rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device comprises:
selecting a rule including information identifying content presented to the user of the online system matching the received information identifying content presented to the user by the client device and including information identifying additional content presented to the user of the online system matching received information identifying additional content presented to the user by the client device. 8. The method of claim 1, wherein the information received from the client device further includes information describing the client device and one or more of the rules include information describing one or more client devices. 9. The method of claim 8, wherein selecting the rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device comprises:
selecting a rule including information identifying content presented to the user of the online system matching the received information identifying content presented to the user by the client device, including information identifying additional content presented to the user of the online system matching received information identifying additional content presented to the user by the client device, and including information describing one or more client devices matching received information identifying the client device. 10. The method of claim 8, wherein the information describing the client device is selected from a group consisting of: an operating system executing on the client device, a type of the client device, a model of the client device, a type of network connection by the client device, an application executing on the client device that presented content from the third party system, and any combination thereof. 11. The method of claim 1, wherein the information received from the client device further includes a date and a time when the client device presented the information from the third party system and rules include dates or times. 12. The method of claim 11, wherein selecting the rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device comprises:
selecting a rule including information identifying content presented to the user of the online system matching the received information identifying content presented to the user by the client device, including information identifying additional content presented to the user of the online system matching received information identifying additional content presented to the user by the client device, and including a date or a time matching the date and the time when the client device presented the information from the third party system. 13. A method comprising:
including instructions in content provided by a third party system that, when executed by a client device presenting the content, cause the client device to:
obtain information identifying additional content presented by the client device before the content;
obtain information identifying a user presented with the content by the client device from the client device; and
transmit the information identifying the additional content, the information identifying the user, and an identifier of the content presented by the third party system to the online system; and
communicating one or more rules to the online system, each rule identifying at least one action to perform based on one or more of the information identifying the additional content, the information identifying the user, and the identifier of the content. 14. The method of claim 13, wherein a rule identifies an action to include the user in a set of users specified by the rule. 15. The method of claim 13, wherein a rule identifies an action to associate a conversion event with the user. 16. The method of claim 15, wherein the conversion event is selected from a group consisting of: adding a product to an online shopping cart maintained by the third party system, viewing the content presented to the user, viewing the content presented to the user for at least a specified amount of time, adding the product to a list of products associated with the user by the third party system, requesting information from the third party system, subscribing to a service provided by the third party system, storing content to the third party system, storing content provided by the third party system to the client device, and any combination thereof. 17. The method of claim 15, wherein the conversion event is selected from a group consisting of: indicating a preference for the content presented to the user, sharing the content presented to the user with another user, providing a comment associated with content presented to the user, and any combination thereof. 18. The method of claim 15, wherein the instructions included in the content provided by the third party system, when executed by the client device presenting the content, further cause the client device to:
transmit a date and a time when the client device presented the content provided by the third party system or information describing the client device to the online system. 19. The method of claim 18, wherein a rule identifies an action to perform based on one or more selected from a group consisting of: the information identifying the additional content, the information identifying the user, the identifier of the content, the date and the time when the client device presented the content provided by the third party system, the information describing the client device, and any combination thereof. 20. A computer program product comprising a computer readable storage medium having instructions encoded thereon that, when executed by a processor, cause the processor to:
receive one or more rules from a third party system, each rule identifying an action to perform based on one or more of: information identifying content presented to a user of the online system, information identifying the user to the online system, and information identifying additional content presented to the user of the online system; receive information from a client device presenting content from a third party system to a user, the information identifying content presented to the user by the client device, information identifying the user to the online system, and information identifying additional content presented to the user by the client device; select a rule from the one or more rules based on the information identifying content presented to the user by the client device and information identifying the additional content presented to the user by the client device; and perform an action identified by the selected rule. | 2,400 |
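The receive-select-perform flow in the claims above can be illustrated with a small sketch: each rule pairs matching criteria with an action, and the online system performs the action of the rule whose criteria match what the client device reported. The dictionary shape, the use of None as an "any additional content" wildcard, and the action names are assumptions made for illustration only.

```python
def select_rule(rules, content_id, additional_content_id):
    """Return the first rule matching the content the client device reported,
    or None when no rule applies."""
    for rule in rules:
        if rule["content"] != content_id:
            continue
        # None is treated here as "matches any additional content".
        if rule["additional_content"] in (None, additional_content_id):
            return rule
    return None

# Hypothetical rules a third party system might register with the online system.
rules = [
    {"content": "item-1", "additional_content": "page-A", "action": "log_conversion"},
    {"content": "item-1", "additional_content": None, "action": "add_to_audience"},
]

assert select_rule(rules, "item-1", "page-A")["action"] == "log_conversion"
assert select_rule(rules, "item-1", "page-B")["action"] == "add_to_audience"
assert select_rule(rules, "item-2", "page-A") is None
```

Ordering rules from most to least specific, as above, lets a narrow rule (content plus additional content) take precedence over a catch-all rule for the same content.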
8,662 | 8,662 | 15,432,839 | 2,485 | An example device for filtering a decoded block of video data includes one or more processors implemented in circuitry and configured to decode a current block of a current picture of the video data, select a filter (such as an adaptive loop filter) to be used to filter pixels of the current block, calculate a gradient of at least one pixel for the current block, select a geometric transform to be performed on one of a filter support region or coefficients of the selected filter, wherein the one or more processors are configured to select the geometric transform that corresponds to an orientation of the gradient of the at least one pixel, perform the geometric transform on either the filter support region or the coefficients of the selected filter, and filter the at least one pixel of the current block using the selected filter after performing the geometric transform. | 1. A method of filtering a decoded block of video data, the method comprising:
decoding a current block of a current picture of the video data; selecting a filter to be used to filter one or more pixels of the current block; selecting a geometric transform to be performed on one of a filter support region or coefficients of the selected filter; performing the geometric transform on either the filter support region or the coefficients of the selected filter; and filtering the at least one pixel of the current block using the selected filter after performing the geometric transform. 2. The method of claim 1, wherein the geometric transform comprises one of a rotation transform, a diagonal flip transform, or a vertical flip transform. 3. The method of claim 2,
wherein f (k, l) represents the selected filter, wherein the rotation transform comprises fR(k, l)=f(K−l−1, k), wherein the diagonal flip transform comprises fD(k, l)=f(l, k), wherein the vertical flip transform comprises fV(k, l)=f(k, K−l−1), and wherein K is the size of the selected filter, k and l are coordinates of coefficients of the selected filter or coordinates of values in the filter support region, 0≦k, l≦K−1, location (0, 0) is at the upper left corner of the selected filter or the filter support region, and location (K−1, K−1) is at the lower right corner of the selected filter or the filter support region.
calculating a horizontal gradient g_h according to:
g_h = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} H_{k,l}, where H_{k,l} = |2R(k, l) − R(k−1, l) − R(k+1, l)|;
calculating a vertical gradient g_v according to:
g_v = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} V_{k,l}, where V_{k,l} = |2R(k, l) − R(k, l−1) − R(k, l+1)|;
calculating a first diagonal gradient g_{d1} according to:
g_{d1} = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} D1_{k,l}, where D1_{k,l} = |2R(k, l) − R(k−1, l−1) − R(k+1, l+1)|;
calculating a second diagonal gradient g_{d2} according to:
g_{d2} = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} D2_{k,l}, where D2_{k,l} = |2R(k, l) − R(k−1, l+1) − R(k+1, l−1)|;
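The four directional gradient sums above can be sketched directly in code. This is a minimal illustration, not the patent's implementation: the pixel array `R`, block origin `(i, j)`, block size `N`, and window pad `M` are names assumed from the symbols in the claims, and the function name is illustrative.

```python
def laplacian_gradients(R, i, j, N, M):
    """Return (g_h, g_v, g_d1, g_d2) for the N x N block at (i, j),
    summed over the padded window k = i-M..i+N+M-1, l = j-M..j+N+M-1."""
    g_h = g_v = g_d1 = g_d2 = 0
    for k in range(i - M, i + N + M):
        for l in range(j - M, j + N + M):
            c = 2 * R[k][l]
            g_h  += abs(c - R[k - 1][l] - R[k + 1][l])          # H_{k,l}
            g_v  += abs(c - R[k][l - 1] - R[k][l + 1])          # V_{k,l}
            g_d1 += abs(c - R[k - 1][l - 1] - R[k + 1][l + 1])  # D1_{k,l}
            g_d2 += abs(c - R[k - 1][l + 1] - R[k + 1][l - 1])  # D2_{k,l}
    return g_h, g_v, g_d1, g_d2
```

On a constant picture every second difference vanishes, so all four gradients are zero; a picture that varies quadratically along k yields nonzero g_h, g_d1, and g_d2 but zero g_v, matching the intent that the dominant gradient orientation drives the transform choice.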
selecting the diagonal flip transform when gd2<gd1 and gv<gh; selecting the vertical flip transform when gd1<gd2 and gh<gv; and selecting the horizontal flip transform when gd1<gd2 and gv<gh. 5. The method of claim 1, further comprising calculating one or more gradients of the one or more pixels within the current block. 6. The method of claim 5, wherein selecting the geometric transform comprises selecting the geometric transform that corresponds to orientations of the one or more gradients of the one or more pixels. 7. The method of claim 5, wherein calculating the one or more gradients comprises calculating at least one of a horizontal gradient, a vertical gradient, a 45 degree diagonal gradient, or a 135 degree diagonal gradient. 8. The method of claim 1, wherein the filter support region comprises a plurality of neighboring pixels to the at least one pixel of the current block to which coefficients of the selected filter are to be applied, and filtering the at least one pixel comprises performing the geometric transform on either the filter support region or the coefficients of the selected filter. 9. The method of claim 1, wherein selecting the filter comprises selecting the filter based on a class for the block, wherein the class comprises one of texture, strong horizontal/vertical, horizontal/vertical, strong diagonal, or diagonal. 10. The method of claim 1, further comprising encoding the current block prior to decoding the current block. 11. The method of claim 1, the method being executable on a wireless communication device, wherein the device comprises:
a memory configured to store the video data; a processor configured to execute instructions to process the video data stored in the memory; and a receiver configured to receive the video data and store the video data to the memory. 12. The method of claim 11, wherein the wireless communication device is a cellular telephone and the video data is received by a receiver and modulated according to a cellular communication standard. 13. A device for filtering a decoded block of video data, the device comprising:
a memory configured to store the video data; and one or more processors implemented in circuitry and configured to:
decode a current block of a current picture of the video data;
select a filter to be used to filter one or more pixels of the current block;
select a geometric transform to be performed on one of a filter support region or coefficients of the selected filter;
perform the geometric transform on either the filter support region or the coefficients of the selected filter; and
filter the at least one pixel of the current block using the selected filter after performing the geometric transform. 14. The device of claim 13, wherein the geometric transform comprises one of a rotation, a diagonal flip, or a vertical flip. 15. The device of claim 14,
wherein f(k, l) represents the selected filter, wherein the rotation transform comprises f_R(k, l)=f(K−l−1, k), wherein the diagonal flip transform comprises f_D(k, l)=f(l, k), wherein the vertical flip transform comprises f_V(k, l)=f(k, K−l−1), and wherein K is the size of the selected filter, k and l are coordinates of coefficients of the selected filter or coordinates of values in the filter support region, 0≦k, l≦K−1, location (0, 0) is at the upper left corner of the selected filter or the filter support region, and location (K−1, K−1) is at the lower right corner of the selected filter or the filter support region. 16. The device of claim 14, wherein f(k, l) represents the selected filter, R(i, j) represents a pixel at position (i, j) of the current picture, and wherein to select the geometric transform, the one or more processors are configured to:
calculate a horizontal gradient g_h according to:
g_h = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} H_{k,l}, where H_{k,l} = |2R(k, l) − R(k−1, l) − R(k+1, l)|;
calculate a vertical gradient g_v according to:
g_v = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} V_{k,l}, where V_{k,l} = |2R(k, l) − R(k, l−1) − R(k, l+1)|;
calculate a first diagonal gradient g_{d1} according to:
g_{d1} = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} D1_{k,l}, where D1_{k,l} = |2R(k, l) − R(k−1, l−1) − R(k+1, l+1)|;
calculate a second diagonal gradient g_{d2} according to:
g_{d2} = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} D2_{k,l}, where D2_{k,l} = |2R(k, l) − R(k−1, l+1) − R(k+1, l−1)|;
select the diagonal flip transform when gd2<gd1 and gv<gh; select the vertical flip transform when gd1<gd2 and gh<gv; and select the horizontal flip transform when gd1<gd2 and gv<gh. 17. The device of claim 13, wherein the one or more processors are further configured to calculate one or more gradients of the one or more pixels within the current block. 18. The device of claim 17, wherein the one or more processors are configured to select the geometric transform that corresponds to orientations of the one or more gradients of the one or more pixels. 19. The device of claim 17, wherein the one or more processors are configured to calculate at least one of a horizontal gradient, a vertical gradient, a 45 degree diagonal gradient, or a 135 degree diagonal gradient. 20. The device of claim 13, wherein the filter support region comprises a plurality of neighboring pixels to the at least one pixel of the current block to which coefficients of the selected filter are to be applied, and wherein the one or more processing units are configured to perform the geometric transform on either the filter support region or the coefficients of the selected filter. 21. The device of claim 13, wherein the one or more processors are configured to select the filter based on a class for the block, wherein the class comprises one of texture, strong horizontal/vertical, horizontal/vertical, strong diagonal, or diagonal. 22. The device of claim 13, wherein the one or more processing units are further configured to encode the current block prior to decoding the current block. 23. The device of claim 13, wherein the device is a wireless communication device, further comprising:
a receiver configured to receive video data including the current picture. 24. The device of claim 23, wherein the wireless communication device is a cellular telephone and the video data is received by the receiver and modulated according to a cellular communication standard. 25. A device for filtering a decoded block of video data, the device comprising:
means for decoding a current block of a current picture of the video data; means for selecting a filter to be used to filter one or more pixels of the current block; means for selecting a geometric transform to be performed on one of a filter support region or coefficients of the selected filter; means for performing the geometric transform on either the filter support region or the coefficients of the selected filter; and means for filtering the at least one pixel of the current block using the selected filter after performing the geometric transform. 26. The device of claim 25, wherein the geometric transform comprises one of a rotation transform, a diagonal flip transform, or a vertical flip transform. 27. The device of claim 26,
wherein f(k, l) represents the selected filter, wherein the rotation transform comprises f_R(k, l)=f(K−l−1, k), wherein the diagonal flip transform comprises f_D(k, l)=f(l, k), wherein the vertical flip transform comprises f_V(k, l)=f(k, K−l−1), and wherein K is the size of the selected filter, k and l are coordinates of coefficients of the selected filter or coordinates of values in the filter support region, 0≦k, l≦K−1, location (0, 0) is at the upper left corner of the selected filter or the filter support region, and location (K−1, K−1) is at the lower right corner of the selected filter or the filter support region. 28. The device of claim 26, wherein f(k, l) represents the selected filter, R(i, j) represents a pixel at position (i, j) of the current picture, and wherein the means for selecting the geometric transform comprises:
means for calculating a horizontal gradient g_h according to:
g_h = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} H_{k,l}, where H_{k,l} = |2R(k, l) − R(k−1, l) − R(k+1, l)|;
means for calculating a vertical gradient g_v according to:
g_v = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} V_{k,l}, where V_{k,l} = |2R(k, l) − R(k, l−1) − R(k, l+1)|;
means for calculating a first diagonal gradient g_{d1} according to:
g_{d1} = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} D1_{k,l}, where D1_{k,l} = |2R(k, l) − R(k−1, l−1) − R(k+1, l+1)|;
means for calculating a second diagonal gradient g_{d2} according to:
g_{d2} = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} D2_{k,l}, where D2_{k,l} = |2R(k, l) − R(k−1, l+1) − R(k+1, l−1)|;
means for selecting the diagonal flip transform when gd2&lt;gd1 and gv&lt;gh; means for selecting the vertical flip transform when gd1&lt;gd2 and gh&lt;gv;
and
means for selecting the horizontal flip transform when gd1<gd2 and gv<gh. 29. The device of claim 25, further comprising means for calculating one or more gradients of the one or more pixels within the current block. 30. The device of claim 29, wherein the means for selecting the geometric transform comprises means for selecting the geometric transform that corresponds to orientations of the one or more gradients of the one or more pixels. 31. The device of claim 29, wherein the means for calculating the one or more gradients comprises means for calculating at least one of a horizontal gradient, a vertical gradient, a 45 degree diagonal gradient, or a 135 degree diagonal gradient. 32. The device of claim 25, wherein the filter support region comprises a plurality of neighboring pixels to the at least one pixel of the current block to which coefficients of the selected filter are to be applied, and the means for filtering the at least one pixel comprises means for performing the geometric transform on either the filter support region or the coefficients of the selected filter. 33. The device of claim 25, wherein the means for selecting the filter comprises means for selecting the filter based on a class for the block, wherein the class comprises one of texture, strong horizontal/vertical, horizontal/vertical, strong diagonal, or diagonal. 34. The device of claim 25, further comprising means for encoding the current block prior to decoding the current block. 35. A computer-readable storage medium having stored thereon instructions that, when executed, cause a processor to:
decode a current block of a current picture of video data; select a filter to be used to filter pixels of the current block; select a geometric transform to be performed on one of a filter support region or coefficients of the selected filter; perform the geometric transform on either the filter support region or the coefficients of the selected filter; and filter the at least one pixel of the current block using the selected filter after performing the geometric transform. 36. The computer-readable storage medium of claim 35, wherein the geometric transform comprises one of a rotation transform, a diagonal flip transform, or a vertical flip transform. 37. The computer-readable storage medium of claim 36,
wherein f(k, l) represents the selected filter, wherein the rotation transform comprises f_R(k, l)=f(K−l−1, k), wherein the diagonal flip transform comprises f_D(k, l)=f(l, k), wherein the vertical flip transform comprises f_V(k, l)=f(k, K−l−1), and wherein K is the size of the selected filter, k and l are coordinates of coefficients of the selected filter or coordinates of values in the filter support region, 0≦k, l≦K−1, location (0, 0) is at the upper left corner of the selected filter or the filter support region, and location (K−1, K−1) is at the lower right corner of the selected filter or the filter support region. 38. The computer-readable storage medium of claim 36, wherein f(k, l) represents the selected filter, R(i, j) represents a pixel at position (i, j) of the current picture, and wherein the instructions that cause the processor to select the geometric transform comprise instructions that cause the processor to:
calculate a horizontal gradient g_h according to:
g_h = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} H_{k,l}, where H_{k,l} = |2R(k, l) − R(k−1, l) − R(k+1, l)|;
calculate a vertical gradient g_v according to:
g_v = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} V_{k,l}, where V_{k,l} = |2R(k, l) − R(k, l−1) − R(k, l+1)|;
calculate a first diagonal gradient g_{d1} according to:
g_{d1} = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} D1_{k,l}, where D1_{k,l} = |2R(k, l) − R(k−1, l−1) − R(k+1, l+1)|;
calculate a second diagonal gradient g_{d2} according to:
g_{d2} = Σ_{k=i−M}^{i+N+M−1} Σ_{l=j−M}^{j+N+M−1} D2_{k,l}, where D2_{k,l} = |2R(k, l) − R(k−1, l+1) − R(k+1, l−1)|;
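The three geometric transforms the claims define, f_R(k, l)=f(K−l−1, k), f_D(k, l)=f(l, k), and f_V(k, l)=f(k, K−l−1), can be sketched for a K×K filter stored as a list of rows. This is an illustrative sketch only; the function names are assumptions, not names from the patent.

```python
def rotate(f):
    """Rotation transform: fR(k, l) = f(K - l - 1, k)."""
    K = len(f)
    return [[f[K - l - 1][k] for l in range(K)] for k in range(K)]

def diagonal_flip(f):
    """Diagonal flip transform: fD(k, l) = f(l, k) (a transpose)."""
    K = len(f)
    return [[f[l][k] for l in range(K)] for k in range(K)]

def vertical_flip(f):
    """Vertical flip transform: fV(k, l) = f(k, K - l - 1)."""
    K = len(f)
    return [[f[k][K - l - 1] for l in range(K)] for k in range(K)]
```

Because the transform is applied to either the filter coefficients or the support region (but not both), filtering an oriented feature with a transformed kernel is equivalent to filtering a correspondingly transformed neighborhood with the original kernel, which is what lets one trained filter serve several gradient orientations.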
select the diagonal flip transform when gd2<gd1 and gv<gh; select the vertical flip transform when gd1<gd2 and gh<gv; and select the horizontal flip transform when gd1<gd2 and gv<gh. 39. The computer-readable storage medium of claim 35, further comprising instructions that cause the processor to calculate one or more gradients of the one or more pixels within the current block. 40. The computer-readable storage medium of claim 39, wherein the instructions that cause the processor to select the geometric transform comprise instructions that cause the processor to select the geometric transform that corresponds to orientations of the one or more gradients of the one or more pixels. 41. The computer-readable storage medium of claim 39, wherein the instructions that cause the processor to calculate the one or more gradients comprise instructions that cause the processor to calculate at least one of a horizontal gradient, a vertical gradient, a 45 degree diagonal gradient, or a 135 degree diagonal gradient. 42. The computer-readable storage medium of claim 35, wherein the filter support region comprises a plurality of neighboring pixels to the at least one pixel of the current block to which coefficients of the selected filter are to be applied, and the instructions that cause the processor to filter the at least one pixel comprise instructions that cause the processor to perform the geometric transform on either the filter support region or the coefficients of the selected filter. 43. The computer-readable storage medium of claim 35, wherein the instructions that cause the processor to select the filter comprise instructions that cause the processor to select the filter based on a class for the block, wherein the class comprises one of texture, strong horizontal/vertical, horizontal/vertical, strong diagonal, or diagonal. 44. 
The computer-readable storage medium of claim 35, further comprising instructions that cause the processor to encode the current block prior to decoding the current block. | An example device for filtering a decoded block of video data includes one or more processors implemented in circuitry and configured to decode a current block of a current picture of the video data, select a filter (such as an adaptive loop filter) to be used to filter pixels of the current block, calculate a gradient of at least one pixel for the current block, select a geometric transform to be performed on one of a filter support region or coefficients of the selected filter, wherein the one or more processors are configured to select the geometric transform that corresponds to an orientation of the gradient of the at least one pixel, perform the geometric transform on either the filter support region or the coefficients of the selected filter, and filter the at least one pixel of the current block using the selected filter after performing the geometric transform.1. A method of filtering a decoded block of video data, the method comprising:
decoding a current block of a current picture of the video data; selecting a filter to be used to filter one or more pixels of the current block; selecting a geometric transform to be performed on one of a filter support region or coefficients of the selected filter; performing the geometric transform on either the filter support region or the coefficients of the selected filter; and filtering the at least one pixel of the current block using the selected filter after performing the geometric transform. 2. The method of claim 1, wherein the geometric transform comprises one of a rotation transform, a diagonal flip transform, or a vertical flip transform. 3. The method of claim 2,
wherein f (k, l) represents the selected filter, wherein the rotation transform comprises fR(k, l)=f(K−l−1, k), wherein the diagonal flip transform comprises fD(k, l)=f(l, k), wherein the vertical flip transform comprises fV(k, l)=f(k, K−l−1), and wherein K is the size of the selected filter, k and l are coordinates of coefficients of the selected filter or coordinates of values in the filter support region, 0≦k, l≦K−1, location (0, 0) is at the upper left corner of the selected filter or the filter support region, and location (K−1, K−1) is at the lower right corner of the upper left corner of the selected filter or the filter support region. 4. The method of claim 2, wherein f (k, l) represents the selected filter, R(i, j) represents a pixel at position (i, j) of the current picture, and wherein selecting the geometric transform comprises:
calculating a horizontal gradient gh according to:
g h=Σk=i−M i+N+M−1Σl=j−M j+N+M−1 H k,l, where H k,l=|2R(k, l)−R(k−1,l)−R(k+1,l)|;
calculating a vertical gradient gv according to:
g v=Σk=i−M i+N+M 1 Σl=j−M j+N+M−1 V k,l, where V k,l=|2R(k, l)−R(k, l−1)−R(k, l+1)|;
calculating a first diagonal gradient gd1 according to:
g d1=Σk=i−M i+N+M−1Σl=j−M j+N+M−1 D1k,l, where D1k,l=|2R(k, l)−R(k−1, l−1)−R(k+1, l+1)|;
calculating a second diagonal gradient gd2 according to:
g d2=Σk=i−M i+N+M−1Σj=j−m j+N+M−1 D2k,l, where D2k,l=|2R(k, l)−R(k−1, l+1)−R(k+1, l−1)|;
selecting the diagonal flip transform when gd2<gd1 and gv<gh; selecting the vertical flip transform when gd1<gd2 and gh<gv; and selecting the horizontal flip transform when gd1<gd2 and gv<gh. 5. The method of claim 1, further comprising calculating one or more gradients of the one or more pixels within the current block. 6. The method of claim 5, wherein selecting the geometric transform comprises selecting the geometric transform that corresponds to orientations of the one or more gradients of the one or more pixels. 7. The method of claim 5, wherein calculating the one or more gradients comprises calculating at least one of a horizontal gradient, a vertical gradient, a 45 degree diagonal gradient, or a 135 degree diagonal gradient. 8. The method of claim 1, wherein the filter support region comprises a plurality of neighboring pixels to the at least one pixel of the current block to which coefficients of the selected filter are to be applied, and filtering the at least one pixel comprises performing the geometric transform on either the filter support region or the coefficients of the selected filter. 9. The method of claim 1, wherein selecting the filter comprises selecting the filter based on a class for the block, wherein the class comprises one of texture, strong horizontal/vertical, horizontal/vertical, strong diagonal, or diagonal. 10. The method of claim 1, further comprising encoding the current block prior to decoding the current block. 11. The method of claim 1, the method being executable on a wireless communication device, wherein the device comprises:
a memory configured to store the video data; a processor configured to execute instructions to process the video data stored in the memory; and a receiver configured to receive the video data and store the video data to the memory. 12. The method of claim 11, wherein the wireless communication device is a cellular telephone and the video data is received by a receiver and modulated according to a cellular communication standard. 13. A device for filtering a decoded block of video data, the device comprising:
a memory configured to store the video data; and one or more processors implemented in circuitry and configured to:
decode a current block of a current picture of the video data;
select a filter to be used to filter one or more pixels of the current block;
select a geometric transform to be performed on one of a filter support region or coefficients of the selected filter;
perform the geometric transform on either the filter support region or the coefficients of the selected filter; and
filter the at least one pixel of the current block using the selected filter after performing the geometric transform. 14. The device of claim 13, wherein the geometric transform comprises one of a rotation, a diagonal flip, or a vertical flip. 15. The device of claim 14,
wherein f (k, l) represents the selected filter, wherein the rotation transform comprises fR(k, l)=f (K−l−1, k), wherein the diagonal flip transform comprises fD(k, l)=f (l, k), wherein the vertical flip transform comprises fV(k, l)=f (k, K−l−1), and wherein K is the size of the selected filter, k and 1 are coordinates of coefficients of the selected filter or coordinates of values in the filter support region, 0≦k, l≦K−1, location (0,0) is at the upper left corner of the selected filter or the filter support region, and location (K−1, K−1) is at the lower right corner of the upper left corner of the selected filter or the filter support region. 16. The device of claim 14, wherein f (k, l) represents the selected filter, R(i, j) represents a pixel at position (i, j) of the current picture, and wherein to select the geometric transform, the one or more processors are configured to:
calculate a horizontal gradient gh according to:
g h=Σk=i−M i+N+M−1Σl=j−M j+N+M−1 H k,l, where H k,l=|2R(k, l)−R(k−1,l)−R(k+1,l)|;
calculate a vertical gradient gv according to:
g v=Σk=i−M i+N+M 1 Σl=j−M j+N+M−1 V k,l, where V k,l=|2R(k, l)−R(k, l−1)−R(k, l+1)|;
calculate a first diagonal gradient gd1 according to:
g d1=Σk=i−M i+N+M−1Σl=j−M j+N+M−1 D1k,l, where D1k,l=|2R(k, l)−R(k−1, l−1)−R(k+1, l+1)|;
calculate a second diagonal gradient gd2 according to:
g d2=Σk=i−M i+N+M−1Σj=j−m j+N+M−1 D2k,l, where D2k,l=|2R(k, l)−R(k−1, l+1)−R(k+1, l−1)|;
select the diagonal flip transform when gd2<gd1 and gv<gh; select the vertical flip transform when gd1<gd2 and gh<gv; and select the horizontal flip transform when gd1<gd2 and gv<gh. 17. The device of claim 13, wherein the one or more processors are further configured to calculate one or more gradients of the one or more pixels within the current block. 18. The device of claim 17, wherein the one or more processors are configured to select the geometric transform that corresponds to orientations of the one or more gradients of the one or more pixels. 19. The device of claim 17, wherein the one or more processors are configured to calculate at least one of a horizontal gradient, a vertical gradient, a 45 degree diagonal gradient, or a 135 degree diagonal gradient. 20. The device of claim 13, wherein the filter support region comprises a plurality of neighboring pixels to the at least one pixel of the current block to which coefficients of the selected filter are to be applied, and wherein the one or more processing units are configured to perform the geometric transform on either the filter support region or the coefficients of the selected filter. 21. The device of claim 13, wherein the one or more processors are configured to select the filter based on a class for the block, wherein the class comprises one of texture, strong horizontal/vertical, horizontal/vertical, strong diagonal, or diagonal. 22. The device of claim 13, wherein the one or more processing units are further configured to encode the current block prior to decoding the current block. 23. The device of claim 13, wherein the device is a wireless communication device, further comprising:
a receiver configured to receive video data including the current picture. 24. The device of claim 23, wherein the wireless communication device is a cellular telephone and the video data is received by the receiver and modulated according to a cellular communication standard. 25. A device for filtering a decoded block of video data, the device comprising:
means for decoding a current block of a current picture of the video data; means for selecting a filter to be used to filter one or more pixels of the current block; means for selecting a geometric transform to be performed on one of a filter support region or coefficients of the selected filter; means for performing the geometric transform on either the filter support region or the coefficients of the selected filter; and means for filtering the at least one pixel of the current block using the selected filter after performing the geometric transform. 26. The device of claim 25, wherein the geometric transform comprises one of a rotation transform, a diagonal flip transform, or a vertical flip transform. 27. The device of claim 26,
wherein f (k, l) represents the selected filter, wherein the rotation transform comprises fR(k, l)=f(K−l−1, k), wherein the diagonal flip transform comprises fD(k, l)=f(l, k), wherein the vertical flip transform comprises fV(k, l)=f(k, K−l−1), and wherein K is the size of the selected filter, k and 1 are coordinates of coefficients of the selected filter or coordinates of values in the filter support region, 0≦k, l≦K−1, location (0,0) is at the upper left corner of the selected filter or the filter support region, and location (K−1, K−1) is at the lower right corner of the upper left corner of the selected filter or the filter support region. 28. The device of claim 26, wherein f (k, l) represents the selected filter, R(i, j) represents a pixel at position (i, j) of the current picture, and wherein the means for selecting the geometric transform comprises:
means for calculating a horizontal gradient gh according to:
g h=Σk=i−M i+N+M−1Σl=j−M j+N+M−1 H k,l, where H k,l=|2R(k, l)−R(k−1,l)−R(k+1,l)|;
means for calculating a vertical gradient gv according to:
g v=Σk=i−M i+N+M 1 Σl=j−M j+N+M−1 V k,l, where V k,l=|2R(k, l)−R(k, l−1)−R(k, l+1)|;
means for calculating a first diagonal gradient gd1 according to:
g d1=Σk=i−M i+N+M−1Σl=j−M j+N+M−1 D1k,l, where D1k,l=|2R(k, l)−R(k−1, l−1)−R(k+1, l+1)|;
means for calculating a second diagonal gradient gd2 according to:
g d2=Σk=i−M i+N+M−1Σj=j−m j+N+M−1 D2k,l, where D2k,l=|2R(k, l)−R(k−1, l+1)−R(k+1, l−1)|;
means for selecting the diagonal flip transform when gg2<gd1 and gv<gh; means for selecting the vertical flip transform when gd1<gd2 and gh<gv;
and
means for selecting the horizontal flip transform when gd1<gd2 and gv<gh. 29. The device of claim 25, further comprising means for calculating one or more gradients of the one or more pixels within the current block. 30. The device of claim 29, wherein the means for selecting the geometric transform comprises means for selecting the geometric transform that corresponds to orientations of the one or more gradients of the one or more pixels. 31. The device of claim 29, wherein the means for calculating the one or more gradients comprises means for calculating at least one of a horizontal gradient, a vertical gradient, a 45 degree diagonal gradient, or a 135 degree diagonal gradient. 32. The device of claim 25, wherein the filter support region comprises a plurality of neighboring pixels to the at least one pixel of the current block to which coefficients of the selected filter are to be applied, and the means for filtering the at least one pixel comprises means for performing the geometric transform on either the filter support region or the coefficients of the selected filter. 33. The device of claim 25, wherein the means for selecting the filter comprises means for selecting the filter based on a class for the block, wherein the class comprises one of texture, strong horizontal/vertical, horizontal/vertical, strong diagonal, or diagonal. 34. The device of claim 25, further comprising means for encoding the current block prior to decoding the current block. 35. A computer-readable storage medium having stored thereon instructions that, when executed, cause a processor to:
decode a current block of a current picture of video data; select a filter to be used to filter pixels of the current block; select a geometric transform to be performed on one of a filter support region or coefficients of the selected filter; perform the geometric transform on either the filter support region or the coefficients of the selected filter; and filter the at least one pixel of the current block using the selected filter after performing the geometric transform. 36. The computer-readable storage medium of claim 35, wherein the geometric transform comprises one of a rotation transform, a diagonal flip transform, or a vertical flip transform. 37. The computer-readable storage medium of claim 36,
wherein f (k, l) represents the selected filter, wherein the rotation transform comprises fR(k, l)=f(K−l−1, k), wherein the diagonal flip transform comprises fD(k, l)=f(l, k), wherein the vertical flip transform comprises fV(k, l)=f(k, K−l−1), and wherein K is the size of the selected filter, k and 1 are coordinates of coefficients of the selected filter or coordinates of values in the filter support region, 0≦k, l≦K−1, location (0,0) is at the upper left corner of the selected filter or the filter support region, and location (K−1, K−1) is at the lower right corner of the upper left corner of the selected filter or the filter support region. 38. The computer-readable storage medium of claim 36, wherein f (k, l) represents the selected filter, R(i, j) represents a pixel at position (i, j) of the current picture, and wherein the instructions that cause the processor to select the geometric transform comprise instructions that cause the processor to:
calculate a horizontal gradient gh according to:
g h=Σk=i−M i+N+M−1Σl=j−M j+N+M−1 H k,l, where H k,l=|2R(k, l)−R(k−1,l)−R(k+1,l)|;
calculate a vertical gradient gv according to:
g v=Σk=i−M i+N+M 1 Σl=j−M j+N+M−1 V k,l, where V k,l=|2R(k, l)−R(k, l−1)−R(k, l+1)|;
calculate a first diagonal gradient gd1 according to:
g d1=Σk=i−M i+N+M−1Σl=j−M j+N+M−1 D1k,l, where D1k,l=|2R(k, l)−R(k−1, l−1)−R(k+1, l+1)|;
calculate a second diagonal gradient gd2 according to:
g d2=Σk=i−M i+N+M−1Σj=j−m j+N+M−1 D2k,l, where D2k,l=|2R(k, l)−R(k−1, l+1)−R(k+1, l−1)|;
select the diagonal flip transform when gd2<gd1 and gv<gh; select the vertical flip transform when gd1<gd2 and gh<gv; and select the horizontal flip transform when gd1<gd2 and gv<gh. 39. The computer-readable storage medium of claim 35, further comprising instructions that cause the processor to calculate one or more gradients of the one or more pixels within the current block. 40. The computer-readable storage medium of claim 39, wherein the instructions that cause the processor to select the geometric transform comprise instructions that cause the processor to select the geometric transform that corresponds to orientations of the one or more gradients of the one or more pixels. 41. The computer-readable storage medium of claim 39, wherein the instructions that cause the processor to calculate the one or more gradients comprise instructions that cause the processor to calculate at least one of a horizontal gradient, a vertical gradient, a 45 degree diagonal gradient, or a 135 degree diagonal gradient. 42. The computer-readable storage medium of claim 35, wherein the filter support region comprises a plurality of neighboring pixels to the at least one pixel of the current block to which coefficients of the selected filter are to be applied, and the instructions that cause the processor to filter the at least one pixel comprise instructions that cause the processor to perform the geometric transform on either the filter support region or the coefficients of the selected filter. 43. The computer-readable storage medium of claim 35, wherein the instructions that cause the processor to select the filter comprise instructions that cause the processor to select the filter based on a class for the block, wherein the class comprises one of texture, strong horizontal/vertical, horizontal/vertical, strong diagonal, or diagonal. 44. 
The computer-readable storage medium of claim 35, further comprising instructions that cause the processor to encode the current block prior to decoding the current block. | 2,400 |
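The gradient calculations and flip-transform selection rules in the claims above can be sketched as follows. This is a minimal illustration, not the patented implementation: the horizontal and vertical gradient formulas (gh, gv) are assumed analogous to the diagonal ones given in the claims, and the function name, window bounds, and identity fallback are all assumptions.

```python
def select_flip_transform(R, i, j, N, M):
    """Pick a geometric (flip) transform for an N x N block at (i, j) of
    reconstructed samples R, by comparing summed second-difference
    gradients over an M-padded window, as in the claim text above."""
    gh = gv = gd1 = gd2 = 0
    for k in range(i - M, i + N + M):          # k = i-M .. i+N+M-1
        for l in range(j - M, j + N + M):      # l = j-M .. j+N+M-1
            gh  += abs(2 * R[k][l] - R[k][l - 1] - R[k][l + 1])          # horizontal
            gv  += abs(2 * R[k][l] - R[k - 1][l] - R[k + 1][l])          # vertical
            gd1 += abs(2 * R[k][l] - R[k - 1][l - 1] - R[k + 1][l + 1])  # 135-degree diagonal
            gd2 += abs(2 * R[k][l] - R[k - 1][l + 1] - R[k + 1][l - 1])  # 45-degree diagonal
    # Selection rules quoted from the claims; order of checks is an assumption.
    if gd2 < gd1 and gv < gh:
        return "diagonal_flip"
    if gd1 < gd2 and gh < gv:
        return "vertical_flip"
    if gd1 < gd2 and gv < gh:
        return "horizontal_flip"
    return "identity"   # no strict winner: leave filter support untransformed
```

The claims leave the tie-breaking behaviour unspecified; returning an identity transform when no comparison is strict is one reasonable reading.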
8,663 | 8,663 | 15,691,052 | 2,444 | A monitoring system monitors processing of incoming messages and logs data related to performance of an application that processes the messages. The monitoring system temporarily associates reusable identifiers with the messages and logs data upon each message traversing different points in the application. Each of the identifiers is sized such that the storage space necessary to store the identifier is less than the storage space necessary to store an identifier sized to uniquely identify all of the plurality of messages, and the identifiers and the logged data are configured to minimize a performance penalty of monitoring the application. The monitoring system parses the data, e.g., during post-processing, to determine, from a plurality of data entries that refers to the same identifier, a subset of the data entries where the same identifier was associated with the same message. | 1. A computer implemented method of monitoring processing of messages by an application in a data transaction processing system, the method comprising:
detecting, by a processor coupled with the application, that the application has received a message of a plurality of messages for processing as each message of the plurality of messages is received by the application, the application including a plurality of checkpoints including at least a start checkpoint associated with an input of the application and an end checkpoint associated with an output of the application; associating, by the processor, an identifier of a plurality of identifiers with each received message; and upon each received message traversing a checkpoint of the plurality of checkpoints, storing, by the processor, in a data store, a data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) a time when the received message traversed the checkpoint, wherein each of the plurality of identifiers is sized such that the storage space necessary to store the identifier is less than the storage space necessary to store an identifier sized to uniquely identify all of the plurality of messages. 2. The computer implemented method of claim 1, wherein the number of messages in the plurality of messages is larger than the number of identifiers in the plurality of identifiers, and wherein each of the plurality of identifiers is sized to be smaller than a size necessary to uniquely represent each of the plurality of messages. 3. The computer implemented method of claim 1, further comprising reusing identifiers from the plurality of identifiers for monitoring the processing of received messages. 4. The computer implemented method of claim 1, further comprising, associating the identifier with another received message. 5. The computer implemented method of claim 1, further comprising, upon a received message traversing an end checkpoint, associating the identifier associated with the received message with another received message. 6. The computer implemented method of claim 1, further comprising:
parsing data entries in the data store; and upon parsing a data entry based on a traversed start checkpoint, wherein the traversed start checkpoint is associated with an identifier, associating the parsed data entry based on the traversed start checkpoint and all subsequently parsed data entries associated with the identifier, including a data entry based on a traversed end checkpoint, wherein the traversed end checkpoint is associated with the identifier, with each other until parsing the data entry based on the traversed end checkpoint. 7. The computer implemented method of claim 6, wherein parsed data entries associated with each other are related to the same received message. 8. The computer implemented method of claim 7, further comprising determining a progress of the same received message through at least a portion of the application. 9. The computer implemented method of claim 6, wherein the data entries are parsed in a sequence based on the order the data entries are stored in the data store. 10. The computer implemented method of claim 1, wherein the data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) the time when the received message traversed the checkpoint comprises an amount of data less than or equal to an amount that can be atomically read or written by the computer. 11. The computer implemented method of claim 1, wherein the data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) the time when the received message traversed the checkpoint comprises an amount of data less than or equal to an amount of a word size of the computer. 12. The computer implemented method of claim 1, wherein the data transaction processing system is an exchange computing system, and wherein the application is executed by a hardware matching processor. 13. 
The computer implemented method of claim 12, wherein each message is an electronic data transaction request message, and wherein the application processes an electronic data transaction request message by determining whether an attempt to match an electronic data transaction request message with at least one previously received but unsatisfied electronic data transaction request message for a transaction which is counter thereto results in at least partial satisfaction of one or both of the electronic data transaction request message and the at least one previously received but unsatisfied electronic data transaction request message. 14. The computer implemented method of claim 1, wherein the size of the data recorded based on the time when the received message traversed the checkpoint is smaller than a size necessary to represent the time when the received message traversed the checkpoint. 15. The computer implemented method of claim 1, wherein, upon a received message traversing a checkpoint of the plurality of checkpoints, the amount of data stored in the data store increases. 16. The computer implemented method of claim 15, wherein a received message does not change in size as the received message is processed by the application. 17. The computer implemented method of claim 1, which includes processing, by the application, each received message, the processing causing each received message to traverse at least one of the checkpoints in the plurality of checkpoints. 18. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to:
detect that an application has received a message of a plurality of messages as each message of the plurality of messages is received by the application, the application including a plurality of checkpoints including at least a start checkpoint associated with an input of the application and an end checkpoint associated with an output of the application; associate an identifier of a plurality of identifiers with each received message; and upon each received message traversing a checkpoint of the plurality of checkpoints, store, in a data store separate from the message, a data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) a time when the received message traversed the checkpoint, wherein each of the plurality of identifiers is sized such that the storage space necessary to store the identifier is less than the storage space necessary to store an identifier sized to uniquely identify all of the plurality of messages. 19. The non-transitory computer-readable medium of claim 18, wherein the instructions are further configured to cause the processor to reuse identifiers from the plurality of identifiers for monitoring processing of the received messages. 20. A computer system for monitoring processing of messages, the computer system comprising:
means for detecting that an application has received a message of a plurality of messages for processing as each message of the plurality of messages is received by the application, the application including a plurality of checkpoints including a start checkpoint associated with an input of the application and an end checkpoint associated with an output of the application; means for associating an identifier from a plurality of identifiers with each received message; and upon each received message traversing a checkpoint of the plurality of checkpoints, means for storing a data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) a time when the received message traversed the checkpoint, wherein each of the plurality of identifiers is sized such that the storage space necessary to store the identifier is less than the storage space necessary to store an identifier sized to uniquely identify all of the plurality of messages. | A monitoring system monitors processing of incoming messages and logs data related to performance of an application that processes the messages. The monitoring system temporarily associates reusable identifiers with the messages and logs data upon each message traversing different points in the application. Each of the identifiers is sized such that the storage space necessary to store the identifier is less than the storage space necessary to store an identifier sized to uniquely identify all of the plurality of messages, and the identifiers and the logged data are configured to minimize a performance penalty of monitoring the application. The monitoring system parses the data, e.g., during post-processing, to determine, from a plurality of data entries that refers to the same identifier, a subset of the data entries where the same identifier was associated with the same message.1. 
A computer implemented method of monitoring processing of messages by an application in a data transaction processing system, the method comprising:
detecting, by a processor coupled with the application, that the application has received a message of a plurality of messages for processing as each message of the plurality of messages is received by the application, the application including a plurality of checkpoints including at least a start checkpoint associated with an input of the application and an end checkpoint associated with an output of the application; associating, by the processor, an identifier of a plurality of identifiers with each received message; and upon each received message traversing a checkpoint of the plurality of checkpoints, storing, by the processor, in a data store, a data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) a time when the received message traversed the checkpoint, wherein each of the plurality of identifiers is sized such that the storage space necessary to store the identifier is less than the storage space necessary to store an identifier sized to uniquely identify all of the plurality of messages. 2. The computer implemented method of claim 1, wherein the number of messages in the plurality of messages is larger than the number of identifiers in the plurality of identifiers, and wherein each of the plurality of identifiers is sized to be smaller than a size necessary to uniquely represent each of the plurality of messages. 3. The computer implemented method of claim 1, further comprising reusing identifiers from the plurality of identifiers for monitoring the processing of received messages. 4. The computer implemented method of claim 1, further comprising, associating the identifier with another received message. 5. The computer implemented method of claim 1, further comprising, upon a received message traversing an end checkpoint, associating the identifier associated with the received message with another received message. 6. The computer implemented method of claim 1, further comprising:
parsing data entries in the data store; and upon parsing a data entry based on a traversed start checkpoint, wherein the traversed start checkpoint is associated with an identifier, associating the parsed data entry based on the traversed start checkpoint and all subsequently parsed data entries associated with the identifier, including a data entry based on a traversed end checkpoint, wherein the traversed end checkpoint is associated with the identifier, with each other until parsing the data entry based on the traversed end checkpoint. 7. The computer implemented method of claim 6, wherein parsed data entries associated with each other are related to the same received message. 8. The computer implemented method of claim 7, further comprising determining a progress of the same received message through at least a portion of the application. 9. The computer implemented method of claim 6, wherein the data entries are parsed in a sequence based on the order the data entries are stored in the data store. 10. The computer implemented method of claim 1, wherein the data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) the time when the received message traversed the checkpoint comprises an amount of data less than or equal to an amount that can be atomically read or written by the computer. 11. The computer implemented method of claim 1, wherein the data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) the time when the received message traversed the checkpoint comprises an amount of data less than or equal to an amount of a word size of the computer. 12. The computer implemented method of claim 1, wherein the data transaction processing system is an exchange computing system, and wherein the application is executed by a hardware matching processor. 13. 
The computer implemented method of claim 12, wherein each message is an electronic data transaction request message, and wherein the application processes an electronic data transaction request message by determining whether an attempt to match an electronic data transaction request message with at least one previously received but unsatisfied electronic data transaction request message for a transaction which is counter thereto results in at least partial satisfaction of one or both of the electronic data transaction request message and the at least one previously received but unsatisfied electronic data transaction request message. 14. The computer implemented method of claim 1, wherein the size of the data recorded based on the time when the received message traversed the checkpoint is smaller than a size necessary to represent the time when the received message traversed the checkpoint. 15. The computer implemented method of claim 1, wherein, upon a received message traversing a checkpoint of the plurality of checkpoints, the amount of data stored in the data store increases. 16. The computer implemented method of claim 15, wherein a received message does not change in size as the received message is processed by the application. 17. The computer implemented method of claim 1, which includes processing, by the application, each received message, the processing causing each received message to traverse at least one of the checkpoints in the plurality of checkpoints. 18. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to:
detect that an application has received a message of a plurality of messages as each message of the plurality of messages is received by the application, the application including a plurality of checkpoints including at least a start checkpoint associated with an input of the application and an end checkpoint associated with an output of the application; associate an identifier of a plurality of identifiers with each received message; and upon each received message traversing a checkpoint of the plurality of checkpoints, store, in a data store separate from the message, a data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) a time when the received message traversed the checkpoint, wherein each of the plurality of identifiers is sized such that the storage space necessary to store the identifier is less than the storage space necessary to store an identifier sized to uniquely identify all of the plurality of messages. 19. The non-transitory computer-readable medium of claim 18, wherein the instructions are further configured to cause the processor to reuse identifiers from the plurality of identifiers for monitoring processing of the received messages. 20. A computer system for monitoring processing of messages, the computer system comprising:
means for detecting that an application has received a message of a plurality of messages for processing as each message of the plurality of messages is received by the application, the application including a plurality of checkpoints including a start checkpoint associated with an input of the application and an end checkpoint associated with an output of the application; means for associating an identifier from a plurality of identifiers with each received message; and upon each received message traversing a checkpoint of the plurality of checkpoints, means for storing a data entry based on (i) the identifier associated with the received message, (ii) the traversed checkpoint, and (iii) a time when the received message traversed the checkpoint, wherein each of the plurality of identifiers is sized such that the storage space necessary to store the identifier is less than the storage space necessary to store an identifier sized to uniquely identify all of the plurality of messages. | 2,400 |
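The logging scheme claimed above — a small pool of reusable identifiers, one compact (identifier, checkpoint, timestamp) entry per traversed checkpoint, and a post-processing pass that associates entries between matching start and end checkpoints — can be sketched as follows. The 8-bit pool size, entry layout, and all class and function names are illustrative assumptions, not the patented implementation.

```python
import itertools
import time

class CheckpointMonitor:
    """Cycle a small pool of reusable ids over incoming messages and append
    a compact log entry each time a message traverses a checkpoint."""

    def __init__(self, id_bits=8):
        # Reusable ids: far smaller than an id sized to be globally unique.
        self._ids = itertools.cycle(range(2 ** id_bits))
        self.log = []

    def on_receive(self, message):
        message["_mon_id"] = next(self._ids)   # temporary association
        self.record(message, "start")

    def record(self, message, checkpoint):
        self.log.append((message["_mon_id"], checkpoint, time.monotonic_ns()))

def associate_entries(log):
    """Post-processing pass: tie each 'start' entry to all later entries
    with the same id, up to and including the matching 'end' entry, so
    entries sharing a reused id are split into per-message spans."""
    open_spans, complete = {}, []
    for entry in log:
        ident, checkpoint, _ = entry
        if checkpoint == "start":
            open_spans[ident] = [entry]        # a reused id starts a fresh span
        elif ident in open_spans:
            open_spans[ident].append(entry)
            if checkpoint == "end":
                complete.append(open_spans.pop(ident))
    return complete
```

Because ids are reused, entries are unambiguous only when parsed in stored order, which matches the claim that parsing proceeds in the order entries were written.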
8,664 | 8,664 | 15,513,073 | 2,412 | Method and apparatus for device discovery are disclosed. In the method a type of device discovery to be used by a device for proximity services involving at least one another device is determined, wherein the available types of discovery comprise at least a first type discovery where additional resource information is provided for the device and a second type discovery where no additional resource information is provided. The method further comprises signalling information based on the determined type of device discovery. | 1-26. (canceled) 27. A method for device discovery, comprising
determining a type of device discovery to be used by a device for proximity services involving at least one another device, wherein the available types of discovery comprise at least a first type discovery where additional resource information is provided for the device and a second type discovery where no additional resource information is provided, and signalling information based on the determined type of device discovery. 28. A method according to claim 27, comprising signalling information of the determined type of device discovery between an access stratum layer function and an upper layer function. 29. A method according to claim 28, wherein the upper layer function comprises a non-access stratum layer function. 30. A method according to claim 28, comprising
determining at the access stratum layer function whether use of the first type of device discovery would be beneficial, and signalling an indication of the determination to the upper layer function. 31. A method according to claim 28, wherein the determining of the type of device discovery is provided on the upper layer. 32. A method according to claim 30, comprising receiving at the access stratum layer function information of determination of the first type by the upper layer function prior to determining whether the use of the first type of device discovery would be beneficial. 33. A method according to claim 27, wherein the determining of the type of the device discovery is based on at least one of a proximity service application, a user input, at least one measurement, information of battery status of the device and information of activity state of the device. 34. A method according to claim 33, wherein the at least one measurement comprises measurement of at least one of downlink signal strength of a serving cell and/or neighbouring cells, the neighbouring cell list and cell size. 35. A method according to claim 34, wherein the determination is provided at least in part by an access stratum layer function based on the at least one measurement. 36. An apparatus for a communication device, the apparatus comprising at least one processor, and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to
determine a type of device discovery to be used by the communication device for proximity services involving at least one another device, wherein the available types of resource discovery comprise at least a first type discovery where additional resource information is provided for the device and a second type discovery where no additional resource information is provided, and signal information based on the determined type of device discovery. 37. An apparatus according to claim 36, configured to provide an access stratum layer function and an upper layer function and to signal information of the determined type of device discovery between the access stratum layer function and the upper layer function. 38. An apparatus according to claim 37, configured to determine at the access stratum layer function whether use of the first type of device discovery would be beneficial and signal an indication of the determination to the upper layer function. 39. An apparatus according to claim 37, configured to determine the type of device discovery to be used on the upper layer and/or the access stratum layer function and signal an indication of the determination between the access stratum layer function and the upper layer function. 40. An apparatus according to claim 36, configured to take into account in determining the type of the device discovery information from at least one of a proximity service application, user input, at least one measurement, battery status of the communication device and activity state of the communication device. 41. An apparatus according to claim 36, wherein
the first type of device discovery comprises at least one of an inter-cell discovery, inter-network discovery and inter-frequency discovery, and the second type of device discovery comprises at least intra-frequency discovery and intra-cell discovery. 42. An apparatus according to claim 36, configured to cause the communication device to request for support from an access system for the device discovery in response to determination of the first type of discovery and process resource information received via dedicated signalling in response to the request. 43. An apparatus according to claim 42, configured to maintain the communication device in connected mode subsequent to sending the request for support for the first type of discovery. 44. An apparatus according to claim 36, configured to signal information to and/or from a network entity and between different protocol layers within the device in association with determination of the type of device discovery. 45. A non-transitory computer program product comprising a program code stored in a tangible form in a computer readable medium configured to cause an apparatus at least to:
determine a type of device discovery to be used by a device for proximity services involving at least one another device, wherein the available types of discovery comprise at least a first type discovery where additional resource information is provided for the device and a second type discovery where no additional resource information is provided, and signal information based on the determined type of device discovery. 46. The computer program product according to claim 45, the program code further configured to cause the apparatus at least to:
signal information of the determined type of device discovery between an access stratum layer function and an upper layer function. | Method and apparatus for device discovery are disclosed. In the method a type of device discovery to be used by a device for proximity services involving at least one another device is determined, wherein the available types of discovery comprise at least a first type discovery where additional resource information is provided for the device and a second type discovery where no additional resource information is provided. The method further comprises signalling information based on the determined type of device discovery. 1-26. (canceled) 27. A method for device discovery, comprising
determining a type of device discovery to be used by a device for proximity services involving at least one another device, wherein the available types of discovery comprise at least a first type discovery where additional resource information is provided for the device and a second type discovery where no additional resource information is provided, and signalling information based on the determined type of device discovery. 28. A method according to claim 27, comprising signalling information of the determined type of device discovery between an access stratum layer function and an upper layer function. 29. A method according to claim 28, wherein the upper layer function comprises a non-access stratum layer function. 30. A method according to claim 28, comprising
determining at the access stratum layer function whether use of the first type of device discovery would be beneficial, and signalling an indication of the determination to the upper layer function. 31. A method according to claim 28, wherein the determining of the type of device discovery is provided on the upper layer. 32. A method according to claim 30, comprising receiving at the access stratum layer function information of determination of the first type by the upper layer function prior to determining whether the use of the first type of device discovery would be beneficial. 33. A method according to claim 27, wherein the determining of the type of the device discovery is based on at least one of a proximity service application, a user input, at least one measurement, information of battery status of the device and information of activity state of the device. 34. A method according to claim 33, wherein the at least one measurement comprises measurement of at least one of downlink signal strength of a serving cell and/or neighbouring cells, the neighbouring cell list and cell size. 35. A method according to claim 34, wherein the determination is provided at least in part by an access stratum layer function based on the at least one measurement. 36. An apparatus for a communication device, the apparatus comprising at least one processor, and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to
determine a type of device discovery to be used by the communication device for proximity services involving at least one another device, wherein the available types of resource discovery comprise at least a first type discovery where additional resource information is provided for the device and a second type discovery where no additional resource information is provided, and signal information based on the determined type of device discovery. 37. An apparatus according to claim 36, configured to provide an access stratum layer function and an upper layer function and to signal information of the determined type of device discovery between the access stratum layer function and the upper layer function. 38. An apparatus according to claim 37, configured to determine at the access stratum layer function whether use of the first type of device discovery would be beneficial and signal an indication of the determination to the upper layer function. 39. An apparatus according to claim 37, configured to determine the type of device discovery to be used on the upper layer and/or the access stratum layer function and signal an indication of the determination between the access stratum layer function and the upper layer function. 40. An apparatus according to claim 36, configured to take into account in determining the type of the device discovery information from at least one of a proximity service application, user input, at least one measurement, battery status of the communication device and activity state of the communication device. 41. An apparatus according to claim 36, wherein
the first type of device discovery comprises at least one of an inter-cell discovery, inter-network discovery and inter-frequency discovery, and the second type of device discovery comprises at least intra-frequency discovery and intra-cell discovery. 42. An apparatus according to claim 36, configured to cause the communication device to request for support from an access system for the device discovery in response to determination of the first type of discovery and process resource information received via dedicated signalling in response to the request. 43. An apparatus according to claim 42, configured to maintain the communication device in connected mode subsequent to sending the request for support for the first type of discovery. 44. An apparatus according to claim 36, configured to signal information to and/or from a network entity and between different protocol layers within the device in association with determination of the type of device discovery. 45. A non-transitory computer program product comprising a program code stored in a tangible form in a computer readable medium configured to cause an apparatus at least to:
determine a type of device discovery to be used by a device for proximity services involving at least one another device, wherein the available types of discovery comprise at least a first type discovery where additional resource information is provided for the device and a second type discovery where no additional resource information is provided, and signal information based on the determined type of device discovery. 46. The computer program product according to claim 45, the program code further configured to cause the apparatus at least to:
signal information of the determined type of device discovery between an access stratum layer function and an upper layer function. | 2,400 |
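The two discovery types and the selection inputs listed in the claims above can be sketched as a simple policy. The claims enumerate the inputs (measurements such as downlink signal strength, battery status, activity state) but prescribe no concrete rule, so the thresholds and decision logic below are purely illustrative assumptions.

```python
from enum import Enum

class DiscoveryType(Enum):
    TYPE1 = 1   # network provides additional resource information
    TYPE2 = 2   # no additional resource information provided

def choose_discovery_type(serving_rsrp_dbm, battery_pct,
                          rsrp_threshold=-110.0, battery_threshold=20):
    """Hypothetical selection policy combining two of the claimed inputs.
    Both thresholds and the ordering of the checks are assumptions."""
    if battery_pct < battery_threshold:
        return DiscoveryType.TYPE2   # conserve power: skip the extra signalling
    if serving_rsrp_dbm < rsrp_threshold:
        return DiscoveryType.TYPE1   # weak serving cell: request network support
    return DiscoveryType.TYPE2
```

In the claimed architecture this determination may be split between the access stratum layer (which sees the measurements) and the upper layer, with the result signalled between them.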
8,665 | 8,665 | 15,216,325 | 2,426 | Described are methods and systems for combining programming content in a controlled synchronized manner. The systems and methods allow for centrally generated content to be modified using local content. The centrally generated content can include data that specifies what portions of the centrally generated content can be modified by the local content. | 1. A method of distributing customizable program content comprising:
distributing a program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously on different portions of a display, and distributing spatial location data comprising an indication of which segments can be subsequently modified, wherein the data and program are distributed simultaneously as part of a common program signal. 2. The method of claim 1, wherein the program is a television program. 3. The method of claim 1, wherein the program is a web site. 4. The method of claim 1, wherein the data further comprises the location of one or more program segments. 5. The method of claim 1, wherein the data further comprises the size of one or more program segments. 6. The method of claim 1, wherein the data further comprises the duration of one or more program segments. 7. A non-transitory, machine-readable medium, comprising instructions to:
distribute a television program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously on different portions of a display, and distribute spatial location data comprising an indication of which segments can be subsequently modified, wherein the data and the television program are distributed simultaneously as part of the same program signal. 8. The machine-readable medium of claim 7, wherein the program signal is a digital program signal. 9. The machine-readable medium of claim 7, wherein the program signal is in MPEG-2 format. 10. The machine-readable medium of claim 7, comprising instructions to distribute the program signal in MPEG-2 format. 11. The machine-readable medium of claim 7, comprising compressing the program signal into a compressed format. 12. The machine-readable medium of claim 7, further comprising instructions to modify one or more program segments. 13. The machine-readable medium of claim 12, wherein modifying the one or more program segments comprises replacing the one or more program segments with local content. 14. The machine-readable medium of claim 12, wherein modifying the one or more program segments comprises modifying only part of one or more segments. 15. The machine-readable medium of claim 12, further comprising distributing the modified program to one or more users. 16. A method of customizing program content comprising:
receiving a program in a compressed format, the program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously on different portions of a display, receiving spatial location data comprising an indication of which segments can be subsequently modified, and modifying one or more segments to produce a modified program based upon the spatial location data. 17. The method of claim 16, further comprising distributing the modified program to one or more users. 18. The method of claim 16, further comprising distributing the modified program to a plurality of users. 19. The method of claim 16, wherein the program is a television program. 20. The method of claim 16, wherein the program is a web site. 21. The method of claim 16, wherein the data further comprises the location of one or more program segments. 22. The method of claim 16, wherein the data further comprises the size of one or more program segments. 23. The method of claim 16, wherein the data further comprises the duration of one or more program segments. 24. The method of claim 16, wherein the data and the television program are received simultaneously as part of the same program signal. 25. The method of claim 24, wherein the program is received in digital format. 26. The method of claim 24, wherein the program signal is in MPEG-2 format. 27. The method of claim 24, wherein the program signal is in MPEG-2 format. 28. The method of claim 24, wherein the program signal is in a compressed format. 29. The method of claim 24, wherein the program signal is an analog program signal. 30. The method of claim 16, wherein the program is a nationally broadcast television program. 31. A non-transitory computer readable media having computer readable code stored on a programmed computer system which when executed by the computer system causes the computer system to implement a method for customizing program content, comprising:
modifying a program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously utilizing spatial location data comprising an indication of which segments may be modified. 32. The computer readable media of claim 31, wherein the data further comprises a location, size or timing of the segments. 33. A system for customizing program content comprising:
a computer system, and an application program for modifying a program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously utilizing spatial location data comprising an indication of which segments may be modified. 34. The system of claim 33, wherein the data further comprises a location, size or timing of the segments. | Described are methods and systems for combining programming content in a controlled synchronized manner. The systems and methods allow for centrally generated content to be modified using local content. The centrally generated content can include data that specifies what portions of the centrally generated content can be modified by the local content.1. A method of distributing customizable program content comprising:
distributing a program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously on different portions of a display, and distributing spatial location data comprising an indication of which segments can be subsequently modified, wherein the data and program are distributed simultaneously as part of a common program signal. 2. The method of claim 1, wherein the program is a television program. 3. The method of claim 1, wherein the program is a web site. 4. The method of claim 1, wherein the data further comprises the location of one or more program segments. 5. The method of claim 1, wherein the data further comprises the size of one or more program segments. 6. The method of claim 1, wherein the data further comprises the duration of one or more program segments. 7. A non-transitory, machine-readable medium, comprising instructions to:
distribute a television program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously on different portions of a display, and distribute spatial location data comprising an indication of which segments can be subsequently modified, wherein the data and the television program are distributed simultaneously as part of the same program signal. 8. The machine-readable medium of claim 7, wherein the program signal is a digital program signal. 9. The machine-readable medium of claim 7, wherein the program signal is in MPEG-2 format. 10. The machine-readable medium of claim 7, comprising instructions to distribute the program signal in MPEG-2 format. 11. The machine-readable medium of claim 7, comprising compressing the program signal into a compressed format. 12. The machine-readable medium of claim 7, further comprising instructions to modify one or more program segments. 13. The machine-readable medium of claim 12, wherein modifying the one or more program segments comprises replacing the one or more program segments with local content. 14. The machine-readable medium of claim 12, wherein modifying the one or more program segments comprises modifying only part of one or more segments. 15. The machine-readable medium of claim 12, further comprising distributing the modified program to one or more users. 16. A method of customizing program content comprising:
receiving a program in a compressed format, the program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously on different portions of a display, receiving spatial location data comprising an indication of which segments can be subsequently modified, and modifying one or more segments to produce a modified program based upon the spatial location data. 17. The method of claim 16, further comprising distributing the modified program to one or more users. 18. The method of claim 16, further comprising distributing the modified program to a plurality of users. 19. The method of claim 16, wherein the program is a television program. 20. The method of claim 16, wherein the program is a web site. 21. The method of claim 16, wherein the data further comprises the location of one or more program segments. 22. The method of claim 16, wherein the data further comprises the size of one or more program segments. 23. The method of claim 16, wherein the data further comprises the duration of one or more program segments. 24. The method of claim 16, wherein the data and the television program are received simultaneously as part of the same program signal. 25. The method of claim 24, wherein the program is received in digital format. 26. The method of claim 24, wherein the program signal is in MPEG-2 format. 27. The method of claim 24, wherein the program signal is in MPEG-2 format. 28. The method of claim 24, wherein the program signal is in a compressed format. 29. The method of claim 24, wherein the program signal is an analog program signal. 30. The method of claim 16, wherein the program is a nationally broadcast television program. 31. A non-transitory computer readable media having computer readable code stored on a programmed computer system which when executed by the computer system causes the computer system to implement a method for customizing program content, comprising:
modifying a program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously utilizing spatial location data comprising an indication of which segments may be modified. 32. The computer readable media of claim 31, wherein the data further comprises a location, size or timing of the segments. 33. A system for customizing program content comprising:
a computer system, and an application program for modifying a program comprising multiple program segments configured to be displayed spatially adjacent to one another simultaneously utilizing spatial location data comprising an indication of which segments may be modified. 34. The system of claim 33, wherein the data further comprises a location, size or timing of the segments. | 2,400 |
8,666 | 8,666 | 15,130,523 | 2,485 | A system for inspecting a glass container and methods of inspecting glass containers are provided. The system includes a panel including a plurality of light sources configured to illuminate the glass container. The system includes a camera configured to image the illuminated glass container. The system includes a controller configured to adjust the amount of power applied to each of the light sources individually. The system includes a processor configured to evaluate the image of the illuminated glass container for indications of defects in the container. Methods of calibrating the system are also provided. | 1. A system for inspecting a glass container comprising:
a panel including a plurality of light sources configured to illuminate the glass container; a first camera configured to image the illuminated glass container; a controller configured to adjust the amount of power supplied to each of the light sources individually; and a processor configured to evaluate the image of the illuminated glass container for indications of defects in the container. 2. The system of claim 1, wherein each light source is one or more surface mounted LED. 3. (canceled) 4. The system of claim 1, wherein the panel is configured to illuminate the plurality of light sources simultaneously, the processor is configured to evaluate an image of the panel, and the controller is configured to control the power supplied to each of the plurality of light sources individually such that an image of the panel appears to be uniformly lit to the first camera. 5. (canceled) 6. The system of claim 1, wherein the processor is configured to evaluate an output from the first camera to determine if the brightness detected in a field of view of the first camera is below a predetermined threshold and to indicate to the controller when the brightness detected in a field of view of the first camera is below the predetermined threshold. 7. The system of claim 6, wherein the controller is configured to receive the indication that the brightness detected in a field of view of the first camera is below a predetermined threshold and to adjust the amount of power supplied to at least one of the plurality of light sources until a desired brightness is detected. 8. The system of claim 1, wherein all of the light sources of the panel are directed in a same orientation. 9. 
The system of claim 1, wherein all of the light sources of the panel are directed parallel to one another and light sources that are farther from the first camera are driven at a higher power than light sources that are closer to the first camera such that the brightness of the farther and closer light sources is the same as viewed by the first camera. 10. (canceled) 11. The system of claim 10, wherein the controller and processor are configured to test the brightness of the light sources to determine power adjustment values based on a captured image of the panel and the controller and processor are configured to adjust the power table for one or more of the predetermined patterns based on the power adjustment values. 12. The system of claim 1, wherein the image is captured when the glass container is off of a central axis of the first camera and the controller is configured to adjust the power of the plurality of light sources as if the image was captured when the glass container was on the central axis of the first camera. 13. The system of claim 1, further including a second camera, an inspection axis of the first camera is offset from an inspection axis of the second camera. 14-15. (canceled) 16. The system of claim 1, wherein the panel is planar. 17. The system of claim 1, wherein the panel is formed from a plurality of segments that are non-parallel to one another. 18. A method of inspecting a glass container using a first panel including a first plurality of light sources, the method comprising:
illuminating a first predetermined set of the first plurality of light sources to illuminate the glass container with a first predetermined illumination pattern; capturing a first image of the illuminated glass container; illuminating a second predetermined set of the first plurality of light sources to illuminate the glass container with a second predetermined illumination pattern, the second predetermined illumination pattern being different from the first predetermined illumination pattern; capturing a second image of the illuminated glass container; and evaluating the first and second images to determine whether the glass container includes a defect. 19. The method of claim 18, further comprising illuminating all of the first plurality of light sources, evaluating the brightness of the image viewed by a first camera, individually adjusting the amount of power supplied to at least one of the first plurality of light sources to provide uniform illumination from the view of the first camera. 20. The method of claim 18, further comprising continuing to evaluate the brightness of a field of view viewed by a first camera and increasing the amount of power supplied to at least one of the plurality of light sources if the brightness of the field of view viewed by the first camera drops below a predetermined threshold. 21. The method of claim 18, wherein the first image is captured by a first camera and the second image is captured by a second camera. 22. (canceled) 23. The method of claim 18, wherein one of the first and second predetermined illumination patterns is a uniform background for inspecting opaque defects while the other one of the second and first predetermined patterns provides a high contrast for highlighting the edges of a container for dimensional inspection. 24. 
The method of claim 18, wherein the first image is captured with the glass container located in a first location relative to the panel and the second image is captured with the glass container positioned in a second location relative to the panel, the second location being different than the first location. 25. The method of claim 24, wherein the first and second images are captured by a same camera. 26. The method of claim 24, wherein:
illuminating a first predetermined set of the first plurality of light sources includes powering the first predetermined set of the first plurality of light sources based on a first predetermined power table; illuminating a second predetermined set of the first plurality of light sources includes powering the second set of the first plurality of light sources based on a second predetermined power table. 27. The method of claim 26, further including:
analyzing the brightness of the first plurality of light sources and determining a brightness adjustment value for at least one of the first plurality of light sources; creating a calibration table that stores the brightness adjustment value for the at least one of the first plurality of light sources; and adjusting the first and second predetermined power tables based on the calibration table. 28. The method of claim 18, wherein illuminating a first predetermined set of the first plurality of light sources includes powering the first predetermined set of the first plurality of light sources based on a first predetermined power table;
further including: analyzing the brightness of the first predetermined set of first plurality of light sources and determining a brightness adjustment value for at least one of the light sources of the first predetermined set; and adjusting the first power table based on the brightness adjustment value for the at least one of the light sources of the first predetermined set. 29. The method of claim 18, wherein the first and second predetermined illumination patterns are selected from one of a plurality of horizontal bands of light; a plurality of vertical bands of light; a uniform continuous light; a shape that follows the contour of the glass container; and a circle. 30. The method of claim 21, wherein an inspection axis of the first camera is offset from an inspection axis of the second camera. 31-34. (canceled) 35. A method of operating a system for inspecting glass containers, the system for inspecting glass containers including a panel including a plurality of light sources configured to illuminate the glass container; a camera configured to image the illuminated glass container; a controller configured to control the amount of power supplied to each of the individual light sources individually; and a processor configured to evaluate the image of the illuminated glass container for indications of defects in the container, the method comprising:
illuminating the plurality of light sources of the panel simultaneously with a same power; taking an image of the panel with the camera; evaluating the brightness of the image of the panel to determine if all of the light sources appear as illuminating at a uniform brightness to the camera; determining a brightness adjustment value necessary for at least one of the light sources to make the panel appear as illuminating at a uniform brightness to the camera. 36. The method of claim 35, further comprising creating a calibration table that stores the brightness adjustment value for the at least one of the light sources;
illuminating a predetermined pattern of the light sources wherein less than all of the light sources are to appear at a same brightness to the camera while inspecting a glass container; and wherein illuminating a predetermined pattern of the light sources includes applying the calibration table to the predetermined pattern to adjust the brightness of individual ones of the light sources based on the brightness values stored in the calibration table. 37-38. (canceled) 39. The method of claim 35, further comprising adjusting a power supplied to individual ones of the light sources using the brightness adjustment value;
wherein the step of adjusting a power supplied to individual ones of the light sources includes increasing the power for light sources that have a brightness that is too low and decreasing the power for light sources that have a brightness that is too high. 40. (canceled) 41. The method of claim 35, further comprising:
creating a calibration table that stores the brightness adjustment value for the at least one of the light sources; illuminating the plurality of light sources of the panel simultaneously with a same power a second time; taking a second image of the panel with the camera; evaluating the brightness of the second image of the panel to determine if all of the light sources appear as illuminating at a uniform brightness to the camera; determining a second brightness adjustment value necessary for at least one of the light sources to make the panel appear as illuminating at a uniform brightness to the camera; and creating an updated calibration table based on the second brightness adjustment value. 42. A method of operating a system for inspecting glass containers, the system for inspecting glass containers including a panel including a plurality of light sources configured to illuminate the glass container; a camera configured to image the illuminated glass container; a controller configured to adjust the amount of power supplied to each of the light sources individually; and a processor configured to evaluate the image of the illuminated glass container for indications of defects in the container, the method comprising:
illuminating a set of the plurality of light sources of the panel for a predetermined pattern based on an initial power table providing a power value for each of the plurality of light sources; capturing an image of a glass container with the camera with the predetermined pattern illuminated based on the initial power table; evaluating a quality of the image of the glass container for performing a predetermined inspection; and adjusting at least one power value of the predetermined power table to adjust the brightness of at least one of the light sources to improve image quality for the predetermined inspection. 43. The method of claim 42, further comprising determining a brightness adjustment value necessary for at least one of the light sources to make the set of the plurality of light sources appear as illuminating at a uniform brightness to the camera. 44. The method of claim 43, wherein the step of determining a brightness adjustment value includes:
illuminating the plurality of light sources of the panel simultaneously with a same power;
taking an image of the panel with the camera;
evaluating the brightness of the image of the panel to determine if all of the light sources appear as illuminating at the uniform brightness to the camera;
determining the brightness adjustment value necessary for at least one of the light sources to make the panel appear as illuminating at a uniform brightness to the camera. 45. The method of claim 43, wherein the brightness adjustment value compensates for the image being off of the central axis of the camera. 46. The method of claim 43, wherein the step of adjusting at least one power value uses the brightness adjustment value to adjust the power value of at least one of the light sources. 47. The method of claim 42, wherein the predetermined inspection analyzes:
at least one component of the peripheral shape of the glass container and adjusting the at least one power value of the predetermined power table improves edge detection of a portion of the glass container defining the at least one component of the peripheral shape being inspected; or the glass container for opaque defects and adjusting the at least one power value of the predetermined power table reduces washout; or the glass container for stress defects and adjusting the at least one power value of the predetermined power table reduces stray reflections. 48-49. (canceled) 50. The method of claim 42, wherein adjusting the at least one power value of the predetermined power table reduces or increases the power supplied to at least one of the light sources to compensate for:
changes in wall thickness for different portions of the glass container; or different light transmission characteristics in different portions of the glass container. 51. (canceled) | A system for inspecting a glass container and methods of inspecting glass containers are provided. The system includes a panel including a plurality of light sources configured to illuminate the glass container. The system includes a camera configured to image the illuminated glass container. The system includes a controller configured to adjust the amount of power applied to each of the light sources individually. The system includes a processor configured to evaluate the image of the illuminated glass container for indications of defects in the container. Methods of calibrating the system are also provided.1. A system for inspecting a glass container comprising:
a panel including a plurality of light sources configured to illuminate the glass container; a first camera configured to image the illuminated glass container; a controller configured to adjust the amount of power supplied to each of the light sources individually; and a processor configured to evaluate the image of the illuminated glass container for indications of defects in the container. 2. The system of claim 1, wherein each light source is one or more surface mounted LED. 3. (canceled) 4. The system of claim 1, wherein the panel is configured to illuminate the plurality of light sources simultaneously, the processor is configured to evaluate an image of the panel, and the controller is configured to control the power supplied to each of the plurality of light sources individually such that an image of the panel appears to be uniformly lit to the first camera. 5. (canceled) 6. The system of claim 1, wherein the processor is configured to evaluate an output from the first camera to determine if the brightness detected in a field of view of the first camera is below a predetermined threshold and to indicate to the controller when the brightness detected in a field of view of the first camera is below the predetermined threshold. 7. The system of claim 6, wherein the controller is configured to receive the indication that the brightness detected in a field of view of the first camera is below a predetermined threshold and to adjust the amount of power supplied to at least one of the plurality of light sources until a desired brightness is detected. 8. The system of claim 1, wherein all of the light sources of the panel are directed in a same orientation. 9. 
The system of claim 1, wherein all of the light sources of the panel are directed parallel to one another and light sources that are farther from the first camera are driven at a higher power than light sources that are closer to the first camera such that the brightness of the farther and closer light sources is the same as viewed by the first camera. 10. (canceled) 11. The system of claim 10, wherein the controller and processor are configured to test the brightness of the light sources to determine power adjustment values based on a captured image of the panel and the controller and processor are configured to adjust the power table for one or more of the predetermined patterns based on the power adjustment values. 12. The system of claim 1, wherein the image is captured when the glass container is off of a central axis of the first camera and the controller is configured to adjust the power of the plurality of light sources as if the image was captured when the glass container was on the central axis of the first camera. 13. The system of claim 1, further including a second camera, an inspection axis of the first camera is offset from an inspection axis of the second camera. 14-15. (canceled) 16. The system of claim 1, wherein the panel is planar. 17. The system of claim 1, wherein the panel is formed from a plurality of segments that are non-parallel to one another. 18. A method of inspecting a glass container using a first panel including a first plurality of light sources, the method comprising:
illuminating a first predetermined set of the first plurality of light sources to illuminate the glass container with a first predetermined illumination pattern; capturing a first image of the illuminated glass container; illuminating a second predetermined set of the first plurality of light sources to illuminate the glass container with a second predetermined illumination pattern, the second predetermined illumination pattern being different from the first predetermined illumination pattern; capturing a second image of the illuminated glass container; and evaluating the first and second images to determine whether the glass container includes a defect. 19. The method of claim 18, further comprising illuminating all of the first plurality of light sources, evaluating the brightness of the image viewed by a first camera, individually adjusting the amount of power supplied to at least one of the first plurality of light sources to provide uniform illumination from the view of the first camera. 20. The method of claim 18, further comprising continuing to evaluate the brightness of a field of view viewed by a first camera and increasing the amount of power supplied to at least one of the plurality of light sources if the brightness of the field of view viewed by the first camera drops below a predetermined threshold. 21. The method of claim 18, wherein the first image is captured by a first camera and the second image is captured by a second camera. 22. (canceled) 23. The method of claim 18, wherein one of the first and second predetermined illumination patterns is a uniform background for inspecting opaque defects while the other one of the second and first predetermined patterns provides a high contrast for highlighting the edges of a container for dimensional inspection. 24. 
The method of claim 18, wherein the first image is captured with the glass container located in a first location relative to the panel and the second image is captured with the glass container positioned in a second location relative to the panel, the second location being different than the first location. 25. The method of claim 24, wherein the first and second images are captured by a same camera. 26. The method of claim 24, wherein:
illuminating a first predetermined set of the first plurality of light sources includes powering the first predetermined set of the first plurality of light sources based on a first predetermined power table; illuminating a second predetermined set of the first plurality of light sources includes powering the second set of the first plurality of light sources based on a second predetermined power table. 27. The method of claim 26, further including:
analyzing the brightness of the first plurality of light sources and determining a brightness adjustment value for at least one of the first plurality of light sources; creating a calibration table that stores the brightness adjustment value for the at least one of the first plurality of light sources; and adjusting the first and second predetermined power tables based on the calibration table. 28. The method of claim 18, wherein illuminating a first predetermined set of the first plurality of light sources includes powering the first predetermined set of the first plurality of light sources based on a first predetermined power table;
further including: analyzing the brightness of the first predetermined set of first plurality of light sources and determining a brightness adjustment value for at least one of the light sources of the first predetermined set; and adjusting the first power table based on the brightness adjustment value for the at least one of the light sources of the first predetermined set. 29. The method of claim 18, wherein the first and second predetermined illumination patterns are selected from one of a plurality of horizontal bands of light; a plurality of vertical bands of light; a uniform continuous light; a shape that follows the contour of the glass container; and a circle. 30. The method of claim 21, wherein an inspection axis of the first camera is offset from an inspection axis of the second camera. 31-34. (canceled) 35. A method of operating a system for inspecting glass containers, the system for inspecting glass containers including a panel including a plurality of light sources configured to illuminate the glass container; a camera configured to image the illuminated glass container; a controller configured to control the amount of power supplied to each of the individual light sources individually; and a processor configured to evaluate the image of the illuminated glass container for indications of defects in the container, the method comprising:
illuminating the plurality of light sources of the panel simultaneously with a same power; taking an image of the panel with the camera; evaluating the brightness of the image of the panel to determine if all of the light sources appear as illuminating at a uniform brightness to the camera; determining a brightness adjustment value necessary for at least one of the light sources to make the panel appear as illuminating at a uniform brightness to the camera. 36. The method of claim 35, further comprising creating a calibration table that stores the brightness adjustment value for the at least one of the light sources;
illuminating a predetermined pattern of the light sources wherein less than all of the light sources are to appear at a same brightness to the camera while inspecting a glass container; and wherein illuminating a predetermined pattern of the light sources includes applying the calibration table to the predetermined pattern to adjust the brightness of individual ones of the light sources based on the brightness values stored in the calibration table. 37-38. (canceled) 39. The method of claim 35, further comprising adjusting a power supplied to individual ones of the light sources using the brightness adjustment value;
wherein the step of adjusting a power supplied to individual ones of the light sources includes increasing the power for light sources that have a brightness that is too low and decreasing the power for light sources that have a brightness that is too high. 40. (canceled) 41. The method of claim 35, further comprising:
creating a calibration table that stores the brightness adjustment value for the at least one of the light sources; illuminating the plurality of light sources of the panel simultaneously with a same power a second time; taking a second image of the panel with the camera; evaluating the brightness of the second image of the panel to determine if all of the light sources appear as illuminating at a uniform brightness to the camera; determining a second brightness adjustment value necessary for at least one of the light sources to make the panel appear as illuminating at a uniform brightness to the camera; and creating an updated calibration table based on the second brightness adjustment value. 42. A method of operating a system for inspecting glass containers, the system for inspecting glass containers including a panel including a plurality of light sources configured to illuminate the glass container; a camera configured to image the illuminated glass container; a controller configured to adjust the amount of power supplied to each of the light sources individually; and a processor configured to evaluate the image of the illuminated glass container for indications of defects in the container, the method comprising:
illuminating a set of the plurality of light sources of the panel for a predetermined pattern based on an initial power table providing a power value for each of the plurality of light sources; capturing an image of a glass container with the camera with the predetermined pattern illuminated based on the initial power table; evaluating a quality of the image of the glass container for performing a predetermined inspection; and adjusting at least one power value of the predetermined power table to adjust the brightness of at least one of the light sources to improve image quality for the predetermined inspection. 43. The method of claim 42, further comprising determining a brightness adjustment value necessary for at least one of the light sources to make the set of the plurality of light sources appear as illuminating at a uniform brightness to the camera. 44. The method of claim 43, wherein the step of determining a brightness adjustment value includes:
illuminating the plurality of light sources of the panel simultaneously with a same power;
taking an image of the panel with the camera;
evaluating the brightness of the image of the panel to determine if all of the light sources appear as illuminating at the uniform brightness to the camera;
determining the brightness adjustment value necessary for at least one of the light sources to make the panel appear as illuminating at a uniform brightness to the camera. 45. The method of claim 43, wherein the brightness adjustment value compensates for the image being off of the central axis of the camera. 46. The method of claim 43, wherein the step of adjusting at least one power value uses the brightness adjustment value to adjust the power value of at least one of the light sources. 47. The method of claim 42, wherein the predetermined inspection analyzes:
at least one component of the peripheral shape of the glass container and adjusting the at least one power value of the predetermined power table improves edge detection of a portion of the glass container defining the at least one component of the peripheral shape being inspected; or the glass container for opaque defects and adjusting the at least one power value of the predetermined power table reduces washout; or the glass container for stress defects and adjusting the at least one power value of the predetermined power table reduces stray reflections. 48-49. (canceled) 50. The method of claim 42, wherein adjusting the at least one power value of the predetermined power table reduces or increases the power supplied to at least one of the light sources to compensate for:
changes in wall thickness for different portions of the glass container; or different light transmission characteristics in different portions of the glass container. 51. (canceled) | 2,400 |
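The glass-inspection claims above (claims 35-46) describe a calibration loop: drive all light sources at the same power, image the panel, derive a per-source brightness adjustment that would make the panel appear uniformly bright to the camera, and fold those adjustments into a predetermined power table. The arithmetic can be sketched as below; the function names and the choice of the mean brightness as the uniformity target are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the calibration step: equal drive power in,
# measured per-source brightness out, per-source power scale factors derived.

def brightness_adjustments(measured, target=None):
    """Return a calibration table mapping source index -> power scale factor.

    measured -- apparent brightness of each source at equal drive power
    target   -- brightness every source should appear at (default: the mean)
    """
    if target is None:
        target = sum(measured) / len(measured)
    # A source that looks too dim gets a factor > 1 (more power);
    # one that looks too bright gets a factor < 1 (less power).
    return {i: target / b for i, b in enumerate(measured)}

def apply_calibration(power_table, calibration):
    """Adjust a predetermined power table using the calibration table."""
    return [p * calibration[i] for i, p in enumerate(power_table)]

# Three sources imaged at equal power: the middle one is at the target.
cal = brightness_adjustments([90.0, 100.0, 110.0])
adjusted = apply_calibration([50.0, 50.0, 50.0], cal)
```

The dim source (90.0) ends up with more drive power than the bright one (110.0), which matches claim 39's "increasing the power for light sources that have a brightness that is too low and decreasing the power for light sources that have a brightness that is too high."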
8,667 | 8,667 | 15,399,541 | 2,453 | A method for obtaining decision advice from a set of confidants comprises the steps of receiving a plurality of user inputs on a source communication device from a user to generate a pending decision, receiving at least one user input on the source communication device to select one or more confidants from the set of confidants, posting the user inputs to an application server that is being coupled with an application of the source communication device via at least one network, sending the user inputs to other communication devices associated with the one or more confidants, receiving a set of selections from at least one confidant and posting a message that indicates the user has made a decision. The user inputs are defined as a set of choices including at least one of a set of images and a set of text characters. | 1. A method of promoting a merchant advertisement, comprising:
associating a geographic location of a person with a merchant; sending an advertisement associated with the merchant to be displayed by a mobile device carried by the person; using a pay to display model to promote the advertisement in a post to social networking friends of the person. 2. The method of claim 1, further comprising the person downloading and installing an application in which the method is implemented. 3. The method of claim 1, wherein the post includes a snapshot. 4. The method of claim 1, further comprising the post including a request for assistance in making a decision. 5. The method of claim 1, wherein the advertisement is an instance of a set of advertisements sent to the mobile device. | A method for obtaining decision advice from a set of confidants comprises the steps of receiving a plurality of user inputs on a source communication device from a user to generate a pending decision, receiving at least one user input on the source communication device to select one or more confidants from the set of confidants, posting the user inputs to an application server that is being coupled with an application of the source communication device via at least one network, sending the user inputs to other communication devices associated with the one or more confidants, receiving a set of selections from at least one confidant and posting a message that indicates the user has made a decision. The user inputs are defined as a set of choices including at least one of a set of images and a set of text characters.1. A method of promoting a merchant advertisement, comprising:
associating a geographic location of a person with a merchant; sending an advertisement associated with the merchant to be displayed by a mobile device carried by the person; using a pay to display model to promote the advertisement in a post to social networking friends of the person. 2. The method of claim 1, further comprising the person downloading and installing an application in which the method is implemented. 3. The method of claim 1, wherein the post includes a snapshot. 4. The method of claim 1, further comprising the post including a request for assistance in making a decision. 5. The method of claim 1, wherein the advertisement is an instance of a set of advertisements sent to the mobile device. | 2,400 |
8,668 | 8,668 | 13,158,788 | 2,476 | A power control method of a base station in a wireless communication system based on Orthogonal Frequency Division Multiple Access (OFDMA) is provided for reducing power consumption by turning off the bias of the power amplifier for the duration of a symbol carrying no user data. The method includes checking scheduling information of radio resources, detecting a symbol carrying no user data, based on the scheduling information, and turning off a bias of the power amplifier for a symbol duration of the symbol carrying no user data. The transmission power control method is capable of reducing power consumption of the base station by turning off the bias of the power amplifier of the base station for the symbol duration in which no user data is transmitted. | 1. A method for controlling a power amplifier of a base station in a wireless communication system based on Orthogonal Frequency Division Multiple Access (OFDMA), the method comprising:
checking scheduling information of radio resources; detecting a symbol carrying no user data, based on the scheduling information; and turning off a bias of the power amplifier for a symbol duration of the symbol carrying no user data. 2. The method of claim 1, further comprising assigning, before checking the scheduling information, the radio resources in a symbol first across an entire frequency bandwidth completely and then in a next symbol. 3. The method of claim 1, wherein the detecting of the symbol carrying no user data comprises:
acquiring, at a symbol location indicator of a controller, information on the symbol carrying no user data; and locating, at a symbol location detector of a Radio Frequency (RF) unit, a position of the symbol carrying no user data based on the symbol information. 4. The method of claim 3, wherein the detecting of the symbol carrying no user data further comprises calculating, at the symbol location indicator, a length of a symbol carrying user data. 5. The method of claim 4, wherein the turning off of the bias comprises:
receiving information of the symbol length; and turning on the bias for a symbol duration of the symbol carrying user data and turning off the bias for the symbol duration of the symbol carrying no user data by referencing the information of the symbol length. 6. The method of claim 1, wherein the detecting of the symbol carrying no user data comprises acquiring, at a symbol power detector of a Radio Frequency (RF) unit, information on the symbol carrying no user data, based on power allocated to individual symbols in a frame. 7. The method of claim 6, wherein the detecting of the symbol carrying no user data further comprises locating, at a symbol location detector, a position of the symbol carrying no user data, based on the symbol information. 8. The method of claim 7, wherein the detecting of the symbol carrying no user data further comprises calculating, at the symbol location detector, a length of a symbol carrying user data. 9. The method of claim 8, wherein the turning off of the bias comprises:
receiving information of the symbol length; and turning on the bias for a symbol duration of the symbol carrying user data and turning off the bias for the symbol duration of the symbol carrying no user data by referencing the information of the symbol length. 10. A base station for a wireless communication system based on Orthogonal Frequency Division Multiple Access (OFDMA), the base station comprising:
a controller for acquiring information of a symbol carrying no user data by referencing scheduling information of radio resources; and a Radio Frequency (RF) unit for turning off a bias applied to a power amplifier for a symbol duration of the symbol carrying no user data by referencing the symbol information provided by the controller. 11. The base station of claim 10, wherein the controller comprises:
a scheduler for assigning radio resources; a symbol location indicator for acquiring information of the symbol carrying no user data by referencing the scheduling information provided by the scheduler; and a modem for receiving the symbol information and for outputting the symbol information to the RF unit. 12. The base station of claim 10, wherein the RF unit comprises:
a symbol location detector for locating a position of the symbol carrying no user data by referencing the symbol information provided by the controller and for calculating a length of a symbol carrying user data; and a bias controller for turning on the bias for a symbol duration of the symbol carrying user data and for turning off the bias for the symbol duration of the symbol carrying no user data according to the length of the symbol carrying user data. 13. The base station of claim 11, wherein the scheduler assigns the radio resource first in a symbol across an entire frequency bandwidth completely and then in a next symbol. 14. The base station of claim 13, wherein the modem comprises a first interface unit for outputting the symbol information, and the modem for generating downlink subframe information including the symbol information and for transmitting the downlink subframe information to the first interface unit. 15. The base station of claim 14, wherein the downlink subframe information comprises a control information region indicating a start position of a downlink subframe and a data sample region, the control information region carrying the symbol information. 16. The base station of claim 15, wherein the controller further comprises a Digital Unit-RF unit (DU-RU) connection unit for transferring the symbol information to the RF unit, the DU-RU connection unit comprising:
a second interface unit for receiving the downlink subframe information from the first interface unit; a conversion unit for analyzing the downlink subframe information received from the second interface unit to extract symbol information and data sample and for converting the extracted symbol information and data sample to basic frame information; and a third interface unit for outputting the basic frame information received from the conversion unit. 17. The base station of claim 16, wherein the RF unit further comprises a fourth interface unit for receiving the basic frame information provided by the third interface unit, for extracting the symbol information from the basic frame information, and for outputting the symbol information to the symbol location indicator. 18. A base station for a wireless communication system based on Orthogonal Frequency Division Multiple Access (OFDMA), the base station comprising:
a controller including a scheduler for assigning radio resources; and a Radio Frequency (RF) unit for acquiring information on a symbol carrying no user data based on transmission powers of individual symbols within a frame and for turning off a bias applied to a power amplifier for a symbol duration of the symbol carrying no user data. 19. The base station of claim 18, wherein the RF unit comprises:
a symbol power detector for acquiring information on the symbol carrying no user data based on the power of the individual symbols within a frame; a symbol location detector for locating the symbols carrying no user data based on the symbol information provided by the symbol power detector and for calculating a length of a symbol carrying user data; and a bias controller for turning on the bias for a symbol duration of the symbol carrying user data and for turning off the bias for the symbol duration of the symbol carrying no user data according to the length of the symbol carrying user data. 20. The base station of claim 19, wherein the scheduler assigns the radio resource first in a symbol across an entire frequency bandwidth completely and then in a next symbol. | A power control method of a base station in a wireless communication system based on Orthogonal Frequency Division Multiple Access (OFDMA) is provided for reducing power consumption by turning off the bias of the power amplifier for the duration of a symbol carrying no user data. The method includes checking scheduling information of radio resources, detecting a symbol carrying no user data, based on the scheduling information, and turning off a bias of the power amplifier for a symbol duration of the symbol carrying no user data. The transmission power control method is capable of reducing power consumption of the base station by turning off the bias of the power amplifier of the base station for the symbol duration in which no user data is transmitted.1. A method for controlling a power amplifier of a base station in a wireless communication system based on Orthogonal Frequency Division Multiple Access (OFDMA), the method comprising:
checking scheduling information of radio resources; detecting a symbol carrying no user data, based on the scheduling information; and turning off a bias of the power amplifier for a symbol duration of the symbol carrying no user data. 2. The method of claim 1, further comprising assigning, before checking the scheduling information, the radio resources in a symbol first across an entire frequency bandwidth completely and then in a next symbol. 3. The method of claim 1, wherein the detecting of the symbol carrying no user data comprises:
acquiring, at a symbol location indicator of a controller, information on the symbol carrying no user data; and locating, at a symbol location detector of a Radio Frequency (RF) unit, a position of the symbol carrying no user data based on the symbol information. 4. The method of claim 3, wherein the detecting of the symbol carrying no user data further comprises calculating, at the symbol location indicator, a length of a symbol carrying user data. 5. The method of claim 4, wherein the turning off of the bias comprises:
receiving information of the symbol length; and turning on the bias for a symbol duration of the symbol carrying user data and turning off the bias for the symbol duration of the symbol carrying no user data by referencing the information of the symbol length. 6. The method of claim 1, wherein the detecting of the symbol carrying no user data comprises acquiring, at a symbol power detector of a Radio Frequency (RF) unit, information on the symbol carrying no user data, based on power allocated to individual symbols in a frame. 7. The method of claim 6, wherein the detecting of the symbol carrying no user data further comprises locating, at a symbol location detector, a position of the symbol carrying no user data, based on the symbol information. 8. The method of claim 7, wherein the detecting of the symbol carrying no user data further comprises calculating, at the symbol location detector, a length of a symbol carrying user data. 9. The method of claim 8, wherein the turning off of the bias comprises:
receiving information of the symbol length; and turning on the bias for a symbol duration of the symbol carrying user data and turning off the bias for the symbol duration of the symbol carrying no user data by referencing the information of the symbol length. 10. A base station for a wireless communication system based on Orthogonal Frequency Division Multiple Access (OFDMA), the base station comprising:
a controller for acquiring information of a symbol carrying no user data by referencing scheduling information of radio resources; and a Radio Frequency (RF) unit for turning off a bias applied to a power amplifier for a symbol duration of the symbol carrying no user data by referencing the symbol information provided by the controller. 11. The base station of claim 10, wherein the controller comprises:
a scheduler for assigning radio resources; a symbol location indicator for acquiring information of the symbol carrying no user data by referencing the scheduling information provided by the scheduler; and a modem for receiving the symbol information and for outputting the symbol information to the RF unit. 12. The base station of claim 10, wherein the RF unit comprises:
a symbol location detector for locating a position of the symbol carrying no user data by referencing the symbol information provided by the controller and for calculating a length of a symbol carrying user data; and a bias controller for turning on the bias for a symbol duration of the symbol carrying user data and for turning off the bias for the symbol duration of the symbol carrying no user data according to the length of the symbol carrying user data. 13. The base station of claim 11, wherein the scheduler assigns the radio resource first in a symbol across an entire frequency bandwidth completely and then in a next symbol. 14. The base station of claim 13, wherein the modem comprises a first interface unit for outputting the symbol information, and the modem for generating downlink subframe information including the symbol information and for transmitting the downlink subframe information to the first interface unit. 15. The base station of claim 14, wherein the downlink subframe information comprises a control information region indicating a start position of a downlink subframe and a data sample region, the control information region carrying the symbol information. 16. The base station of claim 15, wherein the controller further comprises a Digital Unit-RF unit (DU-RU) connection unit for transferring the symbol information to the RF unit, the DU-RU connection unit comprising:
a second interface unit for receiving the downlink subframe information from the first interface unit; a conversion unit for analyzing the downlink subframe information received from the second interface unit to extract symbol information and data sample and for converting the extracted symbol information and data sample to basic frame information; and a third interface unit for outputting the basic frame information received from the conversion unit. 17. The base station of claim 16, wherein the RF unit further comprises a fourth interface unit for receiving the basic frame information provided by the third interface unit, for extracting the symbol information from the basic frame information, and for outputting the symbol information to the symbol location indicator. 18. A base station for a wireless communication system based on Orthogonal Frequency Division Multiple Access (OFDMA), the base station comprising:
a controller including a scheduler for assigning radio resources; and a Radio Frequency (RF) unit for acquiring information on a symbol carrying no user data based on transmission powers of individual symbols within a frame and for turning off a bias applied to a power amplifier for a symbol duration of the symbol carrying no user data. 19. The base station of claim 18, wherein the RF unit comprises:
a symbol power detector for acquiring information on the symbol carrying no user data based on the power of the individual symbols within a frame; a symbol location detector for locating the symbols carrying no user data based on the symbol information provided by the symbol power detector and for calculating a length of a symbol carrying user data; and a bias controller for turning on the bias for a symbol duration of the symbol carrying user data and for turning off the bias for the symbol duration of the symbol carrying no user data according to the length of the symbol carrying user data. 20. The base station of claim 19, wherein the scheduler assigns the radio resource first in a symbol across an entire frequency bandwidth completely and then in a next symbol. | 2,400 |
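The OFDMA record above (claims 1-20) gates the power amplifier's bias per symbol: the scheduling information is scanned to locate symbols carrying no user data, and the bias is turned off for exactly those symbol durations. The symbol-location logic can be sketched as below; the dictionary-based schedule representation (symbol index to list of resource allocations) is an illustrative assumption, not the patent's data structure.

```python
# Hypothetical sketch: derive a per-symbol PA bias on/off pattern from
# scheduling information, as in claims 1 and 5.

def symbols_without_data(schedule):
    """Return indices of symbols the scheduler left empty.

    schedule -- dict mapping symbol index -> list of resource allocations
    """
    return [s for s, allocs in schedule.items() if not allocs]

def bias_pattern(schedule, symbols_per_frame):
    """Bias 'on' for symbols carrying user data, 'off' for idle symbols."""
    idle = set(symbols_without_data(schedule))
    return ["off" if s in idle else "on" for s in range(symbols_per_frame)]

# Symbols 0-1 carry user data; symbols 2-3 are empty, so bias is cut there.
sched = {0: ["ue1"], 1: ["ue2", "ue3"], 2: [], 3: []}
pattern = bias_pattern(sched, 4)
```

Note that claims 2, 13, and 20 make this effective by filling each symbol across the whole bandwidth before starting the next, which concentrates empty symbols at the end of the frame and lengthens the bias-off interval.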
8,669 | 8,669 | 15,211,541 | 2,482 | Video data is received in 2D or 3D format from different channels as a user scrolls through an electronic guide. The video data may be displayed in a portion of the on screen display along with graphic and text associated with the EPG data. The received video data may be converted to a suitable format to be displayed with Electronic Program Guide (EPG). The video data may be converted from a 3D to a 2D format to be displayed with the EPG data. The video data may be converted from a 2D format to a 3D format, while the EPG data displays in a 2D format. The video data may be converted from one 3D format to another 3D format for display with the EPG data. The selection of converting the received video data can be based on a display format of a previously viewed channel prior to requesting the EPG to be displayed. | 1. A system for media guidance, the system comprising:
at least one hardware processor that is configured to:
receive a request to display guidance information associated with at least a first channel;
cause first scaled video content corresponding to the first channel and the guidance data to be displayed in a guidance interface;
receive an indication that a second channel has been selected using the guidance interface;
in response to receiving the indication, determine that a video format associated with second scaled video content associated with the second channel is different than the video format associated with the first scaled video content, wherein the video format associated with the first scaled video content is two-dimensional video content or three-dimensional video content and wherein the video format associated with the second scaled video content is two-dimensional video content or three-dimensional video content;
convert the second scaled video content to the video format of the first scaled video content in response to determining that the video format associated with the second scaled video content is different than the video format associated with the first scaled video content; and
cause the converted video content and the guidance data to be displayed in the guidance interface. 2. The system of claim 1, wherein the at least one hardware processor is further configured to identify the video format associated with the first scaled video content in response to receiving the request to display the guidance information. 3. The system of claim 1, wherein the at least one hardware processor is further configured to:
receive at least one of: first video content associated with the first channel and second video content associated with the second channel; and scale the received video content to generate at least one of: the first scaled video content and the second scaled video content. 4. The system of claim 3, wherein the guidance information includes an indication that the received video content is in a received three-dimensional video format. 5. The system of claim 1, wherein the video format associated with the first scaled video content and the second scaled video content is a three-dimensional format that includes a plurality of views associated with each other in one of: a top and bottom video format, a left and right format, or a checkerboard format. 6. The system of claim 5, wherein converting the second scaled video content further comprises converting the three-dimensional format to a two-dimensional format by selecting one of the plurality of views for display. 7. The system of claim 5, wherein converting the second scaled video content further comprises converting the three-dimensional format to a different three-dimensional format. 8. The system of claim 5, wherein converting the second scaled video content further comprises converting a two-dimensional format to a display three-dimensional format. 9. The system of claim 5, wherein converting the second scaled video content further comprises providing offset information relating to the three-dimensional video format. 10. The system of claim 1, wherein the at least one hardware processor is further configured to simultaneously provide the guidance data in a two-dimensional format and a three-dimensional format. 11. A method for media guidance, the method comprising:
receiving, using a hardware processor, a request to display guidance information associated with at least a first channel; causing first scaled video content corresponding to the first channel and the guidance data to be displayed in a guidance interface; receiving an indication that a second channel has been selected using the guidance interface; in response to receiving the indication, determining that a video format associated with second scaled video content associated with the second channel is different than the video format associated with the first scaled video content, wherein the video format associated with the first scaled video content is two-dimensional video content or three-dimensional video content and wherein the video format associated with the second scaled video content is two-dimensional video content or three-dimensional video content; converting the second scaled video content to the video format of the first scaled video content in response to determining that the video format associated with the second scaled video content is different than the video format associated with the first scaled video content; and causing the converted video content and the guidance data to be displayed in the guidance interface. 12. The method of claim 11, further comprising identifying the video format associated with the first scaled video content in response to receiving the request to display the guidance information. 13. The method of claim 11, further comprising:
receiving at least one of: first video content associated with the first channel and second video content associated with the second channel; and scaling the received video content to generate at least one of: the first scaled video content and the second scaled video content. 14. The method of claim 13, wherein the guidance information includes an indication that the received video content is in a received three-dimensional video format. 15. The method of claim 11, wherein the video format associated with the first scaled video content and the second scaled video content is a three-dimensional format that includes a plurality of views associated with each other in one of: a top and bottom video format, a left and right format, or a checkerboard format. 16. The method of claim 15, wherein converting the second scaled video content further comprises converting the three-dimensional format to a two-dimensional format by selecting one of the plurality of views for display. 17. The method of claim 15, wherein converting the second scaled video content further comprises converting the three-dimensional format to a different three-dimensional format. 18. The method of claim 15, wherein converting the second scaled video content further comprises converting a two-dimensional format to a display three-dimensional format. 19. The method of claim 15, wherein converting the second scaled video content further comprises providing offset information relating to the three-dimensional video format. 20. The method of claim 11, wherein the at least one hardware processor is further configured to simultaneously provide the guidance data in a two-dimensional format and a three-dimensional format. 21. A non-transitory computer-readable medium containing computer executable instructions that, when executed by a processor, cause the processor to perform a method for media guidance, the method comprising:
receiving, using a hardware processor, a request to display guidance information associated with at least a first channel; causing first scaled video content corresponding to the first channel and the guidance data to be displayed in a guidance interface; receiving an indication that a second channel has been selected using the guidance interface; in response to receiving the indication, determining that a video format associated with second scaled video content associated with the second channel is different than the video format associated with the first scaled video content, wherein the video format associated with the first scaled video content is two-dimensional video content or three-dimensional video content and wherein the video format associated with the second scaled video content is two-dimensional video content or three-dimensional video content; converting the second scaled video content to the video format of the first scaled video content in response to determining that the video format associated with the second scaled video content is different than the video format associated with the first scaled video content; and causing the converted video content and the guidance data to be displayed in the guidance interface. | Video data is received in 2D or 3D format from different channels as a user scrolls through an electronic guide. The video data may be displayed in a portion of the on-screen display along with graphics and text associated with the EPG data. The received video data may be converted to a suitable format to be displayed with the Electronic Program Guide (EPG). The video data may be converted from a 3D to a 2D format to be displayed with the EPG data. The video data may be converted from a 2D format to a 3D format, while the EPG data displays in a 2D format. The video data may be converted from one 3D format to another 3D format for display with the EPG data.
The selection of whether to convert the received video data can be based on the display format of a previously viewed channel prior to requesting the EPG to be displayed. 1. A system for media guidance, the system comprising:
at least one hardware processor that is configured to:
receive a request to display guidance information associated with at least a first channel;
cause first scaled video content corresponding to the first channel and the guidance data to be displayed in a guidance interface;
receive an indication that a second channel has been selected using the guidance interface;
in response to receiving the indication, determine that a video format associated with second scaled video content associated with the second channel is different than the video format associated with the first scaled video content, wherein the video format associated with the first scaled video content is two-dimensional video content or three-dimensional video content and wherein the video format associated with the second scaled video content is two-dimensional video content or three-dimensional video content;
convert the second scaled video content to the video format of the first scaled video content in response to determining that the video format associated with the second scaled video content is different than the video format associated with the first scaled video content; and
cause the converted video content and the guidance data to be displayed in the guidance interface. 2. The system of claim 1, wherein the at least one hardware processor is further configured to identify the video format associated with the first scaled video content in response to receiving the request to display the guidance information. 3. The system of claim 1, wherein the at least one hardware processor is further configured to:
receive at least one of: first video content associated with the first channel and second video content associated with the second channel; and scale the received video content to generate at least one of: the first scaled video content and the second scaled video content. 4. The system of claim 3, wherein the guidance information includes an indication that the received video content is in a received three-dimensional video format. 5. The system of claim 1, wherein the video format associated with the first scaled video content and the second scaled video content is a three-dimensional format that includes a plurality of views associated with each other in one of: a top and bottom video format, a left and right format, or a checkerboard format. 6. The system of claim 5, wherein converting the second scaled video content further comprises converting the three-dimensional format to a two-dimensional format by selecting one of the plurality of views for display. 7. The system of claim 5, wherein converting the second scaled video content further comprises converting the three-dimensional format to a different three-dimensional format. 8. The system of claim 5, wherein converting the second scaled video content further comprises converting a two-dimensional format to a display three-dimensional format. 9. The system of claim 5, wherein converting the second scaled video content further comprises providing offset information relating to the three-dimensional video format. 10. The system of claim 1, wherein the at least one hardware processor is further configured to simultaneously provide the guidance data in a two-dimensional format and a three-dimensional format. 11. A method for media guidance, the method comprising:
receiving, using a hardware processor, a request to display guidance information associated with at least a first channel; causing first scaled video content corresponding to the first channel and the guidance data to be displayed in a guidance interface; receiving an indication that a second channel has been selected using the guidance interface; in response to receiving the indication, determining that a video format associated with second scaled video content associated with the second channel is different than the video format associated with the first scaled video content, wherein the video format associated with the first scaled video content is two-dimensional video content or three-dimensional video content and wherein the video format associated with the second scaled video content is two-dimensional video content or three-dimensional video content; converting the second scaled video content to the video format of the first scaled video content in response to determining that the video format associated with the second scaled video content is different than the video format associated with the first scaled video content; and causing the converted video content and the guidance data to be displayed in the guidance interface. 12. The method of claim 11, further comprising identifying the video format associated with the first scaled video content in response to receiving the request to display the guidance information. 13. The method of claim 11, further comprising:
receiving at least one of: first video content associated with the first channel and second video content associated with the second channel; and scaling the received video content to generate at least one of: the first scaled video content and the second scaled video content. 14. The method of claim 13, wherein the guidance information includes an indication that the received video content is in a received three-dimensional video format. 15. The method of claim 11, wherein the video format associated with the first scaled video content and the second scaled video content is a three-dimensional format that includes a plurality of views associated with each other in one of: a top and bottom video format, a left and right format, or a checkerboard format. 16. The method of claim 15, wherein converting the second scaled video content further comprises converting the three-dimensional format to a two-dimensional format by selecting one of the plurality of views for display. 17. The method of claim 15, wherein converting the second scaled video content further comprises converting the three-dimensional format to a different three-dimensional format. 18. The method of claim 15, wherein converting the second scaled video content further comprises converting a two-dimensional format to a display three-dimensional format. 19. The method of claim 15, wherein converting the second scaled video content further comprises providing offset information relating to the three-dimensional video format. 20. The method of claim 11, further comprising simultaneously providing the guidance data in a two-dimensional format and a three-dimensional format. 21. A non-transitory computer-readable medium containing computer executable instructions that, when executed by a processor, cause the processor to perform a method for media guidance, the method comprising:
receiving, using a hardware processor, a request to display guidance information associated with at least a first channel; causing first scaled video content corresponding to the first channel and the guidance data to be displayed in a guidance interface; receiving an indication that a second channel has been selected using the guidance interface; in response to receiving the indication, determining that a video format associated with second scaled video content associated with the second channel is different than the video format associated with the first scaled video content, wherein the video format associated with the first scaled video content is two-dimensional video content or three-dimensional video content and wherein the video format associated with the second scaled video content is two-dimensional video content or three-dimensional video content; converting the second scaled video content to the video format of the first scaled video content in response to determining that the video format associated with the second scaled video content is different than the video format associated with the first scaled video content; and causing the converted video content and the guidance data to be displayed in the guidance interface. | 2,400 |
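Claims 15 and 16 of the record above recover a two-dimensional picture from a frame-packed three-dimensional one by selecting a single view out of the packed pair. Below is a minimal sketch of that selection for the three packings the claims name (top and bottom, left and right, checkerboard); the function name, argument names, and the NumPy array representation are illustrative assumptions, not part of the patent:

```python
import numpy as np

def select_view(frame: np.ndarray, packing: str, view: int = 0) -> np.ndarray:
    """Extract one eye's view from a frame-packed 3D picture.

    frame   -- H x W (x C) array holding both views in one picture
    packing -- "top_bottom", "left_right", or "checkerboard"
    view    -- 0 for the first (e.g. left) view, 1 for the second
    """
    h, w = frame.shape[:2]
    if packing == "top_bottom":          # views stacked vertically
        return frame[:h // 2] if view == 0 else frame[h // 2:]
    if packing == "left_right":          # views placed side by side
        return frame[:, :w // 2] if view == 0 else frame[:, w // 2:]
    if packing == "checkerboard":        # views interleaved pixel by pixel
        mask = (np.add.outer(np.arange(h), np.arange(w)) % 2) == view
        # keep every other pixel, then reflow into half-width rows
        return frame[mask].reshape(h, w // 2, *frame.shape[2:])
    raise ValueError(f"unknown packing: {packing}")

# toy 4x4 top-and-bottom frame: top half is view 0, bottom half is view 1
frame = np.vstack([np.zeros((2, 4)), np.ones((2, 4))])
assert select_view(frame, "top_bottom", 0).sum() == 0.0
assert select_view(frame, "top_bottom", 1).sum() == 8.0
```

In the claimed method, a selection like this would run only after determining that the second channel's scaled video format differs from the first's; conversion in the other direction (2D to a display 3D format, claim 18) would instead have to synthesize or duplicate views.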
8,670 | 8,670 | 15,155,940 | 2,461 | In general, embodiments of the invention relate to routing packets between hosts or virtual machines in different layer 2 domains. More specifically, embodiments of the invention relate to using overlay routing mechanisms in an Internet Protocol (IP) fabric to enable hosts or virtual machines in different layer 2 domains to communicate. The overlay routing mechanisms may include direct routing, indirect routing, naked routing, or a combination thereof (e.g., hybrid routing). | 1.-20. (canceled) 21. A method for routing, comprising:
receiving, by a network device, a first encapsulated packet addressed to the network device, wherein the first encapsulated packet comprises an inner packet comprising a VARP address; decapsulating, by the network device, the first encapsulated packet to obtain the VARP address of the inner packet; processing, by the network device and based at least in part on the VARP address, the inner packet to obtain a rewritten inner packet comprising a destination address; generating, by the network device, a second encapsulated packet comprising the rewritten inner packet; and routing, by the network device, the second encapsulated packet towards a destination identified by the destination address. 22. The method of claim 21, wherein the inner packet is encapsulated in the first encapsulated packet using a Virtual Extensible Local Area Network (VXLAN) protocol. 23. The method of claim 22, wherein the network device is associated with a first virtual network identifier (VNI), and wherein the destination is associated with a second VNI. 24. The method of claim 21, wherein the inner packet comprises a Media Access Control (MAC) frame. 25. The method of claim 23, wherein the first VNI is associated with a first layer 2 domain, and wherein the second VNI is associated with a second layer 2 domain. 26. The method of claim 21, wherein the network device comprises a first routing table portion for an underlay network and a second routing table portion for an overlay network, and wherein the second routing table portion comprises information relating each of a plurality of IP network segments to one of a plurality of layer 2 domains. 27. The method of claim 22, wherein the network device comprises a virtual tunnel end point (VTEP) associated with the VARP address. 28. A method for routing, comprising:
receiving, from a source and by a first network device, a first encapsulated packet addressed to the first network device, wherein the first encapsulated packet comprises a first virtual network identifier (VNI) and an inner packet comprising a VARP address and a destination address associated with a destination; decapsulating, by the first network device, the first encapsulated packet to obtain the VARP address of the inner packet; processing, on the first network device and based at least in part on the VARP address, the inner packet to obtain a first rewritten inner packet comprising the destination address and a second network device address associated with a second network device; generating, by the first network device, a second encapsulated packet comprising the first rewritten inner packet and a second VNI; routing, by the first network device, the second encapsulated packet to the second network device; receiving, by the second network device, the second encapsulated packet; decapsulating, by the second network device, the second encapsulated packet to obtain the first rewritten inner packet; processing, on the second network device and based at least in part on the destination address, the first rewritten inner packet to obtain a second rewritten inner packet comprising the destination address; generating, by the second network device, a third encapsulated packet comprising a third VNI and the second rewritten inner packet; and routing, by the second network device, the third encapsulated packet towards the destination. 29. The method of claim 28, wherein:
the first VNI is associated with the source and the first network device, the second VNI is associated with the first network device and the second network device, and the third VNI is associated with the second network device and the destination. 30. The method of claim 29, wherein processing the inner packet by the first network device comprises using a routing table to determine that the destination is accessible via the second network device based on the association of the second network device and the destination with the third VNI. 31. The method of claim 30, wherein the first network device receives route information for populating the routing table using an interior gateway protocol (IGP). 32. The method of claim 29, wherein the first network device and the second network device are a portion of a plurality of network devices, and wherein the second VNI is only associated with the plurality of network devices. 33. The method of claim 28, wherein the inner packet is encapsulated in the first encapsulated packet using a Virtual Extensible Local Area Network (VXLAN) protocol. 34. The method of claim 31, wherein the first network device comprises a first virtual tunnel end point (VTEP), and wherein the second network device comprises a second VTEP. 35. The method of claim 29, wherein the inner packet comprises a Media Access Control (MAC) frame comprising an Internet Protocol (IP) packet. 36. A method for routing, comprising:
receiving, from a source and by a first network device, a first encapsulated packet addressed to the first network device, wherein the first encapsulated packet comprises a first virtual network identifier (VNI) and an inner packet comprising a destination address associated with a destination and a first VARP address; decapsulating, by the first network device, the first encapsulated packet to obtain the first VARP address of the inner packet; processing, by the first network device, the inner packet to obtain an unencapsulated packet comprising the destination address; routing the unencapsulated packet to a second network device via a spine tier; receiving, from the spine tier and by the second network device, the unencapsulated packet; processing, by the second network device, the unencapsulated packet to obtain a rewritten inner packet comprising the destination address and a second VARP address; generating, by the second network device, a second encapsulated packet comprising the rewritten inner packet and a second VNI; and routing the second encapsulated packet towards the destination. 37. The method of claim 36, wherein:
the spine tier comprises a spine tier device, before the processing of the unencapsulated packet by the second network device, the unencapsulated packet is received by the spine tier device, and the spine tier device comprises a routing table comprising a routing table entry that indicates that the destination is reachable via the second network device. 38. The method of claim 37, wherein:
the first VNI is associated with the source and the first network device, the second VNI is associated with the second network device and the destination, and the spine tier device is not associated with any VNI. 39. The method of claim 36, wherein the inner packet is encapsulated in the first encapsulated packet using a Virtual Extensible Local Area Network (VXLAN) protocol. 40. The method of claim 39, wherein the first network device comprises a first virtual tunnel end point (VTEP), and wherein the second network device comprises a second VTEP. | In general, embodiments of the invention relate to routing packets between hosts or virtual machines in different layer 2 domains. More specifically, embodiments of the invention relate to using overlay routing mechanisms in an Internet Protocol (IP) fabric to enable hosts or virtual machines in different layer 2 domains to communicate. The overlay routing mechanisms may include direct routing, indirect routing, naked routing, or a combination thereof (e.g., hybrid routing). 1.-20. (canceled) 21. A method for routing, comprising:
receiving, by a network device, a first encapsulated packet addressed to the network device, wherein the first encapsulated packet comprises an inner packet comprising a VARP address; decapsulating, by the network device, the first encapsulated packet to obtain the VARP address of the inner packet; processing, by the network device and based at least in part on the VARP address, the inner packet to obtain a rewritten inner packet comprising a destination address; generating, by the network device, a second encapsulated packet comprising the rewritten inner packet; and routing, by the network device, the second encapsulated packet towards a destination identified by the destination address. 22. The method of claim 21, wherein the inner packet is encapsulated in the first encapsulated packet using a Virtual Extensible Local Area Network (VXLAN) protocol. 23. The method of claim 22, wherein the network device is associated with a first virtual network identifier (VNI), and wherein the destination is associated with a second VNI. 24. The method of claim 21, wherein the inner packet comprises a Media Access Control (MAC) frame. 25. The method of claim 23, wherein the first VNI is associated with a first layer 2 domain, and wherein the second VNI is associated with a second layer 2 domain. 26. The method of claim 21, wherein the network device comprises a first routing table portion for an underlay network and a second routing table portion for an overlay network, and wherein the second routing table portion comprises information relating each of a plurality of IP network segments to one of a plurality of layer 2 domains. 27. The method of claim 22, wherein the network device comprises a virtual tunnel end point (VTEP) associated with the VARP address. 28. A method for routing, comprising:
receiving, from a source and by a first network device, a first encapsulated packet addressed to the first network device, wherein the first encapsulated packet comprises a first virtual network identifier (VNI) and an inner packet comprising a VARP address and a destination address associated with a destination; decapsulating, by the first network device, the first encapsulated packet to obtain the VARP address of the inner packet; processing, on the first network device and based at least in part on the VARP address, the inner packet to obtain a first rewritten inner packet comprising the destination address and a second network device address associated with a second network device; generating, by the first network device, a second encapsulated packet comprising the first rewritten inner packet and a second VNI; routing, by the first network device, the second encapsulated packet to the second network device; receiving, by the second network device, the second encapsulated packet; decapsulating, by the second network device, the second encapsulated packet to obtain the first rewritten inner packet; processing, on the second network device and based at least in part on the destination address, the first rewritten inner packet to obtain a second rewritten inner packet comprising the destination address; generating, by the second network device, a third encapsulated packet comprising a third VNI and the second rewritten inner packet; and routing, by the second network device, the third encapsulated packet towards the destination. 29. The method of claim 28, wherein:
the first VNI is associated with the source and the first network device, the second VNI is associated with the first network device and the second network device, and the third VNI is associated with the second network device and the destination. 30. The method of claim 29, wherein processing the inner packet by the first network device comprises using a routing table to determine that the destination is accessible via the second network device based on the association of the second network device and the destination with the third VNI. 31. The method of claim 30, wherein the first network device receives route information for populating the routing table using an interior gateway protocol (IGP). 32. The method of claim 29, wherein the first network device and the second network device are a portion of a plurality of network devices, and wherein the second VNI is only associated with the plurality of network devices. 33. The method of claim 28, wherein the inner packet is encapsulated in the first encapsulated packet using a Virtual Extensible Local Area Network (VXLAN) protocol. 34. The method of claim 31, wherein the first network device comprises a first virtual tunnel end point (VTEP), and wherein the second network device comprises a second VTEP. 35. The method of claim 29, wherein the inner packet comprises a Media Access Control (MAC) frame comprising an Internet Protocol (IP) packet. 36. A method for routing, comprising:
receiving, from a source and by a first network device, a first encapsulated packet addressed to the first network device, wherein the first encapsulated packet comprises a first virtual network identifier (VNI) and an inner packet comprising a destination address associated with a destination and a first VARP address; decapsulating, by the first network device, the first encapsulated packet to obtain the first VARP address of the inner packet; processing, by the first network device, the inner packet to obtain an unencapsulated packet comprising the destination address; routing the unencapsulated packet to a second network device via a spine tier; receiving, from the spine tier and by the second network device, the unencapsulated packet; processing, by the second network device, the unencapsulated packet to obtain a rewritten inner packet comprising the destination address and a second VARP address; generating, by the second network device, a second encapsulated packet comprising the rewritten inner packet and a second VNI; and routing the second encapsulated packet towards the destination. 37. The method of claim 36, wherein:
the spine tier comprises a spine tier device, before the processing of the unencapsulated packet by the second network device, the unencapsulated packet is received by the spine tier device, and the spine tier device comprises a routing table comprising a routing table entry that indicates that the destination is reachable via the second network device. 38. The method of claim 37, wherein:
the first VNI is associated with the source and the first network device, the second VNI is associated with the second network device and the destination, and the spine tier device is not associated with any VNI. 39. The method of claim 36, wherein the inner packet is encapsulated in the first encapsulated packet using a Virtual Extensible Local Area Network (VXLAN) protocol. 40. The method of claim 39, wherein the first network device comprises a first virtual tunnel end point (VTEP), and wherein the second network device comprises a second VTEP. | 2,400 |
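The method of claim 21 in the record above hops a packet between layer 2 domains by decapsulating a VXLAN packet, rewriting the inner frame, and re-encapsulating it under a different VNI. A byte-level sketch of that decap/rewrite/re-encap step follows, using the 8-byte VXLAN header layout from RFC 7348 (flags byte, reserved bits, 24-bit VNI); the `rewrite` callback stands in for the claim's VARP-based rewrite of the inner packet, whose details the claims do not spell out, and all names here are illustrative:

```python
import struct

VXLAN_FLAGS = 0x08  # RFC 7348: the "I" flag, meaning the VNI field is valid

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header: flags, 24 reserved bits, 24-bit VNI, 8 reserved bits."""
    header = struct.pack("!II", VXLAN_FLAGS << 24, vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    word0, word1 = struct.unpack("!II", packet[:8])
    assert (word0 >> 24) & VXLAN_FLAGS, "VNI-valid flag not set"
    return word1 >> 8, packet[8:]

def reencap_to_new_vni(packet: bytes, new_vni: int, rewrite) -> bytes:
    """Claim-21-style hop: decapsulate, rewrite the inner frame, re-encapsulate on another VNI."""
    _old_vni, inner = vxlan_decap(packet)
    return vxlan_encap(new_vni, rewrite(inner))

pkt = vxlan_encap(100, b"inner-mac-frame")
out = reencap_to_new_vni(pkt, 200, lambda f: f)  # identity rewrite for the demo
assert vxlan_decap(out) == (200, b"inner-mac-frame")
```

A real VTEP would carry this payload inside UDP/IP on the underlay and consult its overlay routing table (claim 26) to choose the new VNI and next hop; the sketch isolates only the header manipulation.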
8,671 | 8,671 | 14,628,200 | 2,421 | The present invention relates to a system and method for managing network delivery of media content to a client device and, more particularly, to processing a scheduler service request to determine whether to deliver to the user at least one asset by the digital video recorder (DVR), network DVR (nDVR) or network content storage systems utilizing one or more user characteristics so as to conserve bandwidth of the content delivery network. The system and method may use a device DVR, a network DVR, a network content storage system or any combination of the previous, to delay the delivery and storage of content media to conserve bandwidth across the CDN network based on one or more user characteristics in response to a “pause” or “record” or other DVR function service request. A scheduler service is used in connection with a digital video recording system and method that can operate upon a “pause” of linear and/or a “record” of nonlinear content to delay any caching and/or storing of content media at a predetermined time to conserve bandwidth across the CDN network based on one or more user characteristics. The nDVR storage system and/or network content storage system may be utilized for storage of the content where nothing is directly streamed to the user from the network. | 1.
A method for managing a transmission of media content over an access link between a device and a source in a content delivery network (CDN), the method comprising the steps of:
receiving a service request from a requesting device for delivery of media content data; processing said service request by determining one or more user characteristics of said request for delivery of said media content data; determining, using said one or more user characteristics, to delay delivery of said service request of said media content data; identifying said media content data of said service request for delivery; scheduling the delivery of said media content data at a predetermined time; wherein subsequent delivery of said requested media content data is prioritized to minimize traffic over an access link between the source and the requesting device in the CDN; delivering said media content data at said predetermined time; and storing said media content data in storage located in said requesting device. 2. The method of claim 1, wherein said processing step is a DVR function for said service request. 3. The method of claim 1, wherein said processing step is a “record” DVR function for said service request. 4. The method of claim 1, wherein said processing step is a “pause” DVR function for said service request. 5. The method of claim 1, wherein said determining step further includes using said one or more user characteristics of the time of day, previous pause history of the user to actual playback of the content, the nature of the content, and other user characteristic data of said user that indicates with reasonable certainty the user's characteristics. 6. The method of claim 1, wherein said determining of delayed delivery step further includes using data from the group consisting essentially of bandwidth resources, policies, network persistence, and/or applicable number of rules of the user account. 7. The method of claim 1, wherein said determining of delayed delivery step further includes delivering said media content data to said device of the user account at the typical viewing time of the content and/or just prior to the viewing time. 8.
The method of claim 1, wherein said determining of delayed delivery step further includes delivering said media content data to said device of the user account on-demand at any time the user decides to watch said content. 9. A system for managing a transmission of media content over an access link between a device of a user and a source of the media content in a content delivery network (CDN), the system comprising:
a scheduler configured to:
operate in connection with a digital video recorder, whereby said scheduler is configured to respond to a service request of a “pause” of linear content and/or a “record” of nonlinear content for delivery of media content data;
a resource manager configured to:
receive said service request from said scheduler for delivery of said media content data,
identify said media content data of said service request for delivery;
process said service request by determining one or more user characteristics of said request for delivery of said media content data;
determine using said one or more user characteristics to delay delivery of said service request of said media content data;
utilize said scheduler to delay caching and/or storing said media content data at a predetermined time to conserve bandwidth across the CDN network based on said one or more user characteristics; and
a content processing and communication system communicatively coupled with said resource manager and configured to:
deliver said media content data at said predetermined time; and
store said media content data in storage located in said requesting device. 10. The system of claim 9, wherein said content processing and communication system is configured to provide information in a manifest file about content media data ingested and stored in the CDN. 11. The system of claim 9, wherein said content processing and communication system communicates with said scheduler for scheduling the delivery of said media content data at said predetermined time, whereby any subsequent delivery of said requested media content data is prioritized to minimize traffic over an access link between the source and the requesting device in the CDN. | The present invention relates to a system and method for managing network delivery of media content to a client device and, more particularly, to processing a scheduler service request to determine whether to deliver to the user at least one asset by the digital video recorder (DVR), network DVR (nDVR) or network content storage systems utilizing one or more user characteristics so as to conserve bandwidth of the content delivery network. The system and method may use a device DVR, a network DVR, a network content storage system or any combination of the previous, to delay the delivery and storage of content media to conserve bandwidth across the CDN network based on one or more user characteristics in response to a “pause” or “record” or other DVR function service request. A scheduler service is used in connection with a digital video recording system and method that can operate upon a “pause” of linear and/or a “record” of nonlinear content to delay any caching and/or storing of content media at a predetermined time to conserve bandwidth across the CDN network based on one or more user characteristics. The nDVR storage system and/or network content storage system may be utilized for storage of the content where nothing is directly streamed to the user from the network.
The DVR, content storage, scheduler service, and resource manager can quickly deliver the content to the user account on demand such as, for example, to the DVR in-home with policy restrictions, e.g. for playback only from the home DVR. The scheduler service and resource manager may be configured to recognize which content is watched after the original airing, whereby the media content data may then be delivered to the local in-home DVR at the time the program is typically watched by the user and/or subscriber.1. A method for managing a transmission of media content over an access link between a device and a source in a content delivery network (CDN), the method comprising the steps of:
receiving a service request from a requesting device for delivery of a media content data, processing said service request by determining one or more user characteristics of said request for delivery of said media content data; determining using said one or more user characteristics to delay delivery of said service request of said media content data; identifying said media content data of said service request for delivery; scheduling the delivery of said media content data at a predetermined time; wherein subsequent delivery of said requested media content data is prioritized to minimize traffic over an access link between the source and the requesting device in the CDN; delivering said media content data at said predetermined time; and storing said media content data in storage located in said requesting device. 2. The method of claim 1, wherein said processing step is a DVR function for said service request. 3. The method of claim 1, wherein said processing step is a “record” DVR function for said service request. 4. The method of claim 1, wherein said processing step is a “pause” DVR function for said service request. 5. The method of claim 1, wherein said determining step further includes using said one or more user characteristics of the time of day, previous pause history of the user to actual playback of the content, the nature of the content, and other user characteristic data of said user that indicates with reasonable certainty the user's characteristics. 6. The method of claim 1, wherein said determining of delayed delivery step further includes using data from the group consisting essentially of bandwidth resources, policies, network persistence, and/or applicable number of rules of the user account. 7. The method of claim 1, wherein said determining of delayed delivery step further includes delivering said media content data to said device of the user account at the typical viewing time of the content and/or just prior to the viewing time. 8. 
The method of claim 1, wherein said determining of delayed delivery step further includes delivering said media content data to said device of the user account on-demand at any time the user decides to watch said content. 9. A system for managing a transmission of media content over an access link between a device of a user and a source of the media content in a content delivery network (CDN), the system comprising:
a scheduler configured to:
operate in connection with a digital video recorder, whereby said scheduler is configured to respond to said service request of a “pause” of linear and/or a “record” for delivery of media content data;
a resource manager configured to:
receive said service request from said scheduler for delivery of said media content data,
identify said media content data of said service request for delivery;
process said service request by determining one or more user characteristics of said request for delivery of said media content data;
determine using said one or more user characteristics to delay delivery of said service request of said media content data;
utilize said scheduler to delay caching and/or storing said content media data until a predetermined time to conserve bandwidth across the CDN network based on said one or more user characteristics; and
a content processing and communication system communicatively coupled with said resource manager and configured to:
deliver said media content data at said predetermined time; and
store said media content data in storage located in said requesting device. 10. The system of claim 9 wherein said content processing and communication system is configured to provide information in a manifest file about content media data ingested and stored in the CDN. 11. The system of claim 9 wherein said content processing and communication system communicates with said scheduler for scheduling the delivery of said media content data at said predetermined time, whereby any subsequent delivery of said requested media content data is prioritized to minimize traffic over an access link between the source and the requesting device in the CDN. | 2,400 |
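The record above claims a scheduler that defers a "pause"/"record" DVR request and delivers the content at a time chosen from user characteristics, prioritizing deliveries to minimize access-link traffic. The following is a minimal, hypothetical sketch of that flow; the class names, the typical-playback-hour heuristic, and the data layout are illustrative assumptions, not anything recited in the claims.

```python
# Hypothetical sketch of the delayed-delivery scheduling in claim 1: a DVR
# service request is not served immediately; the scheduler instead picks a
# delivery hour from a per-user characteristic (the hour the user typically
# plays content back), so the transfer can use an off-peak window of the
# access link. All names and the heuristic are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    device_id: str
    content_id: str
    dvr_function: str            # "pause" or "record"
    request_hour: int            # hour of day the request arrived

@dataclass
class Scheduler:
    # per-user characteristic: hour of day the user typically watches content
    typical_playback_hour: dict = field(default_factory=dict)
    schedule: list = field(default_factory=list)

    def handle(self, req: ServiceRequest) -> int:
        """Schedule delivery and return the chosen delivery hour."""
        if req.dvr_function not in ("pause", "record"):
            raise ValueError("not a DVR service request")
        # Delay delivery until the user's typical viewing time; fall back to
        # immediate delivery when nothing is known about this user.
        deliver_at = self.typical_playback_hour.get(req.device_id, req.request_hour)
        self.schedule.append((deliver_at, req.device_id, req.content_id))
        self.schedule.sort()     # earlier deliveries are prioritized first
        return deliver_at

sched = Scheduler(typical_playback_hour={"stb-1": 20})
hour = sched.handle(ServiceRequest("stb-1", "ep-42", "record", request_hour=14))
```

In this sketch a request arriving at 14:00 is held until 20:00, the user's assumed habitual viewing hour; a real resource manager would additionally weigh bandwidth resources and policies as in claim 6.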
8,672 | 8,672 | 14,898,856 | 2,458 | A logging device ( 110 ) and a log aggregation device are provided. The logging device is configured to collaborate with at least one other logging device ( 120, 130 ), the logging device and the at least one other logging device together forming a set of logging devices configured to communicate among each other over a communications network, the logging device being configured to collaboratively execute a process together with the at least one other logging device, a process defining related activities to be executed at a logging device of the set of logging devices, an activity of a process being initiating or dependent, a dependent activity being dependent upon at least one previous activity of the same process, the logging device comprises a log manager ( 112 ) and a log buffer ( 114 ), the log manager is configured to produce for an activity executed on the logging device an associated log entry and to write the log entry to the log buffer, said log entry comprises a data entry and a chaining value, the data entry comprises information on the activity to which the log entry is associated, the log manager is configured to compute the chaining value for a log entry associated with an activity so that: if the activity is an initiating activity, the chaining value is set to an initiating chaining value, and if the activity is a dependent activity, the chaining value is computed from all log entries associated with the activities on which the dependent activity depends. | 1. A logging device configured to collaborate with at least one other logging device, the logging device and the at least one other logging device together forming a set of logging devices configured to communicate among each other over a communications network,
the logging device comprising a log manager and a log buffer, the log manager being configured to produce for an activity executed on the logging device an associated log entry and to write the log entry to the log buffer, said log entry comprises a data entry and a chaining value, the data entry comprising information on the activity to which the log entry is associated, the activity being initiating or dependent, a dependent activity being dependent upon at least one previous activity, the log manager being configured to obtain dependency information for the activity, the dependency information indicating whether the activity is initiating or dependent, and in case the activity is dependent, log entries associated with the activities on which the dependent activity depends, the log manager being configured to compute the chaining value for a log entry associated with an activity so that:
if the activity is an initiating activity, the chaining value is set to an initiating chaining value, and
if the activity is a dependent activity, the chaining value is computed from log entries associated with the activities on which the dependent activity depends. 2. A logging device as in claim 1,
the logging device being configured to collaboratively execute a process together with the at least one other logging device, a process defining related activities to be executed at a logging device of the set of logging devices, an activity of a process being initiating or dependent, a dependent activity being dependent upon at least one previous activity of the same process. 3. A logging device as in claim 2, wherein the logging device is configured to collaboratively execute a number of processes together with the at least one other logging device, each process of the number of processes having a unique process identifier, the initiating chaining value depending on the process identifier of the process defining the activity associated with the log entry. 4. A logging device as in claim 1, wherein the log manager comprises a random number generator, and wherein producing the log entry comprises calling the random number generator and including a generated number in the log entry. 5. A logging device as in claim 1, wherein
the logging device is configured to execute an initiating activity, and/or the logging device is configured to execute a dependent first activity depending on a second activity, wherein the second activity is executed on a device of the at least one other logging device, and/or the logging device is configured to execute a dependent activity, depending on at least two previous activities executed on a logging device of the set of logging devices. 6. A log aggregation device comprising
an aggregator for collecting log entries from logging devices as in claim 1, to obtain an aggregated log, a log entry comprising a chaining value, a threading unit configured to
search in the aggregated log for one or more log entries so that a chaining value computed from the searched one or more log entries equals a target chaining value of a target log entry of the aggregated log, and
if the one or more log entries are found, labeling the target log entry as a dependent activity. 7. A log aggregation device as in claim 6, wherein labeling the target log entry as a dependent activity comprises labeling the target log entry with one or more pointers to the one or more found log entries. 8. A log aggregation device as in claim 6, wherein the threading unit is configured to determine if the chaining value is an initiating chaining value, by matching at least part of the chaining value with one or more unique process identifiers, and if so, labeling the log entry as an initiating activity. 9. A log aggregation device as in claim 7, wherein the threading unit is configured to verify that a graph formed from the log entries in the aggregated log as vertices and the pointers as edges is a directed acyclic graph. 10. A log aggregation device as in claim 7, comprising a display controller configured to
display a representation of the target log entry of the aggregated log, display a representation of the pointers to log entries in the aggregated log on which the target log entry depends, and/or display a representation of log entries in the log entry on which the target log entry depends. 11. A logging system comprising a set of logging devices as in claim 1, and a log aggregation device. 12. A logging method for a device collaborating with at least one other logging device, the logging device and the at least one other logging device together forming a set of logging devices configured to communicate among each other over a communications network, the method comprising
producing for an activity executed on the logging device an associated log entry and writing the log entry to the log buffer, said log entry comprises a data entry and a chaining value, the data entry comprises information on the activity to which the log entry is associated, the activity is initiating or dependent, a dependent activity being dependent upon at least one previous activity, obtaining dependency information for the activity, the dependency information indicating whether the activity is initiating or dependent, and in case the activity is dependent, log entries associated with the activities on which the dependent activity depends, computing the chaining value for a log entry associated with an activity so that:
if the activity is an initiating activity, setting the chaining value to an initiating chaining value, and
if the activity is a dependent activity, computing the chaining value from log entries associated with the activities on which the dependent activity depends. 13. A method for log aggregation, the method comprising
collecting log entries from log devices to obtain an aggregated log, a log entry comprising a chaining value, searching in the aggregated log for one or more log entries so that a chaining value computed from the searched one or more log entries equals a target chaining value of a target log entry of the aggregated log, and if the one or more log entries are found, labeling the target log entry as a dependent activity. 14. A computer program comprising computer program code means adapted to perform all the steps of claim 12 when the computer program is run on a computer. 15. A computer program as claimed in claim 14 embodied on a computer readable medium. | A logging device ( 110 ) and a log aggregation device are provided. The logging device is configured to collaborate with at least one other logging device ( 120, 130 ), the logging device and the at least one other logging device together forming a set of logging devices configured to communicate among each other over a communications network, the logging device being configured to collaboratively execute a process together with the at least one other logging device, a process defining related activities to be executed at a logging device of the set of logging devices, an activity of a process being initiating or dependent, a dependent activity being dependent upon at least one previous activity of the same process, the logging device comprises a log manager ( 112 ) and a log buffer ( 114 ), the log manager is configured to produce for an activity executed on the logging device an associated log entry and to write the log entry to the log buffer, said log entry comprises a data entry and a chaining value, the data entry comprises information on the activity to which the log entry is associated, the log manager is configured to compute the chaining value for a log entry associated with an activity so that: if the activity is an initiating activity, the chaining value is set to an initiating chaining value, and if 
the activity is a dependent activity, the chaining value is computed from all log entries associated with the activities on which the dependent activity depends.1. A logging device configured to collaborate with at least one other logging device, the logging device and the at least one other logging device together forming a set of logging devices configured to communicate among each other over a communications network,
the logging device comprising a log manager and a log buffer, the log manager being configured to produce for an activity executed on the logging device an associated log entry and to write the log entry to the log buffer, said log entry comprises a data entry and a chaining value, the data entry comprising information on the activity to which the log entry is associated, the activity being initiating or dependent, a dependent activity being dependent upon at least one previous activity, the log manager being configured to obtain dependency information for the activity, the dependency information indicating whether the activity is initiating or dependent, and in case the activity is dependent, log entries associated with the activities on which the dependent activity depends, the log manager being configured to compute the chaining value for a log entry associated with an activity so that:
if the activity is an initiating activity, the chaining value is set to an initiating chaining value, and
if the activity is a dependent activity, the chaining value is computed from log entries associated with the activities on which the dependent activity depends. 2. A logging device as in claim 1,
the logging device being configured to collaboratively execute a process together with the at least one other logging device, a process defining related activities to be executed at a logging device of the set of logging devices, an activity of a process being initiating or dependent, a dependent activity being dependent upon at least one previous activity of the same process. 3. A logging device as in claim 2, wherein the logging device is configured to collaboratively execute a number of processes together with the at least one other logging device, each process of the number of processes having a unique process identifier, the initiating chaining value depending on the process identifier of the process defining the activity associated with the log entry. 4. A logging device as in claim 1, wherein the log manager comprises a random number generator, and wherein producing the log entry comprises calling the random number generator and including a generated number in the log entry. 5. A logging device as in claim 1, wherein
the logging device is configured to execute an initiating activity, and/or the logging device is configured to execute a dependent first activity depending on a second activity, wherein the second activity is executed on a device of the at least one other logging device, and/or the logging device is configured to execute a dependent activity, depending on at least two previous activities executed on a logging device of the set of logging devices. 6. A log aggregation device comprising
an aggregator for collecting log entries from logging devices as in claim 1, to obtain an aggregated log, a log entry comprising a chaining value, a threading unit configured to
search in the aggregated log for one or more log entries so that a chaining value computed from the searched one or more log entries equals a target chaining value of a target log entry of the aggregated log, and
if the one or more log entries are found, labeling the target log entry as a dependent activity. 7. A log aggregation device as in claim 6, wherein labeling the target log entry as a dependent activity comprises labeling the target log entry with one or more pointers to the one or more found log entries. 8. A log aggregation device as in claim 6, wherein the threading unit is configured to determine if the chaining value is an initiating chaining value, by matching at least part of the chaining value with one or more unique process identifiers, and if so, labeling the log entry as an initiating activity. 9. A log aggregation device as in claim 7, wherein the threading unit is configured to verify that a graph formed from the log entries in the aggregated log as vertices and the pointers as edges is a directed acyclic graph. 10. A log aggregation device as in claim 7, comprising a display controller configured to
display a representation of the target log entry of the aggregated log, display a representation of the pointers to log entries in the aggregated log on which the target log entry depends, and/or display a representation of log entries in the log entry on which the target log entry depends. 11. A logging system comprising a set of logging devices as in claim 1, and a log aggregation device. 12. A logging method for a device collaborating with at least one other logging device, the logging device and the at least one other logging device together forming a set of logging devices configured to communicate among each other over a communications network, the method comprising
producing for an activity executed on the logging device an associated log entry and writing the log entry to the log buffer, said log entry comprises a data entry and a chaining value, the data entry comprises information on the activity to which the log entry is associated, the activity is initiating or dependent, a dependent activity being dependent upon at least one previous activity, obtaining dependency information for the activity, the dependency information indicating whether the activity is initiating or dependent, and in case the activity is dependent, log entries associated with the activities on which the dependent activity depends, computing the chaining value for a log entry associated with an activity so that:
if the activity is an initiating activity, setting the chaining value to an initiating chaining value, and
if the activity is a dependent activity, computing the chaining value from log entries associated with the activities on which the dependent activity depends. 13. A method for log aggregation, the method comprising
collecting log entries from log devices to obtain an aggregated log, a log entry comprising a chaining value, searching in the aggregated log for one or more log entries so that a chaining value computed from the searched one or more log entries equals a target chaining value of a target log entry of the aggregated log, and if the one or more log entries are found, labeling the target log entry as a dependent activity. 14. A computer program comprising computer program code means adapted to perform all the steps of claim 12 when the computer program is run on a computer. 15. A computer program as claimed in claim 14 embodied on a computer readable medium. | 2,400 |
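The logging record above chains log entries across devices: an initiating activity gets a chaining value derived from its process identifier, a dependent activity gets one computed from the log entries of the activities it depends on, and an aggregator later re-links entries by recomputing those values (claims 1, 6 and 13). Below is a minimal, hypothetical sketch of that idea; the entry layout, SHA-256, and the `init:` prefix are illustrative assumptions rather than anything the patent specifies.

```python
# Hypothetical sketch of chaining values for collaborative logging: an
# initiating activity's chaining value is derived from its process id, while a
# dependent activity's chaining value is a hash over the log entries of its
# parent activities. An aggregator can then re-link entries by recomputing
# the hash. Entry layout and hash choice are illustrative assumptions.
import hashlib
import json

def entry_bytes(entry: dict) -> bytes:
    # Canonical serialization so every device hashes an entry identically.
    return json.dumps(entry, sort_keys=True).encode()

def chaining_value(process_id: str, parents: list) -> str:
    if not parents:                       # initiating activity
        return hashlib.sha256(f"init:{process_id}".encode()).hexdigest()
    h = hashlib.sha256()                  # dependent activity
    for parent in parents:
        h.update(entry_bytes(parent))
    return h.hexdigest()

def log_activity(log: list, process_id: str, info: str, parents: list) -> dict:
    entry = {"data": info, "chain": chaining_value(process_id, parents)}
    log.append(entry)
    return entry

# One process spread over two devices: an initiating activity on device A,
# then a dependent activity on device B.
log_a, log_b = [], []
start = log_activity(log_a, "proc-7", "order received", parents=[])
done = log_activity(log_b, "proc-7", "order shipped", parents=[start])

# Aggregation step: the dependent entry is linked to its parent when the
# chaining value recomputed from the candidate parent entries matches.
relinked = done["chain"] == chaining_value("proc-7", [start])
```

This mirrors the aggregator's search in claim 6: it scans the aggregated log for a set of entries whose recomputed chaining value equals a target entry's chaining value, and labels the target as dependent when a match is found.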
8,673 | 8,673 | 14,921,511 | 2,413 | A host device may include a wireless interface for communications, a memory, and a processor coupled to the memory and to the wireless interface. The host device may receive, via the wireless interface, an advertisement message from a client device. The advertisement message may include an identifier associated with the client device and a request for communication of data from a cloud-based service. Responsive to the advertisement, the host may send the identifier to the cloud-based service. The host may receive from the cloud-based service, a proxy indication of available data associated with the client. Responsive to receiving the proxy indication of available data, the host may provide, via the wireless interface, a connection request including a client indication of the available data from the cloud-based service to the client. After receiving the available data from the cloud-based service, the host device may send the available data to the client. | 1. A method of using a host device to communicate with a client device, the method comprising:
at the host device comprising at least one wireless interface for communications, a memory, and a processor coupled to the memory and to the at least one wireless interface:
receiving, via the at least one wireless interface, an advertisement message from the client device, the advertisement message including an identifier associated with the client device and a request for communication of data from a cloud-based service;
responsive to the received advertisement message, sending the identifier to the cloud-based service;
receiving from the cloud-based service, a proxy indication of available data associated with the client device;
responsive to the received proxy indication of available data, providing, via the at least one wireless interface, a connection request including a client indication of the available data from the cloud-based service to the client device;
receiving the available data from the cloud-based service; and
sending the available data to the client device. 2. The method of claim 1, wherein the identifier associated with the client device is a unique device identifier. 3. The method of claim 1, wherein the connection request includes the available data. 4. The method of claim 1, further comprising, at the host device:
receiving a response to the connection request; and after receiving the response to the connection request, sending the available data to the client device. 5. The method of claim 1, wherein the client indication of the available data includes one or more properties of the available data, the method further comprising, at the host device:
receiving a response to the connection request wherein the response is sent responsive to determining that a property of the available data corresponds to a property of data desired by the client device; and after receiving the response to the connection request, sending the available data to the client device. 6. The method of claim 5, wherein one of the one or more properties of the available data indicates a priority of the available data, and wherein the connection request is sent responsive to determining that the priority exceeds a threshold. 7. The method of claim 1, wherein the proxy indication of available data is received via the receiving of the available data. 8. The method of claim 1, wherein the host device is associated with a first user and wherein the client device is associated with a second user different from the first user. 9. The method of claim 1, wherein the at least one wireless interface comprises a first wireless interface and a second wireless interface, wherein the second wireless interface comprises at least one of a Bluetooth radio and a WiFi radio for communications between the host device and the client device, and wherein the first wireless interface comprises a cellular radio for communications between the host device and the cloud-based service. 10. The method of claim 1, wherein the connection request is sent responsive to:
receiving the proxy indication of available data; and receiving a second advertisement message from the client device. 11. The method of claim 1, wherein the advertisement message further includes an indication of a requested connection type, and wherein sending the identifier to the cloud-based service comprises sending the identifier to the cloud-based service responsive to determining that the indication of the requested connection type indicates a downlink connection. 12. The method of claim 11, wherein providing the connection request comprises providing the connection request to the client device without sending the identifier to the cloud-based service responsive to determining that the indication of the requested connection type indicates an uplink connection. 13. A computer product comprising a computer readable medium storing a plurality of instructions for controlling a computer system to:
at a host device comprising at least one wireless interface for communications, a memory, and a processor coupled to the memory and to the at least one wireless interface:
receive, via the at least one wireless interface, an advertisement message from a client device, the advertisement message including an identifier associated with the client device and a request for communication of data from a cloud-based service;
responsive to the received advertisement message, send the identifier to the cloud-based service;
receive from the cloud-based service, a proxy indication of available data associated with the client device;
responsive to the received proxy indication of available data, provide, via the at least one wireless interface, a connection request including a client indication of the available data from the cloud-based service to the client device;
receive the available data from the cloud-based service; and
send the available data to the client device. 14. The computer product of claim 13, wherein the identifier associated with the client device is a unique device identifier. 15. The computer product of claim 13, wherein the connection request includes the available data. 16. The computer product of claim 13, wherein the advertisement message further includes an indication of a requested connection type, and wherein sending the identifier to the cloud-based service comprises sending the identifier to the cloud-based service responsive to determining that the indication of the requested connection type indicates a downlink connection. 17. A system comprising:
one or more processors; and one or more non-transitory computer-readable storage mediums containing instructions configured to cause the one or more processors to perform operations including: at a host device comprising at least one wireless interface for communications, a memory, and a processor coupled to the memory and to the at least one wireless interface:
receiving, via the at least one wireless interface, an advertisement message from the client device, the advertisement message including an identifier associated with the client device and a request for communication of data from a cloud-based service;
responsive to the received advertisement message, sending the identifier to the cloud-based service;
receiving from the cloud-based service, a proxy indication of available data associated with the client device;
responsive to the received proxy indication of available data, providing, via the at least one wireless interface, a connection request including a client indication of the available data from the cloud-based service to the client device;
receiving the available data from the cloud-based service; and
sending the available data to the client device. 18. The system of claim 17, wherein the at least one wireless interface comprises a first wireless interface and a second wireless interface, wherein the second wireless interface comprises at least one of a Bluetooth radio and a WiFi radio for communications between the host device and the client device, and wherein the first wireless interface comprises a cellular radio for communications between the host device and the cloud-based service. 19. The system of claim 17, wherein the connection request is sent responsive to:
receiving the proxy indication of available data; and receiving a second advertisement message from the client device. 20. The system of claim 17, wherein the client indication of the available data includes one or more properties of the available data, the operations further comprising, at the host device:
receiving a response to the connection request wherein the response is sent responsive to determining that a property of the available data corresponds to a property of data desired by the client device; and after receiving the response to the connection request, sending the available data to the client device. | A host device may include a wireless interface for communications, a memory, and a processor coupled to the memory and to the wireless interface. The host device may receive, via the wireless interface, an advertisement message from a client device. The advertisement message may include an identifier associated with the client device and a request for communication of data from a cloud-based service. Responsive to the advertisement, the host may send the identifier to the cloud-based service. The host may receive from the cloud-based service, a proxy indication of available data associated with the client. Responsive to receiving the proxy indication of available data, the host may provide, via the wireless interface, a connection request including a client indication of the available data from the cloud-based service to the client. After receiving the available data from the cloud-based service, the host device may send the available data to the client.1. A method of using a host device to communicate with a client device, the method comprising:
at the host device comprising at least one wireless interface for communications, a memory, and a processor coupled to the memory and to the at least one wireless interface:
receiving, via the at least one wireless interface, an advertisement message from the client device, the advertisement message including an identifier associated with the client device and a request for communication of data from a cloud-based service;
responsive to the received advertisement message, sending the identifier to the cloud-based service;
receiving from the cloud-based service, a proxy indication of available data associated with the client device;
responsive to the received proxy indication of available data, providing, via the at least one wireless interface, a connection request including a client indication of the available data from the cloud-based service to the client device;
receiving the available data from the cloud-based service; and
sending the available data to the client device. 2. The method of claim 1, wherein the identifier associated with the client device is a unique device identifier. 3. The method of claim 1, wherein the connection request includes the available data. 4. The method of claim 1, further comprising, at the host device:
receiving a response to the connection request; and after receiving the response to the connection request, sending the available data to the client device. 5. The method of claim 1, wherein the client indication of the available data includes one or more properties of the available data, the method further comprising, at the host device:
receiving a response to the connection request wherein the response is sent responsive to determining that a property of the available data corresponds to a property of data desired by the client device; and after receiving the response to the connection request, sending the available data to the client device. 6. The method of claim 5, wherein one of the one or more properties of the available data indicates a priority of the available data, and wherein the connection request is sent responsive to determining that the priority exceeds a threshold. 7. The method of claim 1, wherein the proxy indication of available data is received via the receiving of the available data. 8. The method of claim 1, wherein the host device is associated with a first user and wherein the client device is associated with a second user different from the first user. 9. The method of claim 1, wherein the at least one wireless interface comprises a first wireless interface and a second wireless interface, wherein the second wireless interface comprises at least one of a Bluetooth radio and a WiFi radio for communications between the host device and the client device, and wherein the first wireless interface comprises a cellular radio for communications between the host device and the cloud-based service. 10. The method of claim 1, wherein the connection request is sent responsive to:
receiving the proxy indication of available data; and receiving a second advertisement message from the client device. 11. The method of claim 1, wherein the advertisement message further includes an indication of a requested connection type, and wherein sending the identifier to the cloud-based service comprises sending the identifier to the cloud-based service responsive to determining that the indication of the requested connection type indicates a downlink connection. 12. The method of claim 11, wherein providing the connection request comprises providing the connection request to the client device without sending the identifier to the cloud-based service responsive to determining that the indication of the requested connection type indicates an uplink connection. 13. A computer product comprising a computer readable medium storing a plurality of instructions for controlling a computer system to:
at a host device comprising at least one wireless interface for communications, a memory, and a processor coupled to the memory and to the at least one wireless interface:
receive, via the at least one wireless interface, an advertisement message from a client device, the advertisement message including an identifier associated with the client device and a request for communication of data from a cloud-based service;
responsive to the received advertisement message, send the identifier to the cloud-based service;
receive from the cloud-based service, a proxy indication of available data associated with the client device;
responsive to the received proxy indication of available data, provide, via the at least one wireless interface, a connection request including a client indication of the available data from the cloud-based service to the client device;
receive the available data from the cloud-based service; and
send the available data to the client device. 14. The computer product of claim 13, wherein the identifier associated with the client device is a unique device identifier. 15. The computer product of claim 13, wherein the connection request includes the available data. 16. The computer product of claim 13, wherein the advertisement message further includes an indication of a requested connection type, and wherein sending the identifier to the cloud-based service comprises sending the identifier to the cloud-based service responsive to determining that the indication of the requested connection type indicates a downlink connection. 17. A system comprising:
one or more processors; and one or more non-transitory computer-readable storage mediums containing instructions configured to cause the one or more processors to perform operations including: at a host device comprising at least one wireless interface for communications, a memory, and a processor coupled to the memory and to the at least one wireless interface:
receiving, via the at least one wireless interface, an advertisement message from the client device, the advertisement message including an identifier associated with the client device and a request for communication of data from a cloud-based service;
responsive to the received advertisement message, sending the identifier to the cloud-based service;
receiving from the cloud-based service, a proxy indication of available data associated with the client device;
responsive to the received proxy indication of available data, providing, via the at least one wireless interface, a connection request including a client indication of the available data from the cloud-based service to the client device;
receiving the available data from the cloud-based service; and
sending the available data to the client device. 18. The system of claim 17, wherein the at least one wireless interface comprises a first wireless interface and a second wireless interface, wherein the second wireless interface comprises at least one of a Bluetooth radio and a WiFi radio for communications between the host device and the client device, and wherein the first wireless interface comprises a cellular radio for communications between the host device and the cloud-based service. 19. The system of claim 17, wherein the connection request is sent responsive to:
receiving the proxy indication of available data; and receiving a second advertisement message from the client device. 20. The system of claim 17, wherein the client indication of the available data includes one or more properties of the available data, the operations further comprising, at the host device:
receiving a response to the connection request wherein the response is sent responsive to determining that a property of the available data corresponds to a property of data desired by the client device; and after receiving the response to the connection request, sending the available data to the client device. | 2,400 |
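The claimed host flow (advertisement → identifier relayed to the cloud service → proxy indication of available data → connection request with a client indication → data delivery) can be sketched as below. This is an illustrative Python sketch, not an implementation from the patent; the class names, method names, and the dictionary-shaped messages are all hypothetical.

```python
# Hypothetical sketch of the claimed host-device proxy flow.
# All names and message shapes are illustrative assumptions.

class CloudService:
    """Stands in for the cloud-based service holding pending client data."""
    def __init__(self, pending):
        self.pending = pending  # maps client identifier -> available data

    def proxy_indication(self, identifier):
        # Return an indication of available data for this client, if any.
        if identifier in self.pending:
            return {"identifier": identifier,
                    "size": len(self.pending[identifier])}
        return None

    def fetch(self, identifier):
        return self.pending[identifier]


class HostDevice:
    """Relays available data from the cloud service to a nearby client."""
    def __init__(self, cloud):
        self.cloud = cloud

    def on_advertisement(self, advertisement):
        # Responsive to the advertisement, send the client identifier to
        # the cloud-based service and obtain a proxy indication.
        identifier = advertisement["identifier"]
        indication = self.cloud.proxy_indication(identifier)
        if indication is None:
            return None
        # Provide a connection request including a client indication of
        # the available data, then deliver the data itself.
        connection_request = {"client_indication": indication}
        return {"request": connection_request,
                "data": self.cloud.fetch(identifier)}


cloud = CloudService({"client-1": b"firmware-update"})
host = HostDevice(cloud)
result = host.on_advertisement({"identifier": "client-1",
                                "request": "downlink"})
```

In this sketch the host simply drops advertisements for clients with no pending data, mirroring the claim's "responsive to the received proxy indication" conditioning.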
8,674 | 8,674 | 15,561,564 | 2,438 | Examples disclosed herein relate to security indicator scores. The examples enable obtaining a security indicator created by a first user where the security indicator may comprise a first observable, and obtaining, from a first source entity, a first sighting of the first observable. The first sighting of the first observable may indicate that the first observable has been observed by the first source entity where the first source entity is associated with a first level of source reliability. The examples enable determining a number of sightings of the first observable. The examples enable determining a first observable score based on the number of sightings of the first observable and the first level of source reliability, and determining an indicator score associated with the security indicator based on the first observable score. The indicator score may be presented to a community of users via a user interface. | 1. A method for determining security indicator scores, the method comprising:
obtaining a security indicator created by a first user, the security indicator comprising a first observable; obtaining, from a first source entity, a first sighting of the first observable, the first sighting of the first observable indicating that the first observable has been observed by the first source entity, wherein the first source entity is associated with a first level of source reliability; determining a number of sightings of the first observable, the sightings of the first observable including the first sighting of the first observable; determining a first observable score based on the number of sightings of the first observable and the first level of source reliability; determining an indicator score associated with the security indicator based on the first observable score; and presenting, via a user interface, the indicator score to a community of users. 2. The method of claim 1, further comprising:
obtaining, from a second source entity, a second sighting of the first observable, the second sighting of the first observable indicating that the first observable has been observed by the second source entity, the second source entity is associated with a second level of source reliability; determining the number of sightings of the first observable, the sightings of the first observable including the first and second sightings of the first observable; and determining the first observable score based on the number of sightings of the first observable and the first and second levels of source reliability. 3. The method of claim 1, wherein the security indicator comprises a second observable, further comprising:
obtaining, from the first source entity, a first sighting of the second observable, the first sighting of the second observable indicating that the second observable has been observed by the first source entity; determining the number of sightings of the second observable, the sightings of the second observable including the first sighting of the second observable; and determining a second observable score based on the number of sightings of the second observable and the first level of source reliability. 4. The method of claim 3, further comprising:
determining the indicator score associated with the security indicator based on a maximum of the first and second observable scores. 5. The method of claim 1, further comprising:
obtaining a set of votes associated with the security indicator from the community of users, individual votes of the set of votes indicating whether the security indicator is malicious; and determining the indicator score associated with the security indicator based on the set of votes obtained from the community of users. 6. The method of claim 1, wherein the first source entity is a second user, further comprising:
determining the first level of source reliability based on a number of security indicators created by the second user. 7. The method of claim 1, further comprising:
determining the indicator score associated with the security indicator based on at least one of: a level of severity associated with the security indicator, a level of confidence associated with the security indicator, and a third level of source reliability associated with the first user, wherein the level of severity and the level of confidence are provided by the first user. 8. A non-transitory machine-readable storage medium comprising instructions executable by a processor of a computing device for determining security indicator scores, the machine-readable storage medium comprising:
instructions to obtain, from a first source entity, a first sighting of a first observable that is associated with a first security indicator and a second security indicator, the first sighting of the first observable indicating that the first observable has been observed by the first source entity, wherein the first source entity is associated with a first level of source reliability; instructions to determine a number of sightings of the first observable, the sightings of the first observable including the first sighting of the first observable; instructions to determine a first observable score based on the number of sightings of the first observable and the first level of source reliability; instructions to determine a first indicator score associated with the first security indicator based on the first observable score; instructions to determine a second indicator score associated with the second security indicator based on the first observable score; and instructions to present, via a user interface, the first indicator score or the second indicator score to a community of users. 9. The non-transitory machine-readable storage medium of claim 8, further comprising:
instructions to obtain, from a second source entity, a second sighting of the first observable, the second sighting of the first observable indicating that the first observable has been observed by the second source entity, wherein the second source entity is associated with a second level of source reliability; instructions to determine the number of sightings of the first observable, the sightings of the first observable including the first and second sightings of the first observable; and instructions to determine the first observable score based on the number of sightings of the first observable and a maximum of the first and second levels of source reliability. 10. The non-transitory machine-readable storage medium of claim 8, further comprising:
instructions to present, via the user interface, the first security indicator to the community of users; instructions to obtain a set of votes associated with the first security indicator from the community of users, individual votes of the set of votes indicating whether the first security indicator is malicious; and instructions to determine the first indicator score associated with the first security indicator based on the set of votes obtained from the community of users. 11. The non-transitory machine-readable storage medium of claim 8, further comprising:
instructions to determine a set of security indicators created by a particular user, the set of security indicators including at least the first security indicator; instructions to determine a number of votes associated with the set of security indicators; instructions to determine a third level of source reliability associated with the particular user based on the number of votes; and instructions to determine the first indicator score associated with the first security indicator based on the third level of source reliability. 12. The non-transitory machine-readable storage medium of claim 8, wherein the first or second source entity comprises a user of the community of users or a threat intelligence provider that provides threat intelligence feeds. 13. A system for determining security indicator scores comprising:
a processor that: presents, via a user interface, a security indicator created by a user to a community of users, the security indicator comprising an observable; obtains a set of votes associated with the security indicator from the community of users, individual votes of the set of votes indicating whether the security indicator is malicious; obtains, from a source entity, a sighting of the observable, the sighting of the observable indicating that the observable has been observed by the source entity, wherein the source entity is associated with a level of source reliability; determines an observable score based on the sighting of the observable and the level of source reliability; and determines an indicator score associated with the security indicator based on the observable score. 14. The system of claim 13, the processor that:
determines a normalized value of the set of votes using a normalization algorithm. 15. The system of claim 13, the processor that:
determines whether to block an event that matches the security indicator based on the indicator score. | Examples disclosed herein relate to security indicator scores. The examples enable obtaining a security indicator created by a first user where the security indicator may comprise a first observable, and obtaining, from a first source entity, a first sighting of the first observable. The first sighting of the first observable may indicate that the first observable has been observed by the first source entity where the first source entity is associated with a first level of source reliability. The examples enable determining a number of sightings of the first observable. The examples enable determining a first observable score based on the number of sightings of the first observable and the first level of source reliability, and determining an indicator score associated with the security indicator based on the first observable score. The indicator score may be presented to a community of users via a user interface.1. A method for determining security indicator scores, the method comprising:
obtaining a security indicator created by a first user, the security indicator comprising a first observable; obtaining, from a first source entity, a first sighting of the first observable, the first sighting of the first observable indicating that the first observable has been observed by the first source entity, wherein the first source entity is associated with a first level of source reliability; determining a number of sightings of the first observable, the sightings of the first observable including the first sighting of the first observable; determining a first observable score based on the number of sightings of the first observable and the first level of source reliability; determining an indicator score associated with the security indicator based on the first observable score; and presenting, via a user interface, the indicator score to a community of users. 2. The method of claim 1, further comprising:
obtaining, from a second source entity, a second sighting of the first observable, the second sighting of the first observable indicating that the first observable has been observed by the second source entity, the second source entity is associated with a second level of source reliability; determining the number of sightings of the first observable, the sightings of the first observable including the first and second sightings of the first observable; and determining the first observable score based on the number of sightings of the first observable and the first and second levels of source reliability. 3. The method of claim 1, wherein the security indicator comprises a second observable, further comprising:
obtaining, from the first source entity, a first sighting of the second observable, the first sighting of the second observable indicating that the second observable has been observed by the first source entity; determining the number of sightings of the second observable, the sightings of the second observable including the first sighting of the second observable; and determining a second observable score based on the number of sightings of the second observable and the first level of source reliability. 4. The method of claim 3, further comprising:
determining the indicator score associated with the security indicator based on a maximum of the first and second observable scores. 5. The method of claim 1, further comprising:
obtaining a set of votes associated with the security indicator from the community of users, individual votes of the set of votes indicating whether the security indicator is malicious; and determining the indicator score associated with the security indicator based on the set of votes obtained from the community of users. 6. The method of claim 1, wherein the first source entity is a second user, further comprising:
determining the first level of source reliability based on a number of security indicators created by the second user. 7. The method of claim 1, further comprising:
determining the indicator score associated with the security indicator based on at least one of: a level of severity associated with the security indicator, a level of confidence associated with the security indicator, and a third level of source reliability associated with the first user, wherein the level of severity and the level of confidence are provided by the first user. 8. A non-transitory machine-readable storage medium comprising instructions executable by a processor of a computing device for determining security indicator scores, the machine-readable storage medium comprising:
instructions to obtain, from a first source entity, a first sighting of a first observable that is associated with a first security indicator and a second security indicator, the first sighting of the first observable indicating that the first observable has been observed by the first source entity, wherein the first source entity is associated with a first level of source reliability; instructions to determine a number of sightings of the first observable, the sightings of the first observable including the first sighting of the first observable; instructions to determine a first observable score based on the number of sightings of the first observable and the first level of source reliability; instructions to determine a first indicator score associated with the first security indicator based on the first observable score; instructions to determine a second indicator score associated with the second security indicator based on the first observable score; and instructions to present, via a user interface, the first indicator score or the second indicator score to a community of users. 9. The non-transitory machine-readable storage medium of claim 8, further comprising:
instructions to obtain, from a second source entity, a second sighting of the first observable, the second sighting of the first observable indicating that the first observable has been observed by the second source entity, wherein the second source entity is associated with a second level of source reliability; instructions to determine the number of sightings of the first observable, the sightings of the first observable including the first and second sightings of the first observable; and instructions to determine the first observable score based on the number of sightings of the first observable and a maximum of the first and second levels of source reliability. 10. The non-transitory machine-readable storage medium of claim 8, further comprising:
instructions to present, via the user interface, the first security indicator to the community of users; instructions to obtain a set of votes associated with the first security indicator from the community of users, individual votes of the set of votes indicating whether the first security indicator is malicious; and instructions to determine the first indicator score associated with the first security indicator based on the set of votes obtained from the community of users. 11. The non-transitory machine-readable storage medium of claim 8, further comprising:
instructions to determine a set of security indicators created by a particular user, the set of security indicators including at least the first security indicator; instructions to determine a number of votes associated with the set of security indicators; instructions to determine a third level of source reliability associated with the particular user based on the number of votes; and instructions to determine the first indicator score associated with the first security indicator based on the third level of source reliability. 12. The non-transitory machine-readable storage medium of claim 8, wherein the first or second source entity comprises a user of the community of users or a threat intelligence provider that provides threat intelligence feeds. 13. A system for determining security indicator scores comprising:
a processor that: presents, via a user interface, a security indicator created by a user to a community of users, the security indicator comprising an observable; obtains a set of votes associated with the security indicator from the community of users, individual votes of the set of votes indicating whether the security indicator is malicious; obtains, from a source entity, a sighting of the observable, the sighting of the observable indicating that the observable has been observed by the source entity, wherein the source entity is associated with a level of source reliability; determines an observable score based on the sighting of the observable and the level of source reliability; and determines an indicator score associated with the security indicator based on the observable score. 14. The system of claim 13, the processor that:
determines a normalized value of the set of votes using a normalization algorithm. 15. The system of claim 13, the processor that:
determines whether to block an event that matches the security indicator based on the indicator score. | 2,400
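One plausible reading of claims 1, 4, and 9 — an observable score derived from the sighting count and source reliability, with the indicator score taken as the maximum over the indicator's observables — can be sketched as follows. The saturating scoring formula is an assumption for illustration; the patent does not prescribe a particular formula.

```python
# Hypothetical scoring sketch for the claimed method. The exact formula
# (sighting count saturating toward the maximum source reliability) is
# an assumption and is not specified in the patent text.

def observable_score(sightings):
    """Score an observable from its sightings.

    `sightings` is a list of (source_entity, reliability) pairs. The
    score grows with the number of sightings and is capped by the
    maximum source reliability seen (cf. claim 9's "maximum of the
    first and second levels of source reliability").
    """
    if not sightings:
        return 0.0
    count = len(sightings)
    max_reliability = max(rel for _, rel in sightings)
    # Saturating combination: more sightings push the score toward
    # the reliability ceiling without ever exceeding it.
    return max_reliability * (1 - 1 / (1 + count))


def indicator_score(observable_scores):
    # Claim 4: indicator score based on the maximum observable score.
    return max(observable_scores, default=0.0)


s1 = observable_score([("feed-a", 0.9), ("analyst-b", 0.6)])
s2 = observable_score([("feed-a", 0.4)])
score = indicator_score([s1, s2])
```

With these inputs the first observable (two sightings, top reliability 0.9) scores 0.6, the second 0.2, so the indicator score is 0.6; a community-vote term (claims 5 and 10) could be blended into `indicator_score` in the same spirit.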
8,675 | 8,675 | 15,448,385 | 2,424 | Systems, devices, methods, and program products are provided for operating a medical system that is operable within at least two different medical device regulatory classes. A device control module receives an indicator for performing at least one procedure with a medical device. Based on the indicator, it selects a binary image for booting the control module from among multiple images including a controlled-type and a non-controlled type, which operate in different regulatory classes. A controlled-type image, which may be FDA PMA or other regulatory classes, is verified to be unaltered, and is then used to operate as a regulated medical device. The indicator can be automatic or manual. The indicator may result from connection of a specific medical device or peripheral device, or may be user input. | 1. A method of operating a medical system that is operable within at least two different medical device regulatory classes, the at least two different regulatory classes including a first regulatory class and a second regulatory class, the method comprising:
receiving an indicator for performing at least one medical procedure that utilizes a medical device; based on the indicator, selecting one of multiple binary images for booting a medical device control module, the multiple binary images including at least a controlled type binary image for operating as a first regulatory class medical device and a non-controlled type binary image for operating as a second regulatory class medical device; and if the selected binary image is the controlled type binary image, booting the medical device control module from the controlled type binary image and thereby operating the medical device control module as the first regulatory class medical device for performing the at least one medical procedure. 2. The method of claim 1, wherein the first regulatory class medical device is of a class that requires FDA premarket approval and a second regulatory class medical device is of a class that does not require FDA premarket approval. 3. The method of claim 1, wherein the first regulatory class medical device is an FDA Class III medical device and the second regulatory class medical device is one of an FDA Class I and Class II device. 4. The method of claim 1, in which the medical device is a front end module for coupling an imaging scope to the medical device control module. 5. The method of claim 1, in which receiving the indicator further comprises recognizing that the medical device has been coupled to the medical device control module and obtaining a device identifier for the medical device. 6. The method of claim 1, further comprising
recognizing that a peripheral device has been coupled to the medical device control module; obtaining a device identifier for the peripheral device; wherein selecting one of multiple binary images is further based on the peripheral device identifier. 7. The method of claim 6, in which the peripheral device is a display module. 8. The method of claim 1, in which the medical procedure is an endoscopic fluorescence imaging procedure. 9. The method of claim 1, in which the medical device is a fluorescence imaging scope or a combination of a fluorescence imaging scope and a front end module for coupling the fluorescence imaging scope to the medical device control module. 10. The method of claim 1, further comprising installing an update for a selected one of the multiple binary images being the non-controlled type. 11. The method of claim 1, further comprising if the selected binary image is a non-controlled type binary image, booting the medical device control module from the non-controlled binary image and thereby operating the medical device control module as the second regulatory class medical device for performing the at least one medical procedure. 12. The method of claim 1, in which the multiple binary images include program code for a microprocessor and FPGA configuration code for configuring at least one FPGA in the medical device control module. 13. A medical device control module, either integrated with a medical device or configured to connect to a medical device, that is operable within at least two different medical device regulatory classes, the at least two different regulatory classes including a first regulatory class and a second regulatory class, the module comprising:
at least one digital processor and associated RAM memory; tangible non-transitory computer readable media containing: (a) at least one binary image of a controlled type for operating as a device within the first regulatory class; (b) at least one binary image of a non-controlled type for operating as a device within the second regulatory class; (c) boot manager program code executable by the digital processor to
receive an indicator for performing at least one medical procedure that utilizes a medical device;
based on the indicator, select one of the binary images for booting the medical device control module;
determine if the selected binary image is a controlled type image, and if so, perform a verification of the image and then boot the medical device control module from the controlled type binary image; and
the at least one digital processor operable to, after booting from the controlled type binary image, execute program code from the controlled type binary image for operating the medical device control module, and controlling the medical device, as a first regulatory class medical device for performing the at least one medical procedure. 14. The medical device control module of claim 13, in which the boot manager program code further includes program code executable by the at least one digital processor for (i) recognizing that a medical device has been coupled to a medical device control module; (ii) obtaining a device identifier for the medical device; and (iii) wherein selecting one of multiple binary images is further based on the peripheral device identifier. 15. The medical device control module of claim 13, in which the medical device is a front end module for coupling an imaging scope to the medical device control module. 16. The medical device control module of claim 13, in which the boot manager program code further comprises code executable by the at least one digital processor for recognizing that a peripheral device has been coupled to the medical device control module and obtaining a device identifier for the peripheral device, wherein selecting one of multiple binary images is further based on the peripheral device identifier. 17. The medical device control module of claim 16, in which the peripheral device is a display module. 18. The medical device control module of claim 13, wherein the first regulatory class medical device is of a class that requires FDA premarket approval and a second regulatory class medical device is of a class that does not require FDA premarket approval. 19. The medical device control module of claim 18, in which the medical device is configured to conduct an endoscopic fluorescence imaging procedure. 20.
The medical device control module of claim 13, in which the medical device is a fluorescence imaging scope or a combination of a fluorescence imaging scope and a front end module for coupling the fluorescence imaging scope to the medical device control module. 21. The medical device control module of claim 13, in which the multiple binary images include program code for a microprocessor and FPGA configuration code for configuring at least one FPGA in the medical device control module. 22. The medical device control module of claim 13, in which the program code further comprises update program code executable by the at least one digital processor to update a selected non-controlled binary image, and subsequently to recognize that a different medical device has been coupled to the medical device control module, obtain a device identifier for the different medical device, and based on the device identifier, select the updated binary image and then boot the medical device control module from the updated binary image. 23. The medical device control module of claim 13, in which the program code further comprises controlled-update program code executable by the at least one digital processor to verify a replacement controlled binary image, remove a selected controlled binary image, and replace it with the replacement controlled binary image, and subsequently to recognize that a different medical device has been coupled to the medical device control module, obtain a device identifier for the different medical device, and based on the device identifier, select the replacement controlled binary image and then boot the medical device control module from the replacement controlled binary image. 24. A method of operating a medical system that is operable within at least two different medical device regulatory classes, the at least two different regulatory classes including a first regulatory class and a second regulatory class, the method comprising:
receiving an indicator for performing at least one medical procedure that utilizes a medical device; based on the indicator, selecting one of multiple binary images for operating a medical device control module, the multiple binary images including at least a controlled type binary image for a first regulatory class medical device and a non-controlled type binary image for a second regulatory class medical device; and if the selected binary image is the controlled type binary image, exclusively operating the medical device control module via the controlled type binary image and thereby providing the medical device control module as a part of the first regulatory class medical device for performing the at least one medical procedure. | Systems, devices, methods, and program products are provided for operating a medical system that is operable within at least two different medical device regulatory classes. A device control module receives an indicator for performing at least one procedure with a medical device. Based on the indicator, it selects a binary image for booting the control module from among multiple images including a controlled type and a non-controlled type, which operate in different regulatory classes. A controlled-type image, which may fall under FDA premarket approval (PMA) or another regulatory class, is verified to be unaltered and is then used to operate as a regulated medical device. The indicator can be automatic or manual: it may result from connection of a specific medical device or peripheral device, or may be user input. 1. A method of operating a medical system that is operable within at least two different medical device regulatory classes, the at least two different regulatory classes including a first regulatory class and a second regulatory class, the method comprising:
receiving an indicator for performing at least one medical procedure that utilizes a medical device; based on the indicator, selecting one of multiple binary images for booting a medical device control module, the multiple binary images including at least a controlled type binary image for operating as a first regulatory class medical device and a non-controlled type binary image for operating as a second regulatory class medical device; and if the selected binary image is the controlled type binary image, booting the medical device control module from the controlled type binary image and thereby operating the medical device control module as the first regulatory class medical device for performing the at least one medical procedure. 2. The method of claim 1, wherein the first regulatory class medical device is of a class that requires FDA premarket approval and a second regulatory class medical device is of a class that does not require FDA premarket approval. 3. The method of claim 1, wherein the first regulatory class medical device is an FDA Class III medical device and the second regulatory class medical device is one of an FDA Class I and Class II device. 4. The method of claim 1, in which the medical device is a front end module for coupling an imaging scope to the medical device control module. 5. The method of claim 1, in which receiving the indicator further comprises recognizing that the medical device has been coupled to the medical device control module and obtaining a device identifier for the medical device. 6. The method of claim 1, further comprising:
recognizing that a peripheral device has been coupled to the medical device control module; obtaining a device identifier for the peripheral device; wherein selecting one of multiple binary images is further based on the peripheral device identifier. 7. The method of claim 6, in which the peripheral device is a display module. 8. The method of claim 1, in which the medical procedure is an endoscopic fluorescence imaging procedure. 9. The method of claim 1, in which the medical device is a fluorescence imaging scope or a combination of a fluorescence imaging scope and a front end module for coupling the fluorescence imaging scope to the medical device control module. 10. The method of claim 1, further comprising installing an update for a selected one of the multiple binary images being the non-controlled type. 11. The method of claim 1, further comprising if the selected binary image is a non-controlled type binary image, booting the medical device control module from the non-controlled binary image and thereby operating the medical device control module as the second regulatory class medical device for performing the at least one medical procedure. 12. The method of claim 1, in which the multiple binary images include program code for a microprocessor and FPGA configuration code for configuring at least one FPGA in the medical device control module. 13. A medical device control module, either integrated with a medical device or configured to connect to a medical device, that is operable within at least two different medical device regulatory classes, the at least two different regulatory classes including a first regulatory class and a second regulatory class, the module comprising:
at least one digital processor and associated RAM memory; tangible non-transitory computer readable media containing: (a) at least one binary image of a controlled type for operating as a device within the first regulatory class; (b) at least one binary image of a non-controlled type for operating as a device within the second regulatory class; (c) boot manager program code executable by the digital processor to
receive an indicator for performing at least one medical procedure that utilizes a medical device;
based on the indicator, select one of the binary images for booting the medical device control module;
determine if the selected binary image is a controlled type image, and if so, perform a verification of the image and then boot the medical device control module from the controlled type binary image; and
the at least one digital processor operable to, after booting from the controlled type binary image, execute program code from the controlled type binary image for operating the medical device control module, and controlling the medical device, as a first regulatory class medical device for performing the at least one medical procedure. 14. The medical device control module of claim 13, in which the boot manager program code further includes program code executable by the at least one digital processor for (i) recognizing that a medical device has been coupled to the medical device control module; (ii) obtaining a device identifier for the medical device; and (iii) wherein selecting one of multiple binary images is further based on the device identifier. 15. The medical device control module of claim 13, in which the medical device is a front end module for coupling an imaging scope to the medical device control module. 16. The medical device control module of claim 13, in which the boot manager program code further comprises code executable by the at least one digital processor for recognizing that a peripheral device has been coupled to the medical device control module and obtaining a device identifier for the peripheral device, wherein selecting one of multiple binary images is further based on the peripheral device identifier. 17. The medical device control module of claim 16, in which the peripheral device is a display module. 18. The medical device control module of claim 13, wherein the first regulatory class medical device is of a class that requires FDA premarket approval and a second regulatory class medical device is of a class that does not require FDA premarket approval. 19. The medical device control module of claim 18, in which the medical device is configured to conduct an endoscopic fluorescence imaging procedure. 20. 
The medical device control module of claim 13, in which the medical device is a fluorescence imaging scope or a combination of a fluorescence imaging scope and a front end module for coupling the fluorescence imaging scope to the medical device control module. 21. The medical device control module of claim 13, in which the multiple binary images include program code for a microprocessor and FPGA configuration code for configuring at least one FPGA in the medical device control module. 22. The medical device control module of claim 13, in which the program code further comprises update program code executable by the at least one digital processor to update a selected non-controlled binary image, and subsequently to recognize that a different medical device has been coupled to the medical device control module, obtain a device identifier for the different medical device, and based on the device identifier, select the updated binary image and then boot the medical device control module from the updated binary image. 23. The medical device control module of claim 13, in which the program code further comprises controlled-update program code executable by the at least one digital processor to verify a replacement controlled binary image, remove a selected controlled binary image, and replace it with the replacement controlled binary image, and subsequently to recognize that a different medical device has been coupled to the medical device control module, obtain a device identifier for the different medical device, and based on the device identifier, select the replacement controlled binary image and then boot the medical device control module from the replacement controlled binary image. 24. A method of operating a medical system that is operable within at least two different medical device regulatory classes, the at least two different regulatory classes including a first regulatory class and a second regulatory class, the method comprising:
receiving an indicator for performing at least one medical procedure that utilizes a medical device; based on the indicator, selecting one of multiple binary images for operating a medical device control module, the multiple binary images including at least a controlled type binary image for a first regulatory class medical device and a non-controlled type binary image for a second regulatory class medical device; and if the selected binary image is the controlled type binary image, exclusively operating the medical device control module via the controlled type binary image and thereby providing the medical device control module as a part of the first regulatory class medical device for performing the at least one medical procedure. | 2,400 |
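The boot-manager flow recited in the claims above (select a binary image from an indicator such as a coupled-device identifier, verify a controlled-type image, then boot and operate in the corresponding regulatory class) can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the image table, device identifiers, file names, and the choice of SHA-256 as the integrity digest are all assumptions.

```python
import hashlib

# Hypothetical image table: each coupled device maps to a binary image record.
# Controlled-type images carry a provisioned digest that must match at boot.
IMAGE_TABLE = {
    "fluorescence_scope": {"path": "controlled.bin", "controlled": True,
                           "sha256": None},  # digest set at provisioning time
    "basic_scope":        {"path": "open.bin", "controlled": False,
                           "sha256": None},
}

def select_image(device_id):
    """Pick a binary image record based on the indicator (device identifier)."""
    return IMAGE_TABLE[device_id]

def verify(blob, expected_sha256):
    """Return True when the image digest matches the provisioned value."""
    return hashlib.sha256(blob).hexdigest() == expected_sha256

def boot(device_id, read_image):
    """Select, optionally verify, and 'boot' an image; report the class."""
    rec = select_image(device_id)
    blob = read_image(rec["path"])
    if rec["controlled"] and not verify(blob, rec["sha256"]):
        raise RuntimeError("controlled image failed verification; refusing to boot")
    regulatory_class = "first_regulatory_class" if rec["controlled"] \
        else "second_regulatory_class"
    return regulatory_class, blob
```

The design point the claims turn on is that only a verified controlled-type image may put the module into the first (e.g., PMA-requiring) regulatory class; a tampered controlled image must refuse to boot rather than degrade silently.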
8,676 | 8,676 | 15,556,596 | 2,487 | The present invention concerns a method for the 3D reconstruction of a scene comprising the matching (610) of a first event from among the first asynchronous successive events of a first sensor with a second event from among the second asynchronous successive events of a second sensor depending on a minimisation (609) of a cost function (E). The cost function comprises at least one component from: —a luminance component (EI) that depends at least on a first luminance signal (Iu) convoluted with a convolution core (gσ(t)), the luminance of said pixel depending on a difference between maximums (te−,u, te+,u) of said first signal; and a second luminance signal (Iv) convoluted with said convolution core, the luminance of said pixel depending on a difference between maximums (te−,v, te+,v) of said second signal; —a movement component (EM) depending on at least time values relative to the occurrence of events located at a distance from a pixel of the first sensor and time values relative to the occurrence of events located at a distance from a pixel of the second sensor. | 1. Method of 3D reconstruction of a scene, the method comprising:
reception (601) of a first piece of asynchronous information from a first sensor (501) that has a first pixel matrix positioned opposite the scene, the first piece of asynchronous information comprising, for each pixel (p) of the first matrix, the first successive events coming from said pixel; reception (602) of a second piece of asynchronous information from a second sensor (502) that has a second pixel matrix positioned opposite the scene, the second piece of asynchronous information comprising, for each pixel (q) of the second matrix, the second successive events coming from said pixel, the second sensor being separate from the first sensor; matching (610) of a first event from amongst the first successive events with a second event from amongst the second successive events depending on a minimisation (609) of a cost function (E); wherein the cost function comprises at least one component from amongst: a luminance component (EI), said luminance component depending on at least:
a first luminance signal (Iu) coming from a pixel of the first sensor convoluted with a convolution core (gσ(t)), the luminance of said pixel depending on a difference between the maximums (te−,u,te+,u) of said first signal; and
a second luminance signal (Iv) coming from a pixel of the second sensor convoluted with said convolution core, the luminance of said pixel depending on a difference between the maximums (te−,v,te+,v) of said second signal;
a movement component (EM), said movement component depending on at least:
time values relating to the occurrence of events spatially located at a predetermined distance from a pixel of the first sensor;
time values relating to the occurrence of events spatially located at a predetermined distance from a pixel of the second sensor. 2. Method according to claim 1, wherein the cost function (E) additionally comprises:
a time component (ET), said time component depending on a difference between:
a time value relating to an event of the first sensor;
a time value relating to an event of the second sensor. 3. Method according to claim 1, wherein the cost function (E) additionally comprises:
a geometric component (EG), said geometric component depending on:
a spatial distance from a pixel of the second sensor to an epipolar straight line or to an epipolar intersection defined by at least one pixel of the first sensor. 4. Method according to claim 1, wherein, the luminance signal (Iu, Iv) of the pixel of the first sensor and of the pixel of the second sensor comprising a maximum, coding an occurrence time of a luminance variation, the convolution core is a Gaussian with a predetermined variance. 5. Method according to claim 1, wherein said luminance component (EI) additionally depends on:
luminance signals of pixels of the first sensor, spatially located at a predetermined distance from the first pixel of the first sensor, convoluted with the convolution core; and luminance signals of pixels of the second sensor, spatially located at a predetermined distance from the second pixel of the second sensor, convoluted with the convolution core. 6. Method according to claim 1, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. 7. Method according to claim 1, wherein said movement component (EM) depends on, for a given time:
for each current time value relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the first sensor, a value of a function decreasing with the distance from said given time to said current time value; for each current time value relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the second sensor, a value of a function decreasing with the distance from said given time to said current time value. 8. Method according to claim 1, wherein said movement component (EM) depends on:
a first convolution of a decreasing function with a signal comprising a Dirac delta for each time value relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the first sensor; a second convolution of a decreasing function with a signal comprising a Dirac delta for each time value relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the second sensor. 9. Device for the 3D reconstruction of a scene, the device comprising:
an interface (703) for the reception (601) of a first piece of asynchronous information from a first sensor (501) that has a first pixel matrix positioned opposite the scene, the first piece of asynchronous information comprising, for each pixel (p) of the first matrix, the first successive events coming from said pixel; an interface (703) for the reception (602) of a second piece of asynchronous information from a second sensor (502) that has a second pixel matrix positioned opposite the scene, the second piece of asynchronous information comprising, for each pixel (q) of the second matrix, the second successive events coming from said pixel, the second sensor being separate from the first sensor; a processor (704) suitable for the matching (610) of a first event from amongst the first successive events with a second event from amongst the second successive events depending on a minimisation (609) of a cost function (E); wherein the cost function comprises at least one component from amongst: a luminance component (EI), said luminance component depending on at least:
a first luminance signal (Iu) coming from a pixel of the first sensor, convoluted with a convolution core (gσ(t)), the luminance of said pixel depending on a difference between the maximums (te−,u,te+,u) of said first signal; and
a second luminance signal (Iv) coming from a pixel of the second sensor, convoluted with said convolution core, the luminance of said pixel depending on a difference between the maximums (te−,v,te+,v) of said second signal;
a movement component (EM), said movement component depending on at least:
time values relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the first sensor;
time values relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the second sensor. 10. Computer program product comprising instructions for the implementation of the method according to claim 1, when this program is executed by a processor. 11. Method according to claim 2, wherein the cost function (E) additionally comprises:
a geometric component (EG), said geometric component depending on: a spatial distance from a pixel of the second sensor to an epipolar straight line or to an epipolar intersection defined by at least one pixel of the first sensor. 12. Method according to claim 2, wherein, the luminance signal (Iu, Iv) of the pixel of the first sensor and of the pixel of the second sensor comprising a maximum, coding an occurrence time of a luminance variation, the convolution core is a Gaussian with a predetermined variance. 13. Method according to claim 3, wherein, the luminance signal (Iu, Iv) of the pixel of the first sensor and of the pixel of the second sensor comprising a maximum, coding an occurrence time of a luminance variation, the convolution core is a Gaussian with a predetermined variance. 14. Method according to claim 2, wherein said luminance component (EI) additionally depends on:
luminance signals of pixels of the first sensor, spatially located at a predetermined distance from the first pixel of the first sensor, convoluted with the convolution core; and luminance signals of pixels of the second sensor, spatially located at a predetermined distance from the second pixel of the second sensor, convoluted with the convolution core. 15. Method according to claim 3, wherein said luminance component (EI) additionally depends on:
luminance signals of pixels of the first sensor, spatially located at a predetermined distance from the first pixel of the first sensor, convoluted with the convolution core; and luminance signals of pixels of the second sensor, spatially located at a predetermined distance from the second pixel of the second sensor, convoluted with the convolution core. 16. Method according to claim 4, wherein said luminance component (EI) additionally depends on:
luminance signals of pixels of the first sensor, spatially located at a predetermined distance from the first pixel of the first sensor, convoluted with the convolution core; and luminance signals of pixels of the second sensor, spatially located at a predetermined distance from the second pixel of the second sensor, convoluted with the convolution core. 17. Method according to claim 2, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. 18. Method according to claim 3, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. 19. Method according to claim 4, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. 20. Method according to claim 5, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. | The present invention concerns a method for the 3D reconstruction of a scene comprising the matching (610) of a first event from among the first asynchronous successive events of a first sensor with a second event from among the second asynchronous successive events of a second sensor depending on a minimisation (609) of a cost function (E). The cost function comprises at least one component from: —a luminance component (EI) that depends at least on a first luminance signal (Iu) convoluted with a convolution core (gσ(t)), the luminance of said pixel depending on a difference between maximums (te−,u, te+,u) of said first signal; and a second luminance signal (Iv) convoluted with said convolution core, the luminance of said pixel depending on a difference between maximums (te−,v, te+,v) of said second signal; —a movement component (EM) depending on at least time values relative to the occurrence of events located at a distance from a pixel of the first sensor and time values relative to the occurrence of events located at a distance from a pixel of the second sensor. 1. Method of 3D reconstruction of a scene, the method comprising:
reception (601) of a first piece of asynchronous information from a first sensor (501) that has a first pixel matrix positioned opposite the scene, the first piece of asynchronous information comprising, for each pixel (p) of the first matrix, the first successive events coming from said pixel; reception (602) of a second piece of asynchronous information from a second sensor (502) that has a second pixel matrix positioned opposite the scene, the second piece of asynchronous information comprising, for each pixel (q) of the second matrix, the second successive events coming from said pixel, the second sensor being separate from the first sensor; matching (610) of a first event from amongst the first successive events with a second event from amongst the second successive events depending on a minimisation (609) of a cost function (E); wherein the cost function comprises at least one component from amongst: a luminance component (EI), said luminance component depending on at least:
a first luminance signal (Iu) coming from a pixel of the first sensor convoluted with a convolution core (gσ(t)), the luminance of said pixel depending on a difference between the maximums (te−,u,te+,u) of said first signal; and
a second luminance signal (Iv) coming from a pixel of the second sensor convoluted with said convolution core, the luminance of said pixel depending on a difference between the maximums (te−,v,te+,v) of said second signal;
a movement component (EM), said movement component depending on at least:
time values relating to the occurrence of events spatially located at a predetermined distance from a pixel of the first sensor;
time values relating to the occurrence of events spatially located at a predetermined distance from a pixel of the second sensor. 2. Method according to claim 1, wherein the cost function (E) additionally comprises:
a time component (ET), said time component depending on a difference between:
a time value relating to an event of the first sensor;
a time value relating to an event of the second sensor. 3. Method according to claim 1, wherein the cost function (E) additionally comprises:
a geometric component (EG), said geometric component depending on:
a spatial distance from a pixel of the second sensor to an epipolar straight line or to an epipolar intersection defined by at least one pixel of the first sensor. 4. Method according to claim 1, wherein, the luminance signal (Iu, Iv) of the pixel of the first sensor and of the pixel of the second sensor comprising a maximum, coding an occurrence time of a luminance variation, the convolution core is a Gaussian with a predetermined variance. 5. Method according to claim 1, wherein said luminance component (EI) additionally depends on:
luminance signals of pixels of the first sensor, spatially located at a predetermined distance from the first pixel of the first sensor, convoluted with the convolution core; and luminance signals of pixels of the second sensor, spatially located at a predetermined distance from the second pixel of the second sensor, convoluted with the convolution core. 6. Method according to claim 1, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. 7. Method according to claim 1, wherein said movement component (EM) depends on, for a given time:
for each current time value relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the first sensor, a value of a function decreasing with the distance from said given time to said current time value; for each current time value relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the second sensor, a value of a function decreasing with the distance from said given time to said current time value. 8. Method according to claim 1, wherein said movement component (EM) depends on:
a first convolution of a decreasing function with a signal comprising a Dirac delta for each time value relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the first sensor; a second convolution of a decreasing function with a signal comprising a Dirac delta for each time value relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the second sensor. 9. Device for the 3D reconstruction of a scene, the device comprising:
an interface (703) for the reception (601) of a first piece of asynchronous information from a first sensor (501) that has a first pixel matrix positioned opposite the scene, the first piece of asynchronous information comprising, for each pixel (p) of the first matrix, the first successive events coming from said pixel; an interface (703) for the reception (602) of a second piece of asynchronous information from a second sensor (502) that has a second pixel matrix positioned opposite the scene, the second piece of asynchronous information comprising, for each pixel (q) of the second matrix, the second successive events coming from said pixel, the second sensor being separate from the first sensor; a processor (704) suitable for the matching (610) of a first event from amongst the first successive events with a second event from amongst the second successive events depending on a minimisation (609) of a cost function (E); wherein the cost function comprises at least one component from amongst: a luminance component (EI), said luminance component depending on at least:
a first luminance signal (Iu) coming from a pixel of the first sensor, convoluted with a convolution core (gσ(t)), the luminance of said pixel depending on a difference between the maximums (te−,u,te+,u) of said first signal; and
a second luminance signal (Iv) coming from a pixel of the second sensor, convoluted with said convolution core, the luminance of said pixel depending on a difference between the maximums (te−,v,te+,v) of said second signal;
a movement component (EM), said movement component depending on at least:
time values relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the first sensor;
time values relating to the occurrence of events, spatially located at a predetermined distance from a pixel of the second sensor. 10. Computer program product comprising instructions for the implementation of the method according to claim 1, when this program is executed by a processor. 11. Method according to claim 2, wherein the cost function (E) additionally comprises:
a geometric component (EG), said geometric component depending on: a spatial distance from a pixel of the second sensor to an epipolar line or to an epipolar intersection defined by at least one pixel of the first sensor. 12. Method according to claim 2, wherein, the luminance signal (Iu, Iv) of the pixel of the first sensor and of the pixel of the second sensor comprising a maximum, coding an occurrence time of a luminance variation, the convolution core is a Gaussian with a predetermined variance. 13. Method according to claim 3, wherein, the luminance signal (Iu, Iv) of the pixel of the first sensor and of the pixel of the second sensor comprising a maximum, coding an occurrence time of a luminance variation, the convolution core is a Gaussian with a predetermined variance. 14. Method according to claim 2, wherein said luminance component (EI) additionally depends on:
luminance signals of pixels of the first sensor, spatially located at a predetermined distance from the first pixel of the first sensor, convoluted with the convolution core; and luminance signals of pixels of the second sensor, spatially located at a predetermined distance from the second pixel of the second sensor, convoluted with the convolution core. 15. Method according to claim 3, wherein said luminance component (EI) additionally depends on:
luminance signals of pixels of the first sensor, spatially located at a predetermined distance from the first pixel of the first sensor, convoluted with the convolution core; and luminance signals of pixels of the second sensor, spatially located at a predetermined distance from the second pixel of the second sensor, convoluted with the convolution core. 16. Method according to claim 4, wherein said luminance component (EI) additionally depends on:
luminance signals of pixels of the first sensor, spatially located at a predetermined distance from the first pixel of the first sensor, convoluted with the convolution core; and luminance signals of pixels of the second sensor, spatially located at a predetermined distance from the second pixel of the second sensor, convoluted with the convolution core. 17. Method according to claim 2, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. 18. Method according to claim 3, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. 19. Method according to claim 4, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. 20. Method according to claim 5, wherein said movement component (EM) depends on:
an average value (S(p)) of the time values relating to the occurrence of pixel events of the first sensor, spatially located at a predetermined distance from the pixel of the first sensor; an average value (S(q)) of the time values relating to the occurrence of pixel events of the second sensor, spatially located at a predetermined distance from the pixel of the second sensor. | 2,400 |
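Claims 17–20 above describe the movement component E_M as depending on the average value S(p) of the timestamps of events occurring within a predetermined distance of a pixel of each sensor. A minimal sketch of that quantity and of a movement term built from it, assuming events are (x, y, t) tuples; all function and variable names here are illustrative, not from the patent:

```python
from statistics import mean

def mean_activity_time(events, pixel, radius):
    """Average timestamp S(p) of events that occurred within
    `radius` (Chebyshev distance) of `pixel`.  `events` is a list
    of (x, y, t) tuples from one asynchronous sensor."""
    px, py = pixel
    times = [t for (x, y, t) in events
             if max(abs(x - px), abs(y - py)) <= radius]
    return mean(times) if times else None

def movement_component(events1, p, events2, q, radius=2):
    """E_M: dissimilarity of the local mean event times around
    pixel p of the first sensor and pixel q of the second sensor."""
    s_p = mean_activity_time(events1, p, radius)
    s_q = mean_activity_time(events2, q, radius)
    if s_p is None or s_q is None:
        return float("inf")   # no local activity: cannot match
    return abs(s_p - s_q)
```

In the patent, the matching step selects the pairing that minimises the full cost function E, of which this movement term is only one component alongside the luminance and geometric components.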
8,677 | 8,677 | 12,470,567 | 2,456 | A system for providing a web service on a network of addressable nodes, said web service comprising a plurality of discrete, individually-addressable microservices, said system comprising: (a) at least one load balancer configured for routing a request from a node for a microservice to one of a plurality of virtual addresses, each virtual address corresponding to a unique microservice, and (b) one or more physical nodes associated with each virtual address, each physical node comprising one or more microservices, each microservice comprising a microservice-specific module for executing a particular function, said microservice-specific module linked to an interface for communicating over said network, each microservice being one of a plurality of individually-addressable microservices constituting a web service. | 1. A web service method, comprising:
receiving, at a first physical node, a request to initiate a web service; identifying a plurality of microservices that support different aspects of the requested web service; identifying a prerequisite microservice for one of the plurality of microservices; transmitting a request to the prerequisite microservice; determining a status of the prerequisite microservice based on whether a response to the request is received; deploying a new instance of the prerequisite microservice based on the determined status; and transmitting, from the first physical node, a plurality of requests to a plurality of addresses of the plurality of microservices that support different aspects of the web service, wherein the web service communicates with the microservices using a common interface. 2. The method of claim 1, wherein requests addressed to a first microservice are distributed among a plurality of physical nodes, each physical node running a copy of the first microservice. 3. The method of claim 1, further comprising using a load balancer to distribute requests for a first microservice's support among a plurality of physical nodes running copies of the microservice. 4. The method of claim 3, further comprising using a second load balancer to distribute requests among a plurality of microservices instanced on a single physical node. 5. The method of claim 1, further comprising a first one of said microservices receiving one of the requests, and transmitting a reply to a second one of said microservices. 6. The method of claim 1, further comprising running copies of a first one of the microservices on two physical nodes. 7. The method of claim 1, further comprising running multiple copies of a first one of the microservices on a first physical node. 8. The method of claim 1, further comprising generating a list of prerequisite microservices during compilation of a microservice that depends on the prerequisite microservices. 9. 
The method of claim 1, wherein the transmitted request is an HTTP HEAD request. 10. A computer-readable medium, storing computer-executable instructions that, when executed, cause the following to occur:
receiving, at a first physical node, a request to initiate a web service; identifying a plurality of microservices that support different aspects of the requested web service; identifying a prerequisite microservice for one of the plurality of microservices; transmitting a request to the prerequisite microservice; determining a status of the prerequisite microservice based on whether a response to the request is received; deploying a new instance of the prerequisite microservice based on the determined status; and transmitting, from the first physical node, a plurality of requests to a plurality of addresses of the plurality of microservices that support different aspects of the web service, wherein the web service communicates with the microservices using a common interface. 11. The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: distributing requests addressed to a first microservice among a plurality of physical nodes, each physical node running a copy of the first microservice. 12. The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: balancing load by distributing requests for a first microservice's support among a plurality of physical nodes running copies of the microservice. 13. The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: using a second load balancer to distribute requests among a plurality of microservices instanced on a single physical node. 14. The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: running multiple copies of a first one of the microservices on a first physical node. 15. 
The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: generating a list of prerequisite microservices during compilation of a microservice that depends on the prerequisite microservices. 16. The computer-readable medium of claim 10, wherein the transmitted request is an HTTP HEAD request. | A system for providing a web service on a network of addressable nodes, said web service comprising a plurality of discrete, individually-addressable microservices, said system comprising: (a) at least one load balancer configured for routing a request from a node for a microservice to one of a plurality of virtual addresses, each virtual address corresponding to a unique microservice, and (b) one or more physical nodes associated with each virtual address, each physical node comprising one or more microservices, each microservice comprising a microservice-specific module for executing a particular function, said microservice-specific module linked to an interface for communicating over said network, each microservice being one of a plurality of individually-addressable microservices constituting a web service.1. A web service method, comprising:
receiving, at a first physical node, a request to initiate a web service; identifying a plurality of microservices that support different aspects of the requested web service; identifying a prerequisite microservice for one of the plurality of microservices; transmitting a request to the prerequisite microservice; determining a status of the prerequisite microservice based on whether a response to the request is received; deploying a new instance of the prerequisite microservice based on the determined status; and transmitting, from the first physical node, a plurality of requests to a plurality of addresses of the plurality of microservices that support different aspects of the web service, wherein the web service communicates with the microservices using a common interface. 2. The method of claim 1, wherein requests addressed to a first microservice are distributed among a plurality of physical nodes, each physical node running a copy of the first microservice. 3. The method of claim 1, further comprising using a load balancer to distribute requests for a first microservice's support among a plurality of physical nodes running copies of the microservice. 4. The method of claim 3, further comprising using a second load balancer to distribute requests among a plurality of microservices instanced on a single physical node. 5. The method of claim 1, further comprising a first one of said microservices receiving one of the requests, and transmitting a reply to a second one of said microservices. 6. The method of claim 1, further comprising running copies of a first one of the microservices on two physical nodes. 7. The method of claim 1, further comprising running multiple copies of a first one of the microservices on a first physical node. 8. The method of claim 1, further comprising generating a list of prerequisite microservices during compilation of a microservice that depends on the prerequisite microservices. 9. 
The method of claim 1, wherein the transmitted request is an HTTP HEAD request. 10. A computer-readable medium, storing computer-executable instructions that, when executed, cause the following to occur:
receiving, at a first physical node, a request to initiate a web service; identifying a plurality of microservices that support different aspects of the requested web service; identifying a prerequisite microservice for one of the plurality of microservices; transmitting a request to the prerequisite microservice; determining a status of the prerequisite microservice based on whether a response to the request is received; deploying a new instance of the prerequisite microservice based on the determined status; and transmitting, from the first physical node, a plurality of requests to a plurality of addresses of the plurality of microservices that support different aspects of the web service, wherein the web service communicates with the microservices using a common interface. 11. The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: distributing requests addressed to a first microservice among a plurality of physical nodes, each physical node running a copy of the first microservice. 12. The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: balancing load by distributing requests for a first microservice's support among a plurality of physical nodes running copies of the microservice. 13. The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: using a second load balancer to distribute requests among a plurality of microservices instanced on a single physical node. 14. The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: running multiple copies of a first one of the microservices on a first physical node. 15. 
The computer-readable medium of claim 10, further storing computer-executable instructions that, when executed, cause the following to occur: generating a list of prerequisite microservices during compilation of a microservice that depends on the prerequisite microservices. 16. The computer-readable medium of claim 10, wherein the transmitted request is an HTTP HEAD request. | 2,400 |
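Claims 1, 8, 9 and 16 above together outline a dependency-management loop: each microservice carries a compile-time list of prerequisite microservices, its status is determined by whether a probe (an HTTP HEAD request in the dependent claims) gets a response, and a new instance is deployed when it does not. A hedged sketch of that control flow; `probe` and `deploy` are injected callables standing in for the real transport and orchestration layer, which the claims do not specify:

```python
def ensure_prerequisites(prerequisites, probe, deploy):
    """For each prerequisite microservice address, determine its
    status from whether the probe receives a response (claim 1);
    if it does not, deploy a new instance.  Returns a mapping from
    each original address to a live address.

    probe(addr)  -> True if the service answered (e.g. an HTTP HEAD)
    deploy(addr) -> address of a freshly deployed instance
    """
    live = {}
    for addr in prerequisites:
        try:
            alive = probe(addr)
        except OSError:          # refused, unreachable, timed out
            alive = False
        live[addr] = addr if alive else deploy(addr)
    return live
```

A real probe would issue an HTTP HEAD via the standard `http.client` module, matching claims 9 and 16; injecting the probe keeps the control flow independent of the transport.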
8,678 | 8,678 | 15,669,763 | 2,449 | In one embodiment, a method includes sending, to an online social network, session information between a third-party content provider and a first user of the online social network. The session information includes information referencing an established wireless communication session between a first client system of the first user and a beacon of the third-party content provider. The beacon is physically proximate to the first client system at the time of the wireless communication session, and the wireless communication session allows the online social network to send social-networking information of the first user to the beacon. In response to sending the session information, a first set of social-networking information of the first user is received from the online social network via the beacon. The first set of social-networking information allows the third-party content provider to send, via the beacon, customized third-party content for display on the first client system. | 1. A method comprising, by one or more computing devices of a third-party content provider:
sending, to one or more computing systems of an online social network, session information between a first user of the online social network and the third-party content provider, wherein:
the session information comprises information indicating that a wireless communication session has been established between a first client system of the first user and a beacon associated with the third-party content provider, wherein the wireless communication session allows the online social network to send social-networking information of the first user to the third-party content provider via the beacon; and
the beacon is physically proximate to the first client system at the time of the wireless communication session;
in response to sending the session information, receiving, from the computing systems of the online social network, a first set of social-networking information of the first user; and sending, to the first client system, customized third-party content for display on the first client system, wherein the customized third-party content is based on the first set of social-networking information. 2. The method of claim 1, wherein the first set of social-networking information comprises demographic information of the first user. 3. The method of claim 1, wherein the first set of social-networking information comprises a purchase history of the first user. 4. The method of claim 1, wherein the first set of social-networking information comprises payment credentials of the first user. 5. The method of claim 1, wherein the third-party content provider is associated with at least one attribute, and the first set of social-networking information is based at least in part on the at least one attribute. 6. The method of claim 5, wherein the at least one attribute of the third-party content provider comprises a type of good or service. 7. The method of claim 6, wherein the first set of social-networking information comprises user preferences associated with the type of good or service. 8. The method of claim 1, wherein the customized third-party content comprises a prompt requesting a response by the first user. 9. The method of claim 8, wherein the requested response comprises one or more of a binary answer to a question posed by the prompt, an image, or a text message comprising information requested by the prompt, the requested response being inputted by the first user at the first client system. 10. The method of claim 1, wherein the customized third-party content comprises a promotional offer, the promotional offer redeemable by the first user while the wireless communication session between the beacon and the first client system remains active. 11. 
The method of claim 1, further comprising:
detecting, via the beacon, that the wireless communication session between the beacon and the first client system has been terminated, wherein the customized third-party content is sent to the first client system in response to detecting the termination. 12. The method of claim 1, wherein the online social network comprises a social graph comprising a plurality of nodes and a plurality of edges connecting the nodes, each of the edges between two of the nodes representing a single degree of separation between them, the nodes comprising:
a first node corresponding to the first user; and a plurality of second nodes corresponding to a plurality of entities associated with the online social network, respectively. 13. The method of claim 12, wherein the first set of social-networking information comprises a connection between the first node and a particular second node of the plurality of second nodes, the particular second node corresponding to the third-party content provider. 14. The method of claim 13, further comprising:
receiving, via the beacon, a second set of social-networking information associated with one or more second users, wherein each of the one or more second users is:
associated with a respective client system within wireless communication range of the beacon; and
connected to the first user in the social graph within a threshold degree of separation. 15. The method of claim 12, wherein the customized third-party content comprises one or more identifiers of one or more second users of the online social network, the second users each associated with a respective second client system, and wherein wireless communication sessions have been established between the second client systems and the beacon. 16. The method of claim 12, wherein the first set of social-networking information is based on a degree of separation in the social graph between a particular second node corresponding to the third-party content provider and the first node. 17. The method of claim 12, wherein a particular second node corresponds to the third-party content provider, and wherein a social-networking action is performed with respect to the first node and the particular second node in response to the establishment of the wireless communication session between the beacon and the first client system. 18. The method of claim 1, wherein the first set of social-networking information is further based on a set of permissions specified by the first user. 19. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:
send, to one or more computing systems of an online social network, session information between a first user of the online social network and the third-party content provider, wherein:
the session information comprises information indicating that a wireless communication session has been established between a first client system of the first user and a beacon associated with the third-party content provider, wherein the wireless communication session allows the online social network to send social-networking information of the first user to the third-party content provider via the beacon; and
the beacon is physically proximate to the first client system at the time of the wireless communication session;
in response to sending the session information, receive, from the computing systems of the online social network, a first set of social-networking information of the first user; and send, to the first client system, customized third-party content for display on the first client system, wherein the customized third-party content is based on the first set of social-networking information. 20. A system comprising: one or more processors; and a memory coupled to the processors comprising instructions executable by the processors, the processors operable when executing the instructions to:
send, to one or more computing systems of an online social network, session information between a first user of the online social network and the third-party content provider, wherein:
the session information comprises information indicating that a wireless communication session has been established between a first client system of the first user and a beacon associated with the third-party content provider, wherein the wireless communication session allows the online social network to send social-networking information of the first user to the third-party content provider via the beacon; and
the beacon is physically proximate to the first client system at the time of the wireless communication session;
in response to sending the session information, receive, from the computing systems of the online social network, a first set of social-networking information of the first user; and send, to the first client system, customized third-party content for display on the first client system, wherein the customized third-party content is based on the first set of social-networking information. | In one embodiment, a method includes sending, to an online social network, session information between a third-party content provider and a first user of the online social network. The session information includes information referencing an established wireless communication session between a first client system of the first user and a beacon of the third-party content provider. The beacon is physically proximate to the first client system at the time of the wireless communication session, and the wireless communication session allows the online social network to send social-networking information of the first user to the beacon. In response to sending the session information, a first set of social-networking information of the first user is received from the online social network via the beacon. The first set of social-networking information allows the third-party content provider to send, via the beacon, customized third-party content for display on the first client system.1. A method comprising, by one or more computing devices of a third-party content provider:
sending, to one or more computing systems of an online social network, session information between a first user of the online social network and the third-party content provider, wherein:
the session information comprises information indicating that a wireless communication session has been established between a first client system of the first user and a beacon associated with the third-party content provider, wherein the wireless communication session allows the online social network to send social-networking information of the first user to the third-party content provider via the beacon; and
the beacon is physically proximate to the first client system at the time of the wireless communication session;
in response to sending the session information, receiving, from the computing systems of the online social network, a first set of social-networking information of the first user; and sending, to the first client system, customized third-party content for display on the first client system, wherein the customized third-party content is based on the first set of social-networking information. 2. The method of claim 1, wherein the first set of social-networking information comprises demographic information of the first user. 3. The method of claim 1, wherein the first set of social-networking information comprises a purchase history of the first user. 4. The method of claim 1, wherein the first set of social-networking information comprises payment credentials of the first user. 5. The method of claim 1, wherein the third-party content provider is associated with at least one attribute, and the first set of social-networking information is based at least in part on the at least one attribute. 6. The method of claim 5, wherein the at least one attribute of the third-party content provider comprises a type of good or service. 7. The method of claim 6, wherein the first set of social-networking information comprises user preferences associated with the type of good or service. 8. The method of claim 1, wherein the customized third-party content comprises a prompt requesting a response by the first user. 9. The method of claim 8, wherein the requested response comprises one or more of a binary answer to a question posed by the prompt, an image, or a text message comprising information requested by the prompt, the requested response being inputted by the first user at the first client system. 10. The method of claim 1, wherein the customized third-party content comprises a promotional offer, the promotional offer redeemable by the first user while the wireless communication session between the beacon and the first client system remains active. 11. 
The method of claim 1, further comprising:
detecting, via the beacon, that the wireless communication session between the beacon and the first client system has been terminated, wherein the customized third-party content is sent to the first client system in response to detecting the termination. 12. The method of claim 1, wherein the online social network comprises a social graph comprising a plurality of nodes and a plurality of edges connecting the nodes, each of the edges between two of the nodes representing a single degree of separation between them, the nodes comprising:
a first node corresponding to the first user; and a plurality of second nodes corresponding to a plurality of entities associated with the online social network, respectively. 13. The method of claim 12, wherein the first set of social-networking information comprises a connection between the first node and a particular second node of the plurality of second nodes, the particular second node corresponding to the third-party content provider. 14. The method of claim 13, further comprising:
receiving, via the beacon, a second set of social-networking information associated with one or more second users, wherein each of the one or more second users is:
associated with a respective client system within wireless communication range of the beacon; and
connected to the first user in the social graph within a threshold degree of separation. 15. The method of claim 12, wherein the customized third-party content comprises one or more identifiers of one or more second users of the online social network, the second users each associated with a respective second client system, and wherein wireless communication sessions have been established between the second client systems and the beacon. 16. The method of claim 12, wherein the first set of social-networking information is based on a degree of separation in the social graph between a particular second node corresponding to the third-party content provider and the first node. 17. The method of claim 12, wherein a particular second node corresponds to the third-party content provider, and wherein a social-networking action is performed with respect to the first node and the particular second node in response to the establishment of the wireless communication session between the beacon and the first client system. 18. The method of claim 1, wherein the first set of social-networking information is further based on a set of permissions specified by the first user. 19. One or more computer-readable non-transitory storage media embodying software that is operable when executed to:
send, to one or more computing systems of an online social network, session information between a first user of the online social network and the third-party content provider, wherein:
the session information comprises information indicating that a wireless communication session has been established between a first client system of the first user and a beacon associated with the third-party content provider, wherein the wireless communication session allows the online social network to send social-networking information of the first user to the third-party content provider via the beacon; and
the beacon is physically proximate to the first client system at the time of the wireless communication session;
in response to sending the session information, receive, from the computing systems of the online social network, a first set of social-networking information of the first user; and send, to the first client system, customized third-party content for display on the first client system, wherein the customized third-party content is based on the first set of social-networking information. 20. A system comprising: one or more processors; and a memory coupled to the processors comprising instructions executable by the processors, the processors operable when executing the instructions to:
send, to one or more computing systems of an online social network, session information between a first user of the online social network and the third-party content provider, wherein:
the session information comprises information indicating that a wireless communication session has been established between a first client system of the first user and a beacon associated with the third-party content provider, wherein the wireless communication session allows the online social network to send social-networking information of the first user to the third-party content provider via the beacon; and
the beacon is physically proximate to the first client system at the time of the wireless communication session;
in response to sending the session information, receive, from the computing systems of the online social network, a first set of social-networking information of the first user; and send, to the first client system, customized third-party content for display on the first client system, wherein the customized third-party content is based on the first set of social-networking information. | 2,400 |
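The message sequence recited in claims 19-20 above (provider sends session information, receives a permission-filtered first set of social-networking information, then sends customized content) can be sketched as follows. This is a minimal illustration only: every function and field name here is an assumption of mine, not from the patent or any real social-network API.

```python
# Hypothetical sketch of the claim-19 flow; all identifiers are invented.

def filter_by_permissions(profile: dict, permissions: set) -> dict:
    """Per claim 18, the returned social-networking information is further
    limited by a set of permissions specified by the first user."""
    return {k: v for k, v in profile.items() if k in permissions}

def handle_session(profile: dict, permissions: set, session: dict) -> dict:
    """Stands in for the online social network: given session information
    indicating a wireless session between a client system and a beacon,
    return a first set of social-networking information."""
    if not session.get("wireless_session_established"):
        return {}
    return filter_by_permissions(profile, permissions)

def customize_content(info: dict, default: str) -> str:
    """Stands in for the third-party content provider: build customized
    content based on the received social-networking information."""
    name = info.get("first_name")
    return f"Welcome back, {name}!" if name else default

profile = {"first_name": "Alice", "email": "alice@example.com"}
session = {"wireless_session_established": True, "beacon_id": "b-1"}
info = handle_session(profile, {"first_name"}, session)
print(customize_content(info, "Welcome!"))  # email withheld by permissions
```

Note that the permission filter means the provider only ever sees the fields the user allowed, which is the point of routing the exchange through the social network's computing systems rather than the beacon alone.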
8,679 | 8,679 | 15,331,661 | 2,454 | A method includes, with a Virtual Network Function (VNF) component associated with a VNF, communicating with an access network over a first physical network connected to a first physical network interface of a physical machine associated with the VNF component. The method further includes, with the VNF component, communicating with a core network over a second physical network connected to a second physical network interface of the physical machine, the second network being isolated from the first network. | 1. A method comprising:
with a Virtual Network Function (VNF) component associated with a VNF, communicating with an access network over a first physical network connected to a first physical network interface of a physical machine associated with the VNF component; and with the VNF component, communicating with a core network over a second physical network connected to a second physical network interface of the physical machine, the second network being isolated from the first network. 2. The method of claim 1, wherein the first and second physical networks are established via an SDN controller. 3. The method of claim 1, wherein the VNF component runs directly on the physical machine. 4. The method of claim 3, wherein the VNF component is associated with bearer traffic. 5. The method of claim 3, wherein the VNF component is provisioned using Metal as a Service (MaaS). 6. The method of claim 1, wherein:
the VNF component runs on a virtual machine running on the first physical machine; the virtual machine comprises a first virtual network interface mapped to the first physical network interface and a second virtual network interface mapped to the second physical network interface; and the VNF component is configured to communicate with the access network through the first virtual network interface and communicate with the core network through the second virtual network interface. 7. The method of claim 6, further comprising, with a VNF manager that manages the VNF, instructing a Software-Defined Networking (SDN) controller to map the first virtual network interface to the first physical network interface and map the second virtual network interface to the second physical network interface. 8. The method of claim 1, wherein the VNF component runs on a virtual machine running on the first physical machine, and further wherein the first VNF component is associated with signaling traffic. 9. The method of claim 1, wherein the first physical machine is one of a plurality of physical machines within a datacenter, each of the physical machines comprising:
a first network interface connected to the first physical network; and a second network interface connected to the second physical network. 10. The method of claim 9, wherein:
physical machines of the plurality of physical machines that run signaling VNF components are connected to the first physical network and the second physical network using a first set of network components; and physical machines of the plurality of physical machines that run bearer VNF components are connected to the first physical network and the second physical network using a second set of network components having a higher throughput than the first set of network components. 11. A system comprising:
a first physical network interface; a second physical network interface; a processor; and a memory comprising machine readable instructions that, when executed by the processor, cause the system to:
run a Virtual Network Function (VNF) component of a VNF;
on behalf of the VNF component, communicate with an access network over a first network connected to the first physical network interface; and
on behalf of the VNF component, communicate with a core network over a second network connected to the second physical network interface. 12. The system of claim 11, wherein data traffic between the VNF component and the core network traverses different physical cables than data traffic between the VNF component and the access network. 13. The system of claim 11, wherein the VNF component is configured to run on a virtual machine. 14. The system of claim 13, wherein the VNF component is associated with signaling traffic. 15. The system of claim 11, wherein the processor is configured to directly run the VNF component. 16. The system of claim 11, wherein the VNF component is associated with bearer traffic. 17. The system of claim 11, wherein the VNF comprises one of: a Session Border Controller (SBC), an Internet Protocol (IP) Multimedia Subsystem (IMS) core, and a telephony application server. 18. A system comprising:
a plurality of physical computing systems, each of the physical computing systems comprising at least two physical network interfaces; a first network connected to a first one of the physical network interfaces for each of the physical computing systems, the first network being connected to an access network; and a second network connected to a second one of the physical network interfaces for each of the physical computing systems, the second network being connected to a core network, the second network being isolated from the first network; wherein each of the physical computing systems is configured to run at least one Virtual Network Function (VNF) component of a VNF. 19. The system of claim 18, wherein the first network and the second network comprise packet switched Ethernet networks. 20. The system of claim 18, wherein at least one of the plurality of physical computing systems is configured to provide a cloud computing environment to run VNF components of the VNF and at least one of the plurality of physical computing systems is configured to directly run VNF components of the VNF. | A method includes, with a Virtual Network Function (VNF) component associated with a VNF, communicating with an access network over a first physical network connected to a first physical network interface of a physical machine associated with the VNF component. The method further includes, with the VNF component, communicating with a core network over a second physical network connected to a second physical network interface of the physical machine, the second network being isolated from the first network.1. A method comprising:
with a Virtual Network Function (VNF) component associated with a VNF, communicating with an access network over a first physical network connected to a first physical network interface of a physical machine associated with the VNF component; and with the VNF component, communicating with a core network over a second physical network connected to a second physical network interface of the physical machine, the second network being isolated from the first network. 2. The method of claim 1, wherein the first and second physical networks are established via an SDN controller. 3. The method of claim 1, wherein the VNF component runs directly on the physical machine. 4. The method of claim 3, wherein the VNF component is associated with bearer traffic. 5. The method of claim 3, wherein the VNF component is provisioned using Metal as a Service (MaaS). 6. The method of claim 1, wherein:
the VNF component runs on a virtual machine running on the first physical machine; the virtual machine comprises a first virtual network interface mapped to the first physical network interface and a second virtual network interface mapped to the second physical network interface; and the VNF component is configured to communicate with the access network through the first virtual network interface and communicate with the core network through the second virtual network interface. 7. The method of claim 6, further comprising, with a VNF manager that manages the VNF, instructing a Software-Defined Networking (SDN) controller to map the first virtual network interface to the first physical network interface and map the second virtual network interface to the second physical network interface. 8. The method of claim 1, wherein the VNF component runs on a virtual machine running on the first physical machine, and further wherein the first VNF component is associated with signaling traffic. 9. The method of claim 1, wherein the first physical machine is one of a plurality of physical machines within a datacenter, each of the physical machines comprising:
a first network interface connected to the first physical network; and a second network interface connected to the second physical network. 10. The method of claim 9, wherein:
physical machines of the plurality of physical machines that run signaling VNF components are connected to the first physical network and the second physical network using a first set of network components; and physical machines of the plurality of physical machines that run bearer VNF components are connected to the first physical network and the second physical network using a second set of network components having a higher throughput than the first set of network components. 11. A system comprising:
a first physical network interface; a second physical network interface; a processor; and a memory comprising machine readable instructions that, when executed by the processor, cause the system to:
run a Virtual Network Function (VNF) component of a VNF;
on behalf of the VNF component, communicate with an access network over a first network connected to the first physical network interface; and
on behalf of the VNF component, communicate with a core network over a second network connected to the second physical network interface. 12. The system of claim 11, wherein data traffic between the VNF component and the core network traverses different physical cables than data traffic between the VNF component and the access network. 13. The system of claim 11, wherein the VNF component is configured to run on a virtual machine. 14. The system of claim 13, wherein the VNF component is associated with signaling traffic. 15. The system of claim 11, wherein the processor is configured to directly run the VNF component. 16. The system of claim 11, wherein the VNF component is associated with bearer traffic. 17. The system of claim 11, wherein the VNF comprises one of: a Session Border Controller (SBC), an Internet Protocol (IP) Multimedia Subsystem (IMS) core, and a telephony application server. 18. A system comprising:
a plurality of physical computing systems, each of the physical computing systems comprising at least two physical network interfaces; a first network connected to a first one of the physical network interfaces for each of the physical computing systems, the first network being connected to an access network; and a second network connected to a second one of the physical network interfaces for each of the physical computing systems, the second network being connected to a core network, the second network being isolated from the first network; wherein each of the physical computing systems is configured to run at least one Virtual Network Function (VNF) component of a VNF. 19. The system of claim 18, wherein the first network and the second network comprise packet switched Ethernet networks. 20. The system of claim 18, wherein at least one of the plurality of physical computing systems is configured to provide a cloud computing environment to run VNF components of the VNF and at least one of the plurality of physical computing systems is configured to directly run VNF components of the VNF. | 2,400 |
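The separation the VNF claims above describe, one physical interface on the access network and a second on an isolated core network, can be sketched with plain objects. This is not an NFV implementation; the class and interface names (`eth0`, `eth1`, `VNFComponent`) are assumptions chosen only to show that traffic for each network stays on its own interface.

```python
# Illustrative sketch: Python objects stand in for physical NICs on two
# isolated networks, per claim 1. All names are invented for the example.

class PhysicalInterface:
    def __init__(self, name: str, network: str):
        self.name = name
        self.network = network  # which isolated network this NIC joins
        self.sent = []          # frames "transmitted" on this NIC

    def send(self, payload: str):
        self.sent.append(payload)

class VNFComponent:
    """A VNF component on a machine with two physical interfaces: one
    facing the access network, one facing the isolated core network."""
    def __init__(self, access_if: PhysicalInterface, core_if: PhysicalInterface):
        self.routes = {access_if.network: access_if, core_if.network: core_if}

    def communicate(self, network: str, payload: str):
        # Traffic is dispatched by destination network and never crosses
        # from one physical network to the other.
        self.routes[network].send(payload)

eth0 = PhysicalInterface("eth0", "access")
eth1 = PhysicalInterface("eth1", "core")
vnfc = VNFComponent(eth0, eth1)
vnfc.communicate("access", "access-side signaling")
vnfc.communicate("core", "core-side signaling")
```

In the virtualized variant of claim 6, the same dispatch would happen one level up: virtual interfaces mapped onto these physical ones, with the mapping installed by an SDN controller at the VNF manager's instruction (claim 7).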
8,680 | 8,680 | 14,295,540 | 2,482 | Techniques are described for harmonizing coding techniques when residual differential pulse code modulation (RDPCM) is applied to a residual block. In some examples, a scan order used for such a residual block may be required to be the same as when the residual block is generated from intra-predicting the current block and when the residual block is generated from inter-predicting or intra block copy predicting the current block. | 1. A method of decoding video data, the method comprising:
decoding information indicating a direction in which residual differential pulse code modulation (DPCM) is applied to a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block; determining a scan order for the first residual block based on the information indicating the direction, wherein the determined scan order is required to be the same scan order as would be used on a second residual block if the second residual block was generated from intra-predicting the current block and if the second residual block had residual DPCM applied with the same direction as the direction indicated in the decoded information; entropy decoding the residual data of the first residual block based on the determined scan order; and reconstructing the current block based on the decoded residual data. 2. The method of claim 1, wherein the vector of the current block comprises a motion vector, and the first residual block is generated from inter-prediction of the current block. 3. The method of claim 1, wherein the vector of the current block comprises a block vector, and the first residual block is generated from intra block copy prediction of the current block. 4. The method of claim 1,
wherein decoding information indicating the direction comprises decoding information indicating one of a vertical residual DPCM or a horizontal residual DPCM, and wherein determining the scan order for the first residual block comprises one of:
determining a vertical scan, if the information indicated a horizontal residual DPCM and if the second residual block had horizontal residual DPCM applied and would have used the vertical scan, or
determining a horizontal scan, if the information indicated a vertical residual DPCM and if the second residual block had vertical residual DPCM applied and would have used the horizontal scan. 5. The method of claim 1, further comprising:
determining whether a size of the first residual block is less than or equal to a threshold size, wherein decoding information indicating the direction comprises decoding information indicating one of a vertical residual DPCM or a horizontal residual DPCM, and wherein determining the scan order for the first residual block comprises one of:
determining a vertical scan, if the information indicated a horizontal residual DPCM, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had horizontal residual DPCM applied and would have used the vertical scan, or
determining a horizontal scan, if the information indicated a vertical residual DPCM, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had vertical residual DPCM applied and would have used the horizontal scan. 6. The method of claim 5, wherein the threshold size comprises 8×8. 7. The method of claim 1,
wherein decoding information indicating the direction comprises decoding information indicating one of a vertical residual DPCM or a horizontal residual DPCM, and wherein determining the scan order for the first residual block comprises one of:
determining a horizontal scan, if the information indicated a horizontal residual DPCM and if the second residual block had horizontal residual DPCM applied and would have used the horizontal scan, or
determining a vertical scan, if the information indicated a vertical residual DPCM and if the second residual block had vertical residual DPCM applied and would have used the vertical scan. 8. The method of claim 1, further comprising:
determining whether a size of the first residual block is less than or equal to a threshold size, wherein decoding information indicating the direction comprises decoding information indicating one of a vertical residual DPCM or a horizontal residual DPCM, and wherein determining the scan order for the first residual block comprises one of:
determining a horizontal scan, if the information indicated a horizontal residual DPCM, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had horizontal residual DPCM applied and would have used the horizontal scan, or
determining a vertical scan, if the information indicated a vertical residual DPCM, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had vertical residual DPCM applied and would have used the vertical scan. 9. The method of claim 1, wherein the residual data generated from the difference between the predictive block, referred to by the vector of the current block, and the current block comprises residual data that includes residual values from the difference between the predictive block and the current block without a transform applied to the residual values that converts the residual values from a pixel domain to a transform domain. 10. The method of claim 1, wherein entropy decoding the residual data comprises entropy decoding 4×4 sub-blocks of the first residual block based on the determined scan order. 11. The method of claim 1, further comprising:
decoding information indicating whether residual DPCM is applied to the first residual block; and determining whether residual DPCM is applied to the first residual block based on the decoded information indicating whether residual DPCM is applied to the first residual block, wherein decoding information indicating the order in which residual DPCM is applied comprises decoding information indicating the order in which residual DPCM is applied if determined that residual DPCM is applied to the first residual block. 12. The method of claim 11, further comprising:
if determined that residual DPCM is not applied to the first residual block, determining that the scan order is a diagonal scan. 13. A method of encoding video data, the method comprising:
determining a direction in which residual differential pulse code modulation (DPCM) is to be applied to a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block; determining a scan order for the first residual block based on the determined direction in which the residual DPCM is applied, wherein the determined scan order is required to be the same scan order as would be used on a second residual block if the second residual block was generated from intra-predicting the current block and if the second residual block had residual DPCM applied with the same direction as the determined direction in which the residual DPCM is applied to the first residual block; entropy encoding the residual data of the first residual block based on the determined scan order; encoding information indicating the determined direction in which residual DPCM is applied; and outputting the encoded residual data and the information indicating the determined direction in which residual DPCM is applied. 14. The method of claim 13, wherein the vector of the current block comprises a motion vector, and the first residual block is generated from inter-prediction of the current block. 15. The method of claim 13, wherein the vector of the current block comprises a block vector, and the first residual block is generated from intra block copy prediction of the current block. 16. The method of claim 13,
wherein determining a direction comprises one of determining that a horizontal residual DPCM is to be applied or a vertical residual DPCM is to be applied, and wherein determining the scan order for the first residual block comprises one of:
determining a vertical scan, if the second residual block had horizontal residual DPCM applied and would have used the vertical scan, or
determining a horizontal scan, if the second residual block had vertical residual DPCM applied and would have used the horizontal scan. 17. The method of claim 13, further comprising:
determining whether a size of the first residual block is less than or equal to a threshold size, wherein determining a direction comprises one of determining that a horizontal residual DPCM is to be applied or a vertical residual DPCM is to be applied, and wherein determining the scan order for the first residual block comprises one of:
determining a vertical scan, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had horizontal residual DPCM applied and would have used the vertical scan, or
determining a horizontal scan, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had vertical residual DPCM applied and would have used the horizontal scan. 18. The method of claim 17, wherein the threshold size comprises 8×8. 19. The method of claim 13,
wherein determining a direction comprises one of determining that a horizontal residual DPCM is to be applied or a vertical residual DPCM is to be applied, and wherein determining the scan order for the first residual block comprises one of:
determining a horizontal scan, if the second residual block had horizontal residual DPCM applied and would have used the horizontal scan, or
determining a vertical scan, if the second residual block had vertical residual DPCM applied and would have used the vertical scan. 20. The method of claim 13, further comprising:
determining whether a size of the first residual block is less than or equal to a threshold size, wherein determining a direction comprises one of determining that a horizontal residual DPCM is to be applied or a vertical residual DPCM is to be applied, and wherein determining the scan order for the first residual block comprises one of:
determining a horizontal scan, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had horizontal residual DPCM applied and would have used the horizontal scan, or
determining a vertical scan, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had vertical residual DPCM applied and would have used the vertical scan. 21. A device for decoding video data, the device comprising:
a video data memory configured to store a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block; and a video decoder configured to:
decode information indicating a direction in which residual differential pulse code modulation (DPCM) is applied to the first residual block;
determine a scan order for the first residual block based on the information indicating the direction, wherein the determined scan order is required to be the same scan order as would be used on a second residual block if the second residual block was generated from intra-predicting the current block and if the second residual block had residual DPCM applied with the same direction as the direction indicated in the decoded information;
entropy decode the residual data of the first residual block based on the determined scan order; and
reconstruct the current block based on the decoded residual data. 22. The device of claim 21, wherein the vector of the current block comprises a motion vector, and the first residual block is generated from inter-prediction of the current block. 23. The device of claim 21, wherein the vector of the current block comprises a block vector, and the first residual block is generated from intra block copy prediction of the current block. 24. The device of claim 21, wherein the video decoder is further configured to determine whether a size of the first residual block is less than or equal to a threshold size, and wherein to determine the scan order for the first residual block, the video decoder is configured to determine the scan order for the first residual block based on the information indicating the direction and whether the size of the first residual block is less than or equal to the threshold. 25. The device of claim 24, wherein the threshold size comprises 8×8. 26. The device of claim 21, wherein the device comprises one of:
a microprocessor; an integrated circuit; and a wireless communication device. 27. A device for encoding video data, the device comprising:
a video data memory configured to store a predictive block for a current block; and a video encoder configured to:
determine a direction in which residual differential pulse code modulation (DPCM) is to be applied to a first residual block that includes residual data generated from a difference between the predictive block, referred to by a vector of the current block, and the current block;
determine a scan order for the first residual block based on the determined direction in which the residual DPCM is applied, wherein the determined scan order is required to be the same scan order as would be used on a second residual block if the second residual block was generated from intra-predicting the current block and if the second residual block had residual DPCM applied with the same direction as the determined direction in which the residual DPCM is applied to the first residual block;
entropy encode the residual data of the first residual block based on the determined scan order;
encode information indicating the determined direction in which residual DPCM is applied; and
output the encoded residual data and the information indicating the determined direction in which residual DPCM is applied. 28. The device of claim 27, wherein the vector of the current block comprises a motion vector, and the first residual block is generated from inter-prediction of the current block. 29. The device of claim 27, wherein the vector of the current block comprises a block vector, and the first residual block is generated from intra block copy prediction of the current block. 30. The device of claim 27, wherein the video encoder is further configured to determine whether a size of the first residual block is less than or equal to a threshold size, and wherein to determine the scan order for the first residual block, the video encoder is configured to determine the scan order for the first residual block based on the determined direction in which the residual DPCM is applied and whether the size of the first residual block is less than or equal to the threshold. 31. The device of claim 30, wherein the threshold size comprises 8×8. 32. A device for decoding video data, the device comprising:
means for decoding information indicating a direction in which residual differential pulse code modulation (DPCM) is applied to a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block, wherein the information indicating the direction comprises information indicating one of a vertical residual DPCM or a horizontal residual DPCM; means for determining a scan order for the first residual block based on the information indicating the direction, wherein the means for determining the scan order for the first residual block comprises:
means for determining a vertical scan, if the information indicated a horizontal residual DPCM and if the size of the first residual block is less than or equal to 8×8; and
means for determining a horizontal scan, if the information indicated a vertical residual DPCM and if the size of the first residual block is less than or equal to 8×8;
means for entropy decoding the residual data of the first residual block based on the determined scan order; and means for reconstructing the current block based on the decoded residual data. 33. A computer-readable storage medium having instructions stored thereon that when executed cause a video encoder for a device for encoding video data to:
determine a direction in which residual differential pulse code modulation (DPCM) is to be applied to a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block, wherein the determined direction comprises one of a vertical residual DPCM or a horizontal residual DPCM; determine a scan order for the first residual block based on the determined direction in which the residual DPCM is applied, wherein to determine the scan order of the first residual block the instructions cause the video encoder to:
determine a vertical scan, if the determined direction is the horizontal residual DPCM and if the size of the first residual block is less than or equal to 8×8, or
determine a horizontal scan, if the determined direction is the vertical residual DPCM and if the size of the first residual block is less than or equal to 8×8;
entropy encode the residual data of the first residual block based on the determined scan order; encode information indicating the determined direction in which residual DPCM is applied; and output the encoded residual data and the information indicating the determined direction in which residual DPCM is applied. | Techniques are described for harmonizing coding techniques when residual differential pulse code modulation (RDPCM) is applied to a residual block. In some examples, a scan order used for such a residual block may be required to be the same as when the residual block is generated from intra-predicting the current block and when the residual block is generated from inter-predicting or intra block copy predicting the current block.1. A method of decoding video data, the method comprising:
decoding information indicating a direction in which residual differential pulse code modulation (DPCM) is applied to a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block; determining a scan order for the first residual block based on the information indicating the direction, wherein the determined scan order is required to be the same scan order as would be used on a second residual block if the second residual block was generated from intra-predicting the current block and if the second residual block had residual DPCM applied with the same direction as the direction indicated in the decoded information; entropy decoding the residual data of the first residual block based on the determined scan order; and reconstructing the current block based on the decoded residual data. 2. The method of claim 1, wherein the vector of the current block comprises a motion vector, and the first residual block is generated from inter-prediction of the current block. 3. The method of claim 1, wherein the vector of the current block comprises a block vector, and the first residual block is generated from intra block copy prediction of the current block. 4. The method of claim 1,
wherein decoding information indicating the direction comprises decoding information indicating one of a vertical residual DPCM or a horizontal residual DPCM, and wherein determining the scan order for the first residual block comprises one of:
determining a vertical scan, if the information indicated a horizontal residual DPCM and if the second residual block had horizontal residual DPCM applied and would have used the vertical scan, or
determining a horizontal scan, if the information indicated a vertical residual DPCM and if the second residual block had vertical residual DPCM applied and would have used the horizontal scan. 5. The method of claim 1, further comprising:
determining whether a size of the first residual block is less than or equal to a threshold size, wherein decoding information indicating the direction comprises decoding information indicating one of a vertical residual DPCM or a horizontal residual DPCM, and wherein determining the scan order for the first residual block comprises one of:
determining a vertical scan, if the information indicated a horizontal residual DPCM, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had horizontal residual DPCM applied and would have used the vertical scan, or
determining a horizontal scan, if the information indicated a vertical residual DPCM, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had vertical residual DPCM applied and would have used the horizontal scan. 6. The method of claim 5, wherein the threshold size comprises 8×8. 7. The method of claim 1,
wherein decoding information indicating the direction comprises decoding information indicating one of a vertical residual DPCM or a horizontal residual DPCM, and wherein determining the scan order for the first residual block comprises one of:
determining a horizontal scan, if the information indicated a horizontal residual DPCM and if the second residual block had horizontal residual DPCM applied and would have used the horizontal scan, or
determining a vertical scan, if the information indicated a vertical residual DPCM and if the second residual block had vertical residual DPCM applied and would have used the vertical scan. 8. The method of claim 1, further comprising:
determining whether a size of the first residual block is less than or equal to a threshold size, wherein decoding information indicating the direction comprises decoding information indicating one of a vertical residual DPCM or a horizontal residual DPCM, and wherein determining the scan order for the first residual block comprises one of:
determining a horizontal scan, if the information indicated a horizontal residual DPCM, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had horizontal residual DPCM applied and would have used the horizontal scan, or
determining a vertical scan, if the information indicated a vertical residual DPCM, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had vertical residual DPCM applied and would have used the vertical scan. 9. The method of claim 1, wherein the residual data generated from the difference between the predictive block, referred to by the vector of the current block, and the current block comprises residual data that includes residual values from the difference between the predictive block and the current block without a transform applied to the residual values that converts the residual values from a pixel domain to a transform domain. 10. The method of claim 1, wherein entropy decoding the residual data comprises entropy decoding 4×4 sub-blocks of the first residual block based on the determined scan order. 11. The method of claim 1, further comprising:
decoding information indicating whether residual DPCM is applied to the first residual block; and determining whether residual DPCM is applied to the first residual block based on the decoded information indicating whether residual DPCM is applied to the first residual block, wherein decoding information indicating the direction in which residual DPCM is applied comprises decoding information indicating the direction in which residual DPCM is applied if determined that residual DPCM is applied to the first residual block. 12. The method of claim 11, further comprising:
if determined that residual DPCM is not applied to the first residual block, determining that the scan order is a diagonal scan. 13. A method of encoding video data, the method comprising:
determining a direction in which residual differential pulse code modulation (DPCM) is to be applied to a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block; determining a scan order for the first residual block based on the determined direction in which the residual DPCM is applied, wherein the determined scan order is required to be the same scan order as would be used on a second residual block if the second residual block was generated from intra-predicting the current block and if the second residual block had residual DPCM applied with the same direction as the determined direction in which the residual DPCM is applied to the first residual block; entropy encoding the residual data of the first residual block based on the determined scan order; encoding information indicating the determined direction in which residual DPCM is applied; and outputting the encoded residual data and the information indicating the determined direction in which residual DPCM is applied. 14. The method of claim 13, wherein the vector of the current block comprises a motion vector, and the first residual block is generated from inter-prediction of the current block. 15. The method of claim 13, wherein the vector of the current block comprises a block vector, and the first residual block is generated from intra block copy prediction of the current block. 16. The method of claim 13,
wherein determining a direction comprises one of determining that a horizontal residual DPCM is to be applied or a vertical residual DPCM is to be applied, and wherein determining the scan order for the first residual block comprises one of:
determining a vertical scan, if the second residual block had horizontal residual DPCM applied and would have used the vertical scan, or
determining a horizontal scan, if the second residual block had vertical residual DPCM applied and would have used the horizontal scan. 17. The method of claim 13, further comprising:
determining whether a size of the first residual block is less than or equal to a threshold size, wherein determining a direction comprises one of determining that a horizontal residual DPCM is to be applied or a vertical residual DPCM is to be applied, and wherein determining the scan order for the first residual block comprises one of:
determining a vertical scan, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had horizontal residual DPCM applied and would have used the vertical scan, or
determining a horizontal scan, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had vertical residual DPCM applied and would have used the horizontal scan. 18. The method of claim 17, wherein the threshold size comprises 8×8. 19. The method of claim 13,
wherein determining a direction comprises one of determining that a horizontal residual DPCM is to be applied or a vertical residual DPCM is to be applied, and wherein determining the scan order for the first residual block comprises one of:
determining a horizontal scan, if the second residual block had horizontal residual DPCM applied and would have used the horizontal scan, or
determining a vertical scan, if the second residual block had vertical residual DPCM applied and would have used the vertical scan. 20. The method of claim 13, further comprising:
determining whether a size of the first residual block is less than or equal to a threshold size, wherein determining a direction comprises one of determining that a horizontal residual DPCM is to be applied or a vertical residual DPCM is to be applied, and wherein determining the scan order for the first residual block comprises one of:
determining a horizontal scan, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had horizontal residual DPCM applied and would have used the horizontal scan, or
determining a vertical scan, if the size of the first residual block is less than or equal to the threshold size, and if the second residual block had vertical residual DPCM applied and would have used the vertical scan. 21. A device for decoding video data, the device comprising:
a video data memory configured to store a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block; and a video decoder configured to:
decode information indicating a direction in which residual differential pulse code modulation (DPCM) is applied to the first residual block;
determine a scan order for the first residual block based on the information indicating the direction, wherein the determined scan order is required to be the same scan order as would be used on a second residual block if the second residual block was generated from intra-predicting the current block and if the second residual block had residual DPCM applied with the same direction as the direction indicated in the decoded information;
entropy decode the residual data of the first residual block based on the determined scan order; and
reconstruct the current block based on the decoded residual data. 22. The device of claim 21, wherein the vector of the current block comprises a motion vector, and the first residual block is generated from inter-prediction of the current block. 23. The device of claim 21, wherein the vector of the current block comprises a block vector, and the first residual block is generated from intra block copy prediction of the current block. 24. The device of claim 21, wherein the video decoder is further configured to determine whether a size of the first residual block is less than or equal to a threshold size, and wherein to determine the scan order for the first residual block, the video decoder is configured to determine the scan order for the first residual block based on the information indicating the direction and whether the size of the first residual block is less than or equal to the threshold. 25. The device of claim 24, wherein the threshold size comprises 8×8. 26. The device of claim 21, wherein the device comprises one of:
a microprocessor; an integrated circuit; and a wireless communication device. 27. A device for encoding video data, the device comprising:
a video data memory configured to store a predictive block for a current block; and a video encoder configured to:
determine a direction in which residual differential pulse code modulation (DPCM) is to be applied to a first residual block that includes residual data generated from a difference between the predictive block, referred to by a vector of the current block, and the current block;
determine a scan order for the first residual block based on the determined direction in which the residual DPCM is applied, wherein the determined scan order is required to be the same scan order as would be used on a second residual block if the second residual block was generated from intra-predicting the current block and if the second residual block had residual DPCM applied with the same direction as the determined direction in which the residual DPCM is applied to the first residual block;
entropy encode the residual data of the first residual block based on the determined scan order;
encode information indicating the determined direction in which residual DPCM is applied; and
output the encoded residual data and the information indicating the determined direction in which residual DPCM is applied. 28. The device of claim 27, wherein the vector of the current block comprises a motion vector, and the first residual block is generated from inter-prediction of the current block. 29. The device of claim 27, wherein the vector of the current block comprises a block vector, and the first residual block is generated from intra block copy prediction of the current block. 30. The device of claim 27, wherein the video encoder is further configured to determine whether a size of the first residual block is less than or equal to a threshold size, and wherein to determine the scan order for the first residual block, the video encoder is configured to determine the scan order for the first residual block based on the determined direction in which the residual DPCM is applied and whether the size of the first residual block is less than or equal to the threshold. 31. The device of claim 30, wherein the threshold size comprises 8×8. 32. A device for decoding video data, the device comprising:
means for decoding information indicating a direction in which residual differential pulse code modulation (DPCM) is applied to a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block, wherein the information indicating the direction comprises information indicating one of a vertical residual DPCM or a horizontal residual DPCM; means for determining a scan order for the first residual block based on the information indicating the direction, wherein the means for determining the scan order for the first residual block comprises:
means for determining a vertical scan, if the information indicated a horizontal residual DPCM and if the size of the first residual block is less than or equal to 8×8; and
means for determining a horizontal scan, if the information indicated a vertical residual DPCM and if the size of the first residual block is less than or equal to 8×8;
means for entropy decoding the residual data of the first residual block based on the determined scan order; and means for reconstructing the current block based on the decoded residual data. 33. A computer-readable storage medium having instructions stored thereon that when executed cause a video encoder for a device for encoding video data to:
determine a direction in which residual differential pulse code modulation (DPCM) is to be applied to a first residual block that includes residual data generated from a difference between a predictive block, referred to by a vector of a current block, and the current block, wherein the determined direction comprises one of a vertical residual DPCM or a horizontal residual DPCM; determine a scan order for the first residual block based on the determined direction in which the residual DPCM is applied, wherein to determine the scan order of the first residual block the instructions cause the video encoder to:
determine a vertical scan, if the determined direction is the horizontal residual DPCM and if the size of the first residual block is less than or equal to 8×8, or
determine a horizontal scan, if the determined direction is the vertical residual DPCM and if the size of the first residual block is less than or equal to 8×8;
entropy encode the residual data of the first residual block based on the determined scan order; encode information indicating the determined direction in which residual DPCM is applied; and output the encoded residual data and the information indicating the determined direction in which residual DPCM is applied. | 2,400 |
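The scan-order rule recited in claims 32-33 above (and claims 4-5) can be summarized in a short sketch. This is an illustrative reading, not the patent's reference implementation: for blocks at or below the 8×8 threshold, a horizontal residual DPCM direction maps to a vertical scan and a vertical direction maps to a horizontal scan, while claim 12's diagonal scan serves as the default when RDPCM is not applied; the fallback for larger blocks is an assumption, since the claims leave that case open.

```python
# Sketch of the scan-order selection described in claims 32-33 and claim 12.
# The large-block fallback to a diagonal scan is an assumption.

def select_scan_order(rdpcm_direction, block_width, block_height, threshold=8):
    """rdpcm_direction: 'horizontal', 'vertical', or None (RDPCM not applied)."""
    if rdpcm_direction is None:
        return "diagonal"          # claim 12: no RDPCM -> diagonal scan
    if block_width <= threshold and block_height <= threshold:
        if rdpcm_direction == "horizontal":
            return "vertical"      # horizontal RDPCM -> vertical scan (claim 32)
        if rdpcm_direction == "vertical":
            return "horizontal"    # vertical RDPCM -> horizontal scan (claim 32)
    return "diagonal"              # larger blocks: assumed default
```

Because the decoder applies the same mapping to the decoded direction, encoder and decoder stay in lockstep on the entropy-coding scan without signaling the scan itself.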
8,681 | 8,681 | 16,109,518 | 2,426 | A method and system for data communication is provided and may include, at a communications terminal, displaying picture objects and/or video objects received from a media center via a communications network. The communications terminal may transmit direction of view information (DoV) of at least one eye of a user of the terminal with respect to the displayed objects from an eye tracker at the communications terminal. The DoV may be determined by detecting a position of a pupil based on light reflected off of the pupil. Subsequent picture objects and/or video objects may be displayed by the communications terminal based on user interests determined from the transmitted DoV information. | 1-20. (canceled) 21. A system for communications, the system comprising
a display unit operable to display visual information to a user according to a level of interest in a location; a retinal scanner operable to determine a direction of view of at least one eye of the user with respect to the display unit by detecting a position of a pupil; a detector operable to measure a duration and a frequency with which the location is viewed by the user, wherein the location is considered viewed based on the direction of view of the at least one eye of the user, and wherein the level of interest in the location is based on a duration of time the location is viewed and a frequency with which the location is viewed; and a control unit operable to capture a visual object at the location according to the level of interest in the location. 22. The system according to claim 21, wherein the system is operable to transmit the direction of view information to a media center. 23. The system according to claim 21, wherein the detector is operable to identify the user. 24. The system according to claim 21, wherein the system comprises a speaker operable to send audio information to the user according to the level of interest in the location. 25. The system according to claim 21, wherein the location is associated with the visual object. 26. The system according to claim 25, wherein an identification of the visual object determines the visual information. 27. The system according to claim 21, wherein the retinal scanner is operable to communicate wirelessly with the detector. 28. The system according to claim 21, wherein the control unit is operable to communicate wirelessly with the detector. 29. A system for communication, the system comprising:
a retinal scanner operable to determine a direction of view of a user by detecting a position of a pupil; a position detector operable to determine a head position of the user, the head position being determined as a horizontal angle and a vertical angle relative to another body part of the user; a control unit operable to determine a level of interest, wherein the level of interest is determined according to a change in the direction of view of the user and a change in the head position of the user; and a communication device operable to capture a visual object at the location according to the level of interest in the location, and wherein the communication device is operable to send audio and visual information to the user according to the level of interest. 30. The system according to claim 29, wherein the control unit is operable to transmit the direction of view information to a media center. 31. The system according to claim 29, wherein the detector is operable to identify the user. 32. The system according to claim 29, wherein the level of interest is associated with a location. 33. The system according to claim 29, wherein the level of interest is associated with the visual object. 34. The system according to claim 33, wherein an identification of the visual object determines the audio and visual information. 35. The system according to claim 29, wherein the retinal scanner is operable to communicate wirelessly with the position detector. 36. The system according to claim 29, wherein the control unit is operable to communicate wirelessly with the position detector and the retinal scanner. 37. A system for wireless communication, the system comprising:
a communication device operable to display visual information associated with a command; a retinal scanner operable to determine a direction of view of a user of a mobile device by detecting a position of a pupil; a detector operable to measure a duration and a frequency with which a visual object is viewed by a user, wherein the visual object is considered viewed based on a location of the visual object with respect to the direction of view of at least one eye of the user; and a control unit operable to select the command according to the duration and the frequency with which the visual object is viewed by the user. 38. The system according to claim 37, wherein the control unit is operable to transmit the duration and the frequency with which the visual object is viewed to a media center. 39. The system according to claim 37, wherein the retinal scanner is operable to identify the user. 40. The system according to claim 37, wherein the control unit is operable to communicate wirelessly with the detector. | A method and system for data communication is provided and may include, at a communications terminal, displaying picture objects and/or video objects received from a media center via a communications network. The communications terminal may transmit direction of view information (DoV) of at least one eye of a user of the terminal with respect to the displayed objects from an eye tracker at the communications terminal. The DoV may be determined by detecting a position of a pupil based on light reflected off of the pupil. Subsequent picture objects and/or video objects may be displayed by the communications terminal based on user interests determined from the transmitted DoV information.1-20. (canceled) 21. A system for communications, the system comprising
a display unit operable to display visual information to a user according to a level of interest in a location; a retinal scanner operable to determine a direction of view of at least one eye of the user with respect to the display unit by detecting a position of a pupil; a detector operable to measure a duration and a frequency with which the location is viewed by the user, wherein the location is considered viewed based on the direction of view of the at least one eye of the user, and wherein the level of interest in the location is based on a duration of time the location is viewed and a frequency with which the location is viewed; and a control unit operable to capture a visual object at the location according to the level of interest in the location. 22. The system according to claim 21, wherein the system is operable to transmit the direction of view information to a media center. 23. The system according to claim 21, wherein the detector is operable to identify the user. 24. The system according to claim 21, wherein the system comprises a speaker operable to send audio information to the user according to the level of interest in the location. 25. The system according to claim 21, wherein the location is associated with the visual object. 26. The system according to claim 25, wherein an identification of the visual object determines the visual information. 27. The system according to claim 21, wherein the retinal scanner is operable to communicate wirelessly with the detector. 28. The system according to claim 21, wherein the control unit is operable to communicate wirelessly with the detector. 29. A system for communication, the system comprising:
a retinal scanner operable to determine a direction of view of a user by detecting a position of a pupil; a position detector operable to determine a head position of the user, the head position being determined as a horizontal angle and a vertical angle relative to another body part of the user; a control unit operable to determine a level of interest, wherein the level of interest is determined according to a change in the direction of view of the user and a change in the head position of the user; and a communication device operable to capture a visual object at the location according to the level of interest in the location, and wherein the communication device is operable to send audio and visual information to the user according to the level of interest. 30. The system according to claim 29, wherein the control unit is operable to transmit the direction of view information to a media center. 31. The system according to claim 29, wherein the detector is operable to identify the user. 32. The system according to claim 29, wherein the level of interest is associated with a location. 33. The system according to claim 29, wherein the level of interest is associated with the visual object. 34. The system according to claim 33, wherein an identification of the visual object determines the audio and visual information. 35. The system according to claim 29, wherein the retinal scanner is operable to communicate wirelessly with the position detector. 36. The system according to claim 29, wherein the control unit is operable to communicate wirelessly with the position detector and the retinal scanner. 37. A system for wireless communication, the system comprising:
a communication device operable to display visual information associated with a command; a retinal scanner operable to determine a direction of view of a user of a mobile device by detecting a position of a pupil; a detector operable to measure a duration and a frequency with which a visual object is viewed by a user, wherein the visual object is considered viewed based on a location of the visual object with respect to the direction of view of at least one eye of the user; and a control unit operable to select the command according to the duration and the frequency with which the visual object is viewed by the user. 38. The system according to claim 37, wherein the control unit is operable to transmit the duration and the frequency with which the visual object is viewed to a media center. 39. The system according to claim 37, wherein the retinal scanner is operable to identify the user. 40. The system according to claim 37, wherein the control unit is operable to communicate wirelessly with the detector. | 2,400 |
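Claim 21 of the record above bases the level of interest in a location on both the duration of time it is viewed and the frequency with which it is viewed. A minimal sketch of one way to combine those two measurements follows; the function name, the linear weighting, and the interval representation are all assumptions, since the claims do not specify a formula:

```python
# Hypothetical level-of-interest score per claim 21: interest grows with both
# total dwell time on a location and how often the gaze returns to it.
# The linear combination and weights are illustrative assumptions.

def level_of_interest(view_intervals, duration_weight=1.0, frequency_weight=1.0):
    """view_intervals: list of (start, end) times the gaze dwelt on the location,
    as reported by the retinal scanner and detector."""
    total_duration = sum(end - start for start, end in view_intervals)
    frequency = len(view_intervals)
    return duration_weight * total_duration + frequency_weight * frequency
```

Under this scoring, many brief glances and a single long stare can both raise a location's score, which matches the claim's requirement that both duration and frequency contribute.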
8,682 | 8,682 | 15,804,216 | 2,442 | By way of example, a method, apparatus, system, and software are described for using a previously-identified location within content, such as a splash screen indicating a transition between main program content and a commercial segment, to skip to a subsequent location in the content, such as a location at which a subsequent appearance of the splash screen is detected. This may allow for an at least partially automated recognition-based content skipping feature. | 1-20. (canceled) 21. A method comprising:
receiving a first command by a first user, wherein the first command is associated with a first video frame of video content; transmitting the video content to a device that is associated with a second user; and in response to a second command by the second user, wherein the second command is associated with a second video frame of the video content and is initiated during the transmitting of the video content:
determining, by at least one computer, a reference image that is based on the first video frame and the second video frame,
comparing video frames of the video content with the reference image;
determining one of the video frames based on the reference image;
skipping forward to the one of the video frames; and
resuming transmission of the video content to the device. | By way of example, a method, apparatus, system, and software are described for using a previously-identified location within content, such as a splash screen indicating a transition between main program content and a commercial segment, to skip to a subsequent location in the content, such as a location at which a subsequent appearance of the splash screen is detected. This may allow for an at least partially automated recognition-based content skipping feature.1-20. (canceled) 21. A method comprising:
receiving a first command by a first user, wherein the first command is associated with a first video frame of video content; transmitting the video content to a device that is associated with a second user; and in response to a second command by the second user, wherein the second command is associated with a second video frame of the video content and is initiated during the transmitting of the video content:
determining, by at least one computer, a reference image that is based on the first video frame and the second video frame,
comparing video frames of the video content with the reference image;
determining one of the video frames based on the reference image;
skipping forward to the one of the video frames; and
resuming transmission of the video content to the device. | 2,400 |
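The recognition-based skip in claim 21 above compares video frames against a reference image derived from the marked frames and jumps forward to the first match. The sketch below illustrates that scan with a simple mean-absolute-difference metric over flat pixel lists; the function names, the metric, and the threshold are assumptions, not the patent's method:

```python
# Illustrative forward scan for the next frame matching a reference image
# (e.g., a splash screen marking a commercial boundary). The distance metric
# and threshold are assumed for the sketch; frames are flat pixel lists.

def frame_distance(a, b):
    """Mean absolute pixel difference between two equal-sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def find_skip_target(frames, start_index, reference, threshold=10.0):
    """Index of the first frame at or after start_index that matches the
    reference image, or None if no later frame matches."""
    for i in range(start_index, len(frames)):
        if frame_distance(frames[i], reference) <= threshold:
            return i
    return None
```

Transmission would then resume from the returned index, skipping the segment between the current frame and the next appearance of the reference image.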
8,683 | 8,683 | 15,716,110 | 2,413 | Residence time is a variable part of the propagation delay of the packet. Information about the propagation delay for each transient node can be used as performance metric to calculate the Traffic Engineered route that can conform to delay and delay variation requirements. In an exemplary embodiment, a computing device uses special test packets to measure residence time. The computing device calculates routes to direct special test packets to one or more nodes. A node may calculate the residence time metric, such as a residence time variation (RTV), or residence time (RT) per ordered set of ingress and egress interfaces of the node. The computing device may also collect the residence time metric per test set from each node and may use this information to calculate the Test Engineered route. | 1. A method of packet communication, comprising:
receiving, by a plurality of nodes, one or more test packets configured to traverse the plurality of nodes according to a list; identifying, by each of the plurality of nodes, information indicative of a requested residence time measurement for the one or more test packets; reading, by each of the plurality of nodes, a first clock value as each test packet is passed thereto; reading, by each of the plurality of nodes, a second clock value as each test packet is passed therefrom; and calculating the residence time of each test packet as a function of the first clock value and the second clock value. 2. The method of claim 1, wherein the calculating of the residence time of each test packet further includes calculating by subtracting the first clock value from the second clock value. 3. The method of claim 1, wherein the calculating of the residence time is performed by the plurality of nodes. 4. The method of claim 3, further comprising:
transmitting, to a computing device, a residence time metrics that includes the calculated residence time. 5. The method of claim 1, further comprising:
calculating, for each node, a minimum, a maximum, and a mean of the calculated residence time; calculating, for each node, a residence time variation for the one or more test packets by subtracting the minimum from the calculated residence time for one of the test packets; and transmitting, to a computing device, the residence time metrics that includes the minimum, the maximum, the mean, and the residence time variation. 6. The method of claim 5, wherein the calculating of any one or more of the minimum, the maximum, the mean, and the residence time variation residence time is performed by each node. 7. The method of claim 1, wherein the list of the plurality of nodes to be traversed by the one or more test packets is received from a computing device. 8. The method of claim 7, wherein the list includes a plurality of node segment identifications (SIDs) in an order to be traversed by one or more test packets. 9. The method of claim 8, wherein the list further comprises one or more adjacency segment identifications (Adj-SIDs). 10. The method of claim 7, wherein the list includes an Explicit Route Object (ERO) that includes internet protocol addresses of the plurality of nodes. 11. The method of claim 1, wherein the list is a Multi-Protocol Label Switching (MPLS) label stack. 12. The method of claim 1, wherein the one or more test packets is encapsulated in a header, and includes the list. 13. A packet communication apparatus comprising:
at least one processor and a memory having instructions stored thereon, wherein the instructions upon execution by the at least one processor configures the packet communication apparatus to: receive one or more test packets configured to traverse the plurality of nodes according to a list; identify information indicative of a requested residence time measurement for the one or more test packets; read a first clock value as each test packet is passed thereto; read a second clock value as each test packet is passed therefrom; and calculate the residence time of each test packet as a function of the first clock value and the second clock value. 14. The packet communication apparatus of claim 13, wherein the at least one processor configures the packet communication apparatus to calculate the residence time of each test packet by subtracting the first clock value from the second clock value. 15. The packet communication apparatus of claim 13, wherein the at least one processor further configures the packet communication apparatus to:
calculate, for a node, a minimum, a maximum, and a mean of the calculated residence time; calculate, for the node, a residence time variation for the one or more test packets by subtracting the minimum from the calculated residence time for one of the test packets; and transmit, to a computing device, a residence time metrics that includes the calculated residence time of each test packet, the minimum, the maximum, the mean, and the residence time variation. 16. The packet communication apparatus of claim 13, wherein the list includes a plurality of node segment identifications (SIDs) in an order to be traversed by one or more test packets. 17. The packet communication apparatus of claim 16, wherein the list further comprises one or more adjacency segment identifications (Adj-SIDs). 18. The packet communication apparatus of claim 13, wherein the list includes an Explicit Route Object (ERO) that includes internet protocol addresses of the plurality of nodes. 19. A computer platform configured to generate the list in claim 13. 20. A computer program product comprising a computer-readable medium having code stored thereon, the code, when executed by a processor, causing the processor to implement a method, the code comprising:
instruction for receiving one or more test packets configured to traverse the plurality of nodes according to a list; instruction for identifying information indicative of a requested residence time measurement for the one or more test packets; instruction for reading a first clock value as each test packet is passed thereto; instruction for reading a second clock value as each test packet is passed therefrom; and instruction for calculating the residence time of each test packet as a function of the first clock value and the second clock value. 21. The computer program product of claim 20, wherein the instruction for calculating of the residence time of each test packet further includes instruction for calculating by subtracting the first clock value from the second clock value. 22. The computer program product of claim 20, further comprising:
instruction for calculating, for a node, a minimum, a maximum, and a mean of the calculated residence time; instruction for calculating, for the node, a residence time variation for the one or more test packets by subtracting the minimum from the calculated residence time for one of the test packets; and instruction for transmitting, to a computing device, a residence time metrics that includes the calculated residence time of each test packet, the minimum, the maximum, the mean, and the residence time variation. 23. The computer program product of claim 20, wherein the list includes a plurality of node segment identifications (SIDs) in an order to be traversed by one or more test packets. 24. The computer program product of claim 23, wherein the list further comprises one or more adjacency segment identifications (Adj-SIDs). 25. The computer program product of claim 20, wherein the list includes an Explicit Route Object (ERO) that includes internet protocol addresses of the plurality of nodes. | Residence time is a variable part of the propagation delay of the packet. Information about the propagation delay for each transient node can be used as performance metric to calculate the Traffic Engineered route that can conform to delay and delay variation requirements. In an exemplary embodiment, a computing device uses special test packets to measure residence time. The computing device calculates routes to direct special test packets to one or more nodes. A node may calculate the residence time metric, such as a residence time variation (RTV), or residence time (RT) per ordered set of ingress and egress interfaces of the node. The computing device may also collect the residence time metric per test set from each node and may use this information to calculate the Test Engineered route.1. A method of packet communication, comprising:
receiving, by a plurality of nodes, one or more test packets configured to traverse the plurality of nodes according to a list; identifying, by each of the plurality of nodes, information indicative of a requested residence time measurement for the one or more test packets; reading, by each of the plurality of nodes, a first clock value as each test packet is passed thereto; reading, by each of the plurality of nodes, a second clock value as each test packet is passed therefrom; and calculating the residence time of each test packet as a function of the first clock value and the second clock value. 2. The method of claim 1, wherein the calculating of the residence time of each test packet further includes calculating by subtracting the first clock value from the second clock value. 3. The method of claim 1, wherein the calculating of the residence time is performed by the plurality of nodes. 4. The method of claim 3, further comprising:
transmitting, to a computing device, a residence time metrics that includes the calculated residence time. 5. The method of claim 1, further comprising:
calculating, for each node, a minimum, a maximum, and a mean of the calculated residence time; calculating, for each node, a residence time variation for the one or more test packets by subtracting the minimum from the calculated residence time for one of the test packets; and transmitting, to a computing device, the residence time metrics that includes the minimum, the maximum, the mean, and the residence time variation. 6. The method of claim 5, wherein the calculating of any one or more of the minimum, the maximum, the mean, and the residence time variation is performed by each node. 7. The method of claim 1, wherein the list of the plurality of nodes to be traversed by the one or more test packets is received from a computing device. 8. The method of claim 7, wherein the list includes a plurality of node segment identifications (SIDs) in an order to be traversed by one or more test packets. 9. The method of claim 8, wherein the list further comprises one or more adjacency segment identifications (Adj-SIDs). 10. The method of claim 7, wherein the list includes an Explicit Route Object (ERO) that includes internet protocol addresses of the plurality of nodes. 11. The method of claim 1, wherein the list is a Multi-Protocol Label Switching (MPLS) label stack. 12. The method of claim 1, wherein the one or more test packets is encapsulated in a header, and includes the list. 13. A packet communication apparatus comprising:
at least one processor and a memory having instructions stored thereon, wherein the instructions upon execution by the at least one processor configures the packet communication apparatus to: receive one or more test packets configured to traverse the plurality of nodes according to a list; identify information indicative of a requested residence time measurement for the one or more test packets; read a first clock value as each test packet is passed thereto; read a second clock value as each test packet is passed therefrom; and calculate the residence time of each test packet as a function of the first clock value and the second clock value. 14. The packet communication apparatus of claim 13, wherein the at least one processor configures the packet communication apparatus to calculate the residence time of each test packet by subtracting the first clock value from the second clock value. 15. The packet communication apparatus of claim 13, wherein the at least one processor further configures the packet communication apparatus to:
calculate, for a node, a minimum, a maximum, and a mean of the calculated residence time; calculate, for the node, a residence time variation for the one or more test packets by subtracting the minimum from the calculated residence time for one of the test packets; and transmit, to a computing device, a residence time metrics that includes the calculated residence time of each test packet, the minimum, the maximum, the mean, and the residence time variation. 16. The packet communication apparatus of claim 13, wherein the list includes a plurality of node segment identifications (SIDs) in an order to be traversed by one or more test packets. 17. The packet communication apparatus of claim 16, wherein the list further comprises one or more adjacency segment identifications (Adj-SIDs). 18. The packet communication apparatus of claim 13, wherein the list includes an Explicit Route Object (ERO) that includes internet protocol addresses of the plurality of nodes. 19. A computer platform configured to generate the list in claim 13. 20. A computer program product comprising a computer-readable medium having code stored thereon, the code, when executed by a processor, causing the processor to implement a method, the code comprising:
instruction for receiving one or more test packets configured to traverse the plurality of nodes according to a list; instruction for identifying information indicative of a requested residence time measurement for the one or more test packets; instruction for reading a first clock value as each test packet is passed thereto; instruction for reading a second clock value as each test packet is passed therefrom; and instruction for calculating the residence time of each test packet as a function of the first clock value and the second clock value. 21. The computer program product of claim 20, wherein the instruction for calculating of the residence time of each test packet further includes instruction for calculating by subtracting the first clock value from the second clock value. 22. The computer program product of claim 20, further comprising:
instruction for calculating, for a node, a minimum, a maximum, and a mean of the calculated residence time; instruction for calculating, for the node, a residence time variation for the one or more test packets by subtracting the minimum from the calculated residence time for one of the test packets; and instruction for transmitting, to a computing device, a residence time metrics that includes the calculated residence time of each test packet, the minimum, the maximum, the mean, and the residence time variation. 23. The computer program product of claim 20, wherein the list includes a plurality of node segment identifications (SIDs) in an order to be traversed by one or more test packets. 24. The computer program product of claim 23, wherein the list further comprises one or more adjacency segment identifications (Adj-SIDs). 25. The computer program product of claim 20, wherein the list includes an Explicit Route Object (ERO) that includes internet protocol addresses of the plurality of nodes. | 2,400 |
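The residence-time arithmetic recited in claims 1-5 above (residence time as second clock value minus first clock value; per-node minimum, maximum, and mean; residence time variation as a packet's residence time minus the minimum) can be sketched as follows. This is an illustrative sketch only: the function name, the clock-pair input shape, and the returned dictionary layout are assumptions, not taken from the patent text.

```python
from statistics import mean

def residence_time_metrics(clock_pairs):
    """Per-node residence time metrics, following claims 1-5.

    clock_pairs: list of (first_clock, second_clock) tuples, one per test
    packet; the first clock value is read as the packet is passed to the
    node and the second as it is passed from the node.
    """
    # Claim 2: residence time of each test packet is the second clock
    # value minus the first clock value.
    rts = [second - first for first, second in clock_pairs]
    rt_min, rt_max, rt_mean = min(rts), max(rts), mean(rts)
    # Claim 5: residence time variation is a packet's residence time
    # with the node's minimum residence time subtracted.
    rtv = [rt - rt_min for rt in rts]
    return {"min": rt_min, "max": rt_max, "mean": rt_mean, "rtv": rtv}

# Clock readings for three test packets traversing one node:
metrics = residence_time_metrics([(10, 14), (20, 26), (30, 33)])
# residence times 4, 6, 3 -> min 3, max 6, rtv [1, 3, 0]
```

These per-node metrics are what the claims describe being transmitted to the computing device that builds the engineered route.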
8,684 | 8,684 | 15,052,514 | 2,492 | Techniques for the secure generation of a set of encryption keys to be used for communication between a wireless terminal and an assisting base station in a dual-connectivity scenario. An example method includes generating an assisting security key for the assisting base station, based on an anchor base station key. The generated assisting security key is sent to the assisting base station, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal while the wireless terminal is dually connected to the anchor base station and the assisting base station. The anchor base station key, or a key derived from the anchor base station key, is used for encrypting data sent to the wireless terminal by the anchor base station. | 1. A method, in a network node, for security key generation for secured communications between a wireless terminal and an anchor base station and between the wireless terminal and an assisting base station, wherein the wireless terminal is or is about to be dually connected to the anchor base station and the assisting base station, the method comprising:
generating an assisting security key for the assisting base station, based, at least in part, on an anchor base station key; sending, to the assisting base station, the generated assisting security key, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station while the wireless terminal is dually connected to the anchor base station and the assisting base station; and using the anchor base station key, or a key derived from the anchor base station key, for encrypting data sent to the wireless terminal by the anchor base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 2. The method of claim 1, wherein the generated assisting security key comprises a base assisting security key for use in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station. 3. The method of claim 2, wherein using the anchor base station key comprises deriving an encryption key, or an integrity key, or both, from the anchor base station key, and using the derived key or keys for protecting data sent to the wireless terminal by the anchor base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 4. A method, in a network node, for security key generation for secured communications between a wireless terminal and an anchor base station and between the wireless terminal and an assisting base station, wherein the wireless terminal is or is about to be dually connected to the anchor base station and the assisting base station, the method comprising:
sharing a primary security key with the wireless terminal; generating an assisting security key for the assisting base station, based, at least in part, on the primary security key; sending, to the assisting base station, the generated assisting security key, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 5. The method of claim 4, wherein the generated assisting security key comprises a base assisting security key for use in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station. 6. A network node for security key generation for secured communications between a wireless terminal and an assisting base station, wherein the wireless terminal is, or is about to be, dually connected to the anchor base station and the assisting base station, the network node comprising interface circuitry configured to communicate with the assisting base station and further comprising processing circuitry, characterized in that the processing circuitry is configured to:
generate an assisting security key for the assisting base station, based, at least in part, on an anchor base station key; send to the assisting base station, using the interface circuitry, the generated assisting security key, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station while the wireless terminal is dually connected to the anchor base station and the assisting base station; and use the anchor base station key, or a key derived from the anchor base station key, for encrypting data sent to the wireless terminal by the anchor base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 7. The network node of claim 6, wherein the generated assisting security key comprises a base assisting security key for use in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station. 8. The network node of claim 7, wherein the processing circuitry is configured to use the anchor base station key to derive an encryption key, or an integrity key, or both, from the anchor base station key, and to use the derived key or keys for protecting data sent to the wireless terminal by the anchor base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 9. 
A network node for security key generation for secured communications between a wireless terminal and an assisting base station, wherein the wireless terminal is, or is about to be, dually connected to the anchor base station and the assisting base station, the network node comprising interface circuitry configured to communicate with the assisting base station and further comprising processing circuitry, characterized in that the processing circuitry is configured to:
share a primary security key with the wireless terminal; generate an assisting security key for the assisting base station, based, at least in part, on the primary security key; and send to the assisting base station, via the interface circuitry, the generated assisting security key, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 10. The network node of claim 9, wherein the generated assisting security key comprises a base assisting security key for use in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station. | Techniques for the secure generation of a set of encryption keys to be used for communication between a wireless terminal and an assisting base station in a dual-connectivity scenario. An example method includes generating an assisting security key for the assisting base station, based on an anchor base station key. The generated assisting security key is sent to the assisting base station, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal while the wireless terminal is dually connected to the anchor base station and the assisting base station. The anchor base station key, or a key derived from the anchor base station key, is used for encrypting data sent to the wireless terminal by the anchor base station.1. 
A method, in a network node, for security key generation for secured communications between a wireless terminal and an anchor base station and between the wireless terminal and an assisting base station, wherein the wireless terminal is or is about to be dually connected to the anchor base station and the assisting base station, the method comprising:
generating an assisting security key for the assisting base station, based, at least in part, on an anchor base station key; sending, to the assisting base station, the generated assisting security key, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station while the wireless terminal is dually connected to the anchor base station and the assisting base station; and using the anchor base station key, or a key derived from the anchor base station key, for encrypting data sent to the wireless terminal by the anchor base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 2. The method of claim 1, wherein the generated assisting security key comprises a base assisting security key for use in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station. 3. The method of claim 2, wherein using the anchor base station key comprises deriving an encryption key, or an integrity key, or both, from the anchor base station key, and using the derived key or keys for protecting data sent to the wireless terminal by the anchor base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 4. A method, in a network node, for security key generation for secured communications between a wireless terminal and an anchor base station and between the wireless terminal and an assisting base station, wherein the wireless terminal is or is about to be dually connected to the anchor base station and the assisting base station, the method comprising:
sharing a primary security key with the wireless terminal; generating an assisting security key for the assisting base station, based, at least in part, on the primary security key; sending, to the assisting base station, the generated assisting security key, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 5. The method of claim 4, wherein the generated assisting security key comprises a base assisting security key for use in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station. 6. A network node for security key generation for secured communications between a wireless terminal and an assisting base station, wherein the wireless terminal is, or is about to be, dually connected to the anchor base station and the assisting base station, the network node comprising interface circuitry configured to communicate with the assisting base station and further comprising processing circuitry, characterized in that the processing circuitry is configured to:
generate an assisting security key for the assisting base station, based, at least in part, on an anchor base station key; send to the assisting base station, using the interface circuitry, the generated assisting security key, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station while the wireless terminal is dually connected to the anchor base station and the assisting base station; and use the anchor base station key, or a key derived from the anchor base station key, for encrypting data sent to the wireless terminal by the anchor base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 7. The network node of claim 6, wherein the generated assisting security key comprises a base assisting security key for use in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station. 8. The network node of claim 7, wherein the processing circuitry is configured to use the anchor base station key to derive an encryption key, or an integrity key, or both, from the anchor base station key, and to use the derived key or keys for protecting data sent to the wireless terminal by the anchor base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 9. 
A network node for security key generation for secured communications between a wireless terminal and an assisting base station, wherein the wireless terminal is, or is about to be, dually connected to the anchor base station and the assisting base station, the network node comprising interface circuitry configured to communicate with the assisting base station and further comprising processing circuitry, characterized in that the processing circuitry is configured to:
share a primary security key with the wireless terminal; generate an assisting security key for the assisting base station, based, at least in part, on the primary security key; and send to the assisting base station, via the interface circuitry, the generated assisting security key, for use by the assisting base station in encrypting data traffic sent to the wireless terminal or in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station while the wireless terminal is dually connected to the anchor base station and the assisting base station. 10. The network node of claim 9, wherein the generated assisting security key comprises a base assisting security key for use in generating one or more additional assisting security keys for encrypting data traffic sent to the wireless terminal by the assisting base station. | 2,400 |
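The key hierarchy in the claims above only requires that the assisting security key be generated "based, at least in part, on an anchor base station key," and that it may serve as a base key for further traffic keys. The sketch below illustrates one such derivation chain; the claims do not specify a key derivation function, so the HMAC-SHA-256 construction, the context labels, and the function name here are all illustrative assumptions.

```python
import hashlib
import hmac

def derive_assisting_key(anchor_key: bytes, context: bytes) -> bytes:
    """Derive an assisting-base-station key from the anchor base station key.

    HMAC-SHA-256 keyed by the anchor key over a context string is an
    assumed KDF; the patent text mandates only that the derivation be
    based at least in part on the anchor base station key.
    """
    return hmac.new(anchor_key, b"assisting-key|" + context, hashlib.sha256).digest()

anchor_key = bytes(32)  # placeholder 256-bit anchor base station key
# Base assisting security key sent to the assisting base station.
base_assisting_key = derive_assisting_key(anchor_key, b"cell-42")
# Per claims 2 and 5, the base assisting key can seed additional
# assisting security keys used to encrypt traffic to the terminal.
traffic_key = hmac.new(base_assisting_key, b"enc", hashlib.sha256).digest()
```

Because the derivation is deterministic in the anchor key and context, the wireless terminal holding the same anchor (or primary) key can compute the identical assisting keys on its side.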
8,685 | 8,685 | 15,444,350 | 2,492 | Mechanisms to protect the integrity of memory of a virtual machine are provided. The mechanisms involve utilizing certain capabilities of the hypervisor underlying the virtual machine to monitor writes to memory pages of the virtual machine. A guest integrity driver communicates with the hypervisor to request such functionality. Additional protections are provided for protecting the guest integrity driver and associated data, as well as for preventing use of these mechanisms by malicious software. These additional protections include an elevated execution mode, termed “integrity mode,” which can only be entered from a specified entry point, as well as protections on the memory pages that store the guest integrity driver and associated data. | 1. A method for protecting memory of a virtual computing instance executing within a host computer, the method comprising:
receiving a first request, from a component executing within the virtual computing instance, to protect a first memory page; determining, by examining a privilege mode of the virtual computing instance, that the component is permitted to make the first request; protecting the first memory page by updating a data structure that tracks protected memory pages thereby defining a first protected memory page; detecting a write to the first protected memory page; responsive to the detecting, identifying an alert action for the first protected memory page; and performing the alert action. 2. The method of claim 1, further comprising:
receiving a second request, from the virtual computing instance, to remove protection from a second memory page; determining, by examining the privilege mode of the virtual computing instance, that the component is not permitted to make the first request; and responsive to the determining, performing an alert action instead of executing the second request. 3. The method of claim 1, wherein protecting the first memory page further comprises:
using a trace service to install a trace on the first memory page, wherein the trace service transmits a notification to an abstraction layer supporting execution of the virtual computing instance, upon detecting the write to the first memory page. 4. The method of claim 1, wherein:
the first memory page stores instructions of the component. 5. The method of claim 1, wherein:
the first memory page is stored within a memory space assigned to the virtual computing instance; and the first memory page stores at least one of:
at least a portion of the data structure that tracks protected memory pages, and
write notifications for retrieval by the component. 6. The method of claim 1, further comprising:
receiving an indication from a trace service that the first memory page has been written to; responsive to receiving the indication, looking up an alert action associated with the memory page; and performing the alert action, wherein the alert action comprises one of: transmitting a notification to the component that the first memory page has been written to, suspending the virtual computing instance, and sending a message to a security monitor that is external to the virtual computing instance. 7. The method of claim 1, further comprising:
receiving a second request, from the component, that defines a message that is stored in a memory space assigned to the virtual computing instance and that is to be transmitted to a security manager that is external to the virtual computing instance; determining, by examining the privilege mode of the virtual computing instance, that the component is permitted to make the second request; and executing the second request by reading the message from the memory space assigned to the virtual computing instance and transmitting the message to the security manager that is external to the virtual computing instance. 8. The method of claim 1, further comprising:
receiving a second request, from the component, to enter the privilege mode; determining that the second request is made from a registered entry point; and causing the virtual computing instance to enter the privilege mode. 9. The method of claim 8, further comprising:
receiving a third request to register the entry point; determining that the entry point has not yet been set; and executing the third request to set the entry point. 10. The method of claim 1, wherein:
the alert action comprises at least one of:
transmitting a write notification indicating that the first protected memory page was written to, to the component,
suspending execution of the virtual computing instance, and
transmitting a write notification indicating that the first protected memory page was written to, to a security manager that is external to the virtual computing instance. 11. A system for protecting memory of a virtual computing instance, the system comprising:
a host computer; an abstraction layer executing within the host computer; the virtual computing instance, supported by the abstraction layer; and a component executing within the virtual computing instance, wherein the abstraction layer is configured to:
receive a first request, from the component, to protect a first memory page,
determine, by examining a privilege mode of the component, that the component is permitted to make the first request,
protect the first memory page by updating a data structure that tracks protected memory pages thereby defining a first protected memory page,
detect a write to the first protected memory page,
responsive to the detecting, identify an alert action for the first protected memory page, and
perform the alert action. 12. The system of claim 11, wherein the abstraction layer is further configured to:
receive a second request, from the virtual computing instance, to remove protection from a second memory page; determine, by examining the privilege mode of the virtual computing instance, that the virtual computing instance is not permitted to make the first request; and responsive to the determining, perform an alert action instead of executing the second request. 13. The system of claim 11, wherein the abstraction layer is configured to protect the first memory page by:
using a trace service to install a trace on the first memory page, wherein the trace service transmits a notification to the abstraction layer upon detecting the write to the first memory page. 14. The system of claim 11, wherein:
the first memory page stores instructions of the component. 15. The system of claim 11, wherein:
the first memory page is stored within a memory space assigned to the virtual computing instance; and the first memory page stores at least one of:
at least a portion of the data structure that tracks protected memory pages, and
write notifications for retrieval by the component. 16. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform a method for protecting memory of a virtual computing instance executing within a host computer, the method comprising:
receiving a first request, from a component executing within the virtual computing instance, to protect a first memory page; determining, by examining a privilege mode of the virtual computing instance, that the component is permitted to make the first request; protecting the first memory page by updating a data structure that tracks protected memory pages thereby defining a first protected memory page; detecting a write to the first protected memory page; responsive to the detecting, identifying an alert action for the first protected memory page; and performing the alert action. 17. The non-transitory computer-readable medium of claim 16, wherein the method further comprises:
receiving a second request, from the virtual computing instance, to remove protection from a second memory page; determining, by examining the privilege mode of the virtual computing instance, that the component is not permitted to make the first request; and responsive to the determining, performing an alert action instead of executing the second request. 18. The non-transitory computer-readable medium of claim 16, wherein protecting the first memory further comprises:
using a trace service to install a trace on the first memory page, wherein the trace service transmits a notification to an abstraction layer supporting execution of the virtual computing instance, upon detecting the write to the first memory page. 19. The non-transitory computer-readable medium of claim 16, wherein:
the first memory page stores instructions of the component. 20. The non-transitory computer-readable medium of claim 16, wherein:
the first memory page is stored within a memory space assigned to the virtual computing instance; and the first memory page stores at least one of:
at least a portion of the data structure that tracks protected memory pages, and
write notifications for retrieval by the component. | Mechanisms to protect the integrity of memory of a virtual machine are provided. The mechanisms involve utilizing certain capabilities of the hypervisor underlying the virtual machine to monitor writes to memory pages of the virtual machine. A guest integrity driver communicates with the hypervisor to request such functionality. Additional protections are provided for protecting the guest integrity driver and associated data, as well as for preventing use of these mechanisms by malicious software. These additional protections include an elevated execution mode, termed “integrity mode,” which can only be entered from a specified entry point, as well as protections on the memory pages that store the guest integrity driver and associated data.1. A method for protecting memory of a virtual computing instance executing within a host computer, the method comprising:
receiving a first request, from a component executing within the virtual computing instance, to protect a first memory page; determining, by examining a privilege mode of the virtual computing instance, that the component is permitted to make the first request; protecting the first memory page by updating a data structure that tracks protected memory pages thereby defining a first protected memory page; detecting a write to the first protected memory page; responsive to the detecting, identifying an alert action for the first protected memory page; and performing the alert action. 2. The method of claim 1, further comprising:
receiving a second request, from the virtual computing instance, to remove protection from a second memory page; determining, by examining the privilege mode of the virtual computing instance, that the component is not permitted to make the first request; and responsive to the determining, performing an alert action instead of executing the second request. 3. The method of claim 1, wherein protecting the first memory further comprises:
using a trace service to install a trace on the first memory page, wherein the trace service transmits a notification to an abstraction layer supporting execution of the virtual computing instance, upon detecting the write to the first memory page. 4. The method of claim 1, wherein:
the first memory page stores instructions of the component. 5. The method of claim 1, wherein:
the first memory page is stored within a memory space assigned to the virtual computing instance; and the first memory page stores at least one of:
at least a portion of the data structure that tracks protected memory pages, and
write notifications for retrieval by the component. 6. The method of claim 1, further comprising:
receiving an indication from a trace service that the first memory page has been written to; responsive to receiving the indication, looking up an alert action associated with the memory page; and performing the alert action, wherein the alert action comprises one of: transmitting a notification to the component that the first memory page has been written to, suspending the virtual computing instance, and sending a message to a security monitor that is external to the virtual computing instance. 7. The method of claim 1, further comprising:
receiving a second request, from the component, that defines a message that is stored in a memory space assigned to the virtual computing instance and that is to be transmitted to a security manager that is external to the virtual computing instance; determining, by examining the privilege mode of the virtual computing instance, that the component is permitted to make the second request; and executing the second request by reading the message from the memory space assigned to the virtual computing instance and transmitting the message to the security manager that is external to the virtual computing instance. 8. The method of claim 1, further comprising:
receiving a second request, from the component, to enter the privilege mode; determining that the second request is made from a registered entry point; and causing the virtual computing instance to enter the privilege mode. 9. The method of claim 8, further comprising:
receiving a third request to register the entry point; determining that the entry point has not yet been set; and executing the third request to set the entry point. 10. The method of claim 1, wherein:
the alert action comprises at least one of:
transmitting a write notification indicating that the first protected memory page was written to, to the component,
suspending execution of the virtual computing instance, and
transmitting a write notification indicating that the first protected memory page was written to, to a security manager that is external to the virtual computing instance. 11. A system for protecting memory of a virtual computing instance, the system comprising:
a host computer; an abstraction layer executing within the host computer; the virtual computing instance, supported by the abstraction layer; and a component executing within the virtual computing instance, wherein the abstraction layer is configured to:
receive a first request, from the component, to protect a first memory page,
determine, by examining a privilege mode of the component, that the component is permitted to make the first request,
protect the first memory page by updating a data structure that tracks protected memory pages, thereby defining a first protected memory page,
detect a write to the first protected memory page,
responsive to the detecting, identify an alert action for the first protected memory page, and
perform the alert action. 12. The system of claim 11, wherein the abstraction layer is further configured to:
receive a second request, from the virtual computing instance, to remove protection from a second memory page; determine, by examining the privilege mode of the virtual computing instance, that the virtual computing instance is not permitted to make the first request; and responsive to the determining, performing an alert action instead of executing the second request. 13. The system of claim 11, wherein the abstraction layer is configured to protect the first memory page by:
using a trace service to install a trace on the first memory page, wherein the trace service transmits a notification to the abstraction layer upon detecting the write to the first memory page. 14. The system of claim 11, wherein:
the first memory page stores instructions of the component. 15. The system of claim 11, wherein:
the first memory page is stored within a memory space assigned to the virtual computing instance; and the first memory page stores at least one of:
at least a portion of the data structure that tracks protected memory pages, and
write notifications for retrieval by the component. 16. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform a method for protecting memory of a virtual computing instance executing within a host computer, the method comprising:
receiving a first request, from a component executing within the virtual computing instance, to protect a first memory page; determining, by examining a privilege mode of the virtual computing instance, that the component is permitted to make the first request; protecting the first memory page by updating a data structure that tracks protected memory pages thereby defining a first protected memory page; detecting a write to the first protected memory page; responsive to the detecting, identifying an alert action for the first protected memory page; and performing the alert action. 17. The non-transitory computer-readable medium of claim 16, wherein the method further comprises:
receiving a second request, from the virtual computing instance, to remove protection from a second memory page; determining, by examining the privilege mode of the virtual computing instance, that the component is not permitted to make the first request; and responsive to the determining, performing an alert action instead of executing the second request. 18. The non-transitory computer-readable medium of claim 16, wherein protecting the first memory further comprises:
using a trace service to install a trace on the first memory page, wherein the trace service transmits a notification to an abstraction layer supporting execution of the virtual computing instance, upon detecting the write to the first memory page. 19. The non-transitory computer-readable medium of claim 16, wherein:
the first memory page stores instructions of the component. 20. The non-transitory computer-readable medium of claim 16, wherein:
the first memory page is stored within a memory space assigned to the virtual computing instance; and the first memory page stores at least one of:
at least a portion of the data structure that tracks protected memory pages, and
write notifications for retrieval by the component. | 2,400 |
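The protection flow these claims recite (receive request → examine privilege mode → update the tracking data structure → detect a write → perform the configured alert action) can be sketched in a few lines. This is an illustration only: the class name, method names, and the alert-action encoding below are invented for the sketch, not taken from the patent.

```python
# Illustrative stand-in for the claimed abstraction layer (e.g. a hypervisor).
# All identifiers here are hypothetical; the patent does not name them.
NOTIFY, SUSPEND, REPORT = "notify", "suspend", "report"

class AbstractionLayer:
    def __init__(self):
        self.protected = {}       # data structure that tracks protected pages -> alert action
        self.notifications = []   # write notifications held for retrieval by the component
        self.suspended = False    # set when the alert action suspends the instance
        self.external_reports = []  # messages destined for an external security manager

    def request_protect(self, page, privileged, action=NOTIFY):
        # Determine, by examining the privilege mode, whether the requester is
        # permitted to make the request; if not, perform an alert action
        # instead of executing the request.
        if not privileged:
            self.notifications.append(("denied", page))
            return False
        self.protected[page] = action   # update the tracking structure
        return True

    def on_write(self, page):
        # Detecting a write to a protected page triggers the page's alert action.
        action = self.protected.get(page)
        if action == NOTIFY:
            self.notifications.append(("write", page))
        elif action == SUSPEND:
            self.suspended = True
        elif action == REPORT:
            self.external_reports.append(page)
```

A privileged protect request succeeds and later writes to that page produce notifications; an unprivileged request is answered with an alert rather than being executed, mirroring the denial path of the dependent claims.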
8,686 | 8,686 | 15,962,451 | 2,422 | A controllable device, such as a set top box, responds to a transmission received from a one of a plurality of controlling devices of differing capabilities by entering into a one of a plurality of operating modes wherein the one of the plurality of operating modes entered into corresponds to the capabilities of the controlling device from which the transmission originated. | 1. A method performed by a switching device that is operable to connect at least one of a plurality of source devices to a sink device, the method comprising:
detecting an infrared (IR) signal transmitted by a remote control device that is operable to control a first device among the plurality of source devices and the sink device, wherein the remote control device is programmed to transmit the IR signal in response to a user interaction with the remote control device; in response to the detection of the IR signal, determining that the remote control device is in use; and in response to determining that the remote control device is in use, controlling a connection between the at least one of the plurality of source devices and the sink device as a function of the detected IR signal. 2. The method of claim 1, wherein the IR signal comprises a device identification code that indicates the first device and wherein controlling the connection comprises controlling the connection as a function of the device identification code. 3. The method of claim 1, wherein the switching device comprises an audio/video switch, one of the plurality of source devices comprises a set-top-box, and the sink device comprises a television. 4. The method of claim 1, wherein the receiver comprises a universal IR receiver capable of identifying and decoding the command transmission formats of a multiplicity of manufacturers. 5. A switching device, comprising:
a plurality of audio/video (AV) ports; a receiver; and control logic that is operable to selectively connect at least one of a plurality of source devices to a sink device each of which is connected to a corresponding one of the plurality of AV ports, the control logic being configured to: determine that the receiver has received an infrared (IR) signal transmitted by a remote control device that is operable to control a first device among the plurality of source devices and the sink device, wherein the remote control device is programmed to transmit the IR signal in response to a user interaction with the remote control device; in response to determining that the receiver has received the IR signal, determine that the remote control device is in use; and in response to at least determining that the remote control device is in use, control a connection between the at least one of the plurality of source devices and the sink device as a function of the detected IR signal. 6. The switching device of claim 5, wherein the IR signal comprises a device identification code that indicates the first device and wherein controlling the connection comprises controlling the connection as a function of the device identification code. 7. The switching device of claim 5, wherein the switching device comprises an audio/video switch, one of the plurality of source devices comprises a set-top-box, and the sink device comprises a television. 8. The switching device of claim 5, wherein the receiver comprises a universal IR receiver capable of identifying and decoding the command transmission formats of a multiplicity of manufacturers. 9. A switching device, comprising:
a plurality of audio/video (AV) ports; a receiver; and control logic that is operable to selectively connect at least one of a plurality of source devices to a sink device each of which is connected to a corresponding one of the plurality of AV ports, the control logic being configured to: determine that the receiver has received an infrared (IR) signal transmitted by a remote control device that is operable to control at least a source device among the plurality of source devices and the sink device, wherein the remote control device is programmed to transmit the IR signal in response to a user interaction with the remote control device; in response to determining that the receiver has received the IR signal, determine that the remote control device is in use; and in response to determining that the remote control device is in use: identify the source device that is associated with the remote control device from among the plurality of source devices; identify a first AV port from among the plurality of AV ports to which the identified source device is connected; and connect the first AV port to the AV port to which the sink device is connected. 10. The switching device of claim 9, wherein the control logic is configured to identify the source device from among the plurality of source devices that is associated with the remote control device by detecting a device identification code included in the IR signal. 11. The switching device of claim 9, wherein one of the plurality of source devices comprises a set-top-box and the sink device comprises a television. 12. The switching device of claim 9, wherein the receiver comprises a universal IR receiver capable of identifying and decoding the command transmission formats of a multiplicity of manufacturers. 
| A controllable device, such as a set top box, responds to a transmission received from a one of a plurality of controlling devices of differing capabilities by entering into a one of a plurality of operating modes wherein the one of the plurality of operating modes entered into corresponds to the capabilities of the controlling device from which the transmission originated.1. A method performed by a switching device that is operable to connect at least one of a plurality of source devices to a sink device, the method comprising:
detecting an infrared (IR) signal transmitted by a remote control device that is operable to control a first device among the plurality of source devices and the sink device, wherein the remote control device is programmed to transmit the IR signal in response to a user interaction with the remote control device; in response to the detection of the IR signal, determining that the remote control device is in use; and in response to determining that the remote control device is in use, controlling a connection between the at least one of the plurality of source devices and the sink device as a function of the detected IR signal. 2. The method of claim 1, wherein the IR signal comprises a device identification code that indicates the first device and wherein controlling the connection comprises controlling the connection as a function of the device identification code. 3. The method of claim 1, wherein the switching device comprises an audio/video switch, one of the plurality of source devices comprises a set-top-box, and the sink device comprises a television. 4. The method of claim 1, wherein the receiver comprises a universal IR receiver capable of identifying and decoding the command transmission formats of a multiplicity of manufacturers. 5. A switching device, comprising:
a plurality of audio/video (AV) ports; a receiver; and control logic that is operable to selectively connect at least one of a plurality of source devices to a sink device each of which is connected to a corresponding one of the plurality of AV ports, the control logic being configured to: determine that the receiver has received an infrared (IR) signal transmitted by a remote control device that is operable to control a first device among the plurality of source devices and the sink device, wherein the remote control device is programmed to transmit the IR signal in response to a user interaction with the remote control device; in response to determining that the receiver has received the IR signal, determine that the remote control device is in use; and in response to at least determining that the remote control device is in use, control a connection between the at least one of the plurality of source devices and the sink device as a function of the detected IR signal. 6. The switching device of claim 5, wherein the IR signal comprises a device identification code that indicates the first device and wherein controlling the connection comprises controlling the connection as a function of the device identification code. 7. The switching device of claim 5, wherein the switching device comprises an audio/video switch, one of the plurality of source devices comprises a set-top-box, and the sink device comprises a television. 8. The switching device of claim 5, wherein the receiver comprises a universal IR receiver capable of identifying and decoding the command transmission formats of a multiplicity of manufacturers. 9. A switching device, comprising:
a plurality of audio/video (AV) ports; a receiver; and control logic that is operable to selectively connect at least one of a plurality of source devices to a sink device each of which is connected to a corresponding one of the plurality of AV ports, the control logic being configured to: determine that the receiver has received an infrared (IR) signal transmitted by a remote control device that is operable to control at least a source device among the plurality of source devices and the sink device, wherein the remote control device is programmed to transmit the IR signal in response to a user interaction with the remote control device; in response to determining that the receiver has received the IR signal, determine that the remote control device is in use; and in response to determining that the remote control device is in use: identify the source device that is associated with the remote control device from among the plurality of source devices; identify a first AV port from among the plurality of AV ports to which the identified source device is connected; and connect the first AV port to the AV port to which the sink device is connected. 10. The switching device of claim 9, wherein the control logic is configured to identify the source device from among the plurality of source devices that is associated with the remote control device by detecting a device identification code included in the IR signal. 11. The switching device of claim 9, wherein one of the plurality of source devices comprises a set-top-box and the sink device comprises a television. 12. The switching device of claim 9, wherein the receiver comprises a universal IR receiver capable of identifying and decoding the command transmission formats of a multiplicity of manufactures. | 2,400 |
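The routing behavior recited in claims 9–12 above (detect an IR signal → read the device identification code → identify the source device's AV port → connect it to the sink's port) reduces to a small lookup. The sketch below uses invented class, port, and code names; the patent specifies none of them.

```python
class AVSwitch:
    """Minimal sketch of the claimed control logic; identifiers are hypothetical."""

    def __init__(self, port_by_device_code):
        # device identification code (carried in the IR signal) -> AV port index
        self.port_by_device_code = port_by_device_code
        self.connected_port = None  # source port currently routed to the sink

    def on_ir_signal(self, device_code):
        # Receiving any IR signal means the remote control is in use; the
        # embedded device identification code identifies the source device,
        # whose port is then connected to the port the sink device is on.
        port = self.port_by_device_code.get(device_code)
        if port is not None:
            self.connected_port = port
        return self.connected_port
```

An unrecognized code leaves the current routing unchanged, a choice made here for the sketch; the claims do not prescribe behavior for unknown codes.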
8,687 | 8,687 | 12,359,233 | 2,482 | A system and method are described below for encoding interactive low-latency video using interframe coding. For example, one embodiment of a computer-implemented method for performing video compression comprises: detecting motion or high scene complexity within a sequence of images occurring at different regions within the sequence of images; logically subdividing each of the sequence of images into a plurality of tiles, each tile having a size selected based on the amount of motion detected in a region in which the tile is positioned; and encoding one or more of the tiles within each image of the sequence of images using a first compression format and encoding the remainder of the tiles within each image of the sequence of images using a second compression format. | 1. A computer-implemented method for performing video compression comprising:
detecting motion within a sequence of images occurring at different regions within the sequence of images; logically subdividing each of the sequence of images into a plurality of tiles, each tile having a size selected based on the amount of motion detected in a region in which the tile is positioned; and encoding one or more of the tiles within each image of the sequence of images using a first compression format and encoding the remainder of the tiles within each image of the sequence of images using a second compression format. 2. The method as in claim 1 wherein the first compression format comprises intraframe coding. 3. The method as in claim 2 wherein the second compression format comprises interframe coding. | A system and method are described below for encoding interactive low-latency video using interframe coding. For example, one embodiment of a computer-implemented method for performing video compression comprises: detecting motion or high scene complexity within a sequence of images occurring at different regions within the sequence of images; logically subdividing each of the sequence of images into a plurality of tiles, each tile having a size selected based on the amount of motion detected in a region in which the tile is positioned; and encoding one or more of the tiles within each image of the sequence of images using a first compression format and encoding the remainder of the tiles within each image of the sequence of images using a second compression format.1. A computer-implemented method for performing video compression comprising:
detecting motion within a sequence of images occurring at different regions within the sequence of images; logically subdividing each of the sequence of images into a plurality of tiles, each tile having a size selected based on the amount of motion detected in a region in which the tile is positioned; and encoding one or more of the tiles within each image of the sequence of images using a first compression format and encoding the remainder of the tiles within each image of the sequence of images using a second compression format. 2. The method as in claim 1 wherein the first compression format comprises intraframe coding. 3. The method as in claim 2 wherein the second compression format comprises interframe coding. | 2,400 |
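Claim 1's per-region policy (tile size chosen from the amount of detected motion, with two compression formats split across the tiles) can be illustrated as follows. The threshold, tile sizes, and the intra/inter assignment are invented for the sketch; the claims only require that size track motion and that two formats coexist.

```python
def plan_tiles(motion_by_region, threshold=0.5):
    """Pick a tile size and coding mode per region from a motion estimate in [0, 1].

    Illustrative policy: high-motion regions get finer tiles and intraframe
    coding; static regions get coarser tiles and interframe coding.
    """
    plan = {}
    for region, motion in motion_by_region.items():
        if motion >= threshold:
            plan[region] = {"tile_size": 8, "coding": "intra"}
        else:
            plan[region] = {"tile_size": 32, "coding": "inter"}
    return plan
```

For a frame whose top half is moving and whose bottom half is static, the planner subdivides the top into small intra-coded tiles and the bottom into large inter-coded tiles.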
8,688 | 8,688 | 16,429,704 | 2,458 | A user device executes a web application on the user device and transmits a request to an application execution server. The request requests that a background process associated with the web application be started at the application execution server. The user device adds an icon associated with the web application to a user interface of the user device, and closes execution of the web application on the user device. The user device changes an appearance of the icon in response to receiving a notification from the background process. | 1. A user device comprising:
a processor and a memory, the memory containing instructions executable by the processor whereby the user device is configured to:
execute a web application on the user device;
transmit a request, to an application execution server, requesting that a background process associated with the web application be started at the application execution server;
add an icon associated with the web application to a user interface of the user device;
close the web application on the user device;
change an appearance of the icon in response to receiving a notification from the background process. 2. The user device of claim 1, wherein, when executing the web application, the processor is further configured to execute a web browser supporting the execution of the web application. 3. The user device of claim 1 wherein to change the appearance of the icon in response to receiving the notification the processor is configured to change the appearance of the icon when the web application is closed. 4. The user device of claim 1, wherein the processor is further configured to launch the web application in response to the icon being clicked when the web application is closed. 5. The user device of claim 4, wherein:
to close the web application, the processor is configured to close a web browser supporting the execution of the web application; to launch the web application, the processor is configured to re-execute the web browser on the user device. 6. The user device of claim 1, wherein to close the web application, the processor is configured to terminate execution of the web application. 7. The user device of claim 1, wherein to close the web application, the processor is configured to close a tab of a web browser supporting execution of the web application. 8. The user device of claim 1, wherein the processor is further configured to generate the request for the background process by execution of the web application. 9. The user device of claim 1, wherein the processor is further configured to maintain a communication channel with the application execution server in order to:
transmit the request; and receive the notification from the background process when the web application is closed. 10. The user device of claim 1, wherein to change the appearance of the icon, the processor is configured to change a size, color, and/or shape of the icon. 11. The user device of claim 1, wherein the notification indicates that an event associated with the web application has been recognized by the application execution server. 12. The user device of claim 11, wherein the event comprises reception of a new email. 13. A method of managing execution of a web application, implemented by a user device, the method comprising:
executing the web application on the user device; transmitting a request, to an application execution server, requesting that a background process associated with the web application be started at the application execution server; adding an icon associated with the web application to a user interface of the user device; closing the web application on the user device; changing an appearance of the icon in response to receiving a notification from the background process. 14. The method of claim 13, further comprising, when executing the web application, executing a web browser supporting the execution of the web application. 15. The method of claim 13, wherein changing the appearance of the icon comprises changing the appearance of the icon when the web application is closed. 16. The method of claim 13, further comprising launching the web application in response to the icon being clicked when the web application is closed. 17. The method of claim 16, wherein:
closing the web application comprises closing a web browser supporting the execution of the web application; launching the web application comprises re-executing the web browser on the user device. 18. The method of claim 13, wherein closing the web application comprises terminating execution of the web application. 19. The method of claim 13, wherein closing the web application comprises closing a tab of a web browser supporting execution of the web application. 20. The method of claim 13, further comprising executing a web browser to generate the request for the background process. 21. The method of claim 13, further comprising generating the request for the background process by execution of the web application. 22. The method of claim 13, further comprising maintaining a communication channel with the application execution server in order to receive updates, from the background process, that trigger further changes to the appearance of the icon over time. 23. The method of claim 22, further comprising receiving an instruction from the application execution server via the communication channel and executing in accordance with the instruction. 24. The method of claim 23, wherein the instruction is a remote procedure call or a representational state transfer message. 25. An application execution server comprising:
a processor and a memory, the memory containing instructions executable by the processor whereby the application execution server is configured to:
create a background process on the application execution server in response to receiving a request from a web application executing on a user device; and
transmit a notification to the user device in response to the background process recognizing an event associated with the web application. 26. The application execution server of claim 25, wherein to transmit the notification to the user device, the processor is configured to transmit after the web application has been closed at the user device. 27. The application execution server of claim 25, wherein the processor is further configured to maintain a communication channel with the user device to:
receive the request; and transmit the notification from the background process to the user device after the web application has been closed at the user device. 28. The application execution server of claim 25, wherein to recognize the event associated with the web application, the processor is configured to recognize reception of a new email. 29. The application execution server of claim 25, wherein to transmit the notification, the processor is configured to transmit the notification in the form of an instruction that is interpretable by the user device. 30. The application execution server of claim 29, wherein the instruction is a remote procedure call or a representational state transfer message. | A user device executes a web application on the user device and transmits a request to an application execution server. The request requests that a background process associated with the web application be started at the application execution server. The user device adds an icon associated with the web application to a user interface of the user device, and closes execution of the web application on the user device. The user device changes an appearance of the icon in response to receiving a notification from the background process.1. A user device comprising:
a processor and a memory, the memory containing instructions executable by the processor whereby the user device is configured to:
execute a web application on the user device;
transmit a request, to an application execution server, requesting that a background process associated with the web application be started at the application execution server;
add an icon associated with the web application to a user interface of the user device;
close the web application on the user device;
change an appearance of the icon in response to receiving a notification from the background process. 2. The user device of claim 1, wherein, when executing the web application, the processor is further configured to execute a web browser supporting the execution of the web application. 3. The user device of claim 1, wherein, to change the appearance of the icon in response to receiving the notification, the processor is configured to change the appearance of the icon when the web application is closed. 4. The user device of claim 1, wherein the processor is further configured to launch the web application in response to the icon being clicked when the web application is closed. 5. The user device of claim 4, wherein:
to close the web application, the processor is configured to close a web browser supporting the execution of the web application; to launch the web application, the processor is configured to re-execute the web browser on the user device. 6. The user device of claim 1, wherein to close the web application, the processor is configured to terminate execution of the web application. 7. The user device of claim 1, wherein to close the web application, the processor is configured to close a tab of a web browser supporting execution of the web application. 8. The user device of claim 1, wherein the processor is further configured to generate the request for the background process by execution of the web application. 9. The user device of claim 1, wherein the processor is further configured to maintain a communication channel with the application execution server in order to:
transmit the request; and receive the notification from the background process when the web application is closed. 10. The user device of claim 1, wherein to change the appearance of the icon, the processor is configured to change a size, color, and/or shape of the icon. 11. The user device of claim 1, wherein the notification indicates that an event associated with the web application has been recognized by the application execution server. 12. The user device of claim 11, wherein the event comprises reception of a new email. 13. A method of managing execution of a web application, implemented by a user device, the method comprising:
executing the web application on the user device; transmitting a request, to an application execution server, requesting that a background process associated with the web application be started at the application execution server; adding an icon associated with the web application to a user interface of the user device; closing the web application on the user device; changing an appearance of the icon in response to receiving a notification from the background process. 14. The method of claim 13, further comprising, when executing the web application, executing a web browser supporting the execution of the web application. 15. The method of claim 13, wherein changing the appearance of the icon comprises changing the appearance of the icon when the web application is closed. 16. The method of claim 13, further comprising launching the web application in response to the icon being clicked when the web application is closed. 17. The method of claim 16, wherein:
closing the web application comprises closing a web browser supporting the execution of the web application; launching the web application comprises re-executing the web browser on the user device. 18. The method of claim 13, wherein closing the web application comprises terminating execution of the web application. 19. The method of claim 13, wherein closing the web application comprises closing a tab of a web browser supporting execution of the web application. 20. The method of claim 13, further comprising executing a web browser to generate the request for the background process. 21. The method of claim 13, further comprising generating the request for the background process by execution of the web application. 22. The method of claim 13, further comprising maintaining a communication channel with the application execution server in order to receive updates, from the background process, that trigger further changes to the appearance of the icon over time. 23. The method of claim 22, further comprising receiving an instruction from the application execution server via the communication channel and executing in accordance with the instruction. 24. The method of claim 23, wherein the instruction is a remote procedure call or a representational state transfer message.
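The interaction these claims describe — a user device that registers a server-side background process for a web application, closes the application locally, and later changes an icon's appearance when the background process pushes a notification — can be sketched as follows. All class and method names here are illustrative assumptions, not part of the patent text.

```python
# Hypothetical sketch (names are illustrative, not from the patent) of the
# claimed flow: a user device registers a server-side background process
# for a web application, closes the application, and changes an icon's
# appearance when the background process sends a notification.

class ApplicationExecutionServer:
    def __init__(self):
        self.registrations = []  # (device, watched_event) pairs

    def start_background_process(self, device, watched_event):
        # Claim 25: create a background process in response to a device request.
        self.registrations.append((device, watched_event))

    def recognize_event(self, event):
        # Claims 26-28: when the event (e.g. a new email) is recognized,
        # notify the device even though the web application is closed there.
        for device, watched in self.registrations:
            if watched == event:
                device.receive_notification({"event": event})


class UserDevice:
    def __init__(self, server):
        self.server = server
        self.web_app_open = False
        self.icon = None

    def run_web_app(self, watched_event):
        # Claims 1/13: execute the app, request a background process,
        # add an icon to the user interface, then close the app locally.
        self.web_app_open = True
        self.server.start_background_process(self, watched_event)
        self.icon = {"color": "gray", "badge": None}
        self.web_app_open = False

    def receive_notification(self, notification):
        # Claim 10: change the icon's size, color, and/or shape.
        self.icon = {"color": "red", "badge": notification["event"]}


server = ApplicationExecutionServer()
device = UserDevice(server)
device.run_web_app("new_email")
server.recognize_event("new_email")  # event arrives after the app is closed
print(device.icon)  # {'color': 'red', 'badge': 'new_email'}
```

Note the design point the claims hinge on: the event watcher lives on the server, so the notification reaches the device (and updates the icon) while the web application itself is closed.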
8,689 | 8,689 | 15,091,289 | 2,465 | The wireless device computes a plurality of channel state information (CSI) reports employing first signals received on a first plurality of cells. The plurality of CSI reports may be computed for transmission in a first subframe. The wireless device may select a first radio resource from a sequence of radio resources. The wireless device may select a first plurality of CSI reports from the plurality of CSI reports. The wireless device may transmit, on the first radio resource and in the first subframe, the first plurality of CSI reports. | 1. A method comprising:
receiving, by a wireless device, at least one message comprising configuration parameters of a physical uplink control channel (PUCCH) of a cell in a plurality of cells, the configuration parameters indicating a sequence of radio resources for the PUCCH, each of the radio resources in the sequence comprising one or more resource blocks in a plurality of subframes; computing, by the wireless device, a plurality of channel state information (CSI) reports employing first signals received on a first plurality of cells in the plurality of cells, the plurality of CSI reports computed for transmission in a first subframe in the plurality of subframes; selecting a first radio resource from the sequence of radio resources; selecting a first plurality of CSI reports from the plurality of CSI reports; and transmitting, on the first radio resource and in the first subframe, the first plurality of CSI reports. 2. The method of claim 1, wherein the first plurality of CSI reports comprise the plurality of CSI reports. 3. The method of claim 1, wherein a CSI report in the first plurality of CSI reports is selected according to at least a CSI report priority that depends, at least in part, on:
a report type of the CSI report; and a first cell index of a first cell associated with the CSI report. 4. The method of claim 1, wherein the first plurality of the CSI reports comprise fewer CSI reports than the plurality of CSI reports when the wireless device does not have sufficient resources to transmit the plurality of CSI reports. 5. The method of claim 1, wherein the first plurality of cells in the plurality of cells are configured to transmit one or more CSI reports in the first subframe. 6. The method of claim 5, wherein each of the first plurality of cells is configured with one or more CSI processes. 7. The method of claim 1, wherein the plurality of cells are grouped into a plurality of PUCCH groups comprising:
a primary PUCCH group comprising a primary cell with a primary PUCCH transmitted to a base station; and a secondary PUCCH group comprising a PUCCH secondary cell with a secondary PUCCH transmitted to a base station. 8. The method of claim 7, wherein the PUCCH is at least one of the primary PUCCH or the secondary PUCCH. 9. The method of claim 7, wherein the first plurality of cells are in at least one of the primary PUCCH group or the secondary PUCCH group. 10. The method of claim 1, further comprising computing, by the wireless device, a second plurality of channel state information (CSI) reports employing second signals received on a second plurality of cells in the plurality of cells, the second plurality of CSI reports computed for transmission in a second subframe in the plurality of subframes. 11. A method comprising:
receiving, by a wireless device, at least one message comprising configuration parameters of a physical uplink control channel (PUCCH) of a cell in a plurality of cells, the configuration parameters indicating a sequence of radio resources for the PUCCH, each of the radio resources in the sequence comprising one or more resource blocks in a plurality of subframes; computing, by the wireless device, a plurality of channel state information (CSI) reports employing first signals received on a first plurality of cells in the plurality of cells, the plurality of CSI reports computed for transmission in a first subframe in the plurality of subframes; selecting one or more radio resources from the sequence of radio resources; selecting a first plurality of CSI reports from the plurality of CSI reports; and transmitting, on the one or more radio resources and in the first subframe, the first plurality of CSI reports. 12. The method of claim 11, wherein the first plurality of CSI reports are the plurality of CSI reports. 13. The method of claim 11, wherein a CSI report in the first plurality of CSI reports is selected according to at least a CSI report priority that depends, at least in part, on:
a report type of the CSI report; and a first cell index of a first cell associated with the CSI report. 14. The method of claim 11, wherein the first plurality of the CSI reports comprise fewer CSI reports than the plurality of CSI reports when the wireless device does not have sufficient resources to transmit the plurality of CSI reports. 15. The method of claim 11, wherein the first plurality of cells in the plurality of cells are configured to transmit one or more CSI reports in the first subframe. 16. The method of claim 11, wherein each of the first plurality of cells is configured with one or more CSI processes. 17. The method of claim 11, wherein the plurality of cells are grouped into a plurality of PUCCH groups comprising:
a primary PUCCH group comprising a primary cell with a primary PUCCH transmitted to a base station; and a secondary PUCCH group comprising a PUCCH secondary cell with a secondary PUCCH transmitted to a base station. 18. The method of claim 17, wherein the PUCCH is at least one of the primary PUCCH or the secondary PUCCH. 19. The method of claim 17, wherein the first plurality of cells are in at least one of the primary PUCCH group or the secondary PUCCH group. 20. The method of claim 11, further comprising computing, by the wireless device, a second plurality of channel state information (CSI) reports employing second signals received on a second plurality of cells in the plurality of cells, the second plurality of CSI reports computed for transmission in a second subframe in the plurality of subframes.
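Claims 3-4 (and the parallel claims 13-14) describe dropping lower-priority CSI reports when the wireless device lacks resources to transmit them all, with priority depending on the report type and the cell index. A rough sketch of that selection follows, under two illustrative assumptions not stated in the claims: a lower priority value is more important, and the selected radio resource carries a fixed number of reports.

```python
# Hedged sketch of CSI report selection (claims 3-4): when the wireless
# device lacks resources for every computed report, it keeps the
# highest-priority reports, ordered by report type and then cell index.

def select_csi_reports(reports, capacity):
    """reports: dicts with 'type_priority' (lower = more important) and
    'cell_index'; capacity: how many reports the radio resource carries."""
    ordered = sorted(reports, key=lambda r: (r["type_priority"], r["cell_index"]))
    return ordered[:capacity]

computed = [
    {"cell_index": 2, "type_priority": 1},
    {"cell_index": 0, "type_priority": 0},
    {"cell_index": 1, "type_priority": 0},
]

selected = select_csi_reports(computed, capacity=2)
print([r["cell_index"] for r in selected])  # [0, 1]
```

With sufficient capacity the selection degenerates to the full report set, which matches claim 2's case where the first plurality of CSI reports comprises all of the computed reports.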
8,690 | 8,690 | 14,621,851 | 2,459 | Techniques for a sequential message reader for message syncing are described. An apparatus may comprise a network component and an inbox management component. The network component may be operative to receive an incoming update at a messaging endpoint from a recipient update queue, the incoming update comprising an incoming recipient sequence number. The inbox management component may be operative to add the incoming update to a message inbox on the messaging endpoint, the incoming update added to the message inbox in an order determined by the incoming recipient sequence number, and to determine based on the incoming recipient sequence number whether one or more additional updates are missing from the message inbox on the messaging endpoint. Other embodiments are described and claimed. | 1. A computer-implemented method, comprising:
receiving an incoming update at a messaging endpoint from a recipient update queue, the incoming update comprising an incoming recipient sequence number; adding the incoming update to a message inbox on the messaging endpoint, the incoming update added to the message inbox in an order determined by the incoming recipient sequence number; and determining based on the incoming recipient sequence number whether one or more additional updates are missing from the message inbox on the messaging endpoint. 2. The method of claim 1, the incoming update comprising an atomic modification to a message inbox for the messaging endpoint. 3. The method of claim 1, wherein the messaging endpoint comprises one of a messaging application on a device and a web browser session. 4. The method of claim 1, wherein the incoming update is received in response to a determination that a current recipient sequence number associated with the messaging endpoint in the recipient update queue is less than the incoming recipient sequence number. 5. The method of claim 1, comprising:
determining based on the incoming recipient sequence number that the one or more additional updates are missing from the message inbox on the messaging endpoint; determining a smallest missing sequence number based on the incoming recipient sequence number; transmitting a missing update request from the messaging endpoint to the recipient update queue, the missing update request comprising the smallest missing sequence number; and receiving the one or more additional updates from the recipient update queue in response to the missing update request. 6. The method of claim 1, comprising:
determining based on the incoming recipient sequence number that the two or more additional updates are missing from the message inbox on the messaging endpoint; determining two or more missing sequence numbers based on the incoming recipient sequence number, the two or more missing sequence numbers corresponding to the two or more additional updates; transmitting a bulk missing update request from the messaging endpoint to the recipient update queue, the bulk missing update request comprising the two or more missing sequence numbers; and receiving the two or more additional updates from the recipient update queue in response to the missing update request, the two or more additional updates received in a bulk missing update response in a single network transaction. 7. The method of claim 1, comprising:
receiving user input for creation of the incoming update; creating the incoming update; and transmitting the incoming update to the recipient update queue. 8. The method of claim 1, wherein adding the incoming update to the message inbox comprises:
applying the incoming update to a message cache of a messaging application on a device; and adding the incoming update to a message database of the messaging application on the device. 9. An apparatus, comprising:
a processor circuit on a device; a network component operative on the processor circuit to receive an incoming update at a messaging endpoint from a recipient update queue, the incoming update comprising an incoming recipient sequence number; and an inbox management component operative on the processor circuit to add the incoming update to a message inbox on the messaging endpoint, the incoming update added to the message inbox in an order determined by the incoming recipient sequence number, and determine based on the incoming recipient sequence number whether one or more additional updates are missing from the message inbox on the messaging endpoint. 10. The apparatus of claim 9, the incoming update comprising an atomic modification to a message inbox for the messaging endpoint, wherein the messaging endpoint comprises one of a messaging application on a device and a web browser session. 11. The apparatus of claim 9, wherein the incoming update is received in response to a determination that a current recipient sequence number associated with the messaging endpoint in the recipient update queue is less than the incoming recipient sequence number. 12. The apparatus of claim 9, further comprising:
the inbox management component operative to determine based on the incoming recipient sequence number that the one or more additional updates are missing from the message inbox on the messaging endpoint and determine a smallest missing sequence number based on the incoming recipient sequence number; and the network component operative to transmit a missing update request from the messaging endpoint to the recipient update queue, the missing update request comprising the smallest missing sequence number and receive the one or more additional updates from the recipient update queue in response to the missing update request. 13. The apparatus of claim 9, further comprising:
the inbox management component operative to determine based on the incoming recipient sequence number that the two or more additional updates are missing from the message inbox on the messaging endpoint and determine two or more missing sequence numbers based on the incoming recipient sequence number, the two or more missing sequence numbers corresponding to the two or more additional updates; and the network component operative to transmit a bulk missing update request from the messaging endpoint to the recipient update queue, the bulk missing update request comprising the two or more missing sequence numbers and receive the two or more additional updates from the recipient update queue in response to the missing update request, the two or more additional updates received in a bulk missing update response in a single network transaction. 14. The apparatus of claim 9, further comprising:
a user interface component operative on the processor circuit to receive user input for creation of the incoming update and create the incoming update; and the network component operative to transmit the incoming update to the recipient update queue. 15. The apparatus of claim 9, wherein adding the incoming update to the message inbox further comprises:
the inbox management component operative to apply the incoming update to a message cache of a messaging application on a device and apply the incoming update to a message database of the messaging application on the device. 16. At least one computer-readable storage medium comprising instructions that, when executed, cause a system to:
receive an incoming update at a messaging endpoint from a recipient update queue, the incoming update comprising an incoming recipient sequence number; add the incoming update to a message inbox on the messaging endpoint, the incoming update added to the message inbox in an order determined by the incoming recipient sequence number; and determine based on the incoming recipient sequence number whether one or more additional updates are missing from the message inbox on the messaging endpoint. 17. The computer-readable storage medium of claim 16, the incoming update comprising an atomic modification to a message inbox for the messaging endpoint, wherein the messaging endpoint comprises one of a messaging application on a device and a web browser session. 18. The computer-readable storage medium of claim 16, wherein the incoming update is received in response to a determination that a current recipient sequence number associated with the messaging endpoint in the recipient update queue is less than the incoming recipient sequence number. 19. The computer-readable storage medium of claim 16, comprising further instructions that, when executed, cause a system to:
determine based on the incoming recipient sequence number that the one or more additional updates are missing from the message inbox on the messaging endpoint; determine a smallest missing sequence number based on the incoming recipient sequence number; transmit a missing update request from the messaging endpoint to the recipient update queue, the missing update request comprising the smallest missing sequence number; and receive the one or more additional updates from the recipient update queue in response to the missing update request. 20. The computer-readable storage medium of claim 16, wherein adding the incoming update to the message inbox comprises further instructions that, when executed, cause a system to:
apply the incoming update to a message cache of a messaging application on a device; and apply the incoming update to a message database of the messaging application on the device. | Techniques for a sequential message reader for message syncing are described. An apparatus may comprise a network component and an inbox management component. The network component may be operative to receiving an incoming update at a messaging endpoint from a recipient update queue, the incoming update comprising an incoming recipient sequence number. The inbox management component may be operative to add the incoming update to a message inbox on the messaging endpoint, the incoming update added to the message inbox in an order determined by the incoming recipient sequence number and determine based on the incoming recipient sequence number whether one or more additional updates are missing from the message inbox on the messaging endpoint. Other embodiments are described and claimed.1. A computer-implemented method, comprising:
receiving an incoming update at a messaging endpoint from a recipient update queue, the incoming update comprising an incoming recipient sequence number; adding the incoming update to a message inbox on the messaging endpoint, the incoming update added to the message inbox in an order determined by the incoming recipient sequence number; and determining based on the incoming recipient sequence number whether one or more additional updates are missing from the message inbox on the messaging endpoint. 2. The method of claim 1, the incoming update comprising an atomic modification to a message inbox for the messaging endpoint. 3. The method of claim 1, wherein the messaging endpoint comprises one of a messaging application on a device and a web browser session. 4. The method of claim 1, wherein the incoming update is received in response to a determination that a current recipient sequence number associated with the messaging endpoint in the recipient update queue is less than the incoming recipient sequence number. 5. The method of claim 1, comprising:
determining based on the incoming recipient sequence number that the one or more additional updates are missing from the message inbox on the messaging endpoint; determining a smallest missing sequence number based on the incoming recipient sequence number; transmitting a missing update request from the messaging endpoint to the recipient update queue, the missing update request comprising the smallest missing sequence number; and receiving the one or more additional updates from the recipient update queue in response to the missing update request. 6. The method of claim 1, comprising:
determining based on the incoming recipient sequence number that the two or more additional updates are missing from the message inbox on the messaging endpoint; determining two or more missing sequence numbers based on the incoming recipient sequence number, the two or more missing sequence numbers corresponding to the two or more additional updates; transmitting a bulk missing update request from the messaging endpoint to the recipient update queue, the bulk missing update request comprising the two or more missing sequence numbers; and receiving the two or more additional updates from the recipient update queue in response to the missing update request, the two or more additional updates received in a bulk missing update response in a single network transaction. 7. The method of claim 1, comprising:
receiving user input for creation of the incoming update; creating the incoming update; and transmitting the incoming update to the recipient update queue. 8. The method of claim 1, wherein adding the incoming update to the message inbox comprises:
applying the incoming update to a message cache of a messaging application on a device; and adding the incoming update to a message database of the messaging application on the device. 9. An apparatus, comprising:
a processor circuit on a device; a network component operative on the processor circuit to receive an incoming update at a messaging endpoint from a recipient update queue, the incoming update comprising an incoming recipient sequence number; and an inbox management component operative on the processor circuit to add the incoming update to a message inbox on the messaging endpoint, the incoming update added to the message inbox in an order determined by the incoming recipient sequence number and determine based on the incoming recipient sequence number whether one or more additional updates are missing from the message inbox on the messaging endpoint. 10. The apparatus of claim 9, the incoming update comprising an atomic modification to a message inbox for the messaging endpoint, wherein the recipient messaging endpoint comprises one of a messaging application on a device and a web browser session. 11. The apparatus of claim 9, wherein the incoming update is received in response to a determination that a current recipient sequence number associated with the messaging endpoint in the recipient update queue is less than the incoming recipient sequence number. 12. The apparatus of claim 9, further comprising:
the inbox management component operative to determine based on the incoming recipient sequence number that the one or more additional updates are missing from the message inbox on the messaging endpoint and determine a smallest missing sequence number based on the incoming recipient sequence number; and the network component operative to transmit a missing update request from the messaging endpoint to the recipient update queue, the missing update request comprising the smallest missing sequence number and receive the one or more additional updates from the recipient update queue in response to the missing update request. 13. The apparatus of claim 9, further comprising:
the inbox management component operative to determine based on the incoming recipient sequence number that the two or more additional updates are missing from the message inbox on the messaging endpoint and determine two or more missing sequence numbers based on the incoming recipient sequence number, the two or more missing sequence numbers corresponding to the two or more additional updates; and the network component operative to transmit a bulk missing update request from the messaging endpoint to the recipient update queue, the bulk missing update request comprising the two or more missing sequence numbers and receive the two or more additional updates from the recipient update queue in response to the missing update request, the two or more additional updates received in a bulk missing update response in a single network transaction. 14. The apparatus of claim 9, further comprising:
a user interface component operative on the processor circuit to receive user input for creation of the incoming update and create the incoming update; and the network component operative to transmit the incoming update to the recipient update queue. 15. The apparatus of claim 9, further comprising:
the inbox management component operative to apply the incoming update to a message cache of a messaging application on a device and apply the incoming update to a message database of the messaging application on the device. 16. At least one computer-readable storage medium comprising instructions that, when executed, cause a system to:
receive an incoming update at a messaging endpoint from a recipient update queue, the incoming update comprising an incoming recipient sequence number; add the incoming update to a message inbox on the messaging endpoint, the incoming update added to the message inbox in an order determined by the incoming recipient sequence number; and determine based on the incoming recipient sequence number whether one or more additional updates are missing from the message inbox on the messaging endpoint. 17. The computer-readable storage medium of claim 16, the incoming update comprising an atomic modification to a message inbox for the messaging endpoint, wherein the messaging endpoint comprises one of a messaging application on a device and a web browser session. 18. The computer-readable storage medium of claim 16, wherein the incoming update is received in response to a determination that a current recipient sequence number associated with the messaging endpoint in the recipient update queue is less than the incoming recipient sequence number. 19. The computer-readable storage medium of claim 16, comprising further instructions that, when executed, cause a system to:
determine based on the incoming recipient sequence number that the one or more additional updates are missing from the message inbox on the messaging endpoint; determine a smallest missing sequence number based on the incoming recipient sequence number; transmit a missing update request from the messaging endpoint to the recipient update queue, the missing update request comprising the smallest missing sequence number; and receive the one or more additional updates from the recipient update queue in response to the missing update request. 20. The computer-readable storage medium of claim 16, wherein adding the incoming update to the message inbox comprises further instructions that, when executed, cause a system to:
apply the incoming update to a message cache of a messaging application on a device; and apply the incoming update to a message database of the messaging application on the device. | 2,400 |
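The messaging claims above (claims 1, 5 and 6) describe an endpoint that stores incoming updates in sequence-number order and uses those numbers to detect missing updates. A minimal Python sketch of that gap-detection idea, assuming an in-memory inbox and sequence numbers starting at 1; the class and method names are illustrative, not taken from the patent:

```python
class MessagingEndpoint:
    """Toy model of the inbox-ordering and gap-detection scheme."""

    def __init__(self):
        self.inbox = {}         # recipient sequence number -> update payload
        self.highest_seen = 0   # highest sequence number received so far

    def receive_update(self, seq, payload):
        """Add an incoming update in sequence order, then report any gaps."""
        self.inbox[seq] = payload
        self.highest_seen = max(self.highest_seen, seq)
        return self.missing_sequence_numbers()

    def missing_sequence_numbers(self):
        """All sequence numbers below the highest seen that are absent."""
        return [s for s in range(1, self.highest_seen) if s not in self.inbox]

    def smallest_missing(self):
        """The smallest missing sequence number (cf. claim 5), or None."""
        gaps = self.missing_sequence_numbers()
        return gaps[0] if gaps else None
```

Receiving updates 1, 2 and then 5 leaves gaps 3 and 4, so the endpoint would request update 3 via a missing update request (claim 5) or both via a bulk missing update request (claim 6).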
8,691 | 8,691 | 16,255,518 | 2,477 | The present disclosure relates to transmitting synchronization signals and in particular to so called beam sweep. In particular the disclosure relates to methods for providing synchronization using synchronization sequences that are transmitted at different points in time. The disclosure also relates to corresponding devices and computer programs. A method in a network node, for transmitting synchronization sequences of a synchronization signal to one or more receiving wireless devices, comprises determining multiple synchronization sequences, such that each synchronization sequence comprises a respective timing indication, whereby each synchronization sequence enables determination of a time of an event in a receiving wireless device and transmitting the synchronization sequences to the one or more wireless devices, at different points in time. | 1. A method for use in a network node, for transmitting synchronization sequences of a synchronization signal, transmitted in a beam sweep, to one or more receiving wireless devices, the method comprising:
determining multiple synchronization sequences, such that each synchronization sequence comprises a respective timing indication, whereby each synchronization sequence enables a receiving wireless device to determine a time of an event; and transmitting the synchronization sequences to the one or more wireless devices, at different points in time. 2. The method of claim 1, wherein the multiple synchronization sequences are time dependent versions of a synchronization signal referring to one particular event. 3. The method of claim 1, further comprising:
determining the time of the event. 4. The method of claim 1, wherein the synchronization sequences are transmitted in different directions. 5. The method of claim 4, wherein the transmission of the synchronization sequences constitutes a beam sweep. 6. The method of claim 1, wherein the timing indications are relative to a time of transmission of the respective synchronization sequence. 7. The method of claim 1, wherein the timing indications are relative to a reference clock. 8. A method for use in a wireless device, for receiving one or more synchronization sequences of a synchronization signal, transmitted in a beam sweep, the method comprising:
monitoring a spectrum for synchronisation sequences; and when a first synchronisation sequence is detected then: obtaining, by analysing the content of the detected first synchronisation sequence, a timing indication defining a time of an event. 9. The method of claim 8, wherein the method comprises:
receiving a second synchronization sequence, wherein the first and the second synchronization sequences define the same time. 10. The method of claim 8, comprising:
performing a transceiver operation at the time defined by the timing indication. 11. The method of claim 8, wherein the timing indications are relative to a time of a transmission of the respective synchronization sequence. 12. The method of claim 8, wherein the timing indications are relative to a reference clock. 13. The method of claim 8, wherein the event is a time of a reserved time slot where the wireless device is allowed to transmit. 14. A network node in a cellular communication network configured for transmitting synchronization sequences of a synchronization signal, transmitted in a beam sweep, to one or more receiving wireless devices, the network node comprising:
a communication interface; processing circuitry configured to cause the network node:
to determine multiple synchronization sequences, such that each synchronization sequence comprises a respective timing indication, whereby each synchronization sequence enables a receiving wireless device to determine a time of an event; and
to transmit the synchronization sequences to the one or more wireless devices, at different points in time. 15. A wireless device being configured for receiving one or more synchronization sequences of a synchronization signal, transmitted in a beam sweep, the wireless device comprising:
a communication interface; and processing circuitry configured to cause the wireless device:
to monitor a spectrum for synchronisation sequences; and when a first synchronisation sequence is detected then:
to obtain, by analysing the content of the detected first synchronisation sequence, a timing indication defining a time of an event. | The present disclosure relates to transmitting synchronization signals and in particular to so called beam sweep. In particular the disclosure relates to methods for providing synchronization using synchronization sequences that are transmitted at different points in time. The disclosure also relates to corresponding devices and computer programs. A method in a network node, for transmitting synchronization sequences of a synchronization signal to one or more receiving wireless devices, comprises determining multiple synchronization sequences, such that each synchronization sequence comprises a respective timing indication, whereby each synchronization sequence enables determination of a time of an event in a receiving wireless device and transmitting the synchronization sequences to the one or more wireless devices, at different points in time.1. A method for use in a network node, for transmitting synchronization sequences of a synchronization signal, transmitted in a beam sweep, to one or more receiving wireless devices, the method comprising:
determining multiple synchronization sequences, such that each synchronization sequence comprises a respective timing indication, whereby each synchronization sequence enables a receiving wireless device to determine a time of an event; and transmitting the synchronization sequences to the one or more wireless devices, at different points in time. 2. The method of claim 1, wherein the multiple synchronization sequences are time dependent versions of a synchronization signal referring to one particular event. 3. The method of claim 1, further comprising:
determining the time of the event. 4. The method of claim 1, wherein the synchronization sequences are transmitted in different directions. 5. The method of claim 4, wherein the transmission of the synchronization sequences constitutes a beam sweep. 6. The method of claim 1, wherein the timing indications are relative to a time of transmission of the respective synchronization sequence. 7. The method of claim 1, wherein the timing indications are relative to a reference clock. 8. A method for use in a wireless device, for receiving one or more synchronization sequences of a synchronization signal, transmitted in a beam sweep, the method comprising:
monitoring a spectrum for synchronisation sequences; and when a first synchronisation sequence is detected then: obtaining, by analysing the content of the detected first synchronisation sequence, a timing indication defining a time of an event. 9. The method of claim 8, wherein the method comprises:
receiving a second synchronization sequence, wherein the first and the second synchronization sequences define the same time. 10. The method of claim 8, comprising:
performing a transceiver operation at the time defined by the timing indication. 11. The method of claim 8, wherein the timing indications are relative to a time of a transmission of the respective synchronization sequence. 12. The method of claim 8, wherein the timing indications are relative to a reference clock. 13. The method of claim 8, wherein the event is a time of a reserved time slot where the wireless device is allowed to transmit. 14. A network node in a cellular communication network configured for transmitting synchronization sequences of a synchronization signal, transmitted in a beam sweep, to one or more receiving wireless devices, the network node comprising:
a communication interface; processing circuitry configured to cause the network node:
to determine multiple synchronization sequences, such that each synchronization sequence comprises a respective timing indication, whereby each synchronization sequence enables a receiving wireless device to determine a time of an event; and
to transmit the synchronization sequences to the one or more wireless devices, at different points in time. 15. A wireless device being configured for receiving one or more synchronization sequences of a synchronization signal, transmitted in a beam sweep, the wireless device comprising:
a communication interface; and processing circuitry configured to cause the wireless device:
to monitor a spectrum for synchronisation sequences; and when a first synchronisation sequence is detected then:
to obtain, by analysing the content of the detected first synchronisation sequence, a timing indication defining a time of an event. | 2,400 |
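The beam-sweep claims above hinge on one idea: the same synchronization signal is transmitted at different points in time, so each copy must carry its own timing indication (here taken as relative to its transmission time, per claim 6) that lets a device recover one common event time from whichever beam it hears. A hedged Python sketch of that idea under the simplifying assumption of negligible propagation delay; all names are illustrative, not from the disclosure:

```python
def build_sweep(transmit_times, event_time):
    """Network-node side: one synchronization sequence per beam, each carrying
    a timing indication relative to its own transmission time (claims 1, 6)."""
    return [{"tx_time": t, "timing_indication": event_time - t}
            for t in transmit_times]

def decode_event_time(sequence, receive_time=None):
    """Device side: recover the absolute event time from any one detected
    sequence (claim 8). With negligible propagation delay the receive time
    equals the transmit time, so event = receive time + timing indication."""
    rx = sequence["tx_time"] if receive_time is None else receive_time
    return rx + sequence["timing_indication"]

# Three beams sent at t = 10, 20, 30 all point at the same event at t = 100,
# so any two received sequences define the same time (cf. claim 9).
sweep = build_sweep([10.0, 20.0, 30.0], 100.0)
```

Because every sequence decodes to the same event time, a device need only detect one beam of the sweep to know, for example, when its reserved transmit slot (claim 13) begins.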
8,692 | 8,692 | 15,186,099 | 2,457 | A method for providing location-based information via a temporary social network includes detecting each of a plurality of first user devices within a predetermined proximity of a physical location and, in response, connecting each of the plurality of first user devices to a social network that is associated with the physical location using a communications network. In some embodiments, each of the plurality of first user devices will be connected to the social network for a predetermined amount of time or for as long as they are in the predetermined proximity. While in the social network, location-based information about the physical location is received over the communications network from each of the plurality of first user devices connected to the social network. At least some of that location-based information about the physical location may then be provided over the communications network to a second user device. | 1. A system, comprising:
a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising:
receiving a plurality of user-generated content submissions over a communications network;
determining that a first subset of the plurality of user-generated content submissions were received from at least one first user device that is located within a minimum proximity of a physical location and, in response, adding each of the first subset of the plurality of user-generated content submissions to a location-based information feed that includes only user-generated content submissions that are received from user devices that are located within the minimum proximity of the physical location; and
providing the location-based information feed over the communications network for display on at least one second user device that is located outside the minimum proximity of the physical location. 2. The system of claim 1, wherein the operations further comprise:
determining that a second subset of the plurality of user-generated content submissions were received from at least one third user device that is located outside of a minimum proximity of the physical location and, in response, preventing each of the second subset of the plurality of user-generated content submissions from being included in the location-based information feed. 3. The system of claim 1, wherein the user-generated content includes at least one of an image and a video. 4. The system of claim 1, wherein the first subset of the plurality of user-generated content submissions are determined to have been received from the at least one first user device that is located within the minimum proximity of the physical location in response to receiving the first subset of the plurality of user-generated content submissions from the at least one first user device via networking equipment provided at the physical location. 5. The system of claim 1, wherein the operations further comprise:
receiving, from the at least one second user device, a search query; and analyzing the search query to identify the location-based information feed for provision to the at least one second user device. 6. The system of claim 1, wherein the operations further comprise:
receiving, from the at least one second user device, at least one filtering criteria; filtering the location-based information feed according to the filtering criteria to produce a filtered location-based information feed; and providing the filtered location-based information feed over the communications network for display on the at least one second user device. 7. A method, comprising:
receiving, by a location-based information feed provider system over a communications network, a plurality of user-generated content submissions; determining, by the location-based information feed provider system, that a first subset of the plurality of user-generated content submissions were received from at least one first user device that is located within a minimum proximity of a physical location and, in response, adding each of the first subset of the plurality of user-generated content submissions to a location-based information feed that includes only user-generated content submissions that are received from user devices that are located within the minimum proximity of the physical location; and providing, by the location-based information feed provider system over the communications network for display on at least one second user device that is located outside the minimum proximity of the physical location, the location-based information feed. 8. The method of claim 7, further comprising:
determining, by the location-based information feed provider system, that a second subset of the plurality of user-generated content submissions were received from at least one third user device that is located outside of a minimum proximity of the physical location and, in response, preventing each of the second subset of the plurality of user-generated content submissions from being included in the location-based information feed. 9. The method of claim 7, wherein the user-generated content includes at least one of an image and a video. 10. The method of claim 7, wherein the first subset of the plurality of user-generated content submissions are determined to have been received from the at least one first user device that is located within the minimum proximity of the physical location in response to receiving the first subset of the plurality of user-generated content submissions from the at least one first user device via networking equipment provided at the physical location. 11. The method of claim 7, further comprising:
receiving, by the location-based information feed provider system from the at least one second user device, a search query; and analyzing, by the location-based information feed provider system, the search query to identify the location-based information feed for provision to the at least one second user device. 12. The method of claim 7, further comprising:
receiving, by the location-based information feed provider system from the at least one second user device, at least one filtering criteria;
filtering, by the location-based information feed provider system, the location-based information feed according to the filtering criteria to produce a filtered location-based information feed; and
providing, by the location-based information feed provider system over the communications network for display on the at least one second user device, the filtered location-based information feed. 13. The method of claim 12, wherein the at least one filtering criteria identifies a plurality of particular users, and wherein filtering the location-based information feed according to the at least one filtering criteria provides the filtered location-based information feed with only user-generated content submissions from the plurality of particular users. 14. A non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising:
receiving a plurality of user-generated content submissions over a communications network; determining that a first subset of the plurality of user-generated content submissions were received from at least one first user device that is located within a minimum proximity of a physical location and, in response, adding each of the first subset of the plurality of user-generated content submissions to a location-based information feed that includes only user-generated content submissions that are received from user devices that are located within the minimum proximity of the physical location; and providing the location-based information feed over the communications network for display on at least one second user device that is located outside the minimum proximity of the physical location. 15. The non-transitory machine-readable medium of claim 14, wherein the operations further comprise:
determining that a second subset of the plurality of user-generated content submissions were received from at least one third user device that is located outside of a minimum proximity of the physical location and, in response, preventing each of the second subset of the plurality of user-generated content submissions from being included in the location-based information feed. 16. The non-transitory machine-readable medium of claim 14, wherein the user-generated content includes at least one of an image and a video. 17. The non-transitory machine-readable medium of claim 14, wherein the first subset of the plurality of user-generated content submissions are determined to have been received from the at least one first user device that is located within the minimum proximity of the physical location in response to receiving the first subset of the plurality of user-generated content submissions from the at least one first user device via networking equipment provided at the physical location. 18. The non-transitory machine-readable medium of claim 14, wherein the operations further comprise:
receiving, from the at least one second user device, a search query; and analyzing the search query to identify the location-based information feed for provision to the at least one second user device. 19. The non-transitory machine-readable medium of claim 14, wherein the operations further comprise:
receiving, from the at least one second user device, at least one filtering criteria; filtering the location-based information feed according to the filtering criteria to produce a filtered location-based information feed; and providing the filtered location-based information feed over the communications network for display on the at least one second user device. 20. The non-transitory machine-readable medium of claim 19, wherein the at least one filtering criteria identifies a plurality of particular users, and wherein filtering the location-based information feed according to the at least one filtering criteria provides the filtered location-based information feed with only user-generated content submissions from the plurality of particular users. | A method for providing location-based information via a temporary social network includes detecting each of a plurality of first user devices within a predetermined proximity of a physical location and, in response, connecting each of the plurality of first user devices to a social network that is associated with the physical location using a communications network. In some embodiments, each of the plurality of first user devices will be connected to the social network for a predetermined amount of time or for as long as they are in the predetermined proximity. While in the social network, location-based information about the physical location is received over the communications network from each of the plurality of first user devices connected to the social network. At least some of that location-based information about the physical location may then be provided over the communications network to a second user device.1. A system, comprising:
a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising:
receiving a plurality of user-generated content submissions over a communications network;
determining that a first subset of the plurality of user-generated content submissions were received from at least one first user device that is located within a minimum proximity of a physical location and, in response, adding each of the first subset of the plurality of user-generated content submissions to a location-based information feed that includes only user-generated content submissions that are received from user devices that are located within the minimum proximity of the physical location; and
providing the location-based information feed over the communications network for display on at least one second user device that is located outside the minimum proximity of the physical location. 2. The system of claim 1, wherein the operations further comprise:
determining that a second subset of the plurality of user-generated content submissions were received from at least one third user device that is located outside of a minimum proximity of the physical location and, in response, preventing each of the second subset of the plurality of user-generated content submissions from being included in the location-based information feed. 3. The system of claim 1, wherein the user-generated content includes at least one of an image and a video. 4. The system of claim 1, wherein the first subset of the plurality of user-generated content submissions are determined to have been received from the at least one first user device that is located within the minimum proximity of the physical location in response to receiving the first subset of the plurality of user-generated content submissions from the at least one first user device via networking equipment provided at the physical location. 5. The system of claim 1, wherein the operations further comprise:
receiving, from the at least one second user device, a search query; and analyzing the search query to identify the location-based information feed for provision to the at least one second user device. 6. The system of claim 1, wherein the operations further comprise:
receiving, from the at least one second user device, at least one filtering criteria; filtering the location-based information feed according to the filtering criteria to produce a filtered location-based information feed; and providing the filtered location-based information feed over the communications network for display on the at least one second user device. 7. A method, comprising:
receiving, by a location-based information feed provider system over a communications network, a plurality of user-generated content submissions; determining, by the location-based information feed provider system, that a first subset of the plurality of user-generated content submissions were received from at least one first user device that is located within a minimum proximity of a physical location and, in response, adding each of the first subset of the plurality of user-generated content submissions to a location-based information feed that includes only user-generated content submissions that are received from user devices that are located within the minimum proximity of the physical location; and providing, by the location-based information feed provider system over the communications network for display on at least one second user device that is located outside the minimum proximity of the physical location, the location-based information feed. 8. The method of claim 7, further comprising:
determining, by the location-based information feed provider system, that a second subset of the plurality of user-generated content submissions were received from at least one third user device that is located outside of a minimum proximity of the physical location and, in response, preventing each of the second subset of the plurality of user-generated content submissions from being included in the location-based information feed. 9. The method of claim 7, wherein the user-generated content includes at least one of an image and a video. 10. The method of claim 7, wherein the first subset of the plurality of user-generated content submissions are determined to have been received from the at least one first user device that is located within the minimum proximity of the physical location in response to receiving the first subset of the plurality of user-generated content submissions from the at least one first user device via networking equipment provided at the physical location. 11. The method of claim 7, further comprising:
receiving, by the location-based information feed provider system from the at least one second user device, a search query; and analyzing, by the location-based information feed provider system, the search query to identify the location-based information feed for provision to the at least one second user device. 12. The method of claim 7, further comprising:
receiving, by the location-based information feed provider system from the at least one second user device, at least one filtering criteria;
filtering, by the location-based information feed provider system, the location-based information feed according to the filtering criteria to produce a filtered location-based information feed; and
providing, by the location-based information feed provider system over the communications network for display on the at least one second user device, the filtered location-based information feed. 13. The method of claim 12, wherein the at least one filtering criteria identifies a plurality of particular users, and wherein filtering the location-based information feed according to the at least one filtering criteria provides the filtered location-based information feed with only user-generated content submissions from the plurality of particular users. 14. A non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising:
receiving a plurality of user-generated content submissions over a communications network; determining that a first subset of the plurality of user-generated content submissions were received from at least one first user device that is located within a minimum proximity of a physical location and, in response, adding each of the first subset of the plurality of user-generated content submissions to a location-based information feed that includes only user-generated content submissions that are received from user devices that are located within the minimum proximity of the physical location; and providing the location-based information feed over the communications network for display on at least one second user device that is located outside the minimum proximity of the physical location. 15. The non-transitory machine-readable medium of claim 14, wherein the operations further comprise:
determining that a second subset of the plurality of user-generated content submissions were received from at least one third user device that is located outside of a minimum proximity of the physical location and, in response, preventing each of the second subset of the plurality of user-generated content submissions from being included in the location-based information feed. 16. The non-transitory machine-readable medium of claim 14, wherein the user-generated content includes at least one of an image and a video. 17. The non-transitory machine-readable medium of claim 14, wherein the first subset of the plurality of user-generated content submissions are determined to have been received from the at least one first user device that is located within the minimum proximity of the physical location in response to receiving the first subset of the plurality of user-generated content submissions from the at least one first user device via networking equipment provided at the physical location. 18. The non-transitory machine-readable medium of claim 14, wherein the operations further comprise:
receiving, from the at least one second user device, a search query; and analyzing the search query to identify the location-based information feed for provision to the at least one second user device. 19. The non-transitory machine-readable medium of claim 14, wherein the operations further comprise:
receiving, from the at least one second user device, at least one filtering criteria; filtering the location-based information feed according to the filtering criteria to produce a filtered location-based information feed; and providing the filtered location-based information feed over the communications network for display on the at least one second user device. 20. The non-transitory machine-readable medium of claim 19, wherein the at least one filtering criteria identifies a plurality of particular users, and wherein filtering the location-based information feed according to the at least one filtering criteria provides the filtered location-based information feed with only user-generated content submissions from the plurality of particular users. | 2,400 |
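The proximity gate and per-user filtering recited in the claims above can be sketched as follows. This is a minimal illustration, not the claimed system: the class names (`Submission`, `LocationFeed`), the planar-distance proximity test, and all coordinates are assumptions standing in for whatever geolocation or venue-networking check a real provider system would use.

```python
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Submission:
    user: str
    content: str
    x: float  # position of the submitting device, arbitrary planar units
    y: float

@dataclass
class LocationFeed:
    venue_x: float
    venue_y: float
    min_proximity: float                    # radius defining "at the venue"
    items: list = field(default_factory=list)

    def submit(self, sub: Submission) -> bool:
        # Only submissions from devices within the minimum proximity of
        # the physical location are added to the feed; others are excluded.
        if hypot(sub.x - self.venue_x, sub.y - self.venue_y) <= self.min_proximity:
            self.items.append(sub)
            return True
        return False

    def filtered(self, users: set) -> list:
        # Filtering criteria naming particular users restrict the feed
        # to only those users' submissions.
        return [s for s in self.items if s.user in users]

feed = LocationFeed(venue_x=0.0, venue_y=0.0, min_proximity=100.0)
feed.submit(Submission("alice", "on stage now!", 10.0, 5.0))    # inside: added
feed.submit(Submission("bob", "stuck in traffic", 500.0, 0.0))  # outside: dropped
print([s.user for s in feed.items])                             # ['alice']
```

A real provider system could instead treat receipt via networking equipment at the physical location as the proximity signal, as one of the claims above contemplates, rather than computing a distance at all.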
8,693 | 8,693 | 15,056,650 | 2,462 | A data network analysis system includes a computer-executable set of instructions that obtain service account information associated with a route provided to a customer through a data communication network having network elements. Using the service account information, the instructions identify a termination port that terminates the route to a customer premises equipment of the customer, and at least one target port of the route and those network elements that are assigned to convey the route through one or more of the network elements. The instructions then obtain the routing information for the route from each of the network elements that are assigned to convey the route. | 1. A data network analysis system comprising:
a computing system including at least one processor and at least one tangible memory for storing instructions that are executed by the at least one processor to:
obtain service account information associated with a route provided to a customer through a data communication network comprising a plurality of network elements;
using the service account information, identify a termination port that terminates the route to a customer premises equipment of the customer;
identify at least one target port of the route;
identify a subset of the network elements that are assigned to convey the route through one or more of the network elements; and
obtain the routing information for the route from each of the network elements that are assigned to convey the route. 2. The data network analysis system of claim 1, wherein the instructions are further executed to obtain the termination port by accessing an assignment server that stores the service account information associated with a contract established between the customer and a service provider that provides the route to the customer. 3. The data network analysis system of claim 1, wherein the network elements establish the route using a network layer of the Open Systems Interconnection (OSI) model protocol. 4. The data network analysis system of claim 3, wherein the instructions are further executed to determine whether port information associated with the termination port is included in the address resolution protocol (ARP) tables in each of the subset of network elements. 5. The data network analysis system of claim 1, wherein the instructions are further executed to generate a spoof message that originates at the termination port and is destined for the target port, the spoof message comprising a ping message. 6. The data network analysis system of claim 1, wherein the instructions are further executed to generate a spoof message that originates at one of the network elements that is separate and distinct from the network element that provides the termination port, the spoof message destined for the target port. 7. The data network analysis system of claim 1, wherein the instructions are further executed to:
normalize the obtained routing information from each of the network elements; and facilitate display of the normalized routing information on the display for view by the user. 8. A data network analysis method comprising:
obtaining, using instructions stored on at least one computer-readable medium and executed by at least one processor, service account information associated with a route provided to a customer through a data communication network comprising a plurality of network elements; identifying, using the instructions, a termination port that terminates the route to a customer premises equipment of the customer using the service account information; identifying, using the instructions, at least one target port of the route; identifying, using the instructions, a subset of the network elements that are assigned to convey the route through one or more of the network elements; obtaining, using the instructions, the routing information for the route from each of the network elements that are assigned to convey the route; and facilitating, using the instructions, display of the obtained routing information on a display for view by a user. 9. The data network analysis method of claim 8, further comprising obtaining the termination port by accessing an assignment server that stores the service account information associated with a contract established between the customer and a service provider that provides the route to the customer. 10. The data network analysis method of claim 8, wherein the network elements establish the route using a network layer of the Open Systems Interconnection (OSI) model protocol. 11. The data network analysis method of claim 10, further comprising determining whether port information associated with the termination port is included in the address resolution protocol (ARP) tables in each of the subset of network elements. 12. The data network analysis method of claim 8, further comprising generating a spoof message that originates at the termination port and is destined for the target port, the spoof message comprising a ping message. 13. 
The data network analysis method of claim 8, further comprising generating a spoof message that originates at one of the network elements that is separate and distinct from the network element that provides the termination port, the spoof message destined for the target port. 14. The data network analysis method of claim 8, further comprising:
normalizing the obtained routing information from each of the network elements; and displaying the normalized routing information on the display for view by the user. 15. An article of manufacture, comprising a computer-readable medium having instructions stored thereon, the instructions, when executed by a processor, cause the computer to perform the operations comprising:
obtain service account information associated with a route provided to a customer through a data communication network comprising a plurality of network elements; using the service account information, identify a termination port that terminates the route to a customer premises equipment of the customer; identify at least one target port of the route; identify a subset of the network elements that are assigned to convey the route through one or more of the network elements; and obtain the routing information for the route from each of the network elements that are assigned to convey the route. 16. The article of manufacture of claim 15, wherein the network elements establish the route using a network layer of the Open Systems Interconnection (OSI) model protocol. 17. The article of manufacture of claim 15, including instructions that are executed to:
obtain the termination port by accessing an assignment server that stores the service account information associated with a contract established between the customer and a service provider that provides the route to the customer; and facilitate display of the obtained routing information on a display for view by a user. 18. The article of manufacture of claim 17, wherein the instructions are further executed to determine whether port information associated with the termination port is included in the address resolution protocol (ARP) tables in each of the subset of network elements. 19. The article of manufacture of claim 15, wherein the instructions are further executed to generate a spoof message that originates at the termination port and is destined for the target port, the spoof message comprising a ping message. 20. The article of manufacture of claim 15, wherein the instructions are further executed to generate a spoof message that originates at one of the network elements that is separate and distinct from the network element that provides the termination port, the spoof message destined for the target port. | A data network analysis system includes a computer-executable set of instructions that obtain service account information associated with a route provided to a customer through a data communication network having network elements. Using the service account information, the instructions identify a termination port that terminates the route to a customer premises equipment of the customer, and at least one target port of the route and those network elements that are assigned to convey the route through one or more of the network elements. The instructions then obtain the routing information for the route from each of the network elements that are assigned to convey the route.1. A data network analysis system comprising:
a computing system including at least one processor and at least one tangible memory for storing instructions that are executed by the at least one processor to:
obtain service account information associated with a route provided to a customer through a data communication network comprising a plurality of network elements;
using the service account information, identify a termination port that terminates the route to a customer premises equipment of the customer;
identify at least one target port of the route;
identify a subset of the network elements that are assigned to convey the route through one or more of the network elements; and
obtain the routing information for the route from each of the network elements that are assigned to convey the route. 2. The data network analysis system of claim 1, wherein the instructions are further executed to obtain the termination port by accessing an assignment server that stores the service account information associated with a contract established between the customer and a service provider that provides the route to the customer. 3. The data network analysis system of claim 1, wherein the network elements establish the route using a network layer of the Open Systems Interconnection (OSI) model protocol. 4. The data network analysis system of claim 3, wherein the instructions are further executed to determine whether port information associated with the termination port is included in the address resolution protocol (ARP) tables in each of the subset of network elements. 5. The data network analysis system of claim 1, wherein the instructions are further executed to generate a spoof message that originates at the termination port and is destined for the target port, the spoof message comprising a ping message. 6. The data network analysis system of claim 1, wherein the instructions are further executed to generate a spoof message that originates at one of the network elements that is separate and distinct from the network element that provides the termination port, the spoof message destined for the target port. 7. The data network analysis system of claim 1, wherein the instructions are further executed to:
normalize the obtained routing information from each of the network elements; and facilitate display of the normalized routing information on the display for view by the user. 8. A data network analysis method comprising:
obtaining, using instructions stored on at least one computer-readable medium and executed by at least one processor, service account information associated with a route provided to a customer through a data communication network comprising a plurality of network elements; identifying, using the instructions, a termination port that terminates the route to a customer premises equipment of the customer using the service account information; identifying, using the instructions, at least one target port of the route; identifying, using the instructions, a subset of the network elements that are assigned to convey the route through one or more of the network elements; obtaining, using the instructions, the routing information for the route from each of the network elements that are assigned to convey the route; and facilitating, using the instructions, display of the obtained routing information on a display for view by a user. 9. The data network analysis method of claim 8, further comprising obtaining the termination port by accessing an assignment server that stores the service account information associated with a contract established between the customer and a service provider that provides the route to the customer. 10. The data network analysis method of claim 8, wherein the network elements establish the route using a network layer of the Open Systems Interconnection (OSI) model protocol. 11. The data network analysis method of claim 10, further comprising determining whether port information associated with the termination port is included in the address resolution protocol (ARP) tables in each of the subset of network elements. 12. The data network analysis method of claim 8, further comprising generating a spoof message that originates at the termination port and is destined for the target port, the spoof message comprising a ping message. 13. 
The data network analysis method of claim 8, further comprising generating a spoof message that originates at one of the network elements that is separate and distinct from the network element that provides the termination port, the spoof message destined for the target port. 14. The data network analysis method of claim 8, further comprising:
normalizing the obtained routing information from each of the network elements; and displaying the normalized routing information on the display for view by the user. 15. An article of manufacture, comprising a computer-readable medium having instructions stored thereon, the instructions, when executed by a processor, cause the computer to perform the operations comprising:
obtain service account information associated with a route provided to a customer through a data communication network comprising a plurality of network elements; using the service account information, identify a termination port that terminates the route to a customer premises equipment of the customer; identify at least one target port of the route; identify a subset of the network elements that are assigned to convey the route through one or more of the network elements; and obtain the routing information for the route from each of the network elements that are assigned to convey the route. 16. The article of manufacture of claim 15, wherein the network elements establish the route using a network layer of the Open Systems Interconnection (OSI) model protocol. 17. The article of manufacture of claim 15, including instructions that are executed to:
obtain the termination port by accessing an assignment server that stores the service account information associated with a contract established between the customer and a service provider that provides the route to the customer; and facilitate display of the obtained routing information on a display for view by a user. 18. The article of manufacture of claim 17, wherein the instructions are further executed to determine whether port information associated with the termination port is included in the address resolution protocol (ARP) tables in each of the subset of network elements. 19. The article of manufacture of claim 15, wherein the instructions are further executed to generate a spoof message that originates at the termination port and is destined for the target port, the spoof message comprising a ping message. 20. The article of manufacture of claim 15, wherein the instructions are further executed to generate a spoof message that originates at one of the network elements that is separate and distinct from the network element that provides the termination port, the spoof message destined for the target port. | 2,400 |
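The analysis flow in the claims above — look up the service account, identify the termination port and the subset of network elements assigned to convey the route, then obtain routing information from each element while checking whether the termination address appears in each element's ARP table — can be sketched in Python. The inventory dictionaries, element names, ports, and addresses below are all hypothetical stand-ins for an assignment server and managed network elements.

```python
# Toy inventory standing in for an assignment server and the managed
# network elements; every name, port, and address here is hypothetical.
NETWORK_ELEMENTS = {
    "ne1": {"arp_table": {"10.0.0.2": "aa:bb:cc:00:00:01"},
            "routes": {"custA": ["ge-0/0/1", "ge-0/0/2"]}},
    "ne2": {"arp_table": {},
            "routes": {"custA": ["xe-1/0/0"]}},
}
SERVICE_ACCOUNTS = {
    "custA": {"termination_port": ("ne1", "ge-0/0/1"),  # terminates at the CPE
              "termination_ip": "10.0.0.2",
              "elements": ["ne1", "ne2"]},              # subset conveying the route
}

def analyze_route(customer):
    # Obtain the service account information associated with the route.
    acct = SERVICE_ACCOUNTS[customer]
    _, term_port = acct["termination_port"]
    report = []
    for ne in acct["elements"]:
        info = NETWORK_ELEMENTS[ne]
        report.append({
            "element": ne,
            # Routing information obtained from each assigned element,
            # normalized into a common record shape for display.
            "ports": info["routes"].get(customer, []),
            # Is the termination address present in this element's ARP table?
            "termination_in_arp": acct["termination_ip"] in info["arp_table"],
        })
    return term_port, report

port, rows = analyze_route("custA")
```

A missing ARP entry on an intermediate element (as for `ne2` here) is the kind of per-element discrepancy the normalized report would surface to the user; the spoofed ping messages in the later claims are a separate active probe and are not modeled in this sketch.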
8,694 | 8,694 | 15,793,893 | 2,449 | A messaging system receives a message sent by an enterprise to an individual user. The message has an associated message tag describing the content of the message. The messaging system applies a filtering policy to the message. The filtering policy selectively blocks messages sent by enterprises to users. The filtering policy allows the message having the associated tag to pass through the filter even though the message might otherwise violate the filtering policy. The messaging system samples a subset of tagged messages from enterprises and analyzes the messages for compliance with a tagging policy. The messaging system may also train one or more tag models to recognize the correct tags for the messages. | 1. A method comprising:
receiving a message sent by a sender to a recipient using an electronic messaging system; identifying a tag associated with the message; applying a filtering policy to the message, the filtering policy indicating an action for the messaging system to perform on the message based on the tag; and performing the action indicated by the filtering policy on the message. 2. The method of claim 1, wherein identifying the tag associated with the message comprises:
identifying the tag in a header of the message. 3. The method of claim 1, wherein applying the filtering policy to the message comprises:
identifying a first filtering policy applicable to the message, the first filtering policy blocking the message from being sent from the sender to the recipient; and identifying, responsive to the tag associated with the message, a second filtering policy applicable to the message, the second filtering policy granting an exception to the first filtering policy for the message and allowing the message to be sent from the sender to the recipient; wherein performing the action indicated by the filtering policy comprises passing the message from the sender to the recipient according to the second filtering policy. 4. The method of claim 1, wherein a plurality of messages having the tag are sent by the sender and further comprising:
sampling a subset of the messages sent by the sender that have the tag; analyzing a message in the sampled subset of messages for compliance with a tagging policy specifying types of messages that may be tagged using the tag; and performing an enforcement action on the message in the sampled subset responsive to the analysis. 5. The method of claim 4, wherein analyzing the message in the sampled subset of messages comprises:
determining whether content of the message in the sampled subset of messages matches content described by the tag associated with the message. 6. The method of claim 4, wherein performing the enforcement action comprises:
delivering the message in the sampled subset from the sender to a recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset complies with the tagging policy; and blocking the message in the sampled subset from being sent from the sender to the recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset does not comply with the tagging policy. 7. The method of claim 1, wherein applying a filtering policy to the message comprises:
analyzing the message using a trained tag model operable to receive the message and output a value indicating whether the message conforms with requirements associated with the tag. 8. A non-transitory computer-readable storage medium storing computer program instructions executable by a processor to perform operations comprising:
receiving a message sent by a sender to a recipient using an electronic messaging system; identifying a tag associated with the message; applying a filtering policy to the message, the filtering policy indicating an action for the messaging system to perform on the message based on the tag; and performing the action indicated by the filtering policy on the message. 9. The computer-readable storage medium of claim 8, wherein identifying the tag associated with the message comprises:
identifying the tag in a header of the message. 10. The computer-readable storage medium of claim 8, wherein applying the filtering policy to the message comprises:
identifying a first filtering policy applicable to the message, the first filtering policy blocking the message from being sent from the sender to the recipient; and identifying, responsive to the tag associated with the message, a second filtering policy applicable to the message, the second filtering policy granting an exception to the first filtering policy for the message and allowing the message to be sent from the sender to the recipient; wherein performing the action indicated by the filtering policy comprises passing the message from the sender to the recipient according to the second filtering policy. 11. The computer-readable storage medium of claim 8, wherein a plurality of messages having the tag are sent by the sender, the operations further comprising:
sampling a subset of the messages sent by the sender that have the tag; analyzing a message in the sampled subset of messages for compliance with a tagging policy specifying types of messages that may be tagged using the tag; and performing an enforcement action on the message in the sampled subset responsive to the analysis. 12. The computer-readable storage medium of claim 11, wherein analyzing the message in the sampled subset of messages comprises:
determining whether content of the message in the sampled subset of messages matches content described by the tag associated with the message. 13. The computer-readable storage medium of claim 11, wherein performing the enforcement action comprises:
delivering the message in the sampled subset from the sender to a recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset complies with the tagging policy; and blocking the message in the sampled subset from being sent from the sender to the recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset does not comply with the tagging policy. 14. The computer-readable storage medium of claim 8, wherein applying a filtering policy to the message comprises:
analyzing the message using a trained tag model operable to receive the message and output a value indicating whether the message conforms with requirements associated with the tag. 15. A system comprising:
a computer processor for executing computer program instructions; and a non-transitory computer-readable storage medium storing computer program instructions executable by the processor to perform operations comprising:
receiving a message sent by a sender to a recipient using an electronic messaging system;
identifying a tag associated with the message;
applying a filtering policy to the message, the filtering policy indicating an action for the messaging system to perform on the message based on the tag; and
performing the action indicated by the filtering policy on the message. 16. The system of claim 15, wherein identifying the tag associated with the message comprises:
identifying the tag in a header of the message. 17. The system of claim 15, wherein applying the filtering policy to the message comprises:
identifying a first filtering policy applicable to the message, the first filtering policy blocking the message from being sent from the sender to the recipient; and identifying, responsive to the tag associated with the message, a second filtering policy applicable to the message, the second filtering policy granting an exception to the first filtering policy for the message and allowing the message to be sent from the sender to the recipient; wherein performing the action indicated by the filtering policy comprises passing the message from the sender to the recipient according to the second filtering policy. 18. The system of claim 15, wherein a plurality of messages having the tag are sent by the sender, the operations further comprising:
sampling a subset of the messages sent by the sender that have the tag; analyzing a message in the sampled subset of messages for compliance with a tagging policy specifying types of messages that may be tagged using the tag; and performing an enforcement action on the message in the sampled subset responsive to the analysis. 19. The system of claim 18, wherein performing the enforcement action comprises:
delivering the message in the sampled subset from the sender to a recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset complies with the tagging policy; and blocking the message in the sampled subset from being sent from the sender to the recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset does not comply with the tagging policy. 20. The system of claim 15, wherein applying a filtering policy to the message comprises:
analyzing the message using a trained tag model operable to receive the message and output a value indicating whether the message conforms with requirements associated with the tag. | A messaging system receives a message sent by an enterprise to an individual user. The message has an associated message tag describing the content of the message. The messaging system applies a filtering policy to the message. The filtering policy selectively blocks messages sent by enterprises to users. The filtering policy allows the message having the associated tag to pass through the filter even though the message might otherwise violate the filtering policy. The messaging system samples a subset of tagged messages from enterprises and analyzes the messages for compliance with a tagging policy. The messaging system may also train one or more tag models to recognize the correct tags for the messages.1. A method comprising:
receiving a message sent by a sender to a recipient using an electronic messaging system; identifying a tag associated with the message; applying a filtering policy to the message, the filtering policy indicating an action for the messaging system to perform on the message based on the tag; and performing the action indicated by the filtering policy on the message. 2. The method of claim 1, wherein identifying the tag associated with the message comprises:
identifying the tag in a header of the message. 3. The method of claim 1, wherein applying the filtering policy to the message comprises:
identifying a first filtering policy applicable to the message, the first filtering policy blocking the message from being sent from the sender to the recipient; and identifying, responsive to the tag associated with the message, a second filtering policy applicable to the message, the second filtering policy granting an exception to the first filtering policy for the message and allowing the message to be sent from the sender to the recipient; wherein performing the action indicated by the filtering policy comprises passing the message from the sender to the recipient according to the second filtering policy. 4. The method of claim 1, wherein a plurality of messages having the tag are sent by the sender and further comprising:
sampling a subset of the messages sent by the sender that have the tag; analyzing a message in the sampled subset of messages for compliance with a tagging policy specifying types of messages that may be tagged using the tag; and performing an enforcement action on the message in the sampled subset responsive to the analysis. 5. The method of claim 4, wherein analyzing the message in the sampled subset of messages comprises:
determining whether content of the message in the sampled subset of messages matches content described by the tag associated with the message. 6. The method of claim 4, wherein performing the enforcement action comprises:
delivering the message in the sampled subset from the sender to a recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset complies with the tagging policy; and blocking the message in the sampled subset from being sent from the sender to the recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset does not comply with the tagging policy. 7. The method of claim 1, wherein applying a filtering policy to the message comprises:
analyzing the message using a trained tag model operable to receive the message and output a value indicating whether the message conforms with requirements associated with the tag. 8. A non-transitory computer-readable storage medium storing computer program instructions executable by a processor to perform operations comprising:
receiving a message sent by a sender to a recipient using an electronic messaging system; identifying a tag associated with the message; applying a filtering policy to the message, the filtering policy indicating an action for the messaging system to perform on the message based on the tag; and performing the action indicated by the filtering policy on the message. 9. The computer-readable storage medium of claim 8, wherein identifying the tag associated with the message comprises:
identifying the tag in a header of the message. 10. The computer-readable storage medium of claim 8, wherein applying the filtering policy to the message comprises:
identifying a first filtering policy applicable to the message, the first filtering policy blocking the message from being sent from the sender to the recipient; and identifying, responsive to the tag associated with the message, a second filtering policy applicable to the message, the second filtering policy granting an exception to the first filtering policy for the message and allowing the message to be sent from the sender to the recipient; wherein performing the action indicated by the filtering policy comprises passing the message from the sender to the recipient according to the second filtering policy. 11. The computer-readable storage medium of claim 8, wherein a plurality of messages having the tag are sent by the sender, the operations further comprising:
sampling a subset of the messages sent by the sender that have the tag; analyzing a message in the sampled subset of messages for compliance with a tagging policy specifying types of messages that may be tagged using the tag; and performing an enforcement action on the message in the sampled subset responsive to the analysis. 12. The computer-readable storage medium of claim 11, wherein analyzing the message in the sampled subset of messages comprises:
determining whether content of the message in the sampled subset of messages matches content described by the tag associated with the message. 13. The computer-readable storage medium of claim 11, wherein performing the enforcement action comprises:
delivering the message in the sampled subset from the sender to a recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset complies with the tagging policy; and blocking the message in the sampled subset from being sent from the sender to the recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset does not comply with the tagging policy. 14. The computer-readable storage medium of claim 8, wherein applying a filtering policy to the message comprises:
analyzing the message using a trained tag model operable to receive the message and output a value indicating whether the message conforms with requirements associated with the tag. 15. A system comprising:
a computer processor for executing computer program instructions; and a non-transitory computer-readable storage medium storing computer program instructions executable by the processor to perform operations comprising:
receiving a message sent by a sender to a recipient using an electronic messaging system;
identifying a tag associated with the message;
applying a filtering policy to the message, the filtering policy indicating an action for the messaging system to perform on the message based on the tag; and
performing the action indicated by the filtering policy on the message. 16. The system of claim 15, wherein identifying the tag associated with the message comprises:
identifying the tag in a header of the message. 17. The system of claim 15, wherein applying the filtering policy to the message comprises:
identifying a first filtering policy applicable to the message, the first filtering policy blocking the message from being sent from the sender to the recipient; and identifying, responsive to the tag associated with the message, a second filtering policy applicable to the message, the second filtering policy granting an exception to the first filtering policy for the message and allowing the message to be sent from the sender to the recipient; wherein performing the action indicated by the filtering policy comprises passing the message from the sender to the recipient according to the second filtering policy. 18. The system of claim 15, wherein a plurality of messages having the tag are sent by the sender, the operations further comprising:
sampling a subset of the messages sent by the sender that have the tag; analyzing a message in the sampled subset of messages for compliance with a tagging policy specifying types of messages that may be tagged using the tag; and performing an enforcement action on the message in the sampled subset responsive to the analysis. 19. The system of claim 18, wherein performing the enforcement action comprises:
delivering the message in the sampled subset from the sender to a recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset complies with the tagging policy; and blocking the message in the sampled subset from being sent from the sender to the recipient of the sampled message responsive to the analysis indicating that the message in the sampled subset does not comply with the tagging policy. 20. The system of claim 15, wherein applying a filtering policy to the message comprises:
analyzing the message using a trained tag model operable to receive the message and output a value indicating whether the message conforms with requirements associated with the tag. | 2,400 |
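As an illustration of the two-tier filtering described in claims 8-10 and 15-17 above, the following is a minimal sketch of a filtering pass in which a tag found in a message header grants an exception to an otherwise-blocking policy. All names here (`Message`, the header keys, the recognized tag set, the "bulk" blocking rule) are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch of tag-based filtering-policy exceptions;
# not the claimed implementation.
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    recipient: str
    headers: dict = field(default_factory=dict)
    body: str = ""

def identify_tag(message):
    # Claims 9/16: the tag is carried in a header of the message.
    return message.headers.get("X-Message-Tag")

def apply_filtering_policies(message):
    """Return the action for the messaging system: 'pass' or 'block'."""
    # First policy (hypothetical rule): block all bulk mail from the sender.
    first_policy_blocks = message.headers.get("Precedence") == "bulk"
    tag = identify_tag(message)
    if first_policy_blocks:
        # Second policy (claims 10/17): a recognized tag grants an
        # exception to the first policy and lets the message through.
        if tag in {"billing-notice", "security-alert"}:
            return "pass"
        return "block"
    return "pass"

msg = Message(
    sender="ops@example.com",
    recipient="user@example.com",
    headers={"Precedence": "bulk", "X-Message-Tag": "security-alert"},
)
print(apply_filtering_policies(msg))  # -> pass
```

The sampled-compliance claims (11-13, 18-19) would sit on top of this: a subset of tagged messages is checked against a tagging policy, and non-compliant messages are blocked rather than passed.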
8695 | 8695 | 15433921 | 2434 | A method, computer system and computer program product for authenticating a transaction are provided. A service provider receives a transaction between a user and a website displayed on a first device. The service provider identifies a first geolocation of the first device. The service provider generates a code for display on the first device. The service provider receives credential information to identify the user and the code from a second device. The service provider identifies a second geolocation of the second device, and determines a level of risk for the transaction based at least in part on the first geolocation and the second geolocation. In response to the level of risk being an acceptable level of risk, the service provider authenticates the user. The service provider generates information to enable the user on the first device to perform the transaction with the website, and sends the information to the website. | 1. A method for authenticating a transaction of a user, the method comprising:
receiving by a service provider a transaction between a user and a website displayed on a first device; identifying by the service provider a first geolocation of the first device; generating by the service provider a code for display on the first device; receiving by the service provider from a second device credential information to identify the user; receiving by the service provider the code from the second device; identifying by the service provider a second geolocation of the second device; determining a level of risk for the transaction based at least in part on the first geolocation and the second geolocation; in response to determining that the level of risk is an acceptable level of risk, authenticating the user; generating, by the service provider, information to enable the user on the first device to perform the transaction with the website; and sending the information from the service provider to the website. 2. The method of claim 1, wherein the transaction is a login of the user with the website on the first device. 3. The method of claim 1, wherein the first geolocation is identified from an internet protocol address of the first device. 4. The method of claim 3, wherein identifying the first geolocation further comprises:
identifying the internet protocol address of the first device within a geolocation database; and identifying a geolocation associated with the first device in the geolocation database, wherein the geolocation is selected from at least one of a country, a region, a city, a zip code, a latitude, a longitude, and a time zone. 5. The method of claim 1, wherein the second geolocation is identified from at least one of global positioning location information of the second device, and triangulation signal location information of the second device. 6. The method of claim 1, wherein determining the level of risk for the transaction further comprises:
determining a location proximity between the first device and the second device; and determining that the level of risk is the acceptable level of risk when the location proximity between the first device and the second device is less than a proximity threshold. 7. The method of claim 1, wherein the code is a single use quick response (QR) code, wherein generating the code further comprises:
dynamically generating the single use quick response (QR) code for display on the first device; and receiving the QR code from the second device, wherein the second device is a mobile device having a camera configured to scan the QR code displayed on the first device. 8. The method of claim 1, wherein generating information to enable the website to perform the transaction with the user on the first device further comprises:
generating by the service provider a cookie to enable the user to perform the transaction with the website on the first device; and sending the cookie from the service provider to the first device. 9. A computer system comprising:
a hardware processor; and an authentication system in communication with the processor and configured:
to receive a transaction between a user and a website displayed on a first device;
to identify a first geolocation of the first device;
to generate a code for display on the first device;
to receive credential information from a second device to identify the user;
to receive the code from the second device;
to identify a second geolocation of the second device;
to determine a level of risk for the transaction based at least in part on the first geolocation and the second geolocation;
to authenticate the user in response to determining that the level of risk is an acceptable level of risk;
to generate information to enable the website to perform the transaction with the user on the first device; and
to send the information to the website. 10. The computer system of claim 9, wherein the transaction is a login of the user with the website on the first device. 11. The computer system of claim 9, wherein the first geolocation is identified from an internet protocol address of the first device. 12. The computer system of claim 11, wherein in identifying the first geolocation, the authentication system is further configured:
to identify the internet protocol address of the first device within a geolocation database; and to identify a geolocation associated with the first device in the geolocation database, wherein the geolocation is selected from at least one of a country, a region, a city, a zip code, a latitude, a longitude, and a time zone. 13. The computer system of claim 9, wherein the second geolocation is identified from at least one of global positioning location information of the second device, and triangulation signal location information of the second device. 14. The computer system of claim 9, wherein in determining the level of risk for the transaction, the authentication system is further configured:
to determine a location proximity between the first device and the second device; and to determine that the level of risk is the acceptable level of risk when the location proximity between the first device and the second device is less than a proximity threshold. 15. The computer system of claim 9, wherein the code is a single use quick response (QR) code, and wherein in generating the code, the authentication system is further configured:
to dynamically generate the single use quick response (QR) code for display on the first device; and to receive the QR code from the second device, wherein the second device is a mobile device having a camera configured to scan the QR code displayed on the first device. 16. The computer system of claim 9, wherein in generating information to enable the website to perform the transaction with the user on the first device, the authentication system is further configured:
to generate a cookie to enable the user to perform the transaction with the website on the first device; and to send the cookie from the service provider to the first device. 17. A computer program product for authenticating a transaction of a user, the computer program product comprising:
a computer readable storage media; first program code, stored on the computer readable storage media, for receiving a transaction between a user and a website displayed on a first device; second program code, stored on the computer readable storage media, for identifying a first geolocation of the first device; third program code, stored on the computer readable storage media, for generating a code for display on the first device; fourth program code, stored on the computer readable storage media, for receiving credential information to identify the user from a second device; fifth program code, stored on the computer readable storage media, for receiving the code from the second device; sixth program code, stored on the computer readable storage media, for identifying a second geolocation of the second device; seventh program code, stored on the computer readable storage media, for determining a level of risk for the transaction based at least in part on the first geolocation and the second geolocation; eighth program code, stored on the computer readable storage media, for authenticating the user in response to determining that the level of risk is an acceptable level of risk; ninth program code, stored on the computer readable storage media, for generating information to enable the user on the first device to perform the transaction with the website; and tenth program code, stored on the computer readable storage media, for sending the information from the service provider to the website. 18. The computer program product of claim 17, wherein the transaction is a login of the user with the website on the first device. 19. The computer program product of claim 17, wherein the first geolocation is identified from an internet protocol address of the first device. 20. The computer program product of claim 19, wherein the second program code further comprises:
program code, stored on the computer readable storage media, for identifying the internet protocol address of the first device within a geolocation database; and program code, stored on the computer readable storage media, for identifying a geolocation associated with the first device in the geolocation database, wherein the geolocation is selected from at least one of a country, a region, a city, a zip code, a latitude, a longitude, and a time zone. 21. The computer program product of claim 17, wherein the second geolocation is identified from at least one of global positioning location information of the second device, and triangulation signal location information of the second device. 22. The computer program product of claim 17, wherein the seventh program code further comprises:
program code, stored on the computer readable storage media, for determining a location proximity between the first device and the second device; and program code, stored on the computer readable storage media, for determining that the level of risk is the acceptable level of risk when the location proximity between the first device and the second device is less than a proximity threshold. 23. The computer program product of claim 17, wherein the code is a single use quick response (QR) code, and wherein the third program code further comprises:
program code, stored on the computer readable storage media, for dynamically generating the single use quick response (QR) code for display on the first device; and wherein the fifth program code further comprises: program code, stored on the computer readable storage media, for receiving the QR code from the second device, wherein the second device is a mobile device having a camera configured to scan the QR code displayed on the first device. 24. The computer program product of claim 17, wherein the ninth program code further comprises:
program code, stored on the computer readable storage media, for generating by the service provider a cookie to enable the user to perform the transaction with the website on the first device; and program code, stored on the computer readable storage media, for sending the cookie from the service provider to the first device.
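The proximity-based risk determination of claims 6, 14, and 22 above can be sketched as follows. This is a hypothetical realization, not the claimed implementation: the haversine great-circle distance, the 5 km threshold, and all function names are assumptions made for illustration.

```python
# Hypothetical sketch: risk is acceptable when the two devices'
# geolocations are within a proximity threshold (claims 6/14/22).
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def risk_is_acceptable(first_geo, second_geo, proximity_threshold_km=5.0):
    # first_geo: geolocation of the device displaying the website (e.g.
    # derived from its IP address via a geolocation database, claims 3-4);
    # second_geo: geolocation of the mobile device (e.g. GPS, claim 5).
    distance = haversine_km(*first_geo, *second_geo)
    return distance < proximity_threshold_km

browser_geo = (40.7128, -74.0060)  # first device, IP-derived (hypothetical)
phone_geo = (40.7306, -73.9866)    # second device, GPS (hypothetical)
print(risk_is_acceptable(browser_geo, phone_geo))  # -> True (devices a few km apart)
```

If the two devices geolocate far apart (say, the browser in one city and the phone in another), the distance exceeds the threshold, the risk is not acceptable, and the service provider would decline to authenticate the user.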
8696 | 8696 | 14161016 | 2486 | A method for detecting a double-parked vehicle includes identifying a parking region in video data received from an image capture device monitoring the parking region. The method includes defining an enforcement region at least partially surrounding the parking region. The method includes detecting a stationary vehicle in the enforcement region. The method includes determining the occurrence of an event relative to the stationary vehicle. In response to the determined occurrence of the event, the method includes classifying the stationary vehicle as being one of double parked and not double parked. | 1. A method for detecting a double-parked vehicle, the method comprising:
identifying a parking region in video data received from an image capture device monitoring the parking region; defining an enforcement region at least partially surrounding the parking region; detecting a stationary candidate double-parked vehicle in the enforcement region; determining the occurrence of an event relative to the stationary candidate double-parked vehicle; and, in response to the determined occurrence of the event, classifying the stationary candidate double-parked vehicle as being one of double parked and not double parked. 2. The method of claim 1, wherein the determining the occurrence of the event includes:
determining whether hazard lights are operating on the stationary candidate double-parked vehicle; and, in response to the hazard lights operating on the stationary candidate double-parked vehicle, classifying the stationary candidate double-parked vehicle as being double parked. 3. The method of claim 2, wherein the determining whether the hazard lights are operating includes:
identifying a hazard light region in the sequence of frames surrounding one of a front light and rear light; and, determining pixel colors in the hazard light region in the sequence of frames. 4. The method of claim 1, wherein the determining the occurrence of the event further includes:
determining whether the stationary candidate double-parked vehicle qualifies for an exception. 5. The method of claim 4, wherein the exception is determined to have occurred if the stationary candidate double-parked vehicle is one of an emergency vehicle and a commercial vehicle. 6. The method of claim 1, wherein the determining the occurrence of the event includes:
determining whether an object is located in front of the stationary candidate double-parked vehicle; and, in response to a distance between the object and the stationary candidate double-parked vehicle being below a threshold, classifying the stationary candidate double-parked vehicle as not being double parked. 7. The method of claim 6, wherein the determining the occurrence of the event further includes:
in response to the distance between the object and the stationary candidate double-parked vehicle being equal to or above the threshold, determining if the object is a stop light; determining pixel colors in the stop light region in the sequence of frames; and, in response to the pixel colors being green, classifying the stationary candidate double-parked vehicle as being double parked. 8. The method of claim 1, wherein the determining the occurrence of the event includes:
detecting a second vehicle in the sequence of frames; determining a trajectory of the second vehicle across a number of frames; determining whether the trajectory moves around the stationary candidate double-parked vehicle; and, in response to the trajectory moving around the stationary candidate double-parked vehicle, classifying the stationary candidate double-parked vehicle as being double parked. 9. The method of claim 1, wherein the determining the occurrence of the event includes:
detecting a second stationary vehicle in the parking region located within proximity to the stationary candidate double-parked vehicle in the enforcement region. 10. The method of claim 1, wherein the detecting the vehicle is performed using one of background estimation and subtraction, tracking, temporal difference, optical flow, and an initialization process. 11. The method of claim 1, wherein the detecting the vehicle includes:
determining a number of frames the detected vehicle is located in a same position; comparing the number of frames to a threshold; and, in response to the number of frames meeting or exceeding the threshold, classifying the detected vehicle as the stationary candidate double-parked vehicle. 12. The method of claim 1 further comprising:
in response to the stationary candidate double-parked vehicle being classified as double parked, associating the stationary candidate double-parked vehicle as being a double parked vehicle and providing the user with a notification as output. 13. The method of claim 1, wherein the determining the occurrence of the event includes:
considering multiple events for confirming whether the stationary candidate double-parked vehicle is double parked. 14. A system for detecting a double-parked vehicle, the system comprising:
a double parking confirmation device including a memory for storing modules, including:
a region determination module operative to identify a parking region and an enforcement region at least partially surrounding the parking region in a sequence of frames received from an image capture device monitoring the parking region;
a vehicle detection module operative to detect a stationary candidate double-parked vehicle in the enforcement region;
a vehicle classification module operative to:
determine an occurrence of an event relative to the stationary candidate double-parked vehicle, and
in response to the determined occurrence of the event, classify the stationary candidate double-parked vehicle as being one of double parked and not double parked; and,
a processor in communication with the memory and being operative to execute the modules. 15. The system of claim 14, wherein the vehicle classification module is further operative to:
determine whether hazard lights are operating on the stationary candidate double-parked vehicle; in response to the hazard lights operating on the stationary candidate double-parked vehicle, classify the stationary candidate double-parked vehicle as being double parked. 16. The system of claim 15, wherein the vehicle classification module is further operative to:
identify a hazard light region in the sequence of frames surrounding one of a front light and rear light; determine pixel colors in the hazard light region in the sequence of frames; in response to changes in the pixel colors between frames, classify the stationary candidate double-parked vehicle as operating its hazard lights. 17. The system of claim 14, wherein the determining the occurrence of the event further includes:
determining whether the stationary candidate double-parked vehicle qualifies for an exception. 18. The system of claim 17, wherein the exception is determined to have occurred if the stationary candidate double-parked vehicle is one of an emergency vehicle and a commercial vehicle. 19. The system of claim 13, wherein the vehicle classification module is further operative to:
determine whether an object is located in front of the stationary candidate double-parked vehicle; and, in response to a distance between the object and the stationary candidate double-parked vehicle being below a threshold, classify the stationary candidate double-parked vehicle as not being double parked. 20. The system of claim 19, wherein the vehicle classification module is further operative to:
in response to the distance between the object and the stationary candidate double-parked vehicle being equal to or above the threshold, determine if the object is a stop light; determine pixel colors in the stop light region in the sequence of frames; and, in response to the pixel colors being green, classify the stationary candidate double-parked vehicle as being double parked. 21. The system of claim 14, wherein the vehicle classification module is further operative to:
detect a second vehicle in the sequence of frames; determine a trajectory of the second vehicle across a number of frames; determine whether the trajectory moves around the stationary candidate double-parked vehicle; and, in response to the trajectory moving around the stationary candidate double-parked vehicle, classify the stationary candidate double-parked vehicle as being double parked. 22. The system of claim 13, wherein the vehicle detection module detects the stationary candidate double-parked vehicle using background estimation and subtraction, temporal difference, optical flow, and an initialization process. 23. The system of claim 13, wherein the vehicle detection module is operative to:
determine a number of frames the stationary candidate double-parked vehicle is located in a same position; compare the number of frames to a threshold; and, in response to the number of frames meeting or exceeding the threshold, classify the detected vehicle as being the stationary candidate double-parked vehicle. 24. The system of claim 14, further comprising a notification module adapted to:
in response to the stationary candidate double-parked vehicle being classified as double parked, associate the stationary candidate double-parked vehicle as being a double parked vehicle and provide the user with a notification as output. 25. The system of claim 14, wherein the vehicle classification module is further operative to:
consider multiple events for confirming whether the stationary candidate double-parked vehicle is double parked. 26. The system of claim 14 further comprising an output device for providing the classification to a user. | A method for detecting a double-parked vehicle includes identifying a parking region in video data received from an image capture device monitoring the parking region. The method includes defining an enforcement region at least partially surrounding the parking region. The method includes detecting a stationary vehicle in the enforcement region. The method includes determining the occurrence of an event relative to the stationary vehicle. In response to the determined occurrence of the event, the method includes classifying the stationary vehicle as being one of double parked and not double parked. 1. A method for detecting a double-parked vehicle, the method comprising:
identifying a parking region in video data received from an image capture device monitoring the parking region; defining an enforcement region at least partially surrounding the parking region; detecting a stationary candidate double-parked vehicle in the enforcement region; determining the occurrence of an event relative to the stationary candidate double-parked vehicle; in response to the determined occurrence of the event, classifying the stationary candidate double-parked vehicle as being one of double parked and not double parked. 2. The method of claim 1, wherein the determining the occurrence of the event includes:
determining whether hazard lights are operating on the stationary candidate double-parked vehicle; and, in response to the hazard lights operating on the stationary candidate double-parked vehicle, classifying the stationary candidate double-parked vehicle as being double parked. 3. The method of claim 2, wherein the determining whether the hazard lights are operating includes:
identifying a hazard light region in the sequence of frames surrounding one of a front light and rear light; and, determining pixel colors in the hazard light region in the sequence of frames. 4. The method of claim 1, wherein the determining the occurrence of the event further includes:
determining whether the stationary candidate double-parked vehicle qualifies for an exception. 5. The method of claim 4, wherein the exception is determined to have occurred if the stationary candidate double-parked vehicle is one of an emergency vehicle and a commercial vehicle. 6. The method of claim 1, wherein the determining the occurrence of the event includes:
determining whether an object is located in front of the stationary candidate double-parked vehicle; and, in response to a distance between the object and the stationary candidate double-parked vehicle being below a threshold, classifying the stationary candidate double-parked vehicle as not being double parked. 7. The method of claim 6, wherein the determining the occurrence of the event further includes:
in response to the distance between the object and the stationary candidate double-parked vehicle being equal to or above the threshold, determining if the object is a stop light; determining pixel colors in the stop light region in the sequence of frames; and, in response to the pixel colors being green, classifying the stationary candidate double-parked vehicle as being double parked. 8. The method of claim 1, wherein the determining the occurrence of the event includes:
detecting a second vehicle in the sequence of frames; determining a trajectory of the second vehicle across a number of frames; determining whether the trajectory moves around the stationary candidate double-parked vehicle; and, in response to the trajectory moving around the stationary candidate double-parked vehicle, classifying the stationary candidate double-parked vehicle as being double parked. 9. The method of claim 1, wherein the determining the occurrence of the event includes:
detecting a second stationary vehicle in the parking region located within proximity to the stationary candidate double-parked vehicle in the enforcement region. 10. The method of claim 1, wherein the detecting the vehicle is performed using one of background estimation and subtraction, tracking, temporal difference, optical flow, and an initialization process. 11. The method of claim 1, wherein the detecting the vehicle includes:
determining a number of frames the detected vehicle is located in a same position; comparing the number of frames to a threshold; and, in response to the number of frames meeting or exceeding the threshold, classifying the detected vehicle as the stationary candidate double-parked vehicle. 12. The method of claim 1 further comprising:
in response to the stationary candidate double-parked vehicle being classified as double parked, associating the stationary candidate double-parked vehicle as being a double parked vehicle and providing the user with a notification as output. 13. The method of claim 1, wherein the determining the occurrence of the event includes:
considering multiple events for confirming whether the stationary candidate double-parked vehicle is double parked. 14. A system for detecting a double-parked vehicle, the system comprising:
a double parking confirmation device including a memory for storing modules, including:
a region determination module operative to identify a parking region and an enforcement region at least partially surrounding the parking region in a sequence of frames received from an image capture device monitoring the parking region;
a vehicle detection module operative to detect a stationary candidate double-parked vehicle in the enforcement region;
a vehicle classification module operative to:
determine an occurrence of an event relative to the stationary candidate double-parked vehicle, and
in response to the determined occurrence of the event, classify the stationary candidate double-parked vehicle as being one of double parked and not double parked; and,
a processor in communication with the memory and being operative to execute the modules. 15. The system of claim 14, wherein the vehicle classification module is further operative to:
determine whether hazard lights are operating on the stationary candidate double-parked vehicle; in response to the hazard lights operating on the stationary candidate double-parked vehicle, classify the stationary candidate double-parked vehicle as being double parked. 16. The system of claim 15, wherein the vehicle classification module is further operative to:
identify a hazard light region in the sequence of frames surrounding one of a front light and rear light; determine pixel colors in the hazard light region in the sequence of frames; in response to changes in the pixel colors between frames, classify the stationary candidate double-parked vehicle as operating its hazard lights. 17. The system of claim 14, wherein the determining the occurrence of the event further includes:
determining whether the stationary candidate double-parked vehicle qualifies for an exception. 18. The system of claim 17, wherein the exception is determined to have occurred if the stationary candidate double-parked vehicle is one of an emergency vehicle and a commercial vehicle. 19. The system of claim 13, wherein the vehicle classification module is further operative to:
determine whether an object is located in front of the stationary candidate double-parked vehicle; and, in response to a distance between the object and the stationary candidate double-parked vehicle being below a threshold, classify the stationary candidate double-parked vehicle as not being double parked. 20. The system of claim 19, wherein the vehicle classification module is further operative to:
in response to the distance between the object and the stationary candidate double-parked vehicle being equal to or above the threshold, determine if the object is a stop light; determine pixel colors in the stop light region in the sequence of frames; and, in response to the pixel colors being green, classify the stationary candidate double-parked vehicle as being double parked. 21. The system of claim 14, wherein the vehicle classification module is further operative to:
detect a second vehicle in the sequence of frames; determine a trajectory of the second vehicle across a number of frames; determine whether the trajectory moves around the stationary candidate double-parked vehicle; and, in response to the trajectory moving around the stationary candidate double-parked vehicle, classify the stationary candidate double-parked vehicle as being double parked. 22. The system of claim 13, wherein the vehicle detection module detects the stationary candidate double-parked vehicle using background estimation and subtraction, temporal difference, optical flow, and an initialization process. 23. The system of claim 13, wherein the vehicle detection module is operative to:
determine a number of frames the stationary candidate double-parked vehicle is located in a same position; compare the number of frames to a threshold; and, in response to the number of frames meeting or exceeding the threshold, classify the detected vehicle as being the stationary candidate double-parked vehicle. 24. The system of claim 14, further comprising a notification module adapted to:
in response to the stationary candidate double-parked vehicle being classified as double parked, associate the stationary candidate double-parked vehicle as being a double parked vehicle and provide the user with a notification as output. 25. The system of claim 14, wherein the vehicle classification module is further operative to:
consider multiple events for confirming whether the stationary candidate double-parked vehicle is double parked. 26. The system of claim 14 further comprising an output device for providing the classification to a user. | 2,400 |
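The frame-counting test recited in claims 11 and 23 above (a detected vehicle becomes a stationary candidate double-parked vehicle once it occupies the same position for a threshold number of frames) can be sketched in a few lines. This is an illustrative sketch only, not the claimed method: the function name, the per-frame centroid representation, and the simple `max_drift` position-equality tolerance are all assumptions introduced here for clarity.

```python
# Illustrative sketch (not the claimed method): classify a tracked vehicle as a
# stationary candidate once its position is unchanged, within a small drift
# tolerance, for a threshold number of consecutive frames.

def is_stationary_candidate(centroids, threshold_frames, max_drift=2.0):
    """centroids: per-frame (x, y) positions of one tracked vehicle."""
    run = 1  # current run of frames at (approximately) the same position
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        if abs(x1 - x0) <= max_drift and abs(y1 - y0) <= max_drift:
            run += 1
            if run >= threshold_frames:  # number of frames meets or exceeds the threshold
                return True
        else:
            run = 1  # vehicle moved; restart the count
    return run >= threshold_frames
```

In a real pipeline the centroids would come from one of the detection techniques listed in claim 10 (background estimation and subtraction, tracking, temporal difference, or optical flow), and a positive result would feed the event checks of claims 2 through 9.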
8,697 | 8,697 | 15,388,151 | 2,433 | Network firewalls operate based on rules that define how a firewall should handle traffic passing through the firewall. At their most basic, firewall rules may indicate that certain network traffic should be denied from passing through a network firewall or indicate that certain network traffic should be allowed to pass through the network firewall. Manners of handling network traffic beyond simply allowing or denying the network traffic may also be defined by the rules. For instance, a rule may indicate that certain network traffic should be routed to a specific system. Thus, if an administrator of a network firewall determines that certain network traffic should be handled in a certain way by a network firewall, the administrator need only implement a firewall rule defining how that network traffic should be handled in the network firewall. | 1. A method of reducing the number of rules employed by a network firewall, the method comprising:
identifying related rules of a plurality of rules used by the network firewall, wherein at least one of the plurality of rules comprises criteria of one or more compound groups that each identify a source or destination virtual machine based on at least a security group or a service group for the virtual machine and wherein two rules are related rules when there exists at least one network traffic pattern that can satisfy criteria for both of the rules, including criteria of one or more compound groups of at least one of the two rules; identifying one or more ineffective rules of the related rules based on the relationships between the rules; and adjusting, in the network firewall, the one or more ineffective rules in the plurality of rules to obviate the ineffectiveness of the one or more ineffective rules when handling network traffic exchanged with a plurality of virtual machines based on the plurality of rules. 2. The method of claim 1, further comprising:
presenting, to an administrator of the network firewall, statistics about the one or more ineffective rules and the plurality of rules; receiving confirmation from the administrator of the network firewall that the one or more ineffective rules should be removed from the plurality of rules; and wherein adjusting the one or more ineffective rules occurs in response to the confirmation. 3. The method of claim 1, wherein identifying the related rules comprises:
filtering each rule of the plurality of rules using a plurality of high precedence rules to determine whether one or more of the high precedence rules are related. 4. The method of claim 3, wherein identifying the related rules further comprises:
after filtering each rule, identifying the one or more compound groups within the plurality of rules; and performing one-to-one matching on the one or more compound groups. 5. The method of claim 4, wherein identifying the related rules further comprises:
after performing one-to-one matching, for each rule of the plurality of rules, generating all possible traffic scenarios that match the rule; and for each of the possible traffic scenarios, determining whether one or more of the plurality of high precedence rules covers the possible traffic scenario. 6. The method of claim 1, further comprising:
identifying an updated rule to the plurality of rules; determining whether the updated rule impacts the relationships between the rules. 7. The method of claim 6, wherein determining whether the updated rule impacts the relationships between the rules comprises:
determining whether rules of the plurality of rules that come after the updated rule during rule application will become ineffective due to the updated rule; and determining whether rules of the plurality of rules that come before the updated rule during the rule application will cause the updated rule to be ineffective. 8. The method of claim 6, wherein the updated rule comprises an additional rule, an amendment to a rule, or a deletion of a rule. 9. The method of claim 1, further comprising:
identifying one or more shadowed rules of the related rules based on the relationships between the rules, wherein a rule is a shadowed rule when criteria for the shadowed rule is at least partially overlapped by one or more rules that come before the shadowed rule during rule application. 10. The method of claim 1, wherein adjusting the one or more ineffective rules comprises at least one of deleting at least one of the one or more ineffective rules and merging at least one of the one or more ineffective rules with one or more other rules. 11. A system for reducing the number of rules employed by a network firewall, the system comprising:
one or more computer readable storage media; a processing system operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when read and executed by the processing system, direct the processing system to:
identify related rules of a plurality of rules used by the network firewall, wherein at least one of the plurality of rules comprises criteria of one or more compound groups that each identify a source or destination virtual machine based on at least a security group or a service group for the virtual machine and wherein two rules are related rules when there exists at least one network traffic pattern that can satisfy criteria for both of the rules, including criteria of one or more compound groups of at least one of the two rules;
identify one or more ineffective rules of the related rules based on the relationships between the rules; and
adjust, in the network firewall, the one or more ineffective rules in the plurality of rules to obviate the ineffectiveness of the one or more ineffective rules when handling network traffic exchanged with a plurality of virtual machines based on the plurality of rules. 12. The system of claim 11, wherein the program instructions further direct the processing system to:
present, to an administrator of the network firewall, statistics about the one or more ineffective rules and the plurality of rules; receive confirmation from the administrator of the network firewall that the one or more ineffective rules should be removed from the plurality of rules; and wherein adjustment of the one or more ineffective rules occurs in response to the confirmation. 13. The system of claim 11, wherein to identify the related rules the program instructions direct the processing system to:
filter each rule of the plurality of rules using a plurality of high precedence rules to determine whether one or more of the high precedence rules are related. 14. The system of claim 13, wherein to identify the related rules the program instructions further direct the processing system to:
after filtering each rule, identify the one or more compound groups within the plurality of rules; and perform one-to-one matching on the one or more compound groups. 15. The system of claim 14, wherein to identify the related rules the program instructions further direct the processing system to:
after performing one-to-one matching, for each rule of the plurality of rules, generate all possible traffic scenarios that match the rule; and for each of the possible traffic scenarios, determine whether one or more of the plurality of high precedence rules covers the possible traffic scenario. 16. The system of claim 11, wherein the program instructions further direct the processing system to:
identify an updated rule to the plurality of rules; determine whether the updated rule impacts the relationships between the rules. 17. The system of claim 16, wherein to determine whether the updated rule impacts the relationships between the rules the program instructions direct the processing system to:
determine whether rules of the plurality of rules that come after the updated rule during rule application will become ineffective due to the updated rule; and determine whether rules of the plurality of rules that come before the updated rule during the rule application will cause the updated rule to be ineffective. 18. The system of claim 16, wherein the updated rule comprises an additional rule, an amendment to a rule, or a deletion of a rule. 19. The system of claim 11, wherein the program instructions further direct the processing system to:
identify one or more shadowed rules of the related rules based on the relationships between the rules, wherein a rule is a shadowed rule when criteria for the shadowed rule is at least partially overlapped by one or more rules that come before the shadowed rule during rule application. 20. The system of claim 11, wherein to adjust the one or more ineffective rules, the program instructions direct the processing system to at least one of delete at least one of the one or more ineffective rules and merge at least one of the one or more ineffective rules with one or more other rules. | Network firewalls operate based on rules that define how a firewall should handle traffic passing through the firewall. At their most basic, firewall rules may indicate that certain network traffic should be denied from passing through a network firewall or indicate that certain network traffic should be allowed to pass through the network firewall. Manners of handling network traffic beyond simply allowing or denying the network traffic may also be defined by the rules. For instance, a rule may indicate that certain network traffic should be routed to a specific system. Thus, if an administrator of a network firewall determines that certain network traffic should be handled in a certain way by a network firewall, the administrator need only implement a firewall rule defining how that network traffic should be handled in the network firewall. 1. A method of reducing the number of rules employed by a network firewall, the method comprising:
identifying related rules of a plurality of rules used by the network firewall, wherein at least one of the plurality of rules comprises criteria of one or more compound groups that each identify a source or destination virtual machine based on at least a security group or a service group for the virtual machine and wherein two rules are related rules when there exists at least one network traffic pattern that can satisfy criteria for both of the rules, including criteria of one or more compound groups of at least one of the two rules; identifying one or more ineffective rules of the related rules based on the relationships between the rules; and adjusting, in the network firewall, the one or more ineffective rules in the plurality of rules to obviate the ineffectiveness of the one or more ineffective rules when handling network traffic exchanged with a plurality of virtual machines based on the plurality of rules. 2. The method of claim 1, further comprising:
presenting, to an administrator of the network firewall, statistics about the one or more ineffective rules and the plurality of rules; receiving confirmation from the administrator of the network firewall that the one or more ineffective rules should be removed from the plurality of rules; and wherein adjusting the one or more ineffective rules occurs in response to the confirmation. 3. The method of claim 1, wherein identifying the related rules comprises:
filtering each rule of the plurality of rules using a plurality of high precedence rules to determine whether one or more of the high precedence rules are related. 4. The method of claim 3, wherein identifying the related rules further comprises:
after filtering each rule, identifying the one or more compound groups within the plurality of rules; and performing one-to-one matching on the one or more compound groups. 5. The method of claim 4, wherein identifying the related rules further comprises:
after performing one-to-one matching, for each rule of the plurality of rules, generating all possible traffic scenarios that match the rule; and for each of the possible traffic scenarios, determining whether one or more of the plurality of high precedence rules covers the possible traffic scenario. 6. The method of claim 1, further comprising:
identifying an updated rule to the plurality of rules; determining whether the updated rule impacts the relationships between the rules. 7. The method of claim 6, wherein determining whether the updated rule impacts the relationships between the rules comprises:
determining whether rules of the plurality of rules that come after the updated rule during rule application will become ineffective due to the updated rule; and determining whether rules of the plurality of rules that come before the updated rule during the rule application will cause the updated rule to be ineffective. 8. The method of claim 6, wherein the updated rule comprises an additional rule, an amendment to a rule, or a deletion of a rule. 9. The method of claim 1, further comprising:
identifying one or more shadowed rules of the related rules based on the relationships between the rules, wherein a rule is a shadowed rule when criteria for the shadowed rule is at least partially overlapped by one or more rules that come before the shadowed rule during rule application. 10. The method of claim 1, wherein adjusting the one or more ineffective rules comprises at least one of deleting at least one of the one or more ineffective rules and merging at least one of the one or more ineffective rules with one or more other rules. 11. A system for reducing the number of rules employed by a network firewall, the system comprising:
one or more computer readable storage media; a processing system operatively coupled with the one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media that, when read and executed by the processing system, direct the processing system to:
identify related rules of a plurality of rules used by the network firewall, wherein at least one of the plurality of rules comprises criteria of one or more compound groups that each identify a source or destination virtual machine based on at least a security group or a service group for the virtual machine and wherein two rules are related rules when there exists at least one network traffic pattern that can satisfy criteria for both of the rules, including criteria of one or more compound groups of at least one of the two rules;
identify one or more ineffective rules of the related rules based on the relationships between the rules; and
adjust, in the network firewall, the one or more ineffective rules in the plurality of rules to obviate the ineffectiveness of the one or more ineffective rules when handling network traffic exchanged with a plurality of virtual machines based on the plurality of rules. 12. The system of claim 11, wherein the program instructions further direct the processing system to:
present, to an administrator of the network firewall, statistics about the one or more ineffective rules and the plurality of rules; receive confirmation from the administrator of the network firewall that the one or more ineffective rules should be removed from the plurality of rules; and wherein adjustment of the one or more ineffective rules occurs in response to the confirmation. 13. The system of claim 11, wherein to identify the related rules the program instructions direct the processing system to:
filter each rule of the plurality of rules using a plurality of high precedence rules to determine whether one or more of the high precedence rules are related. 14. The system of claim 13, wherein to identify the related rules the program instructions further direct the processing system to:
after filtering each rule, identify the one or more compound groups within the plurality of rules; and perform one-to-one matching on the one or more compound groups. 15. The system of claim 14, wherein to identify the related rules the program instructions further direct the processing system to:
after performing one-to-one matching, for each rule of the plurality of rules, generate all possible traffic scenarios that match the rule; and for each of the possible traffic scenarios, determine whether one or more of the plurality of high precedence rules covers the possible traffic scenario. 16. The system of claim 11, wherein the program instructions further direct the processing system to:
identify an updated rule to the plurality of rules; determine whether the updated rule impacts the relationships between the rules. 17. The system of claim 16, wherein to determine whether the updated rule impacts the relationships between the rules the program instructions direct the processing system to:
determine whether rules of the plurality of rules that come after the updated rule during rule application will become ineffective due to the updated rule; and determine whether rules of the plurality of rules that come before the updated rule during the rule application will cause the updated rule to be ineffective. 18. The system of claim 16, wherein the updated rule comprises an additional rule, an amendment to a rule, or a deletion of a rule. 19. The system of claim 11, wherein the program instructions further direct the processing system to:
identify one or more shadowed rules of the related rules based on the relationships between the rules, wherein a rule is a shadowed rule when criteria for the shadowed rule is at least partially overlapped by one or more rules that come before the shadowed rule during rule application. 20. The system of claim 11, wherein to adjust the one or more ineffective rules, the program instructions direct the processing system to at least one of delete at least one of the one or more ineffective rules and merge at least one of the one or more ineffective rules with one or more other rules. | 2,400 |
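The shadowed-rule definition in claims 9 and 19 above (a rule is shadowed when its criteria are overlapped by rules that come before it in application order) can be illustrated with a small sketch. This is an assumption-laden illustration, not the claimed algorithm: modeling each rule's criteria as plain sets of sources, destinations, and services, and the names `covers` and `shadowed_rules`, are introduced here; the sketch also only flags the fully shadowed case, whereas the claims cover partial overlap as well.

```python
# Illustrative sketch (not the claimed algorithm): model each firewall rule's
# criteria as sets, and flag a rule as fully shadowed when some earlier rule
# in application order covers every traffic pattern the later rule can match.

def covers(earlier, later):
    """True if every packet matching `later` also matches `earlier`."""
    return all(later[key] <= earlier[key] for key in ("src", "dst", "svc"))

def shadowed_rules(rules):
    """rules: list of criteria dicts in precedence order; returns shadowed indices."""
    shadowed = []
    for i, rule in enumerate(rules):
        if any(covers(prev, rule) for prev in rules[:i]):
            shadowed.append(i)  # an earlier rule already handles all of this traffic
    return shadowed
```

In the workflow of claims 1 and 2, detection like this would run before any adjustment, with statistics presented to the administrator and the shadowed or otherwise ineffective rules deleted or merged only after confirmation.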
8,698 | 8,698 | 16,125,256 | 2,434 | A transfer of master data is executed in a backend computing system. The master data includes user data and system data. The transfer of master data includes receiving user data associated with a particular user identifier in the backend computing system, transferring the received user data to an event stream processor, receiving system data associated with a particular log providing computing system in the backend computing system, transferring the received user data to the event stream processor, and executing a transfer of log data associated with logs of computing systems connected to the backend computing system. | 1. A computer-implemented method, comprising:
executing a transfer of master data in a backend computing system, wherein the master data includes user data and system data, and wherein the transfer of master data comprises: receiving user data associated with a particular user id in the backend computing system; transferring the received user data to an event stream processor (ESP); receiving system data associated with a particular log providing computing system in the backend computing system; transferring the received user data to the ESP; and executing a transfer of log data associated with logs of computing systems connected to the backend computing system. 2. The method of claim 1, wherein, for user data, the master data is received from a user management system or an identity management system, and wherein, for system data, the master data is received from system context data associated with a particular computing system connected to the backend computing system as determined by a software application executing on the backend computing system or manual maintenance data related to a particular system context. 3. The method of claim 1, wherein the user data is associated with a single individual or a common identification associated with multiple individuals. 4. The method of claim 1, comprising:
if determined that a user context associated with the particular user is not available to the ESP, creating a new user context associated with the particular user; and
if determined that a user context associated with the particular user is available to the ESP, updating the user context associated with the particular user. 5. The method of claim 1, comprising:
if determined that a system context associated with the particular log providing computing system is not available to the ESP, creating a new system context associated with the particular log providing computing system; and if determined that a system context associated with the particular log providing computing system is available to the ESP, updating the system context associated with the particular log providing computing system. 6. The method of claim 1, wherein the transfer of log data comprises:
reading log data from a particular log associated with a particular computing system, wherein the log data is read starting with the latest timestamp; transferring read log data to the ESP, wherein the read log data is transformed into a normalized format prior to transfer; and enriching each log entry of the transferred log data. 7. The method of claim 6, wherein the enrichment of each particular log entry comprises:
attempting to read a user context for a particular user id associated with the particular log entry; if a user context for the particular user id is found within the backend computing system, writing into the particular log a user context id associated with the user context; if a user context for the particular user id is not found within the backend computing system, creating a new user context within the backend computing system and writing into the particular log a user context id associated with the new user context; removing the original user id from the particular log entry; and
writing a revised log entry into the backend computing system. 8. A non-transitory, computer-readable medium storing computer-readable instructions, the instructions executable by a computer and configured to:
execute a transfer of master data in a backend computing system, wherein the master data includes user data and system data, and wherein the transfer of master data comprises:
receive user data associated with a particular user id in the backend computing system;
transfer the received user data to an event stream processor (ESP);
receive system data associated with a particular log providing computing system in the backend computing system;
transfer the received user data to the ESP; and
execute a transfer of log data associated with logs of computing systems connected to the backend computing system. 9. The non-transitory, computer-readable medium of claim 8, wherein, for user data, the master data is received from a user management system or an identity management system, and wherein, for system data, the master data is received from system context data associated with a particular computing system connected to the backend computing system as determined by a software application executing on the backend computing system or manual maintenance data related to a particular system context. 10. The non-transitory, computer-readable medium of claim 8, wherein the user data is associated with a single individual or a common identification associated with multiple individuals. 11. The non-transitory, computer-readable medium of claim 8, the instructions further configured to:
if determined that a user context associated with the particular user is not available to the ESP, create a new user context associated with the particular user; and
if determined that a user context associated with the particular user is available to the ESP, update the user context associated with the particular user. 12. The non-transitory, computer-readable medium of claim 8, the instructions further configured to:
if determined that a system context associated with the particular log providing computing system is not available to the ESP, create a new system context associated with the particular log providing computing system; and if determined that a system context associated with the particular log providing computing system is available to the ESP, update the system context associated with the particular log providing computing system. 13. The non-transitory, computer-readable medium of claim 8, wherein the transfer of log data comprises:
reading log data from a particular log associated with a particular computing system, wherein the log data is read starting with the latest timestamp; transferring read log data to the ESP, wherein the read log data is transformed into a normalized format prior to transfer; and enriching each log entry of the transferred log data. 14. The non-transitory, computer-readable medium of claim 13, wherein the enrichment of each particular log entry comprises:
attempting to read a user context for a particular user id associated with the particular log entry; if a user context for the particular user id is found within the backend computing system, writing into the particular log a user context id associated with the user context; if a user context for the particular user id is not found within the backend computing system, creating a new user context within the backend computing system and writing into the particular log a user context id associated with the new user context; removing the original user id from the particular log entry; and
writing a revised log entry into the backend computing system. 15. A system, comprising:
a memory; at least one hardware processor interoperably coupled with the memory and configured to:
execute a transfer of master data in a backend computing system, wherein the master data includes user data and system data, and wherein the transfer of master data comprises:
receive user data associated with a particular user id in the backend computing system;
transfer the received user data to an event stream processor (ESP);
receive system data associated with a particular log providing computing system in the backend computing system;
transfer the received user data to the ESP; and
execute a transfer of log data associated with logs of computing systems connected to the backend computing system. 16. The system of claim 15, wherein, for user data, the master data is received from a user management system or an identity management system, and wherein, for system data, the master data is received from system context data associated with a particular computing system connected to the backend computing system as determined by a software application executing on the backend computing system or manual maintenance data related to a particular system context. 17. The system of claim 15, wherein the user data is associated with a single individual or a common identification associated with multiple individuals. 18. The system of claim 15, the processor further configured to:
if determined that a user context associated with the particular user is not available to the ESP, create a new user context associated with the particular user; and if determined that a user context associated with the particular user is available to the ESP, update the user context associated with the particular user. 19. The system of claim 15, the processor further configured to:
if determined that a system context associated with the particular log providing computing system is not available to the ESP, create a new system context associated with the particular log providing computing system; and if determined that a system context associated with the particular log providing computing system is available to the ESP, update the system context associated with the particular log providing computing system. 20. The system of claim 15, wherein the transfer of log data comprises:
reading log data from a particular log associated with a particular computing system, wherein the log data is read starting with the latest timestamp; transferring read log data to the ESP, wherein the read log data is transformed into a normalized format prior to transfer; and enriching each log entry of the transferred log data.
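The enrichment flow in claims 6 and 7 of the row above (read a user context for the entry's user id, create a new context when none is found, remove the original user id, and write the context id into the revised entry) can be sketched as follows; the in-memory context store and the `ctx-N` id scheme are illustrative assumptions, not the patent's implementation:

```python
import itertools

# Illustrative in-memory stand-in for the backend's user-context store.
_user_contexts = {}              # original user id -> user context id
_next_ctx = itertools.count(1)

def enrich_log_entry(entry):
    """Replace the raw user id in a log entry with a user context id,
    creating a new user context when none exists yet (claims 6-7)."""
    user_id = entry.pop("user_id")        # remove the original user id
    ctx_id = _user_contexts.get(user_id)
    if ctx_id is None:
        # No user context found: create one and remember it.
        ctx_id = f"ctx-{next(_next_ctx)}"
        _user_contexts[user_id] = ctx_id
    entry["user_context_id"] = ctx_id     # write the revised log entry
    return entry

first = enrich_log_entry({"user_id": "alice", "event": "login"})
second = enrich_log_entry({"user_id": "alice", "event": "logout"})
print(first["user_context_id"] == second["user_context_id"])  # True
```

Because the same user id always maps to the same context id, entries remain correlatable downstream even though the raw id never leaves the backend, which is the point of the removal step in claim 7.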
8,699 | 8,699 | 15,651,521 | 2,483 | An aerosol delivery device includes a cartridge of aerosol precursor composition, and a control body coupled or coupleable to the cartridge. The control body includes a control component to control delivery of components of the aerosol precursor composition in response to detection of airflow through at least a portion of the cartridge or control body, and includes a camera system with a digital camera to capture video imagery of a scene in a field of view thereof. The camera system or the control component is configured to perform video content analytics on the video imagery to detect a temporal or spatial event in the scene, and transfer at least one of the video imagery or information indicative of the temporal or spatial event externally to a computing device configured to store or display the video imagery or information, or perform at least one control operation based on the information. | 1. An aerosol delivery device comprising:
a cartridge of aerosol precursor composition; and a control body coupled or coupleable to the cartridge, the control body including a control component configured to control delivery of components of the aerosol precursor composition in response to detection of airflow through at least a portion of the cartridge or control body, and including a camera system with a digital camera configured to capture video imagery of a scene in a field of view thereof, wherein the camera system or the control component is configured to perform video content analytics on the video imagery to detect a temporal or spatial event in the scene, and transfer at least one of the video imagery or information indicative of the temporal or spatial event externally to a computing device configured to store or display the video imagery or information, or perform at least one control operation based on the information. 2. The aerosol delivery device of claim 1, wherein the camera system further includes a processor configured to perform the video content analytics on the video imagery, and the control component is configured to transfer at least one of the video imagery or information to the computing device. 3. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a number of people in a room climate controlled by a heating or cooling system, and the computing device is a network-connected thermostat of or coupled to the heating or cooling system, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the number of people in the room to the network-connected thermostat configured to perform at least one control operation including control of the heating or cooling system based thereon. 4. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a person in an environment with an electric light, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the person in the environment to the computing device configured to perform at least one control operation including control of the at least one electric light based thereon. 5. The aerosol delivery device of claim 4, wherein the electric light is a network-connected electric light with the computing device embedded therein, and the camera system or control component being configured to transfer the information includes being configured to transfer the information to the network-connected electric light. 6. The aerosol delivery device of claim 4, wherein the computing device is a network-connected light switch coupled to the electric light, and the camera system or control component being configured to transfer the information includes being configured to transfer the information to the network-connected light switch. 7. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a person or number of people in an environment, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the person or number of people in the environment to the computing device configured to perform at least one control operation including output of a notification of the person or number of people in the environment. 8. The aerosol delivery device of claim 7, wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer the video imagery and information indicative of the person or number of people in the environment to the computing device configured to perform at least one control operation including output of the notification that includes display of the video imagery with a visual indicator of the person or number of people thereon. 9. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a person or number of people in an environment, and
wherein the camera system or the control component is configured to transfer at least one of the video imagery or information in response to a second temporal or spatial event in the environment, the camera system or the control component being configured to transfer an indication of the second temporal or spatial event with the video imagery or information, and the second temporal or spatial event including a hazardous condition in the environment. 10. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a person in an environment, and the camera system or the control component being configured to perform video content analytics includes being configured to detect a facial expression of the person, and
wherein the camera system or the control component is configured to transfer at least one of the video imagery or information in response to a second temporal or spatial event, the camera system or the control component being configured to transfer an indication of the second temporal or spatial event with the video imagery or information, and the second temporal or spatial event including distress detected based on the facial expression of the person. 11. The aerosol delivery device of claim 1, wherein the scene is a parking lot including a layout of parking spaces, the temporal or spatial event in the scene is an open one of the parking spaces, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer the video imagery and information indicative of the open one of the parking spaces to the computing device configured to perform at least one control operation including display of the video imagery with a visual indicator of the open one of the parking spaces thereon. 12. The aerosol delivery device of claim 1, wherein the housing is coupled to the cartridge including a reservoir of aerosol precursor composition comprising glycerin and nicotine. 13. A control body of an aerosol delivery device, the control body comprising:
a housing to which a cartridge of aerosol precursor composition is coupled or coupleable to form the aerosol delivery device; a control component contained within the housing and configured to control delivery of components of the aerosol precursor composition in response to detection of airflow through at least a portion of the housing or cartridge; and a camera system with a digital camera configured to capture video imagery of a scene in a field of view thereof, wherein the camera system or the control component is configured to perform video content analytics on the video imagery to detect a temporal or spatial event in the scene, and transfer at least one of the video imagery or information indicative of the temporal or spatial event externally to a computing device configured to store or display the video imagery or information, or perform at least one control operation based on the information. 14. The control body of claim 13, wherein the camera system further includes a processor configured to perform the video content analytics on the video imagery, and the control component is configured to transfer at least one of the video imagery or information to the computing device. 15. The control body of claim 13, wherein the temporal or spatial event is a number of people in a room climate controlled by a heating or cooling system, and the computing device is a network-connected thermostat of or coupled to the heating or cooling system, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the number of people in the room to the network-connected thermostat configured to perform at least one control operation including control of the heating or cooling system based thereon. 16. The control body of claim 13, wherein the temporal or spatial event is a person in an environment with an electric light, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the person in the environment to the computing device configured to perform at least one control operation including control of the at least one electric light based thereon. 17. The control body of claim 16, wherein the electric light is a network-connected electric light with the computing device embedded therein, and the camera system or control component being configured to transfer the information includes being configured to transfer the information to the network-connected electric light. 18. The control body of claim 16, wherein the computing device is a network-connected light switch coupled to the electric light, and the camera system or control component being configured to transfer the information includes being configured to transfer the information to the network-connected light switch. 19. The control body of claim 13, wherein the temporal or spatial event is a person or number of people in an environment, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the person or number of people in the environment to the computing device configured to perform at least one control operation including output of a notification of the person or number of people in the environment. 20. The control body of claim 19, wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer the video imagery and information indicative of the person or number of people in the environment to the computing device configured to perform at least one control operation including output of the notification that includes display of the video imagery with a visual indicator of the person or number of people thereon. 21. The control body of claim 13, wherein the temporal or spatial event is a person or number of people in an environment, and
wherein the camera system or the control component is configured to transfer at least one of the video imagery or information in response to a second temporal or spatial event in the environment, the camera system or the control component being configured to transfer an indication of the second temporal or spatial event with the video imagery or information, and the second temporal or spatial event including a hazardous condition in the environment. 22. The control body of claim 13, wherein the temporal or spatial event is a person in an environment, and the camera system or the control component being configured to perform video content analytics includes being configured to detect a facial expression of the person, and
wherein the camera system or the control component is configured to transfer at least one of the video imagery or information in response to a second temporal or spatial event, the camera system or the control component being configured to transfer an indication of the second temporal or spatial event with the video imagery or information, and the second temporal or spatial event including distress detected based on the facial expression of the person. 23. The control body of claim 13, wherein the scene is a parking lot including a layout of parking spaces, the temporal or spatial event in the scene is an open one of the parking spaces, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer the video imagery and information indicative of the open one of the parking spaces to the computing device configured to perform at least one control operation including display of the video imagery with a visual indicator of the open one of the parking spaces thereon. 24. The control body of claim 13, wherein the housing is coupled to the cartridge including a reservoir of aerosol precursor composition comprising glycerin and nicotine. | An aerosol delivery device includes a cartridge of aerosol precursor composition, and a control body coupled or coupleable to the cartridge. The control body includes a control component to control delivery of components of the aerosol precursor composition in response to detection of airflow through at least a portion of the cartridge or control body, and includes a camera system with a digital camera to capture video imagery of a scene in a field of view thereof. The camera system or the control component is configured to perform video content analytics on the video imagery to detect a temporal or spatial event in the scene, and transfer at least one of the video imagery or information indicative of the temporal or spatial event externally to a computing device configured to store or display the video imagery or information, or perform at least one control operation based on the information. 1. An aerosol delivery device comprising:
a cartridge of aerosol precursor composition; and a control body coupled or coupleable to the cartridge, the control body including a control component configured to control delivery of components of the aerosol precursor composition in response to detection of airflow through at least a portion of the cartridge or control body, and including a camera system with a digital camera configured to capture video imagery of a scene in a field of view thereof, wherein the camera system or the control component is configured to perform video content analytics on the video imagery to detect a temporal or spatial event in the scene, and transfer at least one of the video imagery or information indicative of the temporal or spatial event externally to a computing device configured to store or display the video imagery or information, or perform at least one control operation based on the information. 2. The aerosol delivery device of claim 1, wherein the camera system further includes a processor configured to perform the video content analytics on the video imagery, and the control component is configured to transfer at least one of the video imagery or information to the computing device. 3. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a number of people in a room climate controlled by a heating or cooling system, and the computing device is a network-connected thermostat of or coupled to the heating or cooling system, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the number of people in the room to the network-connected thermostat configured to perform at least one control operation including control of the heating or cooling system based thereon. 4. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a person in an environment with an electric light, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the person in the environment to the computing device configured to perform at least one control operation including control of the at least one electric light based thereon. 5. The aerosol delivery device of claim 4, wherein the electric light is a network-connected electric light with the computing device embedded therein, and the camera system or control component being configured to transfer the information includes being configured to transfer the information to the network-connected electric light. 6. The aerosol delivery device of claim 4, wherein the computing device is a network-connected light switch coupled to the electric light, and the camera system or control component being configured to transfer the information includes being configured to transfer the information to the network-connected light switch. 7. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a person or number of people in an environment, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the person or number of people in the environment to the computing device configured to perform at least one control operation including output of a notification of the person or number of people in the environment. 8. The aerosol delivery device of claim 7, wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer the video imagery and information indicative of the person or number of people in the environment to the computing device configured to perform at least one control operation including output of the notification that includes display of the video imagery with a visual indicator of the person or number of people thereon. 9. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a person or number of people in an environment, and
wherein the camera system or the control component is configured to transfer at least one of the video imagery or information in response to a second temporal or spatial event in the environment, the camera system or the control component being configured to transfer an indication of the second temporal or spatial event with the video imagery or information, and the second temporal or spatial event including a hazardous condition in the environment. 10. The aerosol delivery device of claim 1, wherein the temporal or spatial event is a person in an environment, and the camera system or the control component being configured to perform video content analytics includes being configured to detect a facial expression of the person, and
wherein the camera system or the control component is configured to transfer at least one of the video imagery or information in response to a second temporal or spatial event, the camera system or the control component being configured to transfer an indication of the second temporal or spatial event with the video imagery or information, and the second temporal or spatial event including distress detected based on the facial expression of the person. 11. The aerosol delivery device of claim 1, wherein the scene is a parking lot including a layout of parking spaces, the temporal or spatial event in the scene is an open one of the parking spaces, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer the video imagery and information indicative of the open one of the parking spaces to the computing device configured to perform at least one control operation including display of the video imagery with a visual indicator of the open one of the parking spaces thereon. 12. The aerosol delivery device of claim 1, wherein the housing is coupled to the cartridge including a reservoir of aerosol precursor composition comprising glycerin and nicotine. 13. A control body of an aerosol delivery device, the control body comprising:
a housing to which a cartridge of aerosol precursor composition is coupled or coupleable to form the aerosol delivery device; a control component contained within the housing and configured to control delivery of components of the aerosol precursor composition in response to detection of airflow through at least a portion of the housing or cartridge; and a camera system with a digital camera configured to capture video imagery of a scene in a field of view thereof, wherein the camera system or the control component is configured to perform video content analytics on the video imagery to detect a temporal or spatial event in the scene, and transfer at least one of the video imagery or information indicative of the temporal or spatial event externally to a computing device configured to store or display the video imagery or information, or perform at least one control operation based on the information. 14. The control body of claim 13, wherein the camera system further includes a processor configured to perform the video content analytics on the video imagery, and the control component is configured to transfer at least one of the video imagery or information to the computing device. 15. The control body of claim 13, wherein the temporal or spatial event is a number of people in a room climate controlled by a heating or cooling system, and the computing device is a network-connected thermostat of or coupled to the heating or cooling system, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the number of people in the room to the network-connected thermostat configured to perform at least one control operation including control of the heating or cooling system based thereon. 16. The control body of claim 13, wherein the temporal or spatial event is a person in an environment with an electric light, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the person in the environment to the computing device configured to perform at least one control operation including control of the at least one electric light based thereon. 17. The control body of claim 16, wherein the electric light is a network-connected electric light with the computing device embedded therein, and the camera system or control component being configured to transfer the information includes being configured to transfer the information to the network-connected electric light. 18. The control body of claim 16, wherein the computing device is a network-connected light switch coupled to the electric light, and the camera system or control component being configured to transfer the information includes being configured to transfer the information to the network-connected light switch. 19. The control body of claim 13, wherein the temporal or spatial event is a person or number of people in an environment, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer information indicative of the person or number of people in the environment to the computing device configured to perform at least one control operation including output of a notification of the person or number of people in the environment. 20. The control body of claim 19, wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer the video imagery and information indicative of the person or number of people in the environment to the computing device configured to perform at least one control operation including output of the notification that includes display of the video imagery with a visual indicator of the person or number of people thereon. 21. The control body of claim 13, wherein the temporal or spatial event is a person or number of people in an environment, and
wherein the camera system or the control component is configured to transfer at least one of the video imagery or information in response to a second temporal or spatial event in the environment, the camera system or the control component being configured to transfer an indication of the second temporal or spatial event with the video imagery or information, and the second temporal or spatial event including a hazardous condition in the environment. 22. The control body of claim 13, wherein the temporal or spatial event is a person in an environment, and the camera system or the control component being configured to perform video content analytics includes being configured to detect a facial expression of the person, and
wherein the camera system or the control component is configured to transfer at least one of the video imagery or information in response to a second temporal or spatial event, the camera system or the control component being configured to transfer an indication of the second temporal or spatial event with the video imagery or information, and the second temporal or spatial event including distress detected based on the facial expression of the person. 23. The control body of claim 13, wherein the scene is a parking lot including a layout of parking spaces, the temporal or spatial event in the scene is an open one of the parking spaces, and
wherein the camera system or the control component being configured to transfer at least one of the video imagery or information includes being configured to transfer the video imagery and information indicative of the open one of the parking spaces to the computing device configured to perform at least one control operation including display of the video imagery with a visual indicator of the open one of the parking spaces thereon. 24. The control body of claim 13, wherein the housing is coupled to the cartridge including a reservoir of aerosol precursor composition comprising glycerin and nicotine. | 2,400 |