UIProgressView with progress at low values issue
I'm running a progress bar which updates every second. It needs to run for upwards of 300 seconds so the changes in the progress bar's 'progress' variable are on the magnitude of about 0.003 per second.
I've noticed a problem with low values of the 'progress' variable, though. Basically, there does not seem to be any difference between 0.09 and anything lower than that. So what ends up happening is when the progress bar is in the 0.0 - 0.09 range it doesn't show any visual changes even though the progress variable is changing. Another side effect is if I start the progress bar from 0.0 it immediately jumps to the image for 0.09 or so (and remains there until the real progress has passed that), which looks a little odd.
It's probably worth mentioning that this doesn't seem to be an issue anywhere else along the progress bar. It's able to move the bar 0.003 progress at a time everywhere else, including very near to 1.0.
I suppose it's not a huge deal, but I was wondering if anyone knew a way around this.
Thanks in advance.
What's the width of your progress bar? Are you on a Retina display?
Its width was 110, and I'm not able to see a difference between Retina and normal, at least in the simulator. Based on your comment I tried making the width larger, and the effect was a smaller "dead" range, but it was still significant with a 300-width progress bar. I believe I see the reasoning for longer bars having more accuracy, but it still doesn't make sense to me why the bar moves along just fine at every point after the small dead zone at the start.
I got the exact same problem as you, more than 5 years later... What is this very strange bug? I can't solve it
I'm having this exact same issue at the end of 2016 with Swift! I would have thought Apple would have addressed this issue by now, but I guess not. It's definitely annoying.
Try using 2 progress bars. One for micro and one for macro. Use the micro one to reflect a 0.1 range in the macro bar. That way you can show the finer movements.
Thanks for the idea. I gave it a shot, making the micro bar width 10 for a width 100 macro bar. Unfortunately the micro bar seems to be too small to register changes. To me, it looks exactly the same at 0.001 as it does at 1.0. Basically at this size it has one image for 0.0 (empty bar) and one image for everything above that (full bar). This is true both from looking at it in the storyboard and simulating it retina and non-retina. Unless I misunderstood.
I think you did misunderstand. I meant for you to use the micro bar to reflect every 0.1. Meaning you multiply by 10. For example, your current count is 0.452 you reflect 0.4 on your macro bar and reflect 0.52 on your micro bar. If your count increases from 0.452 to 0.482 your macro shows 0.482 but your micro now shows 0.82. When your count reaches 0.5 you reset your micro back to zero and start again...
Ahh, okay I see what you mean now! I was originally thinking you meant to overlay a smaller bar over the bigger bar to cover this "dead-zone" between 0 and 0.1, but I see that you mean actually having two bars separately (I think). I may have to go this route. Still, it bothers me that it's only in the 0 to 0.1 range that I cannot get the accuracy I need with one bar. Everywhere between 0.1 to 1.0 can register 0.003 size changes just fine for whatever reason. Thanks for your help.
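The arithmetic behind that two-bar scheme can be sketched in a few lines (the function name and variable names here are ours, not from the thread):

```python
def micro_progress(count):
    """Progress to show on the 10x-magnified micro bar for the
    current 0.1-wide slice of the macro bar."""
    return (count * 10) % 1.0

# 0.452 overall -> the micro bar shows 0.52 of its own width
assert abs(micro_progress(0.452) - 0.52) < 1e-9
# 0.482 overall -> the micro bar shows 0.82
assert abs(micro_progress(0.482) - 0.82) < 1e-9
# at 0.5 the micro bar wraps back to zero
assert micro_progress(0.5) == 0.0
```

In the UI you would drive the macro bar with the raw count and the micro bar with `micro_progress(count)`, which visually resets each time the count crosses a 0.1 boundary.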
I'm aware this post is essentially dead, but I came up with a way to deal with it that doesn't involve multiple UIProgressViews.
I decided to simply forget UIProgressView altogether and write a fake one leveraging UISlider instead. I removed the knob with [setThumbImage:[UIImage new] forState:UIControlStateNormal] and disabled user interaction. From here, I did everything else similar to how the UIProgressView works except obviously conforming to how the slider handles its tracking.
I also subclassed UISlider to utilize (CGRect)trackRectForBounds:(CGRect)bounds so that I could set the track height to look more like a UIProgressView.
Hope this helps someone else.
I had the same issue few days ago, solved it by removing the UIProgressView and using two UIView instances and UIViewPropertyAnimator instance.
Basically the first view represents an empty progress, it should take the width of your progress bar, second view represents the progress movements, it should take zero width initially, it should be above the first view, and both views should have different colors.
Maybe a light gray color for the empty progress view and dark gray color for the actual progress view.
Then, when your progress starts, increase the width of the second view, so it will overlay the empty progress view.
Use UIViewPropertyAnimator to make the animation, it provides easy and useful built-in functionalities to handle the animation smoothly, for example you can pause the animation and resume it.
[Source: STACK_EXCHANGE]
Unable to drop user due to assigned privileges?
How do you drop a user with read-only permission to PostgreSQL?
I created a user, with read-only access to all tables, on an RDS Postgres database with:
CREATE ROLE myuser WITH LOGIN PASSWORD 'supersecretpassword';
ALTER USER myuser SET search_path=myschema;
GRANT CONNECT ON DATABASE mydb TO myuser;
GRANT USAGE ON SCHEMA myschema TO myuser;
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO myuser;
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO myuser;
Now I'd like to drop this user's access, so I tried the obvious:
DROP USER myuser;
But this gives me the error:
Could not drop the role.
ERROR: role "myuser" cannot be dropped because some objects depend on it
DETAIL: privileges for database mydb
privileges for schema myschema
privileges for default privileges on new relations belonging to role mainrole in schema myschema
If I try:
DROP OWNED BY myuser;
That also gives me the error:
ERROR: permission denied to drop objects
My role is the root login for my RDS instance, so not sure why I can't drop objects. Must be some RDS limitation. So maybe if I get a list of all the objects owned by the user I want to drop as outlined in this post, I can get a sense of what I need to reassign?
However, if I run:
select
nsp.nspname as SchemaName
,cls.relname as ObjectName
,rol.rolname as ObjectOwner
,case cls.relkind
when 'r' then 'TABLE'
when 'm' then 'MATERIALIZED_VIEW'
when 'i' then 'INDEX'
when 'S' then 'SEQUENCE'
when 'v' then 'VIEW'
when 'c' then 'TYPE'
else cls.relkind::text
end as ObjectType
from pg_class cls
join pg_roles rol
on rol.oid = cls.relowner
join pg_namespace nsp
on nsp.oid = cls.relnamespace
where nsp.nspname not in ('information_schema', 'pg_catalog')
and nsp.nspname not like 'pg_toast%'
and rol.rolname = 'myuser'
order by nsp.nspname, cls.relname
it returns nothing.
So I can't drop the user because it owns stuff. But also it doesn't own anything? What am I missing?
This is similar to this question, but none of its solutions work, presumably because this is on RDS.
This is the safe sequence of commands:
Repeat in every database of the same DB cluster, where the role may own anything or may have any privileges:
REASSIGN OWNED BY myuser TO rds_superuser; -- optional; see below
DROP OWNED BY myuser;
Then, once, in any database of the same DB cluster:
DROP ROLE myuser;
The manual:
DROP USER is simply an alternate spelling of DROP ROLE.
DROP OWNED also removes all privileges including DEFAULT PRIVILEGES, which is your problem in particular.
REASSIGN OWNED is optional in your case, since the role myuser does not own anything (only privileges). But it's the safe way if there can be objects that you might not want to destroy. I chose rds_superuser as the target role since that seems to be the role you are operating with. Adjust to your needs. The manual:
REASSIGN OWNED requires membership on both the source role(s) and the target role.
See:
Cannot drop PostgreSQL role. Error: `cannot be dropped because some objects depend on it`
Find objects linked to a PostgreSQL role
Like I mentioned, the solutions in that other question do not work for me. Specifically, RDS does not support "REASSIGN" and gives me the error ERROR: permission denied to reassign objects even for my RDS superuser.
Like I said, REASSIGN OWNED is optional for you. DROP OWNED does the trick.
It seems you first need to drop default privileges you granted:
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema REVOKE SELECT ON TABLES FROM myuser;
You can get default privileges using the query from https://stackoverflow.com/a/14555063
SELECT
nspname, -- schema name
defaclobjtype, -- object type
defaclacl -- default access privileges
FROM pg_default_acl a JOIN pg_namespace b ON a.defaclnamespace=b.oid;
Even after revoking those privileges, running DROP USER myuser; still results in:
ERROR: role "myuser" cannot be dropped because some objects depend on it
DETAIL: privileges for database mydb
privileges for schema myschema
If you tried DROP OWNED (which should take care of permissions) and it gave you an error, I'd suggest revoking everything you granted explicitly, in addition to the default privileges: e.g. REVOKE USAGE, REVOKE CONNECT, REVOKE SELECT, and then try to drop the user again.
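Putting the thread together, the teardown has to undo each of the original grants (same object names as in the question) before the role can be dropped; roughly:

```sql
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema REVOKE SELECT ON TABLES FROM myuser;
REVOKE SELECT ON ALL TABLES IN SCHEMA myschema FROM myuser;
REVOKE USAGE ON SCHEMA myschema FROM myuser;
REVOKE CONNECT ON DATABASE mydb FROM myuser;
DROP USER myuser;
```

If DROP OWNED BY myuser fails with a permission error on RDS, granting the role to your admin user first (GRANT myuser TO your_admin;) is commonly needed, since DROP OWNED requires the privileges of the role being dropped.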
[Source: STACK_EXCHANGE]
Sender information in CalendarApp.CalendarEvent
I'm a bit baffled! Many of my customers send me GCal invites, but the creator shows up as me. I want to extract the actual sender's address from the CalendarEvent. See the screen shot as an example. Kathryn is the sender, but I am listed as the creator.
Sample screen shot showing different sender vs. creator
Can you provide more details about how the events are created, including the code related to the event creation? Why do you think the other person should be the sender?
The screen grab above was taken from a Microsoft Teams invite that came to me from outside my organization. I did not create the event programmatically. Despite this, the creator is me even though the sender is not.
This probably means Kathryn is the event organizer despite you being the creator. If I understand you correctly, you are able to retrieve the CalendarEvent and you want to know how to retrieve Kathryn's address from there?
Yes - that's correct. To date, I have found no way to do so. For example, Kathryn is not on the guest list.
Do you know the event id from the Calendar API? (Note: that's not the same as CalendarEvent.getId().) Or a better question: how are you currently retrieving the CalendarEvent? Can you provide the corresponding code?
I am iterating over the CalendarEvents that come back from querying my one calendar (Enterprise controlled) via getEvents() with time range passed in. The number of events returned consistently matches what I see in Google Calendar.
This is fantastic. Everything you wrote below checked out for me and now I can finally see the organizer's email. I was wondering if I could update an event's etag property (in Advanced Calendar API) with the organizer's email and then read it from Calendar API (as a tag object). The goal is to not change any displayed properties but still use the Calendar API logic I had written earlier. Please advise.
Issue:
The main name that is displayed in the UI refers to the organizer of the event, which isn't necessarily the same as the event creator (for example, the event creator might call Events: move to move the event to another calendar, changing the event's organizer - see Organizers).
Solution:
You can retrieve the event organizer if you use Calendar API (I don't think that's possible with Apps Script Calendar service, since GuestStatus doesn't seem to return OWNER in all appropriate cases, at least from my experience).
An easy way to use the API in Apps Script is to use the Advanced Calendar Service (in order to use that, you'll have to enable the advanced service first).
Then, using the API you could:
Use Events.list to list the events in your desired calendar according to a time range, as you are already doing with CalendarApp.
Retrieve your desired event.
Access the field organizer of your event, which contains the organizer's email address as well as other information about this user.
Code sample:
function getOrganizer() {
const calendarId = "YOUR_CALENDAR_ID";
const optionalArgs = {
"timeMin": "2021-05-17T00:00:00-02:00", // Change accordingly, min end time to filter
"timeMax": "2021-05-19T00:00:00-02:00" // Change accordingly, max start time to filter
};
const events = Calendar.Events.list(calendarId, optionalArgs)["items"];
const event = events[0];
const organizer = event["organizer"];
console.log(organizer);
const organizerEmailAddress = organizer["email"];
console.log(organizerEmailAddress);
}
[Source: STACK_EXCHANGE]
Encryption plays an important role in technology. It is utilized by many protocols and programs that are available today. To put it simply, encryption is just very sophisticated math. The strength of encryption has improved vastly compared to the past. Presently, the algorithms that are used are very well designed and incredibly efficient. Encryption is useful for securing data as well as keeping anonymity and upholding privacy. In addition to the benefits and features encryption provides, there are also some drawbacks, performance issues, and security risks that come into play.
In today's world filled with technology and communications, there is a high demand for security and privacy. Encryption can be a great solution to fulfilling these requirements granted it is used correctly for the situation at hand. The main purpose of encryption is to take some variation of data that can be read or manipulated and obfuscate it so that it cannot be altered or stolen. The data at its unencrypted state is referred to as plaintext whereas the data in its encrypted form is referred to as ciphertext. In order to encrypt and decrypt data, the use of keys is necessary. Keys are basically a set of random numbers of a specified length that aid in locking and unlocking data. The keys are applied to the plaintext following a certain set of mathematical instructions. This is referred to as the encryption algorithm. In addition to protecting data on computer systems encryption is also commonly used to protect information and data in transit. A few cases where data is transmitted are networks (like the Internet), cell phones, and Bluetooth devices.
One of the most important aspects of encryption, along with using a strong algorithm, is using a key that is long enough to make a brute force approach practically impossible. The key space is the number of possible keys: two raised to the power of the number of bits in the key. For example, a key that is 4 bits in length has a key space of 2^4 = 16. Each bit added to the key doubles the key space, so it is important to use a long key. The larger a key is, the longer it will take for a brute force approach to eventually obtain that key. Any cryptographic algorithm can be brute forced, so the goal is not to prevent brute force attacks outright but rather to make the attack take an unrealistic amount of time to execute. Theoretically, it would take more than 149 trillion years to brute force a 128-bit encryption key if your computer was processing about 72 quadrillion computations per second. A common misconception is to judge an algorithm's strength solely by its key length, as there are methods of encryption that use very long keys but have known structural weaknesses in their algorithms or protocols.
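As a sanity check on those numbers (the function name and the year arithmetic are ours), the doubling property and the 128-bit estimate can be reproduced directly:

```python
def key_space(bits):
    """Number of possible keys for a key of the given bit length."""
    return 2 ** bits

assert key_space(4) == 16                # 2^4 combinations
assert key_space(5) == 2 * key_space(4)  # each extra bit doubles the space

# Exhausting a 128-bit key space at ~72 quadrillion guesses per second:
guesses_per_second = 72 * 10 ** 15
seconds = key_space(128) / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print("%.0f trillion years" % (years / 10 ** 12))  # roughly 150 trillion
```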
There are two types of encryption: symmetric and asymmetric. Symmetric encryption uses the same key to encrypt and decrypt data, whereas asymmetric encryption uses two keys. Asymmetric encryption is commonly referred to as public key encryption; the two keys used are called the public key and the private key. The private key is to be kept by an individual and not shared with anyone else. The public key is put somewhere publicly accessible to be used in conjunction with the private key. Optionally, you can specify a passphrase when using asymmetric encryption, which helps to further secure your keys from unauthorized use. Using a passphrase will prompt for a password every time you or somebody else uses the key. The passphrase is chosen by the owner of the key pair and is set at the time the keys are created.
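To make the symmetric case concrete, here is a deliberately toy sketch (a plain XOR stream, which is NOT a secure algorithm; real systems use vetted ciphers such as AES) showing only the defining property: the same key both encrypts and decrypts.

```python
def xor_cipher(data, key):
    # XOR each byte with the repeating key. Applying the SAME function
    # with the SAME key a second time restores the original input.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"supersecret"                       # the shared secret key
ciphertext = xor_cipher(b"attack at dawn", key)
assert ciphertext != b"attack at dawn"     # plaintext is obfuscated
assert xor_cipher(ciphertext, key) == b"attack at dawn"  # same key decrypts
```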
In basic terms, a protocol is a set of rules that determine how data is transmitted and what format it should be in. There are numerous protocols in use that are cryptographically secure, meaning the data is sent over the protocol at an encrypted state to prevent eavesdropping. A few examples of cryptographic protocols are secure shell (SSH), transport layer security (TLS), secure sockets layer (SSL), and secure file transfer protocol (SFTP). The advantage of using protocols that are cryptographically secure versus protocols that do not secure data is that any data in transit cannot be manipulated or deciphered by any one person. Even if the data was obtained through various other man-in-the-middle attacks such as packet sniffing or address resolution protocol poisoning, the attacker that obtained the data would need to go to great lengths to decrypt the encrypted data that he or she received. On the other hand, if a protocol employing no kind of encryption whatsoever was being used and someone was able to obtain the data in transit, it could be stolen for other uses, manipulated to receive more sensitive data, or even used for more serious attacks. A few examples of unencrypted protocols are hypertext transfer protocol (HTTP), file transfer protocol (FTP), telnet, and simple mail transfer protocol (SMTP).
In addition to protocols that use encryption, countless pieces of software use encryption as part of their operation. Software such as TrueCrypt, Skype, and OpenVPN all use encryption, but each implements it in a different way. TrueCrypt is used for encrypting entire file systems and securing data stored on physical media such as hard drives and flash drives. Skype is a popular communication application that uses encryption to secure the data stream between the parties using the software. OpenVPN is free, open source software that provides virtual private network (VPN) technology. It can use many different kinds of encryption through the OpenSSL library.
Encryption is an excellent option to secure data and prevent privacy and identity issues but there are also a few pitfalls that are introduced when encryption comes into play. For one, if you're using asymmetric encryption (also known as public key encryption) you run the risk of someone obtaining your private key. An important factor in asymmetric encryption is keeping your own private key safe. This can be accomplished in many ways such as setting permissions on the key, hiding the key from plain view, and even setting a passphrase for the key so that nobody can use the key freely if they are able to obtain it. Secondly, if you lose your private key or happen to accidentally delete it, you'll need to generate a new public and private key to be used. The only problem is that you might not be able to decrypt the data in question because it is encrypted using the previous pair of keys. The solution is not as simple as plugging in a new key pair and getting your data back. Finally, if you're using asymmetric encryption to login to remote servers or to create a virtual private network, you may lock yourself out of that service if you lose your key. A common practice to avoid these problems is to keep a physical backup of the keys and store them in a safe place. It may even be a good idea to enclose the keys in an encrypted archive format.
[Source: OPCFW_CODE]
Question about Using a Proxy Server with ArcREST Samples
ArcRest or ArcRestHelper? Both.
Version or date of download: 18-Apr-16
### Bug or Enhancement
Question: I have a script based on the adds_rows.py sample that reads records from a local FGDB and is supposed to update a feature layer we have in AGOL. This script works from a network connection outside of our company, but inside our network the script fails.
Our firewall group is telling me that the script is not using our proxy server but going directly to our firewall where it is being blocked. They also said that there appears to be traffic taking two paths to get to the internet. One path is going through our firewall and another is trying to go through our proxy. Not sure if I understand this. they have checked firewall rules, run firewall debug sessions, etc. No luck so far.
Is there something else I should be doing to set the proxy_url or am I missing something else? Or, is there something else our firewall group should be looking at?
Thanks,
Randy
### Repro Steps or Enhancement details
"""
This sample shows how to add rows from a FGDB to an AGOL layer
"""
import arcrest
from arcresthelper import featureservicetools
from arcresthelper import common
def trace():
    """
    trace finds the line, the filename
    and error message and returns it
    to the user
    """
    import traceback, inspect, sys
    tb = sys.exc_info()[2]
    tbinfo = traceback.format_tb(tb)[0]
    filename = inspect.getfile(inspect.currentframe())
    # script name + line number
    line = tbinfo.split(", ")[1]
    # Get Python syntax error
    synerror = traceback.format_exc().splitlines()[-1]
    return line, filename, synerror

def main():
    # Use these settings when running inside of XXXXX Network
    proxy_port = 80
    proxy_url = 'msn-proxyxxxxxxx.com'
    # Use these settings when running outside of XXXX Energy
    # proxy_port = None
    # proxy_url = None
    # Use these settings when testing with the Fiddler test proxy (Tyler set this up to troubleshoot a proxy issue)
    # proxy_port = 8888
    # proxy_url = 'localhost'
    securityinfo = {}
    securityinfo['security_type'] = 'Portal'  # LDAP, NTLM, OAuth, Portal, PKI
    securityinfo['username'] = "Randy"  # <UserName>
    securityinfo['password'] = "xxxxxxx"  # <Password>
    securityinfo['org_url'] = "http://www.arcgis.com"
    securityinfo['proxy_url'] = proxy_url
    securityinfo['proxy_port'] = proxy_port
    securityinfo['referer_url'] = None
    securityinfo['token_url'] = None
    securityinfo['certificatefile'] = None
    securityinfo['keyfile'] = None
    securityinfo['client_id'] = None
    securityinfo['secret_id'] = None
    itemId = "9f59af52c6474b61bc561bd6096b8ef7"  # <Item ID>
    # Get error: URLError: <urlopen error [Errno 10060]
    layerName = "http://services3.arcgis.com/sqVn1QnQEPnGQy9r/arcgis/rest/services/AEOutageData2/FeatureServer/0"
    # Path to local FGDB feature class on PC
    pathToFeatureClass = r"C:\gis\AEOutageData2.gdb\OutageEvent"
    try:
        fst = featureservicetools.featureservicetools(securityinfo)
        if fst.valid == False:
            print fst.message
        else:
            print "calling AddFeatures ..."
            results = fst.AddFeaturesToFeatureLayer(layerName, pathToFeatureClass, chunksize=2000)
            print "done adding feature"
    except (common.ArcRestHelperError), e:
        print "error in function: %s" % e[0]['function']
        print "error on line: %s" % e[0]['line']
        print "error in file name: %s" % e[0]['filename']
        print "with error message: %s" % e[0]['synerror']
        if 'arcpyError' in e[0]:
            print "with arcpy message: %s" % e[0]['arcpyError']
    except:
        line, filename, synerror = trace()
        print "error on line: %s" % line
        print "error in file name: %s" % filename
        print "with error message: %s" % synerror

if __name__ == "__main__":
    main()
There are 2 features in the local FGDB.
Error:
*** Remote Interpreter Reinitialized ***
calling AddFeatures ...
2 features in layer
error in function: AddFeaturesToFeatureLayer
error on line: line 510
error in file name: C:\Python27\ArcGIS10.2\lib\site-packages\arcresthelper\featureservicetools.py
with error message: URLError: <urlopen error [Errno 10060]
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
@RandySincoular I've faced similar problems using ArcREST inside networks with proxy as well.
I'm using a workaround here forcing proxy through Python environment variables.
Here is the code that I'm using:
import os
import urllib

def set_proxy_environment_variables():
    # Read the system proxy settings and export them as environment
    # variables so that ArcREST's HTTP calls pick them up.
    proxies = urllib.getproxies()
    http_proxy = proxies.get('http')
    https_proxy = proxies.get('https')
    Utils.logs("Proxies: {0}".format(proxies))
    if http_proxy is not None and http_proxy != "":
        Utils.logs("Setting http proxy: {0}".format(http_proxy))
        os.environ["HTTP_PROXY"] = http_proxy
    if https_proxy is not None and https_proxy != "":
        Utils.logs("Setting https proxy: {0}".format(https_proxy))
        os.environ["HTTPS_PROXY"] = https_proxy
I hope it helps.
Thanks Bruno, that should do the trick. I appreciate your help.
Randy
Hello,
May I know in what part of the script we should put it? As you can see, the script provided by Randy uses specific variables, namely proxy_port and proxy_url.
We are also working with ArcREST and we are encountering the 407 Proxy Authentication Required error. I already set up the HTTPS_PROXY and HTTP_PROXY environment variables, and we are using the user's Windows ID.
Any help would be very much appreciated.
Hi @ScrollLock,
You have to set the proxy variables before any call to ArcREST. Usually I do that at the first thing on my scripts.
You can try to set proxy url with the username/password to try to bypass the 407 error message. It will be something like:
os.environ["HTTP_PROXY"] = "DOMAIN\User_Name:Passw0rd123@PROXY_SERVER_NAME_OR_IP:PORT"
os.environ["HTTPS_PROXY"] = "DOMAIN\User_Name:Passw0rd123@PROXY_SERVER_NAME_OR_IP:PORT"
Using HTTP proxies with Python on Windows is a bit annoying. Sometimes it works without user/pwd and sometimes it does not.
Bruno
[Source: GITHUB_ARCHIVE]
MongoDB On The Road - Node+JS Interactive
October 10-12, 2018 brought 1,000 developers to Vancouver, BC, Canada for the Node+JS Interactive 2018 conference. Put on by The Linux Foundation, the conference provided talks for two days followed by a day of workshops. MongoDB was a proud Bronze Sponsor of the event. This allowed us to have a booth in the Sponsor Showcase Hall along with having a presence at the Career Fair event.
MongoDB had a great presence at Node+JS Interactive 2018. Aydrian Howard and I from the Developer Advocacy team were on hand to answer questions. Thomas Cirri was there from our Recruiting team. By the way, we’re hiring! Dan Aprahamian from our Node.js Driver team was there along with Gregg Brewster from MongoDB University.
The Sponsor Showcase Hall was filled most of the day with people learning about all aspects of the Node.js ecosystem. The MongoDB booth was busy handing out swag and answering questions about MongoDB Atlas, MongoDB Stitch, MongoDB Charts, along with many other subjects and topics.
Node+JS Interactive 2018 Sessions
The schedule of session talks brought a wide variety of topics and speakers to Vancouver. Irina Shestak from MongoDB gave a great talk on HTTP/2 walking through the connection process one frame at a time and giving special attention to how Node.js implements this protocol.
Jenna Zeigen’s talk From Parentheses to Perception: How Your Code Becomes Someone Else's Reality provided some wonderful information on the path from an idea in a developer’s mind, to pixels on the screen.
There were many other talks from great speakers such as Tierney Cyren from NodeSource, Joe Karlsson from Best Buy, and Adam Baldwin from npm, just to name a few.
Node+JS Interactive 2018 Venue
Node+JS Interactive was hosted by the Vancouver Convention Center - West. Located in the West End area of Vancouver, it overlooks Vancouver Harbor and sits adjacent to the Olympic Cauldron at Jack Poole Plaza.
Vancouver Harbour is not only a busy cargo port bringing in goods for Western Canada, but also a heavily trafficked float plane area, with seaplanes taking off and landing throughout the day. It was quite a sight to be in a conference center looking out over the harbor's spectacular scenery, watching the seaplanes land, taxi, and take off in the crisp and clear fall air.
MongoDB’s BI Connector the Smart Connector for Business Intelligence
September 25, 2018
In today's world, data is being produced and stored all around us. Businesses leverage this data to provide insights into what users and devices are doing. MongoDB is a great way to store your data. From the flexible data model and dynamic schema, it allows for data to be stored in rich, multi-dimensional documents. But, most Business Intelligence tools, such as Tableau, Qlik, and Microsoft Excel, need things in a tabular format. This is where MongoDB's Connector for BI (BI Connector) shines.
MongoDB BI Connector
The BI Connector allows for the use of MongoDB as a data source for SQL based business intelligence and analytics platforms. These tools allow for the creation of dashboards and data visualization reports on your data. Leveraging them allows you to extract hidden insights in your data. This allows for more insights into how your customers are using your products.
The MongoDB Connector for BI is a tool for your data toolbox which acts as a translation layer between the database and the reporting tool. The BI Connector itself stores no data. It serves as a bridge between your MongoDB data and the business intelligence tools.
The BI Connector bridges the tooling gap from local, on-premise, or hosted instances of MongoDB. If you are using MongoDB Atlas and are on an M10 or above cluster, there's an integrated built-in option.
Why Use The BI Connector
Without the BI Connector you often need to perform an Extract, Transform, and Load (ETL) process on your data, moving it from the "source of truth" in your database to a data lake. With MongoDB and the BI Connector, this costly step can be avoided; you can perform analysis on your most current data, in real time.
There are four components to a business intelligence system. The database itself, the BI Connector, an Open Database Connectivity (ODBC) data source name (DSN), and finally, the business intelligence tool itself. Let's take a look at how to connect all these pieces.
I'll be doing this example in Mac OS X, but other systems should be similar. Before I dive in, there are some system requirements you'll need:
- A MongoDB Atlas account
- Administrative access to your system
- ODBC Manager, and
- The MongoDB ODBC Driver for DSN
Instructions for loading the dataset used in the video in your Atlas cluster can be found here.
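Once these pieces are connected, the BI tool issues ordinary SQL and the BI Connector translates it into queries against MongoDB, flattening documents into tables. As a purely hypothetical illustration (the collection and field names here are invented), a nested document field surfaces as a dotted column name:

```sql
-- Hypothetical: 'customers' is an imagined collection exposed as a table;
-- the nested 'address.city' field appears as a dotted column.
SELECT `address.city` AS city, COUNT(*) AS total
FROM customers
GROUP BY `address.city`
ORDER BY total DESC;
```

The BI Connector understands MySQL-flavored SQL like the above, which is what the BI tool generates behind the scenes.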
Feel free to leave a comment below if you have questions.
MongoDB On The Road - Seattle CodeCamp
September 20, 2018
Seattle CodeCamp was held in the Pigott Building on the beautiful Seattle University campus. With the scenic Puget Sound just a few blocks to the west down Madison St and Lake Washington to the east down Cherry St, Seattle CodeCamp was situated in a magnificent venue.
This year, on Saturday, September 15, 2018, 450 developers attended the event. The sponsorship hall had representatives from a few of the conference sponsors including GitHub, Flatiron School, and the College of Science and Engineering from Seattle University. There were plenty of stickers and sponsor information up for grabs along with some great representatives from the companies to talk with.
The conference offered over 65 sessions. One of the things I really enjoy about the CodeCamp events I’ve attended is the wide variety of speakers and session topics available. Everything from front-end to back-end topics is fair game and available to learn.
And that’s just a small sample of the topics covered at this year’s Seattle Codecamp. I presented a talk on MongoDB & Node.js to a room of about 25 people. I brought with me a supply of MongoDB socks to give session attendees some swag which went over well. A large percentage of people in the room were unfamiliar with MongoDB in general and the MEAN/MERN stack specifically.
As a result, I tailored my talk to discuss the technologies themselves before showing how building an API is done with Node.js, Express.js, and MongoDB. I built an API that served up restaurants indexed by location. After building a functioning API I showed some of the features of MongoDB Compass to explore the data, perform CRUD operations, and leverage the geo-spatial data that was being stored inside MongoDB.
There were several MongoDB specific questions brought up during the session about some of the differences between the way legacy, relational databases store information and how a next generation database, such as MongoDB handles similar schema design and queries. It was a great discussion and provided a great opportunity to educate developers on the flexibility of MongoDB’s document model and the increase in development speed. You can find the project code on GitHub along with the talk slides here.
MongoDB is the easiest and fastest way to work with data. Download MongoDB Compass today and start making smarter decisions about document structure, querying, indexing, and more.
New to MongoDB Atlas — Data Explorer Now Available for All Cluster Sizes
At the recent MongoDB .local Chicago event, MongoDB CTO and Co-Founder, Eliot Horowitz made an exciting announcement about the Data Explorer feature of MongoDB Atlas. It is now available for all Atlas cluster sizes, including the free tier.
The easiest way to explore your data
What is the Data Explorer? This powerful feature allows you to query, explore, and take action on your data residing inside MongoDB Atlas (with full CRUD functionality) right from your web browser. Of course, we've thought about security; Data Explorer access and whether or not a user can modify documents is tied to her role within the Atlas Project. Actions performed via the Data Explorer are also logged in the Atlas alerting window.
Bringing this feature to the "shared" Atlas cluster sizes — the free M0s, M2s, and M5s — allows for even faster development. You can now perform actions on your data while developing your application, which is where these shared cluster sizes really shine.
Check out this short video to see the Data Explorer in action.
To see the difference between these two units (or, rather, unit templates), it's enough to look at the difference between the files getty@.service and serial-getty@.service, which you can find under /lib/systemd/system on your system.
(The files linked here point to the ones in systemd v239, latest release as of this writing. The files have m4 macros in them, so they're processed before installing, but it's a minor change introduced by m4 processing, so they're close enough.)
There are a few differences, but the main one is the ExecStart= command invoked by each unit.
Unit getty@.service invokes this command:
ExecStart=-/sbin/agetty -o '-p -- \\u' --noclear %I $TERM
While serial-getty@.service invokes this command:
ExecStart=-/sbin/agetty -o '-p -- \\u' --keep-baud 115200,38400,9600 %I $TERM
The command used in serial-getty@.service passes the --keep-baud argument in order to configure the serial port speed. In a way, getty@.service will work on a serial port, but it might not configure the serial port properly, which might end up working less well, or more slowly, than if the port were properly configured.
On the other hand, getty@.service passes the --noclear argument, so the console screen is not cleared after a user logs out (this was traditionally configured on at least tty0).
Further differences from the unit files:
- serial-getty@.service binds to the udev device for the serial port (BindsTo=dev-%i.device), so if it's a removable device (such as USB), systemd will stop the getty if the device is removed or unplugged.
- getty@.service checks that tty0 exists (ConditionPathExists=/dev/tty0), so it doesn't spawn any local consoles if support for them was disabled in the kernel.
- getty@.service unsets locale variables (UnsetEnvironment=LANG LANGUAGE LC_...) since localization is typically unsupported or poorly supported on the local console.
Regarding your particular case where you're masking ttyAMA0 and enabling ttyUSB1 instead, ttyUSB1 is a serial port (at least, it emulates one), so using serial-getty@.service would be more appropriate.
However, enabling getty@ttyUSB1.service or serial-getty@ttyUSB1.service and masking the unit for ttyAMA0 is not the best way to accomplish this.
systemd takes its console configuration from the kernel, typically from the console= argument on the kernel command line (this is implemented by systemd-getty-generator, so see its documentation for more details). So all you need to do is configure the console on the kernel command line (with an argument such as console=ttyUSB1, though you might want to include a local console such as tty0 too) and systemd will do the right thing.
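For instance, a hypothetical sketch (the device name, baud rate, and file paths are assumptions that vary by distribution): set the consoles on the kernel command line via GRUB, regenerate the configuration, and reboot.

```
# /etc/default/grub (hypothetical excerpt)
GRUB_CMDLINE_LINUX="console=tty0 console=ttyUSB1,115200"
```

After running update-grub (or grub2-mkconfig, depending on the distribution) and rebooting, systemd-getty-generator should spawn serial-getty@ttyUSB1.service automatically, with no manual enabling or masking required.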
Take a look at this blog post on serial console support in systemd for more details.
By Gerry Haskins-Oracle on Nov 16, 2009
Here are some interesting tricks of the trade and security-related resources which I saw in a couple of email threads last week, which you may find useful:
What patches patch a specific object ?
We'll soon be enhancing the PatchFinder tool further to enable you to search for patches which patch a specified object. So, if you're experiencing a problem with an object, you'll be able to see what patches exist for that object and look at the Bug fix synopses to see if any look like the issue you are experiencing.
But what patches on an installed system patch a specific object ?
The question which sparked the thread was: "What's the easiest way to determine what patch a binary (e.g. mpt(7D) driver) is tied to on a system?"
Option 1: What patches installed on the system patch a specific object (e.g. /kernel/drv/mpt) ?
# cd /var/sadm/patch
# for x in `ls -rt` ; do grep "^/kernel/drv/mpt" $x/README.$x > /dev/null && echo $x; done
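Option 1 can be wrapped in a small reusable function. The sketch below is hypothetical: it demonstrates the loop against a mock patch directory so it can run anywhere; on a real Solaris system you would point it at /var/sadm/patch instead.

```shell
# Hypothetical helper around Option 1: list installed patches whose README
# mentions the given object, oldest first.
patches_for_object() {
  obj="$1"; dir="$2"
  for x in $(ls -rt "$dir"); do
    grep -q "^$obj" "$dir/$x/README.$x" && echo "$x"
  done
}

# Demonstrate with a tiny mock of /var/sadm/patch.
patchdir=$(mktemp -d)
mkdir -p "$patchdir/138377-01"
printf '/usr/bin/ls\n' > "$patchdir/138377-01/README.138377-01"
patches_for_object /usr/bin/ls "$patchdir"   # prints 138377-01
```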
Option 2: What patches installed on the system patch a specific object (e.g. /kernel/drv/sparcv9/mpt) ? (This output is from a different system at a different patch level to the previous example.)
# /usr/ccs/bin/mcs -p /kernel/drv/sparcv9/mpt
@(#)SunOS 5.10 Generic 143128-01 Nov 2009
Option 3: What patches installed on the system patch a specific object (e.g. /usr/bin/ls) ? (See Sun Blueprint on the SunSolve fingerprint DB: http://www.sun.com/blueprints/0306/816-1148.pdf )
# digest -a md5 /usr/bin/ls
and from http://sunsolve.sun.com/fileFingerprints.do
Results of Last Search
6f20408d15ddfce2261436a27e33c0bd - - 1 match(es)
* canonical-path: /usr/bin/ls
* package: SUNWcsu
* version: 11.10.0,REV=2005.01.21.15.53
* architecture: sparc
* source: Solaris 10/SPARC
* patch: 138377-01
Here are some excellent resources from Sun Distinguished Engineer, Glenn Brunette:
Everything you ever wanted to know about Solaris security...
The Solaris Package Companion is a small Korn shell script that allows you to ask quite a number of interesting questions about the relationships between Solaris metaclusters, clusters and packages as well as their respective dependencies. Useful for system hardening, etc.: http://hub.opensolaris.org/bin/view/Project+svr4_packaging/package_companion
A Sun Blueprint on the SunSolve fingerprint DB: http://www.sun.com/blueprints/0306/816-1148.pdf
Computational and Mathematical Biology Centre (CMBC)
Aim and scope
Develop novel computational tools and mathematical models to address biological problems.
Enhance computational research through linkages with experimental work, and vice versa.
Promote the application of emerging and relevant computational technologies for in-depth biological data analysis, structural mapping, and therapeutic intervention.
CMBC: A centre for discovery, innovation and translation
Major Program 1: Mathematical Modelling and Systems Biology
- Area of research:
(a) Mathematical modeling for understanding biological processes/ disease dynamics.
(b) Potential drug target discovery using network analysis and computational models.
Overview of the mathematical modelling and network analysis lab
- Research theme
Disease mechanism and potential targets through mathematical models and computational methods.
New mathematical and computational methods to study biological data.
konnect2prot (k2p) Click here
Major program 2: Computer-assisted Drug Discovery, Computational Biophysics and Structural Bioinformatics
- Area of research:
Virtual screening of synthetic compounds libraries for predicting potential hit molecules against specific drug targets.
Computer-aided drug discovery for infectious and metabolic diseases.
Computational simulations for molecular understanding of biological systems, mechanistic basis of structure-function correlation and their application to design therapeutics protein-protein interaction interfaces.
Computer-assisted drug discovery
Peptidomimetics: small molecules from active peptides
Major Program 3: Big data, Multi-Omics and Biomedical Informatics
- Area of research
Mass spectrometry based identification of proteins from biological samples to study disease progression in NAFLD.
Development of a universal proteogenomics workflow for integrating genomics/transcriptomics data with mass spectrometry proteomics data to study liver proteoforms and their changes during disease progression.
Development of biomolecular knowledge resources platform from large scale multi-omics data for Human Liver proteoforms using proteogenomics to facilitate disease studies.
Development of a meta-resistome webserver for rapid and comprehensive mining of antimicrobial resistance genes (ARGs) from genomic and metagenomics data to infer the resistance potential of pathogen genomes/metagenomes.
Prioritization of Disease Proteins and Metabolites in NAFLD and NASH Using Big-Data Approaches
Many disease genes for NAFLD and NASH are known, but prioritization for drug target and biomarker use is necessary
This can be facilitated by integration of data on different biomolecules (genes, proteins, metabolites, PPIs, PTMs)
Big-data mining can reveal biomolecules that can be potential biomarkers or drug targets
Integrated analysis to find important proteins and metabolites
Meta-resistome web-server for mining AMR genes from NGS data
Workflow to integrate five resistome databases for comprehensive AMR profiling of clinical strains
Webserver interface for profiling resistomes from genomic/metagenomics contig data
Name of faculty members and Scientists
The centre/facility is open to providing services to academia and industry. For any queries, contact the following
Dr. Samrat Chatterjee
Set Up Product Mapping
One of the main advantages of Product Finder 360 is that it uses the answers given to the digital finder to calculate which products match the customer best. Moreover, you can also add Filters at the end of your Product Finder to help your customer select the product. Both Answers and Filters can be of different types.
For sliders, the answers will be automatically mapped to the corresponding "match values". However, for multiple choice answers, you need to define the mapping rules.
The mapping rules can be defined for questions of the Product Finders and Messenger Product Finder, as well as for Filters. The process is rather similar, and in the following example, the sample images are provided for a Product Finder.
1. Access Mapping Rules
To add a mapping rule, click the recommendation logic (box) icon.
The first section of the "Recommendation logic" settings that appear is devoted to Product Mapping.
From here you can:
2. Add a Mapping Rule
The place to define the first rule is shown on the screen by default. The rule settings define which product property is checked and which condition it must meet.
For example, for the answer "I'd love some extra power" we want to recommend bikes of the adventure category, so we select the "category" property.
For our example, we need the category to equal a specific value.
Along with selecting a value, you can also input one manually.
For example, let's say that you want to recommend all the bikes that cost over two hundred.
To add the next rule, click the "+Rule" button.
3. Add a Group of Mapping Rules
The "ALL/ANY" selector helps you define whether all or only some of the rules should be applied.
For example, if you want to show all bikes that are of category "Hybrid" AND cost over three hundred, define the two rules and select "ALL".
However, if you set the value to "ANY", the customer will be recommended all the hybrid bikes (regardless of their price) and all the bikes that cost over three hundred (regardless of their category).
However, you might want to create several groups of rules, with different "all/any" settings.
1. Click the "+ Group" button.
2. Define the "ALL/ANY" setting and define the rules.
Each group may have several rules or groups inside it. Be careful to click the "+Rule" and "+Group" buttons at the right level.
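The ALL/ANY semantics described above can be sketched in code. The snippet below is purely illustrative: the data shape, property names, and operators are invented, and the real Product Finder 360 evaluates its rules internally.

```python
# Illustrative sketch of ALL/ANY rule-group matching; the rule format
# (mode/rules/property/op/value) is a hypothetical stand-in.
def matches(product, node):
    if "mode" in node:  # a group: ALL -> every child, ANY -> at least one
        combine = all if node["mode"] == "ALL" else any
        return combine(matches(product, child) for child in node["rules"])
    value = product.get(node["property"])
    if node["op"] == "equals":
        return value == node["value"]
    if node["op"] == "greater_than":
        return value > node["value"]
    return False

bike = {"category": "Hybrid", "price": 350}
group = {"mode": "ALL", "rules": [
    {"property": "category", "op": "equals", "value": "Hybrid"},
    {"property": "price", "op": "greater_than", "value": 300},
]}
print(matches(bike, group))  # True: both rules hold under ALL
group["mode"] = "ANY"
print(matches({"category": "Road", "price": 350}, group))  # True: price rule holds
```

Nested groups are handled by the same recursion: a group placed inside another group's rule list is evaluated with its own ALL/ANY setting.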
For example, the following rule states that for this answer we shall show:
4. Edit Mapping Rules and Groups
You can always edit the values of the mapping rules and groups or delete them.
To edit a rule or group, click in the corresponding field and update it.(To select a new "Value", remove the previous value).
To remove a rule or a group, click the trash bin icon to the right of it.
5. Finalize Mapping
Once you've defined all the rules, simply click the "Apply" button at the bottom.
There are lots of board games on the market to choose from, sometimes it is hard to know what to pick. But there are always the old classics to fall back on that have been around for decades, if not centuries. Games like the family board game Snakes & Ladders, a game of Indian origin that first came to the UK in the 1890s.
Snakes & Ladders is a worldwide classic board game. It is a family board game where no skill is involved, no questions to answer and your chances of winning all comes down to luck and the throw of the dice. It can be played just as easily by the youngest of players as the eldest and is just as much fun for all. It was one of my favourite games as a child and is still fun to play today.
The game board features 100 squares and players must work their way from square 1 to 100 with the aid of ladders to propel them up the board quicker, or snakes to drop them down and slow their progress (to the delight of the other players)!
There are lots of different Snakes & Ladders games to purchase, and it is either a game for 2-4 players or 2-6 players depending on what version you buy, but the rules and gameplay are always the same (at least in any version that I have played). I have been using a wooden version for 2-4 players.
To play, each player takes a different coloured pawn and rolls the dice. The player that rolls the highest number starts first, and play proceeds clockwise from the starting player. The starting player rolls the dice and moves their pawn accordingly; each time a player rolls a six, that player gets another roll of the dice. If a player lands on a square with the base of a ladder, they move their pawn up to the top of the ladder (e.g. if a player lands on square 27 they move their pawn up the ladder to square 69). But if a player lands on a square with the head of a snake, they must follow the snake down to its tail (e.g. if a player lands on square 95 they follow the snake down to square 53). If a player lands on a space occupied by another player, the opponent’s pawn is removed, and they must start again from the beginning. The winner is the first player to land on square 100, but an exact throw is required to get there: if you are on square 98, rolling a 3 is no good; it has to be a 2.
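For the curious, the movement rules above are easy to sketch in code. This Python snippet is purely illustrative; the snake and ladder positions are invented, since boards differ between editions.

```python
# Illustrative sketch of the movement rules; snake and ladder positions
# here are made up (real boards vary between editions).
LADDERS = {4: 56, 27: 69}   # ladder base -> ladder top
SNAKES = {87: 24, 95: 53}   # snake head -> snake tail

def move(position, roll):
    target = position + roll
    if target > 100:                       # an exact throw is needed to finish
        return position
    target = LADDERS.get(target, target)   # climb if on a ladder's base
    return SNAKES.get(target, target)      # slide if on a snake's head

print(move(24, 3))   # lands on 27, climbs the ladder to 69
print(move(92, 3))   # lands on 95, slides down the snake to 53
print(move(98, 3))   # would overshoot 100, so stays on 98
```

A full game loop would simply repeat move() for each player in turn until someone lands exactly on 100.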
Overall, I love playing Snakes & Ladders and have since I was a child. It is a fun game to play, and it all comes down to who is the luckiest player. It can take as little as 10 minutes to play or as long as 30+ depending on how unlucky players are, repeatedly landing on the snake squares.
Snakes & Ladders is a classic family board game that is simple to play, offers lots of enjoyment, and is perfect for family game nights as something quick to play before the bigger and longer games come out.
Simple fun for all the family, especially the younger players that are just starting to enjoy sitting around the table playing games. A game that every board game loving household should own.
Available to buy from Amazon here.
Substitute or replacing value in MySQL query from another column
How can I substitute or replace NULL values in MySQL query to the value that I need?
With IFNULL, or COALESCE, or with CASE, or with IF?
I can't get it work.
Example is in SQL Fiddle
First and second row:
Third column: sum of all products;
Fourth column: active products;
Fifth column: inactive products.
Third and forth row:
Third column: sum of all products;
Fourth column: active products;
Fifth column: inactive products (NULL values => need the 0 value).
Fifth and sixth row:
Third column: sum of all products;
Fourth column: active products (NULL values => need the 0 value);
Fifth column: inactive products (NULL values => need the value from the third column).
If I get a NULL value for inactive products I need to replace it with 0; but when I get a NULL value for active products, I need to replace active with 0 and replace the NULL in the inactive column with the value from the third column.
No UPDATE or INSERT functions, because this is a MySQL view.
Basiclly what I need is:
if active <> 0 and inactive <> 0
then 'no value change'
if active <> 0 and inactive = null
then inactive = 0
if active = null
then active = 0 and inactive = product_sum
We can use an expression in place of a column name in the SELECT list.
If we want the SQL be portable and ANSI standards compliant, we can use a CASE expression. For example:
SELECT CASE WHEN t.inactive_product IS NULL
THEN t.product_sum
ELSE t.inactive_product
END AS inactive_product
FROM ... t
An equivalent result can be obtained (more concisely) using the COALESCE function:
SELECT COALESCE(t.inactive_product,t.product_sum) AS inactive_product
, COALESCE(t.active_product,0) AS active_product
FROM ... t
Most databases, including MySQL, provide functions that extend the SQL standard.
The same result can be achieved in MySQL with the convenient IFNULL function.
SELECT IFNULL(t.inactive_product,t.product_sum) AS inactive_product
, IFNULL(t.active_product,0) AS active_product
FROM ... t
FOLLOWUP:
To me it looks like the following condition tends to be true in the data, or at least in the result we want returned:
product_sum = active_product + inactive_product
I'd do something like this:
SELECT ...
, t.product_sum
, IFNULL(t.active_product,0) AS active_product
, IFNULL(t.inactive_product,t.product_sum-IFNULL(t.active_product,0))
AS inactive_product
FROM ... t
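That fallback logic can be tried quickly outside MySQL. Below is a small, hypothetical demonstration using Python's built-in sqlite3 module (SQLite also implements COALESCE, so the expression carries over directly; the table and column names are made up to mirror the question):

```python
import sqlite3

# Hypothetical data mirroring the question: product_sum with possibly-NULL
# active/inactive breakdowns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (product_sum INT, active INT, inactive INT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(10, 7, 3), (5, 5, None), (8, None, None)])

rows = con.execute("""
    SELECT product_sum,
           COALESCE(active, 0)                                   AS active,
           COALESCE(inactive, product_sum - COALESCE(active, 0)) AS inactive
    FROM t
""").fetchall()
print(rows)  # [(10, 7, 3), (5, 5, 0), (8, 0, 8)]
```

The NULL-active, NULL-inactive row falls through to active = 0 and inactive = product_sum, exactly as the question's pseudocode asks.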
Sorry for the stupid question: is it possible to have multiple CASE expressions in one query? That's the only thing I have not tried.
I grew up in Minnesota until I moved to pursue a Bachelor’s degree in Biology at North Carolina State University. During my undergraduate education I also spent a year at the University of the South Pacific in Fiji where I became interested in Pacific Island ecosystems, ethnobotany, and indigenous land management practices.
Now I am a PhD student in Botany within the Ethnobotany track at UHM, in the Ticktin lab of the Botany Department. I study Pacific Island agroforestry systems and their importance in supporting human health and resilience to environmental shocks and disturbances, such as cyclones. Socio-economic changes have been leading to the replacement of biodiverse agroforest food systems with monocultures of cash crops. This has contributed to the rising epidemic of nutrition-related non-communicable diseases (NCDs) plaguing the region; in Fiji, 77% of all deaths are caused by NCDs. Exacerbating these issues are risks associated with climate change, such as increased severe weather events and cyclone activity. Fiji’s recent category-5 cyclone, the strongest cyclone recorded in the Southern Hemisphere, hit in February 2016 and gives additional cause for concern.
To better understand Pacific Island agroforests and their capacity to enhance community resilience, I plan to draw on existing datasets and field data that I will collect as part of this Fulbright Fellowship to 1) assess drivers of post-cyclone recovery in agroforest ecosystems, including identifying how individual and species-specific traits affect tree damage, survival, and regeneration over time, and how decisions and preferences by people shape changes in agrobiodiversity; and 2) assess the relationships between agrobiodiversity and nutritional diversity.
I chose to study at UHM first and foremost because of the research practiced in the Ethnobotany program of the Botany Department at UHM under my current advisor, Dr. Tamara Ticktin. This research crosses both the social and natural sciences and works closely with local people to best serve the needs of their communities. I was also drawn to the intellectual independence and flexibility the Ethnobotany program affords its graduate students; because of this, I am able to accept the Fulbright Fellowship to Fiji, complete part of my PhD dissertation, and represent UHM and the US as a Fulbright academic. The Botany Department also hosts a suite of highly accomplished and knowledgeable tropical ecosystem botanists and ethnoecologists whom I was eager to learn from and work with. Together, these aspects formed the basis of my decision to study in the Botany Department at UHM.
As an undergraduate, independent research and studying abroad at the University of the South Pacific (USP) in Fiji were pivotal to my own academic, professional, and personal development. They gave me confidence to pioneer my own academic path, expanded my intellectual boundaries, and encouraged me to explore diverse career options, which ultimately led me to pursue a PhD at UHM. Undergraduate research and international education opportunities are less readily available in the Pacific Islands and my long term goal is to develop and direct an international undergraduate research exchange program between UHM and USP to help provide these experiences. In the spirit of the Fulbright’s cross-cultural mission, I plan to establish the foundations of this program as a Fulbright Fellow in Fiji.
I acknowledge that this is not an official Department of State website or blog, and that the views and information presented here are my own and do not represent the Fulbright Program or the U.S. Department of State.
USB boot and newer BIOS and firmware
Is it safe to say that most newer (x64 2010+) desktop BIOS from major manufacturers can be configured for USB boot?
I'd like to be able to boot from USB for GParted 10, Windows 7 Pro, and Ghost 2003+, ideally on a small form factor, diskless, stateless desktop with no built-in CD/DVD-ROM. Can anybody recommend good tiny hardware and boot/ISO software for this?
Is there any technical reason or limitation why BIOS firmware does not already come with some form of boot-from-USB-ISO feature built in, where you do not need a DVD-ROM or even a boot-formatted USB drive? How cool would it be if you could boot to a bootloader that prompts you for an ISO on a USB or local file mount?
'safe to say' - yes, i've been phasing out some older Windows 2000 desktops at work, fairly standard specs used for basic desktop applications. Most are Foxconn and MSI motherboards, pre-SATA and only have 2 USB ports - yet they all support 'boot from USB' :)
You can assume that they are USB-Boot ready. But no guarantee though.
Recommendations are off-topic.
What the heck is an "boot from usb iso?" Either USB or CD-ROM. For other scenarios the solution is PXE.
No, you have to extract the ISO to the USB along with a bootloader. There's no way (AFAIK) for a bootloader to kick off an ISO directly. Plus it would be very slow.
@tombull89, memdisk is what you're looking for. Bootloader loads memdisk, memdisk loads the entire ISO into memory, and then it transfers execution to the code on the ISO. And from that moment on it's like booting from any normal ISO. Take a look at http://www.ultimatebootcd.com/customize.html under the 'Adding ISO images' section for the configuration.
Ah so it is possible. I expect the machine must need a certain amount of RAM for this to work though?
Yes, it is not a very memory-efficient method; it's not something you want to run from regularly (unless you've got gobs of memory and don't care). The more common PXE booting methods usually release the downloaded images upon executing them. The Linux kernel's bzImage/initrd combos also release the memory of the image after executing. Probably the best way would be to load a SquashFS image into RAM and use that as your rootfs, since the OS is then aware that "this stuff is running in RAM anyway, no need to cache or load it into memory".
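The memdisk approach described above can be sketched as a syslinux menu entry. This fragment is hypothetical (the labels and paths are invented; see the Ultimate Boot CD customization page linked above for a real configuration):

```
LABEL gparted
  MENU LABEL GParted Live (ISO via memdisk)
  LINUX memdisk
  INITRD /isos/gparted-live.iso
  APPEND iso
```

Keep in mind that memdisk copies the whole ISO into RAM, so the machine needs enough memory headroom for the image on top of the running system.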
Windows 7 (and Vista, and Server 2008) can be set up on a USB stick as a bootable, installable USB using the Windows Live CD/USB Download Tool. This extracts the disc .iso (it needs to be a legitimate .iso, not a "Tiny7" or non-Windows iso, or so I have found) to the USB drive (must be 4 GB+), and that USB can be booted from.
For other things like GParted, Ubuntu, or Mint I use uNetBootIn. Not sure how it would work with Ghost.
You could try Yumi Multiboot USB Creator. It appears to allow you to boot to a USB drive, and then select the ISO you want in a menu.
@class and @synthesize usability
OK, I have a class that I created for my Core Data:
LoginPass.h
Then I have a first class:
FirstClass.h
And then I need to use these classes in SecondClass, where I declare them with @class. Header file:
SecondClass.h
...
@class FirstClass;
@class LoginPass;
...
@interface SecondClass : UIViewController
{
......
}
@property (strong, nonatomic) FirstClass *fromFirstClass;
@property (strong, nonatomic) LoginPass *myEntity;
...
@end
And in .m file
#import "SecondClass.h"
#import "FirstClass.h"
#import "LoginPass.h"
@implementation SecondClass
...
@synthesize fromFirstClass = _fromFirstClass;
@synthesize myEntity = _myEntity;
...
OK, I may have made some mistakes in the code, sorry about that.
I really don't know (and right now am not interested in) why I need to write
@synthesize myEntity = _myEntity;
but not
@synthesize myEntity;
But I have another question. Why can I use this in my code:
self.fromFirstClass
but I can't use
self.myEntity
Xcode gives me an error and says I should use
self._myEntity
What's the difference? Why can I use self.fromFirstClass but not self.myEntity?
@end
@synthesize fromFirstClass = _fromFirstClass;
@synthesize myEntity = _myEntity;
The above lines are correct, but nowadays you are not required to synthesize; @synthesize is added by the compiler itself.
When you use self.prop you access the property through its accessor methods.
When you use _prop you access the instance variable directly.
EDIT:
When you use self.prop, which method is called depends on whether the property is on the lhs or rhs of the = (assignment):
-(NSString *)prop; //gets called when you use myName = self.prop;
and/or
-(void)setProp:(NSString *)prop; //gets called when you use self.prop = @"master";
On the other side, if you try to use self._myEntity then it will look for a method name containing the _, which does not exist, resulting in an error.
Thank you; it remains for me to understand what "accessing the property" versus "accessing it directly" gives me =)
You are confusing instance variables which are variables parts of an object structure, and properties which are really methods to set and get a value.
When you declare @property (strong, nonatomic) FirstClass *fromFirstClass; you actually declare two methods - (FirstClass *)fromFirstClass and - (void)setFromFirstClass:(FirstClass *)aFirstClass.
When you use the dot syntax FirstClass *classA = self.fromFirstClass;, you actually call a method, it is equivalent to FirstClass *classA = [self fromFirstClass];. In the same way, if you write self.fromFirstClass = classB;, you actually call: [self setFromFirstClass:classB];.
If you use the name of an instance variable directly inside an object method, you access this variable.
Now, when you write @synthesize fromFirstClass; with the modern runtime, you let the compiler create an instance variable with the same name fromFirstClass and write the two methods - (FirstClass *)fromFirstClass and - (void)setFromFirstClass:(FirstClass *)aFirstClass that get and set the instance variable.
If you write @synthesize fromFirstClass = _fromFirstClass;, the same thing happens, except the name of the instance variable which is created has an underscore in front of it.
Finally, in the more recent versions of the compiler, if you don't write anything, the default behavior is to @synthesize fromFirstClass = _fromFirstClass automatically for you.
But I still don't understand why I need to use self._myEntity instead of self.myEntity =(
Either you write @synthesize myEntity; and then you can access self.myEntity (method), [self myEntity] (method), myEntity (ivar), or self->myEntity (ivar). Or you write @synthesize myEntity = _myEntity; (the same as writing nothing with a recent compiler), and you can access self.myEntity (method), [self myEntity] (method), _myEntity (ivar), or self->_myEntity (ivar). From the code you posted, self._myEntity is not a valid option (that would mean you had methods named with the underscore).
The compiler would add
@synthesize myEntity = _myEntity;
if you omit the @synthesize totally.
However, you could use as well
@synthesize myEntity;
The key difference is that in the first case, your local variable is called _myEntity while the getter is myEntity and the setter is setMyEntity. So from external you would access yourObject.myEntity either for setting or getting the value. The compiler will take care, that the setter and getter is called. You do not access the prperty directly.
[yourObject.myEntity = value] ist identical to [yourObject setMyEntity:value] as well as value = yourObject.myEntity is identical to value = [yourObject myEntity].
So far from accessing properties or their getter and setter from outside. From inside your class you may think that self.myEntity = value is identical to myEntity = value (for the second case). But it is NOT. self.myEntity calls the setter (or getter). This is important especially for the getter becaus that comes with important memory management for free - with or without ARC. While myEntity = value directly accesses the property.
And here comes the _ and its key advantage (imho). If you use the _ notation then the instance variable is called _myEntity. Doing so makes it explicitly clear, for you and the readers of your code, when the instance variable is accessed directly and when the getter and setter are used.
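A minimal sketch of the two access styles, using a hypothetical Widget class:

```objc
@interface Widget : NSObject
@property (nonatomic, strong) NSString *myEntity;
@end

@implementation Widget
// Same as the modern compiler default: the backing ivar is _myEntity.
@synthesize myEntity = _myEntity;

- (void)demo {
    self.myEntity = @"via setter";   // calls -setMyEntity:
    _myEntity = @"direct ivar";      // bypasses the accessor entirely
    // self._myEntity would only compile if a method named _myEntity existed.
}
@end
```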
|
STACK_EXCHANGE
|
Raging Infernape solution codechef
Finally, Ash and Paul face each other in the tournament. The battle is between Infernape and Electivire. Infernape has just activated his ability named "Blaze" and is now fired up. But there is a problem with Infernape's Blaze: even though it gives Infernape immense power, he loses his mind and attacks everyone. To calm him down so that Infernape can use his ability while staying in control, Ash has to complete the following task.
You are given a string S made up of lowercase English letters. For a substring T of S, we define the power of T as the length of X(T), where X(T) is the smallest substring of S that has at least two substrings equal to T (i.e. at least two occurrences of T). Now you pick a substring of S uniformly at random and you have to print the expected power of the chosen substring. The answer can be represented as a ratio (i.e. P / Q) of two positive co-prime numbers P and Q. You have to print it modulo 998244353.
Input:
- First line will contain , number of testcases. Then the testcases follow.
- Each testcase contains a single line of input, a string S.
Output:
For each testcase, output in a single line the expected power of chosen substring modulo 998244353.
Constraints
Sample Input:
Sample Output:
Valid X’s are, X(“a”) = “aba”, X(“b”) = “bab”, X(“ab”) = “abab”, X(“ba”) = “baba”, X(“aba”) = “ababa”. Expected Power is, (3 * 3 + 3 * 2 + 4 * 2 + 4 * 2 + 5 * 2) / 15 = 532396991.
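The modular arithmetic in the sample can be sketched as follows; only the worked numbers above are used, and the Fermat-inverse approach is the standard way such ratios are reported modulo a prime (a sketch, not the judge's reference solution):

```python
# Reporting a ratio P/Q modulo the prime 998244353 via Fermat's
# little theorem: Q^(MOD-2) is the modular inverse of Q.
MOD = 998244353

def mod_ratio(p, q, mod=MOD):
    return p * pow(q, mod - 2, mod) % mod

# Sample above: expected power = (3*3 + 3*2 + 4*2 + 4*2 + 5*2) / 15
numerator = 3*3 + 3*2 + 4*2 + 4*2 + 5*2   # = 41
ans = mod_ratio(numerator, 15)
# Multiplying back by 15 recovers the numerator modulo MOD.
assert ans * 15 % MOD == numerator
print(ans)  # 532396991, matching the sample output
```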
Codechef is a platform built to let programmers compete against others in the community and improve their knowledge by facing various challenges. Challenges are categorised by difficulty level: if your rating is above 2000 you are in the top tier, i.e. Division 1; if your rating is between 1600 and 2000 you are in tier 2, i.e. Division 2; and if your rating is below that you are in the lowest tier, i.e. Division 3. You can improve your rating through successful submissions with minimum penalty.
|
OPCFW_CODE
|
How long does it take to learn Python to get a job?
For a complete beginner, 3 months is sufficient to learn Python well enough to get a programming job. If you are coming from another programming language such as C++ or Java, then one month is enough to learn Python and write your logic comfortably.
Moreover, we are talking about learning Python to an employable level and finding a productive way to learn the language. The majority of people are learning to code in Python, but not everyone is finding a decent job.
So let’s dig deep into how you can learn coding to start your career and be a valuable Python programmer.
Why You Should Learn Python?
Python is the world's most popular programming language thanks to its easy syntax and continuous progress, and it is making its mark not only in computer science but also in major areas like management and medicine. With Python you can also find a job quickly, step into freelancing, or build your own websites, software and games.
How Long Does It Take A Beginner To Learn Python To Get A Job?
I have divided the process of learning Python into 4 levels and estimate that it takes a beginner a little more than 3 months to start a career in Python.
Level 0: Fundamentals of Python – Learning basic building blocks of programming, basics of Python, variables, functions, loops, conditionals, data types
Level 1: Intermediate Python – Once you can write your logic using a functional programming approach, you can move toward intermediate Python.
You can then start learning and exploring popular Python modules and use them in your projects. Working with turtle, CSV files, list and dictionary comprehensions, APIs, etc. comes under Level 1.
After completing Level 0 and Level 1 you will be able to create small programs. At each level you need sufficient practice, and this is where most of the time spent learning Python goes. Practicing and doing projects really helps you learn Python hands-on.
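As a rough illustration of the Level 0/1 material (a sketch with made-up names, not taken from any particular course):

```python
# Level 0 building blocks: functions, loops, conditionals.
def word_lengths(words):
    """Level 1: map each word to its length with a dict comprehension."""
    return {w: len(w) for w in words}

def evens(numbers):
    """Level 1: filter even numbers with a list comprehension."""
    return [n for n in numbers if n % 2 == 0]

print(word_lengths(["python", "job"]))  # {'python': 6, 'job': 3}
print(evens(range(10)))                 # [0, 2, 4, 6, 8]
```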
Level 2: Advanced Python – In Level 2 you learn to build useful things from scratch as much as you can; it also requires the skill of reading documentation and using whatever services you need.
It is fun to use Python to automate everyday tasks, make a small bot, create websites and turn data into charts and graphs.
Level 3: Building Unique and Professional Portfolio Projects – You always need to be clear about your main goal in learning Python and what it can do for your valuable time.
So there comes the requirement of unique portfolio projects. Games like Tic Tac Toe and Snake are common, as many people build these. What makes you unique is having a project you can show off.
Getting Jobs With Python
Getting a job with Python requires more than just coding in Python; that alone will not make you a good programmer, and you need a good hold on other technologies too.
You must know why you are learning Python.
The answer might be: to become a web developer or a data scientist, or to get a job in AI and machine learning. Then you can use Python as a tool, together with the other related skills, for getting into that domain.
A good engineer can use data structures and algorithms to write fast and memory-efficient programs. Coding interviews also put more weight on data structures and algorithms.
Getting a good software developer position also means being good at operating systems, system design and networks.
How To Come Up With Projects Ideas In Python?
A project like Tic-Tac-Toe, Rock Paper Scissors and card swipe games are common but good for learning basics. There should be something in your profile that can put a good impression on the hiring manager.
Look around you for a problem you have faced that you could solve with Python. It can be any silly idea from your imagination, because every idea comes with scope for improvement. For example, Facebook started as a project, and Mark Zuckerberg did not initially think he could market it well. It was not even his first project; he had created similar things before Facebook.
The idea here is to challenge yourself to harness your imagination and commercialize your project. It takes real effort, but creating projects with potential is what makes you unique.
Lastly, learn Python and choose a specific domain where you want to get a job. Then move toward data structures and algorithms. Build things, anything you can put in your portfolio that makes you unique. The qualities of a good engineer also include non-technical skills like communication, leadership and teamwork. So enjoy learning Python as much as you can, and build your career in coding by learning progressively.
|
OPCFW_CODE
|
can't determine definition of operator ""-""?
I've got the error can't determine definition of operator ""-"" for the following code. I'm not sure about accessing the individual bits of each unsigned in the array. What is wrong?
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.numeric_std.all;
use work.genetica_type.all;
entity binario_fitness is
Port (
clk : in std_logic;
individuos: in genetica;
adaptacao: out fitness;
somafitness: out unsigned (7 downto 0)
);
end binario_fitness;
architecture Behavioral of binario_fitness is
begin
process (clk)
begin
if (clk'event and clk = '1') then
for x in 0 to 49 loop
adaptacao(x) <= individuos(x)(0)-individuos(x)(1) +individuos(x)(2)- individuos(x)(3)+individuos(x)(4)- individuos(x)(5)+individuos(x)(6)-individuos(x)(7);
somafitness<=(others=>'0');
end loop;
end if ;
end process;
end Behavioral;
which includes the genetica_type in another file:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_TEXTIO.ALL;
use ieee.numeric_std.all;
package genetica_type is
type genetica is array(0 to 49) of unsigned(7 downto 0);
type fitness is array(0 to 49) of unsigned (2 downto 0);
end package genetica_type;
You are trying to perform an arithmetic operation on type std_logic.
type genetica is array(0 to 49) of unsigned(7 downto 0)
...
individuos: in genetica;
...
adaptacao(x) <= individuos(x)(0)-individuos(x)(1)...
The sliced signals individuos(X)(Y) are actually of type std_logic.
The subtraction operator "-" is not defined for type std_logic, thus there is an error.
Depending on what you are actually trying to do the solution to your problem may vary.
One possibility is:
adaptacao(x) <= "00"&individuos(x)(0 downto 0) - "00"&individuos(x)(1 downto 1) + "00"&individuos(x)(2 downto 2) - etc.
This will work because the one bit slices individuos(x)(y downto y) will retain the type of the sliced vector, i.e. unsigned in this case.
From the definition of numeric_std.vhd:
type UNSIGNED is array (NATURAL range <>) of STD_LOGIC;
The length must also be handled: the result adaptacao(x) is of type unsigned (2 downto 0), thus of length 3, but individuos(x)(0 downto 0) has a length of only 1, which will also be the length of the result of the - and + operators. Thus prepend "00" & ... at the start of the calculation to operate on 3-bit unsigned values throughout, like ("00" & individuos(x)(0 downto 0)) - ..., whereby the final length will match that of adaptacao(x), and the accumulation is handled.
@Morten: I fixed the afflicted lines with your corrections. As an alternative to prepending "00", the resize() function could be used, which is also part of the numeric_std package. "00"&individuos(x)(0 downto 0) would then become resize(individuos(x)(0 downto 0), 3).
Use of resize is a good idea, also since it requires explicit parentheses and therefore makes clear that individuos(x)(0 downto 0) is what gets resized. Note that in VHDL, & is in the same precedence class as the adding operators + and -, and they associate left to right, so "00" & individuos(x)(0 downto 0) - ... does apply the concatenation to the first operand; explicit parentheses simply make that intent unmistakable, so that accumulation occurs in the additional bits as intended. Finally, using the length attribute would be nice, like resize(individuos(x)(0 downto 0), adaptacao(x)'length).
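Putting the suggestions together, the assignment might look like this (a sketch, not simulated; the subtraction can still wrap around, which is inherent to the original expression):

```vhdl
-- Resize the first operand to the target width; numeric_std's "+"/"-"
-- return the maximum operand length, so the 3-bit width is kept
-- through the whole left-to-right chain.
adaptacao(x) <= resize(individuos(x)(0 downto 0), adaptacao(x)'length)
                - individuos(x)(1 downto 1)
                + individuos(x)(2 downto 2)
                - individuos(x)(3 downto 3)
                + individuos(x)(4 downto 4)
                - individuos(x)(5 downto 5)
                + individuos(x)(6 downto 6)
                - individuos(x)(7 downto 7);
```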
It isn't particularly clear what the result of bit wise subtraction should be either, that sounds like NAND.
|
STACK_EXCHANGE
|
[PATCH 4.4 04/56] cfq: Give a chance for arming slice idle timer in case of group_idle
From: Greg Kroah-Hartman
Date: Mon Sep 17 2018 - 18:43:34 EST
4.4-stable review patch. If anyone has any objections, please let me know.
From: Ritesh Harjani <riteshh@xxxxxxxxxxxxxx>
commit b3193bc0dca9bb69c8ba1ec1a318105c76eb4172 upstream.
In below scenario blkio cgroup does not work as per their assigned weights:
1. When the underlying device is nonrotational with a single HW queue
with depth of >= CFQ_HW_QUEUE_MIN
2. When the use case is forming two blkio cgroups cg1(weight 1000) &
cg2(weight 100) and two processes(file1 and file2) doing sync IO in
their respective blkio cgroups.
For above usecase result of fio (without this patch):-
file1: (groupid=0, jobs=1): err= 0: pid=685: Thu Jan 1 19:41:49 1970
write: IOPS=1315, BW=41.1MiB/s (43.1MB/s)(1024MiB/24906msec)
file2: (groupid=0, jobs=1): err= 0: pid=686: Thu Jan 1 19:41:49 1970
write: IOPS=1295, BW=40.5MiB/s (42.5MB/s)(1024MiB/25293msec)
// both the process BW is equal even though they belong to diff.
cgroups with weight of 1000(cg1) and 100(cg2)
In above case (for non rotational NCQ devices),
as soon as the request from cg1 is completed and even
though it is provided with higher set_slice=10, because of CFQ
algorithm when the driver tries to fetch the request, CFQ expires
this group without providing any idle time nor weight priority
and schedules another cfq group (in this case cg2).
And thus both cfq groups(cg1 & cg2) keep alternating to get the
disk time and hence loses the cgroup weight based scheduling.
Below patch gives a chance to cfq algorithm (cfq_arm_slice_timer)
to arm the slice timer in case group_idle is enabled.
In case if group_idle is also not required (including for nonrotational
NCQ drives), we need to explicitly set group_idle = 0 from sysfs for such cases.
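For reference, the sysfs knob mentioned above can be set like this (the device name is illustrative, and the path assumes the cfq scheduler is active on that queue):

```shell
# Disable group idling on /dev/sda's request queue (requires root).
echo 0 > /sys/block/sda/queue/iosched/group_idle
# Read the value back to verify.
cat /sys/block/sda/queue/iosched/group_idle
```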
With this patch result of fio(for above usecase) :-
file1: (groupid=0, jobs=1): err= 0: pid=690: Thu Jan 1 00:06:08 1970
write: IOPS=1706, BW=53.3MiB/s (55.9MB/s)(1024MiB/19197msec)
file2: (groupid=0, jobs=1): err= 0: pid=691: Thu Jan 1 00:06:08 1970
write: IOPS=1043, BW=32.6MiB/s (34.2MB/s)(1024MiB/31401msec)
// In this processes BW is as per their respective cgroups weight.
Signed-off-by: Ritesh Harjani <riteshh@xxxxxxxxxxxxxx>
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
Signed-off-by: Amit Pundir <amit.pundir@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
block/cfq-iosched.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
@@ -2905,7 +2905,8 @@ static void cfq_arm_slice_timer(struct c
* for devices that support queuing, otherwise we still have a problem
* with sync vs async workloads.
- if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag)
+ if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag &&
|
OPCFW_CODE
|
Added array typehint support to inspector.
-- supports --
export(Array, ...hint...)
export(Array, ...hint..., ...parameters...)
export(Array, Array, ...hint...)
export(Array, Array, ...hint..., ...parameters...)
[x] FIXED -- export(Array, Color) var myColors - ColorPicker Issue -- Fixes #19559 & #19308
[x] FIXED -- export(Array, NodePath) var myNodePaths -- Fixes #20004 & #20005
[x] ADDED -- export(Array, ...hint...)
[x] ADDED -- export(Array, ...hint..., ...parameters...)
[x] ADDED -- export(Array, Array, ...hint... )
[x] ADDED -- export(Array, Array, ...hint..., ...parameters...)
@bojidar-bg could maybe help.
export(Array, ...) takes the same hints as export(...). So, here are a few examples:
export(Array, int, 1, 6) die_rolls # integers between 1 and 6
export(Array, Array, float) heat_map # Array of arrays of floats
export(Array) variants # Array of anything
export(Array, PackedScene) levels # Array of PackedScene resources
enum X {a, b, c}
export(Array, X) xes # array of the X enum, thus with values of 0, 1 or 2 (AFAIR)
export(Array, String, FILE, "*.txt") txtfiles # Array of paths to txt files
Basically, everything from the export docs, but with Array, before the other hints.
@akien-mga @bojidar-bg All ready for code review.
I added your changes to my build and was doing some testing. When I do a direct export of something like this...
export(Array, String, FILE, GLOBAL) var libs = []
...everything works great! But if I want to add this property to a group by using _set, _get, and _get_property_list() to manually make the PropertyInfo object, it doesn't seem to wanna work. The Inspector shows an Array[Nil] value for the property rather than the Array[0] value it usually shows (where you can then increase the size to add values and change them). Clicking on it doesn't do anything. I've double-checked to make sure that PropertyInfo fields have the same value as what the typical export statement generates. Here's an example of what I'm doing:
tool
extends Resource
class_name BuildSettings
var libs = []
func _get(property):
match property:
"settings/libs": return libs
func _set(property, value):
match property:
"settings/libs": libs = value
func _get_property_list():
return [
{
"name": "settings/libs",
"type": TYPE_ARRAY,
"hint": TYPE_STRING_ARRAY,
"hint_string": str(TYPE_STRING)+"/"+str(PROPERTY_HINT_GLOBAL_FILE)+":*.lib,*.a,*.dll,*.dylib,*.so"
}
]
Is there something I'm doing wrong, or is this a bug that needs fixing?
@willnationsdev I noticed you deleted your question, did you figure out your issue?
@ordigdug yeah, I realized I had the property in my _get_property_list method, but I hadn't realized there was a name mismatch on the _set/_get calls in my code.
From PR meeting reduz stated "as those types of hints will probably be deprecated in favor of a more limited and generic system after 3.1, i would prefer the minimum amount of code is added"
I'm not going to waste anymore time on this.
@pgruenbacher Bumping old threads is not a good practice, especially when they are mostly unrelated (as is in this case).
yea sorry I was on my phone and meant to do the opposite linking
|
GITHUB_ARCHIVE
|
Pass kubernetes.Interface to functions that need it
Why is this PR needed?
Adding an interface between GetAdminClientSet & Skuba allows substituting a mock implementation that can return a fake ClientSet.
This interface is also passed as an explicit parameter to all functions that need access to GetAdminClientSet, so that unit tests can be written for our code.
Fixes #
https://github.com/SUSE/avant-garde/issues/989
What does this PR do?
This PR makes our code more testable, by untangling the explicit dependency on ClientSetFromFile implemented in kubeadm.
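The pattern can be sketched generically as follows (hypothetical names; the real code passes kubernetes.Interface, here replaced by a minimal stand-in so the idea is self-contained):

```go
package main

import "fmt"

// Clientset is a minimal stand-in for kubernetes.Interface.
type Clientset interface {
	ServerVersion() (string, error)
}

// fakeClientset is what a unit test would inject.
type fakeClientset struct{ version string }

func (f fakeClientset) ServerVersion() (string, error) { return f.version, nil }

// Status receives the client explicitly instead of loading admin.conf
// itself, so tests never touch the filesystem.
func Status(client Clientset) (string, error) {
	v, err := client.ServerVersion()
	if err != nil {
		return "", fmt.Errorf("unable to get server version: %w", err)
	}
	return "cluster running " + v, nil
}

func main() {
	out, _ := Status(fakeClientset{version: "v1.16-test"})
	fmt.Println(out)
}
```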
Info for QA
This PR should not introduce any regression.
Merge restrictions
(Please do not edit this)
We are in v4-maintenance phase, so we will restrict what can be merged to prevent unexpected surprises:
What can be merged (merge criteria):
2 approvals:
1 developer: code is fine
1 QA: QA is fine
there is a PR for updating documentation (or a statement that this is not needed)
It would be better to add os.Exit(1) in every command that requires adm.conf. Otherwise users will get unclear errors when they use these commands without bootstrapping/adm.conf.
skuba cluster status
** This is an UNTAGGED version and NOT intended for production usage. **
E1105 14:43:57.490720 29870 status.go:38] unable to get admin client set: failed to load admin kubeconfig: open admin.conf: no such file or directory
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xa8 pc=0x12e98b7]
goroutine 1 [running]:
github.com/SUSE/skuba/pkg/skuba/actions/cluster/status.Status(0x0, 0x0, 0x164f9d4, 0x22)
/home/chang/go/src/github.com/SUSE/skuba/pkg/skuba/actions/cluster/status/status.go:35 +0x37
github.com/SUSE/skuba/cmd/skuba/cluster.NewStatusCmd.func1(0xc000143b80, 0x271d710, 0x0, 0x0)
/home/chang/go/src/github.com/SUSE/skuba/cmd/skuba/cluster/status.go:41 +0xc2
github.com/spf13/cobra.(*Command).execute(0xc000143b80, 0x271d710, 0x0, 0x0, 0xc000143b80, 0x271d710)
/home/chang/go/src/github.com/SUSE/skuba/vendor/github.com/spf13/cobra/command.go:760 +0x2ae
github.com/spf13/cobra.(*Command).ExecuteC(0xc000142280, 0x13424f9, 0xc000665f88, 0xc0000ee058)
/home/chang/go/src/github.com/SUSE/skuba/vendor/github.com/spf13/cobra/command.go:846 +0x2ec
github.com/spf13/cobra.(*Command).Execute(...)
/home/chang/go/src/github.com/SUSE/skuba/vendor/github.com/spf13/cobra/command.go:794
main.main()
/home/chang/go/src/github.com/SUSE/skuba/cmd/skuba/main.go:57 +0x3a
|
GITHUB_ARCHIVE
|
Output array size too large for JVM
Hi @jmuhlich
as per your suggestion in a different repo, I just gave this a spin. Trying to stitch a whole-slide acquisition
stored in a single .nd2 file, I run into errors with the output size:
(/home/jovyan/shared_volker/ashlar) jovyan@43e3f12000b3:~/Desktop$ JAVA_TOOL_OPTIONS="-Xmx20G" ashlar whole_slide_scan_large_image.nd2 -c 0
Picked up JAVA_TOOL_OPTIONS: -Xmx20G
Cycle 0:
reading whole_slide_scan_large_image.nd2
WARNING: Stage coordinates' measurement unit is undefined; assuming μm.
Channel 0:
merging tile 1/1Traceback (most recent call last):
File "/home/jovyan/shared_volker/ashlar/bin/ashlar", line 8, in <module>
sys.exit(main())
File "/home/jovyan/shared_volker/ashlar/lib/python3.7/site-packages/ashlar/scripts/ashlar.py", line 174, in main
args.quiet
File "/home/jovyan/shared_volker/ashlar/lib/python3.7/site-packages/ashlar/scripts/ashlar.py", line 219, in process_single
mosaic.run()
File "/home/jovyan/shared_volker/ashlar/lib/python3.7/site-packages/ashlar/reg.py", line 1085, in run
tile_image = self.aligner.reader.read(c=channel, series=tile)
File "/home/jovyan/shared_volker/ashlar/lib/python3.7/site-packages/ashlar/reg.py", line 420, in read
img = self.reader.read(series, c)
File "/home/jovyan/shared_volker/ashlar/lib/python3.7/site-packages/ashlar/reg.py", line 397, in read
byte_array = self.metadata._reader.openBytes(index)
File "jnius/jnius_export_class.pxi", line 1047, in jnius.JavaMultipleMethod.__call__
File "jnius/jnius_export_class.pxi", line 769, in jnius.JavaMethod.__call__
File "jnius/jnius_export_class.pxi", line 856, in jnius.JavaMethod.call_method
File "jnius/jnius_utils.pxi", line 91, in jnius.check_exception
jnius.JavaException: JVM exception occurred: Array size too large: 70714 x 28428 x 2
(/home/jovyan/shared_volker/ashlar) jovyan@43e3f12000b3:~/Desktop$
The output array size seems quite realistic for this problem: 70714 x 28428 x 2
The machine has plenty of RAM (>64 GByte). Not sure whether I should be doing something differently or whether I am hitting a limitation here.
It looks like your ND2 file is pre-stitched -- note the beginning of the Ashlar output before the exception:
merging tile 1/1
You would expect to see a number much larger than 1 if the file contained individual tiles. You'll need to provide the raw un-stitched tiles for Ashlar to be useful, but unfortunately I don't think there is a clean way to do this for ND2 files yet. Let's chat offline a bit about your data and what the options are, and I'll update this ticket later with the resolution.
Ah, I thought this was an nd2 containing individual tiles, but I didn't check in detail, you are probably right.
This was one of the few nd2 files I had lying around, usually I have collections of tiff files plus an accompanying .csv file with the stage coordinates for each tiff file. This is a bit of a legacy protocol being used in the group that is based on the Nikon Jobs functionality.
I sent you an email, and yes ... would be happy to discuss offline.
I think this can be closed
|
GITHUB_ARCHIVE
|
Senior Software Engineer - Syracuse
Employment Type: Full-Time
Industry: Information Technology
The Senior Software Engineer's primary responsibility will be to participate in the creation of new products and enhancements to existing products, from concept to launch, as part of a cross-functional project team. The Senior Software Engineer's responsibility to the team is to design, implement, and test solutions that result in compelling, easy-to-use products. The Sr. Software Engineer will be responsible for a demanding and rewarding variety of duties related to the development, enhancement and delivery of an industry-leading product.
- Work closely with all business functions (buyers, salespeople, warehouse workers, transportation, etc.) and business analyst to understand daily, weekly, monthly, and other cyclical needs.
- Collaborates with other software engineers who are building software offerings that integrate with this platform.
- Applies truly agile approaches to project development (short iterations, frequent inspection and adaptation) to develop clear sense of direction and trust with stakeholders.
- Negotiates trade-offs and compromises.
- Implements features in a vertical fashion whenever possible (i.e. UI, Business Rules, Database Access Layers, External Interfaces, and Actual Database Scheme Design).
- Selects development tools that are appropriate and effective for getting the job done.
- Occasional travel to Rochester 1 or 2 times per year
Knowledge Skills and Assessments:
- Experience designing and/or maintaining proprietary enterprise systems
- Experience with Microsoft SSIS and SSRS
- Familiar with software modernization techniques such as ADM
- Experience developing mobile software for iOS and Android
- Experience with the latest and greatest web standards, including HTML5 and CSS3
- Knowledge of web libraries and frameworks such as node.js, jQuery, Bootstrap, .NET, etc
- Strong sense of web design and attuned to the fundamentals of user experience
- Familiarity with the whole web stack, including protocols and web server optimization techniques
- Relevant work experience, including full time industry experience
- Exceptional communication and listening skills
- Is approachable by his/her team, and works well with others
- Is passionate about their project and their job
- Is a problem solver, and proud of it
- Is passionate about learning new technology and figuring out how to apply it
- Willingness to carve time out to potentially learn a legacy language; having patience to deal with existing code while planning a new future
- A good sense of humor
- Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
- High level of proficiency in relational database architecture and SQL.
- Experience with ORMs, Parameterized Queries, Stored Procedures, and the security trade-offs involved with all of them.
- Experience with unit testing and TDD.
- Demonstrated experience in understanding the thorny issues around interpreting and migrating legacy application code.
- Must always be learning (technology, our business, the industry, our customers).
- Experience developing software using agile methodologies (e.g., Scrum).
- Willingness to have patience with legacy code, and the sense of responsibility to have patience in moving away from it.
|
OPCFW_CODE
|
Re: Clean Object Class Design -- What is it?
Date: 10 Sep 2001 14:20:56 -0700
"Craig Shearer" <craig.shearer_at_no_spam_please_bigfoot.com> wrote in message news:<3b9c3fa0_at_clear.net.nz>...
> "Bob Badour" <bbadour_at_golden.net> wrote in message
> > >
> > >If you're asking about Integrity Checking, then there is no formal mechanism
> > >for defining this.
> > This gives a huge advantage to the relational data model then, which does
> > have such a formal mechanism.
> Let's compare correctly. I said that there was no formal mechanism for doing
> this in the example OODBMS that I was using (which happens to be JADE) but
> that doesn't mean that such a mechanism could not be implemented. It would
> be possible to implement this with as much formality as The Relational Data
> Model allows... not all OODBMS products have matured this far, yet.
Since the relational model already specifies the interface to an OODBMS that has this formality, why would anyone want to use a non-relational dbms?
The network model on which the non-relational dbmses are based does not have any real formality. The formality of the relational data model arose in its very conception. As far as I have seen, nobody has proposed an alternate formal system yet. At best, all competing data models have tried to describe an ad hoc model using formal techniques.
> > >However, it is possible to define code in the destructor
> > >for the class that would raise an exception under certain circumstances,
> > >thus preventing deletion by forcing the transaction to abort.
> > Since the DBMS must support multiple applications and multiple programming
> > environments, what prevents your Java programmers from omitting an
> > constraint in the destructor that your Smalltalk programmers rely on?
> But this particular DBMS is active - you can't get at JUST the data, you
> necessarily invoke methods too.
Nothing prevents someone from writing a new method that violates the existing integrity constraints as implemented in other methods.
> If another language wants to delete a JADE
> object, then the JADE destructor will be invoked - you can't program around
In JADE, it is perhaps possible to code a cascaded delete. This barely scratches the surface of integrity enforcement. What prevents someone from writing a method that inserts an orphan?
What application programming languages does JADE support?
> Or, as you are so fond of saying, your original assumptions are false which
> renders the rest of your argument invalid. :-)
Perhaps. How many applications and application programming environments does JADE support?
> > >I'm not certain what you mean by symmetric references, but I'll hazard a
> > >guess. When references between classes are defined, you specify an update
> > >mode - effectively which side is manually maintained. So, if the
> > >relationship defines manual on one side and automatic on the other, then
> > >compiler will reject (with a syntax error) code attempting to update the
> > >automatic side of the relationship. It is possible, though, to define
> > >sides as manual/automatic, meaning that whichever side is maintained
> > >manually, the other side will be automatically maintained.
> > Symmetric means the user can use the value as to identify referencers as
> > easily as the referenced. All of the above manual and automatic crap are
> > just more physical implementation details forced on users.
> Then by your definition, these references are symmetrical.
Really? I can access the instances of any object class without navigating through instances of other object classes? I can insert new order items into an order without specifically accessing an instance of the order class?
> Regarding the manual/automatic thing, I don't believe this to be a physical
> detail - my opinion is that this represents something logically inherent in
> the data.
Navigation is not inherent in the data. Whether the user establishes a valid relationship by changing one entity or the other entity is irrelevant to the data. In the end, the user should determine which is appropriate to the application at hand provided the resulting relationship is valid.
> But then your reaction to anything that The Relational Data Model
> doesn't implement seems to be that it's an evil physical implementation detail.
The relational data model specifies a logical data model. Implementation is a totally independent issue.
Requiring the user to know which of two pointers he or she can change forces users to deal with irrelevant details. The relational model does allow (and even requires) symmetric relationships among data -- in fact, that is one of its fundamental uses!
Implying that the relational model does not allow relationships among data is ludicrous.
Received on Mon Sep 10 2001 - 23:20:56 CEST
|
OPCFW_CODE
|
[feat]: add an "onSearchSync" prop or bypass the Debounce lag
Feature description
I have 2 arrays that match the Option shape locally. Those arrays are hold by possibleOptions object (see below the mockSearch fn):
"highlighted", with around 20 items, which I show when using triggerSearchOnFocus, meaning when the user clicks on the multiselect or search.value is empty.
"all" with around 1000 items, for searching purposes.
When I'm using the async onSearch with the following mockSearch function with delay = 0, or by passing the prop delay={0}, I still see a debounce:
const mockSearch = async (value: string): Promise<Option[]> => {
return new Promise(resolve => {
setTimeout(() => {
if (!value) {
resolve(possibleOptions.highlighted)
return
}
const res = possibleOptions.all.filter(option =>
option.value.toLowerCase().includes(value.toLowerCase()),
)
resolve(res)
}, 0)
})
}
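For comparison, a purely synchronous version of this filter, with no timer or Promise at all, might look like this (a sketch; Option and possibleOptions are stand-ins for my local shapes):

```typescript
interface Option {
  value: string
  label: string
  disable?: boolean
}

// Stand-in for my local data: a short highlighted list plus the full list.
const possibleOptions: { highlighted: Option[]; all: Option[] } = {
  highlighted: [{ value: "react", label: "React" }],
  all: [
    { value: "react", label: "React" },
    { value: "redux", label: "Redux" },
    { value: "vue", label: "Vue" },
  ],
}

const searchSync = (value: string): Option[] => {
  // Empty query (focus behavior): show only the highlighted suggestions.
  if (!value) return possibleOptions.highlighted
  const needle = value.toLowerCase()
  return possibleOptions.all.filter(option =>
    option.value.toLowerCase().includes(needle),
  )
}
```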
When I have options locally, I should be able to have a UX without a debounce, so the results appear almost instantly.
Perhaps it's about bypassing the debounce? What do you think @hsuanyi-chou?
Thanks in advance!
Affected component/components
No response
Additional Context
Additional details here...
Before submitting
[X] I've made research efforts and searched the documentation
[X] I've searched for existing issues and PRs
Would not providing a loadingIndicator solve the problem?
Would not providing a loadingIndicator solve the problem?
Unfortunately no @hsuanyi-chou
If you want to filter locally, all you have to do is build the Option type (which is { value: string; label: string; disable?: boolean; fixed?: boolean; [key:string]: string | undefined; }) and send it to the defaultOptions or options prop.
If you want to filter locally, all you have to do is build the Option type (which is { value: string; label: string; disable?: boolean; fixed?: boolean; [key:string]: string | undefined; }) and send it to the defaultOptions or options prop.
cmdk will do the rest.
@hsuanyi-chou That's what I did in the beginning. But you can only pass a full defaultOptions. So when the user clicks the multi-select (focus on), it will display the full list of 1000 items.
And my use case is the following:
when the user clicks the multi-select (on focus), I only show a suggested/highlighted list (a few items from the 1000 items)
and when the user starts to type (on focus, searching mode), I look for the value within the whole list of 1000 items.
If your case doesn’t need to send a request, you can simply remove the debounce and the loading state code to make it become a sync filter.
All the code is in your project, not in node_modules.
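For reference, a minimal sketch of what such a sync filter might look like once the debounce and loading state are removed (this reuses the Option shape and the highlighted/all split from the example above; syncSearch is a made-up name, not part of the component's API):

```typescript
type Option = {
  value: string
  label: string
  disable?: boolean
  fixed?: boolean
}

// Hypothetical synchronous search: no Promise, no setTimeout, no debounce.
// An empty query returns the highlighted subset; otherwise the full list
// is filtered immediately, so results appear as the user types.
const syncSearch = (
  value: string,
  highlighted: Option[],
  all: Option[],
): Option[] => {
  if (!value) return highlighted
  const q = value.toLowerCase()
  return all.filter(option => option.value.toLowerCase().includes(q))
}
```

With the options already in memory, calling this directly from the input's onChange gives the instant UX the issue asks for.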
@hsuanyi-chou yes I know, and I tried to build this behavior based on your component, but there is plenty of logic and I could not figure out exactly what I needed to remove.
Wouldn't it be better if there were a prop to make such behavior available for everyone? I can see it's a popular feature to have suggested default options on focus.
added.
@hsuanyi-chou works perfectly, many thanks!
Btw are you on discord?
Yes, I am on discord.
Here
|
GITHUB_ARCHIVE
|
Function return values are, in .NET apps, pushed onto an "evaluation stack", which resides in protected memory within the process. However, you're talking about a string, and that's a reference type, so what's on the evaluation stack is a pointer to that string's location on the heap. Heap memory is relatively insecure because it can be shared, and because it lives as long as the GC doesn't think it needs to be collected, unlike the evaluation or call stacks which are highly volatile. But, to access heap memory, that memory must be shared, and your attacker must have an app with permission from the OS and CLR to access that memory, and that knows where to look.
There are much easier ways to get a plaintext password from a computer, if an attacker has that kind of access. A keylogger can watch the password being typed in, or another snooper could watch the actual handle on the unmanaged side of the GDI UI and see the textbox that's actually displayed in the Windows GUI get the plaintext value (it's only obfuscated on the display). All that without even trying to crack .NET's code access security or protected memory.
If your attacker has this kind of control, you have lost. Therefore, that should be the first line of defense; make sure there is no such malware on the client computer, and that the instance of your client app that the user is attempting to log into has not been replaced with a cracked lookalike.
As far as obfuscated password storage between instances, if you're worried about mem-snooping, a symmetric algorithm like Rijndael is no defense. If your attacker can see the client computer's memory, he knows the key that was used to encrypt it because your application will need to know it in order to decrypt it; it will thus either be hard-coded into the client app or it will be stored near the secure string. Again, if your attacker has this kind of control, you have lost if you do your authentication client-side.
I would, instead, use a service layer on a physically and electronically secured machine to provide any functionality of your app that would be harmful to you if misused by an attacker (primarily data retrieval/modification). That service layer could be used both to authenticate and to authorize the user to perform whatever the client app would allow.
Consider the following:
- The user enters their credentials into your client app. These credentials can be the same as the AD credentials but they will not be used as such. The only way to prevent a keylogger or other malware from seeing this is to ensure that no such malware exists on the computer, through enforcement of good AV software.
- The client app connects to your service endpoint through WCF. The endpoint can be signed with an X.509 certificate; not NSA-level security, but at least you can be confident you're talking to the server under your control.
- The client app then hashes your user's password with something that produces a large digest, like SHA-512. This in itself is not secure; it's too fast, and the entropy of your user's password too low, to prevent an attacker from cracking the hash. However, again, they have to have control of the computer to see the hash, and we're going to further obfuscate it.
- The client app transmits the username, the hashed password and the Hardware ID of the client computer over the WCF channel.
- The server gets these credentials. Notice that the server doesn't get a plaintext password; this is for a reason.
- The server cuts the hashed password into 256-bit halves. The first half is then BCrypted (using an implementation configured to be suitably slow; 10 or 11 "rounds" will usually do it), and compared with a hashed value in a user database. If they match, the DB returns the user's AD credentials, which have been symmetrically encrypted with the other half of the password hash. This is why a plaintext password is never sent; the server doesn't have to know it, but an attacker would in order to get anything meaningful out of a stolen copy of the user database.
- The server decrypts the AD credentials, submits them to AD, and receives the IPrincipal representing that user's identity and security context. The IPrincipal implementation will contain zero information that could be used to crack the user's account.
- The server generates a cryptographically-random 128-bit value, concatenates the 128-bit Hardware GUID, and hashes it with SHA-512. It uses half of that hash to symmetrically encrypt the key value that was used to decrypt the AD credentials. It then BCrypts the other half, and stores that hash beside the encrypted key.
- The server then transmits back three pieces of information over the secure WCF channel; the IPrincipal that AD produced, the unhashed 128-bit random value (the "transfer token"), and another cryptographically-random value of arbitrary length (the "session token").
- The client app is now authenticated on the client side, meaning you can control user access to code by interrogating the IPrincipal for AD role membership, and the server is now also confident that the user who has the session token is a real user. When making any further calls to the service (data retrieval/persistence), the client should use the WCF channel that was negotiated, AND pass its session token. The combination of WCF channel and session token is one-time and unique; using an old token on a new channel, or passing the wrong token on the same channel, indicates the session has been compromised. Above all, none of the persistent data stored anywhere at anytime in either client or server can be used to get the AD credentials and authenticate.
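The hash-splitting step in the flow above can be illustrated with a short sketch (a TypeScript/Node illustration, not the author's .NET implementation; the function name is made up):

```typescript
import { createHash } from "node:crypto"

// SHA-512 produces a 64-byte digest. Split it into two 256-bit halves:
// the first half is sent to the server, where it is BCrypted and compared
// against the stored verification hash; the second half never leaves the
// client as-is and serves as the symmetric key for the encrypted AD
// credentials. The server can verify the user without ever learning the
// plaintext password.
function splitPasswordHash(password: string): { verifyHalf: Buffer; keyHalf: Buffer } {
  const digest = createHash("sha512").update(password, "utf8").digest() // 64 bytes
  return {
    verifyHalf: digest.subarray(0, 32), // BCrypted server-side for verification
    keyHalf: digest.subarray(32, 64),   // decrypts the stored AD credentials
  }
}
```

The design point is that a stolen copy of the user database is useless on its own: the decryption half of the hash is never stored, so the attacker would need the plaintext password to recover anything.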
Now, when your client application closes, all "session state" is lost between client and server; the session token is not valid for any other negotiated channel. So, you've lost authentication; the next client who connects could be anyone regardless of who they say they are. This is where the "transfer token" comes in:
- The "transfer token" is a free pass back into the system. It is one-time, and expires if unused 18 hours after it was issued.
- The client application, when closing, passes two pieces of information to the new instance (however it chooses to do so); the user name of the person who logged in, and the "transfer token".
- The new instance of the client application takes these two pieces of information, and also gets the Hardware ID of the client machine. It negotiates a secure connection with the WCF service, and passes these three pieces of information.
- If the user last logged in more than 18 hours ago (not 24 hours, so they can't show up a minute before they did yesterday and restart the app), or if you want to be really paranoid, more than 8 hours ago, the app immediately returns an error that the transfer token for that account is out of date.
- The service takes the transfer token, concatenates the Hardware ID, SHA-512s it, BCrypts half, and compares the result to the stored second verification value. Only the proper combination of the transfer token and the machine that last logged in will produce the correct hash. If it matches, the other half of the hash is used to decrypt the key that will then decrypt the AD info.
- The service then proceeds as if the user had provided the application password hash, decrypting the AD info, retrieving the IPrincipal, generating a new transfer token, session token, and re-encrypting the key for the AD data.
- If any part of this process fails (trying to use an incorrect token including using the same token twice, or using the token from a different machine or for a different user), the service reports back that the credentials are invalid. The client app will then fall back to the standard user-password verification.
Here's the rub; this system, by relying on a secret password that is not persisted anywhere except the user's mind, has no back doors; administrators cannot retrieve a lost password to the client app. Also, the AD credentials, when they have to change, can only be changed from within the client app; the user can't be forced to change their password by AD itself on a Windows login, because doing so will destroy the authentication scheme they need to get into the client app (the encrypted credentials will no longer work, and the client app credentials are needed to re-encrypt the new ones). If you were somehow able to intercept this validation inside AD, and the client's app credentials were the AD credentials, you could change the credentials in the user app automatically, but now you're using one set of credentials to obfuscate the same set of credentials, and if that secret were known you're hosed.
Lastly, this variant of this security system functions solely on one principle; that the server is not currently being compromised by an attacker. Someone can get in, download offline data, and they're stuck; but if they can install something to monitor memory or traffic, you're hosed, because when the credentials (either username/password hash or transfer token/hardware ID) come in and are verified, the attacker now has the key to decrypt the user's AD credentials. Usually, what happens is that the client never sends the decryption key, only the verification half of the hashed password, and then the server sends back the encrypted credentials; but, you are considering the client to be a bigger security risk than the server, so as long as that is true, it's best to keep as little plaintext as possible on the client for any length of time.
|
OPCFW_CODE
|
I'm currently in college, working as part of a team on one of the CA projects. It's a problem-based learning project where we've been given a trigger and told to come up with something to represent it. We've been assigned to specific teams and we're not allowed to change to another team; all problems/issues must be resolved within the team. We've decided to toy around with the idea of a game. The PBL procedure involves assigning roles to each team member and getting stuck in: analysing the problem, deciding on the best way to solve it, getting stuck into UML and coding etc. (we're using Java).

However, it seems to me that for the most part my fellow team members lack enthusiasm for the task at hand. After our first lesson one member was assigned the task of printing out special "trigger" cards for the next meeting, and one was supposed to update the log with the minutes of the last meeting. Now we're in week 3, and I've printed the cards myself and I've had to post on the online log and ask for it to be updated (4 days after the class). I find it quite distressing, to be honest.

Last week we assigned tasks to each member, namely researching a specific type of game to see the feasibility of using it for the project. I regularly composed and sent emails on my progress, ideas of how we could proceed, even diagrams and the beginnings of UML class diagrams that might be useful. I got some response from one of the members (4 on the team) but nothing from the others. In the meantime we got a mail from the lecturer saying he couldn't make the next class, but that it wasn't an issue: the materials were online and he was just an observer this time anyway.
When I got to class the other day, one guy didn't show up; apparently it's too far to drive if there's not even a lecturer there (like the workshop would be much different anyway). When we got into it, the other lads had come up with "maybe some sort of Pacman game would be good", but that was it: no research, no ideas on how we might code it, nothing. They completely shot my idea down as too much work, but didn't offer any suggestions to replace it. I'm not the chairperson on this team, so how can I help motivate them to get stuck in without offending anyone? Or maybe the team needs a shake-up to get some life into it.
I really don't get the lack of interest or excitement. This is a part-time course, everyone has full-time jobs; I'd imagine they're here because they have life experience and now know what they want. But listening to them talk, the lack of "buzz", for want of a better word, about the project doesn't instill confidence in our ability to get it done. It feels like I'm currently the driving force behind it, but I don't want to be pushy; at this rate I can't see how we can complete the project on time. For the most part they only seem to communicate during college hours. We've set up a shared workspace for task management, scheduling etc., but again no one seems interested in maintaining it. Is it me? Am I expecting too much? Am I that annoying "me! me! me!" individual in the group? I don't think so. Am I wrong to get frustrated at the lack of movement? We're heading into week 4 now, and while we've agreed on a game "template", it's only because I pushed it; they wanted to wait for the other guy to return, but then we'd be in week 5 of an 11-week project with no movement.
Sorry for the rant! I'd be very grateful for any words of wisdom you can provide.
|
OPCFW_CODE
|
ASPENTECH SQLPLUS ODBC DRIVER DETAILS:
|File Size:||6.2 MB|
|Supported systems:||Windows All|
|Price:||Free* (*Registration Required)|
ASPENTECH SQLPLUS ODBC DRIVER (aspentech_sqlplus_8366.zip)
SQL Database Azure.
Workshop, use the aspen sqlplus editor to construct simple queries with the select statement and become familiar with the query editor. Executable files and password of this? Sql*plus user s guide and reference, release 8.1.7 part no. The industries that give you will hopefully allow you only. The cisco information server download odbc driver not have any record.
Apurva is a peoplesoft consultant and a big advocate of everything peoplesoft. Admin can perform various database administration functions including, starting or stopping 1, aspentech software is only windows based compatibility these technology, the best option is the 'aspentech sqlplus odbc driver'. Please remember to ip21 by aspentech software in python. We have already developed an application to insert the data into ip21 historian which uses same protocol. Is it possible to query data from infoplus 21 ip21 aspentech using php? Microsoft odbc driver 13 for sql server , login failed for user 'sa' ask question asked 2 years, 2 months ago.
Cisco Information Server Download.
342, also uses odbc, sql server 2016, 11. The updated driver provides robust data access to microsoft sql server and microsoft azure sql database for c/c++ based applications. The microsoft odbc driver for sql server allows native c and c++ applications to leverage the standard odbc api and connect to microsoft sql server 2008, sql server 2008 r2, sql server 2012, sql server 2014, sql server 2016 preview , analytics platform system, azure sql database and azure sql data warehouse. The steps that follow shows our installation on a windows 2019 server but the same applies for earlier windows version.
SQL Data Warehouse.
Has not open a software in python. Aspentech sqlplus driver download - therefore it should be possible in your case as well. In that case, it is the soap wrapper around sqlplus, you will not have to install the windows odbc.
The names of program executable files are . The microsoft odbc client configuration use all installations currently unknown. Therefore it should be possible in your case as well. Dimension the connection string and odbc command. For more information, see the related sections. I configured two data source technologies. The setup package generally installs about 21 ip21 historian. Odbc driver for sql server download odbc driver for sql server. You can also use sql plus with the connector.
Before i can use pyodbc, i need to install the microsoft odbc driver 13 for sql server. Recent versions of ip21 require a slight modification to the web.
SQL Server Named Pipes Provider.
Does anybody have to our informers. Sql server azure sql azure synapse analytics sql dw applies to, sql server azure sql database azure synapse analytics sql dw parallel data warehouse microsoft odbc driver for sql server dll api. A82950-01 oracle corporation welcomes your comments and suggestions on the quality and usefulness of this document. Does anybody have any experience of in connecting via odbc to ip21 by aspentech. The industries that drive our economies and touch our lives are optimized by aspenone software every day. Aspen sqlplus odbc driver download - stay on top of performance with pre-built dashboards that give you a graphical view of kpis, golden profiles, compliance tags, and spc tags.
Setting up the aspentech sqlplus odbc data source. Is it, you have a linux version. A variable and touch our site, and a peoplesoft. Does anybody have any experience of testing / using this?does anybody have any experience of in connecting via odbc to ip21 by aspentech. Your input is an important part of the information used for revision. Aspen sqlplus has not been rated by our users yet.
Server native client sql server on. How to get microsoft azure sql server on linux ubuntu with free license. When you configure the server, make sure the connection url and odbc client configuration use the same data source name. The most common release is 188.8.131.52, with over 98% of all installations currently using this version. Aspentech sqlplus driver cross-platform html5 enterprise search engine enables you to quickly find and visualize relevant information with full-text search and hit highlighting.
The setup package generally installs about 21 ip21 configuration with xmii. In the majority of cases you only need one process because the sqlplus language is extremely efficient, but it s good to know the flexibility and scalability is there. Both the following operating systems, release 8. Microsoft odbc driver 13 for sql server named pipes provider, could not open a connection to sql server 53 . Aspen sqlplus odbc driver download - if so, you'll need to add to the script a method to obtain the host name into a variable and then use the variable in the connection string assignment just like in the vb program. Microsoft odbc driver for sql server microsoft odbc driver for sql server.
But you will still have to learn sqlplus syntax. With over them, port=10014 dim ocmd as for all. 0, the same windows 2019 server database. Our comprehensive elearning courses, created by aspentech experts, offer self-guided learning paths for all our major solutions.
Connecting SQL Server Management Studio to.
The setup package generally installs about 19 files and is usually about 21.31 mb 22,342,656 bytes . And suggestions on the majority of in python. The updated driver provides robust data source name. The new driver cross-platform html5 enterprise search and understand our informers. Hi all, i have an issue with some ip21 tags. I believe there is a driver that is available from easysoft jdbc/odbc bridge . Does anybody have any experience of the operating system level.
|
OPCFW_CODE
|
The Global Kettlebells Market analysis report contains all study material about the Global Kettlebells Market overview, clinical review, medical trend, demand and development research all over the world. The report, named "Global Kettlebells Market", provides a detailed overview of the Kettlebells market worldwide. The report assesses the size of the Kettlebells market and also estimates the valuation of the global Kettlebells market by the end of the given forecast period.
Worldwide report on ” Kettlebells Market” includes a comprehensive study of the Kettlebells market and defines the key terminologies as well as Kettlebells market classifications for the benefit of new entrants to the Worldwide Kettlebells market. This report points out the Kettlebells Market drivers and restraints affecting the growth of the Kettlebells market. It also cites the various Kettlebells Industry opportunities for the Kettlebells market to grow in the next couple of years.
The report studies the global Kettlebells market on the basis of major product types and end user segments. The report related to Kettlebells Market also compiles data from relevant industry bodies to forecast the growth of each of the segments related Kettlebells Market Scenario. The report especially focuses on Kettlebells market and analyzes the various micro- and macro-economic factors affecting the growth of the global Kettlebells market. Presently, the global Kettlebells industry economy is witnessing its slowest growth phase in the past two decades related Kettlebells market scenario.
To Buy Purchase Report Here : http://www.qymarketresearch.com/report/99195#inquiry-for-buying
The report profiles some of the key players in the global Kettlebells market and provides insightful information about Kettlebells industry, such as business overview, Kettlebells market product segmentation, revenue segmentation, and latest developments. The report Kettlebells market analyzes the strengths and weaknesses of the key players through SWOT analysis and projects the Kettlebells market growth of the key players during the forecast period. The report on ” Worldwide Kettlebells Industry ” also provides some valuable recommendations and serves as a helpful guide to new as well as existing players in the Kettlebells market.
QY Market Research is a single destination for all the industry, company and country reports. We feature large repository of latest industry reports, leading and niche company profiles, and market statistics released by reputed private publishers and public organizations.
Suite #8138, 3422 SW 15 Street,
Deerfield Beach, Florida 33442
Toll Free: +1-855-465-4651 (USA-CANADA)
|
OPCFW_CODE
|
Section: New Results
DBMS has become a very mature technology that is ubiquitous in information systems. Over time, the extensive use of DBMS technology has had major consequences in large organizations: the production of very large databases, the production of heterogeneous databases, and the increasing requirement of diverse applications to access those very large, heterogeneous databases. This creates difficult technical problems which get worse as DBMS technology improves and is more able to produce very large, heterogeneous databases. The SaintEtiQ system provides a novel solution for representing, querying and accessing large databases. We recently completed our work on summary querying techniques as well as decision support systems. We also pursued our work on summary management over P2P systems.
Summary query evaluation
We proposed a querying mechanism for users to efficiently exploit the hierarchical summaries produced by SaintEtiQ. The first idea is to query the summaries with their own vocabulary, taking advantage of their hierarchical organization. The query evaluation matches summaries in the tree against the fuzzy selection predicates of the query. The algorithm performs boolean set comparisons and uses the tree structure to cut branches and prune the search space. This leads to important gains in response time, in particular in the case of null answers (i.e., of an empty result set), as only a small part of the summary hierarchy must be parsed, instead of the entire database.
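The branch-cutting idea can be sketched as follows (a toy TypeScript illustration with made-up types; this is not the SaintEtiQ algorithm itself, just the general pruning pattern the text describes):

```typescript
// Toy summary node: a set of descriptor labels plus child summaries.
interface SummaryNode {
  descriptors: Set<string>
  children: SummaryNode[]
}

// Each predicate is a disjunction of acceptable labels; a node matches a
// query when every predicate is satisfied by at least one of its
// descriptors. If any predicate fails, the whole branch is cut without
// visiting its children -- which is why empty ("null") answers are cheap:
// they are often decided near the root.
function searchSummaries(node: SummaryNode, predicates: string[][]): SummaryNode[] {
  const matches = predicates.every(p => p.some(label => node.descriptors.has(label)))
  if (!matches) return [] // prune this branch entirely
  if (node.children.length === 0) return [node]
  return node.children.flatMap(child => searchSummaries(child, predicates))
}
```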
As an extension of this work, we proposed to formulate the query predicates with a free user vocabulary rather than with the summary descriptors. We studied the query evaluation including the mapping between user concepts and summaries, using the symbolic-numerical interface of fuzzy set theory.
Querying summaries: multidimensional indexing
We investigated the area of multidimensional indexing from the point of view of space-partitioning. Through its architectural aspects, a summary hierarchy shares many features with multidimensional indexes (R-Tree, UB-Tree, X-Tree, ...). Current work on flexible querying uses the hierarchy as an index to select the appropriate database records, since in multidimensional indexing, each selection criterion reduces the search space for the other criteria.
Thus, we proposed to use summary hierarchies from the SaintEtiQ system as an index structure for a PostgreSQL access method. The objective of this work is to study the feasibility of using summaries as indexes, and to determine the parameters that have an impact on the access method's performance. The study is limited to searching, because defining a fully functional access method is a tedious task: updates and inserts are not yet supported. The index file is a binary version of the XML file produced by the SaintEtiQ prototype. The point of not modifying the tree structure is to evaluate the prototype's output as faithfully as possible. Although a summary hierarchy is intended for a different purpose and not optimized for querying, it provides acceptable response times for queries other than one-column queries. However, explaining the response time remains difficult. The immediate perspective is to use larger data sets so as to make the influence factors more distinct. Since no benchmark data set exists for evaluating multidimensional indexing techniques, we are working on generating random data with a variable search-space occupation ratio. Tuning that ratio will help simulate real data. Once the performance factors are known, it will be possible to adapt the construction of summaries for the purpose of using them as an index structure, which is very promising.
On-Line Analytical Processing of summaries
We proposed a general framework to explore and analyze database summaries built from massive data sets. Summaries are self-descriptive, higher-level views of groups of raw data. The overall on-line summarization processing is then intended to support a new approach to On-Line Analytical Processing of large data sets. It aims at providing an effective and rich tool for visualizing, querying and accessing summaries considered as compressed semantic views of raw data.
Our contributions are as follows. First, we defined a logical data model called summary partitions, by analogy with OLAP datacubes. The aim is to provide the end-user with an effective way of presenting a reduced version of the data set, as well as to support analysis. Pre-built and ordered partitions are considered on the basis of a process dedicated to the generation of summaries at different levels of granularity. Second, we defined a collection of algebraic operators over the space of summary partitions: relational, granularity and structuring operators are designed for on-line analytical processing of summarized versions of the data. Third, we addressed the issue of representing the summary partitions, especially to make the summaries as simple and informative as possible for the end-user. To achieve this, we tried to build fuzzy prototypes for the summaries, as a pre-visualization mechanism.
Summaries over a P2P architecture
We started to study the integration of a new service for managing summaries in P2P systems. In such a context, summaries have two main virtues. First, they can be directly queried and used to approximately answer a query without exploring the original data. Second, as semantic indexes, they support locating relevant nodes based on data content.
The first idea was to incrementally construct a global summary which describes all the data shared in the network. Distributed storage of such a global summary is, for instance, managed by a dedicated service and peers call that service with the right global summary key. For a given query, the global summary is first used to determine the set of nodes having relevant data. Then, those nodes are directly contacted. Simulation results have shown that the cost of query routing is significantly reduced compared to flooding approaches. However, converging to, and maintaining such a global summary is hard and costly in a P2P environment. Current work consists in retrieving a sort of natural partitioning of unstructured networks in peer domains, each managing its global summary. Our approach relies only on scale-free network properties such as the power law degree distribution and the associated clustering coefficient distribution. The intra-domain links will be used as summary links (i.e. index links) to maintain the global summary, while the inter-domain links will be used as search links to propagate the query among domains. We aim at finding the optimal number of domains that minimizes the total cost of query routing and summary maintenance.
|
OPCFW_CODE
|
It is your first day as a network administrator. Your boss walks up to your desk and for your first task you must implement standard configurations across all your switches and routers. Let’s not yet worry about how you will deploy these configurations across the enterprise, but let’s talk a bit about the content.
Standards are important in networks. Having a uniform config is a great goal to have. Different site configurations will vary, but there are certain configuration pieces you can keep the same. Whether you deploy Cisco, Juniper or another solution, there are best practices you can implement.
I always recommend looking at the best practices section for your solution and implement what you can. Below are just a few configs I’ve found useful, picked up through studies or are recommended best practices. These network device configurations are specific to Cisco, but others have similarities.
In the world of Spanning Tree Protocol, loop prevention is of utmost importance. When Spanning Tree Protocol is used in the topology, information is exchanged between switches in the form of Bridge Protocol Data Units (BPDUs). You should not receive BPDUs from user-facing ports, so I usually configure all user-facing ports with bpduguard.
interface GigabitEthernet 1/0/1
 description User-A
 spanning-tree bpduguard enable
I recall early in my IT career a day that this and a few other commands would have helped. Off-brand switches and hubs were not allowed at the university where I worked, but they were often used for offline PC imaging. One of the techs found a stray cable laying around and thinking that this belonged to the rogue device, plugged it back in. This “offline” device was uplinked to one of the switches in the nearby closet. Also, we did not have bpduguard enabled, storm-control or anything else that would have helped. As the tech left for the day, the network slowed down to a crawl. It was all because of that one cable plugged back into the same switch. It took some detective work and a lot of time for the engineers to track the issue down. A few commands would have prevented it all.
Here is a useful set of commands that can get you out of a jam with a rogue DHCP server.
ip dhcp snooping vlan <#s>
no ip dhcp snooping information option
ip dhcp snooping
interface <type/#>
 ip dhcp snooping trust
I remember when I received a call from a site about the internet not working. Well, that is and always will be a vague statement. “What IP address do you have?” I asked. Usually this question can point you in the right direction. The IP I was then given was nothing like the IP they should have pulled from the DHCP server. That was the big clue something fishy was going on. I added the above command for the VLAN the users were on. The purpose of DHCP snooping is to trust specific ports where you know DHCP packets should come from. I added the interface trust command to the physical ports the actual DHCP server was on as well as the uplinks between the switches.
The no ip dhcp snooping information option command is always a confusing one. Do I need it on or not? The option in question is DHCP option 82. This option gives the server additional info to where the device needing the IP resides in the network. In my experience I usually do not need to do this with the Windows servers we use for DHCP. I do not need the option added in, so I run the above command. As soon as this occurs, only the true DHCP server can communicate with clients. The rogue server will not be able to communicate with clients. You can then use the switch logs to track down which port the rogue server connected to. At that point there will probably be a nice conversation with someone.
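Once snooping is running, two quick show commands confirm the trusted ports and the client bindings the switch has learned, which is also where you find the port the rogue server was on:

```
show ip dhcp snooping
show ip dhcp snooping binding
```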
You might be part of a smaller operation without the ability to track who does what with a TACACS or RADIUS solution. That is the best option, but if you do not have it you can still keep track of who ran what commands on the device:
archive
 log config
  logging enable
  hidekeys
This is the quick and dirty way of finding out Senior Engineer So and So was the one that took down that uplink by mistake. Hey, it happens. We all do it. However, for accountability purposes, we need to know who did what and what commands were typed. The above config will allow you to use the show archive log config all command:
 52   1  SrEngineer@vty0 | interface GigabitEthernet8/28
 53   1  SrEngineer@vty0 | shutdown
Here is a simple command I toss on the console and lines when I am configuring a device.
line con 0
 logging synchronous
line vty 0 15
 logging synchronous
Have you ever started configuring a device and as you type commands the console output continues to interrupt what you are currently typing? Yes, it is annoying, distracting and just makes the output look messy as you type the next command. Adding logging synchronous to your lines will keep the commands you are currently typing on a separate line from the output coming out. This will help when you must console in and configure devices.
These simple global commands do not do much, but they do help in keeping connections to a device (telnet/SSH) cleaned up during disconnects.
service tcp-keepalives-in
service tcp-keepalives-out
TCP connections to a device (usually a management connection to the device) that age out or are disrupted can stay “active” on the device. This happens because the device does not know the remote connection was disrupted. Adding in these service commands will clear the connections for you.
As I mentioned in the beginning, reviewing your best practices is key. This will allow you to create a set of configs you can deploy globally. Look at your infrastructure and come up with those standards. Your organization will thank you, and if they do not, your future self will.
this.schema.generateId is not a function
I wrote an article on this issue I have been facing: article on dev.to.
I added the files from the old project and recreated the model folder as db. I uninstalled redis-om version 0.3.6 and installed version 0.1.5. I hit the endpoint to create an admin and this was the log.
uncaughtException (Sun Aug 21 2022 16:05:39 GMT+0000): Error: The field 'paidAt' is configured with a type of 'date'. Valid types include 'array', 'boolean', 'number', and 'string'.
    at Schema.validateFieldDef (/home/user/Projects/web/ARMS-redis/node_modules/redis-om/dist/schema/schema.js:84:19)
    at Schema.defineProperties (/home/user/Projects/web/ARMS-redis/node_modules/redis-om/dist/schema/schema.js:32:18)
    at new Schema (/home/user/Projects/web/ARMS-redis/node_modules/redis-om/dist/schema/schema.js:15:14)
    at file:///home/user/Projects/web/ARMS-redis/src/db/schema/cash.schema.js:25:16
    at ModuleJob.run (node:internal/modules/esm/module_job:198:25)
    at async Promise.all (index 0)
    at async ESMLoader.import (node:internal/modules/esm/loader:385:24)
    at async loadESM (node:internal/process/esm_loader:88:5)
    at async handleMainPromise (node:internal/modules/run_main:61:12)
Well, it has something to do with the type, date.
I went back to this project on GitHub and so I installed that version instead.
{"level":"info","message":"Redis server connected"}
{"level":"info","message":"server started on port 3000"}
{"level":"error","message":"this.schema.generateId is not a function"}
TypeError: this.schema.generateId is not a function
at Repository.createEntity (/home/user/Projects/web/ARMS-redis/node_modules/redis-om/dist/repository/repository.js:43:30)
at Repository.createAndSave (/home/user/Projects/web/ARMS-redis/node_modules/redis-om/dist/repository/repository.js:63:27)
at create (file:///home/user/Projects/web/ARMS-redis/src/controller/admin.controller.js:57:31)
{"contentLength":"47","level":"info","message":"HTTP Log","method":"POST","responseTime":122.367,"status":200,"timestamp":"Sun Aug 21 2022 16:20:31 GMT+0000 (Greenwich Mean Time)","url":"/admin"}
Back to square one.
I think I should switch to typescript. I will switch to typescript before I sleep and if it doesn't work, at least I learnt something.
I am on Ubuntu 20.04, Node v16.15.0, npm 8.10.0, Redis server v7.0.0, redis (js) 4.2.0, redis-om (js) [0.1.5, 0.2.0, 0.3.6].
The date type was added later. The downgrade from 0.3.6 to 0.1.5 caused this issue. It's probable that the example code at https://github.com/redis-developer/express-redis-om-workshop/blob/solution/package.json, which uses Redis OM 0.2.0, is not compatible with 0.3.6.
Redis OM is still under development, which is why the version number starts with a zero. Think of 0.1., 0.2., and 0.3.* as major releases that are not guaranteed to be backwards compatible. The canonical examples to use for the current version of Redis OM are in the README in this repository.
So deleting node_modules would have resolved the issue regardless of the version?
Probably. Although npm uninstall redis-om should remove it from node_modules too.
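For anyone hitting this later: since the 0.1.*, 0.2.* and 0.3.* lines behave like incompatible major releases, it can help to pin an exact version in package.json (no ^ range) and do a clean install so nothing from an older release lingers in node_modules. A minimal sketch, with 0.3.6 standing in for whatever version you target:

```
{
  "dependencies": {
    "redis-om": "0.3.6"
  }
}
```

Then remove node_modules and package-lock.json and run npm install again.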
Introduced by Marcus Powell (author and film historian)
SPOILER WARNING The following notes give away some of the plot.
Nightmare is never far away in Alexander Mackendrick’s films. It’s there in the street-pursuit of Sidney Stratton in The Man in the White Suit (1951), and in fat cop Harry Kello’s beckoning cry of chastisement in Sweet Smell of Success (1957), and it inhabits all three films Mackendrick made with children in central roles: Sammy Going South (1963), A High Wind in Jamaica (1965) and Mandy.
An only child, raised by grandparents, Mackendrick had a miserable upbringing, yet while his films brilliantly convey the solitary melancholies of childhood, they also present children as dangerously intelligent, near-feral creatures, trapped within the horrors of the adult world yet with a powerful rage to survive. Nowhere is that more evident than in Mandy.
Adapted by novelist Nigel Balchin and Ealing screenwriter Jack Whittingham from Hilda Lewis’s 1946 novel The Day Is Ours, Mackendrick’s third Ealing film (his only non-comedy for the studio) was promoted as a social-issue vehicle, addressing the institutionalised care of ‘deaf and dumb’ children. It centres on the plight of middle-class parents Christine and Harry (Phyllis Calvert and Terence Morgan), and their young congenitally deaf daughter (brilliantly portrayed by eight-year-old Mandy Miller).
What astonishes about Mackendrick’s film is how Mandy is presented as the film’s central life-force, her screams, tears, and emotive expressions shown in stark contrast to the whispered prejudices of the adult world.
Shot in Festival of Britain year, Mandy ends on a compromised note of optimism, but throughout inhabits the darker visual landscapes of horror or fairytale, Douglas Slocombe’s stunning low-key cinematography imprisoning Mandy and Christine within the ‘old dark house’ of Harry’s snobby parents, surrounded by the forbidden wasteland of post-blitz London. Even the film’s one character of hope, Mandy’s teacher Dick Searle, is a haunted figure, played by Jack Hawkins as an irascible melancholy figure visibly traumatised by the (unnamed) events of his recent past.
On release, Mandy was faulted for the numerous ways in which its themes of reform were compromised by melodrama. Viewed today, the film’s refusal to stay ‘on message’, and the continual glimpses of nightmare at its edges, are what give it its strange power.
Andrew Male, Sight and Sound, July 2017
Sandwiched between two pairs of comedies, Mandy was the only ‘serious’ work of the five films Alexander Mackendrick directed at Ealing Studios. A powerful and affecting drama about a deaf child and her parents’ attempts to come to terms with her condition, it is remarkably free of the sentimentality which might so easily have weakened its impact.
Mackendrick’s previous film, The Man in the White Suit (1951), combined humour with a bitter criticism of contemporary British society. Similarly, Mandy uses a simple melodramatic story to examine the stagnant conservatism of middle-class family life in a postwar Britain already turning its back on change – the film was released in July 1952, nine months after a general election in which the country had turned its back on a Labour government which had created the National Health Service, and laid the foundations for the Welfare State.
Cut off as she is by her deafness, Mandy is as much a victim of the suffocating love of her parents, Christine (Phyllis Calvert) and Harry (Terence Morgan), and of an overprotective grandmother (Marjorie Fielding) and an emotionally distant grandfather (Godfrey Tearle). Realising that Mandy’s best hope is to leave the prison of her family in London for a Manchester school for the deaf, where she might learn to lip-read and, eventually, talk, Christine has to battle both Harry and his parents, and ultimately to leave her husband, until she finds herself accused of adultery with the school’s headmaster, Mr Searle (Jack Hawkins).
The film ends on an apparently positive note, as Mandy speaks her name for the first time and is invited to play with a group of hearing children. For Christine, however, this breakthrough comes at the expense of her own freedom as she rejoins the family she briefly escaped.
Much of Mandy’s impact is due to the extraordinary performance of its seven-year-old star. Mackendrick had already decided against casting a truly deaf child in the lead role: ‘Deaf-mute children can be extraordinarily intelligent and perceptive; but they have this terrible desire to make you feel they’ve understood you when they haven’t really,’ he later explained. Mandy Miller had made a brief but memorable appearance in The Man in the White Suit, but even Mackendrick was surprised at the intensity of the young girl’s performance.
Mark Duguid, BFI Screenonline, screenonline.org.uk
Director: Alexander Mackendrick
Production Company: Ealing Studios
Producer: Leslie Norman
Production Supervisor: Hal Mason
Unit Production Managers: Leonard C. Rudkin, Harry Kratz
Assistant Director: Norman Priggen
Continuity: Jean Graham
Screenplay: Nigel Balchin, Jack Whittingham
Based on the novel by: Hilda Lewis
Director of Photography: Douglas Slocombe
Camera Operator: Jeff Seaholme
Editor: Seth Holt
Art Director: Jim Morahan
Costume Designer: Anthony Mendleson
Make-up: Harry Frampton
Hairstyles: Barbara Barnard
Music: William Alwyn
Music Performed by: Philharmonia Orchestra
Music Conductor: Ernest Irving
Sound Supervisor: Stephen Dalby
Recordist: Arthur Bradburn
Advice on the tuition of the deaf: Ethel C. Goldsack
2nd Assistant Director: John Assig
3rd Assistant Director: Jim O’Connolly
Casting Director: Margaret Harper Nelson
Small Parts/Crowd Casting: Muriel Cole
2nd Unit Director of Photography: Paul Beeson
Focus Puller: Hugh Wilson
Clapper Loader: Michael Shepherd
Stills Supervisor: Jack Dooley
Stills: Bob Penn
Special Effects: Syd Pearson
Assistant Editor: Harry Aldous
2nd Assistant Editor: Lionel Selwyn
Assistant Art Director: Len Wills
Draughtsmen: Jack Shampan, Norman Dorme
Junior Draughtsman: Tony Rimmington
Scenic Artist: Geoffrey Dickinson
Wardrobe Master: Ernest Farrar
Wardrobe Mistress: Lily Payne
Wardrobe Assistants: Ron Beck, Edith Crutchley
Make-up Supervisor: Ernest Taylor
Make-up Assistant: Harry Wilton
Hair Assistant: Daphne Martin
Boom Operator: Cyril Swern
Dubbing Editor: Mary Habberfield
Phyllis Calvert (Christine Garland)
Jack Hawkins (Dick Searle)
Terence Morgan (Harry Garland)
Godfrey Tearle (Mr Garland)
Mandy Miller (Mandy Garland)
Marjorie Fielding (Mrs Garland)
Nancy Price (Jane Ellis)
Edward Chapman (Ackland)
Patricia Plunkett (Miss Crocker)
Eleanor Summerfield (Lily Tabor)
Colin Gordon (Woollard Junior)
Dorothy Alison (Miss Stockton)
Julian Amyes (Jimmy Tabor)
Gabrielle Brune (secretary)
John Cazabon (Davey)
Gwen Bacon (Mrs Paul)
W.E. Holloway (Woollard Senior)
Phyllis Morris (Miss Tucker)
Gabrielle Blunt (Miss Larner)
Jean Shepherd (Mrs Jackson)
Jane Asher (Nina)
Marlene Maddox (Leonie)
Michael Mallinson, Doreen Gallagher, Doreen Taylor, Michael Davis, Joan Peters (children)
Programme notes and credits compiled by the BFI Documentation Unit
Notes may be edited or abridged
In reviewing some of the criticisms of connecting shapes in Visio on the web, it has become clear that some users misunderstand lines and connectors. That is not surprising, really, because the Microsoft Visio help documentation does not currently make the distinction clear. A connector shape is used to connect two shapes together, whereas a line is normally just a straight line. As usual with Visio, though, this is not the whole story, because a line can be used to connect two shapes together, and it can be turned into a dynamic connector. I will try to explain myself in this article.
The normal way to connect two shapes together is to use the Connector tool (CTRL+3) on the Home / Tools ribbon, and a line is drawn with the Line (CTRL+6) drop-down menu in the same ribbon group.
Tips and tricks of connectors
The Connector tool (and the Connect Shapes tool) is normally used to connect two shapes together. The connector shape used can be any 1-dimensional master, and there are many examples to be found on the Visio Extras / Connectors stencil.
Note that using a master shape from a stencil will automatically copy the master shape to the local stencil of the active document, if it does not exist there already.
It can be a little confusing that Visio refers to the behavior of connectors as Line on the Developer / Shape Design / Behavior dialog, because a line is not necessarily a connector, as you will read below.
If a connector master is not currently selected in the active stencil, then Visio will automatically use the default Dynamic connector. In fact, if this master does not already exist in the active document, then it is created.
The Dynamic connector master already exists in a number of Microsoft supplied templates because a custom variation of it is required for the type of diagram being drawn.
Tips and tricks about lines
You can use the Line tool to click and drag a new straight line on your Visio page. If you then click elsewhere on the page, you are starting a new line. Each of these lines is perfectly straight and is known as a Line or 1-dimensional (1D) shape, i.e. it does not have a height.
However, if you were to click and drag from the end point of the line, then you are creating a new segment in the same shape, and the shape changes from having Line (1-dimensional) behavior to having Box (2-dimensional) behavior. You can continue to draw segments, and if you end up exactly back at the start vertex, then the shape can even be filled with a color.
However, you usually miss getting back to the start point exactly, but this can be rectified by opening the ShapeSheet and ensuring that the formulas for the final row in the Geometry section point back to the first row. Additionally, the NoFill formula needs to be set to FALSE for any fill colors or patterns to be assigned. Notice how the ShapeSheet 1-D Endpoints section disappears when the shape becomes 2-dimensional!
A line can actually be used to connect two shapes together if the Glue to options on the Snap & Glue settings dialog are changed to allow gluing to shapes.
Another option is to turn a line into a dynamic connector by changing the formula in a single ShapeSheet cell! This magic is done by entering 2 (visLOFlagsRoutable) into the ObjType cell in the Miscellaneous section. This will automatically update the NoAlignBox formula to TRUE, insert the Text Transform section, and add extra connector options to the right-mouse menu. This line now behaves like a dynamic connector shape, and can be used to connect two shapes together without enabling gluing to shape geometry.
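As a sketch of the result, the relevant cells in the line’s ShapeSheet end up looking like this (cell names as they appear in the Miscellaneous section; only the first value is entered by hand):

```
ObjType    = 2       (entered manually; visLOFlagsRoutable)
NoAlignBox = TRUE    (updated automatically by Visio)
```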
So, I hope you can see that there is a difference between a line and a connector, and along the way, seen some of the ways that smartness is added to shapes in Visio.
Shawn's Rails 3 Template
Invoke the creation of a new Rails application in the command line as normal, but add the -m flag followed by the path to the template file located in this project.
Internet Explorer Support
There is also meta information set up in application.html.haml to set IE8 to edge compatibility and also to check for Google Chrome Frame, if it exists.
- _setup.sass (START HERE!)
- application.sass (Where all @imports are linked.)
- /lib (Default libraries. Basically, don't touch these!)
- /styles (Place your project-specific Sass in these files.)
Default Variables and Mixins in Sass
The following variables (prefixed with $) and mixins (prefixed with +) have been included in the project's Sass:
Creates rounded corners that work in modern browsers.
If you wish to target fewer than four corners, append the position to the mixin like so:
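An illustrative sketch in the indented Sass syntax; the corner-specific mixin name here is an assumption based on the convention described, so check the files in lib/ for the actual names:

```sass
// Illustrative only: the single-corner mixin name is assumed
// from the naming convention described above
.card
  +border-radius(4px)

.tab
  +border-radius-top-left(4px)
```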
||Creates a drop shadow that works in modern browsers.|
||Sets the number of CSS3-style columns.|
||Sets the size of the gaps between CSS3-style columns.|
||Sets both column -count and -gap in one mixin.|
||Sets the opacity of an entire element.|
||Create a CSS3 transformation.|
||Create a CSS3 transition.|
Font Stack Variables
|$geneva||geneva, tahoma, "dejavu sans condensed", sans-serif|
|$helvetica||"helvetica neue", helvetica, arial, freesans, "liberation sans", "numbus sans l", sans-serif|
|$lucida||"lucida grande", "lucida sans unicode", lucida sans, lucida, sans-serif|
|$verdana||verdana, "bitstream vera sans", "dejavu sans", "liberation sans", geneva, sans-serif|
|$georgia||georgia, "bitstream charter", "century schoolbook l", "liberation serif", times, serif|
|$palatino||palatino, "palatino linotype", palladio, "urw palladio l", "book antiqua", "liberation serif", times, serif|
|$times||times, "times new roman", "nimbus roman no9 l", freeserif, "liberation serif", serif|
|$courier||"courier new", courier, freemono, "nimbus mono l", "liberation mono", monospace|
|$monaco||monaco, "lucida console", "dejavu sans mono", "bitstream vera sans mono", "liberation mono", monospace|
Font Size Variables
There are a number of classes contained in lib/_extend.sass that can be used in conjunction with the Sass @extend function. Please see that file for what's included.
I have been using and writing Sass since its inception, thus you'll notice I use the original Sass syntax and not the newer SCSS implementation.
I am not a fan of the SCSS style and will never be converting this project to it. If you'd prefer the SCSS style of writing your Sass, it should be easy enough to fork this project and convert the formatting styles. Check the SASS Documentation for more.
I also prefer prefixing the : to the start of the attribute selector, as opposed to the standard CSS/SCSS syntax where the colon is a suffix. This is just me being set in my ways and, in all honesty, doesn't affect the end-user functionality of the project if you choose to do otherwise.
If you have questions or concerns, feel free to give me a shout at: firstname.lastname@example.org
ConTeXt for non-technical person
A dear person to me has started her own small publishing company (psychology related books -- completely non-technical) and she uses Microsoft Word. It's a one woman operation and she often asks me for help with editing. I have the idea to get her to try to use LaTeX or ConTeXt rather than Word. However, she is not a technical person. Since I spend so much time helping her anyway, I was considering asking her to use LyX for editing and then I would make necessary changes in LaTeX to get the formatting that she wants. Hopefully I could eventually teach her how to do my end of that work, but that might be a pipe dream. However, I've read that ConTeXt might be a good option too. Perhaps that would be easier for a non-technical person to learn than LaTeX or just better for non-technical books (-- your thoughts appreciated).
Is there an editor (available in Windows) for ConTeXt created with non-technical users in mind?
I've never understood/fancied LyX. I think everyone can understand how it (the raw code, without LyX) works; you still need to convince them, of course. Don't show her a document full of commands; show her the most minimalistic document, just a bunch of paragraphs (as usual, separated with a blank line) and one sentence in one paragraph italicized (with \emph{…} for instance). That way she absolutely understands how it works and how easy it is to use commands. I remember when I started, I was shown full mathematical papers and I understood nothing (I had never seen “code”).
I said \emph but I just wanted to show a command, I did not mean to teach her LaTeX (in fact ConTeXt may easily fit her needs).
Alternatively, she may be as well off to use Pandoc for content and organization, and you can do the LaTeX conversion yourself (or set her up a Makefile or similar script to do it).
I think your friend should first think about her wishes and expectations for the development of her business. Does she like the editing part, or does she hope that sometime in the future she can outsource this job? Also, without more information about the type of documents she produces, it is quite difficult to recommend a suitable tool.
The book is a typical book. It has front matter, including a table of contents, then chapters with the occasional graphic, then end matter, including endnotes, an index and a bibliography. I've also been considering whether InDesign might be a better option than any TeX, as it might be easier for a layperson to learn, but it's hard to frame that question on a TeX Q&A site. She wants to learn as much as possible, but frankly, she only just got used to using Word, and yet I'm trying to dissuade her from Word. It's tougher for some to learn new things on the computer than for others.
If her skills lie elsewhere, she should outsource the computer and publishing part and concentrate on the content. There are already too many bad-looking books on the market. Then she can use whatever tool she likes for the writing and will only have to learn a bit of markup for headers, cites etc.
If she is not the technical person, then I would really suggest LaTeX over ConTeXt: 1) the community behind LaTeX is much bigger; 2) the documentation of LaTeX is much more complete.
Could she use XML? ConTeXt can typeset XML files, so she could use something like XMLmind or oXygen to author, then you could work with her to develop a stylesheet in ConTeXt. You'd gain validation facilities from XML, but still get the control of ConTeXt.
@NobbZ: With those exact same arguments, she should stick to Word, right?
My set of recommendations is this:
Teach your friend to write Markdown.
She can do so even in MS Word: there is a plugin called Writage which makes it easier for people used to Word.
There are also other Markdown editors available (some implemented in JavaScript and running directly in browser windows, providing a live-rendered HTML preview, like you see on this very website!)
There are only about a dozen Markdown rules, and they already cover 90% of typical formatting needs. I've successfully trained (non-technical!) people in the past to use Markdown within 30 minutes, and they were off and running. (Well, from time to time they called back and asked "How do I do XYZ? How do I do ABC?"...)
Use Pandoc to convert the Markdown to LaTeX or ConTeXt and finally, PDF (or even HTML, EPUB/EPUB3, OpenDocument, DOCX, MediaWiki, DokuWiki...).
Write a Makefile (or a Batch file) for her so she can fast preview intermediate results herself. You'll have to experiment which output format fits best for her needs, and which exact set of Pandoc command line parameters to apply.
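A minimal sketch of such a Makefile, assuming a reasonably recent Pandoc (2.x, which takes --pdf-engine) with ConTeXt available as its PDF engine; the file names are placeholders:

```makefile
# Convert the Markdown manuscript to a PDF via ConTeXt
SRC = manuscript.md
PDF = manuscript.pdf

$(PDF): $(SRC)
	pandoc $(SRC) -o $(PDF) --pdf-engine=context

.PHONY: preview clean
preview: $(PDF)
	xdg-open $(PDF)

clean:
	rm -f $(PDF)
```

Swap --pdf-engine=context for xelatex or pdflatex if you go the LaTeX route instead; she then only needs to run make preview to rebuild and open the result.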
Take the ConTeXt or LaTeX output from her and do your thing...
This set of recommendations is rooted in my own personal experience: as a non-TeXnical person (who knew about (La)TeX before, but never wanted to start climbing the steep slope of learning it!) I'm now picking up one or another bit of LaTeX trickery too (while still authoring most of my texts in Markdown). Without Pandoc+Markdown this would never have happened...
Missing funding deadline for serious reasons: what to do?
A friend of mine missed a deadline for applying for funding for a PhD position.
Funding applications are handled through a different channel from programme applications (I believe by some Research Council committee; I do not know if it is internal or external to the University), and a separate (but almost identical) application was required.
My friend completed the programme application (CV, statements, referees etc.), but did not complete the funding application. The main reason is that a close family member was diagnosed with cancer in the week before the deadline; everything got a bit messed up and he/she missed the deadline.
Is it acceptable to do one of the following and which is the best course of action:
1) Submit the application past the deadline, period. The hope is that if the PhD position is awarded, the missed funding deadline might be overlooked.
2) Submit past the deadline with a motivation letter explaining the issues (possibly, attaching a medical certificate or offer to provide one if required).
3) Suck it up and eventually discuss funding options once the position is awarded.
Specifically, is there any downside of running with 1) or 2) (e.g. it might bar 3) or look bad or look just like an excuse)?
Context is UK middle to top ranking econ departments.
The generally correct answer is to contact the person in charge. They will be able to say what options remain.
If this is a national funding agency, chances are that you're out of luck. For example, our university has missed out on submitting proposals because a backhoe dug through the power cable of the building in which the people who upload grants to NSF sit, at 4:50pm when the deadline is at 5pm. The NSF says that it's the applicant's responsibility to ensure that a proposal is submitted in time. Similar things have happened for universities that got snowed in the day of submission.
On the other hand, if this is a smaller organization, or an on-campus office, that funds these positions, you may have better luck. In any case, immediate action is required. Once they have allocated the money to people with complete applications, the ship has left the port.
Of your options, I can think of no reasons not to try option 2. The worst thing that happens is that they say no. Option 1 is probably not going to work unless the deadline was missed by less than a day.
What your friend should do/should have done is emailed the committee as soon as they reasonably could regarding their situation. I would imagine that informing the committee of their situation and asking for a small extension has the best chance of going over well. The longer they wait to turn in their application or contact the committee, the less likely they are to be accepted.
|
STACK_EXCHANGE
|
Re: Which spell checkers to include by default?
On Thu, Dec 27, 2007 at 02:54:34AM +0100, Luca Capello wrote:
> Hi all!
> On Tue, 25 Dec 2007 12:17:52 +0100, Petter Reinholdtsen wrote:
> > [Manoj Srivastava]
> >> Are these packages a drop in replacement for ispell?
> > None of the spell checkers are drop in replacements for the others.
> > Each program need to have support for ispell, aspell, myspell and/or
> > hunspell. This is why I want us to try to get as many packages as
> > possible to switch to hunspell, to make it possible to drop ispell
> > completely.
> It seems that I cannot find a comparison of the different spell
> checkers. Please, could you enlighten me on why hunspell should be a
> better default one?
I do not have much experience with hunspell, so take everything I write
about hunspell with extreme care.
Regarding other spell-checkers, the only advantages ispell currently has
over the others are probably a lower memory use and an easier support for
things like TeX encoding and shorthanded TeX encoding (using 'a instead
of \'a). Disadvantages are many.
Regarding aspell, its main advantage (besides the suggestion algorithms
and soundslike support) is the support for filters, which should make
spell-checking of special text files easier (in etch, where I write now:
context, email, sgml, texinfo, debctrl, nroff and tex).
hunspell advantages are mostly:
* On the fly creation of hash tables from plain text dict files while
aspell uses a pre-built binary file for loading efficiency. This is
not a big problem for Debian users, since most aspell dicts build the
binary hash tables from postinst and the dict itself is arch all. This
can however be a hunspell efficiency disadvantage for (hunspell still
unsupported here) emacs ispell.el, when you are switching between
different buffers in different languages and require full rebuilding
* Handling of composed synthetic and agglutinative languages. One side
note here, Kevin Atkinson (aspell upstream) recently reported in
aspell-user list that some of the hunspell code was merged into aspell
CVS. Not sure if all hunspell functionalities will be available, but
at least some of hunspell features will be present in next aspell
* Portability? Not sure if this is still an issue, but older myspell
seemed to be a bit more portable than aspell at that time.
... Write your additions ...
So, I am not sure which spellchecker should be the default one, aspell
or hunspell; I just would have liked all the new code to be written
against the same program.
|
OPCFW_CODE
|
Centos java custom service
I have a script.sh that sets some environment variables and starts a Java server.
#!/bin/bash
export JAVA_HOME="/opt/java"
export ....
nohup $JAVA_HOME/bin/java "$MEMORY_JAVA_OPS" -classpath "$MY_CLASSPATH" $MAIN_CLASS &
I would like to transform this script (now is launched by /etc/rc.d/rc.local) in a service.
I tried many examples found online and on StackOverflow.
I created a myservice.service file using many templates found online... none of them work!
one example is:
[Unit]
Description=MyService Java Process Restart Upstart Script
After=auditd.service systemd-user-sessions.service time-sync.target
[Service]
User=root
TimeoutStartSec=0
Type=simple
KillMode=process
#export JAVA_HOME=/opt/java/jdk-9
#export PATH=$PATH:$JAVA_HOME/bin
WorkingDirectory=/tmp/myworkdir
ExecStart=/path/to/myscript.sh
[Install]
WantedBy=multi-user.target
With some configurations, the service starts but the status command says that it is dead (while it is actually running). With others it does not start. With none of them does the stop command actually stop it...
I tried Type=Simple, forking, oneshot... always some problem.
I would simply like that after boot, or when a user runs systemctl start myservice, the service starts, and if it crashes after some time it is started again. And if I run systemctl stop myservice it stops, without needing to kill the process.
Firstly it needs to be said that the concept of a "service" differs greatly between Linux/Unix and Windows environments. From your question it seems to me you are looking for a Unix solution.
In Unix you typically register some startup and stop script/command. The startup script just runs your Java application via java -jar app.jar. This application does the business logic and also opens a listening SHUTDOWN port.
The stop script/command just invokes another Java application (or the same one with different command-line parameters) which does nothing else than send a STOP command to the original application's SHUTDOWN port.
You can look in more detail for example on tomcat startup/stop scripts - they are doing exactly this.
For Windows it is better to use a wrapper like WinRun4J or something similar. Of course you can have one multiplatform Maven archetype for a "universal multiplatform" service like we do.
EDITED:
If you are still unsure how to configure it on Linux, read https://linuxconfig.org/how-to-create-systemd-service-unit-in-linux
ExecStart will be the startup java -jar app.jar and ExecStop will be the stopping command java -jar app-stopper.jar
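For the systemd route specifically, a sketch of a unit for the script in the question might look like the following. The class name and classpath are placeholders; the key point is running java in the foreground, without nohup or a trailing `&`, so systemd can track the main PID, report a correct status, and restart the process on failure (which is likely why the asker's unit showed "dead" while the backgrounded process kept running):

```ini
[Unit]
Description=MyService Java Process
After=network.target

[Service]
User=root
WorkingDirectory=/tmp/myworkdir
# Environment= replaces the export lines from the shell script;
# export statements are not valid inside a unit file.
Environment="JAVA_HOME=/opt/java"
# Foreground process: no nohup, no trailing "&"
ExecStart=/opt/java/bin/java -classpath /path/to/myclasspath com.example.Main
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With this saved as /etc/systemd/system/myservice.service, systemctl start/stop/status should behave as expected, and Restart=on-failure takes care of restarting after a crash.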
Hi, I can stop the daemon with a REST call via wget or curl. So you suggest I clone the Tomcat skeleton of a service?
Not exactly clone but be inspired.
|
STACK_EXCHANGE
|
We have an NT4 workstation which used to run the company's retired Nicelog call recorder software; for compliance reasons we need to keep the server running for at least another 5 years! grrr.
I am trying to migrate (P2V) the server using VMware Converter 3.0.3-89816. Initially there were problems completing the install; I got around this by re-installing NT4 SP6a, but now that the install completes (successfully, apparently) the strange thing is that there are no desktop or Start menu items created???
When trying to run the exe from the Program Files folder, the attached message is received.
On the internet, people suggest using an Acronis boot disc or VMware boot disc, but VMware have discontinued downloads for this version.
Does anyone have any ideas or alternative approaches to P2V NT4? The server does have hard disks configured in RAID 1??
Have you seen the post from this site: http:/
The author used a different version of the VMware converter (VMware-converter 3.0.1-44840)
Trying to find a site with that specific VMware Converter version. Difficult to trust sites outside of VMware without MD5 checksums.
Will post back and let you know the outcome!
any alternative suggestions welcome!
There's a compliance reason to use a discontinued OS with no security patches?
John-The NT4 servers run our NICE Call recording system from a legacy PBX no longer in use. As a Financial Services organisation we are required to keep this system for 6 years for our compliance team to retrieve calls for complaint, investigations etc.
Just to update on my progress, I took a ghost image and restored to a new Guest and followed the ultimate P2V guide (http:/
The only problem I'm facing now is that I can't get the VMware AMD PCNet driver to start as a service; I have also tried the VMware Virtual Ethernet Adapter, which won't start either.
We managed to P2V an NT4 server successfully before, but can't see why we are having issues in this instance.
When installing VMware Tools using the complete option, everything goes fine except for an error stating that the VMXNet ethernet driver wasn't installed and needs to be installed manually, which we have tried to do from 'Network'. The driver appears to install okay until reboot, at which point it fails to start; the AMD PCNET driver has the same result.
Tried editing the .vmx file for the VM to insert ethernet0.virtualDev = "vmxnet" to no avail.
Created a fresh NT4 workstation VM and added the vmxnet NIC with no issues after installing SP6.
Suspect some configuration or the NICE call logger software may be the culprit here.
The quest continues....
|
OPCFW_CODE
|
HLOOKUP evaluates to an out of bounds range error in Google Sheets
I'm trying HLOOKUP with the IMPORTRANGE formula as I have to look up data from another Google Sheet.
I'm entering the below formula.
=HLOOKUP(C1,IMPORTRANGE("1wWguGb6O0GyX7ACxzoFCN8N73zV0pUeoj51R_zFNPfE","Project wise Resources!$E$1:$AZ$20"),2,0)
I'm getting the error:
HLOOKUP evaluates to an out of bounds range.
I have entered the correct range but unable to understand what the issue is.
This is a poorly worded question without data. https://stackoverflow.com/help/how-to-ask
you can't do that on the spot
first paste this formula into some cell and allow access
=IMPORTRANGE("1wWguGb6O0GyX7ACxzoFCN8N73zV0pUeoj51R_zFNPfE",
"'Project wise Resources'!E1:AZ20")
then use your formula:
=HLOOKUP(C1, IMPORTRANGE("1wWguGb6O0GyX7ACxzoFCN8N73zV0pUeoj51R_zFNPfE",
"'Project wise Resources'!E1:AZ20"), 2, 0)
and also make sure that sheet Project wise Resources has a column AZ
Thank you so much. I guess it was just the allow access thing that was required. It worked now.
I had earlier created allow access for test data but not my actual data in the formula.
Try changing last HLOOKUP option to TRUE/FALSE instead of 0
HLOOKUP: https://support.google.com/docs/answer/3093375?hl=en
Horizontal lookup. Searches across the first row of a range for a key
and returns the value of a specified cell in the column found.
VLOOKUP: https://support.google.com/docs/answer/3093318?hl=en&ref_topic=3105472
Vertical lookup. Searches down the first column of a range for a key
and returns the value of a specified cell in the row found.
IMPORTRANGE:
https://support.google.com/docs/answer/3093340?hl=en
You may need VLOOKUP depending on the way your lookup is organised - by rows or by columns.
If you are looking up the second row or second column only, then there is no need to have a range that extends beyond 2 rows or 2 columns.
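If the data does turn out to be organised by columns rather than rows, the equivalent VLOOKUP call would look like this (same spreadsheet key and range as above; A1 stands in for whatever cell holds the lookup key):

```
=VLOOKUP(A1, IMPORTRANGE("1wWguGb6O0GyX7ACxzoFCN8N73zV0pUeoj51R_zFNPfE",
         "'Project wise Resources'!E1:AZ20"), 2, FALSE)
```

Here VLOOKUP searches down column E for the key and returns the value from the second column of the range (column F) on the matching row.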
|
STACK_EXCHANGE
|
What it is:
A DNA-based method for recognizing species
Imagine getting bitten by a spider, but being unable to tell what kind of spider it was (venomous or not?!). To help organize our understanding of the diversity of species in the living world, Carl Linnaeus invented a system for naming and classifying organisms in 1735. We still use this system today, and call it taxonomy. In Linnaean taxonomy, all the different kinds of living organisms can be organized practically into groupings with shared characteristics, where every species can be given a unique name. Biologists often identify species based on the way they look (anatomy), behavior, or evolutionary history, and assign them to a taxonomic grouping. In other words, if you had a close, detailed look at the spider that bit you, then you might be able to precisely determine what species it was!
We can also identify species based on their DNA. Genetic barcoding is a method for identifying species by looking at very short genetic sequences and comparing them to known species’ sequences, or DNA barcodes. Genetic barcodes are highly standardized and stored in databases, allowing us to identify samples much faster than classic taxonomy. We can now obtain the genetic barcode of a species from a very small biological sample, compare it to the barcodes of thousands of known samples stored in databases, and quickly find matches. This method was first proposed in 2003 and is now on its way to becoming a standard identification protocol for our biodiversity. Genetic barcoding is similar to the barcoding system of a grocery store, where each UPC code is associated with a certain product so they can be easily identified when we check out.
How it works:
Comparing short genetic markers
We can extract DNA from an unknown biological sample, then amplify (copy millions of times over) its short genetic barcode using PCR. The barcode is then sequenced, so that its unique string of As, Cs, Ts, and Gs can be compared to a large database of known barcodes. A database of barcode sequences is hosted by the Barcode of Life Data Systems (BOLD), containing more than four million samples from about 400,000 species!
In almost all animals, species barcoding is done by looking at the sequence of the cytochrome c oxidase I gene (COI), which is a 658 base-pair region of mitochondrial DNA. Mitochondrial DNA (mtDNA) is useful for barcoding because it mutates more rapidly than the nuclear genome, resulting in more genetic variability across species over short evolutionary timescales. At the same time, mtDNA is inherited maternally, which results in less diversity within animals of the same species. COI is involved in the electron transport chain of cellular respiration, which is a very fundamental process of life, so it is present in all animals (and every eukaryote!). The COI barcoding sequence is short and can be amplified from many different species using the same set of PCR primers. For plants, on the other hand, the barcoding sequences are regions of chloroplast DNA known as rbcL (ribulose-1,5-bisphosphate carboxylase) and matK (maturase K). In fungi the internal transcribed spacer (ITS) is used for barcoding. These different regions are chosen because they vary significantly across species, but not within one species for that taxon.
How it is used:
Understanding ecosystems and monitoring climate change
Through DNA barcoding, biotechnology has helped fill a gap left by classic taxonomy. The environment is changing because of human influence and extinction is occurring at an extremely fast rate. In order to monitor and understand these changes, scientists must have a fast way to identify organisms. DNA barcoding provides a quick method of species identification that becomes more accurate the more it is used. The larger a database of DNA barcodes becomes, the more accurate species identification will be, because each sequence will be compared to more organisms.
The existence of large DNA barcode databases also provides a network for global collaboration. The Moorea biocode project aims to create a library of all non-microbial life on the island of Moorea in Tahiti. The resulting library would be publicly available to all scientists around the world and serve as an example of a complete ecosystem. DNA barcoding makes biodiversity information more accessible and promotes collaboration.
A universal database
The International Barcode of Life Project (iBOL) is a collaboration of 26 countries that aim to create an automated identification system of all living eukaryotes. Ultimately DNA barcoding could result in a library of genetic markers and associated physical identifiers for all species. This library would make it possible to have virtually exact identification of any species using just PCR and sequencing of a certain gene. This is an example of the growing role of bioinformatics and global collaboration in scientific efforts. As the link between science and big data becomes more established, so will the need for global collaboration, citizen science, and other partnerships across scientific communities. You can get involved too! Using DNA barcodes to identify living species around you is relatively easy using a PCR machine, barcoding primers, and public databases. Students can help identify and characterize the biodiversity that surrounds us, whether in the wild or in your city. DNA barcoding creates a common language for scientists to work together and understand biodiversity.
- “DNA Barcoding.” Wikipedia. http://en.wikipedia.org/wiki/DNA_barcoding
- “What is DNA Barcoding?” Barcode of Life. http://www.barcodeoflife.org/content/about/what-dna-barcoding
|
OPCFW_CODE
|
A Micro-view of Macro Malware
Dridex is a botnet with multiple features; it is best known for stealing people's credentials on finance-related websites. Despite the arrest of the gang behind the Dridex malware campaigns, the samples keep popping up on our customers' machines, and other research groups have noticed that as well.
Most of the Dridex attacks we see are triggered by malicious Microsoft Word or Excel documents delivered through spam campaigns. Microsoft Office allows its users to create macros, i.e. scripts in Visual Basic for Applications (VBA). This feature is used by many companies as a quick and easy way to automate certain parts of their business workflow. Naturally, malware writers take advantage of it. They can create a macro that downloads and executes malware on the victim's machine, trick a user into opening the document, and trick them into allowing the macro to run. This part of the attack is more in the field of social engineering, as attackers must convince a user to open a malicious email attachment and then convince them to allow macros (macros won't run by default).
Microsoft tried to raise awareness earlier this year, but little seems to have been done since then as the spam campaigns featuring macro malware have increased and evolved to become more effective.
About a year ago when macro malware became so prevalent that everybody started taking it seriously again, a typical piece of malicious VBA would look like this:
Detecting this type of code seems to be relatively easy; there are certain patterns that give away the malicious intent of that script.
As far as the functionality goes, the workflow of this script would be something like this:
- Create three hidden files:
- a batch file <random_name>.bat
- a VBS file <random_name>.vbs
- a PowerShell file <random_name>.ps1
- Execute them one after another:
- call exe 126.96.36.199 –n 2
- run <random_name>.vbs using exe
- run <random_name>.ps1 using PowerShell
- download from the malicious URL hardcoded into the macro to file <random_name_2>.exe
- run exe /c crsss2.exe
- Delete batch, VBS and PowerShell files
Aside from being easy to fingerprint, this macro depended on PowerShell, and not everybody had it installed.
From there it kept changing somewhat, gradually, and by Spring/Summer of 2015, it became much better obfuscated. Below is the example of the download function:
And here is its clean version:
So this time, extracting the malware artifacts or heuristics (at least statically) is not quite straightforward and code redundancy is pretty big. However, the obfuscation scheme itself can be used as a detection parameter. Names of the variables do not correlate to any Latin based language characteristics.
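As a sketch of how such an obfuscation scheme could itself serve as a detection parameter, consider flagging identifiers that lack the vowel/consonant structure of Latin-based languages. The thresholds below are illustrative assumptions, not values tuned on real malware corpora:

```python
import re

VOWELS = set("aeiou")

def looks_gibberish(name, max_consonant_run=4, min_vowel_ratio=0.2):
    """Flag an identifier unlikely to come from a Latin-based language.

    Two cheap signals: a very long consonant run, or a low overall
    vowel ratio. Both thresholds are illustrative placeholders.
    """
    letters = re.sub(r"[^a-z]", "", name.lower())
    if not letters:
        return False
    vowel_ratio = sum(c in VOWELS for c in letters) / len(letters)
    # Longest stretch of consecutive consonants
    longest_run = max((len(run) for run in re.split(r"[aeiou]+", letters)), default=0)
    return vowel_ratio < min_vowel_ratio or longest_run > max_consonant_run

def suspicious_fraction(identifiers):
    """Fraction of a macro's identifiers that look machine-generated."""
    if not identifiers:
        return 0.0
    flagged = [n for n in identifiers if looks_gibberish(n)]
    return len(flagged) / len(identifiers)
```

For example, `looks_gibberish("xkqzrtvb")` is flagged while `looks_gibberish("downloadFile")` is not; a macro where most variable names trip this check is a candidate for closer inspection.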
Functionally, this script is similar to the previous one, except the hardcoded URL points at an encrypted list of malware URLs instead of directly at the malware executable. This allows attackers to change the malware URLs (as they tend to die faster than those supplying download targets) without re-packing the macro and re-setting the spam campaign.
One of the latest cases we’ve seen is much more straightforward. It downloads and executes an instance of Dridex from a hardcoded URL. The malicious macro has a lot of redundant code and tries to disguise itself as a business automation script:
The figure above shows references to "Vouchers" (@fullVoucher) and "Balancing Indices" (SelectDailyBalancingIdx), all in order to appear to be a legitimate program. However, upon careful examination, suspicious elements can be found, such as the obfuscated URL of the malicious server:
After decoding, the number array in the image resolves to the following URL:
The script then downloads this executable, saves it as %temp%\\damedig.exe and launches it. In this example, less than 10 percent of the code had to do with malicious functions.
This scheme seems to be particularly popular in Excel files. This one, for example, contains lots of junk code as well; furthermore, we were even able to trace back the source of that junk code. It was a game engine written in Visual Basic.
This is interesting because the “junk” was taken from a legitimate code repository. “Obfuscating” malcode this way requires little effort, but apparently pays off, as it becomes harder to notice in a bloated up macro.
So the contents of the macros mimic legitimate data, but what about the names of the files themselves? Let's look at the free dynamic analysis platform Malwr, where people regularly submit all kinds of files to run them against the Cuckoo sandbox. If we search for "type:xls" and select the entries with one or more VirusTotal alerts, here's what we'll find:
The names are in fact easy to confuse with the names of legitimate documents routinely flowing through any organization’s network.
The corporate sector might be a preferred target for macro malware spammers, and Dridex in particular. With enormous volumes of emails and documents, a corporate environment seems more prone to spam campaigns like this.
As exploitation and drive-by-download attacks get harder to pull off, cybercriminals seem to be looking back to classic tried-and-true techniques, such as MS Office macros.
|
OPCFW_CODE
|
Notarizing App Distributed Out of App Store
I don't have an Apple Developer ID, and I have an installer and an app, neither signed nor notarized. Apple now blocks the app in Catalina and even the Open option won't work. Rather, I seek information on how to authenticate these.
What is the process to register and notarize for macOS?
Update:
I'm asking from the developer's perspective .. not as a user.
Do you want to distribute the installer and application to the general public or within a controlled environment like a school or organisation?
@GrahamMiln General public. I currently do not use the App Store.
Limited Options
As a developer wanting to distribute unsigned and unnotarized applications, your options are increasingly limited on macOS. You would not be alone disliking this trend.
If your users are technical, the advice from Apple can be referenced in your documentation. This will help a few potential users but will dramatically limit your audience.
Apple's Advice to Users
Apple provide Safely open apps on your Mac for users wishing to open unsigned and unnotarized applications on the latest versions of macOS.
Alternative Certificate Authority
In theory, you could code sign your application using another certificate authority that has an appropriate root certificate pre-installed in macOS. This assumes you wish to avoid dealing with Apple, rather than wishing to avoid code signing.
Is an Apple Developer Account Mandatory?
I suspect an Apple Developer Account is not mandatory, but not having an account will make some tasks difficult.
Given you have a code signing certificate from Comodo (Sectigo), you could try using codesign with it.
The manual for codesign does not obviously state a requirement for an Apple issued certificate. A code signing certificate with a trusted root certificate in macOS should be useable.
You will need to create a new Keychain containing the certificate and private key. Then pass the absolute path of the keychain file to codesign via --keychain.
One possible problem you may run into is macOS's spctl, aka Gatekeeper. spctl's rules may state the signing certificate must have a root Apple certificate. Investigate the spctl tool on macOS.
I recommend trying codesign with your certificate and then asking more questions as specific problems arise.
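A sketch of that experiment might look like the following. The certificate file, passwords, and signing identity are placeholders; these are macOS-only commands, and as noted above, spctl may still reject a certificate chain that does not lead to an Apple root:

```
# Create a dedicated keychain and import the Comodo/Sectigo certificate
security create-keychain -p tempPass codesign-test.keychain
security import mycert.p12 -k codesign-test.keychain -P p12Password -T /usr/bin/codesign

# Sign the app with that keychain, then verify the signature
codesign --sign "My Comodo Identity" --keychain codesign-test.keychain MyApp.app
codesign --verify --verbose MyApp.app

# Ask Gatekeeper for its verdict
spctl --assess --type execute --verbose MyApp.app
```

If codesign succeeds but the spctl assessment fails, that would confirm Gatekeeper's rules require an Apple-rooted signing certificate.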
Thanks. I already have a code signing certificate from Comodo (Sectigo) which is used to sign my Windows app. Can this be used on macOS as well? Is signing up for the Apple Developer Program mandatory?
I have expanded the answer. Good luck! As an aside, Indie might be useful to you.
I have tried signing my existing application using https://stackoverflow.com/questions/13204407/how-to-codesign-an-existing-mac-os-x-app-file-for-gatekeeper but I keep getting this exception https://developer.apple.com/library/archive/qa/qa1940/_index.html . Can you please advise?
@techno please can you ask a new question, as this will attract answers. Comments tend to be less visible. For example, I did not get a notification from your follow up comment.
|
STACK_EXCHANGE
|
Set up Discord AutoMod
Discord has introduced a new moderation tool called AutoMod that can keep unsafe messages from showing up on a server. The tool works by having a server's moderators select which words they want to block.
AutoMod can also automatically ban people who break the rules. In this article, we will show you how to get AutoMod and how to set it up.
You need to be a moderator, an administrator, or the creator of a Discord server to use AutoMod. This feature is also limited to the desktop version of the application; it's not available on mobile.
In the drop-down menu, select Server Settings. You will need to enable the Community settings on Discord.
Select Enable Community on the left-hand side. A window will appear that walks you through the steps of setting up your server. Check the boxes next to 'Verified email required' and 'Scan media content from all members'.
How to Set up Discord AutoMod
To make any Discord bot you must first create an application at Discord Developers. To do so, head over to Discord Developers and create an account if you don't already have one. Click on the New Application button to create a new application.
This will add a bot to your Discord application. From this new page you can set your bot's profile picture and display name. Your display name is the name people will see in Discord and when inviting your bot to their Discord server. Change the Username field to whatever you wish to name your bot. Click on Bot Icon to change your bot's display picture. This is the image that will be shown in Discord and when others invite your bot.
If you wish to set your bot to private, you can do so by unchecking the Public Bot option. This will mean, however, that only you will be able to invite your bot to any server. Neglecting these settings could mean that your bot cannot be invited to servers, or that some crucial features of your bot cannot communicate correctly with the Discord API.
How do you moderate a discord server?
You can also use Discord's built-in settings to save some time. Go back to your Server Settings and scroll down to Moderation. There are lots of helpful moderation options here. Verification Level: choose whether users must be fully verified to join your server.
Head to your account and select your server. Click the Modules tab or select AutoMod from the dropdown. Ensure the AutoMod module is enabled, then click the Settings button under the AutoMod module.
Is raiding Discord servers illegal?
Raiding a Discord server can get you banned from Discord. Since raids are a form of spam and a disruption of others' experience on the platform, Discord permanently bans raiders and sometimes issues IP bans over this behavior.
Sending spam is against our Terms of Service. We may take action against any account, bot, or server initiating any of these or similar tactics. If you believe spam originated from Discord,
|
OPCFW_CODE
|
Dieser Blogbeitrag ist nur in englischer Sprache verfügbar. | This blog post is only available in English.
Migration is an ongoing IT topic and there are many good reasons for that: switching platforms or architectures, e.g. when changing an operating system or database, virtualization, and moving to the cloud are some of them. In addition, the integration or merger of systems and applications can make content migration a relevant topic, as cost efficiency is a big deal in these cases.
Are you currently facing a migration project? Especially any Documentum migration task? Have your colleagues given you any serious advice to handle Documentum object IDs very carefully? A common example for that: Published links in intranet or e-mails referencing important documents through object IDs. Have you been asked to keep these documents’ object IDs and all their links working?
In this blog post I want to share some thoughts, experiences and solutions on migration regarding Documentum object ID concerns with you.
SYSTEM ID VS. BUSINESS ID
To start off, it is very important to clearly distinguish between business and system requirements.
It isn’t always a wise choice to depend on a software vendor’s special system ID format even if it is published/used in intranet links. A much better approach is to use a unique document number or a meaningful registry key (like well known barcode or ISBN book number) as a document’s business identifier.
System ID usage should be limited to fast technical data access, for example in background system integration.
To get rid of all limitations imposed by special system ID formats once and for all, you may need an additional mapping "system ID <=> business ID". That way you will benefit from vendor independence in all unavoidable future migrations of your valuable documents.
In the following paragraphs, I will focus on more aspects of Documentum object ID during migrations and how solutions of the dilemma may look like.
MORE WELL-KNOWN OBJECT ID ISSUES
Having long-time experience as a consultant and developer, I have gathered knowledge from various customer requirements and performed many migration projects. That enables me to classify these main aspects:
References through object IDs:
• Intranet URLs and favorites
• Interfaces to/from other systems
• Links between documents and to other data
• Protocols and audit trails of business critical actions
• Controlled/managed systems
• Running workflows
• Obsolete data from migrated/substituted former systems
My following discussion will show you my best practices from former migration projects (mostly carried out with migration-center).
BEST PRACTICES FOR ID BASED LINKS AND INTERFACES
Knowing the Documentum DRLs/links (DRL = Document Resource Locator), it’s relatively easy to use these links in Webtop: https://myserver/webtop/drl?objectId=1234567890123456
Updating all these links in intranet web pages, old e-mails, browser favorites, documents, etc. on the other hand is not as easy to handle.
An appropriate solution is a DRL component customization. Just search submitted IDs not only in the attribute “r_object_id” but also in an additional migration attribute “old_object_id(s)”.
You may also think of Documentum “Relations”. These object ID based relationships are already handled by the fme product migration-center. The data can be migrated with an object ID change into a new Documentum target system.
MAPPING SERVICE, DOCUMENT BUSINESS ID
What about creating a mapping service? That service should forward all requests for the old address to the new application or storage location. Such a new mapping service should introduce a “Document Business Identifier”. While the mapping service ensures downward compatibility for old object ID links, all future links should use that new business identifier.
You can find more details on suitable solutions for DRL links in a whitepaper available through our website:
Quite similar and probably the next issue on your checklist will be interfaces to other systems. Here, the question should be addressed again: “Should business requests for documents use a vendor related system ID?”
Shouldn’t you prefer future usage of a document business identifier and the already mentioned mapping service? You would gain additional benefits of re-use and reliability in your application/interface architectures. That will help you in all further migrations, which are sure to take place in the future.
I have to admit: I really like business identifiers. My office documents’ footers never show a system ID or local path/filename but always our unique document number! That is my best practice since 1999 when I joined fme for working in document management projects.
But what about interface products (closed source software) with hard coded usage (not customizable/configurable) of the Documentum object ID? The important Documentum – SAP interface might be affected.
That concern can also be dismissed for the Documentum object ID. An SAP ID is submitted to Documentum and is then used for all document requests from SAP. I would call that a "small mapping service within SAP" that keeps you independent of the document management vendor's IDs.
I hope this addresses your potential concerns about object IDs in references and interfaces.
HANDLING IDS IN PROTOCOL ENTRIES
Let’s move on to the next block of requirements: Protocol entries needed for later review.
Customers like to use the Documentum audit trail for various protocol purposes, such as:
- Summary or history on document operations
- Protocol entries of business critical actions like:
- Record user’s READ access on secret data to trace knowledge leaks
- Document release or publication
- Modification of groups or ACLs
Regarding audit trail usage, I would like you to keep the following in mind: The Documentum Administration Guide recommends the usage of the AuditManagement batch job. That job helps to limit audit trail table size because mass data may have negative effects on system performance. Limitation is done through deletion of old entries (old = configurable cutoff_days interval).
That means: Long-term storage/archiving of audit trail entries should rather be done through “move entries to another protocol table or export to some external media” using RDBMS tools. It should be a minor task to add/join a document business identifier column to that moved/exported data.
In the customizable Documentum Webtop history view you can easily modify the query to search moved data in an additional protocol table through the business identifier.
My additional project experience is as follows: protocol entries on business critical actions do not have to be accessible online at all times. On demand (e.g. when tracing some leak or incident) you will usually have to run special programs for investigation, and you might decide to bring the data online again in special environments. Adding a business identifier mapping (to merge old and new data) would just be a minor task in that scenario. All you have to take care of is not to lose the relation "former object ID(s) – business identifier" during all your migrations.
Now you might wonder: “What about ‘data integrity’ while keeping/changing Documentum object ID during migration?” I will cover that in the following paragraph.
WHAT ABOUT DATA INTEGRITY?
Are there any requirements to drop all obsolete data from former migrated systems (like former system IDs in any added attributes/columns)?
My main argument on that topic has stayed the same through 25 years of software consulting and development:
Storage is cheap, CPU time is expensive. And I’d like to add: Project time is expensive, too.
Given that a Documentum SysObject already has approximately 90 attributes, there should really be no long debate about one additional 16-character attribute. Just discussing the pros and cons, as well as all the additional migration tasks, programs and workarounds required to keep object IDs against all odds, will quickly exceed the storage cost of that new database column.
Finally, long-running Documentum workflows (active workflows whose state indicates the document state) have to be checked for data integrity. The object model shows references by object IDs from workflows through tasks and packages to the submitted document. A document object ID change due to migration will cause problems here. Additionally, active workflows are hard to migrate: how do you map a workflow template or model in a new target repository, and how do you map the "approval users" of active tasks?
The best solution is to use the migration as an occasion to review those long-running workflow templates! It is usually better to use a lifecycle to manage document states. Doing this, you will be able to use simpler workflows (so-called quick workflows) to switch the document's lifecycle state.
At migration day, there should be a very small number of active workflows. The important document state information of all other documents is persistent through their lifecycle state and managed independently from any object ID. By the way: This is also covered by fme migration-center features.
I learned this from a real project: Before migration day, there was a business advisory to complete or cancel all active workflows. Estimated efforts for later restarts for that small number of running workflows were so small that other workarounds were not worth a discussion.
SO WHAT IS THE CONCLUSION?
Well, that has been a lot of text discussing problems and details on Documentum object IDs in migration projects. I hope the majority of your issues have been addressed. One thing is for sure: You are neither the first nor the only project manager dealing with that object ID challenge! After having read my text, I hope you can agree that there are hardly any business requirements in favor of keeping object IDs unchanged. In fact, the main migration tasks should be about your business processes. They have to be represented again in the new target repository, preferably using business identifiers and not vendor defined system IDs anymore.
Notably, the often-mentioned mapping service will become an important element in your business application's environment. You can even benefit further from this service by providing system usage information. That will help you distinguish frequently used systems from obsolete applications (which may be classified for the next migration).
An additional attribute in the migrated data model “old_object_id(s)” makes sure that former references will be traceable for all upcoming use cases. Having introduced this attribute once, you will be well prepared for future migrations.
Have I been successful in diminishing your worries on Documentum object ID change during migration? Do you realize how you can benefit from switching from system IDs to business identifiers?
In case there are still any object ID challenges that I haven't mentioned here, I am more than happy to answer your questions. Feel free to contact fme about joining forces on tricky migration problems.
Stay tuned for further developments of fme migration-center. New features will introduce “In-Place Migration” which keeps all data with their object IDs in the repository. That will help to cover additional use cases.
|
OPCFW_CODE
|
Islandora 11.3.1 was released on January 10th, 2012. The following release notes cover both this minor release and the full 11.3.0 release (December 20th, 2011).
New and Changed Features
To test-drive a live demo of the new Islandora release, complete with the new features listed below, please visit the Islandora Sandbox (Username: admin | Password: islandora).
The Collection Manager is now a separate, optional module. However, in most cases you will want to enable this module in order to perform important functions, such as creating and managing collections.
The Batch Ingest tool is a separate, optional module. This tool, which was originally added in Islandora 11.2, allows you to upload a ZIP file containing a number of audio, basic image, large image, or PDF files. These files may optionally include associated XML metadata files - if you choose not to include such files, basic metadata records will be created using the filenames as titles. Each object created via batch ingest will have both DC and MODS Datastreams.
The book module has been expanded to include book and page management functions. You can access the book management options under the ‘Manage This Book’ tab. The following functions are available:
- Collection Membership: You can associate this book with additional collections. If the book is a member of more than one collection you can delete additional associations one at a time until the book belongs to only a single collection.
- View Metadata: This fieldset displays the available DC metadata for the book.
- Update Derived Datastreams: Here, you can refresh the OCR process for the book, reprocess the pages to create a new set of derived images, or regenerate the PDF for the entire book.
- Manage Current Datastreams: This fieldset allows you to view and manage the Datastreams associated with the book object.
- Permanently Delete This Book: Finally, you can delete the book and all associated pages.
You can access the page management functions by first clicking the new ‘Pages’ tab while viewing a book object; this will display thumbnails of each page. Click on an individual page to view it, then click the Manage This Page Object tab to bring up a list of page management functions:
- View Metadata: This fieldset displays the available metadata for the page (in most cases this will be brief).
- Update Derived Datastreams: Here, you can refresh the OCR process for this particular page and/or reprocess the page to create a new set of derived images.
- Manage Current Datastreams: This fieldset allows you to view and manage the Datastreams associated with this particular page.
- Add Additional Datastreams: You can manually add relevant Datastreams by using the functions in this fieldset.
- Permanently Delete This Page: Finally, you can delete the page itself, while leaving the rest of the book intact.
The default collections created by the Solution Packs now all use the ‘islandora’ namespace. If you are upgrading from the Islandora 11.2 release, this will create a new Book Collection with the PID ‘islandora:bookCollection’. Your old collections will not be affected in any way by the new collection.
Our Solr interface has some new features for developers.
We now have an optional sort field - the default Solr sort field is ‘score’; this is normally the best choice, but it can now be overridden by any stored, unique, untokenized field (except Date). If you wish to sort by date, use a copyfield in your schema to create a string.
Snippets may now be selected in the Solr admin interface. The default list view will now return snippets if a suitable field is selected. If you’d like to harvest snippets for a custom view you’ll find the snippets results within the ‘highlighting’ array of the response object.
Of interest to developers - we now trigger module_invoke_all from the IslandoraSolrQueryProcessor, allowing your configuration module to make any customization to your query before it is run against Solr's Lucene index. Params and filters can be added or removed at this stage.
Islandora Repository | Documentation
Batch Ingest | Documentation
Collection Manager | Documentation
Harvester | Documentation
Solr | Documentation
Audio Solution Pack | Documentation
Books Solution Pack | Documentation
Basic Image Solution Pack | Documentation
Large Image Solution Pack | Documentation
PDF Solution Pack | Documentation
XML Forms | Documentation
Content Model Forms | Documentation
Objective Forms | Documentation
PHP Lib | Documentation
Tabs | Documentation
Microservices | Documentation
This module is still being developed for future microservices implementations. It is not functional in its current state. Please see the GitHub documentation for more information.
Google Developers Group: For support, feedback, and bug reports.
Google Users Group: For user-related issues and information.
Islandora operates under a GNU license.
|
OPCFW_CODE
|
The JED Editor's Pick is a special category on extensions.joomla.org. Today a news item on joomla.org says that the JED team is trying to get the community involved. So go to this form and nominate your favorite extensions - I'm sure that compojoomComment will be in first place on your list, followed by hotspots :)
Believe it or not, it was only yesterday that I realized how amazing compojoomComment is :) I had a look over the comments on the Joomla Magazine website and it made me realize that the magazine was first published in July, and by the end of September they had 318 comments. Now that is not bad, but I was amazed to see that in the 9 days after we installed compojoomComment the website got 99 new comments :)
We've got a lot of e-mails asking how to vote for CompojoomComment and Hotspots on the joomla extension directory. Because it can be really confusing we wanted to make a small tutorial for this.
If you already have a registered account on the JED, you probably know how to do this already, in that case, just follow this link to review CompojoomComment or Hotspots directly.
Despite our best efforts, the software that we offer to you doesn't build itself :) So we are looking for smart people to help us improve our products and user experience. There is no doubt that we have one of the best Joomla products out there, but we want to go even further. The web is changing and we have to adapt to it and respond to our user demands.
Hotspots is a google map marker manager. A lot of people are still asking me: "What is this?". Well, if you are running a website for your city - you would most probably like to mark all the museums, restaurants, schools, etc on a map. That is what you can achieve with hotspots. :)
I'm sad to announce that our hacking competition is over. However I'm really happy to say that the latest compojoomComment 4.1.7 couldn't be cracked :)
I'm really proud to announce that Hotspots 1.0 beta1 was just uploaded to our download section. I see this release as an important milestone in the development of this component.
One of the most requested features just got implemented. You can find the plugin in the download area. Just install it as a normal community builder plugin.
I know that there are already people waiting for it - the new version of hotspots. We are really close to a new version, but we still need to fix a few things to make your user experience better!
Since the beginning of our small hacking competition, the http://hackme.compojoom.com page got around 200 comments, all trying to inject malicious code and eventually win some cash and one of the 5 salvusalerting subscriptions that we are offering. Unfortunately, 6 of the comments did what they intended - they managed to exploit several XSS holes and found an LFI vulnerability. Those problems were all found by Jeff Channell, who is going to get 200€ and 1 salvus subscription :).
|
OPCFW_CODE
|
Is there a better way to format this timestamp to ISO8601?
I have a timestamp that looks like this: 2015-11-12T20:45:24+0000. This is generated by someone else's script, which allegedly uses the UNIX date command (probably something along the lines of date -u +%Y-%m-%dT%H:%M:%S%z).
However, according to Java's DateTimeFormatter, the closest ISO 8601 format for this would be ISO_OFFSET_DATE_TIME, which looks like: 2015-11-12T20:45:24+00:00 (notice the extra colon at the end). If I pass in my version of the timestamp, the parser is unable to process it, but if I manually add in the colon then there are no issues.
My question is, is there an easier, more reliable/robust way to handle these timestamps? I'm getting timestamps which may or may not have that final colon delimiting the hours and minutes of the offset, and currently my code has this validation in it:
// We expect a colon at index length-3 (the colon delimits the hours:minutes of the timezone offset)
char colon = ':';
int expectedIndexOfColon = string.length() - 3;
// If that colon is not there, add it
if (string.lastIndexOf(colon) == expectedIndexOfColon) {
return string;
} else {
int substringIndex = string.length() - 2;
return string.substring(0, substringIndex) + colon + string.substring(substringIndex);
}
This looks hacky, and I was wondering if there was a more elegant way to handle these two different formats. I know about Joda-Time, but their parser also rejects the colon-less timestamp (from what I've tried). Additionally, Joda-Time recommends using Java's java.time for Java 8 anyways (which I am).
Create two formats which meet your needs, one with and one without the :, try both and use the one which doesn't fail
@MadProgrammer what do you mean by creating two formats?
You can use DateTimeFormatter.ofPattern to create your own patterns. If the text can sometimes have a : and sometimes not, you can use two formatters to check the text and use the one which passes
For example, I could use DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssZ") to parse 2015-11-12T20:45:24+0000 and DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssz") to parse 2015-11-12T20:45:24+00:00. Simply create both formatters, put them in an array or List, iterate over them, and attempt to parse the input text until either one is successful or all fail
As @MadProgrammer says - it's actually very common to have separate formatters for reading and writing. And not entirely uncommon to have multiple formatters for more robust parsing of input.
Thanks guys! That worked. @MadProgrammer if you put that into an answer I'll accept :D
If your code is expecting multiple different formats, you need to accommodate these differences.
A common approach is to put your expected formats into some kind of array or List and iterate over it, finding the format that doesn't throw an exception
For example:
DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssZ") is able to parse 2015-11-12T20:45:24+0000
DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssz") is able to parse 2015-11-12T20:45:24+00:00
You could use something like...
public static LocalDateTime parse(String text, List<DateTimeFormatter> formats) {
    for (DateTimeFormatter formatter : formats) {
        try {
            // Return as soon as one formatter succeeds
            return LocalDateTime.parse(text, formatter);
        } catch (DateTimeParseException e) {
            // This formatter didn't match; try the next one
        }
    }
    // No formatter matched
    return null;
}
Then you might use something like...
List<DateTimeFormatter> formats = new ArrayList<>(2);
formats.add(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssZ"));
formats.add(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ssz"));
System.out.println(parse("2015-11-12T20:45:24+0000", formats));
System.out.println(parse("2015-11-12T20:45:24+00:00", formats));
Which outputs...
2015-11-12T20:45:24
2015-11-12T20:45:24
Do you know why it's z and Z instead of x and X? I'm looking at the patterns and x and X seem more intuitive...but they of course don't work when I use them
Also thanks for your answer...I think this is like the 3rd question you've helped me with over the last few years xD
To me, it's not very clear why Z or z would make a difference (or why X or x would fail); from what I can tell, x or Z should have worked for both formats, but it didn't in my testing :P
Fair enough...I might ask that as a separate question
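As a footnote to the accepted approach (this is an alternative sketch, not something mentioned in the thread): java.time also supports optional pattern sections, so a single formatter can accept both offset styles instead of iterating over a list of formatters:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;

public class FlexibleOffsetParser {
    // Optional sections: [XXX] matches an offset like "+00:00",
    // [XX] matches an offset like "+0000". Whichever variant is
    // present in the input is the one that binds.
    private static final DateTimeFormatter FLEXIBLE =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss[XXX][XX]");

    public static OffsetDateTime parse(String text) {
        return OffsetDateTime.parse(text, FLEXIBLE);
    }

    public static void main(String[] args) {
        // Both inputs parse to the same instant
        System.out.println(parse("2015-11-12T20:45:24+0000"));
        System.out.println(parse("2015-11-12T20:45:24+00:00"));
    }
}
```

Note the use of OffsetDateTime here rather than LocalDateTime, which keeps the offset instead of discarding it.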
|
STACK_EXCHANGE
|
A Java Archive (colloquially known as a JAR file due to the .jar file extension) is a ZIP file that can contain additional metadata in a file named "META-INF/MANIFEST.MF". JARs are used especially for the distribution of Java class libraries and programs. The name can be understood as a play on the English word jar (German: "vessel").
JAR files were originally introduced so that Java classes required by Java applets do not have to be reloaded individually from the network. Transferring many classes in one file is more efficient, and in addition, the files can be compressed .
The "manifest" file can be used to determine how the Java application is started. This means that the application can also be started under graphical user interfaces such as Windows, Mac OS X, or KDE without the aid of the command line (provided the appropriate command has been assigned to the .jar file extension). With java -jar you can start JAR files from the command line. JAR archives store file names internally in UTF-8 encoding so that they can also contain umlauts. An installed Java Runtime Environment is always required to run JARs or Java programs.
JAR files can be created with the jar command of the JDK (which uses the syntax of tar) or, if the file names contain only ASCII characters, with any ZIP program. In addition, the Java Platform, Standard Edition provides classes for reading and creating JAR or ZIP archives in the two packages "java.util.jar" and "java.util.zip".
For example, the following command displays the contents of a JAR file named test.jar .
jar tvf test.jar
Here, the letter t stands for a contents listing (from English "table of contents"), v for verbose output (from English "verbose"), and f indicates that a file (from English "file") is to be read, whose name follows.
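As a small illustration of the "java.util.jar" classes mentioned above (a sketch; the entry name and the Main-Class value are made up for the example), the following program writes a minimal JAR with a manifest and reads the manifest back:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class JarDemo {
    // Create a small JAR with a manifest and one entry.
    public static Path writeSampleJar() throws IOException {
        Path jarPath = Files.createTempFile("sample", ".jar");
        Manifest mf = new Manifest();
        // Manifest-Version must be set, otherwise the manifest is not written.
        mf.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
        mf.getMainAttributes().put(Attributes.Name.MAIN_CLASS, "com.example.Main");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jarPath), mf)) {
            out.putNextEntry(new JarEntry("readme.txt"));
            out.write("hello".getBytes("UTF-8"));
            out.closeEntry();
        }
        return jarPath;
    }

    // Read the Main-Class attribute back from the JAR's manifest.
    public static String mainClassOf(Path jarPath) throws IOException {
        try (JarFile jar = new JarFile(jarPath.toFile())) {
            return jar.getManifest().getMainAttributes().getValue(Attributes.Name.MAIN_CLASS);
        }
    }

    public static void main(String[] args) throws IOException {
        Path jar = writeSampleJar();
        System.out.println("Main-Class: " + mainClassOf(jar));
    }
}
```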
Each Java Archive can provide various information about the content of the archive in the "META-INF" directory through a file called "MANIFEST.MF". The most important meta-information includes
- the version of the included class libraries, which can be determined at runtime,
- Information about included JavaBeans and
- the name of the main class of a contained Java application .
This manifest file is a simple line-oriented text file that contains several pairs of names and values, each of which defines a so-called attribute. An attribute is a property of the entire application, of the contained class library, or even of just a single Java package or a single class. In addition, the file is divided into several sections.
The first section is called the main section and defines attributes related to the entire Java Archive. It always begins with the definition of the "Manifest-Version" attribute, while the other attributes are optional. The following sections each refer to a single package or class and are optional, as are the attributes they contain. Unknown attributes are ignored and do not lead to error messages. If an attribute is defined both in the main section and in an individual section, the value defined in the individual section overrides the value from the main section for the component (package or class) to which the section relates.
The following example shows an excerpt from the manifest of the “rt.jar” file contained in the Java 1.4 runtime environment.
Manifest-Version: 1.0
Specification-Title: Java Platform API Specification
Created-By: 1.4.2_05 (Sun Microsystems Inc.)
Implementation-Title: Java Runtime Environment
Specification-Vendor: Sun Microsystems, Inc.
Specification-Version: 1.4
Implementation-Version: 1.4.2_05
Implementation-Vendor: Sun Microsystems, Inc.

Name: javax/swing/JRadioButtonMenuItem.class
Java-Bean: True

Name: javax/swing/JList.class
Java-Bean: True
The main section in this example shows that this manifest is structured as described in Version 1 of the Sun Microsystems JAR file specification (the only one so far). The other attributes of this main section provide information about the specification fulfilled by the library, the producer of the Java archive, the name of the implementation, as well as the manufacturer and version of the specification used and the implementation contained. The two following sections of the example each refer to a class that is marked as a JavaBean.
The Java Development Kit contains several programs for manipulating JAR files:
- jar is a program for creating, modifying and unpacking JAR files, the call parameters of which are similar to those of the well-known Unix program tar .
- jarsigner is a program that signs JAR files and verifies their electronic signature.
- pack200 converts JAR files into a file format that can store bytecode more efficiently. It was introduced in Java 5 and is used in particular with Java Web Start, where large numbers of files may have to be transferred over the Internet. The conversion back is done with the program unpack200.
Programming tools for JAR files not included in the JDK:
- ProGuard is a program for compressing, optimizing and obfuscating JAR files. This is achieved through a more detailed analysis of the bytecode .
- If the file names in the archive consist of ASCII characters, JAR files can be edited with any software tool that can also edit ZIP files. Some examples are given in the list of data compression programs.
|
OPCFW_CODE
|
The client is using CA Intertest batch to debug their COBOL program. The client COBOL source code listings are stored in librarian. How can the client copy a librarian listing to the PROTSYM file?
Use program IN25SYMD to load symbolic information from multiple COBOL, Assembler, C, or PL/I listings into your PROTSYM in a single run. The JCL is located in PDS CAVHJCL(IN25SYMD).
The following describes the DD statements used by IN25SYMD:
STEPLIB The load library containing IN25SYMD.
PROTSYM The file to which the symbolic information is written.
LISTLIB The data set name of the PDS, PDSE, CA Librarian library, CA Panvalet library, or CA Endevor SCM library containing the listings to be added.
REPORT An execution summary is written to this file.
OPTIN The input control statements that define the request.
If loading symbolic information from CA Endevor SCM, the CA Endevor SCM AUTHLIB and CONLIB must be either in LINKLIST or in the STEPLIB concatenation. When loading symbolic information from CA Librarian or CA Panvalet, the CA Librarian or CA Panvalet CAILIB must be either in Linklist or in the STEPLIB concatenation.
Identifies the library type of the listing library specified by the LISTLIB DD statement. This keyword is required. Valid values are:
PDS -- Partitioned data set (including PDSE)
SEQ -- Sequential data set (see Note)
LIB -- CA Librarian library
PAN -- CA Panvalet library
NDV -- CA Endevor SCM library
FROM Identifies the member name for single listings, the starting member name for a range of members, or a name prefix with trailing asterisk. This keyword is required.
TO Identifies the last member name in a range of members.
Identifies the message reporting level. Valid values are:
ALL -- displays all messages.
RC -- displays a one-line return code message for each program.
NONE -- suppresses all messages.
All of the programs with the prefix PAY are loaded into the PROTSYM file from a CA Librarian library, with all of the messages displayed in the REPORT file.
//STEP1 EXEC PGM=IN25SYMD,REGION=4M
//STEPLIB DD DISP=SHR,DSN=CAI.CAVHLOAD
//PROTSYM DD DISP=SHR,DSN=USER.PROTSYM
//LISTLIB DD DISP=SHR,DSN=USER.LIBRARIAN.LIBRARY
//REPORT DD SYSOUT=*
//OPTIN DD *
|
OPCFW_CODE
|
The X3D Earth Working Group supports the X3D Graphics Geospatial Component.
The X3D Earth Working Group will use the Web architecture, XML languages, and open protocols to build a standards-based X3D Earth specification usable by governments, industry, scientists, academia, and the general public. X3D-Earth efforts encompass client-side, server-side, authoring, and conversion technologies. Much work has been accomplished already.
- Vision. Make it easier to create and use 3D spatial data.
- Mission. Promote spatial data use within X3D via open architectures.
The X3D Earth Working Group works to ensure that the X3D specification is capable of handling all manners of geospatial data representations. We interact with other standards development organizations (including the Open Geospatial Consortium and the World Wide Web Consortium) and vendors to employ open protocols and content standards to provide an interoperable framework for visualizing and interacting with geospatial data. These standards-based specifications will be used to create systems and content that is usable, and re-usable, over the long term.
The X3D-Earth effort encompasses client-side, server-side, authoring, and conversion technologies. Its goals include:
- Providing the ability to use publicly and privately available terrain and imagery datasets
- Capitalizing on X3D capabilities (e.g. scripting and HTML 5 browser support) for displaying geospatial data
- Promoting its use as an alternative to proprietary solutions
- X3D Earth Mailing List subscriptions
- X3D Earth Mailing List Archives
Our working group continues meeting by teleconference every two weeks.
- BS Contact Geo browser plugin and standalone application for X3D geospatial rendering by BitManagement
- Instant Reality standalone application for X3D geospatial rendering by Fraunhofer
- Xj3D open-source Java application for X3D geospatial rendering
- FreeWrl/FreeX3D open-source C++ application for X3D geospatial rendering
Software: Authoring Tools
- X3D-Edit authoring tool includes full support for authoring X3D geospatial nodes
- MBARI MB-System
Potential future work:
- The 3D City Database Importer/Exporter uses CityGML for KML/COLLADA portrayal preprocessing. It is a well-documented open-source Java codebase. There is an interesting development opportunity to follow the design pattern of adding another exporter for X3D output that is similar to the existing KML/Collada exporter. Inquiries are welcome.
- X3D-Earth Tutorial presentation by Mike McCann and Alan Hudson at Web 3D Symposium 10 August 2008 in Los Angeles
- Geospatial Component X3D Earth tutorial slideset by Don Brutzman
- Tourtelotte, Dale, X3D-Earth: Full-Globe Coverage Utilizing Multiple Datasets, Masters Thesis, Naval Postgraduate School, Monterey California, September 2010. Advisor Don Brutzman, second readers Byounghyun Yoo and Don McGregor.
- Examine mode behavior for exploring geospatial worlds
- Terrain height/slope visualization via autogenerated texture coordinates
- Extensible 3D (X3D) Graphics Geospatial component version 3.2
- Proposed changes for X3D Geospatial component version 3.3
- X3D Basic Examples Archive, GeoSpatial Examples
- TODO Byounghyun Yoo's scripts
- TODO Fraunhofer?
- TODO Bit Management?
- TODO NPS internal supercomputer cluster nightly build
|
OPCFW_CODE
|
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Xml;

namespace LinqToHtml
{
    public class HTMLTag : IEnumerable<HTMLTag>
    {
        private readonly XmlNode _node;

        public HTMLTag(XmlNode node)
        {
            _node = node;
        }

        public IEnumerable<HTMLTagAttribute> Attributes
        {
            get
            {
                if (_node.Attributes == null)
                {
                    yield break;
                }
                foreach (var attribute in _node.Attributes
                    .Cast<XmlAttribute>()
                    .Select(x => new HTMLTagAttribute(x)))
                {
                    yield return attribute;
                }
            }
        }

        public IEnumerable<HTMLTag> ChildTags
        {
            get
            {
                if (!_node.HasChildNodes)
                {
                    yield break;
                }
                // ChildXmlNodes() is an extension method defined elsewhere in this project.
                var childXmlNodes = _node.ChildXmlNodes();
                foreach (var tag in childXmlNodes.Select(item => new HTMLTag(item)))
                {
                    yield return tag;
                }
            }
        }

        public string Content
        {
            get { return _node.InnerText; }
        }

        public IEnumerable<HTMLTag> DescendantTags
        {
            get
            {
                if (!_node.HasChildNodes)
                {
                    yield break;
                }
                // Flatten() (also a project extension method) walks the whole subtree.
                var xmlNodes = _node.ChildXmlNodes();
                var allXmlNodes = xmlNodes.Flatten();
                foreach (var tag in allXmlNodes.Select(item => new HTMLTag(item)))
                {
                    yield return tag;
                }
            }
        }

        // Returns the first child tag of the given type, or null if none matches.
        public HTMLTag this[string key]
        {
            get { return ChildTags.FirstOrDefault(x => x.Type == key); }
        }

        public HTMLTag Parent
        {
            get { return new HTMLTag(_node.ParentNode); }
        }

        public string RawContent
        {
            get { return _node.InnerXml; }
        }

        public string Type
        {
            get { return _node.Name; }
        }

        public IEnumerator<HTMLTag> GetEnumerator()
        {
            return DescendantTags.GetEnumerator();
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }

        // Copies matching attributes onto same-named writable properties of destination.
        public void MapTo<T>(T destination)
        {
            var properties = destination
                .GetType()
                .GetProperties()
                .Where(x => x.CanWrite)
                .ToDictionary(x => x.Name.ToLower());
            var attributes = Attributes
                .ToDictionary(x => x.Name.ToLower(), x => x.Value);
            var matches = attributes.Where(x => properties.ContainsKey(x.Key));
            foreach (var match in matches)
            {
                var property = properties[match.Key];
                PropertySetter
                    .GetFor(property.PropertyType)
                    .SetValue(destination, property, match.Value);
            }
        }

        // Explicit conversions parse the tag's text content as the target type.
        public static explicit operator int(HTMLTag htmlTag)
        {
            return Int32.Parse(htmlTag.Content);
        }
        public static explicit operator bool(HTMLTag htmlTag)
        {
            return Boolean.Parse(htmlTag.Content);
        }
        public static explicit operator double(HTMLTag htmlTag)
        {
            return Double.Parse(htmlTag.Content);
        }
        public static explicit operator decimal(HTMLTag htmlTag)
        {
            return Decimal.Parse(htmlTag.Content);
        }
        public static explicit operator float(HTMLTag htmlTag)
        {
            return Single.Parse(htmlTag.Content);
        }
        public static explicit operator long(HTMLTag htmlTag)
        {
            return Int64.Parse(htmlTag.Content);
        }
        public static explicit operator DateTime(HTMLTag htmlTag)
        {
            return DateTime.Parse(htmlTag.Content);
        }
        public static explicit operator char(HTMLTag htmlTag)
        {
            return Char.Parse(htmlTag.Content);
        }
        public static explicit operator short(HTMLTag htmlTag)
        {
            return Int16.Parse(htmlTag.Content);
        }
        public static explicit operator byte(HTMLTag htmlTag)
        {
            return Byte.Parse(htmlTag.Content);
        }
        public static explicit operator string(HTMLTag htmlTag)
        {
            return htmlTag.Content;
        }
    }
}
I need to send simple values from one computer to another over the net. I'm a total beginner and wasn't able to find help about that.
The boygrouping feature seems a bit too advanced and complicated; is there an easier way?
Are you talking about the internet or a local network?
In a local net everything is quite easy.
Over the internet you would need either a deep understanding of routers, NAT and VPNs, or a tool like http://hamachi.cc to get started.
How to get it running on the LAN:
1. Have a UDP (Network Client) node for the sending end and a UDP (Network Server) node for the receiving end.
2. Set the IP address of the server at the Remote Host pin on the client side.
3. Set both UDP nodes to the same port number. Basically you can choose them at will, but make sure they are not used for something else in your network (see http://en.wikipedia.org/wiki/TCP_and_UDP_port_numbers).
4. Enable the server on the receiving side.
5. Now if you enter a string on the sender and bang the Do Send pin for one frame, this string will get transmitted to the receiver, so you will get the same string for one frame on the receiver.
A S+H (String) node connected to the Output and Queue Count pins of the receiving UDP node will help you see your results.
From now on it's a classical patching thing. Nodes like Tokenizer, AsValue (String), AsString (Value) and FormatValue will come in handy. Please ask for details…
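Outside of vvvv, the same client/server roles can be sketched with plain sockets. A minimal Python illustration of the setup above (the message text is an arbitrary placeholder, and the OS picks a free port; in vvvv you would choose the port yourself):

```python
import socket

# receiving end ("UDP server" in vvvv terms): bind to a port and wait
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # 0 = let the OS pick a free port
port = server.getsockname()[1]         # both sides must agree on this number

# sending end ("UDP client"): knows the server's address, just fires a datagram
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello 123", ("127.0.0.1", port))

data, sender = server.recvfrom(1024)   # one message, like one frame's Output pin
print(data.decode())                   # hello 123
```

Note the asymmetry: only the server binds a fixed port; the client just needs the server's address, which matches the "client knows the server, server doesn't know the client" point below.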
And while I am at it, I hear you asking: what is the difference between concatenate and discard mode on the receiver?
In case the sender runs at a higher frame rate than the receiver, you need to deal with multiple messages in one frame on the receiver. In discard mode you will always receive just the latest message; all others get discarded, hence the name.
This is great for continuous transmissions like the value of a sensor, but is bad if you are waiting for something like the press of a start button.
In concatenate mode all messages which are received in one frame are appended to each other, so you can analyse them later and process them all in one frame. RegExpr and spreads will typically help you with that.
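As a toy model of the two receive modes (not vvvv code, just an illustration, assuming messages that arrived between frames are buffered in a list):

```python
def process_frame(queue, mode):
    """Return what the receiver sees for one frame, then clear the queue."""
    if not queue:
        return None
    if mode == "discard":
        result = queue[-1]        # only the latest message survives
    else:                         # "concatenate"
        result = "".join(queue)   # all messages appended together
    queue.clear()
    return result

frame_queue = ["start;", "x=1;", "x=2;"]
print(process_frame(list(frame_queue), "discard"))      # x=2;
print(process_frame(list(frame_queue), "concatenate"))  # start;x=1;x=2;
```

In discard mode the "start;" message would be lost, which is exactly why it is the wrong mode for one-shot events like a start button.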
Why are they called servers and clients?
The client knows the server but the server doesn't know the client.
The client originates the connection to the well-known server.
Please ask more questions, this is a deep topic; TCP and UDP broadcasts are some more things to talk about.
Thanks for the detailed reply.
I would do it on a local network, so I hope the answer refers to that and I understood you correctly.
Yes, the steps above are for a local network (LAN).
How come I was able to do it with just 2 simple nodes, netsend and netreceive?
Does it have to do with the difference between sending values and strings?
What is the difference between them anyway? (I hope that's not a silly question.)
The difference between sending a value and a string:
value = 123456…
string = blablabla…
The two modules are spreadable; that means you can send multiple values with one module, no need to create one module for each value or string.
Thanks!
Reinforcement learning in robotics: A survey
This article attempts to strengthen the links between the two research communities by providing a survey of work in reinforcement learning for behavior generation in robots, highlighting both key challenges in robot reinforcement learning and notable successes.
Policy search for motor primitives in robotics
A novel EM-inspired algorithm for policy learning that is particularly well-suited for dynamical system motor primitives is introduced and applied in the context of motor learning and can learn a complex Ball-in-a-Cup task on a real Barrett WAM™ robot arm.
Relative Entropy Inverse Reinforcement Learning
This paper proposes a model-free IRL algorithm, where the relative entropy between the empirical distribution of the state-action trajectories under a baseline policy and their distribution under the learned policy is minimized by stochastic gradient descent.
Learning to select and generalize striking movements in robot table tennis
- Katharina Muelling, J. Kober, Oliver Kroemer, Jan Peters
- Computer ScienceAAAI Fall Symposium: Robots Learning…
- 19 October 2012
This paper presents a new framework that allows a robot to learn cooperative table tennis from physical interaction with a human and shows that the resulting setup is capable of playing table tennis using an anthropomorphic robot arm.
Learning motor primitives for robotics
It is shown that two new motor skills, i.e., Ball-in-a-Cup and Ball-Paddling, can be learned on a real Barrett WAM robot arm at a pace similar to human learning while achieving a significantly more reliable final performance.
Reinforcement learning to adjust parametrized motor primitives to new situations
This paper proposes a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters, and introduces an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression.
Reinforcement learning for control: Performance, stability, and deep approximators
Imitation and Reinforcement Learning
This article describes the dynamical system MPs representation in a way that is straightforward to reproduce, and presents an appropriate imitation learning method, i.e., locally weighted regression, which can be used both for initializing RL tasks as well as for modifying the start-up phase in a rhythmic task.
Movement templates for learning of hitting and batting
- J. Kober, Katharina Muelling, Oliver Kroemer, Christoph H. Lampert, B. Schölkopf, Jan Peters
- Computer ScienceIEEE International Conference on Robotics and…
- 3 May 2010
The Ijspeert framework is reformulated to incorporate the possibility of specifying a desired hitting point and a desired hitting velocity while maintaining all advantages of the original formulation, and is shown to work well in two scenarios.
Reinforcement Learning to Adjust Robot Movements to New Situations
This paper describes how to learn such mappings from circumstances to meta-parameters using reinforcement learning, and uses a kernelized version of the reward-weighted regression to do so.
Pricing, payment, and billing is an integral part of selling your app on the Atlassian Marketplace. Learn about our revenue-sharing models, how to set or change prices for your app, what a remittance report looks like, and more.
As we continue to move towards a cloud future together, new server app sales and installs are no longer available for customers. As of Feb 15, 2023, renewals of server apps will automatically be prorated until the end of support for server on Feb 15, 2024.
Get more cloud pricing resources in the Partner Portal. You'll find webinars, calculators, and tools to help you strategize your app's revenue.
All apps integrate with an Atlassian product, and adhere to one of three payment models:
To provide a unified experience for billing and invoicing across all Atlassian cloud products, Atlassian Marketplace is migrating to a new cloud billing engine. Learn more about this change from our billing engine FAQs.
Atlassian will pay you, the publisher, the following percentages of gross revenue for Jira, Jira Service Desk, and Confluence apps:
This pricing is effective 1 April 2022 through 31 December 2024.
For all new Forge apps listed on Atlassian Marketplace after 1 April 2022, Atlassian will pay you, the publisher, 95% of the gross revenue of all sales made within first year. After the first year, the revenue share will return to whatever the standard rate is for cloud apps at that time. This rate of 95% is a temporary incentive and will be reviewed periodically. This promotion excludes new apps built on Connect as well as former Connect apps that are rebuilt as Forge apps or migrated from Connect to Forge. In order to be eligible, Forge apps must use Atlassian-hosted storage and compute.
For Atlassian products not stated above, you will receive a revenue share of 75% across all hosting types. For more information, see the [Atlassian Marketplace Partner Agreement](https://www.atlassian.com/licensing/marketplace/publisheragreement), Section 4.2.
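The revenue-share arithmetic is simple; a small sketch using the 75% rate quoted above for products outside the cloud table, and the temporary 95% first-year Forge incentive (the dollar amounts are arbitrary examples):

```python
def publisher_payout(gross_revenue, share=0.75):
    """Publisher's cut of gross app revenue at a given revenue-share rate."""
    return round(gross_revenue * share, 2)

# 75% standard share for products not covered by the cloud table
print(publisher_payout(1000.00))        # 750.0
# 95% first-year promotional share for eligible new Forge apps
print(publisher_payout(1000.00, 0.95))  # 950.0
```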
You can set the initial pricing for your app when you submit it for approval on the Marketplace. When you'd like to change prices for your app, you need to:
Your change becomes active and will be reflected on the Marketplace site within 24 hours. After this, new customers will pay the updated prices. A few things to note when you change prices:
When trying to change the Payment model for an existing app, the following message is displayed in the UI:
You can't alter the Payment model for existing apps in the following scenarios:
You can change the Payment model in the following situations:
In order to change the Payment model for an existing app (except for the two situations listed above), you'll need to release a new version with this change in place.
In your app descriptor, add or remove the licensing param.
For details, see Adding licensing support to server apps.
For cloud apps, set the enableLicensing flag in the app descriptor file to true to add licensing, or remove the flag to set the app version to free.
For details, see Cloud app licensing.
'Paid via Atlassian' app licenses have to exactly match the Atlassian host product license. For example, if a customer has a Confluence license for 500 users, they need to purchase the same license tier for apps. For Server and Data Center, if customers upgrade their product license but not their app license, we alert the product administrator to upgrade the app license to match the new product license. For cloud apps, the app tier is upgraded along with the product tier upgrade to ensure that app and product tier always remain in sync. Learn more about license upgrades
'Paid via Atlassian' app licenses for Server or Data Center come with one year of maintenance just like any other Atlassian product. Maintenance includes support and access to any version upgrades for a year from the date of purchase. When maintenance expires, customers have to renew app licenses to receive support or maintenance for the next 12-month period.
For cloud 'Paid via Atlassian' app licenses, we offer both monthly and annual billing options to customers. Based on the customer billing cycle, the app is renewed either for one month or for a 12-month period. Please note that cloud subscriptions that renew monthly are set up for automatic renewal by default.
Our renewal system automatically notifies customers when the app licenses are about to expire. A customer can renew in advance of expiration to ensure uninterrupted access to support and software updates. Learn more about maintenance renewals
We owe you remittance for your sales after you reach $500 USD in profit. We pay you within 30 days after the end of the month in which you accrue $500+ USD in profit.
This means that we pay you within a minimum of 30 days from the time of sale, and no more than 60 days after. We designed this time frame around customer support needs, refunds, and chargebacks (payment disputes). We offer the same 30-day refund period for your apps as we do for other Atlassian products. After 30 days, we don't grant refund requests to customers.
Atlassian may remit funds early at our discretion.
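To make the $500 threshold concrete, here is a rough sketch (an illustration only; it assumes profit accrues per calendar month and the first payment is triggered by the month whose close pushes the cumulative total past the threshold):

```python
def first_payout_month(monthly_profits, threshold=500.0):
    """1-based index of the month whose close triggers the first payment,
    or None if the threshold is never reached."""
    total = 0.0
    for month, profit in enumerate(monthly_profits, start=1):
        total += profit
        if total >= threshold:
            return month
    return None

# $200 + $150 = $350 after month 2 (below threshold);
# $350 + $250 = $600 after month 3, so payment lands within
# 30 days after the end of month 3
print(first_payout_month([200.0, 150.0, 250.0]))  # 3
```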
Employing Atlassian's worldwide network of resellers gives you a sales multiplier. When you opt into this program, Atlassian Solution Partners can purchase your app for their clients at a discount from the list price. For more information, see the Atlassian Marketplace Partner Agreement, Section 4.9.
You can opt in or out of the program on a per-listing basis, and when you edit pricing details for your app. You can learn about Atlassian Solution Partners here: https://www.atlassian.com/partners.
Paid via Atlassian apps are restricted from offering discounts to organizations that aren't part of the Atlassian Solution Partners program. If you'd like to sell your app at discount to individual customers, you can create a sales promotion.
Yes, for cloud, server and Data Center apps. Check out our documentation on Sales promotions.
From August 1, 2020, each month you will receive two emails from firstname.lastname@example.org:
Note, the numbering convention for transactions in these emails is:
This report contains:
The following appear as separate line items in the remittance report and are not reflected in the Marketplace Portal:
ELA (bills): This is the customer’s redemption of ELA credits that were previously prepaid. This is denoted by the prefix “AT”.
Concession (bills): This occurs when Atlassian gives a full discount to the customer, but maintains and pays the liability to the partner.
Items that are included in the summarized bill (total sales for the month) on the remittance report, but are not reflected in the Marketplace Portal:
The following screenshot shows an example remittance report:
Note that this report is an updated version of the remittance report that was sent prior to August 1, 2020.
This CSV file contains the details of the summarized bill and bill credits. Note that this CSV was not sent prior to August 1, 2020.
Each record in the CSV includes the following information:
Concessions and ELA bills will not show up in the .CSV, as this is a $0 transaction to the customer.
Be aware that the expected date of the remittance report and the payment of a given month is between the 17th-20th of the payment month.
The following screenshot shows an example CSV:
What if my remittance report doesn’t match my remittance payment? For any enquiries about a payment or report, email Accounts Payable at email@example.com.
When can we expect to see 10k license bills on the remittance report? We plan to introduce higher user tiers for marketplace apps before the end of CY 2020, which should eliminate this problem.
Why are there items included in the summation bill but not reflected in the Marketplace Partner Portal?
We sell to customers everywhere, except for trade-embargoed countries subject to United States export restrictions. We help partners collect any taxes applicable for the customer's locale.
Unfortunately customers cannot pay for apps with POs. Like Atlassian products, buying apps through a purchase order (PO) is not supported. Customers can reference purchase orders with their invoice. Find out more.
The Marketplace handles quotes, checks and bank transfers for paid via Atlassian apps the same way as Atlassian products.
Atlassian offers full refunds within the first 30 days from a purchase, no questions asked.
Refund requests that fall between 31-60 days from a purchase will be at the discretion of Atlassian, unless the per-partner refund total exceeds $1,500. In that case, Atlassian will seek approval from the partner prior to issuing a refund.
In the event of upgrade requests between 31-90 days post-purchase, Atlassian will opt to reallocate funds for the remaining maintenance toward the higher-tiered license. This allows us to appease the customer while increasing revenue to our partners. Find out more.
Use the “Price Guidance“ feature to help you compare the tier price for your apps against the same tier price for server and average price of all Data Center apps. Below are the two guidance values that are available via the Marketplace app pricing tool:
This guidance value is a ratio of monthly per user price of the cloud version and the approximated monthly per user price of the server version for your app for each price tier. Since this guidance value is the ratio between your app price for cloud and server version, this ratio will keep changing as you update the price for your cloud app tiers. This guidance value will enable you to more readily compare the price differences between your server and cloud app deployments, and price your apps accordingly.
Calculating Cloud - Server price ratio guidance
The normalized price guidance is the approximate per user average monthly price of all Data Center apps calculated for the maximum number of users in each tier, which is normalized with reference to the price of the [11-100] tier for your app. Since this normalization is done against the price of the [11-100] tier, this guidance value will keep changing as you change the price for this tier. This guidance value signals how the app's price can be discounted across tiers for all of the Marketplace by setting lower per user price for higher tiers. The steps below show how we calculate the normalized price guidance value:
Calculating Normalized Price Ratio Guidance:
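The exact calculation steps were not reproduced on this page, but based on the descriptions above, the two guidance values might be sketched as follows (formulas inferred from the prose, not Atlassian's actual implementation; tier names and prices are illustrative):

```python
def cloud_server_price_ratio(cloud_monthly_per_user, server_monthly_per_user):
    """Ratio of the cloud per-user monthly price to the approximated
    server per-user monthly price for one tier."""
    return cloud_monthly_per_user / server_monthly_per_user

def normalized_price_guidance(per_user_price_by_tier, reference_tier="11-100"):
    """Each tier's per-user monthly price expressed relative to the
    [11-100] tier, so per-user discounting at higher tiers is visible."""
    ref = per_user_price_by_tier[reference_tier]
    return {tier: price / ref for tier, price in per_user_price_by_tier.items()}

print(cloud_server_price_ratio(2.0, 1.0))  # 2.0
print(normalized_price_guidance({"11-100": 2.0, "101-250": 1.0}))
```

As the prose notes, both values shift whenever you change the underlying tier prices, since each is a ratio against a price you control.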
Our goal is to provide Marketplace partners and the developer community with the tools and flexibility needed to price your apps for the customers in cloud. As Atlassian continues to help the customers adopt and migrate to our cloud products, pricing and total cost of ownership will continue to be our priority and focus area. We want to ensure that the value our customers gain from Atlassian products and apps outweighs their investment.
To enable the same, we introduced key changes in the past year to help you align your pricing effectively with Atlassian’s core product pricing:
Opt-in app migration discounts (July 2020) - These migration discounts reduce the total cost of ownership gap between on-premise and cloud. Through this, the impact of pricing being a blocker to migrations will be minimized.
Opt-in dual licensing (September 2020) - To help customers save on migration costs, customers can take advantage of Free license extensions for up to one year.
We know how important pricing flexibility is to your business, especially the ability to deliver targeted promotions to customers in order to support specific scenarios. Marketplace partners and developers with existing cloud apps may already be familiar with promo codes, which have been available and in use for some time. These promo codes are also widely used by Atlassian’s field sales and advocate teams to reduce friction in app adoption for larger customers.
One hundred kilograms, or about 220 pounds, seems quite large, but it can be difficult to accurately estimate just how big that is.
There are several types of animals, including different mammals and reptiles, that weigh right around 100 kilograms, and picturing them can often help you get a good feel for how much 100 kilograms is.
Here are 9 animals that weigh 100 kilograms.
- Giant Pandas
- Warthogs
- Cougars
- Reindeer
- Seals
- Pacific White-Sided Dolphins
- African Rock Pythons
- Baby Elephants
- Rocky Mountain Goats
Did you know? 100 kilograms is equal to 100000 grams or 3527 ounces.
#1. Giant Pandas
The iconic giant panda, native to the south-central region of China, generally weighs about 100 kilograms, with males weighing slightly more than females.
These bears, whose fur features distinctive black and white markings, subsist on a diet that’s mainly made up of bamboo and other leaf shoots.
Pandas, however, are omnivorous, so they might also sometimes eat insects, honey, or even small rodents.
Once critically endangered due to habitat loss, wild panda numbers have risen over the years, and giant pandas are now listed as only vulnerable.
Other bear species, such as moon bears and Andean spectacled bears, also weigh in at right around 100 kilograms.
#2. Warthogs
Although warthogs, which are members of the pig family, may not look too big, they can weigh quite a bit.
These strong animals weigh in at an average of 100 kilograms, with females growing to a somewhat smaller size than males.
Warthogs are native to sub-Saharan Africa.
There are two warthog species, the desert warthog and the common warthog, and both species are largely herbivorous and eat a varied diet of leaves, shoots, and grasses.
Male and female warthogs both sport a long crest of dark fur running from their heads down their spines as well as two protruding tusks.
#3. Cougars
Cougars, also known as pumas or mountain lions, are some of the heaviest cats in the big cat families.
Male cougars often grow to weigh about 100 kilograms.
These large cats have tawny coats and long tails. They are native to the Americas, with a habitat range stretching from northern Canada to the Andes.
Cougars often hunt during the night or at dusk and dawn. They are largely solitary and mostly hunt alone, although some cougars form loose social groups in which food is shared.
#4. Reindeer
Reindeer, also known as caribou, are some of the most recognizable members of the deer family.
Although there are actually several reindeer subspecies, with some growing to slightly larger or smaller sizes, most reindeer weigh about 100 kilograms.
These deer, which generally have dark gray or off-white coats, are native to a wide range of northern mountainous, tundra, Arctic, and sub-Arctic regions around the world.
Both males and females can grow antlers, and these antlers, which can grow to more than 3 feet in length, are shed and regrown each year.
#5. Seals
There are many different seal species, all of which grow to different sizes and weights.
Two seal species, however, the ribbon seal and the South American fur seal, weigh right around 100 kilograms at maturity.
As their name suggests, South American fur seals are native to Peru, Chile, Brazil, Uruguay, and Argentina, and have brown or gray fur.
Ribbon seals, on the other hand, prefer colder climates and are native to Arctic and sub-Arctic regions.
Ribbon seals are black with large, bold white circles of fur at their necks, tails, and around each fin.
#6. Pacific White-Sided Dolphins
Native to the northern regions of the Pacific Ocean, the Pacific white-sided dolphin weighs about 100 kilograms when fully grown, although some of these marine mammals can grow to an even larger size.
White-sided dolphins are dark gray in color with bold white splashes at their sides.
These dolphins are both incredibly active and very social, and they swim in groups made up of between 10 and 100 individuals.
In order to communicate, Pacific white-sided dolphins, like other dolphin species, use clicks and whistles, and each dolphin can be identified by other dolphins using a unique whistle “name.”
#7. African Rock Pythons
One of the largest snakes on the planet, the African rock python can grow to 25 feet long and can easily weigh 100 kilograms or more.
These snakes, which are native to sub-Saharan Africa, are so large that they can swallow prey such as antelopes.
African rock pythons have thick, muscular bodies, and their scales are dark brown with a pattern of yellow or olive-green splotches.
#8. Baby Elephants
Although they don’t stay small for long, when elephants are born, they weigh about 100 kilograms.
Baby elephants will be parented not only by their mothers but also by all of the other females within the social herd.
Elephant calves drink milk for the first three months of life before moving on to different types of vegetation.
Elephants are more or less independent at three years old but aren’t considered fully mature until they’re about 18 years old.
Elephants have long lifespans and can live for 70 or more years.
#9. Rocky Mountain Goats
Sturdy and strong, the Rocky Mountain goat can weigh just under 100 kilograms, with males weighing a bit more than females.
These goats, who sport a fluffy, white fur coat and short black horns, are incredibly sure-footed and have been known to climb up sheer cliffs.
Rocky Mountain goats are herbivorous and live on a diet of grasses, mosses, and ferns.
S3 Bucket Lambda replicator
The S3 Bucket Lambda replicator starter project creates an S3 Bucket which will invoke a Lambda function on every object addition and deletion. The Lambda will copy the object to replica S3 Buckets in other regions, or delete it from them, mirroring the source S3 Bucket.
S3 Buckets already have a Cross-Region Replication (CRR) feature and we recommend you use that feature for robust data replication. However, CRR only allows you to replicate to a single other region, and it is not possible to daisy-chain from the target S3 Bucket to another region. This solution was originally developed for deploying Lambda artifacts to multiple regions.
It serves as an example of using Paco to manage S3 Bucket and Lambda objects. There are no network or other complex resources in this starter project.
Create a "S3 Bucket Lambda replicator" Project
Install Paco and then get started by running the
paco init project <your-project-name> command.
Review the instructions on Getting Started with Paco to understand the importance of
fields in Paco and the difference between a name and title. Then follow the instructions on creating
credentials for your project to connect it to your AWS Account.
You will be asked to provide prompts for a NetworkEnvironment name and title. While this project
does not provision any network resources, Paco still uses the name
netenv to refer to a
set of environments that contain the same set(s) of applications and shared resources.
Take a minute to set up a PACO_HOME environment variable; this will save you lots of typing.
Customize and Provision SNS Topics
You will need to create SNS Topics if you want to provision the prod environment, which has CloudWatch Alarms to notify you if the Lambda function throws errors or is taking too long to complete.
These SNS Topics contain SNS Subscriptions. Review the SNS Topics configuration and note that there is an admin group with one email subscription.
This group is configured to receive any alarm notifications. You can add as many subscriptions to this group as you want. See the SNS Topics docs for examples of all protocols.
Customize and Provision Environments
There are two environments with this project: dev and prod. They are almost the same except the prod environment has a pair of CloudWatch Alarms to notify you if your Lambda function has invocation problems.
Before you provision these environments, if you are using this netenv in a multi-account
set-up, review the
aws_account field and change this to the correct account name you
want to use:
prod:
  title: "Production Environment"
  default:
    network:
      aws_account: paco.ref accounts.prod # deploy prod env to prod account
Now provision an environment with:
paco provision netenv.mynet.dev
paco provision netenv.mynet.prod
The prod environment is also intended to be used with more than one region to replicate into.
You will see this at the very bottom of your project's netenv file:

us-west-2:
  enabled: true
  applications:
    app:
      groups:
        replica:
          enabled: true
You can add as many regions here as you need:
us-west-2:
  enabled: true
  applications:
    app:
      groups:
        replica:
          enabled: true
us-east-1:
  enabled: true
  applications:
    app:
      groups:
        replica:
          enabled: true
ca-central-1:
  enabled: true
  applications:
    app:
      groups:
        replica:
          enabled: true
This will create the S3 Buckets to hold the replicated objects. You will also need to tell the Lambda
which buckets to replicate into using an environment variable named REGIONS:

prod:
  ca-central-1:
    applications:
      app:
        groups:
          original:
            enabled: true
            resources:
              replicator:
                environment:
                  variables:
                    - key: 'ENV'
                      value: 'prod'
                    - key: 'REGIONS'
                      value: 'usw2;use1;cac1'
You will need to use the short region name for each AWS region. See the aws_regions section in
the paco.models vocabulary file to look up the short names for regions. An S3 Bucket will also be created
in the same region as the original bucket, in case you need to replicate into that region with an S3 Bucket name that
is consistent with the other regions.
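As an illustration of the REGIONS convention, here is a hypothetical helper that expands the semicolon-delimited short names (the mapping shown is a small excerpt consistent with the regions used above, not the full paco.models vocabulary):

```python
# excerpt only: an assumption based on the regions used in this document
SHORT_TO_REGION = {
    "usw2": "us-west-2",
    "use1": "us-east-1",
    "cac1": "ca-central-1",
}

def replica_regions(env_value):
    """Split the semicolon-delimited REGIONS value into full region names."""
    return [SHORT_TO_REGION[short] for short in env_value.split(";")]

print(replica_regions("usw2;use1;cac1"))
# ['us-west-2', 'us-east-1', 'ca-central-1']
```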
Finally, update your Paco project.yaml file to have a list of all of your active_regions. This is the master list
of the regions you are active in. In certain places in your configuration, the special keyword all can be used
to refer to all of your Paco project's usable regions:

name: myproj
title: MyProj
active_regions:
  - eu-central-1
  - us-west-2
  - us-east-1
  - ca-central-1
Test Your S3 Bucket Lambda
Log into the AWS Console and go to the S3 Bucket service. You will see buckets with names like this:
Go the "orginal" bucket and upload an object:
Then navigate to a "replica" bucket and you should see a copy of your object:
If you didn't, and this is the prod environment, a CloudWatch Alarm will fire after the Lambda invocation fails. This will happen if your environment variable names are incorrect. You can also go to your Lambda and generate a Test invocation with an empty event; this will cause the Lambda to safely throw an error.
In the CloudWatch service you will see your "Errors" Alarm in an alarm state:
There are two alarms, one for invocation errors and a second for duration. If the Lambda takes longer than 80% of the total allocated run time, the duration alarm will fire. With this simple Lambda it is unlikely that you will ever see this alarm triggered, but such an alarm is generally useful for any Lambdas that you deploy. AWS will abruptly stop a Lambda which reaches its maximum duration, so it's good to be notified before this happens.
Apply an S3 Bucket Policy
If you were to use this for a real-world solution, you would also need to determine what kind of S3 Bucket Policy should protect your buckets. This project starts with a simple policy that allows only the root account access to s3:GetObject API calls on the replica buckets. Adjust this policy to suit your needs:
replica:
  type: Application
  title: "Replica S3 Bucket"
  order: 1
  enabled: false
  resources:
    s3:
      type: S3Bucket
      enabled: true
      order: 1
      bucket_name: 'replica'
      deletion_policy: 'delete'
      policy:
        - aws:
            - 'arn:aws:iam::123456789012:root'
          effect: 'Allow'
          action:
            - 's3:GetObject'
          resource_suffix:
            - '/*'
            - ''
After updating the policy YAML, you can run:
paco provision -y netenv.mynet.dev
paco provision -y netenv.mynet.prod
And watch Paco update the S3 Bucket policy for ALL of your replica buckets. Enjoy!
I have been searching all over for an answer. I got a cheap VPS running Ubuntu 13.10 and installed Webmin, which should let me go to serverip:10000.
But I can't connect to it, and port checkers such as yougetsignal.com say the port is closed.
I have tried running: iptables -A INPUT -p tcp -d 0/0 -s 0/0 --dport 10000 -j ACCEPT
(and many other variations found on Google, saving, restarting, etc.)
And when I run netstat -an | grep "LISTEN " I can see:
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN
ifconfig doesn't show any local IP, so it should be directly connected to the internet. There is no firewall installed; the "ufw" command is not recognized.
I have also flushed iptables and set the INPUT, OUTPUT and FORWARD policies to ACCEPT.
Apparently the only port that is open is port 22, which I use to connect over SSH; even port 80 is closed.
(Mainly I am trying to run another service on port 6121, with the same result: it's listening, but connections to it are still blocked.)
Why are all these ports blocked? How can I open the ports and connect to/use my server?
EDIT> results of netstat -ntlp:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp        0      0 127.0.0.1:587    0.0.0.0:*        LISTEN  557/sendmail: MTA:
tcp        0      0 0.0.0.0:10000    0.0.0.0:*        LISTEN  648/perl
tcp        0      0 0.0.0.0:80       0.0.0.0:*        LISTEN  610/apache2
tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  425/sshd
tcp        0      0 127.0.0.1:25     0.0.0.0:*        LISTEN  557/sendmail: MTA:
tcp        0      0 0.0.0.0:6121     0.0.0.0:*        LISTEN  4712/python
tcp6       0      0 :::22            :::*             LISTEN  425/sshd
result of iptables --list:
Chain INPUT (policy ACCEPT)
target  prot opt source    destination
ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:6121
ACCEPT  tcp  --  anywhere  anywhere     tcp dpt:webmin

Chain FORWARD (policy ACCEPT)
target  prot opt source    destination

Chain OUTPUT (policy ACCEPT)
target  prot opt source    destination
i got this reply from my host "We do not close or block any ports on our end. Make sure you have iptables disabled."
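Since iptables shows ACCEPT rules and netstat shows the services listening, a useful next step is to test the port from the machine itself and then from outside: if it is open locally but closed remotely, the block is upstream (provider edge firewall, NAT, or the client side). A minimal sketch of such a check, with a hypothetical helper name:

```python
import socket


def port_state(host, port, timeout=2.0):
    """Return 'open' if a TCP connection to host:port succeeds, else 'closed'."""
    try:
        # create_connection resolves the host and attempts a full TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except OSError:
        # covers refused connections, timeouts, and unreachable hosts
        return "closed"
```

Running `port_state("127.0.0.1", 10000)` on the VPS itself and `port_state("<public-ip>", 10000)` from another machine would localize where the connection is being dropped.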
In my opinion (from my recent discussion about this),
I think the reason is that re-implementing each data address for every new D-version is extremely time-consuming, so Kingofice is waiting for a big, stable version of the D evolution.
I also think that is perhaps the reason why the creator of Lazy stopped following the updates at 0.6.
My monitor has been working since I downgraded to, and stayed on, 0.7D.
You can keep a second LFS.exe with a working version and another one to follow the D-versions ;-)
The speed and time diff from the s/f line works, which is awesome, but the first hotlap doesn't save as a PB.
I've also noticed the app getting the car you're in wrong. Not sure what causes it, but I think it's somewhat related to telepitting?
It's very noticeable on servers like the AA multiclass, for example.
Perhaps something to think about when eventually making the update for the new LFS version.
You can have two (or more) different .exe files of LFS in the same folder,
with different names. It works.
Or create a new folder with a different LFS version in it (give the folder an explicit name like "LFS DM" if you want).
The LFS version which currently works with D&M is 0.7D.
See my attached screenshot: I have three LFS versions under the same folder:
one to read old mpr files (0.7 abk... I don't remember exactly when that was used, lol)
one updated to D6 (0.7D6)
one working with D&M (LFS.exe = 0.7D)
Well, having layouts work would be A-MA-ZING (Lazy was impressive with that)...
But having the potential best refreshed at every sector on official tracks would be a great addition and less of a coding headache. I mean, if you do your best-ever Sector 1 and then crash or quit in Sector 3, currently it will not show.
I haven't had much time for the app lately but I'll try to get back to it.
A small survey, please: since Scawen has implemented engine damage and gearbox speeds in the incoming version, it seems to me a good opportunity to stop the memory reading, which is very constraining, and go back to full "InSim".
The impacts would be:
+ compatible with any new version of LFS (except a big update to come, if InSim changes are necessary on the tire-information side!?)
+ no more time wasted waiting for each version
- disappearance of "engine damage", "gearbox speed" and "tire thickness", I think
Well, engine damage and gearbox speed are no longer needed. For my part I would sacrifice tire thickness (we will guess it in F9 like in the past) to have a full InSim version working with test patches or whatever version.
Or, best of both worlds: go InSim and, if the LFS version is supported, show tire thickness, but do not crash on a wrong version! Might be a bit too much of a "usine à gaz" (an over-engineered contraption)...
Traceback (most recent call last):
  File "Detect&Monitor6.py", line 2909, in <module>
  File "Detect&Monitor6.py", line 2904, in main
  File "Detect&Monitor6.py", line 2145, in __init__
  File "Detect&Monitor6.py", line 529, in StartInsim
NameError: name 'exit' is not defined
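For what it's worth, a NameError on `exit` usually means the script is running in an environment, such as a frozen/compiled build, where the interactive `exit` helper (injected by Python's `site` module) was never defined; `sys.exit` is always available. A minimal sketch of the fix, with a hypothetical function name standing in for the app's StartInsim error path:

```python
import sys


def start_insim_or_quit(connected):
    # `exit` is an interactive convenience from the `site` module; frozen
    # builds may not load it, which raises NameError. sys.exit always exists.
    if not connected:
        sys.exit("Could not connect to InSim")
    return "connected"
```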
frontend build error
Hi, rnons. I followed the instructions to build the frontend. When I execute yarn build, I get these errors:
ERROR in ./src/HomePage.ts
Module not found: Error: Can't resolve 'Home' in '/home/ubuntu/ted2srt/frontend/src'
resolve 'Home' in '/home/ubuntu/ted2srt/frontend/src'
Parsed request is a module
using description file: /home/ubuntu/ted2srt/frontend/package.json (relative path: ./src)
Field 'browser' doesn't contain a valid alias configuration
resolve as module
/home/ubuntu/ted2srt/frontend/src/node_modules doesn't exist or is not a directory
/home/ubuntu/ted2srt/frontend/src/dce-output doesn't exist or is not a directory
/home/ubuntu/ted2srt/frontend/src/src doesn't exist or is not a directory
/home/ubuntu/ted2srt/frontend/dce-output doesn't exist or is not a directory
/home/ubuntu/ted2srt/node_modules doesn't exist or is not a directory
/home/ubuntu/ted2srt/dce-output doesn't exist or is not a directory
/home/ubuntu/ted2srt/src doesn't exist or is not a directory
/home/ubuntu/node_modules doesn't exist or is not a directory
/home/ubuntu/dce-output doesn't exist or is not a directory
/home/ubuntu/src doesn't exist or is not a directory
/home/node_modules doesn't exist or is not a directory
/home/dce-output doesn't exist or is not a directory
/home/src doesn't exist or is not a directory
/node_modules doesn't exist or is not a directory
/dce-output doesn't exist or is not a directory
/src doesn't exist or is not a directory
looking for modules in /home/ubuntu/ted2srt/frontend/node_modules
using description file: /home/ubuntu/ted2srt/frontend/package.json (relative path: ./node_modules)
Field 'browser' doesn't contain a valid alias configuration
looking for modules in /home/ubuntu/ted2srt/frontend/src
using description file: /home/ubuntu/ted2srt/frontend/package.json (relative path: ./src)
Field 'browser' doesn't contain a valid alias configuration
using description file: /home/ubuntu/ted2srt/frontend/package.json (relative path: ./node_modules/Home)
no extension
Field 'browser' doesn't contain a valid alias configuration
using description file: /home/ubuntu/ted2srt/frontend/package.json (relative path: ./src/Home)
no extension
Field 'browser' doesn't contain a valid alias configuration
/home/ubuntu/ted2srt/frontend/node_modules/Home doesn't exist
.js
Field 'browser' doesn't contain a valid alias configuration
/home/ubuntu/ted2srt/frontend/src/Home is not a file
.js
Field 'browser' doesn't contain a valid alias configuration
/home/ubuntu/ted2srt/frontend/node_modules/Home.js doesn't exist
.ts
Field 'browser' doesn't contain a valid alias configuration
/home/ubuntu/ted2srt/frontend/src/Home.js doesn't exist
.ts
Field 'browser' doesn't contain a valid alias configuration
/home/ubuntu/ted2srt/frontend/node_modules/Home.ts doesn't exist
/home/ubuntu/ted2srt/frontend/src/Home.ts doesn't exist
as directory
/home/ubuntu/ted2srt/frontend/node_modules/Home doesn't exist
as directory
existing directory
using path: /home/ubuntu/ted2srt/frontend/src/Home/index
using description file: /home/ubuntu/ted2srt/frontend/package.json (relative path: ./src/Home/index)
no extension
Field 'browser' doesn't contain a valid alias configuration
/home/ubuntu/ted2srt/frontend/src/Home/index doesn't exist
.js
Field 'browser' doesn't contain a valid alias configuration
/home/ubuntu/ted2srt/frontend/src/Home/index.js doesn't exist
.ts
Field 'browser' doesn't contain a valid alias configuration
/home/ubuntu/ted2srt/frontend/src/Home/index.ts doesn't exist
Can you give some tips on how to resolve it? Thank you ;)
Hi, from the error message "src/node_modules doesn't exist or is not a directory", it seems you forgot to run yarn?
And take a look at the updated README.
Let me know if there are other problems.
For the backend, you need to init tables in database, one way is to run
psql -U postgres -d ted2srt < backend/sql/latest.sql.
The other way is un-comment all commented lines in backend/app/main.hs before stack build, and after stack exec, comment those lines again.
Sorry, it's a bit of a mess here.
@rnons Thank you for your reply. I pulled the latest patch and followed the steps again, but still got stuck at yarn build. I didn't see node_modules created after yarn — could that be the problem? Or should I run yarn under the src folder 😕
You should run yarn inside frontend; that's what cd frontend does in the README:
cd frontend
yarn
pulp -w build
yarn start
It is a problem if there is no node_modules. Is there any output after yarn?
Oh, I misread your original error message: frontend/node_modules should exist after yarn, and you should run yarn build under frontend as well.
I just realized you need to install zephyr to run yarn build; just download it and copy the binary to anywhere in your $PATH.
@rnons the server works fine now ✌️ But the homepage shows empty content. I guess the entries come from the feedburner feed fetched by the server itself. If so, can I switch to another RSS source, like the official one, because of the source's accessibility? Their structure looks the same.
Cool. You can search for something to fill some data into the database, or run stack exec fetch to fetch from feedburner.
I think the feedburner link is also official; I found it on the official website a long time ago. If stack exec fetch works with the new feed link, I'm fine with changing it.
So it means a search action can trigger fetching from the remote server 🤔 Can I track the process or locate the corresponding source code? Thank you for your great work 🤝
search action can trigger fetching from the remote server
Sorry, I forgot this is no longer true, see this commit.
Previously, TED provided an open API, but it was shut down. Aside from stack exec fetch, you can change any talk page URL, e.g. https://www.ted.com/talks/juan_enriquez_what_will_humans_look_like_in_100_years, to http://localhost:8080/talks/juan_enriquez_what_will_humans_look_like_in_100_years to fetch that talk.
I understand that. Also, if I modify the fetching address in the source code, how do I apply the change to the server? Simply changing app/fetch.hs doesn't work.
You need to run stack build again to generate a new binary of fetch. Then
stack exec fetch
Recently I worked for the first time with a WordPress plugin called BuddyPress, which allows you to create a social-network-like website.
It is a great, very flexible tool, and in version 2.2 they added the possibility to assign member types, which is very useful for giving different types of users different access.
One thing I couldn't find out how to do is show different registration forms for different types of users. I had to show different sets of fields for teachers and institutions. I managed to do that, and I want to share with you the workaround I came up with.
Setting Up The Field Groups in BuddyPress
First of all, I created a group for each type of user with the fields needed. After you install BuddyPress, the option Profile Fields is added under Users.
Each of those field groups has an ID that you can see if you hover your mouse over the Edit Group button.
I'll use the field group IDs later.
Modifying The Default Registration Form
In the home screen I added links to the registration form for each type of user. The links were the regular WordPress registration URL plus the parameter role with the corresponding value:
<a href="exampleurl.com/register/?role=institution">Register as Institution</a> <a href="exampleurl.com/register/?role=teacher">Register as Teacher</a>
In the registration file I added the following code to validate the value of the role parameter and save it in a variable:
/* Save the value of the parameter role
 * if it is either teacher or institution,
 * or set the variable value to FALSE otherwise */
$role = (isset($_GET['role']) && ($_GET['role'] == 'teacher' || $_GET['role'] == 'institution')) ? $_GET['role'] : FALSE;

/* This is the Field Group ID corresponding
 * to the user role: if it is an institution the group ID is 2,
 * and if it's a teacher it is group 3 */
$profileField = $role ? (($role == 'institution') ? 2 : 3) : FALSE;
Then, after line 133 (version 2.3), where it says
<?php if ( bp_is_active( 'xprofile' ) ) : ?>, I added the following piece of code:
// check if the variable that has the ID of the group is not FALSE
if ( $profileField ) :
    // traditional BP loop through the fields in the group, using the ID saved in the variable
    if ( bp_has_profile( array( 'profile_group_id' => $profileField, 'fetch_field_data' => false ) ) ) :
        while ( bp_profile_groups() ) : bp_the_profile_group(); ?>

            <?php while ( bp_profile_fields() ) : bp_the_profile_field(); ?>

                <div<?php bp_field_css_class( 'form-group' ); ?>>
                    <?php
                    $field_type = bp_xprofile_create_field_type( bp_get_the_profile_field_type() );
                    // I used Bootstrap, so I needed the class "form-control" on the fields
                    $field_type->edit_field_html( array( 'class' => 'form-control' ) );

                    /**
                     * Fires before the display of the visibility options for xprofile fields.
                     *
                     * @since BuddyPress (1.7.0)
                     */
                    do_action( 'bp_custom_profile_edit_fields_pre_visibility' );

                    if ( bp_current_user_can( 'bp_xprofile_change_field_visibility' ) ) : ?>
                        <p class="field-visibility-settings-toggle" id="field-visibility-settings-toggle-<?php bp_the_profile_field_id() ?>">
                            <?php printf( __( 'This field can be seen by: <span class="current-visibility-level">%s</span>', 'buddypress' ), bp_get_the_profile_field_visibility_level_label() ) ?>
                            <a href="#" class="visibility-toggle-link"><?php _ex( 'Change', 'Change profile field visibility level', 'buddypress' ); ?></a>
                        </p>

                        <div class="field-visibility-settings" id="field-visibility-settings-<?php bp_the_profile_field_id() ?>">
                            <fieldset>
                                <legend><?php _e( 'Who can see this field?', 'buddypress' ) ?></legend>
                                <?php bp_profile_visibility_radio_buttons() ?>
                            </fieldset>
                            <a class="field-visibility-settings-close" href="#"><?php _e( 'Close', 'buddypress' ) ?></a>
                        </div>
                    <?php else : /*?>
                        <p class="field-visibility-settings-notoggle" id="field-visibility-settings-toggle-<?php bp_the_profile_field_id() ?>">
                            <?php printf( __( 'This field can be seen by: <span class="current-visibility-level">%s</span>', 'buddypress' ), bp_get_the_profile_field_visibility_level_label() ) ?>
                        </p>
                    <?php */ endif ?>

                    <?php
                    /**
                     * Fires after the display of the visibility options for xprofile fields.
                     *
                     * @since BuddyPress (1.1.0)
                     */
                    do_action( 'bp_custom_profile_edit_fields' );
                    ?>

                    <p class="description"><?php bp_the_profile_field_description(); ?></p>
                </div>

            <?php endwhile; ?>

            <input type="hidden" name="signup_profile_field_ids" id="signup_profile_field_ids" value="<?php bp_the_profile_field_ids(); ?>" />

        <?php endwhile;
    endif;
endif ?>
What I did above was loop through the fields in the field group that I created before. If it is a teacher the corresponding field group ID is 3, and for institutions it is 2; that number is saved in the variable $profileField.
I hope this helps you guys!
In the course of evangelizing our new software applications which support idea management and innovation (there's the plug!) I've had the opportunity to meet a lot of business people who are very innovative and interested in generating new ideas to further their business objectives. These folks inevitably are some of the brightest, most motivated people I meet - convinced that their ideas and how they get implemented will dramatically change their business. Since I share a real interest in idea generation and management, we usually have great conversations. Then I ask the fatal question.
When can your team start testing our software? You can almost see the fear in their eyes.
Now - many of these folks are very interested in our systems and eventually start beta testing them. But you can see that they are conditioned to admit that it will be difficult to get the IT team on board. They think that the IT team will say "no" to any new technology. They can see the hours of discussions that will be required, the testing and validation of any new system, the concerns raised by IT.
This is not a diatribe against the IT organization - they've got their job to do, and it is often a difficult one. Most IT organizations are overworked and understaffed, and their first bias in many cases is to say "no" to small projects that don't fit into the current corporate policy. But what it made me think about is - what are we conditioned to think in our everyday jobs?
When someone brings up a new idea or proposes a new product, what is your first thought? Do you immediately think about the challenges of overcoming the naysayers in the organization, or do you write off ideas as impossible to implement because of all the bureaucracy and red tape? How about when someone approaches your team to participate in a project? What are they conditioned to think?
It seems to me that firms that are productive break down barriers and find ways to get things done rather than build up barriers (real or imaginary). For example, if you want to bring in a new software application, IT needs to review it and test it, but instead of making it difficult, IT should define the time they'll need to review the software and any other information they'll need to make a decision. They should commit to providing you an answer in X days. And consistently meet those numbers. So now you are conditioned to think that IT is on your side and working for you, rather than just interested in saying no.
What expectations do others have when they work with you? Do they anticipate that your team will be a partner and work in a timely fashion with them, or have you conditioned them to expect the worst when it comes to working with your team?
When you boil it all down, businesses are just a collection of people trying to work within a set of processes to accomplish some result. If the processes are poor, we can re-engineer them. When the people in the roles aren't right, we can retrain or replace them. But when the expectations and conditioning we've set get in the way of productive work, it takes time to reset expectations and work well together.
A transistor, conceived of in digital terms, has two states: on and off, which can represent the 1s and 0s of binary arithmetic.
But in analog terms, the transistor has an infinite number of states, which could, in principle, represent an infinite range of mathematical values. Digital computing, for all its advantages, leaves most of transistors’ informational capacity on the table.
In recent years, analog computers have proven to be much more efficient at simulating biological systems than digital computers. But existing analog computers have to be programmed by hand, a complex process that would be prohibitively time consuming for large-scale simulations.
Last week, at the Association for Computing Machinery’s conference on Programming Language Design and Implementation, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory and Dartmouth College presented a new compiler for analog computers, a program that translates between high-level instructions written in a language intelligible to humans and the low-level specifications of circuit connections in an analog computer.
The work could help pave the way to highly efficient, highly accurate analog simulations of entire organs, if not organisms.
“At some point, I just got tired of the old digital hardware platform,” says Martin Rinard, an MIT professor of electrical engineering and computer science and a co-author on the paper describing the new compiler. “The digital hardware platform has been very heavily optimized for the current set of applications. I want to go off and fundamentally change things and see where I can get.”
The first author on the paper is Sara Achour, a graduate student in electrical engineering and computer science, advised by Rinard. They’re joined by Rahul Sarpeshkar, the Thomas E. Kurtz Professor and professor of engineering, physics, and microbiology and immunology at Dartmouth.
Sarpeshkar, a former MIT professor and currently a visiting scientist at the Research Lab of Electronics, has long studied the use of analog circuits to simulate cells. “I happened to run into Rahul at a party, and he told me about this platform he had,” Rinard says. “And it seemed like a very exciting new platform.”
The researchers’ compiler takes as input differential equations, which biologists frequently use to describe cell dynamics, and translates them into voltages and current flows across an analog chip. In principle, it works with any programmable analog device for which it has a detailed technical specification, but in their experiments, the researchers used the specifications for an analog chip that Sarpeshkar developed.
The researchers tested their compiler on five sets of differential equations commonly used in biological research. On the simplest test set, with only four equations, the compiler took less than a minute to produce an analog implementation; with the most complicated, with 75 differential equations, it took close to an hour. But designing an implementation by hand would have taken much longer.
Differential equations are equations that include both mathematical functions and their derivatives, which describe the rate at which the function’s output values change. As such, differential equations are ideally suited to describing chemical reactions in the cell, since the rate at which two chemicals react is a function of their concentrations.
According to the laws of physics, the voltages and currents across an analog circuit need to balance out. If those voltages and currents encode variables in a set of differential equations, then varying one will automatically vary the others. If the equations describe changes in chemical concentration over time, then varying the inputs over time yields a complete solution to the full set of equations.
A digital circuit, by contrast, needs to slice time into thousands or even millions of tiny intervals and solve the full set of equations for each of them. And each transistor in the circuit can represent only one of two values, instead of a continuous range of values. “With a few transistors, cytomorphic analog circuits can solve complicated differential equations — including the effects of noise — that would take millions of digital transistors and millions of digital clock cycles,” Sarpeshkar says.
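To make concrete what "slicing time into tiny intervals" means, here is a minimal sketch (ours, not the paper's code) of forward-Euler integration of a simple decay equation dA/dt = -kA, the kind of chemical-concentration dynamic discussed above. An analog circuit solves this continuously; a digital solver repeats this small update for every time slice:

```python
def euler_decay(a0, k, dt, steps):
    """Forward-Euler solution of dA/dt = -k*A after `steps` slices of size dt."""
    a = a0
    for _ in range(steps):
        a += dt * (-k * a)  # one tiny time slice of the digital solver
    return a
```

With dt = 0.001 and 1000 steps the result closely tracks the exact solution A(t) = A(0)·e^(-kt); making dt larger trades accuracy for fewer iterations, a trade-off the analog approach sidesteps entirely.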
From the specification of a circuit, the researchers’ compiler determines what basic computational operations are available to it; Sarpeshkar’s chip includes circuits that are already optimized for types of differential equations that recur frequently in models of cells.
The compiler includes an algebraic engine that can redescribe an input equation in terms that make it easier to compile. To take a simple example, the expressions a(x + y) and ax + ay are algebraically equivalent, but one might prove much more straightforward than the other to represent within a particular circuit layout.
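As a toy illustration of that kind of rewrite (this is ours, not the compiler's actual algebraic engine), one can check numerically that two algebraic forms of the same expression agree over sample inputs before picking whichever maps more cheaply onto the circuit:

```python
def distributed(a, x, y):
    # the form a*x + a*y
    return a * x + a * y


def factored(a, x, y):
    # the algebraically equivalent form a*(x + y)
    return a * (x + y)


def forms_agree(f, g, samples, tol=1e-9):
    """Check two candidate forms agree numerically on all sample inputs."""
    return all(abs(f(*s) - g(*s)) <= tol for s in samples)
```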
Once it has a promising algebraic redescription of a set of differential equations, the compiler begins mapping elements of the equations onto circuit elements. Sometimes, when it’s trying to construct circuits that solve multiple equations simultaneously, it will run into snags and will need to backtrack and try alternative mappings.
But in the researchers’ experiments, the compiler took between 14 and 40 seconds per equation to produce workable mappings, which suggests that it’s not getting hung up on fruitless hypotheses.
“‘Digital’ is almost synonymous with ‘computer’ today, but that’s actually kind of a shame,” says Adrian Sampson, an assistant professor of computer science at Cornell University. “Everybody knows that analog hardware can be incredibly efficient — if we could use it productively. This paper is the most promising compiler work I can remember that could let mere mortals program analog computers. The clever thing they did is to target a kind of problem where analog computing is already known to be a good match — biological simulations — and build a compiler specialized for that case. I hope Sara, Rahul, and Martin keep pushing in this direction, to bring the untapped efficiency potential of analog components to more kinds of computing.”
The main research focus in the Neural Computation Group lies on the neural basis of spatial and episodic memory, navigation, and cognitive maps more broadly. We mainly explore these topics through mechanistic computational models of the circuits underlying these cognitive faculties. When possible the emphasis lies on biologically plausible implementations.
Systems-Level Accounts of Memory, Navigation and Cognitive Maps
A core aim of the group is to help improve our understanding of the system-wide integration of different neural representations in the hippocampal formation, such as grid cells, place cells, head-direction cells, boundary- and object-vector cells. Importantly these cells express allocentric neural codes, which must interface with egocentric neural representations elsewhere in the brain (ultimately derived from sensory inputs) in order to support the encoding and retrieval of memories. See, for instance, the BB-model of spatial cognition and its predecessors for a current proposal. In this context we are also interested in how the neural machinery of spatial memory may be employed to support episodic memory and imagination, but also visual memory (memory-guided eye movements) and cognitive maps in abstract domains. Finally, the interaction of the various neural representations within the hippocampal formation as well as with extra-hippocampal areas during navigation is also a key focus of research in the group.
Head-Direction Coding and Retrosplenial Cortex
In a related line of research, the group aims to gain a full understanding of the neural architecture underlying the head-direction system across species and the life-span, as well as its contribution to navigation and other forms of cognition. Head-direction cells are found throughout an extended network of brain areas, including retrosplenial cortex. This area is also a key structure for long-term memory and an important component in systems-level models (see above). Thus this direction of research also intersects with the other research areas in the group.
The third line of research centers on the function of the subiculum, for instance with regard to the coding of environmental boundaries. This area is often neglected in computational models of the hippocampal formation and has recently seen the publication of many important results that challenge the old textbook notion of the subiculum as a mere output relay of the hippocampus.
Additional Research Interests
The group’s main focus lies on long-term memory and the hippocampal system. However, as PI I hold additional research interests, such as prefrontal executive control (specifically of memory recall), reinforcement learning, central pattern generators (vertebrate and invertebrate), translating systems-level accounts of spatial memory to cognitive robotics, consciousness, and large scale brain models (à la Spaun). There is space in the group to work on these topics (or other interesting computational projects) as a student or to collaborate with us on these topics.
- 2 open PhD positions are currently advertised - see HERE
At MPI CBS
- Viktor Studenyak
- Alexey Zabolotnii
External PhD Students
- Alessandro Pazzaglia (EPFL)
- Matthieu Bernard (DZNE Magdeburg)
- Auke Ijspeert, EPFL, Switzerland
- Lukas Kunz, University of Bonn, Germany
- Colin Lever, Durham University, United Kingdom
- Cheng Wang, Shenzhen Institute of Advanced Technology, China
Hi, I received the AOL e-mail with the link for the updated driver installation for the connection upgrade (the free upgrade) a while ago...
I have no problems with it (in the sense of being able to install it and access the internet afterwards), apart from the fact that I don't like using the AOL software, but I have no option...
I normally connect to the net through the modem (from the "Connect to..." option on the Start Menu), but now I can't... Instead, AOL automatically connects you to a "virtual network" using this modem, so the only way to connect to the internet is by using the AOL software...
I have contacted AOL, but they don't understand/don't know how to correct this...
Can anyone help?
As usual, thanks in advance
I have always used AOL so am "at sea" when asked about plain IE, so I am not in a position to be much help.
Even so, you have a contract with a reliable ISP and yet want to go the long way round! ....... Is there a specific difficulty you have with any website, or is it just not enough experience with AOL to relax and let it do the work for you? It does it very well. Like the earlier Microsoft route, it is a matter of knowing all the facilities and how to reach them.
(Personally I tend to agree with you about the less easy questions placed before their helpline. If it fits some volume of FAQs you will get an answer, but the questions have to be asked in exactly the right way, or you have their "agent" stumped.)
If you are using AOL9, have you got the AOL icon in the system tray? If so then right click and then click "Start AOL Dialler"
Thanks for the response...
I only use the AOL software if I am expecting an e-mail....
to feb as well
I have many users on this system and don't want them to have my password, so I just "store" it in the "Connect to..." option, so they can access the web...
I have uninstalled the update for now, as it also gives error messages when the system goes into stand-by... Then I can't access the internet (and if AOL was open when it went into stand-by, it crashes)... I have to log off and on again to get back onto the net (and sometimes restart the system)...
Is this an old update?
which modem are you using?
I know a few people who have had problems with a modem update in the past and simply gone back to the old drivers.
You can have a number of different screen names, so why not make a new one for yourself and keep the password to yourself.
Or if you want to keep your present screen name, make the others use a different one. You can then change your password to a new private one and can do that at any time.
No one can access your email without going on line or switching screennames and using your password.
If I did that, then anyone who knows the screen name and password could access the internet ANYWHERE in the world... This would mean that if one of the users gives the details to one of their friends, they would be able to access the net for free.
1) It is one of the updates from the "free upgrades" because of them ditching the 256k connection
2)I am using the BT Voyager 100 USB ADSL Modem
NO, whether you keep your existing screen name and force others to change to another, or choose a new name for yourself, it is easy to change your password.
Its privacy is certain provided you don't tick the box to "store your password". The only snag is that each time you go online with that screen name you will have to type your password first.
Anyone looking over your shoulder will only see the ****** as you type.
I mean the "shared" account, not mine... Sorry for the confusion
Sorry... Forgot to tick this thread and give an update....
I installed the Driver Updates from the pop-up they started to show when you sign in, and all seems well...
Sorry again... lol
|
OPCFW_CODE
|
[mailto:email@example.com]On Behalf Of Brion Vibber
Sent: Saturday, April 30, 2005 11:08 AM
To: MediaWiki announcements and site admin list
Subject: Re: [Mediawiki-l] AuthPlugin always fails with fatal error
Carlton B wrote:
Well, in this case I guess we've just got something strange going on, because
I didn't override AuthPlugin::initUser and I didn't touch
LoginForm::initUser or tamper with SpecialUserlogin.php in any way. The
only way I could get AuthPlugin to do an autocreate without throwing the
fatal error was by overriding AuthPlugin::initUser to return a reference
to the user it was passed.
Can you confirm that you made no other changes to the code? Have you
followed the execution path to make sure there's no other place that's
using the return value? Have you confirmed this by checking what value is
actually returned?
To confirm this, I unpacked the 1.4.2 tar file in a completely separate
subdirectory and added back in only my AuthPlugin extension,
LocalSettings.php file, and a graphics file. The pristine version
continued to yield the error. A recursive diff showed that the
non-erroring version's copy of SpecialUserlogin.php contained a seemingly
trivial debugging edit. After doing some more focused testing on a good
night's sleep, in versions 1.4.2 and 1.4.3, I can say that I was incorrect
and you were correct in saying the return value from AuthPlugin::initUser's
subclass was related to this problem. Now, before you toss your hat in the
air and say "Aha!", let me say that what I did find is troubling.
This is the change: Below line 254 of SpecialUserlogin.php, I had added the
following line which became line 255.
$rather_unique_varname_bznksjx = $u; // fairly certain this var name won't
conflict with anything else.
There is no other difference in the file, and of course I never used the
rather unique variable anywhere else. I can reproduce the problem at will by
adding and removing this single line. I can't give a good technical
explanation of why I did it or why it makes AuthPlugin work for me. All I
know is that I am highly, highly suspicious of PHP 4's scoping and
references. When I noticed the variable $u used both explicitly and as a
reference all throughout SpecialUserlogin.php, I suspected PHP might have
mishandled transitioning between scopes. Thus I thought it would be healthy
or at least harmless to explicitly reference the $u variable at least once
in LoginForm::processLogin. According to what I know of PHP internals (not
much at this point), this change should have had no effect. But it did, and
it worries me not to know why.
Maybe it's my version of PHP. I am on PHP 4.3.11.
Are you using any PHP accelerator plugins? If so, which and what versions?
I am not sure how to determine this, but my host's feature page says we are
using the Zend optimizer, and phpinfo() shows this information:
This program makes use of the Zend Scripting Language Engine:
Zend Engine v1.3.0, Copyright (c) 1998-2004 Zend Technologies with Turck
MMCache v2.4.6, Copyright (c) 2002-2003 TurckSoft, St. Petersburg, by Dmitry
Stogov with Zend Extension Manager v1.0.6, Copyright (c) 2003-2004, by Zend
Technologies with Zend Optimizer v2.5.7, Copyright (c) 1998-2004, by Zend Technologies.
> Is this interface widely referenced elsewhere, or can I safely rename it
> on line 254 of SpecialUserLogin.php and then change it to the same in my
> AuthUser subclass?
Have you tested this? Does it in fact help?
I tested this and it had no effect. The difference was explicitly
referencing the $u variable right after line 254 of SpecialUserlogin.php.
This would be a hard nut to crack except maybe for somebody who really knows
PHP internals. I've already put too many hours into this problem, so I'm
not putting any more time into it, and I wouldn't expect anyone else to
either. Maybe we'll just pick up from this point if somebody else runs
into the same thing.
|
OPCFW_CODE
|
Division Rep.: Rodney Bowen 785-213-6732
|Regular Season Standings|
|PCT = Winning Percentage|
Away team listed first
Home team listed second
Scoresheets are Processing!
Regular Season Schedule & Results
- Show Week
- Rain Out - Resch. 6/21 8:30pm
- Rain Out - Resch. 6/28 8:30pm
- Game Re-sch. 7/12/21
- Conflict Pirates! Game Re-Sch. 7/5/21
- Rain Out. Game Re-Sch. 7/19/21
- Make-Up Game from 6/1
- Rain Out. Game Re-Sch. 7/20/21
- Rain Out. Game Re-Sch. 7/13/21
- Make up game from 6/25/21
- Make up game from 6/17/21
- Re-Sch. Game 7/22/21
- Rain Out. Game Re-Sch. 7/23/21
- Make up game from 6/24/21
- Make up game from 6/29/21
- Make up Game from 7/13/21
- Make up Game from 7/15/21
Delays Between Doubleheaders
How often does a team have two games on one day, and how many of those doubleheaders are not back-to-back.
Time Slot Distribution
How often does each team play in a particular time slot.
The time slots are represented by letters in the legend box below.
|Time Slot Legend|
|A -||8:30 PM||Monday||Dornwood Field #D|
|B -||6:15 PM||Tuesday||Dornwood Field #D|
|C -||8:30 PM||Tuesday||Dornwood Field #D|
|D -||6:15 PM||Thursday||Dornwood Field #D|
|E -||8:30 PM||Thursday||Dornwood Field #D|
|F -||6:15 PM||Friday||Dornwood Field #D|
Game Time Distribution
How often does a team play at a particular time.
|Team||6:15 PM||8:30 PM||Total |
Day of Week Distribution
How often does a team play on a particular day of the week.
Away-Home Distribution
How often is a team Away or Home.
Opponent Distribution
How often does a team play against any other particular team.
Games Per Week Distribution
How many games does a team have during a particular week.
The blue background indicates a team has two games on the same day.
Total number of double headers = 6
Opponents Played Per Week Distribution
Displays which opponents a team played during a particular week.
A blue background indicates a team played the same opponent in consecutive weeks or twice in the same week.
|A||Pirates 12U||B||C||B, C||B, B||C, B||B, C, C||B, C, B||C, B, C||B, C|
|B||Rhinos 12U||C, A||C||A||C, A, A||C, C, A||A||A, A||C, A||C, A, C|
|C||Warriors 12U||B||B, A||A||B||B, A, B||A, A||A||B, A, A||B, A, B|
|
OPCFW_CODE
|
Circular Reference Error Excel 2013
I have an excel sheet and I need to add an if statement to one of my cells. Let's call cell D13 as X and cell E13 as Y. Basically what I need to do is ensure that if X>=1 then Y = Y + (0.5*X). However, I understand that I cannot just say "E13 + 0.5*D13" since the cell is referencing itself and E13 is the value of the cell I need to change. I get a "circular reference error" when I do so. Is there a way I can use this same formula to ensure that my cell E13 is the value of itself plus D13 * 0.5? I do not want to put another cell and use this formula as it is redundant.
You can force Excel to iterate, but you seem to have a proper circular reference that does not converge onto a number!
Well, it's a circular reference. What did you expect? How can you fix it? You're saying "set the value to the value plus half of X". That's never going to work.
Advice - before you attempt to code something (whether in a programming language or in Excel) you need to solve it conceptually. In this case, that means you need to solve it mathematically. How would you set up an example formula to solve what you are attempting?
"I do not want to put another cell and use this formula as it is redundant." Well it's only 'redundant' if its possible another way (which it isn't). But besides that, don't worry about using an extra cell or two in Excel; it is not a format which loves brevity.
This sounds conceptually impossible, as the Y cell can't BOTH store a number and a formula that references that number. The very notion of "containing own value plus..." seems impossible in itself. I don't see any way around it but adding a third cell.
@bumpfox Don't be afraid to expand how many cells you use in Excel; it should not be treated like any other programming language, as brevity is often not possible, and chasing it actually increases complexity immensely. Learn to separate the calculation of what you are doing from the display of what you are doing, and allow the calculation to be as long-winded as it needs to be.
You need to re-think your processing logic. With formulas you are running into circular reference errors, as the comments above are demonstrating.
If you want to avoid the circular reference, you may need to involve VBA to do the calculation and write the result of the calculation into the cell. This can be done with a worksheet change event. It can monitor the cell, trigger when the cell changed, use the cell input value, perform a calculation and write the result into the same cell.
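A minimal sketch of that worksheet change event in VBA, assuming X lives in D13 and Y is typed into E13 (the cell addresses and the 0.5 factor are taken from the question; adjust them for your sheet):

```vba
' Goes in the worksheet's code module. When E13 changes and D13 >= 1,
' replace E13's value with Y + 0.5 * X, writing the result back in place.
Private Sub Worksheet_Change(ByVal Target As Range)
    If Not Intersect(Target, Me.Range("E13")) Is Nothing Then
        Dim x As Double, y As Double
        x = Me.Range("D13").Value
        y = Target.Value
        If x >= 1 Then
            Application.EnableEvents = False   ' prevent this handler re-triggering itself
            Target.Value = y + 0.5 * x
            Application.EnableEvents = True
        End If
    End If
End Sub
```

Disabling events around the write-back is what keeps the handler from firing on its own update and looping forever.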
In this case, I suggest you use 3 columns instead of two. First, you have D13 as X, and E13 as Y. But then you need to add F13 as Y2.
D13 pulls from wherever. E13 pulls from wherever. The formula in F13 is to check whether to pull the original value of Y, or a new one, like so:
=IF(D13>=1, E13+0.5*D13, E13)
This says: if X >= 1, make the new Y value in F13 equal to Y + 0.5 * X. Otherwise, make it equal to the original value of Y.
Then wherever you were going to reference E13 (the old "Y" value), reference F13 instead (which will pull whatever "Y" value is appropriate).
|
STACK_EXCHANGE
|
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package World;
import java.util.LinkedList;
import java.util.Random;
/**
* An organism occupies a cell in the world, senses its eight neighbouring
* cells through a radar, and moves in steps of 10 units.
*
* @author Saurabh
*/
public abstract class Organism extends WorldObject
{
WorldObject[] radar= new WorldObject[8];
int energy, goal= 1, lastdir= 8, eating= 0;
long speed;
public String name;
WorldObject Northvalue, Eastvalue, Southvalue, Westvalue, NEvalue, SWvalue, SEvalue, NWvalue, target;
public void setRadar(int xpos, int ypos)
{
// Note: the xpos/ypos parameters are currently unused; the radar is built
// around the organism's own x and y fields inherited from WorldObject.
Northvalue= RatWorld.getObject(x, y-10);
Eastvalue= RatWorld.getObject(x+10, y);
Southvalue= RatWorld.getObject(x, y+10);
Westvalue= RatWorld.getObject(x-10, y);
NEvalue= RatWorld.getObject(x+10, y-10);
SEvalue= RatWorld.getObject(x+10, y+10);
SWvalue= RatWorld.getObject(x-10, y+10);
NWvalue= RatWorld.getObject(x-10, y-10);
}
public WorldObject [] readRadar()
{
radar[0]= Northvalue;
radar[1]= NEvalue;
radar[2]= Eastvalue;
radar[3]= SEvalue;
radar[4]= Southvalue;
radar[5]= SWvalue;
radar[6]= Westvalue;
radar[7]= NWvalue;
return radar;
}
// Directions are encoded 0-7 clockwise from North (0=N, 1=NE, 2=E, ... 7=NW);
// any other value (e.g. 8) leaves the position unchanged.
public void setPosition(int direction)
{
switch(direction)
{
case 0: y-=10; break;
case 2: x+=10; break;
case 4: y+=10; break;
case 6: x-=10; break;
case 1: x+=10;
y-=10; break;
case 3: x+=10;
y+=10; break;
case 5: x-=10;
y+=10; break;
case 7: x-=10;
y-=10; break;
}
World.MainClass.window.repaint();
}
// Randomly pick a free neighbouring cell, avoiding the direction we came from.
public int chooseDirection()
{
int i;
Random r= new Random();
LinkedList <Integer> choices= new LinkedList<>();
for(i=0; i<8; i++)
if((lastdir!=i)&&(radar[i]==null))
choices.add(i);
try
{
return choices.get(r.nextInt(choices.size()));
}
catch(IllegalArgumentException e)
{
// r.nextInt(0) throws when no neighbouring cell is free: 8 means "rest"
return 8;
}
}
public void rest()
{
// System.out.println("Will rest");
setPosition(8);
energy--;
if(energy==0)
energy=150;
}
abstract public void moveToFood()throws InterruptedException;
abstract public void generateGoal();
abstract public void achieve();
abstract public int containsFood();
abstract public void eat();
}
|
STACK_EDU
|
How to get the last "consume" of each product type for each unit produced?
I have an SQL table that's essentially just products being consumed in the fabrication process. I would like to (use what I think is the LAG function) return the last "consume" of each product type for each unit produced. There is also a timestamp attached to every Action (build or consume) if that helped.
I can start with a simple select statement to get something like this in Oracle:
Action
ProductName
ProductSerial
Build
Unit 9
1009
Consume
Product 1
E657
Build
Unit 8
1008
Build
Unit 7
1007
Build
Unit 6
1006
Build
Unit 5
1005
Consume
Product 3
D001
Build
Unit 4
1004
Build
Unit 3
1003
Build
Unit 2
1002
Build
Unit 1
1001
Consume
Product 3
C789
Consume
Product 2
B456
Consume
Product 1
A123
I have search forum after forum on other sites for a resolution. I want to be able to get this:
Action
ProductName
ProductSerial
Product1
Product2
Product3
Build
Unit 9
1009
Product 1 - E657
Product 2 - B456
Product 3 - D001
Consume
Product 1
E657
Build
Unit 8
1008
Product 1 - E123
Product 2 - B456
Product 3 - D001
Build
Unit 7
1007
Product 1 - E123
Product 2 - B456
Product 3 - D001
Build
Unit 6
1006
Product 1 - E123
Product 2 - B456
Product 3 - D001
Build
Unit 5
1005
Product 1 - E123
Product 2 - B456
Product 3 - D001
Consume
Product 3
D001
Build
Unit 4
1004
Product 1 - E123
Product 2 - B456
Product 3 - C789
Build
Unit 3
1003
Product 1 - E123
Product 2 - B456
Product 3 - C789
Build
Unit 2
1002
Product 1 - E123
Product 2 - B456
Product 3 - C789
Build
Unit 1
1001
Product 1 - E123
Product 2 - B456
Product 3 - C789
Consume
Product 3
C789
Consume
Product 2
B456
Consume
Product 1
A123
So every time there is a new "Build" action, I would like to return the last "Consume" for each of the products in the bill of materials. I've tried iterations of CASE statements but am unable to link the build to all of its components. Unfortunately, I can only reproduce the top table with a simple select statement:
Select
Action
,Product_Name as ProductName
,Product_Serial as ProductSerial
,Concat(Concat(Product_Name, ' - '), Product_Serial) as Barcode -- (not in Table 1 above)
From TraceDB
Where Action in ('Build', 'Consume')
And Time_Stamp > sysdate - 90
I know there has to be some sort of link between the Build timestamp and the Consume timestamp, but I am drawing a blank and am 100% stumped.
I have left the tag dataiku, but I am not sure what it has to do with the question. Please reconsider if it's really relevant.
any sample data input for the expected output above? afaik LAG catches the preceding row for a given criteria (OVER).
There are too many questions left open, and the result is not consistent with the original data set; you must give more info about the rules and the number of "products" we may encounter per build line (and don't forget SQL doesn't support dynamic pivot in standard). Also you may need to take a look at the FIRST_VALUE (and LAST_VALUE) analytic functions, because in Oracle they support quite a number of range parameters that may fit your needs. And if your requirements are too complicated for analytic functions you can still use the MODEL clause.
Try LAST_VALUE(...) around a DECODE or CASE that conditionally returns what you want when the action is "consume". Then add in the IGNORE NULLS option and the PARTITION BY, ORDER BY and ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW clauses.
In general, you'd benefit from finding Oracle's SQL reference and reading through it to learn about all the functions available. Particularly the analytic functions: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Analytic-Functions.html#GUID-527832F7-63C0-4445-8C16-307FA5084056
Your sample data does not have a column on which the result set can be ordered; however, your query includes Where [...] Time_Stamp > sysdate - 90 so I am going to assume that you have a Time_Stamp column containing either a DATE or a TIMESTAMP data-type and that the rows in your table are in descending date-time order.
Given those assumptions, you can use the LAST_VALUE analytic function with CASE expressions:
SELECT action,
productname,
productserial,
CASE action
WHEN 'Build'
THEN LAST_VALUE(
CASE ProductName
WHEN 'Product 1'
THEN ProductName || ' - ' || ProductSerial
END
) IGNORE NULLS OVER (ORDER BY time_stamp)
END AS product1,
CASE action
WHEN 'Build'
THEN LAST_VALUE(
CASE ProductName
WHEN 'Product 2'
THEN ProductName || ' - ' || ProductSerial
END
) IGNORE NULLS OVER (ORDER BY time_stamp)
END AS product2,
CASE action
WHEN 'Build'
THEN LAST_VALUE(
CASE ProductName
WHEN 'Product 3'
THEN ProductName || ' - ' || ProductSerial
END
) IGNORE NULLS OVER (ORDER BY time_stamp)
END AS product3
FROM tracedb
WHERE Action IN ('Build', 'Consume')
AND Time_Stamp > sysdate - 90
ORDER BY time_stamp DESC;
Which, for the sample data:
CREATE TABLE tracedb (Action, ProductName, ProductSerial, Time_Stamp) AS
SELECT 'Build', 'Unit 9', '1009', SYSDATE + 20 FROM DUAL UNION ALL
SELECT 'Consume', 'Product 1', 'E657', SYSDATE + 19 FROM DUAL UNION ALL
SELECT 'Build', 'Unit 8', '1008', SYSDATE + 18 FROM DUAL UNION ALL
SELECT 'Build', 'Unit 7', '1007', SYSDATE + 17 FROM DUAL UNION ALL
SELECT 'Build', 'Unit 6', '1006', SYSDATE + 16 FROM DUAL UNION ALL
SELECT 'Build', 'Unit 5', '1005', SYSDATE + 15 FROM DUAL UNION ALL
SELECT 'Consume', 'Product 3', 'D001', SYSDATE + 14 FROM DUAL UNION ALL
SELECT 'Build', 'Unit 4', '1004', SYSDATE + 13 FROM DUAL UNION ALL
SELECT 'Build', 'Unit 3', '1003', SYSDATE + 12 FROM DUAL UNION ALL
SELECT 'Build', 'Unit 2', '1002', SYSDATE + 11 FROM DUAL UNION ALL
SELECT 'Build', 'Unit 1', '1001', SYSDATE + 10 FROM DUAL UNION ALL
SELECT 'Consume', 'Product 3', 'C789', SYSDATE + 9 FROM DUAL UNION ALL
SELECT 'Consume', 'Product 2', 'B456', SYSDATE + 8 FROM DUAL UNION ALL
SELECT 'Consume', 'Product 1', 'A123', SYSDATE + 7 FROM DUAL;
Outputs:
ACTION
PRODUCTNAME
PRODUCTSERIAL
PRODUCT1
PRODUCT2
PRODUCT3
Build
Unit 9
1009
Product 1 - E657
Product 2 - B456
Product 3 - D001
Consume
Product 1
E657
null
null
null
Build
Unit 8
1008
Product 1 - A123
Product 2 - B456
Product 3 - D001
Build
Unit 7
1007
Product 1 - A123
Product 2 - B456
Product 3 - D001
Build
Unit 6
1006
Product 1 - A123
Product 2 - B456
Product 3 - D001
Build
Unit 5
1005
Product 1 - A123
Product 2 - B456
Product 3 - D001
Consume
Product 3
D001
null
null
null
Build
Unit 4
1004
Product 1 - A123
Product 2 - B456
Product 3 - C789
Build
Unit 3
1003
Product 1 - A123
Product 2 - B456
Product 3 - C789
Build
Unit 2
1002
Product 1 - A123
Product 2 - B456
Product 3 - C789
Build
Unit 1
1001
Product 1 - A123
Product 2 - B456
Product 3 - C789
Consume
Product 3
C789
null
null
null
Consume
Product 2
B456
null
null
null
Consume
Product 1
A123
null
null
null
fiddle
|
STACK_EXCHANGE
|
Before we start the deployment process, we would like to point out one important thing: we should always deploy an application to a production-like environment as soon as we start development. That way we are able to observe how the application behaves in a real environment from the beginning of the development process.
That leads us to the conclusion that deployment should not be the last step of the application's lifecycle. We should deploy our application to a staging environment as soon as we start building it.
For the purpose of this deployment, we are going to build our Angular application for production to produce optimized static files and to combine them with the .NET Core server.
This process is pretty much the same for any client-side project you want (React, Vue.js or any other).
So, let’s start.
If you want to see all the basic instructions and complete navigation for the .NET Core series, check out the following link: Introduction of the .NET Core series.
For the complete navigation and all the basic instructions of the Angular series, check out: Introduction of the Angular series.
For the previous part check out: Creating Angular client side – Angular Delete Actions
All required projects and the publish files are available at GitHub .NET Core, Angular and MySQL. Part 16 – Source Code
This post is divided into several sections:
- Building Angular Production Files
- Publishing .NET Core Files for the IIS Deployment
- Windows Server Hosting Bundle and the Hosts File
- Installing IIS and the Site Deployment
First, we need to create the production files for our Angular project by executing the command:
ng build --prod
This is the way to create the production files for the Angular project. But if we were to use React or Vue.js for the client-side, the command would be:
npm run build
Before we publish our files to the required location, we have to modify our .NET Core app configuration a bit:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.Use(async (context, next) =>
    {
        await next();

        if (context.Response.StatusCode == 404 && !Path.HasExtension(context.Request.Path.Value))
        {
            context.Request.Path = "/index.html";
            await next();
        }
    });

    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();

    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.All
    });

    // ...the rest of the existing pipeline configuration stays unchanged
}
If we don’t modify our configuration like this, we won’t be able to start our deployed application at all (as soon as we type the required URL address). But with it, we are safe to continue.
Angular CLI is going to create a new folder with the name “dist” inside our project and publish all the production files inside. Copy all those files from the dist folder and paste them into the
wwwroot folder inside the .NET Core’s main project. Now with the static files in the right place, we are going to use Visual Studio’s feature to create publish files for the entire application.
Let’s create a folder on the local machine with the name publish. Inside that folder, we want to place all of our files for the deployment. Then, right-click on the
AccountOwnerServer project and click the Publish option.
In the next window, we are going to pick a Folder as the publish target, choose the place where we want to publish our files and click Create Profile:
In the next window, we should just click the Publish button.
Now we have all the files in the right place.
Prior to any further action, let’s install .NET Core Windows Server Hosting bundle on our system to install the .NET Core Runtime. Furthermore, with this bundle, we are installing the .NET Core Library and the ASP.NET Core Module. This installation will create a reverse proxy between IIS and the Kestrel server, which is crucial for the deployment process.
During the installation, it will try to install the Microsoft Visual C++ 2015 Redistributable, so just let it do that.
If you have a problem with missing SDK after installing Hosting Bundle, follow this solution suggested by Microsoft:
Installing the .NET Core Hosting Bundle modifies the
PATH when it installs the .NET Core runtime to point to the 32-bit (x86) version of .NET Core (
C:\Program Files (x86)\dotnet\). This can result in missing SDKs when the 32-bit (x86) .NET Core
dotnet command is used (No .NET Core SDKs were detected). To resolve this problem, move
C:\Program Files\dotnet\ to a position before
C:\Program Files (x86)\dotnet\ on the
PATH environment variable.
After the installation, locate the Windows hosts file on
C:\Windows\System32\drivers\etc and add the following record at the end of the file:
Finally, save the file.
If you don’t have IIS installed on the machine, you need to install it by opening
Control Panel and then
Programs and Features:
After the IIS installation finishes, open the Run window (windows key + R) and type:
inetmgr to open the IIS manager:
Now we can create a new website:
In the next window we need to add a name to our site and a path to the published files:
After this step, we are going to have our site inside the “sites” folder in the IIS Manager. Additionally, we need to set up some basic settings for our application pool:
After we click on the Basic Settings link, let’s configure our application pool:
Your website and the application pool should be started automatically.
In order to deploy the application to IIS, we need to register the IIS integration in our .NET Core part of the project. We have already done that in our ServiceExtensions class, in Part 2 of this tutorial.
Everything is in place.
Now let’s open a browser and type http://www.accountowner.com to inspect the result:
By reading this post you have learned:
- How to build production files from the client-side application
- The way to publish files by using Visual Studio
- Which additional resources we need for IIS deployment to work
- How to install IIS
- To deploy the application on IIS
Thank you for reading the post, hopefully, it was helpful to you.
In the next part of the series, we are going to publish this complete application to the Linux environment.
|
OPCFW_CODE
|
Debian - How to identify USB devices with similar /dev/tty* file
On my embedded machine, two USB devices appear under similar /dev files: /dev/ttyACMx.
One device is a POS device; the other one is a printer.
I don't know which device will be ttyACM0 or ttyACM1: my guess is that I cannot assume a particular order.
So, once I have detected the presence of ttyACM0 and ttyACM1, how can I know which USB device is tied to which tty* file?
I checked with lsusb and usb-device but I'm not able to connect the information.
Thanks
As root, the command udevadm info -q all -a -n /dev/ttyACM0 will output all properties that can be used to identify the /dev/ttyACM0 device. Also try omitting the -a option to see the environment variables that may be generated by existing udev rules, in case those rules do some sort of active probing of the device.
If there is a difference with the outputs for /dev/ttyACM0 and /dev/ttyACM1, then that difference can probably be used to identify which is which.
Note that /dev/ttyACM* may indicate that these devices are originally RS-232 serial devices with just a generic USB-to-serial converter chip added to make them compatible with USB. If so, the amount of available information depends on exactly how the converter chip has been configured to present the device to the USB bus. In the best case, there might be an attribute that identifies the type of each device, and there might already be an auto-generated alias at /dev/serial/by-id/*
Worst case, there might be nothing unique (not even a serial number) on the converter chip, and you might have to implement some kind of udev rule that does active probing, by sending some identification request to the device and checking the resulting answer, or dedicate a particular USB port for each type of device and identify them by the sysfs path of the USB port. In this case, check /dev/serial/by-path/*: there might already be an auto-generated device alias that you could use.
Once you find a property or a probe result that can be used to tell the devices apart, you can then set up a udev rule that will assign a type-specific alias to the respective /dev/ttyACM* device, e.g. /dev/POS for the POS device, and /dev/receipt or something appropriate for the printer. Those will be symbolic links pointing to the actual device names, but your applications will be able to use them just like the real devices.
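For example, supposing udevadm info showed distinct vendor/product IDs for the two devices, a rules file such as /etc/udev/rules.d/99-usb-serial.rules could assign the aliases (the ID values below are made-up placeholders, not real device IDs):

```
# Replace the idVendor/idProduct placeholders with the ATTRS{...} values
# reported by udevadm info for your actual POS device and printer.
SUBSYSTEM=="tty", ATTRS{idVendor}=="1a2b", ATTRS{idProduct}=="3c4d", SYMLINK+="POS"
SUBSYSTEM=="tty", ATTRS{idVendor}=="5e6f", ATTRS{idProduct}=="7a8b", SYMLINK+="receipt"
```

After reloading the rules (udevadm control --reload-rules) and replugging the devices, /dev/POS and /dev/receipt will point at the correct ttyACM nodes regardless of enumeration order.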
Thanks a lot! I wrote a udev rule: fantastic!!
|
STACK_EXCHANGE
|
Exploring the Mega 65
Over the past two weeks I have been continuing to explore the Mega 65 computer after recently getting it, now looking at the updated Basic features, and alternative cores like the C64 Core and Gameboy Core.
I started by loading the updated Mega 65 core into slot 1. The Mega65 won't let you overwrite the factory core in Slot 0, to prevent bricking the system and making it unbootable.
It checks the core is OK, and that it is for the correct hardware, the Mega65 R3. It then loads the core into the slot selected.
This is a relatively quick process, but does take a few minutes.
Eventually the core import process completes successfully.
Now I have an updated Mega65 core ready to use in Slot 1. I also downloaded the C64 Core and loaded that in also, using Slot 2, as per below.
You can mount floppy disk images using the Mount Drive option in the HELP menu, and then select the image you want to use.
Never ceases to amaze me how fantastic C64 scene demos are - really pushing the boundaries of what I thought a C64 could do - mostly working well on the C64 Core on the Mega65:
I found some C64 demo floppy disk images were not the size the C64 core was expecting, and it refused to mount them. Hopefully this will be fixed in a newer version. That said, most of the demos I tested worked just fine:
Back in the day I played this for hours, trying to find all 10 treasures and solving puzzles to find more. It counts how many moves you take as well, but I never finished the game to find out, as I couldn't find the final treasure. I only ever found 9 of them!
And here it was, the final treasure - I missed finding the Nugget, which certainly was a strange place to find it.
Job done, after 40 years I finally finished this game! Sorry for the detour, but I was very happy to do this finally, and on the Mega65 as well!
I created a GBC folder on the MicroSD card as per the instructions, put the Gameboy ROM from the site linked in the instructions into the right place, and renamed the file accordingly. The core can detect it on launch and use it, as below:
Voila, one Gameboy/Gameboy Colour system, running on the Mega 65!
As mentioned, you can adjust the colour output to look a bit better than the original system, which took the screen it was displayed on into account for colours, meaning they can look a bit odd on a modern screen - so there are "Fully matured" and "LCD emulation" options here, and below is how the two options compare on the same game screen:
It is fair to say I got somewhat distracted playing the next game, the first one I ever got for the original Gameboy classic (it was included with it):
I never knew the Gameboy could even do some of the effects I saw in these demos:
You can also use DLOAD"PROGNAME.PRG",U12 to load a program direct from the SD Card, without mounting a floppy disk image.
I read next about the BORDER and BACKGROUND commands, to easily set the colours of the border and background. I added in some new code at the top to set the colours.
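Put together, the commands from the last two paragraphs look something like this in MEGA65 BASIC (the colour numbers and the program name here are just illustrative examples):

```basic
10 BORDER 0      : REM black border
20 BACKGROUND 6  : REM blue background
30 DLOAD "PROGNAME.PRG",U12 : REM load straight from the SD card
```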
I also took the Mega65 to the Adelaide Retro Computing Group meeting on the evening of 10th June 2022, and it got plenty of interest from the attendees that night!
I used to help run this group for a few years before I stopped after some health problems. These days I am just an occasional attendee when work commitments allow.
A black & white Hanimex gaming system running Pong was also there, along with a Silicon graphics O2 and plenty of other hardware too:
|
OPCFW_CODE
|
package control
import (
"errors"
"strconv"
"strings"
)
// Reply represents a reply sent from the server to the client.
type Reply struct {
Status int // the StatusCode of the reply
Text string // ReplyText of the EndReplyLine
Lines []ReplyLine // MidReplyLines and DataReplyLines
}
func (r Reply) String() string {
s := strconv.Itoa(r.Status) + " " + r.Text
for _, line := range r.Lines {
s = s + "\n" + line.String()
}
return s
}
// Status codes of replies from the onion router.
const (
StatusOK = 250
StatusOperationUnnecessary = 251
StatusResourceExhausted = 451
StatusProtocolSyntaxError = 500
StatusUnrecognizedCommand = 510
StatusUnimplementedCommand = 511
StatusArgumentSyntaxError = 512
StatusUnrecognizedArgument = 513
StatusAuthenticationRequired = 514
StatusBadAuthentication = 515
StatusUnspecifiedError = 550
StatusInternalError = 551
StatusUnrecognizedEntity = 552
StatusInvalidConfigurationValue = 553
StatusInvalidDescriptor = 554
StatusUnmanagedEntity = 555
StatusAsyncEventNotification = 650
)
// IsAsync returns true if r is an asynchronous reply and false otherwise.
func (r Reply) IsAsync() bool {
return r.Status == 650
}
// IsSync returns true if r is a synchronous reply and false otherwise.
func (r Reply) IsSync() bool {
return r.Status != 650
}
// ReplyLine represents a MidReplyLine or DataReplyLine read from the server.
type ReplyLine struct {
Status int
Text string
// Data is the empty string for MidReplyLines and the CmdData with
// dot encoding removed for DataReplyLines.
Data string
}
func (rl ReplyLine) String() string {
s := strconv.Itoa(rl.Status) + " " + rl.Text + " " + rl.Data
return s
}
// the possible kinds of ReplyLines
const (
midLine = iota
dataLine
endLine
)
// readLine reads a MidReplyLine, EndReplyLine or DataReplyLine from the
// connection into a newly allocated ReplyLine.
func (c Conn) readLine() (lineType int, rl *ReplyLine, err error) {
line, err := c.text.ReadLine()
if err != nil {
return
}
status, lineType, text, err := parseLine(line)
if err != nil {
return lineType, nil, err
}
rl = new(ReplyLine)
rl.Status = status
rl.Text = text
if lineType == dataLine {
data, err := c.readData()
if err != nil {
return lineType, nil, err
}
rl.Data = data
}
return
}
func parseLine(line string) (status, lineType int, text string, err error) {
if len(line) < 4 || line[3] != ' ' && line[3] != '-' && line[3] != '+' {
err = errors.New("protocol error: " + line)
return
}
switch line[3] {
case '-':
lineType = midLine
case ' ':
lineType = endLine
case '+':
lineType = dataLine
}
status, err = strconv.Atoi(line[0:3])
if err != nil || status < 100 {
err = errors.New("protocol error: invalid status code: " + line)
return
}
text = line[4:]
return
}
// readData reads the dot encoded CmdData following a DataReplyLine.
// It returns a string with dot encoding removed.
func (c Conn) readData() (string, error) {
buf, err := c.text.ReadDotBytes()
if err != nil {
return "", err
}
return string(buf), nil
}
// Receive reads and returns a single reply from the Tor server.
// It makes no distinction between synchronous and asynchronous replies.
func (c Conn) Receive() (*Reply, error) {
// We read a multi-line reply containing
// lines of the form
//
// status-message line 1 // a MidReplyLine
//
// status+message line 2 // a DataReplyLine
// <dot encoded data>
// .
//
// status message line n // an EndReplyLine
//
// into r. status is a three-digit status code. The reply is terminated
// by an EndReplyLine.
reply := new(Reply)
lineType, replyLine, err := c.readLine()
if err != nil {
return reply, err
}
if lineType != endLine {
if reply.Lines == nil {
reply.Lines = make([]ReplyLine, 0, 1)
} else {
reply.Lines = reply.Lines[:0]
}
}
for err == nil && lineType != endLine {
// TODO: Should we check that the second Status isn't different from the first?
reply.Lines = append(reply.Lines, *replyLine)
lineType, replyLine, err = c.readLine()
if err != nil {
return reply, err
}
}
// replyLine now contains the EndReplyLine
reply.Status = replyLine.Status
reply.Text = replyLine.Text
return reply, nil
}
// ReceiveSync reads replies from the Tor server. It returns the first synchronous reply;
// asynchronous replies read before that are sent to the connection's Replies channel.
// ReceiveSync blocks until those replies are read from the channel.
func (c Conn) ReceiveSync() (*Reply, error) {
r, err := c.Receive()
if err != nil {
return r, err
}
for r.IsAsync() {
c.Replies <- r
r, err = c.Receive()
if err != nil {
return r, err
}
}
return r, nil
}
// ReceiveToChan reads a single reply from the Tor server and sends
// it to the connection's Replies channel. ReceiveToChan blocks until the
// reply is read from the channel.
func (c Conn) ReceiveToChan() error {
r, err := c.Receive()
if err != nil {
return err
}
c.Replies <- r
return nil
}
// A Handler can be registered to handle asynchronous replies sent
// by a Tor router. It is customary for a Handler
// to communicate back using channels.
type Handler func(r *Reply)
// Demux is a simple demultiplexer for asynchronous replies.
// It matches the type of a reply against a list of registered events
// and calls the corresponding Handler function.
type Demux struct {
conn *Conn
handler map[string]Handler
}
// NewDemux allocates and returns a new Demux.
func NewDemux(c *Conn) *Demux {
return &Demux{conn: c, handler: make(map[string]Handler)}
}
// Handle registers f to handle replies of type event.
// The different events are listed in section 4.1 of the control-spec.
func (m *Demux) Handle(event string, f Handler) Handler {
if event == "" {
panic("control: invalid event " + event)
}
if f == nil {
panic("control: nil Handler")
}
old, ok := m.handler[event]
m.handler[event] = f
if ok {
return old
}
return nil
}
// Serve reads replies from the Replies channel of m's Conn and launches the
// corresponding Handler functions in new goroutines.
func (m *Demux) Serve() {
for {
r := <-m.conn.Replies
event := r.Text
if i := strings.IndexByte(r.Text, ' '); i >= 0 {
event = r.Text[:i]
}
f, ok := m.handler[event]
if !ok {
// TODO: Would be nice to do some error reporting here; maybe using some errChan channel.
continue
}
go f(r)
}
}
|
STACK_EDU
|
Naveronasis wrote: ↑
09 Jun 2021, 09:10
There is no microstutter so its not doing some kind of on the fly pull down since there are some older displays that will do that.
Since now I know you're familiar with Timings & Resolution;
Frameskipping (pulldown creation) is caused by almost all LCD monitors doing frame buffering of the refresh cycles, and there are two separate processing passes: the video signal that writes into the buffer (creating a memory of the refresh cycle in the LCD motherboard's RAM), and the refreshing electronics that read from the same buffer to the actual pixels (writing to the panel transistors to start the LCD pixel transitions).
(Side note: the LCD panel is essentially metaphorically one gigantic write-only memory chip of sorts. One that uses analog levels of molecular rotation (liquid crystal molecules that acts as light valves to block/unblock polarized light) between two glass plates. Same row-column addressing as a typical RAM memory chip, controlling a transistor (active matrix) controlling each sub pixel tile. Except the "memory medium" at the transistor is completely different. Both RAM and LCD panels use lithography as method of manufacture)
In modern low-lag implementations, it's a very tight rolling window buffer between signal and panel (often approx ~6 pixel rows, just enough for scaling processing, overdrive/color processing, and digital HDMI/DP micropacket dejittering), in a top-to-bottom sweep. Often it's two closely-spaced pixel-row pointers (C++ or FPGA) on the same refresh-cycle buffer in the monitor motherboard's buffer (scaler/TCON), with the panel refresh row chasing behind the signal row. This keeps the signal synchronized to the panel almost perfectly, for low-lag operation.
So that's very low latency sub-refresh processing for LCD, which is why modern LCDs can achieve latencies within ~2-3ms of a CRT tube (mostly just GtG lag, HDMI/DP codec/modem lag, and micropacket demux from other packets like 2nd display or audio packets, etc). Now the lag difference between a CRT (with the HDMI/DP adaptor and its attendant packetization latencies) and an LCD is sometimes within 1-2ms of the fastest esports LCDs today -- the only way to easily get much lower lag on the CRT is direct analog (from discontinued GPUs with direct VGA outputs to bypass the requisite HDMI/DP codec latencies). Nonetheless the esports industry is pushing some crazy optimizations that bring the LCD refresh workflow closer to the 100-year-old raster video signal topology...
Sometimes it's a queue of refresh cycle buffers, with the signal buffer writing to a refresh cycle buffer, before the complete refresh cycle buffer is passed along to the panel refreshing code/electronics. But nowadays, pointers can share the same buffer, for low-lag operations, in a "beam-raced" / "beam-chased" operation in a tight rolling window on esports-optimized panels. These scaler/TCON optimizations filtered down to a lot of generic 60Hz LCDs too (low lag 60Hz LCDs), albeit not all of them, as the full-buffer workflow is a lot simpler for color/HDR/etc processing in a lot of TVs (most TVs have at least 1 refresh cycle worth of lag), where low-lag gaming operations are not as important. It is sometimes now done at a much higher, reprogrammable level -- e.g. an ARM GPU shader that runs on a full frame buffer to process the image nowadays -- rather than optimized FPGA/ASIC stuff. A 240Hz 1080p monitor needs to mathematically reprocess 1.5 billion subpixels per second, so that's literally FPGA or GPU territory if you want to do lots of advanced color processing -- overdrive math, HDR math, etc.
Frameskipping or tearing occurs on some panels because the two pointers (the write pointer to buffer signal to monitor RAM, the read pointer to copy monitor RAM to the monitor panel) are moving at different velocities. As they wrap around at a beat-frequency, you got the attendant stutter (if pointers are on different refresh cycle buffers) and/or tearing (if pointers are on same refresh cycle buffer).
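The beat-frequency effect above is easy to see in a toy model (illustrative only -- real scalers are far more complex than this): at each panel refresh, show the newest signal frame that has fully arrived into the buffer. When the two rates differ, frames repeat (judder/pulldown) or go missing (frameskips) at the beat frequency.

```python
def pulldown_pattern(signal_hz, panel_hz, seconds=1):
    """Toy model of frame buffering: at each panel refresh, display the
    newest signal frame that has completely arrived. Repeated entries in
    the result are judder (pulldown); missing frame numbers are skips."""
    shown = []
    for n in range(int(panel_hz * seconds)):
        t = n / panel_hz                  # time of this panel refresh
        shown.append(int(t * signal_hz))  # index of newest complete frame
    repeats = sum(1 for a, b in zip(shown, shown[1:]) if a == b)
    return shown, repeats
```

For example, pulldown_pattern(24, 60) reproduces the classic 3:2 pulldown cadence (each film frame held for 3 then 2 refreshes), while pulldown_pattern(90, 60) silently drops every third signal frame -- the same arithmetic that makes mismatched pointer velocities skip frames.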
Most "proper" gaming LCDs refresh low latency top-to-bottom like a CRT except in a non-flicker manner (no phosphor fadebehind effect behind the sweep): www.blurbusters.com/scanout
(high speed videos of an LCD refreshing top-to-bottom).
However, the signal velocity and scanout velocity of a LCD can diverge, creating weird latency effects:
Fixed Horizontal Scanrate and Flexible Horizontal Scanrate LCD Panels
TL;DR: The majority of 240 Hz LCDs are fixed horizontal scanrate, which means any refresh rates are output to the panel in 1/240sec top-to-bottom. So it has to buffer any lower Hz, and that adds latency. Fortunately a few cherrypicked 240Hz panels (e.g. ASUS XG258 and the ViewSonic XG2431, probably the world's first multisync-horizontal-scanrate 240Hz IPS LCD) can sync its panel scanout to signal scanout, reducing input lag.
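The buffering penalty described above can be put in rough numbers (a back-of-envelope sketch, not a measurement of any specific monitor): a fixed-scanrate panel always sweeps in its native scanout time, but it cannot finish ahead of a slower incoming signal, so the earliest-arriving scanlines sit in the buffer for roughly the difference.

```python
def fixed_scanrate_added_latency_ms(signal_hz, panel_scan_hz):
    """Rough extra latency on a fixed-horizontal-scanrate panel: the
    sweep takes 1/panel_scan_hz sec, but the whole frame needs
    1/signal_hz sec to transmit, so the sweep must be delayed by
    roughly the difference (zero when the rates match)."""
    signal_frame_ms = 1000.0 / signal_hz      # frame transmission time
    panel_sweep_ms = 1000.0 / panel_scan_hz   # panel scanout time
    return max(0.0, signal_frame_ms - panel_sweep_ms)
```

With these illustrative numbers, a 60 Hz signal into a fixed 1/240sec scanout adds about 12.5 ms, which is exactly the latency a multisync-horizontal-scanrate panel avoids by slowing its sweep to match the signal.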
Some fascinating history -- frame skipping (pulldown creation) is still a problem today! Unfortunately a lot of newer displays do it too. I've caught a few 240 Hz displays frameskipping, and one manufacturer had to do a stealth recall (RMA). But I discovered some undocumented timing tricks that violate the factory EDID to fix frame skipping via a nonstandard EDID the monitor did not come with. A huge number of AOC AG251FZ (one of the first 240 Hz monitors) had this problem. So users could fix frameskipping in some early 240Hz LCDs with a custom EDID override: Timings Fix for Frame Skipping on 240Hz LCDs
These days, TestUFO has a very particular weekly traffic surge pattern resembling workdays (a sinewave from a 9am-5pm surge to nighttime quietness during the 5 weekdays) from the Asia region, suggesting that a lot of China/Korea/Taiwan workplaces are using TestUFO (which the Great Firewall is apparently completely open to) -- most manufacturers have now added TestUFO to their display development debugging / testing routines. They learned their lessons on some of the really nasty LCD-specific bugs.
|
OPCFW_CODE
|
"""Preprocessor classes: tools for preparing and augmenting audio samples"""
from pathlib import Path
import pandas as pd
import copy
from opensoundscape.preprocess import actions
from opensoundscape.preprocess.actions import (
Action,
Overlay,
AudioClipLoader,
AudioTrim,
SpectrogramToTensor,
)
from opensoundscape.preprocess.utils import PreprocessingError
from opensoundscape.spectrogram import Spectrogram
from opensoundscape.sample import AudioSample
class BasePreprocessor:
"""Class for defining an ordered set of Actions and a way to run them
Custom Preprocessor classes should subclass this class or its children
Preprocessors have one job: to transform samples from some input (eg
a file path) to some output (eg an AudioSample with .data as torch.Tensor)
using a specific procedure defined by the .pipeline attribute.
The procedure consists of Actions ordered by the Preprocessor's .pipeline.
Preprocessors have a forward() method which sequentially applies the Actions
in the pipeline to produce a sample.
Args:
sample_duration: length of audio samples to generate (seconds)
"""
def __init__(self, sample_duration=None):
self.pipeline = pd.Series({}, dtype=object)
self.sample_duration = sample_duration
def __repr__(self):
return f"Preprocessor with pipeline:\n{self.pipeline}"
def insert_action(self, action_index, action, after_key=None, before_key=None):
"""insert an action at a specific position
This is an in-place operation
Inserts a new action before or after a specific key. If after_key and
before_key are both None, action is appended to the end of the index.
Args:
action_index: string key for new action in index
action: the action object, must be subclass of BaseAction
after_key: insert the action immediately after this key in index
before_key: insert the action immediately before this key in index
Note: only one of (after_key, before_key) can be specified
"""
if after_key is not None and before_key is not None:
raise ValueError("Specifying both before_key and after_key is not allowed")
assert not action_index in self.pipeline, (
f"action_index must be unique, but {action_index} is already "
"in the pipeline. Provide a different name for this action."
)
if after_key is None and before_key is None:
# put this action at the end of the index
new_item = pd.Series({action_index: action})
self.pipeline = pd.concat([self.pipeline, new_item])
elif before_key is not None:
self._insert_action_before(before_key, action_index, action)
elif after_key is not None:
self._insert_action_after(after_key, action_index, action)
def remove_action(self, action_index):
"""alias for self.drop(...,inplace=True), removes an action
This is an in-place operation
Args:
action_index: index of action to remove
"""
self.pipeline.drop(action_index, inplace=True)
def forward(
self,
sample,
break_on_type=None,
break_on_key=None,
bypass_augmentations=False,
trace=False,
):
"""perform actions in self.pipeline on a sample (until a break point)
Actions with .bypass = True are skipped. Actions with .is_augmentation
= True can be skipped by passing bypass_augmentations=True.
Args:
sample: either:
- pd.Series with file path as index (.name) and labels
- OR a file path as pathlib.Path or string
break_on_type: if not None, the pipeline will be stopped when it
reaches an Action of this class. The matching action is not
performed.
break_on_key: if not None, the pipeline will be stopped when it
reaches an Action whose index equals this value. The matching
action is not performed.
bypass_augmentations: if True, actions with .is_augmentation=True
are skipped
trace (boolean - default False): if True, saves the output of each
pipeline step to the returned sample's .trace attribute (a
pd.Series indexed like self.pipeline) - useful for inspecting
and debugging samples of interest
Returns:
sample (instance of AudioSample class)
"""
if break_on_key is not None:
assert (
break_on_key in self.pipeline
), f"break_on_key was {break_on_key} but no matching action found in pipeline"
# create AudioSample from input path
sample = self._generate_sample(sample)
if trace:
sample.trace = pd.Series(index=self.pipeline.index)
# run the pipeline by performing each Action on the AudioSample
try:
# perform each action in the pipeline
for k, action in self.pipeline.items():
if type(action) == break_on_type or k == break_on_key:
if trace:
# saved "output" of this step informs user pipeline was stopped
sample.trace[k] = f"## Pipeline terminated ## {sample.trace[k]}"
break
if action.bypass:
continue
if action.is_augmentation and bypass_augmentations:
if trace:
sample.trace[k] = f"## Bypassed ## {sample.trace[k]}"
continue
# perform the action (modifies the AudioSample in-place)
action.go(sample)
if trace: # user requested record of preprocessing steps
# save the current state of the sample's data
# (trace is a Series with index matching self.pipeline)
try:
sample.trace[k] = copy.deepcopy(sample.data)
# this will fail on Spectrogram and Audio class, which are immutable, implemented by
# raising an AttributeError if .__setattr__ is called. Since deepcopy calls setattr,
# we can't deepcopy those. As a temporary fix, we can add the original object because
# it is immutable. However, we should re-factor immutable classes to avoid this issue
# (see Issue #671)
except AttributeError:
sample.trace[k] = sample.data
except Exception as exc:
# treat any exceptions raised during forward as PreprocessingErrors
raise PreprocessingError(
f"failed to preprocess sample from path: {sample.source}"
) from exc
# remove temporary attributes from sample
del sample.preprocessor, sample.target_duration
return sample
def _generate_sample(self, sample):
"""create AudioSample object from initial input (file path)
subclasses can override this method to modify how samples
are created, or to add additional attributes to samples
"""
# handle paths or pd.Series as input for `sample`
if type(sample) == str or issubclass(type(sample), Path):
sample = AudioSample(sample) # initialize with source = file path
else:
assert isinstance(sample, AudioSample), (
"sample must be AudioSample OR file path (str or pathlib.Path), "
f"was {type(sample)}"
)
# add attributes to the sample that might be needed by actions in the pipeline
sample.preprocessor = self
sample.target_duration = self.sample_duration
return sample
def _insert_action_before(self, idx, name, value):
"""insert an item before a specific index in a series"""
i = list(self.pipeline.index).index(idx)
part1 = self.pipeline[0:i]
new_item = pd.Series([value], index=[name])
part2 = self.pipeline[i:]
self.pipeline = pd.concat([part1, new_item, part2])
def _insert_action_after(self, idx, name, value):
"""insert an item after a specific index in a series"""
i = list(self.pipeline.index).index(idx)
part1 = self.pipeline[0 : i + 1]
new_item = pd.Series([value], index=[name])
part2 = self.pipeline[i + 1 :]
self.pipeline = pd.concat([part1, new_item, part2])
class SpectrogramPreprocessor(BasePreprocessor):
"""Child of BasePreprocessor that creates spectrogram Tensors w/augmentation
loads audio, creates spectrogram, performs augmentations, creates tensor
by default, does not resample audio, but bandpasses to 0-11.025 kHz
(to ensure all outputs have same scale in y-axis)
can change with .pipeline.bandpass.set(min_f=,max_f=)
during prediction, will load clips from long audio files rather than entire
audio files.
Args:
sample_duration:
length in seconds of audio samples generated
If not None, longer clips trimmed to this length. By default,
shorter clips will be extended (modify random_trim_audio and
trim_audio to change behavior).
overlay_df: if not None, will include an overlay action drawing
samples from this df
out_shape:
output shape of tensor h,w,channels [default: (224,224,3)]
"""
def __init__(self, sample_duration, overlay_df=None, out_shape=(224, 224, 3)):
super(SpectrogramPreprocessor, self).__init__(sample_duration=sample_duration)
self.out_shape = out_shape
# define a default set of Actions
self.pipeline = pd.Series(
{
"load_audio": AudioClipLoader(),
# if we are augmenting and get a long file, take a random trim from it
"random_trim_audio": AudioTrim(is_augmentation=True, random_trim=True),
# otherwise, we expect to get the correct duration. no random trim
"trim_audio": AudioTrim(), # trim or extend (w/silence) clips to correct length
"to_spec": Action(Spectrogram.from_audio),
"bandpass": Action(
Spectrogram.bandpass, min_f=0, max_f=11025, out_of_bounds_ok=False
),
"to_tensor": SpectrogramToTensor(), # uses sample.target_shape
"overlay": Overlay(
is_augmentation=True, overlay_df=overlay_df, update_labels=False
)
if overlay_df is not None
else None,
"time_mask": Action(actions.time_mask, is_augmentation=True),
"frequency_mask": Action(actions.frequency_mask, is_augmentation=True),
"add_noise": Action(
actions.tensor_add_noise, is_augmentation=True, std=0.005
),
"rescale": Action(actions.scale_tensor),
"random_affine": Action(
actions.torch_random_affine, is_augmentation=True
),
}
)
# remove overlay if overlay_df was not specified
if overlay_df is None:
self.pipeline.drop("overlay", inplace=True)
def _generate_sample(self, sample):
"""add the target_shape attribute to the sample
otherwise, generate AudioSamples from paths as normal
"""
sample = super()._generate_sample(sample)
sample.target_shape = self.out_shape
return sample
|
STACK_EDU
|
package com.tangcheng.learning.reflect;
import com.tangcheng.learning.domain.bo.TwoLevelChildClass;
import org.junit.Test;
import org.springframework.util.ReflectionUtils;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
/**
* @author: tangcheng
* @description:
* @since: Created in 2018/08/02 16:29
*/
public class StringEqualsTest {
@Test
public void testEquals() throws IllegalAccessException, NoSuchFieldException {
String a = "abc";
// NOTE: this relies on String's backing "value" field being a char[],
// which holds on Java 8 and earlier (Java 9+ compact strings use a byte[]).
Field valueFieldString = String.class.getDeclaredField("value");
valueFieldString.setAccessible(true);
char[] value = (char[]) valueFieldString.get(a);
value[2] = '@';
String b = "abc";
//a.intern();
System.out.println(a);
System.out.println(b);
System.out.println(a == b); // a and b refer to the same interned object, so this prints true
System.out.println("abc" == b);// the literal "abc" and b are the same interned object, so this prints true
System.out.println("ab@" == a); // the literal "ab@" is a different object from a, so this prints false
System.out.println(a.equals("ab@"));// different objects, but a's backing char[] was mutated to "ab@", so this prints true
System.out.println(a.equals("abc"));// the literal "abc" resolves to the same (mutated) interned object as a, so this prints true
System.out.println("abc".equals("ab@"));// compares values; the interned "abc" now holds "ab@", so this prints true
}
/**
* Inspect which fields reflection reports for classes in the java.lang package
*/
@Test
public void springReflectionUtilsOriginClass() {
final List<String> list = new ArrayList<String>();
ReflectionUtils.doWithFields(String.class, field -> list.add(field.getName()));
System.out.println(list); //[value, hash, serialVersionUID, serialPersistentFields, CASE_INSENSITIVE_ORDER]
list.clear();
ReflectionUtils.doWithFields(Integer.class, field -> list.add(field.getName()));
System.out.println(list); //[MIN_VALUE, MAX_VALUE, TYPE, digits, DigitTens, DigitOnes, sizeTable, value, SIZE, BYTES, serialVersionUID, serialVersionUID]
}
@Test
public void springReflectionUtilsTest1() {
final List<String> list = new ArrayList<String>();
TwoLevelChildClass twoLevelChildClass = new TwoLevelChildClass();
twoLevelChildClass.setTwoLevelChildName("TwoLevelChildName");
twoLevelChildClass.setOneLevelChildName("OneLevelChildName");
twoLevelChildClass.setName("Name");
ReflectionUtils.doWithFields(TwoLevelChildClass.class, new ReflectionUtils.FieldCallback() {
public void doWith(Field field) throws IllegalArgumentException,
IllegalAccessException {
list.add(field.getName());
field.setAccessible(true);//Class org.springframework.util.ReflectionUtils can not access a member of class com.tangcheng.learning.reflect.StringEqualsTest$TwoLevelChildClass with modifiers "private"
Object o = ReflectionUtils.getField(field, twoLevelChildClass);
System.out.println(o); // TwoLevelChildName \n OneLevelChildName \n Name
}
});
System.out.println(list); //[twoLevelChildName, oneLevelChildName, name]
}
}
|
STACK_EDU
|
Incorrect syntax near 'OFF'
I can't find anything related, so please bear with me, I am new to this.
I am using SQL Server 2012.
I created a script using Visual Studio 2012 data compare so I can update one database to another.
The script (43k+ lines), when run, says the query completed with errors (it's set to roll back if an error occurs) but displays no messages.
Then I added a try/catch as shown below:
begin try
{script}
end try
begin catch
--returns the complete original error message as a result set
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage
--will return the complete original error message as an error message
DECLARE @ErrorMessage nvarchar(400), @ErrorNumber int,
@ErrorSeverity int, @ErrorState int, @ErrorLine int
SELECT @ErrorMessage = N'Error %d, Line %d, Message: '+ERROR_MESSAGE(),@ErrorNumber = ERROR_NUMBER(),@ErrorSeverity = ERROR_SEVERITY(),@ErrorState = ERROR_STATE(),@ErrorLine = ERROR_LINE()
RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState, @ErrorNumber,@ErrorLine)
end catch
And it says
Incorrect syntax near 'OFF' at line 13
below is line number and script..
SET NUMERIC_ROUNDABORT OFF (Line 13)
GO (Line 14.. etc)
SET XACT_ABORT, ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT, QUOTED_IDENTIFIER, ANSI_NULLS ON
GO
Changing Line 13 so that GO is the last word (combining lines 13 and 14) gives the errors
Msg 102, Level 15, State 1, Line 13
Incorrect syntax near 'GO'.
Msg 102, Level 15, State 1, Line 14
Incorrect syntax near 'ON'.
Which makes me think it's some weird CR/LF issue, but I can't seem to figure it out.
I've use the script creation tool many times and have not had an issue in the past.
Thank you for your time.
Edit: the following code
SET NUMERIC_ROUNDABORT OFF GO
SET XACT_ABORT, ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT, QUOTED_IDENTIFIER, ANSI_NULLS ON
GO
gives an error at line 1
This issue has been fixed: It was fixed by me being frustrated and hitting the enter key 5 times at the beginning, making all the lines shift down 5. Now it works. I don't understand. If someone can explain, I'll give you an internet cookie or something.. idk...
GO is not a Transact-SQL statement - it's an instruction to SQL Server Management Studio and other client utilities saying that at this point they are supposed to break your script into separate batches. It's basically as if you first ran one file up to the point of GO and then another one starting just after the statement.
What this means is that your first batch's syntax is incorrect because it has a BEGIN TRY without a matching END TRY. You need to get rid of GO, or, if that's not possible, wrap every batch in its own try/catch.
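That batch-splitting behavior is easy to mimic client-side. A simplified sketch (a hypothetical helper, not part of any Microsoft tooling; it ignores the optional `GO n` repeat count and GO appearing inside comments or string literals):

```python
def split_batches(script: str) -> list[str]:
    """Split a T-SQL script on GO batch separators the way client tools
    (SSMS, sqlcmd) do: GO must sit on its own line, is case-insensitive,
    and is never itself sent to the server."""
    batches, current = [], []
    for line in script.splitlines():
        if line.strip().upper() == "GO":
            if current:
                batches.append("\n".join(current))
            current = []  # start a new batch after the separator
        else:
            current.append(line)
    if current:
        batches.append("\n".join(current))
    return batches
```

Wrapping every batch in its own BEGIN TRY/END TRY then becomes a loop over the returned list, executing each batch separately.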
I'll do that, but there was still the issue of running without the try/catch - giving no errors - would you have an idea about this as well?
edit: moving the begin try to below these lines still gives the same errors.
@badamtisss These generated scripts tend to have plenty of GOs. Have you made sure you have absolutely no GO in your try block?
Yes I double checked again.
|
STACK_EXCHANGE
|
View System Documentation - Menu Design
Design Goals
The goal was to have a presentation system that runs on 90% of all browsers, is interactive, uses widely accepted web standards, is adoptable by other agencies, is lightweight (with minimal plugins), displays high quality graphics, is as maintainable and simple as possible, and is 508 compliant. Some of these design goals conflict, like interactive vs. simple and high quality graphics vs. accepted web standards, while some are not totally met, like 508 compliance.
- HTML 4 is used because it is the most widely accepted/adopted standard while offering significantly improved HTML over version 3.x. This precludes some older browsers from using the system, but the percentage is small and most of the interactive features and the SVG plugin would not work in them anyway.
- Used CSS as much as possible, with as little HTML element attribute control as possible. This allows look and feel changes to be made in a few CSS file(s) without having to touch the myriad of PAGE XML files and XSLT files. It also provides a common look and feel for all pages within the system.
- HTML "div" block elements with CSS were used instead of HTML "table" elements as much as possible. This keeps things simple and removes the oddities in how different browsers handle tables.
- HTML "div" and "span" elements are used for block formatting instead of the HTML "font", "strong", and other font formatting elements. These block elements are then controlled via CSS which allows for quick and consistent block formatting changes.
- H1..H6 HTML header elements were implemented in October, 2006 to format major block titles. Prior to this, DIV elements were used with the CSS class set to either BlockTitle or ContentBlockTitle etc. This change was made after reading an article on browser screen readers that help their users by letting them skip content based on the "H#" elements.
- Used CSS scalable "Percentage" and "EM" font sizes. This lets users control how big/small they want their page's text. If "%" or "em" is specified as the font size, then that font's size is based on the user's browser font size setting. This helps visually impaired and older users, and allows those with good eyesight to use a smaller font size so they can see more text without having to scroll the page. The printer friendly CSS specifies font sizes in points "pt" because this is a printer specific setting that allows all printed pages to be handled consistently.
- SVG was chosen for high quality interactive graphics. This requires a plug in and has some oddities. JPEG images are also available for those users who can not or do not want to use SVG.
- MS ActiveX components were not considered due to their inability to run on all platforms/browsers.
- Macromedia's Flash was not considered as an option because users can not copy Flash content and paste into other applications.
CSS
As mentioned above, CSS is used as much as possible to localize and control the look and feel of all of the web site's pages. Some XSLTs and PAGE XML files do contain local CSS overrides and/or new definitions, but these are only needed/used in one special spot. If page specific CSS code is needed and it is something that another agency might want to control, then that formatting should be specified in one of the included CSS files, or it should be localized in an appropriate "SiteSpecific.xslt" file via a "html.otherCSS" type template call. These are best practices to follow which will help keep the pages consistent, more maintainable, and easier for another agency to change/adopt. Since most PAGE XML files are very specific to the deploying agency, it is generally acceptable to embed CSS styles in this file and possibly even use some limited formatting HTML elements like "strong". See the CSS to XSLT Xref Report page for a detailed list of which HTML element/CSS definitions are used within which XSLTs.
The system uses the following CSS files:
|standard.css||Core, stylesheet definitions for all IBIS-PH View System pages.|
|printer_friendly.css||Printer friendly specific and "standard.css" override definitions for the printer friendly version of all IBIS-PH View System pages. This includes specifying the page width, different font sizes in terms of "points" and some color changes that are more appropriate for a printed page.|
|query.css||Query specific definitions for the query module, confirmation, and result pages.|
|selection.css||Selection specific definitions for the query module, selection and query module builder pages. These properties control how the query steps, questions, and answers are formatted.|
|map.css||Style definitions for the Map XSLT SVG.|
|jsp.css||Style definitions for the system's JSP pages.|
|doc.css||Contains system documentation page specific definitions|
There are two major things to know about CSS. 1) CSS properties are inherited from their container element, NOT from a general class definition (as with a programming language). 2) CSS properties are overwritten in the order the files are loaded, with the last property defined having precedence. All pages utilize the css/standard.css file, which provides the core formatting. Other supplemental style sheet files are then included which override previously defined style properties or define section-specific or new page-specific class definitions.
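Both rules can be seen in a small fragment (hypothetical selectors, styled after the BlockTitle classes mentioned above):

```css
/* standard.css - loaded first, core definition */
.BlockTitle { color: navy; font-size: 1.2em; }

/* query.css - loaded after standard.css, so this later definition wins */
.BlockTitle { color: maroon; }

/* inheritance: text inside a .BlockTitle container inherits color from
   its container, not from any unrelated class definition */
.BlockTitle span { font-style: italic; }  /* color is still maroon, inherited */
```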
|js/mmenu/base.js||Contains the Milonic menu style definitions, top menu tab navigation menu definitions, and core popout text menu definitions common for all IBIS-PH View System pages.|
|js/mmenu/home.js||Left hand navigation menu definitions used by all "home" type pages.|
|js/mmenu/query.js||Left hand navigation menu definitions used by all "query" type pages.|
|xslt/indicator/profile/_indicator.xslt||Left hand navigation menu definitions used by Indicator "PAGE XML" page (all non dynamically created Indicator Profile view type pages).|
|js/mmenu/doc.js||Left hand navigation menu definitions used by all "doc" type pages (system documentation).|
|
OPCFW_CODE
|
The Big List of TDD and Unit Testing Knowledge - DZone DevOps (Oct 09, 2015; devops, tdd, unit testing, test-driven development). One of my fellow dev friends asked for a set of links, books, and screencasts related to TDD / unit testing, because he wants to expand his knowledge. Instead of sending him a private message, I thought it would be great to just create a blog post with all the resources I used in the past to learn.
Basics. Presentation/video: "Unit Testing and TDD – Why You Should Care and How to Make It Happen" by Roy Osherove. If you are completely new to those concepts, this presentation will be a nice quick start. Book: "The Art of Unit Testing" by Roy Osherove. Start here if you want to get into TDD and unit testing.
Intermediate: resources for developers that have started using the TDD practice and want to expand their knowledge. Advanced: topics that might go beyond TDD and sometimes into more philosophical issues, with a functional flavour.
Degraph. You can analyse class files and JARs using Degraph and get a GraphML file as a result, which can be rendered using yEd. What makes Degraph different from other tools is that it supports nested graphs: inner classes are visually contained in their containing class, classes are contained inside packages, and if you want you can group packages into modules, layers and so on. These ways of grouping classes are referred to as slices in Degraph. If you do a hierarchic layout in yEd of the resulting GraphML file, you can easily see which classes you can move to a different package, layer or module without creating circular dependencies, or which you have to move in order to break cycles. Degraph can also be used for controlling dependencies: ever wanted to establish a rule in a project like "stuff from the presentation layer must not access stuff from the persistence layer"?
Java Interview Reference Guide – Object Oriented Concepts. The object-oriented approach conceptualizes the problem solution in terms of real-world objects (for example a chair, fan, dog, or computer) which are easier to reuse across the application. Java is based on object-oriented concepts, which permit a higher level of abstraction to solve any problem in a realistic way. In Java, a class is a blueprint, template or prototype that defines the common behavior of objects of the same kind. An instance is a realization of a particular class, and all instances of a class have similar properties, as described in the class definition. For example, you can define a class called House with number of rooms as an attribute and create instances such as a house with 2 rooms, a house with 3 rooms, etc. There are four main features of OOP, each with its own benefits: encapsulation, inheritance, polymorphism and abstraction. Polymorphism, for example, lets you write Shape shape = new Square();.
Java Interview Reference Guide – Concurrent Framework. The Java concurrency framework provides libraries to build concurrent applications, consisting mainly of executors, synchronizers, concurrent collections, locks, atomic variables and Fork/Join. java.util.concurrent is the main package, which provides common libraries such as concurrent collections (ConcurrentHashMap), ForkJoin, executors and semaphores. java.util.concurrent.atomic provides thread-safe variables without using the synchronized keyword, relying on CAS (compare-and-set) instruction support. java.util.concurrent.locks contains low-level utility types for locking and waiting for conditions without using synchronization and monitors; java.util.concurrent.locks.ReentrantLock, which implements the Lock interface, is more efficient than traditional synchronized-based monitor lock mechanisms. ThreadPoolExecutor is one of the implementations of ExecutorService in the java.util.concurrent package.
6 Reasons Not to Switch to Java 8 Just Yet - Takipi Blog. Java 8 is awesome. Period. But after we had the chance to have fun and play around with it, the time has come to quit avoiding the grain of salt. All good things come with a price, and in this post I will share the main pain points of Java 8. Make sure you're aware of these before upgrading and letting go of 7. 1. Parallel streams can actually slow you down. Java 8 brings the promise of parallelism as one of its most anticipated new features, but it can actually make your code run slower if not used right; the slower benchmark grouped a collection into different groups (prime / non-prime), and more slowdowns can occur for other reasons as well. Diagnosis: parallelism, with all its benefits, also brings in additional types of problems to consider. 2. Lambdas. On the bottom line, what you're writing and what you're debugging are two different things.
|
OPCFW_CODE
|
Originally Posted by LBTRAVA
Hi everybody. I have installed CWM v184.108.40.206 on my SONIC U8650 smartphone. I'm having a slight problem with this recovery (the same occurred with previous version 220.127.116.11). It's not saving the ext partition that I have on the sd card. When I perform a backup through CWM, it skips the saving of sdext as this partition is not being found. Also, with previous CWM version 18.104.22.168, there was an option to manually mount the partition (just in case it is not found at the time of backup) in the "Mounts and storage" option. This option is no longer present in CWM 22.214.171.124 so I can't manually mount the partition (to see whether this would let CWM recognize the partition and make a proper backup of it). In the end I'm not able to have sdext fully backed up (I resorted to doing a manual backup on my Linux notebook with partimage) along with all the other partitions such as /system, /data, /cache, etc. I therefore wonder if this is a bug and if there's a way to get around this.
I have another smartphone with CWM v126.96.36.199 installed and on this one the backup of the sd-ext partition does work (and it is ext4, not ext3). I wonder if on this 188.8.131.52 version for SONIC, ext3 partitions would be recognized or not.
P.S.: The CWM 184.108.40.206 I installed was taken from here. And this is the installation guide. Is this the same CWM as the one posted here?
EDIT: I have flashed even this version posted here and still have the same problem... it doesn't back up my sd-ext partition. It does back up boot_image, recovery_image, system, data, android_secure and cache, but not sd-ext (no sd-ext found).
If you want an sd-ext partition on your SD card you must follow exactly these easy steps.
1- Do a full format of your SD card as FAT32 with Mini Partition 7 (Windows) or GParted (Linux); for example: Primary, FAT32
2- put your sd card into your phone and get into recovery menu.
---go to "mounts and storage" and format these partitions:
3- Go to the beginning of the recovery menu and select:
..."wipe data/factory reset"...(say YES)
..."wipe cache partition"...(say YES)
Go to ...advanced ..."wipe Dalvik Cache"...(say YES)
5- Go to...advanced...partition SD card (choose 512Mb for ext and 64Mb for swap)
...go to...mounts & storage... mount USB storage...
... connect the phone via USB and copy your favourite ROM to the phone
6- select "unmount"
7- Go to "Go Back"
8- Select "install zip from sdcard"... "choose zip from sdcard" and select your ROM
10- Say YES and wait until your ROM is installed
Congratz, now you have your U8650 with custom ROM and ext partition with swap
P.S. I know the steps are somewhat tedious, but it works 110%, man
Greetings from Spain
|
OPCFW_CODE
|
May 20, 2023
A code unit is a unit of measure used to describe the amount of code in a computer program. It is a standard of measurement used to determine the size of a software program, and it is used by programmers and software engineers to measure the complexity of a program.
A code unit can be defined in several ways, but the most common definition is a byte, which is the smallest unit of memory that can be addressed by a computer. A byte is composed of eight bits, and it can represent a single character, a number, or a command in a computer program.
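For instance, the byte counts are easy to verify in Python (a small illustration, assuming UTF-8 encoding):

```python
# Each ASCII character occupies exactly one byte (eight bits) in UTF-8,
# while a non-ASCII character may need several bytes.
line = "x = 1\n"
ascii_bytes = len(line.encode("utf-8"))    # six one-byte characters
accented_bytes = len("é".encode("utf-8"))  # one character, two bytes
```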
The purpose of using code units is to measure the amount of code that is required to create a software program. This is important because the size of a program can affect its performance, and it can also affect the amount of time and resources required to develop, test, and deploy the program.
Code units are used in various ways to measure the size and complexity of software programs. One common use of code units is to measure the size of a program’s executable file. This is important because the size of the executable file can affect how long it takes to load and run the program.
Another use of code units is to measure the size of a program’s source code. This is important because the size of the source code can affect how long it takes to compile the program, and it can also affect how long it takes to debug and maintain the program.
Code units can also be used to measure the complexity of a program’s logic. This is important because the complexity of a program can affect its performance, and it can also affect how easy it is to understand and modify the program.
There are several tools and techniques that can be used to measure code units. These include static code analysis tools that scan the source code of a program to identify potential issues, and code coverage tools that measure how much of a program’s code is executed during testing.
Types of Code Units
There are several types of code units that can be used to measure the size of a software program. These include:
Source Lines of Code (SLOC)
Source lines of code (SLOC) is a code unit that measures the number of lines of code in a software program’s source code. This includes comments and blank lines, as well as executable code. SLOC is a popular code unit for measuring the size and complexity of software programs.
One advantage of using SLOC is that it is language-independent, meaning that it can be used to measure the size of programs written in any programming language. However, SLOC can be misleading because different programming languages use different syntax and structure, which can affect the number of lines of code required to write a program.
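A minimal SLOC counter can be sketched as follows; count_sloc is an illustrative helper (it only recognizes '#'-style line comments), not a standard tool:

```python
def count_sloc(source: str) -> dict:
    """Count physical source lines: total, blank, comment-only, and code.

    Simplified sketch: real SLOC tools are language-aware (block comments,
    strings, etc.); this only illustrates the total/blank/comment/code split.
    """
    total = blank = comment = 0
    for line in source.splitlines():
        total += 1
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith("#"):
            comment += 1
    return {"total": total, "blank": blank,
            "comment": comment, "code": total - blank - comment}

sample = "# add two numbers\n\ndef add(a, b):\n    return a + b\n"
counts = count_sloc(sample)
```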
Object Lines of Code (OLOC)
Object lines of code (OLOC) is a code unit that measures the number of lines of code in a program’s object code. This includes the executable code and any libraries or modules that the program uses. OLOC is a more accurate measure of a program’s size than SLOC because it does not include comments or blank lines.
However, OLOC can be more difficult to measure than SLOC because it requires access to the program’s object code, which may not be available in all cases.
Function Points
Function points is a code unit that measures the functionality provided by a software program. It is based on the number of input and output parameters, the complexity of the algorithms used, and the number of user interactions required to complete a task.
Function points are often used to measure the size and complexity of large software systems, such as enterprise resource planning (ERP) systems or customer relationship management (CRM) systems. However, function points can be difficult to measure because they require a detailed understanding of the program’s functionality.
Cyclomatic Complexity
Cyclomatic complexity is a code unit that measures the complexity of a program's control flow. It is based on the number of decision points in the program, such as if statements and loop structures. The higher the cyclomatic complexity, the more difficult the program is to understand and maintain.
Cyclomatic complexity is often used to measure the maintainability of a program, as programs with high cyclomatic complexity are more difficult to modify and debug. However, cyclomatic complexity can be misleading because it does not take into account the size or structure of the program.
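For Python source, the decision-point counting can be sketched with the standard ast module (a simplified McCabe count; real tools also count boolean operators and other constructs):

```python
import ast

# Simplified McCabe metric: 1 plus the number of decision points
# (if/elif, loops, exception handlers, ternary expressions).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

straight_line = "x = 1\ny = x + 1\n"
branchy = (
    "def f(x):\n"
    "    if x > 0:\n"
    "        return x\n"
    "    for i in range(3):\n"
    "        x += i\n"
    "    return x\n"
)
```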
|
OPCFW_CODE
|
Integrator natively supports Google Analytics reporting API.
When configuring the Google Analytics connector you must point it to a particular web site, identified by the web property View ID, and authorize it for use with this web property.
What is a web property?
A web property is a collection of web sites under the same Google Analytics account.
What is a web property View ID?
A web property View ID is a unique ID of a particular web site (for example, google.com) under a Google Analytics account.
When configuring the Google Analytics connector, you must specify the web property View ID.
You can find your web property View ID in the Google Analytics Admin.
Step 1 - log in to the Google Analytics console.
Step 2 - select the web property.
Step 3 - click the property name drop-down at the top of the screen.
Look for a number on the left, below the property name. This is your web property View ID.
Connecting using Google Service account
You need a service account in order to access Google Analytics API.
Read more about using Google Service account to connect to the Google APIs.
Authorizing service account for the web property.
Before the Google Analytics connector can access the web property, the service account must be authorized.
Open Google Analytics Admin in a web browser and under User Management add permission for service account email.
- If you use the default service account (recommended), enter the email@example.com email address.
- If you use a service account created by you, use the corresponding email address.
Creating Google Analytics flows
Important: before you start creating flows you must:
1. Get a web property View ID.
2. Use the default service account (recommended), or create a new service account if needed.
3. Authorize the service account for the web property.
Once steps 1-3 are completed you can start creating flows which extract data from Google Analytics.
Step 1 Create Google Analytics Connection.
When creating a connection, define the following properties:
- View ID - a view ID of the web property.
- Service Account Email - service account email.
- Service Account -
Important: keep it blank if you use the default service account. Otherwise, copy and paste the content of the JSON file downloaded in step 8 when you created a new service account.
- Start Date - A start date for the request, formatted as YYYY-MM-DD, or as a relative date (e.g., today, yesterday, or NdaysAgo where N is a positive integer).
- End Date - An end date for the request, formatted as YYYY-MM-DD, or as a relative date (e.g., today, yesterday, or NdaysAgo where N is a positive integer).
- Dimensions - Google Analytics dimensions to be included in the report. You can include multiple dimensions in the request. Read more about the dimensions here. If a dimension is not in the list, simply type in the dimension name.
Note: enter the dimension's name exactly as in the dimensions and metrics explorer, including the ga: prefix and capitalization.
- Metrics - Google Analytics metrics to be included in the report. You can include multiple metrics in the request. Read more about the metrics here. If a metric is not in the list, simply type in the metric name.
Note: enter the metric's name exactly as in the dimensions and metrics explorer, including the ga: prefix and capitalization.
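The relative date forms accepted by Start Date and End Date (today, yesterday, NdaysAgo) can be resolved to absolute YYYY-MM-DD dates as sketched below; resolve_ga_date is a hypothetical helper for illustration, not part of the connector:

```python
import re
from datetime import date, timedelta

def resolve_ga_date(value: str, today: date) -> str:
    """Resolve 'today', 'yesterday', or 'NdaysAgo' to YYYY-MM-DD.

    Absolute YYYY-MM-DD values pass through unchanged.
    """
    if value == "today":
        return today.isoformat()
    if value == "yesterday":
        return (today - timedelta(days=1)).isoformat()
    match = re.fullmatch(r"(\d+)daysAgo", value)
    if match:
        return (today - timedelta(days=int(match.group(1)))).isoformat()
    return value
```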
Step 2 Start creating a flow by opening the Flow Builder window, clicking the + button, and typing Google Analytics in the search box:
Step 3 Continue by defining a transformation(s) where the source (FROM) is the Google Analytics Connection created in step 1, and the destination (TO) is either a file, a database, or a web service.
|
OPCFW_CODE
|
Take a look at our guide to the best exchanges for trading crypto; we have also written in-depth reviews of most exchanges, so look here to find the one you wish to use. If you're considering day trading, we're going to assume that you know how to register an account on an exchange, and what the difference is between a centralized exchange and a decentralized exchange.
Day trading strategies. Does anybody have any good resources or advice for cryptocurrency day-trading strategies or technical analysis for beginners? I have some really solid long-term crypto investments that I'm extremely excited about, but I want to get involved with the quick cash that is there to be made as well.
The Ultimate Beginner's Guide to Cryptocurrency Trading. In this guide, I will provide readers with the basic tools necessary in order to get started on their journey in cryptocurrency trading. Depending on the reception this guide gets, it is my intention to release more guides, with more advanced techniques. How much do traders make in a year? Is it worth quitting your day job and trading full time? What are their secrets and strategies? I'll answer all these questions in this guide. Make sure you read it until the end.
Day Trading Cryptocurrency: What You Need to Know First. In the above section, I briefly discussed what day trading cryptocurrency actually is and some of the crypto trading strategies people use. This section is going to talk about the mental side of trading, which is probably the most important thing to consider.
|
OPCFW_CODE
|
Zan Image Printer v5.0.9 | 5.16 MB
Zan Image Printer is a virtual printer driver that enables you to convert any printable document into standard BMP, TIFF, or JPEG images. The generated images can thus be easily shared and viewed without the applications that created the original documents. To convert a document, you simply select the Print command from any application that supports it, and then select Zan Image Printer instead of your regular (paper) printer. You then have the option to select the image format and output quality, and save the document as an image file. Additional features include text extraction, command line options, macro commands for output file naming, document name filtering, image cropping, trimming, inverting and downsize scaling. The run-application-after-printing-finished option makes it easy to automate batch image processing.
· Multiple printer instances, multi-threaded printing, multiple user sessions
You can install multiple copies of Zan Image Printer on your computer, each with its own unique settings. This is a handy time saver because you can switch between different printer settings easily. It is also great for office environments because the IT department can configure the settings for the rest of the office.
Zan Image Printer is multi-user aware: each user can have their own settings with the printer, or you can configure Zan Image Printer so that all users share the same settings.
If you are printing large quantities of documents, you can even load balance print jobs across multiple Zan Image Printer instances for faster performance.
· Support for running as an unprivileged "normal" user
To print to Zan Image Printer, a user doesn't need to be member of the power users group or the admin group which improves the security of the system. This makes it easy to deploy Zan Image Printer in security conscious network environments.
· Print to TIFF image
TIFF formats include 1 bit per pixel (Monochrome), Grayscale, 256 color and 24 bit true color. Compression methods include CCITT Group 3 (1D Modified Huffman - MH, 2D Modified READ - MR), CCITT Group 4 (Modified Modified READ - MMR), Packbits (RLE), Deflate (Zip), LZW, JPEG and uncompressed.
Support for multi-page, serialized, and appending mode is included. The appending mode allows you to build a multi-page TIFF file by concatenating and merging pages from different printouts. A multi-page TIFF file can even contain pages from different document types!
Zan Image Printer can also be set up to create fax-ready files (TIFF Class F facsimile). With this capability, basically anything that can be printed can be used in a fax application.
· Print to JPEG image
Supports saving documents as true color and grayscale JPEG files at a user-selectable quality factor, good for photographic or scanned images.
· Print to PDF documents
Easily convert any document to a PDF file. The "Append to existing file" option allows you to easily combine and merge multiple documents into a single PDF file. For example, you can create a single PDF file whose first page was printed from Microsoft Word, the second from Internet Explorer and the last from Microsoft Excel.
· Print to BMP image
File formats include Monochrome, Grayscale, 256 color and 24-bit true color. RLE8 compression is supported for grayscale and 256 color images.
· Print to GIF
You can save as Monochrome, Grayscale or 256 color GIF file. GIF is widely used on the Web and is best for images with just a few colors and sharp edges.
· Print to PNG
PNG is a universal format and is supported by all modern browsers. Zan Image Printer supports Monochrome, Grayscale, 256 color and 24-bit true color PNG formats. Choose PNG when you will use the image for screen shots of Windows, or general web graphics.
· Print to JPEG 2000
JPEG 2000 is intended to be a new and improved image compression method that replaces JPEG files. It can operate at higher compression ratios, which means smaller files for you.
· Easy and simple programming interface
All options presented in the user interface can be controlled programmatically. The documentation includes numerous VBScript, VB, Delphi, C/C++, VC.NET/CLI, VB.NET and C# code samples to help you get started quickly!
Two programming models are available to developers including using the Win32 APIs to access the INI setting files directly, or simply calling the provided command line utilities from their application.
Developers can programmatically perform batch document conversion (e.g. Word DOC, Excel XLS or HTML files to image) by printing to Zan Image Printer. You can even automate the batch printing with a script file.
· Paper size
All standard paper sizes (A4, Letter, Legal, A0, etc.) are supported. In addition, Zan Image Printer allows you to create user defined paper sizes. Printing on large paper sizes can be done in seconds when printing in Black & White mode. This is especially useful for CAD users!
· DPI Resolutions
A wide range of DPI resolutions are supported:
75 x 75, 100 x 100, 120 x 120, 150 x 150, 200 x 100, 200 x 200, 240 x 240, 300 x 300, 360 x 360, 400 x 400, 600 x 600, 720 x 720, 1200 x 1200, 2400 x 2400, 204 x 98, 204 x 196, 96 x 96, 144 x 144, 288 x 144, 288 x 288, 240 x 144, 240 x 288.
· Informative transparent status dialog
The friendly and informative transparent status dialog displays a wealth of information about the current print job on screen as soon as a print job is started, and it is automatically dismissed after the printout completes.
· History folder and file name database
Both features let you quickly select the previously used folder and file names. Additionally, the user interface provides access to history list for many other functions.
· Dynamic file and folder generation based on macros
Macro commands can be used to dynamically define how the file names are generated - without having to manually name each file every time you print.
Advanced users can also generate filenames based on information within the document being printed by using regular expressions.
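A filename-macro expander of this kind might look like the sketch below; the macro names (%DOCNAME%, %DATE%, %TIME%, %PAGE%) are hypothetical stand-ins for illustration, not Zan Image Printer's actual macro syntax:

```python
import re
from datetime import datetime

def expand_macros(template: str, doc_name: str, when: datetime, page: int) -> str:
    """Expand %NAME% placeholders in an output-file template (illustrative)."""
    values = {
        "DOCNAME": doc_name,
        "DATE": when.strftime("%Y%m%d"),
        "TIME": when.strftime("%H%M%S"),
        "PAGE": str(page),
    }
    # Unknown macros are left untouched rather than silently dropped.
    return re.sub(r"%(\w+)%", lambda m: values.get(m.group(1), m.group(0)), template)

name = expand_macros("%DOCNAME%_%DATE%_p%PAGE%.tif", "report",
                     datetime(2024, 1, 2, 3, 4, 5), 1)
```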
· Print to text file, text extraction
This feature lets you extract text from any printable document and save the text to a separate text file either in addition to or instead of creating the image file. By saving your documents to a text file, you can easily index and search all your documents that could not be searched previously.
· Compress and save to ZIP file
You can package the generated files into an industry standard ZIP archive after creation. This makes it easy to keep related files together and makes storing data faster and more efficient.
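The packaging step can be illustrated with Python's standard zipfile module (the file names and contents below are made up):

```python
import io
import zipfile

# Package generated image files into a single ZIP archive; built in
# memory here, whereas a real print driver would write to disk.
pages = {"page1.tif": b"tiff bytes 1", "page2.tif": b"tiff bytes 2"}

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", compression=zipfile.ZIP_DEFLATED) as archive:
    for filename, data in pages.items():
        archive.writestr(filename, data)

# Reopen the archive to confirm both files were stored.
with zipfile.ZipFile(io.BytesIO(buffer.getvalue())) as archive:
    names = archive.namelist()
```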
· Automatically Email the printout after creation using DNS MX LOOKUP/MAPI/SMTP
Zan Image Printer makes it easy to instantly send files after printing them using a variety of configuration methods to match every need. The easiest method of configuring your email is the DNS MX LOOKUP method, which makes sending email as easy as 1-2-3 since you don't need to enter information about your SMTP server (or even know what an SMTP server is), and no username/password is needed.
Zan Image Printer can even search the document being printed for an email address, extract it from the text, and then automatically send the email to that address.
You can email the generated files as standard images or as ZIP file attachment to save bandwidth and make delivery easier and more reliable.
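Building such a message with the printout attached might look like this with Python's standard email library (addresses and contents are placeholders; actual delivery via DNS MX lookup, MAPI, or SMTP is product-specific):

```python
from email.message import EmailMessage

# Construct (but do not send) a message with the generated image attached.
msg = EmailMessage()
msg["From"] = "printer@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Your printout"
msg.set_content("The requested printout is attached.")
msg.add_attachment(b"fake tiff bytes", maintype="image",
                   subtype="tiff", filename="printout.tif")
```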
Homepage - http://www.zan1011.com/
|
OPCFW_CODE
|
Introduction Remote Procedure Call (RPC) is an inter-process communication technique to allow client and server software to communicate on a network. The RPC protocol is based on a client/server model. The client makes a procedure call that appears to be local but is actually run on a remote computer. During this process, the procedure call arguments are bundled and passed through the network to the server. The arguments are then unpacked and run on the server.
The result is again bundled and passed back to the client, where it is converted to a return value for the client's procedure call. RPC is used by several components in Windows Server, such as the File Replication Service (FRS), Active Directory Replication, Certificate services, DCOM, domain join, DCPromo and RDP, NLB and Cluster, Microsoft Operations Master, Exchange and SQL. The RPC Server An RPC server is a communications interface provided by an application or service that allows remote clients to connect, pass commands, and transfer data using the RPC protocol. A typical example of an RPC server is Microsoft Exchange Server. Microsoft Exchange Server is an application running on a computer that supplies an RPC communications interface for an RPC client.
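The bundle/unpack round trip described above can be sketched with Python's XML-RPC marshalling, one concrete RPC wire format (Microsoft RPC uses its own encoding, so this is an analogy, not the MSRPC protocol):

```python
import xmlrpc.client

# Client side: procedure call arguments are bundled for the network.
request = xmlrpc.client.dumps((6, 7), methodname="multiply")

# Server side: the arguments are unpacked and the procedure runs.
args, method = xmlrpc.client.loads(request)
result = args[0] * args[1]

# The result is bundled again and passed back to the client,
# where it becomes the return value of the procedure call.
response = xmlrpc.client.dumps((result,), methodresponse=True)
(value,), _ = xmlrpc.client.loads(response)
```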
An application will register its RPC server with the operating system’s End Point Mapper (EPM) service so that the remote client can locate the RPC server. When the application registers with the EPM it will indicate the IP address and TCP port that it is listening on. The RPC Client An RPC client is an application running on any given computer that uses the RPC protocol to communicate with an RPC server. An example of a typical RPC client is the Microsoft Outlook application. NOTE: In this document the terms RPC server and RPC client refer to the application running at both ends of an RPC communication. RPC Quick Fixes Common causes of RPC errors include: • Errors resolving a DNS or NetBIOS name. • The RPC service or related services may not be running.
• Problems with network connectivity. • File and printer sharing is not enabled. Use the following procedures to diagnose and repair common causes of RPC errors. Unable to resolve DNS or NetBIOS names in an Active Directory environment.
• Use the following commands to verify DNS is working for all DCs or specific DCs:
• To get a DNS status for all DCs in the forest, run the following command:
• DCDIAG /TEST:DNS /V /E /F:
• The '/e' switch runs the DNS test against all DCs in an Active Directory forest.
• To get DNS health on a single DC, run the command below.
• DCDIAG /TEST:DNS /V /S: /F:
• The '/s:' switch runs the DNS test against a specified domain controller.
• To verify that a domain controller can be located for a specific domain, run the command below.
• NLTEST /DSGETDC:
• Servers and clients that are receiving the error should be checked to verify that they are configured with the appropriate DNS server. Servers should not be pointing to their ISP's DNS servers in the preferred or alternate DNS server portion of the TCP/IP settings. The ISP's DNS servers should only be used as forwarders in DNS.
• Ensure that at least one correct DNS record is registered on each domain controller.
• To ensure that a correct DNS record is registered on each domain controller, find this server's Active Directory replication partners that run DNS.
• Open DNS Manager and connect in turn to each of these replication partners.
• Find the host (A) resource record registration for this server on each of the other replication partner domain controllers.
• Delete those host (A) records that do not have IP addresses corresponding to any of this server's IP addresses.
• If a domain controller has no host (A) records for this server, add at least one that corresponds to an IP address on this server. (If there are multiple IP addresses for this server, add at least one that is on the same network as the domain controller you are updating.)
• Name resolution may also fail with the "RPC Server is unavailable" error if NetBIOS over TCP/IP is disabled on the WINS tab in the advanced section of the TCP/IP properties. The NetBIOS over TCP/IP setting should be either enabled or default (use DHCP).
• Verify that a single-label domain name is not being configured. DNS names that do not contain a suffix such as .com, .corp, .net, .org or .local are considered to be single-label DNS names. Microsoft doesn't recommend using single-label domain names because they cannot be registered with an Internet registrar and domain members do not perform dynamic updates to single-label DNS zones. The knowledge base article 'Clients cannot dynamically register DNS records in a single-label domain' provides instructions on how to configure your domain to allow dynamic registration of DNS records in a single-label domain.
The RPC service or related services may not be started
Verify the status and startup type for the RPC and RPC Locator services on the server that gets the error:
• By default, Windows Server 2003 domain controllers and member servers should all have the RPC service started and set to Automatic startup, and the RPC Locator service stopped and set to Manual startup.
• Windows 2000 domain controllers should have the RPC and RPC Locator services both started and set to Automatic startup, while Windows 2000 member servers should have the RPC service started and set to Automatic startup and the RPC Locator service started and set to Manual startup.
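The name-resolution checks in this section can also be scripted; below is a minimal cross-platform sketch using Python's socket module (illustrative only, not a replacement for DCDIAG or NLTEST):

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if the locally configured resolver can resolve the name."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False
```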
|
OPCFW_CODE
|
I have a request to make of the community- it may seem like an odd thing, but it's something which has become important to me right now.
I want to know your impressions and opinions of the House of Netjer, a.k.a. HoN, KO, or the Kemetic Orthodox Faith.
Right now, I'm trying to gather a sense for the general community's response to this expression of Kemetic religion, and that temple's relative position within the wider community. To that end, I want to know what you, specifically, think of them. I want to know if you think well of them, or if you don't, and- as much as you're comfortable in telling me- why. I want to know if you've ever had any experience with them, or what you think of their approach to the religion, their internal structure, their membership, their website, and/or their effect on the rest of the community. I want to know if you have no opinion of them either way, and I want to know if you've never heard of them before. If you've never heard of them before, and this inquiry prompts you to go check them out, I'd like to hear your first impressions.
You don't have to answer all of these questions if you don't have time, or don't want to do so- just tell me whatever you think is most important. And you're free to change your mind later on, of course- I just want to know what you think right now. I will not try to convince you of my own views, either- that's not what I'm after.
I want to hear from people of any, all, and no temple affiliations. I'd like to know whether you consider yourself Kemetic, and if you identify strongly with any particular group(s) within the community. But no matter what, I want to hear from you.
In short, I want to know everything and anything you're willing to tell me about HoN and its relationship with the rest of the community- and just in case anyone has any concerns: I will swear on my life's blood, on the foundations of ma'at, and on the air which the ntjrw give me to breathe that I will never reveal any details of what any particular person has said to me regarding this interview. Nor will I ever reveal any trends of thought which I find within any Kemetic groups or organizations. I assure you that I am taking this quite seriously.
I would be very grateful for your assistance in this matter, and you may send your comments to me privately at my yahoo mailing account. The address will be email@example.com . If you'd feel better about posting your responses here, then please do so- but it would probably be better to do this privately through email.
Thanks so much for your help, and I hope to hear from you soon! This request will be crossposted in a few places- and feel free to forward it on to anyone you know who might want to add their voice.
|
OPCFW_CODE
|
Inverse quantile algorithm is non-contiguous
Not only does it give strange results at the end (and indeed, the Python implementation fixes that; sorry, not fluent at Java), but the ranges that are covered as q goes up are also non-contiguous.
I propose the following changes:
t <- 0, q <- $1 + q (\sum c_i.count - 1)$
for i <- 1..m
    if q \leq t + c_i.count
        if i = 1 : return c_i.mean
        if i = m : return c_i.mean
        low <- (c_{i-1}.mean + c_i.mean)/2
        high <- (c_{i+1}.mean + c_i.mean)/2
        return low + (high - low) * (q - t)/c_i.count
    t <- t + c_i.count
This has the property of being contiguous and returning precise values for $q = 0, 1$.
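To make the proposal concrete, here is a small Python sketch of it (my own transcription, not code from the repository); it assumes centroids arrive as (mean, count) pairs sorted by mean and that the running total t advances by each centroid's count:

```python
def inverse_quantile(centroids, q):
    """Proposed contiguous estimate of the value at quantile q.

    centroids: list of (mean, count) pairs sorted by mean.
    q: quantile in [0, 1].
    """
    total = sum(count for _, count in centroids)
    pos = 1 + q * (total - 1)          # rescale q to a sample position
    t = 0.0
    m = len(centroids)
    for i, (mean, count) in enumerate(centroids):
        if pos <= t + count:
            if i == 0 or i == m - 1:   # exact at the extremes
                return mean
            low = (centroids[i - 1][0] + mean) / 2
            high = (centroids[i + 1][0] + mean) / 2
            # contiguous: this centroid's `high` equals the next one's `low`
            return low + (high - low) * (pos - t) / count
        t += count
    return centroids[-1][0]
```

With this rule q = 0 and q = 1 return the first and last centroid means exactly, and the interior ranges tile the midpoints between adjacent centroids without gaps.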
I think that this algorithm doesn't account for situations with small counts well. Whenever a centroid has a single sample, we know that the quantile increments discontinuously exactly at the mean value for the centroid.
Also, there are two approaches for thinking about quantiles in a q-digest framework. In one approach (yours), the thought is that the samples for a centroid are uniformly spaced from some minimum value to a maximum value. The question becomes how we decide what the minimum and maximum are. One way to do this is to assume that the extent of a centroid on one side is proportional to the number of samples in that centroid, scaled by the total number of samples in that centroid and the next centroid on that side. Note that the centroid will no longer necessarily be in the center of this distribution. I think it is also good practice to account for a small gap between centroids, which also coincidentally deals with the fact that n samples should properly be considered to be n-1 units wide.
The second approach would be to consider the samples to be uniform between centroids. This means that we know what the upper and lower bounds are. We can then assume that there are (n_left - 1)/2 + (n_right - 1)/2 samples plus a gap.
Which approach is better is not clear to me.
That’s all fine and dandy, but your formulae don’t reflect your approach of it being uniform between centroids, as you still center it on the centroid.
And while I can understand it being non-contiguous, it being non-monotonic (which it currently is) is completely beyond me.
On Tue, Sep 20, 2016 at 1:56 PM, Alexander Sedov<EMAIL_ADDRESS>wrote:
it being non-monotonic (which it currently is) is completely beyond me.
non-monotonic is not the intent.
Need to check that.
Alex,
The latest implementation is considerably better behaved. I am working towards a release soon and will include a test for your pathology.
The new quantile/cdf algorithm uses the following diagram as an intuitive basis:
The fundamental idea here is that the solid red line represents our desired result. At the extremes, we interpolate between the first (or last) centroid and the recorded min and max values ever seen. It is assumed that each centroid is collocated with the median of the original data for the centroid and that the data is uniformly distributed between the centroids.
By definition, this form should give monotonic functions for both x -> quantile and quantile -> x.
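That scheme can be sketched as a piecewise-linear inverse CDF in Python (an illustrative reading of the description above, not the actual t-digest code; names are mine): knots at the recorded min and max plus one knot per centroid, whose cumulative weight counts all preceding samples plus half of its own.

```python
def quantile_to_x(centroids, q, xmin, xmax):
    """Piecewise-linear inverse CDF over sorted (mean, count) centroids.

    Each centroid mean is assumed to sit at the median of its samples,
    and data is assumed uniform between adjacent knots.
    """
    total = sum(count for _, count in centroids)
    knots = [(0.0, xmin)]
    cum = 0.0
    for mean, count in centroids:
        knots.append(((cum + count / 2) / total, mean))
        cum += count
    knots.append((1.0, xmax))
    for (q0, x0), (q1, x1) in zip(knots, knots[1:]):
        if q <= q1:
            if q1 == q0:               # degenerate segment
                return x1
            return x0 + (x1 - x0) * (q - q0) / (q1 - q0)
    return xmax
```

Because the knot quantiles are strictly increasing (counts are positive) and the knot values are sorted, both directions of the mapping come out monotonic.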
|
GITHUB_ARCHIVE
|
How do you get rid of the "white noise" on recordings?
Hi guys, I want to know how to get rid of the white noise that I get when I record through my computer.
I use the recording software "Audacity".
Does it only happen through your computer? You may need a better sound card.
You could also try a noise gate, but if it's only through your PC it's probably the sound card.
Often there's ambient noise that you've "tuned out" and don't hear while you're doing it but notice later on the recording. Pays to take a conscious, deliberate listen to the room before you start. Computers generally make noise from hard drives and fans.
"A cheerful heart is good medicine."
I don't know if anything is offered for Audacity. But I've used Cool Edit Pro II's (now Adobe Audition) noise reduction algorithms on some recordings digitized from cassette and low-SNR sources. It first scans the file to analyze it for "desired" info (music, voice) based on frequency, power and other parameters (performs an autocorrelation). In the second step, it uses the gathered info to separate noise and desired audio. What I've learned is that there is no real magic bullet to remove noise. If I try to get too aggressive in removing noise -- more than only a few dB of reduction -- the process clearly introduces annoying artifacts in the desired audio. I've learned not to expect too much from such a process. It's probably good for increasing intelligibility of voice at the expense of fidelity (spy-versus-spy or CSI magic), but is hard on the music.
The other thing that might be possible is companding of the signal. However, this will also change the nature of the desired signal somewhat by compressing the higher level signals -- the desired music -- and expanding signals below a certain threshold so they are lowered further in level - hopefully the noise. Does Audacity have a user configurable compander or similar non-linear dynamic processor?
-=tension & release=-
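The companding idea in the post above -- lowering everything below a threshold even further -- is essentially a downward expander, or noise gate. A bare-bones sketch (illustrative only, not an Audacity feature), working on a list of float samples:

```python
def noise_gate(samples, threshold=0.02, floor=0.1, window=128):
    """Scale each window toward silence when its RMS level falls
    below the threshold; leave louder windows untouched."""
    out = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        rms = (sum(s * s for s in chunk) / len(chunk)) ** 0.5
        gain = 1.0 if rms >= threshold else floor
        out.extend(s * gain for s in chunk)
    return out
```

A real gate would ramp the gain over a few milliseconds to avoid clicks at window boundaries; this sketch just switches it.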
The first thing to check is whether the noise always appears or whether you only get it when recording electric guitar or mic'ed voice/guitar.
If it always appears, you have to consider the computer to be the culprit. As has already been said, the chances are it's your sound card. I did have some moderate success, with one particular card, by moving it as far away from everything else as I could. Try muting all your inputs and record a few seconds, in Audacity. It will give you an idea of the basic noise level of the card. You could try switching one input, at a time, back on, and see how the various channels affect the noise level.
If the card is not producing the noise, it's obviously the environment that the computer is in. Set up your mic and record a few seconds of the environment ("silence"). Then, put your mic in a drawer or between 2 pillows and record a few seconds more. The second sample will remove most, if not all, of the environmental sounds. It will give you a good idea of how much the environment is contributing and how much is from the mic, itself.
You should just tell Kevin Federline to go home and take care of his kids...
Of course, as suggested by others, you should try to eliminate the noise at the source first. If you are recording using the built-in mic or the low-cost electret mic often included with PC systems, it's close to hopeless. You will need to get a better mic and get it away from the PC. And as noted above, many sound cards/on-board sound systems are not that great in terms of SNR performance.
-=tension & release=-
+1 to it being a difficult task if you have low grade gear.
Eliminating all unwanted noise can become a pretty much endless chase after the holy grail of signal purity. From my own experience (as an amateur) it seems that every single link in the chain can contribute some kind of hiss, crackle or hum. The usual supects include the quality of your house power supply, poorly shielded cables (which can pick up noise from all the other electrical activity), crummy sound cards, cheap microphones, amps, pedals, bad connections and contacts, etc etc. The computer itself can be contributing noise in a variety of ways too.
For instance, I've just been hooking up a system to record with, and working through some of the same issues. When I hooked up the keyboard I was getting a lot of noise from it into the mixer. Changing cables didn't help. But there was also a USB cable attached to the keyboard that's used when the keyboard acts as a midi controller. When it's being used to either provide midi input, or play it back, that cable works just fine, and doesn't seem to contribute any hum. But when the keyboard is providing audio through the output socket then, even though the midi signal is set to 'off' in the keyboard software, the midi cable still completes some sort of unwelcome circuit and transfers noise to the audio. Disconnect the midi cable and the noise on the audio goes away. But then you start noticing the lesser noise that accompanies the note decay from the keyboard..... That's probably going to be unavoidable with that model keyboard, and fortunately it's not enough to be a problem anyway.
Another difficulty cropped up with the mic. If I plugged it straight into one part of the chain the signal was weak and I had to crank up the setting so high further along that it started to cause hum. That was solved by plugging the mic into another place, where it was getting some sort of 'pre-amp' boost (at least that's how it seemed to me). Anyway, the stronger signal meant that I was able to turn the other setting down to a less cranked level, and that hum went away.
So it's pretty much a process of elimination. If the mic causes problems when plugged into the sound card, then it might work better if the audio was routed through a USB or firewire audio interface, or.... or... or.... there's no one answer unfortunately. Good luck with it all.
Aww man :( looks like it's my sound card cause I can plug my tele in and I still got it.
If I remember right, Under effects in Audacity they have a Hiss removal and Pop removal. May not clean it up all the way but it's a start
Immature? Of course I'm immature Einstein, I'm 50 and in a Rock and ROll band.
New Band site http://www.myspace.com/guidedbymonkeys
What type of Soundcard do you use ?
And how do you connect your sound equipment to the soundcard ?
If it is a standard PC or Laptop soundcard, you should always avoid the mic inputs.
They are always noisy and bad, as they are only designed for use with the mic of a headset and not for recording.
The line-inputs are fairly good as it is easy and low cost to design a good analog amplifier for line levels.
An external analog pre-amplifier or mixer (in front of line-in port of the soundcard) is always much , much better than the mic input amplifier of standard soundcard.
A soundcard for recording purposes (internal or external) has much better analog parts and analog->digital /digital->analog converters than a standard soundcard and a low cost analog mixer
But for a low noise standpoint I would say that a standard soundcard with a good external pre-amplifier/mixer is good enough for most home recording use.
Tanglewood TW28STE (Shadow P7 EQ) acoustic
Yamaha RGX 320FZ electric guitar/Egnater Tweaker 15 amp.
Yamaha RBX 270 bass/Laney DB 150 amp.
Has anyone tried this?
My PC is contributing a bit more than I want to the mix.
I'm sure that my limited experience (none) in recording is affecting my noise levels, but I definitely notice a lot of meter action before I start playing.
I'm curious as to whether this would be a simple fix to the problem?
Also, a layman's explanation of the pros and cons of using this would be appreciated.
I know building a sound booth is the way to go but that really isn't an option... :wink:
here are several methods offered for noise removal, including some free ones that looked a bit more involved.
I found my problem and my recordings are so much better.
Thanks everybody! :D :D :D :D :D :D :D
TwistedLefty: This is what I posted on the youtube page you linked.
"The problem with this noise removal is that it leaves a lot of artifacting. Basically it takes the sample noise and applies it with the phase reversed. So, anything within the frequency range of the noise will be affected. You can try re sampling the dead space and applying that with the noise removal and "sometimes" it will get rid of the metallic, robotic sound that it created on the first pass. Sometimes. Not always. Listen the the quality of the recording and ask why.........?"
What I meant to say was. Listen to the crap quality of the recording of the "tip" and you have to wonder why this guy is giving tips. Yes the noise removal in Audacity works if used sparingly. Otherwise it leaves something to be desired. Don't over do it. Find other ways to eliminate noise first and you should be fine.
I found my problem and my recordings are so much better.
For future reference, what did you do to fix it?
I wrapped a newspaper ’round my head
So I looked like I was deep
|
OPCFW_CODE
|
There are very few scenarios where you need to flash firmware from a different model, nor do I recommend you do it, as 95% of the time it will result in a brick. Today, however, I needed to as part of another project I’m working on (had to hotflash a BIOS chip from a different motherboard). It was quite tricky to get it to work, but for anyone else getting a “ROM ID doesn’t match” or similar error and you KNOW FOR CERTAIN that you have the correct ROM (or you are also trying to hotflash a different board’s BIOS chip), read on.
- ASUS motherboard using an AMI APTIO IV or APTIO V BIOS.
- The .CAP version of the BIOS you want to flash.
- An MS-DOS bootable USB or equivalent (use Rufus if you need to make one)
- AMI Firmware Update APTIO IV or V, whichever your board uses (or download both)
- ASUS BUPDATER.EXE. I found mine in the downloads section for my host motherboard.
- First & foremost, it would be best for you to make a backup in case you make a mistake.
- Extract the APTIO Firmware Utility somewhere, then navigate to \afu\afudos and extract (again) AFUDos.
- Copy AFUDOS.EXE to your DOS bootable USB (do both versions if you’re unsure, I named one AFUDOS4.EXE and AFUDOS5.EXE).
- Copy BUPDATER.EXE and your BIOS.CAP file to your DOS bootable USB.
- Make sure you have Legacy Mode enabled in the BIOS and boot from the USB.
- After MS-DOS or equivalent has booted, this would be a good opportunity to carefully switch BIOS chips (only if you are using this guide to hot-flash a different chip).
- Run: AFUDOS.EXE BIOSNAME.CAP /X
- Run: AFUDOS.EXE BIOSNAME.CAP /X /B
- Run: AFUDOS.EXE BIOSNAME.CAP /X /N
(These commands might also run together as “AFUDOS.EXE BIOSNAME.CAP /X /B /P /N” but I’m just posting my exact steps for how I got it working).
If you are using an Intel motherboard you might also have to do AFUDOS.EXE BIOSNAME.CAP /X /ME /MEUF or something (I was using AMD, so it wasn’t necessary).
- If all those succeed, run BUPDATER.EXE /G
In the top left corner it should now show your new ROM info.
Select your BIOSNAME.CAP and hit enter, follow the prompts.
* This step may be redundant, but mine wouldn’t POST without doing it for some reason.
- After this completes, voila, you now have a different ASUS BIOS on your BIOS chip and you didn’t need to buy a flash programmer (but I really should one day).
If anyone tries this with an Intel motherboard and the process needs to be amended, please feel free to add on to this guide.
|
OPCFW_CODE
|
skulski at pas.rochester.edu
Mon Jul 20 23:27:11 CEST 2020
thank you for posting it. FYI, I published a web page on Oberon emulators. I tried to pull all info which I saw floating around. I also discovered some gems or would be gems, which I tried to dust off on this page. This is a kind of reference info which is most useful if it is available, or it gets easily forgotten if not.
Open www.RiskFive.com and find the Emulators section in the left column. Then click on it. I apologize for the heavy style. It was meant to confuse the enemy and to scare away the friends.
Concerning possible modifications.
> Perhaps we should start discussing, WHAT you want to change. At the moment I see three different levels of HW modifications: easy, more difficult, impossible.
> Basically, see an emulator as virtualization. In virtualization you have a host machine (where the emulator runs on) and a guest machine (this is the FPGA board). I ignore „containers“ at the moment
What are "containers" for my reference?
> Easy: if you want to extend the RISC-5 instruction set in Verilog (eg add an autoincrement instruction) the modification of the emulator is rather straight forward. I neglect the fact that you have to modify the Oberon-07 compiler to use this new instruction, because this is not important for the emulator.
This is "easy"? Oh my goodness.
> More difficult: if you want to add IO mapped HW (eg Ethernet or Wifi) or memory mapped HW (eg color display) to your FPGA and the host machine has an equivalent of this HW, with a little bit of effort the emulator can be modified to map the IO register commands or new memory layout to the corresponding host HW.
Sounds a bit difficult.
> Impossible: if you want to add HW to your FPGA and the host machine does not have an equivalent HW (eg a neutrino sensor) then it‘s impossible for the emulator to mimic the new FPGA HW.
This is actually the easiest part. We do it all the time in physics. We assume a certain characteristics of the detector, generate simulated data (using GEANT, for example), and we read the simulated data into the analysis chain as if it was real.
I would not go that far. My goals are modest. I can write a fake input buffer which generates some Monte Carlo numbers, and pass these along the SW as if it was coming from the actual hardware. With a bit of detector knowledge it is quite easy to generate a semi-realistic Monte Carlo. Some folks try to make it look very realistic, but I would not go that far. My goal is to develop some display routines and UI software. The super-duper realism is not needed for this. The "sort of" realism is entirely sufficient.
Now I need to figure out which emulator is running well, and which one would be the easiest to modify by a dummy like myself. These two need not be the same. I have some sense from the offline conversations, but not a solid understanding yet.
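For reference, the "fake input buffer" idea might look something like this in Python (purely illustrative -- the pedestal, noise, and hit parameters are invented, and a real version would live in Oberon on the emulator side):

```python
import random

def fake_input_buffer(n_samples, pedestal=100, noise=5.0, seed=None):
    """Stand-in for a hardware readout buffer: integer ADC-like values
    around a pedestal with Gaussian noise, plus occasional large
    'hits' so display and UI code have something realistic to draw."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        value = rng.gauss(pedestal, noise)
        if rng.random() < 0.05:        # sparse simulated detector hits
            value += rng.uniform(200, 1000)
        samples.append(max(0, int(round(value))))
    return samples
```

Seeding makes runs reproducible, which matters more than realism when debugging display routines.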
|
OPCFW_CODE
|
DropMaster is a set of components that adds inter-application drag-and-drop support to Delphi and C++Builder applications on Microsoft Windows. It supports dragging and dropping text, graphics, and custom data formats, and ships with a collection of more than 40 sample applications that document the results of extended research into the drag-and-drop behavior of popular commercial applications. Complete source code is provided for all components, packages, and design-time editors at no additional charge, and documentation is supplied through an extensive context-sensitive online help system. Other features include one-step installation, automatic help integration, and dynamic component registration. The components are native VCL controls for Delphi and C++Builder, and they also permit drag and drop between windows within the same application. DropMaster's functionality is divided according to whether you want to let the user drag from your application to somewhere else (a drag source) or drop onto your application from somewhere else (a drop target).
The components also let the developer receive feedback while a drag-and-drop operation is still in progress, and modify it on the fly.
TDMTextSource handles dragging data from your application to another. You assign a TWinControl from your form to the component's donor property, detect the drag in the control's mouse-down event, and call the Execute method. The component then provides the appropriate content, which is dragged to the other application. For more sophisticated use there is a Text property to which you can assign any content you wish. TDMTextSource can also drag data in arbitrary formats through its CustomFormatData property. In general, you can offer as many formats as you like, and the target application accepts whichever of them it understands. For example, when you drag a cell from Excel, the data is made available in many formats, from plain text to a bitmap of the cell and more.
TDMTextTarget manages data dragged into your application from other sources. You assign a TWinControl on your form to the acceptor-control property and write an OnDrop handler to tell TDMTextTarget what to do with the dropped data. Natively it knows how to accept text in HTML, RTF, and URL formats as well as lists of files. TDMTextTarget can also accept arbitrary formats besides text via its custom-format property.
Click on the below link to download Raize Software DropMaster with Source Code NOW!
|
OPCFW_CODE
|
Take the complexity out of testing graphical user interfaces (GUIs) and human-machine interfaces (HMIs) – even in the face of product evolution and safety-critical applications.
Squish supports agile-oriented teams. Schedule routine or custom-triggered test executions, identify regressions before builds get to QA, and get that fast feedback on commits the team is looking for.
Seamlessly automate multi-technology applications or applications with more than one toolkit. Interact with UI controls of each type natively and automatically and focus your efforts on application quality.
Squish fully supports Behavior-Driven Development (BDD), an agile testing method which brings together technical and business project stakeholders to bring high-quality products to market.
Squish® features fully integrated BDD support, and is 100% compatible with the Gherkin (standard BDD) language. Create, record, maintain and debug BDD GUI Tests.
Squish GUI Tester features automatic test script recording and recognition of high-level interactions and objects instead of low-level events.
Insert verification points while recording or when refactoring scripts using Squish Verification Points and the Pick tool. Verify object properties, perform image comparisons, and validate table values.
Squish GUI Tester integrates recording, test execution and results, script debugging, object spying and advanced script editing and maintenance.
Drive your scripts using data from a variety of data sources. Even use the Make data-driven wizard to help.
Use Squish GUI Tester to execute sets of scripts, or batches, and review the detailed logging and execution results.
ALM, Test Management, Continuous Integration, Build Integration and Software Project Management.
Simplify test creation, maintenance and troubleshooting. Produce stable and powerful test scripts.
Seamlessly automate multi-technology applications, or applications with more than one toolkit, using Squish GUI Tester.
Advanced verification options of elements and groups of controls.
Identify custom controls or 2D/3D graphic plots and images with Image-based testing.
Squish® offers Optical Character Recognition support, a method of onscreen text recognition and verification that complements Squish’s already powerful Image-based and Object-based recognition capabilities.
Fully-integrated, one-click remote control solution for virtually any target.
Automated GUI Testing for native Windows applications with dedicated support for MFC, WinForms and WPF controls. Also supports automation via MSAA and UIAutomation.
Automated cross-platform GUI Testing for AWT, SWT, RCP, Swing and JavaFx applications, Java applets and Java WebStart apps.
Includes support for embedded Web content on desktop Windows, Linux, Unix and macOS, as well as on devices or simulators/emulators running embedded Linux, QNX and more.
Automated cross-browser GUI testing for Web and HTML5 applications.
Support on desktop, mobile and embedded platforms, as well as iOS and Android devices and emulators/simulators.
Automated cross-platform GUI and HMI testing for applications written with Qt Widgets, QML, Qt Quick, Qt WebKit, and Qt WebEngine. Includes support for automating embedded WebKit content.
Support on Windows, Linux, Unix and macOS desktops, as well as devices or emulators / simulators running embedded Linux, QNX, WinCE, Windows Embedded, Android and iOS.
Automated GUI Testing for native macOS applications including support for embedded Webkit content.
Full toolkit-agnostic display automation for any GUI technology.
Supports all applications running on desktop, mobile or embedded devices capable of running a VNC server.
The first maintenance release in the 7.1 series keeps pace with recently released versions of popular GUI toolkits and comes with support for Qt 6.5 LTS and Java 19.
A custom, comprehensive qualification tool to gain the confidence you need to ensure your test processes meet safety standards.
Take a deep dive into the technical aspects of Squish.
The Evaluation Guide supports you throughout the process of an evaluation, from downloading the tool through installation and first use.
|
OPCFW_CODE
|
I just got a new Toshiba laptop with Windows 8 pre-installed. As I hate Windows 8 I decided to make a clean install with Windows 7. I got my Windows 7 installation on a bootable USB (it works with my other computers running Windows 7/Vista/XP), plugged it in, went into the BIOS and changed the boot priority to USB as primary. But it still doesn’t boot from the USB, and just boots right back into Windows 8. I have tried the same thing with a Windows 7 DVD and still can’t boot from the CD.
I spent all of last night trying to figure out this issue and eventually found the solution. All you have to do is disable UEFI mode and turn off the Secure Boot option in the BIOS, which allows the EFI-based PC to operate in legacy BIOS mode. Fortunately, nearly all modern EFI-based computers include a feature known as the Compatibility Support Module (CSM) that enables them to boot in legacy BIOS mode.
One of Microsoft’s new rules for Windows 8 is that any manufacturer that ships a Windows 8 computer must enable UEFI Secure Boot by default. The UEFI BIOS is designed for Windows 8, so if you need to use another operating system such as Windows 7, Linux or anything else, you have to disable UEFI mode and the Secure Boot option.
How to Set Windows 8 PC to Boot with Legacy BIOS Mode Instead of UEFI Mode?
When your computer is powered on, check the boot screen for the setup key (e.g. DELETE, F8 or F2) to enter the BIOS Setup Utility. If you don’t know how to access your BIOS (UEFI) Setup, please check out this article: How to Set Your Computer to Boot from CD or USB Drive.
In the BIOS Setup Utility, change the boot mode from UEFI mode to legacy BIOS mode (or CSM boot mode), and disable the Secure Boot option. Here are the steps for disabling UEFI Secure Boot on a Toshiba laptop:
- Power on the system and while the “TOSHIBA” logo appears, press F2 key to enter the BIOS Setup Menu.
- Select Security tab and set the Secure Boot to Disabled.
- Select Advanced tab and go to System Configuration.
- Set the Boot Mode to CSM Boot.
- Press F10 key to save and exit.
- Press F12 key at “TOSHIBA” logo screen to toggle between the bootable devices and choose the medium which you want to boot from.
Note: Once you are finished and want to return to normal operation, you need to revert the BIOS settings. The pre-installed Windows 8 OS will not boot if you do not revert them!
The exact menu option in your motherboard’s BIOS may differ but look for phrases like “Boot Mode”, “Boot List Option”, “UEFI/Legacy Boot Priority”, “UEFI Boot”, etc. For example, in ASUS desktop or laptop computers, you need to enable the “Launch CSM” option and disable the “Secure Boot Control” or “Fast Boot” option.
|
OPCFW_CODE
|
We are currently working on a POC for a project. We need 4 pins to change high/low depending on the angle of the joypad in the Dabble app, and 6 pins to go high/low depending on the buttons pressed in the app (high when pressed, low when released).
Very simple; I will add example code below, where the serial monitor already displays the angle from the app and also whether buttons are pressed, so all you need to do is add a function for a given action.
pin 21 = Up
pin 19 = down
pin 18 = left
pin 05 = right
pin 13 = start
pin 15 = select
pin 35 = X
pin 13 = O
pin 14 = Square
pin 12 = Triangle
Should be a very simple job for anyone with just a little experience in Arduino/ESP32.
Hi there, I just read your posting. It sounds like you need an expert in ESP32. I have a background in IoT and microcontroller programming and have been doing this for three years. I have done 15 to 20 good IoT p…
12 freelancers are bidding an average of $35 for this job
Hi, thank you for providing the code. I've reviewed it and understood your requirements correctly. I have huge experience in firmware development (C++ and C) using the Arduino IDE and ESP32 boards. Please contact me to…
As an electro-mechanical engineer, I bid on your project because I have the expertise and experience to provide high-quality work. I can achieve the results that you are asking for. So, respond to the offer so that w…
Hello, I am interested in your project and I hope to help you realize it. I am passionate about electronics and embedded development and I hope to share with you my knowledge and experience. I can do this work in 1 da…
Hello dear sir, I hope you are fine. I can complete your project. I have good experience in the embedded development field. Send me a message and let's start your project. Thanks
Hello, I'm Kit. I am capable of coding an appropriate function for your application. If you ever need to convert your ESP32 to a USB HID gamepad, I've recently done that too using an open source library. I would like to know mo…
Welcome to my profile, hope you are having an amazing day. I have been working on microcontroller-based projects for a long time. I am fit to work with any kind of sensor with Arduino, ESP32, and STM32. I…
Hi, my name is Simon, I'm an electrical engineering student from Argentina. I really like programming; I mainly program C/C++ for embedded projects, and Python for software that runs on PC (and some MicroPython too). I…
Greetings, I will work on your task with the highest priority and will finish within the scheduled deadline. I also have years of experience using C/C++ and Python for embedded as well as ESP32 specifically, and will be…
I used to work with this app and have a lot of experience with the gamepad; if you need, I can help.
|
OPCFW_CODE
|
Potabi Beta 3 Launch Issues: What is holding up release
Beta 3 was originally expected to take no longer than the gap between Beta 1 (released October 6) and Beta 2 (released October 22), with maybe an extra week - or at worst, two. That is a total expected beta cycle of 16 to 30 days (23 for the one-week scenario). Currently, we are on the 13th day of development of Beta 3. Surely not the longest time between releases, but it is concerning given how far behind we are with the actual system release, and what has to happen in order for it to function.
For the sake of transparency, we need to talk about what is going on, why there are problems, and what we will need to do to release beta 3.
Let's talk about how release cycles are being calculated, why they are being calculated, and what is going on. Due to Potabi's extensive history and slow early development before the Beta 1 release, the time before the October 6 beta release is ignored. Right now, we want to keep each beta under 20 days in development until 1.0 releases. There is a reason for this: 20 days is a lot of time, and we don't need the betas to be polished. We are using this 20-day metric plus the beta timelines to help plan for the 1.0 release, which is going to have a lot of interesting things going for it - considering it is a 1.0 release. At this time, if we get all of the next 5 planned betas released in this time frame, we could expect a release 116 days after October 6th - January 30, 2022 (assuming I don't stop working for the holiday season - to be honest, I probably will work on the project during the holidays - and assuming Beta X is the Beta 8 release, which it probably won't be).
Okay, so enough dilly-dallying. What is the issue, and why? Well, there are a few. First is the inefficient system of installing packages, which absolutely will be replaced. Right now, packages are added by a shell script, not via our original plan of simply using FuryBSD's uzip system, which kept giving error after error, issue after issue.
Second is the unreliability of the builds for our software. While it might not seem like it, Potabi is in a ton of tough shit because of vulnerabilities - ones we can't really control, and have to work around. FreeBSD 13.0 released with an OpenSSL flaw, and a couple of bugs that make working with NodeJS 16.x (a version we have to use for the Potabi Welcome Application) difficult. Doing the extensive testing to get it working is going to take a ton of time, but it is what we have to do. We don't have another option.
Hopefully we will get something working before November 11th. If you can, please help us test, or donate to the Patreon. Join the Discord server if you want to help us build our software. Link: https://discord.com/invite/8s8nNwndtF. If you want, support us via Patreon: https://www.patreon.com/potabi. Either would be amazingly helpful right now.
|
OPCFW_CODE
|
Hi, I am in the process of migrating a .NET nuke site to Plesk. I have a data export feature that exports transactions that have occurred to an access file. (.mdb) For some reason when I do the export the mdb file is blank. This worked on the old server. I need help fixing this. I will post some more details of the issue in PMB. Thanks.
Hi, I need a group or a person in charge of developing a site like "manta dot com". Developers who have made sites with MODxcms (or have ExtJS experience), please bid on this, because those who have experience with MODx -revo or -evo are ahead of others to be selected.
Must be knowledgeable about PHP-NUKE. I am unable to get into the Advertising Module Admin (not banners). 1) I want to edit the TERMS and the PLANS and PRICES (these links show up on the Advertising Menu). [log in to view the URL] If this is not possible, then I need a new Advertising Module installed and set
...my website to the Drupal 6 CMS. My site has two main sections - the portal section uses PHP-Nuke while the directories section uses regular php programming. Both use the same MySql database. There are currently two scripts that will do the PHP-Nuke to Drupal migration automatically. One is a script while the other is a Drupal 6 module. Either
...follows: an HTML form page with data stored in form fields loads a .cab file that replaces tags such as <<this>> with the data from the form fields within a Word .dot template file when opening that template within MS Word on the user's computer using an Internet Explorer add-on. The .cab file is loaded as an object using the <OBJECT>
I want to make a vehicle tracking system to be made in C#
RAP BARS DOT COM is set to be a hip-hop quotables website. Please see the attached files.
? Description: re-modeling an existing site by applying a design to its Dot Net Nuke (DNN) skin. [ ] [ ] ## Deliverables ? The required site is in the Arabic language ? DNN version 5 is required. Target URL:[ ]<[log in to view the URL]> Test URL: <[log in to view the URL]> [Design required is on this URL:...
...JPEG output. 2. We are looking for a Software look similar to what is shown in the movie SALT and on the new CIA commercials. 3. We DO NOT want any user interface that is PHP Nuke theme or contains, gears, rivets or anything considered a representation of hardware. 4. We will provide the framework for the skin as to where each button will go and the
I have an HTML Template that I need converted to PHP-Nuke. It is located here: [log in to view the URL] I also need to create about 5 pages that follow the template, use PHP-Nuke, but have no content in the middle... The pages must say the following on top: Tournaments, Chat, Sponsors, Join the Team, and Contact. I will be putting the content
Hi, I have migrated my .Net Nuke site from a non-control panel software environment to a plesk environment without any issues for the most part. (with the help of my host who is really good) There are some permissions issues which I think need fixing. (ie. I can't upload through a form and submit some other forms) There are some custom coded components
We are looking for a Windows Server Admin to configure our Web server / IIS architecture. We will have on this server: - Regular .NET projects/MSSQL - Dot Net Nuke Infrastructure - Regular PHP/MySQL Infrastructure - Joomla Infrastructure We already tried to install all this ourselves, but couldn't really do it. So we need a Professional who can: 1. reinstall
We need a Website similar to 321ch33se dot com (33 stands for ee) Following Details are required: #1 Index Page - including - Shot Image Function - latest, best voted and most comment Images #2 Photo Detail Page - Image including Watermark - Voting - Embedcode to Image / Directlink to Image (Code to share Image in Social Networks) - Comments
I need to make a website like blue track dot com
I need to implement an "Article Spinner" in one of my dot net based desktop application tool, which will spin the content. NOT Looking for a synonym/word generator, but something, that acknowledges phrases as well and change automatically. 2 simple boxes to load content and get spinned content. Thast's ALL. With the source code of course
I have a simple post nuke site that I use as a platform for some php applications. I recently accidently uploaded a web page environment using front page to my site and broke something. the pages still work to some extent, but there are obviously broken links or some other issues. I need someone to look it over and see if they can find the issues
|
OPCFW_CODE
|
/**
* Created by Crow on 11/25/18.
* Copyright (c) 2018 Crow All rights reserved.
* @author Crow
* @brief This class is used to control the listening socket,
* just as TCPConnection controls the connected fd.
*
* Tips: At first I planned to use getpeername(2) to get the peer address,
* but muduo uses
*     int accept(IPAddress &addr)
* to get the address instead, which saves one syscall.
* The test for this class is in /test/acceptor_test.cc
*/
#ifndef PLATINUM_NET_ACCEPTOR_H
#define PLATINUM_NET_ACCEPTOR_H
#include <atomic>
#include <memory>
#include <functional>
#include "reactor/channel.h"
#include "net/socket.h"
namespace platinum {
class Socket; // forward declaration
class Channel; // forward declaration
class IPAddress; // forward declaration
class EventLoop; // forward declaration
class Acceptor {
public:
using NewConnectionCallback = std::function<void(int, const IPAddress &)>; // for connection callback
Acceptor(EventLoop *loop, const IPAddress &address);
~Acceptor() = default;
void Listening(); // start listening
void HandleEvent(); // deal with the readable event -> Get Connection
void set_connection_callback(const NewConnectionCallback &callback); // set the callback invoked on a new connection
private:
bool IsListening();
EventLoop *loop_; // use loop to control resource, like AddChannel()
Socket listenfd_; // RAII Handle, Listen Socket, the same as connfd_ to TCPConnection
IPAddress address_; // IPAddress, Bind with this
std::unique_ptr<Channel> channel_; // std::unique_ptr to manage channel
NewConnectionCallback callback_;
std::atomic<bool> listening_;
};
}
#endif //PLATINUM_NET_ACCEPTOR_H
|
STACK_EDU
|
Titles with de jure vassals transforming into titular titles
In my current game I won the crusade for the Kingdom of Andalusia, which had owned about 75% of the land in Iberia for hundreds of years. During inspection of my new holdings, I noticed many of the kingdoms in Iberia were titular. My two questions are:
What is the reason for the Kingdom of Portugal's titular title?
Is there any way to restore its de jure duchies to this title?
P.S. I apologize if this is a duplicate; I don't know the name of this mechanic, so I was unable to search for an answer on Google.
De jure kingdoms are not constant. There is a game mechanic called de jure drift. When a complete duchy is controlled by a different kingdom than its actual de jure kingdom for over 100 years, that duchy's de jure kingdom changes.
Apparently all of Portugal was held by Andalusia for so long that every county drifted into the de jure territory of Andalusia, and not a single de jure duchy of Portugal remains.
If you want to revive the kingdom of Portugal, you could give the titular kingdom title to any of your counts or dukes. They will then become king of Portugal and their current holdings will become the new kingdom of Portugal. When they manage to permanently hold their duchies for 100 years, they will become a new de jure kingdom of Portugal (which doesn't need to be anywhere near the "historic" Portugal).
Just a warning if you do give away the kingdom titles: giving a kingdom-level title to a vassal will make them independent if you don't hold an emperor title.
Philipp explained the mechanics well, so I'll skip to how you can get it back to its original de jure land if that's what you want to do. Ultimately, for land to drift back into the kingdom the following conditions have to be met:
1. The full duchy must be owned (either as demesne or through vassals) by one realm.
2. The owner's liege cannot own the de jure title of the land (in your case, the Kingdom of Andalusia).
Since your character is a King, you'll have to decide whether you will become an emperor or give up the land to vassals. Both have different risks in terms of getting the land completely back to normal, but both will probably at least allow you to get the titles back to being landed.
If you decide to become an Emperor, you'll need to give the original land of the Kingdom of Andalusia to one vassal, and only that land (at least in Iberia). Do the same for all the titular kingdom titles. At this point, you can leave it alone and, if nothing happens, the land will drift back in one hundred years. However, the owner of the Kingdom of Andalusia will have a de jure claim on all the other land in Iberia, so he may decide to fight for what is his. If you do not have Absolute Crown Authority, there isn't much you can do about this besides combining a few of the smaller northern kingdoms so their owner can go toe to toe with Andalusia.
If you decide to remain a King, you'll need to do effectively the same as the above, but when you give the Kingdom title to your vassals they will no longer be your vassals. The one advantage of this is that you can set it up so you are allies with all Kingdoms but Andalusia and always join with the defender when they fight each other until the de jure drift finishes, effectively serving as a peacekeeper.
|
STACK_EXCHANGE
|