Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
SIGNAL: journey of noise-appreciation.
It started back in 2009 when I began to be interested in large industrial complexes. Slowly, I learned how these temples of work transformed cities and cultures in general; and how some things are made. A huge leap forward in this case was when I got an opportunity to spend a day in a foundry and actually see all that on my own. I found some strange visual beauty in it, which was very appealing to me. I started making trips just to visit various industrial sites, and it got reflected in my work: suddenly most of my photography and paintings were focused towards factories, power plants, smokestacks and so on - and it didn't end there. A couple months later I found myself strolling around freight railway terminals to listen the raw noises of moving wagons, getting on a roof of a forge to experience dark rumbling sounds of smashing steel, or sitting next to massive vents coming from a coal-fired boiler, that were echoing like organ pipes... the point is, that my taste in music converted as well.
I discovered musicians who incorporated recordings of such industrial sounds into their production, and eventually, I got into synthetic noises too.
All that resulted in a good sized desire to create something of this matter. The idea that I could orchestrate noises hands-on followed me for some years, until the fall of 2016, when I finally took a step ahead. I got a freeware virtual synthesizer, started to press buttons, turn knobs, and soon I preoccupied myself within a layer of audio mayhem. Sweet! I couldn't get enough of it..
Roughly at the same time, I learned that one you convert any electronic file into sound waves. It can be a .pdf, an excel sheet, but more importantly an image or a video.. then, one can edit it just as if they'll edit a song. This, obviously, cause a change of the original file (which fundamentally is not a sound) and will cause the file to be corrupted. When the corruption occurs in image data, it can result in a change of colors, blur, artefacts of noise... something most users would like to avoid - unless, you do all that on purpose.
The timing couldn't be better, as this visual noise works hand in hand with the audio noises.
It was a new world to me, and I enjoyed experimenting with it and producing deformed graphics. Similar to my musical progress, it was a path of a trial and error; but just as the music, I had fun making it. Soon, I felt a need to warp all of it into some package. Meanwhile, I was thinking about the load of information that is required to decode things properly. It was a couple of sleepless nights later when the project "SIGNAL" was born.
The video is made mostly from footage I captured in London when I used to live there. I intended it for a different project but ended not using it back then. Now, it came handy for the data-bending process - during which I noticed a pattern of a suppressed individuality across the city.
This ultimately triggered the idea, that the human beings creating values within a metropolis possess some similarities with the wireless signals which surround them – their importance is tremendous as a unit of hundreds or thousands, but it vanishes when the unit is split into isolated cases. Merged aspects of achievements and vanity might graduate into an overdose of sensations, which is overwhelming but needs to continue to maintain the machinery of productivity.
Audio-wise, I accompanied the synths sounds with recordings of various noises outside, such as plenty of elevators sound across the city and trains at a local railway station. I also sampled a part of Saul Williams' poem "Gunshots By Computer," to complement the story.
|
OPCFW_CODE
|
Why do some websites have language specific URLs, when the users still use the .com version?
I'm creating my first multilingual website and I have been looking at examples from other websites. So far I've noticed that many popular websites have a .com version and a kr.example.com or example.co.kr version. I know for a fact that people in that country (in my example it's Korea) still use the regular .com version.
What's the point of having a language-specific URL if the website automatically determines the language and allows a language selection if they determine incorrectly?
Are you suggesting that the "language-specific URL" is never used? (Although how would you know that?)
Note that the .kr TLD designates the country, not the language. This is important when considering that some languages are spoken in multiple countries and that residents of a country do not necessarily speak its main language.
@MrWhite Language-specific URLs are fine, but kr.example.com or example.com/kr/ would be much better than example.co.kr. example.co.kr could exist, but it would better refer to a company that only operates in Korea, or to the Korean branch of an international company, without any assumption on the language. Once you get the distinction between geographic and linguistic concepts right, you can combine them. For example, an English-speaking client living in Korea may order a product on the example.co.kr/en/ website.
@Tony Not sure why your comment was addressed to me? I was simply seeking clarification for a point the OP stated in their question, which seemed to imply that the website(s) in question allowed the user to change the language (or the language was auto-selected) without any change in the URL (ie. staying on the example.com host and no language subdirectory for all languages). (I assume this must have been the case, since otherwise there wouldn't be a need for the question. This would suggest a "fault" in the website(s) since language selection should result in a change in the URL.)
Automatic language detection doesn't work well. It is usually based on either the geographic region associated with the IP address or based on the Accept-Language parameter.
Geo IP databases are inaccurate for a small (but significant) percentage of users. Probably around 5-10%.
Geo IP doesn't work for users that are travelling abroad in a country where they don't speak the language.
Geo IP doesn't work for areas where multiple languages are spoken (like areas of Canada where they speak both English and French).
Accept-Language often defaults to English as that is the default language in which browsers can be downloaded. Users that speak other languages may not know how to change it, even if they know enough English to use the browser.
Accept-Language is often set incorrectly on borrowed devices and in Internet Cafes.
Even if you automatically detect the language, you need to give users a way to force it something else. Having separate URLs for different languages is a good way to do that.
I prefer to use language detection to tell users that another URL might be more appropriate for them. A prominent notification near the top of the page like:
You are currently on our French site, but your browser says you prefer English.
[ Switch to the English site ]
In addition, search engines don't support a single URL with multiple languages well. Search Engine crawlers typically don't send an Accept-Language header and only see the default language.
Google announced that they are now trying to crawl sites with different languages on the same URL. However, I don't know of any large sites that get good SEO traffic from multiple languages without having a set of URLs for each language.
Even if you have language detection on the .com, having multiple language URL choices is required to ranking in multiple languages on search engines.
For more information see How should I structure my URLs for both SEO and localization?
To answer your question, language-specific domain is one way to tell google certain pages are relevant for visitors with particular language or country. Thus for SEO purpose, google will only rank those specified pages to your targeted market visitors. In your case, your Korean users will only see its Korean version webpages; it helps improves customer conversion rate.
Beyond,you can use hreflang tag to tell google your multi-lingual site structure. It helps avoid penalty of having redundant content from multiple versions of webpages, and make google only display and rank specified webpages to different targeted countries. For example:
<link rel="alternate" href="http://example.com/" hreflang="x-default" />
<link rel="alternate" href="http://example.com/gb/" hreflang="en-gb" />
<link rel="alternate" href="http://example.com/gb/" hreflang="en-gb" />
<link rel="alternate" href="http://example.com/au/" hreflang="en-au" />
You can further conduct geo redirect to auto direct your visitors to correct Urls. Without visitors' efforts, it immediately brings local to visitors, and increases conversion.
|
STACK_EXCHANGE
|
can someone tell me how to do a proper Login/Register System?
I am Pretty new to this and I already tried around a few Things.
I also went through the tutorials.
I mean something like this:
A form with the Inputs E-Mail and Password, The Buttons Sign Up and Login. I know how to Event those Buttons, but how do I make like theres another form popping up instead of the same form the Login Thing is in? Because in my opinion that Looks Pretty weird.
I hope someone can help me!
What do you mean exactly.
Can you site an example, so we understand your question?
Your saying you have 1 button “login/signup” and you want the user to be able to click like “already have an account? login” and then then form switches to a login form, not a signup form? is that correct?
Start a new project. Use the wizard. There’s a quite well done header with integrated login widget you can inspect. (It’s not perfect, but it will help you get started.)
(Aside: Why do so many folks start new projects WITHOUT the wizard? That baffles me.)
Im sorry for the inconvenience.
My original Intention for using this whole “Software” was this:
Me and my Group which are medical (first-aid) responders for non-critical situations Need something to be able to communicate with each other, and also where everyone has Access to. That means we will Need a Login/Register form for the beginning. There should be two Buttons which are Register and Login, and two Inputs for E-Mail and Password. When you click on Register another Window should Pop up with Additional Information being saved in a database, and with the E-Mail and Password you would be able to log in. When you log in you would get redirected into the main form.
I really hope someone here can help me, and sorry for my Grammar but I am Pretty sleepy. For me it is 00:51 o’clock Right now.
Additionally it would be nice if someone could write to me on either Discord or Steam:
Steam: DrGreenTea / https://steamcommunity.com/id/DrGreenTea/
Thank you guys again!
this is pretty doable in the program, and is a good use case of bubble.
then main form with additional questions.
So - make the button, start edit workflow, >sign the user in/log the user in> navigate to page> “formpage”
(create a page"formpage")
on formpage, add all fields/questions you would need… then a button.
All the fields questions will need to be database objects as well…so in your “data tab” on the left, create a new thing “field1(or whatever you want to call it like “emergency location”)”…
Start/edit the button workflow. >make changes to a data thing> select the thing you just created > and update that info.
Does that make sense kinda?
Yes it does, already helped a bit though there are some Things which still dont work.
Could we stay in contact via Mail or something like that?
That would be way more comfortable for me.
And thanks for your help so far, really appreciate it!
This topic was automatically closed after 70 days. New replies are no longer allowed.
|
OPCFW_CODE
|
I'm using Twine 1.4 (because of the image handling) and after a lot of trial and error I decided to use Sugarcube 2 as "engine" (simply because of the somewhat easier management of variables in text and the way radiobuttons and checkboxes are designed + that you can set Setter Links with images) ... anyway ... here's my problem:
On many pages I want to have an image to the left (portrait format) and I want the text to flow along that image to the right. OK, that's quite easily done by prepending img with a <. The problem with that is that once the vertical end of the image is reached the text snaps under it. That's not what I want - I want the text to continue downward as if the image was longer.
In SugarCANE I managed to do that by defining a div style for the image with absolute positioning and by defining absolute positioning for the passage. By not setting a height for the div containing the image the text would not snap back. The left start of the passages was set to just where the image ended.
With SugarCube I cannot duplicate that. Even when I define my own div for the image - when I change the passage alignment (or even when I put the text to the right in a new div) the image is affected as well.
Since I am not really experienced in CSS ... can anyone throw an idea at me how to prevent the text from snapping back when the bottom of the image is reached? I'd also apprechiate any help in how to position the actual text without moving the image as well. I tried a table but I'm no good with HTML tables and while text did flow nicely the text column was way too small compared to the non-table test.
In addition to that the flora and fauna are a bit on the hostile side so accidents may occur - therefore there are also variations of those images showing injuries.
So the image to the left is in most cases of the same dimensions but has varying content.
Wouldn't that mean I'd have tofine tune each and every passge once the text is done to see how much space I need and - as you say - with the browser not in full screen so I can manipulate the width to see how the content is displayed?
Setting a fixed heigt has the disadvantage, I think, that the border around the passage will always be drawn around that area and I'd like to have it be drawn either around the image height (if there is little text) or around the text (if there is more text than the image) and have that adjust to the user's browser settings.
That's why I initially tried the table (and I may well end up with it) but I'm looking into table formatting options now because I don't really like the result I'm getting.
a. Add the following to your story's script tagged passage, if you don't have one yet then create one using the New Script Here content menu. The code inserts a new div element with an id of #profile into the HTML structure between the #story element and it's child #passages element, the reason it is inserted there and not within the #passages element is because the #passages element (and it's children elements) is destroyed and recreated each time the story moves between passages which means you would also need to re-create the #profile area each time. b. Add the following to your story's widget tagged passage. if you don't have one yet the create a new passage and assign it a widget tag.
The code adds a profile macro which will replace the contents of the new #profile area with whatever text you pass to the new macro. c. Start passage d. Next passage:
I did not use images in my example because I don't know how you are storing your images (embedded, external files, hosted files, etc) but it would not take much effort to change the above to handle images instead of text.
I also did not use CSS to style the above because I need to know how you wanted it to look.
All images reside in ./images relative to the story directory, which works fine in Twine 1 test mode (though not in Twine 2, sigh). All left side images are 512 by 800 pixels. There's other picture sizes but they are all to appear in context and in the "right" pane, so their positioning is never an issue.
Images on the left pane should be smack to the left of the displayed area and text in the right pane should start a few pixels to the right of the right border of the left image.
I've spent some time reading up on html tables and I've found a workable solution but I'll give your example a try as well and see which will work out better. Not being forced to clutter (almost) every passage with a table would be a welcome relief.
note: I am not at a machine with Twine installed so the following has not been tested. ... and you would change the call to profile within a passage to something like:
|
OPCFW_CODE
|
Medical Coding Best Practices: Exception-Only Coding
Medical coding software is an important tool, but without the right processes in place, technology is useless. Technology can only be powerful when it is backed by users who follow industry best practices. One of the most important best practices in medical coding is coding by exception-only. Despite this, most coders continue to practice the old way of manually reviewing every single encounter.
Defining Exception-Only Coding
Exception-only coding is a method of coding where coders only stop to review encounters with errors. They do not touch every single encounter. Rather than looking at every single encounter to find missed charges or errors, the coder waits for their rules-engine to flag them on an encounter that needs to be reviewed. This method leverages the power of technology to save time in FTEs and improve the accuracy and consistency of coders across the board.
To explain the importance of exception-only coding, we will give an example. Let’s think about a practice that handles 1000 encounters per day. The average coder who manually reviews every single encounter can process 25 encounters per hour. Let’s say that coder finds 2 missed charges, $25 each, that the software would not have caught. They saved the practice $50. It took them 40 hours to process all 1000 encounters, and with an $18 an hour salary, it cost the practice $720.
Practice B uses exception-only coding, and they rely on the technology to flag them on encounters with errors. With this practice in place, they can process 141 encounters per hour. They complete all 1000 encounters in 7 and a half hours, and it only cost the practice $135 – vs. the previous practice who spent $720. They missed the $50 in missed charges, but next time they will build a rule to capture those charges.
Was it worth it for the coders at the first practice to review every single encounter manually? Absolutely not! The second practice saved $535 per day by using exception-only. The charges that the technology did not catch were not significant enough to compare with the time savings they experienced.
Challenges to Exception-Only Coding
For coders, switching to exception-only coding can be very difficult. It’s a complete mindset shift. In the past, it was their responsibility to search through every encounter to find errors or missed charges. The sense of reward came from finding the needle in the haystack. It is hard to tell coders that what they used to find rewarding in their job is no longer necessary, and in fact, it costs the practice time and money.
There is a new sense of reward that can be found. It’s not in catching mistakes – they need to learn to rely on technology for that. Instead, they can be on the lookout for new rules to build. When they catch things that the rules engine is not catching, they can build a new rule to make sure they don’t miss any charges or make any denial-causing mistakes.
Using Analytics to Reinforce Exception-Only Coding
Another way to reinforce this process change for coders is to regularly monitor productivity through revenue cycle analytics. This not only allows you to see whether or not your coders are touching every single encounter, but it also allows you to monitor their productivity. You can set measurable goals for productivity improvements. Coders can find a new sense of reward by achieving a high number of encounters per hour.
White Plume loves to partner with clients to produce world-class revenue cycle outcomes. Our powerful rules-engine AccelaSMART allows coders who are coding on an exception-only basis to be 3x more productive than their peers. Find out more about our products and the ROI they could provide for you.
|
OPCFW_CODE
|
OpenStack-Ansible supports deployments where either the control plane or compute nodes may comprise of several different CPU architectures
Mixed CPU architectures for compute nodes¶
OpenStack-Ansible supports having compute nodes of multiple architectures deployed in the same environment.
Deployments consisting entirely of x86_64 or aarch64 nodes do not need any special consideration and will work according to the normal OpenStack-Ansible documentation.
A deployment with a mixture of architectures, or adding a new architecture to an existing single architecure deployment requires some additional steps to be taken by both the deployer and end users to ensure that the behaviour is as desired.
Example - adding
aarch64 nodes to an
Install the operating system onto all the new compute nodes.
Add the new compute nodes to
Ensure a host of each compute architecture is present in
This host will build python wheels for it's own architecture which will speed up the deployment of many hosts. If you do not make a repository server for each architecture, ensure that measures are taken not to overload the opendev.org git servers, such as using local mirrors of all OpenStack service repos.
Run the OpenStack-Ansible playbooks to deploy the required services.
Add HW_ARCH_XXXX Trait to Every Compute Host in Openstack
Although most CPU hardware traits such as instruction set extensions are detected and handled automatically in OpenStack, CPU architecture is not. It is necessary to manually add an architecture trait to the resource provider corresponding to every compute host. The required traits are:
HW_ARCH_X86_64 for x86_64 Intel and AMD CPUs HW_ARCH_AARCH64 for aarch64 architecure CPUs
openstack resource provider list openstack resource provider trait list <uuid-of-compute-host> openstack resource provider trait set --trait <existing-trait-1> --trait <existing-trait-2> ... --trait HW_ARCH_xxxxx <uuid-of-compute-host>
The trait set command replaces all existing traits with the set provided, so you must specify all existing traits as well as the new trait.
Configure Nova Scheduler to Check Architecture
Two additional settings in /etc/nova/nova.conf in all Nova API instances:
[scheduler] image_metadata_prefilter = True [filter_scheduler] image_properties_default_architecture = x86_64
image_metadata_prefiltersetting forces the Nova scheduler to match the
hw_architectureproperty on Glance images with the corresponding HW_ARCH_XXX trait on compute host resource providers. This ensures that images explicitly tagged with a target architecture get scheduled hosts with a matching architecture.
image_properties_default_architecturesetting would apply in an existing
x86_64architecture cloud where previously
hw_architecturewas not set on all Glance images. This avoids the need to retrospectively apply the property for all existing images which may be difficult as users may have their own tooling to create and upload images without applying the required property.
Undocumented Behaviour Alert!
Note that the image metadata prefilter and ImagePropertiesFilter are different and unrelated steps in the process Nova scheduler uses to determine candidate compute hosts. This section explains how to use them together.
image_metadata_prefilteronly looks at the HW_ARCH_XXX traits on compute hosts and finds hardware that matches the required architecture. This only happens when the
hw_architectureproperty is present on an image, and only if the required traits are manually added to compute hosts.
image_properties_default_architectureis used by the ImagePropertiesFilter which examines all the architectures supported by QEMU on each compute host; this includes software emulations of non-native architectures.
If the full QEMU suite is installed on a compute host, that host will offer to run all architectures supported by the available
qemu-system-*binaries. In this situation images without the
hw_architectureproperty could be scheduled to a non native architecture host and emulated.
Disable QEMU Emulation
This step applies particularly to existing
x86_64environments when new
aarch64compute nodes are added and it cannot be assumed that the
hw_architecureproperty is applied to all Glance images as the operator may not be in control of all image uploads.
To avoid unwanted QEMU emulation of non native architectures it is necessary to ensure that only the native
qemu-system-*binary is present on all compute nodes. The simplest way to do this for existing deployments is to use the system package manager to ensure that the unwanted binaries are removed.
OpenStack-Ansible releases including 2023.1 and later will only install the native architecture qemu-system-*` binary so this step should not be required on newer releases.
Upload images to Glance
hw_architectureproperty is set for all uploaded images. It is mandatory to set this property for all architectures that do not match
It is recommended to set the property
hw_firmware_type='uefi'for any images which require UEFI boot, even when this implicit with the
aarch64architecture. This is to avoid issues with NVRAM files in libvirt when deleting an instance.
Architecture emulation by Nova¶
Nova has the capability to allow emulation of one CPU architecture on a host with a different native CPU architecure, see https://docs.openstack.org/nova/latest/admin/hw-emulation-architecture.html for more details.
This OpenStack-Ansible documentation currently assumes that a deployer wishes to run images on a compute host with a native CPU architecure, and does not give an example configuration involving emulation.
|
OPCFW_CODE
|
import { createImmerReducer } from "utils/ReducerUtils";
import {
ReduxAction,
ReduxActionTypes,
UpdateCanvasPayload,
} from "@appsmith/constants/ReduxActionConstants";
import { MAIN_CONTAINER_WIDGET_ID } from "constants/WidgetConstants";
import { UpdateCanvasLayoutPayload } from "actions/controlActions";
const initialState: MainCanvasReduxState = {
initialized: false,
width: 0,
height: 0,
};
const mainCanvasReducer = createImmerReducer(initialState, {
[ReduxActionTypes.INIT_CANVAS_LAYOUT]: (
state: MainCanvasReduxState,
action: ReduxAction<UpdateCanvasPayload>,
) => {
const mainCanvas =
action.payload.widgets &&
action.payload.widgets[MAIN_CONTAINER_WIDGET_ID];
state.width = mainCanvas?.rightColumn || state.width;
state.height = mainCanvas?.minHeight || state.height;
},
[ReduxActionTypes.UPDATE_CANVAS_LAYOUT]: (
state: MainCanvasReduxState,
action: ReduxAction<UpdateCanvasLayoutPayload>,
) => {
state.width = action.payload.width || state.width;
state.height = action.payload.height || state.height;
state.initialized = true;
},
});
export interface MainCanvasReduxState {
width: number;
height: number;
initialized: boolean;
}
export default mainCanvasReducer;
|
STACK_EDU
|
One of the biggest challenges in the data science industry is the Black Box Debate and the lack of trust in the algorithm. In the talk titled “Explainable and Interpretable Deep Learning” during the DevCon 2021, Dipyaman Sanyal, Head, Academics & Learning at Hero Vired, discusses the developing solution for the black box problem.
Dipyaman Sanyal’s educational background consists of an MS and a PhD in Economics. His career only becomes more colourful, with his current title being the co-founder of Drop Math. In his 15+ year career, he has been awarded several honours, including 40 under 40 in India in Data Science in 2019. Sanyal is the only academic to have led three top-ranked analytics programs in India — UChicago-IBM-Jigsaw, Northwestern-Bridge and IMT, Ghaziabad.
The Black Box problem
On one side of the debate are people wondering how to trust a model and an algorithm to make decisions for them without knowing what is in the little black box. The other side of the spectrum are people who argue that it’s like trusting a doctor to do surgery. “Why is it that we are holding AI on a different benchmark than humans?” Sanyal asked.
There is a greater demand for Explainable AI, XAI, especially among decision-makers on the side of the spectrum that don’t trust AIs completely. Surveys have shown that ⅔ of AI projects don’t go beyond the pilot phase because there is no trust in the black-box model. For Sanyal, the next leap in the AI industry will only arise after a significant amount of explainability tied into the world of AI.
The saving grace
Governments have also increased the regulation in the data science field to prevent potential misuse of crypto, GANs, deep fakes, etc. There are two aspects that come as a solution here.
The first aspect is interpretability – how to interpret the situation at hand and predict what will happen; to be able to look at an algorithm and say, “yep, I can see what’s happening here.” Interpretability is being able to discern the mechanics without knowing ‘why’, but it answers ‘how’.
Recent advances in AI have allowed algorithms to change and develop quickly, making it even more difficult to interpret what is underlying these models. In addition, DNNs are inherently vulnerable to adversarial inputs – this leads to unpredictable model behaviours. While this has caused a rush in deep learning research and implementation, the complexity of explanatory models has been rudimentary.
Prevailing XML techniques
XML techniques overcome the need to unravel the bits and pieces of more complex algorithms. These include:
- Feature importance – Shows the importance of every feature.
- LIME: Local Interpretable Model Agnostic Explanations – Learning interpretability model locally around the predictions; it is model agnostic.
- SHAP: Shapley Additive Explanations – Provides the shaping value for each feature by taking conditional expectations into account and calculating the Shapley value.
- PDP: Partial Dependence Plots – Shows the marginal effect of a feature on the predicted outcome.
The next steps to unravelling deep learning models:
- Gradient-Based Approaches – this takes a gradient descent approach where the higher the gradient for a given feature, the more sensitive the model scoring function is to the change in the respective input. The approach has two methods:
- DNNs: Deconvolutional Networks – a method that approximately projects the activations of an immediately hidden layer back to the input.
- Guided Backpropagation – the method combines vanilla backpropagation at ReLUs with DeconvNets.
- Axiomatic approach – this approach has some formal notions or properties that define what explainability or relevance is and looks if the neurons follow the notions to be ‘relevant’. This is used for:
- Layer-wise Relevance Propagation – This is used to understand NN or LSTMs. It redistributes the prediction function backwards using local redistribution rules until assigning a relevance score to each input variable. It is a layer-wise relevance propagation every step of the way. It follows the conservation of total relevance for the layer.
Source: DevCon 2021
- DeepLIFT – This approach is useful for the tricky areas of deep learning while digging into feature selection inside an algorithm. It works through a form of backpropagation. The model takes the output and pulls it apart by ‘reading’ the different neurons that develop the original output.
Source: DevCon 2021
- CAMs – Class Activation Maps replace fully connected layers with a GAP with a CNN layer to reduce it into a one-dimensional tensor and a single linear layer on top before feeding to the softmax. It is used to make CNNs more interpretable.
In the architecture, the GAP layer reduces each feature map into a single scalar.
Source: DevCon 2021
“Financial structures favour structural models that are easy to interpret by people,” Sanyal noted. “While deep learning algorithms have made great progress, further work is needed to attain appropriate perception accuracy and resilience based on numerous sensors.”
|
OPCFW_CODE
|
If it is properly controlled and governed, citizen development can solve the desperate shortage of dev resources. Picture this: your people have a great idea to accelerate how you deliver on new business requirements. All you need to make it happen is some professional developers. Easy, right? Unfortunately, not. Finding highly skilled developers in the current marketplace is a challenge for every enterprise and in every vertical. Sadly, you aren’t the only business in the world with an IT need … and for low-stakes, non-critical business processes, the waiting list is especially long. So, this is where citizen development comes to the rescue.
“a ready-to-go army – people who don’t need business analysts to translate what the business requirements are, because they are the business requirement!”
Of course, we need to be careful to not over-romanticize the solution. There are legitimate concerns about how you keep control when you let the business into the sacred realms of IT. However, they need to be put into context; with strong governance financial services, organizations have a ready-to-go army of dev resources waiting to be unleashed – people who don’t need business analysts to translate what the business requirements are, because they are the business requirement!
How it can work
Done well, this is about empowering your employees, safely, to drive business forward. Citizen development enables all layers of your organization to create their own apps – without needing in-depth IT knowledge. They are given access to the right tool sets to create individual application experiences within a team that’s governed by experienced IT oversight. A shared IT backlog burden between business and IT can be shrunk at speed, by using each other’s knowledge to close the gap. Business users no longer need to rely on their “standard” office tools, but can also create the tools they need without waiting for IT to help them:
- To keep control, we provide a low-code center of excellence (CoE) enablement team to market your low code environment within the organization in a federated model, while a centralized team handles value enabling and, on the platform, multiple business and IT teams can focuses just on delivering value.
- The solution also unlocks the potential of any platform to become rapid by promoting reuse. Keeping the use case for the rapid application development in place is key to determining the architectural agility of your organization.
What you’re going to see happening
The gap between business and IT closing
By sharing the IT backlog burden between business and IT, you can start using each other’s knowledge to keep up the application creation process and improve business productivity – enabling IT to really focus on the core business critical systems for financial services firms.
An increase in company agility
The future is here. Big tech players are jumping on the bandwagon, so it’s easy to integrate with their existing IT landscape. By removing the dependency with IT, businesses can be agile to the situation and adopt quick changes to shorten the time to value.
Your workforce empowered
Today’s workforce is tired of constraint. They love freedom and agility, and they have untapped skills to deploy that will improve their own job satisfaction. By giving them the freedom to realize their great ideas, you empower them to empower your organization.
Be in control
With a community of expertise (CoE) structure you are in full control. A CoE guides your citizen developers to follow the platform guardrails, and coaches them to reuse existing components to ensure quality and integrity.
Be bold: adopt citizen development
At Capgemini, we have consultants who are masters on various low-code platforms. We do a free assessment and advise you on the right tool set based on your comfort level and preferences.
Instead of looking at citizen development as just a productivity enhancer, we think it’s about unlocking your people and your business to enjoy the full potential of rapid application development.
It’s time for a (r)evolution
By implementing a culture of citizen development, you can get the future you want and throw off the constraints imposed by scarcity of development resources. By giving your users, and therefore your business, tools to create their own agile workspace, you do much more. In short, it’s time for a (r)evolution.
Chief Technology Officer (CTO) Capgemini FS Benelux at Capgemini
Solution Architect at Capgemini
|
OPCFW_CODE
|
Our users have expressed a need for a cascading lookup within the Document Panel on custom document type.
1. I've created a custom document type that extends the Document content type.
2. I've created a form using Nintex
3. Within the form, I'm using a cascading lookup that is filtered by the control. (this works fine on the preview)
4. I've also created the list and added the custom content type to it.
Now my question is where is the form published so I can replace the doc panel form in the Doc Information panel settings?
Solved! Go to Solution.
This is really interesting Jan!
I've done a lot of stuff with documents. Not sure if this helps. But when I was doing something similar to this I was using InfoPath to create a custom DIP (Document Information Panel) the form would show up on top of the document.
Thank you Melissa,
Yeah I've been able to do it via InfoPath (because when you save/publish in InfoPath it updates the DIP custom form url)
My problem is that I'm not at all proficient with InfoPath, so ideally, I'd like to use Nintex. Especially since it seems that it's an intended function given the link to edit the form
I think I'm just missing 1 step, which is to figure out where the custom form is saved to and update the Doc Info Panel form url.
Try this not sure if it will work but give it a try.
In Step 4 - Choose the middle option Using custom template
Edit the template by clicking on "Edit this Template" - I would personally do something simple and save it. But when save it don't use the quick save. Go to File -->Publish --> Click on Workflow. I found that this works better then Quick Publish
Melissa, thank you for the suggestion. I believe you're suggesting to do the edit via InfoPath. While I may have to go down that road if I don't find a Nintex solution, I would like to avoid InfoPath as I've never really used it. (We have Nintex for our form needs )
Like I said, I have it all (I believe) just don't know when the Nintex publish process publishes the form so I can entered it into the Doc Info Panel url.
Ok well I've found the NFForm.xml that gets created under the Content Type -
However when I try to use that, I get an error while creating a new document:
Document Information Panel cannot open a new form.
The form template is not valid. If you are working with a document library on a SharePoint site, you can modify the library's document template, or you can publish a new form template to use instead.
The processing instruction in the file is missing or invalid.
I'll update this as I've have had a response from Nintex. Currently, Nintex does not provide this functionality so the only option is to use InfoPath to modify the Document Information Panel.
I'd agree that it would be really nice to have Nintex Forms for the Document Information Panel, however, you are reaching outside of the realms of SharePoint and into Office, so it's a completely different platform.
You can achieve what you want using Infopath,
If this or any of the other responses answers your question please mark the thread as answered for other members please.
Thank you Ryan. I should have updated the thread. I investigated further and spoke in some detail to Nintex support. It turns out that this feature is not supported by Nintex. The only way to do it is the way you suggest - via InfoPath.
However, given that in Office 2016, the Document Info Panel has essentially gone away (as Microsoft baked it into the Document Properties) it's less relevant.
We (as well as a large disgruntled community) are still hoping that Microsoft will bring the DIP back but until then, we're looking at other solutions.
|
OPCFW_CODE
|
Whether "hockey" refers to ice hockey or field hockey
In the US and Canada, when simply saying "hockey", it is understood that one is referring to ice hockey.
Is it correct to say that in most (all?) other English-speaking countries, "hockey" refers instead to field hockey?
It does in the UK.
It depends. Fans of ice hockey in the UK often refer to it as "hockey". Australia seems to be similar to the UK. If you wanted to do any work yourself, you could look at naming of official bodies, leagues, etc. Is there any particular English-speaking country you're interested in? Because I don't think anyone's going to give you a list of usage in all 67 or so sovereign nations where English has some de jure or de facto status, to say nothing of expatriate communities elsewhere.
In the US, field hockey is not as popular a viewing sport as ice hockey and so field hockey games are rarely aired on TV here, whereas during ice hockey season there's always a game to watch. Field hockey in the US until relatively recently has been played at the scholastic and college level only by girls and women; very few high schools have boys field hockey teams but there have been girls field hockey teams for more than a century. As a result, of the people in the US who call field hockey "hockey" and ice hockey "ice hockey", most are women.
We don't often communicate in single words, so context will often be present. When it's not spelled out ("Going to a hockey game!"), even being in a certain country of region counts as context. The question is looking for an artificial clinical abstraction that isn't real language.
Yes, that seems a reasonable distinction.
Just looking as some URLs for national teams or organisations:
Ice Hockey
https://teamusa.usahockey.com/ (contrast https://www.usafieldhockey.com/)
https://www.hockeycanada.ca/
Field Hockey
https://www.greatbritainhockey.co.uk/ (contrast https://icehockeyuk.co.uk/)
https://hockey.ie/
https://www.hockey.org.au/
https://www.hockeynz.co.nz/
https://sahockey.co.za/
https://www.hockeyindia.org/
http://pakistanhockeyfederation.com/
Whether "hockey" refers to ice hockey or field hockey
Or "football" refers to the American pastime or the international game; or whether cricket refers to a game or an insect; or whether bat refers to a flying mammal or a club-like instrument found in some sports, etc.
The answer to your question is "context."
The question was Is it correct to say that in most (all?) other English-speaking countries, "hockey" refers instead to field hockey? // Try not to fixate on just the title and actually read the full question. The key to understanding a question is "context".
@user182601 The answer is also "the context". Any answer you would have given would have ended in the same conclusion.
|
STACK_EXCHANGE
|
Delete NA values from a data frame with R
I have a large scale data frame with ?_? values which dimensions are 501 rows and 42844 columns. Using R , i have already replaced them with NA by using this code below :
data[data == "?_?"] <- NA
So i have NA values now and I want to omit these from the Data.frame but something is going bad....
When I hit the command below :
data_na_rm <- na.omit(data)
I get a 0 , 42844 object as a result.
dim(data_na_rm) #gives me 0 42844
data_na_rm[1,2] #gives me NA
data_na_rm[5,3] #gives me NA
############################
data_na_rm[2] #gives me the title of the second column
data_na_rm[5] #gives me the title fo the fifth
What i have to do?? I've spend on this thing to many hours. I would appreciate if anyone could spend some time for this in order to help me.
na.omit will drop all rows that have any NAs anywhere in the row. You probably have some NAs in every row somewhere
data[data == "?_?"] <- NA ... this looks strange to me. Wasn't your intention to replace values in a single column?
First of all i want to thank everyone who have spend time for me and my develop issue.Tim Biegeleisen my intention was to replace the ?_? values with NA everywhere in the data_frame. I want to run a Bayesian model with Bugs on this data_set, so in order to work with Bugs/R_Jugs I have to replace these values with NA first and then omit them. Nonetheless i haven't thought that maybe this Data_frame includes at least one NA value in each row.
Like what JackStat said in the comments, you might have NAs in every row. Maybe you should test for that?:
# Some Data. All rows have an NA but not all columns
df <- data.frame(col1 = c(NA, 2, 3, 4),
col2 = c(1, NA, 3, 4),
col3 = c(1, 2, NA, 4),
col4 = c(1, 2, 3, NA),
col5 = c(1, 2, 3, 4))
# test whether an NA is present in each row
apply(df, 1, function(x) {sum(is.na(x)) > 0})
[1] TRUE TRUE TRUE TRUE
This will help you find which columns are contributing the most NAs. It sums up the number of NAs:
apply(df, 2, function(x) {sum(is.na(x))})
col1 col2 col3 col4 col5
1 1 1 1 0
Oh my god!!!! I have at least one NA value in each row... You have right William and thank you very much for your help. I must search now for what i have to do in order to handle this data.frame in order to run a bayesian model on R_Jugs.
@GiorgosK Glad I could help. Good luck!
@GiorgosK I added an update that might help you find if there are certain columns that are contributing a large number of NAs.
Thank you very much, I've already checked it that way but it is quite more complicated.... This data-set includes SNIPs in other words genotypes ex(AA AB BB).I don't care about columns... note that each line contains the unique genotype of an animal. So i can't erase columns because the genotype sequence wil change. Anyway... thank you very much...
@GiorgosK No problem. Good luck with everything!
|
STACK_EXCHANGE
|
ClipFlair Forums. Create discussion topics and get answers
Hi Hennifer, both of those activities happen to use an identical (bit-by-bit) captions.srt file (attached as ZIP)
Trying to load the SRT from the import captions toolbar button of Captions component also fails
Looking into it (at first look with bare eye it doesn't look to have a problem, but I've noticed it uses Mum:... to say who's speaking - wonder if I had added some preliminary code to load the Actor column and it fails [don't remember])
speaking of how one specifies actors, what are the ways people use to specify them in captions? I think I've seen the following:
are there others used?
I found out what the issue is with that SRT. It uses INVALID TIME FORMAT!
00.00.02,000 --> 00.00.04,500
Mum:Yes,Giuseppe spends his days here doing puzzles
it should write 00:00:02,000 --> 00:00:04,500 instead.
if you see
The SubRip file format is "perhaps the most basic of all subtitle formats." SubRip (SubRip Text) files are named with the extension .srt, and contain formatted plain text. The time format used is hours:minutes:seconds,milliseconds. The decimal separator used is the comma, since the program was written in France. The line break used is often the CR+LF pair. Subtitles are numbered sequentially, starting at 1.
Start time --> End time
Text of subtitle (one or more lines)
HOW DID THEY MANAGED TO DO THIS? DID THEY OPEN UP THE .CLIPFLAIR.ZIP FILΕ, then opened the nested .clipflair.zip for the captions component and EDITED captions.srt manually?
Cause if they had tried to write such time format in Captions component Start/End time columns they'd see popup saying "Input is not in a correct format." as in the attached screenshot
Many thanks George.
We will ask the students if they had received the error message.
We will let you know what they say.
I have now configured the social site to allow upload of .clipflair files too (saved by default like that on Mac, will set it now to also save to that by default on Windows too [instead of .clipflair.zip which will also be supported for load on Windows - Mac filedialog has issue with that one]).
Also now allowing upload of .clipflair,.mp4,.srt,.tts,.text (the .text one is the file extension for XAML text that the text editor component saves)
btw, did you have any news on how they had managed to make those files?
Cristina should meet the students next week. We will let you know what they say.
Thanks for your interest.
I'm having problems with opening a clip in one Polish activity - attached. There is a link to the clip from the clips gallery, but the clip wouldn't start, though it worked previously. Do you know what the problem could be?
Agnieszka, at the back panel of the activity you should copy and paste this URL:
Now you have this one
George, Agnieszka copied the URL from the browser. Could we make it so that the clip works with both URLs?
Hi Agnieszka, those are the Studio launch URLs we didn't find time to see together at Carna.
tells Studio to add a new Clip component and set the Media URL property of it to ...
... can be a full URL to a video on the web, or a partial URL to our Clip gallery like
or the even shorter
or even just
I was thinking already to make Clip component understand Studio's launch URLs and extract the correct video URL from those (reusing the same code that the Studio app itself uses), will see to it. The URL the Clip component will get from the above would be http://gallery.clipflair.net/video/Rosa/Rosa.ism/Manifest.
Similar trickery will have to do at "Open activity from URL" dialog to understand Studio launch URLs for activities, like
and extract the correct activity URL from those (reusing the same code that the Studio app/launcher uses), that is http://gallery.clipflair.net/activity/Retro.clipflair
When implemented it will be probably replacing automatically the Studio URL that the user typed/pasted in, not just handling it silently as is done with the various dropbox URLs (where it fixes Dropbox file download page URL that user may have used instead of Dropbox file URL - might make the Dropbox case work the same too and show the fixed URL to better educate the users)
Thanks for this explanation. I'll change the link, but I have to admit I didn't understand George's answer ;)
What's the easiest way then to get the right link to the clip to insert in the activity?
|
OPCFW_CODE
|
Fix broken profiler in 4.0
This is my stab at https://github.com/godotengine/godot/pull/59634. It is 99% @tavurth's solution, but I tweaked it a little in my 2nd commit to address the PR feedback from @Faless .
I wasn't exactly sure what the desired behaviours were in all cases, so let me know if I got anything wrong.
Here are the behaviours tested:
No scene playing
Action
Result
Editor opened (no scene playing)
Debugger is stopped, "Start" option is enabled
Debugger started (no scene playing)
Debugger is running*, "Stop" option is enabled
Debugger stopped (no scene playing)
Debugger is stopped, "Start" option is enabled
Scene playing
Action
Result
Scene is played
Debugger is not running, "Start" option is enabled
Debugger is started
Debugger is running, "Stop" option is enabled
Scene is paused
Debugger is paused, "Stop" option is disabled, graph is not cleared
Scene is unpaused
Debugger is running, "Stop" option is enabled, graph is not cleared
Scene is stopped
Debugger is running*, "Stop" option is enabled, graph is not cleared
Scene is restarted
Debugger is stopped, "Start" option is enabled, graph is cleared
*but displays no output and doesn't advance any frames because nothing is running
Note:
I would like to call specific attention to this row:
Action
Result
Scene is played
Debugger is stopped, "Start" option is enabled
This means that if you start the debugger BEFORE pressing "Play", when you play the scene the debugger will stop. I.e. you have to click "Start" on the debugger to get it running again for the current scene.
This means that if you want to profile the very first frame of your project, you will have to speedily click "Start" before the playing scene has finished loading. I don't know how important an issue this is, but its fiddly to fix and I didn't want to block this PR.
Bugsquad edit:
Fixes https://github.com/godotengine/godot/issues/44252
This means that if you want to profile the very first frame of your project, you will have to speedily click "Start" before the playing scene has finished loading. I don't know how important an issue this is, but its fiddly to fix and I didn't want to block this PR.
When you say "very first frame" you mean really first?
In general I've always looked at the first few frames of load-time to be out of sample anyway as they are pretty polluted.
I would say that if it's such a minimal effect, it could well be present in the engine pre-3.5.
Thank you for fixing the PR!
Sounds like its not a problem then, which is good, but figured I would flag it just-in-case :-)
Could you squash the commits? See PR workflow for instructions.
It would also be good to credit @tavurth as co-author by adding this to the commit message:
Co-authored-by: Name <email>
(you can find name and email in the "From" field of https://patch-diff.githubusercontent.com/raw/godotengine/godot/pull/59634.patch)
Done :-)
Thanks to both of you!
I missed it before merging but the co-authoring line didn't work for Git as it should have been "Co-authored-by:" with two hyphens. Too bad :|
No problem, I'm glad this is implemented!
|
GITHUB_ARCHIVE
|
package org.mayszak.utils;
import org.mayszak.service.DB;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Random;
import static java.lang.System.currentTimeMillis;
public class SampleDataUtil {
private static HashMap<Integer, String> distinct = new HashMap<>();
private static int distCount = 0;
private static HashMap<Integer, String> common = new HashMap<>();
private static int commCount = 0;
public static void loadWords(){
InputStream in = ClassLoader.getSystemClassLoader().getResourceAsStream("abunchofwords");
try {
BufferedReader reader = new BufferedReader(new InputStreamReader(in));
java.lang.String line = null;
while ((line = reader.readLine()) != null) {
java.lang.String[] someWords = line.split(" ");
for(java.lang.String word : someWords){
if(distinct.containsValue(word)){
if(!common.containsValue(word)){
common.put(commCount, word);
commCount++;
}
}else{
distinct.put(distCount, word);
distCount++;
}
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
public static String[] words;
public static String getString(int idx){
return words[idx];
}
public static void generateAStrings(int howMany){
words = new String[howMany+1];
for(int i = 1; i <= howMany; i++){
words[i] = generateAString().toLowerCase();
}
}
public static String generateAString(){
Random random = new Random();
int numwords = random.nextInt(10) + 1;
int commonwords = random.nextInt(numwords);
StringBuilder myword = new StringBuilder();
myword.append("<'importantmsg':'");
for(int idx = 0; idx < numwords; idx ++){
if(commonwords > 0) {
int dice = random.nextInt(2);
if (dice == 0){
int randIndx = random.nextInt(commCount);
myword.append(common.get(randIndx));
myword.append(" " );
commonwords--;
}else{
int randIndx = random.nextInt(distCount);
myword.append(distinct.get(randIndx));
myword.append(" " );
}
}else{
int randIndx = random.nextInt(distCount);
myword.append(distinct.get(randIndx));
myword.append(" " );
}
}
myword.append("'/>");
return myword.toString();
}
public static HashMap<Integer, String> prime(DB dbinstance, int recordcount) throws IOException {
return prime(dbinstance, recordcount, 1);
}
public static HashMap<Integer, String> prime(DB dbinstance, int recordcount, int startoffset) throws IOException {
HashMap<Integer, String> points = new HashMap<>();
long timerstart = 0;
long timerstop = 0;
long runtime = 0;
SampleDataUtil.loadWords();
//generate the payloads in advance so it does not affect our results.
System.out.println("Generating test data");
SampleDataUtil.generateAStrings(recordcount);
System.out.println("Done generating test data");
System.out.println("Starting write performance load test for " + recordcount + " records");
timerstart = currentTimeMillis();
float fastestavg = 0;
float slowestavg = 0;
float fastestbatch = 0;
float slowestbatch = 0;
int samplerate = 25000;
for(int i = 1; i <= recordcount; i++) {
String aword = SampleDataUtil.getString(i);
int seededid = i + startoffset;
//periodically measure performance to see if it degrades over time
if(seededid % samplerate == 0){
points.put(seededid,aword);
timerstop = currentTimeMillis();
runtime = timerstop - timerstart;
System.out.println("Insert time at " + i + " records: " + runtime + "ms");
float precisionms = (float)runtime / (float)samplerate;
System.out.println("Avg. insert time per record: " + precisionms + "ms");
if(precisionms > slowestavg)
slowestavg = precisionms;
if(fastestavg != 0 & fastestavg > precisionms)
fastestavg = precisionms;
if(runtime > slowestbatch)
slowestbatch = runtime;
if(fastestbatch != 0 & fastestbatch > runtime)
fastestbatch = runtime;
//reset the timer...
timerstart = currentTimeMillis();
}
dbinstance.put(String.valueOf(seededid), aword);
}
System.out.println("Range:");
System.out.println("Fastest avg insert: " + fastestavg + "ms slowest avg insert " + slowestavg + "ms");
System.out.println("Fastest batch insert: " + fastestbatch + "ms slowest batch insert " + slowestbatch + "ms");
return points;
}
}
|
STACK_EDU
|
Is saying "JSON Object" redundant?
If JSON stands for JavaScript Object Notation, then when you say JSON object, aren't you really saying "JavaScript Object Notation Object"?
Would saying "JSON string" be more correct?
Or would it be more correct to simply say JSON? (as in "These two services pass JSON between themselves".)
It would be more accurate to say they pass JSON encoded objects between them.
Anyone who says this ought to be reported to the Department of Redundancy Department.
@MasonWheeler: at least twice!
@MasonWheeler: +1: Have you ever heard sentences like "... the TCP/IP protocol..."?
We've been using "PIN numbers" in the UK for years, it never seems to bother anyone :-)
JSON is notation for an object. Not an object itself.
A "JSON Object" is a String in JSON notation. That's not redundant.
Saying "JSON String" would be more clear than "JSON Object". But they would mean the same thing.
"JSON Object" can be shorthand for "JSON-serialized Object". It's a common-enough elision of confusing words.
No.
Let's think of a real-world example -- "English" would probably be a good analog for "JSON" in that they both name the notation being used. Still, I think saying, "They spoke in English sentences" adds precision from "They spoke in English."
I wouldn't ding somebody for leaving it out, but I don't think it's redundant to include it.
A JavaScript Object Notation Object is just that. You need to qualify what type the object is of. If you just said "my script returns a JavaScript Object Notation", then it doesn't make sense.
JSON doesn't mean an object in itself, it merely relays what type of object you are dealing with, such as an XML object or serialised object does, etc.
All of these are strings, but we organise them in our minds as objects.
This might be a bit more of an English idiosyncrasy, but the rule of thumb I've always heard is something a long the lines of "The sentence should still make sense if you spell out the acronym." Thus, my vote would be that use of the phrase "JSON" makes sense on its own as the following:
The server returns JSON.
Still makes sense when it is spelled out
The server returns JavaScript Object
Notation.
The trick is likely going to be more in the grammar of the sentence than if "object" is included or not.
+1: Good point. JSON, by itself, is incomplete because it's not a noun. It's an adjectival phrase.
to turn that around - think of how odd it would be to say "The server returns a JSON"
Technically yes, JSON Object stands for "JavaScript Object Notation Object". But it has specific meaning - in fact, depending on context, it may mean either "string representing serialized JavaScript object", or "JavaScript object that can be sent or received as JSON string using some communication protocol".
In the first case, it actually means 'serialized JavaScript object'. In the second case, JSON is used as qualifier because not every object can be represented in JSON notation - one example is objects that contain function values.
So 'on paper' it looks redundant, but if you consider actual meaning it's not.
I would also like to point out that even though it's not a case of redundancy here, redundant practices with acronyms are being done all the time, especially with technical terms (e.g. ATM machine, TCP/IP protocol, UDP protocol, SMTP protocol, etc.).
By being redundant and restating the essence of the acronym it is easier to convey what the acronym relates to.
|
STACK_EXCHANGE
|
System.Xml.Linq provides a nice, functional way of querying XML however with F# operator overloading we can create a nice mini DSL to have even cleaner code for XML querying.
With this DSL we can mimic XPath and XQuery quite closely.
Consider the following simple XML:
<Vehicle xmlns="urn:auto/v1"> <Engine> <Part number="abc" type="piston" cost="4.0"/> <Part number="def" type="camshaft" cost="25.0" /> </Engine> <Body> <Part number="234" type="fender" cost="32.47" /> <Part number="423" type="door" cost="45.0" /> </Body> </Vehicle>
Say we want to find just the engine parts whose cost is greater than $5.00 then a typical XQuery would be:
xquery version "1.0"; declare namespace auto="urn:auto/v1"; for $p in /auto:Vehicle /auto:Engine /auto:Part where $p/@cost > 5.0 return $p
Now with some simple definitions, we can write F# code that mimics the above XQuery reasonably closely.
First we define three operators (“/-“ , “/+”, “@”) and then we define a simple function “auto” to give back a fully namespace qualified name. Finally we use list comprehension to perform the actual query:
let doc = XDocument.Load(file) let inline (/-) (x : XContainer) n = x.Elements n let inline (/+) (xs : XElement seq) n = xs |> Seq.collect(fun x -> x /- n) let inline (@) (x:XElement) n = x.Attribute(XName.Get(n,"")).Value let auto x = XName.Get(x,"urn:auto/v1") [for p in doc /- auto"Vehicle" /+ auto"Engine" /+ auto"Part" do if p @ "cost" |> float > 5.0 then yield p]
Once the operators are defined, the rest of the code is very clean and quite similar to the XQuery code.
Note another useful operation would be (“/*”) to get all the descendents of a node as in:
let inline (/*) (x : XContainer) n = x.Descendants n
Over the course of doing some work with actual XML data, I found the following helper functions to be useful:
let first xs = if Seq.isEmpty xs then None else Some(Seq.nth 0 xs) let localName (x:XElement) = x.Name.LocalName let value = function None->None | Some (y:XElement) -> Some(y.Value)
The “first” function can be used to get the first element of a sequence. It can be composed with any of “/…” operators defined above and with the “value” function. For example:
someNode /- ns"Child1" |> first |> value
|
OPCFW_CODE
|
We are tackling the idea of the future for Customer Success. As a history major my natural inclination towards planning is identifying the past. Part of being able to progress and move forward is embracing our past and acknowledging where we came from.
So where did Customer Success come from?
There’s a common misconception that it was Salesforce, in 2005, who was the innovative force that drove this new department. However, if we dig just a little deeper, we will find that Salesforce simply made it a success (all pun intended) but was not the first in creating this department.
Back in 1996, a company called Vantive took something from an idea to a concept:
John Luongo, Vantive’s CEO, had encountered a very innovative usage of his company’s application by a customer and wanted to bring that innovation back to Vantive. He hired Marie Alexander, who had created the new approach, to come and run Vantive’s services group. In 1996-1997, Marie created a new department, called Customer Success, and began introducing the team to prospects prior to the signing of the contract.
Ref: The Customer Success Association
Going back to the original question on the future of Customer Success – knowing how our field started, we should have a better idea of how to prepare ourselves for the future. At my company, we deal with customers who are looking to drive Transparency in the food Industry. This is no easy feat, but our technology has allowed us to make great strides in just a few short years.
As we have worked with our customers there has been a common trend that I have noticed. There will always be technically savvy customers who will present you with a challenging use case. This may be because of the type of engineering architecture they support or because of the complexities in a data structure. Regardless, it often times is required for a CSM to dig a bit deeper into our own technology.
Prior to my adventure in this field, I could not tell you what NiFi, Postman, or what an API endpoint was let alone how they work. Yet, at this juncture in my career, I have a background in Web Development – since I had done a Full Stack Coding Bootcamp at Northwestern University. Even our CSMs who have not done a coding boot camp, though, are facing the reality that in order to create success for their customers, they too must be successful in educating themselves in modern technologies, function, and lingo.
The future of CSMs is what I consider to be a hybrid between understanding web development but also has the knack to be customer facing. I recall my time at boot camp and realizing that the skills I had just learned would be something useful if I became an engineer. Do not get me wrong, I love coding but I also loved talking to customers. For a while, I felt un-motivated to continue that path of learning to be a web developer.
Luckily enough though my current company had just posted a role that combined my recently acquired skills but also involved working with customers. It was the role I helped to develop into TCSM (Technical Customer Success Manager). My SQL skills were put to use just as much as my knowledge of python and the ability to understand the level of effort that a feature request involved.
Just like anything, we must always be able to – but more importantly, be willing to change and adapt. The reality is that our field is changing so we need to educate ourselves more, we have to be willing to be the voice of engineering AND customer success. We ARE the bridging gap between technology and success after all. I ask you, what wouldn’t you do to ensure your customer is successful?
Gary Marroquin is one of the co-founders of the Chicago Customer Success Podcast. He is currently a Technical Customer Success Manager with Label Insight.
|
OPCFW_CODE
|
This was originally posted as a private reply, but really there's no
reason why I shouldn't go public with it.
-------- Original Message --------
Subject: Re: a first shot at some questions....
Date: Mon, 03 Apr 2006 22:15:09 +0100
From: Ivor Williams
To: M.B.Gaved <M.B.Gaved(a)open.ac.uk>uk>, t.heath(a)open.ac.uk
A. Your Open Guide
1. How would you describe the Open Guide to somebody who wanted to find out about it?
My answer to this question depends on how technical or otherwise, the
person is. For the completely non-technical, I describe the Guide as a
website with lots of reviews of pubs and restaurants, and useful
information about anything in London. For the more technical, I use the
word wiki, and explain (if they don't know the concept) that anyone can
2. Who is the anticipated audience for your Open Guide?
Who are your users right now?
I think it's quite a broad portfolio. The initial intention was
targetting the guide at geeks. As such, the initial subjects we were
covering were ones of interest to geeks, particularly London.pm.
However, I saw the guide as having a much broader scope, and started
writing pages that could be of interest to anybody, and the content has
developed along those lines.
In terms of the current user base, I think it's quite diverse. I keep
running into people who've already heard of OpenGuides. I have
personally had feedback from CAMRA, from an employee of London
Transport, and from trade unionists finding my page on the West London
Trades Union Club.
3. What do you see as the purpose of the open guides?
(feel free to get philosophical!) e.g. how is it different from other wikis/city guides?
I think the primary function is to inform. A second aim is to be
objective. A third is to provide a vehicle for feedback, and rapid
updating by the virtual community that exists on account of their
Other city guides tend to be commercial, pandering to the wishes of
paying sponsors, and of limited use and limited objectivity. Also,
because "anyone can edit", I feel comfortable that I can visit an
establishment listed in the guide, and update it with my findings.
Compared with other wikis, the structured metadata is what sets
OpenGuides apart. In particular, the geodata can be used for finding
distances and plotting points on maps. I've not seen other wikis that
can do this.
4. Are there rules and regulations users must follow?
How about your admin team (e.g. how do you make decisions)?
My rules for contributors are set down in the page "Wiki Etiquette",
which was entirely my own wording. Common sense is the principle that
applies - I translated this into the Wiki Etiquette page so that others
would have a reference point where we could all agree on our policy.
This page has had to evolve over time, with changes in software, changes
in copyright and licensing policy, and in the light of various kinds of
abuse and attacks.
When it comes to admin decisions, the "common sense" rule applies. If
we're unsure we'll probably make a change (or change the node back) but
keeping the controvertial revision available. Something that's blatant
spam or offensive, will just get deleted. More recently, I have been
keeping an admin log page for deletes, so that we have a record,
including recording the IP addresses of any spammers and suchlike. I
tend to email the other admins directly (not via a list) for matters
which are sensitive, e.g. a security hole I discovered early on.
B. Your role in the Open Guide
1. How did you come to be involved in the Open Guide?- can you tell me what you do?
This was through London Perl Mongers. I was one of the 3 founders of the
first guide - involved both as a major contributor, and as a software
In terms of roles, I do pretty much everything bar the hosting. I don't
have access to the box OGLondon is running on, but I run a mirrored copy
on a couple of my machines.
2. What was your goal when your Open Guide (or your
involvement in it) started? What are the current goals?
The original goal of a useful guide to London that people can edit and
keep up to date, has definitely been met, and is continuing to be met.
The goal of sharing data and building a community of mutually supporting
websites is being met, albeit from a specialist niche.
I'm finding it difficult to separate the goals of OGLondon and the goals
of the OpenGuides project as a whole, since I am involved with the big
picture, including software development. In many ways, I now think it's
important to keep a standard common code base that all guides can share.
It's tempting to look at ways of improving the software for London, but
this feels wrong to me. Many of the developments that have happened,
have been applied to individual guides, leading to a fragmented picture
- this has created work for developers merging these patches in order
for everyone to get the benefit.
3. How long do you see yourself being involved in your
This is one of my main hobbies and loves. I don't see myself losing
4. Have people used the Guide in any ways you
didn't expect? (and has 'vandalism' been a problem?)
The most extreme case of vandalism was in September 2005, when Brazilian
hackers managed to trash the London database. This took us offline for a
few weeks, and lost some updates.
We also get regular spam attacks, though we are looking at ways of
combatting them in software. Version history has been a godsend here, as
we can always see what was there before, and indeed "diff" the content.
The spam situation for London is under control, as enough admins watch
the recent changes, and spam gets deleted fairly promptly.
We've also had salesmen creating pages for their own establishments -
restaurants, aromatherapy, a chain of car showrooms, even TimeOut
magazine! What happens is that we tend to blockquote what has been
written as "Some anonymous contributor, presumably from XYZ, wrote the
following:" and add that one of our regular contributors will review the
There was also the skating wars - two rival skate clubs, one of which
was defacing the entry for the other, which resulted in a ban. And there
was the pedicabs page, where someone keeps changing some of the web
links so they don't work. I tend to spot edits like that, and reach for
the "delete" and file the IP address in the Admin Log. I don't actually
have the power to ban anybody, but I would drop a mail to those that do.
C. Publicity and outreach
1. Do you publicise your Guide? How?
Word of mouth, and internet. We don't have any paid advertising, as this
is not only a potential waste of money, but would also probably upset a
few of our community as getting commercial. We had the same reaction
when we tried running Google ads.
Sometimes if I'm in the mood in a pub or restaurant, I'll mention to the
proprietor that I have a website that does reviews, and will be writing
one when I get home. This has got me a free drink on a couple of occasions.
D. Future of the Guide
1. How successful do you think the project is? Which goals have been met? Which remain
In terms of London, I think the goal of a useful guide has been met. Our
coverage is patchy, but that is due to the distribution of our
contributors' homes and workplaces. Over time this will improve. I think
we have built a successful community of contributors. In terms of Google
search league ladders, we are well up there.
For OpenGuides as a whole, there is still much to do. Much of this is in
the realm of software development, but we could also do more in terms of
spreading the word across the globe. There are many cities that have
potential for a guide - it's just a matter of people who are there
linking up to create guides.
2. How long do you see the project going on for?
I see this as a continuous process, rather than a project with a defined
3. If someone told you they were planning to start an
open guide, what advice would you give them?
How serious about it are you? How much time have you got to put into it?
This is going to occupy you if you want to get it off the ground.
How many people have you got to help as contributors? You need a minimum
of 3 (counting yourself), ideally between 3 and 12. Once you have
launched, you will hopefully acquire new contributors.
How long before you can get to 100 pages? Set this as your first
milestone. Don't try and launch until you've got 100 pages.
If you are talking about siphoning content from somewhere else, that's
different - you may be able to launch straight away. If this is the
Don't worry too much about the technicalities of hosting - we can offer
that to you as a service if you need it (but if you want to host your
own guide, that's OK too). What we can't do for you is: (a) be there at
your city and (b) have all the local knowledge to write detailed pages.
Have a good look at the other guides that are out there, and make use of
the best ideas (in your mind).
Join the dev list if you are running a guide. That's the best way of
finding out what else is happening to OpenGuides elsewhere. Also hang
out on IRC, where you'll find plenty of others running guides.
|
OPCFW_CODE
|
Download oracle jdk 8 for ubuntu - Hp photosmart c6380 mac os x drivers
Textbook " Objects First with Java: A Practical Introduction Using BlueJ" is a textbook co- written by the developers of BlueJ and has sold hundreds of thousands of copies worldwide. How to Download Eclipse. The Java language has undergone several changes since JDK 1. Download oracle jdk 8 for ubuntu. 0 as well as numerous additions of classes and packages to the standard library. How to download and install prebuilt OpenJDK packages JDK 9 & Later. The new Oracle Technology Network License Agreement for Oracle Java SE is substantially different from prior Oracle Java licenses. This practical can be completed in a 3- hour session. Jun 08, · How to Install Oracle Java on Ubuntu Linux.
Existing Java SE 7 downloads already posted as of April will remain accessible in the Java Archive. 04 Bionic Beaver Linux. As detailed in the Oracle Java SE Support Roadmap. AdoptOpenJDK provides prebuilt OpenJDK binaries from a fully open source set of build scripts and infrastructure. Therefore expensive, having synchronous writes to disks is unnecessary not worth the added safety it provides. The JDK is freely available from Sun Microsystems ( now part of Oracle). 9 is supported in C5. Experiments PPA: - Nautilus 3. However using the sync option will lead to poor performance for services that write data to disks, YARN, such as HDFS, Kafka Kudu.
Again, you should always update your system first before you do anything else. Where do I download JDBC drivers for DB2 that are compatible with JDK 1. Step 1: Update Ubuntu. Oracle has recently disallowed direct downloads of java from their servers ( without going through the browser agreeing to their terms which you can look at here: Oracle terms).
Were it not for the GPL linking exception components that linked to the Java class e these instructions to download install the Java Runtime Environment ( JRE) for Linux x64. This tutorial will cover the installation of 32- bit and 64- bit Oracle Java 7 ( currently version number 1.
Xml file from your old location like this: Oracle' s OpenJDK JDK binaries for Windows macOS Linux are available on release- specific.
Learn what is supported on which platforms. Download Oracle Java JRE & JDK using a script. Download oracle jdk 8 for ubuntu. Xml file from your old location like this:. After April, Oracle will no longer post updates of Java SE 7 to its public download sites.
), follow these instructions:. Eclipse is a open- source application made by the Eclipse Foundation to help you write java better and it has become the most popular java editor.
A web application ( unlike standalone application, webapp) runs over the Internet. Apr 02, · How to Download Eclipse. 4 with SolusOS patches including for Ubuntu 13. The Java Development Kit ( JDK) officially named " Java Platform Standard Edition" is needed for writing Java programs.
Oracle JDK 7 and JRE 7 Certified System Configurations. I managed to find versions of the driver bundled w. 04, Xfwm4 with sync to vblank. Oracle is committed to offering choice lower cost of computing for end users, flexibility we cannot stress enough the. Operating Systems; Browsers; For Certified System Configurations of other versions of the JDK JRE Java Mission Control visit:. Sublime Text 2 PPA: - Sublime Text 2 is an amazing TextMate- like text editor. You can have in the end an empty connection list. Apr 16 · Important Oracle Java License Update The Oracle Java License has changed for releases starting April 16 .
The objective of this tutorial is to install Java on Ubuntu. May 03, · The filesystem mount options have a sync option that allows you to write synchronously. Download oracle jdk 8 for ubuntu. In CDH most writes are already replicated. To install Java 8 ( which will reach its end of life January! Supported platforms include Linux Solaris, macOS, ARM, Windows AIX. For Certified System Configurations of other versions of the JDK. Use these instructions to download and install the Java Runtime Environment ( JRE) for Linux x64. What are Oracle' s plans to support open innovation in Java? Download oracle jdk 8 for ubuntu. If you’ re going to be installing tools like NetBeans Eclipse to manage your JAVA based projects you’ re still going to need JDK8 installed. In order to import it, you just need to copy the connections. They seem to be very elusive and I hit many dead- ends at IBM' s website. If you are having trouble getting eclipse for your system here' s. From Oracle to IBM Ubuntu to Windows, Firefox to Chrome here' s what you need to launch the ELK stack.
The aim of this tutorial is to make beginners conversant with Java programming language. How to install Java 8 using the Oracle JDK. Sometimes, upgrading Oracle SQL Developer will not import your existing connections. Take note that Tomcat 9 requires JDK 8 and later. The implementation is licensed under the GNU General Public License ( GNU GPL) version 2 with a linking exception.
Operating Systems; Browsers; Refer to the Supported Locales document for a list of supported locales and supported writing systems for each platform. Download oracle jdk 8 for ubuntu. Oracle JDK 8 and JRE 8 Certified System Configurations Contents. 0, it has a known cipher issue ( that is resolved in 5.
Oracle is committed to offering choice flexibility, lower cost of computing for end users, we cannot stress enough the importance of using open standards whether. This installation configuration guide is applicable to Tomcat 9 possibly the earlier versions. Oct 19, · How to install Java 8 using the Oracle JDK. OpenJDK ( Open Java Development Kit) is a free open- source implementation of the Java Platform Standard Edition ( Java SE).
Oracle JAVA JDK9 was recently released. Java Tutorial for Beginners - This is the first article of w3resource' s Java programming tutorial. 0_ 45) JDK/ JRE on 32- bit and 64- bit Ubuntu operating systems. 4 specify additions , which uses Java Specification Requests ( JSRs) to propose , the evolution of the Java language has been governed by the Java Community Process ( JCP) changes to the Java platform. I managed to find versions of the driver bundled with some tools such as IBM Data Studio. These instructions will. We intend to to continue to support open- source and open standards. The new license permits certain uses such as personal use , development use at no cost - - but other uses authorized under prior Oracle Java. RHEL / CentOS / OL 6.
This page is your source to download update your existing Java Runtime Environment ( JRE, VM, Java Runtime), also known as the Java plug- in ( plugin), Java Virtual Machine ( JVM Java VM). Points to note: Cloudera strongly discourages using RHEL 5 for new installations. We will be installing the latest version of Oracle Java SE Development Kit ( JDK) on Ubuntu 18. This will be performed in three ways: Installing Java using the Ubuntu Open JDK binaries installing Java via PPA .
It is the result of an effort Sun Microsystems began in. However still many more applications tools rely on JDK version 8 to function.
Download cd musicas blogspot gratis infantil
Mercury sound card driver free download for windows 8
Best mechanical engineering schools in texas
Solution manual elements of electromagnetics sadiku 5th scribd
Oracle download Protected
Oracle Java 8 is now stable. Below you' ll find instructions on how to install it in Ubuntu or Debian via a PPA repository. The PPA supports both 32bit and 64bit as well as ARM ( ARM v6/ v7 Hard Float ABI - there' s no JDK 8 ARM Soft Float ABI archive available for download on Oracle' s website).
May 02, · Use this tutorial to Install Oracle Java 8 on Ubuntu 18.
|
OPCFW_CODE
|
/**Objeto Planilha
* @param {String} id da planilha
**/
var Planilha = function (id) {
this.planilha = (id == undefined)? SpreadsheetApp.getActive():SpreadsheetApp.openById(id);
this.paginas = this.planilha.getSheets().map(function (sheet) {return [sheet.getName(),sheet]}).transpose().toObject();
};
/**Coleta a notação da página completa
* @param {String} nome da página
* @return {String} notação A1
**/
Planilha.prototype.paginaNotacao = function(name){
var pagina = this.paginas[name];
var notation = 'A1:'+getNotation(pagina.getLastRow(),pagina.getLastColumn());
return notation;
};
/**Coleta o intervalo de uma página em matriz
* @param {String} nome da página
* @param {String} notação A1
* @return {Array} valores da planilha
* */
Planilha.prototype.getIntervalo = function (name,notation) {
return this.paginas[name].getRange(notation).getDisplayValues();
}
/**Coleta o intervalo completo de uma página em matriz
* @param {String} nome da página
* @param {String} notação A1
* @return {Array} valores da planilha
**/
Planilha.prototype.getPaginaIntervalo = function(name){
var notation = this.paginaNotacao(name);
return this.getIntervalo(name,notation);
};
/**Insere uma matriz em um intervalo de uma página
* @param {String} nome da página
* @param {Array|String} valores da planilha
* @param {Array|String} notação em string "A1" ou [row,col].
* @param {String} notação A1
* */
Planilha.prototype.setValor = function (name,value,notation) {
value = value.toMatrix();
notation = notation.a1notation();
var anchor = value.matrixLength();
anchor = [1,1,anchor[0],anchor[1]].a1notation();
notation = notation.translar(anchor).a1notation();
this.paginas[name].getRange(notation).setValues(value);
return 'ok';
};
/**Instancia o objeto Planilha
* @param {String} id da planilha
* @return {Planilha} Objeto Planilha
* */
function planilha(id){
return new Planilha(id);
};
|
STACK_EDU
|
/* * This file is part of libgq *
* Copyright (C) 2009 Nokia Corporation.
* Contact: Marius Vollmer <email@example.com>
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public License
* version 2.1 as published by the Free Software Foundation.
* This library is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
* 02110-1301 USA
\brief GConfItem is a simple C++ wrapper for GConf.
Creating a GConfItem instance gives you access to a single GConf
key. You can get and set its value, and connect to its
valueChanged() signal to be notified about changes.
The value of a GConf key is returned to you as a QVariant, and you
pass in a QVariant when setting the value. GConfItem converts
between a QVariant and GConf values as needed, and according to the
- A QVariant of type QVariant::Invalid denotes an unset GConf key.
- QVariant::Int, QVariant::Double, QVariant::Bool are converted to
and from the obvious equivalents.
- QVariant::String is converted to/from a GConf string and always
uses the UTF-8 encoding. No other encoding is supported.
- QVariant::StringList is converted to a list of UTF-8 strings.
- QVariant::List (which denotes a QList<QVariant>) is converted
to/from a GConf list. All elements of such a list must have the
same type, and that type must be one of QVariant::Int,
QVariant::Double, QVariant::Bool, or QVariant::String. (A list of
strings is returned as a QVariant::StringList, however, when you
get it back.)
- Any other QVariant or GConf value is essentially ignored.
\warning GConfItem is as thread-safe as GConf.
class GConfItem : public QObject
/*! Initializes a GConfItem to access the GConf key denoted by
\a key. Key names should follow the normal GConf conventions
\param key The name of the key.
\param parent Parent object
explicit GConfItem(const QString &key, QObject *parent = 0);
/*! Finalizes a GConfItem.
/*! Returns the key of this item, as given to the constructor.
QString key() const;
/*! Returns the current value of this item, as a QVariant.
QVariant value() const;
/*! Returns the current value of this item, as a QVariant. If
* there is no value for this item, return \a def instead.
QVariant value(const QVariant &def) const;
/*! Set the value of this item to \a val. If \a val can not be
represented in GConf or GConf refuses to accept it for other
reasons, the current value is not changed and nothing happens.
When the new value is different from the old value, the
changedValue() signal is emitted on this GConfItem as part
of calling set(), but other GConfItem:s for the same key do
only receive a notification once the main loop runs.
\param val The new value.
void set(const QVariant &val);
/*! Unset this item. This is equivalent to
/*! Return a list of the directories below this item. The
returned strings are absolute key names like
A directory is a key that has children. The same key might
also have a value, but that is confusing and best avoided.
QList<QString> listDirs() const;
/*! Return a list of entries below this item. The returned
strings are absolute key names like "/myapp/settings/first".
A entry is a key that has a value. The same key might also
have children, but that is confusing and is best avoided.
QList<QString> listEntries() const;
/*! Emitted when the value of this item has changed.
friend struct GConfItemPrivate;
struct GConfItemPrivate *priv;
void update_value(bool emit_signal);
#endif // GCONFITEM_H
|
OPCFW_CODE
|
package golcas
import (
"archive/zip"
"encoding/json"
"io/ioutil"
)
// PackReader reads data sets from a zip file in the olca-schema
// package format.
type PackReader struct {
reader *zip.ReadCloser
}
// NewPackReader creates a new PackReader
func NewPackReader(filePath string) (*PackReader, error) {
reader, err := zip.OpenReader(filePath)
return &PackReader{reader: reader}, err
}
// Close closes the PackReader
func (r *PackReader) Close() error {
return r.reader.Close()
}
// GetActor reads the Actor with the given ID from the package
func (r *PackReader) GetActor(id string) (*Actor, error) {
fname := "actors/" + id + ".json"
for _, f := range r.reader.File {
if f.Name != fname {
continue
}
reader, err := f.Open()
if err != nil {
return nil, err
}
bytes, err := ioutil.ReadAll(reader)
if err != nil {
return nil, err
}
a := &Actor{}
err = json.Unmarshal(bytes, a)
return a, err
}
return nil, nil
}
// EachFile calls the given function for each file in the zip package. It stops
// when the function returns false or when there are no more files in the
// package.
func (r *PackReader) EachFile(fn func(f *ZipFile) bool) {
files := r.reader.File
for i := range files {
file := files[i]
if file.FileInfo().IsDir() {
continue
}
zf := newZipFile(file)
if !fn(zf) {
break
}
}
}
// EachCategory iterates over each `Category` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachCategory(fn func(*Category) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsCategoryPath(f.Path()) {
return true
}
val, err := f.ReadCategory()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachSource iterates over each `Source` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachSource(fn func(*Source) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsSourcePath(f.Path()) {
return true
}
val, err := f.ReadSource()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachActor iterates over each `Actor` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachActor(fn func(*Actor) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsActorPath(f.Path()) {
return true
}
val, err := f.ReadActor()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachUnitGroup iterates over each `UnitGroup` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachUnitGroup(fn func(*UnitGroup) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsUnitGroupPath(f.Path()) {
return true
}
val, err := f.ReadUnitGroup()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachFlowProperty iterates over each `FlowProperty` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachFlowProperty(fn func(*FlowProperty) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsFlowPropertyPath(f.Path()) {
return true
}
val, err := f.ReadFlowProperty()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachFlow iterates over each `Flow` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachFlow(fn func(*Flow) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsFlowPath(f.Path()) {
return true
}
val, err := f.ReadFlow()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachProcess iterates over each `Process` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachProcess(fn func(*Process) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsProcessPath(f.Path()) {
return true
}
val, err := f.ReadProcess()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachImpactCategory iterates over each `ImpactCategory` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachImpactCategory(fn func(*ImpactCategory) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsImpactCategoryPath(f.Path()) {
return true
}
val, err := f.ReadImpactCategory()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachImpactMethod iterates over each `ImpactMethod` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachImpactMethod(fn func(*ImpactMethod) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsImpactMethodPath(f.Path()) {
return true
}
val, err := f.ReadImpactMethod()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachLocation iterates over each `Location` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachLocation(fn func(*Location) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsLocationPath(f.Path()) {
return true
}
val, err := f.ReadLocation()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachParameter iterates over each `Parameter` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachParameter(fn func(*Parameter) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsParameterPath(f.Path()) {
return true
}
val, err := f.ReadParameter()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachSocialIndicator iterates over each `SocialIndicator` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachSocialIndicator(fn func(*SocialIndicator) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsSocialIndicatorPath(f.Path()) {
return true
}
val, err := f.ReadSocialIndicator()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
// EachProductSystem iterates over each `ProductSystem` data set in the package unless
// the given handler returns false.
func (r *PackReader) EachProductSystem(fn func(*ProductSystem) bool) error {
var gerr error
r.EachFile(func(f *ZipFile) bool {
if !IsProductSystemPath(f.Path()) {
return true
}
val, err := f.ReadProductSystem()
if err != nil {
gerr = err
return false
}
return fn(val)
})
return gerr
}
|
STACK_EDU
|
Running TestNG tests can be done in two ways: either directly from the IDE (by selecting the desired tests and choosing to ‘Run TestNG tests’) or from the command line. The latter option is very useful when trying to run only a selection of all the tests, that might spread across different classes or packages, or to run tests that belong to certain groups. To do this, you need to create some .xml files that will select the tests that need to run and/or exclude tests not to run, add a configuration in the Maven profile created for running the tests, and run the proper Maven command. Continue reading Running TestNG tests
After the project has been created, you will need to decide how you want your automated tests to run. Keeping in mind that developers write unit tests, which by definition will validate of the code by itself, without interaction with other components, they are suitable to validate that the code commited satisfies the requirements in isolation. They should run fast, and not need interaction with browsers for example.
On the other hand, acceptance tests, which are the tests written by QAs should validate the code in the actual environment where it will reside, having contact with all the components around it. These tests validate that the code still acts properly when it runs in the system that it is built in. For that reason, these tests might take a long time to run, use browser instances, and might be rather fragile when it comes to succeeding. For example, Selenium tests are some of the most fragile, since, if you run tests in some sluggish environment, they might fail because of the slow responsiveness of the environment, pages not loading on time, and so on. Hence, running these tests for every project compilation phase is not feasible, as the build might never compile successfully. Continue reading Create the Maven profile for running tests.
The central and most essential part of a Maven project is its’ pom.xml file. Among other information (like the project’s defining artifactID and groupID), it stores the list of dependencies your project has and the plugins the project will use. Dependencies that are declared within the pom.xml file will be downloaded from the Maven repository configured for the project, into the project, as external libraries or dependencies. The default repository that is configured is the Maven central repository (http://search.maven.org/#search|ga|1|), and except for the situation where you explicitly configure a local repository, this is the place to look for the latest versions of the libraries you will import into your project. Continue reading Import the testing dependencies
As a best practice, tests will reside in the same project as the code that they test. Also, ideally, they should be written in the same programming language as the code itself. If the code is Java, it’s useless to come up with some different language or so called framework to test it. Developers write Junit or TestNG tests, why shouldn’t QA’s do the same? The language itself offers most of what you need for testing, and where it doesn’t, there are plenty of libraries you can use to help out, that can easily be imported into the project. There is vast knowledge around, so if you are in doubt there are numerous people to turn to for advice. Also, it’s better if the developer and QA speak the same language. Developers can give you input regarding best practices for writing code, so that your tests can be easily readable by any member of the project team, maintainable, effective.
Having said that, if you are the one who will create the project, you can do it quite easily, using Maven. Continue reading Create a new Maven project
|
OPCFW_CODE
|
8887 Views 11 Replies Latest reply: Oct 22, 2007 9:54 AM by ro_1964
Welcome to the Apple Discussions!
There have been a number of reports of people having problems with USB or Firewire access to external drives after updating to 10.4.10 (balanced against reports of previously unusable drives that now work, not that that's of any consolation to you). It appears that some of the 'minor' tweaks Apple included in 10.4.10 to more closely match the published USB and Firewire standards cause problems with manufacturer's devices that are themselves slightly off the standards.
Although external DVDs should be recognized in 10.4 without any additional software, you can try checking the LG website for any mention of firmware or driver updates for your particular model drive.
You may want to resort to dropping down to 10.4.9 (while waiting to see if Apple releases a 10.4.11). Kappy describes how to downgrade the OS in this post. Ypu'll want this link as well: OS X Update Combo 10.4.9 (PPC)
Howdy. ro-1964 and welcome aboard.
Your drive should work simply by plugging it into the appropriate port (USB or Firewire). Not software or drivers necessary. Burning discs from the finder or various iApps should work. If you still have issues, download an app called "PatchBurn" from http://www.patchburn.de . You might also consider getting a copy of Roxio Toast, the best Mac burning software out there. It offers options the Apple software doesn't.
One other thing- you will usually attract more attention to a problem if you start a new thread rather than tagging onto an old one.
Thats exactly what I'm saying !
so if I look un der system-libary-system profiler-SPUSBreporter.reporter ??
I was looking foor it un der my Finder where I can see my hard drive and any other drive I have installed, but nada , zilch ??
thats why i reckoned I needed the software
many thanks Bill
To open the System Profiler, go to the Apple menu and click on "About this Mac" then in the window that pops up click on "More info...". On the left side, click on USB or Firewire to see what you mac thinks is connected. Does it see your drive? do you have another drive with the same interface and if so is it recognized?
BTW, is this a USB or Firewire drive? If it's Firewire, the FW ports on eMacs have been known to quit working for no apparent reason. Fixing them is a matter of resetting, easily accomplished by shutting down the eMac and unplugging it from the mains for about 15 minutes.
You can also try the typical mac things such as resetting the PRAM, repairing permissions, etc.
I re-read your posts and noticed you are running 10.3.9. You should definitely install Patchburn to get the iApps to recognize the drive. It shouldn't be necessary for simply reading a disc, though.
Hi again Bill,
well before doing wht you suggested I think I may have gotten to the bottom of the problem.
the DVD disc that I recieved with mp3 music files was loaded up from a Win machine and I've since learned that apple macs read these files in a different way ??
because I can use the dvd re-writer in my mac envoirment, i.e I can record to it and I can play from it.
So do I need a driver or some program to help my mac read these files ??
thanks again Bill
Macs can read standard Windows-formatted optical discs, and the MP3 format itself is not platform-specific. You should still work through Bill's advice.
There's are exceptions to the statement that Macs can read Windows optical discs: It's possible to select a non-standard / non-compatible format on some Windows burning software, although that's increasingly rare. More likely, if the Windows optical disc was not closed or finalized (the terminology depends on what Windows burning software was used), then that disc will be unreadable by any Mac until the disc is closed. (For that matter, an unclosed disc will be unreadable on a Windows machine, except possibly one with the exact same versions of Windows and disc burning software.) Check with the source of the disc you're trying to read and confirm if the disc was closed or finalized.
Is it possible these mp3 files have some sort of protection (such as digital rights management) on them? I am not sure this would cause you to be unable to read the entire disc, but it could prevent access to individual tracks. Since you can apparently read/write othe DVDs, it sounds like this particular DVD might be funky.
|
OPCFW_CODE
|
Private Sub Datagrid1_Cell Validating(By Val sender As Object, By Val e As System. Private Sub Datagrid1_Cell Validating(By Val sender As Object, By Val e As System.
Is there a different way to isolate the columns or edit events? Column 7 fires successfully when the column 7 textfield is edited. Column 8 does not fire when the checkbox is clicked by the user. you'd like me to give you the code example that is listed in the link? Current Cell Dirty State Changed Dim dgv Sender As Data Grid View = CType(sender, Data Grid View) If dgv Sender. Current Cell Is Data Grid View Check Box Cell Then Console. When the DGV is created it is sorted by date ascending, then it is set to readonly and then two columns are set to readonly= false: For Each DGVC As Data Grid View Column In DGV. Read Only = True 'Set the whole datagridview to Readyonly Next 'Make only certain columns editable If Menu Form. In this case I mean the desired If-Then block is not entered. The 'FLY' checkboxcolumn 'FLY' is not entered as coded. It's intermittent, sometime it works as expected, sometimes it fires the e. To commit the change when the cell is clicked, you must handle the Data Grid View. In the handler, if the current cell is a check box cell, call the Data Grid View. Code Bank: Manipulate HTML Page content in the Web Browser Control from VB - Drag Drop from Windows into Win Form - Launch new default browser instance to open URL - Display Internet Image in Picturebox - Download Files From Web With Progress Bar - IP Textbox User Control - Installing . How do I isolate a checkbox column from a text column other than by column Index. Edit: I have been experimenting with these methods and they seem to work well and good when you only have one editable column. Commit) End If End Sub This should not be happening if the DGV is databound and the underlying data source is updated. It does not matter in the long run, but it is good information to know as you may need to add The DGV is populated by a pivot table that is created in the SQL database at form load (SELECT INTO...), the database table is then dropped after it populates the DGV. Multi Select = False 'Prevents Multiple rows from being selected Data Grid SMT. NET Framework with INNO Setup Yes, I read that already but don't understand it. It's when you introduce two different types of columns that it randomly choose between the two. The information is so dynamic that it is 'old' as soon as it's populated. Read Only = False 'Now make only this column editable DGV. Read Only = False 'Now make only this column editable End If The only way that I know for this to happen is if this column is set to Read Only. I need to check that only one of the two can be checked. Its just that my UI does not respond correctly and I can't expect my users to get out of it that way. Bob "Bob" In a datagridview (vs2005, VB.net) I have two columns that are checkboxes. Private Sub DGVSMT_Cell Content Click(By Val sender As System. I've tried editing the text field and hitting enter, or just clicking off the text cell to let it update. Read the remarks section for Cell Value Changed https://msdn.microsoft.com/en-us/lib...v=vs.110)Specifically In the case of check box cells, however, you will typically want to handle the change immediately. How/Where do I handle the Current Cell Dirty State Changed event?
Cell Content Click 'This method attempts to convert the formatted, user-specified value to the underlying cell data type. Column Index = 8 checkbox IF statement will fire (but the checkbox itself in column 8 is not updated).
Data Property Name = "Cell_2") Then If CBool(Datagrid1("Cell_1", e. I wrote code in the cellvalidating event as follows. Data Grid View Cell Validating Eve nt Args) Handles Datagrid1.
Its not permissible to have the two selected to true, but they can both be false.
Validating event, this method ends the current cell edit and validates the cell and row values. On Validating(Cancel Event Args) method also allows derived classes to handle the event without attaching a delegate.
Raising an event invokes the event handler through a delegate. This is the preferred technique for handling the event in a derived class.
https://msdn.microsoft.com/en-us/lib...v=vs.110)Specifically In the case of check box cells, however, you will typically want to handle the change immediately. I thought I had made it clear that I tried the code in the example in my earlier post. Write Line("Committing checkbox in column: , row ", dgv Sender. I will try your code now and report back, although I don't see how both the checkbox column AND textbox column will be handled in your example.
|
OPCFW_CODE
|
import { Cucumber } from "../Cucumber";
import { StepManager } from "../stepManager";
import { Service, Model, ServiceResult } from "./Service";
import { ResultType } from "../feedback";
import { Step, IStep } from "../cucumberTypes";
import * as monaco from "monaco-editor";
import { editor } from "monaco-editor";
import Fuse from "fuse.js";
export interface CucumberServiceResult extends ServiceResult {
stepDef?: IStep,
stepVal: string,
id: number
}
//Handles all cucumber related stuff
export class CucumberService extends Service<CucumberServiceResult> {
static id:number = 0;
canHandle(line: string): boolean {
return Cucumber.STEP_PATTERN.test(line);
}
handle(model: Model, from:number) : ResultType {
const stepDef = Cucumber.findRightStep(this.peek(model, from));
if (stepDef){
const stepLine = this.consumeLine(model, from);
let step = Cucumber.extractValue(stepLine);
if (stepDef.args && stepDef.args.length > 0) {
const args = stepDef.args;
const lastArg = args[args.length - 1];
if (lastArg.type.endsWith("DataTable")) {
//Consuming data table
while(this.peek(model, from).trim().startsWith("|")){
const tableLine = this.consumeLine(model, from);
step += `\n${tableLine}`;
}
} else if (lastArg.type === "java.lang.String" && !lastArg.start){
//Consuming doc string
if (this.peek(model, from).trim().startsWith('"""')){
const startString = this.consumeLine(model, from);
step += `\n${startString}`;
while (!this.peek(model, from).trim().startsWith('"""')){
const str = this.consumeLine(model, from);
step += `\n${str}`;
}
if (this.peek(model, from).trim().startsWith('"""')){
const endString = this.consumeLine(model, from);
step += `\n${endString}`;
}
}
}
}
let event = {
service: this,
stepDef: stepDef.toIStep(),
stepVal: stepLine,
data: stepLine.trim().split(/\s+/)[0] + ' ' + step,
id: CucumberService.id++
};
StepManager.get().runStep(step).then((result) => {
this.dispatcher.dispatch({...event, status: result})
});
this.dispatcher.dispatch({...event, status: ResultType.WAITING});
return ResultType.WAITING;
} else {
//TODO give feedback about unknown step
const line = this.consumeLine(model, from);
this.dispatcher.dispatch({id: CucumberService.id++, stepVal: line, status: ResultType.FAILURE});
console.warn(`${line} is not a valid step`)
return ResultType.FAILURE;
}
}
searchOptions : Fuse.FuseOptions<Step> = {
shouldSort: true,
distance: 100,
minMatchCharLength: 1,
keys: [
{
name: "pattern",
weight: 0.6
}, {
name: "docs",
weight: 0.2
}, {
name: "location",
weight: 0.2
}
]
}
findClosestStep(line:string):Step | undefined{
const fuse = new Fuse<Step, Fuse.FuseOptions<Step>>(StepManager.get().getStepsSync(), this.searchOptions);
const steps = fuse.search(line);
return steps[0] as Step || undefined;
}
//Provide suggestion for a closest match to the current step
provideArgSuggestions(line:string, range:monaco.IRange) : Promise<monaco.languages.CompletionItem[]>{
const steps = StepManager.get().getStepsSync();
let fStep = steps.find(s => s.pattern.test(line));
if (!fStep){
fStep = this.findClosestStep(line);
}
if (fStep){
return fStep.getSuggestions(line, range);
}
return Promise.resolve([]);
}
//Provide suggestion for a closest match to the current step
async provideSuggestions(model:Model, position: monaco.Position, context: monaco.languages.CompletionContext) : Promise<monaco.languages.CompletionItem[]> {
const line = model.getLineContent(position.lineNumber);
const word = model.getWordUntilPosition(position);
const range = {
startLineNumber: position.lineNumber,
endLineNumber: position.lineNumber,
startColumn: word.startColumn,
endColumn: word.endColumn
};
if (line.match(Cucumber.STEP_PATTERN)){
const stepSuggestions = await StepManager.get().getSteps();
let sugs = stepSuggestions.map(step => ({
range: range,
label: step.pattern.source,
insertText: step.getPlaceholderText(),
kind: monaco.languages.CompletionItemKind.Value,
insertTextRules: monaco.languages.CompletionItemInsertTextRule.InsertAsSnippet
})) as monaco.languages.CompletionItem[];
sugs = sugs.concat(await this.provideArgSuggestions(line, range));
return sugs;
}
//Return cucumber keywords
return ["When", "Then", "Given", "And", "But"].map(word => ({
range: range,
label: word,
insertText: word,
kind: monaco.languages.CompletionItemKind.Keyword
}));
}
}
|
STACK_EDU
|
M: A Couple of My Rules for Startups - drm237
http://www.blogmaverick.com/2008/03/09/my-rules-for-startups/
R: kingnothing
"2. If you have an exit strategy, its not an obsession."
That doesn't make sense to me. Why are those two things mutually exclusive?
"7. No offices."
Beyond privacy, which is great because I don't want someone looking over my
shoulder all day long, offices provide a quieter environment than an open
floor plan. All of us here are paid to think, and the best thinking is
accomplished in a quiet environment; it is not done in a room full of whirring
electronics, phone calls, water cooler chat, et. al.
R: noonespecial
Ugh. He misspelled espresso _expresso_ too. Pet peeve. I guess that proves he
followed his own advice!
I also disagree. If an espresso machine makes your office a nicer place to
work for your employees, get two. The extra productivity from happy employees
is more than worth it and it clearly shows that you went the extra mile for
your employees.
R: mixmax
It's interesting to note how most hackers are obsessed with grammar and
spelling. On some forums there would be a flamewar for this mistake.
I like it though, it shows attention to detail and it makes it more enjoyable
to read sites like HN.
R: pchristensen
It's hard not to be picky if you've spent a lot of time with a compiler.
R: mixmax
Yes I presume compilers reading programs are more picky than investors reading
businessplans.
Investors don't crash if you miss a semicolon.
|
HACKER_NEWS
|
2011dec21 - Paulo Silva (nitrofurano_at_gmail_dot_com)
(english grammar and vocabulary fixes and improvement are welcome =) )
This page is about some stuff i'm doing for zx-spectrum retrocoding development - not only for contribution, as well for learning it.
Improved ULAplus support, and added SamCoupé palette support.
Added some tools for seeing and sorting SCRplus (ULAplus) palettes.
Added ULAplus support. I got amazed on how simple were implementing it, as well i hope soon more emulators will support it defaultly. (i really don't enjoy that much having to use Wine or Java to run emulators...)
Added a tms9918-like "text mode" pattern example, similar on the one i made for msx years ago (but without hardware sprites, since zx-spectrum doesn't have them, and i still don't know how to use sprite drawing routines to simulate similar effect). Also added some picture converters (in sdlbasic) that can be useful.
Added a snippet for a game idea using 4 players on one keyboard (since afaik there are none for zx-spectrum?), and some character maps.
Added some libraries, for drawing also bezier and cubic curves, and displaying text (print) with 4x6 characters (but still having bugs on the attribute handling).
Recently i'm trying Boriel's zx-basic compiler, while i struggled with Uschi Compiler - in the link above i'm starting to share some snippets and examples.
Later you will can see some screenshots of the above examples you can see from downloads above.
World of Spectrum
Boriel's zx-basic compiler
zmakebas - converts (tokenizes) zx-basic .txt files into .tap
JSpeccy (emulator) - supports ULAplus, needs Java ("java -rar JSpeccy.rar" from terminal or bash script)
information and links about ULAplus
Vortex Tracker (works fine on Wine)
Mojon Twins (retro game developers)
RetroWorks (retro game developers)
comp.sys.sinclair Crap Games Compo 2011, hosted by Mojon Twins
zx-spectrum webpage from Leszek Chmielewski (LCD)
ULAplus picture converter from Edward Cree (AYChip)
project from me, using a zx-spectrum for loading pictures from a webcam, sent from a PC (with Linux) via "cassette" audio cable - i also tried to extend this project to some other 8bit computers as well, and maybe also 16bit ones
Altervista (free webhost, this one i'm using now)
(This webpage is optimized for decent web browsers based on Gecko or Webkit, like Firefox, Icecat, Seamonkey, Iceape, Galeon, Kazehakase, Chrome, Chromium, etc. - mostly because the webfonts and css)
(If some link must be here above, please let me know)
(OSX seems to have a bug when displaying webfonts like from this webpage, on browsers like Firefox, Chrome or Safari - i really don't know how to get this situation fixed)
|
OPCFW_CODE
|
WIP: Use tarn as pool
Hi,
This experimental PR replaces the current pool generic-pool with Tarn. There has been a bunch of discussion about this change in the following issues: https://github.com/tgriesser/knex/issues/2339 https://github.com/tgriesser/knex/pull/1702
At this point the pull request is here mainly so that some brave soul, who is having problems with the current pool, could test this out to see if it solves their problems.
Oh, knex still tests agains node 4 and 5. Tarn currently requires 6. That should be easy to fix. There's something else wrong with the tests too. Funny that they all passed on my computer :smile: I'll look into it.
@koskimas Node 5 is EOL already, Node 4 wil be in 2 months. Do we still want to support them?
@igor-savin-ht It's not up to me. There are only a couple of things in Tarn that require 6. I can easily change those.
@elhigu Any plans to drop Node 4/5 support in the nearby future? Given that Node.js migration path is really smooth considered to other stacks, I wonder if there are any reasons to support legacy versions for a long time.
There's one test that fails and only on oracle
oracledb - acquireConnectionTimeout works
I have no idea what that's about. @elhigu Any ideas?
I think we can drop node 4 already () and later on drop support always when node version reach EOL. @koskimas no idea why that could be failing... it haven't failed before, need to check if test is done wrong or if that timeout is really not working with oracle...
@elhigu Created a PR for that. Do you want anything in knex/documentation changed as well?
@elhigu I changed that test so I probably broke it, but I did nothing oracle specific in this PR. I'll try to debug that later.
@igor-savin-ht yes please. I also discussed with @koskimas about the changing this PR to work with Node 4 and 5 after all so that there will be at least one version in knex 0.14.x that works and that will be last version supporting node 4.
@elhigu Tests now pass
15mins 100MB -> ~250MB
45mins stable ~105MB
@elhigu I added warnings about the old generic-pool config options
Now that this is merged, who should I give write access to tarn? I'll of course help maintain it too, but I also want to give full access for knex maintainers.
At least for @wubzz and @tgriesser I think. For the rest of the people access may be given on demand when they happen to need them.
I believe we also need to update the Upgrading section of the docs:
Upgrading 0.13 -> 0.14
generic-pool was upgraded to v3. If you have specified idleTimeoutMillis or softIdleTimeoutMillis in the pool config then you will need to add evictionRunIntervalMillis: 1000 when upgrading to 0.14.
See original issue #2322 for details.
Since technically 0.13 -> 0.14 now includes not only generic-pool v3 but also tarn.js 1.1.2. For better or for worse tarn.js uses the old idleTimeoutMillis instead of evictionRunIntervalMillis (don't get me wrong, I definitely prefer the old property), so the upgrading guide needs to say the reverse of what it's saying right now (evictionRunIntervalMillis back to idleTimeoutMillis).
Just want to double-check with @koskimas @elhigu that I'm right about this before I make an issue in docs repo.
Yup I think we should remove that upgrading section and add warning about using versions 0.14.0 - 0.14.2
I hope you dont drop support for node 4. I know it only has 2 months left for LTS, but it's really neccessary?
And yes, this will affect me :( I'm using node 4.6 with oracledb right now, and node migration doesn't depend on me,
@pablopen Given how smooth and painless is node upgrade process is, what is the rationale of whoever is responsible for making the decision not to do it? We can't support obsolete versions forever, so sooner or later update needs to happen anyway.
@pablopen it wont be dropped before 0.15 versions. 0.14.4 will be most stable knex released so far you can still use it after node 4 is deprecated. I’m pretty sure oracledb will drop support for node 4 too around the same time (they already dropped 5 and 7). So if you are not able to upgrade node you need to relay in older package versions.
The doc is not updated here: http://knexjs.org/#Installation-pooling
@bertho-zero good catch. Thanks
Only partially related: I'm wondering if the Sqlite adapter could now be improved so that read-only queries would not be blocked anymore while a transaction is in progress? In this aspect, knex.js performs worse than using node-sqlite3 directly.
@irsl AFAIK If you add more connections to the pool than just one, there is no blocking in knex side. Please open an issue if you think that knex is blocking multiple connections. IIRC sqlite allows more connections that just one nowadays (except for inmemory DB).
@elhigu I just verified, 0.14.6 still holds back the read queries while a transaction is in progress. Note: using the Sqlite library directly this just works fine.
I managed to hook Knex.js 0.12.6 back then to address this, but it was so ugly at the end I decided not to contribute it. My patches obviosly broke with the recent changes in Knex.
Anyways, I just created a small mini-project with my unit tests to see what my expectations are:
https://github.com/irsl/knex-sqlite-transactions/commit/ec22313f84c103875524f10e1cfac2a73e549e86
Sorry, I was inattentive regarding your remark on the pools. By increasing the defaults and tweaking a bit in one of my tests everything works as expected. Great!
@elhigu I just upgraded to the latest Knex, and started encountering SQLITE_BUSY errors without my custom transaction hooks (when the pool number is higher than 1). I also learned the node-sqlite3 library lacks busy timeout support (https://github.com/mapbox/node-sqlite3/issues/9) and as I see it won't change as it would block the complete applciation due to the way how Node works.
My question is:
To workaround this, do you think a transaction serialization feature could be added in Knex? My original patch for 0.12.6 looked something like this:
if(!settings.officialSinglePoolMode) {
const bluePromise = knex.Promise;
var transactionWaitList = [];
var originalTransaction = knex.transaction;
knex.transaction = function(callback){
if(debug) console.log("!!! transaction was called");
return new bluePromise(function(resolve,reject){
transactionWaitList.push({callback:callback, resolve: resolve, reject: reject});
// initial kickoff:
if(transactionWaitList.length == 1)
fireNextOne();
});
function fireNextOne(){
// are there any more transactions?...
if(transactionWaitList.length <= 0) return;
var stuff = transactionWaitList[0];
var oex;
return originalTransaction(stuff.callback)
.catch(ex=>{
// console.log("there was an exception!", ex.message)
oex = ex;
})
.then((returnValue)=>{
if(debug) console.log("!!! original transaction has finished");
transactionWaitList.shift();
fireNextOne();
if(oex) return stuff.reject(oex)
return stuff.resolve(returnValue);
})
}
}
}
Transactions should be automatically serialized if you have set pool size to be 1, because when transaction is started it tries to acquire connection from pool and if there are no connections available, knex will wait until the previous transaction returns the connection back to the pool.
So I'm failing to see the problem here that you are trying to workaround...
Pool size 1 limits the number of concurrent read operations (eg. select) as well. Sqlite is better than that.
@irsl So why are you getting SQLITE_BUSY errors? You think that only serializing transactions that there are no multiple of them running simultaneously would prevent that once and for all?
I'm still failing to see the problem here that you are trying to workaround...
|
GITHUB_ARCHIVE
|
Apache Arrow DataFusion 6.0.0 Release
19 Nov 2021
By The Apache Arrow PMC (pmc)
The Apache Arrow team is pleased to announce the DataFusion 6.0.0 release. This covers 4 months of development work and includes 134 commits from the following 28 distinct contributors.
28 Andrew Lamb 26 Jiayu Liu 13 xudong963 9 rdettai 9 QP Hou 6 Matthew Turner 5 Daniël Heres 4 Guillaume Balaine 3 Francis Du 3 Marco Neumann 3 Jon Mease 3 Nga Tran 2 Yijie Shen 2 Ruihang Xia 2 Liang-Chi Hsieh 2 baishen 2 Andy Grove 2 Jason Tianyi Wang 1 Nan Zhu 1 Antoine Wendlinger 1 Krisztián Szűcs 1 Mike Seddon 1 Conner Murphy 1 Patrick More 1 Taehoon Moon 1 Tiphaine Ruy 1 adsharma 1 lichuan6
The release notes below are not exhaustive and only expose selected highlights of the release. Many other bug fixes and improvements have been made: we refer you to the complete changelog.
The community worked to gather their thoughts about where we are taking DataFusion into a public Roadmap for the first time
- Runtime operator metrics collection framework
- Object store abstraction for unified access to local or remote storage
- Hive style table partitioning support, for Parquet, CSV, Avro and Json files
- DataFrame API support for:
limitand window functions
EXPLAIN ANALYZEwith runtime metrics
trim ( [ LEADING | TRAILING | BOTH ] [ FROM ] string text [, characters text ] )syntax
- Postgres style regular expression matching operators
- SQL set operators
- HyperLogLog based
is distinct fromand
is not distinct from
CREATE TABLE AS SELECT
- Accessing elements of nested
SELECT struct_column['field_name'], array_column FROM ...)
- Boolean expressions in
- Postgres regex match operators
- Support for Avro format
- Support for
- Automatic schema inference for CSV files
- Better interactive editing support in
datafusion-clias well as
psqlstyle commands such as
- Generic constant evaluation and simplification framework
- Added common subexpression eliminate query plan optimization rule
- Python binding 0.4.0 with all Datafusion 6.0.0 features
With these new features, we are also now passing TPC-H queries 8, 13 and 21.
For the full list of new features with their relevant PRs, see the enhancements section in the changelog.
async planning and decoupling file format from table layout
Driven by the need to support Hive style table partitioning, @rdettai introduced the following design change to the Datafusion core.
- The code for reading specific file formats (
JSON) was separated from the logic that handles grouping sets of files into execution partitions.
- The query planning process was made
As a result, we are able to replace the old
providers with a single
ListingTable table provider.
This also sets up DataFusion and its plug-in ecosystem to supporting a wide range of catalogs and various object store implementations. You can read more about this change in the design document and on the arrow-datafusion#1010 PR.
How to Get Involved
If you are interested in contributing to DataFusion, we would love to have you! You can help by trying out DataFusion on some of your own data and projects and filing bug reports and helping to improve the documentation, or contribute to the documentation, tests or code. A list of open issues suitable for beginners is here and the full list is here.
Check out our new Communication Doc on more ways to engage with the community.
|
OPCFW_CODE
|
Lisp Game Jam Log #8
Yesterday I got pretty far along with implementing user input. I spent more time on it than I should, but I'm fairly happy with the results. The system is much more complex than other demos, in order to be flexible enough to allow the user to customize their controls.
The way I ended up programming user input has several parts:
- The event handling system, which passes each event to a series of handler procedures
controls.scmfile, which defines the user's preferred control bindings
- The user input event handlers, which translate input events into gameplay events
- The gameplay event handlers, which cause changes to the game state
The event handling system consists of a procedure,
handle-event!, which accepts a single event and a list of event handler procedures. Each event handler procedure is a procedure (lambda) which takes the event, inspects it, and decides what to do with it. The event handler procedure may return true or false to indicate whether it "consumed" the event. If the event was consumed, no further handling of that event is performed. If the event was not consumed, the event is passed to the next procedure in the list.
Each event handler is focused on a specific kind of event. For example, there is an event handler which specifically looks for the S key to be pressed while the Ctrl key is being held. If it sees such an event occur, it saves a screenshot to disk, and returns true to indicate that it consumed the event. If it is given an event that it does not care about, it returns false so that the next event handler can try.
controls.scm file is a list of s-expressions, like so:
(action: p1-up scancode: w) (action: p1-left scancode: a) (action: p1-right scancode: d)
This says that when the W key is pressed or released, the
p1-up action should occur, with state true or false depending on whether the key was pressed or released. Likewise for the A key triggering
p1-left, and the D key triggering
p1-right. I am using scancodes (rather than keycodes) because it's not the key letters that matter, only their physical positions on the keyboard. That means on an AZERTY keyboard, it is actually the Z key which triggers
p1-up, and the Q key which triggers
The game loads the
controls.scm file at startup, and generates an event handler procedure for each s-expression in the file. For example, it generates a handler that looks for the W scancode to be pressed or released. If the handler sees that happen, it calls the
process-user-input-action with the symbol
process-user-input-action procedure then creates an instance of a custom event type which indicates that player1 started (or stopped, if the key was released) trying to move up.
There are five custom event types related to players, corresponding to the (hypothetical) five possible players: player1, player2, player3, player4, and player5. These types are registered with SDL using
register-events!. They are
sdl2:user-events, but I created several procedures to make them easier to work with. Aside from having different symbols, the event types are all the same. They hold an action (a symbol), and optionally one or two extra integer values. sdl2:user-event can't directly store symbols, so I had to do some clever tricks. The action symbol is mapped to an integer using a lookup table, then stored in the event's
code integer field. The two extra integer values are stored in the pointer addresses of the event's
data2 fields. There are other ways I could have handled this (such as evicting a Scheme object and storing a pointer to it), but this way works well.
There are many player event actions, indicating the various things that can happen to a player, such as trying to move in various directions, jumping, falling, landing on a tile, collecting treasure, and so on. I created an event handler procedure which looks for a player event, finds the player entity that it belongs to, and calls the appropriate procedure, such as
Those procedures then look at the player's state, and modify the player accordingly. For example,
player-start-up set's the player's
holding-up? property to true, and if the player is currently standing on the ground, calls
player-jump! set's the player's
jumping? properties to true, and gives the player some upward velocity so they jump into the air. It also nudges the player upwards a tiny amount so that they are no longer colliding with the tile they are standing on; otherwise, the collision detection would immediately think that the player had landed.
As I mentioned at the start of the post, I think I spent too much time on this system. It is very flexible, but I should have done something dead simple (like hardcoded key bindings) for the game jam, then made it more flexible later. Oh well, too late now!
I am now working on refining the player movement logic. In particular, the player's acceleration should change depending on the player's state. For example, if the player is on the ground and holding left or right, they should accelerate to the left or right to counteract friction. And the friction of the tile that the player is standing on should also affect their acceleration, so that slippery tiles are more challenging to walk on. Players should also probably accelerate a small amount while holding left or right in the air, so that the user has some control while in the air.
There are less than 30 hours left in the jam, so I better get cracking!
|
OPCFW_CODE
|
There were two main goals of the meeting: 1) provide administrative information and advice to project directors and 2) allow project directors to give a 3 minute overview of their project to the general public.
The morning was devoted the first goal. One highlight for me was ODH Director Brett Bobley's welcome in which he talked a bit about the history of the NEH (NEH's 50th anniversary is coming up in 2015). The agency is currently in the process of digitizing their historical documents, including records of all of the grants that have been awarded (originally stored on McBee Key Sort cards). He also mentioned the recent article "The Rise of the Machines" that describes the history of NEH and digital humanities. Bottom line, digital humanities is not a new thing.
The public afternoon session was kicked off with a welcome from the new NEH Chairman, Bro Adams.
The keynote address was given by Michael Whitmore, Director of the Folger Shakespeare Library. He talked about adjacency in libraries allows people to easily find books with similar subjects ("virtuous adjacency"). But, if you look deeper into a book and are looking for items similar to a specific part of the book (his example was the use of the word "ape"), then the adjacent books in the stacks probably aren't relevant ("vicious adjacency"). In a physical library, it's not easy to rearrange the stacks, but in a digital library, you can have the "bookshelf rearrange itself".
His work uses Docuscope to analyze types of words in Shakespeare's plays. The algorithm classifies words according to what type of word it is (imperative, dialogue, anger, abstract nouns, ...) and then uses PCA analysis to cluster plays according to these descriptors. One of the things learned through this visual analysis is that Shakespeare used more sentence-starting imperatives than his peers. Another project mentioned was Visualizing English Print, 1530-1799. The project visualized topics in 1080 texts with 40 texts from each decade. The visualization tool, Serendip, will be presented at IEEE VAST 2014 in Paris (30-second video).
After the keynote, it was time for the lightning rounds. Each project director was allowed 3 slides and 3 minutes to present an overview of their newly funded work. There were 33 projects presented, so I'll just mention and give links to a few here. (2015-07-24 update: links to videos of all lightning talks are available at http://www.neh.gov/divisions/odh/grant-news/videos-2014-digital-humanities-implementation-grantees and http://www.neh.gov/divisions/odh/grant-news/videos-2014-digital-humanities-start-grantees)
Lightning Round 1 - Special Projects and Start-Up Grants
- NEH Workshop on Military History, Northeastern - providing opportunities for military historians to learn about digital tools and methods
- Visualizing the History of the Black Press in the US, Johns Hopkins - looking for information on black press/newspapers "hidden" in archives
- Augmented Palimpsest, Northern Kentucky - augmented reality for early printed books
- PeriodO, Texas at Austin - gazetteer of scholarly assertions about the spatial and temporal extents of historical, art-historical, and archaeological periods
- Ethnic Layers of Detroit, Wayne State - digital storytelling platform
- Orientation for the Mississippi Freedom Project, Miami Univ. - location-based game that interprets the Mississippi Summer Project held at Western College for Women in 1964
- AIDS Quilt Touch, New School - mobile website to explore the AIDS quilt
- Archive What I See Now, ODU - tweets about our presentation - video
- Pop Up Archive, PRX, Inc. - archiving, tagging, transcribing audio
- Bookworm, Illinois at Urbana-Champaign - uses HathiTrust Corpus and is essentially an open-source version of Google n-gram viewer
Thanks to the ODH staff (Brett Bobley, Perry Collins, Jason Rhody, Jen Serventi, and Ann Sneesby-Koch) for organizing a great meeting!
For another take on the meeting, see the article "Something Old, Something New" at Inside Higher Ed. Also, the community has some active tweeters, so there's more commentary at #ODH2014.
The lightning presentations were recorded, so I expect to see a set of videos available in the future, as was done with the 2011 meeting.
One great side thing I learned from the trip is that mussels and fries (or, moules-frites) is a traditional Belgian dish (and is quite yummy).
|
OPCFW_CODE
|
Globular clusters are odd beasts. They aren’t galaxies, but like galaxies, they are a gravitationally bound collection of stars. They can contain millions of stars densely packed together, and they are old. Really old. They likely formed when the universe was only about 400 million years old. But the details of their origins are still unclear.
We know globular clusters are old because they don’t exhibit any star production, and the stars they contain are old, low-metal stars. This suggests that the clusters formed during the early star-formation period of the universe, and have long since depleted or cast off the dust and gas to form new stars. The stars of a globular cluster have similar ages and chemical compositions, which suggests a globular cluster formed from a single large molecular cloud.
At least that had been the theory. But as astronomers studied globular clusters more closely, they found odd chemical variations within the stars of a particular globular cluster. Some stars have distinct abundances of elements such as oxygen, nitrogen, and sodium not seen in other cluster stars. If the stars of a globular cluster form from the same molecular cloud at roughly the same time, why would some stars be so distinct? A new study published in Astronomy & Astrophysics points to a solution.1
In this study, the authors looked at a high redshift globular cluster known as GN-z11. This cluster is at a redshift of z = 10.6, so we see it at a time when it was only tens of millions of years old. Using observations from the James Webb Space Telescope (JWST) they found it a dense cluster of stars active with star production. They also found the cluster had a high relative abundance of nitrogen. This is interesting because such high nitrogen levels are likely produced within early supermassive stars.
Astronomers have long thought that many of the first generations of stars were supermassive stars. They would have been comprised purely of hydrogen and helium, with masses 5,000 to 10,000 times that of the Sun. These stellar beasts would only have only lived for a few hundred thousand years before dying in a cataclysmic explosion. Through their deaths, these stars would have seeded the universe with elements such as carbon, nitrogen, and oxygen.
Another possible solution to the nitrogen abundance of GN-z11 is through the by-product of second-generation Wolf-Rayet stars, which experience an extended period of casting off their outer layers before finally dying. But given the age of GN-z11, that doesn’t seem likely. The team would like to look at other high-redshift globular clusters to distinguish between these two possibilities.
If the supermassive star model is correct, it could be that globular clusters form in a way similar to galaxies. Just as supermassive black holes drive the early formation and star production of galaxies, supermassive stars might be the seeds for globular clusters.
Charbonnel, C., et al. “N-enhancement in GN-z11: First evidence for supermassive stars nucleosynthesis in proto-globular clusters-like conditions at high redshift?.” Astronomy & Astrophysics 673 (2023):L7. ↩︎
|
OPCFW_CODE
|
Hippo CMS has a pluggable architecture and many additional features are available as plugins. Hippo provides three levels of support for plugins:
- Hippo certified, available from Hippo's feature library.
- Hippo certified, available from the Hippo Forge.
- Community maintained, available from the Hippo Forge.
A "Hippo certified plugin" meets Hippo quality standards, is properly documented, and is tested against each new Hippo CMS release and updated if necessary.
Further alignment and integration of the Certified Plugins, promoted to Standard Plugins in Hippo 10.0
The upcoming Hippo 10.0 release will bring some notable changes regarding the Certified Plugins:
- The development and support for the Certified Plugins will be further aligned with Hippo CMS and the setup application, and these plugins will be promoted to Standard Plugins.
- As Standard Plugins they will be regarded as integral but optional part of the Hippo product and thus receive the same level of support and quality assurance.
- To be able to provide and support this alignment and integration, the development of the Standard Plugins will be moved to and continued in the Hippo open source SVN.
- The Documentation for the Standard Plugins also will be moved and integrated in the community website at www.onehippo.org.
- The code, documentation and maintenance of the Certified Plugins for Hippo 7.9 and earlier will remain at the Hippo Forge.
- Although the Maven coordinates (groupId, artifiactId) of the Standard Plugins will be changed to reflect these changes, there will be no code or configuration changes as result of this move.
In addition to these upcoming changes, we already promoted the Search Engine Optimization (SEO) as Certified Plugin as of the latest Hippo 7.9.5 release, and also will be a Standard Plugin for Hippo 7.10.
The Tag Cloud Management Plugin will not be promoted to Standard Plugin and therefore become non-certified and community supported only through the Hippo Forge as of Hippo 7.10.
Finally, the Selections plugin will be further promoted as default plugin: it will be configured by default for new projects through the Maven archetype, like the Gallery Picker and the Resource Bundle Editor plugins.
Hippo Certified Plugins in the Hippo Setup application/Feature Library
Hippo's setup application provides a library of out-of-the-box features that can be added to your project with a single mouse click. All features in the library are Hippo certified. Some are based on Certified Plugins hosted at the Hippo Forge, those are linked to their respective Forge projects in the list below.
- Content Blocks
- Dashboard Document Wizard
- Gallery Manager
- Related Documents
- Search Engine Optimization (SEO)
- Simple Content
Hippo Certified Plugins at Hippo Forge
At this moment there are three certified plugin hosted at the Hippo Forge that are not available from the Hippo setup application/feature library:
- Tag Cloud Management Plugin (as of Hippo 7.10 this plugin no longer will be certified but remains available as community plugin at the Hippo Forge)
- Gallery Picker
- Resource Bundle Editor
Community Plugins at Hippo Forge
Community plugins are maintained by the Hippo community. It is up to the community to make sure the plugins meet quality standards, are properly documented, and are tested against each new Hippo CMS release.
The following community plugins are frequently used in Hippo CMS projects:
|
OPCFW_CODE
|
Realm: replicated heap exhausted when creating many compact instances
In map_copy, we would like to create compact instances for the source requirements that have sparse domains. Something like:
if(IS_SRC && !req_domain.dense())
creation_constraints.add_constraint(Legion::SpecializedConstraint(LEGION_COMPACT_SPECIALIZE));
It works correctly on smaller problem sizes. However, when we try to scale up (i.e. run on 4 nodes with a larger problem size), we obtain the following error :
FATAL: replicated heap exhausted, grow with -ll:replheap - at least 17308992 bytes required
With the following backtrace:
Thread 10 "poisson" received signal SIGABRT, Aborted.
[Switching to Thread 0x154a52eb8e40 (LWP 43000)]
0x0000154a9250bacf in raise () from /lib64/libc.so.6
(gdb) bt
#0 0x0000154a9250bacf in raise () from /lib64/libc.so.6
#1 0x0000154a924deea5 in abort () from /lib64/libc.so.6
#2 0x0000154a955ea88d in Realm::ReplicatedHeap::alloc_obj(unsigned long, unsigned long) ()
from .../lib64/librealm.so.1
#3 0x0000154a9558f4d6 in Realm::InstanceLayout<2, long long>::compile_lookup_program(Realm::PieceLookup::CompiledProgram&) const ()
from .../lib64/librealm.so.1
#4 0x0000154a95572832 in Realm::RegionInstanceImpl::Metadata::deserialize(void const*, unsigned long) ()
from .../lib64/librealm.so.1
#5 0x0000154a955c3e09 in Realm::MetadataResponseMessage::handle_message(int, Realm::MetadataResponseMessage const&, void const*, unsigned long)
()
from .../lib64/librealm.so.1
#6 0x0000154a9567c143 in Realm::IncomingMessageManager::do_work(Realm::TimeLimit) ()
--Type <RET> for more, q to quit, c to continue without paging--
nments/cr_16_standard/.spack-env/view/lib64/librealm.so.1
#7 0x0000154a95547fdc in Realm::BackgroundWorkManager::Worker::do_work(long long, Realm::atomic<bool>*) ()
from .../lib64/librealm.so.1
#8 0x0000154a95548801 in Realm::BackgroundWorkThread::main_loop() ()
from .../lib64/librealm.so.1
#9 0x0000154a9563036f in Realm::KernelThread::pthread_entry(void*) ()
from .../lib64/librealm.so.1
#10 0x0000154a949931ca in start_thread () from /lib64/libpthread.so.0
#11 0x0000154a924f6e73 in clone () from /lib64/libc.so.6
The default size for replicated heap I believe is 16777216 bytes. Did you try passing -ll:replheap?
Increasing the replheap size helps, but then, when I increase the number of nodes and the problem size (weak scaling) I need an even larger replheap.
It only happens when we use compact instances. Is this the only way to use compact instances at large scale (i.e. keep increasing the size of the replicated heap) ?
Increasing the replheap size helps, but then, when I increase the number of nodes and the problem size (weak scaling) I need an even larger replheap.
It should only grow proportional to the number of compact instances that you make and not their sizes. As long as you are making new compact instances (presumably for nearest neighbors in the mesh) then the amount of space required will continue to grow. However, I'm presuming at some point your application should hit a maximum number of nearest neighbors and then the amount of replicated heap memory that you should need will plateau. My understanding of the upper bound on the number of nearest neighbors in FleCSI applications comes from conversations with @opensdh so please correct me if I'm wrong about that.
By "their sizes", do you mean the count of index points contained or the count of rectangles required to describe them or both?
I mean the total amount of memory required to represent the instance. The amount of memory needed in the replicated heap may be proportional to the number of rectangles in the compact instances but I'm presuming that the number of those rectangles is proportional to the number of nearest neighbors. The amount of memory required in the replicated heap will be independent of the total volume of those rectangles though.
For a structured mesh in FleCSI, the number of rectangles is proportional to the surface (hyper)area of the subdomain in question, because the linearization required for the two-dimensional index-space layout (with coordinates (c,i)) causes every index point on the faces normal to one dimension to be isolated. For scaling of any one problem, this is a constant (weak) or decreasing (strong), but it does scale with problem size.
For scaling of any one problem, this is a constant (weak) or decreasing (strong), but it does scale with problem size.
To be clear you mean this is the number of rectangles required by each (FleCSI) color right?
Is this particular case where the repl heap is being exhausted a structured or unstructured mesh? Also are we weak or strong scaling?
|
GITHUB_ARCHIVE
|
The Internet of Trees
Building a tweetable, colour-changing Christmas tree
The phrase “Internet of Things” gets thrown around rather a lot at the moment. It’s the idea that – in the very near future – it’ll be standard practice for physical objects to be network-connected. Everything from toasters to fridges to medical devices to… er… Christmas trees.
Yes, at Think Physics we thought we’d join the trend (and also help with reforestation of the Internet) by hooking our festive office foliage up to a Twitter account, so anyone anywhere in the world could change the colour it glowed. Not so much an ‘Internet of Things’ as an ‘Internet of Trees.’ Yes? No? Mildly amusing? Just a little bit? Just us? Drat.
Here’s the story, anyway:
We could, I suppose, have bought a tree. But nooooo, that would have been far too obvious. Instead, we followed these marvellous instructions from the Makedo folks. Makedo is the cardboard rapid-fixing system we featured in our Christmas gift guide.
Our plan was to illuminate the inside of the tree, which meant we had to hack holes in the thing to let the light out. If we’d planned ahead a little more (so… “at all” would have been a start) we might have noticed one of the comments on the Makedo tree Instructables page which shows a lovely tree made from coroplast translucent plastic sheet. Bothers. That would have been a really good idea. As it was, we decided to cut festive-themed shapes, which went OK until one of us insisted any snowflake patterns have the correct symmetry.
That took a while.
Our code is on GitHub so you can view, reuse or amend it as you wish. It’s probably not a great example of high-quality Python, to be honest – this was the first time I’ve used Python in anger. And I do mean ‘anger,’ at times. Much to learn, I have.
The hardest bit to wrap your head around is probably the Twitter authentication stuff. Or at least, that’s the bit I least understand myself. I’ve somehow ended up authenticating two almost-identical applications with Twitter, one registered to my own twitter username, and one to @thinkphysicsne. That probably wasn’t what I meant to do.
There are all sorts of gnarly problems trying to connect bits of physical computing to internet services. Hanging devices off our university network is merely the first challenge. However, after a little poking around I found some Python sample code which did most of what I needed. Adopting the first rule of programming:
Write less code.
…I hauled out one of our Raspberry Pis and got to work.
Hour 1: Wifi! Try to get the Pi on the university WiFi network.
Result (final): Connected the Pi to the university’s guest network, which is a bit flaky but hey, it’s a tree. Reliability means ‘not dropping needles everywhere.’
Hour 2: Blinky lights! We already had a Pimoroni Unicorn HAT, which is an 8×8 array of RGB LEDs. I spent an hour playing with that, and preparing some colour conversion functions.
Result: Success! Except I was now suffering huge after-images from looking too directly at the stupidly bright lights.
Hour 3: Twitter! I used an off-the-shelf Python twitter library, the examples for which already did most of what I needed.
Result: Success! Incoming tweet selection, etc.
Hour 4: Glue! Plugging all the bits of code together so they worked more-or-less in concert, and as expected. Involved working out how to extract things like hex colour codes from Twitter text.
Result: Abject failure! Nothing worked, everything that used to work was broken, I had no idea what any of the error messages mean.
Hours 5 & 6: Debugging! I spent a chunk of time googling and reading StackExchange, trying to figure out how Python does things. Things like: allow me to pass a number into a function, when what I have initially is a list of lists.
Result: Success! A working bot which sets the panel of LEDs to a colour determined from incoming tweets. Hurray!
Hours 7 & 8: Publish! Cleaning up the code, commenting it, and posting it to GitHub so other people can use it. Inevitably, I broke it really badly in the middle of all this, but in the process of fixing it I managed to clear out a few nasty bugs I’d backed myself into. The result is much more reliable than it was initially.
Result: Success! My first Python code in Github. I’m so proud, even if the code is fairly horrid.
Insert Pi in tree. Take to party:
OK, so we have a tree which changes colour. Here’s how you can operate it:
Firstly, check our Twitter stream to see if it’s currently plugged in and turned on. We’ll post when it’s working. If it is, send us a tweet (@thinkphysicsne) with one of two things:
Send the tweet, and if it’s online the tree will turn the colour you specify.
Of course, you’ll have to take our word for it, at least until we manage to integrate a webcam into the system so the tree replies with selfies.
Think Physics are @thinkphysicsne on Twitter.
We’re used to equating colours to temperatures in false-colour thermal images like this. We’re less familiar with thinking about the colour of light given off by objects at specific temperatures, which is what black body radiation spectra are all about.
The sun is usually represented like this – as an orange ball. We see it looking yellow/orange, but we’re always looking through the Earth’s atmosphere, which scatters a load of blue light out sideways.
So what colour is the sun? Yellow? White? Slightly pink?
There’s more physics involved in this tree than you might expect. But hey, did you not notice the name of the project? Did you seriously think our boss let us play with this stuff without it having any relevance to our work? Pfshaw!
If you take a lump of something – like a chunk of iron, maybe, or a big ball of gas like, er, a star – and heat it up, it radiates light. Physicists talk about ‘black body radiation’, which is the distribution of wavelengths – colours – emitted by the theoretically perfect chunk of stuff. It turns out, quite a lot of things (lumps of ion, balls of gas we like to call ‘stars’) fit quite closely to the ideal black body radiation emission spectrum, and we end up using that theoretical basis to describe shades of white.
Ah. I mentioned ‘shades of white’, didn’t I? Well, yes. OK, back up a bit: you know sometimes you take a photograph indoors, and everything looks yellow? Or you take a photograph outdoors and everything looks blue? That’s because your camera has guessed a colour temperature – a colour for ‘white’ – which is way off. It’s assumed the scene is being lit by an ideal black body which is glowing as if heated to a temperature that doesn’t match the light in the room. Your camera has to pick something to represent white, and every so often it simply guesses wrong.
Colour, it turns out, is as much about your perception as it is about the world itself. Your eye and brain are ridiculously good at guessing what colour is neutral – white – at any given, and they interpret other colours by referring to that. This works really well for us, unless we’re trying to decide if a dress is blue or white.
OK, back to ideal black body radiators: by convention, black body colours are described by the temperature of the chunk of stuff, expressed in scientists’ favourite temperature unit, Kelvin. One Kelvin is the same change as one degree Celsius, but with Kelvin you start counting from 0K = -273.15°C. You can’t have negative Kelvin, but that’s another story for another day. For now, roll with it.
Heat a lump of stuff to a mere 1500K (~1225°C) and it’ll glow a nice warming red. Heat it more, to about 6300K (6026°C) and it looks white, in most circumstances. Heat it way way more, to 12000K (11726°C) and it’ll blaze with an icy-looking blue-white heat. So here’s the really weird part: we call the redder-looking colours ‘warm’ and the more blue colours ‘cold,’ when the temperatures which generate them are completely the opposite. ‘Red hot’ is cooler than ‘white hot’ which turns out to be cooler in turn than ‘blue hot.’
In practice, not all that many things stick too closely to the ‘ideal black body’ colour temperature curves. Lumps of iron are pretty close, at least until they start to boil at around 3000K. But boil a big enough lump of iron, wrap it in rather a lot of hydrogen, and you’ve got yourself a star – and it turns out many stars are a reasonable fit to the colour profile above.
Our tree, then, can approximate the colour of a range of stars. Tweet it the surface temperature of a star, and it’ll come pretty close to glowing that colour. So tweet it 5280K, the surface temperature of our Sun, and it glows…
…ah, but then you have to know what colour the Sun is.
|
OPCFW_CODE
|
Rattlesnakes can pose a serious threat to human beings and other mammals because of their toxin, which is potentially deadly if left untreated. These creatures rarely bite, and the rattle from which they derive their name acts as a direct warning signal for other creatures to keep away. They congregate in dens during the winter months, and these dens can appear in most naturally occurring crevices. It can be hard to determine an area is a rattlesnake den unless it is occupied. Snakes gather in dens during the cold months to share body heat. The number of snakes per den varies depending on the climate of their locale and the amount of food available. If the region has an abundance of food and mild winters, dens can include few snakes, while in areas with little food and cold winters, these creatures huddle together in "balls."
TL;DR (Too Long; Didn't Read)
Rattlesnake dens can appear in most holes that occur naturally in stone. Any hole that provides adequate protection from the cold during winter could be a den. The dens can be difficult to identify unless snakes currently occupy them.
Where Snakes Sleep for the Winter
Rattlesnakes cannot burrow, so they rely on naturally occurring holes to act as a home during the winter months when the snakes hibernate. Small caves, gopher holes, rocky crevices and other such formations can act as homes for rattlesnakes during the winter. The area must be deep or otherwise protected to prevent dramatic temperature changes from affecting the hibernation. An unusually warm day, for example, can prematurely wake up the snakes, who will be sluggish and at heightened danger of depredation. Rattlesnakes can be found across North and South America, most often in rocky regions or near grasslands. During their active months, they can journey as far as 1.6 miles from their dens to their favorite hunting and basking areas.
How to Get Rid of Dens
Most governments do not consider rattlesnakes an endangered species, although some regional governments may have specific legislations surrounding killing them. Humans rarely die from their poison, and for the most part, the snakes lack aggression unless they are cornered or another animal gets too close. That said, they can appear in human settlements near their dens and pose a threat to livestock and children. Specialized traps, including glue traps, exist that can be placed outside a den to capture the snakes as they wake up from hibernation. A shotgun can also work, though that approach seems cruel. Dynamite has obvious drawbacks in that it's hard to determine how many of the snakes die after its use, and it also comes with a good amount of danger for the user. Pouring gasoline into a den can draw the snakes out where they can be dealt with from a distance. While rarely protected by laws, rattlesnakes play an important role in their ecosystems keeping prey populations at bay. If at all possible, nonlethal deterrent measures should be used.
About the Author
Doug Johnson is an Edmonton-based writer, editor and journalist.
|
OPCFW_CODE
|
MASSA Podcast: Mathematics of Data Science and Deep Learning February 7, 2021
I had the pleasure to be a guest of the MASSA Podcast. This podcast is an excellent initiative of the Mathematics, Actuarial, and Statistics Student Association (aka MASSA) of Concordia University. I have been a regular listener of the podcast since its launch in Fall 2020. It's been a great way to stay connected with students and colleagues from the Department during this period of social distancing. I'm quite proud to be part of this series!
In the episode, we talk about the Mathematics of Data Science, Compressed Sensing, and Deep Learning. Check it out here!
Invariance, encodings, and generalization: learning identity effects with neural networksJanuary 21, 2021
Matthew Liu (Concordia), Paul Tupper (SFU) and I have just submitted a new paper!
We provide a theoretical framework in which we can rigorously prove that learning algorithms satisfying simple criteria are not able to generalize so-called identity effects outside the training set. Identity effects are used to determine whether two components of an object are identical or not and arise often in language and in cognitive science. We show that a broad class of learning algorithms including deep feedforward neural networks trained via gradient-based algorithms (such as SGD or Adam) satisfy our criteria, dependent on the encoding of inputs. In some broader circumstances we also provide adversarial examples able to "fool" the learning algorithm. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs. Our experiments can be reproduced using this code.
Deep neural networks are effective at learning high-dimensional Hilbert-valued functions from limited dataDecember 15, 2020
The accurate approximation of scalar-valued functions from sample points is a key task in mathematical modeling and computational science. Recently, machine learning techniques based on Deep Neural Networks (DNNs) have begun to emerge as promising tools for function approximation in scientific computing problems. In our recent paper, Ben Adcock, Nick Dexter, Sebastian Moraga and I broaden this perspective by focusing on approximation of functions that take values in a Hilbert space. This problem arises in many science and engineering problems, in particular those involving the solution of parametric Partial Differential Equations (PDEs).
Our contributions are twofold:
We present a novel result on DNN training for holomorphic functions with so-called hidden anisotropy. This result introduces a DNN training procedure and a full theoretical analysis with explicit guarantees on the error and sample complexity. Our result shows that there is a procedure for learning Hilbert-valued functions via DNNs that performs as well as current best-in-class schemes.
We provide preliminary numerical results illustrating the practical performance of DNNs on Hilbert-valued functions arising as solutions to parametric PDEs. We consider different parameters, modify the DNN architecture to achieve better and competitive results and compare these to current best-in-class schemes.
Check out our paper here!
Upcoming course in Winter 2021: Sparsity and compressed sensingSeptember 30, 2020
In Winter 2021, I will be offering a graduate course Topics in Applied Mathematics: Sparsity and Compressed Sensing at Concordia. The course is also available as an ISM course. Although mainly intended for graduate students, it can be also taken by (brave and hard-working!) undergraduate students at Concordia. The course is cross-listed as MAST 837, MAST 680/4, C and MATH 494.
Sparsity is a key principle in real-world applications such as image or audio compression, statistical data analysis, and scientific computing. Compressed sensing is the art of measuring sparse objects (like signals or functions) using the minimal amount of linear measurements. This course is an introduction to the mathematics behind these techniques: a wonderful combination of linear algebra, optimization, numerical analysis, probability, and harmonic analysis.
Topics covered include: l1 minimization, iterative and greedy algorithms, incoherence, restricted isometry analysis, uncertainty principles, random Gaussian sampling and random sampling from bounded orthonormal systems (e.g., partial Fourier measurements), applications to signal processing and computational mathematics.
Minisymposium: The mathematics of sparse recovery and machine learningJuly 1st, 2020
Ben Adcock and I have co-organized a minisymposium on The Mathematics of Sparse Recovery and Machine Learning that will virtually take place on July 16 at the 2nd Joint SIAM/CAIMS Annual Meeting. This event was selected as part of the SIAG/CSE Special Minisyposium Track. We are really looking forward to it!
You can find more info here.
The benefits of acting locallyJune 26, 2020
The sparsity in levels model has proved useful in numerous applications of compressed sensing, not least imaging. Reconstruction methods for sparse in levels signals usually rely on convex optimization, but little is known about iterative and greedy algorithms is this context. Ben Adcock (SFU), Matt King-Roskamp (SFU) and I bridge this gap in our new paper by showing new stable and robust uniform recovery guarantees for sparse in level variants of the iterative hard thresholding and the CoSaMP algorithms. Our theoretical analysis generalizes recovery guarantees currently available in the case of standard sparsity. We also propose and numerically test a novel extension of the orthogonal matching pursuit algorithm for sparse in levels signals.
Sparse recovery in bounded Riesz systems and numerical methods for PDEsMay 14, 2020
Sjoerd Dirksen (Universiteit Utrecht), Hans Christian Jung (RWTH Aachen), Holger Rauhut (RWTH Aachen), and I have just submitted a new paper. We study sparse recovery with structured random measurement matrices having independent, identically distributed, and uniformly bounded rows and with a nontrivial covariance structure. This class of matrices arises from random sampling of bounded Riesz systems and generalizes random partial Fourier matrices. Our main result improves the currently available results for the null space and restricted isometry properties of such random matrices. The main novelty of our analysis is a new upper bound for the expectation of the supremum of a Bernoulli process associated with a restricted isometry constant. We apply our result to prove new performance guarantees for the numerical approximation of PDEs via compressive sensing using the CORSING method.
Check out our preprint here!
When can neural networks learn identity effects? Accepted to CogSci2020!May 9, 2020
Motivated by problems in language and other areas of cognition, we investigate the ability of machine learning algorithms to learn identity effects, i.e. whether two components of an object are identical or not. We provide a simple framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference. We then show that a broad class of algorithms including deep neural networks with standard architecture and training with backpropagation satisfy our criteria, dependent on the encoding of inputs. We also demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs.
You can read our preprint here. A journal paper on this topic is currently in preparation.
Compressive Isogeometric Analysis March 17, 2020
Motivated by the difficulty with assembling the stiffness matrix in Isogemetric Analysis (IGA) when using splines of medium-to-high order, we propose a new methodology for solving PDEs over domains with a nontrivial geometry called Compressive Isogeometric Analysis. Through an extensive numerical illustration, we demonstrate that the proposed approach has the potential to mitigate this problem by assembling only a small fraction of the discretization matrix. This is possible thanks to a suitable combination of the IGA principle with CORSING, a recently introduced numerical technique for PDEs based on compressive sensing.
You can read our preprint here!
|
OPCFW_CODE
|
Build preloaders visually in Oxygen for hiding flashes of unstyled content (FOUC) as the content loads, hiding fallback fonts as webfonts.js become active or for displaying anything you like for a few seconds when a visitor first arrives on the site.
Type of Pre-Loader
- Presets – 10 included CSS based preloader animations.
- Image – Add your own image from your media library.
- Custom – add any Oxygen elements inside the preloader to build your own.
Size, Animation Durations
The preloader presets can be styled by changing the color, size & also the animation durations to speed up or slow down the animations. Some of the CSS loaders have inner animations with their own speed settings.
Visibility in Builder
This setting offers a quick way to hide the preloader while you’re working inside the builder. This has no effect on the front end.
Remove Preloader only after
All page content has loaded – Preloader will wait until all the images, videos, scripts, CSS etc have finished loading before making the page visible. (window.onload)
After x seconds – Preloader will only be visible for the number of seconds set.
After all web fonts are active – If using webfonts.js for google fonts or adobe fonts, the preloader will wait until all the fonts are active (when the HTML has the wf-active class) before making the page visible.
Page Fading Effect
The preloader can also be used to give your site a nice fade in effect by setting the preloader to custom, but adding in no elements, just the white background. Then set the overall transition by going to Advanced > Effect > Transitions and setting the transition to around 1s. The result will be that the page appears to fade in nicely as the user browses the site.
To avoid the preset animations being slightly janky on some iphones, it’s best not to add any background colours to the outer div of the component itself. Instead, stick to using the background colour control found in the Primary settings. It’s a bizarre quirk.
Preloading Sections Only
You may want to just hide one part of your site on loading, say a slider that doesn’t look great when it first loads. To do this you can go to the section > advanced > layout and set the positioning to relative. Then put the preloader component inside the section and give it a absolute positioning. It will automatically then take up the entire width and height of that individual section.
Other CSS loaders
There is no lack of preloaders around. I found some more cool CSS loaders/spinners at https://epic-spinners.epicmax.co/. You can copy the html into a code block inside the preloader set to manual. Be sure to add the provided CSS to a stylesheet, not the code block, to make sure that the CSS is stored in your global stylesheet and not in the footer.
|
OPCFW_CODE
|
Duo Protects Cloud Access at Amazon re:Invent 2015
Duo will be attending and exhibiting security solutions at the largest cloud computing conference, Amazon re:Invent 2015, hosted at the Venetian in Las Vegas next week, from October 6-8.
We’ll be demoing our advanced two-factor authentication and end user security solutions at booth #1141, Hall C - come by for free swag, demos and to answer any of your questions about our technology.
You can also stop by our booth to ask us about the AWS (Amazon Web Services) Test Drive demoing our two-factor authentication solution. AWS Test Drive lets you learn about third-party enterprise IT solutions by providing a private sandbox environment containing preconfigured server-based solutions. You can also sign up and test Duo Access before full deployment.
Duo provides two-factor authentication methods and a variety of security policies to AWS Single Sign-On (SSO) logins. Duo Security is also an official partner of AWS, providing AWS infrastructure and application protection for SSH, PAM, Open VPN, and web SDKs for Python, Ruby, Java, PHP, Node.js, Classic ASP, ASP.NET and ColdFusion.
This sold out show will bring together more than 17,000 attendees for sessions on cloud architecture, continuous deployment, monitoring and management, performance, security, migration and more.
Attendees will also get experience in full-day technical bootcamps, labs and hackathons, while getting their questions answered by AWS engineering teams and architects, engineers, product leads and expert users.
Here’s a few of the security-focused sessions we’re looking forward to at Amazon re:Invent include:
AWS Security State of the Union
AWS Vice President and Chief Information Security Officer, Stephen Schmidt will discuss how AWS can meet customers’ security and compliance requirements.
IAM Best Practices to Live By
AWS Principle Tech Program Manager, Anders Samuelson will cover best practices to improve your security posture, including:
- How to manage users and security credentials
- Delete or regularly rotate root access keys
- Demonstrate when to choose between using IAM users and roles
- How to set permissions to grant least privilege access control in your AWS accounts
Architecting for End-to-End Security in the Enterprise
AWS Principal Solutions Architect Bill Shinn will discuss lessons learned from the development of enterprise security strategy, security use-case development, end-to-end security architecture and more.
Architecting for HIPAA Compliance on AWS
Bill Shinn will cover how to architect for HIPAA compliance using AWS, and introduce new AWS services added to the HIPAA program. This session includes testimonials from customers that process and store protected health information (PHI) on AWS, with a focus on how to implement the Technical Safeguards in the HIPAA Security Rule.
Wrangling Security Events in the Cloud
AWS Senior Security Consultant Joshua Du Lac will cover incident response training and tools for cloud security events, including detection, response, recovery and investigation of the root cause.
And a security-focused bootcamp for cloud scaling:
Securing Next-Generation Workloads at Cloud Scale Bootcamp
This is a one-day bootcamp designed to teach security engineers, developers, solutions architects and other technical security practitioners how to design security controls at cloud scale for next-gen workloads.
There will be plenty of after-hours networking events too, including a welcome reception, pub crawl, Harley ride, eating competition, and re:Play party.
If you can’t make it to the event this year, sign up to watch the Livestreams of the keynotes and sessions next Wednesday and Thursday. Hope to see you there!
|
OPCFW_CODE
|
Coding definition, the transforming of a variate into a more convenient variate. see more. highlight Coding define coding at dictionary.com.The medical billing and coding job description involves using computers and medical codes to communicate treatment information between medical facilities and highlight Medical billing & coding job description.
Picture of What is Gene Expression? Define medical coding
Youtube video 2. "Medical Coding" - What Is It? - YouTube. Watch the video.
Icd-10 | medical coding & billing | ama. Hcpcs codes and ncci. medical billers and medical coders need to be aware of the current guidance established by the ncci when they submit claims to medicare. Introduction hcpcs level coding | medical billing .
- What is Medical Coding? - Medical Billing and Coding, Cracking the code: Learn the whys, whats, and hows of professional medical coding.
- What Does a Medical Coder Do? - AAPC, The first step in medical billing process is medical coding including CPT, HCPCS, ICD-10, and ICD-9 codes. Find what is medical coding and what does a
- What is Medical Coding? | Medical Billing and Coding U, In some settings, a medical biller also serves as a medical coder and, in fact, medical billers are familiar with the basic precepts of accurate medical coding.
- What Is Medical Billing and Coding? - Career Coders, Oct 13, 2017 Medical Coding is the process of converting diagnosis codes to ICD-10-CM codes and procedure codes to What is their personality like?
- Medical coder | definition of medical coder by Medical dictionary, Looking for online definition of medical coder in the Medical Dictionary? medical coder explanation free. What is medical coder? Meaning of medical coder
- Coding | definition of coding by Medical dictionary, 1. the assigning of symbols or abbreviations to classify field notes into categories. 2. the process of transforming qualitative data into numerical data that can be
- Medical Coding Specialist | Training & Coder Career Info - InnerBody, Mar 13, 2017 Discover how to become a medical coding specialist. Learn about salary, education requirements, certification, and the rewarding career path
- Medical classification - Wikipedia, Medical classification, or medical coding, is the process of transforming descriptions of medical. is single hierarchy for ICD. SNOMED CT concepts are defined logically by their attributes, whereas only textual rules and definitions in ICD.
- Medical Coding Specialist: Job Description and Requirements, Individuals searching for Medical Coding Specialist: Job Description and Requirements found the following What is your highest level of education?
- What is Current Procedural Terminology (CPT) code? - Definition, This definition explains the meaning of Current Procedural Terminology (CPT) code and details how CPT Software-as-a-service app aids with medical coding
- Medical Billing Terminology - Medical Billing and Coding Online, EOBs may also explain what is wrong with a claim if it's denied. Medical Coder: A medical coder is responsible for assigning various medical codes to services
- Medical Coder | explorehealthcareers.org, Earning a bachelor's degree or master's degree can strengthen a medical coder's career; however, it's not required to show proficiency. What is necessary is to
- What Is The Global Surgical Package? - Medical Coding and Billing, The Current Procedural Coding (CPT) manual, produced by the American Medical Association (AMA) gives an overview of the definition of the surgical package.
|
OPCFW_CODE
|
For the first time in years, I looked at Movable Type.
I walked away, like so many people, in May of 2004 when the restrictions and pay requirements were too much. I’d played with b2 before and WordPress, but that was when I fully moved to WordPress. While I’d remembered that the Open Source version had been fully restored in version 3.3, I forgot that when they released v6 in 2016, they ‘terminated’ the Open Source licensing option. Again.
In doing normal research of things, I ended up on MovableType.com, and was struck by how modern and out of date the site felt.
The site isn’t mobile friendly. Or at least not iPad friendly. It does this peculiar zoom in where the content is focused but it still has a sidebar. This means flicking down to read can causes my screen to wobble side to side as well. The zoom also didn’t work consistently, making me have to fix it over and over.
That said, it has a much nicer design and layout than I expected.
I have to say, that’s a much more modern front page than WordPress.org and less cartoony than the current WordPress.com pages. The same can’t be said of navigation, which was a little confusing. If you don’t know you have to purchase to download, seeing the Software License section without clarification is weird. That should be even more obvious, I think. I shouldn’t have to click on “Release Notes” and then see Install MT on the sidebar.
Once I ended up in the documentation, I poked around and had a laugh at the software requirements.
PHP 5.0 or higher (5.3 or higher is recommended)
Sounds familiar, doesn’t it?
The rest of the install direcrions are incredible weird and hands on. It has none of the simplicity I’ve come used to with WordPress. And please remember, I think that WordPress is far too complex for a new user, still, because WP’s NUX sucks. MT’s is worse.
What interested me the most is that, while you can’t get MT for less than $900, they have a public GitHub repo available.
Still, I didn’t install it. Instead I read the documentation to see what using it would look like, and was rather startling to read the author page on creating entries and see an interface that looked old.
It reminded me of WP 2.5. Which I guess is understandable since the documentation on how to import from WP to MT is very old. No, I’m serious, it has screenshots of what looks like WP 2.5 as their documentaion.
While I still think that MT lost out big time when they decided to separate from the Open Source community, their product doesn’t draw me in. It doesn’t look fun or nice to use, and that’s probably a reason it’s not as popular as it could be. The GitHub page has 22 contributors. WordPress 4.5, led by my coworker and friend Mike, had 298. Even the official, but not really used like that, WP GitHub repo has over 30 contributors.
I wonder how the web would have looked if Six Apart had never made the license changes.
I wonder would power 26% of the Internet in that world.
|
OPCFW_CODE
|
Extraordinary testers are those that can empathise with their team members
As testers we ultimately want to help other people prevent problems before they start manifesting into some unwanted behaviour of a system. That is the dream! However, it’s often the case that we discover problems. If we can understand some root causes of problems, then we will be able to prevent more problems and curing (fixing) less.
One problem that I want to explore in this blog post is team member inexperience in product, technology or code. There is naturally talk about empathising with our customers, but one conversation we need to highlight is how we need to empathise with our team members.
I don’t have a statistic for you, but I bet a fair amount of bugs are caused by a team member’s inexperience with the product, technology or code.
We are humans! We are not perfect and make mistakes.
This quote is important because it’s completely true. I know of course that we want to put our best foot forward at work, but mistakes happen by all of us. The key to mistakes is that they are fine as long as you are learning from them. It becomes problematic when you are making the same mistake time and time again.
Two ends of tunnel — safe or blame?
I have worked in teams where team members have been very open with what they know or don’t know. They explicitly call out that there is a risk here because they are new to this area of the product, code or technology.
On the other end, I have seen team members blame each other for a production bug. Oh the developer should have picked this up, or why wasn’t this tested? This scenario is horrible to be in. This is where managers can come in and squash this. Managers need to do everything they can to push the scale to that psychologically safe environment where you can talk about the risks openly, rather than a pointing finger environment. This is where test managers add value, to change the environment, so that everyone can have their right to speak and be open about matters.
Be aware of “lack of experience” problems
What I want you to do is be aware of problems created by a lack of experience in product/technology/code! Your team members are not perfect and will make mistakes. This goes to all people in your team, whether they are testers, developers, product owners, UX etc.
Have courage and ask questions
The second thing I want you to do is have courage and ask probing questions to your team.
- How are we feeling about working on this legacy area of the product? What are the risks? How can we prevent problems?
- What about this legacy area of the code? What are the risks? How can we prevent problems?
- Or how do we feel about working with this new tech? What risks are there? What can we do to prevent problems?
Have your team member’s back!
The third thing I want you to do is have other people’s back. One example I can give is where a developer said to me how nervous they were about making a particular change. Tell them you’ve got their back and that you can explore together. What in particular are they nervous about? Could we note some things down in our exploratory/regression testing to make us both feel more confident about the change? Perhaps we need to communicate to the wider team (like the support team) about the nervousness, so everyone can be aware and we can react accordingly if we need to. Ideally we’ll have everything covered, but it’s good to create awareness as back up!
Admit your mistakes
I try to be as transparent and honest as I can. I don’t know everything and I’m always working to improve. In a global conference I openly admitted that there was a time where I didn’t want to do a hotfix because it would affect the metrics. It was a lesson I took away
Metrics are a good conversation starter. Nothing more.
Overall though, if you can create empathy with your team members you’re going to guard against problems that are caused by team member inexperience in the product, code or technology. You’ll also be in a happier team as a result.
|
OPCFW_CODE
|
Will any program written with XLib work on any desktop environment (Gnome, KDE, etc.)?
I'm making an application that uses Win32 API on Windows but I want to support linux distros too.
Some people say linux doesn't have a native GUI API like Win32 API of Windows.
But I found XLib and it seems like it is somethink similiar to Win32 API. If I use XLib on my project, can I compile and run it on any linux distro that uses any desktop environment without using external libraries?
I don't want to use something like GTK+ or SDL because I'm learning Win32 (just for fun) and using a platform independent library along with Win32 API sounds really absurd.
You'll have to decide: do you want to learn how to program under Windows with the Win32 API, or do you want to have a portable app?
You say you don't want to use a portable solution because you are learning Win32. Well that means you will be using a non-portable solution. At best you will be using conditional compilation, at worst you will be doing a project for each platform. And that while you want to learn Win32? For your mental health, I suggest to focus on Win32 until you are ready to move to something else.
Short answer: yes, XLib will work for the foreseeable future.
Although in some enviroments there will be additional libraries required. You don't need to worry about that.
Long answer:
XLib
XLib targets X Windows Systems (a.k.a X11). XLib is the classical library to use with X11 as works very close to it. XLib also includes a lot of helper code that eases the integration with the desktop environment.
X11 has been the core for desktop interfaces in the Linux ecosystem for a long time. Desktop Environments such as Gnome and KDE build on top X11... in essence X11 is a protocol, where the client sends requests and the X Server responds (will talk a bit more about that below), this is hidden by XLib which stores the requests in a buffer which are then taken by the X Server which will respond asynchronously, but XLib will make it appear synchronous.
High level components such as buttons et.al. exist as widgets on XLib, so this is the closest to the equivalent Win32 API.
XCB
All the communication hidden by XLib makes error handling harder, which led to the creation of the competition: XCB, which is intended to replace XLib by providing direct access to the X protocol. That means that XCB has a smaller footprint, it is asynchronous and has better performance. This makes XCB better in terms of performance, in particular for threaded applications.
Plot twist: XLib since the version 1.2.0 is actually built ontop of XCB. This means that you can mix XLib and XCB code with no problem... but it means that you need an additional library for XLib since it depends on XCB.
X Server
As mentioned above, there is an X Server that provides the abstract information and talks to the screen and input devices. The server doesn't know anything about user interface design. Then there are X Clients, which will interface with the server... this architecture facilitate for Clients to work over the network allowing access to remote terminals/desktops (although that's an over-simplification).
The X Server doesn't know about GUI, allowing it be changed, that's what Desktop Environments do. In this scenario X11 does all the rendering, although libraries for hardware accelerated graphics might bypass this.
Wayland
In the past there were competitors to X11, many deprecated... and there are new competitors to X11, in particular I want to mention Wayland.
Wayland is another protocol, much simpler than X11, that yields control of the actual rendering to the client.... basically Wayland shares a buffer for video memory where the client renders whatever it wants however it wants, so there is no graphics API. In fact, Wayland will handle the concept of windows and the composition of these buffers in video memory according to the windows they represent. This makes Wayland better for encapsulation in general (no API means less surface, and the model allows to isolate the memory easily).
X11 has needed a lot of updates to remain relevant with new technologies and standards (e.g. supporting font and image formats, supporting hardware acceleration, etc...), but with Wayland that problem goes to the graphic libraries the developer use.
While support for Wayland is increasing, X11 will stay around because of all the software build around it. What is expected to happen is that code from X11 will become part of the kernel, become extensions to Wayland or become libraries to be used by developers.
For instance, what do you do if you have a machine that runs Wayland but you need to run a program that uses X11? Well, there is XWayland that will run an X Server on top of Wayland.
Mir
I also have to mention Mir, which is a competitor to Wayland made by Canonical Ltd. (the people behind Ubuntu). It differs from Wayland in that Wayland doesn’t handle input but Canonical wanted that in there... so they made their own.
Similarly to Wayland, it will be able to run X11 using XWayland... but on top of Mir.
Conclusions
There may come the day where X11 isn't what you want to target. And also in the present there are environments where targeting X11 requires additional libraries.
Is it possible (is it good idea) to have app that uses X11 with XCB/Xlib and work in Gnome (KDE) environment? Will there be any conflict between "raw" X11 user application and rest of Gnome software and desktop environment extras?
@i486 Gnome and KDE, coexist, and are build on top of X11. I'm not aware of any conflicts with Xlib. If anything, using Xlib may require more effort for the same result than using GTK or Qt, but will not depend on the desktop environment. This may further clean some doubts: Difference between Xorg and Gnome/KDE/Xfce.
|
STACK_EXCHANGE
|
Here is the solitary most handy system I have at any time taken in regards to helping me while in the office. SQL by itself, which I'd zero prior expertise, is necessary for virtually every data analytics job I see. R is likewise ubiquitous and very imporatant. Terrific to find out how to utilize the two collectively.
I cherished The brand new option to send out assignments on the teaching assistant for strategies prior to publishing the perform to become marked. Poonam was really helpful and provided the ideal amount of steerage without the need of gifting away the proper remedy. Great system!
I came into this program knowing the basics of what Structural Equation Modeling could do. I'm leaving this system with an entire new comprehension of the ways it can be employed to reply numerous different types of analysis thoughts.
I don't believe that I've at any time taken a application that a lot more specifically impacted my job as swiftly as this system has.
23 quickly breaks freed from Mister Moloch's Command and sets out to damage all humanity making sure that he can repopulate the earth with excellent, artificial beings like himself. Now it is actually nearly GeeKeR to defeat the just about indestructible 23.
I look forward to using An additional program on stats.com - a terrific way to proceed learning inside of a structured manner, but flexible ample to go right here take part even though Existence continues.
Dr. Unwin is FANTASTIC. Wow, I think he is easily the most palms-on instructor which i've experienced in the whole Go system. He definitely cares about giving suggestions For each scholar and every assumed.
This can be very bewildering, so such a intro helps to avoid some of the pitfalls and lifeless ends. It absolutely was an exciting and exciting class, with a great deal of opportunity to create graphics and understand new strategies.
I would contend that this Bayesian Computing class (along with other Data.com classes I have taken) has extremely excellent value. I've gotten so much education and learning for rather tiny price... the assistant Trainer's comments was very helpful.
The next grep instructions echo the filename (In this instance myfile.txt) for the command line if the file is of the desired type:
Whilst made use of primarily by statisticians and various practitioners demanding an ecosystem for statistical computation and software enhancement, R may run like a typical matrix calculation toolbox – with performance benchmarks similar to GNU Octave or MATLAB. Arrays are stored in column-big purchase. Offers
This program was challenging, but immensely gratifying. The teacher's lecture notes have been created extremely Plainly and were being very easy to examine
This program gave me a stable qualifications on construction of databases and the best way to use SQL statements to question a databases
It helped me immediately with my work. I had been able to comprehend and use the principles to the time-series analysis. The R package deal I had been making use of (vars) demanded which i read the documentation, which applied A lot of your terminology and principles With this course.
|
OPCFW_CODE
|
Dagger2 Inherited subcomponent multibindings
Hope to find some help here after days and days researching about this very interested subject "Inherited subcomponent multibindings which you can find here Inherited subcomponent multibindings which is the last subject in that page.
According to the official documentation:
subComponent can add elements to multibound sets or maps that are bound in its parent. When that happens, the set or map is different depending on where it is injected. When it is injected into a binding defined on the subcomponent, then it has the values or entries defined by the subcomponent’s multibindings as well as those defined by the parent component’s multibindings. When it is injected into a binding defined on the parent component, it has only the values or entries defined there.
In other words. If the parent Component has a multibound set or map and a child component has binding to that multibound, then those binding will be link/added into the parent map depending where those binding are injected within the dagger scope if any.
Here is the issue.
Using dagger version 2.24 in an Android Application using Kotlin. I have an ApplicationComponent making use of the new @Component.Factory approach. The ApplicationComponent has installed the AndroidSupportInjectionModule.
I also have an ActivitySubComponent using the new @Component.Factory approach and this one is linked to the AppComponent using the subComponents argument of a Module annotation.
This ActivitySubComponent provides a ViewModel thru a binding like this
@Binds
@IntoMap
@ViewModelKey(MyViewModel::class)
fun provideMyViewModel(impl: MyViewModel): ViewModel
the @ViewModelKey is a custom Dagger Annotation.
I also have a ViewModelFactory implemented like this.
@Singleton
class ViewModelFactory @Inject constructor(
private val viewModelsToInject: Map<Class<out ViewModel>, @JvmSuppressWildcards Provider<ViewModel>>
) : ViewModelProvider.Factory {
override fun <T : ViewModel?> create(modelClass: Class<T>): T =
viewModelsToInject[modelClass]?.get() as T
}
A normal ViewModelFactory
The difference here is that I am providing this ViewModelFactory in one of the AppComponents modules. But the bind viewModels within the ActivitySubComponent are not getting added into the ViewModelFactory Map in the AppComponent.
In other words. What the documentation is describing is not happening at all.
If I move the viewModels binding into any of the AppComponent Modules, then all work.
Do you know what could be happening here.
Hey, did you find solution to this?
I am also struggling with this exact situation.
I had a similar situation and I had to move the ViewModelFactory provision to every sub-component instead of parent component (using module inheritance).
I would be very interested as well to see why it's not working as documented.
You're scoping your ViewModelProvider.Factory as @Singleton. This ensures that it will be created and kept within the @Singleton component.
It's safe to remove the scope since it doesn't keep any state, and it would allow the factory to be created where needed with the correct set of bindings.
The documentation is accurate. While Dagger really operates the way it is described when generating Set/Map Multibindinds, it works differently for because you are in a corner case.
Explanation by example
Imagine you have the following modules:
/**
* Binds ViewModelFactory as ViewModelProvider.Factory.
*/
@Module
abstract class ViewModelProviderModule {
@Binds abstract fun bindsViewModelFactory(impl: ViewModelFactory): ViewModelProvider.Factory
}
/**
* For the concept, we bind a factory for an AppViewModel
* in a module that is included directly in the AppComponent.
*/
@Module
abstract class AppModule {
@Binds @IntoMap
@ViewModelKey(AppViewModel::class)
abstract fun bindsAppViewModel(vm: AppViewModel): ViewModel
}
/**
* This module will be included in the Activity Subcomponent.
*/
@Module
abstract class ActivityBindingsModule {
@Binds @IntoMap
@ViewModelKey(MyViewModel::class)
}
/**
* Generate an injector for injecting dependencies that are scoped to MyActivity.
* This will generate a @Subcomponent for MyActivity.
*/
@Module
abstract class MyActivityModule {
@ActivityScoped
@ContributesAndroidInjector(modules = [ActivityBindingsModule::class])
abstract fun myActivity(): MyActivity
}
If you were to inject ViewModelProvider.Factory to your application class, then what should be provided in Map<Class<out ViewModel>, Provider<ViewModel>> ? Since you are injecting in the scope of AppComponent, that ViewModelFactory will only be able to create instances of AppViewModel, and not MyViewModel since the binding is defined in the subcomponent.
If you inject ViewModelProvider.Factory in MyActivity, then since we are both in the scope of AppComponent and MyActivitySubcomponent, then a newly created ViewModelFactory will be able to create both instances of AppViewModel and MyViewModel.
The problem here is that ViewModelFactory is annotated as @Singleton. Because of this, a single instance of the ViewModelFactory is created and kept in the AppComponent. Since MainActivityComponent is a subcomponent of AppComponent, it inherits that singleton and will not create a new instance that includes the Map with the 2 ViewModel bindings.
Here is a sequence of what's happening:
MyApplication.onCreate() is called. You create your DaggerAppComponent.
In DaggerAppComponent's constructor, Dagger builds a Map having a mapping for Class<AppViewModel> to Provider<AppViewModel>.
It uses that Map as a dependency for ViewModelFactory, then saves it in the component.
When injecting into the Activity, Dagger retrieves a reference to that ViewModelFactory and injects it directly (it does not modify the Map).
What you could do to make it work as expected
Remove the @Singleton annotation on ViewModelFactory. This ensures that Dagger will create a new instance of ViewModelFactory each time it is needed. This way, ViewModelFactory will receive a Map containing both bindings.
Replace the @Singleton annotation on ViewModelFactory with @Reusable. This way, Dagger will attempt to reuse instances of ViewModelProvider, without guarantee that an unique instance is used accross the whole application. If you inspect the generated code, you will notice that a different instance is kept in each AppComponent and MyActivitySubcomponent.
You got the right idea but it's not because of the singleton scope. It's because he created the map in the AppComponent but he's trying to add a ViewModel to the map in a subcomponent. See my explanation for more info.
The problem
It's because the map is being created in the AppComponent and you're adding the ViewModel to the map in a subcomponent. In otherwords, when the app starts it creates the map using the ViewModelFactory. But MyViewModel is not added to the map since it exists in a subcomponent.
I struggled with this for quite a few days and I agree when you say the dagger documentation doesn't outline this very well. Intuitively you think that dependencies declared within the AppComponent are available to all subcomponents. But this is not true with Map Multibindings. Or at least not completely true. MyViewModel is not added to the map because the Factory that creates it exists inside the AppComponent.
The solution (at least one possible solution)
Anyway, the solution I ended up implementing was I created feature-specific ViewModelFactory's. So for every subcomponent I created a ViewModelFactory that has it's own Key and set of multibindings.
Example
I made a sample repo you can take a look at: https://github.com/mitchtabian/DaggerMultiFeature/
Checkout the branch: "feature-specific-vm-factories". I'll make sure I leave that branch the way it is, but I might change the master at some time in the future.
When Dagger instantiates your ViewModelFactory, it needs to inject a map into its
constructor. And for all the key/ViewModel pairs in the map, Dagger must know how to
construct them at the CURRENT COMPONENT level.
In your case, your ViewModelFactory is only defined at the AppComponent level, so the map
Dagger uses to inject it does not contain any ViewModel defined in its subcomponents.
In order for Dagger to exhibit the inherited subcomponent binding behaviour you expect, you must let your subcomponent provide the ViewModelFactory again, and inject your fragment/activity with the subcomponent.
When Dagger constructs the ViewModelFactory for your subcomponent, it has access to your
ViewModels defined in the subcomponent, and therefore can add them to the map used to inject the factory.
You may like to reference Dagger's tutorial at page 10:
https://dagger.dev/tutorial/10-deposit-after-login
Please notice how the tutorial uses the CommandRouter provided by the subcomponent to have
the inherited multibinding.
|
STACK_EXCHANGE
|
I'll try to give a general answer from a non-CS perspective.
tl; dr: yes, there are errors out there. A lot of errors, clerical and not, even in oft-cited papers and books, from any field. It's inevitable: though they do their best to avoid errors, authors are human after all, and reviewers are humans too (I know, you never find a damn robot when you need one). Thus, whenever you read a paper, maintain critical thinking.
I'll start the too long section with an anecdote. When I was working at my master's thesis, some twenty years ago, I needed a result published in a much cited paper from a renowned author in the field of electromagnetics. At the time, (almost) young and inexperienced, I thought that papers were always absolutely right, especially when written by recognized authorities. To practice the technique of the paper, I decided to rederive the results: after a week spent redoing the calculations over and over again, I couldn't find the same final equation. I was able to discover the correct equation – the one I was finding – in a book published later by the same author. Indeed, it was a clerical error that absolutely didn't change anything in the paper, but it was annoying and taught me an important lesson: papers and books contain errors. And, of course, I later published papers with mistakes in equations (not for revenge!) [*].
After that first experience, I've discovered that you can find more fundamental errors, even in well known books and papers. I'll give you here a few examples, taken from different fields, to underline how broad the phenomenon is (in bold, the mistaken claim; within parentheses, the field):
- (Classical mechanics) In Newtonian mechanics, the correct equation of motion in case of variable mass is F = dp/dt. This statement can be found in many classical books about newtonian mechanics, but it is plainly wrong, because that equation, when the mass is variable, is not invariant under Galilean transformations as it is expected in Newtonian mechanics (actually, the concept of variable mass in Newtonian mechanics can be misleading if not properly handled). For a deeper discussion see, e.g., Plastino (1990), Pinheiro (2004) and Spivak's book Physics for Mathematicians, Mechanics I. As a curiosity, that wrong equation is used by L. O. Chua in this speech (14:50 min) as an example to introduce the memristor.
- (Circuit analysis) Superposition can't be applied directly to controlled sources. It was just a few years ago when I came across this statement for the first time, and I was stunned: hey, I've applied superposition to controlled sources since I was in high school, and I've always get the right result. How it possibly can't be used? In fact, it can be applied, the important thing is to apply it correctly, but there are really many professors (I have several examples from Italy and US) who don't understand this point and fail to notice that the proofs of several theorems in circuit analysis are actually based on the applicability of superposition to controlled sources. For more on this, see e.g. Damper (2010), Marshall Leach (2009) and Rathore et al. (2012).
- (Thermodynamics) The Seebeck effect is a consequence of the contact potential. This false statement can be frequently read in technical books and application notes about thermocouples.
À propos of my own errors, a couple of weeks after having written this answer I discovered an error in an equation of a published conference paper which I co-authored. A fraction that should have been something like -A/B became B/A. Hey – I told one of the other authors – how could we possibly have written this? And how did it get past the reviewers? The fact is, that that equation was associated to a simple, well-known, example given in the introduction, an example so simple that probably neither us authors nor the reviewers gave a second look at the equation (of course, how can anyone write this wrongly?). I feel that many clerical errors like this one happen because of last-minute changes to notation: you have almost finished the paper and you realize that you could have employed a better notation... so, let's change it on the fly! And here is where certain errors sneak-in. Avoid last-minute changes, if you can.
|
OPCFW_CODE
|
Saving raw request & response of Soap test step in user defined path
I want to save soap test step raw request & step in a path which is read from configuration file imported in test suite custom properties.
How can I do that?
Using the below script but with fixed location that was defined in the script.
def myOutFile = "D:/TestLog/Online_Test/PostPaidSuccess_Payment_BillInqReq.xml"
def response = context.expand( '${BillInq#Request}' )
def f = new File(myOutFile)
f.write(response, "UTF-8")
def myOutFile = "D:/TestLog/Online_Test/PostPaidSuccess_Payment_BillInqReq.xml"
def response = context.expand( '${BillInq#Request}' )
def f = new File(myOutFile)
f.write(response, "UTF-8")
Are you sure to store raw request alone using configured value?
I would suggest to avoid additional Groovy Script test step to just store previous step request / response.
Below script assumes that, there is user defined property (REQUEST_PATH) at test suite level with its value(valid file path to store data, path separated by forward slash '/' even on windows).
Instead use Script Assertion for the Billing request step itself(one more step less in the test case)
//Stores raw request to given location using utf-8 encoding
new File(context.testCase.testSuite.getPropertyValue('REQUEST_PATH') as String).write(context.rawRequest,'utf-8')
Actually there is a small difference between context.request and context.rawRequest and the above script using rawRequest.
context.request - will have the variables as it is, not the actual value.
For eg:
<element>${java.util.UUID.randomUUID().toString()}</element>
Where as context.rawRequest - will have the actual value that was sent in the request.
For eg:
<element>4ee36185-9bfb-47d2-883e-65bf6d3d616b</element>
EDIT Based on comments: Please try this for ACCESS DENIED issue
def file = new File(context.testCase.testSuite.getPropertyValue('REQUEST_PATH') as String)
if (!file.canWrite()) {
file.writable = true
}
file.write(context.rawRequest,'utf-8')
EDIT2 Based on further comments from OP, the request file name should be the current test step name.
//Create filename by concatenating path from suite property and current test stepname
def filename = "${context.testCase.testSuite.getPropertyValue('REQUEST_PATH')}/${context.currentStep.name}.xml" as String
new File(filename).write(context.rawRequest,'utf-8')
Have you defined test suite level custom property REQUEST_PATH and its value? have you used rawRequest? what happens if you have log.info context.rawRequest in the script? Does it show response?
yes i defined Request path property value and now iam getting access denied error
How did the nullpoint go? Hope you are following the answer thoroughly. You had this same error in previous question too and not sure how that was resolved. This is something very peculiar in your environment. Can you share the log from error.log?
ERROR:java.io.FileNotFoundException: D:\TestLog\Online_Test (Access is denied)
java.io.FileNotFoundException: D:\TestLog\Online_Test (Access is denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(Unknown Source)
at java.io.FileOutputStream.(Unknown Source)
at org.codehaus.groovy.runtime.ResourceGroovyMethods.newWriter(ResourceGroovyMethods.java:1637)
at org.codehaus.groovy.runtime.ResourceGroovyMethods.newWriter(ResourceGroovyMethods.java:1658)
at
org.codehaus.groovy.runtime.ResourceGroovyMethods.write(ResourceGroovyMethods.java:804)
at org.codehaus.groovy.runtime.dgm$862.invoke(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:271)
at
org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at Script27.run(Script27.groovy:1)
at com.eviware.soapui.support.scripting.groovy.SoapUIGroovyScriptEngine.run(SoapUIGroovyScriptEngine.java:92)
at com.eviware.soapui.impl.wsdl.teststeps.WsdlGroovyScriptTestStep.run(WsdlGroovyScriptTestStep.java:141)
at com.eviware.soapui.impl.wsdl.panels.teststeps.GroovyScriptStepDesktopPanel$RunAction$1.run(GroovyScriptStepDesktopPanel.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Was the parent directory present? where is the file extension?
D:\TestLog\Online_Test but they are files not 1 file so each one will be with soap test step name and suppose to be in xml formal
what is the value at RESOURCE_PATH
@MarwaAbdelgawad, Have you set D:/TestLog/Online_Test/PostPaidSuccess_Payment_BillInqReq.xml in RESOURCE_PATH in test suite properties? can you try the updated answer ? See the below EDIT for access denied error?
D:/TestLog/Online_Test
@MarwaAbdelgawad, for RESOURCE_PATH, have the same value as myOutFile that you have in the your script i.e., D:/TestLog/Online_Test/PostPaidSuccess_Payment_BillInqReq.xml. You can save the file using directory name which why you getting error.
but i want to save automatically with each test step name per test case
Which you are mention it now? not earlier. Let me update. By the way it would have been appreciated if you at least try the solution as it is once before doing your own changes. Would it not be the problem if you run the test multiple times?
@MarwaAbdelgawad, please see the updated answer EDIT2. Note that if you run the test multiple times, it may overwrite the existing file. I believe that it should also help your previous question while saving the file.
i tried and still getting access denied issue Thu ERROR:java.io.FileNotFoundException: D:\TestLog\Online_Test (Access is denied)
java.io.FileNotFoundException: D:\TestLog\Online_Test (Access is denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(Unknown Source)
at java.io.FileOutputStream.(Unknown Source)
at org.codehaus.groovy.runtime.ResourceGroovyMethods.newWriter(ResourceGroovyMethods.java:1637)
at org.codehaus.groovy.runtime.ResourceGroovyMethods.newWriter(ResourceGroovyMethods.java:1658)
at org.codehaus.groovy.runtime.ResourceGroovyMethods.write(ResourceGroovyMethods.java:804)
at org.codehaus.groovy.runtime.dgm$862.invoke(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:271)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:120)
at Script2.run(Script2.groovy:5)
at com.eviware.soapui.support.scripting.groovy.SoapUIGroovyScriptEngine.run(SoapUIGroovyScriptEngine.java:92)
at com.eviware.soapui.impl.wsdl.teststeps.WsdlGroovyScriptTestStep.run(WsdlGroovyScriptTestStep.java:141)
at com.eviware.soapui.impl.wsdl.panels.teststeps.GroovyScriptStepDesktopPanel$RunAction$1.run(GroovyScriptStepDesktopPanel.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
Really sorry, you do not seem to show or try exactly what is provided in first place and mixing your own stuff. There are two lines in the given script, it is showing in your log that line 5 has problem. Please show screen shot. Making simple things more complicated.
def file = new File(context.testCase.testSuite.getPropertyValue('TESTLOG') as String)
if (!file.canWrite()) {
file.writable = true
}
file.write(context.rawRequest,'utf-8')
@MarwaAbdelgawad So, you did not pick the script from answer EDIT2? where it appends current test step name to the path. I suggest you to carefully read the answers or comments. You making the same errors repeatedly.
where TESTLOG is the property name and value is D:/TestLog/Online_Test and suppose to complete it with test step name.testcase name.xml
|
STACK_EXCHANGE
|
How and why do people cheat? Most (if not all) people would like to think of themselves as honest. But why are exams designed as if people are cheaters? How do otherwise honest individuals inevitably end up in certain situations in which they’re more likely to cheat? In an exam? Online exam?
For edtech practitioners and advocates like us at moodLearning, learning why we cheat might help us avoid the temptation of cheating or at least not help incentivize the behavior among ourselves. However, if the question is all about the hows and whys of cheating, we might just as well send you down the work of psychologist and behavioral economist Dan Ariely as a good place to start. There we'd learn more about things like feeling disconnected from the consequences or when our willpower is depleted, can lead us down to a cheater's path, however we put ourselves in high regard.
In any case, even after reading about the whys and hows of cheating, you still feel the need for a toolbox to help prevent cheating, especially in online exams, then you could put our work to good use. At moodLearning, we recognize that online cheating is a complex behavior. Yet, we definitely can help not incentivize it in at least one narrow area of online test taking.
What we propose is the use of an arsenal of measures, including technical, organizational, methodological, and behavioral--fine-tuning them according to the needs of specific exams for particular test-takers. Most of these measures could well fall outside the exam itself. Hence, the need to orchestrate the conduct collaboratively. The level of "strictness" would depend on what the exam exercise is all about, what goals to achieve, or how "trusting" the teacher or proctor is.
If you're on a moodLearning-supported learning management system, you have at your disposal at least all the technical measures indicated in this infographic:
For instance, using Safe Exam Browser, you can implement an exam lockdown: only one access device is allowed, only one browser can be used (with only one tab available), only one screen allowed. You can force fullscreen and have printing, download, and clipboard disabled, cache cleared, or external storage device disallowed. If the test taker logs out, you may disallow re-entry. You can even force the test taker to signify adherence to an Honor Code.
We certainly do not recommend relying solely on technical means to thwart cheating. What competences do you expect to measure with the exam? What kind of people or professionals do you expect to come out of your training program and exams? While the technical measures would give you a semblance of "strictness", at the end of the day, what the exam is about and what it seeks to achieve are all that matter.
|
OPCFW_CODE
|
Alright guys, I've got the basic elements of the GUI done up in processing, including the core objects to handle screen display, display of the keypad (absolutely essential for hardware w/o keypads - a little annoying for PC users, I know =), and the general workflow for handling input.
The Processing-based GUI uses the excellent ControlP5 library from Andreas Schlegel, who has been very communicative and fast to add some new features we needed to the library distribution.
Some notes about the intent of this GUI:
* It should run on any Mac, PC, or Linux-based computer
* It should run, as is, on any Linux-based PDA
* It should run, as is, on any Linux ARM platform
* Windows PDAs/Phones are a hope, but not direct target
* Iphone is not a target for this GUI
* Android is an eventual target, but dependent upon the Android Processing port
* It should enable all functionality without the use of a physical keyboard
* It should expose all functionality of the engine
* It should provide both quick and easy means of shooting a video, along with the ability to easily define the more advanced capabilities of keyframing and action scripting
* It should enable a first-time user to create their first video in under ten minutes
* It should enable experienced users to create simple videos (no AS/KF's) in under 1 minute
* It should abstract the user from the engine's complexity
** allow naming of axes, input by relative distance (inches or degrees instead of steps)
** allow manual control of axes using simple directional buttons
** abstract user from complex keyframe activities, instead provide options of "run action here, etc."
* Allow for both real-time interaction (changing of parameters on the fly) w/ engine, and 'scripted' style activity, where a user first inputs and checks all individual settings before uploading to engine
Included in the svn repo is a PDF of the latest worfklow document, I'll start working on getting more of the known conversations about the UI documented here in the forum.
For new developers: those who haven't yet contributed to the source code or design are very much welcome to suggest or make changes, no matter your programming abilities. However, until we've established a working communication channel between us and you, we will need you to submit them to us for inclusion, rather than letting you commit to svn directly. After a couple of submissions have been made successfully, at this point will grant access to commit to subversion. All we'll ask then is that you work in your own branch until everyone can agree that the changes be included - at which point in time we'll merge the code.
Please keep any discussion about the OM GUI development to this forum, that way everyone knows where to look. =)
The SVN repository for the code as it stands can be found here:
Some notes about this version (correct as of my posting this message):
* The only screens that are working so far are the movie menu->easy movie setup, and then the camera sub-menu there.
* There is no communication with the engine yet, we are simply laying out the UI presently
* There are still a lot of efficiencies to be gained and modularity to be added. Nothing is set in stone yet
* There is a known issue where left-navigation panel buttons are not retaining their clicked state. I broke this in some recent changes and will track it down and fix it in the next couple of days.
I look forward to your feedback and suggestions!
|
OPCFW_CODE
|
3 edition of C++ How To Program & C++ in the Lab, Lab Manual (4th Edition) found in the catalog.
C++ How To Program & C++ in the Lab, Lab Manual (4th Edition)
Harvey M. Deitel
September 16, 2002
by Prentice Hall
Written in English
|The Physical Object|
Laboratory Exercises, C++ Programming General information: The course has four compulsory laboratory exercises. You shall work in groups of two people. Sign up for the labs at The labs are mostly homework. Before each lab session, you must have done the assignments (A1, A2, ) in the lab, written and tested the programs. 6. Students will bring their lab manual and are understood to have gone through the manual thoroughly. 7. The students should maintain and preserve the lab manual properly and deposit the same with the concerned lab Incharge at the end of the semester. In case of non-submission of the lab manual, a fine would be charged as per the norms.
Object Oriented Programming Lab Manual This book is an attempt to standardize the lab instructions through the development of a lab curriculum that is based on the class curriculum. Introduction C++ programming that you studied in the previous semester is purely based on writing a set of instructions in a particular sequence such that. End of Lab 6 - C++ Classes Complete the Exercises on the Answer Sheet. Turn in the Answer Sheet and the printouts required by the exercises. References 1. Wang, Paul. C++ with Object-Oriented Programming, PWS Publishing Company, Return to the top of this lab Return to the link page for all labs.
Introduction to the lab manual 2. Lab requirements (details of H/W & S/W to be used) 3. List of experiments 4. Format of lab record to be prepared by the students. 5. Marking scheme for the practical exam 6. Details of the each section of the lab along with the examples, exercises & expected viva questions. Instructor Solutions Manual (Download only) for C++ How to Program (Early Objects Version), 10th Edition Paul Deitel, Deitel & Associates, Inc. Harvey M. Deitel, Deitel & Associates, Inc.
Out in art
Proceedings, fourteenth annual West Virginia Surface Mine Drainage Task Force Symposium, Morgantown, West Virginia, April 27-28, 1993
Inside the night
Elements of the infinitesimal calculus
The fountain-head of all blessings: or, The great store-house opened.
International careers with Census
Wrestling with an angel
future of marriage
Wild flowers on the verges.
Education and training for the 16 to 19 age group
Designed to accompany the textbook, C++ How to Program, Fourth Edition, in a laboratory environment. This suite of course materials offers hundreds of exercises that cover introductory and intermediate C++ programming concepts by enabling students to learn by doin For introductory C++ programmers desiring additional hands-on activities to /5(2).
This Lab Manual is designed to accompany the book, C++ How to Program, Third Edition in a laboratory environment. It offers hundreds of exercises that cover introductory and intermediate C++ programming concepts by enabling users to "learn by doing"—a core philosophy at Deitel & Associates, Inc.4/5(1).
/* X, H-4, Deitel, C in the Lab, 2E */ This Lab Manual is designed to accompany the book, C How to Program, Fourth Edition in a laboratory environment. It offers hundreds of exercises that cover introductory and intermediate C programming concepts by enabling users to "learn by doing" — a core philosophy at Deitel & 5/5(2).
Get this from a library. C++ in the lab: lab manual to accompany C++ how to program, fourth edition. [Harvey M Deitel; Paul J Deitel; T R Nieto].
description: Product Description: This Lab Manual is designed to accompany the book, C++ How to Program, Third Edition in a laboratory environment. It offers hundreds of exercises that cover introductory and intermediate C++ programming concepts by enabling users to "learn by doing"—a core philosophy at Deitel & Associates, Inc.
Online extras were there - all of them. End-of-chapter programming challenges ranged in difficulty. The exercises went to the point, and offered additional chance to absorb the lessons. C++ How To Program, 8th Ed. is not the best book to use for an Introduction to Programming course.
It is fine for a C++ course for students who know the s: C++ How to Program is a well-written C++ textbook designed for use in college undergraduate computer science classes.
It includes all the information you'd need regarding computers, programming languages, and C++. At the end of each chapter is a summary of the concepts covered, and a set of self-text s: Reference Book: 1.
Reema Thareja, Computer Fundamentals and Programming in C, Oxford Press, Practical Examination Procedure: 1. All laboratory experiments (Fourteen) are to be included for practical examination. Students are allowed to. Lab Manual CS Introduction to Computer Programming C++ IntroductionINTRODUCTIONThe objective of this lab manual is to give students step-by-step examples to becomefamiliar with programming concepts, design, and coding.F E AT U R E STo ensure a successful experience for instructors and students alike, these lab munalsincludes.
A Complete Guide to Programming in C++ Book of Year. A Laboratory Course in C++ Data Structures Book of Year. An Introduction to C++ A Complete Beginners Guide Book of Year.
21st Century C Book of Year. User’s Guide for VectorCAST/RSP for C/ C++ Book. About C++ Programming. Multi-paradigm Language - C++ supports at least seven different styles of programming. Developers can choose any of the styles. General Purpose Language - You can use C++ to develop games, desktop apps, operating systems, and so on.; Speed - Like C programming, the performance of optimized C++ code is exceptional.; Object-oriented - C++.
Object Oriented Programming Lab Manual Continuous Assessment Practical Exp No NAME OF EXPERIMENT RemarkDate Sign 1 Write a C++ program using Static data member to record the occurrences of the entire object.
2 Write a C++ program to use Multiple Constructor in a class for displaying complex value. > C++ How to Program (6e) by Deitel & Deitel - Solution Manual, Code Solution, Lab Manual > > CMOS VSLI Design A Circuits and Systems Perspective (3e) by Neil Weste and David Harris > > Computer Organization and Architecture Designing for Performance (8e) by William Stallings - Project Manual + Solution Manual + Testbank >.
C++ How to Program, 10th Edition. Personalize learning with MyLab Programming MyLab ™ Programming is an online learning system designed to engage students and improve results.
MyLab Programming consists of a set of programming exercises correlated to the programming concepts in this book. C PROGRAMMING LAB MANUAL For BEX/BCT/ BY BABU RAM DAWADI RAM DATTA BHATTA. / From the Book: Capsules of C Programming Appendix - B C PROGRAMMING LAB SHEETS Dear Students, Welcome to C programming Lab.
For the practical works of C programming, you have to complete at least eight to ten lab activities. This book is devoted to practical C++ programming. It teaches you not only the mechanics of the language, but also style and debugging.
The entire life cycle of a program is discussed, including conception, design, writing, debugging, release, documentation, maintenance, and. GE – Computer Programming in “C++”, King Saud University 6.
Lab (2) Objectives of this lab: • More about math functions • Learn about Simple. and. if-else. statement. • Simple loop mechanism. • Exercise 1: write a C++ program to evaluate the. Programming Grade in Industrial Technology Engineering This work is licensed under a Creative Commons Reconocimiento-NoComercial-CompartirIgual España License.
Start the IDE from the Program folder Dev-C++ or Bloodshed Dev-C++. Figure 1. Running Dev-C++ in the computer lab b. Create a project. Unlike static PDF C++ How To Program 7th Edition solution manuals or printed answer keys, our experts show you how to solve each problem step-by-step. No need to wait for office hours or assignments to be graded to find out where you took a wrong turn.
MyLab Programming is the teaching and learning platform that empowers you to reach every student. When combined with educational content written by respected scholars across the curriculum, MyLab Programming helps deliver the learning outcomes that students and instructors aspire to. Learn more about how MyLab Programming helps students succeed.
This Lab Manual for C++ Programming: From Problem Analysis to Program Design has been updated in accordance with the first seventeen chapters of the third edition of Dr. D.S. Malik's text. Ideal for a lab setting, this lab manual continues to offer a hands-on approach for tackling difficult introductory C++ programming topics.1/5(1).
For courses in computer programming. C How to Program is a comprehensive introduction to programming in C. Like other texts of the Deitels’ How to Program series, the book serves as a detailed beginner source of information for college students looking to embark on a career in coding, or instructors and software-development professionals seeking to learn how to program Reviews: 4.C++ Programming - Pearson course.
|
OPCFW_CODE
|
So Berkeley’s “to be is to be perceived” thing has been bothering me ever since we studied him last semester. So today I worked out a quick (and probably logically flawed) argument against his idea.
In case you may not know, here’s his idea in a few sentences: nothing exists if it is not being perceived in some sense by someone. This would make him the advocate of the idea that a room winks out of existence when everyone leaves it, but he’s got his catch to establish the idea that this does not occur…he says that God is constantly watching everything*, and therefore everything is always in existence (handy, huh?). But basically, he says that there is no material underpinning to the world—everything is and exists solely because it is perceived. There is no material world, existence is dependent on perception.
So because I have no job, no life, and an online class that is extremely easy and therefore takes up very little of my time, I sat around today and tried to work out a semi-coherent argument stating that there is something independent of our perceptions, and this thing is necessary for existence. This is confusing, but it works out in my head, so now I have to write it in a coherent manner. I want to see if other people can follow this train of thought, so I’m going to break it into little small sentences that build on each other in a sequential manner. It kind of work likes a proof, but I didn’t really feel like making a proof, so this is what you get.
- Existence cannot be perception due to the fact that to perceive something (the “positive”) requires space (the “negative”), or something in which the thing is perceived.
- The “negative,” or space, is imperceptible by itself.
- You need to perceive the “positive”, or things, in order to perceive the negative.
- But to perceive the positive, you need to perceive the negative.
- If existence were to be solely perception, it would be impossible for the things we perceive to exist because we are unable to perceive space, the quality that allows things to exist.
- However, one cannot perceive the positive without the negative, or the negative without the positive.
- If we take Berkeley’s theory as the base, then we have to perceive things in order for them to exist.
- Because of this idea, that means we would have to perceive space to perceive at all, since perceiving the negative is necessary to perceive the positive.
- However, we are not able to perceive space.
- But since we are able to perceive things, that must mean that space exists in some sort of sense.
- Space, therefore, must exist independent of our perception, because we can’t perceive it and yet we know it is there because we can perceive the positive, or things.
In fact, I’d say that space, not perception, is necessary for existence.
Does that make any sense at all? If it does, does it seem like a circular argument? I’d like to hear your reactions to this, especially if it seems unclear.
*Yes, he has a way of explaining how God exists if no one is necessarily perceiving him…don’t ask me to explain it, though, ‘cause I can’t remember it.
|
OPCFW_CODE
|
Inside every silver cloud is a nasty lining. The VMWare did, in fact, work well. After I had tested a few things I decided to remove it from my laptop and save the virtual hard drive for use on my desktop after I get home. This was a good decision, but it’s execution was horrible. The VMWare uninstall stalled about halfway through and locked up the computer. Vista was a slightly better error recovery but even it couldn’t kick start the uninstall procedure. Calling up Program manager and trying to kill the process didn’t work. After messing about for almost half an hour, I decided to try killing explorer.exe. This did work, sort of. I regained control of the computer, but now had several uninstall windows on the desktop that wouldn’t go away. Apparently, they were still in the video RAM, but not connected to Windows at all so I couldn’t do a thing to them.
I looked into the Event Viewer and saw quite a few errors where a VMWare driver or two failed to exit properly. Most notably were the DHCP and NAT drivers. I had to go to the Services applet and stop them both, then mark them as ‘disabled’. Finally my computer would reboot. When it came back up, I once again tried to uninstall VMWare. This time it wasn’t listed as an installed program. I checked on the hard drive and everything was still there so I just grabbed it all and deleted it. Then I went into the Registry and deleted all mention of VMWare. A tedious task at best. A final reboot and all was good again.
A word to the wise: if you want to put VMWare on a Vista machine, do NOT start it up using a wireless adapter – do it with a wired connection. The VMWare network drivers interfere heavily with your wireless connection. So bad that ONLY the virtual machine has Internet connectivity. Your local host machine loses its connection to the Internet. And, if you use wireless, you stand a good chance of blowing up like mine did if you try and uninstall while your wireless network is in operation as the VMWare drivers will NOT back out gracefully.
I am sure that there are posts in their help forums concerning this, but as a simple casual user of VMWare I had no intention of searching through posts looking for answers to my current problem. Anyway, VMWare is gone from the machine.
Today is Thanksgiving and the daily newspaper arrives at over four pounds – not cost, but weight, with all the advertisements for stores that open at very early morning hours to trap consumers with tens of dollars to spend. “Black Friday”, as it is known, is predicted to ‘not be as good as previous years’. Gosh, I wonder why? Could it be that we have taken a horrible financial beating the entire year? Could it be that everyone is simply waiting to see what President-elect Obama is going to do his first 90 days in office? Could it be that everyone who has any money at all is saving it to spend on really important things – like food, heating, and household expenses?
State governments are missing a really good opportunity to garner some extra cash by raising the gasoline taxes slightly as prices drop the same amount. We’d never miss it, or, perhaps, even see it; but they would grab some extra cash. One never knows.
|
OPCFW_CODE
|
Applications and API Keys
An API Key is a token which identifies a specific licensed client application. The API Key is reserved for specific organizations, such as a financial institution's development team or IT department, or an application vendor. Each client application should use its own API key.
To request an API key, follow the following steps:
- Register as a user and log onto the Apiture developer portal
- Register the client application
- Choose an API product that the client application will use
- Identify one or more API environments where you wish your client application to call Apiture Digital Banking APIs
See also Secure Access for additional guidance on API key management and keeping your API keys secure.
A developer requests API keys to allow their app to call Apiture APIs in one or more
API environments. These may be test environments, partner environments, demo
environments, or full production environments. For example, a (fictional) financial
institution 3rd Party Bank operating on the Apiture platform may have three
|API Product||API Products are collections of Apiture Digital Banking APIs that support a specific set of features. Licenses are granted for specific API products. Examples of API products may be Digital Banking or Digital Account Opening. Some products may include (embed) other products.|
|Client Application||A web application, mobile application, or service application that uses the Apiture Digital Banking APIs. Each client application is a unique entity and requires its own API key.|
An API key is a unique private token that identifies a client application within a
specific API environment.
At present, a unique API key is required for each combination of:
If an application runs against multiple environments (such as dev, uat, and production), the client should register separate API keys for each.
A client ID and client secret (also known as client credentials) are additional authentication credentials. These credentials allow non-interactive service applications to authenticate to the Apiture APIs. The client ID and client secret combination are only used to authenticate a client application within secure client environments, such as back-office automation processes behind a secure firewall.
Each client application is tied to a partner, also known as a
partner organization. The partner is based on the email domain of the developers in
that organization. For example, developers with
Members of a partner organization can invite other developers with the same email domain to join the partner organization on My Company page.
Registering a Client Application
To request API keys, begin by registering a new client applications.
- You must be logged in to register a client application.
- In addition, you must complete your profile on the My Profile page and accept the terms and conditions outlined there.
- If your partner organization data is not complete, enter your partner information in the My Company page.
(My Profile and My Company are also available from the account menu under your user ID at the top right of the page.)
To register a new application, open the My APIs page and click the Create a new Application button. Fill in the information on the form:
The form fields are described below:
|Application Type||Select the type of application: a desktop app, a web application, a mobile application, or a secure back-office service application. The first three require user authentication; the final may use client credentials authentication.|
|Authentication||Select the type of authentication the application will use. Authentication and security constraints for applications vary by application type. For example, web and mobile applications are harder to keep secure because they often operate in insecure public networks. Web and mobile applications require Authorization Code Flow which uses an OAuth2 call back URL to complete authentication and authorization flows and return an access token for that user. Trusted service applications use a unique Client ID and Client Secret for the Client Credentials Grant OAuth2 flow to obtain a service token|
|Application Name||This is your name for your application, so you can return to and access the application on the dev portal. You should use a unique name for each application within your company/organization name.|
|Description and Purpose||A description of your application and its intended use. This is for your information, to help you manage multiple client applications.|
|Site URL / Download Location||The URL of the application, or a web page which describes the application or allows others to download the application. This is for information purposes.|
|Redirect URL||The redirect URL used in Authorization Code Flow. When a user authenticates with the application's authorization server, the authorization server will redirect to this URL to complete the authentication process. This prevents other applications from using the application's client ID and secret.|
|Products||Each client application must select one or more API product, which determines which APIs the client is licensed to invoke. In the future, API rate limits will also be available.|
|Environments||Choose one or more runtime API environments where the client application will run. When approved, the developer portal will provision a unique API key for each environment.|
Click the Submit button. The portal will queue a request to provision the API keys for that application. When that process is complete, you will receive an email.
Return to the My APIs page to view the API keys and Client ID/secret for the application. Only the application owner may view the API keys and Client ID/secrets. However, an application owner may add other members of the company as co-owners and grant them access to view and manage the API keys. This should be done with caution.
Managing Your Applications
The application owner can return to My APIs at any time to manage their applications. When you select the expand icon for the application in the My Applications list, the table expands to show all environments and the status of each API key.
The application owner(s) can also perform the following actions on the applications:
|Show Details||View the API keys and credentials for each environment. You can independently revoke API keys for each environment|
|Edit the application||This operation lets you change the application description, product, application URL, products, and environments. New keys will be provisioned for any new environments. Use this with caution, as keys associated with removed environments are deleted, not disabled, and they are not enabled again if you edit the application later add those environments back.|
|Invite Owners||You may invite other developers to the Apiture developer portal. Other users can see the company's applications, but not edit or view the API keys for applications owned by others in the company. They can create and edit their own applications. Note: Members must share the same email domain.|
You can temporarily deactivate an application. This deactivates the application's
corresponding API keys and client credentials from all environments. This is useful if you
suspect misuse or suspicious activity with your API keys.
Warning: After disabling the application, any deployed application instances will no longer work. All API calls using these keys or client credentials will fail until you restore the keys.
|Restore||You can restore an application you have temporarily deactivated. This restores the application's corresponding API keys and client credentials in all its environments.|
Delete the application. This removes the application's API key from all environments and
deletes the application from the My Applications list. Any deployed instances of the
software applications using these keys or client credentials will no longer work: all API
calls using these keys will fail. Use this only when you and your application users are no
longer using the application.
Warning: This operation cannot be undone.
Using API Keys
See also Secure Access for additional guidance on how to pass your client application's API keys to the Apiture Digital Banking APIs.
|
OPCFW_CODE
|
I am trying to build/install Mapnik (3.0.10) on a CentOS 7 system and I am having trouble getting all the dependencies in place and recognized.
I have installed gcc/g++ compiler from the gcc6 series to ensure c++14 support, which Mapnik needs. If I point directly to my new
gcc and poll its version I get this:
[root@raven ~]# /usr/local/bin/gcc --version gcc (GCC) 6.5.0 Copyright (C) 2017 Free Software Foundat/..snip..
Next I temporarily overrode the
$CXX environment variables so the updated compiler would be used for subsequent builds, then I installed Boost 1.69.0 from source, like this. Note that this actually installs a second Boost, so the
--prefix parameter establishes where this alternative Boost will install to:
export CC=/usr/local/bin/gcc export CXX=/usr/local/bin/g++ cd /root/downloads wget https://dl.bintray.com/boostorg/release/1.69.0/source/boost_1_69_0.tar.gz tar -xzf boost_1_* cd boost_1_* ./bootstrap.sh --prefix=/opt/boost ./b2 install --prefix=/opt/boost --with=all
Now, if I check
/opt/boost, I see what I expect..
[root@raven ~]# dir /opt/boost/ include lib
Finally I get to installing Mapnik itself. I'm basically using the approach noted here. However, I've already got several of the dependencies in place due to previously installing GDAL and PostGIS. But, when I run Mapnik's
.configure step, it fails to find my optional dependencies.
For example, I built
proj from source and know exactly where it is..
[root@raven ~]# dir /usr/proj49/ bin include lib share
ldconfig finds it too..
[root@raven mapnik-v3.0.10]# ldconfig -p | grep libproj libproj.so.12 (libc6,x86-64) => /usr/proj49/lib/libproj.so.12 libproj.so.0 (libc6,x86-64) => /lib64/libproj.so.0 libproj.so (libc6,x86-64) => /usr/proj49/lib/libproj.so libproj.so (libc6,x86-64) => /lib64/libproj.so
So I specify my alternative Boost location as well as
.configure parameters.....but it still fails to find
proj? Here's some abridged output. Note that it finds my Boost, as specified, but it fails to find
proj, in spite of the config params..
[root@raven ~]# cd /root/downloads/mapnik-v3.0.10 [root@raven mapnik-v3.0.10]# ./configure BOOST_LIBS=/opt/boost/lib BOOST_INCLUDES=/opt/boost/includes PROJ_LIBS=/usr/proj49/lib PROJ_INCLUDES=/usr/proj49/include ..snip.. Searching for boost libs and headers... (cached) Found boost libs: /opt/boost/lib Found boost headers: /opt/boost/include Checking for C++ header file boost/version.hpp... yes Checking for Boost version >= 1.47... yes Found boost lib version... 1_69 ..snip.. Checking for C library proj... no Could not find optional header or shared library for proj ..snip..
Most of the optional dependencies seem to be similarly overlooked—
libjpeg-devel, sqlite3, the tiff stuff, proj, etc. And most of those are package installs, not source builds.
As we'll be using this server for quite some time I would like to have the latest Mapnik with its full complement of support (especially for proj, png, and jpeg). It's especially vexing that my
proj install isn't tying in, because I know exactly where it is and provided Mapnik those parameters.
I apologize for the long noisy read, but does anyone see what I am missing?
[Update: 4.24.19 5pm]
Ok, I may have figured it out. I was hoping it had something to do with environment/shell setup, rather than having to build all dependencies from source with the same compiler, and stumbled onto an old post where a "Dom Lehr" recommended modifying the
LD_LIBRARY_PATH environment variable to include specifically
/usr/local/lib. Well, I tried that, and it didn't solve the problem. However, I optimistically expanded the parameter values to include some additional locations, and now I can complete the
.configure step with all dependencies recognized. Here's how I did it..
vi /etc/profile.d/sh.local # Add colon-separated paths to the LD_LIBRARY_PATH variable like this.. export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/usr/local/lib64:/usr/lib:/usr/lib64:
I added everything I could think of where I found seemingly related .so files. Save the file, then..
Then I logged out of my terminal session and logged back in to refresh the shell. Then double-checked that it worked:
[root@raven mapnik-v3.0.10]# echo $LD_LIBRARY_PATH /opt/remi/php70/root/usr/lib64::/usr/local/lib:/usr/local/lib64:/usr/lib:/usr/lib64:
It did. So I refreshed my temporary compiler arguments, went to the Mapnik source folder, and re-ran the
.configure instruction, and it worked without issue..
export CC=/usr/local/bin/gcc export CXX=/usr/local/bin/g++ cd /root/downloads/mapnik-v3.0.10 ./configure BOOST_LIBS=/opt/boost/lib BOOST_INCLUDES=/opt/boost/includes
And it worked.
Now, whether it will build without issue is still a mystery, but this seems to have done the trick!
Well I spoke to soon. Now
make fails with..
<mapnik::geometry::geometry_collection<double> > >&&’ scons: *** [src/json/mapnik_json_geometry_grammar.o] Error 1 scons: building terminated because of errors. make: *** [src/json/libmapnik-json.a] Error 2
....which is in-line with a lot of others posts I've found where folks are having trouble with mismatched compiler/dependency issues. Back to the drawing board.
|
OPCFW_CODE
|
M: Show HN: Add hidden messages to your tweets, Facebook posts, WORD docs and more - hellotextmark
Hey Hacker News! This messa ge has a h i dden m essa ge i nside it called a textmark. Find out what it is at textmark.io, use the find textmark tool. We created textmark to allow you to protect your valuable content or send hidden messages to friends, the skies the limit. Just copy and paste this message into the "find textmark" box.
R: lukasschwab
Interesting. Did some quick poking around.
Looks like the system adds a number of non-printing unicode characters into
the string. In this thread title, for example, the textmark is stored in the
phrase "messa ge has a h i dden m essa ge i nside."
(Thanks, textdiff!)
Specifically, it adds non-printing unicode characters repeatedly. I'd guess
these form a unique series, which is mapped to a record in the textmark
database.
If I wanted to automatically remove the textmark from a text, in order to
steal your content, I would write a script to automatically remove any non-
printing unicode characters that appear in it. This StackOverflow answer
includes a script achieving precisely that task:
[https://stackoverflow.com/a/11598864/6226586](https://stackoverflow.com/a/11598864/6226586)
R: ozgrozer
I was thinking the same things like you. Looks like they don't event hide the
actual message. They basically hide some sort of ID that points a row in the
database. After some research I've made a web-app which can hide any message
into any text.
If you wanna look it's on G itHub. Here are the links.
Demo: [https://ozgrozer.github.io/titus/](https://ozgrozer.github.io/titus/)
Source:
[https://github.com/ozgrozer/titus/](https://github.com/ozgrozer/titus/)
R: bernardhalas
Hi there, great idea. Sort of like text stenography.
Is the message really hidden in this message? Or are you just doing the lookup
your database of hidden messages based on the hash of the public message?
Regarding the website UX - after I pasted the message to your text field I
thought the hidden message is the youtube video. But that didn't make sense to
me and only later I understood the message is above the video.
If you are interested in more UX feedback, feel free to visit
[https://usability.testing.exchange](https://usability.testing.exchange) which
is a free community platform built exactly for this purpose.
Also, looking at the payment options - I'm missing a pay-per-message approach.
For your consideration.
R: hellotextmark
Great feedback the message has been updated.
R: arkitaip
Cool idea. Needs a demo to show how it actually works and not just for
decoding messages. Like what's the use case.
Also, all those trademark symbols makes this look comically unprofessional.
R: hellotextmark
Hey thanks for the feedback! Definitely will add a video shortly just putting
it out there to get great feedback like yours.
R: r4meau
I only changed a word from it and it wouldn't find the trademark anymore. Not
that useful IMHO.
R: lclr
Hey textmark.io! Steganogra phy is so c ool, b ut
e nsu r e ownership without a robust crypto system is a little bit
pretentious ... And in some cases, dangerous!
|
HACKER_NEWS
|
degenerate GROUP_BY behavior when one host is nearly out of resources
Using a ["hostname","GROUP_BY"] constraint on an app with many instances one expects for the instances to be distributed evenly across available agents (subject to all of the complex baggage associated with what "evenly" means to Marathon at any given point in time).
It seems like the list of hostname values that are subject to the spreading logic in GROUP_BY is chosen without regard to the other resource fit parameters. For example:
Two agents, for exaggeration purposes one is a much smaller agent compared to the other in terms of total cpu/mem resources.
The smaller agent is nearly maxed out, but is still emitting offers with values like mem=1G, cpus=0. If the agent was totally out of ALL resource types then I suppose there wouldn't be an offer at all and this problem would go away, but it's not and there is an offer.
The larger agent is totally vacant.
Someone tries to deploy an app with 16 instances with [["hostname","GROUP_BY"]].
I would expect that because there is only one agent that can match the resource requests (the larger one) then the GROUP_BY will just spread over the one remaining agent (putting 16 instances on the same agent).
What actually happens is that somehow those stray Offers from the smaller agent disrupts this logic and causes the deploy to stall out at around 3/16 instances deployed.
Reproducible scenario using docker-compose: https://github.com/rboyer/marathon-groupby-issue-reproduction
The marathon logs during the stall:
#################################################################
### correctly failing to match the smaller agent (<IP_ADDRESS>) ###
[2016-08-31 19:50:01,719] INFO Offer [110679d4-9a6a-4198-9db8-b913d1bb6a5b-O11]. Considering unreserved resources with roles {*}. Not all basic resources satisfied: cpus NOT SATISFIED (0.01 > 0.0), mem SATISFIED (16.0 <= 16.0) (mesosphere.mesos.ResourceMatcher$:marathon-akka.actor.default-dispatcher-5)
[2016-08-31 19:50:01,720] INFO Offer [110679d4-9a6a-4198-9db8-b913d1bb6a5b-O11]. Insufficient resources for [/sleep] (need cpus=0.01, mem=16.0, disk=0.0, ports=(1 dynamic), available in offer: [id { value: "110679d4-9a6a-4198-9db8-b913d1bb6a5b-O11" } framework_id { value: "110679d4-9a6a-4198-9db8-b913d1bb6a5b-0000" } slave_id { value: "110679d4-9a6a-4198-9db8-b913d1bb6a5b-S1" } hostname: "<IP_ADDRESS>" resources { name: "ports" type: RANGES ranges { range { begin: 31000 end: 31755 } range { begin: 31757 end: 32000 } } role: "*" } resources { name: "mem" type: SCALAR scalar { value: 1008.0 } role: "*" } resources { name: "disk" type: SCALAR scalar { value: 3988.0 } role: "*" } url { scheme: "http" address { hostname: "<IP_ADDRESS>" ip: "<IP_ADDRESS>" port: 5051 } path: "/slave(1)" }] (mesosphere.mesos.TaskBuilder:marathon-akka.actor.default-dispatcher-5)
[2016-08-31 19:50:01,721] INFO Offer [110679d4-9a6a-4198-9db8-b913d1bb6a5b-O10]. Constraints for app [/sleep] not satisfied.
The conflicting constraints are: [field: "hostname"
operator: GROUP_BY
] (mesosphere.mesos.ResourceMatcher$:marathon-akka.actor.default-dispatcher-5)
###########################################################
### errantly failing to match larger agent (<IP_ADDRESS>) ###
[2016-08-31 19:50:01,721] INFO Offer [110679d4-9a6a-4198-9db8-b913d1bb6a5b-O10]. Insufficient resources for [/sleep] (need cpus=0.01, mem=16.0, disk=0.0, ports=(1 dynamic), available in offer: [id { value: "110679d4-9a6a-4198-9db8-b913d1bb6a5b-O10" } framework_id { value: "110679d4-9a6a-4198-9db8-b913d1bb6a5b-0000" } slave_id { value: "110679d4-9a6a-4198-9db8-b913d1bb6a5b-S0" } hostname: "<IP_ADDRESS>" resources { name: "cpus" type: SCALAR scalar { value: 0.98 } role: "*" } resources { name: "mem" type: SCALAR scalar { value: 992.0 } role: "*" } resources { name: "disk" type: SCALAR scalar { value: 3988.0 } role: "*" } resources { name: "ports" type: RANGES ranges { range { begin: 31000 end: 31295 } range { begin: 31297 end: 31903 } range { begin: 31905 end: 32000 } } role: "*" } url { scheme: "http" address { hostname: "<IP_ADDRESS>" ip: "<IP_ADDRESS>" port: 5051 } path: "/slave(1)" }] (mesosphere.mesos.TaskBuilder:marathon-akka.actor.default-dispatcher-5)
I think it's related to #3220
We've hit this as well. Different racks have different number of Mesos machines and one full rack can easily kill the deployment.
If nobody at mesosphere has the bandwidth to fix this, is there anyone who could spare some time to guide me into figuring out what fixing this would entail?
Can you share your idea for the solution with some examples how it will work?
For implementation reference you can take a look at https://github.com/mesosphere/marathon/pull/3885 it touches all places that need to be changed to add new constraint.
I'm suggesting changing the behavior of GROUP_BY to only synthesize the "how many hosts exist" metric it uses from offers that match all of the other resources and attributes, instead of apparently considering those as valid machines. I'm not proposing a new constraint.
I'm suggesting changing the behavior of GROUP_BY
I'm not proposing a new constraint.
If it is not backward compatible change (and it looks like it isn't) we should create new constrain for it.
Can you describe how it's going to work? Please provide examples when it works the same as GROUP_BY and when it takes different actions. We'll use it as a unit test. You can follow the notation introduced in https://github.com/mesosphere/marathon/issues/3220#issuecomment-183067176
could we make it a configuration parameter? ie - graceful degredation of GROUP_BY
--groupby_consideration_timeout
Set to the number of minutes to wait for a host to have resources before removing it from group_by consideration. If the host regains resources during a deployment, it can be put back into rotation in which this timer would restart
If you are considering a patch for this, please consider using a feature flag (e.g. available features/enabled features) instead of adding a new command line parameter.
Note: This issue has been migrated to https://jira.mesosphere.com/browse/MARATHON-2067. For more information see https://groups.google.com/forum/#!topic/marathon-framework/khtvf-ifnp8.
|
GITHUB_ARCHIVE
|
using System.Numerics;
using System.Threading.Tasks;
using Lykke.Service.EthereumCore.Core.Repositories;
namespace Lykke.Service.EthereumCore.Services
{
public interface IGasPriceService
{
Task<(BigInteger Min, BigInteger Max)> GetAsync();
Task SetAsync(BigInteger min, BigInteger max);
}
public class GasPriceService : IGasPriceService
{
private readonly IGasPriceRepository _repository;
public GasPriceService(
IGasPriceRepository repository)
{
_repository = repository;
}
public async Task<(BigInteger Min, BigInteger Max)> GetAsync()
{
var gasPrice = await _repository.GetAsync();
return (gasPrice.Min, gasPrice.Max);
}
public async Task SetAsync(BigInteger min, BigInteger max)
{
await _repository.SetAsync(new GasPrice
{
Max = max,
Min = min
});
}
}
}
|
STACK_EDU
|
Why Orion can't be used to service JWST?
Hubble Space Telescope is a marvel of astronomical tools - particularly judging by how much it moved the science. It took a lot of fixes along the way, which certainly prolonged its useful life.
Its successor, James Webb Space Telescope, is going to be a significantly better tool. However, it's going to work only for a handful of years, and the reason is "we can't service it" (e.g., to replenish supply of liquid helium). It's going to sit in Earth-Sun L2 libration point, just a few times farther away from Earth than the Moon.
Why we can't service JWST with modern generation of spacecrafts able to fly farther than the Moon? Orion in particular could be a good candidate. Equip it with a propulsion unit - which would cost another launch of a heavy rocket and docking on LEO - and a airlock, which is relatively lightweight and cheap, and you can fly a few weeks mission. Similar capabilities are reachable by other spacecrafts (even with Soyuz, though I doubt it will logistically work to modify it for such an unusual mission). In exchange we'd get longer mission for such a unique tool as JWST promises to be.
I think there are two parts to this; the servicability of the JWST (I don't think it was built to have bits replaced) and getting there in an Orion spacecraft
I doubt you could do this mission for less money than building and launching a JWST replacement.
Because "Orion" is just an imaginary headline for covering up a multi-billion fraud in the accounting of NASA. Orion was never intended to, and will never, fly. (This might not be apparent to everyone right now, but do check back ten years from now to see that this is true!)
I don't think the logistics of servicing something at L2 has even ever been seriously considered. You need a ton of fuel leftover to maneuver there, and get back, not to talk about that you have to take all your life support systems with you (which should be more than an Orion capsule).
Have a look at all the answers to The JWST - What happens if/when it breaks? as well as Besides HST, JWST and stations, are there any examples of satellites designed for service in space?
@LocalFluff "never suspect malice when incompetence will answer"
I'm not convinced that the Hubble service missions made any sense, economically speaking. The telescope initially cost 1.5 billion, and each of the 5 service missions cost another 0.5 billion. So for the same price we could have sent 2, maybe 3, full non-serviceable telescopes...
@LocalFluff - The blame for this fraud lies more with Congress than with NASA. A better way to look at it: Orion (along with the SLS) is a multi-billion fraud in the accounting that Congress has foisted upon NASA.
@asdfex agreed. When we have even basic autonomous manufacturing and assembly in orbit, then I’d like to see more efforts towards making telescopes serviceable and upgradeable - until then, our capabilities just aren’t robust enough.
@asdfex There is much, much more at play wrt the human conquest of the Solar System than mere economics. The Hubble program has demonstrated this in spades by reminding NASA that the element of human drama (evident during the various servicing missions) has the potential to greatly increase public taxpayer support of space exploration!
I believe JWST's service limit is determined by propulsion fuel, not refrigeration (helium). JWST's halo orbit is unstable and requires occasional nudges. Once out of fuel, it would no longer be able to maintain its position at L2. Once fuel is depleted to a pre-determined level. JWST will maneuver into a stable heliocentric graveyard orbit. JWST does not expend helium for refrigeration.
James Webb Space Telescope Program Scientist Dr. Eric Smith spoke about this on TMRO recently. The main reason is that the telescope wasn't designed to be serviced so it is not as modular as Hubble and systems are integrated throughout the telescope rather than being discrete units that can be removed and replaced like on Hubble. It was designed like this from the beginning because they knew it would be at L2 rather the LEO where it would be much more difficult to service.
That being said they did include optical targets on the bottom of of the telescope where the fuel ports are so a mission in the future could potentially come and refuel it. I don't know if that is something that Orion could potentially do but I wouldn't think a manned mission would be required for a refuel and if it's not manned why use Orion.
Accepted. At the same time the main limitation on the service life is said the helium limit, which should be relatively easy to transfer. And not being modular could be actually harder to achieve (modularity allows, for example, parallel development and less dependencies of systems on each other), even though it's more optimal for size and weight. Would be interesting to learn about development process of JWST in another thread.
I believe JWST's service limit is determined by propulsion fuel, not refrigeration (helium). JWST's halo orbit is unstable and requires occasional nudges. Once out of fuel, it would no longer be able to maintain its position at L2. Once fuel is depleted to a pre-determined level. JWST will maneuver into a stable heliocentric graveyard orbit. JWST does not expend helium for refrigeration.
|
STACK_EXCHANGE
|
Publish the the list of Traffic Manager Probe IPs
We have several VMs which provide a service to our web roles. We use traffic manager to loadbalance between these VMs.
As the the only valid traffic to these VMs is from our webroles, our office or the TM probes, we use windows firewall on the VMs to restrict all other traffic.
The issue we have is that the traffic manager Probe IPs change on occasion.
If the list of Probe IPs was published, we could ensure that our FW rules are kept upto date ensuring that TM is doing it's supposed to be doing!
This feature has been completed. The IP addresses used by the Traffic Manager health checks are now fixed, and can be included in ACLs/firewall whitelists.
The list of health check IP addresses is published here: https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring#faq
For services in Azure, we are planning in future to make it easier to whitelist these IP addresses via a pre-defined NSG rule.
This feature is available in the Azure Public Cloud. It is not yet deployed to the Azure China Cloud, German Cloud, or FedGov Cloud.
I believe the above link is not exact, here is the correct link: https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-faqs#traffic-manager-endpoint-monitoring
I use this list in a UDR because my system is configured by cross advertised of expressroute and force tunnnling. We can't use NSG for routing. So if you change this list, I strong request you to notice by email in advance.
Dave H [MSFT] commented
Looks like the published list has moved to https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-faqs#what-are-the-ip-addresses-from-which-the-health-checks-originate
Any Update on this?
Dilip L [MSFT] commented
We have the list of IP addresses from where Traffic Manager probes will originate published at https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring#faq
Neil Moran commented
Been wondering this myself - but found a list of IPs published on https://azure.microsoft.com/en-gb/documentation/articles/traffic-manager-monitoring/
Oliver Simmons commented
Hi is there any update on when this list will be published? It has been 4 months since "we plan to do in the near future" do you know when this information will be published. This is preventing us from migrating to Azure. In the current climate it is not acceptable to have a production website fully open to the internet just to allow a sla monitor to function.
Is there any update, when the IP addresses will be listed? this is preventing a large project.
Is there any more information on when the probe IP's will be published? this is preventing us from using this service until we are able to lock down our firewall.
Admin, do you have a eta of when the IP addresses for the probes will be published?
We use web roles with acl's in the service coniguration *.cscfg file to disable access from the internet. We have identical deployments in two different azure data centers east us and west us. When we enable the acl rules, Azure Traffic Manager gets blocked out, and shows both endpoints as degraded.
We followed directions per this msdn blog article:
You can only have 50 ACL rules per endpoint and the list is to large for the entire Azure IP range. It seems traffic manager doesn't work with ACL's so in order to use the Traffic Manager product offering you have to enable global access to your sites which is disappointing.
|
OPCFW_CODE
|
You do not have permission to extract to this folder
I am running Ubuntu 16.04 LTS.
I am trying to extract a tar.gz file using Archive Manager.
When I try to extract to /opt it says "You do not have the permissions to extract to this folder".
How can I overcome this problem?
@edwinksl Bad idea. He uses a graphical program to extract the files.
@UTF-8 Ah, right, I missed that. Let me delete it before bad things happen.
Most software shouldn't be installed this way.
First you should make sure /opt is really where you want to put it, and that you really want to install the file from that archive at all, rather than another source.
/opt is usually used for software that is not part of Ubuntu and not installed using Ubuntu's package manager (since when it is, it usually goes in multiple subdirectories of /usr.). In addition, most often it's used for software that is provided as pre-built binaries, and not software that you're building from source code (since that usually goes in multiple subdirectories of /usr/local).
A lot of software that is available as pre-built binaries is also packaged officially for Ubuntu (see also this question), is packaged in an unofficial PPA, or is provided in a downloadable .deb file. Those methods are often preferable (usually in that order) to installing unpackaged pre-built binaries.
If you're sure you want to unpack the archive to /opt...
If you know you want to install software in /opt, you should check if there are official guidelines, specific to that application, for how best to do so. The developer or vendor of the software may have specific recommendations that cover installation issues beyond how and where to unpack the files.
With all that said, if you know you want to extract it in /opt, you can do this in the Terminal (Ctrl+Alt+T) by running:
cd /opt
sudo cp /path/to/filename.tar.gz .
sudo tar xf filename.tar.gz
You would replace the italicized /path/to/filename.tar.gz with the actual location and filename of the file you want to put in /opt and extract. Dragging a file from Nautilus (the file browser) into the Terminal application will automatically paste its full path on the current line, which can make things easier.
If you feel you must use the Archive Manager to extract the file...
If you want to extract the software with the Archive Manager, you'll have to run the Archive Manager as root, just as the cp and tar commands in the above example were run as root. It's widely recommended to avoid running graphical programs as root unless you really know you want to--often they are not designed or tested with such use in mind.
If you really do want to run the Archive Manager as root, you can do so by pressing Alt+F2, typing the command gksudo file-roller and pressing Enter. You should be very careful after doing this -- for example, you can access and overwrite important system files. You should make sure to close the program when you're done so you don't later forget that it's running as root (rather than as your user) and use it for something else where it's not needed.
If you're using Ubuntu MATE, run engrampa instead of file-roller.
If you find you don't have the gksudo command, you can get it by installing the gksu package in the Software Center. (Or you can use one of the other ways to run graphical applications as root.)
Of course, you can avoid all this complexity by unpacking archives with sudo tar... in the Terminal (see above) when you need to do as as root, and reserving the Archive Manager for the more typical cases when you want to extract an archive without elevated privileges.
I don't recommend doing it this way, and this section is provided mainly for completeness or in case you're sure you want to run the Archive Manager as root. You do not need to run the Archive Manager as root to install software, even if you choose to install it from a .tar.gz archive file and choose to put it in /opt.
General information about manual installation from downloaded archives:
Often it's better to install software through the other ways listed, with links to guides, in the first section of this post. However, if you are going to install software by unpacking a .tar.gz (or similar) archive, I recommend you read How do I install a .tar.gz (or .tar.bz2) file? before proceeding.
As a side-note, in the old days, software to be admitted to software center was required to be installed into a single, self contained directory in opt, apart from the man pages and launcher. Seems nothing wrong with that.
@JacobVlijm Are you specifically talking about software, such as proprietary payware applications sold through the Software Center, that didn't make it into Ubuntu's repositories in the usual way? This reminds me of the rules for packaging Quickly apps, but the Software Center had existed for a while when Quickly was introduced. Most packages available through the Software Center are merged from Debian into Ubuntu. My understanding is that no more than a tiny fraction have ever used /opt for anything.
it was in the days I produced this one (2014) https://launchpad.net/~vlijm/+archive/ubuntu/qle, I tried to get it through the never ending review procedure. I stopped trying, but kept the required format for the greater part: a self- containg dir in /opt, launcher in /usr/share/applications, man pages in the usual place. It was even told to ignore lintian warnings on the directory. I posted a question here on it, which I cannot find anymore (removed) :/. The requirements I remember clearly though.
@EliahKagan replying to your comment using sudo -H seems the most efficient way really, if messing up local config is the only possible damage that can be done by sudo <xapp>. I read in one answer/comment somewhere that sudo -i was what the devs want us to use for some reason, but I have searched in vain for that post more than once! >_< I have undeleted my answer since, I suppose, it has some different information to yours that might help someone - thank you :)
what do I change for the last command to extract a zip instead of a tar?
@ScottF To unzip filename.zip in the current directory, run unzip filename.zip. In situations where you must do this as root, just add sudo (as with the tar xf command shown in this answer): sudo unzip filename.zip. The unzip command is provided by the unzip package, which you could install in a terminal using sudo apt-get install unzip (you should run sudo apt-get update first if you haven't recently). See How to unzip a zip file from the Terminal? for details and other ways to extract zip files.
It is not true at all, installing Android Studio in Ubuntu does require the user to extract to which on newbie we do not know where.
I highly recommend Eliah Kagan's answer
UTF-8's answer also suggests a good approach
You can overcome this problem by launching the archive manager program as root
sudo -i
file-roller 2>/dev/null &
(if using Ubuntu MATE, the program is engrampa instead of file-roller) Navigate to the file, extract to your desired location, and back in the terminal, don't forget to
exit
when done, to drop privileges.
Also worth knowing, you can extract to a target directory with the -C option to the tar command, for example, if the archive is in your Downloads directory:
cd Downloads
sudo tar xvfz name-of-your.tar.gz -C /opt
Isn't there gksudo for that because bad things can happen when you just launch graphical applications as root? Configuration files can be created which only root can access or only root can modify.
@UTF-8 I believe using sudo -i is the preferred way of avoiding that problem :) gksudo is actually deprecated. See my question which has some links to other posts (still waiting for good answer)
@Zanna: Use of the -H flag is mandatory. Failing to use this flag may corrupt critical system files and prevent you from logging in. - Quoting from Ubuntu community wiki page.
@AneesAhmed777 it is not necessary when using sudo -i because that starts a root shell with a clean env and uses root's home. The -H flag simply makes sudo use root's home, which avoids writing to user's own configuration files.
Extract the files to some folder in your home folder or to a folder in /tmp. Then either of these commands:
sudo mv ~/yourfolder /opt
sudo mv /tmp/yourfolder /opt
You don't have permission to write to /opt as a normal user. Only root can do that. mv moves the files and sudo tells your computer to do it as root. You'll have to enter your password. Note that you won't see your password nor will you see dots, stars, whatever. Just type your password and hit Enter.
commenting here because I'm going to delete my answer because Eliah Kagan's is better. I believe using sudo -i is the preferred way of avoiding the problem you mentioned :) gksudo is actually deprecated. See my question which has some links to other posts (still waiting for good answer)
@Zanna, I like to believe you, but never actually found anything sbout gksu or gksudu being deprecated. Do you have any source on that?
@JacobVlijm maybe that's the wrong word, but in the answer to the second (I think) linked post in my answer, they explain that gksudo is not installed in Ubuntu by default any more because the devs want us to use sudo -i instead. But gksu(do) still works fine for now.
@Zanna I think people mix up the fact that it is not installed by default anymore with being deprecated....
@Zanna In a post linked in my answer here about running GUI programs as root I gave sudo -H for systems w/o gksu[do]. Do you know if sudo -i is now preferred to sudo -H, though? I can edit my answer here, or add a note to that one, or post another answer there contrasting bare sudo to sudo -i, etc. Btw, I think your answer (and this one) are helpful--mine's longish and it's nice to have short and focused posts too (plus you show sudo -i). You might consider undeleting it, though of course that's up to you.
@Zanna if you have a good answer I don't believe you should delete it because another is better. It's only if your answer causes harm or is blatantly off-topic you should delete it. IMO diversity is the spice of life :)
@WinEunuuchs2Unix I have undeleted my answer since, I guess, it has some different information from Eliah's
Ahhh I'm proud of you :)... Happy New Year!
|
STACK_EXCHANGE
|
CW complex of iterated loop spaces
In Milnor's book Morse Theory, it is proved that the loop space $\Omega S^n$ of the n sphere has the homotopy type of a CW complex with one cell each in the dimensions 0, n-1, 2n-2, 3n-3, ... Or more generally, given non conjugate points p, q on a complete Riemannian Manifold M, the path space $\Omega(M,p,q)$ (of all continuous path joining p to q) has the homotopy type of a countable CW complex which contains one cell of dimension $d$ for each geodesic from p to q of Morse index $d$.
For iterated loop spaces $\Omega^k M$, do we have a similar theorem? Say is there any known result concerning the CW structure of $\Omega^k S^n$?
By a result of Milnor, the space of maps from a finite CW complex to any CW complex is homotopy equivalent to a CW complex. This gives a general reason why spaces like $\Omega^k M$ have a CW structure.
There is a well-known construction from which you can, at least in principle, get an explicit cell structure for spaces of the form $\Omega^k \Sigma^k X$, where $X$ is a based path-connected CW-complex. This includes $\Omega^k S^n$ as a special case (for $n>k$). The general construction goes as follows. For a finite set $i$, let $F(\mathbb R^k; i)$ be the space of injective maps from $i$ to $\mathbb R^k$. The assignment $i\mapsto F(\mathbb R^k; i)$ gives a contravariant functor from the category of finite sets and injections to the category of topological spaces. Similarly we have a covariant functor between same categories, $i\mapsto X^i$, where the functor structure is given by basepoint-inclusions. Given a pair of functors like this, one may form the coend $$F(\mathbb R^k; i)\otimes_i X^i.$$ A classic theorem of (I think) Milgram, May and Segal asserts that this coend is homotopy-equivalent to $\Omega^k\Sigma^k X$. This endows the homotopy type of $\Omega^k\Sigma^k X$ with a natural filtration, and it also can be used to equip it with a CW structure, since the spaces $F(\mathbb R^k; i)$ can be endowed with CW structures compatible with maps between them. One can get different CW models for the homotopy type of $\Omega^k\Sigma^k X$ by using cellular approximations of the functor $i\mapsto F(\mathbb R^k; i)$. Specific cellular approximations were constructed by Milgram, Barratt-Eccles, Jeff Smith, and possibly some others. This paper of Clement Berger gives a nice historical survey of constructions of this type.
When $k=1$ there is a particularly small CW model, as you indicated. In other cases it is not going to be so simple to enumerate the cells. Nevertheless, this construction is useful for many purposes. For example, it was used to describe the homology of $\Omega^k\Sigma^k X$ (with field coefficients) as a functor of the homology of $X$.
|
STACK_EXCHANGE
|
Are Multiple API Users Required to Restrict Access to Specific REST Endpoints? Can we limit based on Connected App info?
Based on this question: Securing a REST API and this one How to limit an access to specific REST endpoint? it seems that the only way I can limit access to specific custom endpoints is to create multiple API users. Currently we only have one API user for our connected application, but we were hoping to make the process more secure by restricting access to certain APIs depending on which connected 'app' was connecting. We created multiple Connected Apps in Salesforce to be dedicated to specific functions, but all requests actually originate from the same app (just different areas within the app).
The above linked questions are 2 years old, so I am curious if this answer still holds true. Do I really need to create multiple API users if I want to restrict access to specific custom REST endpoints? Or can I somehow limit a specific connected app to only have access to a specific subset of endpoints? Seems kind of unfair to make us waste valuable user licenses just to make our APIs more segmented, no?
If the answer still holds true, is it possible to add additional security to my endpoints by somehow checking programmatically to see what connected app initiated the request, and kick back an error message if it didn't originate from the correct connected app (client key/secret combo)
Due to security reasons salesforce has not provides us access to the table showing which User got which session using which connected app. Which I belive is failry straight forward from salesforce to stop people misusing other people's session ID.
Can you do a workaround for the same? Well Yes you can.
Create an object/custom setting Having two following fields.
APIKey|AppName.
Now modify your rest request to in such a way that you send the API key as a parameter in your request body.
Now in your apex class Query the table and check the APIkey is same in app and in your request.
{
"body": {},//Your rest Body
"api-key": "YourAPIKey"
}
Your Apex class will look something like:
@RestResource(urlMapping='/CUSTOM_DATA/*')
global with sharing class MY_CUSTOM_DATA{
global static void myMethod(){
//Query the table
//Parse the body to see if the API key matches the request
//Do your JOB
}
}
Make sure you only share specific API key with the integrating partners they want to connect with.
Its a bad architecture to share same user password with multiple integrating partners.
Thanks for the suggestion! I agree it is bad security to share an API user/pass with multiple partners and, preferably, I wouldn't. However, at the same time, user licenses are expensive and limited in my org so I can't afford to give separate ones to everyone. I wish Salesforce would not count API users as Salesforce license users
|
STACK_EXCHANGE
|
This content has been marked as final. Show 8 replies
The datafiles of these 2 tablespaces are in AUTIEXTEND ON mode by default.You can check this by querying
select file_id,file_name,(bytes/1024/1024)MB,AUTOEXTENSIBLE from dba_data_files where tablespace_name=UPPER('&tbs_name')
To know who and what object is occupying the space:-
select owner,segment_name,segment_owner,(bytes/1024/1024)MB,tablespace_name from dba_segments where tablespace_name='SYSTEM';
and then run the same for SYSAUX tablespace changing the tablespace_name
For SYSAUX tablespace:--
select OCCUPANT_NAME,OCCUPANT_DESC,SCHEMA_NAME,MOVE_PROCEDURE,SPACE_USAGE_KBYTES from V$SYSAUX_OCCUPANTS;
Edited by: Anand... on Mar 4, 2009 7:54 PM
For what purposes are u using these tablespaces...for ex: AWR statistics reports reside on SYSAUX tablespace which might eat up some space...
I have never used those 2 TS. for anything. I ran the scripts suggested by I don't see any weird segment so now am a little bit confused because I don't know which one to drop
Can you post the output of the following queries.
1. select owner,segment_name,segment_type,(bytes/1024/1024)MB,tablespace_name from dba_segments where tablespace_name='SYSTEM' and owner not in ('SYS','SYSTEM','OUTLN','WMSYS','ORDSYS')
2. select OCCUPANT_NAME,OCCUPANT_DESC,SCHEMA_NAME,MOVE_PROCEDURE,SPACE_USAGE_KBYTES from V$SYSAUX_OCCUPANTS;
I am having the same problem with system and sysaux getting full.
the owner that is using system segment are SYS, SYSTEM AND OUTLN
SQL> SELECT OWNER, SUM(BYTES)/1024/1024 FROM DBA_SEGMENTS WHERE TABLESPACE_NAME= 'SYSTEM' GROUP BY OWNER;
I also have snaptshot from oct. 2008 to now. i am thinking to delete some. if i delete the old snapshot, will i free some space from sysaux?
the message exceeds the maximum length of 30000 characters- Actually I can't format the ouput so that it fits on one page, and I can't see the attachment option here
With * owner not in ('SYS','SYSTEM','OUTLN','WMSYS','ORDSYS')* option you have so many tables in the SYSTEM tablespace.Who is the owner of the tables and indexs.
The owner is MDSYS.
|
OPCFW_CODE
|
Where is your original video footage?
Your video needs to be on your computer's hard drive in order for you to edit it in Premiere Elements. If it is still on your camcorder or on an external drive and you disconnect the camcorder or drive after importing the media, the program will lose its connection to your files.
To get the video from your camcorder to your computer, use Add Media/From Flip or Camera to copy the files over a USB connection.
All my original footage is in a folder on my computer, i checked. I imported it several weeks ago and it is still there.
Is the footage all in the same folder? If so, just relink that folder to the media in your project.
Or, when you first open the project and you get the Media Missing message, browse to the missing file listed.
If all of your media files are in the same folder, the program will see the rest of the files and automatically reconnect them.
Yes it is. I can browse for all the missing files and they are all there but i would have to import them one by one and even and when i tried it said "error importing".
I then tried to import the whole folder back in which worked but did nothing. All the footage is still red and says "offline all".
Something is not making sense, Aydan.
When you relink to one file, the program will automatically relink to all of the other files it finds in that directory folder. You certainly don't have to do it one file at a time. Also, if the files are already in your project then reconnecting them should by no means give you importing error.
You're doing something that's not working, I'm just not sure what it is and you're not telling me enough to know.
Do you have any idea what you did that broke the connection with the media files in your project? Did you move the files or rename them?
Are they are your C drive or an another drive? What operating system are you using?
If none of this things I'm suggesting are working, there is a very real possibility that you've got a malware issue or your drive is failing. Files don't just disconnect from a project on their own.
Thank you so much Steve! I right clicked on each file and replaced them one by one! Thank you for saving my video! Now i will know what to do next time this happens!
I'm glad you were able to solve your problem, Aydan, but I'm sorry it took so much work! It usually does happen automatically, for the most part, once you show the program where the files are.
Meantime, I hope you've also figured out why it happened so it won't happen again.
But in any event, glad things are up and running for you.
|
OPCFW_CODE
|
*Note: This review and score is purely based on the information disclosed by the validator service and the scoring rubric.
Last Updated: Oct 10, 2019
Cosmostation’s validator service operates on the Cosmos and IRISnet blockchains. Built by the creators of mintscan.io and CosmosJS, Cosmostation differentiates itself with a strong focus on open source contribution of user and developer-friendly tools. Cosmostation presently offers 12% commission validation services, and is based in Seoul, Korea.
Team Background (62.5/100)
- Full-Time/Part-Time (10/10)
- Prior Blockchain Dev/Impact (10/10)
- Systems Experience (0/10)
- Recognizability (5/10)
Current Voting Power (63/100)
- Total Staked: (9/10)
- Unique Self-Bonders: (10/10)
- Commissions: (0/10)
Historical Metrics (100/100)
- Uptime (10/10)
- Proposals (10/10)
- Legal Compliance/Insurance (0)
- Innovations (0)
Cosmostation is currently led by “Brian”, an anonymous individual with no publicly available background information. The company possesses 11+ full-time employees, including David Park (CMO), Jay Park (COO), and JayB Kim (CTO). The team broadly possesses a breadth of experience in mobile app development, wallet infrastructure, video, and front end web development – having previously built crypto wallets, audio-rendering tools, video content, and web applications.
Compared to other validators, Cosmostation has had less exposure to building highly available systems. Instead, the team’s background (and focus) has been on product development.
Cosmostation has been live on mainnet since March 2019, and is based in Seoul, Korea.
Cosmostation is presently the #7 validator on the Cosmos hub by delegation with ~7.993M atoms delegated. At time of writing, this translates to approximately $31M USD. Much of these funds are delegated, and come from Cosmostation’s reputation in the Korean and broader Asian crypto communities.
It is additionally hypothesized that much of Cosmostation’s delegation originates from self-branded funnels, including their wallet application, developer tools, and block explorer (mintscan).
Cosmostation currently offers a 12% commission rate on Cosmos Hub, placing itself slightly more expensive than its competition (average: 10%). Despite this high rate, Cosmostation appears to have attracted one of the largest delegator bases – with 53 accounts delegating north of 19,000 ATOMS ($60K+ USD at time of writing).
Outside of Cosmos, Cosmostation is the #30 validator on IRISnet, with 6.523M IRIS delegated (~$275K USD at time of writing). The company offers 10% commission on this service, and controls 1.14% of the network’s voting power. What is interesting to note is that Cosmostation recently lost 5M IRIS in delegation. Motive for this unbonding event is unclear (transaction hash here).
Cosmostation has maintained 100% uptime since its entrance into the active validator set in the March 2019. The company has additionally yet to be penalized or warned for consecutive missed blocks. Cosmostation presently has 4.67% of the network’s voting power, and has been trending upward since their conception. Of the five major proposals on the Cosmos Hub thus far, Cosmostation has participated in four (80%). All of Cosmostation’s votes have been in favour of popular opinion on the network.
Cosmostation describes itself as a “product-focused” validator. The company has built a number of tools for both users and developers, which are described below:
Mintscan.io: “Block explorer for exchanges and everyday users.” Displays recent Cosmos and Kava transaction activity, and summarizes the delegations of top validators on the Cosmos Hub.
Cosmostation Mobile Wallet: “A decentralized mobile wallet for Tendermint-based chains.” Provides an intuitive interface for users to create and import wallets, delegate, redelegate, and undelegate ATOMS, send tokens, claim rewards, learn about governance proposals, and re-invest earnings.
What is additionally interesting to note is that the company is openly praiseful of other validators, and has stated that it has leaned heavily on the work of others in its setup. The company has specifically pointed to articles from Certus One as particularly helpful to their team.
Cosmostation is also part of the “Korean Validator Alliance”, comprised of ATEAM, B-Harvest, and more. The company has stated that it hasn’t been in touch with other validators as much as they should be, however they actively participate in governance, have invested heavily in communicating with delegators, and have only missed/abstained from one vote to date.
In the face of a slashing event (via double-sign, missed blocks, or more), Cosmostation expects delegators to understand the risks of delegating, and does not provide any insurance policy accordingly.
- Failover (8/30)
- Private Peering (10/10)
- Agreements with other Validators (10/10)
- Sentry Scaling (5/10)
- Backup Strategy
Cosmostation presently operates one validator node, located in either Google Cloud or AWS. This validator can only be accessed inside a VPN. The company made the decision to “go cloud” after purchasing server hardware and an HSM (YubiHSM2) and deploying a validator in a Korean data center – coming to the opinion that cloud services were more secure and production-ready.
No failover is in place at the time of writing. A backup node is currently running, however no files are in place to back up the validator quickly. All key management occurs in the cloud. In the future, Cosmostation intends on moving its validator service to an internal server – eventually returning to the YubiHSM2 or Ledger Nano S.
Cosmostation presently deploys a three-tiered sentry architecture with private peering. The company presently has approximately four sentries across three continents exposed on a public P2P network for RPC’s – which communicate to a number of “private sentries”, which exist on a private network. These private sentries communicate with Cosmostation’s cloud validator, which is solely responsible for signing blocks.
Cosmostation additionally maintains an active snapshotting schedule, which allows the team to spin up new sentries rapidly if needed. The company recently tried to implement autoscaling and load balancing – however they have recently ran into some issues and have since scrapped the effort.
At time of writing, Cosmostation has not developed any custom code on their validator infrastructure. This is as most of the team’s time is spent developing other software products for users.
The company has additionally mentioned an internal effort on transitioning to a Kubernetes-centric setup, however the setup is not yet at production-level. Cosmostation is waiting for changes from Cosmos Hub 2 to Cosmos Hub 3 before making changes or releasing additional information.
Monitoring Tools (100 /100)
- Network Level (10/10)
- Hardware Level (10/10)
- Paging (10/10)
Single Point of Failure (100/100)
- Multi-Cloud (10/10)
- Multi-Region (10/10)
Key Management (25/100)
- HSM Selections (0/10)
- Smart Key Management (5/10)
Validator Access (0/100)
- Physical/Remote (0/10)
Cosmostation utilizes a number of logging, monitoring, and alerting services – including Amazon Web Cloud, Prometheus, Grafana, and the Mintscan explorer.
The team additionally gets alert notifications using the Slack API – ensuring that the whole team is notificatied in the case of downtime. If a critical issue arises (i.e a relay node or validator node is not syncing for 15 seconds) core engineers receive slack, email, and text messages. This is coupled with an on-call rotation.
Lastly, Cosmostation leverages its mintscan explorer, to store all transaction and block data, ensuring that, whenever something happens that is unexpected, they get an alarm for it.
Single Points of Failure
Due to the relatively small nature of Cosmostation’s technical team, only a couple engineers are responsible for taking care of Key management. In the event these engineers are unavailable, Cosmostation’s CEO is also informed about best courses of action.
Cosmostation also employs a “least privilege” strategy, only allowing technical people to manage keys on mainnet. The company has mimicked much of the infrastructure of Certus One and Chorus One in this regard.
Comostation lastly deploys sentry nodes across two cloud services (Google Cloud and AWS), and four regions (in three countries).
Cosmostation currently has all key management infrastructure in the cloud. Double sign prevention is software-based.
Access to physical hardware is non-existent in Cosmostation’s setup. This is typically considered to be a negative, as troubleshooting cloud services is more difficult and complicated than unplugging and fixing malfunctions in a physical machine.
When prompted, the Cosmostation team provided two focal points for future efforts.
1) User-friendly tooling
– Building more infrastructure like the Cosmos Wallet and Mintscan
2) Developer tooling
– Finding unique ways to expand the Cosmos SDK developer pool. This includes providing materials and workshops for developers in Korea, as well as providing libraries and modules that other devs can plug into.
|
OPCFW_CODE
|
PERFORMANCE INFORMATION FOR THIS PROBLEM BOOK PROJECT
Alex Collins: Industrial Consultant and Visiting Lecturer and the University of Roehampton
Dr Charles Clarke: University of Roehampton
Dr Natalie Coull: Abertay University
VALIDATED COLLABORATORS ON THIS PROBLEM BOOK PROJECT
PROBLEM BOOK OVERVIEW
A common approach when using virtual machines for assessments is to utilise a "one size fits all " template. Particularly when working with large student cohorts. However, this approach can often introduce limits to the scope, scale and creativity of authentic assessments, due to a lack of differentiation in assessment scope.
Authentic Learning (as described in ) provides opportunities for students to engage in "learning activities that are either carried out in real-world contexts, or have high transfer to a real-world setting". According to , there is "...a call for more emphasis to be placed on authentic learning as it provides an environment that cultivates students who would be prepared for the complex working world". This aligns with guidance from the Office for Students (OfS) who have defined expectations for student continuation levels, employment and further study after graduation.
Who are the principal stakeholders of this project?
What are principal aims of this problem book?
In this problem book forum, we will explore opportunities for creating authentic learning activities and assessments, that utilise virtual machines. The aims are to provide resources that:
are free to use for learners.
are free to use for academics.
are authentic learning focused.
are authentic assessment focused.
are scalable and adaptable to multiple use case scenarios.
A key area of interest is the use of scripting and automation to create customised virtual machines, on a per student basis, at scale! Through automation, variability and differentiated levels of complexity can be improved.
What are the indicative objectives of this project?
To collaboratively design, create, share, optimise and validate authentic cyber security learning activities. Objectives include:
Share experiences and examples of current virtual machine-centric learning labs.
Design, create, implement and test scripts to auto-generate uniquely customised virtual machines for use in digital forensics, security testing, networking, asset monitoring, log file analysis, etc.
Create an authentic learning framework that centres on scalable, plug and play virtual machine networks, that aligns with CyBOK Knowledge Areas.
Create and validate a series of themed lab activities (e.g. digital forensics, networking, firewalls, subnets, security testing, etc.).
Share outputs as open source resources for use in the cyber security education community.
What are the anticipated outputs and outcomes of this project?
Shared scripts and techniques for auto-generating customised virtual machines
Shareable virtual machines
Shared authentic learning activities
Shared authentic assessment activities
J. H. Galindo, “Authentic Learning (Simulations, Lab, Field),” The Derek Bok Center for Teaching and Learning. [Online]. Available: https://ablconnect.harvard.edu/authentic-learning. [Accessed: 08-Apr-2023].
C.Lee, “Authentic learning: What is it, and why is it important?,” Turnitin, 07-Apr-2023. [Online]. Available: https://www.turnitin.com/blog/authentic-learning-what-is-it-and-why-is-it-important-subtitle-essentials-series. [Accessed: 09-Apr-2023].
SHARE YOUR COMMENTS
COLLABORATE TO INNOVATE
Problem Book ID:
Dr Charles Clarke
University of Roehampton
21 June 2023 at 14:47:46
Problem Book Domain:
|
OPCFW_CODE
|
First off, Nintendo's gonna be so pissed at that. You will face many challenges along the way, as you search for the Pokemon that rules time in Pokemon Diamond Version. Dunno if they could be implemented. So, for every 999 battles you do, you'll get 1 Legendary attacking you next time you reach a random encounter. Look at all the offensive Super Mario Bros. Credits Hack's name: giradialkia Hack's Logo: Mucrush Sprites: Coronis, minime010, Red6095, n0rul3s, spaceemotion Overworlds: Kyledove, Raven-Lux Music: xXchainchomp01, lala19357, Truearagon, tobinus, themutestranger Special thanks to my friend ArcNecrotech.Next
Though, the game did freeze on me quite a bit. No matter which you pick, it sounds like Pikachu. Yes, N's battle themes are better that the Galactic ones! You start out in the usual Pokémon Diamond way, going to the lake to get your first Pokémon, but instead of a Starly attacking you, it's a Darkrai clone, which absorbs your starter. Battle and Trade with your friends around the world using Nintendo Wi-Fi Connection! I reckon that Nintendo are mature enough by now not to get too worried about hacks. You can also share pokemon dark diamond gba rom pokemon diamond gba rom or any other file with the community. I like how you changed the news story at the start to say black Rayquaza instead of red Gyarados.Next
Kindof feels good to see that someone else wants to de-Legendarise the Legendaries. . Also, I think you should give some reasoning as to why exactly the Rival thinks a legendary dragon like Rayquaza would just show up at a lake. His clothes are weird and have a xG logo on them. If you still have trouble downloading pokemon dark diamond gba rom pokemon diamond gba rom or any other file, post it in comments below and our support team or a community member will help you! How to download pokemon dark diamond gba rom pokemon diamond gba rom files to my device? After re-reading the story, yeah, I realised there'd probably be changes : Names.Next
Filename: 1016 - Pokemon Pearl v05 U Legacy. Download pokemon dark diamond gba rom pokemon diamond gba rom files found Uploaded on TraDownload and all major free file sharing websites like 4shared. Watch as day turns to night with the return of the real-time clock feature! You're going to need to alter the beginning sequence where you pick a pokemon too. Thanks for the help, but some of the things, like the pokeballs at the start, can't be changed. I mean, it was kind of cool, but really just unexpected. It will be normal, I'm not sure if you can make it so they move around the map.Next
I'm not sure why it freezes up though. I really like Dark Pikachu starting out with Shadow Ball. The first time was after I selected my name, and the second was after the Dusknoir battle. With this story, we can make Primal Dialga or Shadow Lugia a part of it. If it's possible, just put one pokeball in the case, it contains the Dark Pikachu, then have the rival find a second one hidden in a secret compartment. Discover more than 100 new Pokemon in the Sinnoh Region! Will you be changing the battle backgrounds too? And what is up with the wild Dusknoir fight right at the start? As a rookie Pokemon Trainer, you'll need to catch, train, and battle Pokemon on your journey to become the Pokemon League Champion. Hah, I re read the story and it says Pikachu hasn't fully gone dark yet.
Click on the button below to nominate Pokemon Diamond Version v1. Name: Pokemon Dark Diamond Remake From: Pokemon Diamond Remake by: Markitus95 - Spiky Source: Description: Living in Twinleaf Town has always been boring. Diamond and Pearl are set in the fictional region of Sinnoh, an island based on the 25 Sep 2016 Discover more than 100 new Pokemon in the Sinnoh Region! I know they're appropriate for a Diamond. Also, why does Rowan say I made the Pikachu evolve? They'd be still rather rare, like a 1 in a 1000 chance of one appearing? Maybe one ear is normal colored? Filename: 1015 - Pokemon Diamond U v05 Legacy. If you found that any of above pokemon dark diamond gba rom pokemon diamond gba rom files may have been subject to copyright protection. Professor Rowan is being attacked by a strange person. Ad Content View Poll Results : Should I replace the Galactic Grunt and Admin battle themes with N's battle themes? DarkDiamond sounds cool, but you'd need something to make it Dark in the first place, so.Next
But, because the machine they use doesn't work well, the Pokémon they create are shadow pokémon. . . . . . .Next
. . . . . . .
|
OPCFW_CODE
|
Hello – okay – new teeth, that’s weird. So, where was I? Oh, that’s right… Barcelona!
Well, we did it. After a couple of years of pandemic-induced isolation, we managed to get most of the team together in early April for the first time in forever. We successfully brought people in from as far afield as western Canada and Australia, from Europe and from India, for a few days of brainstorming, hacking, direction-setting, updates and, yes, beer. Without straying into politics, we had notable absences, of course, from our Russian and Ukrainian team members. Stay safe, guys: you were genuinely missed.
As a break from our normal day-by-day account, this year we’re writing this as a retrospective of the whole conference, so you get the flavour of everything all in one go. Think of it as a badly-consumed tapas, then… so, off we go.
As usual, topics were many and varied over the days, including:
- SAKE: Simple ASCII Kodi emulator – an emulator for Kodi add-ons that can run and debug all the basic Python stuff
- PVR multi-instancing – so you can have multiple copies of the same PVR addon (e.g. for different backends)
- v20 and release management – a general conversation about how we do releases, when, communications to the community, etc.
- Generalised timeshift – extending the existing PVR timeshift mechanism to a broader capability, perhaps to be rolled back into core Kodi code (instead of duplicating in every PVR addon)
- TheDataDB – a special guest slot from a former Team member, talking about more generalised metadata storage – lyrics, logo artwork, etc. – separate from e.g. episode listings
- Smart home – ideas and existing developments to integrate home automation into Kodi in various ways
- LibreELEC update – a broad update on developments around one of the major platforms for Kodi
- V4L2 status update – HDR, DRM, ffmpeg patches, and everything to do with core video display on Linux platforms
- Binary addon sandboxing – ideas around improving security and resilience by improving compartmentalisation of binary addons
- C++ 20 – options and plans to modernise our code to current standards
- Jenkins deep dive – an overview plus challenges and opportunities around the open source automation server we use to test/build
- Board stuff – internal governance and oversight of all things Kodi
- Android maintenance and Play Store – our ongoing issues with a lack of Android devs, and implications (e.g. our current inability to update in the Play Store because of API versions)
- Flatpak – how we maintain an support our Flatpak, general Flathub changes and how to improve the user experience in the future
- GSoC 2021+2022 – Google Summer of Code – updates on students’ work, plans, mentoring, and so on
There were also things we can’t really talk about – top secret plans, and bar bills, for example – but this gives a flavour of what we discussed. There’s enough here to keep us all busy for many years yet, and that’s before new stuff inevitably gets added.
In the meantime, please delight in a picture of your favourite open-source devs, relaxing in the sunshine after a hard day huddled around laptops and projector screens.
|
OPCFW_CODE
|
Investigate Possible Embedded Broker Shutdown Issue
See https://github.com/spring-projects/spring-framework/issues/31453#issuecomment-1787252754
While we don't support the embedded broker in native images; the problem seems to be that the broker is not shut down properly.
The problem is rather that the broker is starting "manually" as part of a callback that should not start any bean, see
https://github.com/spring-projects/spring-kafka/blob/0dfde15edf4b66706fc411137c1ec624e0064d4b/spring-kafka-test/src/main/java/org/springframework/kafka/test/context/EmbeddedKafkaContextCustomizer.java#L127-L128
Customize the supplied ConfigurableApplicationContext after bean definitions have been loaded into the context but before the context has been refreshed.
I think it should rather contribute a RootBeanDefinition. Unfortunately, the contract of this method is a bit strange as it should provide a registry, not a context since it has not been refreshed yet.
Here is an example in Spring Boot of what I mean: https://github.com/spring-projects/spring-boot/blob/8f2ec227389391fdd173db0ab64f26abd2752f20/spring-boot-project/spring-boot-test/src/main/java/org/springframework/boot/test/web/client/TestRestTemplateContextCustomizer.java#L62
Just debugged one of the test using @EmbeddedKafka. The EmbeddedKafkaZKBroker.destroy() is called and all the brokers and Zookeeper are shouted down in normal JVM.
I fail to find that @DisabledInAotMode to mark our @EmbeddedKafka, but I see that mentioned TestRestTemplateContextCustomizer uses if (AotDetector.useGeneratedArtifacts()) {.
Maybe that would be enough for us as well.
At the same time I don't see too much difference with what we have so far with a ((DefaultSingletonBeanRegistry) beanFactory).registerDisposableBean(EmbeddedKafkaBroker.BEAN_NAME, embeddedKafkaBroker); and what Spring Boot does:
if (beanFactory instanceof BeanDefinitionRegistry registry) {
registerTestRestTemplate(registry);
}
Even if we are about migrating to the RootBeanDefinition, we still need a BeanDefinitionRegistry registry.
So, I don't understand what we are pursuing here.
Thanks
Again, this isn't a problem with shutdown. It's not even a problem with AOT actually even if it surfaces only when one tries to process the test context using Spring AOT.
I don't see too much difference with what we have [...] and what Spring Boot does
There's a huge difference. Spring Kafka creates a singleton an force start it at a time where only the context can be customized before the refresh (see javadoc). Spring Boot registers a bean definition that's going to be handled just like any other bean at the appropriate time.
Even if we are about migrating to the RootBeanDefinition, we still need a BeanDefinitionRegistry registry.
Correct and there's nothing in the issue I can see that's asking that. I've tried to explain that with "Unfortunately, the contract of this method is a bit strange as it should provide a registry, not a context since it has not been refreshed yet.".
Still not clear why is that harmful.
The EmbeddedKafkaBroker instance does not have any dependencies, therefore that beanFactory.initializeBean(embeddedKafkaBroker, EmbeddedKafkaBroker.BEAN_NAME); is thing by itself just to call its afterPropertiesSet() to start embedded brokers before the real test suite.
And therefore changing it to the:
((BeanDefinitionRegistry) beanFactory).registerBeanDefinition(EmbeddedKafkaBroker.BEAN_NAME,
new RootBeanDefinition(EmbeddedKafkaBroker.class, () -> embeddedKafkaBroker));
won't have too much difference for the end result.
(Assuming that we agreed we don't need to rework it to plain BeanDefinition configuration since no AOT support for this type of tests.)
The difference is initializeBean() creates the broker immediately, whereas registering a bean definition defers the creation until the context is refreshed.
|
GITHUB_ARCHIVE
|
import pandas as pd
import matplotlib.pylab as plt
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import TimeSeriesSplit
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV, RidgeCV, LinearRegression
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV
# error计算误差
def mean_absolute_error(y_true, y_pred):
return np.mean(np.abs((y_true - y_pred) / y_true)) * 100
# data split分割数据,训练集、测试集、预测集
def timeseries_split(x, y, test_size, pred_size):
index_test = int(len(x) * (1 - test_size))
x_train = x.iloc[:index_test]
y_train = y.iloc[:index_test]
x_test = x.iloc[index_test:len(x) - pred_size]
y_test = y.iloc[index_test:len(x) - pred_size]
x_pred = x.iloc[-pred_size:]
y_pred = y.iloc[-pred_size:]
return x_train, y_train, x_test, y_test, x_pred, y_pred
# calculate mean计算均值特征
def cal_mean(data, x_feature, y_feature):
return dict(data.groupby(x_feature)[y_feature].mean())#利用分组,计算均值特征
# make feature计算平移特征
def build_feature(data, lag_start, lag_end, test_size, target_encoding=False, num_day_pred=1):#target_encoding是否要开启均值特征,num_day_pred预测多少天
# build future data with 0
last_date = data["time"].max()
# 预测点个数,由数据粒度决定
pred_points = int(num_day_pred * 24) # 1h粒度,1day = 24个点
pred_date = pd.date_range(start=last_date, periods=pred_points + 1, freq="1h")
pred_date = pred_date[pred_date > last_date] # 排除掉last_date(非预测), preiods = pred_points +1,也是因为last_date为非预测point,所以后延1个point
future_data = pd.DataFrame({"time": pred_date, "y": np.zeros(len(pred_date))})#先将预测时间段的value设置为0,然后在利用shift和均值等构件特征,做预测
# concat future data and last data
df = pd.concat([data, future_data])
df.set_index("time", drop=True, inplace=True)
#print(df)
# make feature
# shift feature平移特征,lag_start,lag_end分别为shift平移多少,如从80-120,80,81,82,,,119,120.
for i in range(lag_start, lag_end):
df["lag_{}".format(i)] = df.y.shift(i)
#diff feature#差分特征,对平移后的lag做差分操作,此特征作用不大
df["diff_lag_{}".format(lag_start)] = df["lag_{}".format(lag_start)].diff(1)
# time feature时间特征
df["hour"] = df.index.hour
# df["day"] = df.index.day
# df["month"] = df.index.month
df["minute"] = df.index.minute
df["weekday"] = df.index.weekday
df["weekend"] = df.weekday.isin([5, 6]) * 1
df["holiday"] = 0
df.loc["2018-10-01 00:00:00":"2018-10-07 23:00:00","holiday"] = 1
#print(df)
# df["holiday"]
# average feature
if target_encoding: # 用test
# 计算到已有数据截止,然后在映射到预测的数据中,这样就训练、测试、预测都有此特征
df["weekday_avg"] = list(map(cal_mean(df[:last_date], "weekday", "y").get, df.weekday))#时间均值特征
df["hour_avg"] = list(map(cal_mean(df[:last_date], "hour", "y").get, df.hour))
df["weekend_avg"] = list(map(cal_mean(df[:last_date], "weekend", "y").get, df.weekend))
df["minute_avg"] = list(map(cal_mean(df[:last_date], "minute", "y").get, df.minute))
df = df.drop(["hour","minute","weekday", "weekend"], axis = 1)
#one-hot没有作用
#df = pd.get_dummies(df, columns = ["hour", "minute", "weekday", "weekend"])
# data split
y = df.dropna().y
x = df.dropna().drop("y", axis=1)
x_train, y_train, x_test, y_test, x_pred, y_pred = \
timeseries_split(x, y, test_size=test_size, pred_size=pred_points)
return x_train, y_train, x_test, y_test, x_pred, y_pred
# predict
def predict_future(model, scaler, x_pred, y_pred, lag_start, lag_end):#model拟合的模型,scaler归一化,lag平移特征,x_pred/y_pred预测x和y
y_pred[0:lag_start] = model.predict(scaler.transform(x_pred[0:lag_start])) # 预测到lag_start上一行
for i in range(lag_start, len(x_pred)):
last_line = x_pred.iloc[i-1] # 已经预测数据的最后一行,还没预测数据的上一行,即shift,上一行填充到斜角下一行
index = x_pred.index[i]
x_pred.at[index, "lag_{}".format(lag_start)] = y_pred[i-1]
x_pred.at[index, "diff_lag_{}".format(lag_start)] = y_pred[i-1] - x_pred.at[x_pred.index[i-1], "lag_{}".format(lag_start)]
for j in range(lag_start + 1, lag_end): # 根据平移变换shift,前一个lag_{}列的值,shift后为下一个列的值
x_pred.at[index, "lag_{}".format(j)] = last_line["lag_{}".format(j-1)]
# 已经预测的最后一个值,赋值给lag_start对应的"lag_{}.format(lag_start)列
# x_pred.at[index, "lag_{}".format(lag_start)] = y_pred[i - 1]
y_pred[i] = model.predict(scaler.transform([x_pred.iloc[i]]))[0]
return y_pred
# plot显示结果
def plot_result(y, y_fit, y_future):#y真实,y_fit拟合,y_future预测
assert len(y) == len(y_fit)
plt.figure(figsize=(16, 8))
# plt.plot(y.index, y, "k.", label="y_orig")
plt.plot(y.index, y, label="y_orig")
plt.plot(y.index, y_fit, label="y_fit")
plt.plot(y_future.index, y_future, "y", label="y_predict")
error = mean_absolute_error(y, y_fit)
plt.title("mean_absolute_error{0:.2f}%".format(error))
plt.legend(loc="best")
plt.grid(True)
plt.show()
# coefs显示重要性
def plot_importance(model, x_train):
coefs = pd.DataFrame(model.coef_, x_train.columns)
#coefs = pd.DataFrame(model.feature_importances_, x_train.columns)
coefs.columns = ["coefs"]
coefs["coefs_abs"] = coefs.coefs.apply(np.abs)
coefs = coefs.sort_values(by="coefs_abs", ascending=False).drop(["coefs_abs"], axis=1)
plt.figure(figsize=(16, 6))
coefs.coefs.plot(kind="bar")
plt.grid(True, axis="y")
plt.hlines(y=0, xmin=0, xmax=len(coefs), linestyles="dashed")
plt.show()
# read data
if __name__ == "__main__":
dataf = pd.read_csv("data.csv")
dataf["time"] = pd.to_datetime(dataf["time"])
dataf = dataf.sort_values("time")
dataf.rename(columns={"sump": "y"}, inplace=True)
lag_start = 80#要根据数据周期,调试
lag_end = 120#平移特征
x_train, y_train, x_test, y_test, x_pred, y_pred = build_feature \
(dataf, lag_start=lag_start, lag_end=lag_end, test_size=0.3, target_encoding=True, num_day_pred=1)
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_test_scaled = scaler.transform(x_test)
tscv = TimeSeriesSplit(n_splits=5)
#lr = LassoCV(cv=tscv)
lr = LinearRegression(normalize= "l1")
#可以尝试随机森林的效果,也不错。也可以做多模型结果融合,请自己尝试。
#lr = RandomForestRegressor(n_estimators=100, max_depth=10) #lag_start = 288, lag_end = 320
# lr = RidgeCV(cv = tscv)
lr.fit(x_train_scaled, y_train)
#train_score = lr.score(x_train_scaled, y_train)
#test_score = lr.score(x_test_scaled, y_test)
#print("num_tree", each, "score", train_score, test_score)
# future预测
y_future = predict_future(lr, scaler, x_pred, y_pred, lag_start, lag_end)
#print(x_pred)
# now 拟合
y_fit = lr.predict(np.concatenate((x_train_scaled, x_test_scaled)))
y = pd.concat([y_train, y_test])
# 显示结果
plt.figure(figsize=(16, 8))
plt.plot(dataf["time"], dataf["sump"])
plot_result(y, y_fit, y_future)
y_future.to_csv("y_future_lr_test.csv")
plot_importance(lr, x_train)
|
STACK_EDU
|
import folium
# Latitude, Longitude
LOCATION_DATA = [
("41.90093256", "12.48331626"),
("41.89018285", "12.49235900"),
("41.89868519", "12.47684474"),
("41.89454167", "12.48303163"),
("41.90226256", "12.45739340"),
("41.90269661", "12.46635787"),
("41.91071023", "12.47635640"),
("41.90266442", "12.49624457")
]
LOCATION_NAMES = [
"Trevi Fountain",
"Colosseum",
"Pantheon",
"Piazza Venezia",
"St. Peter’s Square",
"Mausoleum of Hadrian",
"Piazza del Popolo",
"Fountain of the Naiads"
]
if __name__ == '__main__':
folium_map = folium.Map()
for cords, name in zip(LOCATION_DATA, LOCATION_NAMES):
folium.Marker(location=[cords[0], cords[1]],
popup=f"Lattitude:<br>{cords[0]}<br>"
f"Longitude:<br>{cords[1]}<br>"
f"Name:<br>{name}"
).add_to(folium_map)
south_west_corner = min(LOCATION_DATA)
north_east_corner = max(LOCATION_DATA)
folium_map.fit_bounds([south_west_corner, north_east_corner])
folium_map.save("FoliumMap.html")
|
STACK_EDU
|
Tried with 0.5.2 and the latest nightly of mGBA, on windows 10 64-bit.
When grabbing two pegs at the same time your character gets turned around. This makes the game pretty difficult to play as you can't jump up, and even makes the tutorial character that is supposed to show you the ropes screw up all of their jumps.
The text was updated successfully, but these errors were encountered:
I'd already gotten this report in an email, but haven't gotten around to looking into it.
Here are the details from the email:
I'm using Windows 8.1 and mGBA 0.5.2 (the latest stable release, as of this writing.)
There is a serious movement orientation bug in DK: King of Swing which makes it difficult to play.
When your character is holding onto a peg with one hand, and tries to grab a peg with the other, they are almost always turned to a different direction than they should. This direction change continues after they let go; the character continues rotating from that direction, as if they had started from that direction to begin with.
After examining the specific cases where there were problems (listed below), the problem appears to be with vertical orientation detection when two pegs are grabbed, where the vertical orientation is reversed once a player grabs a second peg (i.e. if they are supposed to be facing up, they face down, and vice versa). The only case where this doesn't happen is if the character is supposed to be facing down, where they face down (see the bottom of the list below)
The specific errors are (these are demonstrated in the attached .avi file):
If they are supposed to be facing diagonally left-down, they face left-up.
If they are supposed to be facing diagonally left-up, they face left-down.
If they are supposed to be facing up, they face down.
If they are supposed to be facing diagonally right-up, they face right-down.
If they are supposed to be facing diagonally right-down, they face right-up.
They do face down when they are supposed to, but if you look carefully, you can see the sprite face up for a frame.
I went through the old nightlies just to be a bit more thorough.
Last version that worked: 7401371
In 812d063 the problem is slightly different and DK's right hand can't grab pegs at all, most of the time.
Finally, 7212854 is the first version with the specific issue described above/in your email.
So I'm assuming the problem is with the threaded rendering changes in 812d063 and ac11542.
|
OPCFW_CODE
|
Can we shorten/remove the question title length requirement?
Currently, the JSE requires its question titles be at least 15 characters long. However, this 15-character requirement was set for English, in which a word is easily 4-10 letters long, and any question in under 15 characters can be safely assumed to be too brief. In Japanese, however, with Kanji, the information density is much higher, and a complete question can be well under 15 characters.
My personal experience: I recently asked a question about the word 雛たち in Japanese. I wanted to make the title:
雛たちとは? [6 chars] OR 雛たちとは何でしょうか? [12 chars]
Had I asked in English, the lengths would be:
What does 雛たち mean? [19 chars]
I had to change the question to 雛たちとは何の意味でしょうか? to pass the 15-char limit.
In short, I feel the 15-char limit is designed to work with English, which has a lower information density than Japanese. Having to expand my question to fit the 15-char requirement felt extra and unnecessary. Should we lower the 15-char limit, or just have it removed?
Just want to let the community knows that technically it's possible to change the length limit, as shown on Japanese SO with 8 characters.
In principle I agree that titles entirely in Japanese should have a lower character limit, however there may be some other issues since majority of the questions appear to be written in English.
One possible problem is that it lowers the barrier to low quality questions. We frequently receive (poorly titled) translation or transcription requests. But one may argue that a higher length limit doesn't do much to improve the quality of such questions anyway. I.e. a user may just change the title from "translate" to "please translate" just to get around the length filter.
Now it seems (as of writing this post) there are no strong opinions either way. If there is sufficient community support in changing the minimum length, we'll escalate to the community team to implement the changes.
A quick reminder that votes in Meta may not convey the same information as votes normally do on the main site. On Meta, it is difficult to distinguish between an upvote for indicating that the issue is worth discussing, and an upvote that indicates agreement with the proposition in the post. (For example, my upvote on the question post indicates that I feel that this issue is worth discussing, but it may or may not indicate my support for the changes)
For this reason I will state clearly the use for upvotes on this answer post.
Upvote this post if you are in agreement with decreasing the minimum character limit
Downvote this post if you do not agree with decreasing the minimum character limit
|
STACK_EXCHANGE
|
//
// ViewController.swift
// Project18
//
// Created by Marcos Martinelli on 1/14/21.
//
import UIKit
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
/* DEBUGGING TECHINIQUES: */
// 1. print() - not the best, but it works
print("viewDidLoad() has been triggered")
print("viewDidLoad()", "has", "been", "triggered")
print("viewDidLoad()", "has", "been", "triggered", terminator: "") // no new line, "" instead
print("viewDidLoad()", "has", "been", "triggered", separator: "->")
// 2. Assetion with assert() - will crash your program before any harm is done
// such as losing use data
assert(1 == 1, "Math failure") // will not trigger b/c 1 does equal 1
assert(1 == 2, "Math failure") // this will fail, and will crash your app
// *** assert() will not appear in release code! this can be used liberally!
// *** it will be removed when the code is built for the app store!
assert(reallySlowFunction() == true, "The reallySlowFunction failed!")
// 3. Breakpoints
for i in 1...100 {
print("got number \(i)") // click line number to activate, re-click to deactivate, right click to delete
// fn+F6 to STEP-OVER line by line
// CMD+Ctrl+Y to CONTINUE until the next breakpoint
// (lldb) is a command line
// DRAG the green Thread breakpoint
// CONDITIONAL BREAKPOINTS!!!!
// - pause the breakpoint
// 4. View Debugging to capture your view hierarchy
// for autolayout!
}
}
func reallySlowFunction() -> Bool {
// this is just so the example above doesn't throw a compiler error
return true
}
}
|
STACK_EDU
|
Test on device-events using the Likelihood Ratio Test, originally proposed by Huang & Tiwari (2011). From the family of disproportionality analyses (DPA) used to generate signals of disproportionate reporting (SDRs).
1 2 3 4 5 6 7 8 9 10 11 12 13 14
Required input data frame of class
Further arguments passed onto
Optional string indicating the English description of what
was analyzed. If specified, this will override the name of the
Required positive integer indicating the number of unique times counting in reverse chronological order to sum over to create the 2x2 contingency table.
Alpha or Type-I error rate in the range (0, 1), used to determine signal status. It is the threshold for determining if the observed reporting rate is greater than the expected based on Monte Carlo simulations of the null.
Number of Monte Carlo samples for constructing the null distribution based on empirical data. Lowest recommended is 1000. Increasing iterations also increases p-value precision.
This is an implementation of the "Regular LRT" per
Huang & Tiwari (2019). It assumes a test on a single event of interest
where all other events & devices are collapsed, effectively testing a 2x2
table only. Therefore this is a test on the significance of the likelihood
ratio instead of the maximum likelihood over
i events for a given
j (refer to Huang & Tiwari, 2011).
ts_event, in the uncommon case where the
device-event count (Cell A) variable is not
"nA", the name of the
variable may be specified here. Note that the remaining 3 cells of the 2x2
contingency table (Cells B, C, D) must be the variables
"nD" respectively in
df. A named character
vector may be used where the name is the English description of what was
analyzed. Note that if the parameter
analysis_of is specified, it will
override this name. Example:
ts_event=c("Count of Bone Cement
A named list of class
mdsstat_test object, as follows:
Name of the test run
English description of what was analyzed
Named boolean of whether the test was run. The name contains the run status.
A standardized list of test run results:
for the test statistic,
ucl for the set
p for the p-value,
signal status, and
The test parameters
The data on which the test was run
mds_ts: LRT on mds_ts data
default: LRT on general data
Huang L, Zalkikar J, Tiwari RC. A Likelihood Ratio Test Based Method for Signal Detection with Application to FDA’s Drug Safety Data. Journal of the American Statistical Association, 2011, Volume 106, Issue 496, 1230-1241.
Huang L, Zalkikar J, Tiwari RC. Likelihood-Ratio-Test Methods for Drug Safety Signal Detection from Multiple Clinical Datasets. Comput Math Methods Med. 2019, PMC6399568.
1 2 3 4 5 6 7 8 9
Add the following code to your website.
For more information on customizing the embed code, read Embedding Snippets.
|
OPCFW_CODE
|
There has been an increasing interest on the impact of climate change on future air quality at both global and regional scales. The largest amount of research up to now used global-scale modelling tools to address the issue, while few recent papers use regional scale models to assess the impact of climate change on large urban agglomerations. The main issues of concern related to a regional scale set-up focusing on a city are the representativeness of the emission estimates of a regional inventory for the city as well as uncertainties in the emission projections. Regional scale projections, may be consistent with global scale climate scenarios but they are not representative of the future trend of a specific city. In this study we modelled air quality in the city of Paris, France at a mid-21st century horizon (2045-2055) under two emission and climate scenarios. The emission scenarios were developed for Europe from the Global Energy Assessment (GEA) to be consistent with the IPCCs recently developed Representative Concentration Pathways (RCPs) which incorporate only climate change actions. The emission scenarios include both climate (RCP consistent) and regional air quality policies. To cope with the aforementioned problems we combined two sources of information to project emissions for the city of Paris to the mid-century horizon. The first stems from a local agency (AIRPARIF) and includes a bottom-up high resolution emission inventory compiled for the year 2008 based on information on local activity and statistics. This inventory is projected by AIRPARIF to the year 2020 based on various air-quality policies already in place or planned for the next years. The second is a set of projection coefficients extracted from the two GEA scenarios for France and applied to the 2020 local inventory in order to obtain an emission inventory for 2050. Global scale concentrations were modelled with the coupled LMDz-INCA system and then downscaled with the regional scale air-quality model CHIMERE using two-level one-way nesting first at 0.5° (50km) grid covering Europe and then at a 4km horizontal resolution grid over the greater Paris area (Ile-de-France region). The IPSL-CM5-MR global-scale model was used to drive the WRF meteorological model for a regional domain in 50km resolution covering Europe which was subsequently downscaled to 10km resolution in order to derive meteorology for the Ile-de-France region. Two sets of simulations are performed: a continuous control run from 1995 to 2004 representing present time air-quality and a continuous run over the 2045-2054 decade representing air-quality projection to the mid-21st century. This effort aims in the development of a health impact assessment study for ozone and PM2.5 in the area and the potential differences that arise in air quality and health by using a local scale setup-up compared with a regional scale setup-up.
EGU General Assembly Conference Abstracts
- Pub Date:
- April 2013
|
OPCFW_CODE
|
Algebra math word problem solver
One instrument that can be used is Algebra math word problem solver. We will also look at some example problems and how to approach them.
The Best Algebra math word problem solver
This Algebra math word problem solver provides step-by-step instructions for solving all math problems. How to solve math word problems? Believe it or not, there is a process that you can follow to solving just about any math word problem out there. Follow these steps, and you'll be on your way in no time: 1) Read the problem carefully and identify what is being asked. What are the key words andphrases? What information do you already know? What information do you need to solvethe problem? 2) Draw a diagram or model to visualize the problem. This will help you to better understandwhat is happening and identify what information you need. 3) Choose the operation that you will use to solve the problem. This will likely be addition,subtraction, multiplication, or division, but could also be more complex operations such asexponents or roots. 4) Solve the problem using the operation that you have chosen. Be sure to show your workand explain your thinking so that someone else could follow your steps. 5) Check your work by going back and plugging your answer into the original equation. Doesit make sense? Are there other ways that you could check your work? If not, ask a friendor teacher for help.
If you're working with continuous data, you'll need to use a slightly different method. First, you'll need to identify the range of the data set - that is, the difference between the highest and lowest values. Then, you'll need to divide this range into a number of intervals (usually around 10). Next, you'll need to count how many data points fall into each interval and choose the interval with the most data points. Finally, you'll need to take the midpoint of this interval as your estimate for the mode. For example, if your data set ranges from 1 to 10 and you use 10 intervals, the first interval would be 1-1.9, the second interval would be 2-2.9, and so on. If you count 5 data points in the 1-1.9 interval, 7 data points in the 2-2.9 interval, and 9 data points in the 3-3.9 interval, then your estimate for the mode would be 3 (the midpoint of the 3-3.9 interval).
Math questions and answers can be a great resource when you're stuck on a tough math problem. Sometimes all you need is a little bit of help to get over the hump, and there's no shame in that. Math questions and answers can be found all over the internet, in books, and even in magazines. Just do a quick search and you'll find tons of resources to help you out. And if you really get stuck, don't forget to ask your teacher or tutor for help. They'll be more than happy to walk you through the problem until you understand it.
Integral equations are a powerful tool for solving mathematical problems. However, they can be difficult to solve. In general, an integral equation is an equation that involves an integral. The most common type of integral equation is a differential equation. A differential equation is an equation that involves a derivative. For example, the equation y'=y^2 is a differential equation. To solve a differential equation, you first need to find the integrating factor. The integrating factor is a function that multiplies the derivatives in the equation. It allows you to rewrite the equation as an equivalent first-order differential equation. Once you have found the integrating factor, you can use it to rewrite the original equation as an equivalent first-order differential equation. You can then solve the new equation using standard methods. In general, solving an integral equation requires significant mathematical knowledge and skill. However, with practice, it is possible to master this technique and use it to solve complex problems.
There are many ways to solve problems involving interval notation. One popular method is to use a graphing calculator. Many graphing calculators have a built-in function that allows you to input an equation and then see the solution in interval notation. Another method is to use a table of values. This involves solving the equation for a few different values and then graphing the results. If the graph is a straight line, then the solution is simple to find. However, if the graph is not a straight line, then the solution may be more complicated. In either case, it is always important to check your work to make sure that the answer is correct.
Solve your math tasks with our math solver
Very Helpful! A really great app for all ages! Although at sometimes I won't get the answer I expected, like for example 17 + n = 30, when I typed that I wasn't given my expected answer, would be great if they added another tab to put the things that are not that common, like a tray where you put the unnecessary items you have and just in case you need something it might be there.
This is so helpful for my math subjects. It's easy to use and you can pick between capturing the equation or typing it yourself which is really accessible. It even shows how to solve the problem in different ways for free so you can also learn how to solve it yourself. Thank you!
|
OPCFW_CODE
|
The Diversity Problem In Design Is A Lot More Complicated Than You Think
Before entering the MPS Communication Design program at Parsons, I was working in the design industry after completing my undergraduate studies at SUNY Purchase. I realized that there’s a lack of racial and cultural diversity in the New York design scene. It’s a subject I constantly strive to discuss, but often find too difficult to approach. When I started Parsons, I began questioning the idea of diversity in design and technology.
What does diversity mean in design?
What does diversity mean? What does it mean in the design landscape? The question became a lot more complicated than I thought as I discussed with other designers and classmates. Lack of diversity is not just in the design work environment. It really expands to all facets of design, from the tools we use to what we are taught in design education.
Lack Of Diversity In Design Education
I realized the lack of diversity in design education from the start of the semester. When I took graphic design history courses while pursuing my undergraduate degree, we were mostly taught Western graphic design movements and pieces. Even most publications focus on Western graphic design. Movements such as Swiss Modernism, Bauhaus, Dada, De Stijl, and Art Deco are all heavily emphasized in design education. Graphic design education rarely talks about designers, pieces, or movements outside the Western world. There is a lack of exposure to design from Latin America, South America, Africa, and all of Asia as well. Only after I graduated from Purchase did I start exploring design outside of the Western context. In order to truly create innovative and beautiful designs for the future, I think design education must reshape itself and expose students to designs outside of Western design.
Lack Of Diversity In The Tools We Use
Most interactive technology has been designed by Western people. Because of this, the way we design for screens specifically has been conformed to Westernized standards. Tools like coding have been developed for the English language — understandably so, as it is the most universal language globally. However, this creates the necessity to not only learn the English language to design for screens, but also require a person to read left to right, when a large portion of the world reads right to left. A lot of software have built-in languages, but in the area of development, when someone wants to learn to code a website or software from another part of the world they have to learn English before pursuing a project. However, before even considering such pursuits, a person must first have access to the technology itself.
Lack Of Diversity In Access To Design Technology
Pursuing a design career is a real privilege on an economic level. If you are studying design in a non-Westernized country, you must have a certain amount of economic — and sometimes even social — privilege to pursue a design career. Tools and technology should be accessible to all if someone wants to pursue a design career path. To even consider exploring the idea of design, one must have access to a computer, Internet, and of course software such as the Adobe Creative Suite. It is hard to access all of that unless you own the technology yourself or have access to an educational environment such as a school or library.
Even if you own a computer, you still need to pay for technology to even start creating something. It is approximately 20 dollars a month for just an Adobe student account, not even full price which is 50 dollars a month.Most people in the world have to pirate design software if they want to explore the idea of making work. However, the recent decisions in the US surrounding net neutrality could threaten the easy access (pirating) to software.
A lot of my friends who are raised in lower-income communities, started learning design by pirating design software. Without that access, a lot of people won’t be able to pursue their creative endeavours. What will happen when net neutrality laws are repealed? With internet companies able to selectively throttle or block certain traffic, file sharing and piracy could potentially disappear from the Internet. The access to tools will be increasingly difficult for people in lower income communities to even pirate the software. Tools and technology should be accessible to all if someone wants to pursue a design career path. This is also how we can tackle the problem of lack of diversity and inclusion in the work environment.
Lack Of Diversity In The Work Environment
Finally, the lack of diversity and inclusion in the work environment. To move forward in design, especially interactive design, we must include people from all walks of life. We are living in a time where technology and design are evolving so rapidly. New facets of design such as service and social innovation design require people of all backgrounds to truly make designs that work for all people. We can solve this issue by learning more about different design cultures and making design technology accessible for all people.
|
OPCFW_CODE
|
Transcript from the "Focus State Assertions" Lesson
>> Marcy Sutton: Okay, so I've got something that's opening that. I got the event to work, first of all, now I need to make an assertion. So, I need to grab the test ID off of here. So, it was dropped down item list. I'm gonna wait until I've actually opened the thing, just to make sure.
[00:00:25] Maybe this is a case for breaking up your test a little bit since I'm kind of mixing setup and assertions into each other. But YOLO, we're going to keep coming [LAUGH]. So, let's do the drop down list. We'll do dropdown.getByTestId.. I'll pass that string in for the drop down item list, and then I am going to assert.
[00:00:53] And I actually want to go and get this. So, the first item, the first list item and the first anchor inside of that is what's getting focused. So, let's go look at what this gives me.
>> Marcy Sutton: Let's see, getByTestId, I'm gonna go refer to that API a little bit and just to make sure that is gonna work the way that I want.
[00:01:16] So let's go back to Queries. TestId.
>> Marcy Sutton: ByTestId. So, close out of this.
>> Marcy Sutton: It should return an element to me, doesn't really say, does it? It just says, here you go, and doesn't really tell you what to expect. Here, returns an HTML element. Okay, so that's what we want.
[00:01:47] That means that I can run query selector on it and go and find the children. So, I'm gonna say, const firstAnchor. And I'll say, dropdownList.querySelector. And I can just pass it an anchor directly. And then I can say, expectfirstAnchor.toHaveFocus.
>> Marcy Sutton: I love these mergers. It was huge pain to do this before those existed.
[00:02:21] So if I run the test again, wohoo, that worked. And maybe in the spirit of test-driven development, I could go and verify, maybe I go and hack this or something. So, tab index of negative one would technically still be focusable because we're using a script to send focus.
[00:02:42] Even though it would be taken out of the normal tab order. So, to really test this and make sure that it's really working, I can just do something really quick and dirty and change this to a div. I like to make sure that it's doing what I really expect cuz sometimes it'll pass and it's not actually working.
[00:03:02] And that did work. So, if I change that to a div, we get that return state of, yeah, it can't focus that item because it's not focusable. So that's pretty cool. If I go back and put this link back in, run the test again. It's just good to verify.
[00:03:20] Yeah, so, awesome. It's pretty solid. I can assert that focus management and I feel much more confident about that component. If I took a vacation and came back and was a little rusty about how that worked. Or maybe my co-worker who's new to accessibility was adding features to it or something.
[00:03:41] The test act as a contract for your component APIs. And that's so powerful, especially if you think of automation with continuous delivery and continuous integration. You could prevent a deployment from going out if there's a bad merge or something breaks this. It is, and we'll talk a little bit about the different kinds of tests, but unit tests are great.
[00:04:04] Especially if you have these reusable components that you used a bunch of different places. You can test those inputs, test those APIs. We even highlighted that issue with the items, maybe there is a more robust implementation of that component that the test kind of highlighted. If someone goes to use it and they don't pass in the items, should it warn you?
[00:04:25] I don't know, maybe. Maybe there's some kind of like, we could use TypeScript or something and have it be like, eh, you forgot the items, that could help. I mean, it just kind of uncovers these issues where we otherwise, I don't know, we're kind of just winging it.
[00:04:40] And we don't really know how our APIs might be used or misused. So I think that's super powerful. I'm pretty happy with that.
|
OPCFW_CODE
|
Knockout mapping - validate arrays
How to set validation to Arrays using knockout validation?
My object definition
//c# code
public class Trophy
{
public string Name { get; set; }
public string Category { get; set; }
public double PrizeMoney { get; set; }
}
public class Player
{
public string Name { get; set; }
public List<Trophy> Trophies { get; set; }
}
I am able to set validation like 'required' using ko validation for simple types like 'Name' but I cannot set to Trophies which is an array. For simple types I use as below
// javascript code
var localModel = ko.mapping.fromJSON(getPlayerModelJson());
// Validation
localModel.Name.extend({ required: { message: 'Please enter first name' } });
Please let me know how to do for Name, Category and PrizeMoney with in Trophies?
I tried to make use of 'Customizing object construction using “create”' as mentioned in the
http://knockoutjs.com/documentation/plugins-mapping.html but it is creating a duplicate Trophies array item, for example if I have two list item in the Trophies the resulting object also has two items but it is duplicate of last item
// Java script code
var Trophies = function (data) {
Name = ko.observable(data.Name).extend({ required: { message: 'Please enter name' } }),
Category = ko.observable(data.Category),
PrizeMoney = ko.observable(data.PrizeMoney)
}
var localModel = ko.mapping.fromJSON(getPlayerModelJson(), TrophiesMapping);
//Custom mapping
var TrophiesMapping = {
'Trophies': {
create: function (options) {
return new Trophies(options.data);
}
}
}
All I wanted is validate the properties with in the array. Thanks
Here's a JSFiddle using mapping. I think your problem might be the following line:
var localModel = ko.mapping.fromJSON(getPlayerModelJson(), TrophiesMapping);
I copied your code and was scratching my head as to why it didn't work until I changed it to
ko.mapping.fromJS(...)
Thanks Corey Cole. The problem was not with ko.mapping.fromJSON but with the trophies object, the difference in your code was you are referenceing using self(this), like self.Name , self.Trophy and self.PrizeMoney. So that solved the problem.
Take a look at this example upida.azurewebsites.net
Click - Add Order, and in the new Window you will see Array of products.
Try to add several products, and fill them with data.
Try to Save, and see how validation works, it is knockout.js.
It is server side validation, with no any client side constraints.
hmm ... doesn't look like an answer to me
|
STACK_EXCHANGE
|
With the evolution of software development, the need for automation has become more apparent than ever before. The implementation of AI-backed tools, the adoption of DevOps technology, and the use of bots have significantly enhanced the efficiency of automation, reducing human error and manual labor.
- Leveraging AI-Powered Tools to Enhance Automation Efficiency
- Key Considerations in Implementing DevOps Technology for Automation
- Reducing Human Error and Manual Labor through Intelligent Automation
- The Role of Pipeline Automation in Streamlining Operations
- Exploring the Use of Bots to Automate Routine Tasks
Artificial Intelligence (AI) has been a game-changer across various industries, and software development is no exception. The advent of AI-powered tools has revolutionized the way automation is implemented in software development.
These tools leverage advanced machine learning algorithms and natural language processing techniques to automate tasks, which were previously performed manually. For instance, AI-powered code review tools can automatically analyze code to identify potential bugs, security vulnerabilities, and code quality issues. This not only speeds up the code review process but also reduces the likelihood of human error.
Similarly, AI-powered testing tools can generate test cases, execute tests, and analyze the results automatically. This enhances the efficiency of the testing process and improves the accuracy and reliability of the tests.
Leveraging DevOps Technology to Enhance Automation Efficiency
DevOps technology plays a crucial role in enhancing automation efficiency. It involves the use of various tools and practices to automate the entire software development lifecycle, from code integration to testing, deployment, and monitoring.
By automating these processes, DevOps technology enables teams to deliver software products faster and more reliably. It also reduces the amount of manual labor required, thereby freeing up team members to focus on more strategic tasks.
Moreover, DevOps technology promotes a culture of continuous improvement, wherein processes are regularly reviewed and updated to enhance efficiency and effectiveness.
The Integral Role of Bots and Autonomy in Streamlining Automation
Bots, or autonomous programs designed to perform specific tasks, play an integral role in streamlining automation. They can automate routine tasks such as scheduling meetings, answering common questions, and managing tasks and workflows.
For instance, chatbots can automate customer service by answering common customer queries, thereby reducing the workload on customer service representatives. Similarly, task bots can automate project management tasks such as tracking progress, updating task statuses, and notifying team members of upcoming deadlines.
By automating these routine tasks, bots not only enhance efficiency but also improve accuracy and consistency, thereby contributing to the overall quality of the software product.
Software orchestration factories refer to the automated processes and tools used in software development. These include continuous integration and deployment, automated testing, and infrastructure as code. These practices can significantly enhance the efficiency of automated processes by reducing manual work, preventing errors, and speeding up the development process.
Moreover, software orchestration factories promote a culture of collaboration and learning. They encourage team members to share knowledge and best practices, leading to continuous improvement and innovation.
Security Audits and Control Gates as Key Elements of Efficient Automation
Security audits and control gates are key elements of efficient automation. Security audits involve the systematic evaluation of a system's security measures to identify potential vulnerabilities and ensure compliance with security standards. Control gates, on the other hand, are checkpoints in the development process where certain criteria must be met before proceeding to the next stage.
By automating security audits and control gates, teams can ensure that security measures are consistently applied and that any potential issues are identified and addressed promptly. This not only enhances the security of the software product but also improves the efficiency of the development process.
The Criticality of Code Quality and Product Distinction in Automated Processes
Code quality and product distinction are critical in automated processes. High-quality code is easier to read, understand, and maintain, which makes the automation process more efficient. Product distinction, on the other hand, involves differentiating a software product from its competitors through unique features, superior quality, or innovative technology.
By focusing on code quality and product distinction, teams can enhance the efficiency of automated processes and deliver superior software products that stand out in the market.
Azure DevOps, a suite of development tools from Microsoft, is a key player in enhancing automation efficiency. It offers a range of features for continuous integration and deployment, automated testing, and infrastructure as code.
By leveraging Azure DevOps, teams can automate their development processes, enhance efficiency, and deliver high-quality software products swiftly and reliably.
The Role of Security Checks in Continuous Integration/Deployment Systems
Security checks play a crucial role in continuous integration/deployment systems. They involve automatically checking the code for security vulnerabilities at various stages of the development process.
By automating security checks, teams can identify and address potential security vulnerabilities early in the development process, thereby enhancing the security of the software product and reducing the risk of security breaches.
Assessing the Impact of Microservices on Packaging and Deployment Efficiency
Microservices, a software development technique where an application is structured as a collection of loosely coupled services, can significantly enhance packaging and deployment efficiency. Each microservice can be developed, packaged, and deployed independently, which allows for more efficient use of resources and faster deployment times.
By adopting a microservices architecture, teams can enhance the efficiency of their automated processes, deliver software products faster, and more easily scale their applications to meet changing demand.
In conclusion, harnessing efficiency through automation involves leveraging AI-powered tools, implementing DevOps technology, utilizing bots for routine tasks, and adopting practices such as security audits and control gates. By doing so, teams can streamline their operations, reduce human error and manual labor, and deliver high-quality software products swiftly and reliably.
- Continuous Integration/Continuous Deployment (CI/CD) Tools: Jenkins, CircleCI, and GitHub Actions can automate the integration and deployment of code.
- Infrastructure as Code (IaC) Tools: Terraform, Ansible, and Chef can automate the provisioning and management of infrastructure.
- Container Orchestration Tools: Kubernetes and Docker Swarm can automate the deployment, scaling, and management of containerized applications.
- Test Automation Tools: Selenium, JUnit, and Mocha can automate testing processes to reduce human error and manual labor.
- Configuration Management Tools: Puppet, Chef, and Ansible can automate the process of configuring software systems.
- Code Review Tools: Crucible and Gerrit can automate the process of code reviews.
- Monitoring and Alerting Tools: Prometheus, Grafana, and PagerDuty can automate the process of monitoring systems and alerting developers to issues.
- Security Automation Tools: OWASP Zap and SonarQube can automate the process of identifying security vulnerabilities in the code.
- Chatbots: Tools like Slackbot or Hubot can automate routine tasks like scheduling meetings or answering common questions.
- AI-Powered Tools: Tools like DataRobot or H2O.ai can automate data analysis and machine learning tasks.
- Scripting Languages: Python, Bash, and PowerShell can be used to write scripts to automate various tasks.
- Robotic Process Automation (RPA) Tools: UiPath, Blue Prism, and Automation Anywhere can automate repetitive tasks.
Talk to an Expert
Connect with our diverse group of UDX experts that can help you implement successful DevOps practices within your organization.
|
OPCFW_CODE
|
In case you missed any of the memos over the past decade, it’s now confirmed -- Python is very, very important to getting hired in the high paying field of data science.
In a worldwide survey of almost 20,000 data professionals, Python was used by 87 percent of those surveyed(Opens in a new window), more than double the no. 2 programming language, the database driver SQL. And that represented an increase from the previous year, proving Python is not only overwhelmingly popular, but it’s actually growing in usage too.
Even if you’re learning coding for the first time, the training in The Python 3 Complete Masterclass Certification Bundle ($29.99, over 90 percent off) is a huge first step to both learning a foundational web skill and also working as a real web development or data science pro.
Over seven courses, more than 30 hours of instruction and loads of practical exercises, projects and other teaching tools, students get a complete overview of Python from the basics through to the advanced uses that can elevate you to the standing of a true Python master.
Starting with Python begins with the four-part Python 3 Masterclass course, a gradual system for moving from novice to expert at a student’s own pace.
The building blocks are forged in Python 3 Complete Masterclass: Part 1, as new Python learners explore concepts like strings and string methods, handling syntax errors and exceptions and more with the use of examples and exercises based on real world situations.
In Python 3 Complete Masterclass: Part 2, Python’s many abilities are brought to bear on other apps, explaining how Python can be used to automate Excel sheets, build database tables and get several devices all working together.
The training continues in Python 3 Complete Masterclass: Part 3, this time getting students up to speed on the intricacies of data analysis and data visualizations. Here, students train in using PostgreSQL databases to automate tasks, reformat and process data in a variety of file formats, and use Boken to create eye-opening visualizations of your results.
Finally, the opening salvo is complete with Python 3 Complete Masterclass: Part 4, including extensive practical training in performing several key Python functions, from basic script testing to web content extraction. Students also learn 10 effective steps for turning your Python skills into a paycheck and even building a responsive portfolio that will help you land that job.
Meanwhile, the ins and outs of networking also gets some extensive coverage with the Python 3 Network Programming - Build 5 Network Applications and Python 3 Network Programming (Sequel): Build 5 More Apps courses. Aimed at experienced Python engineers and admins, this course has students put together 10 different networking apps that stretch their understanding of Python’s uses.
The training closes with the Python Regular Expressions: From Beginner to Intermediate Level, a deeper exploration of expression building in Python, including metacharacters, special sequences, extension notations and more.
Each course in this Python learning package is a $199 value, but with this collection, it’s all available for a fraction of that price, just $29.99.Note: Terms and conditions apply. See the relevant retail sites for more information. For more great deals, go to our partners at TechBargains.com(Opens in a new window). Now read:
|
OPCFW_CODE
|
In this lesson, we'll use Ramda's
toPairs function, along with
compose to create a reusable function that will convert an object to a querystring.
[00:00] I've included Ramda, and I'm using de-structuring to grab a few of its utility functions. I also have a query string object with the page size and total property. My expected result is a query string made up of those key/value pairs. I'd like to replace this hard-coded value with a function that'll take in our object and result in this string.
[00:19] I'm going to grab the string, and I'm going to cut it. I'm just going to drop it up here in a comment so we have it for reference. I'm going to start by setting my result to create qs, which is going to be a function that we'll create in a second. I'm going to pass that our query-string object. Then I'll define our create qs function.
[00:40] For now I'm going to set that to equal two pairs. We'll see that it's generated an array of arrays where each inner array is a key/value pair. We're going to need to process this further, so I'm going to make this a composition. I'm going to wrap. I might call it two pairs in a compose.
[01:05] After we've converted it to these nested arrays, I need to map over each one of those inner arrays and convert each of those into a string. For that I'm going to use map. Then in the map I'm going to use join, and I'm going to join each item and each inner array with an equal sign.
[01:22] We'll see our result is an array of three strings with our key and value pairs. Now that our array's been flattened out I can join each one of the strings in this array with an ampersand. We have a string that has our key/value pairs.
[01:40] The only thing left is to add the question mark to the beginning. For that I'm going to use CONCAT, and I'm just going to CONCAT with a question mark. I'm just going to newline each of these functions so it's easier to see the entire composition. You'll see that we have a function that'll take in any object and convert it into a query string.
[02:02] I can come up here if I wanted to change these values. Say I have a total of 205. We'll see there are string updates to reflect that. If I add another property, so we'll add something else, we'll see that our query string is updated to reflect that new value.
|
OPCFW_CODE
|
I need to get a PHP code that can retrieve datas from a FIREBASE DB. The code MUST be PHP5 compatible The code MUST be portable (copy/past the files and it works) I provide the DB URL + Auth JSON file I want to execute the code on a simpliest hosting and get my object that can access the datas, piece of cake for a firebase friendly programer
Hi, I would like to ask somebody to 1) Search a word in Google. For example "cryptocurrency" 2) Crawl the google search result up to 1000 page. 3) Get the URL. 4) Check the first page of URL whether it has the word "bitcoin" or not. 5) If yes save it into a text file.
...retrieving data from a remote MS SQL server database that is currently exposed to the internet. Having the database exposed to the internet in this way is bad practice and is open to it being attacked. The alternative is to create an API web service on the same server where the SQL Database resides. I would then like to have 4 basic web api's that can
JSON data should be saved in SQL datbase (using Python or some other coding language) Saved data in sql should be displayed on html page
JSON data should be save in SQL datbase (using Python or some other coding language) Saved data in sql should be displayed on html page
Access a url using python and retrieve the resulting json data to a excel sheet
Build a web solution that client supply the IMEI of a lost mobile phone, and we will be able to get the current mobile number inserted into the mobile phone. We are looking to build a system which will be able to track a lost phone and retrieve the current phone number that is using the phone using IMEI. EXAMPLE: If someone stole a phone and removed
It's a WordPress website, and here is the algorithm for PHP script that needs to be executed within particular .php file: 1. Check if WordPress user membership level is "dolphin": <?php if(woocommerce_members_only(array("dolphin"))): ?> ...do the rest <?php endif; ?> 2. Get user ID (X) 3. Get user membership start/end dates from: Table 1...
...they wants on time after 48 hours they closed our accounts and all our websites down and they said nothing received from us!! after that they asked to pay them 200$ to retrieve hosting files, emails and domain names we paid for them what they asked but after 12 hours we received email below "At this time your account is terminated so we will
Retrieve a order as soon as it is created on my shopify store and post the results to a excel sheet
I go to events in different cities around my area to sell goods. Some events I make money at, and others I break even or lose money. I wanted to keep track of the details and be able to predict if it would be worth me going to the next event. (see images) I want to be able to enter in a location, the series of an event (promoter), and what round (number) of the event. Then, I want the information...
...turn over the document I filed privately in 1989. What would be the court costs? What would be your court time costs? What would be the cost for you to retrieve the document? Can you retrieve it over the phone with the Land Registry? What would be the fee charge from the Land Registry? I can provide you with an apostilled passport. TYVM in advance
...assistance with restoring a windows small business server 2011 backup that has exhange 2008 data files in order to retrieve individual email accounts. Our server was hit with ransomware and we had to reload SBS2011 however we lost multiple accounts emails and entire data files that we would want to restore back with out affecting the current reload.
Hello, I have an excel file with part number...numbers from my [url removed, login to view] file - and pull list price and my cost price from my account and put into new file. I will not create milestone until You Assure Me that you can retrieve both List Price & My Cost Price As shown in my Account.. So Check url's first. My budget is $50 if this can be done.
...dialogflow.com. I have an information guidebook in text format, word and pdf. The information can be largely categorized into 8 categories. The chatbot must be able to retrieve and display the links to the information based on the inputs by the user. As the chatbot learns, it will need to answer or redirect the user to the specific page of the information
user can enroll to many events + events can have many users table user_event : id / eventId / userId flow :- 1) insert data to table user_event 2) get all events user enrolled with display in html I need it now using teamviewer
|
OPCFW_CODE
|
TestExecute sometime dont close even with the /exit argument
Hi everyone, I'm currently experiencing a weird issue.
After an upgrade from TestExecute 12 to TestExecute 14.60, sometimes TestExecute doesn't close after completion.
As I am using Jenkins to launch a serie of ProjectSuite, if testExecute does not close, every other Project Suite are not launched as the previous one is seen as running.
I've attached a picture of TestExecute not closing even after completion.
Even if I'm right clicking the icon and click on "Exit", nothing happen and TestExecute stays opened. I am forced to kill the process to exit TestExecute.
It does not happen during the test execution, but only when all tests are finished...
The command line launched by Jenkins via the TestComplete plugin:
'"C:\Program Files (x86)\SmartBear\TestExecute 14\bin\TestCompleteService14.exe"' //LogonAndExecute //lDomain: "" //lName: "Testadmin" //lPassword: ******** //lTimeout: "14580000" //lUseActiveSession: "true" //lCommandLine: '""C:\Program Files (x86)\SmartBear\TestExecute 14\x64\bin\TestExecute.exe" "C:\Jenkins_Infra\workspace\MICRO-SESAME - Tests journaliers VBS\Projets\ACCES\ACCES.pjs" /run /SilentMode /ForceConversion /ns /exit "/ExportLog:C:\Jenkins_Infra\workspace\MICRO-SESAME - Tests journaliers VBS\1597996642359.tclogx" "/ExportLog:C:\Jenkins_Infra\workspace\MICRO-SESAME - Tests journaliers VBS\1597996642359.htmlx" "/ErrorLog:C:\Jenkins_Infra\workspace\MICRO-SESAME - Tests journaliers VBS\1597996642359.txt" /Timeout:14400 /DoNotShowLog /JenkinsTCPluginVersion:2.5.1"' //lSessionScreenWidth: "1920" //lSessionScreenHeight: "1080"
This command line is the same for every Project Suite launched, and hangs only sometimes. I have never encountered this issue after almost 3 years of using TestComplete/TestExecute 12...
If you have any solution or idea, that will be awesome.
Solved! Go to Solution.
Okay I think I've found the error.
On TestExecute logs, Silent.log, there is the following log each time TestExecute is trying to close:
Tried to open the error dialog? Message; At least one modal dialog is open.
Please close all the modal dialogs before closing the project suite..
And indeed, I have a modal dialog opened (A zip compression error).
I already had this dialog in TestExecute 12 without having any issue, but it now seems that this dialog prevent TestExecute from closing...
@GGuezet Great that you found the cause of the issue. Did you find a way to work around this?
The change in behavior could be caused by the new tool version. But I think the Support Team would need to look at your issue more closely to investigate this properly.
Community and Education Specialist
The solution I have found is to update my code to properly handle zip compression errors (Files used by another process, etc...)
So without a modal dialog, TestExecute is closing correctly 😉
|
OPCFW_CODE
|
So I was looking through the TempleOS history (I really miss Terry, it was really good at least as a background noise when I was programming, and I really enjoyed his knowledge.), and I found out that the TempleOS was originally called SparrowOS, and before that LoseThos. So I checked out the website of SparrowOS in the archive.
(check it for yourself.)
And there is no mention of God, and it looks like Terry is completely normal, so do you think that his Schizoprenia got more severe in the last years, and do you think that the first goal was really to make it for God? I just want some discussion, I know that all of Terry's fans are nice people, so let's see if someone has more information maybe, Terry was a really interesting person
Trying to find the video clip where he's talking about how horrible drawing an elephant with more colors is. He states it's Agony. Even if you remember a rough time frame or potentially the title of the stream or video and I'll scrub it myself. I can't justify scrubbing every video in the google drive video dump. Thank you in advance fellow CIANF's
How the fuck haven’t I heard of this earlier? All you can find online is 1 article. Anyways, RIP Terry. It’s sad that he had so many problems in his life. At first he was funny to laugh at because of his schizophrenia, but when I actually learned of the story behind Terry it made me realize just how much pain this guy has to go through. RIP
Does anyone know if the entire time Terry was working on the project he believed he was communicating with God? From what I can tell, early on when Terry was programming LoseThos it didn't have such a religious focus. In the LoseThos demo videos Terry isn't talking about God and he provides rational explanations for why he does things like 640x480 and how he named it LoseThos as a parody of "Win"dows.
At the same time, there are some old rants Terry was posting on forums when it was still LoseThos talking about God telling him how to implement things and the usual. I haven't been too stringent with dating these things. But what it seems like to me is what started off as his hobby project (LoseThos) only took on its full-blown "mission from God" characteristic after it was already pretty much totally functional, then much later became TempleOS.
Did early versions of LoseThos have the "words from God" feature, bible, or religious games? It seems likely that Terry's interest in making a hobby OS predated his schizophrenic break. I'm not sure when exactly that happened (I believe 1999 was the car incident?) or when he first began coding the OS.
I don't think I have actually felt this bad about somebody dying that I never have and will never meet. To me he seemed like one of those dudes that would outlive you no matter the odds against him.
IDK I just wish he was able to live his life without the terrible illness that plagued him. Such a great mind.
RIP Terry Davis <3
I am building a 16 bit CPU/Minicomputer from scratch using 74HC logic on wirewrap boards, similar to BillBuzbee's Magic-1 Minicomputer, and I been interested in possibly running TOS on this system, however TOS is 64bits and my system is 16 bits.
Is it possible to port TOS to a 16bit system?
I am here asking for help from anyone who would like to participate. I can teach you digital electronics as a "bonus".
Somebody should really try and construct one. It would be very interesting especially in regards to mental health and his daily exploits. He was obviously a genius but that was taken away from his schizophrenia. It could be like a Netflix documentary and there would be funding if anyone seriously took up a project.
|
OPCFW_CODE
|
Structured, Unstructured and Semi-structured Logging Explained
This blog was originally published July 20, 2021 on humio.com. Humio is a CrowdStrike Company.
Structured, semi structured and unstructured logging falls on a large spectrum each with its own set of benefits and challenges. Unstructured and semi structured logs are easy to read by humans but can be tough for machines to extract while structured logs are easy to parse in your log management system but difficult to use without a log management tool.
What Is Structured Logging?
Structured logging formats log data so it can be easily searched, filtered, and processed to enable more advanced analytics. The standard format for structured logging is JSON, although other formats can be used instead. Best practice is to use a logging framework to implement structured logging and that can integrate with a log management solution that accepts custom fields.
Differences Between Structured, Unstructured and Semi-structured Logs
Unstructured logs are massive text files made up of strings, which are ordered sequences of characters that are meant to be read by humans. In logs, these strings contain variables, which are placeholders for qualities that are defined elsewhere. Sometimes the variable is a wildcard, which is a placeholder that represents an unknown quality, just like in poker.
People can understand variables easily, but that’s not always true for machines. They can’t always tell the difference between a variable in one string and a similar sequence of characters elsewhere in the log file. When that happens, the results can be confusing, leading to slowed productivity, increased fallibility, and wasted man-hours and processing cycles.
Structured logs consist of objects instead of strings. An object can include variables, data structure, methods, and functions. For instance, an object that’s part of a log message might include information about an app or a platform. The organization can define the criteria they wish to include in the object in order to make the logs most useful in meeting their unique needs. This is the “structure” in a structured log.
Here is an example of a structured log:
Because structured logs are meant to be read by machines, the machines that read them can perform searches on them faster, produce cleaner output, and deliver consistency across platforms. Humans can still read structured logs, but they are not the primary audience. They are the audience for the output once a machine has finished operating on the data.
Semi-structured logs support both machines and humans, the logs consist of strings and objects. These logs usually need to be parsed into tables before they can be analyzed properly. These semi-structured logs haven’t found a standardization yet, thus making it harder for several programs and systems to identify and categorize them. For example, the quoting rules for the value of a white space, is not universally defined. Humio has taken steps in the right direction and can adapt to semi-structured logs in your environment.
Logging 101 Workshop > Watch now to learn how log data can be used to understand the health of the IT environment, keep it more secure, enhance business intelligence, and strengthen relationships with customers.
Why Use Structured Logging?
Finding an event in an unstructured log can be difficult, with a simple query returning far more information than desired and not the information actually wanted. For example, a developer seeking a log event created when a specific application exceeded disk quota by a certain amount may find all disk quota events created by all apps. In an enterprise environment, that’s going to be a big file.
To find the right event, the developer would have to write a complicated regular expression to define the search. And the more specific the event, the more complicated the expression. This approach is computationally expensive at scale because the conditions defined in the match expression have to be compared to every row value in the log record. If wildcards are used, the computational expense is even higher. And if the log data changes, the match expression won’t work as intended.
In some organizations, the developers write code in the form of strings, while Ops teams write code that parses those strings into structured data. This takes more time and increases the computational expense. If a developer or an Ops team member makes an error, the logging process breaks and more time is lost finding the source of the error.
Structured logging eliminates these problems by structuring the data as it’s generated. The organization can choose the format that works best for them, such as fixed column, key value pairs, JSON, etc. Most businesses today choose JSON format because it integrates well with automation systems, including alerting systems.
Text logs continue to have a place in enterprise because structured logging has a few drawbacks. Structured logs define data as it is created, so the data can only be used for purposes served by that definition. And if the structured data is stored on-premise or in any data warehouse with a rigid data schema, changes to that schema will require the structured data to be updated, which is a vast and costly endeavor. When deciding on a logging strategy, organizations should consider who will be using the data, what type of data is collected, where and how the data will be stored, and whether the data needs to be prepared before storing it or if it can be prepared when used.
Free Log Management Course > Follow along on this 6-part course to master tactics that allow for scalability while planning, designing, and integrating development and security practices into every aspect of your infrastructure.
Humio Supports Structured, Semi-structured and Unstructured Logs
The benefits of structured logging can only be realized with a flexible, scalable logging management system that supports development, compliance, and security needs.
Humio handles all unstructured, semi structured and structured messages, Humio, works with any data format, and is compatible with the leading open-source data shippers. Custom parsers make it easy to support any text format, so integrating Humio is simple and quick.
Most users send structured data to Humio as JSON objects. They don’t have to be formatted in any special way, they just have to be valid. Time stamps can be sent as part of the log entry, and Humio will use your time stamp instead of replacing it with its own. When sending unstructured data, time stamps are generated at the time of ingestion as a long comma delimited string and do not impact the ingestion time stamp.
Humio customers report better observability, flexibility, reliability, and cost effectiveness. Humio’s purpose-built logging tool, featuring innovative data storage and in-memory search/query engine technologies, delivers blazing-fast log management and index-free data ingestion that can’t be attained with traditional log management tools.
|
OPCFW_CODE
|
In its efforts to extend the functionality of web apps, Google has been developing and supporting two new HTML APIs that may end up making the web less safe and compound on the existing security issues of the Internet of Things. The two new APIs are Web Bluetooth, which has already been enabled in the latest version of Chrome, as well as the WebUSB API.
Connecting Everything To The Web
As a mainly internet-dependent company, it makes sense for Google to want to connect as much as possible to the internet. Google owns some of the most-used internet services, both user-centric such as Gmail and developer-centric such as Google Analytics. That means the more “things” are connected to the Internet, the more data ends up being collected by the company, which it can then monetize.
The two new APIs are tied to Google’s Physical Web initiative, which aims to replace native apps that control the Internet of Things with web apps. Google believes this will make it easier for users to connect to any device they want, anywhere in the world, through the web.
Making The IoT Security Problem Worse
According to one Chrome security engineer, web-connected Bluetooth devices could be subject to the following types of attacks and vulnerabilities:
An abusive software developer, trying to do embarrassing or privacy-insensitive things that don’t go outside devices’ security models.A malicious software developer, trying to exploit users using nearby Bluetooth devices.A malicious hardware manufacturer, trying to exploit users or websites who connect to their devices.A malicious manufacturer/developer, who can push cooperating hardware and software.Weakly-written device firmware, which doesn’t intend to hurt its users, but might be vulnerable to malicious connections.Weakly-written kernels, which might be vulnerable to either malicious userland software or malicious connections.
In other words, even if the Chrome team tried to make this specification “as secure as it can be,” there’s no question that the API will increase the number of ways in which web-connected devices could be hacked, compared to the status quo. In terms of security, it’s a net negative.
Malicious software and hardware developers will always exist, as will weakly-written device firmware. As for weakly-written kernels, that’s already the status quo, whether we’re talking about the decades-old legacy-supporting Windows and Linux kernels, or about the mostly un-patched Android kernels out there.
The WebUSB API is potentially even more dangerous than the Web Bluetooth API. For instance, consider all the surveillance cameras getting hacked and feeds from them being made public on the web. The same could happen with USB-connected webcams that can be remotely controlled via the web. We would also likely see many more printers getting hacked through the Internet, too.
Can We Even Secure Web-Connected Devices?
If the recent rise of massive distributed denial of service (DDoS) attacks and ransomware that targets city infrastructure has taught us anything, it's that perhaps we shouldn’t allow absolutely every single device or component with a chip in it to be remotely controlled over the web.
Doing that seems to lead to only two outcomes. One is that securing these devices is going to require too many resources to ensure that everything that has now been exposed to the web is virtually unhackable. The second, and the more likely one, is that most companies are not going to put in the effort and resources necessary to ensure that their devices can’t be hacked.
Therefore, when Google or other companies support technologies that deliberately open devices to remote access, they're already accepting that those devices might be hacked. This is regardless of what requirements (such as having to use TLS encryption for controlling Web Bluetooth devices, for example) they make to minimize the damage after already deciding to design such a protocol.)
Perhaps connecting everything to the web is just the natural evolution of technology and nothing can stop it. However, we could at least ensure that the potential damage is reduced by designing specifications with stricter requirements from the beginning. It’s not clear if that’s what’s happening right now.
Google may aim to make a specification “secure,” but a trade-off will always be made between how secure it can be, and what the developers of device manufacturers are willing to spend to implement that specification. Therefore, deciding on the right compromise is not something fixed in stone.
The specification can be made less secure if there is a bigger push-back from the software developers and manufacturers, or it could be made more secure if there is a similar push-back from users or even the editors of the specification (which in this case happens to be mainly Google).
Prioritizing Local Control Of Smart Devices
We've learned over the past few years that everything connected to the internet tends to be less secure. Therefore, it follows that a device can be made more secure if it's not connected to the internet. Perhaps we should strive to minimize how many devices can be connected directly to the internet by emphasizing localized control and asking ourselves, "Do we really need internet-controlled light-bulbs?"
This may not be to Google's advantage, as it won't be able to obtain as much data from non-internet-connected devices, but it may be to the benefit of the internet at large. Some devices may actually work better and be more useful when connected to the internet, but the majority of the "Internet of Things" probably doesn't actually need an internet connection, especially if those devices can be controlled locally, either through a physical push of a button or through local networks such as Bluetooth, NFC, Thread, or other P2P mesh networking technologies. The latter could bring much of the same convenience of controlling a smart device from an app, without the downside of allowing someone from the other side of the world to connect to it as well.
|
OPCFW_CODE
|
This week, we’re talking about how addressing the skills gap can help you when hiring great engineers
Back in 2014,hackajob was started by founders Razvan Creanga and Mark Chaffey. Frustrated with the traditional recruitment agency approach to hiring, Raz had witnessed first-hand the difficulty to employ top engineering talent.
As frustrating as it is, that’s not the only problem. There’s a major digital skills gap in the UK, and it’s growing. According to the Financial Times, information technology faces one of the biggest labour shortages. IT is now the most in-demand skill set, with developers, programmers and software engineers being particularly sought after. When asked by the FT, the head of policy at Tech UK could only agree, stating, “we’re not producing the right skills… Businesses will go elsewhere if they can’t get access to the right people here in the UK”.
So what do we do? How do we address the gap that’s having a huge impact on businesses in the UK? And how do we communicate with the skilled workers that we so desperately need, and in the right way?
Our recommendation? Change the recruitment and hiring process. The attitude around tech recruitment needs to change and must focus on skills, not CVs. If a candidate has the skills to do a certain role well, then why shouldn’t they get the job? Unbiased hiring is the way forward, and it’s the secret to hiring great engineers.
When discussing this topic previously, we know that people want to be unbiased, but as humans, we are naturally conditioned to seek familiarity. And it’s not our fault. For example, we all like traits that we recognise in ourselves, which is why we self-select when it comes to surrounding ourselves with people who we like. It’s exactly the same scenario if you’re a recruiter, talent manager or head of HR. Ultimately, we’re all guilty of accidental bias.
Hiring based on skills, not CVs is a fantastic idea, but how do we put it into practice? At hackajob, the beauty of our platform is that we fast-track the hiring process by hiring software engineers who have already proven their ability in their specific domains. Giving candidates the opportunity to upload past projects from repositories like GitHub, we also offer coding challenges within our custom-built IDE. Allowing us to see individual skills and strengths instead of CVs and where people went to school, our AI matches candidates with the perfect companies for them. What’s more, we ensure that companies apply to candidates directly, with no recruiters involved and salaries offered upfront.
One of the best things about our platform is that it values the quality and skills of candidates above everything else. We’ve helped a full-time chef at Wagamama hang up his apron and make his passion of coding a reality, and have encouraged taxi drivers who code in their spare time to sign up and get matched with their ‘dream’ companies – their words, not ours.
Diversity should not be about ticking a checkbox. Instead, it’s about proving how brilliant individuals really are, based on their skillset. Again, it really is the secret to hiring great engineers. Looking at what they can actually do, how they’ve contributed in the past, their strengths and weaknesses. It’s not about choosing someone because they’ve got a degree from a ‘good’ university, or they wrote the best ever cover letter and they attached a photo that makes them look ‘nice’. As we wrote about in one of our previous articles, it’s experience gained on the job, commitment to projects out of work, and the lessons learnt from it all that ultimately provide the best, most-rounded experience.
Do you have any secrets to hiring great engineers? Make sure to let us know.
|
OPCFW_CODE
|
What is current best practice for setting up recycling?
I'm interested in public policy as regards recycling. I recently read an interesting Atlantic article on single-stream recycling, which noted that single-stream certainly makes it as easy as possible for end users to recycle, but can raise expense and result in potentially-recyclable materials being put in a landfill.
Cities need to choose the "best" policy for recycling, but "best" has to include cost, convenience for the end user, what percentage of discarded material ends up in an undifferentiated landfill, least harm to the environment, as well as public relations.
For lowest cost, the best policy is probably collecting only the most valuable recyclable materials (steel, aluminum and newspaper) and chucking the rest in a landfill. For best public relations (assuming their public is clueless) they'd just paint RECYCLE on the side of all their trash trucks, and still chuck everything in the landfill. For most material actually recycled and/or least environmental harm, they'd do multiple-stream recycling and pay a lot of money for hand-separation of additional materials. But none of these would be a balanced policy.
So, what is the current state-of-the-art for recycling policy? If location matters, then choose the Boston, Massachusetts area, or specify whatever location you like.
@JanDoggen Aside from implementation details, it could be seen as a legitimate approach to satisfying a recycling-happy public while keeping expenses (and taxes) low. For example, my town has single-stream recycling, but I have no idea how much of what I put in my recycling bins gets diverted into a landfill. What if it's 70%, and nobody in the city or "recycling" company fesses up; how would I know?
it could make sense to tax the businesses that produce the most recyclables to offset cost, but this will just make them throw it in the trash.
Frankly if you taxed everyone according to the weight of their trash, it would be a simpler way to reduce waste, but it could lead to folk burning trash or dumping. This question is a pickle.
this article may be of interest. http://www.mdpi.com/2079-9276/4/2/384/htm
@flummingbird Thanks. Another, more popular-oriented article: The Recycling Game Is Rigged Against You.
A recent article in Scientific American describes research showing that making recycling free but charging residents per bin of rubbish increases recycling rates by 8% on average compared to municipalities where both types are charged the same.
@LShaver Thanks, but I'm expect that just increases the amount of material put into recycling bins; it has nothing to do with how much of that material actually gets recycled in the end.
The research article itself is behind a paywall but given comments in the article I read it seems like the researchers looked at that. Also, just saw that you are in Boston -- the research involved 245 municipalities in Massachusetts.
What the best recycling policy is is a political question. As you mentioned in your question there are many aspects to this and all good recycling policies will address issues such as cost, environmental impact, human health impact, end-user awareness, convenience of using the system, compliance, amount of generated waste, worker safety and more. Which aspect is considered to be most important is often reflected in national or local waste processing laws. Besides politics, local circumstances such as market prices of recycled materials and the availability of processing facilities and landfill areas can have a big influence.
Europe
I'm not very familiar with recycling policies outside of Europe, but many European countries put emphasis on achieving high recycling rates.
If you look at this blog post for example you can see that several European countries (and South Korea) score high on municipal recycling rates. Part of the reason why European countries score this high has to do with the various EU directives on waste processing and environmental protection. EU directive 2004/12/EC for example says that:
Recovery and recycling of packaging waste should be further increased to reduce its environmental impact.
Implementation of this directive is however left up to each EU member state and consequently there are big differences between EU countries. Countries like Germany, Slovenia and Austria score 55%+, where as others such as Slovak Republic, Greece and Portugal score 11 - 26%
Source: page 50 of the OECD document 'Environment at a Glance 2015
To achieve high recycling rates, multiple waste streams are a necessity, because mixing different waste streams causes contamination of the materials and reduces their recyclability. However, keep in mind that recycling rates can be misleading. They are difficult to compare because of the different definitions of (municipal) waste and the different methods of calculation. Also municipal waste (which I assume your question is primarily about), is only a small fraction of all generated waste. Roughly about 10% of all waste is municipal waste, but it does account for about 1/3 of the costs.
In terms of waste policy EU directive 2008/98/EC is interesting which says:
The first objective of any waste policy should be to minimise the negative effects of the generation and management of waste on human health and the environment. Waste policy should also aim at reducing the use of resources, and favour the practical application of the waste hierarchy.
The waste hierarchy describes the methods to process waste from most favourable (e.g. reuse) to least favourable (e.g. dumping in landfills). What you often see is that countries over time slowly work their way up the hierarchy and start using more 'advanced' waste processing methods. For example, the EU recently proposed to ban single-use plastics, which is a measure that belongs at the top of the waste hierarchy (prevention and minimization).
The Netherlands
As an implementation example, let's look at The Netherlands (because I live there and because it does fairly well). The Netherlands has a recycling rate of 51%. In Europe only Austria, Germany and Belgium are doing better in terms of recycling rates (based on page 13 in this document)
All cities here separately collect wastes types such as paper and cardboard, kitchen and garden wastes, glass, PET bottles, clothing, and batteries. Some cities also have a separate stream for what they call PMD which stands for Plastics, Metals and Drink cartons (e.g. Tetrapak) and claim that separating these waste types by end users is the most effective method of recycling. Other cities disagree with this and claim that separating these 3 waste types from a 'general other' waste stream is more effective. So far I haven't seen a final conclusion in this debate.
This week the Secretary of State who is responsible for waste management announced that The Netherlands has to transform into a full circular economy (news article in Dutch). This goal should be achieved by 2050. One of the major steps towards this is to make improvements in product design; all products should become repairable, reusable and/or recyclable by design, and may not contain any harmful substances.
New Zealand's position on the chart, and your statement that recycling rates can be misleading makes me hopeful that perhaps the average kiwi just produces so little waste that recycling isn't cost effective, and there's just a small back lot landfill somewhere that won't be full for another 100 years. Is there a per capita or per GPD version of this chart?
I'm afraid the situation in New Zealand looks rather bad, at least at first glance. This website puts New Zealand at place 10 of the countries who produce the most waste per capita (3.68 kg/capita per day). Also a quick search on "New Zealand recycling rate" returns this article as first hit: "Kiwis are rubbish at recycling, report finds"
Interesting that 11 of the top 13 from that list are islands. Sounds like a good topic for another question...
|
STACK_EXCHANGE
|
When you receive your credit card statement (or bank statement if you used a debit card), you should see a line for the purchase expressed in the format
11,123 JPY @ 0.0094443 : $105.05. It really doesn't matter what rate your bank uses, they're going to collect from you in USD, thus your expense was incurred in USD.
In other words, you did not pay anybody in JPY. Your bank paid the merchant and then subtracted from your balance (or added to your debt) however much USD they felt was fair in exchange for the amount of JPY they just gave to the merchant.
Alternatively, if you had previously converted a sum of money into JPY banknotes or you have a JPY bank account that you fund periodically and later on paid for something using those funds/cash, that's when the exchange rate becomes more important.
It's important because the value of the JPY you exchanged for goods/services may have changed since the time you purchased it*, and we want to know its value at the time you spent the money, not the time you bought it. For that situation, the IRS says:
Translating foreign currency into U.S. dollars
You must express the
amounts you report on your U.S. tax return in U.S. dollars. Therefore,
you must translate foreign currency into U.S. dollars if you receive
income or pay expenses in a foreign currency. In general, use the
exchange rate prevailing (i.e., the spot rate) when you receive, pay
or accrue the item.
A spot rate is used when you aren't converting cash from one currency to another (because you already have the right one to settle), but you want to know what it would have cost you if you did need to buy the foreign currency at that moment in time to settle the transaction. The spot rate is determined by the demand for each currency based on the current bid and ask amounts on foreign exchange markets. There are plenty of websites you can reference to find out exactly how much of one currency you would have needed to buy an exact amount of another currency at any particular point in time.
Since there are multiple websites, whose spot rate should you use? Well, the IRS says:
Currency exchange rates
The Internal Revenue Service has no official
exchange rate. Generally, it accepts any posted exchange rate that is
When valuing currency of a foreign country that uses multiple exchange
rates, use the rate that applies to your specific facts and
So as long as you are consistent and don't flounder around a bunch of different sources looking for the "best" rate for each individual transaction, they don't mind, and neither should you.
Since they tell you to use the rate that applies to your specific facts and circumstances, it would make sense to use the spot rate provided by whomever sold you the JPY you are using for the transaction (if published/available).
*If you really want to do things by the book, you should also be reporting any change in the value of foreign currencies purchases as a capital gain/loss, but that's a whole different kettle of fish and beyond the scope of this question.
|
OPCFW_CODE
|
Nonlinear mapping (NLM) :
Nonlinear mapping (NLM) is a mathematical technique that involves the use of nonlinear functions to transform input data into output data. This technique is often used in machine learning, image processing, and other areas where the relationships between input and output data are complex and not easily represented by linear functions.
One example of NLM is the use of a neural network, which is a type of machine learning algorithm that uses multiple layers of nonlinear functions to process input data and make predictions. In a neural network, input data is fed through multiple layers of processing units, or “neurons,” which apply nonlinear functions to the data to extract features and make predictions. For example, a neural network might be used to classify images of animals based on various features, such as size, shape, and color. In this case, the nonlinear functions applied by the neurons would allow the neural network to identify patterns in the data that might not be immediately apparent to a human observer.
Another example of NLM is the use of nonlinear filters in image processing. Nonlinear filters are used to enhance or modify the appearance of an image by applying nonlinear functions to the pixel values. One common type of nonlinear filter is the median filter, which replaces the value of each pixel with the median value of the pixels in its neighborhood. This filter is often used to remove noise or other unwanted artifacts from an image.
There are many different types of nonlinear functions that can be used in NLM, and the choice of function depends on the specific application and the characteristics of the data being processed. Some common types of nonlinear functions include sigmoidal functions, which have an “S” shaped curve; ReLU (Rectified Linear Unit) functions, which are used in neural networks to introduce nonlinearity; and polynomial functions, which are used to model complex relationships between variables.
One key advantage of NLM is that it allows for the modeling of complex relationships between input and output data. In contrast to linear models, which can only represent linear relationships, nonlinear models can capture more complex patterns and interactions in the data. This can be particularly useful in areas such as machine learning, where the relationships between input and output data are often highly nonlinear and difficult to predict using linear models.
Another advantage of NLM is that it can provide more accurate predictions and classifications than linear models in many cases. For example, in a machine learning task, a nonlinear model might be able to identify patterns in the data that a linear model would miss, leading to more accurate predictions. Similarly, in image processing, nonlinear filters can often produce more aesthetically pleasing results than linear filters.
There are also some challenges and limitations to using NLM. One potential drawback is that nonlinear models can be more difficult to interpret than linear models, as the relationships between input and output data are often more complex and harder to understand. Additionally, nonlinear models can be more sensitive to noise and other types of interference, as the nonlinear functions used to process the data can amplify small differences or variations in the input data. Finally, nonlinear models can require more computing power and time to train and evaluate, as the nonlinear functions used in the model are often more complex and require more processing resources.
Overall, NLM is a powerful and widely-used technique that allows for the modeling of complex relationships between input and output data. Its ability to capture and represent complex patterns and interactions in the data makes it an important tool in many fields, including machine learning, image processing, and other areas where the relationships between input and output data are complex and nonlinear. So, NLM plays a vital role in the fields of data analysis and machine learning.
|
OPCFW_CODE
|
The LAN Turtle is a covert Systems Administration and Pentesting tool providing stealth remote access, network intelligence gathering, and MiTM (man-in-the-middle) surveillance capabilities through a simple graphic shell.
- LAN Turtle Classic
- LAN Turtle SD (Storage expansion via Micro SD, capture packets, exfiltrate data)
- LAN Turtle 3G (Unclocked world-band 3G modem, out-of-band remote access)
LAN Turtle [specs, design, features]
The LAN Turtle is packed with features for remote access, MITM and network recon, it can act like a simple and handy USB ethernet adapter (if you turn off all modules), but it also allows you to run surveillance operations from anywhere and interact with the device.
- Atheros AR9331 SoC at 400 MHz MIPS
- 16 MB Onboard Flash
- 64 MB DDR2 RAM
- 10/100 Ethernet Port
- USB Ethernet Port – Realtek RTL8152
- Indicator LED (Green Power, Amber Status)
- Button (inside case for Factory Reset / Firmware Recovery)
- Pivot with a persistent Meterpreter session in Metasploit.
- Scan the network using nmap.
- DNS Spoof clients to phishing sites.
- Exfiltrate data via SSHFS.
- MiTM inline computers capturing browser traffic
- Access to the entire LAN through a site-to-site VPN with the LAN Turtle OpenVPN client acting as gateway.
- Automate a management script with the results sent every hour by email
- Write code on the openwrt-based Linux platform for any inline Ethernet application.
- Maintain access to your home network from anywhere using a persistent reverse SSH shell.
LAN Turtle comes with numerous downloadable modules/scripts, but you can also create your own (Bash, Python, PHP, etc.). See/download full list of available LAN Turtle modules.
How it works?
Hacking with LAN Turtle
- LAN Turtle is small device and it can be secretly installed on a target computer to poison DNS, providing possible phishing endpoints. Victims wont even notice it, especially coworkers, girlfriends, and other non tech-savvy people.
- Once slotted into a machine, it will act like a network card, which allows the machine to connect to the network. But the LAN Turtle will also gain internet and network access.
- You can install it between a target machine and LAN to intercept and log web traffic.
- You can do some serious damage, meaning that if you sat this device between target network cable and computer,cthey are in serious trouble.
- With numerous modules created for the LAN Turtle you can simply sniff (listen) all the data that are sent over the network.
- After you do some setup and connect it back to your server, you will be able to access target’s network and steal usernames/passwords from a locked PC without even needing a network connection.
- You can configure and plug in a LAN Turtle into a locked but logged in machine and it will store user hashes onto the Turtle in no time.
- You can also connect it to your Android smartphone/tablet using a USB OTG cable for use on-the-go with an Android SSH client.
- There are a lot of documentation and setup videos/tutorials offered on LAN Turtle wiki, so you will easily understand all the options and setup steps. The documentation is easy to follow, so you don’t have to worry. Just follow the steps.
- and much more…
If you are a pentester or sysadmin, this tool can become very handy for you guys. It cost around $50, so we can say that it’s cheap and very useful, both for pentesting/hacking and education. Since it comes with numerous useful tools, you should give it a try. Once you plug it into a target machine, you can access it from anywhere using your cloud server. Just imagine what you can do then… :]
|
OPCFW_CODE
|
Apply regex between two words
I need confirm if exists one \s01\s between one part of my text, so i kind need a delimiter.
I have this huge text:
...
RESUMO DO FECHAMENTO - EMPRESA MODALIDADE : "BRANCO"-RECOLHIMENTO AO FGTS E DECLARAÇÃO À PREVIDÊNCIA<PHONE_NUMBER>39<PHONE_NUMBER>02<PHONE_NUMBER>51<PHONE_NUMBER>15 Nº ARQUIVO: NmDA0FH71Ig0000-3 Nº DE CONTROLE: BdmBPppCuyu0000-1 INSCRIÇÃO: 57.692.055/0001-27 COMP: 11/2010 COD REC:115 COD GPS: 2100 FPAS: 612 OUTRAS ENT: 3139 SIMPLES: 1 RAT: 3.0 FAP: 1.57 RAT AJUSTADO: 4.71 TOMADOR/OBRA: INSCRIÇÃO: LOGRADOURO: AVENIDA ALEXANDRE COLARES 500 3 ANDAR BAIRRO: VILA JAGUARA CNAE PREPONDERANTE: 4930202 CIDADE: SAO PAULO UF: SP CEP: 05106-000 CNAE: 4930202 CAT QUANT REMUNERAÇÃO SEM 13º REMUNERAÇÃO 13º BASE CÁL PREV SOC BASE CÁL 13º PREV SOC 07 2 1.100,35 429,09 1.100,35 0,00
...
And in this particularly piece i need confirm if exists 01 and 07, but if the 01 doesn't exist, the regex is trying to catch in other part of the text, as you can see here: http://regexr.com/3d03m
How could i make the the regex work only between this two words? Is it possible?
Regex: (?: RESUMO DO FECHAMENTO - EMPRESA MODALIDADE : "BRANCO")(.*? 01 )(?:.*?(?=TOTAIS:))
It's not clear to me, what text are you trying to capture? What is your desired output?
Could you give an example with a smaller sample? Can't quite grasp what you want.
In my text, i'm trying to confirm if a 01 and 07 exists between the words RESUMO DO FECHAMENTO - EMPRESA MODALIDADE : "BRANCO" and the first TOTAL:, the problem is, i have other TOTAl in the text, so if the 01 doesn't exist between this words, the regex will try to match with the next TOTAL.
So, the text in the demo should not be matched at all? Try to replace all .*? with (?:(?!TOTAIS:).)*. This is not the best solution though, an unrolled version is preferable (.*? --> [^T]*(?:T(?!OTAIS:)[^T]*)*).
Yes! Now is working, could you explain to me, please? And.. it is a unrolled version??
And post as answer, please @WiktorStribiżew
The problem you are having is that .*? - though called "lazy" or "reluctant" - still tries to match as many characters as it can to return a valid match. As . matches any character but a newline, it matches your leading multicharacter delimiter (and trailing, too).
If you had 1 char delimiters, like [ or ], you would use a negated character class [^\]\[]* instead of .*?. Here, you may use a tempered greedy token:
(?:(?!TOTAIS:).)*
See the regex demo
To support multiline text, . must be replaced with [\s\S].
However, this solution is rather resource consuming as we basically check each position, if it starts the sequence of TOTAIS:, we stop matching. A more efficient approach is to unroll this token, say, as:
[^T]*(?:T(?!OTAIS:)[^T]*)*
See another regex demo
This version matches across newlines, too. It matches 0* characters other than T and then 0* sequences of a T that is not followed with OTAIS: followed with 0* occurrences of characters other than T. However, it cannot check if TOTAIS is a whole word.
I am busy right now, but if anything is unclear, please drop a comment, I will answer in half an hour.
Thanks, that's perfect!
What is a tempered greedy token? :B.
Here is some explanation of the tempered greedy token. It is based on a dot matching with a negative lookahead that prevents overflowing the delimiters.
I don't understand what you are trying to do.. sorry..
But based on what your title "apply regex between two words,"
I assume, if "01" and "07" comes, you want to put comma between.
If that's the case, then it will be: (Perl)
s/(01)\s+(07)/\1,\2\3/g;
i want confirm if exists between the words RESUMO DO FECHAMENTO - EMPRESA MODALIDADE : "BRANCO" and TOTAL like i said @Gon, not in the whole text.
|
STACK_EXCHANGE
|
08-15-2021 04:35 PM
I have an R610 access point that will ends up in a reboot loop (its NOT an R710, as it thinks..) the moment that its given configuration (or so I suspect).
The odd bits are that the device seems to have written, at some point, that its an R710 instead of the R610 that it is.
At login, the "motd" shows the following message with a completely zero'd serial number:
Please login: super password : Copyright(C) 2021 Ruckus Wireless, Inc. All Rights Reserved. ** Ruckus R710 Unleashed AP: 000000000000
This ap is running the absolute latest version of 200.10 - 188.8.131.52.246:
rkscli: get version Ruckus R710 Multimedia Hotzone Wireless AP Version: 184.108.40.206.246 OK
The boarddata seems.. very wrong:
rkscli: get boarddata name: R710 magic: 35333131 cksum: c8f rev: 5.4 Serial#: 000000000000 Customer ID: 4bss Model: R710 V54 Board Type: GD50 V54 Board Class: QCA_ARM Random#: 0000 0000 0000 0000 0000 0000 0000 0000 symimgs: yes ethport: 2 V54 MAC Address Pool: yes, size 16, base 6C:AA:B3:3D:59:10 major: 258 minor: 1 pciId: 0000 wlan0: yes 6C:AA:B3:3D:59:18 wlan1: yes 6C:AA:B3:3D:59:1C eth0: yes 6C:AA:B3:3D:59:13 eth1: yes 6C:AA:B3:3D:59:14 eth2: - 6C:AA:B3:3D:59:15 eth3: - 6C:AA:B3:3D:59:16 eth4: - 6C:AA:B3:3D:59:17 eth5: - 00:00:00:00:00:00 uart0: yes sysled: no, gpio 0 sysled2: no, gpio 0 sysled3: no, gpio 0 sysled4: no, gpio 0 Fixed Ctry Code: no Antenna Info: yes, value 0x00000000 usb installed: no Local Bus: disabled factory: yes, gpio 8 serclk: internal cpufreq: calculated 0 Hz sysfreq: calculated 0 Hz memcap: disabled watchdg: enabled FIPS SKU: no FIPS MODE: disabled OK
I can't find a way to correct this. There's little to no difference in the command outputs I've seen from both the userspace commands (using uart to access the console) nor in the u-boot bootloader environment.
In the kernel messages during boot, there seems to be an issue with the system retrieving the required data:
[ 0.084163] read_version: Error in QFPROM read (-95, 0) [ 0.084175] [ 0.084175] Version Read Failed with error -95pinctrl_dt_to_map: Choosing R610/R710 pinmux
Is there any hope to fix the data read in by the system? If it was able to be inadvertently corrupted..could I "corrupt" it back? Does this speak to malfunctioning hardware?
09-13-2021 01:23 PM
If this was purchased by an authorized Ruckus reseller, please contact them or directly open a support case with Ruckus support.
They can access the AP and correct the serial number, but only after verifying the authenticity of device and its owner. If you have more than 1 AP in such condition, all such APs will be replaced.
10-30-2021 08:06 PM
How strange, I have the exact same problem with a r510 I purchased. The boarddata it reverts back to is that of an r710.
|
OPCFW_CODE
|
RiceVarMap is a database that contains 17,397,026 variations (including 14,541,446 SNPs and 2,855,580 small INDELs ) of 4,726 cultivars all over the world.
The database provides various variation query functions (like 'Search for Variation by Region', 'Search for Variation in Gene' and 'Search for Genotype With Variation ID' et. al. ) and many useful analysis tools (like primer design tools, 'Design Primer by Region', 'Design Primer by Variation ID' and 'Haplotype Network Analysis' et. al.). All the variations are annotated by snpEff, CooVar, and ployphen2. The GWAS results and chromatin accessibility data from ATAC-seq are also used to curate the variation effects, all the significantly associated variations are saved in the database. Moreover, the query results are available for users to download through our website.
You can visit this version of RiceVarMap by entering its address http://ricevarmap.ncpgr.cn/v2. Background of data collection, processing and evaluation could be found in Notes and Data Evaluation page.
You can click this link, specify a chromosome and input a range (required), and also you can filter Variations by major allele frequencies (optional). For results display, you can select only output SNP or INDEL and populations major allele frequencies in this region.
The results show like this:
You can click this link, enter one MSU Osal Rice Loci (e.g. LOC_Os01g01070), and if you want to search for variations in the upstream or downstream region, you can enter the upstream or downstream distance between the gene (e.g. upstream: 0.5kb, downstream 0.2kb, optional).
First, you can see a figure of variations distribution in this region, it can be zoom in or download.
And the results table is also can be sorted and searched.
Firstly, you should collect variation IDs you need (you can find that in "search for Variations by region" page), and select "Variation Information" and click "Search for Genotype with Variation ID" (the link). Search and select cultivar name, click ">" (only add selected cultivars) or click ">>" (add all cultivars), enter variation IDs (e.g. vg0500004123,vg0500073814,vg0500152745,vg0500208833) , you can select only get the raw genotype (from the VCF file), only get imputed genotype (Imputed by an LD-KNN algorithm) or output imputed and raw genotype, and then click submit to get results.
In the result table, each column represents one variation and chromosome, position, variation type (SNP?INDEL), reference allele, major allele and minor allele information are also listed out. The allele in red color means it is a minor allele, background color in green means it is the raw genotype (not imputed).
The results are two tables, the first is the cultivars information table, including cultivar name, ID name in the database, subpopulation information, and location information. The second table contains the genotype of two selected cultivars.
In this link, you just need input one variation ID and can get the result.
Results include the Flanking Sequence (20bp) around the variation, allele frequencies of each population. variation effect and the GWAS results of the variation.
In this link, cultivars can be selected and the results will output the cultivars location image and the detailed information table.
By using this link, first, you should select one phenotype name, and then choose one population. Up to now, we collected 14 agronomic traits and three populations (All, Indica All and Japonica All).
For each phenotype, we first draw one phenotype distribution histogram plot.
And you can download phenotype information table for each cultivar:
If the GWAS data available, the LMM model results will be plotted ( p values <= 1e-5 and in top 20000):
The Significant Candidate Loci information can be downloaded also.
'Design Primer by Variation ID' is intended for researchers to pick PCR-primers to validate SNPs/INDELs or develop molecular markers, 'Design Primer by Region' is intended for researchers to pick PCR-primers to amplify genomic regions avoiding to overlap with known SNPs/INDELs. They all Primer3 as a backend engine.
In result, the primer positions and variation position are marked out.
Haplotype network is frequently used in population genetic analysis, you can input selected variation IDs (must be SNP, INDELs will be filtered out) and select one population classification for haplotype analysis. and users can download CSV format and SVG file for further analysis (the page link).
You can just input one variation ID or the chromosome position or convert the variation ID on this page.
|
OPCFW_CODE
|
This is my talk, feel free to leave me a message.
Just Cause Wiki
I've managed to get a test version of the site working at http://justcause.grandtheftwiki.com
- This uses GTW usernames and passwords, and being logged in on one will log you into the other (like Wikia)
- Only you and me have staff access there. You can promote users to admin, but make sure you know who they are - as not every user will have the same username as they did on Wikia.
- All pages have been copied from Wikia, including user pages and file pages, but NOT the actual users and images.
- Like GTW, no user accounts have been copied from Wikia. People will need to follow the instructions as per GTW.
- At first. the only edits which count for users will be ones they make after they create their account. If a user wants to regain their full edit count, I can run a script to do this.
- Users are advised to make a new account with the SAME username. If they wish to change their username, it needs to go through me. I will not do this for many users, only a handful, as it requires me manually changing the record for all their historical edits.
- Most images have NOT been copied across. You will need to do this yourself using jc:Special:Import
- One other problem is with the <gallery> tag - you need to add File: to the start of every image name, unlike on Wikia. Sorry, but that has to be done manually.
- This uses the SAME codebase and the SAME extensions as GTW. If you want different extensions, that has to go through me.
- No, you can't have FTP access
- You can use interwiki links to get to and from GTW, like gtw:Pagename and jc:Pagename
- The domain name is temporary. If you want a proper one, someone's going to have to pay $10 for it.
- Don't break it. If there are any problems, or you are unsure, ask me!
Enjoy - Gboyers 00:25, 10 November 2010 (UTC)
Just wanted to point out that I'm here too now. --GMRE 20:12, 10 November 2010 (UTC)
The engine has fuel and the wheels are greesed but the train has no name.
I've been contemplating on how to raise enough money for the domain name, the wiki size is too small to hold a fundraiser, and i'm quite sure you(Gboyers) and your wiki's users don't want to pay.
I want to donate but i don't use credit cards...
This is quite a boggle.
Either than that everything is running good, but i have noticed some of the editing text doesnt work here, what is that?
This is what i mean <mainpage-endcolumn /> <mainpage-rightcolumn-start /> 300px
<bloglist summary="true" summarylength=200 timestamp="true" count=3> <title>Blogs</title> <type>bloglist</type> <order>date</order> <category>Blog posts</category> </bloglist>
thats on the JC2 portal page, ideas?
For the logo, do i just upload it as wiki.png?
Also how do i edit the background? is it the same as wikia? --Kronos890989 21:47, 10 November 2010 (UTC)
|
OPCFW_CODE
|
VAD (Vagrant-Ansible-Docker) stack for Ubuntu and Apache
I'm having trouble establishing a Ubuntu-LAMP environment with continuous integration - I feel lost from the different solutions out there and time and again I fear that my vanilla-Bash Ubuntu-LAMP establishment-program of four different scripts (aimed for maximally-self-managed hosting platforms like DigitalOcean or Linode) will quickly become outdated:
That some or all of the entire system getting vulnerable/unsupported and then I'll have to create another environment with a newer operating system with newer server environment (web/email) and moving manually all web applications and their data to this new environment, which is hard and consuming when I work alone maintaining my own personal web applications.
VAD (Vagrant-Ansible-Docker)
From all my reading so far I get the impression that a VAD stack (Vagrant-Ansible-Docker) is the only way for me to avoid the problematic state I described above (if I want a VPS environment and not just shared-server hosting platform):
Release updates and upgrades for my OS (Ubuntu 16.04 to 18.04 to 20.04 - to whatever version; and ufw but without changing my ufw directives like ufw --force enable && ufw allow 22,25,80,443).
Updates and upgrades for all my packages (Apache 2.4-3.4 and so forth; unattended-upgrades curl wget zip unzip mysql php php-{cli,curl,mbstring,mcrypt,gd} python-certbot-apache ssmtp, Composer).
Docker images will help me automate creation of bare-metal web applications that I would then change credentially to create new web applications.
This way, for example, Ubuntu will go from 16.04 to 18.04 directly and the Apache package will go from 2 to 3 and all my Apache virtual hosts for Apache 2.4 virtual-host files will automatically transduce into 3.x.x format.
This sounds like a sweet dream with the only disadvantage of performance (I'm not sure a 5$ or even 20$ cloud-partition could handle such stack).
My question
Is my description accurate and if so - what is the common solution that combines these three that I should use (assuming there is some combo which is an industry-standard)?
On the top of such VAD solution, I'll execute much less vanilla-Bash directives (about 25 lines instead 150-200 lines) which will be much easier for me to maintain myself, at least by means of package-management.
just in regards to your edit. There are modules for many things in ansible, so it acts as an abstraction for some parts of the system. In regards to ufw, you would just define your wanted state and Ansible would bring it to that state (if necessary) https://docs.ansible.com/ansible/2.7/modules/ufw_module.html
I think your research is leading you in the right direction, but I cannot see the value of Docker in here.
I found managing LAMP environments with Ansible completely unproblematic and the scripts I have used for 16.04 only needed a single line changes for upgrade to 18.04. I have also used them on a local development environment and there is no need to have the additional layer and security can be established easily with standard Linux permissions.
I can see the use of Docker, if you need to run the same vhost twice with different configurations on the same bare metal.
With Vagrant, this is a good tool to manage a different technology stack on the same bare metal, for example upgrading to MySQL 8 when at the same time you need to have MySQL 5.7 running on your dev machine at all times for regressions. I have not had this need so far, but can imagine situations where it is useful.
|
STACK_EXCHANGE
|