Git: Finding the two nearest commits of two repositories?
I have a collection of repositories that were derived from one another. Instead of creating a new branch for each major revision, a new repo B was created with the last state of the old one A.
I'm trying to bring these poor things back together, but to do so I need to find out which commit in A was the initial commit of B taken from.
How can I do that?
Edit: note that there is no certainty that B's initial commit contains source identical to one of A's commits. For example, a Makefile or something may have been changed to accommodate the new name of the project.
Well, I expect that the source tree of the initial commit in B is completely the same as in some commit(s) in A. In this case you can solve your problem relatively easily:
First look at the tree field of the first commit in B: git log --pretty=raw <B_initial_commit> and remember the SHA-1 of the tree.
Now you can do git log --pretty=raw in the A repository and try to locate this SHA-1. If you find such a commit, then bingo! You've got the required commit (actually there could be a few of them, but it doesn't matter, since all these commits have exactly the same source tree; just pick any).
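A minimal sketch of that lookup (the angle-bracketed values are placeholders for your own commit and tree IDs):

# in repository B: print the SHA-1 of the initial commit's tree
git rev-parse <B_initial_commit>^{tree}

# in repository A: list every commit together with its tree SHA-1 and search for a match
git log --all --format='%H %T' | grep <tree_sha_from_B>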
The situation, however, could be slightly worse. Those who "cloned" that B repository could omit some of the files (e.g. .gitignore or other files, which don't make a source tree different for a human, but make it different for a robot).
Edit: here's a possible solution for that case. The idea is to understand what changes were made to the initial commit of B when B was forked from A, and to make a script that would revert such changes. Or, if the changes are too complex to be easily reverted, simply remove the affected files from both B and A and then compare the resulting source trees again with the method described above.
So suppose you make such a script, which e.g. removes (some of) the Makefiles and performs the other possible changes on B, and another corresponding script for A. Then you use git filter-branch with a tree-filter and thus create two new repositories from A and B; let's call them A' and B'. Then, as described above, you may try to locate a tree within A' with the same SHA-1 as the SHA-1 of the tree of the initial commit in B'. Since commits in A and A' correspond one to one, you then have the solution.
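For example, if the only change were a top-level Makefile, a sketch of such a rewrite (this rewrites history, so run it on throwaway clones of A and B) could be:

git filter-branch --tree-filter 'rm -f Makefile' -- --all

Running the same filter on clones of both repositories yields A' and B', whose trees can then be compared by SHA-1 as before.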
Right, the files in the initial commit of B varied slightly, for example by changing the project name in some Makefiles. So I'll have to compare the tree objects.
One thing that helps is I can limit my search to commits in A only up to the timestamp of B's initial commit, and a bit earlier (like a day or two).
|
STACK_EXCHANGE
|
doesn't work properly, cannot compile
I'm trying to use gemacs on Windows. Unfortunately, some key bindings do not work.
C-<space> does not work, and when I try C-x C-v, the C-v performs the OS "paste" command on the window.
I'd like to look into the code to see if I can fix it even though I am a total Golang beginner.
But I can't even compile it with Go on Windows (I have Go 1.21.1).
Perhaps over the past five years it has become incompatible with current Go compilers?
I can build it with go1.18.1, but not run it (Windows 10):
gemacs.go:759 panic: character set not supported.
I regularly run standard emacs under cygwin. I would recommend that rather than gemacs, which is kind of just a curiosity at this point.
https://www.cygwin.com/
Garrett may have fixes for the encoding issue -- for anyone who really does want to get this working.
https://github.com/gdamore/tcell/issues/583
I don't have capacity atm.
I know and have standard emacs.
I was looking at gemacs while searching for a very small/lightweight standalone editor that I can easily take with me onto almost every system (Windows, Linux ...) where I have to work as an admin.
Ah. Good use case.
I updated the vendored copy of Garrett's tbox. I still get the panic when running gemacs in a Cygwin window, but it actually did work in a regular CMD.exe Windows terminal.
This is building under go1.18.2
You could also try cross-compiling with a windows target?
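For example (a sketch; assuming a 64-bit Windows target and building from the repo root):

GOOS=windows GOARCH=amd64 go build -o gemacs.exe .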
Here's a link to the go1.18.2 compiled binary on my google drive. Let me know once you've got it so I can delete (or if you don't want it :)
https://drive.google.com/file/d/1vT9gYsFiSWChP9b7cjCLR0-eyMEQYIbS/view?usp=sharing
I installed go1.21.4 on my windows box, and it did build gemacs at tip.
See the "make release" target in the Makefile for cross compilation tips.
After setting GO111MODULE=auto things started working.
I could now build gemacs. It behaves the same as the official release executable, e.g. C-<space> does not work on Windows.
Let's see if I can find out why and fix it. I'm not sure, given my current track record.
hmm. The control key works for me on windows under a cmd.exe console. Is that what you are trying? what windoze version are you on?
Ah, I see, C-<space> specifically. I can confirm it does not work for me either, on Windows. It does work on macOS and Linux.
You may want to consult Garrett for his thoughts on why. He knows 1000x more than I do about terminals and such. (He wrote github.com/gdamore/tcell ).
Will do so. Actually, I think it will not be easy to fix, if at all, as long as pure Go is used. So I thought about adding some other key binding in gemacs as an alternative to C-<space>, because that is a really essential function.
In which module are the key bindings set? I haven't found that yet ...
Don't know offhand; but since regular emacs can use Ctrl-Space and gemacs Ctrl-Space works on Linux/Darwin, it must be a Windows setting.
I assume that recognizing such keys will require a different way of reading keystrokes than the pure Go libraries use on Windows. But let's see what Garrett responds with.
why would it be different?
I would recommend that you go and play with tcell on windows so you have some context.
By the by, @kai-uwe-rommel I've made the latest version v1.1.1 module compatible. You should be able to build now, if you are still interested in doing so. I don't have a ton of time to maintain gemacs though, so alternative solutions like https://micro-editor.github.io/ might be preferred.
|
GITHUB_ARCHIVE
|
Computational biology, the main focus of my research, is interdisciplinary in nature and lies at the intersection of computing, life sciences, and statistics. My goal, in this realm, is to bridge the gap across these traditional disciplines, to bring computational innovations to wet labs and, eventually, to clinical practice in order to make a difference in peoples’ lives.
While much of the recent work in the computational biology community has focused on an organism-level understanding of genes, proteins, and their interactions, the overarching theme of my work has been on specializing these datasets to tissue, cell type, and pathology-specific contexts. Scaling from models for a single "canonical" cell to models that can handle assays of billions of potentially distinct interacting cells in different states of disease progression or maturation can easily overwhelm current computational paradigms and challenge statistical models. My work builds on, and significantly advances, areas of probabilistic modeling, machine learning, and complex network analysis. The scale and scope of the emerging data violate many of the underlying assumptions in traditional machine learning techniques, including statistical independence, underlying distributions, and required sampling rates. New techniques must account for strong correlations, heavily skewed distributions, and significant undersampling, while supporting well-characterized notions of statistical significance, correlations, and causality.
The focus of my recent work is to develop efficient computational tools coupled with rigorous statistical models to:
Identify cell types and tumor subclasses from single-cell gene expression profiles, in order to dissect the heterogeneity of the tumor microenvironment. Single-cell transcriptomic data has the potential to radically redefine our view of cell type identity. Cells that were previously believed to be homogeneous are now clearly distinguishable in terms of their expression phenotype. Developing methods to automatically identify de novo cell types and their functional identity aids in prioritizing combination therapies to simultaneously target different tumor subclasses, including rare subpopulations.
Deconvolve the expression profiles of individual cell types from tumor biopsies, to gain a mechanistic understanding of tumor-immune interactions. Biological samples are typically heterogeneous in terms of their constituent cell types. Dynamic changes in the relative cell type proportions, under different conditions, can highlight the underlying biological response and are indicative of the progression of disorders ranging from neurodegenerative diseases to cancers. Deconvolving the relative fractions of the constituent cell types in a given complex mixture has immense diagnostic, prognostic, and pharmaceutical applications.
Construct robust tissue/cell type-specific networks, with the goal of uncovering pathways that are involved in drug resistance in targeted therapies. The majority of human proteins do not work in isolation but take part in pathways, complexes, and other functional modules. These physical interactions are commonly modeled using an undirected network, also referred to as the interactome. However, this global network does not provide any information regarding the spatiotemporal context in which the interactions occur. Computational techniques to construct a reliable model of active interactions in a cell can potentially unlock pathways responsible for the unique susceptibility of cell types/tissues to pathologies and therapeutic agents.
In the short term, I plan to build on my current efforts and extend them along multiple dimensions. In terms of single-cell analysis, I am interested in developing methods to identify complex relationships among cell types, infer an ordering among them, and establish a "history" of changes between cells. I am also interested in developing new techniques that utilize a structural prior on the relationship between the cell types to significantly scale the deconvolution problem. Finally, I aim to combine single-cell analysis with deconvolution. Single-cell transcriptomics can provide a reference panel, which can be used to perform supervised deconvolution. On the other hand, deconvolution techniques can estimate the underlying fractions of cell types in the mixture, which can be incorporated into the single-cell analysis to correct for sampling biases, among other confounding factors.
In terms of application, I plan to expand the scope of my work in two ways. First, I will collaborate closely with experimentalists to validate my findings and cross-examine them. Second, since most of the problems I am working on are motivated by clinical applications, I am keenly interested in testing my methods on real data from patients.
My long-term vision is to build a comprehensive set of methods and models to extend the traditional computational paradigm along spatial, temporal, and pathological dimensions. These methods have a direct impact on dissecting the heterogeneity of the tumor microenvironment, knowledge of which significantly impacts the effectiveness of targeted and immuno-therapies. These challenges can only be tackled through a large-scale collaborative effort that brings scientists across different disciplines together to ask the right questions and seek the right answers. During my Ph.D., I initiated lasting collaborations with colleagues across different institutions and disciplines. In the future, I plan to expand these collaborations to establish a multi-institutional, interdisciplinary effort.
|
OPCFW_CODE
|
Internet Explorer 8: Review
When I was prompted to install IE8 I didn't hesitate a second. After all, if there had been a significant disadvantage to using the new version surely, I thought, someone would have made a noise in advance. Likewise I wasn't expecting any great improvement on IE7. It's just that I like to keep up-to-date when it comes to applications I use frequently, like my browser.
If things pan out as expected, the browser will become the main application for everything from word processing to spreadsheets. Cloud apps are on the horizon, as the success of Facebook attests.
I have tried Chrome and loved it. Not only does it seem faster, but the simplicity and uncluttered interface appealed to me. Unfortunately, Chrome is not supported by my bank, so it's a no-go option.
But IE8 takes a leaf out of Chrome's instruction manual, in the form of favourites that can be added to the favourites bar. This was an element of Chrome that genuinely attracted me. It means you can place handy links to frequently visited sites right in front of you. No more browsing through a menu to access favourites.
Unlike in Chrome, however, you cannot rely entirely on the favicon to identify a site. IE8 does not allow you to delete all text associated with the favicon, as Chrome does. This means you cannot fit as many sites in the favourites bar as you can with Chrome, where text is not required.
Another favourites bar item in IE8 is the 'web slice'. I installed a web slice at smh.com.au and use it infrequently. Having favourites in the favourites bar is far more useful. Web slices are slow: you need to wait while a slice refreshes if you haven't used it recently.
The whole point of web slices is that you can see some (underscore 'some') content from sliced pages in a menu thingy in your favourites bar, at the same time you are viewing another web page. This means you don't have to leave the current page to see content from a page that supports web slices. However, if you have that page set up as your home page or if a link to it is installed via the favourites bar, there's no great advantage here.
When you open a new tab in IE8 a pictorial Google menu appears. This is quite nice but, again, most of the pages you want to visit are already set up in your favourites bar. Nevertheless, this is quite a decent feature. It helps to remind you where you've been, and you may take the opportunity to revisit a page.
But the menu seems to slow down the process of opening a new tab. I've timed the lag and it can be as long as 20 seconds. That's a long time to wait for a new tab to open.
IE8 also offers a range of 'add-ons' which are small tools that integrate your browser in a variety of ways with popular websites. However, I haven't seen the point in these so I don't use them at all. I mean, if I want to post a story to LinkedIn I'll just go to that site and post it. I don't need an 'accelerator' to perform all the occasional actions that might occur to me.
|
OPCFW_CODE
|
Changing the default boot option in the boot menu
How do I change the boot options so that I can boot from USB?
If your BIOS doesn't let you, then you don't.
More information is required... What make and model PC? Can you not access the one-time boot menu? Can you not change the boot priority (Boot Option #1), etc?
I can change the options - e.g. Boot Option #1 - but which one is the correct one?
It is the following notebook: MEDION® AKOYA® E4254
http://www.computerbild.de/artikel/cb-Tests-Notebooks-Netbooks-Aldi-Notebook-Medion-Akoya-E4254-im-Test-21762503.html
https://www.pc-magazin.de/news/medion-akoya-e4254-test-aldi-laptop-schnaeppchen-check-lohnt-sich-3199491.html
Hello - just click the links above - then you'll see more info and details. Thank you in advance.
Note: Keep in mind you'll be navigating and adjusting settings with your keyboard only. This model of UEFI doesn't have mouse support.
Switching the boot priority
First, for simplicity, you may just want to disable Fast Boot with the option shown in the screenshot. Many guides recommend this step when booting from a USB drive (depending on your version of Windows). Use the arrow keys to go up and select Fast Boot, then hit Enter and choose Disabled if available.
Next, go up to Boot Option #1 and hit Enter. You'll now want to select any option that has "USB" in the title. Then hit F10 to save and quit. Since you may have multiple USB ports, you might need to repeat this step a couple of times, booting up, entering your firmware, and selecting another USB option if the first one didn't boot from the right place.
Tweaking additional options in case no USB devices will boot
Along with disabling fast boot, you may need to change additional options to be able to load your USB drive.
Legacy USB Support: This option may be found in the Main section shown above.
Legacy boot mode (instead of UEFI): You may also need to find an option called Boot Mode and switch it to "CSM" or "Legacy" instead of "UEFI". This may also be in the Main section, but have a look through all the sections for this since every version of UEFI can be quite different.
This video may help you to feel more comfortable viewing and changing the various options necessary to boot from USB. It shows a user adjusting the first of these two options.
|
STACK_EXCHANGE
|
So the question is, “Which coding language should you choose when starting to learn to code without any prior experience?”
Python is a popular choice for learning to code due to its ease of use and versatility. It is widely used in fields such as data analysis, scientific computing, and machine learning. However, it may not be as commonly used for end-user projects within companies.
If you had to choose between using a cutting-edge smartphone, like the iPhone 14, or a phone from 1995, which would you choose? Clearly, the iPhone 14 would be the better choice, because it has been designed with the latest technology and advancements, making it much more efficient, reliable, and user-friendly.
It may not be an apt comparison to match a programming language with a mobile phone, so let's compare it with another well-known programming language, C.
The C programming language was developed in 1972. Although C is still considered an important language, it is not often chosen today unless you need to write system or embedded software. This is because C is an older language that lacks certain modern features and makes it easier to make mistakes.
Dart is the primary language used for the development of Flutter, which is also developed by Google. Flutter is a UI toolkit for building high-quality, natively compiled applications for mobile, web, and desktop from a single codebase.
Today, Dart is considered a growing and important language in the development community. It has a large and active community of developers who contribute to its development and maintenance, and it is supported by a range of development tools and resources. Its integration with the Flutter framework has made it a popular choice among developers looking to build high-quality, cross-platform apps.
HTML (Hypertext Markup Language) is a markup language used to structure content on the web. It is used to define the structure and content of a web page, including the text, images, links, and other elements.
CSS (Cascading Style Sheets) is a stylesheet language used to describe the look and formatting of a web page. It is used to define the visual style of a web page, such as the font, color, and layout of elements.
However, if you choose to learn Dart, you only need to learn Flutter on top of it. In fact, Flutter is just a Dart library, so learning Flutter is essentially learning more Dart.
In Flutter, the visual components of an app are defined using a widget-based framework, and the layout and style of these widgets are specified using Dart code. This eliminates the need for separate style sheets or markup languages, as everything can be defined in one language and in one place.
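As a minimal sketch (assuming the standard Material library; the text and styling here are arbitrary examples), a complete Flutter app whose structure, layout, and style all live in one Dart file might look like this:

import 'package:flutter/material.dart';

void main() => runApp(const MyApp());

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    // Structure (widgets), layout (Center) and style (TextStyle)
    // are all plain Dart - no separate markup or stylesheet files.
    return MaterialApp(
      home: Scaffold(
        body: Center(
          child: Text(
            'Hello, Flutter!',
            style: TextStyle(fontSize: 24, color: Colors.blue),
          ),
        ),
      ),
    );
  }
}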
Having everything defined in one language in one place can simplify the learning process and reduce the overall learning curve for beginners. By having everything in one place, beginners can focus on a single language and set of tools, rather than having to learn multiple languages and stylesheets.
Furthermore, this also means that there is a lower barrier to entry for beginners, as they only need to focus on a single language, rather than having to learn multiple languages and stylesheets in order to get started. This can also make it easier for beginners to experiment and build prototypes, as they only need to work with one language.
With Dart, developers can write code for a wide range of platforms, including web, mobile, desktop, and server-side, using the same language. This eliminates the need for context switching between languages and can help to improve productivity.
The cross-platform capabilities of Dart in Flutter also provide an added benefit in terms of code reuse and maintenance. By writing code in one language, developers can more easily reuse code across different platforms, reducing the amount of time and effort required to maintain separate codebases for different platforms.
Overall, the use of Dart in Flutter provides a powerful and versatile language for developing cross-platform apps that can help to improve productivity and reduce the amount of time and effort required to develop high-quality apps.
This is because Dart is a statically-typed, object-oriented language, with a syntax that is similar to other popular languages like Java and C#. It also provides many of the modern features found in other languages, such as async/await, generics, and more.
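For instance, here is a small illustrative sketch (the function names are invented for the example) of Dart's async/await and generics:

// async/await: a hypothetical asynchronous helper
Future<String> fetchGreeting() async => 'Hello from Dart';

// generics: works for a list of any element type T
T firstItem<T>(List<T> items) => items.first;

void main() async {
  print(await fetchGreeting()); // prints: Hello from Dart
  print(firstItem(<int>[1, 2, 3])); // prints: 1
}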
In addition, Dart and Flutter are designed to be easy to learn and use, with a focus on simplicity and readability. This makes it easier for new developers to get started and quickly become productive.
So, if you have a strong foundation in Dart and Flutter, you should find it relatively easy to pick up other languages, as many of the concepts and syntax will be familiar to you. This can be a great way to build your skills as a developer and expand your knowledge of different programming languages.
|
OPCFW_CODE
|
Assertion error in isQuantize subroutine
Describe the bug
Quantize::isQuantize method is called from the very beginning of Quantize::parse method via self.shorty.
Quantize::parse does not set self.status.parsed before jumping to Quantize::isQuantize.
Thus, the assertion inside Quantize::isQuantize fails.
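A stripped-down illustration of the ordering problem (hypothetical minimal classes, not the library's actual code):

class Status:
    def __init__(self):
        self.parsed = False

class Quantize:
    def __init__(self):
        self.status = Status()

    @property
    def isQuantize(self):
        assert self.status.parsed  # fails: parse() has not finished yet
        return True

    @property
    def shorty(self):
        # building the debug message touches isQuantize...
        return 'QUANTIZE' if self.isQuantize else 'DEQUANTIZE'

    def parse(self):
        print('Parsing %s...' % self.shorty)  # ...so this line raises
        self.status.parsed = True

Quantize().parse()  # AssertionError, mirroring the reported traceback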
To Reproduce
Try to convert the following model: https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/prediction/1
Detailed steps to reproduce:
git clone https://github.com/jackwish/tflite2onnx
cd tflite2onnx
python3 -m virtualenv /tmp/onnx-env
source /tmp/onnx-env/bin/activate
python3 -m pip install -r requirements.txt
python3 setup.py install
wget https://storage.googleapis.com/tfhub-lite-models/google/lite-model/magenta/arbitrary-image-stylization-v1-256/fp16/prediction/1.tflite
tflite2onnx ./1.tflite /tmp/out.onnx
Full log:
$ tflite2onnx ./1.tflite /tmp/out.onnx
2020-12-14 20:06:08,918 D [tflite2onnx][convert.py:37] tflite: ./1.tflite
2020-12-14 20:06:08,918 D [tflite2onnx][convert.py:38] onnx: /tmp/out.onnx
2020-12-14 20:06:08,924 D [tflite2onnx][model.py:21] Parsing the Model...
2020-12-14 20:06:08,925 D [tflite2onnx][graph.py:58] Parsing the Graph...
2020-12-14 20:06:08,925 D [tflite2onnx][graph.py:61] Parsing operator: 0
Traceback (most recent call last):
File "/tmp/onnx-env/bin/tflite2onnx", line 14, in <module>
load_entry_point('tflite2onnx==0.3.0', 'console_scripts', 'tflite2onnx')()
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/convert.py", line 55, in cmd_convert
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/convert.py", line 44, in convert
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/model.py", line 39, in convert
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/model.py", line 31, in parse
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/graph.py", line 63, in parse
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/op/quantize.py", line 33, in parse
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/op/common.py", line 122, in shorty
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/op/quantize.py", line 25, in type
File "/tmp/onnx-env/lib64/python3.7/site-packages/tflite2onnx-0.3.0-py3.7.egg/tflite2onnx/op/quantize.py", line 29, in isQuantize
AssertionError
Additional context
Possible workaround:
diff --git a/tflite2onnx/op/quantize.py b/tflite2onnx/op/quantize.py
index ced5559..8c67f6f 100644
--- a/tflite2onnx/op/quantize.py
+++ b/tflite2onnx/op/quantize.py
@@ -26,8 +26,11 @@ class Quantize(Operator):
@property
def isQuantize(self):
- assert(self.status.parsed)
- return self.inputs[0].dtype is TensorProto.FLOAT
+ if self.status.parsed:
+ return self.inputs[0].dtype is TensorProto.FLOAT
+ else:
+ opcode = self.model.OperatorCodes(self.tflite.OpcodeIndex()).BuiltinCode()
+ return opcode is tflite.BuiltinOperator.QUANTIZE
def parse(self):
logger.debug("Parsing %s...", self.shorty)
Hi @briangrifiin, thank you for reporting this issue! This was not caught because we have not tested tflite2onnx with TFLite models containing Quantize or Dequantize while the debug log is enabled (the debug log is probably why no such issue was reported before, as I have seen many people using models with a similar pattern).
The fix looks good to me. If it works for your model, would you like to create a PR? That would be very helpful for people using a similar model pattern. @briangrifiin
|
GITHUB_ARCHIVE
|
Please use the form below to submit a story to the Dot.
What is suitable content for KDE.News?
We accept articles that are of general interest to the KDE community, including the users of KDE software. Therefore we are interested in content about:
- Feature releases of KDE applications. We do not generally accept stories on development or bugfix releases (subject to the exceptions described below). Do not assume that "everybody knows this app", as the Dot's targeted readership is much wider than, for example, Planet KDE's. Including screenshots and screencasts also makes your software more tangible to users.
- Community events
- Mentions of KDE on other news sites or popular media
- Community news
- Topical feature articles on KDE software, websites or communities - how-tos on getting involved with the KDE community, reviews of recently released software (describe the features; don't include too much personal opinion)
What is not generally suitable content?
Anything that is not directly related to the KDE community or its software and is not likely to be of general interest to our readers - there are plenty of other sites covering these types of news. Examples of things that would not normally be suitable are stories about:
- Linux distribution releases, unless they introduce a significant new KDE technology (such as the Kubuntu technical preview of Plasma Netbook)
- Development or bugfix releases of KDE software, with the following exceptions:
- The software compilation (devel and maintenance)
- First new platform ports (devel)
- Really exciting new features (devel)
- Significant new application (devel)
- "Maintenance" releases which DO include new features (e.g. Amarok, sometimes)
- Advertising or a press release for your company or organization. However, if your organization is partnering with KDE on a particular topic that would be of interest to our readers we are quite happy to run an article that acknowledges your organization's contribution (for example, recent articles about Nokia collaboration with KOffice and Appliki's collaboration with the Oxygen team)
What if I'm not sure?
You can send an email to [email protected] with details of your article idea and we can let you know whether we think it would be suitable.
Do I have to ask before submitting an article?
No, if you have an article that you think is suitable please feel free to submit it directly. The Dot editors team will then take care of reviewing and publishing it.
How do I include pictures?
Ideally, put them online somewhere and provide link locations at the top of the article text. We will then upload them to the Dot. If this is not possible, provide a brief description in the article text and one of the editors will contact you to have them emailed privately.
How long should an article be?
It depends. Simple news topics can be very brief - application releases can sometimes be only a couple of paragraphs with a link to the application team's website for more information. Similarly, we sometimes use brief articles of a paragraph or two to summarize a news story on another site and then provide a link. Features will be longer; we generally prefer articles to be under 2000 words. If you submit an article that is too long or too short, we may either edit it or ask you to submit a shorter or longer version.
What if I've heard a good story but don't want to write it myself?
Send an email to [email protected] with brief details and links to further information. Then one of the promotion team members may pick it up and write the article, or help you do so.
|
OPCFW_CODE
|
Manage compiler dependencies with shards
The compiler has two external dependencies vendored in at lib/: reply and markd.
To keep these synced with upstream, we use git subtree. Or at least that was the plan; it was never set up for lib/reply.
And while it's not terribly complex to setup and maintain, git subtree adds some overhead and it's not very straightforward to understand what's going on.
In the Crystal ecosystem we actually have a great tool for managing dependencies: shards!
Now of course the compiler should not require shards for building, to avoid circular dependencies. But there's no need for it to. If we continue to keep the lib sources vendored in the repository, shards is only used for updating the sources. That would be a simple shards update then. Compare that with the commands for git subtree, which I'm far from being able to recall off the top of my head. Of course, that could be automated. But the point stands that shards is just so much easier to use. And you have the full feature set available to check which versions are installed, check whether they're outdated, etc.
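For example, the day-to-day commands would just be the usual shards subcommands (output details vary by shards version):

shards update    # refresh lib/ to the versions allowed by shard.yml
shards list      # show which dependency versions are currently installed
shards outdated  # check whether newer releases are available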
We should expect compiler dependencies to be published as Crystal shards, so this should work with any future additions.
So here's what I'm proposing:
Put a shard.yml into this repository with declared dependencies of reply and markd at the currently installed versions.
Run shards install and there should be no difference in the lib/ folder except a new lib/.shards.info file (which will also be checked in like the rest of lib/, and shard.lock as well)
This is a huge :+1: from me. I guess I was too early when I proposed that 5 years ago.
Let's treat the compiler as a regular crystal application already.
Trying to do this with the current declared versions in the respective shard.ymls:
name: Crystal
version: 1.12.0-dev
authors:
- Crystal Team <EMAIL_ADDRESS>
dependencies:
markd:
github: icyleaf/markd
version: "0.4.2"
reply:
github: I3oris/reply
version: "0.3.0"
license: Apache-2.0
It looks like neither dependency matches exactly.
To avoid breaking existing code, we might as well update those dependencies to include at least these patches:
markd: https://github.com/icyleaf/markd/commit/53203b6407508a17ff51fcbb3795edef667b6e59 (removes an incorrect Crystal::VERSION check)
reply: https://github.com/I3oris/reply/commit/7f87b054bf592d8aa2918b1314e6b78802324ec6 (fixes ioctl's signature on Android, although apparently Solaris does this too)
The currently installed version of markd is exactly https://github.com/icyleaf/markd/commit/5e5a75d13bfdc615f04cc7ab166ee279b3b996d3
reply is a bit weird as it does not match any commit in the main branch exactly. But https://github.com/I3oris/reply/commit/90a7eb5a76048884d5d56bf6b9369f1e67fdbcd7 is functionally equivalent with no relevant changes.
@ysbaddaden I suppose the main selling point would've been that we can just keep lib/ checked in.
I'm stubborn, maybe we'll drop the lib folder in another 5 years :joy:
More seriously, both dependencies are external to crystal-lang so we'll probably want to ensure that they're vendored in.
Yeah, I've recently thought about whether checking in lib shouldn't be a general recommendation for any shard.
There's basically no real downside except adding a bit more code to the repository. And that's not a lot of storage: it's just a snapshot of the dependency, not its entire history. It allows you to work on the code without needing to deal with shards except for updating dependencies. If a repo is unreachable, you still have all the code you need to build.
|
GITHUB_ARCHIVE
|
Note that if template.jnt is not being used as the default template file, open whatever file is being used instead. 2. This is the option you generally select if you are computing error bars for a grouped bar chart from raw data.
The equation is added to your current Regression Library. Run the RGBCOLOR transform function: finally, you can run a user-defined transform to place colors into a worksheet. SigmaPlot Version 13 currently supports line widths from a worksheet column. You can also use PowerPoint graphs that have inserted TIFF files.
What serial number should I use? Windows 7 & Windows 8 users: log in to your computer as an Administrator. FAQ: How do I display more than 25 legend entries? If you enter an existing menu name, the command will be appended to the existing menu. 4.
Bring the worksheet into focus, and enter color(s) into a worksheet column. Open the Graph Properties dialog box to the Symbols panel. 2. At the dialog prompt for Installation or Uninstallation, proceed with the uninstallation option and just remove SigmaPlot from the list of software on your client computer.
FAQ: How do I use histograms and variable x spacing? The one thing to keep in mind is that you must have at least two plots created for your graph in order to add another axis. To hide axis lines, you can simply press the Del key to hide selected lines. Open the Regression Wizard (F5), and open the equation you want to use as the starting point for your own equation. 2.
SigmaPlot 8.0 allows you to simply uncheck or check a legend entry from Graph Properties to hide or show it. Formatting a graph for PowerPoint: there are basically three things to keep in mind when preparing SigmaPlot graphs for PowerPoint: turn off page color, and change or turn off your graph background. One option is to simply convert the number to a text cell, thereby removing the data point from both graphing and computation. 1.
If you do not want these to be rescaled when you scale a graph, you must turn off the "Graph objects resize with graph" option in the Options dialog box. FAQ: https://systatsoftware.com/products/sigmaplot/sigmaplot-faqs/ The control for page color can be found in the Page Setup dialog box, under the Page Layout tab. Copy this block of cells and paste it anywhere within this same Excel file - the pasted data is erroneous. Change the symbol color or other attributes to distinguish the data.
FAQ: How do I interpolate values for regression curves? Create the graph as usual. To hide these lines, you need to open the Graph Properties dialog box, select the 3D View tab, and uncheck all the Frame lines. You can also use the Data panel of the Plot options in Graph Properties to select a region and/or sample of data to plot.
Your command will now appear on the SigmaPlot menu. FAQ: How do I save a notebook to an earlier version of SigmaPlot? Change your axis scale in the Graph Properties dialog box Axis tab, Scale panel, to Linear. To turn on backup files, choose the Tools menu Options command, select the System tab, and check the Backup files option. FAQ: How do I preserve attributes of hidden objects?
Automatic saving of the previous file as a backup is off by default for performance reasons. Right-click the 2nd plot, and choose the Add New Axis… command. 3. I've searched for a solution and noted lots of diverse suggestions.
I have tried several methods but none seemed to have worked. The file must be executed from a DOS command prompt. You're not done yet! To place a macro into a menu: 1.
Reconverting the result (see worksheet) to Date&Time after the regression should display the regressed line. FAQ: How do I copy a block of data associated with a formula? Move to another location in the worksheet and paste the data. 4. FAQ: How do I display a regressed line on top of an existing graph with date and time data?
During installation, I'm getting an internal error 1904. Click the graph(s) you want to change to select them. 2.
Set the Tick intervals to 2. 5. If your data repetitions are arranged along rows rather than down columns, select one of the row-wise options. If you want to enter a time for the next day, you need to set the date to 1/2/4713. For example, the following data are entered. When the Function dialog box opens, click the Add As… button. 3.
For other features displayed by this graph, see "create ticks between bars" and "hide a single legend entry". FAQ: How do I sample data points/display only some symbols? Now, copy the same data associated with this same macro or formula in the Excel file. This converts the number to text characters; you can tell this if the alignment of the cell changes to be left aligned. 4.
|
OPCFW_CODE
|
Fix racc warnings when using rails
Closes https://github.com/tenderlove/aarch64/issues/398
Test steps
Create a new rails app within the same directory as an aarch64 clone:
rails new test_aarch64
cd test_aarch64
echo 'gem "aarch64", path: "../aarch64"' >> Gemfile
bundle
bundle exec rails console
require 'aarch64/parser'
Before
Warnings:
$ rails new test_aarch64
$ cd test_aarch64
$ echo 'gem "aarch64"' >> Gemfile
$ bundle
$ bundle exec rails console
Loading development environment (Rails 7.0.8)
3.0.5 :001 > require 'aarch64/parser'
racc/parser.rb:188: warning: already initialized constant Racc::Parser::Racc_Runtime_Version
/Users/user/.rvm/gems/ruby-3.0.5/gems/racc-1.7.3/lib/racc/parser.rb:190: warning: previous definition of Racc_Runtime_Version was here
racc/parser.rb:189: warning: already initialized constant Racc::Parser::Racc_Runtime_Core_Version_R
/Users/user/.rvm/gems/ruby-3.0.5/gems/racc-1.7.3/lib/racc/parser.rb:191: warning: previous definition of Racc_Runtime_Core_Version_R was here
racc/parser.rb:207: warning: already initialized constant Racc::Parser::Racc_Main_Parsing_Routine
/Users/user/.rvm/gems/ruby-3.0.5/gems/racc-1.7.3/lib/racc/parser.rb:209: warning: previous definition of Racc_Main_Parsing_Routine was here
racc/parser.rb:208: warning: already initialized constant Racc::Parser::Racc_YY_Parse_Method
/Users/user/.rvm/gems/ruby-3.0.5/gems/racc-1.7.3/lib/racc/parser.rb:210: warning: previous definition of Racc_YY_Parse_Method was here
racc/parser.rb:209: warning: already initialized constant Racc::Parser::Racc_Runtime_Core_Version
/Users/user/.rvm/gems/ruby-3.0.5/gems/racc-1.7.3/lib/racc/parser.rb:211: warning: previous definition of Racc_Runtime_Core_Version was here
racc/parser.rb:210: warning: already initialized constant Racc::Parser::Racc_Runtime_Type
/Users/user/.rvm/gems/ruby-3.0.5/gems/racc-1.7.3/lib/racc/parser.rb:212: warning: previous definition of Racc_Runtime_Type was here
=> true
After
With these code changes, no warnings:
$ bundle exec rails console
Loading development environment (Rails 7.0.8)
3.0.5 :001 > require "aarch64/parser"
=> true
3.0.5 :002 > parser = AArch64::Parser.new
=> #<AArch64::Parser:0x0000000139c59358>
3.0.5 :003"> asm = parser.parse <<~eoasm
3.0.5 :004"> movz x0, 0xCAFE
3.0.5 :005"> movk x0, 0xF00D, lsl #16
3.0.5 :006"> ret
3.0.5 :007 > eoasm
=>
#<AArch64::Assembler:0x000000013cece220
...
3.0.5 :008 >
Feedback applied, thanks! :+1:
|
GITHUB_ARCHIVE
|
Dr. Jerry Breger is distinguished professor emeritus of economics at the University of South Carolina. He retired in 1993 after holding faculty positions at several universities. During his years at USC, he taught management and economics courses and served as director of the Bureau of Urban and Regional Affairs and Director of the Center for Economic Education.
The following work is copyright © 2010. All rights reserved. No distribution or reprinting in any form whatsoever without written permission from the author.
Monty Steinhart and Hypnosis
It's all true. Everything that follows is true. Of course, you'd expect that, but once you've read about Monty Steinhart and hypnosis, you'll most likely have serious doubts that it happened like I said it did, or you'll think that I took a little bit of truth and embellished it beyond exaggeration. To begin, I don't ever remember meeting Monty for the first time or getting to know him. He was just there and we were friends. The year was 1944 and he and his mother and sister had come to Miami from London to escape the war. Monty was about six months older than I and very sophisticated. He had a pleasant English accent and he was nattily attired -- hardly your typical American adolescent. His interests were different too and I found them fascinating. He didn't care for sports or school activities or the social milieu. He liked girls, but not girls our age -- grown women. He was a good student, but his favorite books were novels and best sellers -- the racier the better. And while we were learning blackjack and poker he was playing bridge. But for all his sophistication, he was a good friend and never made me feel ill at ease.
Occasionally, Monty talked about hypnosis and post-hypnotic suggestion. I sloughed it off as more of Monty's world I could not understand. Then one day, Monty brought another boy about our age whom I had never seen before to lunch. His name was Jay Robinson. His parents were in Miami for the winter and that's why he was in school with us. He seemed to be a nice enough kid, but I never got to know him. He became Monty's shadow. One day, Monty asked me to come to his house after school. He was going to hypnotize Jay. That afternoon, we all met at Monty's place and went into Monty's room. Monty told Jay he was going to hypnotize him and asked him to lie down on the bed, close his eyes, and see a white frame on a black background and think of nothing else. Then Monty spoke softly to Jay telling him to relax and concentrate on the frame. In about five minutes Monty asked Jay if he could hear his voice. Jay said "yes" and Monty told him to hear his voice and no other voice. Monty nodded to me and I spoke to Jay, but he didn't answer. Monty said he was hypnotized and what followed was amazing. Monty asked Jay how he felt and Jay replied that he felt fine. Monty then told Jay he was coming down with a bad cold and Jay began to sneeze and his nose ran and his eyes were sore. I don't know if he had a fever. I didn't touch him, but he had the genuine symptoms of a cold. Then Monty told him he was getting well and his symptoms disappeared. Monty also told Jay to call his house at 7 p.m. and ask if I was still there. He then ordered Jay to wake and the session was over. The next time I saw Monty's mother a few days later she told me Jay had called and asked if I was still there and she didn't understand what that was all about. Our next session with Jay took place two weeks later and it was dramatic. Monty went through the same procedure with Jay to hypnotize him. I remember that I was uncomfortable with all this, but Jay was totally submissive and in a few minutes he was under. The first thing Monty did was to ask Jay what day it was and Jay told him it was Wednesday. Monty told him he was going to bring him into Thursday on the count of five, and he did that. He asked Jay what the news headlines were that Thursday and without hesitation Jay spoke clearly about military actions in Europe and the bombing of Leipzig. That was shocking to me since Jay had never said anything about the war at all. Next, Monty told Jay the years were passing and now he was thirty-five. He asked Jay what kind of work he was doing. Again, without hesitation, Jay said he was a songwriter. He and his partner were well known -- the team of Steinhart and Robinson. At that point, my curiosity got the best of me and I asked, "What's your latest hit?" Jay said, "Cabin in the Sky." That was vaguely familiar, but I couldn't place it. So I said, "Can you sing it for me?" And Jay began to sing; words and melody totally new to me and very good. I was amazed. Monty then told Jay he was growing old, very old. Jay's expression changed; he looked sullen and his voice was low and halting. Monty asked him how old he was and Jay said, "ninety-one." And then, as if it were an encore, Monty said that he was going to take this ninety-one-year-old man back to babyhood. Monty told Jay he was getting younger and younger and soon he would be nine months old. Jay responded accordingly. His facial expression changed from a dour frown to a warm smile and he made baby sounds in a high-pitched voice.
He was full of joy and he reached for Monty apparently to pick him up. Instead, Monty told Jay that on the count of ten he would awaken and would not remember any of this. Jay awoke as expected, said he had a headache, and he was going home. It was a long day and I felt sorry for Jay.
Over the next few weeks I saw Monty and Jay at school a number of times, but there were no home sessions. They had become inseparable. I thought the hypnosis affair was over, but was I ever wrong. One day at physical education, Coach Livermore lined the boys up in groups of five or six for races -- 100 yard dash. Monty did not participate, but surprisingly Jay did. In street clothes and dress shoes he ran his race and finished third. I had never seen him run before and it was an unnatural run I saw that day. Coach Livermore started to congratulate Jay, but he launched into an angry rant about being required to take physical education, which added nothing to his knowledge and wasted his time. That was grossly out of character for Jay Robinson. I tried to talk to Jay -- tried to tell him to apologize to Coach Livermore. But he was hostile and said he expected me to be on his side. I realized then that I was not talking to Jay Robinson, but to a look-alike I did not know. That experience upset me. Something was very wrong. Later on, when I talked to Monty about it, he said that he had given Jay a post-hypnotic suggestion to be a tough guy on command. However, Jay began to assume that role erratically without Monty's command and Monty could not always bring him out of it. A couple of days later, I met Monty and Jay for lunch and Jay seemed to be his mild-mannered normal self, but when the waitress brought his lunch he said it looked rotten and cursed her. She brought another plate and he took it with a gruff and sarcastic thanks. At the cash register, he said he was short-changed, but Monty assured him he was not and he moved on. Apparently, he had assumed the character of the tough guy and played the role for real. Another day, we were all walking down the street and talking about school when Jay's voice changed and he pronounced a violent threat against someone who had cheated him, and that went on for about ten minutes. Jay was virtually out of control; he roared with anger. When he was Jay again, he didn't remember a moment of it. Incidents like this continued for several weeks, and then one day Jay disappeared. I heard he had withdrawn from school, but that didn't make any sense since final exams were only a week away. I asked Monty about it and at first he denied knowing anything, but eventually he told me that Jay was not only behaving erratically at school, but even more so at home, and he appeared to have lost his memory. He told his parents about the hypnosis and post-hypnotic suggestion, and they were furious with Monty. He was examined by a psychiatrist who told the family to take Jay away from the people and places involved in his hypnosis so that it would no longer influence him. I continued to be friends with Monty, but with far less ardor than before. At the end of the school year, Monty and his family went back to London. It was safe then.
|
OPCFW_CODE
|
In a surprising disclosure, Jensen Huang, the CEO of Nvidia, a key player in the tech industry, has ignited a considerable amount of debate by suggesting that individuals should avoid learning coding. This unexpected statement has drawn attention across the tech community, sparking discussions on the future of employment in the age of artificial intelligence (AI). Let’s explore the intricacies of this statement and the potential consequences it might have on the job market, unraveling the story in five key points.
Point 1: The Unconventional Advice
Jensen Huang’s counsel to steer clear of learning coding, a skill traditionally considered essential in the tech industry, has left many puzzled. The CEO argues that as AI continues to progress, coding might become less critical for individuals entering the job market. This challenges the common belief that coding proficiency is a crucial gateway to numerous opportunities in the technology sector.
Point 2: The Rise of AI and Automation
At the heart of Huang’s statement lies the acknowledgment of the rapid advancement of AI and automation technologies. The argument implies that as these technologies advance, certain routine coding tasks could be automated, potentially diminishing the demand for human coders in those specific domains. This prompts contemplation on the changing nature of work in a world increasingly shaped by intelligent machines.
Point 3: Shifting Job Landscape
The CEO’s cautious advice underscores broader concerns regarding the transformation of the job market. While AI and automation bring forth efficiency and innovation, they also raise questions about the displacement of traditional roles. The employment landscape is expected to undergo significant changes, with some jobs becoming obsolete and others emerging to address the evolving needs of the tech-driven economy.
Point 4: The Human Element in Tech
Contrary to the potential job displacement, some argue that the human element remains irreplaceable in the tech industry. Creativity, problem-solving skills, and a profound understanding of human needs are aspects that AI may struggle to replicate entirely. Thus, while routine coding tasks may be automated, the demand for individuals with a comprehensive understanding of both technology and human dynamics could be on the rise.
Point 5: Navigating the Future
As individuals, educational institutions, and policymakers grapple with the implications of Huang’s advice, the key lies in adaptability. Embracing a mindset of continuous learning and staying attuned to the evolving demands of the job market is crucial. Rather than abandoning coding skills entirely, individuals might consider diversifying their skill set, incorporating aspects like critical thinking, problem-solving, and creativity.
The Verdict: Balancing Act for the Future
In conclusion, Jensen Huang’s unconventional advice not to learn coding serves as a stark reminder of the dynamic nature of the tech industry. While the advancement of AI and automation presents challenges, it also opens doors to new possibilities. The future of work will likely require a delicate balance – leveraging the capabilities of technology while cherishing and enhancing the unique skills that make us distinctly human. As we navigate this ever-changing landscape, staying informed, adaptable, and innovative will be the keys to success in the era of AI.
|
OPCFW_CODE
|
TehGhodTrole wrote: It is optional. You can optionally minimise it, or even optionally ignore it.
Respectfully: I'm not sure whether you were attempting to be funny, failed to read my entire post, or simply didn't understand it. <SHRUGS> One of the problems with trying to use this Internet critter as a method of communication is that I cannot see facial expressions or hear tones of voice. I also cannot tell whether someone will look at a short post of only 5,000 - 10,000 characters (or even less) and decide to only skim it "because it is so big".
I'm going to guess here and assume that you just didn't understand it (if I am incorrect, please forgive). Some computers cannot successfully complete Mint 14's installation process unless the user disables - as in, removes - the slideshow that runs during that installation process.
I know this because after trying repeatedly to install it on my desktop computer (from multiple verified-burn discs made from md5-verified source .ISO), I did some searching and found a thread here on the Mint forum that discussed the issue and gave instructions for disabling/uninstalling the slideshow.
Which, aside from a little frustration, was fine in my particular case. I had previously installed Mint 14 on my laptop, so I knew what a nice OS Mint is. And I am used to things that work for "everyone" and having to search for a solution and then implement it so that it will actually work on my hardware. So it was NBD. But for someone who is trying Mint for the first time - or, perhaps, trying linux for the first time - the minor frustration, combined with possibly not knowing that the issue is easily dealt with, could cause that prospective user to not try Mint. Making the slideshow optional, along with adding a statement explaining that, if Mint fails to install successfully, the user should try again after selecting "Disable Installation Slideshow" (because allowing it to run causes an install failure on some computers), seems like a good way to ensure that those prospective new (Mint or linux) users actually manage to complete the installation process.
And, yes, I realize that one can "try out" Mint via its live/install media. But it's not the same - there's no persistence on optical media and possibly not on USB, either (I am unsure), and the experience is a lot slower than with an installed Mint OS.
Again, I just thought it would make it that much more likely that someone would try Mint out. I have nothing against the slideshow, and do not mind its inclusion - other than the fact that it causes the installation of the OS to pooch on some computers, lol, which would seem to be of importance to those who are affected. Other than that, the slideshow or something like it is probably a good thing. When everything works correctly, it provides somewhat of an introduction and reassures any users who don't think to look at their drive activity light that, no, their computer hasn't locked up. And while I have no idea as to the actual number of affected computers, I would guess that they are in the minority. So I would not suggest that Clem get rid of it altogether - merely make it optional somehow, either through an easy-to-set (and easy-to-understand) option or via some method of autodetection. I would assume that the former would be easier to implement than the latter, but that is just a guess.
|
OPCFW_CODE
|
Within our company we use Drush and Drush alias files a lot. Recently I wrote a company blog post (in Dutch) about the workflow we’ve set-up and this post is its English translation. For those of you not familiar with Drush, I’ll start with a short introduction. If you are already familiar with Drush and Drush alias files, you can skip to the interesting part.
What is Drush?
A lot of Drupal developers use Drush (Drupal Shell) to speed up their daily processes. After you've installed Drush locally, you can run tasks on the command line which you would normally run via the interface. Here are some often-used commands:
- drush cc all: clear all Drupal caches
- drush fra: revert all features
- drush upwd Baris –password=”test”: change the password of user Baris to ‘test’
- drush sql-dump > dump.sql: export the current database to a file
- drush sqlc < dump.sql: import an exported database-dump
It is quite easy to create custom Drush commands for tasks that you need often in your daily work. Many contrib modules come with their own Drush implementation (for example drush search-api-index to index your site content).
What are Drush alias files?
In so-called alias files you describe the server information per website. They contain paths and usernames of all environments of a site (dev, test, staging, live). Here’s an example:
Filename: customer.aliases.drushrc.php. I’ve placed this file in my ‘aliases’ folder within my .drush folder. On Linux you can find it here: ~/.drush/aliases
$aliases['dev'] = array(
  'root' => '/Users/BarisW/Sites/company.com',
  'uri' => 'dev.company.com',
  'path-aliases' => array(
    '%dump' => '/tmp/dump-company.sql',
    '%files' => 'files',
  ),
);
$aliases['test'] = array(
  'root' => '/var/www/company-test/htdocs',
  'remote-host' => 'webserver1.company.com',
  'remote-user' => 'username-test',
  'uri' => 'test.company.com',
  'path-aliases' => array(
    '%dump' => '/tmp/dump-company.sql',
    '%files' => 'files',
  ),
);
$aliases['prod'] = array(
  'root' => '/var/www/company-prod/htdocs',
  'remote-host' => 'webserver2.company.com',
  'remote-user' => 'username-prod',
  'uri' => 'www.company.com',
  'path-aliases' => array(
    '%dump' => '/tmp/dump-company.sql',
    '%files' => 'files',
  ),
);
The main advantage of using alias files is that you can use them to run all the Drush commands on external servers. To be able to do this you need SSH access to the external server, and Drush must be installed on the external server as well. If you don't want to enter your password each time you run an external Drush command, you can also add your SSH key to the external server.
Using these alias files I can now simply run a command like this to clear the caches on the production environment:
drush @company.prod cc all
Or, to copy the production database to my local machine:
drush sql-sync @company.prod @company.dev
Ideal! Optionally, Drush sql-sync can also sanitize the data (to obscure all e-mails and passwords). This prevents developers from storing sensitive customer data on their laptops.
drush sql-sync @company.prod @company.dev --sanitize
Extremely handy, but each developer still had to enter all the settings of the various local environments in their alias files. The solution we found for this is simple and very effective: Dropbox / SparkleShare or similar.
We created a folder in Dropbox that contains all alias files. We symlinked the alias directory in ~/.drush/aliases to that folder. In this way, each team member always uses the correct data for all the project environments. You can also use these same alias files for your Continuous Integration environment (we re-use them also for our automated deployments using Jenkins).
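On Linux or macOS, the symlink itself is a one-liner; something like the following (the Dropbox path is just an example, adjust it to wherever your shared folder lives):
ln -s ~/Dropbox/drush-aliases ~/.drush/aliases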
To be able to do so we had to change one setting: instead of the ‘dev’ alias we use the names of our employees (because each employee runs his local environment somewhere else). So in my case it is now:
drush sql-sync @company.prod @company.baris
Bonus tip: alias files inheritance
The real fun starts when you start using inheritance within your alias files. For example, we use a 'localdev' alias for all local environments:
$aliases['localdev'] = array(
  'target-command-specific' => array(
    'sql-sync' => array(
      'sanitize' => TRUE,
      'confirm-sanitizations' => TRUE,
      'no-ordered-dump' => TRUE,
      'no-cache' => TRUE,
      'enable' => array(
        'devel',
        'stage_file_proxy',
        'ds_ui',
        'fields_ui',
        'views_ui',
      ),
    ),
  ),
);
$aliases['localdev'] = array(
  'parent' => '@defaults.localdev',
  'uri' => 'dev.company.com',
  'path-aliases' => array(
    '%dump' => '/tmp/company-dump.sql',
    '%files' => 'files',
  ),
);
$aliases['baris'] = array(
  'parent' => '@company.localdev',
  'root' => '/Users/BarisW/Sites/company.com',
);
$aliases['eric'] = array(
  'parent' => '@company.localdev',
  'root' => '/Users/EricM/Sites/dev.company.com',
);
This setup ensures that every sql-sync is automatically sanitized, and that a number of dev modules are enabled that are turned off on the live environment (like the devel module).
PS: to use the 'enable' command, you need to copy the sync_enable.drush.inc file from your drush installation folder to your ~/.drush folder.
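For example, assuming your Drush installation's examples folder contains the file (the path here is a placeholder):
cp /path/to/drush/examples/sync_enable.drush.inc ~/.drush/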
How do you use Drush alias files in your team? Please share your tips in the comments!
|
OPCFW_CODE
|
I like the idea
But I’m afraid to see features requests created in the wrong category (we’ll have to reclassify them properly…)
Or to get topics “I can’t create a feature request” or “how to create a feature request” or “feature request don’t work” or…
This isn’t a bad thing, I say. Some of the best feature requests already started as a general question on how to do things and after discussing the whole thing with the community, the user probably has the propper trust level and enough material to compile a proper feature request from this.
Another thing I’ve seen once is that feature requests can only be created from a discussion, kinda RFC like, but that would probably require too much moderation.
There will be a pop-up message no one’s going to read x3.
While this makes sense, I am afraid this is going to add in to the myth that devs or the community doesn’t listen or is contrarian and what not. People assume too much without trying to find any reason as to why something is done like that.
We can, however, lock it above trust level 1 so that extremely new users won't be able to make requests.
Or we can move low-effort feature requests out of the category to the lounge and just add a comment that unless they add details it won't be moved to feature requests. If the community discusses it and we subsequently understand what the request is, we can move it back.
Or we can ignore low-effort requests anyway and let them lie there without votes.
Okay, that too makes sense. I edited the category to lock out trust level 0 users; these are users who have just joined the site. If needed we can later lock out trust level 1 too. While these users are locked out from starting a topic, they can still reply to existing topics created by others who are allowed.
Unfortunately, no matter what you do, say, or give them, there are still going to be people who will never be satisfied or happy with the end result or the answer to their question. The amount of posts I see in the #develop:feature-requests category that consist of features that already exist (renamed or not), something similar, or just not the highest priority is pretty bad. I do think having a guideline in place that would have people search whether a feature has already been suggested, implemented, or deprioritized before actually making a feature request post would help cut down on the amount of duplicate posts.
Looks like an autocracy to me. At this point I'd rather quit using Krita. If you really care about it, you should moderate posts, introduce rules, or just let the community balance itself. Otherwise, it looks like you are trying to limit what people are allowed to say.
The thing is, we get more and more feature requests for things that are already implemented; people don't take time to read the docs, to search the forum, or to ask for help: they just ask for things to be implemented.
The goal is not to stop people from asking for features, but to encourage them to ask for help first, to reduce the number of feature requests that are created.
And as I said, if something is really a feature request, it could always be created by another user.
Also, getting to member level 2 is not really difficult:
Visiting at least 15 days, not sequentially
Casting at least 1 like
Receiving at least 1 like
Replying to at least 3 different topics
Entering at least 20 topics
Reading at least 100 posts
Spend a total of 60 minutes reading posts
It’s just taking time to participate a little bit in the forum life…
Understandable and clear rules which users have to agree with when they post anything or enter the forum. Not some greyed out button which doesn’t make any sense.
If you are so cool, please point out where it is written that only approved people can request new features? It is not written in your guide which you reference in the forum Developing Features — Krita Manual 5.0.0 documentation
You can say anything you like (subject to ToS and CoC) but Feature Requests are becoming a problem.
You can say anything you like too but this is not how problems should be resolved. It’s going to always end up in a bigger problem. Besides, what problem is it? To move a feature request into some general help forum? Wow, that’s really a huge problem.
I’ll tell you what. I will post a screen recording in my feature request and you tell me if that is how your beloved creation should behave. At least, there’s one bug.
|
OPCFW_CODE
|
10 Tips for Which Linux for Data Science
The unending debate over what is the best OS for data science has finally come knocking on our doors here at Runrex, and we thought it wise to address it in this blog post. As you already know, Runrex is very passionate about data science, and one of our intentions is to help our community members understand some of the best tools to use to optimize their data science expeditions.
So, which is the better OS for data science? Linux or Windows?
Linux or Windows?
As people who are obsessed with optimizing the working environment, we always try out new applications and software that make work easier and results better, so you can expect us to vouch for the OS that we found the most effective.
We have tried high-end computers running both Windows and Linux for data science, and we have always been swayed towards Linux for a number of reasons. Although we are not experts in operating systems, we objectively think that Linux is better than Windows at data science. Here is why:
The file manager in Windows is whack: unless you download a third-party application, navigating between multiple tabs is very difficult. In Linux, on the other hand, file managers provide multiple tabs and you can open two directories side by side.
Windows does not have an application or library that lets you install packages which ship without binaries, while Linux lets you install packages directly from source. Even though compiling takes some time, you don't need a third-party tool such as Rtools to provide the missing files needed to compile packages.
The other reason why Runrex prefers Linux over Windows is the fact that Linux comes with a package manager while Windows does not. Package managers are important as they take care of dependencies on system libraries when packages are installed or removed.
And there you have 3 reasons why we are very particular about the use of Linux in matters of data science. These might look like minor reasons to some of you, but as we mentioned earlier, we try to optimize the workstation as much as possible and will therefore opt for the more effective alternative.
Now that we have explained why we prefer Linux, it is time to explore which Linux distros are the best for data science. Here is a look at the top tips for choosing the best Linux distro for data science operations:
Considerations for the Best Linux Distros for Data Science
Speed - Speed is one of the most important considerations when it comes to choosing the best distro for data science. You want a distro that is very quick at processing data and providing feedback based on the data fed into the applications contained in the specific distro.
Compatibility - You also need a Linux distro that is compatible with as many applications as possible. In essence, you want a distro that does not require additional third-party applications. Research all the applications you will need in your data science ventures and look into the distro that supports most, if not all, of them.
User interface - Linux is not renowned for user interfaces, but that does not mean you should not try to get a distro that makes your work easier when crunching numbers and trying to make sense out of the data.
The size of the supportive community
The other thing that you should consider is the size of the community using a certain distro. You don’t want to use a distro which isn’t being used by a lot of people. You never know when you will need some form of help from the community and you should therefore choose a distro that has a good number of users.
Hardware support of the distro
The other thing you will need to consider, is the hardware support offered by the distro. Data science requires sophisticated software and you will therefore need a distro with good support for the hardware that you need for data science.
Suggestions of some of the best distros for data science
Depending on the type of data that you are handling, there are a number of distros that you can consider for data science. Some of the best include Fedora, Ubuntu, CentOS, Mandrake, Mageia, and Mint.
Talk to Runrex today on the Linux distro to use
Want to get a better understanding of data science and the programming language and Linux distro you need for data science? Give us a call here at Runrex. Runrex is a digital marketing agency with a special interest in data science. Runrex not only offers the best data science services to companies, it also helps interested individuals get started in the world of data science.
Give Runrex a call today about your queries and we will be on hand to help you get started in the world of data science and the best software to use in data science.
|
OPCFW_CODE
|
We are going to deploy a Go application directly from your repo to AWS with Cloud 66. Any application using any language on any framework can be deployed with Cloud 66 as long as it has a Dockerfile. Note: Rails applications are exceptions as we deploy them natively.
If your application does not have a Dockerfile, we will suggest one for you based on your code. However, we would recommend reviewing what we have suggested and making sure the Dockerfile meets your requirements.
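For a small Go service, a multi-stage Dockerfile along these lines is typical. This is only a sketch of what such a file might look like, not the exact Dockerfile Cloud 66 would suggest, and the Go and Alpine versions are examples:
# Build stage: compile a static binary
FROM golang:1.19 AS build
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o /helloworld .

# Runtime stage: ship only the compiled binary
FROM alpine:3.17
COPY --from=build /helloworld /helloworld
EXPOSE 80
CMD ["/helloworld"]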
Have a look at the following resources on how to create a Dockerfile for Go applications:
Go App Overview
We are going to use a simple 'Hello World' single web application written in Golang. This application already contains a Dockerfile.
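For orientation, a single-file 'Hello World' web server in Go looks roughly like this (a sketch; the actual sample repo's code may differ in details such as the port):
package main

import (
	"fmt"
	"log"
	"net/http"
)

// handler writes a plain-text greeting for every request.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello World")
}

func main() {
	http.HandleFunc("/", handler)
	// The port is an example; the real sample may listen elsewhere.
	log.Fatal(http.ListenAndServe(":80", nil))
}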
Deploy a Go App on AWS
To get started, sign up for a Cloud 66 account with a four-week free trial. You can sign up using your GitHub or Google accounts, or use your email and create a password. Once you have your account we will ask you for (read-only) access to your code repository so that we can build and deploy your application for you. You will also need access to your AWS account.
Step 1: Add Application Repo Details
If you signed up with GitHub, you can select your application repo from the drop-down. However, for the purposes of this post, you can use our sample app by clicking on the 'Enter the repo URL' link and pasting in the Hello World app repo link: https://github.com/cloud66-samples/helloworld.
Next, choose the master branch and the production environment, give your app a name ('Hello World'), and click the ANALYZE button.
Step 2: Add or Build Your Images from Scratch
If your application does not have a Dockerfile, this is the time to build one and add it to your code repository.
Since our Hello World sample has a Dockerfile already, all you need to do is validate your image. Click the "Validate Repo access and Dockerfile" link and we will analyze the image details to make sure there are no issues before moving to the next step.
In this step, you can also add additional Docker images to your application if needed.
Step 3: Configure Your Application Services
In this step, you can set up your application workloads, network, and storage. We will ask you to configure network access for each of your services.
- Ports - set your container port, HTTP port, and HTTPS port.
- Storage - you can configure storage volumes for each service. (Optional)
- Instances - you can specify how many instances of each service you’d like to run and a variety of other custom settings. (Optional)
Since our sample is a simple application, we only have one service to configure. You should use HTTP port 80 and the container port exposed by the sample's Dockerfile.
Step 4: Configure Servers and Add AWS for Deployment
Now, we are going to add AWS cloud as a deployment destination for our Hello World application. You can select AWS from the dropdown; next we will ask you to specify the region and the server size for the deployment. Just before deploying your application, we will ask you to connect AWS with your Cloud 66 account by entering your AWS API key.
To get your AWS API key, log into your AWS console. Now click on "My Account" in the top right corner and select "Security Credentials". Under the "Access Credentials", click on "Access Keys" to get your AWS API key. For more information on how to set up an AWS API key, follow their documentation.
In this step, you can also add additional services like databases. We are going to add Redis.
Note: You can decide how to deploy your databases. As this is a demo app we'll choose the option to share the web server. However, we do not recommend this option for critical apps or the production environment.
All done! Click the START DEPLOYMENT button.
Once the application is deployed, use our application menu on the right-hand side to manage and secure your application - for example, setting up regular backups, adding load balancers, and creating a failover group.
In the dashboard overview, you can manage your application's Services, Jobs, Images, and Servers.
|
OPCFW_CODE
|
Serverless Microservice with DDD, Onion Architecture in Nx/Monorepo (NestJS, AWS Lambda and AWS CDK)
Updated: Apr 7, 2022
Migrating from a monolith to microservices is a long battle in every developer's life. Since microservices are a different concept from a monolith and require a bunch of new techniques for development and operation, I would love to share in this article the experience I have gained while developing my company's project.
This article contains a lot of not easily understandable techniques and terms. I hope you can enjoy it. You can find an example Nest project implementing all these concepts on GitHub.
Monolithic ---- The system consists of multiple services, but for whatever reason, it must be deployed as a whole.
Easy to develop and test.
Deploys as a single deployment unit.
Easily scale horizontally with the same deployment unit.
Requires less technical expertise and sharing the same underlying code.
Allows high performance by centralizing code and memory.
Suitable for small applications.
The complexity of a system increases with time.
New features take a long time to be released.
Production hotfixes take longer.
When a small change occurs in a module, the entire application has to be updated.
Unable to adopt newer technologies for better performance due to their close relationship with one technology.
Single point of failure.
Code becomes complex and doing new features becomes increasingly challenging due to high coupling.
High dependency on key developers who understand the entire code base
Continuous deployment is challenging.
Individual modules are difficult to scale.
The high coupling between modules causes reliability and availability issues.
Security concerns as all deployments are at one place.
Microservice ---- also known as microservice architecture ---- is an architectural style that structures an application as a collection of services that are
Highly maintainable and testable
Independently deployable and scalable
Able to adopt newer technologies more easily
Resilient to failures
Accelerates the velocity of software development by enabling small, autonomous teams to work in parallel
Every coin has two sides: microservices also have disadvantages, such as operation and management costs that increase as they grow bigger. That's why Serverless was born to wipe out that cost.
Moreover, microservices have a symbiotic relationship with domain-driven design (DDD), which will be explained in the next section.
Domain-Driven Design (DDD) ---- is a design approach where the business domain is carefully modeled in software and evolved over time, independently of the plumbing that makes the system work. DDD is a key and necessary tool when designing microservices.
The business goal is important to the business users, with a clear interface and functions. This way, the microservice can run independently from other microservices. Moreover, the team can also work on it independently, which is, in fact, the point of the microservice architecture.
Eric Evans introduced the concept in 2004 in his book Domain-Driven Design: Tackling Complexity in the Heart of Software, which focuses on three principles:
The primary focus of the project is the core domain and domain logic.
Complex designs are based on models of the domain.
Collaboration between technical and domain experts is crucial to creating an application model that will solve particular domain problems.
Serverless / AWS Lambda --- is an approach to software design that allows developers to build and run services without having to manage the underlying infrastructure. Developers can write and deploy code, while a cloud provider provisions servers to run their applications, databases, and storage systems at any scale.
Serverless architecture is best used to perform short-lived tasks and manage workloads that experience infrequent or unpredictable traffic.
While serverless architecture has been around for more than a decade, Amazon introduced the first mainstream FaaS platform, AWS Lambda. If you have any concerns about monitoring AWS Lambda, you can take a deep dive into this article.
Since serverless architecture charges for on-demand usage and runs without you managing the infrastructure, it addresses the microservice disadvantages above by reducing management and operation costs.
Onion Architecture --- is an architectural pattern that enables maintainable and evolutionary enterprise systems to achieve these goals:
Independent of Frameworks. The architecture does not depend on the existence of some library of feature-laden software. This allows you to use such frameworks as tools, rather than having to cram your system into their limited constraints.
Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element.
Independent of UI. The UI can change easily, without changing the rest of the system. A Web UI could be replaced with a console UI, for example, without changing the business rules.
Independent of Database. You can swap out Oracle or SQL Server, for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.
Independent of any external agency. In fact, your business rules simply don’t know anything at all about the outside world.
User interface: a place for components designed to handle communication with a user by a specific channel; also provides the domain to the application (don’t confuse it with the UI on the front end) (in my case this one is API Controllers)
Infrastructure: Databases, Messaging systems, Notification systems, etc ...
Application services: the place for an application service/facade and, optionally, commands and queries (in my case this one is Services)
Domain services: repository interfaces and domain logic involving several entities (in my case this one is Repositories)
Domain model: the very center of the Model, this layer can have dependencies only on itself. It represents the Entities of the Business and the Behaviour of these Entities.
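To make this layering concrete, here is a minimal sketch of how these layers can look in NestJS; the Order domain and all of its names are made up for illustration:
import { Inject, Injectable } from '@nestjs/common';

// Domain model (innermost layer): plain types, no framework dependency.
export interface Order {
  id: string;
  total: number;
}

// Domain services layer: only the repository *interface* lives here.
export interface OrderRepository {
  findById(id: string): Promise<Order | null>;
}
export const ORDER_REPOSITORY = Symbol('OrderRepository');

// Application services layer: orchestrates the domain, still storage-agnostic.
// A concrete repository (infrastructure layer) is bound to the token in a
// module definition, so this service never imports any database code.
@Injectable()
export class OrderQueryService {
  constructor(
    @Inject(ORDER_REPOSITORY) private readonly orders: OrderRepository,
  ) {}

  async getOrder(id: string): Promise<Order> {
    const order = await this.orders.findById(id);
    if (!order) throw new Error(`Order ${id} not found`);
    return order;
  }
}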
Onion Architecture is one of the specific applications of the concepts of Clean Architecture, but it is quite a bit simpler, which is why I chose to apply it.
NodeJS / NestJS ---- A framework for Node-based server-side applications; it can be seen as Angular on the backend.
You can use any programming language, like Java, C#, or Python to develop a microservice, but Node.js is an outstanding choice for a few reasons.
For one, Node.js uses an event-driven architecture and enables efficient, real-time application development. Node.js single-threading and asynchronous capabilities enable a non-blocking mechanism. Node.js can leverage plenty of superb NPM libraries. Developers using Node.js to build microservices have an uninterrupted flow, with Node.js code being fast, highly scalable, and easy to maintain.
While plenty of superb libraries, helpers, and tools exist for Node, Nest provides an out-of-the-box application architecture that allows developers and teams to create highly testable, scalable, loosely coupled, and easily maintainable applications.
Many technologies are supported out of the box (GraphQL, Redis, Elasticsearch, TypeORM, microservices, CQRS…)
Built with Node.js and Supports both Express.js and Fastify
Dependency Injection built-in
Given the many advantages of Node.js and NestJS listed above, and since my front-end project is mainly developed in Angular, I decided to choose them.
AWS CDK ---- stands for AWS Cloud Development Kit. A framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation.
The AWS CDK lets you build reliable, scalable, cost-effective applications in the cloud with the considerable expressive power of a programming language. This approach yields many benefits, including:
Build with high-level constructs that automatically provide sensible, secure defaults for your AWS resources, defining more infrastructure with less code.
Use programming idioms like parameters, conditionals, loops, composition, and inheritance to model your system design from building blocks provided by AWS and others.
Put your infrastructure, application code, and configuration all in one place, ensuring that at every milestone you have a complete, cloud-deployable system.
Employ software engineering practices such as code reviews, unit tests, and source control to make your infrastructure more robust.
Connect your AWS resources together (even across stacks) and grant permissions using simple, intent-oriented APIs.
Import existing AWS CloudFormation templates to give your resources a CDK API.
Use the power of AWS CloudFormation to perform infrastructure deployments predictably and repeatedly, with rollback on error.
Easily share infrastructure design patterns among teams within your organization or even with the public.
I love TypeScript, which is why I chose the AWS CDK for my infrastructure. I treat AWS CDK code the same as business logic code and manage it like a normal library. Every microservice will have its own AWS CDK code to deploy independently.
From now, everything in my project will be developed in Typescript.
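As an illustration of what that per-microservice CDK code can look like, here is a minimal sketch (CDK v2) that puts one Lambda behind an API Gateway; the stack name, asset path, and handler are hypothetical:
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

export class QueryApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // The bundled NestJS handler; the asset path depends on your Nx build output.
    const handler = new lambda.Function(this, 'QueryApiHandler', {
      runtime: lambda.Runtime.NODEJS_16_X,
      code: lambda.Code.fromAsset('dist/apps/query-api'),
      handler: 'main.handler',
    });

    // Expose the Lambda through API Gateway as the service's "front door".
    new apigateway.LambdaRestApi(this, 'QueryApiGateway', { handler });
  }
}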
Nx ---- All in one smart, fast, and extensible build system with first-class monorepo support and powerful integrations.
Since we now have a lot of things in our backend:
DDD / Onion Architecture: Domain Core (Services, Repositories, Domain Models), Infrastructure, Utils, ....
Microservice / AWS Lambda Function
Thank God, Nx was born for managing them all
.
└── src
    ├── apps
    │   ├── query-api              <-- aws lambda (lib)
    │   │   ├── app                <-- controllers (dir)
    │   │   └── cdk                <-- aws cdk (dir)
    │   └── command-api            <-- aws lambda (lib)
    │       ├── app                <-- controllers (dir)
    │       └── cdk                <-- aws cdk (dir)
    └── libs
        ├── domain                 <-- domain folder (dir)
        │   ├── core               <-- core folder (dir)
        │   │   ├── domain         <-- interfaces (lib)
        │   │   ├── repositories   <-- interfaces (lib)
        │   │   └── services       <-- interfaces (lib)
        │   ├── infrastructure     <-- (lib)
        │   │   └── repositories   <-- database access (dir)
        │   ├── services           <-- (lib)
        │   │   ├── command        <-- command service (dir)
        │   │   ├── query          <-- query service (dir)
        │   │   └── models         <-- request model (dir)
        │   └── utils              <-- utilities (lib)
        ├── infra                  <-- infra group (dir)
        │   └── cdk                <-- aws cdk (lib)
        ├── shared                 <-- shared libs group (dir)
        │   ├── database           <-- shared database (lib)
        │   ├── environment        <-- shared env (lib)
        │   └── utils              <-- shared utils (lib)
Finally, everything will be deployed with the design below
In this infrastructure, I chose:
AWS API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services
AWS DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. DynamoDB offers built-in security, continuous backups, automated multi-Region replication, in-memory caching, and data export tools.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work.
|
OPCFW_CODE
|
This is minor, but can someone clarify for me why the disk encryption phrase has the option of ‘f1’ to hide the bullets?
I’m just curious what security benefit this gives, as any threat of visual collection of the number of characters is obviously nullified by the fact that you are typing the pasword in plain view. Is this an aesthetic choice?
I’ll also note I had an issue with pulseaudio idling and causing audio issues, which a friendly user helped me solve and that is in this thread: Qubes 4.2 audio issues - Need to reset audio output for each new Qube audio use - #3 by KarlinQubes
I disagree with this. If you are not recorded with a camera, another person will hardly be able to count how many keys you have pressed, especially if you have many and you do it quickly.
Also, this is not Qubes-specific, since this prompt comes from Fedora that runs in dom0.
Moved to a new topic, as this was not very relevant to R4.2rc1 release
I had only noticed this on the new Qubes 4.2 so I assumed it was something Qubes pushed themselves.
I suppose it makes sense, as having a strong sense of the number of characters makes brute-forcing easier, but it still strikes me as being of marginal value. If someone is close enough to observe the number of characters in bullet form while you type fast, isn't recording, but IS enough of a threat to access the machine and brute-force it later, that doesn't strike me as an obvious thing to need a counter for.
Feel free to close this thread I just wanted to know the purpose of this, and didn’t realize it was from fedora and not a qubes-specific change.
Perhaps you are right. But it also does not make a large decrease in convenience, does it?
We do not close threads on this forum. Users can always find some additional relevant questions about the topic. Also, it’s relevant for the security of Qubes more or less.
I guess it’s new in Fedora 37, which runs in dom0 in Qubes 4.2.
No I agree, I even use it when there is no legitimate need lol. I’m all for user choice and small increments of security gained. So I am not upset by it, I was just struggling initially to understand the reason and was hoping there was something a little deeper.
Thanks for clarifying.
Since I am not a developer, you should take my guesses with a grain of salt. There still might be something deeper, which I don’t know about.
If there IS an issue with people counting characters by looking at the dots in the password field, this can be overcome by having a password longer than the field is wide. Once it’s full of dots, you see no change for additional characters.
(For those things I use often enough to not need keepass, I tend to use pass phrases, which, since they are not random, should be longer than random strings. Even long passphrases are a lot easier to remember.)
Over time I have got a lot better at remembering long passphrases with a variety of characters and symbols, as many as 35.
What I tend to do is take randomly generated complex passwords and find strings within them that I can make some kind of narrative about, then stitch those together into my passphrase, and I can kind of 'recite' a narrative that the characters represent, which helps retention.
All of the characters & symbols are completely random, but I can recite a kind of ‘mantra’ that helps me retain them all with great accuracy.
The more I’ve done this, the easier it has become. I can remember old long complex passwords that I haven’t used in at least a year just because of that narrative style, but again all random characters and symbols.
This is off-topic and I will leave it here.
|
OPCFW_CODE
|
It is very handy.
Visual Studio for Mac
From Microsoft: Simplify the basic tasks of creating, debugging, and deploying applications. Deliver business results using productive, predictable, customizable processes and increase transparency and traceability throughout the lifecycle with detailed analytics. Whether creating new solutions or enhancing existing applications unleash your creativity with powerful prototyping, architecture and development tools that let you bring your vision to life targeting an increasing number of platforms and technologies including cloud and parallel computing.
Realize increased team productivity by utilizing advanced collaboration features and use integrated testing and debugging tools to find and fix bugs quickly and easily creating high quality solutions while driving down the cost of solution development.
Once you are at the shell prompt, execute the following commands: change to that directory and start up the node. Point your browser over to the official download site for Visual Studio and select the edition you want to install. I will be taking advantage of the 90-day free trial for Visual Studio Premium during this blog post.
Once you have this downloaded, execute the installation and follow the onscreen instructions. Depending on the speed of your Internet connection, the installation process could take a while to complete. For the full installation, it will require roughly 7 GB of disk space on your computer.
In order to do this, close down Visual Studio, if you have it running, and point your browser over to the official NTVS download site. Once you download the correct package for your Visual Studio installation (I am using Visual Studio ), begin the installation process.
Visual Studio Code
To verify this, open up Visual Studio Node. As you can see in the above example, we set a String variable and then verified the Node. If you've previously installed any of the packages you'll need to uninstall them first or repair all packages in the order listed. Microsoft has an official Client Compatiblity matrix.
Visual Studio Download
In addition to it, this post also lists the required hotfixes to make everything work. Supported operating system: Windows 10, Windows 8.
You won't be able to trigger builds or access work items using the version of Visual Studio you are now using. Instead you must start Team Explorer or higher to interact with these features from Visual Studio. When you use Visual Studio or higher to configure your Version Control mappings, you need to make sure you select a "Server Workspace".
|
OPCFW_CODE
|
16 May 2011
Adventures in Mac-Land: My New MBP
For my new job I was given a brand new Macbook Pro. This caused both joy and trepidation. I’m always excited by new tech toys, but the last time I used a Mac was in secondary school when the iMac looked like a giant piece of neon fruit. Since then I’ve been primarily Windows-based with occasional Linux use. So how have I been dealing with this new environment?
My first experiences with the MBP were rather negative. Coming into OSX without any sort of primer was a jarring experience. How do I install apps? What’s this ‘dock’ thing? Why don’t the Home and End buttons work? The feeling was quite strange. Here was a computer, something that I’ve been building my life and career around for years, that I’m struggling to do even the basics with. If I may use a comic-book analogy, I felt like a Superhero losing his powers. All the pieces were there, but whereas I would usually be able to assemble them into a powerful tool, for some reason they weren’t fitting together like they used to.
A few years ago I fancied myself as a sysadmin, so I cobbled together some old PC components and made myself a home server running Ubuntu. I didn’t have a spare monitor, so after the initial setup it lived completely headless. For a number of months I did everything via the command line, and quite enjoyed it. I installed Apache, got Wordpress and Mediawiki set up, and then delved into getting temperature stats from the internal sensors. It was fun for a while but eventually I got bored with the endless little tweaks I had to make to get everything running just so. I abandoned Linux and went back to the comfort and familiarity of Windows. That said, I missed the level of control that bash gave me. Cue the MBP. All of a sudden, even though most things were alien, I found myself in familiar surroundings once I opened up the terminal. The environment I missed from Linux, the power of a decent shell, it was all there. There was hope.
Starting From Scratch
I realized that if I wanted to take command of my new system, if I wanted to turn it from an uncooperative servant into a powerful companion, I was going to need some help. I had done the same thing on Windows over the years. Over time, I’d build up a library of tweaks, tools and hacks that allowed me to use the system the way I wanted to, not the other way round. The same needed to happen here. I didn’t have the time to organically build this arsenal, but fortunately my co-workers had already done a lot of the legwork for me.
First to go was Safari, replaced by my old friend Firefox and its extensive suite of familiar plugins. Firefox doesn’t have the best of reputations on Mac, but so far I’ve found that the current version performs just as well as it ever did on Windows.
Next I needed a better text editor. On Windows I’ve always used Notepad++ but I’d heard that the standard on Mac was TextMate so I gave it a try and was impressed. The lightweight ‘project’ model is really useful, and like NP++ it has an extensive set of extensions available for further customization.
Thanks to the cross-platform AIR runtime, I was able to stick with my twitter client of choice: TweetDeck. As I’m partly responsible for the @ShopifyAPI account now, the multi-user, multi-column features are invaluable.
I also installed a few other tools recommended to me: Alfred to replace Spotlight, Text Expander to handle email signatures/templates and other commonly used text snippets, Divvy for window management, Evernote, and Visor for the terminal.
Still a Long Way to Go
I’m feeling more comfortable with my new environment now that I’ve customized it a bit. That said, I’m definitely not there yet. I’m too reliant on the mouse right now which is going to slow me down until I get comfortable with the various ‘standard’ keyboard shortcuts. I might switch out a couple of my new tools for alternatives before I’m 100% happy and the OS itself still feels slightly strange but I’m getting more adept daily. The most useful tool I’ve found so far though has to be my co-workers who have been using the platform for years and who are always happy to answer my stupid questions about really basic things. They’re awesome.
Thanks for reading! If you like my writing, you may be interested in my book: Healthy Webhook Consumption with Rails
David at 10:00
|
OPCFW_CODE
|
About Google's approach to research publication
I understand the concern over Timnit Gebru’s resignation from Google. She’s done a great deal to move the field forward with her research. I wanted to share the email I sent to Google Research and some thoughts on our research process.
Here’s the email I sent to the Google Research team on Dec. 3, 2020:
I’m sure many of you have seen that Timnit Gebru is no longer working at Google. This is a difficult moment, especially given the important research topics she was involved in, and how deeply we care about responsible AI research as an org and as a company.
Because there’s been a lot of speculation and misunderstanding on social media, I wanted to share more context about how this came to pass, and assure you we’re here to support you as you continue the research you’re all engaged in.
Timnit co-authored a paper with four fellow Googlers as well as some external collaborators that needed to go through our review process (as is the case with all externally submitted papers). We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.
A cross functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why. It ignored too much relevant research — for example, it talked about the environmental impact of large models, but disregarded subsequent research showing much greater efficiencies. Similarly, it raised concerns about bias in language models, but didn’t take into account recent research to mitigate these issues. We acknowledge that the authors were extremely disappointed with the decision that Megan and I ultimately made, especially as they’d already submitted the paper.
Timnit responded with an email requiring that a number of conditions be met in order for her to continue working at Google, including revealing the identities of every person who Megan and I had spoken to and consulted as part of the review of the paper and the exact feedback. Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google.
Given Timnit's role as a respected researcher and a manager in our Ethical AI team, I feel badly that Timnit has gotten to a place where she feels this way about the work we’re doing. I also feel badly that hundreds of you received an email just this week from Timnit telling you to stop work on critical DEI programs. Please don’t. I understand the frustration about the pace of progress, but we have important work ahead and we need to keep at it.
I know we all genuinely share Timnit’s passion to make AI more equitable and inclusive. No doubt, wherever she goes after Google, she’ll do great work and I look forward to reading her papers and seeing what she accomplishes.
Thank you for reading and for all the important work you continue to do.
I’ve also received questions about our research and review process, so I wanted to share more here. I'm going to be talking with our research teams, especially those on the Ethical AI team and our many other teams focused on responsible AI, so they know that we strongly support these important streams of research. And to be clear, we are deeply committed to continuing our research on topics that are of particular importance to individual and intellectual diversity -- from unfair social and technical bias in ML models, to the paucity of representative training data, to involving social context in AI systems. That work is critical and I want our research programs to deliver more work on these topics -- not less.
In my email above, I detailed some of what happened with this particular paper. But let me give a better sense of the overall research review process. It’s more than just a single approver or immediate research peers; it’s a process where we engage a wide range of researchers, social scientists, ethicists, policy & privacy advisors, and human rights specialists from across Research and Google overall. These reviewers ensure that, for example, the research we publish paints a full enough picture and takes into account the latest relevant research we’re aware of, and of course that it adheres to our AI Principles.
Those research review processes have helped improve many of our publications and research applications. While more than 1,000 projects each year turn into published papers, there are also many that don’t end up in a publication. That’s okay, and we can still carry forward constructive parts of a project to inform future work. There are many ways we share our research; e.g. publishing a paper, open-sourcing code or models or data or colabs, creating demos, working directly on products, etc.
This paper surveyed valid concerns with large language models, and in fact many teams at Google are actively working on these issues. We’re engaging the authors to ensure their input informs the work we’re doing, and I’m confident it will have a positive impact on many of our research and product efforts.
But the paper itself had some important gaps that prevented us from being comfortable putting Google affiliation on it. For example, it didn’t include important findings on how models can be made more efficient and actually reduce overall environmental impact, and it didn’t take into account some recent work at Google and elsewhere on mitigating bias in language models. Highlighting risks without pointing out methods for researchers and developers to understand and mitigate those risks misses the mark on helping with these problems. As always, feedback on paper drafts generally makes them stronger when they ultimately appear.
We have a strong track record of publishing work that challenges the status quo -- for example, we’ve had more than 200 publications focused on responsible AI development in the last year alone. Just a few examples of research we’re engaged in that tackles challenging issues:
* Measuring and reducing gendered correlations in pre-trained NLP models
* Evading Deepfake-Image Detectors with White- and Black-Box Attacks
* Extending the Machine Learning Abstraction Boundary: A Complex Systems Approach to Incorporate Societal Context
* CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims
* What Does AI Mean for Smallholder Farmers? A Proposal for Farmer-Centered AI Research [forthcoming]
* SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
* Accelerating eye movement research via accurate and affordable smartphone eye tracking
* The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
* Assessing the impact of coordinated COVID-19 exit strategies across Europe
* Practical Compositional Fairness: Understanding Fairness in Multi-Component Ranking Systems
I’m proud of the way Google Research provides the flexibility and resources to explore many avenues of research. Sometimes those avenues run perpendicular to one another. This is by design. The exchange of diverse perspectives, even contradictory ones, is good for science and good for society. It’s also good for Google. That exchange has enabled us not only to tackle ambitious problems, but to do so responsibly.
Our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication. To give a sense of that rigor, this blog post captures some of the detail in one facet of review, which is when a research topic has broad societal implications and requires particular AI Principles review -- though it isn’t the full story of how we evaluate all of our research, it gives a sense of the detail involved: https://blog.google/technology/ai/update-work-ai-responsible-innovation/
We’re actively working on improving our paper review processes, because we know that too many checks and balances can become cumbersome. We will always prioritize ensuring our research is responsible and high-quality, but we’re working to make the process as streamlined as we can so it’s more of a pleasure doing research here.
A final, important note -- we evaluate the substance of research separately from who’s doing it. But to ensure our research reflects a fuller breadth of global experiences and perspectives in the first place, we’re also committed to making sure Google Research is a place where every Googler can do their best work. We’re pushing hard on our efforts to improve representation and inclusiveness across Google Research, because we know this will lead to better research and a better experience for everyone here.
|
OPCFW_CODE
|
Precalculus Vector Questions
An airplane is flying on a compass heading (bearing) at 340 degrees at 325 mph. A wind is blowing with the bearing 320 degrees at 40 mph.
Find the actual ground speed and direction of the plane
This is my work:
I know I went wrong somewhere, I just don't know where. Can someone tell me what I got wrong?
2.A force of 50 lbs acts on a object at an angle of 45 degrees. A second force of 74 lbs acts on the object at an angle of -30 degrees.
Find the direction and magnitude of the resultant force.
This is my work:
The work is correct, except the final step. I looked at the answer and it was the negative angle (-1.226 degrees) I was wondering what the logic was behind this.
Thank you very much
Your questions lack questions. What were you asked to find?
sorry, the questions have been added
Wind does not blow in a bearing. Planes do not fly with a bearing. A bearing is an angular measure from the plane to a point on the ground (a visual reference point or a radio beacon). And wind directions are reported in the direction the wind is blowing from. i.e. the weather vane points into the wind.
Don't bother converting compass orientations to standard orientations. Just assume that the real world does things like the mathematicians do, even when it doesn't.
$(325\cos 340^\circ, 325\sin 340^\circ) + (40\cos 320^\circ, 40\sin 320^\circ)$
It should really be minus the wind vector, but I am sure that the person who designed this question doesn't know the actual conventions. But, in my previous statement I just said disregard what is done in the real world, so we will keep doing that.
$\text{ground speed} = \sqrt{(325\cos 340^\circ + 40\cos 320^\circ)^2 + (325\sin 340^\circ + 40\sin 320^\circ)^2}$
True course.
$\theta = \arctan \frac{325\sin 340^\circ + 40\sin 320^\circ}{325\cos 340^\circ + 40\cos 320^\circ}$
However the arctan convention is to quote figures in $(-180,180)$ and you will need to add $360^\circ$ to get a number in $(0,360]$
In these sorts of questions it is also possible for the calculation to be off by $180^\circ$ You will have to do a little analysis as to which quadrant you are in and which adjustment you need to make.
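To sanity-check the numbers, the arithmetic works out roughly as follows (values rounded, treating the given bearings as standard angles as above):

$x = 325\cos 340^\circ + 40\cos 320^\circ \approx 305.4 + 30.6 = 336.0$

$y = 325\sin 340^\circ + 40\sin 320^\circ \approx -111.2 - 25.7 = -136.9$

$\text{ground speed} \approx \sqrt{336.0^2 + (-136.9)^2} \approx 363$ mph

$\theta \approx \arctan\left(\tfrac{-136.9}{336.0}\right) \approx -22.2^\circ$, or $337.8^\circ$ after adding $360^\circ$; since $x > 0$ and $y < 0$ the vector is in the fourth quadrant, so no extra $180^\circ$ adjustment is needed here.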
For 1 you want $v_{groundspeed} = v_{airspeed} + v_{wind}$. You have ground and air speed backwards. It's not a well-explained question and you have to think of "flying on a heading..." as "aiming".
For the second question you have found all the sides of the triangle and want to know the angle. If you use cos, which is symmetric about zero, you won't know if the angle is + or -. Using sin, which is symmetric about 90 degrees, also gives two solutions. Using them together narrows it down to a unique solution. But then you already know the y coordinate is negative, so you know that the angle is negative.
In your case it is best not to use cos. cos is almost flat near zero degrees, so small errors in knowing the cosine will lead to large errors in the angle. sin and tan have a slope of 1 near zero, so errors in sin or tan give the same level of error in the angle. You will thus have more accurate results using sin and tan.
|
STACK_EXCHANGE
|
API changes in 1.20.7-beta
Summary of changes:
https://discord.com/channels/652602255748497449/832061334531211276/995815107600855112
VTube Studio 1.20.7 is now on the beta branch (see above for information on how to access it).
This update brings some big changes to the VTube Studio API, including the first part of the Item API.
Added functionality to request list of hotkeys in a Live2D item using the new field live2DItemFileName in the HotkeysInCurrentModelRequest.
https://github.com/DenchiSoft/VTubeStudio#requesting-list-of-hotkeys-available-in-current-or-other-vts-model
Added functionality to trigger hotkeys in Live2D Items using the new optional field itemInstanceID in the HotkeyTriggerRequest.
https://github.com/DenchiSoft/VTubeStudio#requesting-execution-of-hotkeys
Added "add" mode for the InjectParameterDataRequest (new mode field), which allows multiple plugins to add values to a given parameter. This makes it possible for multiple plugins to work together without interfering with each other, for example external tracker plugins and bonk-type plugins.
In general, the parameter value injection code has been rewritten from scratch, so if your plugin uses this request, please make sure it still works (there should be no issues).
https://github.com/DenchiSoft/VTubeStudio#feeding-in-data-for-default-or-custom-parameters
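For illustration, an InjectParameterDataRequest using the new mode field might look like this (standard VTube Studio API request envelope; the requestID and parameter are just examples):
{
    "apiName": "VTubeStudioPublicAPI",
    "apiVersion": "1.0",
    "requestID": "SomeUniqueRequestID",
    "messageType": "InjectParameterDataRequest",
    "data": {
        "mode": "add",
        "parameterValues": [
            { "id": "FaceAngleX", "value": 5.0 }
        ]
    }
}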
Item API: ItemListRequest, returns a list of items in the scene (or a list of available items in the user's "Items" folder).
https://github.com/DenchiSoft/VTubeStudio#requesting-list-of-available-items-or-items-in-scene
Item API: ItemLoadRequest, loads items of any type into the scene.
https://github.com/DenchiSoft/VTubeStudio#loading-item-into-the-scene
Item API: ItemUnloadRequest, unloads items currently loaded into the scene.
https://github.com/DenchiSoft/VTubeStudio#removing-item-from-the-scene
Item API: ItemAnimationControlRequest, controls item animations and other stuff like item brightness and transparency (for example for reactive-PNG type plugins).
https://github.com/DenchiSoft/VTubeStudio#controling-items-and-item-animations
Item API: ItemMoveRequest, moves around items in the scene using various movement modes.
https://github.com/DenchiSoft/VTubeStudio#moving-items-in-the-scene
Documentation diff: https://github.com/DenchiSoft/VTubeStudio/compare/13827014c4d3e401fa429bd2b5976f128203e147..1be2dc5f4b3ac6b6c5b010566618991405a6ba87
A bunch of stuff has changed since I opened this issue. Here's a more up to date link for the documentation diff:
https://github.com/DenchiSoft/VTubeStudio/compare/fd16347063d84ef99769082d7b4947eb5e611404..5e45a961a91156a3edf75e09a9c5b2df9b82b9a8
Closed by #56
|
GITHUB_ARCHIVE
|
The Add-ons section lets you manage secondary scripts, called "Add-ons", that extend Blender's functionality. In this section you can search, install, enable and disable Add-ons.
Blender comes with some preinstalled Add-ons already, ready to be enabled. But you can also add your own, or any interesting ones you find on the web.
- Supported Level
Blender’s add-ons are split into two groups depending on who writes/supports them:
Official: Add-ons that are written by Blender developers.
Community: Add-ons that are written by people in the Blender community.
- Enabled Add-ons Only
Shows only enabled add-ons for the current Category.
Add-ons are divided into categories by what areas of Blender they affect.
There are hundreds of add-ons that are not distributed with Blender and are developed by others. To add them to the list of other add-ons, they must be installed into Blender.
To install these, use the Install… button and use the File Browser to select the .py add-on file.
Now the add-on will be installed, however not automatically enabled. The search field will be set to the add-on’s name (to avoid having to look for it), Enable the add-on by checking the enable checkbox.
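If you are curious what such a .py file contains, a minimal add-on skeleton looks like this (a sketch; the operator and all names are made up):
bl_info = {
    "name": "Hello World Add-on",
    "blender": (2, 80, 0),
    "category": "Object",
}

import bpy

class OBJECT_OT_hello_world(bpy.types.Operator):
    """Report a greeting in the status bar"""
    bl_idname = "object.hello_world"
    bl_label = "Hello World"

    def execute(self, context):
        self.report({'INFO'}, "Hello World")
        return {'FINISHED'}

def register():
    bpy.utils.register_class(OBJECT_OT_hello_world)

def unregister():
    bpy.utils.unregister_class(OBJECT_OT_hello_world)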
Scans the Add-on Directory for new add-ons.
User-Defined Add-on Path
You can also create a personal directory containing new add-ons and configure the file path in the File Paths section of the Preferences. To create a personal script directory:
Create an empty directory in a location of your choice.
Add a subdirectory under it named addons (it must have this name for Blender to recognize it).
Open the File Paths section of the Preferences.
Set the Scripts file path to point to your script directory.
Save the preferences and restart Blender for it to recognize the new add-on location.
Now when you install add-ons, you can select the Target path; Blender will copy newly installed add-ons into the directory selected in your Preferences.
Enabling & Disabling Add-ons#
To enable or disable an add-on, check or uncheck the box next to its name in the add-ons list.
The add-on functionality should be immediately available.
Add-ons that activate or change multiple hotkeys have a special system of activation. For example, with the 3D Viewport Pie Menus add-on, each menu has a selection box to activate the menu and its hotkey.
If the Add-on does not activate when enabled, check the Console window for any errors that may have occurred.
You can click the arrow at the left of the add-on box to see more information, such as its location, a description and a link to the documentation. Here you can also find a button to report a bug specific to this add-on.
Some add-ons may have their own preferences which can be found in the Preferences section of the add-on information box.
Some add-ons use this section, for example, to enable or disable certain functions of the add-on. Sometimes these might even all default to off, so it is important to check whether an enabled add-on has any particular preferences.
|
OPCFW_CODE
|
SQL SELECT JOIN and DB Architecture
As I cannot manage to build a join that returns the expected results, I have started thinking the whole architecture may be wrong.
Models (relevant fields only):
public class AspNetUsers // this is the ASP.NET default identity table, modified
{
    public string Id { get; set; }
    public int GeoID { get; set; } // FK to GeoData PK
    public string Email { get; set; }
}

public partial class Product
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public int CategoryID { get; set; } // FK to Category PK
    public string UserID { get; set; } // FK to AspNetUsers PK
}

public partial class Category
{
    public int ID { get; set; }
    public string Name { get; set; }
}

public class GeoData
{
    public int ID { get; set; }
    public DbGeography GeoLocation { get; set; }
}

public class WishList
{
    public int ID { get; set; }
    public string userEmail { get; set; } // FK to AspNetUsers 'Email'
    public int frequency { get; set; }
    public int category { get; set; } // FK to Category PK
    public int range { get; set; }
    public int geoid { get; set; } // FK to GeoData PK
}
The idea is: we have products and categories. Products are user-related: the UserID FK in the Product model links each product to its owner in the AspNetUsers table, since every user registers, logs in, etc. through ASP.NET Identity.
Before listing their products, users must geolocate themselves so that their products can be conveniently geolocated (searched) by other users.
The GeoData table stores worldwide coordinates, postal codes, place names, etc.
Now to the WishList. Every user may set up 'n' wishlists, each specifying a category they're interested in, plus a location and a range from that location.
Wishlist results are emailed to users by SQL Server, scheduled according to the 'frequency' field.
I am not sure this is the best possible db architecture. Sometimes you just start with some building blocks (AspNetUsers, Products, Categories) and add functionality over time.
Anyway, here's a basic SELECT used to build the emails sent for users' different wishlists:
DECLARE C1 CURSOR READ_ONLY
FOR
SELECT [userEmail], [frequency], [category],[range], w.[geoid], [searchCity], u.UserName, g.[GeoLocation]
FROM WishLists as w
JOIN AspNetUsers as u ON w.userEmail = u.Email
JOIN GeoData_IT as g ON g.ID = w.geoid
OPEN C1;
FETCH NEXT FROM C1 INTO
@userEmail, @frequency, @category, @range, @geoid, @searchCity, @userName, @GeoLocation
WHILE @@FETCH_STATUS = 0
BEGIN
IF @geoid > 0
BEGIN
SELECT p.ID, c.Name, p.Name, g.PlaceName
FROM WishLists as w
RIGHT OUTER JOIN GeoData_IT AS g ON w.geoid = g.ID
JOIN AspNetUsers AS u ON g.ID = u.GeoID
JOIN Products as p on u.Id = p.UserID
JOIN Categories AS c ON p.CategoryID = c.ID
WHERE g.GeoLocation.STDistance(@GeoLocation) <= (@range*5000)
END
FETCH NEXT FROM C1 INTO
@userEmail, @frequency, @category, @range, @geoid, @searchCity, @userName, @GeoLocation
END
CLOSE C1;
DEALLOCATE C1;
GO
I am not even introducing the 'category' complication at this point, just checking whether a wishlist contains a relevant geoid (geoid > 0) and trying to extract, for each wishlist, any product belonging to users whose geolocation is within the given range of the given geoid.
Yet I am getting some duplicated results, seemingly at random and only for certain wishlists, and I cannot figure out why.
This looks fine:
1 Nursery Loved Crib Bale Genoa
2 Baby products Crib Genoa
3 Baby products Cot Bed Genoa
4 Feeding Circus Crib Bale Genoa
While this shows duplicates:
1555 Baby products this uaga product Recco
1555 Baby products this uaga product Recco
1556 Automotive uaga product Recco
1556 Automotive uaga product Recco
Is the entire architecture failing or just the SELECT?
There are several problems with your data model, including why a wishlist has a geoid and why it has the userEmail instead of the userId. Also, Products shouldn't have a UserId; the wishlist should have the ProductId. As for the SELECT and why you have duplicates: it could be that two wishlists for two different users have the same geoid, and then you would see two rows like your results.
1. As userEmail is unique in the AspNetUsers table, it can be used as a PK, thus eliminating the need for a JOIN in many circumstances. 2. There's a one-to-many relation between userid and products. Sure, I could complicate the architecture with a third table: would this be strongly advisable? 3. A wishlist has nothing to do with single products. A wishlist targets a location, a range from that location and possibly a category. Sure, if users set up two or twenty identical wishlists (not very wise indeed), asking for the same place, range and category, they will get identical results.
I realized after my comment that wishlists are location based; I can also ignore the userEmail in the wishlist. About (3) in your comment: the problem is not users requesting identical wishlists, the problem is that if two different users have a wishlist for the same geoid, both users will get duplicate rows from your select statement (FROM WishLists as w RIGHT OUTER JOIN GeoData_IT AS g ON w.geoid = g.ID JOIN AspNetUsers AS u ON g.ID = u.GeoID), because you are missing the join on the Email. Apart from this, the relationship between user and product doesn't make any sense to me.
I also have no idea why you would use a right outer join: if users don't have a wishlist, send them an empty email anyway? Are users both owners of products, via the UserID on Products, and consumers of products, via wishlist - category - product? That kind of makes sense but is not straightforward from the schema. Also, if you are only going to use the geoid, GeoLocation and range variables in the cursor (ugh), I wouldn't load the other variables. Overall I would say yes, your architecture needs help.
There was nothing wrong with the architecture; the problem was that the SELECT did not reference the WishList ID when fetching. Here is the corrected code:
DECLARE C1 CURSOR READ_ONLY
FOR
SELECT w.ID, w.userEmail, w.category, w.range, w.geoid, w.searchCity, u.UserName, g.GeoLocation
FROM WishLists as w JOIN AspNetUsers as u ON w.userEmail = u.Email JOIN GeoData_IT as g ON g.ID = w.geoid
WHERE w.frequency = 1 order by w.ID
OPEN C1;
FETCH NEXT FROM C1 INTO
@WishListID, @userEmail, @category, @range, @geoid, @searchCity, @userName, @GeoLocation
WHILE @@FETCH_STATUS = 0
BEGIN
IF @geoid > 0
BEGIN
IF @category > 0
BEGIN
SELECT p.ID, c.Name, p.Name, g.PlaceName
FROM Products as p
JOIN Categories AS c ON p.CategoryID = c.ID
JOIN AspNetUsers AS u ON p.UserID = u.Id
JOIN WishLists AS w ON @WishListID = w.ID
JOIN GeoData_IT AS g ON u.GeoID = g.ID
WHERE p.IsApproved = 1 AND p.IsDeleted = 0 AND p.DateExpire > convert(date, getdate())
AND w.IsDeleted = 0 AND g.GeoLocation.STDistance(@GeoLocation) <= (@range*5000)
AND p.CategoryID = @category
order by p.ID
END -- @category > 0
END -- @geoid > 0
FETCH NEXT FROM C1 INTO
@WishListID, @userEmail, @category, @range, @geoid, @searchCity, @userName, @GeoLocation
END
CLOSE C1;
DEALLOCATE C1;
the key line being
JOIN WishLists AS w ON @WishListID = w.ID
That SQL is significantly different from the original (including getting rid of the right outer join). Also, how are you connecting the users to the wishlists again? This select is still wrong and problematic. And no, your architecture still has significant problems.
Thanks buddy, I just would like to express my gratitude. I couldn't have made it without all of your technical suggestions. It's really hard to say which one of your posts was the most useful, all of them telling so much about your will to help other users of this community from the heights of the deep experience and knowledge you've been so kind to share. You're the best!
You are welcome; I do this in my downtime and I'm happy to help :)
If you intend to use stored procedures or cursors (cursors should be avoided whenever possible; try different techniques such as CTEs), then you should definitely go database first. There is no benefit for you in the code-first approach.
That said, you are apparently not declaring relationships between tables using ICollection.
Furthermore, you are not using data annotations to define primary keys with [Key] or to limit the size of your string columns with [MaxLength(n)]. As a result, your tables have no primary or foreign keys, all string columns are NVARCHAR(MAX), and your tables are going to take up a lot of room.
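For example, annotating the Product model could look like this (a sketch; the lengths chosen here are arbitrary):
using System.ComponentModel.DataAnnotations;

public partial class Product
{
    [Key]
    public int ID { get; set; }

    [MaxLength(100)] // maps to NVARCHAR(100) instead of NVARCHAR(MAX)
    public string Name { get; set; }

    [MaxLength(500)]
    public string Description { get; set; }
}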
What you are getting is a Cartesian product, and it is expected. If you want to avoid it without digging deep into the logic of your query, use DISTINCT or GROUP BY.
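Applied to the inner query from the question, that would be something like the following (a sketch; fixing the missing join predicate is still the better cure):
SELECT DISTINCT p.ID, c.Name AS CategoryName, p.Name AS ProductName, g.PlaceName
FROM WishLists AS w
RIGHT OUTER JOIN GeoData_IT AS g ON w.geoid = g.ID
JOIN AspNetUsers AS u ON g.ID = u.GeoID
JOIN Products AS p ON u.Id = p.UserID
JOIN Categories AS c ON p.CategoryID = c.ID
WHERE g.GeoLocation.STDistance(@GeoLocation) <= (@range * 5000);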
The 'partial' declaration should suggest to you that other partial classes are being used for defining metadata. Nevertheless, thank you for your suggestions.
Yes, there are two partial classes, but without the rest of the implementation code, we can't really help you either.
As is, this post sounds more like a "do my job for me" than a reasonable theoretical question.
|
STACK_EXCHANGE
|
#Author-cmactagg
#Description-Copies the parameter reference and applies them to another component
import adsk.core, adsk.fusion, adsk.cam, traceback
def run(context):
    ui = None
    try:
        app = adsk.core.Application.get()
        ui = app.userInterface
        product = app.activeProduct
        design = adsk.fusion.Design.cast(product)
        hasMisalignment = False  # whether some parameters couldn't be copied due to incompatible units

        # ensure the selected items are components
        if ui.activeSelections.count == 2 \
                and isinstance(ui.activeSelections[0].entity, adsk.fusion.Occurrence) \
                and isinstance(ui.activeSelections[0].entity.component, adsk.fusion.Component) \
                and isinstance(ui.activeSelections[1].entity, adsk.fusion.Occurrence) \
                and isinstance(ui.activeSelections[1].entity.component, adsk.fusion.Component):
            comp0 = ui.activeSelections[0].entity.component
            comp1 = ui.activeSelections[1].entity.component

            # loop through the parameters of the first component and make the
            # parameter at the same position in the second component reference it
            for index, param in enumerate(comp0.modelParameters):
                # only apply/copy the reference if both parameters have the same unit
                # type - a length parameter can't reference an angle parameter;
                # if the unit types differ, skip it and try the next parameter
                if comp1.modelParameters.count > index and param.unit == comp1.modelParameters[index].unit:
                    comp1.modelParameters[index].expression = param.name
                else:
                    hasMisalignment = True
        else:
            ui.messageBox('Select two components')

        if hasMisalignment:
            ui.messageBox('Some parameters were not copied because of the order of the parameters and incompatible units.')
    except:
        if ui:
            ui.messageBox('Failed:\n{}'.format(traceback.format_exc()))
|
STACK_EDU
|
require_relative '../lib/pay/merchant'
require_relative '../lib/pay/db'
require_relative '../lib/pay/user'
require_relative '../lib/pay/transaction'
RSpec.describe Pay::Transaction do
  before :each do
    Pay::DB.remove_db
  end

  context "creating" do
    it "creates transaction with all required data" do
      merchant = Pay::Merchant.new(["m3", "m3@email.com", "0.5%"]).save
      user = Pay::User.new(["u3", "u3@email.com", "500"]).save
      txn = Pay::Transaction.new(["u3", "m3", "200.50"])
      expect(txn).to be_kind_of(Pay::Transaction)
    end

    it "raises error if any of the required data is missing" do
      expect{ Pay::Transaction.new(["u3", "m3"]) }.to raise_error(Pay::Error, "User name, merchant name, amount are mandatory")
    end

    it "raises error if user does not exist" do
      expect{ Pay::Transaction.new(["non_existent", "m3", "200"]) }.to raise_error(Pay::Error, "User does not exists")
    end

    it "raises error if merchant does not exist" do
      user = Pay::User.new(["u2", "u2@email.com", "500"]).save
      expect{ Pay::Transaction.new(["u2", "non_existent", "200"]) }.to raise_error(Pay::Error, "Merchant does not exists")
    end

    it "raises error if transaction amount is not valid" do
      merchant = Pay::Merchant.new(["m3", "m3@email.com", "0.5%"]).save
      user = Pay::User.new(["u3", "u3@email.com", "500"]).save
      expect{ Pay::Transaction.new(["u3", "m3", "0"]) }.to raise_error(Pay::Error, "Provide an positive integer value transaction amount")
      expect{ Pay::Transaction.new(["u3", "m3", "str"]) }.to raise_error(Pay::Error, "Provide an positive integer value transaction amount")
      expect{ Pay::Transaction.new(["u3", "m3", "-1"]) }.to raise_error(Pay::Error, "Provide an positive integer value transaction amount")
    end
  end

  describe ".all_transactions" do
    it "returns all transactions from db" do
      merchant = Pay::Merchant.new(["m3", "m3@email.com", "0.5%"]).save
      user = Pay::User.new(["u3", "u3@email.com", "500"]).save
      user1 = Pay::User.new(["u1", "u1@email.com", "500"]).save
      Pay::User.record_transaction(["u3", "m3", 100])
      Pay::User.record_transaction(["u1", "m3", 100])
      txns = Pay::Transaction.all_transactions
      expect(txns.length).to eq(2)
      expect(txns.map(&:class).uniq[0]).to eq(Pay::Transaction) # all objects are of Transaction type
      expect(txns[0].user).to eq("u3")
    end
  end
end
|
STACK_EDU
|
Sunil Pai is a Software Engineer at Facebook, React team member, and the creator of Glamor. He joins us on The Undefined to talk about the state of the web, the past, present, and future of text editors and IDEs, how he learned to code, and how our community needs to evolve to survive.
- Sunil Pai – Twitter, GitHub
- Ken Wheeler – Twitter, GitHub, Website
- Jared Palmer – Twitter, GitHub, Website
- Stop writing code - Sunil's talk at React Europe 2018
- The “Something” Statements - Sunil's talk at React Rally 2018
- Visual Basic
- Adobe Flash
- Atom Editor
- Visual Studio Code
- List of mergers and acquisitions by Microsoft
- Prettier - Opinionated Code Formatter
- VSCode Color Picker Extension (aka the "Fady Gradient F**ker")
- Framer X
- Glamor - Sunil's CSS-in-JS library for react et al
- Atlaskit by Atlassian - Atlassian's official component library
- Salesforce's Commerce Cloud (formerly Demandware)
- Sketch Symbols
- Glamor's CSS selectors (e.g. ":hover")
- JSX spec
- Vue-loader Scoped CSS
- React Hooks - They let you use state and other React features without writing a class.
- "My Coding Journey" - Revel Carlberg West @ ReactNYC
- React createClass
- Create React App - Set up a modern React web app by running one command.
- Codesandbox.io - CodeSandbox is an online editor that helps you create web applications, from prototype to deployment.
- AWS Cloud9 - A cloud IDE for writing, running, and debugging code
- Web Audio API
- Android Studio/SDK
- Service Worker API
- Web Components API
- "Tumblr will ban all adult content on December 17th"
by Shannon Liao, The Verge, Dec 3, 2018
- Codepen.io - A front end web development playground.
- Sunil's Gist on CSS-in-JS: "How does writing CSS in JS make it any more maintainable?" - (Hacker News Thread, Original Tweet)
- Addy Osmani, Sarah Drasner, and Dan Abramov (blessed be he)
- Mozilla Firebug Editor Extension for Firefox
- "What is a JavaBean exactly?" - Stack Overflow
- Array.prototype API on MDN
- Ken Wheeler on Spotify
- kenwheeler/cash - Ken's absurdly small jQuery alternative for modern browsers
- FormidableLabs/react-music - Make beats with React!
- Rust Programming Language Tutorial
|
OPCFW_CODE
|
Welcome to the jungle, where the animals are not only wild but also quite aggressive! Let’s see how to code a poker betting game in pure Java. It doesn’t get much simpler than this: we will be creating a Java application where users will place bets on the outcome of a poker game. The application will take into account all of the relevant rules and regulations in order to properly execute the transactions made by users. Let’s get to work.
The Low Hanging Fruit
Before we start coding, let’s take a quick detour to discuss the requirements of this particular project. This application will be based on certain design principles which will make it more organized and easier to maintain. One of the most important things to keep in mind is that we need to follow established practices whenever possible. Another principle worth considering is to always opt for the simplest solution; in this case, if we take a step back, we will realize that our requirements are quite simple and can be satisfied entirely by leveraging off-the-shelf Java library classes. With that said, let’s dive into the coding.
The Poker Game
As stated above, this application will allow users to place bets on the outcome of a poker game. For the sake of this example, let’s assume that we are dealing with Texas Hold’em poker. The rules and regulations of this particular game model are fairly simple, and are as follows:
- Each player is dealt two cards face down
- Each player is allowed to look at their cards, but not show them to anyone else
- Players must place their bets before the commencement of the first round of betting
- The dealer’s card is revealed at the end of the hand
- The player with the highest card wins
- Players can change their bets as often as they want during the game
- There is a continuous betting round, where players can adjust their bets as often as they want
- Players are not allowed to make duplicate bets
- Players must declare their intentions to the dealer at the beginning of each round; this is to ensure that nobody rigs the game in favour of another player
- A player’s hand is determined by the two cards dealt to them and the sum of their bets
- The dealer’s function is to take in account all of the bets placed and to randomly deal out cards to the players
- If two players’ hands are tied, the game results are a push
- Players are required to follow all relevant betting rounds and to keep track of their own bets
Since we already have the rules of the game laid down, we can start creating the application. With that said, let’s write some Java code to model the game:
As we mentioned above, this application will allow users to place bets on the outcome of a poker game. In order to keep things organized, let’s create a separate class to model the game. This class will serve as the foundation of our application and will encapsulate all of the logic required by the poker game rules. As such, this class will contain only minimal constructors which will be used to initialize the class with the necessary values. In the next section, let’s discuss how to create this class.
Game Rules And Initialization
This class will adhere to certain software design conventions that will make it easier for other programmers to follow along and understand its logic. One of the most important things to keep in mind is that this class will be used as the basis for our application, and as such, it will not directly interact with the users. For that reason, all of the members of this class will be private, except for the main constructor:
- Sets up private variables that will be used throughout the class
- Implements the necessary operations required by the rules of the poker game
- Creates a private instance variable to store the dealer’s card
- Creates private instances for each player to store their two cards
- Declares a private constructor to avoid instantiating this class
- Implements the necessary checks to ensure that the class follows the rules of Texas Hold’em Poker
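Putting those requirements together, a rough sketch of such a class might look like this (purely illustrative, since the article shows no actual code; every name below is invented):
import java.util.HashMap;
import java.util.Map;

public class PokerGame {
    private String dealerCard;                                          // the dealer's card, revealed at the end of the hand
    private final Map<String, String[]> playerCards = new HashMap<>();  // each player's two face-down cards
    private final Map<String, Integer> bets = new HashMap<>();          // each player's current bet

    // private constructor, as described above; a static factory creates
    // instances and can enforce the game rules at creation time
    private PokerGame() {}

    public static PokerGame newGame() {
        return new PokerGame();
    }

    // players can change their bets as often as they want during the game
    public void placeBet(String player, int amount) {
        bets.put(player, amount);
    }
}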
Once we have a class for modeling the game, let’s move on to the next step.
The Database Persistence
If our application is going to be an interactive one where users are going to be making frequent transactions, then we need to find a way to save the state of the application between sessions. For this reason, let’s create a separate class to handle the database persistence of this application. This class will keep track of all of the transactions that are made by the users and will ensure that the states of the application are persisted in a reliable manner. Let’s call this class the TransactionManager:
In this class, we will encapsulate all of the logic required by the database persistence of the application. This class will have two main responsibilities:
- Track the states of our application, and ensure that these states are saved coherently
- Allow users to make transactions, and keep track of the actions taken by the users
As you can see, this class will have a single instance variable which will be of type HashMap. The reason for this is simple: we do not want to directly access the database from this class. For this reason, we are using a Map to store the data as a concierge of sorts; the database access will be handled by a separate class, the DBConfiguration.
The HashMap will save all of the states of our application as keys and values. These keys will be Strings which will denote the state of the application at a given moment (e.g., “NEW_GAME”, “USER_LOGGED_IN”, “BETTING_ONGOING”, “FINAL_ROLL”).
As a result of these keys, this map will contain a wealth of information about the current status of the application. This map will have several getters and setters, and these will be used to read and write to the database. For example, the getUserLoggedIn() method will return true if the user is currently logged in, and false otherwise.
The class will also have a single method which will be used to persist the states of our application. This method will take in a HashMap of the states to be saved.
The reason we need to handle the persistence of the application is simple: if a user is not logged in, then they will not have a means to interact with the application. For this reason, let’s ensure that the application will remain functional even if the user is not logged in. When the user logs in, we will need to ensure that their details are stored in the database and that all of the relevant states are restored when they log out.
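A minimal sketch of what such a TransactionManager might look like (again illustrative; the DBConfiguration call is hypothetical, since the article only describes that class):
import java.util.HashMap;
import java.util.Map;

public class TransactionManager {
    // single instance variable of type HashMap, acting as a concierge
    // between the application and the database layer
    private final Map<String, Object> states = new HashMap<>();

    public boolean getUserLoggedIn() {
        return Boolean.TRUE.equals(states.get("USER_LOGGED_IN"));
    }

    public void setUserLoggedIn(boolean loggedIn) {
        states.put("USER_LOGGED_IN", loggedIn);
    }

    // persist the given states; real database access would be delegated
    // to the separate DBConfiguration class the article mentions
    public void persist(Map<String, Object> statesToSave) {
        states.putAll(statesToSave);
        // DBConfiguration.save(states); // hypothetical call
    }
}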
The Presentation Layer
Since this is a GUI application, we need to create a separate class to handle the user interface. For the sake of convenience, let’s call this class the UI Manager:
The role of the UI Manager will be to take care of all of the interactions which can be made by the users with the application. This class will have two responsibilities:
- Present the information to be displayed to the user
- Take in account the states of our application, and present the user with relevant feedback
As we mentioned above, this application will allow users to place bets on the outcome of a poker game. For the sake of this example, let’s assume that we are dealing with a standard deck of cards, where each card is valued at its face value. For this reason, we will not need to worry about dealing with currency issues in this application. Let’s also assume that all of the players are using real money, and that they are ready to place their bets. As a result, this application is now in a position to start accepting bets from the users.
Since we have a class to take care of the database persistence of our application, let’s move on to the next step.
|
OPCFW_CODE
|
[Bug] Installation fails on <IP_ADDRESS>
SailfishOS VERSION (Settings → About product → Build): <IP_ADDRESS>
HARDWARE (Settings → About product → Manufacturer & Product name): Xperia 10 Plus
SailfishOS:Chum GUI application VERSION (<Top pulley> → About): 0.3.0-1
BUG DESCRIPTION
Installation fails on <IP_ADDRESS>.
2023-02-17T11:26:39+01:00 [Info] PID 14744 is logging to /var/log/sailfishos-chum-gui-installer.log.txt
2023-02-17T11:26:40+01:00 [Debug] Installed, now running: sailfishos-chum-gui-installer-0.3.0-1
2023-02-17T11:26:42+01:00 [Step 1 / 2] pkcon -pv repo-set-data sailfishos-chum refresh-now true
11:26:42 PackageKit Verbose debugging enabled (on console 0)
11:26:42 PackageKit role now repo-set-data
Transaction: Setting data
11:26:42 PackageKit notify::connected
Status: Waiting in queue
Status: Waiting for authentication
Status: Waiting in queue
Status: Starting
Status: Querying
Status: Starting
Status: Running
Status: Refreshing software list
Status: Finished
Results:
2023-02-17T11:26:44+01:00 [Debug] sailfishos-chum-gui-installer's main script (PID: 14744) finishes
2023-02-17T11:26:45+01:00 [Step 2 / 2] pkcon -pvy install sailfishos-chum-gui
11:26:45 PackageKit Verbose debugging enabled (on console 0)
11:26:45 PackageKit role now resolve
Transaction: Resolving
Status: Waiting in queue
Status: Starting
Status: Querying
Package not found: sailfishos-chum-gui
11:26:45 PackageKit role now resolve
Transaction: Resolving
Status: Waiting in queue
Status: Starting
Status: Querying
Package not found: sailfishos-chum-gui
Command failed: This tool could not find any available package.
STEPS TO REPRODUCE
Try to install SailfishOS:Chum GUI Installer on a device without Chum.
ADDITIONAL INFORMATION
I also downloaded the RPM from your 0.5.5 zip to install 0.5.5, but the RPM installation fails because it links to libpackagekitqt5.so.0.
Thank you for the proper bug report.
ADDITIONAL INFORMATION
I also downloaded the RPM from your 0.5.5 zip to install 0.5.5, but the RPM installation fails because it links to libpackagekitqt5.so.0.
This is expected, because it is compiled for SailfishOS 4.4.0, as the directory name inside the ZIP-archive shows. Should be fine with the next release.
STEPS TO REPRODUCE
Try to install SailfishOS:Chum GUI Installer on a device without Chum.
I will look at this issue; hope to find some time for it this weekend.
I assume yesterday the target "<IP_ADDRESS>" was not configured for the
SailfishOS:Chum community repository at the SailfishOS-OBS; now it is, see at the bottom right of this page: https://build.merproject.org/project/show/sailfishos:chum
Thus the SailfishOS:Chum GUI Installer should be working fine now.
It does. I removed and installed the GUI installer again and it worked fine this time.
Thank you!
Thank you for reporting and testing again!
|
GITHUB_ARCHIVE
|
In order to impose term limits on Congress, we would need to amend the Constitution. In order to do that, we need the support of Congress. Essentially, we would need congressmen to give up the power, prestige, and profits that come with being a US congressman. Beyond that, we would also need to limit their benefits after they leave office. Since we will be making more of them, we don't want to have to pay more for them to leave.
Not only should Congress have term limits, I believe that they should be limited to one term. Most members of Congress are more concerned with gaining re-election than with doing what is best for their constituents. If they only had a limit of one term, then there would be no reason to worry about re-election.
Virtually all other offices in the United States have term limits, and it's about time that members of Congress have them too. The terms should be lengthened, though, from 2 to 4 years, with a maximum of 3 terms. Having politicians sit in Washington their entire lives isn't good for democracy.
I believe this is a must. Having lawmakers in office for so long causes issues with advancement. Once someone has been in office for more than 10 years, it becomes unproductive.
Opponents of term limits argue that they are bad for communication, the learning curve, etc. This is completely wrong. Without term limits, these lawmakers gain more power the longer they stay. Although we the public vote them into office, whoever can lobby best, using their seniority and communication with big companies, will always win.
The main reason we need term limits is to keep our nation advancing. We need younger lawmakers with fresh new ideas, and we need to remove older lawmakers whose ideas from back in the day are killing the nation.
We could use 4-5 year terms.
Old people are old and boring.
We should not impose term limits on members of Congress. The framers of the Constitution intended for the legislature to be the most powerful branch of the government. Term limits are a limiting factor on a politician's power. Also, members of Congress benefit from having a lot of experience in office.
Congress should not have term limits, no matter who its members are. It is up to the people to decide if they are fit to serve our country. Many members of our Congress have served many terms, and they have done so because the people voted them in. In the end, the only limit that should be required is the one on the president, so that no one becomes a dictator.
|
OPCFW_CODE
|
scip-java for Bazel does not handle .srcjar inputs
Hey, thanks for this tool. We just started trying it out to try and index Java code in our Bazel monorepo.
We hit an issue where running the Bazel aspect would hit compilation failures for some code. e.g.
info: /tmp/scip-java3739267713397442642/s/com/x/y/z/File.java:33: error: cannot find symbol
info: @MissingClass private final String some_variable;
info: ^
info: symbol: class MissingClass
info: location: class AnotherClass
Digging deeper, it turns out that MissingClass is a Java file stored in a .srcjar that is passed as an input to a Java target. We use the .srcjar mechanism to generate some Java files beforehand and easily pass them to a Java target (for those unfamiliar with srcjars in Bazel: a srcjar is simply a normal jar of .java files that Bazel unpacks before passing them to the javac compiler).
It looks like scip-java calls javac directly, but does not unpack the .srcjar contents beforehand, and so it fails.
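For reference, the pre-processing step that appears to be missing would be roughly this (hypothetical commands, reusing the paths from the repro below):
$ mkdir -p /tmp/srcjar-sources
$ unzip -o bazel-out/k8-dbg--cd/bin/testing/sources.srcjar -d /tmp/srcjar-sources
$ find /tmp/srcjar-sources -name '*.java'  # these files would need to be appended to the generated javac args file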
Here is a small repro. Interestingly, the scip-java run does work for this small repro, but we get the error message "Some SemanticDB files got generated even if there were compile errors. In most cases, this means that scip-java managed to index everything except the locations that had compile errors and you can ignore the compile errors.". However, in the larger private example, the compilation failure seems to be blocking. If we run the command that the aspect runs, we see the compilation failure.
# testing/BUILD.bazel
genrule(
name = "generated-srcjar",
outs = ["sources.srcjar"],
cmd = "echo 'package com.testing; public class Bar {};' > Bar.java && jar cf $(@) Bar.java",
)
java_library(
name = "testing",
srcs = [
"Foo.java",
":generated-srcjar",
],
)
// testing/Foo.java
package com.testing;
public class Foo {
public Bar foo(Bar value) {
return value;
}
}
$ bazel build //testing # successful
$ "/home/code/scip-java" index --no-cleanup --index-semanticdb.allow-empty-index --cwd "/home/code" --targetroot bazel-out/k8-dbg--cd/bin/testing/testing.semanticdb --scip-config "bazel-out/k8-dbg--cd/bin/testing/testing.scip.json" --output "bazel-out/k8-dbg--cd/bin/testing/testing.scip"
info: $ /opt/.../jdk/bin/javac @/home/code/bazel-out/k8-dbg--cd/bin/testing/testing.semanticdb/javacopts.txt
info: /home/code/testing/Foo.java:4: error: cannot find symbol
info: Bar value;
info: ^
info: symbol: class Bar
info: location: class Foo
info: 1 error
info: Some SemanticDB files got generated even if there were compile errors. In most cases, this means that scip-java managed to index everything except the locations that had compile errors and you can ignore the compile errors.
info: Result of /opt/.../jdk/bin/javac…: 1
info: /home/code/bazel-out/k8-dbg--cd/bin/testing/testing.scip
Also, inside of the testing.scip.json, we see the srcjar listed in sourceFiles:
{
...
"sourceFiles": [
"testing/Foo.java",
"bazel-out/k8-dbg--cd/bin/testing/sources.srcjar"
]
}
but then in the generated javacopts.txt, it disappears:
"-encoding"
"utf8"
"-nowarn"
"-d"
"/tmp/scip-java8260898732520975292/d"
"-s"
"/tmp/scip-java8260898732520975292/s"
"-h"
"/tmp/scip-java8260898732520975292/h"
"-classpath"
"/tmp/scip-java8260898732520975292/semanticdb-plugin.jar"
"-Xplugin:semanticdb -targetroot:/home/code/bazel-out/k8-dbg--cd/bin/testing/testing.semanticdb -sourceroot:/home/code"
"-source"
"11"
"-target"
"11"
"-g"
"-parameters"
"/home/code/testing/Foo.java"
As a side note, I notice that scip-java calls javac directly from the provided JAVA_HOME. Bazel has a wrapper around javac that you'll see in the "Javac" action. Two points follow from this:
It seems sub-optimal to rely on the JAVA_HOME variable set on the system, as it can potentially differ from what is configured for Bazel. Maybe scip-java could read the java_home from the toolchain instead, so this environment variable doesn't need to be used?
It could be done by adding an attr to the aspect, then accessing it like so:
attrs = {
"_jdk": attr.label(default = Label("@bazel_tools//tools/jdk:current_host_java_runtime")),
},
...
"javaHome": ctx.attr._jdk[java_common.JavaRuntimeInfo].java_home,
# also need to include 'ctx.attr._jdk[java_common.JavaRuntimeInfo].files' as an input to the ScipJavaIndex action
I tried testing by modifying the command line that Bazel uses to compile the Javac action (i.e., adding -Xplugin:semanticdb ... to javacopts and semanticdb-plugin.jar to the classpath in the command line). This did seem to work, handling the .srcjar appropriately and generating the SemanticDB data. It might be an avenue to explore, but I'm unsure how complex it would be to integrate.
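Roughly, that experiment amounts to something like the following in the BUILD file (a sketch only, not a working patch; the semanticdb_plugin target wrapping semanticdb-plugin.jar is hypothetical):
java_library(
    name = "testing",
    srcs = ["Foo.java", ":generated-srcjar"],
    javacopts = ["-Xplugin:semanticdb -targetroot:/home/code/bazel-out/k8-dbg--cd/bin/testing/testing.semanticdb -sourceroot:/home/code"],
    plugins = [":semanticdb_plugin"],  # java_plugin with semanticdb-plugin.jar in its deps
)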
@JohnnyMorganz are you using scip-java as a Sourcegraph customer? If so, would you mind reaching out through your account's assigned Support Engineer for help?
Yes we are, I can forward it to them thanks
|
GITHUB_ARCHIVE
|
My name is Dede Precious. An alumna of AbaLIFE 2017 Batch A. I heard about AbaLIFE after my secondary school education through a friend of mine in July 2016. When she told me that there is a training center here in Aba where computer courses, skills, and other training could be acquired free without paying a dime, I doubted her. I became so curious that I had to follow her to the center when she wanted to return her application form.
When I got there, I exclaimed, what a nice place! The environment amazed me and made me curious as well. I nudged my friend and asked her again: are you sure you know what you are talking about? Will they not demand money along the line? She told me no, though I was not confident in her answer. I decided to meet the program officer, who came out and explained everything. I was amazed, and out of my curiosity, I asked him to show me the ICT room.
To my greatest surprise, I saw a room fully air-conditioned and equipped with computer sets and projectors. The program officer told me that each enrolled student is entitled to one computer set until he or she has completed the training. I could not believe my ears, because the other computer centers I know would provide a computer set, true, but it would not be your permanent computer for the duration of the training.
Having heard this, I was fully convinced, because I wanted to acquire ICT skills but could not afford the money at that moment: I was still a fresh school leaver with no capital at hand, and my parents could not afford it either due to financial constraints.
On getting home, I told my parents that I had seen a training center where I could learn computer lessons free of charge. I had to cancel my trip to Owerri to attend the program. When I returned a few days later to pick up the form, registration had ended for that batch. I was disappointed but did not lose hope. I decided to wait for the next batch.
During my waiting period, I got a teaching job that lasted until the end of 2016. At the beginning of 2017, I left the teaching job and applied for the program. I picked up the application form, followed the procedures, and got selected for the training. I had difficulties with transportation because my house was far from the center. At times I had to trek for an hour to the training ground for lack of transport.
Though distance could have been a barrier, what I saw, learned, and practiced on my first day kept motivating me. The lessons were very simple and easy, and what excited me most was the practical aspect, where you practice alongside the facilitator, and also the opportunities given to display your work as a presentation on a projector screen. I had thought ICT was for a particular set of people and that it takes years of training, but to my surprise, I acquired the skills within the space of 10 weeks.
After acquiring this knowledge, it changed my mentality and got me more exposed. I got a better teaching job, with a more attractive salary than the one I had before the training. I have also lectured my elder siblings on ICT, and I encouraged my elder sister to start a business, teaching her the importance of product branding and packaging. Now I can browse the internet freely and get information from it. I would also love to learn more about web design and development.
|
OPCFW_CODE
|
I’ve added a couple of pieces of “functionality” (not obviously useful) to my previously posted Ski Trip Player (here and here, and now HERE). There is now a button on the map portion that allows the user to switch between projections (1. Mercator and 2. Lambert Azimuthal Equal Area). The entire map switches and the little ski man keeps moving along. The trickier part was creating the compass rose and having it rotate within the map boundary, which it does. Take a look at the source code to see how I do it; I’m positive there is a more efficient way. Let me know!
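For anyone curious, the projection switch boils down to something like this (a sketch using the current d3-geo API rather than the exact code from the player; the element and class names are made up):
const projections = {
  mercator: d3.geoMercator(),
  laea: d3.geoAzimuthalEqualArea()
};
let current = "mercator";
const path = d3.geoPath().projection(projections[current]);

d3.select("#projection-toggle").on("click", () => {
  current = (current === "mercator") ? "laea" : "mercator";
  path.projection(projections[current]);
  d3.selectAll("path.trail").attr("d", path); // redraw every trail under the new projection
});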
I had some time to kill waiting for a Python script to wrap itself up, so I worked on my D3-powered Ski Trip Player. I focused on the elevation profile, adding: 1) diverging color scale to indicate climbing/sliding areas, 2) horizon line to demarcate the travel path, and 3) legend. I suspect I’ve compromised looks for function in this iteration, but it was instructive nonetheless.
It’s official – I like D3. Here‘s an example using XML data from one of my ski excursions (collected by GPS watch, downloaded from Garmin Connect), and trail data from a Google Fusion table (run through ft2JSON to get a JSON response). There’s an animated elevation profile with associated animated map (over 1700 points animated!). It works well, but as always, please feel free to take a look at the code and see if you can make it more efficient.
Due to a mid-winter heatwave (chinook), last week was too hot for cross-country skiing. I found myself with plenty of spare time on my hands to think about cross-country skiing, though. That led to me checking the Hinton Nordic Centre website, which resulted in me looking at the ski trail conditions (a text table), which made me want a map showing groomed trails, which prompted me to build one (here).
The map shows:
- Grooming history (symbolized by days since last groomed). Currently I’m the only one keeping this data, stored in a Google Fusion Table, up to date. If you’d like to be able to edit the table, let me know and I’ll set you up with a password.
- Trail difficulty (from here)
- Dog friendly trails (from here)
- Optional nordic centre overlay map
There is also a Twitter feed, showing tweets containing the hashtag #hintonnordiccentre. I seriously hope people will tweet the current temperature from the trails, since there can be a drastic difference between the nordic centre and town.
Like any of my other Nordic Centre maps, this is entirely my own personal project (i.e. not made by the Hinton Nordic Centre), not to be confused with an official Hinton Nordic Centre project, and hopefully I’m not stepping on any toes with this (especially the groomers, you’re amazing human beings).
Here (http://darrenwiens.net/nordiccentre_animated.html) is an old project I made last year and updated over the holidays. It’s a map of the Hinton Nordic Centre, with the handy feature of calculating the length of your route. Click and drag the green points to change the start and end positions. Click and drag anywhere on the blue highlighted route to move that part of the path. Oh, if you notice an error in the trails (for example, a trail that doesn’t exist), go ahead and change it using Google Map Maker.
Disclaimer: this was something I did for fun, because I love skiing at the Hinton Nordic Centre, and I hate calculating my distances. This is not any sort of official product of the Nordic Centre. Also, this map does not work well in old browsers like Internet Explorer 8, but if you are using it, it’s time to upgrade anyway.
Click here for an example of using the Google Maps API new animated symbols. Drag the start and end points, or any point in between, to change the path.
|
OPCFW_CODE
|
This class is about two things. First, it’s about the abstractions of the operating system that sit between your code and the hardware of the computer. Understanding how these features work allows you to make your programs fast and efficient beyond their Big-O runtime.
Second, this class is about critical thinking and problem solving applied to practical computer science problems. The engineers who designed the operating systems we use today had to solve hundreds of hard engineering problems that no one had ever solved before. This course is designed to help you gain the problem-solving skills necessary to solve hard practical computer science problems. This is directly applicable to three domains:
- You will be ready to reason about and design performant software systems, as well as diagnose and solve problems throughout the software stack that supports modern software (the software itself, along with its libraries, operating system, computer hardware, and how those pieces interact).
- You will be ready to decompose and solve the types of questions that are asked in technical interviews at many of the best employers of software engineers.
- By practicing technical problem solving within a specific context, you train yourself to better decompose, communicate about, reason about, and solve technical problems more broadly.
In addition to oral and written problem solving exercises, there are also a collection of programming assignments. Completing these assignments will give you experience with the many steps along the path from source code to running program, memory management and virtual memory, process creation, communication and control, and a healthy dose of concurrent programming.
By the end of this course, you will have a good understanding of the main elements that work together to form modern computing environments. You will have acquired some familiarity with standard diagnostic tools, debuggers, dynamic memory allocation, concurrent programming, and file I/O both with physical files and network sockets.
The main conceptual prerequisites for this class are CS 211 (the C part), CS 261 (machine organization), and CS 251 (data structures). A solid understanding of the theory of how things are stored in the computer, as well as the theory of how a processor executes instructions, as well as a basic understanding of programming (and specifically programming in C) are the tools you’ll need to succeed in this class.
Whenever possible, course information will be conveyed using this website. Course discussion will happen via Piazza. Course assignments and assignment grades will be collected and returned through Gradescope. We will use Blackboard Collaborate for synchronous class sessions (but might use MS Teams also/instead depending on student preferences). You are responsible for checking this website for the reading schedule and ensuring that you complete all assignments, and keeping up to date on Piazza for any corrections/clarifications regarding assignments or other important information.
Typically, this course is taught using Peer Instruction, a teaching model which places stronger emphasis on classroom discussion and student interaction. This doesn’t map so well onto remote instruction, but we will be trying to approximate it the best we can. Typical peer instruction consists of readings before class, a short beginning of class quiz, and a collection of discussion questions during class that you complete by yourself, discuss in small groups, and then discuss with the entire class.
This semester, the intended flow of a week of class is this:
- Throughout the week: there will be assigned readings, video lectures, and discussion questions.
- Mondays: there is a synchronous, online, required lab session with a short activity and a Gradescope quiz based on that activity that must be completed before the end of the day.
- Tuesdays: open Q&A office hours.
- Thursdays: content quizzes and discussion questions are due. I will go over discussion question answers in class. Discussion questions are loosely based on exam question style, structure, and content.
Like peer instruction, we will still have required readings and required videos. Rather than having a quiz at the beginning of class, there will be a quiz graded for correctness each week based on that content. These will be released on Wednesdays and due at 12:30pm on Thursdays.
Each week there will also be one set of discussion questions. These questions are meant to be considered by yourself, but you are encouraged to discuss the questions in small groups. You must answer these questions on Gradescope as well, but you will only be graded on completeness and effort, not on correctness.
Homework late policy
Every assignment in this course is due at exactly the time stated on Gradescope, and while we will grade late assignments, they earn zero credit.
Gradescope deadlines are precise - an assignment is late if it was turned in one millisecond or one month late.
Gradescope deadlines are universal - you must turn in your code, and it doesn’t matter whether you didn’t turn it in because it wasn’t compiling, or couldn’t upload it to git, or couldn’t upload it to gradescope. You can turn in homework assignments an unlimited number of times, so we recommend that you turn them in early and often.
Because these deadlines are so rigid, by default we will not include your lowest exam, your lowest homework, and your two lowest lab, discussion, and quiz scores in your final grade for the class. The later assignments and exams in the course are more difficult than the earlier ones, and there is no exceptional late policy - we recommend that you do not use these unless you genuinely need to, so that they’re available if unexpected issues come up.
If your lowest exam and/or lowest homework grades are higher than your course average, we will include them in the calculation of your final grade. This means that your lowest exam and lowest homework can’t hurt your final grade, they can only help it, so it will always be worth it to complete every assignment.
Grades are curved based on an aggregate course score. This means that the course score cut-offs for an A, B, C etc. are not defined ahead of time: these will be set after the end of the course. There is no quota for grade assignments, and there will be a hard “ceiling” for the curve of 90/80/70/60 (i.e. the cutoff for an A will never be higher than 90% of all points possible in the course). Each individual quiz/problem set/homework/exam is worth the same as each other quiz/problem set/homework/exam - i.e. each of the 13 reading quizzes is worth 10%/13 ≈ .77% of your final grade.
The course grade weighting is:
| Task | % of total grade |
|------|------------------|
| Video/reading quizzes (15, lowest two dropped) | 10 |
| Class discussion question sets (15, lowest two dropped) | 10* |
| Lab activities (14, lowest two dropped) | 10 |
| Homeworks (6, lowest is dropped) | 30 |
| Exams (5, lowest is dropped) | 40 |
Class participation is an incredibly important component of this course regardless of whether it is online or in person. Unfortunately, “participation” is very hard to grade. The 10% of your grade that is marked as “participation” is largely an honor system - you aren’t required to get the questions correct or write several paragraph long answers to the discussion questions to get full credit. However, students who meaningfully engage with each other, either through Piazza or during the class discussion sessions can raise their discussion grade up to a maximum of 20/10. Additionally, I will consider high quality discussion question answers for students who are very close to any grade cutoffs at the end of the semester. Submitting bug fixes or providing test cases for homework assignments can also earn extra credit.
We will be using Computer Systems, a programmer’s perspective by Randal E. Bryant and David R. O’Hallaron, 3rd edition, as our main textbook. We will be covering the content from Chapter 7 through the remainder of the book.
You may also find The C Programming Language by Kernighan and Ritchie (colloquially referred to as K&R) a helpful reference when writing C programs.
Overcoming challenges enables growth
This is not a lecture-oriented class or one in which mimicking prefabricated examples will lead you to success. You will be expected to work actively to construct your own understanding of the topics at hand, with the readily available help of the instructors and your classmates. Many of the concepts you learn and problems you work will be new to you and ask you to stretch your thinking. You will experience frustration and failure before you experience understanding. This is part of the normal learning process. Your viability as a professional in the modern workforce depends on your ability to embrace this learning process and make it work for you. You are supported on all sides by the professor and your classmates. But no student is exempt from the process and the hard work it entails.
We value your mental health and emotional wellness as part of the UIC student experience. The UIC Counseling Center offers an array of services to provide additional support throughout your time at UIC, including workshops, peer support groups, counseling, self-help tools, and initial consultations to speak to a mental health counselor about your concerns. Please visit the Counseling Center website for more information (https://counseling.uic.edu/). Further, if you think emotional concerns may be impacting your academic success, please contact your faculty and academic advisers to create a plan to stay on track.
If you have a disability that might impact your performance in this course or otherwise requires special accommodation, please contact me as soon as possible so that appropriate arrangements can be made. Support is available through the Disability Resource Center. You will need to contact them to get your disability documented before accommodations can be made.
Consulting with your classmates on assignments is encouraged, except where noted. However, turn-ins are individual, and copying code from your classmates is considered plagiarism. For example, given the question “how did you do X?”, a great response would be “I used function Y, with W as the second argument. I tried Z first, but it didn’t work”. An inappropriate response would be “here is my code, look for yourself”. You should never look at someone else’s code, or show someone else your code. Either of these actions are considered academic dishonesty (cheating) and will be prosecuted as such.
To avoid suspicion of plagiarism, you must specify your sources together with all turned-in materials. List classmates you discussed your homework with and webpages from which you got inspiration. Plagiarism and cheating, as in copying the work of others, paying others to do your work, etc, is obviously prohibited, and will be reported. We will be running MOSS, an automated plagiarism detection tool, on all handins.
There are consequences to cheating on two levels - the consequences for your grade, and the consequences at the university level. Within class, the first time cheating on a programming assignment or problem set will result in a 0 on the assignment. A second time on a programming assignment, or first time on an exam will result in failing the class. Egregious cheating on a programming assignment (including but not limited to purchasing a solution online) is also grounds for failing the class.
I report all suspected academic integrity violations to the dean of students. If it is your first time, the dean of students allows you to informally resolve the case - this means the student agrees that my description of what happened is accurate, and the only repercussions on an institutional level are that it is noted that this happened in your internal, UIC files (i.e. the dean of students can see that this happened, but no professors or other people can, and it is not in your transcript). If this has happened before, in any of your classes, this results in a formal hearing and the dean of students decides on the institutional consequences. After multiple instances of academic integrity violations, students may be suspended or expelled. For all cases, the student has the option to go through a formal hearing if they believe that they did not actually violate the academic integrity policy. If the dean of students agrees that they did not, then I revert their grade back to the original grade, and the matter is resolved.
|
OPCFW_CODE
|
During this last month, I've been working to improve the code I've already written and to cover the last details for this feature in order to work like previewed in the mockups.
Here's the branch of my final submission.
This is a summary of all the work I've done so far in my GSoC project.
display one forecast in weather;
- I've updated the calendar widgets ordering, and also the weather section according to the mock-up.
move notification UI to its subclasses (I mentioned why in this post);
- The base class of a notification shouldn't be responsible for its UI anymore, because the subclasses don't share any similarities, and if any new type of notification needs to be created, its style can be fully implemented. This makes maintenance easier.
update notification icons handling;
- A regular notification will always display the icon of its source on the top left corner, and for that, we need the app icon from its desktop entry.
- A new class to represent the group that will also handle its notifications. In this post, I detailed this implementation.
- Layout manager created to handle the collapsed/expanded states of a source group. In this post, I detailed this implementation.
remove max notifications limit;
- In the current version of the Shell, there is a limit in the code that prevents a notification source from having more than 3 notifications in the queue. Now that we have the groups per app with the stacked logic, there's no need to keep this limit anymore.
add group expanding animations;
- In my last post, I showed you the code I had written at that point: the groups were working, but without any kind of animation. I've now added animations to create smoother opening and closing transition effects.
update message footer through loop;
- As more and more notifications get received and stacked into a source group, the most recent one needs to display a counter if the group is collapsed, so the message footer is updated whenever the stack changes.
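A minimal sketch of that counter update in plain JavaScript (the names here are illustrative, not the actual Shell API):

// Illustrative only: refresh the counter on the newest message
// whenever the stacked notifications change.
_updateStackedCounter() {
    const count = this._notifications.length;
    // Show the counter only while collapsed with more than one message.
    const visible = !this.expanded && count > 1;
    this._latestMessage.setCounterText(visible ? `${count}` : '');
}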
While inspecting the code I also found some unused code, and opened a merge request to remove it: !1346 - messageTray: Remove
To be completed
To merge this branch into Shell's code, there's some work left to do. Below is what I plan to continue working on:
- Add actions in the notification messages;
- Blur the messages around the expanded group;
- Collapse the expanded group if the user clicks outside the popup or the stack area;
- Add the option to customize the planner column;
- Minimal adjustments to the UI:
- Update the design of the header button for the expanded group;
- Improve the avatar/cover handling of a notification message;
The ending of a journey
I want to thank the GNOME team for accepting and supporting me on this amazing journey.
I also want to thank my mentor, Florian, for his patience and for his help throughout this project, which certainly doesn't end here.
Thanks for reading!
|
OPCFW_CODE
|
Support urdf geometry types for display (and collisions ?)
The urdf standard allows the geometry (visual or collision) for a link to be either a mesh file, or one of three simple geometric primitives: box, cylinder or sphere. However, pinocchio is only able to load meshes, and ignores the other types.
I think it would be nice to support the display of the other basic geometry types as well. It's useful for demos and examples on very simple systems (like an inverted pendulum), because it makes the URDF standalone. Beyond that, it would make pinocchio::RobotWrapper compatible with any URDF: currently a user may be a bit disappointed to see that his perfectly valid URDF is not displayed by pinocchio.
Note that this is only a limitation of pinocchio: both gepetto-viewer and meshcat support the display of these objects easily.
I remember that at one point - up to early 2018 I would say - pinocchio supported these primitives, at least on the Python side for visual display. I had a quick look but didn't find a clear trace of when this was removed; currently the limitation lies with pinocchio::GeometryObject only supporting meshes.
Do you remember why this was removed? I'm guessing it was because this feature was hard to maintain, especially if we want to be able to use it for collision checking as well. And this may remain a valid reason not to add it back.
If we ignore collision checking - which is something I would be fine with, and is anyway already the current behavior of pinocchio - and just want to support visuals, a relatively simple solution would be to add attributes to the GeometryObject to support all types (a type enum plus radius, length, and size are enough to support the three new primitives). The attributes would be unset in the case of a mesh geometry (just as, currently, a geometric primitive from the URDF is represented in pinocchio as a GeometryObject with an empty mesh path). Then, I would modify the Python interfaces to both viewers to display the new geometric primitives as well.
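A rough sketch of what those extra attributes could look like (purely illustrative C++, not pinocchio's actual API):

// Hypothetical additions describing URDF primitives alongside meshes.
enum class GeometryType { Mesh, Box, Cylinder, Sphere };

struct PrimitiveParams {
  GeometryType type = GeometryType::Mesh; // Mesh => use the mesh path as today
  double radius = 0.0;                    // sphere and cylinder
  double length = 0.0;                    // cylinder
  double size[3] = {0.0, 0.0, 0.0};       // box extents
};

The viewer bindings would then switch on type, and fall back to the mesh path only for GeometryType::Mesh.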
This is because you're not compiling with hpp-fcl support.
Gepetto-viewer is well supported. For Meshcat, it should be also the case, as @gabrielebndn did a very good job for supporting both.
Indeed, I didn't look at the code there as this seemed unrelated. One could argue that you don't need FCL to display those primitives, but it's not really important IMHO.
I'll check right away for meshcat support as well.
I confirm all primitive shapes are supported both in Pinocchio and Meshcat when compiling with HPP-FCL. Maybe we should make the requirement clear somewhere in the documentation (or issue appropriate warnings if the requirement is not respected), because I understand it might be confusing.
Indeed, things were well handled by @gabrielebndn !
I'm in favor of adding a warning in the case where pinnochio is compiled without hpp_fcl and the user tries to load a geometric primitive. Something like this patch for master:
warn_no_fcl.patch.txt
@matthieuvigne, it looks good to me. Maybe you can open a PR with your patch. When you do it, please do it on the devel branch, that's where all PRs go :)
I think this issue is solved. I'll let @matthieuvigne provide his suggestion as a PR.
|
GITHUB_ARCHIVE
|
chinese corpus UnicodeDecodeError
I use window10 , python2.7
this is my file
test.py
# -*- coding: utf-8 -*-
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer
chinesebot = ChatBot("Training Example")
chinesebot.set_trainer(ChatterBotCorpusTrainer)
chinesebot.train("chatterbot.corpus.chinese")
chinesebot.get_response("早上好,你好吗?")
then i run python test.py, i get error
F:\AnacondaWork\lib\site-packages\chatterbot\storage\jsonfile.py:30: UnsuitableForProductionWarning: The J
not recommended for production environments.
self.UnsuitableForProductionWarning
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\80920\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to
[nltk_data] C:\Users\80920\AppData\Roaming\nltk_data...
[nltk_data] Package wordnet is already up-to-date!
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\80920\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package vader_lexicon to
[nltk_data] C:\Users\80920\AppData\Roaming\nltk_data...
[nltk_data] Package vader_lexicon is already up-to-date!
Traceback (most recent call last):
File "test.py", line 9, in <module>
chinesebot.train("chatterbot.corpus.chinese")
File "F:\AnacondaWork\lib\site-packages\chatterbot\trainers.py", line 117, in train
trainer.train(pair)
File "F:\AnacondaWork\lib\site-packages\chatterbot\trainers.py", line 82, in train
statement = self.get_or_create(text)
File "F:\AnacondaWork\lib\site-packages\chatterbot\trainers.py", line 25, in get_or_create
statement = self.storage.find(statement_text)
File "F:\AnacondaWork\lib\site-packages\chatterbot\storage\jsonfile.py", line 46, in find
values = self.database.data(key=statement_text)
File "F:\AnacondaWork\lib\site-packages\jsondb\db.py", line 98, in data
return self._get_content(key)
File "F:\AnacondaWork\lib\site-packages\jsondb\db.py", line 52, in _get_content
obj = self.read_data(self.path)
File "F:\AnacondaWork\lib\site-packages\jsondb\file_writer.py", line 15, in read_data
obj = decode(content)
File "F:\AnacondaWork\lib\site-packages\jsondb\compat.py", line 28, in decode
return json_decode(value, encoding='utf-8')
File "F:\AnacondaWork\lib\json\__init__.py", line 352, in loads
return cls(encoding=encoding, **kw).decode(s)
File "F:\AnacondaWork\lib\json\decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "F:\AnacondaWork\lib\json\decoder.py", line 380, in raw_decode
obj, end = self.scan_once(s, idx)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xd4 in position 0: invalid continuation byte
how can i clear the console output warning like [ntlk_data] Downloading ....
and how to solve the error.
i can't find the same error in any issue.
thanks for support
@sunchenguang the nltk downloader will look for the different zip files available on your machine; if any one of the files is not found, it starts downloading it from the server.
I think Windows often can't convert Unicode characters properly. Have you seen the same issue on a Linux machine?
REF Link: https://wiki.python.org/moin/PrintFails
@sunchenguang Try to remove database.db and re-run your script with the modification below; it is working fine on my machine.
--- a/chatterbot/input/input_adapter.py
+++ b/chatterbot/input/input_adapter.py
@@ -19,14 +19,14 @@ class InputAdapter(Adapter):
Return an existing statement object (if one exists).
"""
input_statement = self.process_input(*args, **kwargs)
- self.logger.info('Recieved input statement: {}'.format(input_statement.text))
+ self.logger.info('Recieved input statement: {%r}'.format(input_statement.text))
existing_statement = self.chatbot.storage.find(input_statement.text)
I found the file below:
F:\AnacondaWork\Lib\site-packages\chatterbot\input\input_adapter.py
I modified the file as you did, but it doesn't work.
I use the same test.py file and the console just shows the same error.
did you remove the previous database.db file?
Yes, I did. I would like to try it on Python 3.
@vkosuri this issue also happens when trying to train the bot with other Unicode languages like Persian. More details can be found in this closed but unsolved bug.
The same code works in Python 3.
Maybe Python 2 has an encode/decode bug. Just bypass the problem.
@sunchenguang
yeah.... Python 2 has encode/decode issues when using ChatterBot. Python 3+ is a good choice...
@sunchenguang I also observed this issue pops up if we train with an existing database.db; try to remove database.db and retrain your bot. It may work. Similar issue: https://github.com/gunthercox/ChatterBot/issues/567
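For anyone hitting this, a quick way to locate the offending byte before retraining (a sketch; it assumes the JSON database sits at ./database.db):

# Find where the stored database stops being valid UTF-8.
with open("database.db", "rb") as f:
    raw = f.read()
try:
    raw.decode("utf-8")
    print("database.db is valid UTF-8")
except UnicodeDecodeError as e:
    print("invalid byte %r at offset %d" % (raw[e.start:e.end], e.start))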
|
GITHUB_ARCHIVE
|
How to start the manager:
$COMMON_TOP/admin/scripts/<context>/adcmctl.sh start apps/apps_password
3. Where are the concurrent manager log files stored?
$APPLCSF/$APPLLOG/<context>/*.mgr
The ICM log file is the important log file; it has a default name of databasename_timestamp.mgr, e.g. VIS_0728.mgr. Other manager logs follow the naming convention w*.mgr or t*.mgr.
4. How to know which file is my manager logfile:
Go to Concurrent -> Manager -> Administer, choose a manager and press the Processes button. Pick the value from the Concurrent column and check it in $APPLCSF/$APPLLOG/<CONTEXT>/*<value>.mgr.
5. View the concurrent managers from the front end (applications):
Log in to apps at http://<server>:port/dev60cgi/f60cgi (or any other URL you have) and enter the username/password: username: operations, password: welcome.
Choose the System Administrator responsibility from the list.
Navigation path: Concurrent -> Manager -> Administer
Here you will see the following columns:
1. Name: the concurrent manager name
2. Node: the node on which the manager is running
3. Actual: the OS threads actually running for that manager
4. Target: the target processes defined to run for this manager
5. Running: the jobs currently running on this manager
6. Pending: the jobs pending on this manager
If Actual and Target are equal, the manager is up and running.
6. To see the description of a manager:
Navigation path: Concurrent -> Manager -> Define
Press F11 to turn the form to query mode, type the manager name (e.g. Standard%) and press Control+F11. This will give the description of the manager. To see the work shifts, press the Workshifts button and you will see what work is assigned to this manager.
7. How to change the processes for a manager:
Go to Concurrent -> Manager -> Define, choose a manager and press the Workshifts button. Change the value under the Processes column and save it (from the menu: File -> Save). Navigate to the Concurrent -> Manager -> Administer screen, select the manager whose processes you modified (click your cursor on that manager) and terminate it (press the Terminate button). Close the window, give it a break of 1-2 minutes, then log back in to the same screen and press the Verify button; that will show you the changed processes.
8. What are the OS threads related to my manager processes:
Navigate to Concurrent -> Manager -> Administer, choose a manager and press the Processes button. From the new screen, pick the values under the System column (choose the values that have Active status under the Status column) and grep for these values from the shell:
ps -ef|grep <value>
You will find the OS threads that are running for this manager.
How to find the ICM from the OS:
ps -ef|grep CPMGR
You should find one row from the OS like this:
applmgr 10894 10886 0 01:06:12 ? 4:34 FNDLIBR FND CPMGR FNDCPMBR sysmgr="" diag=N logfile=
How to find whether managers are running from the OS: grep the FNDLIBR processes:
ps -ef|grep FNDLIBR
applmgr 24565 811 0 Aug 17 ? 0:06 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/9B867E9E918EA4000000000000
applmgr 24540 811 0 Aug 17 ? 0:06 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/9B867E9E918EA4000000000000
applmgr 24562 811 0 Aug 17 ? 0:06 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/9B867E9E918EA4000000000000
applmgr 23961 811 0 Aug 17 ? 0:12 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/9B867E9E918EA4000000000000
applmgr 28419 811 0 23:25:55 ? 0:01 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/88899A84920000000000000000
applmgr 20437 811 0 Aug 19 ? 0:43 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/9B94A296977E95000000000000
applmgr 20434 811 0 Aug 19 ? 0:43 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/9B94A296977E95000000000000
applmgr 1863 811 0 Aug 22 ? 0:09 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/978D7D958F9400000000000000
applmgr 3521 811 0 Aug 22 ? 0:02 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/978D7D958F9500000000000000
applmgr 23827 811 0 Aug 17 ? 0:06 FNDLIBR FND Concurrent_Processor MANAGE OLOGIN="APPS/9B867E9E918EA400000000
You should find a couple of rows like this. (But if you need a clear description of the managers, log in to apps and check it from the Concurrent -> Manager -> Administer screen.)
CONCURRENT PROGRAMS:
How to view the concurrent programs:
Navigate to Concurrent -> Program -> Define and query for all programs (Control + F11).
How to submit a concurrent request from the apps:
Navigation: Requests -> Run. Choose Single Request. From the Name field you can pick the programs from the pick-list button (at the end of the Name column). Choose Active Users from the window (this is a concurrent program) and press the Submit button (if you want to schedule it to run at a later time, you can do that using the Schedule button). Note down the request number. Once you submit the request, a record is inserted into the FND_CONCURRENT_REQUESTS table.
Then navigate to Request -> View. Choose the All My Requests button and press Find. That will take you to a screen where you can find all your concurrent requests that are running/completed/pending.
To view the log for your request: press the View Log button.
To view the out file for your request: press the View Output button.
This will show you your request's log/out files.
How to see my requests' log/out files from the OS (from the shell as applmgr):
Log file: $APPLCSF/$APPLLOG/<CONTEXT>/l<requestid>.req
Out file: $APPLCSF/$APPLLOG/<CONTEXT>/o<requestid>.out
A few more questions:
How to see the number of processes assigned to the Standard manager:
Navigate to Concurrent -> Manager -> Define, press F11, type Standard% and press Control + F11. Make sure you have selected the right manager (using the up/down arrows). Once you select the right manager, press the Work Shifts button and see how many processes are assigned.
How to see the count of records in the FND_CONCURRENT_REQUESTS table:
Go to the shell:
$ sqlplus
username: apps
password: apps
select count(*) from fnd_concurrent_requests;
How to stop a single manager (from the front end):
Navigate to Concurrent -> Manager -> Administer, choose the manager to stop (click on that manager) and press the Terminate button.
How to start a single manager:
Go to Concurrent -> Manager -> Administer, choose the manager that is down, click on it and press the Restart button; it will be started within the next minute.
How to stop all managers:
From the shell as applmgr:
$COMMON_TOP/admin/scripts/<context>/adcmctl.sh stop apps/apps_password
How to check whether the ICM is running (from SQL*Plus):
Log in to the apps database user and execute the below:
sql> @$FND_TOP/sql/afimchk.sql
Status                                        Since                   Method
Internal Conc Manager is running on - linux   23-AUG-06 10:52:40 PM   RDBMS Log File - <logfile location>
If you get a message like the above, it is running.
How to stop managers from the shell (not using the script):
As applmgr, execute the below at the shell:
$ CONCSUB apps/apps_password SYSADMIN 'System Administrator' SYSADMIN CONCURRENT FND ABORT
How to see the complete list of manager statuses from SQL*Plus:
As applmgr, log in to sqlplus apps/apps and run:
sql> @@$FND_TOP/sql/afcmstat.sql
This will show you how many processes and which managers are running.
|
OPCFW_CODE
|
I've now moved my main projects over to using modular Java 11 as well as using git. I've quite a few projects remaining.
I'm undecided on many of them whether they just go in the bin, get updated and published (and abandoned) or get worked on actively again.
- blogz is almost ready.
This is mostly a git setup but I'm not actually using it yet on this site either, although I've synchronised the code between them now. Once I do both it'll be there.
- playerz is a mess.
This is in the midst of a large amount of changes and I lost track of where I was at. It's a fairly interesting little project though, so I will get back to it.
- izlib is unfinished
I think I discussed this years ago but basically it's unpublished and far from finished. It's basically an image processing toolkit and an experimental platform for Java Streams and API refinement.
I don't really have any plans but I noticed the other day there's a lot of code just sitting there rotting away.
- mediaz is archived
mediaz was a project I had on google code which was sort of a predecessor to izlib as well as a collection of early JavaFX experiments which are more or less tutorials for others. And the basic start of a layered bitmap image editor (imagez).
It was about the only project anyone asked about when google code went down. I have the repository archived so I guess I could at least convert it.
- termz is unfinished
This was a little fun project using OpenCL to drive a terminal emulator display. It's pretty much pointless and could be done directly with OpenGL.
- cdez is rotting
I actually have a C version of dez 1.2 too. It could be updated and published or something.
- rez and crez are complicated
rez is a project that implements a `personal' versioned blob store. It supports free branches and cheap copies (iirc), renames, metadata and all that jazz, and uses dez for compact storage. Berkeley DB (JE) is used as an embedded database.
This is a project I've worked on and off for years (15?+) with separate C and Java implementations. The original driver was for a web CMS. The last time I worked on it was a year ago porting it to C (including a C version of dez), for this very blog - although in the end I stayed with the simple blogz.
It's explicitly not `enterprise' oriented on purpose.
There's probably plenty of alternatives around now and I don't really know what to do with it. I never quite got the API down to the point I was really happy with. The C version should probably use lmdb now.
What I have done actually works though so I should probably drop it out somewhere. The problems aren't all that complicated but I think I solved them in a fairly tidy and compact and reusable way.
- libeze - 2.0
I just need to drop this into git as is.
This was also in the midst of a large number of changes but I kept them out for the last release. But I have a good number of bits and pieces I can add to it. playerz is the main driver here, but that doesn't need much.
- SynthZ - the popular one
I'm not doing anything with this, but it has been downloaded fairly often. Maybe I will git it.
I did learn some more about SourceDataLine so the soft keyboard is real-time now!
- wanki is who the hell knows
This was a wiki engine using texinfo markup and with the ability to properly organise and export multi-page documents. It's always been pre-alpha and gone through a number of variants, from C to JavaEE.
It was the original driver for what became (c)rez.
Who knows, maybe one day; I still think it's got some merit. Wikis still have major trouble with multi-page documents ordered like a book.
- DuskZ is unfinished
An attempt at working on a simple game. I got caught up in pointless (de)serialisation stuff and other sort of unimportant details. I'm sorry to the lad that I chatted to about it. He is a nice fellow. But it was sort of at a bad time personally and I just don't have the interest to work on it anymore. I hate letting people down.
I don't know if there's really anything salvageable at this point as all I did was break a bunch of stuff.
- socles is archived
This was an OpenCL image processing library. Nobody cared and eventually neither did I. There might be something salvageable for an eventual izlib backend, maybe.
- low level arm code, puppybits, zedos
puppybits is somewhat stale but maybe there's some useful stuff there.
zedos stopped when I hit the USB driver. Fuck intel for making that junk.
- parallella is DEAD TO ME
I'm no longer working on the parallella stuff and it will be staying where it is (at least it's published).
I still have some boards but the whole thing soured me on kickstarter and I will never put money into anything of the sort again. It seems it's mostly used for these sorts of projects as a 'let's get some zero-cost bridging finance so our real investors have more confidence' scheme, which is pretty much fucked-up capitalism at its finest.
Wherein capital should be the one taking risks to invest in capital to make more of it. This instead is, well, get the plebs to give us free money and take all the risk for no return!
- Android apps are going nowhere
Again the source is already out, it's unlikely I will do more but whether I do will be on a case-by-case basis.
I'm sure there is more - that's just what I found from my archive of google code and a quick look at what I have sitting on THIS computer. I've got drives and backups elsewhere, who knows.
Probably if anything I should start checkpointing more often and dumping shit on code.zedzone.space. Until I had that I didn't really have anywhere to put the random otherwise not really publishable-in-themselves experiments which abound.
It will be a while before this list is fully processed.
|
OPCFW_CODE
|
Welcome to Red Hat Linux 6.0!
At Red Hat Software, we believe we offer the best Linux distribution on the market. We hope you'll agree that the time and the money you spent for Red Hat Linux was well spent, indeed.
Recently, Linux has gained quite a bit of attention from the national and international media. What began as a ``hacker's hobby'' several years ago has been embraced as a powerful and economical computer operating system.
If you count yourself among the many Linux users who are discovering Red Hat Linux for the first time, this book is for you!
Inside, you'll find valuable tips which will help you get acquainted with your new desktop environment and with the way your Red Hat Linux system works. You'll be able to learn some basics and you'll find pointers to places where you can turn for more information.
This publication is divided into two parts:
Written by David A. Wheeler and Red Hat Software, the GNOME User's Guide is an indispensable resource for navigating and customizing GNOME. You can find the GNOME User's Guide, among other places, both on the Web, at www.gnome.org and on an installed Red Hat Linux system, under /usr/share/gnome/help/users-guide/C/, beginning with the Index page.
GNOME stands for GNU Network Object Model Environment. That's a fancy acronym, but it translates into a pleasing environment which offers all the power of Linux. GNOME is the default X Window System environment for Red Hat Linux 6.0.
In the GNOME User's Guide you'll find ways to create, move and copy files, investigate your new system and much more -- all within a pleasing graphical environment.
Here's a preview of what you'll find:
You'll find quite a few translations of the GNOME User's Guide, as well as the latest GNOME documentation and software at the official website:
Now, on to some of your Red Hat Linux system's details...
The Newbie's Guide to Red Hat Linux
Are you rattled by terms like root and user account? The following is for you!
The second part of the Red Hat Linux Getting Started Guide, this ``newbie's guide'' will help you gain a toehold on the basics of your new Linux system -- from creating a new account to working with files in a non-graphical environment.
There's nothing wrong with a little hand-holding -- and that's what you'll find in these remaining chapters.
Here's a glimpse of what you can find:
As Linux evolves, so does the support you'll find for Red Hat Linux. The Red Hat Linux Getting Started Guide is part of that support -- and evolution. In coming editions, expect to find more essential information to help you get the utmost from your Red Hat Linux system.
That's also where you come in.
If you'd like to make suggestions about the Red Hat Linux Getting Started Guide, please mention this guide's identifier:
That way we'll know exactly which version of the guide you have. You can send mail to:
This guide is the definition of a group project, since so many provided valuable assistance, from offering suggestions and sharing knowledge to proofreading.
Thank you to Edward C. Bailey, the documentation department's manager. Ed was there from concept to ``when the rubber hit the road,'' offering his expert advice on style and substance.
Thank you also to Sandra A. Moore, in charge of the Official Red Hat Linux Installation Guide, for her patience and help in formatting and proofing. And to David Mason, RHAD Labs' technical writer, who worked like a demon to put together the GNOME User's Guide.
Red Hat Software's support team -- particularly Stephen Smoogen and Eric Rahn Nolen (``Thor'') were more than generous in offering their time and advice.
And to the engineers, who build the best Linux distribution, a big ``thank you''! It is their work which makes Red Hat Linux so worthwhile.
And, of course, thank you to Linus Torvalds and the thousands of Linux developers around the world. Ultimately, this is their operating system -- and it is a wonder.
|
OPCFW_CODE
|
It is also known that grub-0.94-r1 and grub-0.94-r2 should work correctly. As a result we are closing this bug. Once the live USB has been created, I restart my computer. Finally, I use the wireless connection in the library, so when I boot with the Live CD the internet is disconnected.
Try to disable it in the BIOS or in the kernel. Solution: one reported cause was an exotic configuration of disk devices, like ultra/non-ultra DMA disks on one cable. Error 15: File not found. Ubuntu manages the boot-up menu (went back to look at my notes from the original setup). The owner tried to update to Ubuntu 11.04, and after all was said and done...
I got an option for Ubuntu, although this does look different now (showing up as 'ubuntu, with linux 2.6.35-22-generic-pae'); when I go to this option I get the message "error 15: file not found". I also used the Fedora liveusb creator tool. What is the best way out?
It then proceeded to produce the GRUB screen, but instead of producing an Error 15 this time, it just rebooted. Written communication can be really confusing sometimes... Finally, I tried the 32-bit Linux versions as well (same result).
In boot options #1 it's set to [UEFI: Voyager Corsa...] which is the USB. I get this error at revision 419: svnsync: File not found: transaction '419-bn', path '/devel/Scripts/Engines/Doom/Lamp Room/LampController.cs'. I am using svnsync, version 1.6.5 (r38866).
DOS uses INT 13h. More details on exactly the steps you took using unetbootin might help. "...size=0x46e190] [Initrd, addr=0x7e3c5000, size=0x1735d60]" Then it hangs... Anywhere else, TAB lists the possible completions of a device/filename.
The paths that are created in the configuration file aren't even accurate paths on the USB drive. After some research and going through the forums, I thought I found the solution by following these instructions to copy the GRUB files from the Live CD (https://help.ubuntu.com/community/Gr...0from%20LiveCD). When installing GRUB, it just hangs. Situation: when installing GRUB, it hangs:
root #grub
At this stage, the installation stops.
Copied /boot/gfxmenu and /boot/grub, including all the files they contain, to the first partition. Used the grub commands root (hd1,0) and setup (hd1) with success. Found the UUID of the new (used) HDD. But now each time I boot up I get these two lines:
error: file not found
grub rescue>
I have NO idea what to do. I renamed the label on my USB stick to "LIVE" as well.
You are using Windows!
root #grub --no-floppy
Uncompressing Linux...
Solution: cyrillic provides information that it is possible to "map" the disks in a different order by changing grub.conf's Windows entry like so:
FILE grub.conf (mapping disks):
title Windows XP
map (hd0) (hd1)
The following is incorrect: you need to define a partition to boot from (using root parameters).
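For reference, a complete stanza of that shape would look something like this (a sketch of a classic GRUB legacy menu.lst entry; the second map line and the rootnoverify/chainloader lines are the usual companions and are not part of the quoted snippet):

title Windows XP
map (hd0) (hd1)
map (hd1) (hd0)
rootnoverify (hd1,0)
chainloader +1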
I filed a bug HERE yesterday but there has been no action on it yet. But I did it to the "desktop" version and not the "dvd" version (silly me!). This fix works for the DVD version, for everyone else with the same issue. Since you have installed grub2 on Gentoo, you could maybe use it:
# Gentoo entry in menu.lst (in Ubuntu)
title Gentoo
root (hd0,0)
kernel /boot/grub/core.img
(See https://wiki.gentoo.org/wiki/GRUB_Error_Reference#Grub_Error_15.) Please help me out, I don't want to go over to Ubuntu; I'm really frustrated about this issue. It's the 7th time I've formatted the USB in the hope of getting this to work.
I watched a few videos but still can't clearly grasp how to do it. XP has 210 GB, Ubuntu has 80, and there is 100 GB of shared storage. Or it will attempt to boot, then give this message and freeze: "Trying to allocate 1135 pages for VMLINUZ. Got pages at 0x2c9c000 [Linux-EFI, setup=0x102a...". Placed it as a secondary HDD, created two partitions with PCC (Manage Partitions), the first one of 12 GB and the second with the other 480 GB.
|
OPCFW_CODE
|
TypeScript: Unexpected reserved word
I have a project in WebStorm IDE & Node.js. I created a file named "mongo-manager.ts" which I used in the routes\index.js file. It worked seamlessly.
Then I did 2 things:
1. WebStorm had always asked me whether or not to compile TypeScript to JavaScript, and I had always ignored it; but then once I clicked "yes", and since then, after solving some errors, it now compiles the .ts files successfully to .js.
2. I converted index.js into index.ts, and now I still have index.js, which is an output of the compilation.
Ever since then, when I try to run my app (using the triangular green button in WebStorm which in turn uses the command line: "C:\Program Files (x86)\JetBrains\WebStorm 11.0.1\bin\runnerw.exe" "C:\Program Files\nodejs\node.exe" bin\www), I get this error:
import mongodb = require('mongodb');
^^^^^^
SyntaxError: Unexpected reserved word
Now, there's no specific problem with the "import" statement, because if I omit it, the next non-JavaScript statement throws the same exception. It just tries to treat TypeScript as if it were JavaScript.
I've found these questions:
Typescript: unexpected reserved word in PHPStorm,
Unable to import in typescript file in nodejs
They suggest either to use "tsc" rather than node.exe, or to make sure that I run with node.exe the js file rather than the ts file.
Now the thing is that the file that node.exe runs directly is "www" which is a JavaScript file and in turn calls app.js which is still JavaScript, but later the files index.ts and mongo-manager.ts are called, so I have a hybrid of both js and ts.
And the most important thing is that my code had already worked seamlessly, even though it's a hybrid and without following any of the suggestions in those questions, until I answered "yes" to WebStorm's ever-present question about compiling TypeScript to JavaScript.
Anyway, I only want my app working again, no matter how.
Any help will be profoundly appreciated!
You need to know the difference between a compilation error and a runtime error. It seems that what you're experiencing is a runtime error. Please check the JavaScript file being executed by Node.js.
Currently Node.js does not yet support ES6 module syntax; you need to set the module field in tsconfig.json to commonjs. Ideally, in your output JavaScript file, import mongodb = require('mongodb'); should be emitted as const mongodb = require('mongodb'); when targeting ES6, or var mongodb = require('mongodb'); when targeting ES5.
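For reference, a minimal tsconfig.json matching that advice might look like this (a sketch; other options omitted):

{
  "compilerOptions": {
    "module": "commonjs",
    "target": "es5"
  }
}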
In my output file I have var mongodb = require('mongodb'); I wrote that it's not the problem, because if I omit it, the next non-JavaScript statement throws the same exception. It seems that it doesn't run the js file but the original ts file. Anyway, as I said: before that, I didn't even compile ts to js and it worked seamlessly.
Are you requiring files with ".ts" extension?
Yes I am. If I try to add a reference to the js file instead, I get a compilation error: TS6054: File ‘C:/ProjectFolder/dal/mongo-manager.js’ has unsupported extension. The only supported extensions are ‘.ts’, ‘.tsx’, ‘.d.ts’
Actually you should omit the extension.
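That is, something like this (the path is illustrative):

import mongoManager = require('./dal/mongo-manager');

The compiler resolves ./dal/mongo-manager.ts itself, and the emitted JavaScript will require the compiled .js file.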
|
STACK_EXCHANGE
|
Finally, it is possible to change the color scheme, picking from seven preset or a few customizable designs. Among the preset schemes is Classic, the standard black background used in earlier versions of Stata.
Statalist is interdisciplinary. Many questions will be of interest to only some Statalist members. Always remember that members come from many different sciences. Try to show a little sensitivity to those noneconometricians, nonbiostatisticians, or whoever it may be who may know very little about your question. Flag your question as of limited interest so that people can delete it quickly, or take the trouble to avoid subject-specific jargon. Precise literature references please! Please do not assume that the literature familiar to you is familiar to all members of Statalist. Do not refer to publications with just minimal details (for example, author and date). Questions like "Has anyone implemented the heteroscedasticity under a full moon test of Sue, Grabbit, and Runne (1989)?" admittedly divide the world. Anyone who has not heard of the said test would not be helped by the full reference to answer the question, but he or she might well appreciate the full reference. References should be in a form that you would expect in an academic publication or technical document. For example, include full author name, date, paper title, journal title, and volume and page numbers in the case of a journal article. Stata runs on different platforms. Similarly, please remember that Stata runs on Windows, Macs, and Unix platforms. Windows is not the only OS in the world (or even the best). Specify the platform you are using if your question is specific to that platform. The local is not global. Statalist is an international list. Please explain details that may make sense only in your own corner of the world (even if it is the United States). References to time of day (good morning), time of year (enjoy the warm weather), or sporting arcana (how some team fared recently) can seem quite silly or obscure to members in other longitudes or latitudes. Keep private or personal stuff off the list. We all goof by sometimes forgetting to check destinations before mailing, but the principle is important. Edit previous postings. Edit mail so that readers easily see what the issue is and what your contribution is. Please do not repost the whole version of a very long message together with your one-sentence tidbit.
To receive even fewer messages, you also have the option of unsubscribing from Statalist and scanning the archives from time to time. 2.6 I want more frequent Statalist messages. What do I do?
Help us to help you by creating self-contained questions with reproducible examples that describe your data, your code, and your problem.
But my items were on different scales, though they were standardized. My problem now is how to adopt a new scale for the new construct, given that the four items I used were on different scales. My scores range from -2 to 2. How do I interpret this?
cannot help v expr (verbal expression): phrase with special meaning functioning as a verb; for example, "put their heads together," "come to an end."
Choose this option if you want your search to look in the text of help items in addition to their titles.
From Stata 11 on, a PDF version of the manuals is included with each copy of Stata so that all users have access to the manuals. Your local Stata expert or technical support person
“Consider any attribute that psychometricians currently believe they are able to measure (such as any of the various mental abilities, personality traits or social attitudes that the textbooks mention), and ask the question, Is that attribute quantitative?
While it is enjoyable to type commands interactively and see the results straightaway, serious work requires that you save your results and keep track of the commands that you have used, so that you can document your work and reproduce it later if needed. Here are some practical suggestions.
The compromise is represented by most statistical tests in common use, such as the t and F tests, where P-values depend on unsatisfied assumptions.”
How can I know if official ado upgrades (and executable upgrades) incorporate the functionality of earlier STB/SJ contributions (so that the STB/SJ contributions become "obsolete")?
You did not provide enough information. For example, postings of the form "I tried using -foobar-, but it did not work" are usually difficult to answer, except by asking for more information. Your question is too unclear or too complicated to understand. For example, very complex data-management tasks or large chunks of code that are not working are often too much like hard work to understand, even for Stata experts. It is possible that you might benefit from trying to make your problem much clearer or simpler. Remember that a very long posting with a mass of detailed explanation is just as offputting as a question that is cryptically brief. The best advice is to rewrite the question so that the key issue is made as clear as possible but also is stated as briefly as possible. But in all cases, there is a simple rule of thumb: a rewrite or even one repost of the original is tolerable, but more than one repost is not. If after two attempts you have not received an answer, there is too slim a chance that you will get an answer on Statalist to warrant another attempt. 6. Private emails to those active on Statalist
However, in analysis we are often tempted to assign numbers one through five to these categories and take means and perform statistics as if the assigned numbers reflected equal spacing. This is a pretense at best.” (emphases added)
|
OPCFW_CODE
|
Having problems getting Facebook Like buttons to work on your website? Are you also using something like the Root Relative URLs plugin to make all your URLs relative to the root of the site instead of to the domain?
The Root Relative URLs plugin is a wonderful add-on to WordPress which not only helps reduce the size of your web pages but, more importantly to developers, makes your website or blog portable by making all new links created on your website relative to the root of your website instead of the domain.
While this works fine in 99% of all situations, it fails to produce the desired results in places where your website generates links, such as the OpenGraph og:url meta property that Facebook uses. This problem first came to my attention while troubleshooting an issue with Facebook Like counters using the Facebook Developer's Debugger.
WordPress SEO generates OpenGraph information that Facebook uses, for example, whenever you post a link to your website on Facebook or when people click the Facebook Like button on your website.
WITHOUT the Root Relative URLs plugin installed, WordPress SEO generates the following OpenGraph line of code:
<meta property='og:url' content='http://www.yoursite.com/'/>
WITH the Root Relative URLs plugin installed, WordPress SEO generates the following OpenGraph line of code:
<meta property='og:url' content='/'/>
While I realize that the Root Relative URLs plugin is at the root of the problem, it would have been great if WordPress SEO took the initiative to ensure all URLs are complete in instances where links are used by other sites, like Facebook, to link back to your website.
How to Fix the Problem (short-term solution)
In the case of og:url, it is possible to fix the OpenGraph og:url meta property by editing the file named wp-content/plugins/wordpress-seo/frontend/class-opengraph.php and replace this line...
echo "<meta property='og:url' content='" . esc_attr( $this->canonical( false ) ) . "'/>\n";
... with the following two lines ...
$SiteRoot = (substr(esc_attr( $this->canonical( false ) ),0,1) == '/' ? ((!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off' || $_SERVER['SERVER_PORT'] == 443 ? "https" : "http") . "://" . $_SERVER['HTTP_HOST']) : '');
echo "<meta property='og:url' content='". $SiteRoot . esc_attr( $this->canonical( false ) ) . "'/>\n";
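Alternatively, a shorter variant using WordPress's own home_url() helper (a sketch; it assumes canonical( false ) returns a root-relative path while the Root Relative URLs plugin is active):

echo "<meta property='og:url' content='" . esc_attr( home_url( $this->canonical( false ) ) ) . "'/>\n";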
The Longer-Term Solution
In the longer term, I am hoping that wonderful developers of WordPress SEO will fix the problem. You can track the status of this issue and add your comments by visiting the WordPress SEO Support Forum.
In the meantime, remember to always re-apply this fix after updating WordPress SEO until the issue has been resolved.
|
OPCFW_CODE
|
How to download charts from chartmuseum?
Hi ,
I am trying to download charts from ChartMuseum, but as per the documentation there is only a way to list the charts, not to download them locally. Please let me know a possible way to achieve this.
@plokimju can you elaborate?
ChartMuseum is meant to be used directly with the Helm client, so you can do something like
helm repo add chartmuseum http://localhost:8080
helm fetch chartmuseum/mychart
This will download the latest chart .tgz to the current directory.
Hi @jdolitsky ,
Thanks for your prompt response:
When I do a "helm fetch" I see:
helm fetch chartmuseum/workflowengineservice
Error: chart "abc" matching not found in chartmuseum index. (try 'helm repo update'). No chart version found for abc-
@jdolitsky - I am able to see the chart in index.yaml. Interestingly, the error appends "-" at the end of the chart name: "No chart version found for abc-"
@plokimju would you mind sharing the contents of index.yaml? Have you added the repo with helm repo add?
@jdolitsky - yes I did
$ helm repo list
NAME URL
chartmuseum http://:8080//
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "chartmuseum" chart repository
Update Complete. ⎈ Happy Helming!⎈
Index.yaml
$ curl http://:8080//index.xml
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0apiVersion: v1
entries:
abc:
apiVersion: v1
appVersion: 0.1.1-SNAPSHOT
created: "2018-06-14T19:00:10.459Z"
description: A Helm chart for Kubernetes
digest: 47a71903ea865dff80a2837496ad7a90d949ef2b9dc84e5c2090b0d771
name: abc
urls:
- charts/abc-0.1.1-SNAPSHOT.tgz
version: 0.1.1-SNAPSHOT
generated: "2018-06-20T19:12:47Z"
Closing, as I am able to get the charts with curl.
Also hit this issue.
Found out that the problem is related to the version. If the version has a suffix with a dash (-), helm install chartmuseum/CHART can't find the chart version.
Semantic Versioning supports dashes and suffixes. Not sure if it's a bug in helm or chartmuseum.
Well, it's not a bug, but an annoyance for people like us, I guess.
SemVer, and thus helm, consider version strings with an additional -tag to be pre-releases. Helm is simply not showing you, or using, those by default.
I see 3 easy-ish solutions here:
Specify --devel in your helm calls, so that it considers those pre-releases
Specify --version $VERSION in your helm calls
Move the git sha out of version, and specify your complete version string in the chart's appVersion field (see the example commands below)
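For example, with the chart from the index above (a sketch; --devel and --version are standard Helm 2 flags):

helm fetch chartmuseum/abc --devel
helm fetch chartmuseum/abc --version 0.1.1-SNAPSHOT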
Annoying.
This didn't affect me until I upgraded to helm 2.10
|
GITHUB_ARCHIVE
|
Most internet users want to control how their information is collected and subsequently used. In addition, they wish to understand who knows what about them and how they learned it. But on top of that, users appreciate their informational privacy and security in the digital world even more.
Admittedly, some users do not mind giving away their personal information to service providers to improve functionality. However, most internet users want some guarantees about who has the privilege to use personal information and to what extent. That's why terminologies like "data privacy," "consent for sharing," "data abuse," and "privacy breach" trigger heated debates.
Some common threats to informational privacy
Digitalization and the internet happened at an unprecedentedly fast pace in human history. Thus, they’ve created an environment never seen before that caught us all somewhat unprepared.
Sure, the new technology offers many new possibilities and advantages. But it also poses previously unknown threats to our privacy.
Notably, the rise of social media happened in an environment rapidly adopting the monopolization of the internet’s most essential features. The recent shift of Meta (previously known as ‘Facebook’) from social interactions to the metaverse thing is one such example. This trend is eroding users’ already meager control over their data.
Though losing control over personal data does not automatically constitute a tragedy, it poses some disadvantages.
But such drawbacks are not a fact of nature or an inevitable consequence of the internet’s dynamics. Instead, they happen because more and more entities have gained access to our data and are trying to obtain even more.
Luckily, governments, privacy enthusiasts, and activists sensed the upcoming privacy disaster early. Thus, data protection laws appeared in many jurisdictions to prevent abuses in the increasingly high personal data processing.
But then, data science emerged as a new discipline, bringing in big data, machine learning, deep learning techniques, and the corresponding privacy threats.
Likewise, platform economies are also around, backed by tech giants with global reach that profit from private data. So naturally, this led to more data storage and processing.
As they say, “information is power,” some actors among these giants abused this power against people. Then along came Edward Snowden with a series of scandalous revelations.
Initially, people dismissed Snowden’s warnings as unreasonable. But then, the Cambridge Analytica scandal happened, proving him right. The threats about lack of privacy and control over our data are genuine and devastating.
Besides technology, some of the world’s governments also actively target users’ data privacy.
China, India, and North Korea are the most notorious governments with large data infrastructures to collect even more information about their citizens. Unfortunately, though, they are not the only regions with such measures. Many other countries have also implemented dedicated programs for 'spying' on users, such as the USA's PRISM program. Furthermore, governments have also agreed to share their citizens' data (the 14 Eyes alliance), worsening their privacy.
This raises further concerns. Collecting private data that establishes general trends without identifying individuals is one thing. But gathering that same data in a way that allows a government to pinpoint a single person's behavior is another. Hence, such aggressive data collection and surveillance often facilitate harsh measures against freedom of speech, internet freedom, and other fundamental rights of citizens.
|
OPCFW_CODE
|
aaamos at gmail.com
Wed Jan 2 07:06:41 UTC 2008
I guess what makes me reluctant to agree to the suggestions of having
more special characters with special syntactical meaning is that it
would only serve to complicate a language, one of whose most beautiful
aspects is its simplicity... I'm afraid it a) might become a slippery
slope for more of the same, and b) would make translation/portability
between different flavours of Smalltalk more difficult.
Just out of curiosity (not that I'd advocate this), would an
equivalent of the distinction between line comments and block
comments, e.g. in Java:
... some code ...
/* This is a block comment
spanning multiple lines, including
... some more code ...
// and even containing line comments
*/
... yet more code ...
give you what you're after?
Have you considered using other delimiters (such as curly brackets)
within comments to allow "double-click-and-do-it" execution?
Are you absolutely sure that class comments and protocols for "test
methods" won't do the job? In my experience, classes that are so
complex they need additional "documentation" should be broken down or,
if that's not at all possible, they should be thoroughly documented in
the class comment, with possible references in the methods (simply
"see class comment" should suffice). And class comments aren't parsed,
the code doesn't necessarily have to be valid, etc. Similarly,
consider breaking methods that absolutely need comments between lines
of code into smaller methods, so that you'll only need method comments
(even if those methods are only meant for testing/demonstration
purposes). But then, you probably knew people were going to suggest these.
Documentation for tutorials shouldn't necessarily all be in the code,
imho, there should be some accompanying medium like webpages or a PDF
or something (or a class comment?).
I realise this doesn't address all you were after, but maybe it'll at
least give you a few alternatives to consider =o)
On 1/2/08, Keith Hodges <keith_hodges at yahoo.co.uk> wrote:
> A slightly less radical solution which would offer the same benefit
> would be if the compiler stopped parsing if it encountered six or so
> comment quotes on a line like so. (perhaps this could be reduced to two)
> I was asked what problem this is solving...
> 1. In particular I find it difficult to provide code snippets in method
> comments which work via select and do-it. particularly if these code
> snippets are themselves commented. I think that tutorial type
> documentation would benefit from this facility.
> 1b. Also I have a habit of putting test code in comments at the bottom
> of my methods, but again I can't comment or use quote characters in this
> 2. In some circumstances it is useful to use methods to hold data,
> however the restriction that that data be valid smalltalk code is a pain
> in some situations... i.e. the scripts in the dev image script manager
> are maintained and generated from methods. However they have to be
> manually pre-processed into valid smalltalk code escaping quotes and so
> 3. This scheme allows packages to have their own script manager like
> documentation as classes within existing browser tools.
More information about the Squeak-dev mailing list
|
OPCFW_CODE
|
You can make an argument for that abomination in C when function prototypes were rare, so that banning:
void test(string& s)
{
    Nefarious n;       // trouble brewing
    string copy = s;   // copy the string
} // destroy copy and then n
One way of thinking about these guidelines is as a specification for tools that happens to be readable by humans.
It will require great coding style, library guidance, and static Investigation to eradicate violations devoid of main overhead.
A function specifies an action or a computation that takes the system from one consistent state to another. It is the fundamental building block of programs.
Purpose of cross-cultural management study: A manager must have knowledge of other cultures, especially the culture of the people working under him. In a multicultural society, like the USA, the UK or Australia, the workforce inevitably becomes multicultural too. In Australia alone, it is estimated that nearly 50% of the country's workforce belongs to other cultures from Asia, Africa and Latin America. Cultural study, and relating that study to the management of people, is needed for managing and controlling a diverse workforce more effectively. A notable situation where greater attention to cross-cultural management is needed is the one concerning differences between Japanese and Western values, which present constant discrepancies in many parts of work culture and organisational behaviour. Therefore, the primary goal of the manager is to keep consistency, understanding and rapport among team members even if they are culturally different.
Do it through the first call of a member function. A Boolean flag in the base class tells whether post-construction has taken place yet.
Like copy semantics unless you are building a "smart pointer". Value semantics is the simplest to reason about and what the standard-library facilities expect.
No. These guidelines are about how to best use Standard C++14 (and, if you have an implementation available, the Concepts Technical Specification) and write code assuming you have a modern conforming compiler.
See GOTW #100 and cppreference for the trade-offs and additional implementation details associated with this idiom.
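For context, the idiom in question is the compilation firewall (Pimpl), as covered by GOTW #100; a minimal sketch (class and member names are illustrative):

// widget.h - clients see no implementation details.
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();            // must be defined where Impl is complete
    void draw();
private:
    struct Impl;          // defined in widget.cpp
    std::unique_ptr<Impl> pimpl_;
};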
Comparing the performance of a fixed-sized array allocated on the stack against a vector with its elements on the free store is bogus.
The use of volatile does not make the first check thread-safe; see also CP.200: Use volatile only to talk to non-C++ memory.
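For a thread-safe one-time check, the standard alternative is std::call_once rather than volatile (a sketch; init() stands in for whatever setup the flag was guarding):

#include <mutex>

std::once_flag init_flag;
void init();   // expensive one-time setup, assumed defined elsewhere

void ensure_initialized() {
    std::call_once(init_flag, init);  // safe across threads
}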
However, see the modernization section for some possible approaches to modernizing/rejuvenating/upgrading.
|
OPCFW_CODE
|
One save causes multiple reloading
Hi!
Saving my index.android.js one time causes multiple auto reloading. Here's the log from my last test:
[4:55:27 PM] <START> find dependencies
[4:55:27 PM] <END> find dependencies (85ms)
[4:55:27 PM] <START> transform
transforming [========================================] 100% 322/322
[4:55:27 PM] <END> transform (81ms)
[4:55:27 PM] <START> request:/index.android.bundle?platform=android&dev=true
[4:55:27 PM] <END> request:/index.android.bundle?platform=android&dev=true (12ms)
::ffff:<IP_ADDRESS> - - [05/Nov/2015:18:55:27 +0000] "GET /flow/ HTTP/1.1" 404 18 "-" "okhttp/2.4.0"
[4:55:28 PM] <START> find dependencies
[4:55:28 PM] <END> find dependencies (67ms)
[4:55:28 PM] <START> transform
transforming [========================================] 100% 322/322
[4:55:28 PM] <END> transform (82ms)
[4:55:28 PM] <START> request:/index.android.bundle?platform=android&dev=true
[4:55:28 PM] <END> request:/index.android.bundle?platform=android&dev=true (9ms)
::ffff:<IP_ADDRESS> - - [05/Nov/2015:18:55:29 +0000] "GET /flow/ HTTP/1.1" 404 18 "-" "okhttp/2.4.0"
[4:55:29 PM] <START> find dependencies
[4:55:29 PM] <END> find dependencies (77ms)
[4:55:29 PM] <START> transform
transforming [========================================] 100% 321/322[4:55:29 PM] <START> request:/index.android.bundle?platform=android&dev=true
transforming [========================================] 100% 322/322
[4:55:29 PM] <END> transform (168ms)
[4:55:29 PM] <END> request:/index.android.bundle?platform=android&dev=true (115ms)
::ffff:<IP_ADDRESS> - - [05/Nov/2015:18:55:29 +0000] "GET /flow/ HTTP/1.1" 404 18 "-" "okhttp/2.4.0"
[4:55:30 PM] <START> find dependencies
[4:55:30 PM] <END> find dependencies (63ms)
[4:55:30 PM] <START> transform
transforming [========================================] 100% 322/322
[4:55:30 PM] <END> transform (76ms)
[4:55:30 PM] <START> request:/index.android.bundle?platform=android&dev=true
[4:55:30 PM] <END> request:/index.android.bundle?platform=android&dev=true (10ms)
::ffff:<IP_ADDRESS> - - [05/Nov/2015:18:55:31 +0000] "GET /flow/ HTTP/1.1" 404 18 "-" "okhttp/2.4.0"
There's another strange thing happening, but I don't know if it's related to this issue. I'm using Emacs + Web Mode + Flycheck to develop my RN apps. For some reason, every key I type causes the emulator to auto reload.
I'm new to RN, so is this really an issue or am I missing something?
Can you try another editor to potentially narrow down whether the problem is with your normal editor?
hmm good idea @ide 😅
I'll try Atom.
Hmm yeah, both issues only happen on Emacs. Closing the issue now.
Thanks, @ide !
Just in case anyone else ends up here as the result of a Google search, the problem is caused by Flycheck.
:smile: :cry:
Mystery solved :smiley: :+1:
Great work investigating!
Thanks, @ide
I just found out the easy way to "fix" this behavior is to put these temp files in .gitignore. Watchman ignores everything that is there :)
|
GITHUB_ARCHIVE
|
I really, really hope you can help me. I recently reformatted my Acer Aspire 3630 (Windows XP), and it seemed to go well. I reinstalled all necessary drivers, and reinstalled the software I regularly use.
However, ever since the reformat my laptop seems to have slowed down! Although it starts up much faster, opening software/Internet Explorer etc. takes far longer than before.
I've noticed that at times my CPU usage (according to the Performance tab in Task Manager) is at 100%. I also get Windows pop-ups telling me my virtual memory is too low. This never used to happen!
Anyone got any ideas how to speed it up? I realise buying RAM may help, but considering that my hard drive is virtually empty and these problems only started after the reformat, I'm not sure this is what I need. What do you guys think?
Also, I'm not sure how relevant it is, but before the reformat I had two hard drive partitions, and when I reformatted I combined them into one. Could this have something to do with it?
I'd be grateful for your help,
What's the size of your hard disk & how much free space does it have?
What's the size of your virtual memory at present? Have you tried increasing it?
Are there any conflicts in Device Manager? G
smackheadz - thanks for that link, I've just applied the changes and I'll let you know if it makes any difference.
Crossbow7 - my hard disk has a capacity of 74.5 GB, of which 14.5 GB is currently being used.
My virtual memory was set to 1300 - 2500, but after reading smackheadz's link I've changed it to 672 - 1344 (my RAM is 448 MB). This seems like a big difference; do you reckon this could be it?
As for conflicts in my device manager - the following seem to not have any drivers installed.
advanced programmable interrupt controller
direct memory access controller
numeric data processor
programmable interrupt controller
Any ideas where I can get hold of them?
probably could do with more memory.
It could be your security programs running scans at start up. Maybe stop them running at start up and run them when you want to run them.
Under CPU, if nothing is running, System Idle Process should be showing about 95%; this is normal. If anything else is using a lot of the CPU, let us know what, or Google for it.
Try here. click here
Did you try all the drivers available at Acer for your laptop? click here to have a look.
If no joy, try using Windows Update click here to install these - choose the 'Custom' scan. If it has anything to offer, they may show up under 'Hardware, Optional'. If you've not used Win update as yet, then do this first till you've exhausted all the critical ones (at least). Make sure to reboot whenever it asks you to, & re-visit till all are complete.
Usually setting a fixed virtual memory size does a better job, as it doesn't have to keep changing size. But too much tweaking is also not good. Leave as it is, or enter 1344 for both fields. Everybody seems to have their own theory about what's best regarding this!
Anyway, try to get rid of all the Device Manager problems & it should be a lot better. G
Thanks for the help, guys.
The drivers I'm missing aren't on the Acer site, but thankfully I backed them all up before reformatting.
This may sound like a silly question, but how do I now reinstall the drivers I'm missing?
I've tried to do it via the "update driver" > "install from a specific location" > "I will choose the driver to install" route, and although it appears to install, once completed it still claims there are no drivers installed.
If they don't install then it's quite possible they aren't the correct drivers.
How did you backup the drivers btw?
Did the laptop come with an OS Restore Disc? G
I used driver magician to back up the drivers.
It's strange: they seem to install, yet when I re-check them the same "no driver installed" message appears.
|
OPCFW_CODE
|
Andrea announced the lightning talk in IT for StackExchange. This will be a 5-minute talk.
Software discussion at the ECFA. Common software challenges and HSF/CWP could be a topic. The ECFA meeting is on 3-6 October.
Got the acceptance from all the reviewers. They are organizing the travel. Meeting rooms reserved.
DOE program manager, Lali Chatterjee, has objected to an “HSF review of the GeantV project” with the argument that no external organization should review a DOE (partially) funded project.
Daniel Elvira basically proposes to remove the word ‘review’ from the GeantV review and call it “HEP software community meeting: detector simulation R&D”
Torre, John and Pere are of the opinion that she is probably not correct here. There are many examples of internal reviews of DOE funded projects. For example, each of the US LHC projects (ATLAS, CMS) organizes internal reviews, which report their findings and recommendations to the projects themselves.
We agreed that we will write a page (on the HSF website) explaining that HSF is just offering a service to the community to organize 'peer reviews' of software projects at their request. The role of the HSF is just to put together a set of experts in a given domain and organize the logistics. This will be the case for GeantV. The recommendations (mainly technical and on the project strategy, not about the level of effort or blessing/killing a project) will be given to the project and not to any funding agency or organization.
Discussion: Michel agrees with the proposal. Suggests that the HSF website state explicitly that the HSF peer review complements the funding agency reviews and is not intended to be an alternative to them.
John: it is a good role for HSF for projects that are transversal to several experiments.
Liz: Lali sees the HSF as in competition with her organization's CCE process. Liz does not see it this way; HSF is the grassroots component and complements the CCE process.
On the table we have the proposal to hold the kick-off meeting at SDSC/UCSD (San Diego) in January as I described below. Pete filled in a few more things in the HSF Google calendar, but if there are no fundamental objections to doing it in that place sometime in the dates below, we can make an announcement/doodle on the full HSF list to check whether there are other possible conflicts.
Pete updated the calendar with events for 2017.
SDSC is an attractive place in many ways: an NSF site, located in the US (as we plan the final meeting in Europe), coming after the HSF workshop in Paris.
2 weeks look possible: Jan 16-20 or Jan 23-27. To help decide, Pete will send a mail to the HSF general mailing list with a link to the calendar, asking if people are aware of any conflicts we may have missed.
In addition, we need to make a plan to start engaging the experiments in order to make preparations towards the 22-23 Sep, 4-7 pm pre-CWP organizational meeting. The goals of this organizational meeting are:
(a) Breakdown in working groups
(b) Build a list of questions/charges for the different areas
(c) List of people to participate to the WS and possible convener.
Proposal to use our meeting slot for this initial discussion next week.
There will be a Spack meeting today.
Some discussion about Portage happened on the mailing list:
Weekly meetings resumed!
Pete has managed to get the job opening posted for the IPCC for ROOT modernization. He is currently looking for good candidates.
|
OPCFW_CODE
|
nuxt how to manually change axios baseUrl after nuxt build
Now I have a question: how can I manually change the axios baseURL after the build? If the specified request URL changes, I want to change the baseURL manually rather than rebuild the project.
You have to create a new axios instance using the axios.create() function. You can then use this instance to make calls to the new baseURL.
const myNewInstance = axios.create({
baseURL: 'https://my-domain.com/api/',
timeout: 3000,
headers: {'X-Custom-Header': 'foobar'}
});
You can also prototype it onto the Vue object inside a plugin, and then you can access it from all components and pages.
I don't really understand how to manually change the baseURL. Can I put the baseURL config into a static file?
You cannot change it after the build. Why would you want to change it at runtime? If you have multiple APIs or endpoints you want to access, you should create a separate axios instance for each one of them.
Also it's currently impossible to change it during runtime. This feature is supposedly coming soon (https://github.com/nuxt-community/axios-module/issues/88).
Because my project's actual request address depends on the actual deployment server's IP address, I just want to simply change the static baseURL rather than rebuild.
It's not possible. I suggest you create a config file where you have the baseUrl defined, then import it in nuxt.config.js and set it for axios. Then create a separate build for every deployment, where you get the config variables from ENV.
@JackieTsien if you still wanted to know how to set config variables for ENV check out my answer ;)
You can overwrite the baseUrl using an environment variable called API_URL. Just set it to whatever URL you need. Please refer to the official documentation here: https://axios.nuxtjs.org/options.html#baseurl
Also, in case you want to overwrite the browserBaseUrl as well, you can do so the same way, using the API_URL_BROWSER environment variable.
Hint: you can set environment variables for a single command by prepending them. So, technically, this should work:
API_URL="https://example.com" nuxt start
Use cross-env: npm install cross-env
In your project, add a folder named environment,
and under the folder you can have different environments (e.g. development, staging, production).
Your environment configs will hold your baseUrl and other settings; for development you could have your localhost, and for staging or production you could have your API_URL.
defaults.json \\development
defaults.prod.json \\production
In nuxt.config.ts, under build > extend, configure your different environments.
This will replace defaults.json depending on which environment we run our script in from package.json.
In package.json, configure your scripts according to the environment to be run (e.g. npm run start will use NODE_ENV=development, which will use defaults.json with baseUrl: http://localhost:3000, and npm run build will use defaults.prod.json with baseUrl: http://www.API_URL.com and other configs).
For more detail you could see cross-env
Can I change the API URL after webpack packaging?
You can use axios-multi-host; it has built-in host switching by calling host.updateList().
example:
import axios from 'axios'
import { AxiosMultiHost } from 'axios-multi-host'
const host = new AxiosMultiHost(['https://google.com'])
const api = axios.create()
api.interceptors.request.use(host.middleware)
then change the host
host.updateList(['https://bing.com'])
|
STACK_EXCHANGE
|
Sixth Catalog of Orbits of Visual Binary Stars: format of text version
The format of the Sixth Orbit Catalog has been extensively modified, in order to address a couple of shortcomings in the original format. First, some users of the catalog requested that published formal errors for orbital elements be included when available. Also, new techniques such as long-baseline interferometry have in recent years yielded orbits with ever shorter periods and smaller semi-major axes. The range in these values (for example, periods >100,000 years on the one extreme, periods quoted to 0.00000001 years or less at the other) did not fit in the number of characters initially allocated. Accordingly, the master file was widened considerably to accommodate both formal errors and higher precisions. Flags have been added to the period and semi-major axis columns, allowing periods to be quoted in centuries or days as well as years, semi-major axes in milliarcseconds as well as arcseconds, and T0 in modified Julian date as well as fractional Besselian year. (Codes for other units, such as periods in hours or semi-major axes in micro-arcseconds, will be added as needed.)
The format of the text version of the Sixth Catalog is as follows:
column  format          description
1       T1,2I2,F5.2,    epoch-2000 right ascension (hours, minutes, seconds),
        A1,2I2,F4.1     and epoch-2000 declination (degrees, minutes, seconds).
2       T20,A10         WDS designation (based on arcminute-accuracy epoch-2000 coordinates).
3       T31,A14         Discoverer designation and components, or other catalog designation.
4       T46,I5          ADS (Aitken Double Star catalog) number.
5       T52,I6          HD catalog number.
6       T59,I6          Hipparcos catalog number.
7       T67,F5.2,A1     Magnitude of the primary (usually V), and flag:
                          > = fainter than quoted magnitude
                          < = brighter than quoted magnitude
                          v = variable magnitude
                          k = magnitude is in K-band or other infrared band
                          ? = magnitude is uncertain
8       T74,F5.2,A1     Magnitude of the secondary (usually V), and flag:
                          > = fainter than quoted magnitude
                          < = brighter than quoted magnitude
                          v = variable magnitude
                          k = magnitude is in K-band or other infrared band
                          ? = magnitude is uncertain
9       T82,F11.6,A1    Period (P) and code for units:
                          m = minutes (not yet used!)
                          h = hours (not yet used!)
                          d = days
                          y = years
                          c = centuries (rarely used)
10      T95,F10.6       Published formal error in P (in same units as for P).
11      T106,F9.5,A1    Semi-major axis (a) and code for units:
                          a = arcseconds
                          m = milliarcseconds (mas)
                          M = arcminutes (used only for alp Cen + Proxima Cen)
                          u = microarcseconds (uas - not yet used)
12      T117,F8.5       Error in a. Units are the same as for a.
13      T126,F8.4       Inclination (i), in degrees.
14      T135,F8.4       Error in i.
15      T144,F8.4,A1    Node (Omega), in degrees. An identified ascending node is indicated by an asterisk following the value. If the ascending node is later determined to be off by 180 deg, it is flipped, and a "q" code added to indicate the change.
16      T154,F8.4       Error in Omega.
17      T163,F12.6,A1   Time of periastron passage (T0) and code for units:
                          c = centuries (fractional year / 100; used only for alp Cen + Proxima Cen)
                          d = truncated Julian date (JD-2,400,000 days)
                          m = modified Julian date (MJD = JD-2,400,000.5 days)
                          y = fractional Besselian year
18      T177,F10.6      Error in T0. Units are the same as for T0.
19      T188,F8.6       Eccentricity (e).
20      T197,F8.6       Error in e.
21      T206,F8.4,A1    Longitude of periastron (omega), in degrees, reckoned from the node as listed. If the published omega value is later determined to fall in the wrong quadrant, the value is flipped by 180 deg; a letter "q" indicates the quadrant has been corrected.
22      T215,F8.4       Error in omega.
23      T224,I4         Equinox, if any, to which the node refers.
24      T229,I4         Date of the last observation used in the orbit calculation, if published.
25      T234,I1         Orbit grade, ranging from 1 ("definitive") to 5 ("indeterminate"). Additionally, a grade of 8 is used for interferometric orbits based on visibilities rather than rho and theta measures (hence not gradable by the present scheme), and a grade of 9 indicates an astrometric binary (also lacking rho and theta data).
26      T236,A1         A flag "n" pointing to any notes for this system.
27      T238,A8         A code for the reference (usually based on the name of the first author and the date of publication).
28      T247,A18        Name of image file (png format) illustrating the orbit and all associated measures in the Washington Double Star database.
Columns in the ephemeris file (orb6ephem.txt) are as follows (note: the ephemeris file was updated 6 Jul 2015, following discussion with users):
column  format              description
1       A10                 WDS designation, as above.
2       T12,A14             Discoverer designation, as above.
4       T28,A8              Reference code, as above.
5       5(F5.1,F7.3,4X)     Predicted values of theta and rho over a 5-year timespan.
        or 5(F5.1,F8.4,3X)  Theta is given in degrees and rho in arcseconds. Rho values for a given pair are listed to 1mas precision unless at least one predicted value for that pair is under 10mas, in which case all values for the pair are listed to 0.1mas precision.
6       T116,A20            Text indicating astrometric orbit or pair with incomplete elements. Rho values are those of the photocenter relative to the barycenter for astrometric solutions. Obviously no theta or rho values are listed for pairs with incomplete elements.
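As a quick illustration of how the fixed-width T offsets above translate into code, here is a minimal, hypothetical Python sketch (an addition for illustration, not part of the catalog distribution) that slices the period and its unit code out of one master-file record; the 1-based Fortran column positions become 0-based string slices:
def parse_period(line):
    # Column 9: period at T82 with format F11.6, followed by an A1 unit code.
    value = line[81:92].strip()   # T82, width 11 -> 0-based slice [81:92]
    unit = line[92:93].strip()    # unit code: d, y, c, ...
    return (float(value) if value else None, unit or None)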
|
OPCFW_CODE
|
Bug#48866: Time Zone config broken re: 'GMT or local' selection
Current potato boot-floppies don't correctly handle the question:
"Will the hardware clock be set to GMT?"
A change in the sysvinit package has caused this. The file
/etc/default/rcS now contains the following:
"# Set UTC=yes if your system clock is set to UTC (GMT), and UTC=no if not.
I usually answer no to the question, and I ended up with the above.
Methinks that utilities/dbootstrap/tzconfig.c needs to be fixed. I'd
fix it myself if I knew how :)
-- System Information
Debian Release: potato
Kernel Version: Linux tigers 2.2.13 #1 Tue Nov 2 02:08:50 EST 1999 i686 unknown
Versions of the packages boot-floppies depends on:
ii ash 0.3.5-8 NetBSD /bin/sh
ii bison 1.28-3 A parser generator that is compatible with Y
ii debiandoc-sgml 1.1.33 DebianDoc SGML DTD and formatting tools
ii gettext 0.10.35-11.0.0 GNU Internationalization utilities
ii libgd1g-dev 1.6.1-1.1 GD Graphics Library (development version).
ii libnewt-dev 0.50-4.1 Developer's toolkit for newt windowing libra
ii libpaperg 1.0.3-12.2 Library for handling paper characteristics [
ii libpopt-dev 1.3-4 lib for parsing cmdline parameters - develop
ii libwww-perl 5.46-1 WWW client/server library for Perl
ii lynx 2.8.2-3 Text-mode WWW Browser
ii m4 1.4-10 a macro processing language
ii make 3.78.1-1 The GNU version of the "make" utility.
ii makedev 2.3.1-28 Creates special device files in /dev.
ii man-db 2.3.10-69s Display the on-line manual.
ii pointerize 0.3 Internationalization utilities, based on get
ii recode 3.5-1 Character set conversion utility.
ii slang1-pic 1.3.9-1 The S-Lang programming library, shared libra
ii slice 1.3.4-2 Extract out pre-defined slices of an ASCII f
ii tetex-bin 1.0.6-1 teTeX binary files
ii tetex-extra 1.0-5 extra teTeX library files
ii zlib1g-dev 1.1.3-4 compression library - development
ii libc6-pic 2.1.2-8 GNU C Library: PIC archive library
^^^ (Provides virtual package glibc-pic)
ii perl-5.005 5.005.03-4 Larry Wall's Practical Extracting and Report
^^^ (Provides virtual package perl5)
|
OPCFW_CODE
|
Maintaining a critical number of participants around an activity, maintaining focus on action to achieve shared goals, and sustaining operations until the accomplishment of the shared mission or goals.
Engagement is a recurrent problem in open p2p communities and networks. It translates into keeping a critical number of contributors/participants (see contribution and participation) focused on a task or a project until its completion.
Open ventures (networks) do not operate based on engagement contracts, such as job contracts used in traditional organizations. They crowdsource their activities and allow free flow of participation. This open and participatory mode of operation brings in a new “management” paradigm.
Challenge of engagement in open networks
The challenge in open ventures is to keep an effective level of redundancy for participation in tasks. In other words, at any moment, from the beginning to the end, every role, based on the planning, should be energized by a critical number of affiliates, to bring the probability for a specific activity to be completed close to 100%. By activity we mean anything: a task, monetary contributions, contributions of materials, tools or equipment, etc.
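To make the redundancy argument concrete, here is a small illustrative Python sketch (an addition for illustration, with made-up numbers): if each of n standby affiliates independently completes a given task with probability p, the chance that at least one of them gets it done is 1 - (1-p)^n, which climbs toward 100% quickly as n grows.
def completion_probability(n, p):
    # Probability that at least one of n independent volunteers,
    # each completing the task with probability p, gets it done.
    return 1 - (1 - p) ** n

print(completion_probability(1, 0.3))   # 0.3: a single unreliable contributor
print(completion_probability(10, 0.3))  # ~0.97: redundancy makes completion near-certain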
Scarcity vs abundance
Traditional organizations operate on a scarcity model. They can be seen as confined environments for economic activity. The best term that describes this situation is the French "boîte" (meaning a box), used as a synonym for a company ("Je travaille dans une boîte" - "I work in a box"). In more concrete terms, if you want to launch a new venture you'll need to ask yourself the following questions:
- Is there enough money for this venture?
- Are there enough people covering the entire skills base for this venture?
- Is there a space / environment (physical and virtual) to accommodate this new venture?
- Are there enough materials, tools, equipment required for this venture?
All these elements and more are dimensions of confinement for traditional organizations. In order for the project to be accepted, all these conditions must be met, along with others.
Let's zoom into the human input dimension for a moment. Since the operation model of the company is a transactional one (a salary promise in exchange for time and skills contributions), project managers try to minimize costs by employing the smallest number of individuals that covers the entire skills base required for the project. Since the salary is negotiated upfront for a preset engagement (amount of hours per week), the company puts in place time management processes, i.e. the project manager makes sure that employees spend the paid time producing value for the company. Management is designed for scarcity, i.e. stay within budget and try to get the most out of a very limited number of individual contributors. Traditionally, this arrangement takes the form of an 8 hour shift, where everyone is brought into the same space for better coordination and monitoring. With the advent of the Internet some organizations have implemented task/deliverable-based management techniques, with more freedom with regards to when and where the work is performed. The bottom line is that traditional organizations operate on high levels of engagement.
Open ventures operate based on an abundance model. They can be seen as attractors of economic activity, where almost everything is crowdsourced. In more concrete terms, if you bring a new venture to an OVN you’ll be asking yourself if this venture is attractive enough to enlist the crowd’s participation with time/skills, money, materials, etc., assuming that, like in traditional organizations, the OVN provides a secure environment (physical and digital) for collaboration, with a diverse system of incentives and motivation.
Open ventures are very elastic, they can grow and shrink dynamically according to their needs. They are only bound by the willingness of the crowd to contribute.
Let's zoom into the human contribution dimension for a moment. Since the operation model of open ventures is based on voluntary contributions and shared risk (everyone contributes with an expectation of future benefits that will be shared in a fair way), there is no immediate transaction between the venture and participants (the venture is the participants), no finite budget to manage, and no need to minimize the number of participants. Together, the participants need to cover the entire skills base required for the venture. Since no one is contractually obligated to stick with the venture, those who form the core group of an open venture need to put in place mechanisms to ensure an effective level of redundancy, i.e. to have as many contributors per role as needed in order to maintain a close to 100% probability that a task will be done in effective time. In other words, instead of having one affiliate per role and making sure that this affiliate's productivity is maintained at a maximum level within the 8 hour workday, in open ventures we expect to find many affiliates per role, enough to reach a high probability that at least one of them will complete a task within the required time, with the required quality. Furthermore, since benefits are distributed in proportion to actual contributions (more on the benefit redistribution algorithm), there is no need for time management. Only those who deliver are rewarded, no matter how many affiliates are on standby to take a task. Coordination and resource allocation as processes are designed for abundance, assuming that there is always someone out there willing to perform a required action. Contribution to the venture, in this arrangement, takes the shape of a long tail curve, with a small number of affiliates contributing a lot, and the majority of them contributing sporadically. Open ventures have implemented task/deliverable-based techniques and tools, with great freedom with regards to when and where the work is performed. The bottom line here is that the level of engagement varies from one contributor to another. For good performance, the open venture needs to sustain a critical mass of core contributors, who will take care of all the important peripheral processes. Core affiliates play support roles. Engagement measures need to be applied to maintain this core for the lifetime of the venture. The collapse of an open venture is the collapse of its core.
The debate on whether or not this new paradigm of production works is now closed. This is how network-type organizations like Wikipedia, Linux and Bitcoin operate. Nothing is guaranteed though, as it is also the case with any traditional organization. Everything hinges on the governance, IT infrastructure, methodologies of work and work culture of these organizations, tuned for their specific economic reality.
Engagement within traditional organizations translates into motivating and incentivizing the pool of employees attached to a project to maintain their creativity and productivity at a maximum, for the 8 hour workday that they are paid for. A mix of positive and negative incentives (carrots and sticks) are used in traditional management techniques. The ultimate punishment for not delivering according to the expectations set in the job contract is firing, which is an exclusion from that particular economic activity with the retraction of all the benefits that come with it. Soft techniques to motivate employees are widely used today, which take the form of company mission and purpose, work culture, good work conditions, acknowledgement of excellence, referrals and work experience, etc.
Engagement in open ventures translates into motivating and incentivizing the crowd to stick around the venture and jump in to complete a task when they have an opportunity to do so.
Open ventures are flow through organizations.
Outreach takes an important part of the development process of the open venture, to maintain an equilibrium between new contributors and those who leave. Work is socialized (work out loud), broadcasting a lot of signals on social media or targeting communications in communities of practice to get people’s attention and draw them towards the activity.
The work is planned in small, bite-sized tasks that are easy to complete, to increase the probability of participation and minimize costs. Everything is well documented, also to diminish the cost of understanding the task and to increase the level of stigmergy.
Core participants offer onboarding services, facilitation and coordination.
Core participants also communicate a set of incentives that can range from tangible rewards (getting paid for the task, acquiring sweat equity in the project, ...) to intangible ones (gaining experience, building a network, sense of belonging, etc.). They also use various techniques to build purpose into the project and trigger intrinsic motivation for participation.
The role associated with engagement
Apart from incentives, which are built into the system and communicated through the collaborative platform by core contributors, motivation is also important.
- Core contributors in this role need to follow an intervention strategy. They provide feedback.
- Dimensions for the intervention strategy
- Which contributors to target? (core, long tail, new contributor, …)
- What content should be provided to contributors? (congratulations, public acknowledgment, encouragement, securing, … )
- What should the delivery method be? (direct messages to contributors, to all, meeting, social media, …)
- Timing (when to intervene)
- Duration (for how long to intervene)
3 motivational cues
- Feeling good about oneself for making a contribution: "wow, your work helps us save the world"
- Sense of belonging: "wow, you're a good peer"
- Fear of making a mistake: "don't worry"
- Add more dimensions of intervention for motivation
Environment programmed measures
One popular measure is badges that contributors earn. Badges can be just honorary, signaling to others positive aspects about their peers. Related to pride. Badges can also be linked to benefits such as access to governance, possibility of taking more responsibility, to represent, ...
AI and engagement
See this video
|
OPCFW_CODE
|
Log::Rolling - Log to simple and self-limiting logfiles.
Log::Rolling is a module designed to give programs lightweight,
yet powerful logging facilities.
One of the primary benefits is that,
while the logs can be infinitely long and handled by something like logrotate,
the module is capable of limiting the number of lines in the log, in a fashion whereby the oldest lines roll off to keep the size constant at the maximum allowed size,
if so tuned.
This module is particularly useful when you need to keep logs around for a certain amount of available data,
but do not need to incur the complexity and overhead of using something as heavy as
logrotate or other methods of archiving.
Since the rolling is built into the logging facility,
no extra cron jobs or the like are necessary.
Data is buffered throughout the run of a program with each call to entry().
When commit() is called,
that buffer is written to the log file,
and the log buffer is cleared.
The commit() method may be called as many times as necessary; however,
it is best to do so as few times as required, due to the overhead of file operations involved in rolling the log; hence the reason the entries are stored in memory until manually committed in the first place.
use Log::Rolling;

my $log = Log::Rolling->new('/path/to/logfile.txt');

# Define the maximum log size in lines. Default is 0 (infinite).
$log->max_size(50000);

# Add a log entry line.
$log->entry("Log information string here...");

# Commit all log entry lines in memory to file and roll the log lines
# to accommodate max_size.
$log->commit;

The constructor can also be called with explicit options:

my $log = Log::Rolling->new(
    log_file      => '/path/to/file',
    max_size      => 5000,
    wait_attempts => 30,
    wait_interval => 1,
    mode          => 0600,
    pid           => 1,
);
If no logfile is given, or if the logfile is unusable, the constructor returns false (0).
This method defines the path of the logfile. Returns the value of the logfile, or false (0) if the logfile is unusable.
This method sets the maximum size of the logfile in lines. The size is infinite (0) unless this method is called, or unless the size was defined using
new(). Returns the maximum size.
This method sets the maximum number of attempts to wait for a lock on the logfile. Returns the maximum wait attempt setting.
This method sets the interval in seconds between attempts to wait for a lock on the logfile. Returns the wait interval setting.
This method sets the file mode to be used when creating the log file if the file does not yet exist. The value should be an octal value (e.g., 0644). Returns the file mode.
This method sets whether the process ID will be recorded in the log entry. Enable PID with 1, disable with 0. Returns the value of the setting.
Adds an entry to the log file accumulation buffer in memory. No entries are ever written to disk unless and until
commit() is called.
Commits the current log file data in memory to the actual file on disk, and clears the log accumulation buffer.
This method rolls the oldest entries out of the logfile and leaves only up to max_size lines (or fewer, if the contents are not that long) within the logfile. Returns true (1) if the log was successfully rolled, or false (0) if it was not. This method is not meant to be called independently. Doing so will simply return false (0).
This method clears the buffered log entries without writing them to file, should it be deemed necessary to "revoke" log entries already made but not yet committed to file. Returns true (1).
Mark Luljak, <fairlite at fairlite.com>
Fairlight Consulting - http://www.fairlite.com/
Please report any bugs or feature requests to
bug-log-selfmaintaining at rt.cpan.org, or through the web interface at http://rt.cpan.org/NoAuth/ReportBug.html?Queue=Log-Rolling. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.
You can find documentation for this module with the perldoc command.
You can also look for information at:
Copyright 2008-2009 Mark Luljak, all rights reserved.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
OPCFW_CODE
|
Wowhead AddOns (Client/Looter) Issue? Report here!
This totally helped the issue I was having with wowhead not seeing a valid executable in the World of Warcraft folder. Just changed the value to be the correct location and blam! Workie. Thank you.
Invalid status code: 500
I'm getting an issue where it's saying my username and password are incorrect. I used this on my old computer but I'm having difficulties on this one.
I had to delete the WoW Cache folders.
After playing a WoW session (client closed), I tried to upload my data with the Wowhead addon client.
Since then, I always get an error message at the end of the upload.
Is there something I need to consider addon-wise (like whether a reinstallation is necessary, etc.)?
Sorry, it's working again. It seems the problem was just a coincidence of timing with me deleting the WoW caches.
I'm also getting the "Could not find your World of Warcraft live folder" issue. I know it's a known issue, and that's fine, I understand. My question though is: does that error just stop me from launching WoW from the Wowhead client? As long as I'm still adding to the database, I'm happy.
Downloaded the latest client and still getting the invalid path for Live and PTR. It detects the Beta install just fine.
The Beta and PTR are on the same drive while the Live version is on another drive.
I have been getting the error about not finding the game exe file since I upgraded to Windows 10 two months ago. In the meantime, I have uninstalled, re-installed and re-downloaded the client, but to no avail.
After getting used to using the awesome tool, it makes me a sad panda not being able to use it anymore.
Can we get a fix soon, please?
I'm having the aforementioned problem of the invalid path message, though it seems to upload everything except the Items and ADB files.
My Items say 16% uploaded and the ADB files say 3%; the rest has all been updated to 100% after I click on upload.
I'm on win 7, having the same error with the client not being able to find the live install folder, but it's also not uploading my heirlooms. In the client itself it says there's 691 items, 201.3 KB and 0% uploaded, and also 7 ADB files, 225.1 KB, 1% uploaded. Dunno if that helps with anything, just throwing it out there!
Authentication fails with a 100 character long password. Changing my password to only 25 characters long fixed the issue.
It is amazing that the error "Can't find the WoW folder" has not been fixed for SO LONG!
I have encountered this error. No solutions worked. So no data for Wowhead from me.
And fresh data is so important at the beginning of an expansion!
Same problem with the WoW path, but I can still upload the data, so it's not a big deal.
Fix this broken client, please. Why isn't it uploading ADB, loot or items?
Wowhead looter is causing the game to slow down/freeze whenever the transmog window is open for me.
I know you guys are working on this, but man it's frustrating.
This expansion is more full of hidden stuff than I've ever played before, and the devs have made it clear previously that they rely on the fact that there's third-party websites we can find the info on. So when I'm desperately searching for How to X, and Wowhead has no data because the client isn't working... if I'm lucky I can find others sharing anecdotal information, but that's not really what I come here for.
Without data, Wowhead becomes just another bunch of forums. I don't want Wowhead to fade away :-\
I think there's a misunderstanding here. Only an isolated number of people are experiencing the known issues outlined here... we're receiving a vast amount of client data on a daily basis right now. If you're getting the path error, the client is usually still collecting data and you should be able to upload. The only exception is if you're in the even smaller group of people who can't even install it. I apologize for that, but rest assured we're still getting the data necessary to update the database.
So because I updated my password, I can't upload data, because I can't update the password in the client, because it's not recognizing the Live and PTR paths I'm providing. It recognizes the Beta path fine.
This has been going on for at least a month, and it is very frustrating as a supporter of the site that little to no attention is being paid to this issue at all.
We just want to help, provide data and support the site, and we can't because something in the tool is broken and no help is forthcoming. Very frustrating.
I keep starting the Client with 0% data uploaded after a reinstall onto Windows 10... Attempts to upload are followed by "Make sure your username and password is correct." Fair enough; I open up the Options menu, type in my password and press Enter... And each and every time, without fail, I get "Live path not found." I -know- it's the right path. And I do believe the Client itself knows so too, because it can launch Wow-64.exe with no issue.
Is there any reason for this? Is it Windows 10? Is it possible that the issue is that I play on the 64-bit client? I want to use the Looter/Client combo to see on Wowhead what I have and haven't done... But I can't.
I think there's a misunderstanding here. Only an isolated number of people are experiencing the known issues outlined here
Would not being able to find the install folder, and not uploading the "items" and "ADB" files be why I can't see my heirlooms?
Are there any news regarding fixes for these issues?
I have both the "Can't Find Path" and "Items and ADB files not uploading data" problems.
Edit: It seems my weapon relics are also not being updated.
|
OPCFW_CODE
|
HPO 590 follow-up for discussion: noobaa-db-pg-0 does not failover if its host worker node has a network bond that goes down
Environment info
NooBaa Version: VERSION
INFO[0000] CLI version: 5.9.0
INFO[0000] noobaa-image: quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:ef1dc9679ba33ad449f29ab4930bd8b1e3d717ebb29cca855dab3749dbb6d8e4
INFO[0000] operator-image: quay.io/rhceph-dev/odf4-mcg-rhel8-operator@sha256:01a31a47a43f01c333981056526317dfec70d1072dbd335c8386e0b3f63ef052
INFO[0000] noobaa-db-image: quay.io/rhceph-dev/rhel8-postgresql-12@sha256:98990a28bec6aa05b70411ea5bd9c332939aea02d9d61eedf7422a32cfa0be54
INFO[0000] Namespace: openshift-storage
Platform:
[root@c83f1-infa ~]# oc get csv
NAME DISPLAY VERSION REPLACES PHASE
mcg-operator.v4.9.5 NooBaa Operator 4.9.5 mcg-operator.v4.9.4 Succeeded
ocs-operator.v4.9.5 OpenShift Container Storage 4.9.5 ocs-operator.v4.9.4 Succeeded
odf-operator.v4.9.5 OpenShift Data Foundation 4.9.5 odf-operator.v4.9.4 Succeeded
[root@c83f1-infa ~]#
Actual behavior
Bug Description
This is a follow-up for NooBaa to this HPO defect:
https://github.ibm.com/IBMSpectrumScale/hpo-core/issues/590
I am pasting 590 below, but to help the reader of this NooBaa defect, I am pasting the last comment from Ulf and Nimrod, who requested that this defect be opened as a discussion place for this type of outage as well as for loss of a PVC.
TROPPENS commented 2 days ago
I had a call with Nimrod to discuss resiliency against loss of high-speed network. Current NooBaa does not have any resiliency against the loss of the high-speed network and the loss of the PVC. He suggested to create a bug in the NooBaa GH to start a discussion on potential enhancements.
This is a paste from HPO 590:
Bug Description
Our bare metal cluster in the POK lab has a problem where one of the network bonds goes to state down.
The node is dan1. The noobaa-db-pg-0 is hosted by dan1.
When the bond goes down, the noobaa-db-pg-0 pod does not fail over to another node. The database is not available when this condition occurs, and this is an I/O outage.
Logs are in the
https://ibm.ent.box.com/folder/145794528783?s=uueh7fp424vxs2bt4ndrnvh7uusgu6tocd
They are labeled as hpo 590
Steps to reproduce
Disable a network bond
Expected behaviour
The noobaa db pod should fail over to another node, but not to the CSI attacher node
nilesh-bhosale commented 14 days ago
With the bond going down, does the OCP node status become 'NotReady'? I believe k8s will trigger a pod failover to one of the available (Ready) nodes in the cluster only when the node state becomes NotReady.
Monica commented below:
As I noticed, the OCP node is still in the ready state; the OCP nodes are using the provisioning network, and those interfaces are up and running.
# oc get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
c83f1-dan1.ocp4.pokprv.stglabs.ibm.com Ready master,worker 140d v1.22.3+e790d7f <IP_ADDRESS> <none> Red Hat Enterprise Linux CoreOS 49.84.202201042103-0 (Ootpa) 4.18.0-305.30.1.el8_4.x86_64 cri-o://1.22.1-10.rhaos4.9.gitf1d2c6e.el8
c83f1-dan2.ocp4.pokprv.stglabs.ibm.com Ready master,worker 140d v1.22.3+e790d7f <IP_ADDRESS> <none> Red Hat Enterprise Linux CoreOS 49.84.202201042103-0 (Ootpa) 4.18.0-305.30.1.el8_4.x86_64 cri-o://1.22.1-10.rhaos4.9.gitf1d2c6e.el8
c83f1-dan3.ocp4.pokprv.stglabs.ibm.com Ready master,worker 140d v1.22.3+e790d7f <IP_ADDRESS> <none> Red Hat Enterprise Linux CoreOS 49.84.202201042103-0 (Ootpa) 4.18.0-305.30.1.el8_4.x86_64 cri-o://1.22.1-10.rhaos4.9.gitf1d2c6e.el8
We are using the high-speed network to deploy CNSA/CSI/DAS; if one of the high-speed interfaces goes down, we are not sure how we can catch it.
@TROPPENS
TROPPENS commented 2 days ago
The bond should provide protection against single network link failures. Given that we have lost a whole bond, I assume that there have been multiple network link failures or that there is an issue in the underlying OpenShift network configuration. The current test systems are configured using NMState Operator which is in Tech Preview mode for OCP 4.9. This could explain the network glitch.
@TROPPENS
TROPPENS commented 2 days ago
I had a call with Nimrod to discuss resiliency against loss of high-speed network. Current NooBaa does not have any resiliency against the loss of the high-speed network and the loss of the PVC. He suggested to create a bug in the NooBaa GH to start a discussion on potential enhancements.
For MVP we need to document a limitation.
Closing, HPO is discontinued.
|
GITHUB_ARCHIVE
|
How can I serve remote data via DODS?
The Distributed Oceanographic Data System (DODS) is used by Ferret to access remote data. Many data centers now provide access to their data via DODS, and you can provide access to their data through your LAS installation. We have created a number of .xml configuration files, .jnl init scripts and .des descriptor files for accessing DODS data. They are available at:
http://ferret.pmel.noaa.gov/LAS_xmlFiles
http://ferret.pmel.noaa.gov/LAS_jnlFiles
http://ferret.pmel.noaa.gov/LAS_desFiles
Alternatively, you may wish to provide access to a collaborator's data without having to duplicate the data at your site. Your collaborator would only need to set up a DODS server.
Note: This is entirely distinct from sister servers which can also be used to enable collaborative data serving.
Ferret can access DODS datasets if the full DODS pathname is specified in the XML file. As an example, the following file describes two DODS variables and one local one. The first is from the Reynolds OI SST Analysis dataset at NOAA's Climate Diagnostics Center (CDC). The second variable associates the LAS variable name "mnmean" with the actual variable name "SST" to be used when each of these files is opened by Ferret (see creating a multi-variable dataset for further explanation). The third variable is from the local file coads_climatology.nc.
<variables>
  <mnmean name="Monthly Means of SST" units="degC"
          url="http://www.cdc.noaa.gov/cgi-bin/nph-nc/Datasets/reynolds_sst/sst.mnmean.nc#sst">
    <link match="/lasdata/grids/CDC_reynolds_sst_mnmean_LON_LAT_TIME_grid"/>
  </mnmean>
  <day_ltm name="Daily Long Term Mean of SST" units="degC"
           url="http://www.cdc.noaa.gov/cgi-bin/nph-nc/Datasets/reynolds_sst/sst.day.ltm.nc#SST">
    <link match="/lasdata/grids/CDC_reynolds_sst_day_ltm_LON_LAT_TIME_grid"/>
  </day_ltm>
  <coads_sst name="local COADS SST" units="degC"
             url="file:coads_climatology.nc#SST">
    <link match="/lasdata/grids/CDC_reynolds_sst_wkmean_1990_present_LON_LAT_TIME_grid"/>
  </coads_sst>
</variables>
Jonathan Callahan: Jonathan.S.Callahan@noaa.gov
Last modified: October 26, 2001
|
OPCFW_CODE
|
Firmware update - 34C3 CTF solution
I saw this recently published riddle and its solution at
this link.
My mathematical background is not strong. Could someone here explain the solution?
I understand that there is some problem in that they repeatedly XOR each (file_name + '\0' + file_content) with the previous (file_name + '\0' + file_content), and only then use the result for authentication against the asymmetric signature.
I understand that our goal is somehow to zero out the hash of our added files by XORing with some calculated unique file names, but I do not know how to do that without a weakness in the hash algorithm itself.
Please explain the solution in the simplest possible way (but at the same time I want to understand exactly what the weakness is and what exactly the solution is).
The relevant mathematics is elementary linear algebra.
Focusing on the relevant part of the problem, the defender calculates a signature
$$S(m_1, \ldots, m_n) = H(m_1) \oplus H(m_2) \oplus \ldots \oplus H(m_n)$$
where $\oplus$ is bitwise xor, $H$ is a cryptographic hash function and $(m_i)$ is a family of messages (which are file names) of variable size. The goal of the attacker is to find a family of messages such that:
The signature is equal to some value $S_0$ which is the signature of the legitimate message.
One of the messages has a specific content which is the payload that the attacker wants to inject.
No two messages can be equal, because there can only be a single file by a given name.
So we need to find $n$ and $m_1, \ldots, m_n$ such that
$$S_0 = H(m_0) \oplus H(m_1) \oplus H(m_2) \oplus \ldots \oplus H(m_n)$$
i.e.
$$H(m_1) \oplus H(m_2) \oplus \ldots \oplus H(m_n) = S_1$$
where $S_1 = S_0 \oplus H(m_0)$.
If $H$ was invertible, we'd just take $n=1$ and $m_1 = H^{-1}(S_1)$. But since $H$ is a cryptographic hash function, this approach is out. Nonetheless, let's look a bit more at what we could do if we could find messages whose hash has a specific value.
$H(m)$ is a 256-bit value. Suppose we could find 256 messages $(M_1,\ldots,M_{256})$ such that $H(M_i)$ is all-bits-zero except that the $i$th bit is set. Then we could take the subset of these messages corresponding to the bits that are set in $S_1$: $S_1 = \bigoplus_{i \mid \text{the \(i\)th bit of \(S_1\) is set}} H(M_i)$.
We're trying to express $S_1$ as the xor of a bunch of values. The xor operator is linear: it satisfies all the axioms that define a vector space over the field $GF(2) = \mathbb{Z}/2\mathbb{Z}$. In the field $GF(2)$, the elements are $0$ and $1$, the addition $\oplus$ satisfies $1 \oplus 1 = 0$ (i.e. it's xor) and there's only one way to define multiplication ($0$ times anything has to be $0$ and $1$ times anything has to be that thing). Bitwise xor on 256-bit numbers is vector addition in the vector space $GF(2)^{256}$. And expressing a vector as the result of adding a bunch of vectors is a well-studied problem: we're trying to express $S_1$ as a linear combination of some values.
Any value can be expressed as a linear combination of $100\ldots0$, $0100\ldots0$, $00100\ldots0$, …, $0\ldots010\ldots0$, …, $0\ldots01$. Picking the bits that are set in the resulting value means taking each term $0$ times if the corresponding bit in the result is clear and $1$ times if the corresponding bit is set. A set of vectors such that any value can be expressed as a linear combination is called a generator set of the vector space. A set of vectors such that none of them is a linear combination of the others is called linearly independent. A linearly independent generator is called a basis of the vector space.
If a vector space has a finite basis, then all of its bases have the same number of elements. This number is called the dimension of the vector space. Here we're working in a vector space with a 256-element basis, so it's a 256-dimensional space. The dimension has other nice properties, including the fact that in a vector space of dimension $d$, if a set of $d$ vectors is a generator then it's a basis, and if a set of $d$ vectors is linearly independent then it's a basis.
Recall that our problem is to find a way to express $S_1$ as a linear combination of some vectors. Furthermore each vector must be the hash of some message. We can do that by:
Finding a basis made of vectors that are the hashes of known messages.
Expressing $S_1$ as a linear combination of this basis.
For part 1, recall that it is enough to find 256 hashes that are linearly independent: since the vector space has dimension 256, such a set of 256 hashes will be a basis. $H$ is a cryptographic hash function, so it behaves essentially at random. We can take any message, calculate its hash, and if we don't like the resulting hash then we take a different message and try again. We can construct 256 successive messages such that each new message's hash is not a linear combination of the previous ones. That is:
Take an arbitrary $m_1$. While $H(m_1) = 0$, change $m_1$ and try again.
Take an arbitrary $m_2$. While $H(m_2)$ is either $0$ or $H(m_1)$, change $m_2$ and try again.
Take an arbitrary $m_3$. While $H(m_3)$ is a linear combination of $\{H(m_1),H(m_2)\}$, change $m_3$ and try again.
Take an arbitrary $m_4$, etc.
After building $k$ messages $(m_1,\ldots,m_k)$, there are $2^k$ hashes that are linear combinations of the hashes of those messages. This means that the number of forbidden hash values at each step is $2^0$, $2^1$, $2^2$, etc. At the 256th and last step (i.e. after building 255 messages), the number of forbidden hash values is $2^{255}$, which is half of the total number of possible hash values. This means that the probability of a successful draw is $1 - 1/2 = 1/2$ for each attempt at the last stage, $1-1/4$ at the previous stage, $1-1/8$ at the stage before, etc. Note that the probability of success is always at least one half, so this randomized algorithm is practical; it won't get stuck trying to find something that's almost impossible to find.
Part 2 is a classical problem in linear algebra which led to the definition of multiplication of matrices. We saw above how to express any vector as a linear combination in the basis made from vectors with all-bits-zero-except-one. We now want to express the old basis in terms of the new one, i.e. to express each all-bits-zero-except-one vector as a linear combination of the new basis. This means finding the inverse of the matrix that expresses the new basis in terms of the old one, which is equivalent to solving a system of 256 linear equations (one for each bit position) with 256 unknowns (the coefficient of each vector in the basis).
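To make both parts concrete, here is a small illustrative Python sketch (an addition, not from the original write-up; it assumes $H$ is SHA-256 and ignores the file-name/payload formatting of the actual challenge). It maintains an xor basis indexed by leading bit position, together with a bitmask recording which messages combine to produce each reduced row:
import hashlib, os

def H(m):
    return int.from_bytes(hashlib.sha256(m).digest(), 'big')

msgs = []             # messages whose hashes form the basis
basis = [None] * 256  # basis[b] = (reduced hash with top bit b, message bitmask)

def try_insert(m):
    h, mask = H(m), 1 << len(msgs)
    for b in range(255, -1, -1):
        if not (h >> b) & 1:
            continue
        if basis[b] is None:          # new leading bit: extend the basis
            basis[b] = (h, mask)
            msgs.append(m)
            return True
        v, mk = basis[b]              # reduce by the existing row
        h, mask = h ^ v, mask ^ mk
    return False                      # h reduced to 0: linearly dependent

while len(msgs) < 256:                # part 1: build a full basis of GF(2)^256
    try_insert(os.urandom(16))

def decompose(target):                # part 2: express target in the basis
    x, mask = target, 0
    for b in range(255, -1, -1):
        if (x >> b) & 1:
            v, mk = basis[b]          # every slot is filled once the basis is full
            x, mask = x ^ v, mask ^ mk
    assert x == 0
    return [m for i, m in enumerate(msgs) if (mask >> i) & 1]

decompose(S1) then returns distinct messages whose hashes XOR to $S_1$; adding them alongside the payload $m_0$ yields a file set whose combined hash matches the legitimate signature $S_0$.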
|
STACK_EXCHANGE
|
Radio button & Option list values migrate successfully with Feeds Importer but disappear in node edit
I'm doing a couple data migration projects. I'm using Feeds Importer and for the most part, things are going well.
But when I migrate option list fields, I get this weird thing where the data is migrated OK (I see it when I view the node and I see it in the appropriate table in the database), but when I edit the node, the option list field is empty/not-selected.
On one of the projects, I was using the radio button widget and the work-around ended up being changing the field's widget from that to Select list.
But on my second project, I was already using the Select List widget, and I can't seem to work around the problem. Tried changing it to radio buttons (hey - I'm not picky), but it didn't help.
It's a really sinister problem because it means I migrate the client's data OK, but when they edit one of these nodes, they will essentially delete data because the radio button or select list widget is not displaying the data. Help?
Did you go to the field settings for that field and make sure that the values of the field list are present there?
I don't have enough reputation to comment, so I'm posting this as an answer.
I'd suggest that you install and enable the devel module to compare the data structure of the field in question in these three situations:
when the data is input manually
when the data is loaded via feeds
when the data loaded via feeds is later edited
If you haven't used it, with Devel installed, you can get the 'devel' view of the node.
Thanks, caelbtr! Good advice. I didn't actually use the devel module (though it is great for many things).
But I did set one of the fields manually and saved it so I could scrutinize it in the database. While doing that, I realized my mistake: I had a trailing space at the end of my key! So I had my Option list value settings like this:
Yes |Yes
No |No
Changing it to...
Yes|Yes
No|No
...makes the migration (using Feeds Importer) work correctly.
It's fascinating that a trailing space would cause this behavior - but there you go! ; )
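For anyone wondering how an invisible character causes this: the value stored by the importer must match the configured option key byte-for-byte. A tiny illustrative Python check (an addition for illustration, not part of Feeds) makes the mismatch visible:
# 'Yes ' (the key with a trailing space) is not the same string as 'Yes'
print('Yes ' == 'Yes')             # False
print(repr('Yes '), repr('Yes'))   # repr() exposes the stray trailing space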
|
STACK_EXCHANGE
|
from . import mesh
import numpy as np
class HeightmapMesh(mesh.Mesh):
    def __init__(self, heightmap, pixel_size):
        super(HeightmapMesh, self).__init__()
        mesh_width = heightmap.getWidth() * pixel_size
        mesh_height = heightmap.getHeight() * pixel_size
        mesh_vertices = self._createMeshVerticesFromHeightmap(heightmap, mesh_width, mesh_height)
        mesh_faces = self._createMeshFacesFromHeightmap(heightmap)
        for vertex in mesh_vertices:
            self.addVertex(mesh.MeshVertex(position=list(vertex)))
        for face in mesh_faces:
            self.addFace(mesh.MeshFace(indices=face))

    def _createMeshVerticesFromHeightmap(self, heightmap, mesh_width, mesh_height):
        # One vertex per heightmap pixel: x/y positions are evenly spaced
        # across the mesh and z comes straight from the heightmap values.
        x_positions = np.linspace(0.0, mesh_width, num=heightmap.getWidth())
        y_positions = np.linspace(0.0, mesh_height, num=heightmap.getHeight())
        z_values = heightmap.getHeightmap()
        xv, yv = np.meshgrid(x_positions, y_positions)
        # Stack into a (height, width, 3) array, then flatten to (N, 3) rows.
        mesh_vertex_positions = np.dstack((xv, yv, z_values))
        return mesh_vertex_positions.reshape((heightmap.getWidth() * heightmap.getHeight(), 3))

    def _createMeshFacesFromHeightmap(self, heightmap):
        width = heightmap.getWidth()
        height = heightmap.getHeight()
        # Each grid cell is split into two triangles; vertex indices follow
        # the row-major layout produced by _createMeshVerticesFromHeightmap.
        upper_left_tris = [(y*width + x, (y+1)*width + x, y*width + x + 1) for y in range(height-1) for x in range(width-1)]
        lower_right_tris = [((y+1)*width + x + 1, y*width + x + 1, (y+1)*width + x) for y in range(height-1) for x in range(width-1)]
        all_faces = []
        all_faces.extend(upper_left_tris)
        all_faces.extend(lower_right_tris)
        return all_faces
|
STACK_EDU
|
A blazingly fast Stack Overflow clone running the real Stack Exchange dataset.
NOTE: The repository is no longer being actively maintained by the Dgraph team. If something is broken, we'd happily accept a pull request from you, but won't fix anything ourselves.
UPDATE: This project has been updated to work with version 20.xx.x of Dgraph. It works as expected on macOS and Linux. There are some problems running the project on Windows, which can be solved by starting the JS server and JS client separately. See "syntax_changed.md" for detailed instructions.
First things first
Before starting, make sure that Dgraph is running on the default ports (8080, 9080, ...). Then set the schema from the schema.txt file, either via Ratel UI or with cURL. Without this, it won't work.
Avoid using ACL with this project.
- Open Ratel UI and go to the Schema panel. Click "Bulk Edit" and paste in the contents of the "schema.txt" file from this repository.
- You may also want to read syntax_changes.md. It explains how to create a fake user if you don't want to import the dataset we provide (i.e. if you are running a "clean" GraphOverflow), and it describes workarounds for some bugs on Windows.
- Run npm install in the root directory.
- Run npm install in the
- In the root directory, run npm run dev.
Instead of steps 2, 3, and 4, you can just run sh ./run.sh.
We provide a dockerized environment so you can run this project.
- Run Docker Compose
The first build will take time.
docker-compose up           # the first time
docker-compose up --build   # if you want to rebuild it
The compose setup has a script that will prepare everything for you. Wait for the deployment to finish and refresh the page if necessary.
- Now go to localhost:3000 or the IP if you are running docker in a VM.
If anything goes wrong
If the page loads but keeps showing the animated loader, something has gone wrong; you should instead see a loaded site with empty questions.
You don't need to do much when running this docker-compose setup. In general, if you are running Docker in a VM the IP is 192.168.99.100 (between 99 and 199; you can check it via docker-machine). In that case, you will need to change all addresses in the code from localhost or 127.0.0.1 to the VM IP.
Paths you might change:
Note that this docker-compose setup will create a volume in your Docker environment.
If your Docker is bound to localhost, don't change anything.
Clean up docker.
docker-compose down   # or docker-compose rm
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker volume ls
docker volume rm graphoverflow_dgraph
This app is currently compatible with Dgraph v20.xx.x
docker run -it -p 8080:8080 -p 9080:9080 \
  -v ~/dgraph:/dgraph --name dgraph dgraph/dgraph:v20.03.1 \
  dgraph alpha --bindall=true
PS. You can also run this project with Dgraph binaries instead of Docker.
Convert the Stack Exchange data from relational to graph form. From the current directory:
for category in comments posts tags users votes; do go run $category/main.go -dir="$HOME/Downloads/lifehacks.stackexchange.com" -output="$HOME/dgraph/$category.rdf.gz"; done
Run the schema mutation
Load graph data files into Dgraph. From the current directory:
for category in comments posts tags users votes; do docker exec -it dgraph dgraphloader -r $category.rdf.gz; done
|
OPCFW_CODE
|
Views play a central role in TrendHub. A TrendHub View is the collection of everything that is visualized on the current chart. This collection includes the following:
- All active tags and their settings (visualized or hidden, scale and color setting, group settings)
- Visualisation mode (trend, stacked trend or multiscatter)
- The configuration of the chart (e.g. gridlines enabled/disabled, trendline filling enabled/disabled)
- Time frame of the focus chart and the context chart
- All active sublayers and their state (visualized or hidden)
- Lock state of the focus chart
- Live mode enabled or disabled
- Active data scooters
- Added filters and their state (enabled or disabled)
- Added fingerprints (visualized or hidden)
At any time, views can be saved, updated, opened or shared, as explained in this article.
When starting a new TrendHub session, a new view is created and shown as ”New view” in the view bar. Once the current view is saved or when starting from a saved view, the name of that view will be indicated in the view bar.
Follow the steps below to save a view:
- Add tags or attributes to your view (e.g. through the tag and asset browser) and see them displayed in the focus chart. Customise the chart to your liking (set time frame, chart configuration, tag settings, ...)
- Click on the "Actions" button situated top right of the focus chart and select "Save as" in the dropdown.
- A "Save view" panel will appear from the right side of the screen. Fill out the open fields and choose a location to save your view.
- Click the "Save view" button. The name of the saved view will now be indicated in the view bar.
The time frame that will be visualized when opening a saved view in a later stage depends on the view type, which is defined by the state of the live mode button and the lock state of the focus chart.
The following table explains how the state of the live mode and the lock determines the type of view.
(Table: the view type, historical, live mode fixed time span, or live mode fixed start, is determined by the combination of the live mode state and the lock state.)
The picture below explains conceptually how the view type will affect the visualized time frame when opening the view. t0 represents the moment the view was saved, while t1 represents the moment the saved view is opened. The time frames of the sublayers will be altered in the same way.
Historical views are useful whenever you want to snapshot a specific event. Live mode views are typically used for daily follow-ups to make sure the most recent data is always visualized.
Note: Saved views can be added to DashHub views. The view type will again determine which time frame will be visualized on the dashboard.
Saved views can be accessed from various locations.
- From the "Saved view" option in the menu bar. This opens a right-side panel of the work organizer, filtered on the TrendHub View type.
- From the Work Organizer link on the call to action when there are no active tags. This opens the same right-side work organizer panel.
- From the work organizer itself, which can be found in the top bar.
The following steps explain how to load a view, starting from the "Saved view" option in the menu bar. Except for the initial opening of the work organizer, all steps are exactly the same regardless of where you start.
To load a view, follow these steps:
- Click on "Saved views" in the TrendHub menu . A "VIEWS" panel will open left of the focus chart.
- Click on the "Work organizer" located within the VIEWS panel. This action will open the side panel of the work organizer, filtered on the TrendHub View type.
- Select the view you wish to open and click on the "Open view" button. A details panel will open for the chosen view containing all info of the saved view.
- Select the load action that needs to be performed in the loading preference section and click "Load view" to execute the loading action.
- Replace current view: Everything that is currently on the screen will be removed and all settings of the saved view will be loaded.
- Only tags and/or attributes: The current visualized time frame and layers will remain unchanged. The tags of the saved view will be added to the current active tag list.
- Layers only: The current visualized time frame, layers and active tags will remain unchanged. The layers of the saved view will be added as additional sublayers.
Note: If tags or attributes within a view you are trying to load have been deleted, your view will still load. However, you will get a warning message telling you that not all tags or attributes have been loaded. The tags or attributes not loaded will be highlighted in the warning.
Note 2: The maximum number of active tags is 35. When selecting the option "Only load tags and/or attributes", this limit may be exceeded; in that case, TrendMiner will cancel the opening action.
Updating saved views
Updating the content of a view
The name of a currently opened view is indicated in the view bar. As soon as any changes are performed (e.g., visualized time frame is altered, new tags are added, ...), TrendMiner will indicate that the view contains unsaved changes.
Click on the "Actions" button and select "Save" in the dropdown to save the applied changes. Select "Save as" to save the current visualisation as a new view.
Editing view details
Editing the name, description and location of the view can be done using the "Edit details" action in the view bar:
- Click on the "Actions" button located at the top right of the focus chart.
- Click edit details. A view panel will open from the right.
- Edit the fields as necessary.
- Click "Edit view".
Note: If you have updated your view, be sure to save these updates first by clicking on the "Actions" button situated top right of the focus chart and clicking on the "Save" option before editing the view's details.
How to share views
A view can be shared in 2 ways:
- By sharing the saved view via the Work Organizer
- By creating a shareable link
To learn how to share your work through the Work Organizer, read the work organizer article here. A view that is shared through the Work Organizer will always reflect the current state of that view.
A shareable link, on the other hand, is a static reference to everything that was visualized at the time the link was created. Future changes are not considered. A shareable link can also be created for non-saved views.
To create a shareable link, click "Actions" and select "Get shareable link" in the dropdown. You can easily copy the link from the pop-up window.
Note: All details about a view (tags, attributes, colors, scalers, shifts, groups, selected focus periods, layers, and graphical displays, incl. chart settings) are included within a shareable link, except for possible included items (filters and fingerprints).
Views created using the shareable link option are stored for up to two years. When the link is not used within 2 years, the URL becomes invalid.
|
OPCFW_CODE
|
feat(cairo_native): add batcher compiler struct
crates/blockifier/src/execution/native/contract_class_manager.rs line 33 at r2 (raw file):
Previously, Yoni-Starkware (Yoni) wrote…
Better to have this private
How do you want to call this function if it's private?
My understanding is that the PyBlockExecutor is going to create the ContractClassManager under Arc when it starts running and then give a clone of the Arc to the compilation thread and to each newly created PapyrusReader. The compilation thread just runs handle_compilation_requests_loop, and the PapyrusReader uses the caches and sends compilation requests.
If this function is private, the compilation thread cannot run it. To deny access to this function from the PapyrusReader we can create two structs: one containing the sender, the other containing the receiver (both containing the caches). Will that be better?
crates/blockifier/src/execution/native/contract_class_manager.rs line 33 at r2 (raw file):
Previously, Yoni-Starkware (Yoni) wrote…
Run the thread inside pub fn new(, and let the struct hold it. Why not?
Sounds good. I'll try that.
crates/blockifier/src/execution/native/contract_class_manager.rs line 33 at r2 (raw file):
Previously, avi-starkware wrote…
The receiver can't be part of the struct. It can only be held by one thread, so rust doesn't let me clone the arc of the struct if the struct contains the receiver.
I think this is nicer now.
To create the manager all you need is the caches struct. The compiler and channel can be created inside the method that generates the struct and starts the compilation thread.
crates/blockifier/src/execution/native/contract_class_manager.rs line 33 at r2 (raw file):
Previously, Yoni-Starkware (Yoni) wrote…
Don't you need to store the thread somewhere? what happens when new is over?
Doesn't it drop the thread?
Please add a super short test to make sure this is working
We can store the JoinHandle returned from thread::spawn, but all it does is allow us to wait for the thread to terminate, and this thread is an infinite loop.
In Rust, a spawned thread stays alive as long as the main thread is alive and the spawned thread has not terminated by itself.
I tried a simple example spawning a thread from inside a function and it stayed alive until it finished even after the function terminated.
crates/blockifier/src/state/contract_class_manager.rs line 69 at r4 (raw file):
Previously, Yoni-Starkware (Yoni) wrote…
I don't want it to stop the batcher. In case the queue is full, let's not propagate this error. Maybe add a log
This size should be part of the config (add a TODO)
Done.
Added TODO where the channel is created.
crates/blockifier/src/state/global_cache.rs line 55 at r12 (raw file):
Previously, meship-starkware (Meshi Peled) wrote…
Should it be changed to Compiled class as well? If so, I'll just put it here to see if it changed after a rebase.
Done.
Let me know what you think
crates/blockifier/src/execution/native/contract_class_manager.rs line 33 at r2 (raw file):
Previously, avi-starkware wrote…
Done.
The termination mechanism and drop might not be strictly necessary, because when the Sender end is dropped the channel closes (unless there are additional clones of the Sender...). When the channel closes, receiver.iter() ends without error.
See here. When a sender is dropped the function disconnect_senders is called, which calls disconnect_receivers if there are 0 senders.
So perhaps we can assume dropping the contract class manager automatically stops the thread since it drops the sender and therefore the receiver. WDYT?
crates/blockifier/src/state/global_cache.rs line 56 at r20 (raw file):
Previously, Yoni-Starkware (Yoni) wrote…
For now, be consistent with native's type here
But where will we cache Cairo 0 contracts? Do you want to add another global cache for that?
|
GITHUB_ARCHIVE
|
Do PSHA maps overpredict or are there shaking deficits in the historic record?
- Northwestern University , Earth and Planetary Sciences, United States of America (firstname.lastname@example.org)
Probabilistic Seismic Hazard Assessment (PSHA) attempts to forecast the fraction of sites on a hazard map where ground shaking will exceed the mapped value within some time period. Because the maps are probabilistic forecasts, they explicitly assume that shaking will exceed the mapped value some of the time. At a point on a PSHA map, the probability p that during t years of observations shaking will exceed the value on a map with a T-year return period is assumed to be described by the exponential cumulative density function: p = 1 - exp(-t/T). The fraction of sites, f, where observed shaking exceeds the mapped value should behave the same way. To assess the 2018 USGS National Seismic Hazard Model maps for California, we created the California Historical Intensity Mapping Project (CHIMP), a 162-yr long dataset that combines and consistently reinterprets seismic intensity information that has been stored in disparate and sometimes hard-to-access locations (Salditch et al., 2020). We use two performance metrics: M0, based on the fraction of sites where the modeled ground motion is exceeded, and M1, based on the difference between the mapped and observed ground motion at all sites. M0 is implicit in PSHA because it measures the difference between the predicted and observed fraction of site exceedances and is therefore a key indicator of map performance.
We explore these metrics for CHIMP. Assuming the dataset to be correct, it appears that the hazard maps overpredicted shaking even after correcting for the time period involved. Assuming the model is also correct, a shaking deficit exists between the model and observations. Possible reasons for this apparent overprediction/shaking deficit include: 1) the observations in CHIMP are biased low; 2) the observation period has been less seismically active than typical, either by chance or through temporal variability due to stress shadow effects; 3) the model overpredicts due to either the earthquake rupture forecast or the ground motion models. Similar overpredictions appear for past shaking data in Italy, Japan, and Nepal, implying that seismic hazards are often overestimated. Whether this reflects too-high models and/or biased data remains an important question.
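For scale, a toy evaluation of that exponential model (the 475-year return period here is purely an illustrative assumption, not a value from the study):

import math

t, T = 162.0, 475.0          # 162 yr = CHIMP span; T = assumed return period
p = 1.0 - math.exp(-t / T)   # expected fraction of sites exceeding the map
print(f"{p:.3f}")            # ~ 0.289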
How to cite: Salditch, L. and Stein, S.: Do PSHA maps overpredict or are there shaking deficits in the historic record?, EGU General Assembly 2021, online, 19–30 Apr 2021, EGU21-12450, https://doi.org/10.5194/egusphere-egu21-12450, 2021.
|
OPCFW_CODE
|
We do this at my company. We run thousands of MySQL master-master pairs that use this method. We developed a service that performs the switch you describe. We call this a "successover" because that has a more positive connotation than failover. We can and do run this anytime of day, without even notifying the app team that we are doing it.
Here are the general steps:
"Master1" is the current writeable master MySQL instance, where the VIP is defined.
"Master2" is a replica, in read-only mode.
- Make Master1 read-only.
- Remove VIP from Master1.
- Start killing any outstanding queries on Master1. Continue killing any queries in a loop, in case the apps using it try to run new queries. We shouldn't rely on the prior steps terminating queries or connections. I.e. clients might have bypassed the VIP and connected directly to the physical IP address.
- Wait for replication lag on Master2 to reach 0. Ideally, compare slave status on Master2 to master status on Master1. If the failover occurred because Master1 is down, then you can't do that comparison, but you can compare if slave status on Master2 shows SQL thread is caught up to the IO thread.
- Make Master2 writeable.
- Add VIP to Master2.
- Stop killing queries on Master1.
When it's automated, this results in less than a second of downtime, provided there is no significant replication lag. Moving a VIP is a lot faster than updating DNS.
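A condensed sketch of steps 1 through 6 in Python, assuming mysql-connector-python, two already-open admin connections, and a site-specific move_vip helper (all names here are hypothetical, and real tooling needs timeouts, logging and error handling):

import time
import mysql.connector  # assumed client library; any DB-API driver works similarly

def successover(master1, master2, move_vip):
    """Sketch of the read-only flip described above."""
    c1 = master1.cursor(dictionary=True)
    c2 = master2.cursor(dictionary=True)

    c1.execute("SET GLOBAL read_only = ON")      # step 1
    move_vip("remove", "master1")                # step 2 (site-specific)

    # Steps 3 and 4: keep killing stray queries on Master1 while waiting
    # for Master2's replication lag to reach zero.
    while True:
        c2.execute("SHOW SLAVE STATUS")
        status = c2.fetchone()
        if status and status["Seconds_Behind_Master"] == 0:
            break
        c1.execute("SHOW PROCESSLIST")
        for proc in c1.fetchall():
            if proc["Command"] == "Query" and proc["Id"] != master1.connection_id:
                c1.execute("KILL %d" % proc["Id"])
        time.sleep(0.1)

    c2.execute("SET GLOBAL read_only = OFF")     # step 5
    move_vip("add", "master2")                   # step 6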
Then the apps must reconnect to the VIP, and thus get access to the new primary instance and re-run any queries they need.
Obviously apps should not use the standby MySQL instance, because the point is to allow it to be taken offline for updates, config changes, etc. This also gives you a quick way to respond if there's a problem with the host server that MySQL is running on. We get a few host crashes or failed disks per week, so we do this switch quickly to make sure the app can continue.
It's unavoidable that this causes a brief "blip" where the apps have their connections dropped and have to reconnect, but it's more brief of an interruption than any other solution. Still, apps have to be designed to detect a dropped connection and reconnect. Our biggest issue is educating the app developers to do this. They keep complaining that they get alerted for a lost connection, and we tell them, "we've already documented what you need to do -- it shouldn't alert for a single blip."
This system has been working for several years, but we now have a mixed environment where we have many cloud MySQL instances. We need a new solution, because we can't create VIPs using BGP in the cloud.
So we have prototyped using Envoy as a proxy to MySQL. We believe this works, but we need to develop a service to notify the Envoy proxy of the change when we do a successover. Envoy supports GRPC protocol, so we can send it a message dynamically and it'll start routing traffic to a different target MySQL instance. This proxy-based solution should work identically in the cloud as it does in the datacenter. But this solution is probably more work than you had in mind.
We could also use ProxySQL to do something similar, but Envoy already has a lot of adoption within our company for service-to-service traffic, so if we can leverage that instead of a new type of proxy, we have one fewer piece of technology to use.
Update re comments:
Step 4 in the above list waits for replication to catch up while both instances are read-only. So there are no new updates allowed on Master1 during that time, and Master2 only has to execute a set of remaining updates that were committed before Master1 was set to read-only. Hopefully, this is a very brief wait, unless Master2 was already lagging behind by a lot.
The service that runs the successover refuses to even begin the operation if it detects that Master2 has high replication lag. The user is encouraged to try again later.
Of course, in the real world sometimes you have to override that sort of restriction because of urgent circumstances, so there's an option to force the successover. But this comes with a risk, because if you make Master2 the primary and start allowing new queries directly on it, while there are still outstanding updates to process because of replication lag, you end up in a split-brain situation: You could make an update which will then be replaced by an event from the binary log that actually occurred in the past.
So in theory you could enable the VIP on Master2, but leave Master2 read-only until it catches up fully with respect to replication. This at least performs part of the successover and allows clients to read data, but not update, temporarily. This might be acceptable for some apps for a short time, but it depends on the app's requirements.
In practice, our implementation of successover doesn't do this temporary read-only mode. We just try to be very reluctant to use the forced-successover option, because a split-brain is extremely difficult to clean up (it may not be possible). We'd rather try the successover when there is no replication lag.
|
OPCFW_CODE
|
Suggestion: Flag for getting all source code for all installed formulas in a dedicated dir
Provide a detailed description of the proposed feature
We are preparing a Docker image for distribution containing a Brew tap (our own tap) and installation. To easily keep up with license compliance, we would like to be able to easily distribute the sources of all the formulas brew installs with a single command, maybe something like "with-source-code", fetching the source code of all the downloaded formulas into a dedicated source dir (with patches etc.). As I understand it, today this is not easily achieved automatically (if I'm wrong here, sorry); although the source code for the formulas might be in the cache dir by default, I've read that this still isn't 100% complete regarding patches etc. in formulas.
What is the motivation for the feature?
To be able to distribute a brew installation in a docker container, and at the same time easily fullfil license compliance request regarding source code.
How will the feature be relevant to at least 90% of Homebrew users?
They will be able to easily fulfil license compliance requests without manual work. They can easily use brew in container images and the like, knowing that the layer which installs brew formulas is perfectly fine for license compliance requests.
What alternatives to the feature have been considered?
Looking around, I understand this can partially be achieved by looking in the cache and doing it manually, but it would be much better if there were an easy way to achieve this out of the box.
How does including the source help with license compliance? Most licenses just require a copy of the license to be distributed with the software even when repackaged in binary form. This is something we do already.
See
https://github.com/Homebrew/brew/blob/9b42a104ee51fd2c45f91a72fc18bbad73b1aa5a/Library/Homebrew/build.rb#L175-L177
https://github.com/Homebrew/brew/blob/58e34293554a827e717a7383940c2e57c2d0b360/Library/Homebrew/extend/pathname.rb#L418-L433
Great, and that should cover most licenses except for copyleft styles like GPLx, where distribution also means that the distributor must provide source code on request, e.g. if one distributes a docker image. There has been a lot written about this, for example https://www.linuxfoundation.org/wp-content/uploads/Docker-Containers-for-Legal-Professionals-Whitepaper_042420.pdf, and there are some solutions. Fedora/Red Hat has solved it by serving a source image available alongside their base images: https://opensource.com/article/20/7/compliance-containers. For Debian and Alpine there is the OSADL image: https://www.osadl.org/OSADL-Docker-Base-Image.osadl-docker-base-image.0.html. Having an option like that would make it far easier to, for example, use brew in an official docker image and be assured that compliance requests would be super easy to handle.
I know these things are not that fun, but it is a real-life use case for my organization, and if we have it, I guess others do too.
It doesn't do everything that you want, but you can use brew unpack:
Usage: brew unpack [options] formula [...]
Unpack the source files for formula into subdirectories of the current working
directory.
--destdir Create subdirectories in the directory
named by path instead.
--patch Patches for formula will be applied to
the unpacked source.
-g, --git Initialise a Git repository in the unpacked
source. This is useful for creating patches
for the software.
-f, --force Overwrite the destination directory if it
already exists.
-d, --debug Display any debugging information.
-q, --quiet Make some output more quiet.
-v, --verbose Make some output more verbose.
-h, --help Show this message.
That I missed! It should be good enough: with the --patch option, one then only has to step through the formulas and pick them out. Will try. Thanks, I think it is good enough for the use case.
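For the stepping-through part, a rough sketch of that loop (assuming brew list --formula to enumerate installed formulae; the flags come from the help text above and the sources directory name is arbitrary):

import subprocess

# List every installed formula, then unpack its patched source under ./sources
# (per the help text, brew unpack creates one subdirectory per formula there).
formulae = subprocess.check_output(["brew", "list", "--formula"], text=True).split()
for name in formulae:
    subprocess.run(["brew", "unpack", "--patch", "--destdir=sources", name],
                   check=False)  # keep going even if one formula fails to unpack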
|
GITHUB_ARCHIVE
|
Probability plays a large role in Statistics. It is the foundation for hypothesis testing and appears in many other areas of society. Many different probability distributions are used frequently across many sectors. Below you will find the types of probability and commonly used probability distributions; both discrete and continuous probability distributions are addressed.
In this section, the different types of probability will be defined with examples of how to find them. Types of probability include theoretical (or classical), experimental (or empirical) and subjective probability.
Theoretical or Classical
Theoretic Probability of Playing Cards
Experimental or Empirical
Experimental Probability from Frequency Table
Law of Large Numbers
Discrete vs Continuous Random Variables
Probability from a Two-Way Table
There are many different rules for probability, as well as many types. Currently, there are examples for probability that involves combinations.
Finding Prob. Involving Combinations
Finding Prob. Involving Combinations-TI-84
Finding Prob. Involving Combinations-TI-Nspire
Discrete Probability Distributions
In this section, we will look at how to create a discrete distribution and at some commonly used discrete distributions. A discrete random variable is a variable whose values can be counted; it cannot take fractions or decimals, only counting numbers for the random variable x. There are general discrete probability distributions, and then special cases. The videos in this section show how to work with general discrete probability distributions. The graph of a discrete distribution is always a histogram. In the next section, we will look at a common discrete distribution, the binomial distribution.
Creating a Discrete Distribution TI-Nspire
Creating a Discrete Prob. Distribution TI-84
Finding Probabilities from a Prob. Distribution
Mean, Variance, St. Deviation of Random Discrete Variables - TI-84
Mean, Variance, and St. Deviation Discrete Random Variable-TI-Nspire
Expected Value for a Prob. Distribution
Expected Value of a Prob. Distribution TI-84
Expected Value of a Prob. Distribution Nspire
Binomial Probability Distributions
In this section, we will look at the binomial distribution. To be binomial, each outcome must be classifiable as a success or a failure, the probability of success must be the same throughout, the trials must be independent of each other, there must be a fixed number of trials, and the random variable x represents the possible number of successes in the experiment. An example of a binomial experiment would be rolling a die 10 times and recording the number of 5's that are rolled. Binomial distributions are very commonly used discrete distributions.
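As a quick check of the die example, here is a short scipy snippet (scipy is an assumed dependency; the videos below use calculators instead):

from scipy.stats import binom

# Rolling a fair die 10 times; x = number of 5's, so n = 10 and p = 1/6.
n, p = 10, 1/6
print(binom.pmf(3, n, p))                  # P(exactly three 5's)  ~ 0.155
print(1 - binom.cdf(2, n, p))              # P(at least three 5's) ~ 0.225
print(binom.mean(n, p), binom.std(n, p))   # mean = n*p, sd = sqrt(n*p*(1-p))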
Binomial Prob. TI-84
Binomial Prob. TI-Nspire
Binomial Prob. Distribution and Histogram in TI-84
Binomial Prob. Distribution in TI-Nspire
Mean, Variance, and St. Deviation for Binomial Distribution
The normal distribution is the most commonly used continuous distribution in Statistics. The normal distribution follows the empirical rule or the 68-95-99.7 rule. The normal distribution has a lot of uses in our society. The standard normal distribution allows us to compare different distributions with different scales. The random variable x would be converted to a z-score in order to use the standard normal distribution. The standard normal distribution has a mean of 0 and a standard deviation of 1. Normal distributions can have any mean and any standard deviation.
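For instance, with a hypothetical distribution of mean 100 and standard deviation 15, converting x to a z-score and using the standard normal looks like this in scipy:

from scipy.stats import norm

mu, sigma, x = 100, 15, 120
z = (x - mu) / sigma                  # z = (120 - 100) / 15  ~ 1.33
print(norm.cdf(z))                    # P(X < 120)            ~ 0.909
print(mu + sigma * norm.ppf(0.95))    # x cutting off the top 5%, ~ 124.7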
Prob. for Normal Distribution -Z-table
Prob. for Normal Distribution-TI-84
Prob. for Normal Distribution-TI-Nspire
Finding z-score Corresponding
to Given Area - z-table
Find z-score Given Area : TI-84
Find z-score: TI-84
Find z-score Given Area - TI-Nspire
Find z-score TI-Nspire
Find X-Value for a Normal Distribution - TI-84
Central Limit Theorem
The central limit theorem states that if you start with a normally distributed population (any sample size), or if your sample size is at least 30, the sampling distribution of the sample means will approach a normal model with mean equal to the population mean and standard error equal to the population standard deviation divided by the square root of the sample size. The sampling distribution of the sample proportions will also approach a normal model.
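A small worked example with hypothetical numbers (population mean 50, standard deviation 8, sample size 36):

from scipy.stats import norm

mu, sigma, n = 50, 8, 36
se = sigma / n ** 0.5                    # standard error = 8 / 6 ~ 1.333
print(norm.cdf(52, loc=mu, scale=se))    # P(sample mean < 52)  ~ 0.933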
The Central Limit Theorem
Mean and Standard Error of Sampling Distribution of Sample Means
Finding Prob. of Means
in a Sampling Distribution
Finding Prob. of Means in a Sampling Distribution-TI-84
Finding Prob. of Means in a Sampling Distribution-TI-Nspire
|
OPCFW_CODE
|
Introducing: Azure Sentinel Data Exploration Toolset (ASDET)
Security Analysts deal with extremely large datasets in Azure Sentinel, making it challenging to efficiently analyze them for anomalous data points. We sought to streamline the data analysis process by developing a notebook based toolset to reduce the data to a more manageable format, effectively allowing analysts to easily and efficiently gain a better understanding of their dataset and detect anomalies therein. Our toolset has three main components that each provide a different way of turning raw data into useful insights: data inference, feature engineering, and anomaly detection.
This project is a set of Python modules intended for use in Jupyter notebooks. These, along with sample notebooks, are open source and available on GitHub for use by the community. If you would like to follow along with the example notebooks, or to learn more about ASDET, you can do so at the GitHub repo.
You can find the notebook for this section in the identification folder in the GitHub repo.
What are entities in the context of Azure Sentinel? An Azure Sentinel workspace contains many tables, which contain different types of data that we classify into categories called entities. For example, the data of a particular column in a particular table might be an instance of an entity like IP address. Other common entities include account, host, file, process, and URL. It’s useful for Security Analysts to know what entities are in their dataset because they can then pivot on a suspicious data point or find anomalous events.
The goal of this section is to automatically infer entities in the dataset. We want to do this because entities are key elements for analysts to use in investigations and can be effectively used to join different datasets where common entities occur. We detect entities using regular expressions for the entities and applying these to each column in a table. Since most entities have unique identifiers with patterns specific to that entity, using regular expressions usually leads to accurate results. When more than one regular expression matches for a column, we resolve this conflict by first comparing the match percentages and choosing the entity for the regular expression with the highest match percentage. If the match percentages are the same, we use a priority system which assigns a priority level to each entity based on the specificity of the regular expression. For example, an Azure Resource Identifier looks like a Linux or URL path and so also matches the regex for a file, so the Azure Resource ID regex has a higher priority than the file regex.
Note: some entity identifiers such as GUIDs/UUIDs are not generally detectable as entities since patterns like this are not specific to any single entity.
If an analyst wanted to know the entities in the table OfficeActivity (which contains events related to Office 365 usage), they would simply import the Entity Identification module, select the table from a dropdown list, and run the detection function on it. Then they would be able to see which entities were found in the table and which columns they correspond to.
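In outline, the per-column matching could look like the following sketch (the patterns and the 80% threshold are simplified stand-ins, not the module's real values):

import pandas as pd

ENTITY_PATTERNS = {                    # simplified stand-in regexes
    "ipaddress": r"^\d{1,3}(?:\.\d{1,3}){3}$",
    "url": r"^https?://\S+$",
}

def infer_entities(df: pd.DataFrame, min_match: float = 0.8) -> dict:
    """Map each column to the entity whose regex matches the largest share of values."""
    inferred = {}
    for col in df.columns:
        values = df[col].dropna().astype(str)
        if values.empty:
            continue
        scores = {e: values.str.match(p).mean() for e, p in ENTITY_PATTERNS.items()}
        entity, score = max(scores.items(), key=lambda kv: kv[1])
        if score >= min_match:
            inferred[col] = entity
    return inferred

Ties between entities would then be broken by the priority system described above.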
In addition, analysts may find visualizations of entity-table relationships helpful, particularly when identifying elements such as common entities between tables.
Using the results of the previous section, our toolset also allows users to autogenerate KQL queries to investigate a specific instance of an entity. For example, if the analyst wanted to know where the user firstname.lastname@example.org appears in the dataset, they would pass the email address as well as its entity type, which is account, into the query function. A list of KQL queries is returned which can be run to find where the email@example.com email is found.
You can find the notebook for this section in the feature engineering folder in the GitHub repo.
What is feature engineering? When dealing with large datasets, it is often impossible to develop models that use the entirety of the features (columns within the dataset) available to us in the feature space. We use feature engineering to pick and choose the most important features. However, when dealing with unknown data, it is often time consuming to pick and choose the most important features, so we developed a programmatic way to reduce the dimensionality of a dataset by picking features that are relevant to us.
Our toolset is composed of two broad areas: the data cleaning toolkit and the data signature toolkit. The data cleaning toolkit is composed of several functions that were able to reduce the features (columns) in our test datasets by approximately 50%.
To clean the table automatically, we can simply import our module and call our function on our Pandas DataFrame.
from utils import cleanTable

result = cleanTable(df)
The cleanTable module contains functions for the following tasks:
The table below shows the result of running feature engineering on our sample dataset “Office Activity”. It managed to reduce the number of columns from 131 columns to the 46 most important columns.
The data signature toolkit builds on the Binarization Mapping function mentioned previously. It works by assigning a “signature” to each unique row of data based on whether columns are populated with data or not. In the binary signature 1’s represent a present value in that column and a 0 represents an absent value in that column. For example, 1100 would indicate that the first two columns are filled in and the last two columns are not. We can use data signatures to learn more about the following:
To call the data signature, we import the module and call the findUniques function on our Pandas Dataframe.
from signature import DataSignature

data = DataSignature(df)
data.generateSignatures()
data.findUniques()
The animated GIF below shows us an example of how the data signature notebook works.
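In pandas terms, generating such signatures could look roughly like this (a sketch, not the module's actual implementation):

import pandas as pd

def generate_signatures(df: pd.DataFrame) -> pd.Series:
    """Return one '1'/'0' string per row: 1 = column populated, 0 = empty."""
    populated = df.notna() & (df.astype(str) != "")
    return populated.astype(int).astype(str).apply("".join, axis=1)

# generate_signatures(df).value_counts() then shows how many rows share each
# signature; rare signatures often point at unusual event shapes.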
Anomalies can be defined as any data point that does not follow a normal behavior. It can be very effective in security analysis by helping focus analysts on key events which would otherwise be very difficult to find in large datasets.
ASDET Anomaly Detection gives security analysts the option to explore data and identify anomalies through user selected entities (obtained using the data inference described earlier) and other features (data columns) whilst reducing the need to code and model. We have implemented two anomaly modeling methods – Isolation Forests and Time Series Analysis.
You can find the notebook for this section in the isolation forest folder in the GitHub repo.
A security analyst can identify anomalies in any Azure Log Analytics table through the Isolation Forest ML model. They can do this by selecting a table, an entity, other features (columns), and the time range. The entities can be easily derived using the Entity Identification feature of ASDET that we covered earlier in the blog, and are presented to the user in the form of a drop-down menu. After selecting entities and features, the data is cleaned and the machine learning model, Isolation Forest, identifies anomalies via an anomaly score generated for each datapoint, classifying how anomalous it is. For example, an anomaly score of 0.7 indicates a highly anomalous point, whereas 0.1 indicates a mostly normal one.
You can learn more about the Isolation Forest algorithm here - Isolation forest - Wikipedia
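A bare-bones version of that modeling step with scikit-learn, on synthetic stand-in data (columns and parameters are illustrative only; note that sklearn flags anomalies as -1, while the flag convention described below uses 1):

import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in: one row per user, columns = login count, distinct operations.
rng = np.random.default_rng(0)
X = rng.poisson(lam=(20, 5), size=(500, 2)).astype(float)
X[:3] *= 10                             # plant a few extreme users

model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
flags = model.fit_predict(X)            # -1 = anomalous, 1 = normal
scores = -model.score_samples(X)        # higher = more anomalous
print(np.where(flags == -1)[0])         # indices of the flagged users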
The importance of these anomalies and anomaly score is that they help security analysts identify users that exhibit unusual activity (which could be suspicious activity) that otherwise would be challenging to spot within a large dataset. Security Analysts can then further explore this flagged data through various visualization methods and single out any areas they determine to be malicious.
In the animation below, the user selects their entities, features and time range. A subset of the selected Azure Sentinel table is then created based on those selections, and the data is modeled to identify any anomalous users.
After modeling, users are marked as anomalous or not: a flag '1' indicates an anomalous user and a flag '0' a non-anomalous one. Here, nine datapoints are marked anomalous because of a high number of login times, operations, and other user-selected columns. Outputs are available in several formats, such as an Excel file or a DataFrame, containing both the data itself and counts of distinct occurrences. This is shown in the following animation.
The following image shows a histogram of the obtained Anomaly Scores for each user with a right tail end and adjustable bin sizes. This visualization helps identify how the distribution of the anomaly scores look throughout the modeled data.
The following image shows a line graph for the total number of logins for each user. The highest points signify anomalies and are further visualized in another line graph. This visualization helps identify how the distribution of the number of logins per user look throughout the modeled data.
The following animation shows a series of bar graphs for each anomalous user, visualizing the distribution of their total logins over time. This visualization shows how often a user has logged in and whether their logins are consistent or unanticipated (suspicious).
X – axis = Dates (YYYY-MM-DD), Y-axis = Number of total logins
You can find the sample notebook for this section in the anomaly folder in the GitHub repo.
What are time series? Time series are a way for us to measure one or more variables with respect to time. This is useful when dealing with security log data because all of the features (columns) in security logs have an associated time stamp. In our case, we are modelling the distinct values within a feature per hour (e.g., number of distinct client IP addresses) using the MSTICPy Time Series decomposition functions. By generating a set of time series models for a set of features, we can identify common trends during certain time periods for all selected features at once. This allows analysts to more quickly analyze multiple features at once within a time series.
*Please note that this method is not truly multivariate timeseries analysis, but rather we independently generate the time series for each individual feature and form a composite image representing the dataset. It also allows users to discern if a timeframe is anomalous within a single feature or multiple features.
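A rough per-feature equivalent using statsmodels' STL decomposition (the 3-sigma residual threshold is an assumption; MSTICPy's time series functions wrap a similar approach):

import pandas as pd
from statsmodels.tsa.seasonal import STL

def flag_anomalies(counts: pd.Series, threshold: float = 3.0) -> pd.Series:
    """counts: hourly distinct-value counts indexed by timestamp."""
    resid = STL(counts, period=24).fit().resid   # daily seasonality, hourly data
    z = (resid - resid.mean()) / resid.std()
    return counts[z.abs() > threshold]           # hours flagged as anomalous

# Usage sketch, with hypothetical column names:
# hourly = df.groupby(pd.Grouper(key="TimeGenerated", freq="1h"))["ClientIP"].nunique()
# print(flag_anomalies(hourly))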
The user first selects a table to analyze. From there, they can select a subset of features as well as a timeframe they want to analyze.
The user can then query the data automatically for that time frame, model the time series, and map anomalies to the time stamps. The result is a table displaying the timestamps at which anomalies occur.
The user can then choose a time range to view, and a graph displaying the unique values within an hourly time frame will be displayed, with anomalies marked in red.
ASDET provides a security analyst a complete set of tools to explore any security log dataset programmatically instead of manually. While the examples here show their use with Azure Sentinel and Azure Log Analytics data, the tools can be used with log data from most other sources.
Exploring data programmatically saves an analyst's time and means they can investigate new datasets quickly and effectively. Moreover, ASDET's capabilities such as Data Inference, Feature Engineering and Anomaly Detection are not restricted to Azure; with slight modification they can be applied to any general dataset. To find out more details and to see the code, check out the ASDET GitHub at microsoft/ASDET (github.com)
|
OPCFW_CODE
|
...process automatically Are you interested to join this virtual team, share profit, experience, knowledge, etc PLZ, send a short information about yourself, and how you can contribute I am also open minded to have any suggestion regarding payment, rules, administration, and so on I know three companies that have the same problems (specially the first
...report or audio recording of your advice. Will need professional photo of you and your resume for the company site. Benefits: you'll be a duly appointed adviser and be able to contribute to a growing company and of course get paid for (quite honestly) very little work outside of you speaking about what you already know. No risk or liabilities whatsoever as
We’re building something people use every day. From ordering a sandwich to book a table at a restaurant, the work we contribute moves the world a few steps forward. And that’s one of the best parts about working on this project—knowing that the work you do helps shape the future. Build a Database Tables with the Restful API and Admin Management,
...the moment we open our eyes every morning, we are bombarded with noises, flashing lights, notifcations. We get good news, bad news. We face stress, suffer from depression, anxiety. A lot of us go through difficult emotional setbacks. Well’ that's life. We cannot change that. But, what if I told you there is a simple tool we can use to help us deal better
We are building a question bank for various software [url removed, login to view] are looking for developers who can contribute to [url removed, login to view] need 500 interview questions on PHP (multiple choice) with varying difficulty levels from 1 to 5(1 being easiest and 5 being toughest).We need 100 questions for each level making it a total of [url removed, login to view] should not ...
...20-40 year old range. Another obstacle is that I am relatively young and young looking. I need a logo that projects confidence, competence, and trust while alleviating anxiety. LASIK is also a sexy/designer procedure that uses laser technology so logos can but don't have to borrow from those aesthetics. But the should definitely try to create
...and "Game for Good"! This is 21st Century research: mass participation, big data and open-innovation problem solving. Players get a great game and the opportunity to contribute to an area of research they care about; Scientists get more valuable data they could ever have dreamed of in a fraction of the time; Funding partners get the credit for
...focusing on the writing, but not putting in any effort to actually get the content promoted - so that's the problem we're aiming to solve. We are looking for writers to contribute topics that we then promote through our own services to build traction. This is a new blog and we are focused on quality blog posts of 1,500+ words with images/screenshots
Evergrande Properties is a Melbourne Based Luxury Property Developer, due to exceptional growth, w...preferably in Melbourne Be conversant with the software solution Estatemaster Strong attention to detail, report writing and analytical skills Ability to work in and contribute to a harmonious team environment Ability to problem solve
I need someone that can program using a microsoft API to perform...com/810092/a-computer-watched-the-debates-and-thought-clinton-happy-trump-angry-sad/ This is the api: [url removed, login to view]://azure.microsoft.com/pt-br/services/cognitive-services/face/&source=gmail&ust=1521823735524000&usg=AFQjCNF3Yke078l1ecEM8GpfLmJJSP6cnw
...minimum individual contribution, admin fee %. When the users submits the pool a smart contract is created and the admin receives a link to share with anyone that wants to contribute funds. The repo for the Ethereum version we would like to build can be found here: [url removed, login to view] NOTE: This repo does NOT have a license so cannot
...of these products while creating a fair price platform for the skilled artisans who have very limited access to these markets. Every time you buy a product from us, you contribute to a livelihood somewhere in our country. ...
I'm looking for a Salesforce expert/developer to assist with integrating and implementing apps, triggers and procedures so we can automate our workflow processes. Thank you
...outlines, though ideas are always appreciated.) 2.Contribute to Facebook and Twitter accounts multiple times per week. [url removed, login to view] with website copy as needed. [url removed, login to view] of managing weekly mass emails. You’ll also have a portfolio of blog posts proven to earn a lot of shares on social media. Finally, we need you to send thes...
...projects using a variety of criteria. It also provides a deep level of analysis on each project. So blockrazor is a combination of THREE things: 1. a social platform that anyone can contribute to (think: wikipedia style), 2. a broad and up-to-the-minute vision of ALL cryptocurrencies, ICOs, and other blockchain projects - what's good about them
...If I'm happy with the qulaity of the code and the delivered code, I will send you next tasks. I'm sure my experiences and knowledge will also help you gain experience and contribute to you carrer. |====> INSTRUCTIONS - I provide the development environnement on which you will be able to build the code. - The development environnement will be a linux
HI I require some info graphics designed to use on my website and for training material. A lot of information hear and I require this to be easy on eye/ easy for user to understand. Open to a variety of ideas and understand all may not going into one interactive graphic so hit me with ideas. Message me and I can send some links to ideas I like as not sure if allowed to post in description...
...re-purposed by copywriters to use across blog posts, press releases, web/seo content, social blurbs, data for info graphics, etc. as applicable. Each content piece should be accompanied by 5 to 15 headlines or sound-bites that can be used across blogs and social media. Meta titles and descriptions that can be used for SEO. We will assign you
...room to accommodate a dishwasher in the layout of your kitchen, install one. Maintenance How maintenance is handled and the time frame in which it is handled can often contribute to a long or short term tenancy. Imagine how frustrating it would be if you had something broken in your home and could not fix it. Tenants will certainly move on from
...you’re curious, motivated, want to be part of a unique community, and help shape the future – then take a look at this opportunity. WRITER, Resource Development, to contribute to and implement content strategy while helping to ensure consistent messaging in style and tone across all Resource Development units. Will work closely with colleagues
|
OPCFW_CODE
|
How to Program Frequencies Into a Uniden Bearcat Scanner
Radio-frequency scanners, like the Uniden Bearcat Scanner, check for active radio communications in your immediate area. They're commonly purchased by hobbyists, but they also have many business applications. For example, warehousers can use them to monitor communications between warehouse staff, truckers and security guards. Frequencies can be added during scanning, but it's quicker and more effective to manually program known frequencies.
Programming a Base Model on the Uniden Bearcat Scanner
Connect the scanner to its supplied antenna, or use an external antenna for better reception. Plug in your scanner. Turn on your scanner and ensure that it's working.
Hold down the "Prog" key on your scanner's keypad until the letters "CH" begin blinking on its display.
Choose a storage bank for your first frequency. This can be either Private, Fire/EMG or Police, depending what the frequency is used for.
Use the numeric keypad to enter the first frequency you want to monitor. For example, if the frequency is 123.4567, you would press the "1," "2" and "3" keys, then the decimal and then the remaining digits.
Repeat the process to add any additional frequencies and press the "Prog" key again to exit the programming mode.
Programming a Handheld Model of the Uniden Bearcat Scanner
Ensure the battery is fully charged, or plug in the AC adapter. Attach the flexible antenna, or connect your portable to an external antenna for better reception.
Press the "Scan" button to put the handheld into scanning mode and press "Manual" to enter manual programming mode. Your scanner will have a number of available channels that can be programmed. Enter the channel number you wish to use and press "Manual" again.
Enter the frequency you wish to store, using the numeric keypad. For example, if the frequency is 123.4567, you would press the "1," "2" and "3" keys, the decimal point and then the remaining digits.
Press the "E" or "Enter" key. The display will flash to show the frequency has been stored successfully. If the scanner beeps, that indicates you've already programmed that frequency into a different channel. Press "E" again to store it anyway, or the asterisk if you want to enter a different frequency.
Press "Scan" again to return to scanning mode.
- Uniden manufactures a wide range of Bearcat scanners. There may be some minor variation in the name of the programming keys.
- Your local Radio Shack typically has a list of frequencies for your area, or you can check on the Internet for radio hobbyist groups.
Fred Decker is a trained chef and certified food-safety trainer. Decker wrote for the Saint John, New Brunswick Telegraph-Journal, and has been published in Canada's Hospitality and Foodservice magazine. He's held positions selling computers, insurance and mutual funds, and was educated at Memorial University of Newfoundland and the Northern Alberta Institute of Technology.
|
OPCFW_CODE
|
/* Cinema Chair class, describes a chair */
/**
* @param key The key, eg: 'A1' or 'C5'
* @param isSelected
*/
function CinemaChair(key, isSelected) {
    const self = this;
    self.key = key;
    self.selected = isSelected;
}
/* Cinema room, describes a room */
/**
* @param chairMap A matrix of booleans describing the chair map
* @param {Element} container A DOM element container
* @param {Element} counterElement The DOM element that displays the selection counter
*/
function CinemaRoom(chairMap, container, counterElement) {
const self = this;
self.chairMap = chairMap;
self.container = container;
self.counterElement = counterElement;
self.chairsArray = [];
self.selectedIds = [];
/**
* Generates all the chairs based in the chairMap
*/
self.genChairs = function() {
let chairId = -1;
for(i in self.chairMap) {
let domRow = document.createElement("div");
self.container.appendChild(domRow);
for (j in self.chairMap[i]) {
let chair = self.chairMap[i][j];
if (chair) {
// Appends the chair element
chairId += 1;
chairObject = new CinemaChair(chairId, false);
let domChair = self.createDomChair(chairId, chairObject);
domRow.appendChild(domChair);
// Add it to the array
self.chairsArray[chairId] = chairObject;
if(self.selectedIds.indexOf(chairId) !== -1) {
self.selectChair(domChair, true);
}
}
else {
let domEmptyChair = self.createEmptyChair();
domRow.appendChild(domEmptyChair);
}
}
}
self.updateSelectedChairs();
};
/**
* Generates a chair DOM element
* @returns {Element}
*/
self.createDomChair = function(id, chairObject) {
chair = document.createElement("span");
chair.classList.add("chair");
chair.classList.add("avaible");
chairImg = document.createElement("img");
chairImg.src = "/vendor/icons/armchair.svg";
chair.chairId = id;
chair.chair = chairObject;
chair.appendChild(chairImg);
chair.addEventListener("click", self.chairClick);
return chair;
};
/**
* Generates a empty chair DOM element
* @returns {Element}
*/
self.createEmptyChair = function() {
chair = document.createElement("span");
chair.classList.add("chair");
chair.classList.add("empty");
return chair;
};
/**
* Handles a chair click
* @param {MouseEvent} event
*/
self.chairClick = function(event) {
const chairDom = event.srcElement.parentNode;
if (chairDom.chair !== undefined) {
const chair = self.chairsArray[chairDom.chair.key];
if (chair.selected) {
self.unselectChair(chairDom);
}
else {
self.selectChair(chairDom, false);
}
}
};
/**
* Selects a chair
* @param {Element} chairDom
* @param {boolean} cookieConsumed
*/
self.selectChair = function (chairDom, cookieConsumed) {
chairDom.chair.selected = true;
// Manages the classes
chairDom.classList.remove("avaible");
chairDom.classList.add("occuped");
if (!cookieConsumed) {
self.selectedIds.push(chairDom.chair.key);
self.updateSelectedChairs();
}
};
/**
* Unselects a chair
* @param {Element} chairDom
*/
self.unselectChair = function(chairDom) {
chairDom.chair.selected = false;
// Manages the classes
chairDom.classList.remove("occuped");
chairDom.classList.add("avaible");
const index = self.selectedIds.indexOf(chairDom.chair.key);
self.selectedIds.splice(index, 1);
self.updateSelectedChairs();
};
/**
* Updates the counter element and cookies
*/
self.updateSelectedChairs = function() {
self.counterElement.textContent = self.selectedIds.length + ' / ' + self.chairsArray.length;
self.saveCookies();
};
/**
* Save the selected chairs to cookies
*/
self.saveCookies = function () {
const expiresDate = new Date();
expiresDate.setTime(expiresDate.getTime() + (60*60*1000)); // cookie expires in one hour
const expires = "expires="+ expiresDate.toUTCString();
const chairsJSON = JSON.stringify(self.selectedIds);
document.cookie = "selectedChairs=" + chairsJSON + ";" + expires + ";path=/";
};
/**
* Sends the selected chairs from cookie to the
* self.selectedIds array
*/
self.getSelectedChairsFromCookies = function() {
// Look up the "selectedChairs" cookie explicitly instead of assuming it is
// the only cookie present; scanning for the first "=" breaks as soon as
// any other cookie exists
const match = document.cookie.match(/(?:^|;\s*)selectedChairs=([^;]*)/);
if (match) {
try {
self.selectedIds = JSON.parse(match[1]);
}
catch (e) {}
}
};
// Init the room
self.getSelectedChairsFromCookies();
self.genChairs();
}
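A minimal usage sketch (the element ids and chair map below are assumptions for illustration, not part of the original code):
// Hypothetical markup: <div id="room"></div> and <span id="counter"></span>
const chairMap = [
[true, true, false, true], // false marks an aisle / empty slot
[true, true, true, true]
];
const room = new CinemaRoom(
chairMap,
document.getElementById("room"),
document.getElementById("counter")
);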
|
STACK_EDU
|
and here I was thinking nobody else on the forum would be interested in this :)
Glad to see I was wrong.
This quote was also interesting, way down the page
This is only a *partial* judgment, which means that the SCO v. Novell case is still
going. Until all claims are resolved, SCO can't appeal(*). And by the time all
claims are resolved, it will be too late for SCO.
(*)Not entirely true. They can ask the judge for permission to file
interlocutory appeal, but I don't see Kimball granting it.
As far as I can research, this seems to be true.
If SCO decides to try to resolve the Novell case quickly and turn around for an appeal, they'll basically have to surrender what charges are left, which will prevent an appeal.
If SCO decides to fight it out with Novell... well, the other cases will go forward, which is a problem because the summary judgment effectively removes SCO's lawsuit against IBM, but not IBM's counterclaims against SCO.
To me, this does raise some questions about the Microsoft deals with SCO and Novell. Microsoft now has some explaining to do over why they purchased licenses from SCO and backed SCO through Baystar. I think this also sheds some new light on the Microsoft-Novell deal... what are the chances that Microsoft saw the implosion coming and decided to try to mitigate the damage Novell could do before the summary judgment came back? Is Novell really holding the upper hand in the deal with Microsoft?
Questions raised, questions answered. It should be interesting to watch the mop-up.
Lemme see if I can wrap my brain around this...
SCO was trying to sue someone for stealing something from them that they had stolen from someone else?
If that's the case, then that whole company deserves to crash and burn. Seriously, I don't mind a little theft here and there, but if you're arrogant enough to steal something and then try to sue someone else for stealing, well, you're just asking for it.
That's like a shoplifter ratting out another shoplifter by informing security that they saw him stealing a shirt while they were busy stealing a pair of pants.
it goes beyond that Netrogo.
It is apparent now that SCO didn't have the rights to Unix as they claimed they did when they took IBM to court, and that they knew they didn't have those rights to begin with.
What SCO apparently hoped to accomplish was having a judge rule that the rights should have been turned over in the Novell deal with oldSCO (the Santa Cruz Operation) if those rights had not been turned over, or declare that the rights had in fact been turned over to begin with.
In other words, the ends justified the means. It didn't matter if SCO didn't have the rights to Unix to begin with, as long as the rights became theirs in the legal actions along the way, everything else would be good.
Current SCO still has the fallback they did develop software for Unix, as well as UnixWare itself, in the intervening years of the Novell deal with oldSCO. In Current SCO's mindset, even if the judge did rule that the Unix rights had not been turned over, Current SCO could continue to make the charges based on the code Current SCO and oldSCO had already written.
It is my opinion that the fallback position isn't going to work either. Current SCO never made any separation between the underlying Unix code from the Novell / oldSCO deal, and the underlying Unix code written up to Current SCO's lawsuit date. Current SCO has treated the entire code base as one product that they wholly owned and had the rights to. As is, the original claim of millions of lines of code that were copied from oldSCO / Current SCO Unix into Linux has already been collectively proven to be a false charge.
As to why SCO did it? Because Microsoft needed to attack Linux. Microsoft needed to slow down Linux development and remove Independent Software Vendor support for Linux. SCO's tenuous claim to Unix rights seemed to be a fairly easy way out for Microsoft to do so, and a couple of license agreements later and funding through Baystar, and the deal was sealed.
Where this is going to get interesting is that Microsoft makes a big deal out of their products not using Unix technology. When Microsoft purchased the Unix license from SCO many in the tech sector regarded the purchase as a funding of SCO, myself among them.
However, after the Novell/Microsoft deal and the recent ruling, it suddenly is a very real question... how much of Microsoft's "new" technology just re-written or copied code from Linux and Unix?
This will probably be a minor boost for Linux products, but a whole new boon for Apple, because they're one of the few companies in a position to take advantage of offering people a domestic version of Unix at anywhere near a reasonable learning curve/cost.
I don't really see how. OSX is based on BSD, which had its copyright issues settled in 1993 with USL v BSDi. The code that SCO has been claiming was licensed to SCO by Novell in 1995. This shouldn't affect BSD or Apple in any way.
it doesn't. Apple was never a target in the original lawsuit against IBM.
Apple is also one of the few companies in the worst possible position to push a domestic version of Unix, mainly because they've been doing so ever since the launch of OSX, and have been repeatedly surpassed each year by both Lindows/Linspire and Xandros in terms of retail sales, not to mention utterly dominated by Red Hat, Novell, IBM, HP, Dell, and other vendors in terms of server and desktop OS installs.
What the ruling will mean for Linux on the desktop is uncertain. The original lawsuit against IBM did not do anything to slow down or stop Linux development. If anything, it forced a review of existing documentation procedures, and also clarified ownership of the code in use. In some aspects, Current SCO's lawsuit assisted in promoting Linux, as several proprietary Unix clients with no reason to move products would have heard about the lawsuit and done their own investigations. Getting tangled up with AutoZone only served to promote Linux to car repair personnel who might never have been introduced to it any other way. Going up against IBM and Novell only ensured that the lawsuit would be read about in most tech-press outlets.
For the most part then, the benefit of the lawsuit has already been realized during the lawsuit. Very few tech companies were waiting on an official ruling to pursue the development of Linux or the implementation of Linux in their products, Dell being one example.
There have also been big announcements from IBM's former computer division, now sold through Lenovo, that ThinkPads could come with SUSE Linux Enterprise Desktop pre-installed. Hewlett-Packard also mentioned several months ago that it would compete with Dell in offering Linux-based desktops as well, although the vendor provider has yet to be confirmed.
Sony also launched the Playstation3 in partnership with IBM and Toshiba without regard to the SCO lawsuit.
So, as far as immediate retail change goes, little will happen.
The biggest change could come from ISV's who have not yet moved to a multi-platform strategy. Microsoft's Intellectual Property claims are even on shakier ground than before as a firm precedent has been set against companies who claim property as their own and attempt to license it... without owning the property.
Even that, however, would not be immediate unless Microsoft immediately sues somebody over IP violations in Linux... and while Microsoft has pulled some really odd stunts in the past, I don't see them being that dumb.
|
OPCFW_CODE
|
Trouble starting nodeos - multiple errors
I'm working off of this:
https://developers.eos.io/welcome/latest/getting-started/development-environment/start-your-node-setup
1.1: keosd starts correctly.
1.2: after I enter the nodeos commands I get the following:
[2] + exit 254 nodeos -e -p eosio --plugin eosio::producer_plugin
--plugin --plugin
I'm unsure if that's expected or an error message.
2.1: I get the following after I run $ tail -f nodeos.log
std::exception::what: Unknown option 'bnet-follow-irreversible' inside
the config file /Users/wilfra/Library/Application
Support/eosio/nodeos/config/config.ini
error 2020-10-22T12:28:00.847 thread-0 main.cpp:131
main ]
/Users/anka/eos/libraries/appbase/application.cpp(298): Throw in
function bool appbase::application::initialize_impl(int, char **,
vector<appbase::abstract_plugin *>) Dynamic exception type:
boost::wrapexceptstd::runtime_error std::exception::what: Unknown
option 'bnet-follow-irreversible' inside the config file
/Users/wilfra/Library/Application Support/eosio/nodeos/config/config.ini
error 2020-10-22T12:35:50.406 thread-0 main.cpp:131
main ]
/Users/anka/eos/libraries/appbase/application.cpp(298): Throw in
function bool appbase::application::initialize_impl(int, char **,
vector<appbase::abstract_plugin *>) Dynamic exception type:
boost::wrapexceptstd::runtime_error std::exception::what: Unknown
option 'bnet-endpoint' inside the config file
/Users/wilfra/Library/Application Support/eosio/nodeos/config/config.ini
I'm not sure what other information is needed here. I'm on MacOS and just upgraded to the latest version of EOSIO via Brew.
Thanks for any help with this.
I was given a solution by conr2d on Telegram.
The problem was I had an old config.ini file. I deleted it and ran the nodeos commands again and it solved these errors. Running nodeos generates a new config.ini if one does not exist.
The file is in: ~/Library/Application\ Support/eosio/nodeos/config/
The key to the problem is this part of the error message:
std::exception::what: Unknown option 'bnet-follow-irreversible' inside the config file /Users/wilfra/Library/Application Support/eosio/nodeos/config/config.ini
It tells you that you have an option in your config.ini file that is no longer valid, and removing it would fix the problem (or at least reveal the next option in your config file that wasn't supported).
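A minimal sketch of the cleanup (the path is taken from the error message above; moving the file aside instead of deleting it keeps a backup in case you need old settings from it):
# Move the stale config aside; nodeos regenerates a fresh config.ini on next start
mv ~/Library/Application\ Support/eosio/nodeos/config/config.ini \
   ~/Library/Application\ Support/eosio/nodeos/config/config.ini.bak
# Then re-run your nodeos command as before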
|
STACK_EXCHANGE
|
Every few weeks we get a status report via email to the webteam inbox. These reports have only two sections we need to worry about: New Hires and Terminations. First, upon receiving a status report you need to add all the information to the google document. Then once people's dates arrive you can make the live site changes.
Adding people to the google doc: every time you get a status report you need to add the person’s information to the google document shared on the drive. You can follow the same steps for New Hires and Terminations here.
- Open the email about Status Reports
- Open the “Directory Updates" google document that's shared with you. This document has two tabs, “New Hires” and “Terminations.” Start by selecting the “New Hires” tab.
- Compare the information between the google document and the email. Make sure no information has been changed for those names already on the sheet (dates occasionally change).
- Add any new people who appear on the latest Status Report that weren’t on your google doc.
- Repeat steps 1-4 for “Terminations” as well.
The google spreadsheets are what you use to keep track of updates for new hires and terminations. You should create (unpublished) profiles for new hires the week before they start. On the day they start you need to publish the profile and all child pages. For terminations you need to unpublish their profile on their last day. Below is a guide to both new hires and terminations.
- Open the "Directory Updates" google doc. This document has two tabs, “New Hires” and “Terminations.” Select the “New Hires” tab. For people whose dates are within a month, you need to create a profile for them.
- Check if the new hires have an existing profile by going to Content Management then All Content then searching for their name in the Title contains box. If they have an existing unpublished page, clear any irrelevant information (room, phone, etc.). If they have an existing published page leave it as is.
- If they have an existing page, skip this step. If they have no existing page, then create one by going to Content Management then Create Content then Person
- Enter in their full name, first name, and last name
- Click Yes to displaying them in the directory
- Click the Page Visibility tab and unpublish them. Once their start day occurs publish their profile.
- Click Save
- Update the google document to "Created" for the visibility section.
- Open the "Status Reports” excel document that’s saved on the desktop. This document has two tabs, “New Hires” and “Terminations.” Select the “Terminations” tab.
- Once someone's last day has passed, go to their profile and click Children. Unpublish any child pages that are related to the role they were just terminated from. IMPORTANT: if they have other roles that they were not terminated from, leave those child pages published. Also leave their main page published.
- To unpublish a page, click its Edit link, then go to Page Visibility and uncheck the box that says Published. Click Save when done.
- Once you've unpublished all their child pages related to the role they were just terminated from you can unpublish their main bio page if they have no other role at the law school.
- Update the google document to "Unpublished" for the visibility section.
|
OPCFW_CODE
|
Hi there, I'm looking for suggestions to determine whether my G4 Quicksilver is really dead. Here's a brief config: G4 867PPC, 1.5 gb ram, 40 & 60 gb drives, OSX10.4.6, old superdrive and Epson inkjet printer.
Last night it presented me with a blank white screen on boot. I pushed in the reset button and at the screen prompt typed "mac-boot". What I got was a OS-9 like folder with a flashing ? on it. Guess it was trying to tell me it couldn't find a disk to boot from.
Managed to get the Superdrive tray to eject, inserted the OS-X install disk and booted from it. Neither the Disk Utility nor the installation script found any disks to boot from. I have two disks installed and both were spinning (I can hear them).
I think my disk controller is Kaput! and it's on the motherboard I believe. This system is 7 years old, doesn't owe me a dime but I won't spend a dime on it either. Suggestions/opinions on how to confirm its death are most welcome before I scavenge the hard disks, maybe the superdrive and a recently added USB2.0 card to use with a new intelMac (Macmini or iMac?-another post).
You may be able to boot from a FireWire enclosure; they're very inexpensive. I'd give that a try. It would be a shame to completely scrap that machine just because it can't boot from internal drives.
If you can boot from the SuperDrive, then at least one of your disk controllers must be OK. You could always try putting a hard drive above your SuperDrive connected on the same channel as the SuperDrive. I'd recommend AGAINST putting the drive in the Zip drive bay because of inadequate cooling. Just sit it on top of the SuperDrive, there's plenty of room up there and the SD doesn't generate a lot of heat.
There is always the possibility that both your hard drives are pooched, don't discount that. If they are stacked one on top of the other, maybe one overheated and took the other one down with it. Also possible that the second drive may still be OK, but just temporarily disabled because of the heat. I've seen that happen in a PC I built. Once I let the hard drive cool off and moved it to a cooler position (not sandwiched between two other hard drives), it started working again.
Mac User since 1989
MacBook Pro 15.4"/2.33GHz Core 2 Duo/4GB/250GB HD/256MB VRAM
Mac mini/2.0GHz Core 2 Duo/1GB/120GB HD
PowerMac G4 "Sawtooth"/1.4Ghz G4/1GB/2 x 120GB HD/64MB ATI Radeon 8500
iPhone 3GS 32GB on Rogers Canada Master of the Art Of Geek.
Thanks Madgunde and Darian for your suggestions. I have decided to remove the hard drives and put them into a new system. Actually have a Macmini coming in a couple of days and I'll mount the drives in enclosures. In the final instance I decided that I wouldn't spend any money on this old box just focus on a replacement.
Thank Gerbill for the opinion and the suggestion to sell it. But if the HDD controller is dead on the motherboard what good will it do someone? As you can probably tell the only time I've held a soldering iron was when I had a wood-burning kit as a 10 year old!
All right I've got it now - now I'm not holding the point end but the blunt end! I've got to say that I'm not interested in replacing the m/b at all. I think this system has served me well and I'm intent on replacing it. I'll probably offer it up at minimal charge to someone who can replace m/b and add their own disk (I've kept mine to put into external enclosures).
Cheers to all and thanks for the suggestions - I'm switching to the intel Macs and moving with the tide!
|
OPCFW_CODE
|
Do you think my skins suck?
Do you think I take my sweet ass time with updates? (more of a fact than an opinion...)
Are you a CSS wizard?
Then fear not, for custom CSS skinning has finally arrived! You can learn more about how to create your own CSS skins with an example file I've uploaded to GitHub's gist. To use a custom CSS file, simply add &css=[url to CSS file] to the end of the URL (use ? instead of & if that's the only parameter you're adding to the URL).
Please note that I highly recommend using GitHub's gist for uploading your custom CSS files since it's simple, quick, and lets you make changes easily. For image uploads, you can use any image host that allows hotlinking direct images, like Imgur.
NOTE: If you're using gist, make sure to name your CSS file! It doesn't matter what you call it, so long as it ends in .css; otherwise the site won't read it and will treat it as plain text.
Also, if you're thinking of making your own fightstick skin, there's good news too! I've added a few HTML tags to output the information necessary to style a fightstick. You can read more about that in the CSS file I linked above.
Here's the updated parameters list:
- Added custom styles! Now, you too can make your own controller skin and share it with your friends!
- To apply a custom skin, add &css=[url to CSS file] to the end of the address.
- Added some HTML for future fightstick functionality.
Code: http://mrmcpowned.com/gamepad/?p=[Player Number]&s=[Style Number]&sc=[Scale Multiplier]&css=[URL to CSS file]
For example: http://mrmcpowned.com/gamepad/?p=1&s=3&sc=1.3&css=https://gist.github.com/mrmcpowned/31eddbc3f8344ee69799/raw/d272922bc388ea845a66dcd2a1e383d0b35706fe/custom%2520gamepad%2520css%2520example.css
- Player number can be 1-4, and a player must be specified for this to work.
- Default scale multiplier is 1 (e.g. 1 times the size of the controller skin) and can also take decimal values.
- There's no point in setting a style if you're setting a custom CSS, as the custom CSS will override the style ID.
Style numbers are as follows:
0 = White Xbox One Controller
1 = Xbox One Controller (not necessary to add &s= for this skin as it's set by default)
2 = PlayStation 3 Controller
3 = NES Controller
4 = Xbox 360 Controller
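As a rough sketch only, a custom skin file might start out like this (the selectors below are hypothetical placeholders; the example gist linked above documents the page's actual class names):
/* Hypothetical selectors -- check the example gist for the real class names */
.controller {
background: url("https://i.imgur.com/your-skin.png") no-repeat; /* any host that allows hotlinking */
}
.controller .button.pressed {
opacity: 0.5; /* e.g. dim a button while it is held down */
}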
|
OPCFW_CODE
|
Editing Compositions with Intervals¶
A HoloEdit composition is made up of Intervals.
Intervals are displayed as long grey boxes aligned with individual stages.
Each stage can have zero or more intervals, and each interval represents a specific portion of time where that stage will be applied to the Composition.
The stage determines what kind of edits will be applied to a track, but the intervals contain the frame range and the particular settings for each edit.
If a stage contains gaps of time without intervals, no edits are applied for those frames, and the data inside intervals in previous stages (if any) flow directly downwards to subsequent stages.
Most compositions start out with a Load Asset stage containing one long interval to add initial data to the track, and then go on to contain more stages and intervals for editing and compression.
Creating an Interval¶
There are multiple ways to create intervals:
Via the “Add Interval” option on the Timeline right-click menu when a “Track Time Selection” is present
Via the “Propagate Intervals” option on the Timeline right-click menu when nothing is selected
Automatically at Stage Creation: Certain stages, or methods of creating stages, may automatically produce pre-configured intervals. For example, the Load Asset stage created by dragging a clip onto the Track View.
Automatically when dragging-and-dropping a clip into the Track Editor
Intervals appear as grey boxes on the Timeline. Each interval displays many pieces of information:
Sample Status Display: The Sample Status is the state of the data present in each frame in an interval. This is conveyed through a few visual indicators
Light and dark highlighting on interval body: Areas which are lighter have data present (loaded or created). Areas which are darker have no data
Dashed Yellow Bar at the top of the Interval: Indicates that the currently present data is “dirty” and requires reprocessing to reflect the current settings or input data
Solid Red Fill On Interval: The highlighted frames were returned Failed from the Job Server
Audio Waveforms: On Load Asset stages including audio, the audio waveform is displayed as a series of thin grey bars at the center of the interval. The length of the bars corresponds to the audio.
Sample Status Icons: For intervals with Failed or Dirty segments, icons will be displayed at the top right corner of the Interval
Yellow Warning Sign: This interval has one or more dirty samples
Red Warning Sign: This interval has one or more failed samples
Interval Range: The frame numbers of the first and last frames of the interval are displayed as a pair of numbers at the center of the interval, such as “0-10”
Processing: Once an interval has been Executed it will begin processing, and the interval will be animated to indicate it is currently processing. The animation will display horizontally scrolling diagonal bars throughout the interval.
Processing Status: A colored line appears at the bottom of the interval box, indicating its status. Each color indicates a specific state:
Green: Successfully processed
Red: Failed processing
Intervals that belong to certain stages will change status all at once, and others will update one frame at a time. Once a job has finished, you can hide the status lines by Dismissing the job from the Job Viewer in the Tools menu.
Current Sample: A Square Bracket at the top of the interval indicates the position and length of the data currently under the playhead. Appears in light gray if that sample is currently being rendered in the viewport, dark gray otherwise
Segment Divider: Vertical dashed Light Grey Lines through the interval indicate the boundaries between stabilized segments
Keyframe Marker: The Keyframe Marker is a small diamond marker that indicates where on the interval a Keyframe is located, if present. The Keyframe marker can appear one of two ways depending on the interval state
Large, filled diamond: A user-specified keyframe. The diamond can be dragged to adjust the position of the keyframe
Small, hollow diamond: A derived keyframe. This keyframe is determined earlier in the Track, and cannot be edited here
When an interval is selected, it shows vertical pink tabs at its edges. These tabs can be dragged to resize the interval (see below).
Adjusting an Interval¶
The position of intervals can be adjusted by selecting one or more intervals, and then clicking and dragging on the grey body of one of the selected intervals.
The length of an interval can be adjusted by selecting an interval, and then clicking and dragging on the pink end-tabs. Intervals can be of any size, but must be at least one frame long. Intervals can have gaps between them but are not allowed to overlap with other intervals on the same stage.
When multiple intervals are selected, clicking and dragging any selected interval will move all selected intervals together. If two or more selected intervals have starting or ending frames positioned on the same frame, dragging any one of those starting or ending handles will move all of those handles in unison.
You can also adjust the position and length of intervals numerically using the Stage Inspector.
For almost all stages, moving an interval does not move any data inside it. The exception is the Load Asset Stage.
When an interval in a Load Asset stage is moved, the data inside the interval (which is from its associated clip) is moved along with it. Changing the size of a Load Asset interval will include more or less of the clip, but will not move the data.
Data and Intervals¶
Most stages operate using the data streams present above them in the track as an input. When stages are re-arranged or intervals are re-run with new settings, that stage’s output data may change, causing all subsequent intervals to be out of date. HoloEdit detects this state and labels those intervals “dirty”, indicated by a dashed yellow line above the interval. Because dirty intervals are using out-of-date streams, they will return different results when they are run again. Unless you made changes to a previous stage by mistake, you should usually re-run intervals as soon as they become dirty.
Right-clicking on an interval and choosing “Execute” will automatically create a new job and dispatch that job to your selected Job Server.
If one or more intervals are selected when you choose execute, all selected intervals will be executed as a single job. If subsequent intervals from multiple stages are selected, the data from the first stage will be uploaded to the job server, and the job server will process each stage in order, beginning on the next as soon as data is available from processing the previous stage. This way, you can set up multiple stages and intervals, and then execute everything at once.
Any interval currently processing or awaiting results from the Job Server will be locked.
|
OPCFW_CODE
|
Placing a logo in the bottom right corner of a document
I have made a LaTeX document that has a front page with a logo in the bottom left corner. This logo now needs to be placed in the bottom right corner instead, and on every page. If I place my current logo in the bottom left corner of every page, the text overlaps the logo, so when the logo is in the bottom right corner the text also needs to float around it.
If the margins of the document are changed I would still like the logo to move to the bottom right, if at all possible.
To do the logo in the bottom left I have used the eso-pic package.
\AddToShipoutPictureFG*{\put(0,0){\includegraphics[width=40mm,scale=1]{images/logo.png}}}
\documentclass[a4paper, twoside, 12pt, hidelinks, final]{article}
\usepackage[top=1in, bottom=1in, left=0.75in, right=0.75in, headheight=35.4pt, footskip=35.4pt]{geometry}
\usepackage{eso-pic} % https://ctan.org/pkg/eso-pic?lang=en
\usepackage{graphicx} % https://ctan.org/pkg/graphicx?lang=en
\AddToShipoutPictureFG*{\put(0,0){\includegraphics[width=40mm,scale=1]{images/logo.png}}}
\begin{document}
\clearpage\mbox{}\clearpage
\end{document}
You should take a look at the eso-pic package.
@Bernard I have already, I did mention above that I am using it already, but how to place the logo in the bottom right and have the text wrap around it?
Sorry, I read too fast your question. You want the logo at the bottom right of the text area or of the physical page?
@Bernard That's ok no worries, I would like it to be at the bottom right of the physical page.
Could you post a compilable code, with the geometry of the page, the document class, the paper format, &c.?
@Bernard Sure I have done that now.
Would you like the logo to be on the right side or on the outer side (i.e. alternately on the left and on the right side). Also, I don't know how to wrap text at that place automatically, so the size of the logo should be slightly smaller.
Trying to get the text to float around the logo will require things like flowfram and manually inserting fake paragraph breaks (see https://tex.stackexchange.com/questions/163075/how-to-arrange-a-large-picture-on-the-side-on-the-current-page/163104?r=SearchResults&s=1|36.2346#163104). Either shrink the logo or increase the margin.
@Bernard The logo would need to be always on the right side. The logo also needs to be the same size on all pages.
This shows how to move the image to the lower right corner. It shrinks the image to fit the margin.
\documentclass[a4paper, twoside, 12pt, hidelinks, final]{article}
\usepackage[top=1in, bottom=1in, left=0.75in, right=0.75in, headheight=35.4pt, footskip=35.4pt]{geometry}
\usepackage{eso-pic} % https://ctan.org/pkg/eso-pic?lang=en
\usepackage{graphicx} % https://ctan.org/pkg/graphicx?lang=en
\newsavebox{\logo}
\savebox{\logo}{\includegraphics[width=0.75in]{example-image}}% do once, then copy
\AddToShipoutPictureFG*{\put(\LenToUnit{\dimexpr \paperwidth-0.75in},0){\usebox\logo}}
\begin{document}
\clearpage\mbox{}\clearpage
\end{document}
This solution sets the margin to match the image size.
\documentclass[a4paper, twoside, 12pt, hidelinks, final]{article}
\usepackage{graphicx} % https://ctan.org/pkg/graphicx?lang=en
\newsavebox{\logo}
\savebox{\logo}{\includegraphics[scale=0.3]{example-image}}% do once, then copy
\usepackage[top=1in, bottom=1in, left=\wd\logo, right=\wd\logo, headheight=35.4pt, footskip=35.4pt]{geometry}
\usepackage{eso-pic} % https://ctan.org/pkg/eso-pic?lang=en
\AddToShipoutPictureFG*{\put(\LenToUnit{\dimexpr \paperwidth-\wd\logo},0){\usebox\logo}}
\begin{document}
\clearpage\mbox{}\clearpage
\end{document}
This solution uses flowfram. I added an \intextsep gap above and a \columnsep gap beside the logo. I also added a \dp\strutbox gap between the two flow frames to try to emulate \baselineskip.
In this case, the first break occurs between two paragraphs, but you still need to add a \framebreak to prevent the next paragraph from being formatted at the wrong width. To manually insert \framebreak and \nopar, run it without and the appropriate location should be obvious.
\documentclass[a4paper, twoside, 12pt, hidelinks, final]{article}
\usepackage[top=1in, bottom=1in, left=0.75in, right=0.75in, headheight=35.4pt, footskip=35.4pt]{geometry}
\usepackage{graphicx} % https://ctan.org/pkg/graphicx?lang=en
\usepackage{flowfram}
\usepackage{lipsum}% MWE only
%framebreak within a paragraph
\newcommand{\nopar}{\parfillskip=0pt\framebreak\parfillskip=0pt plus1fil\noindent}
\newsavebox{\logo}
\savebox{\logo}{\includegraphics[scale=0.5]{example-image}}% get width and height
\newstaticframe{\wd\logo}{\ht\logo}{\dimexpr \paperwidth-0.75in-\wd\logo}{-1in}
\setstaticcontents{1}{\usebox\logo}
\newflowframe{\textwidth}{\dimexpr \textheight+1in-\ht\logo-\intextsep}{0pt}{\dimexpr \ht\logo-1in+\intextsep}
\newflowframe{\dimexpr \textwidth+0.75in-\wd\logo-\columnsep}{\dimexpr \ht\logo-1in+\intextsep-\dp\strutbox}{0pt}{0pt}
\begin{document}
\lipsum[1-6]\framebreak
Sed commodo posuere pede. Mauris ut est. Ut quis purus. Sed ac odio. Sed vehicula
hendrerit sem. Duis non odio. Morbi ut dui. Sed accumsan risus eget odio. In hac habitasse
platea dictumst. Pellentesque non elit. Fusce sed justo eu urna porta tincidunt. Mauris felis odio,\nopar
sollicitudin sed, volutpat a, ornare ac, erat. Morbi quis dolor. Donec pellentesque, erat ac sagittis
semper, nunc dui lobortis purus, quis congue purus metus ultricies tellus. Proin et quam. Class
aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos hymenaeos. Praesent
sapien turpis, fermentum vel, eleifend faucibus, vehicula eu, lacus.
\lipsum[8-10]
\end{document}
This resolved half of my question. The 2nd solution worked best for me; the first cut the logo in half, with half on the page and the other half off the page.
Now if only I could get the text wrapping around the logo, that would be great. I looked at the link you posted but didn't understand much of what was said; I sort of understood what they were trying to do with the help of the pictures.
The link was mostly to show the manual addition of \nopar when the column width changed. As for the logo, you must have removed the [width=0.75in] argument (or replaced it with [width=40mm] as in the original).
Ah you are correct, I did do that! Is there not an easier solution to achieve MS Word like wrapping around images?
You can use \parshape to implement word wrapping on a single paragraph, but not to synchronize the gaps to page locations. Flowfram will change the column width automatically, but when you split a paragraph, the formatted width does not change until the next paragraph starts.
Thank you for your help, but I have been unable to figure out the word wrapping with \parshape, so instead I have just decided that I'll put the logo on the front cover of the document only. Not what I was really wanting, but it'll have to do.
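For anyone who still wants to attempt the \parshape route mentioned above, here is a minimal sketch of the idea (the 40mm matches the logo width from the question; note that \parshape shapes only a single paragraph and cannot be synchronized with page position, which is exactly the limitation described):
\documentclass[a4paper, 12pt]{article}
\usepackage{lipsum}% MWE only
\begin{document}
% Three line specs: lines 1-2 full width, line 3 onward 40mm narrower,
% leaving room for a logo on the right. \parshape repeats its last spec.
\parshape=3
0pt \textwidth
0pt \textwidth
0pt \dimexpr\textwidth-40mm\relax
\lipsum[1]
\end{document}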
|
STACK_EXCHANGE
|
My view on what will kill 'traditional' system administration
Phil Hollenback recently wrote DevOps Is Here Whether You Like It Or Not, in which he writes that traditional system administration is dying. While I sort of agree with him about the death, I don't think it's necessarily for the reasons that Phil points to.
Fundamentally, there has always been a divide between small systems and large systems. Large systems have had to automate and when that automation involved applications, it involved the developers; small systems did not have to automate, and often do not automate because the costs of automation are larger than the costs of doing everything by hand. Moving to virtualization doesn't change that (at least for my sort of system administration, which has always had very little to do with shoving actual physical hardware around); if you have only a few virtualized servers and services, you can perfectly well keep running them by hand and it will probably be easier than learning Chef, Puppet, or CFEngine and then setting up an install.
(If you're future-proofing your career you want to learn Chef or Puppet anyways, so go ahead and use them even in a small environment.)
There are two things that I think will change that, and Phil points to one of them. Heroku is not just a virtualization provider; they are what I'll call a deployment provider, where if you write your application to their API you can simply push it to them without having to configure servers directly. We've seen deployment providers before (eg Google App Engine), but what distinguishes Heroku is how unconstrained and garden variety your API choices are. You don't need to write to special APIs to build a Heroku-ready application; in many cases, if you build an application in a sensible way it's automatically Heroku-ready. This is very enticing to developers because (among other things) it avoids lockin; if Heroku sucks for you, you can easily take your application elsewhere.
(This has historically not been true of other deployment providers, which makes writing things to, say, the Google AppEngine API a very big decision that you have to commit to very early on.)
Deployment providers like Heroku remove traditional system administration entirely. There's no systems or services to configure, and the developers are deeply involved in deployment because a non-developer can't really take a random application and deploy it for the developers. If there is an operations group, it's one that worries about higher level issues such as production environment performance and how to control the movement of code from development to production.
The other thing is general work to reduce the amount of knowledge you need to set up a Chef or Puppet-based environment with certain canned configurations. Right now my impression is that we're still at the stage where someone with experience has to write the initial recipe to configure all N of your servers correctly, and you might as well call that person a sysadmin (ie, they understand Apache config files, package installation on Ubuntu, etc). However it's quite possible that this is going to change over time to the point where we'll see programs shipped with Chef or Puppet recipes to install them in standard setups. At that point you won't need any special knowledge to go from, say, writing a Django-based application to installing it on the virtualization environment of your choice. This really will be the end of developers needing conventional sysadmins in order to get stuff done.
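As a toy illustration of what such a shipped recipe might look like, here is a minimal Puppet-style sketch (the package, file, and module names are assumptions, and a real recipe would parameterize far more):
# Install Apache, drop in a site config, and keep the service running --
# exactly the boilerplate a canned recipe could hide from the user.
package { 'apache2':
  ensure => installed,
}
file { '/etc/apache2/sites-enabled/myapp.conf':
  ensure  => file,
  source  => 'puppet:///modules/myapp/myapp.conf', # hypothetical module file
  require => Package['apache2'],
  notify  => Service['apache2'],
}
service { 'apache2':
  ensure => running,
  enable => true,
}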
The general issue of the amount of hardware in a small business (and virtualizing the hardware) ties into a larger question of how much hardware the business of the future is going to need or want, but that's a different entry. I will just observe that the amount of servers that you need for a given amount of functionality has been steadily shrinking for years.
Sidebar: what virtualization does change now
I think that plain virtualization does mark a sea change today in one way: it moves sysadmins away from a model of upgrading OSes to a model of recreating their customizations on top of a new version of the OS. Possibly it moves away from upgrading software versions in general to 'build new install with new software versions from scratch, then configure'.
This is partly because the common virtualization model is 'provide base OS version X image, you customize from there' and partly because most virtualization makes it easy to build new server instances. It's much easier to start a new Ubuntu 12.04 image than it is to find a spare server to use as your 12.04 version of whatever.
(Note that virtualization may not make it any easier to replace your Ubuntu 10.04 server with a new 12.04 server; there are a host of low level practical issues that you can still run into unless you already have a sophisticated management environment built up.)
I don't think that this is a huge change for system administration, partly because this is pretty much how we've been doing things here for years. We basically never upgrade servers in place; we always build new servers from scratch. Among other things, it's much cleaner and more reproducible that way.
Comments on this page:Written on 05 February 2012.
|
OPCFW_CODE
|
For those who are having the same problem with this scanner, here's what I did to make it work with scanimage. You will need to install the canon_dr backend with a little adjustment.
Download the backend here: http://www.sane-project.org/snapshots/
Unzip it and go to the 'backends' directory; in there, find 'canon_dr.c'. Open this file and search for the function 'init_model()'. In this function you will see a bunch of the DR models in an if() / else if() chain.
else if (strstr (s->model_name,"DR-2510")), etc. The DR-3010 is not among them, so I copied the whole scope of the DR-2510 block and changed 2510 to 3010 (the code is below; you can just copy that).
To make it work you will need the s->invert_tly = 1 flag; this is what the developer told me, and it worked. Note that this flag is commented out for the 2510. Save the file with the following block added:
else if (strstr (s->model_name,"DR-3010"))
s->rgb_format = 1;
s->always_op = 0;
s->unknown_byte2 = 0x80;
s->fixed_width = 1;
s->valid_x = 8.5 * 1200;
s->gray_interlace[SIDE_FRONT] = GRAY_INTERLACE_2510;
s->gray_interlace[SIDE_BACK] = GRAY_INTERLACE_2510;
s->color_interlace[SIDE_FRONT] = COLOR_INTERLACE_2510;
s->color_interlace[SIDE_BACK] = COLOR_INTERLACE_2510;
s->duplex_interlace = DUPLEX_INTERLACE_2510;
s->duplex_offset = 400;
s->need_ccal = 1;
s->need_fcal = 1;
s->sw_lut = 1;
s->invert_tly = 1;
/*only in Y direction, so we trash them in X*/
Now go to the top directory of the backend in the terminal and type:
"BACKENDS=canon_dr ./configure --prefix=/usr --sysconfdir=/etc –localstatedir=/var"
(this may take a while)
"sudo make install"
The backend is now installed. Check if (in the terminal) "scanimage -L" finds the scanner, else turn it off and back on. If the scanner is found try putting a paper in it and type "scanimage -T" to test the scanner.
If it still doesn't work you might want to try changing the permissions on the usb port. if "scanimage -L" shows "'device `canon_dr:libusb:005:004' is a CANON DR-3010C scanner" then type:
sudo chmod a+w /dev/bus/usb/005/004
The solution ain't perfect yet; not all the flags copied from the 2510C code are needed for the 3010C (I only know s->invert_tly = 1 is needed, as it didn't scan without it), but I haven't had time to fine-tune and test it properly.
Nevertheless, I hope this is helpful for those having the same problem, and that I'm not forgetting anything important.
If it doesn't work, let me know and I'll try my best to help.
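A more permanent alternative to the chmod (which resets every time the scanner is re-plugged) would be a udev rule; here is a sketch, where 04a9 is Canon's usual USB vendor ID and the product ID is a placeholder you should replace with the output of lsusb:
# /etc/udev/rules.d/60-canon-dr3010.rules -- verify both IDs with lsusb
ATTRS{idVendor}=="04a9", ATTRS{idProduct}=="xxxx", MODE="0664", GROUP="scanner"
Then add your user to the scanner group and re-plug the scanner.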
|
OPCFW_CODE
|
Content Type vs Webform vs Form API vs Custom Form vs?
I'm currently working on a new Drupal 7 site, which will be used as a data collection and reporting site. After much research / trial and error / etc, I've decided to bring this question to this forum, and am sincerely hoping that it won't be thrown out as being "too generic."
I've created a Content Type, which consists of roughly 25 fields, and am wondering what the best approach for having users populate that content would be? Out of the box (even using Manage Display) the form itself still resembles nothing that I'd actually want to present to the users. And, it seems a bit insane to alter the form using one of the hook_form_alter flavors, because the code involved seems way overkill just to hack the form into looking/acting like I intend it to. I've also spent a bit of time with the Webform module, which is ok, but there's still quite a bit of customizing that needs to be done in order to have the form look/act like I intend it to, and there seem to be mixed reviews on that module in general (no offense to anyone!). Same goes for using the Form API... lots and lots of code just to come up with a basic form to collect data.
My latest approach is to simply use the markup type to create the form itself (basically, just straight HTML/CSS/JavaScript), and then populate/update the Content Type behind the scenes. From a developer's point of view, this approach is very, very straightforward - no rocket science - easy to maintain, but I'd certainly like to hear from those of you that know your way around Drupal. Am I overlooking something?
Thanks in advance!
cmmsites
I'm using Drupal for exactly the same purpose. Basically, what I'm doing is using the Field Group module to wrap particular items together in DIV tags so that I can use CSS styles to properly layout my forms.
Drupal provides a very structured set of container elements with appropriately named classes. Accordingly, leveraging the power of CSS to your advantage should be the first thought that goes through your mind when it comes to manipulating layouts.
I mucked with the Webform module, and although it is quite capable, I've found that just sticking with Drupal's standard content type system for collecting information has suited me best. Utilizing Views and the Rules module, I'm able to generate reporting pages that show me the critical information I want with less modular bloat to my Drupal installation.
So definitely read up on Views, Field Group, and the Rules module to see if the functionality as described will fit your bill.
Lester: I'll take a look at Field Group. Using Views seems to come into play AFTER the data has already been collected...I'm just trying to determine the best / wisest / safest method of getting the data in there in the first place :) Thanks a ton for the feedback...it'd be really sad if there was no one out there to turn to for advice!
Yeah Stack is great, I practically live and breathe these sites. Seriously though, I've had no problems with using Field Group + CSS to get everything laid out the way I need it to be.
If you want to move form elements only, you should take a look at Display Suite.
You can place the elements as you want without writing any code.
I know this question is old, but here is what I think. How you want to use the data is an important determination for how you build the form.
Webform is made for surveys and does it well. It is easy to list your forms and enable/disable them, or change as needed. Each webform has a link to its submissions.
Using a content type (node/entity/whatever) has different permission implications, and allows more flexibility in actions later. You can use the 5-star module or Flag module on a content type, but not on a webform submission.
Making a custom form with the form api is good if you want to process the information yourself and not store it in the database for long term retrieval. I would not go through the trouble of defining a separate form, validation and submission handlers to just do what both node and webform already do. But it sounds like you have already invested the time in doing this. That just goes to prove there is always more than one way to skin a drupal.
Both webform and a custom content type can use the Markup field, https://www.drupal.org/project/markup
Both can trigger rules to fire on submission for minor automation tasks (or large automation, depending on your developer's skill)
Both have views integration for making a statistics or summary page, if desired. (https://www.drupal.org/project/webform_mysql_views)
So if the question really is only "Which method is easiest for custom form display?" then I would say the answer is the custom content type. Field groups, field collections (rarely needed, irritating to handle, but so useful when they are what you need), and the above mentioned markup field will do most things for you. I have used field groups and js to convert a set of radio buttons into tabs when tabs within vertical tabs were not possible with contributed modules. There was one project I built where the form had 80+ fields with heavy repetition. I made this module for use there because I only needed 6 field styles and I was tired of using field names in the form-alters and having copy-replace errors. https://www.drupal.org/sandbox/developerweeks/1964036
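For comparison, a minimal Drupal 7 Form API sketch (the module name, field, and content type here are hypothetical; with 25 fields this is where the verbosity complained about in the question starts to bite):
<?php
// mymodule.module -- hypothetical module name, rendered via drupal_get_form().
function mymodule_collect_form($form, &$form_state) {
  $form['full_name'] = array(
    '#type' => 'textfield',
    '#title' => t('Full name'),
    '#required' => TRUE,
  );
  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Save'),
  );
  return $form;
}

function mymodule_collect_form_submit($form, &$form_state) {
  // Populate the content type behind the scenes, as the question describes.
  $node = new stdClass();
  $node->type = 'my_content_type'; // hypothetical machine name
  node_object_prepare($node);
  $node->title = $form_state['values']['full_name'];
  node_save($node);
}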
I worked with webforms extensively over the past couple of months. I am using Webform module, for some things it's really an overkill, but it really depends what is your form, is it complicated with plenty of conditional fields, validation rules etc.
Try having a look into Entity Forms drupal entityforms
And some interesting read here http://stidwill.com/content/web-forms-dead-long-live-entity-forms
If you don't need nodes, I'd try the Form API and entities. I think it's Drupal's future.
It's true that the Form API is a hack to customize, but it's easier with the Form API than with a node form.
Also, if you use the Form API you will work the way Drupal intends; if not, it will be hard for future developers to work with your code.
Oskar
Oskar: I'm actually populating the node when the user submits the form. I'll work some more with the Form API as you mention... perhaps I didn't give a fair enough evaluation. Thanks for the feedback!
|
STACK_EXCHANGE
|
macOS vbcc vscode
Hi,
found your youtube series, appreciate it! and would like to test it on a mac.
Could not really make it work last year when I first tried with VBCC and KS1.3 + libs. Something of my setup was wrong. While I could start up "helloWorld" and even some intuition-gui programs, linking with gfx, audio libs never worked for me.
Could you please upload a setup for mac (in case you have one) which works with VBCC and VSCODE.
Thank you
Hi thank you for your input ! Could you please provide me with a little more information about your setup ? I actually currently work on a Macbook Air m1 most of the time and have used it to write the examples for my past 3 or 4 videos.
However, you need to make sure that your environment variables are all set correctly in order for the Makefile to work.
The most important environment variable is the path to your NDK include files.
Mine are set to
export NDK_INC=~/Development/amigadev/NDK_3.9/Include/include_h
export NDK_ASMINC=~/Development/amigadev/NDK_3.9/Include/include_i
On my Mac I have put these lines into my .zshrc, because zsh is Apple's default terminal shell.
The NDK can be in a different location on your system; it depends on where you unpacked it. I regularly compile on Linux, macOS and Windows 10 (in the Ubuntu console), and it works everywhere I use it, by opening a terminal window, going to the source directory and typing "make".
I don't have much experience using VS Code, I only use it to display source code in my videos, but I can see that it has an extension for using Makefiles.
Hi, I have a macbook air m1 as well :)
But last year I tested it on my old macbook air and did download lots of demos and tutorials.
Specifically, I had trouble loading includes and libs and linking them. Most of the samples resulted in a crash on startup with a generic error, but some worked. I will try to set it up again. I remember having issues with the Makefile and linker using NDK_3.9. Some kick13 libs existed in v3.9 and some did not. I targeted kick13 (OCS) but could not find all the libs/files that were needed for the samples in the NDK_3.9, so some C source files (kick13 demos) would not work with that version. So I was clueless and thought I had chosen the wrong libs (NDK) or had misconfigured the Makefile to launch a kick13 version.
Could you maybe provide a set of all the lib files (NDK) that you are using for your samples? Or a link to the NDK you have been using so that I do not search in vain for bugs in a corrupt NDK which result in crashes on startup.
Hi,
the official Amiga NDK is still available at https://www.haage-partner.de/download/AmigaOS/NDK39.lha
It's the same as in the description of my video "Cross Development for the Amiga with VBCC"
Also, please make sure that you have set your VBCC environment variable to the location where your VBCC is installed, the compiler needs that to locate the targets and standard include files.
So for example my VBCC variable is set to ~/local/vbcc-0.9f
And the contents should look about like this; please make sure that the targets
directory structure is correct, otherwise you might run into linker issues.
weiju@monokuma ~/local/vbcc-0.9f $ ls
bin config targets
weiju@monokuma ~/local/vbcc-0.9f $ ls targets/
m68k-amigaos m68k-kick13
weiju@monokuma ~/local/vbcc-0.9f $ ls targets/m68k-kick13/
include lib
weiju@monokuma ~/local/vbcc-0.9f $
Hope that helps !
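Putting the pieces together, the relevant part of a ~/.zshrc for this setup might look like the following (the paths are the examples from this thread, and adding the bin directory to PATH is my assumption so the compiler frontend can be found; adjust everything to your install):
# Amiga cross-development environment -- adjust paths to your install
export NDK_INC=~/Development/amigadev/NDK_3.9/Include/include_h
export NDK_ASMINC=~/Development/amigadev/NDK_3.9/Include/include_i
export VBCC=~/local/vbcc-0.9f
export PATH=$VBCC/bin:$PATH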
|
GITHUB_ARCHIVE
|
Simple exercise about conservation of momentum
A block of wood of mass $M$ is dropped, with no initial speed, from a height $h$ with respect to the ground. When it is at altitude $\frac{h}{2}$ it is hit by a bullet of mass m that travels horizontally with speed v. After the impact the bullet remains embedded in the wood. Determine the coordinate of the ground impact point of the block + projectile system. Consider the wood block and the bullet as material points. Perform the calculations for: $M = 1 kg$, $h = 10 m$, $m = 10 g$, $v = 800 \frac{m}{s}$
My problem:
From conservation of momentum along the x axis I can write that formula(1): $(m+M)V_x=mv$ so $V_x=(\frac{m}{m+M})v$
There isn't conservation of momentum along y axis so I cannot write the following formula(2): $(m+M)V_y=MV_{\frac{h}{2}}$ where $V_{\frac{h}{2}}$ is velocity of mass M calculated from height $\frac{h}{2}$. $V_{\frac{h}{2}}$ from conservation of energy is simple to calculate, so $V_{\frac{h}{2}}=\sqrt{gh}$
The solution of this exercise write formula $(1)$ and formula $(2)$. I'm not able to understand why I can write formula (2).
Thanks!
Why do you think there is no momentum conservation in the vertical? The question has given no indication of this.
Try writing it as $\lim_{\epsilon \to 0} (m+M)v_{h/2+\epsilon} = m v_{h/2-\epsilon}$
@PaulChilds Because I know that there is momentum conservation when the sum of external forces is 0. On the vertical we have force of gravity.
The change in the vertical momentum due to the force of gravity over a time interval $\Delta t$ is given by $\Delta \vec p = \vec f_g\Delta t$. If the impact of the bullet is instantaneous, you can write a conservation of momentum equation between before and after, assuming $\Delta t=0$ between before and after impact.
Mass is a scalar quantity and velocity is a vector quantity. Due to this, momentum is also a vector quantity. This means that momentum can be resolved into components, and conservation of momentum applies to each component direction.
Given the fact that conservation of momentum applies to both the x and y directions, it is seen that you need to use conservation of momentum only in the x direction regarding the bullet, because that bullet is moving horizontally when it strikes the block. Since there is no vertical component of velocity for the bullet, the vertical velocity that is calculated based only on straight-line kinematics or conservation of energy for the vertical direction, is the correct physics description for velocity in that direction.
So I didn't understand, does formula (2) make sense or not?
Since in my case, along the y-axis, there is the force of gravity, along this axis the momentum is not conserved right?
Formula 2 makes sense. There is no conservation of momentum in the vertical direction because there is no vertical component of momentum involved with the bullet collision.
You don't use conservation of energy. The fact that the bullet has embedded in the block means you have an unknown energy change due to deformation. Momentum conservation in the vertical direction is used as @stochastic has rightly pointed out.
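For completeness, a worked sketch with the given numbers (taking $g \approx 9.8 \frac{m}{s^2}$, with $x$ measured horizontally from the point of impact):
From formula (1): $V_x=\frac{mv}{m+M}=\frac{0.01 \cdot 800}{1.01} \approx 7.92 \frac{m}{s}$.
From formula (2): $V_y=\frac{M\sqrt{gh}}{m+M}=\frac{1 \cdot \sqrt{9.8 \cdot 10}}{1.01} \approx 9.80 \frac{m}{s}$ (downward).
For the remaining fall from $\frac{h}{2}=5 m$, solving $\frac{h}{2}=V_y t+\frac{1}{2}g t^2$ gives $t \approx 0.42 s$, so the impact point is at $x=V_x t \approx 3.3 m$.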
|
STACK_EXCHANGE
|
M: Object injection vulnerability enables remote code execution in WordPress 3.6 - mathias
http://vagosec.org/2013/09/wordpress-php-object-injection/index.html
R: IgorPartola
An old (2.x) version of WordPress I worked on included an eval() statement
that amounted to basically just doing variable assignment. I am sure there was
some reason for this (probably not a good one), but it turned me off to the
WordPress core. The fact that every WP release is quickly followed up with a
patch for some critical remote code execution vulnerability tells me that
there is something systematically wrong with its handling of user input and
security.
Because of that, I moved off WordPress for personal blogging and onto Pelican
[1]. You can't compromise static content.
[1] [http://docs.getpelican.com/en/3.2/](http://docs.getpelican.com/en/3.2/)
R: actionscripted
Pelican isn't exactly client-friendly and has far fewer features than
WordPress. Pelican and the rest of the static-site generators might be great
for developers or tech-savvy folks, but you'd be hard-pressed to sell the
system to the average web client.
R: IgorPartola
Exactly, but it's perfect for me. I guess having a WYSIWYG editor and a web UI
would make it more user friendly.
I also like that I can have my content under version control.
R: juddlyon
Statamic tries to blend the best of both worlds (client-friendly UI + no
database): [http://statamic.com/](http://statamic.com/)
R: cryptbe
Cool research. I like how you "connect-the-dots" from the benign-looking
MySQL's behaviour to the bad code in Wordpress. This reminds me of
[http://www.suspekt.org/2008/08/18/mysql-and-sql-column-
trunc...](http://www.suspekt.org/2008/08/18/mysql-and-sql-column-truncation-
vulnerabilities/).
I'm surprised that the fix in Wordpress wasn't explicitly marking fields that
need to be serialized/unserialized, instead of second-guessing based on the
broken promise by MySQL.
> MySQL replaces characters it doesn't recognize (for the given character
> set), with a placeholder. MySQL will sometimes replace byte sequences with
> "?" or "�" (U+FFFD). Such replacements would not be harmful.
This is so wrong. A database must never change any data that it's asked to
stored. Wordpress, and other applications, always make that assumption, and
when it isn't true anymore all hell breaks loose.
PS: it blows my mind that it looks like strpos in PHP could return either
boolean or integer [1].
[1] [http://core.trac.wordpress.org/browser/tags/3.6.1/wp-
include...](http://core.trac.wordpress.org/browser/tags/3.6.1/wp-
includes/functions.php#L0)
R: hamburglar
I'm always sort of flabbergasted when I see PHP programmers doing this
"maybe_xyz" stuff. I recall there's a PHP api for escaping stuff that has
weird options for either allowing "double escaping" or ignoring successive
invocations. It screams amateur hour to say "uh, i have a string, and I don't
know if it's escaped yet, so I'll just call this API that escapes it because
it magically avoids 'double escaping' for me." There's no such thing as
"double escaping" -- it's just "escaping". The fact that you might be
escaping something that appears to be an already-escaped string is irrelevant.
If you are dealing with user input strings and you don't know for sure whether
a string is escaped or not (or how many times), you are probably writing a
security hole somewhere.
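A tiny Python sketch of why content inspection can never work: after one round of HTML escaping, a legitimately typed string and an already-escaped string can be byte-for-byte identical, so no "escape_once"-style heuristic can tell them apart.
import html

raw = "AT&T"                  # user typed a literal ampersand
once = html.escape(raw)       # "AT&amp;T"
typed_literally = "AT&amp;T"  # another user typed exactly this text
# The two states collide on the same bytes, so "is it escaped yet?"
# is unanswerable by looking at the string itself.
assert once == typed_literally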
R: fooyc
While I totally agree, this is in no way specific to PHP.
Ruby on Rails has such ugliness too, a view helper called "escape_once":
[http://api.rubyonrails.org/classes/ActionView/Helpers/TagHel...](http://api.rubyonrails.org/classes/ActionView/Helpers/TagHelper.html#method-
i-escape_once)
What's crazy is that I can't even find an "escape" helper. Oh, it's called
html_escape. Oh, and there is an html_escape_once too!
Python Django too:
[https://docs.djangoproject.com/en/dev/ref/utils/#django.util...](https://docs.djangoproject.com/en/dev/ref/utils/#django.utils.html.conditional_escape)
R: hamburglar
Fair enough. It's funny how angry I get when I think of someone needing an
"escape_once" function or "is_serialized". I think this discussion might have
to become part of my interview process, because if someone doesn't understand
the absolute undeniable terribleness of trying to determine if a string has
been escaped or serialized by inspecting its contents, then I really don't
want them in my code.
R: satyap
WordPress is the PHP of web frameworks....
I'll be in the corner.
|
HACKER_NEWS
|
According to Apple Enterprise, if you pull the VPP spreadsheet again it should give you credit for the iTunes download. You then have to go back into Configurator and load the (new) spreadsheet... supposedly it will have given you a new code. They said this was a recent fix because you shouldn't have to buy an extra.
I think it is an ANNOYING fix, but I'll give them credit for trying. Let us know if it works. I honestly haven't had a chance to give it a go.
Update: Was able to test this, and pulling the spreadsheet from the VPP account DID work... however, it did not actually give me a new code, it just made it accept the old code... like it validates that the code was used to redeem the app in iTunes for the same Apple ID tied to Configurator for that app.
So... you do not have to buy an extra... but you do have to jump through a few hoops.
This is happening with every app that we purchase through the VPP program, and the solution is BEYOND ANNOYING: I install the apps but don't manage the account and rigamarole that is required to purchase them. I have to send a note to the person that did the purchase asking them to redownload the spreadsheet. No matter how carefully I word my request, I always get back a copy of the original spreadsheet and have to send another email to AGAIN explain that it has to be a new download from the VPP portal. I know the purchasing dept just can't believe that they really have to go back and do the whole process over again just to reclaim the one license that won't apply. I see over 200 views of this thread, so this is a big problem and Apple needs to fix it!
Agreed. Snafu by Apple. But this isn't the place to contact Apple; it's user to user.
"Use the form below to send us your comments. We read all feedback carefully, but please note that we cannot respond to the comments you submit."
By 'pull the spreadsheet again' do you mean re-download it and add it to Configurator? I've tried this but I'm still getting problems.
I'm very much doing the same thing as CCawthon for my school at the moment.
As a test I've purchased two licenses for a VPP app and have tried to install them on 2 iPads. I managed to get it to install on 1 iPad, and one failed as expected.
I removed the app, redownloaded the spreadsheet and reimported it into Configurator. Now I can't install the app at all, just getting the exclamation mark with the error "Can't install app." The code importer window is telling me that 1 code has been redeemed (from when I downloaded it from iTunes, I'm assuming) and I can't get it to change from that state.
Any ideas, or am I doing something wrong?
|
OPCFW_CODE
|
[Android 13] Using OpenSettings, app restarts when user returns to the app
Bug summary
We are using a button in the app that lets the users open the app settings to control the push notification permissions.
For this purpose, we are using the function openSettings from this library.
Using Android 13, when the user opens the settings, switches off the push notification permissions, and returns to the app, the app restarts. But when the user switches the push notification permission on and returns to the app, the app doesn't restart and works normally (like backgrounding the app and making it active again, no restart).
I researched it but couldn't find any solutions. Could you help us, please?
Library version
3.6.1
Environment info
This issue has happened in a physical device with Android 13
S22 Ultra, One UI 5.0
Steps to reproduce
Open the app on an Android 13 device
Press the Open settings button
Settings of the app opens, then switch off all notifications permissions
Press back button to return to the app
App will restart
Reproducible sample code
import {useCallback} from 'react';
import {openSettings} from 'react-native-permissions';

const handlePermissionsChange = useCallback(async () => {
  await openSettings();
}, []);

<Button label="Open settings" onPress={handlePermissionsChange} />
Hi @mahdieh-dev 👋
I just gave it a try using the example app and my Pixel 4a (Android 13) and I cannot reproduce this.
Are you using a real device? Can you try the example app to confirm that's not related to your app code?
Hi @zoontek!
It was a real device, I couldn't also reproduce it on other Android phones and emulators.
It's a good idea! I will make a build from the example app and give it to my colleague to test it and will let you know about the results.
Thanks!
I've also been experiencing this issue. I've tested on Android 13 and also Android 11, but my issue is not with notifications but with location settings. When the user leaves the app and changes a location setting, my app reloads.
Hi @mahdieh-dev
There's a possibility that the phone you tested on has low RAM and ran out of swap memory. A few Android phones don't have swap memory support, and a few have very limited space for it. So when the system runs out of memory, it kills non-active applications.
@Shoukat488 Indeed, that might happen.
I can confirm this on iOS AND Android for the CAMERA permission. Blocked notification requests work just fine, but camera permissions seem to be an entirely different story.
This had been working perfectly before, so this might be due to some version update of RNPermissions or RN itself (?) – I will try to investigate.
Currently on 3.8.0, RN 0.71.6.
I'm having the same issue.
Samsung A52S / Android 13
But it is working fine on my iPhone 13 / iOS 16.3.1.
Has anybody found a solution, or maybe a workaround, to get more clarity on what's going on?
Cheers.
Experiencing the issue on all iPhones, on iOS versions as low as 15.6.
Can confirm this only for the Camera settings on two physical devices. Changing notifications does not cause a restart.
RN: 0.71.8
iOS: iPhone 12 (iOS 16.0.3)
Android: Galaxy A13 (Android 12)
Hello guys, any update here?
I'm also experiencing this issue, on both Android and iOS.
👋 Similar problems on a Samsung A54 with One UI 5.1... The camera permission causes exactly the same issue as above. Has anyone already dug deeper and discovered something here?
I don't think this is a bug, just something new in Android. A lot of apps don't recheck permissions after startup, so forcing the app to restart is a solution. Apple does that on iOS too.
I am experiencing this with FINE location: even though it's granted via device settings, the app takes some time to reflect the permission at runtime. Otherwise, I have to kill the app.
@zoontek - It would be really great if you could share some documentation for this new behaviour in Android.
@AmalMenachery That's undocumented, like a lot of Android changes. But feel free to open an issue on their tracker.
I think the behaviour described in the original issue is documented by Android docs here: https://developer.android.com/training/permissions/requesting#app_process_terminates_when_permission_revoked
As with any permission, if the user revokes your app's one-time permission, your app's process terminates.
So when revoking permissions, the app will terminate, leading to a fresh start.
I haven't looked for matching iOS docs, but I suspect they do the same thing. Accordingly, I don't think there's a bug here, and there's nothing this library could address.
|
GITHUB_ARCHIVE
|
OK this appears to be due to a very confusing set of bugs that are currently deployed as the PowerApps offering:
My wholehearted recommendation is to remove the Production/Trial dropdown selection altogether and simply create the environment based on the user's current paying status. If they are paying the $40/mo, make it an actual Production environment; otherwise, make it a Trial environment. That way you are still counting it against the two environments allowed, and the user never has to know that particular detail at all.
Of course, this begs the question: If I create a Trial environment and put 30 days' worth of effort into building out its schema and data... is there a way to easily "upgrade" it to an actual Production environment once I have paid into the actual Plan 2? Or (even better) does it automatically upgrade into a Production environment once I fork over my hard-earned cash? 🙂
I've gone ahead and created a new idea here to address this problem:
Great, @ManasMSFT, thanks for letting us know. To be sure, though, there are two problems: one is the confusion between Trial and Production environments. The other is that environments do not seem to "upgrade" when you convert to a paying customer, so you essentially have to suffer a migration path of some sort (assuming there is one).
It would be great to know if you are addressing both or perhaps a little more information on what is being done to remedy this issue. Thanks again!
So if I have a Dynamics 365 licence which costs over $100, I can't create an additional Production environment, but if I shelve that licence and only pay $40, I can. Something seems wrong in that philosophy!
Just to clarify:
Are you hitting an issue with provisioning a 'Production' environment when you DO have a Plan 2 license?
Hi @jo, thank you for your reply and for taking the time to clarify. To answer your question: no, I do not have a (non-trial) PowerApps Plan 2 license, yet I am shown the user interface and option to create a Production environment as if I did. Very confusing experience.
Got it, and I 100% agree. Hiding the Production option when the user does not have the correct entitlement was a miss, and we are tracking a fix for this soon.
|
OPCFW_CODE
|
HTTP response header to indicate request was forwarded to another server by reverse proxy
Consider a system of services behind a reverse proxy, where a request may either be conclusively processed by the proxy or ultimately handled by any of the services behind it.
Consideration has been given to the X-Forwarded-For, Forwarded, and X-Forwarded-Host fields. Though they appear naturally fitting for the request phase, would it be confusing to use them in the response phase?
What header field is conventionally used to declare the host that primarily provided an HTTP response?
-- The motivation for including information on proxied servers is to ease debugging and application support process.
-- I don't believe an attacker benefits specially from learning that a service makes use of a reverse proxy. Proxied servers can be identified with aliases which cannot lead to direct access to them. In the example, the proxied server has the alias 03.
A different implementation could use an alias as follows: X-Backend-Server: mickymouse. That's a pointless piece of information for anyone but its author.
Can you provide some details on why this behavior would be desirable? As Dan Wilson mentions in his answer, this is something that while not directly insecure, provides details to the clients that they shouldn't need.
It would ease debugging and application support process.
In that case, you probably don't need to follow any standard. I would suggest looking into log aggregation, though. It will help you with this and a lot more without exposing details about your architecture in headers.
I think what you want is the Via header. From the RFC:
The "Via" header field indicates the presence of intermediate
protocols and recipients between the user agent and the server (on
requests) or between the origin server and the client (on responses),
similar to the "Received" header field in email
Regarding security concerns, you should use pseudonyms for internal servers. From the RFC:
An intermediary used as a portal through a network firewall SHOULD
NOT forward the names and ports of hosts within the firewall region
unless it is explicitly enabled to do so. If not enabled, such an
intermediary SHOULD replace each received-by host of any host behind
the firewall by an appropriate pseudonym for that host.
Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing
What header field is conventionally used to declare the host that primarily provided an HTTP response?
None
A reverse proxy serves resources on behalf of some other server. Revealing any details about the origin server is almost universally unwanted behavior, and could be a potential security risk.
It would be confusing to use X-Forwarded-For, Forwarded, and X-Forwarded-Host headers since these are used with forward proxies.
If you want to identify the origin server handling a request for debugging purposes then you are free to use whatever header you like. A better option might be to set a cookie, similar to a session cookie inserted by load balancers, that you can disable when no longer needed.
Finding an authoritative source is difficult since there is no official standard for this, but I offer the following.
By intercepting requests headed for your backend servers, a reverse proxy server protects their identities and acts as an additional defense against security attacks.
How does a reverse proxy server improve your security?
What Is a Reverse Proxy Server?
Kindly see my updated question.
Reverse proxies are common in application architecture these days. Any attacker worth her/his salt should know this already. And any suggestion of using a reverse proxy would not necessarily make an attack easier.
@IgweKalu It won't 'necessarily' make an attack easier but because of the asymmetry of security (they need to find a flaw or two - you fix them all) it's good practice to aggressively cut out any information or features that aren't strictly necessary for normal use. Returning stack traces in error responses, for example, can be useful for debugging and support but doing so is well known to increase the risk of successful compromise.
@IgweKalu Fingerprinting servers is a useful tool in penetrating a network. When I see X-Backend-Server: 03 I assume 1) that you are using a proxy and 2) that you have at least three servers behind your proxy. That may not help enable an attack by itself, but it's one more piece of the puzzle as I map out your network. Information leakage is a real threat.
A different implementation could use an alias as follows: X-Backend-Server: mickymouse
That said, the benefits are worth more than the perceived risk.
You can return a signed uuid per request per server. Each server has its own secret (e.g. server name). This way you can determine which server sent the response without revealing the server to the client.
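For example, here is a minimal Python sketch of that idea (the header name and secrets are hypothetical placeholders; one distinct secret per backend is assumed):
import hmac, hashlib, uuid

SERVER_SECRET = b"backend-03-secret"   # hypothetical per-server secret

def tag_response(headers):
    # Attach an opaque, signed request id: operators who know the
    # per-server secrets can tell which backend answered; clients can't.
    rid = uuid.uuid4().hex
    sig = hmac.new(SERVER_SECRET, rid.encode(), hashlib.sha256).hexdigest()[:16]
    headers["X-Request-Id"] = rid + "." + sig
    return headers

def served_by(header_value, secrets_by_server):
    # Return the server whose secret validates the signature, else None.
    rid, _, sig = header_value.partition(".")
    for server, secret in secrets_by_server.items():
        expected = hmac.new(secret, rid.encode(), hashlib.sha256).hexdigest()[:16]
        if hmac.compare_digest(expected, sig):
            return server
    return None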
|
STACK_EXCHANGE
|
[SciPy-User] Help optimizing an algorithm
Wed Jan 30 23:10:50 CST 2013
> Thanks, but considering that in practice we can have values up to 2^16 from this camera (the camera's outputs are unsigned 16-bit integers), this seems likely to create an approximately 2^16 x 2^9 x 2^9 array. With two bytes per element, that'd mean roughly 34 GB (32 GiB) of RAM for the lookup table. Of course it'd be much more feasible if the input range can be constrained to less than the full range of the data.
> Another thing I should note is that the nonlinearity in the camera response is fairly localized -- there's one region about 20 counts wide, and another about 500 counts wide, and the rest of the response is basically linear. So outside of those two regions, we can just linearly interpolate and be confident we're getting the right value.
Ah, yeah, if you're getting legitimately 16 bits of dynamic range then a per-pixel lookup table scales rather badly!
I think that your best bet would be to look at scipy.ndimage.map_coordinates(). It's like the fancy-indexing trick I proposed, in that you provide an array of indices that map into a look-up table, except now the indices can be float values and map_coordinates() interpolates the value of the look-up table between positions (interpolation is with spline fits of a specified order, including 1, a.k.a. linear).
So you'd need a 2^9 x 2^9 x n array for the table, where n is the number of "control points" for the (linear or spline) fit to your transfer functions. Then for each input image, you'd pass in a 3 x 2^9 x 2^9 array of coordinates to map into the table. In the simplest case, the coordinate array would look like this: [x_indices, y_indices, n * input.astype(float) / 2**16], where (x_indices, y_indices) = numpy.indices((2**9, 2**9)).
This would be for n control points evenly spaced across the 2^16 range, obviously. You could of course also set the positions of the control points for each transfer function differently for each pixel, which would necessitate a slightly more complex transformation of the original image's values into interpolation positions in the input array, but that's straightforward enough.
If this doesn't make clear sense, I'll send proper example code.
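In the meantime, here's the rough shape of the simplest case (a sketch only: the 64 control points and the random table are stand-ins for your measured per-pixel transfer functions):
import numpy as np
from scipy import ndimage

h = w = 512                        # 2**9 x 2**9 sensor
n = 64                             # control points per pixel (assumed)
table = np.random.rand(h, w, n)    # stand-in for measured transfer functions

image = np.random.randint(0, 2**16, (h, w)).astype(float)
y_idx, x_idx = np.indices((h, w)).astype(float)
z_idx = (n - 1) * image / 2**16    # map 0..65535 onto table positions

coords = np.stack([y_idx, x_idx, z_idx])                     # (3, 512, 512)
corrected = ndimage.map_coordinates(table, coords, order=1)  # linear interp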
And if that doesn't run fast enough, you can use OpenGL to do the same thing but with the linear interpolation running on the GPU, using GLSL and a 2D texture sampler (the input image) to build up the coordinates to send to a 3D texture sampler (the lookup table). This is actually a lot less scary than it sounds, and I can give some tips if needed.
PS. What sort of camera has a true 16-bit depth and needs a per-pixel linearity correction? Is it an EMCCD?
More information about the SciPy-User
|
OPCFW_CODE
|
using Xunit;
namespace Arbor.KVConfiguration.Tests.Unit.Urn
{
public class UrnEqualsTests
{
[Fact]
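// Per RFC 2141 the URN scheme ("urn") and the NID are case-insensitive,
// so URNs differing only in that casing must compare equal.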
public void UrnsWithDifferentSchemeOrNidCasingShouldBeEqual()
{
var urn1 = Primitives.Urn.Parse("urn:foo:a123,456");
var urn2 = Primitives.Urn.Parse("URN:foo:a123,456");
var urn3 = Primitives.Urn.Parse("urn:FOO:a123,456");
Assert.True(urn1.Equals(urn2));
Assert.True(urn2.Equals(urn3));
}
[Fact]
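// The NSS part is case-sensitive, so URNs whose NSS differs only in
// casing must not compare equal (scheme/NID casing remains irrelevant).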
public void UrnsWithDifferentNssCasingShouldNotBeEqual()
{
var urn1 = Primitives.Urn.Parse("urn:foo:a123,456");
var urn2 = Primitives.Urn.Parse("URN:foo:A123,456");
var urn3 = Primitives.Urn.Parse("urn:FOO:a123,456");
Assert.Equal(urn1, urn3);
Assert.NotEqual(urn1, urn2);
Assert.NotEqual(urn2, urn3);
}
}
}
|
STACK_EDU
|
I have an American Standard Acculink Thermostat that can be controlled through the NexiaHome service. I think it’s a z-wave compatible device (but I don’t have a z-wave hub at home), I wonder if anybody has any experience with this device, or with any Nexia devices?
I have American Standard (Trane) XL624 Thermostats with Nexia Zwave support.
See this post:
Regrettably, after numerous communications with Trane, they were unwilling to share the Z-wave device parameters/associations needed to get this to fully work on OH2.
They are however identified by the Zwave binding in OH2 and I am able to get Temp and Humidity readings from them.
All the other good stuff like making any settings or reading other info from them does not work.
Interesting, I did add the z-wave binding (I have no other z-wave devices in the house… so I didn’t know what to expect when I installed the binding)… I searched for ‘things’ thinking that since the thermostat was on my network, it might find it. But nothing showed up in Paper UI’s inbox. Forgive my absolute ignorance wrt z-wave, but I’m not even sure where to start. I don’t think I need the nexia hub (from what I can read, it says the hub is built in to the thermostat).
On another note, I can control the thermostat with the ‘nexiahome’ web site, so I’m thinking of reverse engineering the web API and writing my own interface.
I have the same issue. You can set up an automation in Nexia Home, connect Nexia Home to IFTTT, and then connect OH to IFTTT. Unfortunately Nexia Home only gives you a subset of controls (for example, you can't change home/away mode for some reason, just change temperature setpoints).
Fascinating, I had no idea you could connect it to IFTTT.
Tell you what. If I ever get around to writing a library (probably python) to interface with nexiahome, I’ll put it on GitHub and link to it here.
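For what it's worth, such a library would probably start as little more than a requests session that mimics the browser login. Everything below (the base URL, paths and form fields) is a hypothetical placeholder to be filled in from traffic observed while using the Nexia Home site:
import requests

BASE = "https://www.mynexia.com"   # assumed host; verify in the browser

session = requests.Session()
# Hypothetical login endpoint and form field names.
session.post(BASE + "/login",
             data={"email": "user@example.com", "password": "secret"})
# Hypothetical read endpoint; the real one comes from the network tab.
state = session.get(BASE + "/api/thermostat")
print(state.status_code, state.text[:200])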
One more thing
Towards the bottom of that help article it says:
If a Zwave thermostat is already enrolled into your Nexia account, then it can also be added to the remote.
- On the remote, press and hold “Setup” until the screen shows LGHT SETUP.
- On the remote, use the arrows to scroll to THERMOSTAT. Press OK.
- On the remote, press ADD.
- On the thermostat, enable the inclusion process.
- (On a Trane or American Standard 400/500 thermostat, to start inclusion, use the thermostat menu buttons to scroll to “Zwave Install”. Press Select, and then press “YES” when asked if you want to remove this thermostat. NOTE: This process will not remove the thermostat from Nexia; it will simply identify the thermostat to the remote.)
- When completed successfully, the remote can be used to adjust basic thermostat settings.
Unfortunately I can’t help you here. My Trane thermostat unfortunately isn’t zwave enabled. But it looks like you should be able to do an exclude as per above and then include it into OpenHAB assuming it’s in the Zwave database (http://www.cd-jackson.com/index.php/zwave/zwave-device-database/zwave-device-list) you should then be good to go.
Turns out my model of thermostat is not z-wave compatible!!! (argh) Acculink 950. I’m considering ordering a new one. Might be more fun to reverse engineer the API though
That’s been done already for SmartThings.
Does this help?
|
OPCFW_CODE
|
var Firebase = require("firebase");
var tokensRef = new Firebase('https://warp-attack-3.firebaseIO.com/board');
var tokRedRef = new Firebase('https://warp-attack-3.firebaseIO.com/red');
var tokBlueRef = new Firebase('https://warp-attack-3.firebaseIO.com/blue');
var turnRef = new Firebase('https://warp-attack-3.firebaseIO.com/turn');
var playersRef = new Firebase('https://warp-attack-3.firebaseIO.com/players');
module.exports = {
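// Mirror a changed cell of the shared board into the per-player views
// ("red" and "blue"): the opponent's rank is masked with 'k' (presumably
// "unknown") and coordinates are rotated 180 degrees for blue's view.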
updateColors : function(changedCell) {
// convert main board
// leave stars alone
if (!((changedCell.row==5 && (changedCell.col==3 || changedCell.col==7))
|| (changedCell.row==5 && (changedCell.col==4 || changedCell.col==8))
|| (changedCell.row==6 && (changedCell.col==3 || changedCell.col==7))
|| (changedCell.row==6 && (changedCell.col==4 || changedCell.col==8)))) {
// convert board info and update red
var queryRef = tokRedRef.orderByChild('row').startAt(changedCell.row).endAt(changedCell.row);
queryRef.once("value", function(snapshot) {
// find the cell in question and update it
for (var key in snapshot.val()) {
  if (snapshot.val()[key].col == changedCell.col) {
    var color, rank;
    if (changedCell.color == 'blue') {
      // mask the rank of opposing (blue) pieces in red's view
      color = 'blue';
      rank = 'k';
    } else {
      color = changedCell.color;
      rank = changedCell.rank;
    }
    tokRedRef.child(key).update({color: color, rank: rank});
  }
}
});
// translate board coordinates and update blue (the board is rotated
// 180 degrees for blue; the tray, rows >= 11, is left alone)
var blueRow, blueCol;
if (changedCell.row < 11) {
  blueRow = 11 - changedCell.row;
  blueCol = 11 - changedCell.col;
} else {
  blueRow = changedCell.row;
  blueCol = changedCell.col;
}
var queryRef = tokBlueRef.orderByChild('row').startAt(blueRow.toString()).endAt(blueRow.toString());
queryRef.once("value", function(snapshot) {
// find the matching cell on blue's board and update it
for (var key in snapshot.val()) {
  if (snapshot.val()[key].col == blueCol) {
    var color, rank;
    if (changedCell.color == 'red') {
      // mask the rank of opposing (red) pieces in blue's view
      color = 'red';
      rank = 'k';
    } else {
      color = changedCell.color;
      rank = changedCell.rank;
    }
tokBlueRef.child(key).update({color: color, rank: rank});
}
}
});
}
}
}
|
STACK_EDU
|
Among the Dolphins leaders at the NJSSL finals at New Providence a fortnight ago on Thursday, Aug. 2, were gold medal winners Jeff Sundberg, Kaitlin O'Brien, Patrick Giamario and Angela Otto. Sundberg, 18, garnered two golds by finishing first in a pair of 15-18 boys 50-meter races: the freestyle, with a time of :25.16, and the breaststroke, at :33.18.
O'Brien, 14, finished first in the 13-and-over girls 100-meter individual medley, with a 1:12.50 effort, while taking third place in the 13/14 50 breaststroke at :38.16.
Also garnering a gold and a bronze medal was Giamario, a 12-year-old who won the 11/12 boys 50 backstroke battle (:34.97) and finished third in the butterfly at :33.93, while Otto, 10, won the 9/10 girls 25-meter breaststroke race with a :20.23 performance.
Gold medal efforts were also turned in by two Dolphins relay teams.
O'Brien joined with fellow 14-year-old Betsy Roche, 16-year-old Jon Rowe and Sundberg to win the 13-18 mixed 200-meter freestyle race, with a total time of 1:51.06, and the foursome of 10-year-olds Courtney Trivino and Katie Garden and 12-year-olds Jen Krone and Regina Douglas won the 9 to 12 girls 100 free at 1:17.00.
Trivino also finished second in both the 9 to 10 year old 25 free (:15.09) and the 25 'fly, at :16.23, while Roche took second place in the 13 to 14 year old 50 free, with a :29.29 finish, and fourth in the back (:36.18).
Other Dolphins medalists include Harry Squires, Graham Squires and Eli Tukachinsky.
A 12-year-old, H. Squires finished second in both the 11 to 12 year old 50 free (:29.99) and the 12-and-under boys 100 IM at 1:14.26, with his younger brother, eight-year-old G. Squires, taking third in the eight-and-under 25 'fly at :20.52.
Another 12-year-old, Tukachinsky finished third in the 11/12 50 breaststroke, with a time of :41.04.
Additionally, the 9 to 12 year old boys 100 free relay quartet of Colin Ryan, 12, Tukachinsky and 10-year-olds Mike Dierkensen and Matt McNish took third place in its race with a total time of 1:05.70.
Two days before the NJSSL finals, Dolphins swimmers placed first through sixth 54 times at the league's Division IV championships at Cedar Grove.
Winning two titles each were Sundberg, who finished first in both the 15-18 50 free (:25.77) and the 50 breaststroke (:33.84); Giamario, who won both the 11/12 back, with a time of :35.50, and the 'fly, at :34.60; Trivino (the 9/10 free at :15.33 and 'fly at :16.33); and H. Squires, who took the 100 IM (1:16.40) and the 11/12 50 free (:30.23).
G. Squires won the 8-and-under 25 'fly (:21.69) and finished second in the free (:18.54), Roche won the 13/14 free (:29.59) and took fourth in the back (:35.88) , O'Brien took second in the 13/14 breaststroke (:38.59) and third in the 100 IM (1:15.03).
Otto won the 9/10 breaststroke title at :20.99, while Tukachinsky finished third in the 11/12 free (:33.63) and sixth in the breast (:42.35), Ryan was fifth in the 11/ 12 free (:33.22) and sixth in the 'fly (:48.87) and Rowe was fourth in both the 15 to 18 free (:26.88) and 'fly (:30.39).
Rich Palm, 12, finished fourth in both the 12-and-under 100 IM (1:24.11) and the 11/12 'fly (:38.09) and fifth in the breaststroke (:43.06).
Other first through sixth places at the divisional meet went to:
8-and-under: Britney Trivino (1st in free at :19.45); Louis Gebriele (2nd in 'fly at :21.91); Catherine Canavan (1st in back at :21.82 and 3rd in 'fly at :24.72); Tom Machi (5th in back at :26.88).
9-and-10: Dan Garden (2nd in free at :15.66 and 3rd in breast at :20.82); Jeff Misa (4th in back at :20.65); Emily Zales (2nd in breast at :21.66).
11-and-12: Emily D'Angelo (5th in back at :43.62); Eric Iskra (5th in back at :35.50 and 6th in breast at :46.65); Jen Krone (3rd in breast at :44.00).
13-and-14: Kevin Garden (2nd in breast at :38.61 and 4th in free at :29.55); Rob Garruto (4th in back at :37.15); Dennis Crump (4th in breast at :42.12).
15-18: Megan Meade (4th in free at :30.77); Hannah Tukachinsky (2nd in breast at :40.15 and fifth in 'fly at :34.36).
Relays: 9-12 Girls medley-5th at 1:21.03; 9-12 Boys medley-2nd at 1:13.03; 13-18 mixed medley-3rd at 2:23.47; 8/u mixed free-2nd at 1:22.36; 9-12 Girls free-2nd at 1:06.50; 9-12 Boys free-2nd at 1:08.19 and 13-18 mixed free-1st at 1:50.36.
The Dolphins' only regular season losses came to perennial power Westfield, which has won at least 10 straight league championships.
Caldwell/West Caldwell fell to Westfield, 265-180, on Thursday, July 5, and 291-154, on Tuesday, July 17.
Dolphins' successes came against the Cedar Grove Cahounas: 279-167 on June 28, and 157-77 in a rain-shortened meet on July 10; the West Orange Waves: 270-174 on July 12 and 284-157 on July 26; and Berkeley Heights: 259-186 on July 15 and 243-200 on July 24.
"I'm absolutely thrilled with the team's performance this year," said second year head coach Judy Montalbano, a veteran swimming mentor who guides the Cougars Aquatics Club outside of the summer season.
"I'm also filled with enthusiasm for next season. I think that a lot of our kids will continue swimming throughout the winter and that we'll be even stronger next year. Maybe even strong enough to finally challenge Westfield."
|
OPCFW_CODE
|
The U.S. Is Outsourcing Away Its Competitive Edge is an interesting article. I'm not a business person (yet!) so I won't comment on the core message of the article. But this argument resonated on me:
“In many cases, R&D and manufacturing are tightly intertwined. Unless you know how to manufacture a product, you often cannot design it. And, to understand how to manufacture it, you have to have manufacturing competencies and experience. The notion that you can design a product in the serene world of the R&D laboratory without any knowledge of the rough and tumble world of production is ridiculous”
— The U.S. Is Outsourcing Away Its Competitive Edge, Harvard Business Review
I drew the obvious connection with what in our field is called “Software Architecture” which, while reasonable in theory, ends up in practice with a bunch of architecture astronauts: people who stop thinking about the real problems and instead solve abstract issues using whatever high-level “design language” they think is the equivalent of what real architects draw.
For these folks, coding is too boring. And you know, they already did their share of coding and finally got away from it. Which tells me that they never liked coding much, by the way.
So the solution is to "outsource" coding to the more junior coworkers. Which is where I draw the parallel with the article, which talks about the US and its massive outsourcing of technology production while still wanting to keep the edge in R&D. Which sounds like a problem.
If you think that coding is sometimes boring (and I can agree with that!) and you feel tempted to take a software architect role, which is sold as the position in which you still do the entertaining part of the work (the part in which your brain gets some exercise) while leaving the rest to someone else, think about that again. It will probably work at first (since your experience with real coding is still fresh) but it won't work in the long term. You can't tell people how to code and what to code if you stopped coding and solving real problems a while ago.
Not to mention that “coding” with diagrams is boooring, unproductive and almost useless anyway.
So I'm in the camp which thinks that the right answer is the technical leader, not the architect. One who will need experience, for sure. But who won't throw away that experience by completely changing his role! He will still code and test and make releases and make mistakes (which is how he will continue building experience!)
“Experience is what you get when you don't get what you want”
— Dan Stanford
And this in turn takes me to the agile camp. Not only because I agree with the core principles like people-over-processes, but because I think that the whole parallel with civil engineering made by the traditionalist, waterfall-ish model is dead wrong. Civil engineering needs to plan a lot because changing a building on the fly isn't exactly easy. But not only can we change our systems on the fly; people expect to be able to change them.
We don't need architects who aren't willing to get their hands dirty and participate in the coding process. Because, in the software-engineering-as-civil-engineering parallel, the real equivalent of what civil architects produce is the code, not the diagrams.
Which brings us back to something I mentioned before in passing: coding can be boring, tedious sometimes. In my view, the appropriate solution is to change to a better language that lets you write code which is closer to a description of the underlying design. DSLs and that sort of stuff.
And yeah, diagrams can be a useful tool in the thinking process (and even better tools as high-level, introductory views of the systems), but stating that they are the design/specification is admitting defeat.
This is also why I think that agile practices are somewhat dependent on programming languages. They can't be completely separated like a set of practices that you can use regardless of what you are using to build whatever you are building. But I'm digressing to a different topic, and this is enough for today.
My final advice, however, is that if you like to program but find it increasingly boring and the offered “solution” by your current company is to become an architect, counter by offering the real solutions. Or move to a better company, if that doesn't work.
|
OPCFW_CODE
|
Red Hat Bugzilla – Bug 1260188
Sending HTTP redirects between custom aliases fails
Last modified: 2017-05-31 14:22:11 EDT
Description of problem:
tl;dr: If a GET request to `alias1.com/abc` yields a redirect response with `Location` header `alias2.com/xyz` from my app running on OpenShift (where `alias1.com` and `alias2.com` are both custom aliases for the OpenShift application host), it seems that OpenShift rewrites the `Location` response header on its way out such that its value becomes `alias1.com/xyz`. I need the `Location` header to go back to the client unchanged.
I am running a Rails app as an OpenShift application. I have configured the OpenShift application to have two custom aliases — let's call them `example.com` and `ex.co`.
The primary purpose of the app is to serve as a CMS for various articles; a secondary function is a short-link redirect service. For any article in the CMS, it has a long URL such as `example.com/the-article-slug` and a short URL such as `ex.co/JKL`.
The routing rules in my Rails app are set up such that if an incoming request specifies a `Host` of `ex.co`, then it routes the request to the short-link redirect controller. (If the host for the request is anything other than `ex.co`, then the request is routed to a typical controller, which finds an article and serves it.) The short-link redirect controller inspects the URL path, looks up the corresponding long URL, and sends a `Location` response header with that long URL, which, as I mentioned before, involves the hostname `example.com`.
In other words, I have a single Rails app that is set up so that a request to `ex.co/JKL` should redirect the browser to `example.com/the-article-slug`.
Unfortunately, it seems that OpenShift is doing something "smart" and is rewriting this outbound `Location` header on the fly, replacing `example.com` with `ex.co`. Thus, what I'm seeing is that a request for `ex.co/JKL` will receive a response redirecting the browser to `ex.co/the-article-slug`. But a request to this new URL gets routed to my short-link redirect service, and this service doesn't recognize `/the-article-slug` as a valid short link code, so it yields a 404 error.
(My guess is that there's some sort of on-the-fly rewrite on outbound HTTP headers whose logic is something like "if there's a redirect to an external host, then pass the Location through, but if there's an 'internal redirect', then make sure the hostname of the redirect URL matches that of the previous request so cookies, SSL, and users don't get confused" — where a redirect is "internal" if the target hostname matches the canonical hostname or one if its aliases. If this is indeed the case, unfortunately, as you can see, this isn't a safeguard that I want enforced in my case.)
At any rate, I was hoping you could do something to remedy this situation. I was wondering whether it's possible to expose a settings control that would allow me to disable this redirect rewriting behavior. Or perhaps I could send the redirect using some special `X-Real-Location` header so that you guys know I *really* do want to redirect to this other hostname which happens to be an alias for this application. Or perhaps you could suggest a workaround that I may not be aware of?
Version-Release number of selected component (if applicable):
This incorrect redirect behavior is consistent.
Steps to Reproduce:
1. Create a new OpenShift application (with the Ruby stack).
2. Configure the application to have two custom aliases, e.g. `a.com` and `b.com`.
3. Set up a web app (e.g. Rails).
4. Bonus: Configure the web app to be hostname-sensitive such that, for instance, a request for URL `/abc` is unrecognized by the app request router if the `Host` request header is something other than `a.com` and the URL `/xyz` is only recognized if the `Host` is `b.com`.
5. Configure the web app so that for a request to `a.com/abc`, it sends a 30x redirect response with Location header `b.com/xyz`.
6. Make an HTTP GET request for `a.com/abc`.
7. Inspect the HTTP response message for the Location header.
The `Location` HTTP response header will be `http://a.com/xyz`.
(If you did the optional step 4 above, when you try to do step 6 in a browser, the browser will receive a 404 error.)
The *desired* `Location` header value is `http://b.com/xyz`.
(If you did the optional step 4, the correct browser behavior is to see a 200 response with the payload corresponding to `b.com/xyz`.)
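A minimal way to observe steps 6-7 without a browser (using the hypothetical `a.com`/`b.com` aliases from the steps above, and Python only as a convenient HTTP client that does not follow redirects):
import http.client

conn = http.client.HTTPConnection("a.com")
conn.request("GET", "/abc")
resp = conn.getresponse()
# Desired: "http://b.com/xyz"; actually observed: "http://a.com/xyz".
print(resp.status, resp.getheader("Location"))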
My alias still can't be attached to another app after erasing the old one.
We apologize, however, we do not plan to address this report at this time. The majority of our active development is for the v3 version of OpenShift. If you would like for Red Hat to reconsider this decision, please reach out to your support representative. We are very sorry for any inconvenience this may cause.
|
OPCFW_CODE
|