Updated 9:56 a.m. PDT: Added screen shot and a link to Microsoft's Visual Studio 2010 page.
Airplanes are equipped with recorders that capture both cockpit audio and flight data, so in the event that something goes wrong, investigators can try to determine the source of the problem.
Microsoft is aiming to give software developers the same kind of access. In the next version of its developer tool suite, to be known as Visual Studio 2010, Microsoft plans to include the ability to record the full screens of what testers are seeing, as well as data about their machine. When a test application crashes, the technology will enable developers to see the bug as it occurred.
In an interview last week, Microsoft Developer Division Director Dave Mendlen said the feature is designed to avoid the all-too-frequent conflict that occurs when a software tester finds a bug that the developer says can't be reproduced. Internally, the feature has been called "TiVo for debuggers."
Although the feature is initially only aimed at in-house testers, a similar feature could one day find its way into broader testing, potentially even into Microsoft beta products. "I wouldn't be surprised at all to see this become a way that we do beta management, going forward," Mendlen said.
Microsoft offered scant other details about Visual Studio 2010 and the .Net Framework 4.0. It's a safe bet that better support for cloud-based services will be included, though. "That is certainly an area that Visual Studio and the .Net Framework will have to address," Mendlen said. "As we enable service-based technologies, of course we will have to tool it."
The company is also talking about new modeling tools it says will make it easier for programmers new to a team to get a sense of how earlier versions of the software work. One of the other goals is to add more business intelligence tools--things like dashboards and cockpits--that enable the project managers to assess whether a development project is on track. "The guys that are paying the bills often get very little info," Mendlen said.
Microsoft wouldn't get too much into other features of the product, but it outlined a few broad areas where it is seeking to improve the product, including "enabling cloud computing" and "powering breakthrough departmental applications."
Mendlen said it is expected to ship in fiscal year 2010 (which runs through June 2010).
"I can tell you it won't ship in 2011," he said.
The Redmond giant is not the only company looking to transfer the TiVo notion to software development. A company called Replay Solutions launched a product in June for enterprise Java applications.
Microsoft itself used the notion of a "black box" feature back in 2005.
Microsoft Chairman Bill Gates talked about adding a "black box" to Windows (without the video-recording ability, though). Microsoft later said it wasn't broadly expanding the "Watson" error-reporting capabilities beyond the kinds of data it already had been collecting. It was never totally clear what Gates was referring to.
A Microsoft representative did say that "the two technologies are not related and that in Visual Studio Team System the 'black box' is only on testers' machines and only turned on when the tester decides it should be turned on."
Speaking of 2005, that same year a pair of Canadian developers kicked around a Visual Studio 2010 concept of their own. Since they were the first to mention Visual Studio 2010, I thought I would give them some link love.
sdba - Optimizations for DQM and others
Pull Request Checklist:
[x] This PR addresses an already opened issue (for bug fixes / features)
This PR fixes #455
[x] Tests for the changes have been added (for bug fixes / features)
[x] Documentation has been added / updated (for bug fixes / features)
[x] HISTORY.rst has been updated (with summary of main changes)
[ ] bumpversion (minor / major / patch) has been called on this branch
[ ] Tags have been pushed (git push --tags)
What kind of change does this PR introduce?
Added norm_group to DetrendedQuantileMapping so that a different grouping can be used in the normalization steps. DQM.adjust also has a new normalize_sim arg. When True, sim is normalized along norm_group before being detrended along group. When False (default), sim is only detrended along norm_group. This is quite useful as the detrending process has poor performance compared to simple normalization, but only detrending by month creates artefacts at the month boundaries. Optimal performance is obtained with: norm_group='time.dayofyear', group='time.group', normalize_sim=True.
Added BaseAdjustment.save_training() and BaseDetrending.save_fit() to save the training/fit datasets. This can help dask as adjustment processes create an enormous amount of small operations, especially with "time.dayofyear" grouping. Default is to save to a temporary file that is deleted when the adjustment object is garbage-collected. A persistent file can be created by passing filename or the temporary file's directory can be given with tempdir.
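The grouping-based normalization described above can be sketched in plain numpy. This is a conceptual illustration only, not xclim's actual implementation; the function name and the toy data are made up:

```python
import numpy as np

def group_normalize(values, group_ids):
    """Subtract each group's mean from its members -- a conceptual
    sketch of 'normalization along a grouping' such as time.dayofyear."""
    values = np.asarray(values, dtype=float)
    group_ids = np.asarray(group_ids)
    out = np.empty_like(values)
    for g in np.unique(group_ids):
        mask = group_ids == g
        out[mask] = values[mask] - values[mask].mean()
    return out

# Two "days of year", each with a different climatological mean.
vals = np.array([10.0, 12.0, 20.0, 22.0])
doy = np.array([1, 1, 2, 2])
anoms = group_normalize(vals, doy)  # -> [-1., 1., -1., 1.]
```

Each group is centred on zero afterwards, which is the property the detrending and quantile-mapping steps rely on; a finer grouping (day-of-year rather than month) simply means more, smaller groups.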
Does this PR introduce a breaking change?
No
Pull Request Test Coverage Report for Build 1994
0 of 70 (0.0%) changed or added relevant lines in 4 files are covered.
3 unchanged lines in 2 files lost coverage.
Overall coverage decreased (-1.2%) to 75.677%
Changes Missing Coverage:

| File | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| xclim/sdba/base.py | 0 | 3 | 0.0% |
| xclim/sdba/processing.py | 0 | 6 | 0.0% |
| xclim/sdba/detrending.py | 0 | 25 | 0.0% |
| xclim/sdba/adjustment.py | 0 | 36 | 0.0% |

Files with Coverage Reduction:

| File | New Missed Lines | % |
|---|---|---|
| xclim/sdba/adjustment.py | 1 | 0% |
| xclim/sdba/detrending.py | 2 | 0% |

Totals:

- Change from base Build 1983: -1.2%
- Covered Lines: 2654
- Relevant Lines: 3507
💛 - Coveralls
I agree with the spirit of your comments. I believe one strength of xclim.sdba is its use of xarray/dask and the tremendous performance improvement that can provide compared to other options. For large workloads, dask does have its limits, and the save_training/save_fit functions are an example of an optimization we can add to help dask while staying at a fairly high level of programming.
For the norm_group/normalize_sim duo, I am ambivalent. As @RondeauG and I came to realize, QM with monthly grouping is not quite as robust as we thought. However, this 'dayofyear' pre-normalization really helps. We could suggest that users build their own process; that is in fact what I do in the notebook of #478.
Incidentally, I think that notebook adds explanations where the docstrings could still be unclear.
Just to give my opinion on this matter:
As @aulemahal mentioned, we discovered that grouping per month is not reliable enough, especially for months that exhibit large variations (e.g. several degrees of warming between the 1st and the 30th of June). Therefore, for an adequate adjustment, it is essential to normalize the data using a smaller window (time.dayofyear does the job adequately, especially if using a window).
The issue is that for large datasets, dask is overloaded by those smaller groups (as @aulemahal mentioned in another comment).
These are the conclusions of my tests:
norm_group=Grouper(group='time.dayofyear') is essential. Any other option (except maybe weekofyear?) will simply create bad results whenever there are strong variations within the groupings decided by the user. Therefore, I don't think that norm_group/normalize_sim are good arguments to provide the user, as anything other than norm_group=Grouper(group='time.dayofyear') is wrong. A better parameter, if we want one, could be norm_window. The difference in results between window=1 vs. window=31 seemed rather small, while window=31 was much more costly to run.
In any case, using norm_group=Grouper(group='time.dayofyear') is costly. There's no way around that. We should warn users that if this is too much for dask, the domain should be cut in smaller pieces, or the user should use the more optimized (but complex) code provided in @aulemahal 's Notebook. Using his Notebook, I have roughly the same performance as before (~25 minutes to bias correct 15 000 grid cells).
I'm :-1: on asking users to cut the domain. I think it's our job to solve this problem.
Is this relevant: https://docs.dask.org/en/latest/array-creation.html#intermediate-storage ?
If we can solve the problem, I'm all for it. The link you provided mentions "It can be useful as a checkpoint for long running or error-prone computations.", so... maybe?
The main problem we have is that with the time.dayofyear groups, dask gets overloaded while building its task graph: the process never gets parallelised, and RAM usage just grows and grows until you either stop the code or it reaches the limit of the machine. This is why our solution was to write to disk more often, as this reduces the number of tasks dask has to perform at once.
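The checkpoint-to-disk workaround can be sketched generically. Plain Python stands in for dask here; the chunking scheme and the sum-of-squares task are illustrative, not xclim's actual pipeline:

```python
import os
import pickle
import tempfile

def checkpointed_sum_of_squares(data, chunk_size=2):
    """Process data in chunks, persisting each partial result to disk
    so no single in-memory graph holds every intermediate value."""
    paths = []
    for i in range(0, len(data), chunk_size):
        partial = sum(x * x for x in data[i:i + chunk_size])
        fd, path = tempfile.mkstemp(suffix=".pkl")
        with os.fdopen(fd, "wb") as f:
            pickle.dump(partial, f)  # checkpoint this chunk's result
        paths.append(path)
    # Second pass: reload the small partials and combine them.
    total = 0
    for path in paths:
        with open(path, "rb") as f:
            total += pickle.load(f)
        os.remove(path)
    return total

result = checkpointed_sum_of_squares([1, 2, 3, 4])  # 1 + 4 + 9 + 16 = 30
```

The same idea scales up: each write truncates the pending task graph at a known point, trading disk I/O for a bounded scheduler and bounded RAM.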
After discussion with the whole user base, I followed the comments here and removed save_training and save_fit. Loading the datasets into memory is now the recommended way of dividing the workload in two; the notebook in the other PR has been updated as well.
[06:57] -GitHub[m]:#mir-server- **[MirServer/mir]** bors[bot] merged [pull request #2492](https://github.com/MirServer/mir/pull/2492): Add .drop method to mutex
[07:09] <Saviq> Good morrow \o
[07:39] <RAOF> Hello!
[07:40] * RAOF engages in some “commit-message-driven development”, where you explain the change and why it's necessary in the commit message and then realise that it isn't.
[07:48] <alan_g[m]> That sounds like more work than writing the test and discovering that it already passes
[07:53] <RAOF> It was more fixing the failing test and then deciding that the test was wrong.
[08:41] <alan_g[m]> I think we need to think more about the structure of mir-server.io "docs" tab. This lacks a top level under which server-side docs would live alongside Frame docs (and anything else we build with Mir)
[09:21] <Saviq> +1
[13:59] <Saviq> alan_g can this be dropped? https://github.com/MirServer/pi-gadget/tree/fix-vt
[14:00] <Saviq> And https://github.com/MirServer/egl-wayland (RAOF (he/his)?)
[14:03] <alan_g[m]> Saviq: Yes, it got merged
[14:05] <Saviq> All, can anything else here be archived / dropped?
[14:05] <Saviq> https://github.com/orgs/MirServer/repositories?type=all
[14:06] <Saviq> Asking b/c I need to run the inclusive naming tools on what we have, and would rather not burn unnecessary cycles
[14:07] <alan_g[m]> I'd need to review some of the old experiments to see how relevant they are. But I think snapcraft-desktop-helpers can definitely go
[14:07] <alan_g[m]> And travis-trigger-ci and tutorials.ubuntu.com
[14:08] <Saviq> Right, those are already archived, but I suppose we can prune since they're all forks
[14:08] <Saviq> Will do
[14:10] <alan_g[m]> We ought to be able to get rid of xwayland-kiosk-helper, but I fear it will have untracked dependencies
[14:11] <Saviq> Sounds like a candidate for archiving, then
[14:13] -GitHub[m]:#mir-server- **[MirServer/mir]** bors[bot] merged [pull request #2479](https://github.com/MirServer/mir/pull/2479): Refactor event matchers
[16:24] <Saviq> Good evening all o/
[17:00] <alan_g[m]> And good evening from me too!
Dennis Ritchie: farewell and thank you
With the recent passing of Steve Jobs, the world has had a reason to reflect on the significant impact some people have. Someone who made everything Steve Jobs did possible also passed away recently. On 12 October 2011, Dennis Ritchie, the father of the C programming language, died at his home in Berkeley Heights, New Jersey.
Beginning in 1970, and with the help of Ken Thompson, Dennis Ritchie began the design and construction of a new programming language. It was based on a language developed by Ken Thompson dubbed B, so the next language was called C. And the reason they wanted to create a new language? They wanted to write the kernel for the powerful multi-user operating system UNIX, which was to replace MULTICS after Bell Labs ended its involvement with that project in 1969, the same year man first stepped on the moon. In doing so, Dennis Ritchie and Ken Thompson created the framework on which all our modern computer and communications infrastructure is based.
The C Programming Language
The importance of the C programming language cannot be overestimated. Not only did it make UNIX possible, it made UNIX possible on multiple computing platforms. It was also the foundation for higher-level languages such as C++ and Java, and most of the core infrastructure of the Internet is based on programs written in C.
A few additional reasons why C is so important:
- Microsoft used it to create their initial software offerings
- UNIX is the origin of OS X and iOS
- 80% of all embedded software is still written in C
- Our business writes the Embedded Software we create in C
The C Programming Language, by Brian Kernighan and Dennis Ritchie, was the language manual for C, and it was so well written that it made picking up the language easy; this was one of the reasons for the rapid uptake of the language.
So much of our modern world depends on the work of Dennis Ritchie, and I, along with many others, am grateful. He may not have been the public figure that Steve Jobs was, but he leaves a larger and more enduring legacy.
Here are some further accolades for Dennis Ritchie:
- Without Dennis Ritchie there would be no Jobs
- Dennis Ritchie: The Shoulders Jobs Stood On
- Dennis Ritchie, The Father Of C And Co-Developer of UNIX
- Father Of C And UNIX, Dennis Ritchie
- Dennis Ritchie, Trailblazer
- What we can learn from Dennis Ritchie
- Dennis Ritchie: the other man inside your iPhone
- Dennis Ritchie: Remembering another Computing Genius
- Dennis Ritchie Biography
And finally, the 1998 US National Medal of Technology, received by Dennis Ritchie and Ken Thompson for their creation of the UNIX operating system and the C programming language.
And Ken Thompson and Dennis Ritchie explain what was behind the development of the UNIX operating system
We stand on the shoulders of giants. And Dennis Ritchie was a giant amongst giants.
Ray Keefe has been developing high quality and market leading electronics products in Australia for nearly 30 years. For more information go to his LinkedIn profile at Ray Keefe. This post is Copyright © 2011 Successful Endeavours Pty Ltd
I'm guessing this is the phenomenon you are imagining:
You hold the end of a garden hose with nothing attached, and turn on the spigot. Some water gushes out, with high volume, but low velocity. Then, you restrict the end of the hose with your thumb. The volume of the water is less, but the pressure in the hose rises, and it shoots much farther.
Hydraulic systems can make good analogies for electric systems. The problem is that not all hydraulic systems have an electric analog, and some people suck at making hydraulic analogies.
Here's one problem with mapping this hydraulic system to an electric system: charge is conserved, both in the universe as a whole and, practically speaking, in most circuits. With your garden hose, an unlimited supply of water is magically created by the city water system, and when it exits the hose and hits the ground, it's no longer in the circuit.
To get the electric analog of this, you need a circuit capable of shooting charged particles out into space. You also need something capable of supplying charged particles. Things like this do exist (for example, a CRT flings electrons through a near vacuum at the phosphor coating on the screen) but they typically require high voltages, and you aren't going to build anything like this with just a resistor and a battery. If you connect a resistor across a battery, the electric charge is pumped around the circuit. No charge enters, and no charge leaves.
Another problem: very commonly, electric circuits are designed to maintain a constant voltage. The water supply system is also designed to maintain a (roughly) constant pressure. However, since nothing (electric or hydraulic) can supply unlimited current, all of these voltage/pressure regulation systems have limits. In the case of your garden hose, the water supply system can't supply enough water to keep the pressure at the end of the hose at the target, say 30 psi. With nothing attached to the hose, there isn't enough resistance for the supply to work against to build the pressure. It is analogous to an electric short circuit.
If you were to block the hose entirely, you would find the pressure inside the hose (and indeed, everywhere in your hydraulic system, if the height differences are irrelevant) would be 30 psi. If you were to open the end just a little bit, you'd find that the pressure is still pretty much 30 psi. Only when quite a lot of water is flowing does the pressure drop much below 30 psi; this is because at high currents, the friction of the water flowing in the hose becomes significant, and increasingly more of the pressure is lost over the hose's resistance as the current increases.
If we wanted to model the hose-thumb system electrically, we'd need to take into account that the hose has some friction. Maybe something like this:
[Schematic: a voltage divider, created using CircuitLab]
This is called a voltage divider. When your finger is not on the hose, (\$0\Omega\$), the \$5\Omega\$ from the hose is very significant, since it's the biggest resistance in the system. When your finger is blocking most of the hose (let's say \$1000\Omega\$), then the additional \$5\Omega\$ from the hose makes very little difference.
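To put numbers on that divider (Python used purely as a calculator; the 30 psi supply plays the role of the source voltage, and the thumb is the variable load resistance):

```python
def divider_voltage(v_source, r_fixed, r_load):
    """'Pressure' across the variable restriction in a series
    (voltage divider) model: V_load = V * R_load / (R_fixed + R_load)."""
    return v_source * r_load / (r_fixed + r_load)

SUPPLY = 30.0   # psi, standing in for the source voltage
R_HOSE = 5.0    # ohms, standing in for the hose's friction

open_end = divider_voltage(SUPPLY, R_HOSE, 0.0)     # thumb off: 0 psi at the opening
thumbed = divider_voltage(SUPPLY, R_HOSE, 1000.0)   # thumb on: about 29.85 psi
```

With the end open, all 30 psi is dropped across the hose's own friction; with the end nearly blocked, almost the full supply pressure appears at the restriction, which is exactly the behaviour described above.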
"but in a hose, decreasing the hose diameter (current) would increase the water pressure (voltage)."
So with that explained, we can circle back to your confusion. It depends on where you measure the pressure. You probably know that if you totally block the hose, the water pressure does not rise without bound (your pipes would burst!). Using thinner or longer pipes actually decreases the water pressure available at the spigot (or appliance, or whatever is connected), because more pressure is lost to friction between the supply and the spigot. There's an electric analog: higher currents require fatter wire.
skd is a small daemon which binds to a UDP, TCP, or Unix-domain socket, waits for connections and runs a specified program to handle them. It is ideal as a secure, efficient replacement for traditional inetd. It is also an easy-to-use tool for non-privileged users wanting to run their own network services. Datagram and stream sockets are available in both the Internet and Unix namespaces, each with the expected inetd behavior. In the Internet domain, IPv6 is supported in addition to IPv4. skd also supports connection limits, verbose logging of connections, dropping of privileges, forking into the background with a pidfile, and redirecting stderr to syslog or a file. Some of these facilities (such as forking into the background, privilege dropping, and logging) are also useful for standalone, non-network services and can be used without binding any socket.
JRedis is a high-performance Java client and connector framework and reference implementation for Redis distributed hash key-value database. It will provide both synchronous clients and asynchronous connections for Redis. The connectors will be both passive (non-threaded) and active, to address deployment scenarios and usage requirements.
Netomata Config Generator (NCG) creates complete, ready-to-install configuration files for network devices and services from a common lightweight model of a network. Because these configuration files are generated programmatically and generated from a shared model, they are more likely to be consistent and complete, making a network more reliable, easier to troubleshoot, and easier to expand in both size and functionality. The inputs to NCG are a model describing the network and templates for the configuration files of the various devices (routers, switches, load balancers, firewalls, etc.) and services (SNMP, DNS, DHCP, etc.). From these inputs, NCG produces complete, consistent, ready-to-install configuration files for those devices and services.
Trafficmeter is a traffic collecting and logging system. It collects and groups packets by time, source IP, destination IP, protocol, source port, and destination port. You can get a detailed log of traffic for every IP without any daemon configuration work. It also gives statistics of IP incoming and outgoing traffic for a time period.
Fallback-gw is a little script to be called via cron that checks the availability of neighbor routers using ping and activates backup routing on ping failure. It can be used as a stupid replacement for BGP/OSPF in a multihomed environment. It has been tested on FreeBSD and on Linux with iproute2.
sc-tool is a commandline tool intended to simplify administration of traffic shapers for Internet Service Providers. It features fast loading of large rulesets using batch modes of tc and ipset, loading of information from any SQL database supported by Perl DBI, a batch command execution mode, and synchronization of rules with databases.
PICI-NMS is an object-oriented middleware which makes it possible to send messages between applications, in a networked environment or on a single host, using the provided library. The supported message-sending mechanism is "publish/subscribe", backed by a very easy-to-use and intuitive C++ API which hides the underlying socket interface to make message sending as transparent to the client as possible.
Add a Binder badge
Description
You could add a Binder badge to launch the Jupyter notebooks in a cloud instance
Homepage: https://mybinder.org
Src: https://github.com/jupyterhub/binderhub
Docs: https://binderhub.readthedocs.io/en/latest/
https://github.com/QISKit/qiskit-sdk-py/issues/177
Your Environment
binder builds cloud containers with repo2docker.
repo2docker will install from requirements.txt and/or environment.yml
Docs: https://repo2docker.readthedocs.io/en/latest/samples.html#conda-mixed-requirements
Src: https://github.com/jupyter/repo2docker
conda env export -f environment.yml
https://conda.io/docs/user-guide/tasks/manage-environments.html#exporting-the-environment-file
Hi,
Thank you very much! This seems cool indeed. Would you be able to explain the steps to do that? Perhaps writing a blog post or an article on how to run the notebooks in the tutorial here would be very nice for people who are not familiar with GitHub and just want to run the tutorials.
Add a Binder badge to README.rst
( Ensure that binder (repo2docker) can install the necessary dependencies with an e.g. environment.yml )
Enter the URL and file path into the form at mybinder.org
Click the badge icon
Copy the RST source to the clipboard
Paste the Binder SVG badge link RST source to README.rst:
.. image:: https://mybinder.org/badge.svg
   :target: https://mybinder.org/v2/gh/QISKit/qiskit-tutorial/master?filepath=index.ipynb
(Near the top / 'above the fold' of the README?)
Launch the notebooks in a Binder instance
Click on the Binder SVG badge (that says "launch binder") to open /index.ipynb
Click any of the (relative) links in index.ipynb to open them in the already-provisioned Binder Docker instance
Many thanks! It seems to work pretty well and so easy!!
However, this notebook produces an error when run with mybinder:
1_introduction/getting_started.ipynb
Interesting. I jumped ahead to 2. "Superposition and Entanglement" and that seemed to work okay.
The given notebook seems to open in nbviewer just fine; so the JSON isn't a problem with whichever component versions are live on nbviewer. IDK why there's a Unicode symbol \ufeff in the JSON? Maybe that's normal? I would need to get to a terminal with hexdump or xxd to take a look.
The complete error message from mybinder.org JupyterHub is:
Unreadable Notebook: /home/jovyan/1_introduction/getting_started.ipynb NotJSONError('Notebook does not appear to be JSON: \'\\ufeff{\\n "cells": [\\n {\\n "cell_typ...',)
https://github.com/QISKit/qiskit-tutorial/blob/stable/1_introduction/getting_started.ipynb
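That `\ufeff` is a UTF-8 byte-order mark (BOM) at the very start of the file, which strict JSON parsers reject. A small Python sketch of reading a notebook while tolerating a BOM (the demo file name is made up):

```python
import codecs
import json
import os

def read_notebook_tolerating_bom(path):
    """Load a .ipynb file; the 'utf-8-sig' codec silently strips a
    leading UTF-8 BOM if one is present."""
    with open(path, encoding="utf-8-sig") as f:
        return json.load(f)

# Demo: write a minimal notebook-like JSON file with a BOM prepended,
# reproducing the condition that trips up a strict parser.
demo_path = "bom_demo.ipynb"
with open(demo_path, "wb") as f:
    f.write(codecs.BOM_UTF8 + b'{"cells": []}')

nb = read_notebook_tolerating_bom(demo_path)  # parses despite the BOM
os.remove(demo_path)
```

Re-saving the notebook without the BOM (as was eventually done here) is the proper fix; the `utf-8-sig` trick is only useful for inspecting an affected file.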
It shouldn't be necessary, but one can install BinderHub locally (or just repo2docker) to test with. There may be a way to run the jupyter nbconvert equivalent of runipy at container build time (e.g. after environment.yml is processed) with repo2docker:
jupyter nbconvert --to notebook --execute mynotebook.ipynb
Thanks. It appears that the error is due to the file getting_started.ipynb that I fixed just now.
https://github.com/QISKit/qiskit-tutorial/issues/72
@diego-plan9 when we are finished with the tutorial rework can you look into this and see what we should do here.
This: https://github.com/QISKit/qiskit-tutorial/issues/64#issuecomment-353984390
https://mybinder.org/v2/gh/QISKit/qiskit-tutorial/master?filepath=index.ipynb
The heading links seem to link to directories instead of .ipynbs?
Done. A few things would still be good: getting circuit_drawer working with Python, and handling qconfig better. These are all on the roadmap for qiskit, so I'm going to close this for now.
@westurner i got it working with the latex (binder is really nice) and i got the qconfig working by skipping it in the the header and using getpass. Thanks for the suggestions and let me know if you have more improvements.
Thanks!
The foundations of XML are found in two World Wide Web Consortium (W3C) recommendations: Extensible Markup Language and Namespaces in XML. Using just these foundations, it is very simple and straightforward to express a set of information in a labeled hierarchy. The hierarchy has simple parent, child, and sibling relationships,...
Internetworking Troubleshooting Handbook (2nd Edition)
If you can think of the problem, Internetworking Troubleshooting Handbook probably has the solution--at least when it comes to networking problems. This 714-page tome is absolutely phenomenal in scope. Though you may not find in-depth scholarly discussions of networking woes, you will find pragmatic tips that can help you through an...
C++ 2013 for C# Developers
C++/CLI was originally envisioned as a high-level assembler for the .NET runtime, much like C is often considered a high-level assembler for native code generation. That original vision even included the ability to directly mix IL with C++ code, mostly eliminating the need for the IL assembler ilasm.
As the design of C++/CLI...
Oracle SQL: The Essential Reference
SQL (Structured Query Language) is the heart of a relational database management system. It's the language used to query the database, to create new tables in the database, to update and delete database fields, and to set privileges in the database. Oracle SQL: The Essential Reference is for everyone who needs to access an Oracle...
Handbook of Software Quality Assurance
The industry's top guide to software quality -- completely updated!
Practical techniques for mission-critical and commercial software.
Build a great software quality organization.
Prepare for ASQ Software Quality Engineer Certification.
Software quality assurance has never been more challenging -- nor more...
Pro Visual C++/CLI and the .NET 2.0 Platform
It is with great satisfaction that I introduce you to Stephen's excellent new book, Pro Visual C++/CLI
and the .NET 2.0 Platform, the first detailed treatment of what has been standardized under ECMA as
C++/CLI. Of course, any text, no matter how excellent, is itself incomplete, like a three-walled room.
The fourth wall,...
.NET Framework Standard Library Annotated Reference Volume 2
The .NET Framework Standard Library Annotated Reference, Volume 2, completes the definitive reference to the .NET Framework base class library. This book-and-CD set offers programmers unparalleled insight into the ECMA and ISO specifications for the classes...
Concurrent and Real-Time Programming in Ada Ada is the only ISO-standard, object-oriented, concurrent, real-time programming language. It is intended for use in large, long-lived applications where reliability and efficiency are essential, particularly real-time and embedded systems. In this book, Alan Burns and Andy Wellings give a thorough, self-contained account of how the Ada tasking...
View Full Version : FS2004 and Digital Right Management (DRM)
06-18-2003, 03:05 PM
I saw a post on a Lago forum last night suggesting that FS2004 will come with Macrovision SafeCast (C-Dilla) copy protection.
I've been looking forward to fs2k4 for some time and this makes me very unhappy. I'd planned to be near the front of the line with my $$ in hand but now I'm not so sure.
SafeCast is a very system-intrusive thing that I first crossed paths with when I bought TurboTax this year. It has virus-like properties, including undocumented writes to a supposedly unused sector on the boot track (i.e. beyond what's supported by the OS). One implication of this is that a normal system backup, using DriveImage for example, will not restore copy-validation info. The marketing BS makes it sound as though SafeCast is a trouble-free, user-transparent system for doing DRM. My own experience with TurboTax is that this isn't the case. At one point I had to reactivate. I was able to do so without buying another licence, but some people weren't so fortunate. After filing my tax returns, I uninstalled TurboTax and killed SafeCast, including the ugly "C-Dilla" directory at the root level of the system drive (unlike some, my system root is not cluttered) and the permanently installed and running service required for its operation. I've vowed to never buy from Intuit again. I won't belabor this here, but if you want to see some of the issues people have with this hateful protection scheme, search Google Groups for "C-Dilla".
I guess I won't be near the front of the line to get FS2004 after all. In fact, unless I hear a lot of reassuring feedback from those who do get it, I suppose I'll be flying FS2002 for a very long time. After my experience with Turbotax, I'm not at all inclined to let C-Dilla back on my system.
06-18-2003, 03:34 PM
What someone may have "suggested" aside, why don't we wait and see what MS is really using, rather than potentially worry folks over nothing.
Rumors like this often serve no purpose, unless the person making the suggestion was validated as an employee of MS, and I highly doubt that. ;-)
06-18-2003, 04:18 PM
I really hope there's no cause for concern. The info came from a quote attributed to Steve Small at FSD-International.
06-18-2003, 07:40 PM
See this is why I hate DRM.
Fair enough, they are protecting their software. I got no problem with that. But putting all this ##### on my machine, without my authorisation, and phoning home when it wants (who knows what else it is reporting?) I do NOT like.
Down with Palladium as well.
06-18-2003, 10:15 PM
Yes, I saw this too; he is a reliable source. Maybe he is not correct, but coming from Steve, it is something to think about. Your concerns are valid. I know our company had very major issues with this 'scheme' on our company network - a real nightmare.
check this link
06-19-2003, 03:12 AM
>In fact, unless I hear a lot of reassuring feedback from those who do get it, I suppose I'll be flying FS2002 for a very long time.<
Well, not necessarily. Even if such technology is included, you still wouldn't have to live without FS2004. Contrary to public opinion, MS software does generally comply with national laws, and in a lot of countries such intrusive software violates privacy legislation, in others, even simple copy protection is illegal. Just buy your copy from one of those countries - it may cost a bit more and you may have to wait a bit longer, but you will have it without any undesired side effects.
06-19-2003, 03:59 AM
What countries are those? Any ordering sites?
06-19-2003, 07:33 AM
As Lou said, let's not start mass hysteria here. This is nothing more than rumor ( at this point in time ) and there's no need to jump the gun and even consider suggesting you purchase your copy from a different country until the facts are in....of course, you are free to do what you want, but let's not get everyone wound up without cause ( remember the whole Y2K hype ? ;-) )
06-19-2003, 08:44 AM
Having a plan to avoid potential problems, waiting to determine if those problems are real, and acting sensibly as the situation unfolds is prudence.
Properly exercised it is not hysteria!
06-19-2003, 04:04 PM
Hi HP and Bob,
as far as countries go, Russia is a good start, as well as various EU countries (although EU-wide legislation is in the making, currently it's different in individual member states).
I agree with Bob on the 'hysteria' point, so let's consider the likelihood of something like C-Dilla being included in FS2004.
First, a program that calculates US income tax is not an attractive software title for customers outside the US, therefore serves a limited market, and thus only has to concern itself with US law.
MSFS, however, is a title that is marketed worldwide. Like any business, MS would like to make a profit, and considering their size and expertise in producing a wide variety of software, I find it unlikely that they would include third-party DRM software with their products, especially if that software has the potential to fall foul of foreign legislation. Not only would MS have to pay royalties to the third party, they would also have to produce different versions of FS for distribution in different countries. This would considerably reduce their profit margin. It seems much more likely that they would include some proprietary form of DRM, adaptable to various needs, like the Windows XP activation technology. There, it is the serial number that determines whether activation is required; if so, how much information is retrieved from your computer; and whether installation on multiple computers is permitted, and if so, on how many, etc. - but the content of the actual CD is identical. Plus, MS wouldn't have to pay a licence fee to a third-party developer.
So, how do those rumours get started? Discounting pure malice (which obviously is a possibility), C-Dilla or something like it may well have been included in the beta versions of FS2004. After all, it is a limited-distribution title (I would imagine a few thousand copies), and beta testers and reviewers are not customers in a legal sense. For such a small and time-limited distribution, it may make perfect sense for MS to pay a licence fee for some third-party DRM software, rather than spending time on developing their own, or integrating it into their existing technology. I guess a few months after the release, we'll get an idea of how many illegal betas made it into circulation, when people start posting questions on the forums like 'FS won't start - I get an error message saying 'software expired''... :).
So I agree with Lou and Bob, let's see what happens first, and if, against all the odds, unacceptable protection technology is indeed included, we can either grin and bear it, or try to find a reputable Russian online vendor :).
06-20-2003, 08:50 PM
Folks, remember FS2000 and the NODISK program?
Well, I personally do not think MS will use such hard copy protection for a product that will cost less than 100 bucks.
But certainly, and unlawfully (I quote), some hacker will break the code, and voilà, no problemo.
Valhalla EG notes Feb 13, 2019
karen.kinnear at oracle.com
Wed Feb 27 15:30:47 UTC 2019
Attendees: John, Remi, Dan H, Tobi, Simms, Karen
AI: Karen: resend John’s Template Class proposal: http://cr.openjdk.java.net/~jrose/values/template-classes.html
AI: Remi: write up proposal on specializing parameters to defineClass
I. DH: Locking options: put stake in the ground:
Throw exception - consensus
II. JR: Generic Specialization: Template class refinement
Propose we do at least one prototype for specialization mechanisms this year.
1. Class file is still chief entity
constant pool: more articulated, as are class, field and method
goal: share constants and bytecodes as much as possible
LWorld helps here
Constants: change signatures to add reified type parameters in descriptors
- model 3 generated new classfiles, lost all sharing
Therefore: constant pool needs to be partially shared and partially specialized
Proposal: constant pool segments
Holes: fill with parameterized types
Requirement: No holes in the concrete by the time you get to a reference.
Actually no holes by the time you get to verifying the species
RF: Concern about resolving too early if specialize CP
JR: Risk - we need an experiment
RF: wants greater dynamicity, possibly fully dynamic
JR: In the VM: we can’t always do late binding - e.g. heap layout - need full information early
RF: agree layout needs early info
generate specialized class by filling hole when defining the class
JR: Entity model:
segmented constant pool
1 global segment
local segments depending on 1 or more holes, tree structured to at least depth 2
Hole kinds: field type, dynamic constant, method type, MethodHandle
structural inheritance of constraints
Fill hole when we specialize a CP segment
DH: global segment seen by others?
JR: yes - resolved at most once
DH: condy and MH: lookup dependent on instantiation
fill class holes when load/define a class/species, before referencing - i.e. field or method reference
class_info, method_info, field_info refer to segment
class template, method template, field template - must specialize constant pool and then instantiate
Load class for specialization by providing hole values
(ed. note: provide live types)
Open question: how to represent generic method more generic than enclosing class
DH: CP indices globally numbered?
JR: yes. segments are not overlays
DH: named segments?
JR: based on 1st constant in the segment
rules for referential integrity and placement of constants in segment
DH: each specialization CFLH/redefinition or 1 per template?
JR: open - default yes
(ed. note - need to revisit this one - earlier assumption was redefinition of template class, not each species)
JR: also nested generics with a shared constant pool, e.g. in future an inner class could share the same class file
RF: What should be shared/not shared?
Would like to see done dynamically, specialize parameters to defineClass
- JR had to leave
DH: JVMTI and general tooling issues
KK: working on sharing requirements
note: sharing requirements are: class-wide and per-species, there is nothing shared across a subset of species.
Conditional (possibly “where” syntax) determines if a method for example will be part of any given species
KK: other open issue: raw vs. erased - and best way to deal with backward compatibility
During the meeting asked if virtual methods/virtual fields are only needed to deal with raw/wild types - answer was yes.
(ed. note: after meeting - found another case - which I can’t recall at the moment)
RF: client level proposal: old generics vs. new generics
option 1: client with old generics not reference code with new generics
option 2: not have to recompile client code to use it, need virtual dispatch
Proposal: embrace reuse as central design.
Constant pool specialization - want to be careful about adopting java generics semantics.
Other languages, e.g. Scala, can't use this - slightly different generics semantics
KK: Is there anything we could add to the class file that would make it easier to support generics in other languages?
RF: wants to do a prototype at runtime, with no java semantics in the design
KK: would it be useful to have information in the class file and language-specific specializers?
RF: future: use Lookup.defineClass with 1 dynamic parameter
at runtime the dynamic object is like a static of the species
no representation of species at compile time
no reified type in constant pool
KK: Does this imply no sharing at all?
RF: Derive species when needed - ask for specialization, create new if none exists, and intern
Mark if a field or method is specialized
DH: Can a descriptor refer to specific specialization?
RF: No: dynamic check at runtime
DH: JIT engineers: if this model depends on JIT magic to work, concerns about startup especially in constrained environments
RF: specializing a class vs. method are different, maybe JR’s model for classes
not want java generic semantics in vm
if full template specialization, can’t have sharing
Swift: template: either generic code or compile time inline and specialize - based on caller/callee
KK: concerns about performance cost of virtual field/virtual method additional indirections
What are the sharing requirements?
RF: generic methods - just want to share resolution, not share data
segment for each combination of type parameters too much
DH: only specialize for value types, reduces the problem
RF: Haskell eg. Linked list which encodes the type of the next link or tuple as linked list
if CP specialization - never call with exactly the same type
DH: If it hurts …
RF: currently works with erasure. Concern slow if new specialization for each link
III. RF DynamicValue attribute
Another project Remi will lead and create JEP
language level: static lazy final
improve startup by allowing init with Condy at first access of individual static
Drawbacks: opt-in at source
change in semantics
in static block - there is a lock
condy BSM can execute multiple times
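The "lazy static final" idea above (initialize a static at first access instead of in the class initializer) is analogous to lazy initialization in other languages. A rough Go sketch of the semantics, using sync.Once; this is my analogy, not part of the proposal:

```go
package main

import (
	"fmt"
	"sync"
)

// lazyInt mimics a lazy static final: the initializer runs at most
// once, on first access, rather than eagerly at load time.
type lazyInt struct {
	once sync.Once
	init func() int
	val  int
}

func (l *lazyInt) Get() int {
	l.once.Do(func() { l.val = l.init() })
	return l.val
}

func main() {
	calls := 0
	v := &lazyInt{init: func() int { calls++; return 42 }}
	fmt.Println(v.Get(), v.Get(), calls) // initializer ran exactly once
}
```

Note one semantic difference the notes call out: a condy bootstrap method may execute multiple times, whereas sync.Once guarantees at-most-once execution.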
Intel Publishes Ice Lake Gen11 Graphics Architecture Details Highlighting Significant Improvements
All of the buzz surrounding Intel's efforts in graphics right now is around the company's Odyssey towards its first modern discrete GPU, currently scheduled for release in 2020. That's understandable, but let's not forget that, in terms of market share, Intel's integrated graphics lead the pack. Intel's upcoming Gen11 graphics will keep things going, and interestingly, Intel has quietly released a white paper that describes Gen11 in some detail.
Intel has already shared a few details about Gen11 during its Architecture Day last December. Gen11 will complement Intel's upcoming Sunny Cove CPU architecture, which itself will form the basis for both Core (consumer) and Xeon (server) processors.
While Intel did not spill all of the beans at the time, it did say that Gen11 bumps the number of enhanced execution units from 24 to 64, and pushes compute performance to over 1 TFLOPS. That's not on par with stronger discrete solutions, but as we saw earlier today in a benchmark leak, Gen11 is shaping up to be much faster than Gen9.
In the newly published white paper, Intel compares the makeup of Gen11 to Gen9. The table above presents the theoretical peak throughput of the compute architecture, aggregated across the entire spectrum. Values stated are "per clock cycle," as the final product clock rates are still being hashed out.
This chart reiterates the 1 TFLOPS claim, while adding some other points of comparison, such as half-precision performance (2 TFLOPS).
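As a back-of-the-envelope check (my own arithmetic, not from the white paper): each Gen EU retires 16 FP32 FLOPs per clock (two SIMD4 FMA pipes, with an FMA counting as two operations), so 64 EUs give 1,024 FLOPs per clock; at a hypothetical ~1 GHz that works out to roughly 1 TFLOPS, doubling at half precision.

```go
package main

import "fmt"

func main() {
	const (
		eus             = 64  // execution units in one Gen11 slice
		flopsPerEUClock = 16  // 2 SIMD4 FMA pipes × 4 lanes × 2 ops (FMA)
		clockGHz        = 1.0 // hypothetical clock; final rates were unannounced
	)
	perClock := eus * flopsPerEUClock            // FP32 FLOPs per clock cycle
	tflops := float64(perClock) * clockGHz / 1e3 // GFLOPs at 1 GHz → TFLOPS
	fmt.Printf("%d FLOPs/clock, ~%.1f TFLOPS FP32, ~%.1f TFLOPS FP16\n",
		perClock, tflops, 2*tflops)
}
```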
As for the call out to "slices," Gen11 will consist of 8 subslices aggregated into 1 slice. So, a single slice aggregates a total of 64 execution units. Aside from grouping subslices, the slices integrate additional logic for geometry and L3 cache.
"In Gen11 architecture, arrays of EUs are instantiated into a group called a Subslice. For scalability, product architects can choose the number of EUs per subslice. For most Gen11-based products, each subslice contains 8 EUs. Each subslice contains its own local thread dispatcher unit and its own supporting instruction caches. Each Subslice also includes a 3D texture sampler unit, a Media Sampler Unit and a dataport unit," Intel explains.
Communication takes place through a ring interconnect, which is an on-die bus between CPU cores, caches, and the Gen11 graphics in a ring-based topology.
The paper also discusses a technique called Coarse Pixel Shading. This works by reducing the number of times the pixel shader executes, which in turn saves rendering time. To preserve details along the edges, sample coverage and depth continue to be sampled at the target resolution.
"CPS allows us to decrease the total amount of work done when rendering portions of the scene where the decrease in shading rate will not be noticed. We can also use this technique to lower the total overall power requirements or hit specific frame rate targets by decreasing the shading resolution while preserving the fidelity of the edges of geometry in the scene," Intel says.
Also found in the white paper are references to Gen11's position only shading tile-based rendering (PTBR). This consists of two distinct pipelines—a typical render pipe and a new position only shading (POSH) pipe.
The POSH pipe executes the position shader in parallel with the main application, and has the advantage of generating results much faster. That's because it only shades position attributes and avoids rendering actual pixels.
"The POSH pipe runs ahead and uses the shaded position attribute to compute visibility information for triangles to gauge whether they are culled or not. Object visibility recording unit of the POSH pipe calculates the visibility, compresses the information and records it in memory," Intel explains.
It's an interesting read, if you're into technical details. Hit the link in the Via field (PDF) below to give it a look.
package ftp
import (
"context"
"fmt"
"path"
"regexp"
"strings"
_ftp "github.com/jlaffaye/ftp"
"github.com/c2fo/vfs/v6"
"github.com/c2fo/vfs/v6/backend/ftp/types"
"github.com/c2fo/vfs/v6/options"
"github.com/c2fo/vfs/v6/utils"
)
// Location implements the vfs.Location interface specific to ftp fs.
type Location struct {
fileSystem *FileSystem
path string
Authority utils.Authority
}
// List calls FTP ReadDir to list all files in the location's path.
// If you have many thousands of files at the given location, this could become quite expensive.
func (l *Location) List() ([]string, error) {
var filenames []string
dc, err := l.fileSystem.DataConn(context.TODO(), l.Authority, types.SingleOp, nil)
if err != nil {
return filenames, err
}
entries, err := dc.List(l.Path())
if err != nil {
if strings.HasPrefix(err.Error(), fmt.Sprintf("%d", _ftp.StatusFileUnavailable)) {
// in this case the directory does not exist
return filenames, nil
}
return filenames, err
}
for _, entry := range entries {
if entry.Type == _ftp.EntryTypeFile {
filenames = append(filenames, entry.Name)
}
}
return filenames, nil
}
// ListByPrefix calls FTP ReadDir with the location's path modified relatively by the prefix arg passed to the function.
// - Returns ([]string{}, nil) in the case of a non-existent directory/prefix/location.
// - "relative" prefixes are allowed, ie, listByPrefix from "/some/path/" with prefix "to/somepattern" is the same as
// location "/some/path/to/" with prefix of "somepattern"
// - If the user cares about the distinction between an empty location and a non-existent one, Location.Exists() should
// be checked first.
func (l *Location) ListByPrefix(prefix string) ([]string, error) {
var filenames = make([]string, 0)
// validate prefix
if err := utils.ValidatePrefix(prefix); err != nil {
return filenames, err
}
// get absolute prefix path (in case prefix contains relative prefix, ie, some/path/to/myprefix)
fullpath := path.Join(l.Path(), prefix)
// get prefix and location path after any relative pathing is resolved
// For example, given:
// loc, _ := fs.NewLocation("user@host:21", "/some/path/")
// loc.ListByPrefix("subdir/prefix")
// the location fullpath should resolve to be "/some/path/subdir/" while the prefix would be "prefix".
baseprefix := ""
if prefix == "." {
// for prefix of ".", it is necessary to manually set baseprefix as "." and
// add trailing slash since path.Join thinks that "." is a directory
baseprefix = prefix
fullpath = utils.EnsureTrailingSlash(fullpath)
} else {
// get baseprefix fix from the fullpath
baseprefix = path.Base(fullpath)
// get absolute dir path of fullpath
fullpath = utils.EnsureTrailingSlash(path.Dir(fullpath))
}
// get dataconn
dc, err := l.fileSystem.DataConn(context.TODO(), l.Authority, types.SingleOp, nil)
if err != nil {
return filenames, err
}
// list directory entries
entries, err := dc.List(fullpath)
if err != nil {
// fullpath does not exist, is not an error here
if strings.HasPrefix(err.Error(), fmt.Sprintf("%d", _ftp.StatusFileUnavailable)) {
// in this case the directory does not exist
return []string{}, nil
}
return filenames, err
}
for _, entry := range entries {
// find entries that match prefix and are files
if entry.Type == _ftp.EntryTypeFile && strings.HasPrefix(entry.Name, baseprefix) {
filenames = append(filenames, entry.Name)
}
}
return filenames, nil
}
// ListByRegex retrieves the filenames of all the files at the location's current path, then filters out all those
// that don't match the given regex. The resource considerations of List() apply here as well.
func (l *Location) ListByRegex(regex *regexp.Regexp) ([]string, error) {
filenames, err := l.List()
if err != nil {
return nil, err
}
var filteredFilenames []string
for _, filename := range filenames {
if regex.MatchString(filename) {
filteredFilenames = append(filteredFilenames, filename)
}
}
return filteredFilenames, nil
}
// Volume returns the Authority the location is contained in.
func (l *Location) Volume() string {
return l.Authority.String()
}
// Path returns the path the location references in most FTP calls.
func (l *Location) Path() string {
return utils.EnsureLeadingSlash(utils.EnsureTrailingSlash(l.path))
}
// Exists returns true if the remote FTP directory exists.
func (l *Location) Exists() (bool, error) {
dc, err := l.fileSystem.DataConn(context.TODO(), l.Authority, types.SingleOp, nil)
if err != nil {
return false, err
}
entries, err := dc.List(l.Path())
if err != nil {
if strings.HasPrefix(err.Error(), fmt.Sprintf("%d", _ftp.StatusFileUnavailable)) {
// in this case the directory does not exist
return false, nil
}
return false, err
}
if len(entries) == 0 {
return false, nil
}
if entries[0].Type != _ftp.EntryTypeFolder {
// err is nil here; the path exists but is not a directory
return false, nil
}
return true, nil
}
// NewLocation makes a copy of the underlying Location, then modifies its path by calling ChangeDir with the
// relativePath argument, returning the resulting location. The only possible errors come from the call to
// ChangeDir, which, for the FTP implementation doesn't ever result in an error.
func (l *Location) NewLocation(relativePath string) (vfs.Location, error) {
// make a copy of the original location first, then ChangeDir, leaving the original location as-is
newLocation := &Location{}
*newLocation = *l
err := newLocation.ChangeDir(relativePath)
if err != nil {
return nil, err
}
return newLocation, nil
}
// ChangeDir takes a relative path and modifies the underlying Location's path in place, so the only
// return value is an error. For this implementation there are no errors.
func (l *Location) ChangeDir(relativePath string) error {
err := utils.ValidateRelativeLocationPath(relativePath)
if err != nil {
return err
}
l.path = utils.EnsureLeadingSlash(utils.EnsureTrailingSlash(path.Join(l.path, relativePath)))
return nil
}
// NewFile uses the properties of the calling location to generate a vfs.File (backed by an ftp.File). The filePath
// argument is expected to be a relative path to the location's current path.
func (l *Location) NewFile(filePath string) (vfs.File, error) {
err := utils.ValidateRelativeFilePath(filePath)
if err != nil {
return nil, err
}
newFile := &File{
fileSystem: l.fileSystem,
authority: l.Authority,
path: utils.EnsureLeadingSlash(path.Join(l.path, filePath)),
}
return newFile, nil
}
// DeleteFile removes the file at fileName path.
func (l *Location) DeleteFile(fileName string, _ ...options.DeleteOption) error {
file, err := l.NewFile(fileName)
if err != nil {
return err
}
return file.Delete()
}
// FileSystem returns the vfs.FileSystem interface of the location's underlying fileSystem.
func (l *Location) FileSystem() vfs.FileSystem {
return l.fileSystem
}
// URI returns the Location's URI as a string.
func (l *Location) URI() string {
return utils.GetLocationURI(l)
}
// String implements fmt.Stringer, returning the location's URI as the default string.
func (l *Location) String() string {
return l.URI()
}
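The relative-prefix resolution that ListByPrefix performs can be seen in isolation with just the standard path package. This is a standalone sketch of the same logic, without the vfs types; resolvePrefix is a name invented for the illustration:

```go
package main

import (
	"fmt"
	"path"
)

// resolvePrefix splits a possibly-relative prefix into the directory to
// list and the base prefix to match entries against, mirroring the logic
// in ListByPrefix above.
func resolvePrefix(locPath, prefix string) (dir, base string) {
	full := path.Join(locPath, prefix)
	if prefix == "." {
		// "." means "everything here": list locPath itself
		return full + "/", "."
	}
	// otherwise split into parent dir (listed) and base (matched)
	return path.Dir(full) + "/", path.Base(full)
}

func main() {
	dir, base := resolvePrefix("/some/path/", "subdir/prefix")
	fmt.Println(dir, base) // /some/path/subdir/ prefix
}
```

This matches the example in the ListByPrefix comments: listing from "/some/path/" with prefix "subdir/prefix" resolves to directory "/some/path/subdir/" and base prefix "prefix".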
Why the difference of opinion about the disappearance of Subhas Chandra Bose
A year back, Harvard professor Sugata Bose released a biography of the Indian leader Subhas Chandra Bose which claimed to lay all speculation regarding his death to rest. However, last week, veteran journalist Anuj Dhar released a book which claims to show documents obtained from India's government proving that the evidence points in exactly the other direction. According to Dhar, documents obtained using the Right to Information Act show that the famous freedom fighter, who was popularly known as "Netaji" (leader) in India, had actually escaped to Soviet Russia in 1945, and that the news of the plane crash was a subterfuge that allowed Bose to escape. The Government of India's last inquiry also supports Dhar's claim. India's government itself seems to have an ambiguous stance on this matter.
Why is it that historians are not able to agree on someone's disappearance over 65 years after it occurred? And why has this great disappearance mystery, about such a famous and controversial Indian leader, not received much attention from historians?
I will be grateful for your replies.
Update I went through the preview of Dhar's book given on Amazon. It claims India's government is responsible for intentionally sabotaging its own inquiry into Bose's disappearance. Dhar himself is fighting a judicial battle in the Delhi High Court over the government's refusal to show some documents related to Bose's disappearance.
Update 2 In case anyone is interested about the latest news on the disappearance story, there is a story about a monk dying in 1985 in Faizabad in India. Many people (including three journalists) had claimed that the monk was Bose in disguise. The High Court of Uttar Pradesh has just ordered the government to conduct an inquiry into this incident.
Should be Right to Information, not education.
Sorry, it was a mistake. I have now corrected it. Thank you for pointing it out to me!
Perhaps because it sells more books if there's uncertainty?
It's difficult to believe that, considering that 1) if Bose had escaped, he would most probably have been incarcerated by the Russians, and 2) Bose was among the youngest leaders of India - he was only 48 years old in 1945. This makes the situation tragic, almost similar to that of Raoul Wallenberg's disappearance.
@user571376: With all due respect, Wallenberg was saving people from the Nazis, whereas Bose was actively colluding with the Nazis, so the similarity should not be carried too far.
@FelixGoldberg I understand that people's perspectives differ. By showing how an independent India could run a government (even if provisional) and an army without being bogged down by religious, linguistic or caste divisions, Bose made lasting contributions to the birth of India.
@FelixGoldberg The Indian Army's marching anthem is derived from that of the INA. As for collusion with the Nazis, to the millions of Indians who were starving to death (see Bengal famine of 1943) in Bengal due to sheer indifference, British rule was intolerable, even at the cost of collusion with the Nazis. Bose recognized this, and never showed any sympathy for the Nazis' racial bigotry. In fact, Bose had a few Jewish friends in Austria, and had expressed sympathy for their plight.
The event (the alleged death) took place towards the end of World War II, just after the surrender of Japan. There was a lot of general confusion and fog of war. And this was a person known for disguises and misdirection.
It would be difficult to get proper documentation or find reliable witnesses. This could very well explain the uncertainty regarding his death.
Thank you for your reply. I agree that Bose was known for disguises and misdirection, and this might have caused confusion among people.
But this does not explain why the matter has got so little attention. There are at least three reasons why the matter should be important to historians:
Proving Bose had escaped would show that Japan was serious about helping him.
A senior general of Japan, Tsunamasa Shidei, was also said to have died in the same accident.
Bose was acknowledged by almost all Indians to be a great leader, and India's history changed permanently with his disappearance.
There is a 2010 review of the subject in History Today. In the end, the mystery remains. My own conclusions after reading the article are:
The plane crash version does indeed feel fishy.
The answer is likely to be in the Russian archives - which means it will not be forthcoming for a long time, I am afraid.
As an Indian, I had always heard Bose being referred to as a great leader. The release of two diametrically opposite narratives about Bose's death had attracted my interest six months ago. I read both of them, and found that Dhar has sued the Government of India to reveal what information it has obtained on this from the Russians. India's government claims that doing so would harm friendly relations between the two countries. The judicial battle continues. Interestingly, the International Raoul Wallenberg Foundation has agreed to help in case they stumble on something in Russia.
The current Indian government has appointed another Judicial Commission to probe into the identity of a person living in Faizabad, India as a holy man.
It has been strongly suspected by many and believed by several researchers that this person was Subhash Bose in disguise. The Commission, under Justice Vishnu Sahay, is expected to submit its report in 2017. There have been reports of British and French intelligence in the public domain, and a report by the Soviet agent in Bombay, V.I. Sayadyants, which make explicit references to Subhash Bose being active after August 1945.
Please add some source references.
Maybe India would have been a lot different under Subhas Chandra Bose than under the Gandhi family. I personally feel the Russians captured him and he died in Siberia. It was because of Nehru - but why did Stalin keep him in prison, and when did he die? I wonder how this one family in India became so powerful and ruled illiterate Indians for so long, and will continue to do so. Atal Behari Vajpayee should have helped clear all these doubts in his short 4.5 years of government.
By now, I have studied Dhar's book, and I appreciate what he says. He says that not only did the Vajpayee government not help the inquiry, it actually actively harmed it. Repeated requests to show the Home Ministry's documents were denied (they were shown to the previous inquiries), permission to go to Taiwan and Russia was given only after an extraordinary amount of delay, and the embassy officials in Japan were very reluctant to help. The inquiry itself was formed by an order of Kolkata High Court, which found the Government of India's silence on this matter suspicious.
Do you have any sources?
Can you please provide facts to support your answer?
|
STACK_EXCHANGE
|
Building a PubNub-Powered Chat App with Sencha Touch 2
Sencha Touch 2 is a new framework for building super slick mobile applications. I decided to put my mobile dev hat on and experience it for myself. And of course, I wanted to see how nicely it played with PubNub.
Turns out, it delivered. Sencha is not only intuitive, it’s a blast. There were certainly a few “gotcha”s, but I found the experience to be extremely pleasant. Here’s what I learned. First step is installation. This was actually not the best thing in the world. It wasn’t abundantly clear what files / folders needed to exist in what places.
Picking up a new web framework
Furthermore, the documentation and “getting started” documents on the website were a completely distinct set from the ones that come when you unpack the download. There’s a Sencha Touch 2 download, but there’s also an SDK Tools download. Additionally, a full working demo can be found here. I eventually figured out that the latter was the easiest route. The magic command was this:
sencha generate app example-app ../example-app
That started a new project with all the right files and folders in the right places. From there, I was off to the races.
app.js and app.json: the heart of your Sencha Touch 2 app
Within the bottom toolbar, I added three interactive elements: a text field for name entry, a text field for the chat message itself, and a submit button. Here’s how they’re defined.
This is pretty straight forward. I give them names, ids, placeholder text, etc. Layout-wise, again I’m using the `flex` element to define how much space they take up in relation to each other. The most important part here is that `listeners` field. This is where I tell the chat field and submit button to listen for “return” and “tap” events, respectively, and point them at a callback function called sendMessage. That callback is where PubNub comes into play.
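A rough reconstruction of what those three items can look like in Sencha Touch 2 config form (the ids, placeholders, and flex values here are my assumptions, not the post's exact code):

```javascript
// Hypothetical reconstruction of the three toolbar items.
function sendMessage() { /* publishes the chat message via PubNub */ }

var toolbarItems = [
    {
        xtype: 'textfield',
        id: 'nameField',
        placeHolder: 'Name',
        flex: 1
    },
    {
        xtype: 'textfield',
        id: 'chatField',
        placeHolder: 'Type a message...',
        flex: 2,
        listeners: {
            // "action" fires when the user presses return
            action: function () { sendMessage(); }
        }
    },
    {
        xtype: 'button',
        id: 'sendButton',
        text: 'Send',
        listeners: {
            tap: function () { sendMessage(); }
        }
    }
];
```

Both listeners point at the same `sendMessage` callback, which is where PubNub comes in.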
First, let’s initialize PubNub using the init() function. You’ll first need to sign up for a PubNub account; once you sign up, you can get your unique PubNub keys in the PubNub Developer Portal. Once you have them, clone the GitHub repository and enter your unique PubNub keys in the PubNub initialization.
And here’s how we send messages: any time a user presses enter while typing, or taps submit, it triggers a `pubnub.publish` command and sends the chat message through the PubNub infrastructure.

That’s only half the battle, though. Now we need a `pubnub.subscribe` to listen for any chat messages – including our own. Once we get a message, we make sure it’s the right type of message (message.name == ‘chat_message’) and add it to the data store associated with the main chat view.

Notice that mess of scrollToBottom()? That is a workaround for an annoying gotcha: Sencha Touch does not appear to provide a callback for the getStore().add() function. What we want to do is scroll to the bottom of the view after the data is added, but because that .add() function is asynchronous, simply calling scrollToBottom() right after it doesn’t work. setTimeout() is certainly not a “best practice”, generally speaking, but that fix does seem to work for now.

Finally, on initial page load, let’s pull the last 10 messages from pubnub.history() and add them to the data view. That wraps it up.
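The flow described above can be sketched as plain functions (the channel name, message shape, and view wiring are assumptions for illustration; `pubnub` stands for the object returned by `PUBNUB.init()`):

```javascript
// Illustrative sketch of the publish/subscribe flow; the PubNub client and
// the chat view are passed in so each piece stays small and testable.
function makeChatMessage(user, text) {
    return { name: 'chat_message', user: user, text: text };
}

// Wired to the text field's "return" and the button's "tap" listeners.
function sendMessage(pubnub, user, text) {
    pubnub.publish({
        channel: 'chat',
        message: makeChatMessage(user, text)
    });
}

// Subscribe callback: keep only chat messages, append to the store, then
// scroll once the asynchronous add() has (probably) settled.
function onMessage(message, chatView) {
    if (message.name !== 'chat_message') {
        return false;
    }
    chatView.getStore().add(message);
    setTimeout(function () { chatView.scrollToBottom(); }, 100);
    return true;
}
```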
|
OPCFW_CODE
|
With FINIDEX, you can share investing ideas and create investing frameworks -- these are analyses that help you make investing decisions, while keeping your ultimate investing decisions private.
FINIDEX enables a learning feedback loop, whereby the interaction among a Community of Users improves the analysis. Users set their own permissioning levels for their sharing related choices.
The Confidence of Relationship is a measure of how confident a user can feel about associations and relational choices within an analysis. It is displayed next to each group item and represents an aggregated statistic, across all users, of how confident you can feel about including a group item for a given reference item.
FINIDEX allows sharing of investment frameworks, but not actual investment amounts. You get the full benefit of learning, but decision making remains PRIVATE. Users set their own permissioning levels. We offer comments, but comment writers must be permissioned beforehand by FINIDEX, so that they are knowledgeable, polite, helpful, and constructive, and write meaningful comments. You can set permissions to accept and/or display/share received comments.
We provide you with an analysis of news based on your Groups and other factors, including an assessment of sentiment (positive, negative, neutral - with actual scores). If you permission for sharing, then you can also observe another user's Group and their customized news analysis, giving you the ability to see news through another person's perspective for the same stock or theme.
We provide you with prices (non-US equity prices are end-of-day currently), earnings estimates (US only currently), and some fundamentals (US only currently). We provide benchmark indexes (US, other regions, by industry, etc.) for comparisons and portfolio functionality.
You can track an investment portfolio and include the portfolio items into your Equities Group, or into your Themes. This provides you with a link between your portfolio holdings and your Equities and Themes Groups. You can also share your portfolio page (not amounts/values, just positions and return percentages), depending on your permissioning choices.
Through the complete API, FINIDEX allows automated systems to participate just like a natural-person user. You choose, through your permission settings, whether to share / collaborate with automated systems.
We are in a 'beta' period. As we progress, our plans include enhancing the portfolio functionality, adding several charting tools, expanding data coverage (regions), advanced analytics, etc.
When people use statistics to glean information from others (sometimes called the "crowd"), they are guessing at the meaning. At FINIDEX, we believe it is better to simply be collaborative when setting-up an analysis, rather than try to guess.
We are building a community of users who may choose to collaborate, or may choose to use the tools individually, or a mix of both.
So, for example, if you want to know which stocks to include in an investment theme (like "U.S. Economy Grows More than Expected in 2018"), you can use the framework "Themes". And, you can observe other users' group selections for that theme from those who have elected to share their frameworks.
Another example is which stocks to include in a peer group to determine relative value by using the framework "Equities". And, you can observe peer groups from other users who have shared their frameworks. Other frameworks are included, including "Strategies" (for example: Best Stocks to Own if Retiring in 5-Years), and more are planned.
In each case, we tell you how confident you can be when including an item into a group (for example, should a specific stock be included in the theme mentioned above).
We do this based on an aggregated, anonymous assessment of all users' frameworks. This is called the Confidence of Relationship (CoR).
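As a purely hypothetical illustration (FINIDEX does not publish the CoR formula, so the function below is invented), one simple aggregation of this kind would be the share of sharing users whose framework includes the group item for the given reference item:

```python
# Hypothetical sketch only; not FINIDEX's actual CoR computation.
# Each framework maps a reference item (e.g. a theme) to the set of
# group items (e.g. stocks) that user included for it.
def confidence_of_relationship(frameworks, reference_item, group_item):
    relevant = [f for f in frameworks if reference_item in f]
    if not relevant:
        return 0.0
    including = sum(1 for f in relevant if group_item in f[reference_item])
    return 100.0 * including / len(relevant)
```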
Importantly, your investing decision-making remains private, while benefiting from understanding the other users' frameworks.
FINIDEX also allows automated systems to participate, through a complete API, as a user on the platform, including posting comments, discussion items and setting-up a framework. Users can set their permissions to share/collaborate with such systems, or not, depending on their preferences.
We believe that if FINIDEX provides value for you, then you should be willing to pay a fee. We currently provide a 30-day free period (during the beta launch, it is 90-days), and thereafter charge the subscription fees described below.
Please use the form below to contact us. We will provide a timely response.
|
OPCFW_CODE
|
#! /usr/bin/env python
# -*- coding: utf-8 -*-

__author__ = 'dracarysX'

import unittest

from peewee import *
from peewee_rest_query import *


# define peewee models
class School(Model):
    id = IntegerField()
    name = CharField()


class Author(Model):
    id = IntegerField()
    name = CharField()
    school = ForeignKeyField(School)


class Book(Model):
    id = IntegerField()
    name = CharField()
    author = ForeignKeyField(Author)


class PeeweeOperatorTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.operator_v1 = PeeweeOperator('name', 'Python')
        cls.operator_v2 = PeeweeOperator('author.school.id', 1)
        cls.operator_v3 = PeeweeOperator('author.id', '10,20')

    def test_eq(self):
        node = self.operator_v1.eq()
        self.assertEqual(node.lhs, Book.name)
        self.assertEqual(node.op, '=')
        self.assertEqual(node.rhs, 'Python')

    def test_neq(self):
        node = self.operator_v1.neq()
        self.assertEqual(node.lhs, Book.name)
        self.assertEqual(node.op, '!=')
        self.assertEqual(node.rhs, 'Python')

    def test_gt(self):
        node = self.operator_v2.gt()
        self.assertEqual(node.lhs, School.id)
        self.assertEqual(node.op, '>')
        self.assertEqual(node.rhs, 1)

    def test_gte(self):
        node = self.operator_v2.gte()
        self.assertEqual(node.lhs, School.id)
        self.assertEqual(node.op, '>=')
        self.assertEqual(node.rhs, 1)

    def test_lt(self):
        node = self.operator_v2.lt()
        self.assertEqual(node.lhs, School.id)
        self.assertEqual(node.op, '<')
        self.assertEqual(node.rhs, 1)

    def test_lte(self):
        node = self.operator_v2.lte()
        self.assertEqual(node.lhs, School.id)
        self.assertEqual(node.op, '<=')
        self.assertEqual(node.rhs, 1)

    def test_like(self):
        node = self.operator_v1.like()
        self.assertEqual(node.lhs, Book.name)
        self.assertEqual(node.op, 'like')
        self.assertEqual(node.rhs, 'Python')

    def test_ilike(self):
        node = self.operator_v1.ilike()
        self.assertEqual(node.lhs, Book.name)
        self.assertEqual(node.op, 'ilike')
        self.assertEqual(node.rhs, 'Python')

    def test_in(self):
        node = self.operator_v3.iin()
        self.assertEqual(node.lhs, Author.id)
        self.assertEqual(node.op, 'in')
        self.assertListEqual(node.rhs, [10, 20])

    # def test_between(self):
    #     node = self.operator_v3.between()
    #     self.assertEqual(node.lhs, Author.id)
    #     self.assertIsInstance(node.rhs, Clause)


class PeeweeParamsParserTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        args = {
            'select': 'id,name,author{id,name,abc,school{id,name}}',
            'id': 'gt.10',
            'age': 'lte.25',
            'name.in': 'in.python, javascript',
            'order': 'id.desc',
            'page': 2,
            'limit': 5
        }
        cls.parser = PeeweeParamsParser(params_args=args, model=Book)

    def test_parse_select(self):
        self.assertEqual(len(self.parser.parse_select()), 6)
        self.assertIn(Book.id, self.parser.parse_select())
        self.assertIn(Book.name, self.parser.parse_select())
        self.assertIn(Author.id, self.parser.parse_select())
        self.assertIn(Author.name, self.parser.parse_select())
        self.assertIn(School.id, self.parser.parse_select())
        self.assertIn(School.name, self.parser.parse_select())

    def test_parse_where(self):
        pass
        # self.assertEqual(len(self.parser.parse_where()), 2)
        # self.assertIn(Expression(Book.id, '>', 10), self.parser.parse_where())
        # self.assertIn(Expression(Book.name, '<<', ['python, javascript']), self.parser.parse_where())

    def test_parse_order(self):
        self.assertEqual(self.parser.parse_order()[0]._ordering, 'DESC')

    def test_parse_paginate(self):
        self.assertTupleEqual(self.parser.parse_paginate(), (2, 5))


class PeeweeQueryBuilderTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        args = {
            'select': 'id,name,author{id,name,abc,school{id,name}}',
            'id': 'in.10,15',
            'age': 'lte.25',
            'name': 'ilike.python',
            'order': 'id.desc',
            'page': 2,
            'limit': 5
        }
        cls.builder = PeeweeQueryBuilder(Book, args)

    def test_build(self):
        query = self.builder.build()
        q = Book.select(
            Book.id, Book.name, Author.id, Author.name, School.id, School.name
        ).where(
            Book.id << [10, 15], Book.name ** 'python'
        ).order_by(
            Book.id.desc()
        ).join(
            Author, on=(Book.author == Author.id)
        ).join(
            School, on=(Author.school == School.id)
        ).paginate(2, 5)
        print(query.sql()[0])
        self.assertTrue(query.sql()[0].endswith(
            '''WHERE (("t1"."name" LIKE ?) AND ("t1"."id" IN (?, ?))) ORDER BY "t1"."id" DESC LIMIT 5 OFFSET 5'''
        ))


if __name__ == '__main__':
    unittest.main()
|
STACK_EDU
|
Cannot change log level
Description
This extension generates more than 100 lines of build output in my terminal, which makes it harder to find other output that I'm more interested in (e.g. sphinx warnings). I want to set the log level somehow, but can't figure out how. I have placed a jupyterlite_config.json in the same directory as conf.py and it's getting loaded:
[LiteBuildApp] Loaded config file: /opt/pydata-sphinx-theme/docs/jupyterlite_config.json
Here's the content of that file:
{
"LiteBuildConfig": {
"Application": {
"log_level": 40
}
}
}
Inspired by this I also tried
{
"LiteBuildConfig": {
"log_level": 40
}
}
but that gives a warning:
[LiteBuildApp] WARNING | Config option `log_level` not recognized by `LiteManager`.
Any guidance on how to actually suppress the terminal output of LiteBuildApp?
Reproduce
Sorry this is a complicated reproducer, I haven't had time to boil it down:
clone https://github.com/drammock/pydata-sphinx-theme/tree/silence-warnings
install nox
run nox -s docs
see the 100+ lines of LiteBuildApp output in the terminal that is not getting suppressed.
Expected behavior
the 100+ lines of output will be suppressed
Context
$ mamba list jupyterlite
# packages in environment at /opt/mambaforge/envs/pst:
#
# Name Version Build Channel
jupyterlite-core 0.1.3 pyhd8ed1ab_0 conda-forge
jupyterlite-sphinx 0.9.3 pyhd8ed1ab_0 conda-forge
$ uname -a
Linux agelaius 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Browser: N/A, this is about terminal output during build
I believe this was taken care of by #153
@steppi, would it be a good idea to add a three-level (or less) logger and configure it? I feel it can be taken up in #162
There's some prep work needed for that to actually help because the vast majority of noisy output comes upstream and can't be controlled with a logger.
Some of the output comes from doit within jupyterlite_core; the verbosity is hard-coded, and we'd need to get them to allow it to be configured. See https://github.com/jupyterlite/jupyterlite/issues/1351.
Another source of noisy output is unguarded print statements in jupyterlite_core/addons/ as described in https://github.com/jupyterlite/jupyterlite-sphinx/issues/149#issuecomment-2004635037. These would need to be changed to use a logger upstream.
If we actually want the log level to be configurable, this stuff needs to be changed upstream.
|
GITHUB_ARCHIVE
|
Table of Contents
If you're reading this, it most likely means that you're looking for a way to change your Windows password remotely, i.e. from a Remote Desktop connection (RDP protocol): this is a typical scenario for remote workers and system administrators who often have to access remote systems (such as Virtual Machines) through another Windows machine.
When such a situation arises, the standard CTRL + ALT + DEL key combo cannot be used, because it would be captured by the local OS (the one running on the PC we're using to access the remote environment), which would prompt its own change password screen: we would therefore end up changing the local Windows account password instead of the remote one.
Using CTRL + ALT + END
Luckily enough, there is another key combo that we can use to trigger the change password screen on the remote system: CTRL + ALT + END. This command is specifically meant to be the "three-finger salute" equivalent for remote desktop connections and can be safely used to remotely change password, because it won't be "intercepted" by the local OS in any way.
The END key is usually located close to the DEL / CANC key (that's arguably why it was chosen as the replacement hotkey).
Using the On-Screen Keyboard
If you don't want (or are unable) to use the CTRL + ALT + END key combo, you can still access the "change password" screen using the Windows On-Screen Keyboard. To activate it, click the Start menu, type "osk", and then click the On-Screen Keyboard icon that shows up.
Now we can press CTRL + ALT on the hardware keyboard and simultaneously click the third key (DEL / CANC) on the On-Screen Keyboard, thus completing the "three-finger salute" key combo on the remote PC.
What if it's already too late?
In the unfortunate event that the password expires before you can change it, the remote access tool will give you an error message like this when you connect:
An authentication error has occurred. The Local Security Authority cannot be contacted. This could be due to an expired password. Please update your password if it has expired. For assistance, contact your administrator or technical support.
In this case, all we can do is contact the System Administrator (or IT help-desk support) and request a password reset: once this is done, we'll be able to log back into the remote system and change the default password with a personal, secure one.
Why Windows doesn't warn me?
As you most likely already know, if your Active Directory (or local group policy) has been configured with expiring passwords, all users will receive a dedicated warning some days before the expiration date to remind them about changing their passwords before it's too late.
However, these warnings will only be shown when the user session is actually opened - i.e., when the user performs the login process.
To put it in other words, you need to "open" your user session to receive that warning: if you connect back to an existing session, you won't receive such notice.
Unfortunately, when using Remote Desktop, most users don't log out or disconnect: they just shut down the RDP client and then re-open it to reconnect whenever they need to. When they do this, the same AD user session is kept open and "recycled" over and over (the remote login process "reconnects" to it instead of opening a new session); for that very reason, the system never gets the chance to properly warn them. Such a scenario doesn't occur when those users physically work on their device, because their user session will also end whenever they perform a reboot, power off, or other maintenance activities that frequently occur during daily work at the office, yet are often avoided when using RDP.
Anyway, the only possible "workaround" for this issue is to force the users to close their user session before closing the RDP client: this can be easily done using the Disconnect command available from the Windows Start menu.
That's it: we hope that this post will help many Windows users who are looking for a way to remotely change their password through Remote Desktop connection (RDP).
|
OPCFW_CODE
|
- Read the assignment carefully, including what files to include.
- Don’t assume limitations unless they are explicitly stated.
- Treat provided examples as just that: examples, not an exhaustive list of cases that should work.
- When in doubt regarding what needs to be done, ask. Another option is to test it in a real UNIX operating system: does it behave the same way?
- TEST your solutions, make sure they work. It’s obvious when you didn’t test the code.
Since we had some issues on homework 1, here are some of the things we know we will test. These are not the only things we will test, so make sure to test your program thoroughly and thoughtfully.
Total: 100 points
- 10: Can’t specify number of lines when input is from a pipe
- 10: No exit() at the end of hello.c
- 10: Does not handle long lines (more than 512 characters)
- 20: tail does not allow specifying number of lines
- 50: tail does not work
- 10: “cat README | tail” does not work
- 40: “tail README” does not work
- 10: Debug printf left in code
In this assignment, you’ll start getting familiar with xv6 by writing a couple simple programs that run in the xv6 OS.
As a prerequisite, make sure that you have followed the install instructions from NYU classes to get your build environment set up.
A common theme of the homework assignments is that we’ll start off with xv6, and then add something or modify it in some way. This assignment is no exception. Start by getting a copy of xv6 using git (commands typed at the terminal, and their output, will be shown using a monospace font; the commands type will be indicated by a $):
$ git clone https://github.com/moyix/xv6-public.git
Cloning into 'xv6-public'...
remote: Counting objects: 4475, done.
remote: Compressing objects: 100% (2679/2679), done.
remote: Total 4475 (delta 1792), reused 4475 (delta 1792), pack-reused 0
Receiving objects: 100% (4475/4475), 11.66 MiB | 954.00 KiB/s, done.
Resolving deltas: 100% (1792/1792), done.
Checking connectivity... done.
Make sure you can build and run xv6. To build the OS, use cd to change to the xv6 directory, and then run make to compile xv6:
$ cd xv6-public
$ make
Then, to run it inside of QEMU, you can do:
$ make qemu
QEMU should appear and show the xv6 command prompt, where you can run programs inside xv6. It will look something like:
You can play around with running commands such as ls, cat, etc. by typing them into the QEMU window; for example, this is what it looks like when you run ls in xv6:
Write a program for xv6 that, when run, prints “Hello world” to the xv6 console. This can be broken up into a few steps:
- Create a file in the xv6 directory named hello.c
- Put code you need to implement printing “Hello world” into hello.c
- Edit the file Makefile, find the section UPROGS (which contains a list of programs to be built), and add a line to tell it to build your Hello World program.
- Run make to build xv6, including your new program (repeat steps 2 and 4 until your code compiles)
- Run make qemu to launch xv6, and then type hello in the QEMU window. You should see “Hello world” be printed out.
Of course step 2 is where the bulk of the work lies. You will find that many things are subtly different from the programming environments you’ve used before; for example, the printf function takes an extra argument that specifies where it should print to. This is because you’re writing programs for a new operating system, and it doesn’t have to follow the conventions of anything you’ve used before. To get a feel for how programs look in xv6, and how various APIs should be called, you can look at the source code for other utilities: echo.c, cat.c, wc.c, ls.c.
- In places where something asks for a file descriptor, you can use either an actual file descriptor (i.e., the return value of the open function), or one of the standard I/O descriptors: 0 is “standard input”, 1 is “standard output”, and 2 is “standard error”. Writing to either 1 or 2 will result in something being printed to the screen.
- The standard header files used by xv6 programs are “types.h” (to define some standard data types) and “user.h” (to declare some common functions). You can look at these files to see what code they contain and what functions they define.
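Putting these hints together, a minimal hello.c might look like the following sketch (note the file-descriptor argument to printf, and the exit() call at the end that the grading rubric expects; this compiles only inside the xv6 tree, since it uses xv6's own headers):

```c
#include "types.h"
#include "user.h"

int main(int argc, char *argv[])
{
  printf(1, "Hello world\n");   // 1 = standard output
  exit();                       // xv6 programs end with exit(), not return
}
```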
I do not have strong preferences as to how you create source code. I personally prefer a traditional text editor that can be run at the command line, such as pico, although vim and emacs are great as well and there are plenty of alternatives out there. On OS X, some may prefer XCode, others something like TextMate or Sublime Text. In the Linux VM I have provided, pico works fine. As long as you get a plain text file out of it with valid C syntax, you can use whatever you like.
How you compile the code is another matter. The xv6 OS is set up to be built using make, which uses the rules defined in Makefile to compile the various pieces of xv6, and to allow you to run the code. The simplest way to build and run it is to use this system. Trying to coerce an IDE such as XCode into building xv6 is far more trouble than it’s worth.
Write a program that prints the last 10 lines of its input. If a filename is provided on the command line (i.e., tail FILE) then tail should open it, read and print the last 10 lines, and then close it. If no filename is provided, tail should read from standard input.
$ tail README
To build xv6 on an x86 ELF machine (like Linux or FreeBSD), run "make".
On non-x86 or non-ELF machines (like OS X, even on x86), you will need to
install a cross-compiler gcc suite capable of producing x86 ELF binaries.
Then run "make TOOLPREFIX=i386-jos-elf-".
To run xv6, install the QEMU PC simulators.
To run in QEMU, run "make qemu".
To create a typeset version of the code, run "make xv6.pdf". This requires
the "mpage" utility.
You should also be able to invoke it without a file, and have it read from standard input. For example, you can use a pipe to direct the output of another xv6 command into tail:
$ grep the README | tail
Version 6 (v6). xv6 loosely follows the structure and style of v6,
xv6 borrows code from the following sources:
JOS (asm.h, elf.h, mmu.h, bootasm.S, ide.c, console.c, and others)
Plan 9 (entryother.S, mp.h, mp.c, lapic.c)
In addition, we are grateful for the bug reports and patches contributed by
The code in the files that constitute xv6 is
To run xv6, install the QEMU PC simulators.
To run in QEMU, run "make qemu".
To create a typeset version of the code, run "make xv6.pdf". This requires
the "mpage" utility.
The above command searches for all instances of the word the in the file README, and then prints the last 10 matching lines.
- Many aspects of this are similar to the wc program: both can read from standard input if no arguments are passed or read from a file if one is given on the command line. Reading its code will help you if you get stuck.
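One common way to keep "the last 10 lines" is a small circular buffer. Here is a sketch in standard C (the names, and the 512-byte line size from the grading rubric, are illustrative; inside xv6 you would use the string helpers from ulib.c rather than string.h):

```c
#include <string.h>

#define MAXLINES 10
#define MAXLEN   512

/* Circular buffer of the last MAXLINES lines, indexed modulo MAXLINES. */
static char lines[MAXLINES][MAXLEN];
static int count = 0;

/* Remember one line, overwriting the oldest once the buffer is full. */
void remember(const char *line)
{
    strncpy(lines[count % MAXLINES], line, MAXLEN - 1);
    lines[count % MAXLINES][MAXLEN - 1] = '\0';
    count++;
}

/* Copy the remembered lines, oldest first, into out; returns how many. */
int last_lines(char out[][MAXLEN])
{
    int n = (count < MAXLINES) ? count : MAXLINES;
    int start = count - n;
    for (int i = 0; i < n; i++)
        strcpy(out[i], lines[(start + i) % MAXLINES]);
    return n;
}
```

The modulo indexing means the buffer never grows; lines older than the last MAXLINES are simply overwritten.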
The traditional UNIX tail utility can print out a configurable number of lines from the end of a file. Implement this behavior in your version of tail. The number of lines to be printed should be specified via a command line argument as tail -NUM FILE, for example tail -2 README to print the last 2 lines of the file README. The expected output of that command is:
$ tail -2 README
To create a typeset version of the code, run "make xv6.pdf". This requires
the "mpage" utility.
If the number of lines is not given (i.e., if the first argument does not start with -), the number of lines to be printed should default to 10 as in the previous part.
- You can convert a string to an integer with the atoi function.
- You may want to use pointer arithmetic (discussed in class in Lecture 2) to get a string suitable for passing to atoi.
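Putting those two hints together, the argument handling might look like this sketch (written here as a standalone standard-C function for illustration; the function name is mine, and in xv6 you would use the atoi declared in user.h):

```c
#include <stdlib.h>

/* Parse an optional "-NUM" first argument. Returns the line count
   (default 10) and sets *file_idx to the index of the filename
   argument, or argc if no filename was given. */
int parse_count(int argc, char *argv[], int *file_idx)
{
    int n = 10;                  /* default number of lines */
    int i = 1;
    if (argc > 1 && argv[1][0] == '-') {
        n = atoi(argv[1] + 1);   /* pointer arithmetic skips the '-' */
        i = 2;
    }
    *file_idx = (i < argc) ? i : argc;
    return n;
}
```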
|
OPCFW_CODE
|
Before you begin
- Go to .
Click the data class to configure its properties.
Enable/Disable
Select Enable to include this data class in the next data classification operation. Select Disable to not include this data class in the next data classification operation. Note: Even though a data class is disabled, it may still be manually assigned and unassigned from an imported object.

Name
Type in the name of the data class.

Description
Type in a description.

Classification groups
Select one or more classification groups for this data class.
You can classify the data of a model by group.
Select a glossary term to associate it to any object classified with this data class.
The term provides information such as the name and description, when tracing the semantic definition from any object associated with the data class.
You can also obtain the list of all the objects associated with this data class when tracing the semantic usage from the term.
Select a sensitivity label to assign it to any object classified with this data class.
The sensitivity label assignment can control the display of data profiling and sampling information on the object pages. By default, you can see the information when you have an assigned object role with the data viewing capability. If a sensitivity label with the Hide Data option enabled is assigned, you cannot see the information as a data viewer.
Auto learning
Enabling this option allows the data class to be auto-populated with a pattern based on existing imported objects.

Matching threshold (%)
Enter a value to specify the minimum percentage of values matching any of the enumeration values, patterns or regular expression among all values (of that field/column).

Uniqueness threshold
Enter a value to specify the minimum number of unique values among all values (of that field/column) to require enough diversity for the dataset.
By default, the value is set to 6 on patterns and regular expressions.
By default, the value is set to 1 on enumerations and limited to the maximum number of possible values in the enumeration list. If the number of the possible values is less than the one specified in the Uniqueness threshold field, Talend Data Catalog still uses the maximum number of possible values as the value for the Uniqueness threshold field.Note:
If you use an "International" enumeration data class including values in different languages and you have a column that uses one or more values of this data class in only one language, Talend Data Catalog will match it with confidence less than 100% because of other languages.
It is recommended to use "International" data classes only when you have multilingual columns. Otherwise, you should define a data class for each language used and group them in an "International" compound data class.
Data pattern
Select the type:
- Enumeration: list of valid values for the data of that data class.
- Pattern: patterns for the data of that data class.
- Regular Expression: expression syntax that the data should conform to for that data class.
Enter the list of possible values, the data patterns or the regular expression.
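As an illustration only (this is a sketch, not Talend Data Catalog's implementation), a regular-expression data class could be matched against a column's values using the two thresholds like this; the 80% default for the matching threshold is an assumption:

```python
import re

# Sketch: a column matches a regex data class when enough of its values
# match the pattern (matching threshold, in percent) and enough values
# are distinct (uniqueness threshold; 6 is the documented default for
# patterns and regular expressions).
def column_matches(values, pattern, matching_threshold=80.0, uniqueness_threshold=6):
    if not values:
        return False
    matched = sum(1 for v in values if re.fullmatch(pattern, v))
    matching_pct = 100.0 * matched / len(values)
    return matching_pct >= matching_threshold and len(set(values)) >= uniqueness_threshold
```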
- Save your changes.
|
OPCFW_CODE
|
Last Friday I had the pleasure of speaking with Alexey Georgev on DataTalks.club about Dataset creation. I am sharing my data creation process here, so you don’t need to watch me talk for an hour.
My cookbook for data labeling
- define success in business terms
- map the data with stakeholders for buy in
- rapid prototype from data to deployment
- iterate on dataset creation
Techniques to get more data bang for buck:
- weak supervision
- active learning
Considerations around dataset creation:
- in-house vs crowdsourcing
My cookbook for data labeling
Defining success in business terms
A good dataset is one that creates business value. Making sure stakeholders, domain experts, and engineers are on the same page is hard. I find that sharing a mock of the model and data can really help stakeholders understand what’s possible. Sharing initial annotations with stakeholders can give quick feedback on conceptual errors and reduce the risk of overhyping the project.
Map the data with stakeholders for buy in
I show the data to stakeholders as a user interview. I am looking to see how usable the data is and to strengthen my conceptual understanding of the business problem. It’s a successful interview if I come away knowing more about how experts understand the problem. For example, for Comtura to model sales processes, whenever I heard the importance of qualification I would link that with qualification methodologies.
I like having 2 mindmaps: one with the expert’s conceptual view, and one with how these expert concepts can be modelled. For example, maybe I will model sales qualification as a span classification problem where some spans are related to sales qualification; maybe instead I model it as a paragraph-level document classification problem.
Rapid prototype from data to deployment
Perfect is the enemy of done. As a data scientist there are so many interesting ways a model could be improved, but if you think your model is feasible, there’s no need for further improvements until it hits production. Speeding up the data -> model -> business value flow is essential; otherwise, how do you know if your data is creating business value?
Iterate on dataset creation
If you feel you are on the right track based on your prototype, you can start refining your dataset creation process based on expert and user feedback. I like to have 3 steps in creating a dataset:
- create labelled data
- data review
- dataset creation feedback
Create labelled data
Annotators create the labelled data.
Data review
Review the created data qualitatively, with a supervisor checking output periodically, and quantitatively, with inter-annotator agreement and model metrics.
Dataset creation feedback
I like having an annotation booklet. The annotation booklet is a living document where I keep task definitions, ambiguous examples, and past discussions. We hold periodic feedback sessions where we discuss challenging samples and possible process improvements.
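To make the quantitative review concrete, here is a minimal sketch of Cohen's kappa, a common inter-annotator agreement measure (the label names below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same samples."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of samples where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labelling the same 6 samples
a = ["qualified", "qualified", "other", "other", "qualified", "other"]
b = ["qualified", "other", "other", "other", "qualified", "other"]
print(round(cohens_kappa(a, b), 2))  # -> 0.67
```

Values near 1 mean strong agreement; values near 0 mean the annotators agree no more than chance, which is usually a sign the task definition in the annotation booklet needs work.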
Techniques to get more data bang for buck
Weak supervision is a way to generate weak labels for unlabelled data programmatically, and it has the most potential to improve your dataset creation process. Labelling functions generate weak labels from unlabelled data. A single labelling function may only have good signal for 2-3% of your data, but combined, many such functions cover a wide slice of the dataset and can improve performance a lot. Labelling functions can also be easily explained to domain experts, allowing the experts to improve your labelling functions.
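As a sketch of the idea (Snorkel popularised this pattern; the labelling functions and keywords below are invented for illustration, not from a real project):

```python
# Hypothetical labelling functions for a sales-qualification classifier.
# Each returns 1 (qualified), 0 (not qualified), or None (abstain).
def lf_mentions_budget(text):
    return 1 if "budget" in text.lower() else None

def lf_mentions_timeline(text):
    return 1 if "timeline" in text.lower() else None

def lf_small_talk(text):
    return 0 if "weather" in text.lower() else None

LFS = [lf_mentions_budget, lf_mentions_timeline, lf_small_talk]

def weak_label(text):
    """Majority vote over the labelling functions that did not abstain."""
    votes = [v for v in (lf(text) for lf in LFS) if v is not None]
    if not votes:
        return None  # every function abstained; leave the sample unlabelled
    return max(set(votes), key=votes.count)

print(weak_label("We have budget approved and a Q3 timeline."))  # 1
print(weak_label("Nice weather today."))                         # 0
print(weak_label("Let's reschedule."))                           # None
```

Each function is trivially explainable to a domain expert ("does the call mention budget?"), which is exactly what makes this loop easy to iterate on.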
Active learning is a form of model-in-the-loop annotation where low-confidence samples are selected for annotation, improving the model where it is least certain. I have used Prodigy in the past for this. It worked sometimes, sometimes not.
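A minimal sketch of the selection step, using least-confidence sampling (the probabilities below are made up, and Prodigy's actual strategy may differ):

```python
def least_confident(probabilities, k=2):
    """Return indices of the k samples whose top class probability is lowest."""
    confidences = [max(p) for p in probabilities]
    ranked = sorted(range(len(probabilities)), key=lambda i: confidences[i])
    return ranked[:k]

# Model confidence for 4 unlabelled samples (binary classification)
probs = [[0.95, 0.05], [0.55, 0.45], [0.51, 0.49], [0.80, 0.20]]
print(least_confident(probs))  # -> [2, 1]
```

The selected samples go to the annotators first, so each labelling hour is spent where the model is most confused.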
In-house vs crowdsourcing
For a quick prototype, or when quality is less important, crowdsourcing works well as you can scale your tasks to a large pool of annotators. I prefer in-house annotation though, as it allows you to build a relationship with annotators. You can retain the best-performing annotators and iterate on your annotation process far more easily with in-house annotation. In my experience the long-term cost per sample also favours in-house annotators, as crowdsourcing requires 2-3x more levels of review or annotation due to low inter-annotator agreement.
Paid: Prodigy is a great annotation tool with active learning support, from the creators of spaCy.
|
OPCFW_CODE
|
Windows security is a major concern for network administrators. You can enhance security using techniques such as auditing, authorization and security configuration, and you should keep monitoring the performance of your system over time; this is good practice for keeping the system safe from malware and viruses. Windows SharePoint provides services for new requirements. When you install Windows Server and configure the Microsoft Distributed Transaction Coordinator (MSDTC), you should also create a group with an IP address and a name on the network, and keep that IP address and network name unique. Do not change the privileges that are configured by default. For security, delete the default administrator account and create a new account of your own that has administrator privileges.
For Windows security, you should create strong passwords to protect your system. A password should be long and complex: it should contain letters, symbols and numbers, and it should be at least 8 characters long. Use different passwords for different things, and change your passwords from time to time. Aim for a wide variety of characters while still choosing something you can remember; if necessary, write the password down somewhere safe so you do not forget it. One way to build a password is to start with one or more sentences and remove the spaces between the words; you can also add some numbers to increase the length. Check your password with a password checker. Avoid dictionary words, avoid abbreviations, and never include personal information in your password. Using these techniques you can secure your Windows installation.
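As an illustration of the rules above, a minimal password checker might look like the following sketch (the rule set is just the one described in this article, not an official Windows policy):

```python
import string

def password_issues(password):
    """Return the list of rules from the article that the password fails."""
    issues = []
    if len(password) < 8:
        issues.append("shorter than 8 characters")
    if not any(c.isalpha() for c in password):
        issues.append("no letters")
    if not any(c.isdigit() for c in password):
        issues.append("no numbers")
    if not any(c in string.punctuation for c in password):
        issues.append("no symbols")
    return issues

print(password_issues("abc"))          # lists every failed rule
print(password_issues("Tr0ub4dor&3"))  # -> [] (passes all rules)
```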
Windows security also means protecting the PC from malware, viruses, spyware and other software that may harm the computer. You can use Microsoft Security Essentials for this: it is simple to install and free to use, and it runs in the background without interrupting your work, although you need a genuine version of Windows to install it. Microsoft Security Essentials is also available to small businesses with up to 10 PCs.
Tips and comments
There is a class of threat known as Trojan horses. A Trojan enters your PC undetected and can transfer your most confidential information in the background. It is not easily caught by a normal computer scan, because such scans focus on viruses, and a Trojan is not a virus. For this purpose you should use a dedicated Trojan scanner, often called a "Trojan remover" or "anti-Trojan" tool. You should also run tests such as an endpoint scan, an email security test, an event log scan and a cross-site scripting scan.
|
OPCFW_CODE
|
#!/bin/bash
set -o nounset
set -o errexit
set -o pipefail
function run_command() {
local CMD="$1"
echo "Running Command: ${CMD}"
eval "${CMD}"
}
function vnet_check() {
local rg="$1" try=0 retries=15 vnet_list_log="$2"
az network vnet list -g ${rg} >"${vnet_list_log}"
while [ X"$(cat "${vnet_list_log}" | jq -r ".[].id" | awk -F"/" '{print $NF}')" == X"" ] && [ $try -lt $retries ]; do
echo "Did not find vnet yet, waiting..."
sleep 30
try=$(expr $try + 1)
az network vnet list -g ${rg} >"${vnet_list_log}"
done
if [ X"$try" == X"$retries" ]; then
echo "!!!!!!!!!!"
echo "Something went wrong"
run_command "az network vnet list -g ${rg} -o table"
return 4
fi
return 0
}
function create_disconnected_network() {
local nsg rg="$1" subnet_nsgs="$2"
for nsg in $subnet_nsgs; do
run_command "az network nsg rule create -g ${rg} --nsg-name '${nsg}' -n 'DenyInternet' --priority 1010 --access Deny --source-port-ranges '*' --source-address-prefixes 'VirtualNetwork' --destination-address-prefixes 'Internet' --destination-port-ranges '*' --direction Outbound"
if [[ "${CLUSTER_TYPE}" != "azurestack" ]]; then
run_command "az network nsg rule create -g ${rg} --nsg-name '${nsg}' -n 'AllowAzureCloud' --priority 1009 --access Allow --source-port-ranges '*' --source-address-prefixes 'VirtualNetwork' --destination-address-prefixes 'AzureCloud' --destination-port-ranges '*' --direction Outbound"
fi
done
return 0
}
# az should already be there
command -v az
az --version
# set the parameters we'll need as env vars
AZURE_AUTH_LOCATION="${CLUSTER_PROFILE_DIR}/osServicePrincipal.json"
AZURE_AUTH_CLIENT_ID="$(<"${AZURE_AUTH_LOCATION}" jq -r .clientId)"
AZURE_AUTH_CLIENT_SECRET="$(<"${AZURE_AUTH_LOCATION}" jq -r .clientSecret)"
AZURE_AUTH_TENANT_ID="$(<"${AZURE_AUTH_LOCATION}" jq -r .tenantId)"
# log in with az
if [[ "${CLUSTER_TYPE}" == "azuremag" ]]; then
az cloud set --name AzureUSGovernment
elif [[ "${CLUSTER_TYPE}" == "azurestack" ]]; then
if [ ! -f "${CLUSTER_PROFILE_DIR}/cloud_name" ]; then
echo "Unable to get specific ASH cloud name!"
exit 1
fi
cloud_name=$(< "${CLUSTER_PROFILE_DIR}/cloud_name")
AZURESTACK_ENDPOINT=$(cat "${SHARED_DIR}"/AZURESTACK_ENDPOINT)
SUFFIX_ENDPOINT=$(cat "${SHARED_DIR}"/SUFFIX_ENDPOINT)
if [[ -f "${CLUSTER_PROFILE_DIR}/ca.pem" ]]; then
cp "${CLUSTER_PROFILE_DIR}/ca.pem" /tmp/ca.pem
cat /usr/lib64/az/lib/python*/site-packages/certifi/cacert.pem >> /tmp/ca.pem
export REQUESTS_CA_BUNDLE=/tmp/ca.pem
fi
az cloud register \
-n ${cloud_name} \
--endpoint-resource-manager "${AZURESTACK_ENDPOINT}" \
--suffix-storage-endpoint "${SUFFIX_ENDPOINT}"
az cloud set --name ${cloud_name}
az cloud update --profile 2019-03-01-hybrid
else
az cloud set --name AzureCloud
fi
az login --service-principal -u "${AZURE_AUTH_CLIENT_ID}" -p "${AZURE_AUTH_CLIENT_SECRET}" --tenant "${AZURE_AUTH_TENANT_ID}" --output none
rg_file="${SHARED_DIR}/resourcegroup"
if [ -f "${rg_file}" ]; then
RESOURCE_GROUP=$(cat "${rg_file}")
else
echo "Did not find a provisioned empty resource group"
exit 1
fi
if ! run_command "az group show --name $RESOURCE_GROUP"; then
echo "The $RESOURCE_GROUP resource group does not exist"
exit 1
fi
VNET_BASE_NAME="${NAMESPACE}-${UNIQUE_HASH}"
# create vnet
if [[ "${CLUSTER_TYPE}" == "azurestack" ]]; then
arm_template_folder_name="azurestack"
else
arm_template_folder_name="azure"
fi
vnet_arm_template_file="/var/lib/openshift-install/upi/${arm_template_folder_name}/01_vnet.json"
run_command "az deployment group create --name ${VNET_BASE_NAME} -g ${RESOURCE_GROUP} --template-file '${vnet_arm_template_file}' --parameters baseName='${VNET_BASE_NAME}'"
# 'az network vnet list' sometimes returns an empty result right after creation, so save its output into a local file and retry
vnet_info_file=$(mktemp)
vnet_check "${RESOURCE_GROUP}" "${vnet_info_file}" || exit 3
vnet_name=$(cat "${vnet_info_file}" | jq -r ".[].id" | awk -F"/" '{print $NF}')
vnet_addressPrefixes=$(cat "${vnet_info_file}" | jq -r ".[].addressSpace.addressPrefixes[]")
#Copied subnets values from ARM templates
controlPlaneSubnet=$(cat "${vnet_info_file}" | jq -r ".[].subnets[].name" | grep "master-subnet")
computeSubnet=$(cat "${vnet_info_file}" | jq -r ".[].subnets[].name" | grep "worker-subnet")
#workaround for BZ#1822903
clusterSubnetSNG="${VNET_BASE_NAME}-nsg"
run_command "az network nsg rule create -g ${RESOURCE_GROUP} --nsg-name '${clusterSubnetSNG}' -n 'worker-allow' --priority 1000 --access Allow --source-port-ranges '*' --destination-port-ranges 80 443" || exit 3
#Add port 22 to debug easily and to gather bootstrap log
run_command "az network nsg rule create -g ${RESOURCE_GROUP} --nsg-name '${clusterSubnetSNG}' -n 'ssh-allow' --priority 1001 --access Allow --source-port-ranges '*' --destination-port-ranges 22" || exit 3
if [ X"${RESTRICTED_NETWORK}" == X"yes" ]; then
echo "Remove outbound internet access from the Network Security groups used for master and worker subnets"
create_disconnected_network "${RESOURCE_GROUP}" "${clusterSubnetSNG}"
fi
# save vnet information to ${SHARED_DIR} for later reference
cat > "${SHARED_DIR}/network_machinecidr.yaml" <<EOF
networking:
machineNetwork:
- cidr: "${vnet_addressPrefixes}"
EOF
cat > "${SHARED_DIR}/customer_vnet_subnets.yaml" <<EOF
platform:
azure:
networkResourceGroupName: ${RESOURCE_GROUP}
virtualNetwork: ${vnet_name}
controlPlaneSubnet: ${controlPlaneSubnet}
computeSubnet: ${computeSubnet}
EOF
|
STACK_EDU
|
View Full Version : 100 = I win
12-06-2001, 03:00 PM
Sorry, this was pointless. Nice to see that the Mojo forums are exploding with posts though :).
12-06-2001, 04:27 PM
so can you or can you not
spam on this fourm
Uutont Fær Uulion
12-06-2001, 04:48 PM
it says in the discription of this forum that general chit-chat is allowed as for the rest of us I have no clue.
12-06-2001, 05:44 PM
Heh, I should just delete this thread. It's pointless.
12-06-2001, 05:51 PM
But that's what's so good about it! I mean, if we didn't have pointless posts, everything would be... er... pointless.
12-06-2001, 05:52 PM
That's very profound, I think. I just felt like posting something quick so I'd have the 100th post in this category, so I did.
12-06-2001, 07:31 PM
If I posted a thread every 100 post I make, that would be an example of 'spamming'.
12-06-2001, 07:58 PM
something just struck me - what the blue bollocks to a barndance does message overflowing have to do with a gelatinous meat-like-yet-****-based tinned substance that would fail health and saftey regulations and i wouldn't even give to hitler?
12-06-2001, 08:22 PM
Because they are both ghetto.
12-06-2001, 08:27 PM
Not sure, but I think it has to do with the Monty Python sketch about spam.
12-07-2001, 03:35 AM
lol i rememeber that one - and the spam song
by the way metallus, that new Av is well tasty!
12-07-2001, 03:46 AM
Thanks, Lemon. I thought I might as well design something for myself that both advertised my Mixnmojo staff position and reflected my personal favorite adventure game ;).
12-07-2001, 04:08 AM
gaah!!! every time i think about the bad sales grim fandango got it gets my blood boiling!! :evil6: :evil6: :evil6:
12-07-2001, 04:09 AM
Bad sales? :(. It was the number 4 seller on Amazon.com for awhile.
12-07-2001, 04:12 AM
not nearly as good as it should've got i'm afraid, sorry to break it to you my man
12-07-2001, 04:20 AM
Well, ideally it should still be selling like hotcakes. That is to say, if hotcakes sell at a consistently high rate.
12-07-2001, 04:28 AM
it should indeed, unfortunately it didn't get the PR and advertising the MI4 got
02-23-2002, 09:37 PM
Is it just me, or is there something extremely erotic about Largo LaGrande in MI2?
Lucky this is on the bottum and no one will read it.
That whole thing with the wriggling beard in his pants did it for me. Also the bra, but not so much.
02-24-2002, 05:15 AM
Wow, what a kerazy avatar Jake! I love it.
02-24-2002, 06:43 AM
What you don't know is that the lower half of Max's body is missing, and that the upper half is actually stuck on a revolving pike, while giving out any blood that still remains within it, and will soon wither and start rotting....
Is that avater a series of screen grabs from the cartoon, Jake? Maybe I'm just going mad, but I seem to remember the big Max robot doing that, or something. :)
|
OPCFW_CODE
|
[Feature Request] Add conditional control statements to kakscript #2777
I think some more control statements should be added to kakscript.
In some plugins I have seen, there are shell expansions solely for the purpose of a conditional operation, which is a performance cost compared to native kakscript.
I'll admit, many things in kakscript won't need native conditional logic, since they'll be calling the shell anyway for some other purpose. That, and the performance impact is small enough (order of 1ms) that in most situations this is a non-issue. But for situations where conditional logic is all that is needed -- such as settings in plugins -- it seems like a waste to call the shell just for that.
Add bash-style conditional statements as kakscript commands:
@Delapouite in my (limited) experience, spawning a shell scope with dash has about 1ms of overhead. It's likely several times that with bash. Inside the shell scope, builtins are for the most part free, and process calls have their own overhead, with the unavoidable (but not negligible) process spawn. Even a basic
I think this would be acknowledging the failure of Kakoune's extension model, as we could not plausibly deny that Kakoune's command language is a scripting language... (It's already ambiguous, the try / catch construct allowing a limited form of flow control already).
It would also be pretty limited, as I can only see two useful things expanding to true or false:
In order to fix that, we would need to extend the command language further to introduce "expressions", as we currently only have statements. This would be a huge change in the extension model.
Out of curiosity, what do you use RawKey for, and in particular, can't the condition be encoded in the RawKey regex filter?
Thanks for the replies.
I don't know of a way to measure the performance impact, but from my observations it is in the order of 1ms, as @occivink said.
The application I have noticed this in is the kak-crosshairs plugin pull #11.
@mawww For an implementation idea, would it be better to compare the expansion result to the empty string
So, I understand now that making a new scripting language is one of Kakoune's non-goals. After all, we don't want to just end up with a bad clone of vimscript (which is already bad).
If this interpretation is correct, could it be added to design.asciidoc or faq.asciidoc?
IDK if it is the intended use, but I've sorta been using options like this: using logic in a shell expansion to determine a command at config time, and store it in an option to be evaluated later.
I like it: a simple addition. And it would be really helpful for conditional plugin and filetype configuration.
This still, however, has the question: how far should we develop kakscript as a language vs. supporting an alternative extension model vs. just saying "No, you can't make Kakoune do X"?
|
OPCFW_CODE
|
Microsoft confirms Windows printing problems after update
Microsoft is currently investigating printer problems reported by numerous users after Windows updates and is planning a corrective update to fix the error. According to Microsoft, Windows 10 client versions 20H2 to 21H2 and Windows Server version 20H2 are affected. In the support article, Microsoft also asks that affected users submit appropriate feedback via the Feedback Hub.
The printing errors are caused as soon as the preview update KB5014666 from June 28, 2022, or the security update KB5015807 from July 12, 2022 has been installed. The latter security update contains the fixes of the June 2022 preview update.
According to Microsoft’s support article, the problem is that printers appear twice in the device list, usually with a similar name and the suffix “Copy1”. Or applications that refer to an existing printer name can no longer print. In both cases, the print function is disrupted after the update installation. Uninstalling the updates brings the print function back in most cases.
As long as no correction update is available, Microsoft proposes various workarounds to be able to print again. This includes navigating to Printers & Scanners via Bluetooth & Devices in the Settings app and checking the printer configuration. If duplicates of a printer entry appear there, it is necessary to check whether a printer entry works. If the original printer entry can be used for printing, the second printer entry can be removed with the addition “Copy1” or something similar.
If that doesn’t help, Microsoft suggests removing the printer and adding it again. It also suggests trying to install the latest printer driver. The steps for the procedure are described in detail by Microsoft in the support article.
Unplug the USB connector and plug it back in
However, there are indications that the Microsoft support article does not describe the entire scope of the problems. After the July 2022 patch day, a number of administrators from the company environment contacted the author of the article and reported printing problems. In some cases, it was enough to unplug the USB printer port and plug it back in. Windows assigned or installed the correct printer driver.
Other users complained about errors when printing on Lexmark devices, which could be eliminated by switching off the device, unplugging it from the printer connection and reconnecting/switching it on, including updating the driver. Additional issues have been reported with HP laser printers going offline. There doesn’t seem to be a solution here yet.
In many cases the cause of the error was trivial: the option to use the USB port for printing was no longer checked in the printer properties. Manually re-enabling this option in the printer properties fixed the problem. There were also cases where all printer ports disappeared completely after installing the July 2022 update.
|
OPCFW_CODE
|
Running php code chunk in background
I have a php script which is responsible for reading some request parameters from my iPhone app. Once I do some manipulations to it I save them in db and will need to send some push notification message using apple APNS. So currently its done like this in the code.
<?php
$param1 = $_POST['param1'];
$param2 = $_POST['param2'];
//saving part here
//push notifications
$pushService = new PushService();
$pushService -> init();
$pushService -> push($param1, $param2);
//json response
echo json_encode(array($success, $dbsavedid));
?>
Problem occurs with the push part. Now it takes lot of time for this push notification code chunk to execute because the table has grown with lot of data. Hence the iPhone app waits too long for this to execute (to get the success response to iPhone).
Hence is there any way to make this push part asynchronous and send a response to iPhone side using the echo other than using a separate script for push notifications? Also note that I need to get some data from saved records as well to iPhone side. So I will need the output to reach the iPhone side.
How many records are we talking about in that table? If it's not millions and millions then you might have some serious schema design issues there.
What is the "long time"? Are we talking about tens of seconds, or is more than 500 msecs considered long for your needs?
This takes time cos it takes time to connect to the APNS and send notifications. There are around 100-1000 records and it takes like 5-10 mins now to get the response. It worked perfectly initially but as the records count increases it has decrease the speed. http://developer.apple.com/library/mac/#documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/ApplePushService/ApplePushService.html
5-10 mins???? I'd argue that running that code on the background is hiding the real issue: bad code/data structures. Sure, you can run the code on the background, and it'll work just fine, right up to the moment you'll put such a load on your server the whole thing just falls apart...
Its nothing complex out there. I just retrieve records from DB and call the APNS. For the APNS part it takes that delay.
The actual reasons for this is, my server provider has blocked port 2195 and port 2196 which is used by apple APNS. I believe once you allow it this will be fixed and should work like earlier.
You can force PHP to send a response by using the flush() function for example. (there might be other possibilities to accomplish too)
So what you have to do is write with echo to the output buffer when your db operations finished (these should be really fast if you have 100-1000 records) and right after call the flush() function. Your client should get a response right away.
Also see this link about flush() itself, because there might be other parameters of your enviroment which prevents your response in reaching the client side as soon as expected.
http://php.net/manual/en/function.flush.php
<?php
$param1 = $_POST['param1'];
$param2 = $_POST['param2'];
//saving part here
//json response
echo json_encode(array($success, $dbsavedid));
//response should be sent right away, no need for wait on the pushservice operations
flush();
//push notifications
$pushService = new PushService();
$pushService -> init();
$pushService -> push($param1, $param2);
?>
I did try that, using ob_flush() and flush() right after the saving part, with the JSON response echoed there as well. So it is supposed to print the output and then start the push notifications. But it didn't work for me. It is supposed to work for browsers but not for my scenario; that's why I specifically asked this question about the iPhone. Maybe this is because the iPhone maintains the ongoing interaction with the server and waits for the server to complete it.
Yepp, should be possible to configure it either on the client or the serverside to receive the response after flush happened. Are you using NSURLConnection or something else on the client side?
I am using ASIHttpRequest. This is an old app developed 3 years ago.
Just found this one they mention that if you use it in the "queue way" you could have a requestDidReceiveResponseHeadersSelector callback which might be the same as NSURLConnection's didReceiveResponse.
The link is: http://allseeing-i.com/ASIHTTPRequest/How-to-use
Did you try it with ASINetworkQueue also? (yeah I'm constantly editing my comments)
Thanks. Ill add that and see whether it works. Im wondering whether there are any solutions from php end to make the last portion in the background.
First solution is flushing, my other suggestion would be to refactor it to have 2 separate requests (one for the db operations and the other for the long running tasks which can be executed async) but you mentioned explicitly in your question that you dont want to do that so you gotta figure out how to do it with flush. (It is definitely possible!)
Yup. Separate request means again will have the same thing. Cos if its long running again only thing I can do is to make the delegate of ASIHttpRequest to nil so that we dont rely on the response. But it will be executed in background. Just didnt want to do that.
|
STACK_EXCHANGE
|
I finally beat it! These levels on the average are actually quite a bit harder than I initially perceived them to be.
My favourite level I think is the third MM level (I love those icons by the way, looks absolutely like something from the Windows 98 era). The binary walls were really a clever idea. I like how they were introduced, with a chunk of them surrounding a point tile so that you have to think "there's gotta be a way into there, what is it?" And then how they're used in the rest of the level, not requiring you to know their "secret" but as a somewhat hidden quality-of-life improvement, I thought that was kinda cool.
Ammo in this level was a bit of an issue. Every time I reattempted the level, I came in with less ammo than last time. And then when I finally made it to the machine at the end:
I was able to beat the level some other time, with a lot of enemy dodging and conserving every possible shot, and then I was able to make it to the machine with 1 shot left.
I did find the secret point item in this level, by the way.
I also thought the stylistic variation for the final level was cool. You can clearly tell right from the start, this is it, the core of the virus, the most corrupted part of the system. This level was devilishly difficult though.
In the seventh Mortrix level, I accidentally found an exploit that allows you to skip the entire red key portion of the level:
It takes some precision to pull off, and is highly random whether the Maltimer decides to move the 2 pixels necessary to bump you. But I thought this was kind of funny.
The Tacks are extremely overpowered. Their high health and random behavior makes it impractical to shoot at them, but then they're so unpredictable that it's very hard to evade them. They can jump so high, they shoot so frequently, the shots being spawned higher makes them harder to avoid, and they can also just push you away. Any time I've come up against them, if I survived it was almost always due to pure luck.
I think the concept of a shooting enemy that pushes you, and that you can stand on, is a neat idea, but these guys are so incredibly dangerous that I always avoided them whenever possible. I feel like they could be really cool to play around with if they weren't quite so deadly.
Strangely it took me awhile to figure out I could shoot the Suerins, because of their visual resemblance to the Butler Robots from Keen 1.
I have mixed feelings about the frequent use of the lightswitch in this mod. Some levels I wasn't even able to win until I discovered the location of one.
My main gripe with it though is that turning the lights off makes the graphics not look quite as nice. I want to be able to see the new graphics as they are intended to look, and it's not as satisfying to have to beat the levels without being able to see all their unique visual things (such as the Mortrix machines) in full detail.
I'm sure it's technically possible to beat it without messing with the lights, maybe there's some weakness to the Tacks that I missed, but I wasn't able to do so.
Throughout the mod, I did find a few of the hidden points. I know I at least got the ones from Keen's Documents, Mortrix #3, and I think another one, maybe it was Mortrix #1? Sometime I'll probably do a replay and try to find all of them.
whoo 100th post
|
OPCFW_CODE
|
Dynamic Data PHP error
I'm an inexperienced webmaster for my Boy Scout troop. I recently set up MAMP on my Mac and then tried to use a MySQL/PHP server for dynamic data. When testing locally, everything worked. When I uploaded everything to my web server (for which I only have FTP access) I get this message when I try to access the PHP page.
Fatal error: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2) in
Can anyone tell me what I'm doing wrong?
More info:
Mac Pro 1,1 running 10.7
Dreamweaver CS6
MySQL Server 5.3.3
http://stackoverflow.com/questions/5376427/cant-connect-to-local-mysql-server-through-socket-var-mysql-mysql-sock-38
Post the code you're using to connect to the MySQL server. Mask the user name & password before you post it!
Make sure that the username and password you are using are the same on your remote server. Additionally, make sure the MySQL user has permission to access the database.
Mysql is either not running on your server, or you don't have proper credentials. I'm more inclined to believe it's the former, since if you are denied access, it will usually tell you that.
Sounds like your MySQL is not running.
Ok. When I say inexperienced, I mean it. I probably didn't even set this up right. I have MAMP installed on my computer and everything works locally. It's only when I put it on the remote server that things go wrong. Do I need to install MySQL on the server?
By "web server" you mean a hosted commercial service, rather than your personal PC or Mac? It's false economy to host the latter way, because hackers will eat you for breakfast. Assuming you're on a "real" server, it's likely that somewhere you have a hard coded "localhost" instead a server name or IP address. As well, the user ID and password might be different. Ask your host how to connect to your MySQL server. You must have more than just FTP access (are you "borrowing" or "subletting" services from another customer?).
I solved this problem. My error was that I didn't have MAMP set up to allow connections from the outside. Google it, there are many good resources out there to help you set it up. After I did that and made sure root access from any IP was enabled, I was good to go. Thanks for everyones help.
IF you are sure that the server is UP and MySQL is running, the following may be true:
When MySQL states it is trying to connect through a local socket, this typically means you are using "localhost" as the address for the MySQL server in your PHP code.
Instead try using the IP Address of the MySQL server.
If this does not fix the problem, then make sure your mysql user that your code is using has permissions to access the MySQL server from 'localhost'.
Let me know if you have any questions about any of this.
Hi. Thanks for your help. How do you find the IP of the mysql server running on my home network?
Sorry, I actually have figured that all out and have port fowarding set up. My question now is why do I have this error message: access denied for user'root'@' ' (using password yes) in file path
|
STACK_EXCHANGE
|
After Bitcoin, Ethereum is the second-largest cryptocurrency as measured by market capitalization. The Ethereum network is a blockchain-based platform focused on programmable contracts (smart contracts) and decentralized programs (dApps). The associated cryptocurrency is called Ether (ETH). Ethereum is also often referred to as a "world computer". The term comes from the fact that Ethereum does not just store the state of currency ownership as Bitcoin does; Ethereum can track the state of any kind of arbitrary data and execute any code that can be put into binary format.
Ethereum (ETH) is a decentralized open-source platform based on blockchain technology. It allows any interested developer, individual or company to run and develop their own dApp or even a decentralized organization (DAO) using smart contracts. Ethereum has memory that stores both code and data, and it uses the Ethereum blockchain to track how this memory changes over time. Like a general-purpose stored-program computer, Ethereum can load code into its state machine and run that code, storing the resulting state changes in its blockchain.
To understand the basics of Ethereum, watch this Generation Blockchain introduction video. Click here to watch the Generation Blockchain video on the Basics of Ethereum.
Inventors of Ethereum
Vitalik Buterin is the initial inventor and co-founder of Ethereum. In the early stages of Bitcoin's development, Buterin ran a magazine that published on the topic of Bitcoin. Through this, he identified aspects that he saw as room for improvement. Vitalik Buterin then co-founded Ethereum with Anthony Di Iorio, Mihai Alisie and Charles Hoskinson. Buterin first explained the Ethereum blockchain in a white paper in 2013. Ethereum was intended to unleash the full potential of Bitcoin's underlying technology, combining radical openness with radical privacy. Buterin wanted to create a platform that both supports mining and lets developers build their own software applications.
Ether Currency Units
The cryptocurrency of the Ethereum blockchain is called Ether. Ether is a means of payment for every transaction or creation of smart contracts, as well as the use of various services on the Ethereum platform. All changes made to the world state of Ethereum cost Ether.
To interact with the Ethereum blockchain, one needs to buy Ether. Ether is the fuel of the entire platform. Ether is also distributed as a reward to Ethereum stakers and validators. Ether is subdivided into smaller units, down to the smallest possible unit, named wei: 1 ether is 1 quintillion (10^18) wei.
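The unit relationship above can be captured in a tiny Python sketch:

```python
WEI_PER_ETHER = 10**18  # 1 ether = 1 quintillion wei

def ether_to_wei(ether):
    """Convert a whole-number ether amount to wei."""
    return ether * WEI_PER_ETHER

def wei_to_ether(wei):
    """Convert wei back to ether (as a float for display)."""
    return wei / WEI_PER_ETHER

print(ether_to_wei(2))           # 2000000000000000000
print(wei_to_ether(5 * 10**17))  # 0.5
```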
What are smart contracts?
Smart Contracts are digitized, determined contracts between two or more people or software programs. The code can either be the sole manifestation of the agreement between the parties or can also act as a complement to a traditional text-based contract and execute certain provisions, such as transferring funds from party A to party B. The code itself is replicated across multiple nodes of a blockchain and, therefore, benefits from the security, permanence, and immutability that a blockchain offers. It is possible to use smart contracts to develop a DAO or a dApp.
Smart contracts:
- are operated by a network of computers
- execute agreed steps automatically when a specified event occurs
- automatically track changes within the set terms of the contract
- are stored on the blockchain
On the Ethereum network, smart contracts exist as independent users that can be interacted with, and thus, they have the same status as human users. This also means that they can be viewed and monitored by anyone in the network. Conditions are defined in the contracts, which anyone can check for correctness. Depending on whether the triggering event occurs, the smart contracts automatically execute the linked commands. These smart contracts are stored on the Ethereum Blockchain.
Smart Contract Example
For example, a smart contract might encode the agreement that the sender will receive their money for a package when it arrives within three days of the order being placed. Such a smart contract is connected to software that can check the status of the package (i.e., whether it has been delivered or not). Once the package has arrived, the smart contract automatically releases the recipient's Ether, stored and locked up in the contract, to the sender of the package. If the software detects that the package has not been delivered in time, the smart contract refunds the buyer instead.
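The escrow logic above can be sketched in plain Python (a toy simulation of the contract's control flow, not actual Solidity; names and amounts are illustrative):

```python
class PackageEscrow:
    """Toy simulation of the escrow contract described above: the
    buyer's deposit is locked until delivery is confirmed."""

    def __init__(self, buyer_deposit_wei, deadline_days=3):
        self.locked = buyer_deposit_wei  # funds held by the contract
        self.deadline_days = deadline_days

    def settle(self, delivered, days_elapsed):
        """Release funds to the seller on timely delivery,
        refund the buyer otherwise."""
        payout, self.locked = self.locked, 0
        if delivered and days_elapsed <= self.deadline_days:
            return ("seller", payout)
        return ("buyer", payout)

escrow = PackageEscrow(buyer_deposit_wei=10**18)
print(escrow.settle(delivered=True, days_elapsed=2))  # ('seller', 1000000000000000000)
```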
Smart Contracts and the Ethereum Blockchain
What makes Ethereum special, however, are the smart contracts that turn the Ethereum network into a decentralized computer. Smart contracts are small programs that run on the Ethereum network and can, for example, enforce the conditions of an Ethereum transaction. Unlike the Bitcoin network, the nodes on the Ethereum network are also responsible for processing these contracts. Through smart contracts, it is possible to develop so-called decentralized apps. DApps are publicly accessible decentralized applications. Since ultimately everyone can run an Ethereum node, all dApps have the same functionality and can offer services built on top of the infrastructure accordingly.
Everywhere in the world, developers are building innovative dApps on top of Ethereum. There are almost no limits to dApp development: financial applications, decentralized exchanges (DEXs), social media platforms, messenger services, and games are just a few examples. Smart contracts can be seen as back-end APIs running in the blockchain, while dApps are the front-end or UX; they represent the visible layer connecting users or other applications with the smart contracts running in the blockchain.

You may think of the app stores we are already familiar with. Just like with dApps, there are countless apps in the app store today. These apps trust the app store with payment management and thus involve a third party and the need for established trust. Traditional developers are also dependent on the app store's favor: app stores, as the ultimate authority, can remove apps from their stores. Accordingly, the consumer's choice also depends on the influence of third parties such as Google or Apple. Ultimately, this means that the content developers generate is in the hands of third parties and influenced by unwritten rules of the industry.

This is different with dApps built on Ethereum. Here, the data is in the hands of the users. Developers can offer dApps freely and independently of an app store provider, eliminating the pre-selection of an app store altogether. Running an Ethereum node is not a requirement for implementing dApps; instead, there are private offerings in the cloud that can provide access to existing nodes.
Compared to Bitcoin, which caps the maximum supply at 21 million coins, Ethereum has no such limit; there is no scheduled end to Ether issuance. How much Ether exists has a direct impact on its price: generally, the greater the number of coins publicly available, the lower their value. The total amount of Ether is still fluid, as the recent network upgrade to proof of stake (PoS) has made both increases and decreases in the Ether supply possible.
The differences between Ethereum and Bitcoin
Bitcoin and Ethereum differ in many aspects. While Bitcoin is a cryptocurrency, Ethereum is a platform. For this reason, Bitcoin is primarily a store of value and medium of exchange whereas Ethereum is seen as a general purpose blockchain.
Ether is the native token of Ethereum's blockchain. Bitcoin is and always has been the largest cryptocurrency as measured by market capitalization, and Ethereum is the second largest. Transactions also confirm faster on the Ethereum network than on Bitcoin's, because Ethereum produces blocks at a much shorter interval. It is also important to note that Ethereum was created as a complement to Bitcoin and not as competition. While Bitcoin has established itself as a cryptocurrency, Ethereum aims to establish a decentralized world computer. In this sense, a comparison between the two is a difficult one.
To learn how Ethereum transactions work in accordance with smart contracts, listen to the Generation Blockchain podcast episode on the topic.
Click here to listen to the Generation Blockchain Podcast episode on Ethereum Transactions & Smart Contracts.
In a previous section, you have already gained a high-level overview of smart contracts which will be deepened in the following.
As you already know, smart contracts are a type of Ethereum account. Thus, they have a balance and can be the target of transactions. Smart contracts are not controlled by a user; instead, they are deployed to the network and run as programmed. Actual user accounts can interact with a smart contract by submitting transactions that execute a function defined on the smart contract. Smart contracts can define rules, like a regular contract, and automatically enforce them via code. Smart contracts cannot be deleted by default, and interactions with them are irreversible; they can only be overwritten by authorized parties. Smart contracts are simply programs stored on a blockchain that run when predetermined conditions are met.
They typically are used to automate the execution of an agreement so that all participants can be immediately certain of the outcome, without any intermediary’s involvement or time loss. They can also automate a workflow, triggering the next action when conditions are met.
From an implementation standpoint, Vitalik and his co-founders designed the so-called Ethereum Virtual Machine (EVM) for running byte code in the blockchain. Every node in the network runs said EVM, and it is ready to execute any arbitrary code. Creating a new contract in the blockchain implies sending the program representation in byte code as part of the transaction data payload. Once the EVM runs the transaction and the block is added to the ledger, the programmer then receives the public address where it was published. From there, anyone that is given access to it can then start interacting with the contract at that address.
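A toy Python model of this deploy-then-interact flow (illustrative only: real Ethereum derives contract addresses from the sender's address and nonce, not from a hash of the code):

```python
import hashlib

class ToyChain:
    """Toy model of the flow above: byte code goes in via a
    transaction, a public address comes back, and anyone holding
    that address can interact with the stored contract."""

    def __init__(self):
        self.contracts = {}  # address -> byte code

    def deploy(self, bytecode):
        """Store the byte code and return the address where it lives."""
        address = "0x" + hashlib.sha256(bytecode).hexdigest()[:40]
        self.contracts[address] = bytecode
        return address

    def code_at(self, address):
        """Look up the contract byte code published at an address."""
        return self.contracts[address]

chain = ToyChain()
addr = chain.deploy(b"\x60\x60\x60\x40")  # placeholder byte code
print(addr.startswith("0x"), chain.code_at(addr) == b"\x60\x60\x60\x40")  # True True
```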
There are three important aspects of smart contracts. The execution context, the gas fee, and the immutability.
Smart Contracts run in isolation meaning that they can only see data available on the Ethereum blockchain or call other smart contracts. Thus, they cannot make calls to any service or query data from outside the Ethereum blockchain. Some contracts in Ethereum act as oracles. External users or applications can feed these oracle contracts with external data so others can consume them.
Running code in the EVM cannot happen without a gas fee being paid, since computing resources and storage are scarce and not free for the validators. The cost of using Ethereum services is expressed in a unit known as gas, which is paid for in small fractions of Ether (denominated in wei). For every transaction submitted, one must pay gas, otherwise the code will not run. Gas is consumed by executing lines of code and by allocating storage space. If a transaction runs out of gas, the transaction is cancelled. In this case, the gas already consumed is still paid.
Technically, gas represents a unit and not a price, since the price for the transaction is assigned when it is created. The higher the price one is willing to pay, the higher the transaction's priority in the execution queue. Validators have an incentive to execute transactions that pay more, since they receive the gas fees. One can also set a gas limit on the transaction, which expresses how much one is willing to spend on the execution. If the transaction costs more, it is cancelled, and the unused funds are returned to the sender.
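A simplified sketch of that gas-limit behaviour (real gas accounting is per-opcode, but the limit and payment logic look like this):

```python
def run_with_gas(gas_limit, gas_price_wei, gas_needed):
    """Simplified gas accounting. On success the sender pays only
    for the gas consumed; if the limit is hit, the transaction is
    cancelled but the gas burned up to the limit is still paid."""
    if gas_needed > gas_limit:
        return ("cancelled", gas_limit * gas_price_wei)
    return ("success", gas_needed * gas_price_wei)

print(run_with_gas(21000, 10, 21000))  # ('success', 210000)
print(run_with_gas(20000, 10, 21000))  # ('cancelled', 200000)
```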
Smart contracts are immutable: once their byte code is deployed on the Ethereum blockchain, it cannot be changed or updated. If an existing smart contract needs to be altered, one has to deploy a new version at a new address. Once bugs are introduced, they cannot be fixed in place.
|
OPCFW_CODE
|
On the Hunt: Texas Officials Launch Search for Missing Six-Foot Cobra Snake
A nationwide estimate provided by the US Centers for Disease Control and Prevention indicates that between 7,000 and 8,000 people are bitten by a venomous snake on a yearly basis, with about five individuals succumbing to a lethal bite. Now, American officials are on the hunt for a deadly pet snake.
Officials in Texas recently issued a citywide alert to the residents of Grand Prairie after first responders were informed by a local pet owner that their six-foot, venomous West African banded cobra was out on the loose.
A release issued by the Grand Prairie Police Department on Wednesday revealed that the snake initially went missing on Tuesday at about 5 p.m. local time, and that law enforcement officials were not made aware of the development until about an hour and a half later.
It further detailed that the owner, as well as staff from the regional animal services agency and a “venomous snake apprehension professional” all participated in a localized search mission to determine whether the cobra was still within the proximity of the home. However, their initial attempt was not successful.
— Alex Rozier (@RozierReports) August 4, 2021
“Residents who live in the area and see any type of snake believed to be the missing cobra, are asked to call 911 immediately,” reads the department’s statement. “Do not approach or attempt to capture the venomous snake.”
The owner, identified by local station WFAA as Tre Mat, revealed that the lethal snake managed to get loose after finding its way out of an in-house aquarium container when it was not properly shut.
"It only took a couple minutes [for the snake to escape]," Mat told the station, before adding that he’s “doing everything I can to help retrieve the snake."
The apologetic owner later explained to the local NBC Dallas-Fort Worth station that the snake somehow managed to get out of its enclosure when he temporarily left the home to retrieve more food for his other pets. However, he believes that the reptile may have slithered into his walls and could have either died inside or outside of the home due to the heat.
Photo provided by Texas' Grand Prairie Police Department of a similar West African banded cobra, shared as a reference image for the public.
“There were simple protocols that could have, five screws could’ve stopped this,” he told the station. “It just gives a bad look for the community, and I’m sorry too to the reptile community and my local community.”
Incidentally, the Lone Star State legally allows individuals to own such venomous snakes; however, they must be in possession of a permit from the State of Texas Parks and Wildlife Department. While the owner is in compliance with the regulation, officials appear to be questioning whether stricter rules need to be implemented for snake enthusiasts.
The owner further relayed to the station that the city ordered the removal of two other snakes that he had in his possession. One of those serpents was identified as a viper.
The local police department has stated that it partnered with the Grand Prairie Fire Department on the investigation, and that area hospitals have been placed on alert for a potential deadly snake bite.
|
OPCFW_CODE
|
Architecting Druid for failure
Guaranteeing data availability in distributed systems.
Everything is going to fail.
If this is your first time working with a distributed system, the fact that everything is going to fail may seem like an extremely scary concept, but it is one you will always have to keep in mind. Modern distributed systems routinely scale to thousands or even tens of thousands of nodes, and operate on petabytes of data. When operating at such scale, failures are a norm, not an anomaly, and modern distributed systems must be able to overcome a wide variety of different failures.
In this post, I will cover my experiences building Druid, an open source distributed data store, and the various design decisions that were made on the project to survive all types of outages. (Druid was initially designed to power a SaaS application with 24x7x365 availability guarantees in AWS—you can see a variety of production use cases here.)
Single server failures
Let’s first examine the most common type of failure that can occur: single server failures. Servers may fail for a variety of different reasons, such as hardware malfunction, random kernel panics, network outages, or just about every other imaginable and unimaginable reason. Single server failures occur frequently, especially as your cluster grows in size, but thankfully, single server failures generally have minimal impact on data availability and system performance.
The standard approach to ensure data remains available and performance remains consistent during single server failures is to replicate data. Using Druid as an example, data tables (called “data sources”) are collections of timestamped events and partitioned into a set of segments, where each segment is typically 5–10 million rows. Segments represent the fundamental storage unit in Druid, and Druid queries only understand how to scan segments.
Druid segments are immutable and are created through an indexing process where raw data is compacted and stored in a column orientation. One very nice property of immutable data shards is that replication is very simple. To replicate data in Druid, you simply have to load the same segment on two different machines. In Druid, segments are uniquely identified by a data source identifier, the time interval of the data, and a version string that increases whenever a new segment is created.
In a Druid cluster, different processes (called “nodes”) deal with segments in different ways. Some nodes create segments, some download and serve queries on segments, and others help coordinate segments. To survive single server failures, created segments are downloaded by historical nodes, and multiple historical nodes may download a copy of the same segment. Historical nodes are very simple: they follow a shared-nothing architecture and only know how to scan segments. In front of the historical nodes, we have a layer of broker nodes that know which historical nodes are serving which segments.
Clients interact with the broker nodes, and brokers will forward queries to the appropriate historicals. Queries can be routed to any replica of a segment with equal probability.
When one copy of a segment disappears, the broker will respond by forwarding queries to any existing copies of the segment.
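This failover behaviour can be sketched in a few lines of Python (a toy model with hypothetical node and segment names, not Druid's actual implementation):

```python
import random

class Broker:
    """Toy model of the routing behaviour described above."""

    def __init__(self):
        self.segment_replicas = {}  # segment id -> set of historical nodes

    def announce(self, segment, node):
        """A historical node announces it is serving a segment."""
        self.segment_replicas.setdefault(segment, set()).add(node)

    def node_lost(self, node):
        """Drop a failed node; queries fall back to remaining replicas."""
        for replicas in self.segment_replicas.values():
            replicas.discard(node)

    def route(self, segment):
        """Pick any live replica of the segment with equal probability."""
        replicas = self.segment_replicas.get(segment)
        if not replicas:
            raise RuntimeError("no available replica for " + segment)
        return random.choice(sorted(replicas))

broker = Broker()
broker.announce("events_2016-01_v1", "historical-1")
broker.announce("events_2016-01_v1", "historical-2")
broker.node_lost("historical-1")
print(broker.route("events_2016-01_v1"))  # historical-2
```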
Multi-server failures

While single server failures are fairly common, multi-server failures are not. Multi-server failures occur because of data center issues, such as rack failures, and any number of nodes can be lost at once, including the entire data center.
While multi-datacenter replication seems like a very straightforward approach, there are many pitfalls to avoid. The simplest setup to replicating data in multiple data centers is to distribute the nodes of a single cluster across the data centers.
This setup works reasonably well if you run in a cloud setup such as AWS, where availability zones may be geographically situated very close to each other (i.e. across the street from each other, with fiber wires connecting the zones). In such setups, there is minimal network time as nodes communicate to one another. This setup works less well if you have data centers spread across the world.
Most modern distributed systems require some piece for cluster coordination. Zookeeper remains an extremely popular option, and Raft has been gaining more traction. Distributed coordination tools rely on consensus to make decisions around writes, consistent reads, failovers, or leader election, and consensus requires communication among nodes. If the coordination piece is distributed across multiple data centers, the network time involved in consensus agreements can impact system operations. Depending on how much your system relies on coordination in its operations, this overhead can have significant performance and stability impacts. Given most distributed systems are quite reliant on the coordination piece, an alternative setup of running in multiple data centers may look like this:
In this setup, you run an independent cluster per data center. The clusters do not know anything about each other, and it is up to your ingestion/ETL piece to ensure the same data is delivered to all the different clusters.
In my experience, catastrophic failures are by far the most difficult to diagnose, and arise not because things are completely down, but because your distributed cluster is experiencing performance issues, and there is nothing obviously failing. It can be tremendously difficult to find a root cause of a problem during a fire. The source of slowness can come from a variety of causes, for example, if there are too many users who are using your system at the same time.
There are two primary strategies Druid employs for multitenancy (supporting many concurrent users at the same time). The first is to keep the unit of computation very small. Druid queries involve scanning one or more segments, and each segment is sized such that any computation over the segment can complete in at most a few hundred milliseconds. Druid can scan a segment pertaining to a query, release resources, scan a segment pertaining to another query, release resources, and constantly move every query forward. This architecture ensures that no one query, no matter how expensive, starves out the cluster.
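The interleaving described above can be sketched as a simple round-robin over per-segment units of work (a toy scheduler, not Druid's actual code):

```python
from collections import deque

def run_queries(queries):
    """Each query is a list of per-segment units of work. The
    scheduler scans one segment, then moves to the next query, so
    no single query, however expensive, starves out the others."""
    ready = deque(deque(q) for q in queries)
    order = []
    while ready:
        q = ready.popleft()
        order.append(q.popleft())  # scan one segment, release resources
        if q:
            ready.append(q)        # re-queue the query's remaining work
    return order

print(run_queries([["a1", "a2", "a3"], ["b1"]]))
# ['a1', 'b1', 'a2', 'a3']
```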
The second way Druid deals with multitenancy is to separate and isolate resources. For example, when dealing with event data, recent data is queried far more often than older data. Druid can distribute segments such that segments for more recent data are loaded on more powerful hardware, and segments from older data are loaded on less powerful hardware. This creates different query paths, where slow queries for older data will not impact fast queries for recent data.
A second cause of slowness (where nothing is obviously failing) is due to hot spots in the cluster. In this case, one or more nodes are not down, but are operating significantly slower than their peers. Variability between nodes is very common in distributed systems, especially in environments such as AWS. Thankfully, there has been great literature written about minimizing variability—for example, see this article by Jeffrey Dean and Luiz André Barroso, “The Tail at Scale”.
At the end of the day, diagnosing problems where there is no clear failure requires proper monitoring and metrics, and the ability to do exploratory analytics on the state of the cluster. If you are interested in learning more about using exploratory analytics to diagnose issues, I invite you to read my post on the subject, “Dogfooding with Druid, Samza, and Kafka.”
This post covers a very high level overview of how distributed systems such as Druid are architected to survive outages. I will be covering this topic in much greater depth at my upcoming session, Architecting distributed systems for failure, at Strata + Hadoop World San Jose March 28-31, 2016.
Editor’s note: all graphics in this post are courtesy of Fangjin Yang.
|
OPCFW_CODE
|
iPhoneLS is a new screensaver for Windows that simulates the iOS lock screen just the way you’d expect: once this screensaver is running, you’ll see a “slide to unlock” screen that brings you back to your desktop when you slide it to the right. Sounds better than the annoying bubbles or 3D text, am I right?
iPhoneLS requires .NET Framework 3.5 or higher, which is built into Windows 7. If you’re using any version of Windows prior to Windows 7, make sure you get .NET Framework 4.0 directly from Microsoft’s download website. Other than that requirement, this screensaver should be as easy to install as any other, like so:
Step 1: download iPhoneLS from here. It’s a developer-provided link, so you can rest assured it’s safe.
Step 2: extract the RAR file. In order to do that you’ll need an extraction tool, such as 7-Zip or IZArc. You should end up with 3 files: an executable, a screen saver file and a text file. Make sure you extract them into a directory that won’t be deleted, at least the screen saver file. If that file is deleted or moved, the screensaver will stop working.
Step 3: Right-click the Screen Saver file and select “Install”, like so:
Again, make sure you do not move this file or remove it after this point. If so, this screensaver will automatically be uninstalled (and we don’t want that, do we?).
Step 4: You should immediately see your Windows screensavers settings dialog, with iPhoneLS selected. There are no additional settings to set up, just hit OK.
You’re done! From here on, whenever your computer enters an inactive state, you’ll have to drag your cursor along the “slide to unlock” slider in order to resume use.
There are a few shortcomings to this screensaver, at least for now. First of all, there are no customization options available, which might be a letdown for enthusiasts who would like, for example, to set a background on their “lock screen”. That shouldn’t be hard to implement in a future version, and it would make this screensaver more attractive to those who wouldn’t like their desktop to be visible, even when inactive.
This screensaver is also a little buggy. When iPhoneLS is set as active, it will show up whenever the screen savers dialog screen is launched. Hitting the settings button on that dialog will also make the lock screen pop up, twice.
That’s a small price to pay for a really cool concept. I’d love to see this implemented onto the login screen which would serve more of a purpose than a mere screensaver, although that might be hard to do. Overall, you’ll have a lot of fun with it!
(Kudos to Raphael Bucher for submitting this to us. Nice work!)
|
OPCFW_CODE
|
Writing quality code is essential for creating reliable, maintainable, and efficient software. Here are some guidelines to help you write quality code:
1. Understand the Requirements: Ensure that you have a clear understanding of the project requirements before you start coding. Discuss the specifications with stakeholders and seek clarification if necessary.
2. Follow Coding Standards: Adhere to the coding standards and guidelines set by the organization or the community. Consistent code formatting makes it easier for others to read and maintain your code.
3. Use Meaningful Names: Choose descriptive and meaningful names for variables, functions, and classes. This improves code readability and makes it easier for others (including your future self) to understand the purpose of each element.
4. Keep Functions and Classes Small: Follow the Single Responsibility Principle (SRP). Each function or class should have a clear and specific purpose. Smaller functions are easier to understand, test, and maintain.
5. Avoid Repetition: Don’t repeat yourself (DRY). Repeated code can lead to maintenance issues and inconsistencies. Extract common functionality into functions or classes to promote code reusability.
6. Comment and Document Your Code: Add comments to explain complex logic, algorithms, or any non-obvious code. Properly document your functions, classes, and modules so that others can understand their usage and behavior.
7. Unit Testing: Write unit tests to verify that your code functions as expected. Automated tests help catch bugs early and provide a safety net for refactoring.
8. Error Handling: Implement appropriate error handling to make your code robust. Gracefully handle exceptions and errors to prevent crashes and unexpected behavior.
9. Version Control: Use version control systems like Git to track changes to your codebase. This enables collaboration and makes it easier to revert to previous versions if needed.
10. Performance Considerations: Write code with performance in mind. Optimize critical sections, avoid unnecessary loops or data processing, and use efficient algorithms and data structures.
11. Security Awareness: Be mindful of security vulnerabilities. Sanitize input, avoid hardcoding sensitive information, and follow security best practices.
12. Simplicity and Clarity: Keep your code simple and straightforward. Avoid over-engineering or adding unnecessary complexity. Aim for clarity, so anyone reading your code can understand its purpose easily.
13. Code Reviews: Participate in and welcome code reviews. They provide valuable feedback and help identify potential issues early on.
14. Learn from Others: Study well-written code from experienced developers and open-source projects. Learn from their approaches and best practices.
15. Refactor Regularly: As your project evolves, refactor your code to keep it clean and maintainable. Refactoring helps eliminate technical debt and improves code quality over time.
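Several of the guidelines above (meaningful names, small single-purpose functions, DRY, and unit tests) can be seen together in a tiny Python sketch:

```python
# Before: the same unnamed computation repeated inline in several places:
#   total = sum(p[1] * p[2] for p in items)
# After: a small, well-named, testable function (guidelines 3, 4, 5 and 7).

def order_total(items, taxed_rate=0.0):
    """Return the total price of (name, unit_price, quantity) items,
    applying an optional flat tax rate."""
    subtotal = sum(price * qty for _name, price, qty in items)
    return round(subtotal * (1 + taxed_rate), 2)

# Unit tests double as documentation of the expected behaviour:
assert order_total([("book", 10.0, 2), ("pen", 1.5, 4)]) == 26.0
assert order_total([("book", 10.0, 1)], taxed_rate=0.1) == 11.0
```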
Remember that writing quality code is an ongoing process of continuous improvement. It’s not just about the end result but also about the journey of refining and enhancing your code over time.
Get in touch with me:
|
OPCFW_CODE
|
There is a two-step process to get your videos into your library. Step one is to get the video files onto the internet with a public URL (web address). Step two is to enter the information and graphic for your file and to initiate the encoding process.
Video files are typically very large files. The most successful upload process for such large files would be FTP transfer or some kind of managed uploader (like Dropbox). Perhaps you have your videos stored online in a public folder already. If so, step one is already complete. But if your videos are only stored on a local computer, step one will consist of uploading them to the internet first. You can upload these files to any publicly-accessible storage space via FTP, or you can use Dropbox to temporarily store your files. After your files are encoded and stored in your library, you can decide if you want to keep the original videos on the internet, or delete them. The originals will not be used other than for initial transfer and encoding. Click here to see more information about getting your videos online with a public URL.
Step two is where you actually add your video to your library. To do this:
- Click on "Media Library" from the tool databases
- Click the "add new video" button in the upper right corner of the tool
- Choose the channel you want the video to be in, or just leave it on the default channel (if you don't add channels, the default will be "general")
- Select any of the categories you want to use to label this video, and/or add new ones that apply to the content of this video
- Choose whether this video is part of a series of videos.
- If the video is part of a series, then you have to
- choose the series it is a part of from the drop-down menu (if you haven't added the series yet, you will have to stop and go to the series tab and enter the series title and optional information)
- enter the part number of this video in the series
- decide whether to use the series title, series part number, thumbnail, and description for this video when it is displayed in the main media library (the individual video information will always be displayed when the individual video is selected and loaded.)
- Enter the title of the video, and optionally, a subtitle, date, and short description.
- Upload a graphic for the thumbnail that has square dimensions (identical width and height) and at least 150 pixels by 150 pixels. If you do not upload a thumbnail, the system will use a generic graphic.
- Choose whether the video is active or inactive
- Enter any text or graphic resources, and/or links to documents or external sources.
- If your video is already online with a public URL, copy that URL and paste it in the appropriate field.
- Decide whether to also encode an audio file that would be available for people to listen to or download from the player.
- Choose the Video Quality you wish to use. If you have a video implemented into your homepage, you may see an "integrated" option with corresponding dimensions that will be used for the video to display in an area of the homepage. If you are uploading a video that will be used on interior pages of your website, use the "Standard" option.
- Hit submit to start the uploading and encoding process. Once the process starts, you can navigate away from this screen and the process will continue behind the scenes. Refresh your screen on the page with the list of videos to see an updated progress meter. Your videos will be available on your pages after they are listed as "finished."
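The thumbnail requirement in the steps above (square, at least 150 by 150 pixels) can be checked programmatically; a minimal sketch:

```python
MIN_SIDE = 150  # pixels, per the requirement above

def thumbnail_ok(width, height):
    """A thumbnail must be square and at least 150x150 pixels."""
    return width == height and width >= MIN_SIDE

print(thumbnail_ok(150, 150))  # True
print(thumbnail_ok(300, 200))  # False
```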
|
OPCFW_CODE
|
8. Configure MySQL query_cache_size
If you have many repetitive queries and your data does not change often, use the query cache. People often do not understand the concept behind the
query_cache_size and set this value to gigabytes, which can actually degrade performance.
The reason behind that is the fact that threads need to lock the cache during updates. Usually value of 200-300 MB should be more than enough. If your website is relatively small, you can try giving the value of 64M and increase in time.
You will have to add the following settings in the MySQL configuration file:
query_cache_type = 1
query_cache_limit = 256K
query_cache_min_res_unit = 2k
query_cache_size = 80M
9. Configure tmp_table_size and max_heap_table_size
Both directives should be set to the same value, and together they help you prevent disk writes. tmp_table_size is the maximum size of an internal in-memory temporary table; when that limit is exceeded, the table is converted to an on-disk MyISAM table, which hurts database performance. Administrators usually recommend 64M for both values for every GB of RAM on the server.
[mysqld]
tmp_table_size = 64M
max_heap_table_size = 64M
10. Enable MySQL Slow query Logs
Logging slow queries can help you determine issues with your database and help you debug them. This can be easily enabled by adding the following values in your MySQL configuration file:
slow-query-log = 1
slow-query-log-file = /var/lib/mysql/mysql-slow.log
long_query_time = 1
The first directive enables the logging of slow queries, while the second one tells MySQL where to store the actual log file. Use long_query_time to define how many seconds a query must take before MySQL considers it slow.
11. Check for MySQL idle Connections
Idle connections consume resources and should be interrupted or refreshed when possible. Such connections are in the "sleep" state and usually stay that way for a long period of time. To look for idle connections you can run the following command:
# mysqladmin processlist -u root -p | grep "Sleep"
This will show you a list of processes that are in the sleep state. This happens when the code uses a persistent connection to the database. In PHP, for example, it can appear when using mysql_pconnect, which opens the connection, executes queries, removes the authentication, and leaves the connection open. Any per-thread buffers will then be kept in memory until the thread dies.
The first thing to do here is to check the code and fix it. If you don't have access to the code that is being run, you can change the wait_timeout directive. The default value is 28800 seconds, and you can safely decrease it to something like 60:
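For example, the following settings in the MySQL configuration file cap idle connections at 60 seconds (the values are illustrative; interactive_timeout is a companion variable that is often lowered alongside wait_timeout so interactive clients behave the same way):

```ini
[mysqld]
wait_timeout = 60
interactive_timeout = 60
```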
New library assignments reflect this shift as well, with term papers and research projects asking students to use web sites as an information resource, in addition . Assignments for students in my web development courses. Assignment, description, rubric, skills project 1 (10%): student bio website, set up a site with at least four pages with bio info and at least one link and image.
Allassignmenthelp covers all the areas related to programming, including website design and development assignment help at an affordable price for students. Your entire undergraduate/graduation course is in jeopardy if you can't manage a decent web design and development project at quality assignment help, we. It industry experts can leverage our expertise to employ skilled manpower for optimum web solutions including web and app development we have dedicated . The first step is to create a homepage (see website assignment option 1) you will need to consider issues of web design and the conventions followed by.
This is a short introduction to writing html what is html it is a special kind of text document that is used by web browsers to present text and graphics. Assignment, description, skills, rubric project 1: student bio website, set up a site with at least four pages with bio info, links and one image on each page. Researchomatic is the largest e-library that contains millions of free web design assignment topics & web design assignment examples for students of all. Get locus assignment help with unit 14 website design sample assignment for level 4, our team of excellent writers help you in all coursework.
Html css responsive web design template assignment for aspiring web developers, web designers, front end developers and interface. This course provides an introduction to website design and development for artists; over weekly assignments, you will develop an actual, live website from. Web designing and development assignments help in Ireland for writing essays, research papers and dissertation could be availed by the students this help is. Web design summer assignment 11th grade thomas a edison cte hs 165- 65 84th avenue jamaica, ny 11432 mrs cruz and ms dilrukshi, instructors. Get your web designing assignment help and writing from professional Australian writers for an A+ grade in your web design assignment at instant assignment.
Grading will be based on a point scale for each assignment course letter grades will be the standard a = 90%+, b = 80-89, c = 70-79, etc +/- grading will be. We cover all the area related to programming including website design and development assignment help avail online web designing help & assignment help. Part b – designing the application northbound events (nbe) wants you to develop a dynamic website using php and mysql to enable customers to view. Web design encompasses many different skills and disciplines in the production and maintenance of websites the different areas of web design include web. Web site design and development (cs0134) assignments assignment 1 exercise 1, [sample solution] assignment 2, grading standard - pdf file, [sample .
Instructions for doing assignments dear learner you are required to submit one assignment per course within the stipulated time. I introduction for this assignment, i plan to design a website for my imaginary company - minta cakes this is a multi-store company in australia. Coit20268-assignemnt item portfolio (s0274817) coit 20268 responsive web design (rwd) (term 2016) assessment item portfolio course co-ordinator: andrew.
Apply basic design concepts and principles of web delivery • demonstrate a basic server space for student portal/portfolio and assignments. This unit provides the principles and skills of web application development it aims at providing both conceptual understanding and hand-on experiences for web.
Lecture slides: introduction to web development sample code you are to update the webpage you created in assignment #1 with css you should have a . This landing page was created for those who want to choose web design as a future answershark is aimed to give design help with assignments of all levels. Assignment 1: web development index file lifestyle store link href=”indexcss”. Sample solution on online web designing assignment help to build web application web application introduction in the below report we.
How to recover an overflowed long in PowerBuilder
Using PowerBuilder 12.5...
My client stored a value in a long and saved it to the database. Eventually the values exceeded the amount possible in a long so it started storing negatives in the database. I know how to recover the original number if it is an integer that overflowed:
ABS(ai_int) + ((32768 - abs(ai_int)) * 2)
but using the same formula with the size of a long in place of 32768 does not work. Can someone help me get the negative number back to what the user wanted?
Where was it overflowed: in the PB client, or in the DBMS? If the latter, which DBMS?
Simply assign the Long to an UnsignedLong and it will be the expected value. For a demo create a window and put a MultilineEdit on it then put this in the Open event:
long ll_max = 2147483647
long ll_a, ll_b, ll_c
unsignedlong lul_a, lul_b, lul_c
string ls_out
ll_a = ll_max + 1
ll_b = ll_a + 1
ll_c = ll_b + 1
lul_a = ll_a
lul_b = ll_b
lul_c = ll_c
ls_out = string(ll_max) + "~r~n"
ls_out += string(ll_a) + "~r~n"
ls_out += string(ll_b) + "~r~n"
ls_out += string(ll_c) + "~r~n~r~n"
ls_out += string(ll_max) + "~r~n"
ls_out += string(lul_a) + "~r~n"
ls_out += string(lul_b) + "~r~n"
ls_out += string(lul_c) + "~r~n"
mle_1.text = ls_out
The above produces this output:
2147483647
-2147483648
-2147483647
-2147483646
2147483647
2147483648
2147483649
2147483650
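For readers outside PowerBuilder, the same two's-complement reinterpretation can be sketched in Python (the helper name is mine, and it assumes the counter wrapped past the 32-bit signed maximum no more than once):

```python
def recover_unsigned32(value):
    """Reinterpret an overflowed signed 32-bit value as unsigned.

    A signed 32-bit long that wraps past 2147483647 ends up storing
    the original magnitude minus 2**32; masking with 0xFFFFFFFF
    undoes that, provided the value wrapped around at most once.
    Non-negative values pass through unchanged.
    """
    return value & 0xFFFFFFFF
```

For instance, recover_unsigned32(-2147483648) gives 2147483648, matching the UnsignedLong output shown above.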
Good idea! Unless the overflow has been more than one bit, but then again, unless you assume only one bit of overflow, any recovery will be impossible. Still, very cool idea!
Even in that case you may be able to use other information to order or partition the rows enough to find the places the value rolled over. You could even use backups to determine which rows were in the table as of each backup date.
I'm guessing you're assuming these are overflows from incrementing (+1); I'm including the possibility that one day someone decided to add 4 billion to an integer. Unless serious audit tracking was done (e.g. tracking the components of every operation), I don't think there's any recovering from that one.
I'm certainly assuming that each change is at least a few orders of magnitude below the maximum size of an integer. If it's not, there are much more serious problems with the system than running out of space in an integer. There's also the possibility that the values aren't sequential at all. One could imagine a PB app connected to a Frob Detector that counts and logs Frobs over some interval. If Frobs are random and the counter overflows, there's no way to infer the lost data.
Last updated at Wed, 26 Jul 2017 18:09:34 GMT
Ever wish you could take all the work you just did commenting up a binary in IDA and have it all show up in your debugger? Now, you can produce a map file in IDA, and import it directly into WinDbg with the !symport command in byakugan.
In IDA, select File -> Produce File -> Create Map File, and select the destination. You can select any options for this, but currently we only import what's listed as the local symbols (that is, all symbols tied to a specific memory address relative to the base address). All of the names you changed and added as labels and functions will be exported to the .map file.
Inside WinDbg, load byakugan as normal, then use the !symport command with the module name and the map file path as arguments. The symbols will be imported as synthetic symbols, so you won't be able to use them to set breakpoints (this will be fixed soon), but they will show up in the disassembly window.
0:001> !load byakugan.dll
Reloading current modules
.*** ERROR: Module load completed but symbols could not be loaded for C:\Windows\System32\calc.exe
....*** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\Windows\system32\GDI32.dll -
.........*** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\Windows\system32\SHELL32.dll -
0:001> !symport calc C:\Users\lgrenier\calc.map
[S] Adjusting symbols to base address of: 0x calc (00680000)
[S] Failed to add synthetic symbol: ?bFocus@@3HA
[S] Failed to add synthetic symbol: ?machine@@3PAY04DA
[S] Failed to add synthetic symbol: ?init_num_ten@@3U_number@@A
[S] Failed to add synthetic symbol: ?init_p_rat_exp@@3U_number@@A
[S] Successfully imported 566 symbols.
A couple of caveats to be aware of. First, you should reload symbol server symbols manually before importing your own (unless they overlap), because reloading will remove all synthetic symbols. Second, if your symbols do overlap, !symport will be unable to override the symbol server symbols. If you'd rather use your own instead of the proper symbols, don't reload at all; just realize that you will be unable to do in-depth heap analysis without the symbols of unexported functions.
NOTE: My XP build VM is at home on my laptop, so only Vista binaries have been updated with this new functionality! I'll be adding XP binaries tonight or tomorrow, or you can build your own. Good luck!
10-26-2010 06:22 AM
I would like to ask few questions on QoS configuration in JUNOS:
1- When we classify traffic to a forwarding class through a multifield classifier (firewall filters), then in order to tag (mark) the traffic we have to define rewrite rules (a mapping of code points to forwarding classes). My question is that the code points are 6-bit binary values like 101 11 0, where 101 represents the priority, 11 represents the drop probability (loss priority), and the last bit is unused. But when we classify traffic to a forwarding class, why do we explicitly set the loss-priority, although loss priority is already part of the code point?
2- When we define the scheduler-map for queues, where exactly do we apply it: on the ingress interface or the egress interface?
3- In the scheduler, could anyone explain buffer-size in simple words?
4- When we assign multiple forwarding classes (for example, two forwarding classes) to the same queue and the transmit-rate is 10mb exact, how is this transmit-rate distributed between the two forwarding classes? 50-50, or what?
Looking forward to positive response.
10-26-2010 06:02 PM
1. It is possible that certain applications share a DSCP code point. Say there are two applications, HTTP and FTP, whose data packets both arrive with the default DSCP code point (000000) and are classified to the same queue; under congestion, we may want to give one of them a higher drop priority than the other.
2. Scheduler map will be applied at the egress interface.
3. Every port is allocated a pool of port buffers. The buffer-size we specify under the scheduler is the percentage of the port's total buffers that we want to allocate, i.e. guarantee, for that queue.
4. It will not be 50-50. Under congestion, the minimum guaranteed for both classes together will be 10mb.
11-07-2010 06:18 AM
1 - Don't confuse the actual DiffServ code point breakdown with the functions inside of the JUNOS-based router/switch. You can create a CoS configuration that mimics the DSCP values for Priority|Loss Priority|Unused - which I recommend - but the configuration and the code point are not dependent on each other.
Normally, I would configure AF11 into the assured-forwarding queue with a loss-priority of low in JUNOS - which mimics the actual DSCP itself. You could also put AF21 traffic into the same assured-forwarding queue but with a loss-priority of high - which is contrary to the DSCP values it represents.
The default mapping of DSCP values to JUNOS queues is not automatic or rigid - even though they have the same look and feel.
2 - Unlike our main competitor, our QoS encompasses the whole box. There is a classifier and a scheduler on every interface on every Juniper router or switch. You can see the defaults by issuing the command show class-of-service interface xxxxxx. If you want to change the CoS behavior of the box you will need to either (a) customize schedulers and classifiers for every interface, or (b) manipulate the defaults to add your functions.
3 - Buffer-size is the amount of "memory" you have assigned to that particular queue for packet buffering.
4 - Check out this whitepaper to see if it helps answer these questions in greater detail.
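To make the answers above concrete, a minimal class-of-service stanza might look like the sketch below. The scheduler name, map name, interface, and percentages are all illustrative (not taken from this thread); it shows a scheduler with transmit-rate and buffer-size, bound to a queue through a scheduler-map that is applied to the egress interface:

```
class-of-service {
    schedulers {
        AF-SCHED {
            transmit-rate percent 30;
            buffer-size percent 30;
            priority low;
        }
    }
    scheduler-maps {
        WAN-MAP {
            forwarding-class assured-forwarding scheduler AF-SCHED;
        }
    }
    interfaces {
        ge-0/0/1 {
            scheduler-map WAN-MAP; /* applied on the egress interface */
        }
    }
}
```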
Racy construct in Ref<>::unref could lead to invalid memory accesses
Godot version:
Tag: 3.1.2-stable
OS/device including version:
Any multi-threaded environment.
Issue description:
I assume the reader is familiar with issues of data races. The effect of races is that they may cause rare crashes. I was compelled to study this after playing a game I was enjoying, but found it crashed after 30m-1h of gameplay.
As in #32081 (Data races when running Godot), the thread sanitizer highlights a number of problematic constructs.
Consider the following sequence involving two threads on an object.
Initial state: refcount is 2.
| Step | Thread 1 | Thread 2 |
| --- | --- | --- |
| 1 | Call `Ref<>::unref()` | - |
| 2 | Enter `Reference::unreference()` | - |
| 3 | Set `die = false` and `refcount-- == 1` | - |
| 4 | (thread suspends, e.g. after executing line 87, as can happen for all sorts of reasons at any point in the code at the whim of the CPU or operating system) | (thread wakes up) |
| 5 | - | Call `Ref<>::unref()` |
| 6 | - | Progress all the way through the unreferencing |
| 7 | - | `die = true`, `refcount-- == 0`, enter `memdelete()` |
| 8 | - | Object is deleted; it's no longer valid to access members of the `Ref<>` |
| 9 | - | e.g. thread goes on and allocates new stuff in the space of the old `Ref<>` |
| 10 | (thread resumes) | - |
| 11 | `refcount.get() <= 1` is true | - |
| 12 | Due to step 8, all subsequent uses of `this` are undefined behaviour and can cause arbitrary memory corruption | - |
| 13 | For example, `get_script_instance()` accesses a member variable of the ref, but the ref has been deleted and replaced with other content in steps 8 and 9 | - |
This sequence of events is very unlikely in any given case, but it can happen, and is not valid.
If my interpretation and understanding above is correct, it could manifest in rare and difficult to reproduce crashes.
I also want to tip my hat towards this comment in the code, which I think is indicative that it is known that this construct may be responsible for crashes, but the reasons haven't yet been pinpointed. What I have described above could be one such cause for these crashes (but no doubt there may be other issues hiding around this construct).
https://github.com/godotengine/godot/blob/0587df4aa5f2977350cc80b1522cdc1e483c4515/core/reference.h#L262-L267
Steps to reproduce:
Study code, think, and be aware of issues relating to reference counts, threading and race conditions.
What might a fix look like?
Presumably this condition: https://github.com/godotengine/godot/blob/0587df4aa5f2977350cc80b1522cdc1e483c4515/core/reference.cpp#L89
... is here because scripts may hold one of the references, and it needs to be notified. The exact intent from the comments is opaque to me. Reading the code alone it is very difficult for me to convince myself that it works as intended, given that there may be multiple threads executing in parallel.
I think it would be useful to discuss the relationship between these objects, and what purpose notifying the script serves. What follows are some thoughts on this side.
It seems that once unref() has been called, one should not do any more member accesses. One way to avoid this would be to do the member accesses before calling unref(). The API at the moment involves feeding the whole object into e.g. CSharpLanguage::refcount_incremented_instance_binding, which it uses to query the refcount (something that should be determined once, atomically, along with the unref, throughout the whole process); it is also used to obtain the associated script binding get_script_instance_binding. I think these things could be stored up-front, then do the unref, then communicate to the managed side that the unmanaged side has gone, but without referring to unmanaged objects.
Something which makes this hard to analyse, I think, is that the refcount is effectively being used to store two pieces of overlapping information. 1) Is there a managed side to deal with, 2) the refcount.
It might make sense to disentangle these things. But care must be taken to only update all of the pieces of state consistently and atomically (e.g. using a mutex).
One final thought, an "obvious" solution is just to shove a mutex member variable around Reference::unreference so that only one unreference operation can take place per object at once. This too would make things a bit easier to reason about. But I think there are deeper issues surrounding the way this is written, and shoving a mutex in feels like a band-aid, compared to fixing issues such as multiple accesses to refcount.get(), which may be inconsistent and racy in a multi-threaded environment.
Can anyone still reproduce this bug in Godot 3.2.3 or any later release?
I took a quick look and I think so. My "reproducer" is to read the code, though, rather than to run the code.
open reference.cpp on the master branch
notice that the unref() logic appears to be the same as when I did my original analysis.
a key problematic construct appears to be the unrefing followed by memdelete. Just imagine that two threads simultaneously execute this line, and one of them suspends temporarily after that line while the other continues.
the thread which continues executing concludes there are no references, deletes the thing.
the suspended thread wakes up and calls get_script_instance(), which is now an invalid access because this has been deleted.
Really, all the work done in unref (decrement a counter, delete stuff) needs to be protected by a mutex.
The code executing memdelete also needs to be written so that it's guaranteed there can't be other threads alive holding references to the deletee when it comes to delete the object, for example here. The following algorithm needs to ensure that exactly one thing is accessing the reference count during that time, otherwise bad things will happen:
1. decrement reference count
2. test if refcount is zero
3. if zero, delete
It's necessary to ensure that between (1) and (3), no-one can re-increment the refcount, or subsequently operate on the object once it's deleted. A good way to do that is to protect the whole operation by a mutex, and the same for modifications or tests against the reference count itself.
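As a rough illustration of that algorithm (a sketch in Python for brevity, not Godot's actual C++ code), putting the decrement and the zero test under one lock guarantees that exactly one thread observes the count reaching zero and performs the delete:

```python
import threading

class RefCounted:
    """Sketch of a thread-safe reference count: the decrement and the
    zero test happen under one lock, so only one unref() call can
    ever observe the count reaching zero."""

    def __init__(self, initial):
        self._lock = threading.Lock()
        self._refcount = initial
        self.delete_count = 0  # stands in for the memdelete() side effect

    def unref(self):
        # Decrement and test atomically, closing the window between
        # steps (1) and (3) described above.
        with self._lock:
            self._refcount -= 1
            is_last = self._refcount == 0
        if is_last:
            # Exactly one caller reaches this branch, no matter how
            # the threads interleave.
            self.delete_count += 1
        return is_last
```

With eight threads each calling unref() on an object whose count starts at eight, delete_count always ends up exactly 1, regardless of interleaving.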
Hope this helps.
Play scala, java.sql.date throws me errors, trying to build a to-do form
I'm trying to build a to-do list using play. Now the issue I'm having, is that I can't seem to use the SQL format Date in my parser.
package models
import java.sql.Date
import anorm.SqlParser._
import anorm._
import org.joda.time.DateTime
import play.api.Play.current
import play.api.db.DB
case class Task(
id:Int,
task:String,
description:String,
dueDate:Date
)
object Task {
val task = {
get[Int]("id") ~
get[String]("task")~
get[String]("description")~
get[Date]("dueDate")map {
case id~task~description~dueDate => Task(id, task, description, dueDate)
}
}
def all(): List[Task] = DB.withConnection { implicit c =>
SQL("select * from task").as(task *)
}
def create(task: String, description: String, dueDate:Date): Unit = {
DB.withConnection { implicit c =>
SQL("insert into task(task, description, dueDate) values ({task},{description},{dueDate})")
.on(
'task -> task,
'description -> description,
'dueDate -> dueDate
).executeUpdate()
}
}
def delete(id: Int) {
DB.withConnection { implicit c =>
SQL("delete from task where id = {id}")
.on('id -> id).executeUpdate()
}
}
}
getDate throws me the following error:
could not find implicit value for parameter extractor: anorm.Column[java.sql.Date]
When I change java.sql.Date to java.util.Date, it complains that java.sql.Date is expected. The dueDate parameter should map to the SQL column with custom dates as input.
How do I fix this? Is it even possible to use java.sql.date in Scala?
(Im using h2-db for my SQL)
If you need more info, leave a hint.
You shouldn't use java.sql.Date for your domain model. Use java.time.LocalDate or org.joda.time.LocalDate instead; anorm's Column type knows how to handle these by default.
Thanks, I haven't fixed it yet, but vytah explained it a bit more on reddit.com/r/scala Thank you both!
If you're OK with using java.util.Date the following should work
Replace
get[Date]("dueDate")
with
date("dueDate")
This results in more pain and suffering as I don't get an error, but it just blanks my page :p
You could try to write your own implicit conversion. Something like this:
import anorm._
implicit def toSqlDateColumn = Column.nonNull { (value, meta) =>
val MetaDataItem(qualified, nullable, clazz) = meta
value match {
case date: java.sql.Date => Right(date)
case _ => Left(TypeDoesNotMatch(s"Cannot convert $value : ${value.asInstanceOf[AnyRef].getClass} to java.sql.Date for column $qualified"))
}
}
This results in type mismatch; found : (Symbol, java.sql.Date) required: anorm.NamedParameter at 'dueDate -> dueDate
It looks like by default it uses java.util.Date. See the date method in the SqlParser docs:
play docs anorm.sqlParsers
To all who've been using Cyberwatch(R) and provided feedback, thank you! It's very much appreciated. As a result of some of this feedback, Cyberwatch API version 1.1 was released last night, and it's packed with improvements that you've requested:
- 25-35% faster with database clustering improvements
- For our automated alerting mechanism, we've built in dynamic notification API keys, which don't require authenticating! Click through and see everything we know from that notification
- API key removal from the url on the front end wrapper
- Security updates and bug fixes
To all who've not yet heard of Cyberwatch: last week we went public with a new application programming interface (API) that allows users to run queries against our backend raw intelligence collections. We knew two things...
First, many (all?) companies need intelligence --not just information received when they buy that million dollar security tool, but a real understanding of what's going on outside of their border router that will likely affect them.
Second, many of those companies would rather slog through the myriad of Google groups and open source lists, take on the dark web themselves, and waste an enormous amount of time chasing things that just don't mean much to the questions they should be asking themselves.
For example... a RISK focused security pro will always want to know if there's a RISK of something breaching. And if they do, what's the likelihood of loss?
An INTELLIGENCE focused security pro will want to look over the horizon for risks that might mean something soon, or they'll want that tactical information --what IP blocks should we be monitoring now? Blocking now? Remediating?
At the same time, our customer base is largely 100,000 computers or bigger... which while good for us, represents a small number of companies who need help... and who may be partnered with or in the supply chain to these larger companies.
I've talked with dozens of smaller companies. They simply cannot, and likely never will, spend the money on an intelligence shop.
So what if Wapack Labs could help them? What if we could allow users to query our backend data for, say, 30 queries per day (for free), so that these smaller companies could see exactly what their exposure looks like --and what if Wapack Labs could refer them to a security professional (under NDA of course) to help that smaller company get well?
Well, that's exactly what we did.
- Wapack Labs passively collects key logger 'dumps' at about 1300 locations around the world;
- We collect on very specific sinkholes;
- We collect some specific open source --but not all... we don't want circular reporting;
- And we collect about a dozen other specific items that can help tell a company when they might have problems.
And we make that all searchable to anyone who wants to search against it.
As well, we started (this week) performing automated victim notifications. Our first batch, roughly 5000 of them, went out on Wednesday, with a no-cost, one time link to our databases to show the companies what we found, and why we think they may have been victimized. That email contains a link to our new Partner Exchange Program, and allows the victim to request a referral to one of our trusted, NDA'd, partners who can assist in the cleanup if needed.
The Cyberwatch API is available at api.wapacklabs.com.
Need more? We've built an ugly demo front end (we'll make it look nicer soon, I promise) on the API... cyberwatch.wapacklabs.com. Use it to monitor a portfolio of companies. If you're watching your supply chain, or a group of investment companies, you can set up five companies in our Cyberwatch front end, or you can use the API to bring the data into your own environment. Either way... you should be able to pull our data into a usable front end of your choosing or use ours.
So, to those who've provided feedback? We're listening.
To those who've not yet tried it? Try it!
We're heading into Christmas shopping season. And although much of the work we've done in the past is APT and espionage related, we've taken on a second flavor in our analysis --money. So if you're a retailer, a financial institution, or a supplier to one of these, as we head into the Christmas shopping season you should be watching our API at least daily, knocking down the threats we identify.
Give it a try. There's absolutely no reason you shouldn't... it's free and we might know something about you that you don't already know.
Until next time,
Have a great weekend!
(CyberWatch(R) is a registered Trademark of Wapack Labs Corporation.)
A couple of days ago I was asked to explain myself: "Why Symfony?". I was reminded of this video - an interview with Fabien Potencier:
Fabien mentions some facts about Symfony that are key to understanding it, as an outsider. These same facts (and some more) will also help you to "sell" Symfony to others.
First of all, Symfony is a set of building blocks. You can pick the ones you need for the specific web application you are going to develop. Will the application be having some kind of secured area, where only members can log in to? You'll need the Security Component. Does your application have forms? Install the Form Component. Is it going to be a full-fledged, interactive, web application with some console commands? Install all components, and the "glue" between them: the Symfony Standard Edition.
Second, out-of-the-box, Symfony does nothing. Of course, there is a nice welcome page and some demo pages that you'll see once you have installed the Symfony Standard Edition. But really, there is no interface, there is no "admin" panel. There is just empty space, ready to be filled by your application's code.
Third, using Symfony you won't have to "reinvent the wheel" for each project. Symfony provides tools for many things you want to do in every project: render pages using templates, validate the values of submitted forms, secure part of the application with a login form, etc. Symfony has made many decisions for you on each of these low-level subjects.
Fourth, you can still make your own decisions when it comes to (almost) anything. You may agree with Symfony's "sensible defaults", but even when you don't: nothing stops you from doing things differently. In fact, Symfony encourages this and enables you to do so. Almost every part of the framework can be replaced by your own implementation. The key to this flexibility is the dependency inversion principle, which is practiced everywhere in Symfony's codebase.
Fifth, if you have become acquainted with Symfony and its components, it will be much easier to get involved with another project which uses some or all of the Symfony components. It will also be much easier to share code between projects.
Sixth, when it comes to reusable code: there is a lot of it. The Symfony community is big enough to have something for any need you may have. Even though the quality of reusable packages varies a lot, there are always some shoulders of giants you can stand on. And when you have developed something that nobody has done so well before, it is very easy to share it, on GitHub and Packagist, using Composer.
Seventh, by working with the Symfony components, writing extension or replacement code, and by preparing your classes for dependency injection, your code quality will automatically increase. I know that I have become a better developer since I started using Symfony2, and I have seen many other developers display the same growth in technical ability.
In conclusion: there are lots of good things to say about Symfony. I hope that if you are still in doubt, you will soon give it a try.
If you are already developing Symfony applications, and are looking for some more advanced information and best practices, especially about writing reusable code, check out my book A Year With Symfony.
Seven facts about all frameworks, nothing more.
Agreed, I was kind of waiting for this reply :). Still, there are great differences between frameworks, and really, they teach you different best practices, which may or may not be best practices in general. Maybe that's my most important point here ("fact" 7).
Nice list, I could not agree more!
However, I think there is an eighth important fact that you've missed in the list: a clear LTS release policy.
This is a key factor for companies deciding between one open-source project and another. In 2016 there will still be someone fixing bugs and maintaining Symfony 2.3.
You named the pros of using a framework, but not why specifically Symphony.
There are more frameworks around besides Symphony.
I do like Symphony, but I use another framework (Yii).
By the way (somewhat related to this): the title of one of my next posts will be "Why you should not develop Symfony bundles". What bothers me most is the fact that many things are being created again and again, just for some specific framework.
Haha, agreed - you caught me there!
I think flexibility is one of the most important advantages of Symfony (with an f ;)) over other frameworks that I've seen. Changing parts of the behavior of Symfony itself is really easy in most situations. Also, the way you can take one Symfony component and use it stand-alone is really great, though I know that there are other frameworks trying to do this.
Can't agree more. And remember, Symfony2 is not just a framework; it's the glue of a community with projects sprouting up like Silex, Yolo, Vespolina, Sylius, etc., which are rediscovering different and better ways to do things!
This semester I am experimenting with a “flipped” classroom for my undergraduate course in deterministic operations research, INSY 3410. Although it’s still too early to know if this experiment was a success, I wanted to share some of my observations at the mid-way point (and to include some student comments for good measure).
Background: This course covers the foundational deterministic OR topics of linear programming (e.g., modeling LPs, solving via the simplex method, solving via Excel Solver and Gurobi, and basic sensitivity analysis) and network problems (e.g., shortest path, minimum spanning trees, transportation problems, and assignment problems). Traditionally this course featured 50-minute lectures on MWF.
Motivating Factors for Flipping: The most common comment made by students in end-of-semester course evaluations is that they would “like to see more example problems.” The traditional lecture-based teaching model made it difficult to dedicate sufficient time to both introducing the concepts and working practice problems.
So, this semester I recorded “short” (5- to 30-minute) lecture videos on the key course topics. These videos take the form of screencasts, where I talk over a set of slides. I make the slides available to the students so they can annotate them while watching the videos. [I’ll post all of these materials to this Website soon.]
Before class, the students are expected to (1) read the sections of the textbook related to each video, and (2) watch the assigned video(s). The goal is for students to come to class prepared to investigate these topics in greater detail (having gained a basic understanding through the videos and textbook reading). In particular, classroom time is now dedicated to working practice problems that help students better understand the topics and allow us to have more meaningful discussions. I post the in-class exercises to the course Website before class; during class I ask the students to form small groups (2-4 students) to solve the problems. If the students encounter difficulties we stop and have a class discussion.
Of course this model requires students to read and watch videos before class, which can be time-consuming. To compensate for the extra outside-of-class time requirements, I have made the Friday classes optional. However, rather than canceling the Friday time slot, the teaching assistants work more practice problems and provide general guidance on homework assignments.
Preliminary Observations: As an instructor, I find this model to be much more enjoyable. Those students that come to class prepared are more engaged and ask interesting questions. It is rewarding to watch students collaborate to solve problems. Although it has been time-consuming to write/record the videos, I believe this was a worthwhile investment.
From the students’ perspective, the results have been mixed. Based on the student responses to a mid-term course evaluation, many indicated that they prefer to watch the “lecture videos” on their own and work problems in class. However, judging by the statistics on video views, a disturbingly small percentage of the students actually come to class prepared. For example, of the 103 students enrolled, only 63 had watched the video on the simplex method before class. Not surprisingly, when I asked the students to solve problems in class, many were unable to do so. I suspect that these were the students who indicated that they do not like the flipped classroom.
(Some) Lessons Learned: I believe that there is value in this approach to teaching this course. However, there are a few things that I’d like to do differently. First, the videos need to be (at least somewhat) entertaining. Second, many students need to be motivated to watch the videos before class; “watch the videos for the sake of learning” is not sufficient. One option is to conduct a short quiz at the beginning of class that covers the key topics of the videos. Another option is to grade the in-class assignments. Currently, these assignments are not a component of the students’ overall grade. Finally, to accommodate the students’ unique learning styles, I believe it’s a good idea to maintain a mix of traditional lectures with video-based lectures.
I’m curious to hear from those of you who have tried new teaching strategies.
I have two issues but let’s start with the first one. I cannot use Docker commands without sudo.
For instance, if I enter “docker ps” I get nothing, but if I enter “sudo docker ps” then I get a list of running containers.
I have added myself to the “docker” group and given myself the right permissions…
pblanton@Kubuntu-OneOfTen:~$ groups pblanton
pblanton : pblanton adm cdrom sudo dip www-data plugdev lpadmin lxd sambashare docker
And tried everything else I can find online, all to no avail.
The second issue is that Docker Desktop doesn’t show any containers, running or otherwise, on my machine when I run it. I suspect that’s because the Docker Desktop app isn’t running as a super-user.
I finally found a solution to being able to run Docker without Sudo.
After installing Docker and the Docker Desktop on Ubuntu, I can run Docker commands without sudo until I reboot. After that I must use sudo.
Using sudo to build my containers results in them being stored in a different location than the one Docker Desktop is aware of, unless you run Docker Desktop under sudo as well.
The answer seems to be …
$ docker context use default
Which solves the problem of being able to access the Docker daemon as yourself AND results in the images you build being visible in Docker Desktop. I haven’t rebooted yet since doing that, but as I have work to do I won’t reboot until EOD.
Do you need two Docker daemons at the same time? If you want to use Docker Desktop, I don't recommend installing Docker Community Edition as well. They will have two different contexts, and somehow I don't think that the default context is actually owned by Docker Desktop. It is usually Docker Community Edition's, but I could be wrong, since I don't have Desktop on my Linux machine now. When you install Docker Community Edition, it will use a unix socket to communicate with the Docker daemon. That socket is owned by "root" and it is in the "docker" group. This is why you either need to use "sudo" or you need to be in the "docker" group as well.
In the case of Docker Desktop, it runs a virtual machine, so the socket of the Docker daemon is in the VM too. I don't remember whether it uses a local unix socket too or a TCP socket, but that socket can be handled differently.
If you want to make sure you always use the right Docker, remove the one that you don’t actually need. I guess it is Docker Community Edition.
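The socket ownership rules described above can be checked directly. The following Python sketch (the path and username are inputs you supply; `/var/run/docker.sock` is the conventional Engine socket location) reports whether a given user could reach a group-owned socket without sudo:

```python
import grp
import os
import pwd
import stat


def socket_diagnosis(path, username):
    """Report whether `username` could reach a group-accessible socket
    (like the conventional /var/run/docker.sock) without sudo."""
    st = os.stat(path)
    group_name = grp.getgrgid(st.st_gid).gr_name
    # Supplementary group memberships are listed explicitly in gr_mem...
    member_of = {g.gr_name for g in grp.getgrall() if username in g.gr_mem}
    # ...but a user's primary group is not, so add it separately.
    member_of.add(grp.getgrgid(pwd.getpwnam(username).pw_gid).gr_name)
    group_rw = bool(st.st_mode & (stat.S_IRGRP | stat.S_IWGRP))
    return {
        "group": group_name,
        "group_has_rw": group_rw,
        "user_in_group": group_name in member_of,
    }
```

If `user_in_group` is false, adding the user to the group (and logging out and back in, so the new membership takes effect) is the usual fix; if it is true but access still fails, the CLI is probably talking to a different context, as discussed above.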
When I installed Docker Desktop, it came without the docker service. Windows and Mac got both of them packaged into one, but the Linux version needed the service to be installed first, then the Desktop installed second.
If that’s changed, then that would explain my conundrum, as I am now doing it wrong - based on old facts.
I just looked at the docs and the old mention that I remember, about having to install the Docker service first is no longer there, but I did notice this…
“When Docker Desktop starts, it creates a dedicated context that the Docker CLI can use as a target and sets it as the current context in use. This is to avoid a clash with a local Docker Engine that may be running on the Linux host and using the default context. On shutdown, Docker Desktop resets the current context to the previous one.”
which explains what I am seeing.
When did you first install Docker Desktop? As far as I know, it always ran a virtual machine. This is what Docker Desktop is; this is how Docker tries to give you the same experience on each platform. I think I installed it while it was in beta, and it ran a virtual machine then.
To install Docker Desktop successfully, your Linux host must meet the following requirements:
- 64-bit kernel and CPU support for virtualization.
- KVM virtualization support. Follow the KVM virtualization support instructions to check if the KVM kernel modules are enabled and how to provide access to the kvm device.
- QEMU must be version 5.2 or newer. We recommend upgrading to the latest version.
In the meantime, you edited your comment, but I leave the quote here
Thanks for your help!
On this machine the first install of Docker was about two months ago, and I did follow instructions that said I needed to install the Docker Engine first, but I cannot seem to find that documentation again.
The current documentation here (Install Docker Desktop on Linux | Docker Documentation), under “Differences between Docker Desktop for Linux and Docker Engine”, explains everything I’ve been seeing. I blame myself for perhaps misinterpreting something I read.
I just removed the Docker Engine from my machine and Docker Desktop still works fine. I haven’t rebooted yet, but I expect my Docker installation will now survive a reboot.
Thanks again for helping me to clarify what’s going on!!
Then I am pretty sure it was a misunderstanding or a temporary mistake in the documentation. No problem, it happens. I am glad everything works.
using System.Globalization;
using Krafted.ValueObjects;
using Xunit;
namespace Krafted.UnitTest.Krafted.ValueObjects
{
[Trait(nameof(UnitTest), "Krafted.ValueObjects")]
public class ValueObjectTTest
{
[Fact]
public void Equals_EqualValueObjects_True()
{
Email valueObjectA = new Email("contact@maiconheck.com");
Email valueObjectB = valueObjectA;
Email valueObjectC = new Email("contact@maiconheck.com");
Assert.True(valueObjectA.Equals(valueObjectA));
Assert.True(valueObjectA.Equals(valueObjectB));
Assert.True(valueObjectA == valueObjectB);
Assert.False(valueObjectA != valueObjectB);
Assert.True(valueObjectA.Equals(valueObjectC));
Assert.True(valueObjectA == valueObjectC);
Assert.False(valueObjectA != valueObjectC);
}
[Fact]
public void Equals_NotEqualValueObjects_False()
{
Email valueObjectA = new Email("foo@maiconheck.com");
Email valueObjectB = new Email("bar@maiconheck.com");
Assert.False(valueObjectA.Equals(valueObjectB));
Assert.False(valueObjectA == valueObjectB);
Assert.True(valueObjectA != valueObjectB);
}
[Fact]
public void GetHashCode_Id_HashCode()
{
Email valueObjectA = new Email("foo@maiconheck.com");
Email valueObjectB = new Email("bar@maiconheck.com");
var hashCode1 = valueObjectA.GetHashCode();
var hashCode2 = valueObjectB.GetHashCode();
Assert.NotEqual(hashCode1, hashCode2);
}
[Fact]
public void GetCopy_ValueObject_Copied()
{
Email valueObjectA = new Email("contact@maiconheck.com");
Email valueObjectB = (Email)valueObjectA.GetCopy();
Assert.True(valueObjectA.Equals(valueObjectB));
Assert.True(valueObjectA == valueObjectB);
Assert.False(valueObjectA != valueObjectB);
Assert.Equal(valueObjectA.Value, valueObjectB.Value);
Assert.Equal(valueObjectA.ToString(CultureInfo.InvariantCulture), valueObjectB.ToString(CultureInfo.InvariantCulture));
}
}
}
Description of problem:
After upgrading to 1.20.0 from updates-testing, data from python plugins is not stored. I have at least two such plugins, hddtemp and sensors; their categories do not appear on the web page.
I've tried the debugging steps from the Netdata wiki, but without any resolution. When I manually run the python plugin (for example: sudo -u netdata /usr/libexec/netdata/plugins.d/python.d.plugin hddtemp debug) I see data being collected. But when netdata runs normally, there is no data.
I have no idea how to debug further. What should I do?
Version-Release number of selected component (if applicable):
Take a look in the error log: /var/log/netdata/error.log and see if there's anything related.
Nope, nothing about problems in error_log:
# grep sensors *
error.log:2020-03-21 16:05:22: charts.d: INFO: sensors: is disabled. Add a line with sensors=force in '/etc/netdata/charts.d.conf' to enable it (or remove the line that disables it).
error.log:2020-03-21 16:05:24: python.d DEBUG: plugin[main] : [sensors] looking for 'sensors.conf' in ['/etc/netdata/python.d', '/etc/netdata/conf.d/python.d']
error.log:2020-03-21 16:05:24: python.d DEBUG: plugin[main] : [sensors] loading '/etc/netdata/conf.d/python.d/sensors.conf'
error.log:2020-03-21 16:05:24: python.d DEBUG: plugin[main] : [sensors] '/etc/netdata/conf.d/python.d/sensors.conf' is loaded
error.log:2020-03-21 16:05:24: python.d INFO: plugin[main] : [sensors] built 1 job(s) configs
error.log:2020-03-21 16:05:24: python.d DEBUG: plugin[main] : sensors[sensors] was previously active, applying recovering settings
I've reported it upstream: https://github.com/netdata/netdata/issues/3697
Here is the correct link to github issue: https://github.com/netdata/netdata/issues/8451
The problem was external. I have a CEPH cluster which is not fully healthy. 'ceph' plugin in netdata was hanging, being unable to communicate with the cluster. After disabling this plugin, everything works (I gather fixing the CEPH cluster would also clear this issue).
Sorry for the late reply, it's a little bit complicated for now.
Good to know it's working again for you.
SSH2 library's fork (only for my devs, patches are sent upstream) http://libssh2.org
libssh2 - SSH2 library
======================

libssh2 is a library implementing the SSH2 protocol, available under the revised BSD license.

Web site: http://www.libssh2.org/
Mailing list: http://cool.haxx.se/mailman/listinfo/libssh2-devel

Generic installation instructions are in INSTALL. Some ./configure options deserve additional comments:

* --enable-crypt-none

  The SSH2 Transport allows for unencrypted data transmission using the "none" cipher. Because this is such a huge security hole, it is typically disabled on SSH2 implementations and is disabled in libssh2 by default as well. Enabling this option will allow for "none" as a negotiable method, however it still requires that the method be advertized by the remote end and that no more-preferable methods are available.

* --enable-mac-none

  The SSH2 Transport also allows implementations to forego a message authentication code. While this is less of a security risk than using a "none" cipher, it is still not recommended as disabling MAC hashes removes a layer of security. Enabling this option will allow for "none" as a negotiable method, however it still requires that the method be advertized by the remote end and that no more-preferable methods are available.

* --disable-gex-new

  The diffie-hellman-group-exchange-sha1 (dh-gex) key exchange method originally defined an exchange negotiation using packet type 30 to request a generation pair based on a single target value. Later refinement of dh-gex provided for range and target values. By default libssh2 will use the newer range method. If you experience trouble connecting to an old SSH server using dh-gex, try this option to fall back on the older, more reliable method.

* --with-libgcrypt
* --without-libgcrypt
* --with-libgcrypt-prefix=DIR

  libssh2 can use the Libgcrypt library (http://www.gnupg.org/) for cryptographic operations. Either Libgcrypt or OpenSSL is required. Configure will attempt to locate Libgcrypt automatically. If your installation of Libgcrypt is in another location, specify it using --with-libgcrypt-prefix.

* --with-openssl
* --without-openssl
* --with-libssl-prefix=[DIR]

  libssh2 can use the OpenSSL library (http://www.openssl.org) for cryptographic operations. Either Libgcrypt or OpenSSL is required. Configure will attempt to locate OpenSSL in the default location. If your installation of OpenSSL is in another location, specify it using --with-libssl-prefix.

* --with-libz
* --without-libz
* --with-libz-prefix=[DIR]

  If present, libssh2 will attempt to use zlib (http://www.zlib.org) for payload compression; however, zlib is not required. If your installation of libz is in another location, specify it using --with-libz-prefix.

* --enable-debug

  Will make the build use more pedantic and strict compiler options as well as enable the libssh2_trace() function (for showing debug traces).
For centuries books were the dominant source of information, but how we acquire, share, and publish information is changing in fundamental ways due to the Web. The goal of the Social Book Search Track is to investigate techniques to support users in searching and navigating professional metadata and user-generated content from social media as well as providing a forum for the exchange of research ideas and contributions. Towards this goal the track is building appropriate evaluation benchmarks complete with test collections for focused, social and semantic search tasks.
This year the Social Book Search Track consists of two tasks:
The system-oriented Suggestion task is similar to the 2013 Social Book Search task. However, we want to focus more on recommendation, so the topics are enriched with the user catalogue data (i.e. the books that the topic creator had in her personal catalogue at the time of creating the topic). In addition, we will release a large set of anonymised user profiles from other LT forum members, so task participants can run recommendation experiments.
One of the challenges is dealing with a mixture of professional and social metadata, which differ both in quantity and in kind. Professional metadata is often based on controlled vocabularies to describe topical information, with a minimal set of subject headings or classification information. Social metadata comes in the form of reviews that vary widely in length, opinion, clarity, seriousness, and in the aspects of the book they discuss, such as writing style, comprehensiveness, engagement, accuracy, recency, topical coverage, diversity, and genre. The task attempts to address questions such as:
Book search is highly complex. Searchers may want to read reviews and ratings from others to inform their decisions. When searching for themselves, their relevance criteria may be very different from when they are searching for someone else (as a birthday present, or merely to help someone in their search). They may be searching for books in genres or about topics they are familiar with, in which case a profile of their reading habits may be helpful, but they may also be searching for new genres and/or topics, for which little preference information is available.
The document collection consists of 2.8 million book descriptions with metadata from Amazon and LibraryThing. From Amazon there is formal metadata like booktitle, author, publisher, publication year, library classification codes, Amazon categories and similar product information, as well as user-generated content in the form of user ratings and reviews. From LibraryThing, there are user tags and user-provided metadata on awards, book characters and locations and blurbs. There are additional records from the British Library and the Library of Congress. To get access to the document collection, participants have to sign a Licence agreement.
Participants are allowed to submit up to 6 runs in standard TREC format. Any field in the topic statement may be used as well as any information in the user profiles. The topics and user profiles for 2014 are available on the Document Collection page. The submission deadline is 10 May 2014.
Each topic statement contains the title and message of a LibraryThing member who requested book suggestions, as well as the name of the discussion in which the message was posted. The topic statements are enriched with a user profile of the topic creator, which contains information about the books catalogued by these members, including tags and ratings. In addition, there is a large set of 94,000 anonymised user profiles from LibraryThing, which can be used to derive recommendations based on collaborative filtering. The topics and user profiles for 2014 are available on the Document Collection page.
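Runs in the standard TREC format mentioned above are plain-text files with one line per retrieved item: topic id, the literal `Q0`, document id, rank, score, and run tag. A minimal sketch of emitting such lines (the topic and book ids here are made-up examples, not real collection identifiers):

```python
def trec_run_lines(topic_id, ranked_books, run_tag):
    """Format a ranked list of (book_id, score) pairs as TREC run lines:
    <topic> Q0 <docid> <rank> <score> <tag>."""
    lines = []
    for rank, (book_id, score) in enumerate(ranked_books, start=1):
        lines.append(f"{topic_id} Q0 {book_id} {rank} {score:.4f} {run_tag}")
    return lines


# Hypothetical example: two suggested books for one forum topic.
example = trec_run_lines("12345",
                         [("0385504209", 14.2), ("0143038419", 11.7)],
                         "exampleRun")
```

A full run file would simply concatenate these lines for all topics, keeping one run tag per submitted run.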
The goal of the Interactive SBS task is to investigate how book searchers deal with professional metadata and user-generated content at different stages on the search process. For the task, we provide two book search interfaces and two tasks, one goal-oriented and one non-goal task. Participating teams have to recruit a minimum of 20 users. The user data (interaction logs and questionnaire data) are shared among all participating teams that manage to recruit at least 20 users. Registration details are below.
Two different experimental tasks will be assigned to users, one goal-oriented and one non-goal task.
The goal-oriented task will be a subject search task: You are looking for some interesting physics and mathematics books for a layperson. You have heard about the Feynman books but you have never really read anything in this area. You would like to find an “interesting facts” sort of book on mathematics.
The non-goal task will follow Borlund's situations. In this case, a common scenario identified from existing research will be used to prime participants. This will not be just about finding specific books, but about an "experience." The scenario (which will be the same across all observations) will describe a non-intentional interaction (no predetermined information need), in the line of: Imagine that you are sitting in a doctor's office, a bookstore, the airport, a pub or coffee shop. You have 10 minutes to spend looking for books using the Amazon/LibraryThing collection. Insert in your "bag" (an interface object that will work like a shopping cart without the ecommerce checkout) any books that you found unexpected, surprising or novel and make notes on the stuff you found interesting along the way.
Half of the users will browse and search the A/LT collection using an experimental multistage interface designed to support different stages of the search process. The other half of the users will browse and search the same collection using a more traditional interface.
The experimental system will use pre- and post-interaction questionnaires and logging of user interactions in order to capture user behaviour. The task attempts to address research questions such as:
To conduct the task we will use the web-based experimental system developed by the PROMISE NoE at the University of Sheffield. The following data will be collected:
Participants in the study must be adults (18 or older).
Participants will receive web-based access to the experimental system, which comes with a standardized protocol for pre-questionnaire, task-based interaction and post-questionnaire. One user will approximately need 25 minutes for the entire experiment.
Participating research groups agree to:
1-31 May: Data gathering
5 June: Release of shared data pool to all participants
15 June: CLEF working notes papers due
For questions regarding this track, please contact firstname.lastname@example.org
The most annoying part of Windows 10's virtual desktops, for me, is a simple thing: flashing taskbar buttons appear on all desktops! It drove me nuts; every time I was focused on some other desktop, some annoying program flashed its button and it appeared on the desktop I was using.
I happen to have written an in-memory patch for explorer.exe to disable flashing taskbar buttons altogether. There's an AHK script as well: https://github.com/Ciantic/DisableFlashingTaskbarButtons
If there was a single design change I'd make to Windows, it would be to make it impossible for one application to steal focus from another (without user intervention).
TweakUI used to do this via the "Prevent Applications from Stealing Focus" checkbox on the General -> Focus tab. But TweakUI ended with XP/2003 Server unfortunately.
To give an example: I was typing away in Word, really on a roll, and suddenly Apple's iTunes auto-updater stole focus, ate an entire sentence, and destroyed my concentration. Thanks, Windows.
Stealing focus is more than annoying -- it's dangerous. When a Windows dialog pops up with focus set to "OK", all it takes is a press of the space bar and poof! you've just "clicked" OK on God knows what. Even worse, now that it's dismissed, you can't go back and see what you just OKed.
The next 30 seconds are sure to be spent wondering "what's about to happen? Will the system reboot? Did I just install an upgrade? Or open the firewall? or ???"
So, thank Apple, please.
You should use GitHub Releases instead, and keep the repository clean.
Sadly this is something Windows 10 does not do well. When I plug my Surface Pro into an external monitor my app moves from the laptop screen to the external screen and stays 'full screen'.
Here is a simple example of what I would appreciate if it existed:
I am at work, I am connected to two large screens. I put Outlook on my right screen at about 1/4 the size of the screen. I disconnect, all my windows jump to the laptop screen (great) and I have outlook in full screen mode. Now I reconnect my dual monitors. I'd love Outlook to jump back to where it was, and the size it was, on that right screen when I unplugged. Now I get home and plug into a 2K screen (rather than the 4K screen at work) and I want Outlook to be a bit larger on that screen. When I unplug I want it back on my laptop full screen, when I plug in at work, back to the right monitor quarter size, when I get home back to the monitor about half size.
Remember which screens I plug into, remember what the app settings were when I last plugged into that screen. Restore them when I re-attach.
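The heuristic this commenter is asking for amounts to keying saved window layouts on the set of attached displays, and restoring the last-known layout whenever that same set reappears. A minimal sketch (all display and window names here are hypothetical; a real implementation would use stable hardware identifiers like EDID serials):

```python
class LayoutMemory:
    """Remember window geometries per set of attached displays: key on
    which monitors are plugged in, restore the last-known layout when
    that same set of monitors reappears."""

    def __init__(self):
        # frozenset of display ids -> {window name: geometry description}
        self._layouts = {}

    def save(self, display_ids, layout):
        # Copy so later mutations of the caller's dict don't leak in.
        self._layouts[frozenset(display_ids)] = dict(layout)

    def restore(self, display_ids):
        # Fall back to an empty layout for a never-before-seen monitor set.
        return self._layouts.get(frozenset(display_ids), {})
```

Because the key is a frozenset, the order in which the OS enumerates the monitors does not matter; "laptop only", "laptop plus the two work monitors", and "laptop plus the home 2K screen" each get their own remembered layout.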
On my MBP, I connect either through HDMI or a DisplayPort-to-VGA adapter. My MBP exhibits the behavior you mention if the displays are on different adapters, but it doesn't recognize different displays plugged into the same port/adapter (e.g. all HDMI displays are the same to it, all VGA displays are the same to it).
My Surface only has a minidisplayport port, and I suspect the adapter I'm using isn't fancy enough to tell the Surface that the attached displays are in fact different.
Definitely a bummer, though. And 4K support in general on Windows 10 (and macOS, but less so) is a little iffy. Issues really start to show when remoting into a session on a device hooked up to a 4K monitor.
On the surface dock the dual monitors connect to the dock's DP connectors.
It's baffling since I often physically move my main monitor, either bring it closer in front to sit back and play a game or rotate it to the side so I can watch videos from the nearby couch (with a trackball as controller).
I never determined that it was opening them in a remembered fashion, in fact I'm pretty sure that sometimes closing the window from my main and relaunching another file it will pop up on the other side again, so I made a habit of dragging files to prevent that.
The impression I got is that it is spreading to the less busy monitor as a "favor", but it's not consistent - can't reproduce right now. I really have no clue what the logic is, and whether it's the behavior of some apps I use or the OS, or something weird related to having 2 mouses plugged in.
Either way, it's very annoying.
I also agree that these days MacOS does a better job at multi-monitor for laptops, but still is lacking in other areas of usability vs. W10 to me.
The one thing I really miss when switching to MacOS from using my windows workstation is the windows-arrow key shortcuts to quickly move windows between screens and snap them to half the screen size. I waste an amazing amount of screen real estate on my Mac simply because resizing windows and moving them around takes too much time and dexterity.
Both have horrible monitor re-connection bugs, you just have to spend time with them to see them. MacOS windows still constantly get moved to almost-off-screen where you can barely click and drag them to restore, Windows sometimes doesn't wake one out of 4 sleeping monitors, etc. It's certainly miles better than even 2 years ago, but annoying bugs abound. Especially if you plug in at home and at work to different monitor configurations.
In general multi-monitor support until very recently seemed like a half-baked solution on both platforms. I'm very happy with the progress on this front (after a decade of stagnation) in the last couple years. I am cautiously optimistic this development pace will continue and finally iron out all the annoying usability bugs we've been living with for 10+ years.
Either drag to a corner/edge or press `⌃⌥[←↓↑→]`. `⌃⌥⌘[←→]` for previous/next display. `⌃⌥` + `Enter` to maximize, `C` to center, and `Backspace` to restore. Can do thirds now as well.
I would still get excited to see a well-executed tiling WM in macOS.
Linux does that as well: it remembers the identifying information of attached displays, and remembers the last configuration you used with that set of displays attached.
That simple heuristic works well for docking, TVs, projectors, and many other use cases.
I finally gave up and made shell scripts using ARandR and bound them as keyboard shortcuts.
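The scripts in question can be thin wrappers around xrandr. A hypothetical Python sketch of the kind of layout helper one might bind to a shortcut follows; the output names (eDP-1, HDMI-1) vary per machine, so treat them as placeholders:

```python
import subprocess


def xrandr_args(internal: str, external: str) -> list:
    """Build an xrandr command placing `external` to the right of
    `internal`, both at their preferred modes."""
    return [
        "xrandr",
        "--output", internal, "--auto",
        "--output", external, "--auto", "--right-of", internal,
    ]


# Bound to a keyboard shortcut, this would run something like:
# subprocess.run(xrandr_args("eDP-1", "HDMI-1"), check=True)
```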
Additionally, if I disconnect a monitor it will re-position any applications that were open on the disconnected monitor to the remaining one. If I reconnect that same monitor, the applications migrate back to it. Very neat: it just works, gets out of my way, and I can get on with the task I am trying to accomplish. This comes in handy when switching the input on the 30" screen back and forth to show my desktop PC's output, since switching the input has the effect of disconnecting the monitor as far as the laptop is concerned.
GNOME 3 does this out of the box, in both Debian stable and latest Debian unstable.
On disconnecting any display, it completely forgets about where your windows were originally (even the ones on the display that is still connected), and rearranges them seemingly randomly in left-right splits on the remaining displays, and makes no effort to restore your configuration when you plug the display back in.
MacOS has handled this aspect a lot better, in my experience, although it has its own issues like not being able to snap arrange windows without 3rd party tools in the first place.
Has anyone tried the latest insider builds? Has the situation improved there?
Unless I'm totally missing something, is there a way to make Win10 believe the desktop size is larger than the video screen size? With or without device resolution independence, I don't care. Let it scroll or something.
It's just really sucky when I'm trying to debug an application that expects a large monitor (ala 1280x960 or something) and my laptop only has 1366x768. Qt literally says "screw it, I'm clipping your window" in these cases.
In the nvidia control panel, have a look in '3D settings' > 'Manage 3D settings' > 'DSR - Factors' and you can change the resolution multiplier. (Not sure why it's in 3D settings, as it applies to the desktop along with anything else).
What I need is this setup, but be able to go lower than 100%. So if I chose 50% for example, my 1366x768 LCD actually looks like a 2732x1536 desktop to Windows.
Win10 won't go lower than 100%.
* Multiple concurrent Remote Desktops running in full screen? Make them all separate virtual desktops so you can switch between and manage them like you can with virtual desktops.
One thing at a time + work after good rest + be good at it + love what you do = productivity. It's a narrow path. Skip any of these and you're fooling yourself, IMHO.
For anyone who wants to play with Windows 10 virtual desktops, just FYI, you can use the keyboard shortcut Win + Ctrl + Left Arrow/Right Arrow to move between virtual desktops (you might have to create a second virtual desktop before this works, I can't remember).
In my Mac configuration where I have several RDP sessions, I can switch from full screen RDP to full screen RDP with Ctrl+Alt+Left/Right, which is a joy (I have an IBM Model M without Win key).
I have not managed to do something similar with W10.
Having quickly scanned the link it looks like it may provide the key shortcuts so that's one step forward. The other two issues are probably closely related so I'm hopeful these will be possible in one form or another eventually.
A feature that Win 10 has that I was missing on the Mac (maybe they have it by now): desktops don't share taskbar and open-window state. So if you Alt+Tab you see only the windows on the current desktop. Same when using the taskbar.
A feature I'm still missing is to save the state of all opened windows, and to have different icons on taskbar and desktop per virtual desktop (like separate computers).
Also there's a bug where Chrome windows from all desktops jump to the first desktop after waking from sleep.
Niche but changed the way I use VMs forever.
I switched to Linux soon after though, so I'm sure this project is much better now.
Introduction to XDOC API Services
XDOC provides various API services including:
- HTTP Server Services which provides an API layer for application to application service method invocation
- User Interface Services which can be used by 3rd party applications to launch XDOC user interface web pages
- Remote Container Service which is used to integrate XDOC with 3rd party mortgage applications
- Remote User Authentication Service which is used to integrate 3rd party system user authentication with XDOC
Please start by reading these general service overviews. Next, browse the other Service specific pages depending on which Services you are interested in.
After you have done your initial review of the documentation, we can setup an API Introductory Call with your developer(s).
HTTP Server Service Method Collections
The HTTP Server Services Method Listing contains these collections of service methods:
- Application - HTTP Simple Services specification for the Application. Includes Service Method for Testing Connectivity and API Protocol Handshaking
- System - HTTP Simple Services specification for the System Service. Retrieving System level configuration such as Security Profile lists, XDOC Version Information, etc.
- Project - HTTP Simple Services specification for the Project Service. Retrieving Project level repository configuration such as list of Document Types, list of Document Stacks, Bundle Profiles, etc.
- Container - HTTP Simple Service specification for the Container Service. Retrieving lists of documents for a particular container (loan), downloading documents in TIF or PDF format, etc.
- Document - HTTP Simple Services specification for the Document Service. Adding Documents, Deleting Documents, etc.
Configuring API Settings in the XDOC User Interface
XDOC API Service Settings for a project can be found under Admin > Projects Tab > HTTP Server Service & UI Launch Service in the side menu:
Note: Access to the XDOC API Services is restricted by the IP Access list defined on this page. For an extra layer of security, a Security Token can also be used when making requests.
Generating Security and User Tokens
After configuring the API settings in the XDOC User Interface, the Security & User Tokens for use in API requests can be created. Here is a brief overview:
- The tokens are first constructed in plaintext using either XML or JSON.
- The plaintext tokens must then be encrypted according to the encryption standards chosen on the API settings page.
- The encrypted tokens must then be Base64 encoded.
- Finally, the tokens must also be URL encoded for use in API request URLs.
Sample XML Security Token:
<SecurityToken>
  <Context>MySecurityContext</Context>
  <AppId>MyAppId</AppId>
  <AppKey>MyAppKey</AppKey>
  <GenDT>2021-03-08T14:15:53-08:00</GenDT>
</SecurityToken>
Sample XML User Token:
<UserToken>
  <UserName>admin</UserName>
  <Display>John Doe</Display>
</UserToken>
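The post-encryption part of the pipeline described above can be sketched in Python. This is a hypothetical illustration only: the actual cipher depends on the encryption standard chosen on the API settings page, so it is passed in here as a callable rather than implemented.

```python
import base64
import urllib.parse


def prepare_token(plaintext_token: str, encrypt) -> str:
    """Apply the token pipeline: encrypt -> Base64 encode -> URL encode.

    `encrypt` stands in for whatever cipher the API settings mandate."""
    encrypted = encrypt(plaintext_token.encode("utf-8"))
    b64 = base64.b64encode(encrypted).decode("ascii")
    return urllib.parse.quote(b64, safe="")


# Demo with an identity "cipher" standing in for the configured one:
xml = "<UserToken><UserName>admin</UserName></UserToken>"
xut = prepare_token(xml, encrypt=lambda raw: raw)
```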
Security Token Encrypted, Base64 Encoded, and Finally URL Encoded (XST Value):
For more information on Security Token Encryption, please see S4. Security Token Authentication in the HTTP Server Services API section.
User Token Encrypted, Base64 Encoded, and Finally URL Encoded (XUT Value):
For more information on User Token Encryption please see S4. Security Token & Security Considerations in the User Interface Service section.
Constructing Request URLs for HTTP and UI Launch Services
Here is a brief overview for constructing XDOC API request URLs.
- The domain for the XDOC Server will be used as the base for the API request.
- All HTTP Service Methods are accessible via the /XDocServerService.ashx? endpoint with the desired service method specified by input to the XM parameter.
- All UI Launch Links are accessible via the /XDocUIService.ashx? endpoint with the link type specified by input to the AppLink parameter.
- If a Security Context is defined (and set to be required) on the API settings page in the user interface, it must be specified in requests by input to the XSC parameter.
- Finally, if using a Security Token or User Token, they must also be specified in requests using the XST parameter for the Security Token and the XUT parameter for the User Token.
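The steps above can be assembled into a request URL as follows. This is a hypothetical sketch: the domain, token values, and method name are placeholders, not values from a real deployment.

```python
from urllib.parse import urlencode

base = "https://xdoc.example.com"       # XDOC server domain (placeholder)
params = {
    "XM": "Container.ContainerInfo",     # desired HTTP service method
    "XSC": "MySecurityContext",          # required if a Security Context is defined
    "XST": "ENCODED_SECURITY_TOKEN",     # placeholder encrypted/encoded XST value
    "XUT": "ENCODED_USER_TOKEN",         # placeholder encrypted/encoded XUT value
}
invoke_url = base + "/XDocServerService.ashx?" + urlencode(params)
```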
Sample Container.ContainerInfo Invoke URL:
For more information on constructing invoke URLS please see S1. Method Invocation Syntax in the HTTP Server Services API section.
Sample APPDASHBOARD UI Launch URL:
For more information on constructing Launch URLs please see S1. Service Invocation Syntax in the User Interface Service section.
Read more about Cyberpunk 2077➜ https://cyberpunk2077.mgn.tv
The New Update Hotfix 1.21 for Cyberpunk 2077 is live on PC, consoles and Stadia.
In this update they focused on further improving the overall stability of the game and fixing the most common issues that could block progression. Here’s what changed:
#Cyberpunk2077 #Update1.21 #Cyberpunk2077Update #Cyberpunk2077News
🔥Make sure to leave a like and subscribe I Upload Daily🔥
🔔Enjoying the content? Make sure to Subscribe for More🔔
Cyberpunk 2077: NEW Update Hotfix 1.21 PATCH NOTES! Quest, Open World Fixes & More
1:02 Quests & Open World
🔔Legendary Bunker Videos🔔
✅Secret Legendary Garage Bunker: https://youtu.be/3zAD-1DS5Vo
✅Secret Turbine Bunker: https://youtu.be/zMHmqhyMRzU
✅Secret Desert Bunker: https://youtu.be/yeC815tPK-Q
✅Secret Legendary Trailer Bunker: https://youtu.be/h0QC3uOIf-o
✅Secret Legendary Gas Station Bunker: https://youtu.be/H4qCDd3LK8Y
✅Secret Legendary Bunker: https://www.youtube.com/watch?v=sHzDIasHIYQ
✅Social Media Links ✅
🔥Thanks For Stopping By The Channel🔥
🔔Cyberpunk 2077 Hotfix Update 1.21 Full Patch Notes: https://www.cyberpunk.net/en/news/37984/hotfix-1-21
🔔Outriders Giveaway: https://gleam.io/Lrsi0/outriders-ps4-ps5-xbox-pc-giveaway
✅Business Inquiries: [email protected]
🚩►My Music Links: Song “Alright” Performed By Money Chips a.k.a Karpo
✅Sound Cloud: https://soundcloud.com/karpomusick
🚩►Help Our Community reach 30,000k 🔥 Subscribe by showing your support and smashing that ⛑ SUBSCRIBE, Notification Bell 🔔 & Like button! I Do A Ton Of Giveaways! Show your support for the channel & Community by watching and Commenting it really does helps the channel grow as well as the Community….and…and you also get to keep up to date with everything that I upload!🔔 …good times.
✅Welcome To The Gaming Channel dedicated to bring gamers news, clips, montages, drops and everything related to the world of “Video Games”
Have A Great Day.
✅FTC Legal Disclaimer – Some links found in the description box of my videos may be affiliate links, meaning I will make commission on sales you make through my link. This is at no extra cost to you to use my links/codes, it’s just one more way to support me and my channel! 🙂
23 thoughts on “Cyberpunk 2077: NEW Update Hotfix 1.21 PATCH NOTES! Quest, Open World Fixes & More”
Awesome vid again bro. I will be jumping back in to Cyberpunk to test these updates!
Huummm….They Are Patching This Up…this is good news.💯😀
Wait didn't this patch come out a week or two ago or is this a NEW new patch?
Does the item/money duplication exploit still work? (The one that you use the drop box for)
Its running a ton better on base ps4 now!
23.95GB Update on my Xbox One X
Good vid man, thanks for the info
My game keeps crashing now?
Under a gigabyte where did you get that from? the download itself was 27 GB check your facts please
They still didn't fix the mod slots for light machine guns 😣 disappointing
Sorry to myself for not subbing so much sooner! Man you sure do stay up to date on these i love it thank you sir im now subbed!
Dude, did you really just read the entire note and that's the whole video? Wow. Can you still call it a "content"?
Third person. Mode please. Cdpr
Small correction: The patch size is actually 2.099 GB on console.
I think I just died listening to your interpretation of 8ug8ear. It's Bug Bear. I guess 1337 5P33K isn't a thing people know of any more. It's all good but I thought it was funny. For the uninitiated, it's a simple substitution cipher with numbers replacing letters that they look like.
Thanks for the news. Very helpful. 👍🏻
I’ve never had these issues/ bugs on SX in any of my three play throughs, yet there’s another update for a massive 24gb…
Another patch .
The truth will be told when playstation adds it back to the ps store games gonna be great I know it will be just in due time
Again 25 gb for ps4…wow…
DOWN ON THE STREET IS STILL BROKEN..
Nothing changed in my game ;/
What a huge patch! With little to know fixes or upgrades… why can’t they just provide what they promised and stop fixing garbage
Where is the download button for this. I see discussions. Not a dload option. I use on steam
Can't access site only on VPN
Hi, people in my organisation require visiting a particular website for their job role. When we try and visit using machines in the office on the corporate network, no problem. However, the same website over the VPN isn't loading.
Accessing the site over HTTP in Chrome:
ERROR The requested URL could not be retrieved While trying to retrieve the URL: http://www.website.com/dcssplash/login.aspx The following error was encountered: Access Denied. Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. Your cache administrator is webmaster. Generated Tue, 19 Jan 2021 10:05:39 GMT by PROXY1-BusinessNet (squid/2.5.STABLE13-NT)
Using HTTPS in Chrome:
This site can’t be reached The webpage at https://www.website.com/dcssplash/login.aspx might be temporarily down or it may have moved permanently to a new web address. ERR_TUNNEL_CONNECTION_FAILED
I'm not sure if logging is set up properly on our system, because whenever I try to view the logging or reporting from the WSM I get an error message saying the log server could not be contacted, despite it being pingable. And our password manager notes: "We no longer have shell access to this server. Any config changes would be done by Watchguard support."
Anyway, without being able to view the logs, anyone have any idea how to tell which one of watchguard's many settings may be causing this issue?
Thanks a lot.
EDIT: I tried adding the SSLVPN-Users group into the from field of the default http-proxy. I added an exception to the WebBlocker being used, Pattern: .website.com/
I also added an exception in the body response code of the associated proxy action (I knew this wouldn't change anything, but I'm running out of things to try)
No changes have made any difference to VPN users trying to access this site.
re. logging - you can look at Traffic Monitor.
In WSM Firebox System Manager -> Traffic Monitor, one can select the Maximum Log Messages, which can be set to a max of 25,000
Is access to this site being done via a BOVPN from your firewall?
The HTTP access message "Access Denied." suggests that the access to this site needs to come from a specific subnet at your site.
The firebox doesn't use squid, which is the open source proxy that generated that error message. Additionally, by default, traffic moving between sites on a BOVPN does so via a packet filter on the firebox.
I'd suggest asking if there is a squid server anywhere on the local or remote network that your traffic might be being sent thru.
WatchGuard Customer Support
Hi Bruce, thanks for your reply, sorry for the delay. I have checked the Traffic Monitor on the Firebox System Manager. I pinged the destination web address and put the ip address into the traffic monitor filter, but nothing came up. I put my own machine's IP address into the filter, nothing showing either.
I successfully got to the site from my machine in the office, but again, nothing appearing in the traffic monitor.
I went through the http proxy actions that the http proxy policy is using, and enabled logging everywhere I could find it.
I've had problems in the past and you've suggested traffic monitoring, but perhaps we have a problem with the setup as I can never find anything useful in there, whether it's for exe files that are being mysteriously blocked or in this instance either.
@James_Carson - Hi James, thanks for your reply, I will have a look into this and get back to you! That would certainly explain why I can't see anything in the traffic monitor!
The default for polices is to only log denies, proxy strips etc.
To see packets allowed by a policy in Traffic Monitor, you need to enable Logging on it.
How to "build" a python script with its dependencies
I have a simple Python shell script (no GUI) that uses a couple of dependencies (requests and BeautifulSoup4).
I would like to share this simple script across multiple computers. Each computer already has Python installed and they all run Linux.
At this moment, on my development environments, the application runs inside a virtualenv with all its dependencies.
Is there any way to share this application with all its dependencies without needing to install them with pip?
I would like to just run python myapp.py to run it.
What are your reasons to not use pip? Because that would seem to be an obvious solution.
Because I can't install it on some computers.
Have you resolved this issue? I also cannot install anything on the system, but I can ship my script with all necessary binaries, drop it in a directory, and execute it. Hard to believe people have never encountered such a need. Everyone seems to just install all their stuff on their servers...
Did you fix this issue? I have a similar problem to solve. Please share.
You will need to either create a single-file executable, using something like bbfreeze or pyinstaller or bundle your dependencies (assuming they're pure-python) into a .zip file and then source it as your PYTHONPATH (ex: PYTHONPATH=deps.zip python myapp.py).
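The `deps.zip` idea can also live inside the script itself via the standard library's zip import support, so a plain `python myapp.py` works without setting PYTHONPATH. A sketch, under the assumption that the bundled packages are pure Python (C extensions cannot be imported from a zip):

```python
# top of myapp.py
import os
import sys

DEPS = "deps.zip"  # archive of pure-Python deps, shipped alongside the script
if os.path.exists(DEPS):
    # zipimport (built in) makes packages inside the archive importable
    sys.path.insert(0, DEPS)

# From here on, bundled packages such as requests or bs4 import normally.
```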
The much better solution would be to create a setup.py file and use pip. Your setup.py file can create dependency links to files or repos if you don't want those machines to have access to the outside world. See this related issue.
As long as you make the virtualenv relocatable (use the --relocatable option on it in its original place), you can literally just copy the whole virtualenv over. If you create it with --copy-only (you'll need to patch the bug in virtualenv), then you shouldn't even need to have python installed elsewhere on the target machines.
Alternatively, look at http://guide.python-distribute.org/ and learn how to create an egg or wheel. An egg can then be run directly by python.
The link doesn't work anymore. What about C-based dependencies? How can I include them? I don't want the target system to install everything system-wide; I want to ship all necessary binaries with the package... something like a JAR file in the Java world, where everything is included, both pure-Python and external stuff.
I haven't tested your particular case, but you can find source code (either mirrored or original) on a site like github.
For example, for BeautifulSoup, you can find the code here.
You can put the code into the same folder (a rename is probably a good idea, so as not to clash with an existing package). Just note that you won't get any updates.
kCFSocketAcceptCallBack not being called for WIFI connection with static IP
I will try to be as detailed as I can. I am trying to connect to an acquisition unit from my iPhone in my app. We are using IPv4 and the acquisition unit doesn't support DHCP, so it's always scanning for a device with a specific static IP and port number.
Before testing the connection between the unit and my iPhone, I created an ad hoc network using my desktop and tried it out with my iPhone. This is part of my code.
CFSocketContext CTX = { 0, description, NULL, NULL, NULL };

/* Create the server socket as a TCP IPv4 socket and set a callback */
/* for calls to the socket's lower-level accept() function */
TCPServer = CFSocketCreate(NULL, PF_INET, SOCK_STREAM, IPPROTO_TCP,
                           kCFSocketAcceptCallBack,
                           (CFSocketCallBack)WiFiCallBack, &CTX);

/* Set the port and address we want to listen on */
struct sockaddr_in addr;
memset(&addr, 0, sizeof(addr));
addr.sin_len = sizeof(addr);
addr.sin_family = AF_INET;
addr.sin_port = htons(PORT);
addr.sin_addr.s_addr = htonl(INADDR_ANY);

CFDataRef addressData = CFDataCreate(NULL, (UInt8 *)&addr, sizeof(struct sockaddr_in));
CFSocketSetAddress(TCPServer, addressData);
CFRelease(addressData);

/* The accept callback only fires once the socket is scheduled on a run loop */
CFRunLoopSourceRef source = CFSocketCreateRunLoopSource(NULL, TCPServer, 0);
CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
CFRelease(source);
It works and I can do data transfer between my desktop and iPhone if I feed in the IP that was assigned to iPhone to the PC app on my desktop. However if I set a static IP for iPhone and try to get the PC app to connect to any device with that IP it doesn't work.
Same goes with my acquisition unit. The call back function is not called at all.
I am in desperate need of help so any form of help is welcomed. Thanks.
I'm sorry, but your post is not very clear.
Are you trying to establish a server socket on the iPhone, and connect to it from elsewhere?
This is going to be problematic for many reasons.
First is that your ip is not going to be the same. When connected to WIFI, you will have an ip that is routable at least on the current network.
But when connected to 3g (or lte, etc), you will likely not be able to route to the ip given at all.
Even if you did have a fully routable ip address on some interface that existed long enough, iOS is not designed for this. Your application will not be able to run efficiently in the background and listen to a server socket. You can simulate this with persistent sockets and voip background mode. However that requires a separate server component.
You could also try polling from the iPhone, that may satisfy your requirements.
Since its founding, CSDC faculty have submitted over $75 million in proposals and have received over $12 million in externally funded grants. CSDC faculty have also led two NSF Science and Technology Center proposals totaling an additional $100 million in requested funds. The center has been the home to the international Network for Computational Modeling in Social and Ecological Sciences for over a decade, an international network supporting social and cyberinfrastructure for reproducibility and transparency in scientific computation for social and ecological dynamics. It also hosts the leadership of the Open Modeling Foundation, an international consortium of modeling science organizations developing common standards for best practices across the social and natural sciences.
A suite of closely related themes has emerged out of the research activities of the center’s faculty, visitors, and postdocs.
- Innovation and evolution in life, society and technology
- Cooperation, conflict and collective action
- Social scaling
- Computation and modeling
- Evolution of social complexity
To encourage continuing innovative, transdisciplinary science across these research themes, the CSDC, Center for Behavior, Institutions and the Environment and the ASU/SFI Biocomplexity Center have cooperated to provide seed money for faculty-generated working groups to initiate new scholarship.
Innovation and evolution in life, society and technology
How do novel and innovative features emerge or evolve within groups? What processes drive innovation? How do biological, social and technological systems coevolve in humans and other organisms?
Cooperation, conflict and collective action
How do cooperative interactions among individuals within groups create higher-level entities—including new individuals? What is the role of conflict between individuals and groups in maintaining or limiting cooperation? How does collective action among cooperating individuals enable groups to engage in new kinds of actions and generate new levels of cooperation?
How do societies interact with and influence their social and physical environments? How do multi-level feedbacks between coupled social and biophysical systems transform both systems over different time scales? How can we manage these complex interactions to sustain an Anthropocene world?
Social scaling
How do social groups change in their dynamics and structure as they grow, develop and evolve? Are there scaling laws that apply across human and non-human systems? Can studies of scaling in non-human societies (like social insects) provide new insights into the evolution and growth of urban systems?
Computation and modeling
How can we develop next generation computational modeling that is sufficiently flexible and scalable to represent feedbacks between social and environmental systems in an integrative way at local to global scales? How can new, large scale datasets be applied to computational representations of complex systems? How can computational science be made more open and reproducible?
Evolution of social complexity
What are the evolutionary bases for human cooperation and its unique characteristics? How do biological and social evolution interact to create human sociality? How can research on non-human primates inform our understanding of the evolution of social complexity in the human lineage?
This article is about how to customize the look and feel of AsyncFileUpload from the Microsoft Ajax Control Toolkit. The source code for both VS 2010 and VS 2005 is available. A live demo is also available for you to see how it runs.
The Ajax Control Toolkit is a Microsoft open source project. Among its controls, there is a very useful one called "AsyncFileUpload". You can see Microsoft's official demo of this control here.
It's powerful but hard to customize from its built-in properties.
I successfully customized it through two approaches. You can see my live demo here.
Using the Code
After you extract the source code, you can directly open the solution file (one for VS2010, the other for VS2005 just in case).
To keep it simple, they are both edited under "web site" project not "web application". You can directly run them by clicking "run" from IDE.
NOTE: VS2010 project needs .NET 4.0 installed locally for debugging. VS2005 project needs .NET 3.5 installed locally for debugging.
Points of Interest
The basic idea of customizing AsyncFileUploader is a mask trick: you make the AsyncFileUploader transparent and place a button or image that appears to cover it. In reality, the AsyncFileUploader sits on top of the masked button/image, so the user seems to click the customized button/image but actually clicks the AsyncFileUploader.

That's just the basic idea. I implemented it in two approaches.
Approach 1 is easy and quick. Basically, you make a customized button of a similar size to the AsyncFileUploader. The limitation of this approach is that you cannot make a big button, because the size of the AsyncFileUploader itself is hard to adjust (especially the height, if that is even possible).
Approach 2 is the one I'm proud of. The idea is that you cannot adjust the AsyncFileUploader's size easily, but you can move it. So I move it around inside the big image button, so that wherever the user clicks, the AsyncFileUploader gets clicked.
To make it more clear, I have a X/Y position box to show you the mouse position when it's in the image scope. If you really want to see how the internal control is moving, you can change the transparent value of the control (it's 0 to be totally transparent, you can change it to 50 as half transparent).
Things to be Aware of
I didn't find a good way to gracefully catch the oversized-upload error. I tried catching the application error in global.asax, and also the AsyncFileUpload built-in error handling. It seems global.asax can catch the specific error but cannot surface it gracefully. So I ended up using a warning UI plus <httpRuntime maxRequestLength="500"/> in web.config. It's not perfect, but it works in all major browsers.
Another thing to be aware of: The solution also works under latest IE9 RC, but you have to run it under IE8 compatible mode.
All codes are tested under major browsers: IE8/IE9 RC/Firefox 3.6/Chrome 9. It's nice to have such a built-in control to work with.
Hope this is helpful in your projects as well.
Issue with merging (or union) multiple "copy column" transformations
I have a legacy database that I am doing some ETL work on. I have columns in the old table that are conditionally mapped to columns in my new table. The conditions are based on an associated column (a column in the same table that represents the shape of an object, we can call that column SHAPE). For example:
Column dB4D is mapped to column:
B4 if SHAPE=5
B3 if SHAPE=1
X if SHAPE=10
or else Y
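In plain terms, the mapping is a lookup keyed on SHAPE with a default. A small Python sketch of the rule above (column and value names taken directly from the question):

```python
# dB4D's target column, keyed by SHAPE, defaulting to Y ("or else Y"):
SHAPE_TO_TARGET = {5: "B4", 1: "B3", 10: "X"}


def route_dB4D(row):
    """Return {target_column: value} for the legacy dB4D column of one row."""
    target = SHAPE_TO_TARGET.get(row["SHAPE"], "Y")
    return {target: row["dB4D"]}
```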
I am using a condition to split the table based on SHAPE, then I am using 10-15 "copy column" transformations to take the old column (dB4D) and map it to the new column (B4, B3, X, etc).
Some of these columns "overlap". For example, I have multiple legacy columns (dB4D, dB3D, dB2D, dB1D, dC1D, dC2D, etc) and multiple new columns (A, B, C, D, etc). In one of the "copy columns" (which are broken up by SHAPE) I could have something like:
If SHAPE=10
+--------------+--------------+
| Input Column | Output Alias |
+--------------+--------------+
| dB4D | B |
+--------------+--------------+
If SHAPE=5
+--------------+--------------+
| Input Column | Output Alias |
+--------------+--------------+
| dB4D | C |
+--------------+--------------+
I now need to bring these all together into one final staging table (or "destination"). No two rows will have the same size, so there is no conflict. But I need to map dB4D (and other columns) to different new columns based on a value in another column. I have tried to merge them, but can't merge multiple data sources. I have tried to join them, but not all columns (or output aliases) would show up in the destination. Can anyone recommend how to resolve this issue?
Here is the current design that may help:
As inputs to your data flow, you have a set of columns dB4D, dB3D, dB2D, etc.
Your destination will only have column names that do not exist in your source data.
Based on the Shape column, you'll project the dB columns into different mappings for your target table.
If the Conditional Split logic makes sense as you have it, don't try to Union All it back together. Instead, just wire up 8 OLE DB Destinations. You'll probably have to change them from the "fast load" option to the table name option. This means it will perform singleton inserts, so hopefully the data volumes won't be an issue. If they are, then create 8 staging tables that do use the "Fast Load" option, and have a successor task to your Data Flow perform set-based inserts into the final table.
The challenge you'll run into with the Union All component is that if you make any changes to the source, the Union All rarely picks up on the change (the column changed from varchar to int, sorry!).
Often asked: Which Two Protocols Manage Neighbor Discovery Processes On IPv4 Networks?
- 1 Which two protocols manage neighbor discovery processes IPv4?
- 2 Which protocol is supported by icmpv6 to facilitate neighbor discovery on an IPv6 network?
- 3 What is the Internet standard MTU?
- 4 Which protocols header would a layer 4?
- 5 What four functions do all routers perform?
- 6 Which type of protocol is concerned with addressing and routing?
- 7 What are the differences between IPv4 and IPv6?
- 8 Which three 3 of the following are legitimate IPv6 addressing schemas?
- 9 What is the size of an IP address in IPv6?
- 10 Can MTU be higher than 1500?
- 11 Why is MTU 1500?
- 12 What is a good MTU size?
- 13 What are the two categories of Igps?
- 14 What is BGP networking?
- 15 What OSI layer does IP?
Which two protocols manage neighbor discovery processes IPv4?
Which two protocols manage neighbor discovery processes on IPv4 networks? ARP and ICMP. ARP resolves an IPv4 address to a link-layer (MAC) address, while ICMP provides reachability testing and error reporting. (In IPv6, these functions are consolidated into the Neighbor Discovery protocol built on ICMPv6.)
Which protocol is supported by icmpv6 to facilitate neighbor discovery on an IPv6 network?
The IPv6 neighbor discovery (ND) process uses Internet Control Message Protocol version 6 (ICMPv6) messages and solicited-node multicast addresses to determine the link-layer address of a neighbor on the same network (local link), verify the reachability of a neighbor, and keep track of neighboring routers.
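The solicited-node multicast address mentioned above is derived mechanically from the unicast address: the ff02::1:ff00:0/104 prefix plus the low-order 24 bits of the address (per RFC 4291). A quick sketch using Python's standard ipaddress module (the example address is arbitrary):

```python
import ipaddress

def solicited_node(addr: str) -> str:
    # ff02::1:ff00:0/104 prefix + low-order 24 bits of the unicast address
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    prefix = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(prefix | low24))

print(solicited_node("2001:db8::123:4567"))  # ff02::1:ff23:4567
```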
What is the Internet standard MTU?
As mentioned, the common value of MTU on the internet is 1500 bytes. As you can see in the figure above, the MTU is made up of the payload (also referred to as data) plus the TCP and IP headers, 20 bytes each.
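That composition is easy to sanity-check: with a 1500-byte MTU and 20 bytes each for the IP and TCP headers (without options), 1460 bytes remain for payload:

```python
MTU = 1500
IP_HEADER = 20   # IPv4 header without options
TCP_HEADER = 20  # TCP header without options
payload = MTU - IP_HEADER - TCP_HEADER
print(payload)  # 1460
```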
Which protocols header would a layer 4?
Which protocol's header would a Layer 4 device read and process? Answer: TCP. A related question: what number does a host use to identify the application involved in a transmission? (A port number.)
What four functions do all routers perform?
What four functions do all routers perform? Connect dissimilar networks; interpret Layer 3 and Layer 4 addressing and other information; determine the best path for data to follow; reroute traffic if a primary path is down but another path is available.
Which type of protocol is concerned with addressing and routing?
The Internet Protocol (IP) is a protocol, or set of rules, for routing and addressing packets of data so that they can travel across networks and arrive at the correct destination.
What are the differences between IPv4 and IPv6?
KEY DIFFERENCE IPv4 is 32-Bit IP address whereas IPv6 is a 128-Bit IP address. IPv4 is a numeric addressing method whereas IPv6 is an alphanumeric addressing method. IPv4 binary bits are separated by a dot(.) whereas IPv6 binary bits are separated by a colon(:).
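Python's standard ipaddress module illustrates the size and notation difference directly (the addresses below are just documentation examples):

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")    # dotted-decimal notation
v6 = ipaddress.ip_address("2001:db8::1")  # colon-separated hexadecimal
print(v4.version, v4.max_prefixlen)  # 4 32
print(v6.version, v6.max_prefixlen)  # 6 128
```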
Which three (3) of the following are legitimate IPv6 addressing schemas?
The three types of IPv6 addresses are: unicast, anycast, and multicast. Unicast addresses identify a single interface. Anycast addresses identify a set of interfaces in such a way that a packet sent to an anycast address is delivered to a member of the set.
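Of the three, multicast addresses can be recognized syntactically (they use the ff00::/8 prefix), while anycast addresses are allocated out of the unicast space and so cannot be distinguished from unicast by inspection. A quick check with the stdlib:

```python
import ipaddress

# Multicast is identifiable by its ff00::/8 prefix; unicast/anycast is not.
print(ipaddress.ip_address("ff02::1").is_multicast)      # True
print(ipaddress.ip_address("2001:db8::1").is_multicast)  # False
```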
What is the size of an IP address in IPv6?
IPv6 uses 128-bit (2^128) addresses, allowing 3.4 × 10^38 unique IP addresses. This is equal to 340 trillion trillion trillion IP addresses. IPv6 is written in hexadecimal notation, separated into 8 groups of 16 bits by colons, thus (8 × 16 = 128) bits in total.
Can MTU be higher than 1500?
The maximum size of frames is called the Maximum Transmission Unit (MTU). Historically, Ethernet has a maximum frame size of 1500 bytes. An Ethernet packet larger than 1500 bytes is called a jumbo frame. An Ethernet frame uses a fixed-size header.
Why is MTU 1500?
The MTU (Maximum Transmission Unit) states how big a single packet can be. Since the backbone of the internet is now mostly made up of ethernet links, the de facto maximum size of a packet is now unofficially set to 1500 bytes to avoid packets being fragmented down links.
What is a good MTU size?
Add 28 to that number (IP/ICMP headers) to get the optimal MTU setting. For example, if the largest packet size from ping tests is 1462, add 28 to 1462 to get a total of 1490 which is the optimal MTU setting.
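That rule of thumb, where the 28 bytes are the 20-byte IP header plus the 8-byte ICMP header, is simple enough to encode:

```python
def optimal_mtu(largest_ping_payload: int) -> int:
    # 20-byte IP header + 8-byte ICMP header = 28 bytes of overhead
    return largest_ping_payload + 28

print(optimal_mtu(1462))  # 1490, matching the example above
```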
What are the two categories of IGPs?
There are two types of IGP: distance vector routing and link state routing. Distance Vector Routing Protocol gives each router in the network information about its neighbors and the cost of reaching any node through these neighbors.
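The distance-vector idea can be sketched in a few lines: every router repeatedly relaxes its cost table using its neighbours' tables (the Bellman-Ford principle). The three-router topology below is invented for illustration:

```python
# Toy distance-vector computation over an invented three-router topology.
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5}
nodes = {"A", "B", "C"}

cost = {}  # links are bidirectional with symmetric costs
for (u, v), w in links.items():
    cost[(u, v)] = w
    cost[(v, u)] = w

# dist[r][d] = best known cost from router r to destination d
dist = {r: {d: (0 if r == d else float("inf")) for d in nodes} for r in nodes}

changed = True
while changed:  # keep exchanging tables until nothing improves
    changed = False
    for (r, neighbour), w in cost.items():
        for d in nodes:
            if w + dist[neighbour][d] < dist[r][d]:
                dist[r][d] = w + dist[neighbour][d]
                changed = True

print(dist["A"]["C"])  # 3: the path A -> B -> C beats the direct cost of 5
```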
What is BGP networking?
Border Gateway Protocol (BGP) refers to a gateway protocol that enables the internet to exchange routing information between autonomous systems (AS). As networks interact with each other, they need a way to communicate. This is accomplished through peering. BGP makes peering possible.
What OSI layer does IP operate at?
The Internet Layer of the TCP/IP model aligns with the Layer 3 (Network) layer of the OSI model. This is where IP addresses and routing live.
|
OPCFW_CODE
|
Everyone has a tale to tell. Whether it’s about a time when they nailed an interview or a time when they totally botched one, we all have stories that illustrate our points of view. And for software developers, that means stories about the code we write. In this blog post, we’re going to talk about predictive software development—a path that can lead to success. By understanding how predictive software works, you can optimize your code for better performance and avoid common mistakes. So if you’re looking to improve your skills as a developer, read on for tips on how predictive software development can help you get there.
What is Predictive Software Development?
Predictive software development is the practice of using software analytics and machine learning to identify future problems and potential solutions. Predictive models are then used to develop software that is more likely to meet customer needs.
The benefits of predictive software development are twofold. First, it can help reduce the risk of developing faulty software. Second, it can help ensure that future updates and fixes are more timely and effective.
There are a number of different tools available for predictive software engineering. Some common ones include artificial intelligence (AI), text analytics, and machine learning. AI can be used to build models that predict user behavior or customer interests. Text analytics can be used to analyze user feedback or product reviews in order to identify patterns. Machine learning can be used to develop models that improve over time as data is fed into them.
The key thing to remember when using predictive software development is to always use cautionary measures. Never rely on predictions alone – always test them before implementing them in production!
What are the Benefits of Predictive Software Development?
Predictive software development is an approach to software development that uses simulation and modeling to predict the outcomes of software designs. Predictive models can help developers identify problems before they happen, estimate how long it will take to fix them, and predict the effects of changes to a design.
Predictive models can also help developers improve the quality of their code by predicting when defects will be introduced and estimating how much work will need to be done to fix them. In addition, predictive models can help developers plan for growth in a project by estimating how many new features will be added and how much work they’ll require to implement them.
The benefits of predictive software development are numerous and far-reaching. By using predictive models in the early stages of a project, developers can avoid wasting time fixing mistakes that would have been caught if they’d used more traditional methods such as testing and debugging. Predictive models can also help developers schedule additional time for testing after features are added because they know which areas will require the most attention.
Overall, predictive software development is an effective way to streamline the process of developing software and achieve higher-quality results while minimizing errors and wasted time.
How Predictive Software Development Can Help You Succeed
The field of predictive software development is a growing one, and there are plenty of benefits to reap. Predictive software development helps organizations identify problems early and correct them before they become bigger issues. It can also help developers design software more efficiently, eliminating wasted time and resources. And finally, it can help prevent defects from ever making their way into the final product.
But predictive software development doesn’t come without its challenges. The technology is still in its infancy, and there are many unanswered questions about how best to use it. But with careful planning and execution, predictive software development can enable your organization to achieve success both now and in the future.
Whether you’re a software developer, manager, or team lead, predictive software development can be a powerful tool for your organization. Predictive software development is a method of developing software that can anticipate future changes and problems. It can help improve the quality and accuracy of your software, reduce the time it takes to develop new features, and decrease the amount of time you need to maintain your software. If you are interested in learning more about predictive software development and how it can benefit your organization, read on for some tips on how to get started.
Why predictive software development is important
Predictive software development is an important discipline for any software company looking to stay ahead of the curve. Predictive models can help identify and prevent defects before they even happen. This can significantly reduce the time and cost associated with fixing defects, which in turn leads to a higher-quality product that is able to meet customer expectations. Predictive software development also allows companies to better plan their resources, predict future needs, and make informed decisions about product direction.
What types of data should be used for predictive modelling?
There are two main types of data that can be used for predictive modelling: categorical and continuous. Categorical data can be defined as a set of discrete values, whereas continuous data can be viewed as a collection of points with associated measurements.
Categorical data is more suited for predictive modelling because it is easier to define the relationships between variables. For example, in a survey, questions about age could be classified as categorical data. Answers could range from “less than 18 years old” to “more than 65 years old”. This type of data lends itself well to regression models, which use statistical techniques to identify patterns in the data.
Continuous data is less easy to categorize and can be more difficult to model using regression techniques. The measurement points in this type of data might correspond to different levels of performance or customer satisfaction. In order to get accurate predictions, it is important to understand the underlying trends in the data. Statisticians often use algorithms called classification methods or clustering methods in order to group similar measures into clusters or classes.
How to perform predictive modeling
Predictive software development is a process that uses data and algorithms to create models of future behavior. Predictive modeling can be used in a variety of contexts, including product development, advertising, and health care. The goal of predictive modeling is to identify patterns in data and use that information to make predictions about future outcomes.
There are several steps involved in predictive modeling. Data must be collected and analyzed first. This includes input from stakeholders, users, and other sources of information. Models then need to be created based on this data. These models can be used to predict future outcomes, such as customer behavior or product performance. Finally, the predictions made by the models must be evaluated to determine their accuracy.
Predictive modeling is an effective tool for predicting future outcomes. It can help improve the quality of products and services by identifying problems early on in the development process. Predictive modeling also has applications in areas such as advertising and health care prediction.
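As a toy illustration of the steps above (the "module size vs. defects" data here is entirely invented): collect data, fit a simple least-squares model, make predictions, then evaluate the error:

```python
# Step 1: collect data - (lines of code, observed defects)
data = [(100, 2), (200, 4), (300, 6), (400, 8)]

# Step 2: create a model y = a*x by simple least squares
a = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

# Step 3: predict future outcomes
predict = lambda loc: a * loc

# Step 4: evaluate the predictions against the observations
errors = [abs(predict(x) - y) for x, y in data]
print(a)                   # 0.02
print(max(errors) < 1e-9)  # True: the toy data is exactly linear
```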
What are the benefits of using predictive software?
The use of predictive software development can lead to many benefits, both short- and long-term. Predictive software uses statistical models to make predictions about the future, which can help developers create more accurate and efficient code.
Short-term benefits of predictive software’s development include more accurate prediction of product requirements and decreased development time. Long-term benefits include increased product reliability and improved customer satisfaction. Predictive software can also help identify problems early in the developments process, saving time and money.
Predictive software development is a growing field with great potential for success. With the right approach, predictive software can help organizations make better decisions faster and improve processes across a wide range of industries. If you are interested in learning more about predictive software developments, I encourage you to read this article and explore some of the resources available on PredictiveIO.com. Good luck on your journey to becoming a predictive software developer!
|
OPCFW_CODE
|
- Comprehensive, hands-on experience directing product development teams from inception through to market introduction and product support.
- Proven track record converting requirements from business executives, scientists, engineers, and marketing staff into cost-effective, on-time, maintainable solutions.
- Expert in software quality systems, design controls, agile processes, object oriented design techniques and languages to produce software, components and applications with clear customer value in highly regulated industries.
- My creative insights, engineering versatility, rigorous analysis, and personal leadership have delivered breakthrough products in rapidly evolving industries.
- Created an automated cloud-based testing and validation platform for machine learning software. The platform allowed quick comparison of new models of gene expression used to identify early risk of autism. Established a quality system to support deep learning to identify disease states from pathology images.
- Wrote embedded software alongside fully automated unit / integration tests for the clinical prototype of a needle-less, microprocessor controlled autoinjector. The successful human tests of the device were critical to securing early funding.
- Designed secure communications architectures, controls and protocols for a variety of connected medical devices to support remote monitoring, scheduling, authorization and software update. Established scalable, secure cloud environments for continuous integration, testing, and deployment using infrastructure-as-code.
- Managed a team of robotics software engineers developing new sensors and manipulators for defense applications. Implemented a common set of engineering practices across the three locations to allow innovations to be shared among the teams.
- In-depth expertise in medical device software development and member of the AAMI Medical Device Software Committee responsible for reviewing and approving ANSI/AAMI/IEC 62304 and the AAMI TIR 45 on “The use of agile practices in the development of medical device software”. Lead internal auditor for compliance to ISO standards 13485, 14976, and 62304.
- Led the architecture and design for a high speed, single molecule DNA sequencing instrument with unprecedented performance. Developed critical components and infrastructure to continuously deliver value in a dynamically changing market.
- Evaluated technical and market potential of investment opportunities for new venture capital fund. Review encompassed technology, intellectual property, marketing plans, and operational viability.
- Conducted a comprehensive review of the engineering processes of a leading clinical device manufacturer. The review identified critical deficiencies in quality and project management systems and outlined effective solutions balancing both their technical and business needs while also providing a realistic assessment of the deliverable schedule for corporate planning.
- Developed a universal shopping cart service consolidating order information from any Internet merchant. Converted the business to a successful consulting company when internet business climate changed. Developed the business model, wrote the business plan, and solicited venture capital funding. Established strategic partnerships with analysts and industry trade groups. Secured affiliate agreements with merchants and consolidators. Conducted consumer surveys, test marketing and competitive analysis, established corporate image, and defined product launch program.
TECHNICAL SKILLS and LANGUAGES
|Agile Software Development
|C, C++, C#
|Python, Go, Rust
|
OPCFW_CODE
|
finding sql geography point within rectangular (polygon)
I have an interesting/annoying issue with finding lat and long of land marks inside the rectangular boundary.
I believe my two points are inside my rectangular boundary, but as you can test yourself, the result of the first select is false instead of true!
DECLARE @boundingRect varchar(1000)
DECLARE @maxLat VARCHAR(20)
DECLARE @minLong VARCHAR(20)
DECLARE @minLat VARCHAR(20)
DECLARE @maxLong VARCHAR(20)
set @maxLat ='-36.06631759541187'
set @minLong ='125.23310677812492'
set @minLat ='-44.43329881450396'
set @maxLong='167.04707162187492'
SET @boundingRect = 'POLYGON((' + @minLong + ' ' + @minLat + ', ' +
@maxLong + ' ' + @minLat + ', ' +
@maxLong + ' ' + @maxLat + ', ' +
@minLong + ' ' + @maxLat + ', ' +
@minLong + ' ' + @minLat + '))'
DECLARE @Bounds AS Geography =GEOGRAPHY::STPolyFromText(@boundingRect,4326)
DECLARE @point1 AS GEOGRAPHY = GEOGRAPHY::Point(-37.81502, 144.94601, 4326)
DECLARE @point2 AS GEOGRAPHY = GEOGRAPHY::Point(-38.81502, 144.94601, 4326)
SELECT @Bounds.STIntersects(@point1)
SELECT @Bounds.STIntersects(@point2)
To give you background, I have a list of landmarks (lat, long) that I want to load on Google Maps. Since there are too many landmarks, I cannot return all of them at once. I need to return only the landmarks in the area visible to the user, i.e. in their viewing boundary. I'm getting the north west (max lat, min long) and south east (min lat, max long) corners of the Google Maps boundary and sending them to my stored procedure, which returns the list of landmarks within that boundary. However, as I explained above, some landmarks are missing from the list.
@point1 does not intersect due to the curvature of the earth:
Azadeh, If you want to use Geography polygon and still handle the curvature problem, I suggest you to add more points to the polygon.
For small distances the geometry data type is fine; for very long distances it's best to use geography. Also, if you end up with a lot of markers in total, you can use clustering: https://developers.google.com/maps/articles/toomanymarkers
Actually, geography polygon isn't rectangle:
If you want rectangle, you can use geometry polygon:
You could use geometry, but shouldn't. Why? Because the data represents points and a region on the oblate spheroid we call home and as such doesn't conform to Cartesian geometry.
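You can see the sag numerically. The sketch below uses a spherical great-circle midpoint (plain Python; SQL Server's geography type uses the WGS84 ellipsoid, so its exact numbers differ slightly). The midpoint of the polygon's two northern corners lands south of @point1's latitude, which is why the point falls outside:

```python
import math

# The polygon's two northern corners (from the question) share latitude
# -36.0663...; their great-circle edge does not follow that parallel.
lat = math.radians(-36.06631759541187)
lon1 = math.radians(125.23310677812492)
lon2 = math.radians(167.04707162187492)

# Standard spherical great-circle midpoint formula
dlon = lon2 - lon1
bx = math.cos(lat) * math.cos(dlon)
by = math.cos(lat) * math.sin(dlon)
mid_lat = math.degrees(math.atan2(
    2 * math.sin(lat),
    math.hypot(math.cos(lat) + bx, by)))

# The edge sags south of @point1 (latitude -37.81502), leaving it outside.
print(mid_lat < -37.81502)  # True
```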
@point1 does not intersect; this can be verified by:
DECLARE @boundingRect varchar(1000)
DECLARE @maxLat VARCHAR(20)
DECLARE @minLong VARCHAR(20)
DECLARE @minLat VARCHAR(20)
DECLARE @maxLong VARCHAR(20)
set @maxLat ='-36.06631759541187'
set @minLong ='125.23310677812492'
set @minLat ='-44.43329881450396'
set @maxLong='167.04707162187492'
SET @boundingRect = 'POLYGON((' + @minLong + ' ' + @minLat + ', ' +
@maxLong + ' ' + @minLat + ', ' +
@maxLong + ' ' + @maxLat + ', ' +
@minLong + ' ' + @maxLat + ', ' +
@minLong + ' ' + @minLat + '))'
DECLARE @Bounds AS Geography =GEOGRAPHY::STPolyFromText(@boundingRect,4326);
DECLARE @point1 AS GEOGRAPHY = GEOGRAPHY::Point(-37.81502, 144.94601, 4326);
DECLARE @point2 AS GEOGRAPHY = GEOGRAPHY::Point(-38.81502, 144.94601, 4326);
SELECT @Bounds.STIntersects(@point1);
SELECT @Bounds.STIntersects(@point2);
SELECT @point1, 'Point 1'
UNION ALL
SELECT @Point2, 'Point 2'
UNION ALL
SELECT @BoundingRect, 'Rect'
You win again, Horner.
Thanks Jason! Loved the idea of using union and see the results in "Spatial results" window
|
STACK_EXCHANGE
|
The $425,000 Question
I wanted to take a moment to address a question that has been circulating around social media since we launched the Codename: Morningstar Kickstarter.
Why are we asking for $425,000?
First of all, I want to apologize for not providing more transparency. When we were assembling our Kickstarter materials, we didn't expect that our ask amount would generate such interest. Speculation has been all over the board from "trying to recoup our investment" to "keep a skeleton crew going for 5 years of support".
I'm afraid the answer isn't anything so nefarious - no secret government agencies are involved.
The Trapdoor team working on Morningstar consists of six senior-level developers, a creative director who leads both UX and visual design efforts, a web developer, a production artist, a data entry specialist, a content architect, two QA engineers, three support and social media staff, and a number of contract folks for specialized tasks. These are full-time employees dedicated to the project - far from a skeleton crew. Throw in a commercial-grade infrastructure - we have 9 dedicated Linux servers for the project - and a fully-stocked QA lab with scores of tablets, phones, etc. for testing, and you can quickly see how $425,000 is a real world target. Trapdoor is a commercial software development company, and this is how we approach large projects. We have already invested $1.2M in the project to date.
Why will it take 6 months to complete a project that was "almost" complete?
Since we can't release Morningstar with the D&D 5e rules (at least until/if WotC announces their OGL strategy), we have to refactor the Expert System (our rules engine) for Pathfinder PRD. This is reflected in our April 2015 release date - which includes support for our three core platforms (iOS, Android, and web), QA, play testing, etc. We could have stopped there with a smaller ask, but we didn't. Why? Our July 2015 release date for the Forge represents a significant expansion of our original functionality. Where the original plan (with WotC) allowed users to create adventures and campaigns for their own use as stand-alone documents, we now plan to allow user-created content to be published to the store as fully interactive titles. This requires moving the automation and functionality of our commercial Story Machine technology into Morningstar while making the experience simple, fun and fast.
This is an aggressive plan, and we have an experienced team to execute on our vision.
As a final thought, we are not replicating existing software. We are building something remarkable, captivating and new. I have seen Morningstar in action in my home game, and I don't want it to go away. I urge you to take the leap of faith needed to pledge to the Kickstarter.
|
OPCFW_CODE
|
In Python, is there a way to detect the use of incorrect variable names; something like VB's "Option Explicit"?
I do most of my development in Java and C++ but recently had to write various scripts and picked up Python. I run Python from the command line on scripts, not in interactive mode.
I like a lot of things about the language, but one thing that keeps reducing my productivity is the fact that I get no advance warning if I am using a variable that is not yet defined.
For example, somewhere in the code I forget to prefix a variable with its declaring module, or I make a little typo, and the first time I learn about it is when the program crashes.
Is there a way to get the python interpreter to throw advance warnings if something might be funky when I access a variable that hasn't been accessed or set somewhere else in the program? I realize this is somewhat against the philosophy of the language, but I can't be the only one who makes these silly errors and has no way of catching them early.
Pydev is pretty well integrated with Pylint, see here -- and pylint is a much more powerful checker than pyflakes (beyond the minor issue of misspelled variables, it will catch style violations, etc, etc -- it's highly customizable for whatever your specific purposes are!).
Looks good. I'll give it a shot tomorrow. I have to refactor a lot of code and I was dreading it because of the naming issues; this will certainly help. Thank you!
Although there is no question about pylint being much more complete than pyflakes, the performance of pyflakes makes it a valid choice in some situations, IMO.
Guess it depends on what HW you're using for development -- I mostly use a slow Macbook Air, a semi-slow Macbook Pro (both 1st generation), and a Linux workstation that was state-of-the-art when I got it over 4 years ago... on each of those, I find pylint's performance to be hardly an issue (it gets run automatically, with a very picky customized suite of style checks, whenever I request a code review or submit code that's passed code review). That's on a Python project with a few dozen thousands lines -- if you work on much larger projects or slower HW, your mileage may vary;).
There are some tools, like pylint or pyflakes, which may catch some of those. pyflakes is quite fast, and usable on many projects for this reason.
As reported on pyflakes webpage, the two primary categories of defects reported by PyFlakes are:
Names which are used but not defined or used before they are defined
Names which are redefined without having been used
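The first category is exactly the "little typo" case from the question. Plain Python only raises it at run time, and only when the offending line actually executes, which is why a static checker helps:

```python
def average(values):
    total = sum(values)        # pyflakes: local variable assigned but never used
    return totl / len(values)  # pyflakes: undefined name 'totl'

# The typo goes unnoticed until the function is actually called.
caught = None
try:
    average([1, 2, 3])
except NameError as exc:
    caught = exc
print("caught:", caught)
```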
Looks cool. I hope that Eclipse's PyDev is configurable enough for me to add it as a step.
|
STACK_EXCHANGE
|
"""
find_init_weights.py
====================================
Finding the rational function weights needed to map a specific activation function
"""
import json
import numpy as np
from .utils import fit_rational_to_base_function
import torch
import os
from rational.numpy.rationals import Rational_version_A, Rational_version_B, \
Rational_version_C, Rational_version_N
def plot_result(x_array, rational_array, target_array,
original_func_name="Original function"):
import matplotlib.pyplot as plt
plt.plot(x_array, rational_array, label="Rational approx")
plt.plot(x_array, target_array, label=original_func_name)
plt.legend()
plt.grid()
plt.show()
def append_to_config_file(params, approx_name, w_params, d_params, overwrite=None):
rational_full_name = f'Rational_version_{params["version"]}{params["nd"]}/{params["dd"]}'
cfd = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
with open(f'{cfd}/rationals_config.json') as json_file:
rationals_dict = json.load(json_file) # rational_version -> approx_func
approx_name = approx_name.lower()
if rational_full_name in rationals_dict:
if approx_name in rationals_dict[rational_full_name]:
if overwrite is None:
overwrite = input(f'Rational_{params["version"]} approximation of {approx_name} already exist. \
\nDo you want to replace it ? (y/n)') in ["y", "yes"]
if not overwrite:
print("Parameters not stored")
return
else:
rationals_params = {"init_w_numerator": w_params.tolist(),
"init_w_denominator": d_params.tolist(),
"ub": params["ub"], "lb": params["lb"]}
rationals_dict[rational_full_name][approx_name] = rationals_params
with open(f'{cfd}/rationals_config.json', 'w') as outfile:
json.dump(rationals_dict, outfile, indent=1)
print("Parameters stored in rationals_config.json")
return
rationals_dict[rational_full_name] = {}
rationals_params = {"init_w_numerator": w_params.tolist(),
"init_w_denominator": d_params.tolist(),
"ub": params["ub"], "lb": params["lb"]}
rationals_dict[rational_full_name][approx_name] = rationals_params
with open(f'{cfd}/rationals_config.json', 'w') as outfile:
json.dump(rationals_dict, outfile, indent=1)
print("Parameters stored in rationals_config.json")
def typed_input(text, type, choice_list=None):
assert isinstance(text, str)
while True:
try:
inp = input(text)
typed_inp = type(inp)
if choice_list is not None:
assert typed_inp in choice_list
break
except ValueError:
print(f"Please provide a value of type {type}")
continue
except AssertionError:
print(f"Please provide a value within {choice_list}")
continue
return typed_inp
FUNCTION = None
def find_weights(function, function_name=None, degrees=None, bounds=None,
version=None, plot=None, save=None, overwrite=None):
"""
Finds the weights of the numerator and the denominator of the rational function.
Beside `function`, all parameters can be left to the default ``None``. \n
In this case, user is asked to provide the params interactively.
Arguments:
function (callable):
The function to approximate (e.g. from torch.functional).\n
function_name (str):
The name of this function (used at Rational initialisation)\n
degrees (tuple of int):
The degrees of the numerator (P) and denominator (Q).\n
Default ``None``
bounds (tuple of int):
The bounds to approximate on (e.g. (-3,3)).\n
Default ``None``
version (str):
Version of Rational to use. Rational(x) = P(x)/Q(x)\n
`A`: Q(x) = 1 + \|b_1.x\| + \|b_2.x\| + ... + \|b_n.x\|\n
`B`: Q(x) = 1 + \|b_1.x + b_2.x + ... + b_n.x\|\n
`C`: Q(x) = 0.1 + \|b_1.x + b_2.x + ... + b_n.x\|\n
`D`: like `B` with noise\n
plot (bool):
If True, plots the fitted and target functions.
Default ``None``
save (bool):
If True, saves the weights in the config file.
Default ``None``
overwrite (bool):
If True, weights already existing for this configuration are overwritten.
Default ``None``
Returns:
tuple: (numerator, denominator) if not `save`, otherwise `None` \n
"""
# To be changed by the function you want to approximate
if function_name is None:
function_name = input("approximated function name: ")
FUNCTION = function
def function_to_approx(x):
# return np.heaviside(x, 0)
x = torch.tensor(x)
return FUNCTION(x)
if degrees is None:
nd = typed_input("degree of the numerator P: ", int)
dd = typed_input("degree of the denominator Q: ", int)
degrees = (nd, dd)
else:
nd, dd = degrees
if bounds is None:
print("On what range should the function be approximated ?")
lb = typed_input("lower bound: ", float)
ub = typed_input("upper bound: ", float)
else:
lb, ub = bounds
nb_points = 100000
step = (ub - lb) / nb_points
x = np.arange(lb, ub, step)
if version is None:
version = typed_input("Rational Version: ", str,
["A", "B", "C", "D", "N"])
if version == 'A':
rational = Rational_version_A
elif version == 'B':
rational = Rational_version_B
elif version == 'C':
rational = Rational_version_C
elif version == 'D':
rational = Rational_version_B
elif version == 'N':
rational = Rational_version_N
w_params, d_params = fit_rational_to_base_function(rational, function_to_approx, x,
degrees=degrees,
version=version)
print(f"Found coefficients:\nP: {w_params}\nQ: {d_params}")
if plot is None:
plot = input("Do you want a plot of the result (y/n)") in ["y", "yes"]
if plot:
plot_result(x, rational(x, w_params, d_params), function_to_approx(x),
function_name)
params = {"version": version, "name": function_name, "ub": ub, "lb": lb,
"nd": nd, "dd": dd}
if save is None:
save = input("Do you want to store them in the json file ? (y/n)") in ["y", "yes"]
if save:
append_to_config_file(params, function_name, w_params, d_params, overwrite)
else:
print("Parameters not stored")
return w_params, d_params
|
STACK_EDU
|
I'm Stumped: Why is UIImage/Texture2D memory not being freed?
I've been looking everywhere trying to find a solution to this problem. Nothing seems to help.
I've set up this basic test to try to find the cause of why my memory wasn't being freed up:
if (texture != nil)
{
[texture release];
texture = nil;
}
else
{
UIImage* ui = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"image" ofType:@"png"]];
texture = [[Texture2D alloc] initWithImage:ui];
}
Now I would place this in touchesBegan and test by monitoring the memory usage using Instruments. At the start it's normally 11.5 - 12 MB.
After the first touch, with no object existing, the texture is created and memory jumps to 13.5 - 14 MB.
However, after the second touch the memory does decrease, but only to around 12.5 - 13 MB.
There is a noticeable chunk of memory still occupied.
I tested this on a much larger scale, loading 10 of these large textures at a time.
The memory jumps to over 30 MB and remains there, but on the second touch, after releasing the textures, it only falls to around 22 MB.
I tried the test another time, loading the images in with [UIImage imageNamed:], but because of the caching this method performs, the full 30 MB remains in memory.
It seems I have found the problem. I don't quite know why it happens, but when I run Instruments to monitor the memory usage, if I am monitoring the I/O activity at the same time (which is the default instrument that is initially loaded), the memory usage shown is a lot larger (over 3 times larger) and remains in memory even after the objects are dealloc'd. I assume this is because of the overhead of monitoring I/O activity.
Anyway, when I turn this off the memory usage initially reports at 3.16 MB (a lot better), jumps to 10 MB when loading the 10 huge textures, and goes right back down to 3.16 MB after I unload the textures. A brilliant result.
There's only one place in your code (from what we can see) where texture can be deallocated and that's at the [texture release]; statement.
You need to be executing that statement (or another one somewhere else). Did you verify that for every texture you alloc that you also free it? You can add NSLog statements to help, like so:
if (texture != nil) {
    NSLog(@"releasing texture instance: %08x", texture);
    [texture release];
    texture = nil;
} else {
    ...
    texture = [[Texture2D alloc] initWithImage:ui];
    NSLog(@"allocated texture instance: %08x", texture);
}
Perhaps texture is being retained somewhere else? For example, do you add it to a subview or to an array or a dictionary? Those retain their contents.
As a last resort, for really tough alloc/release tracking problems, I've overridden the retain, release, and dealloc methods to verify that they are called when I expect. This might be overkill at this point, but here's how: I added an int myRetainCount; ivar to help me keep track:
-(void)release {
NSLog(@"release %08x %2d -> %2d (%u)",
self, myRetainCount, myRetainCount-1, self.retainCount);
myRetainCount--;
[super release];
}
-(id)retain {
NSLog(@"retain %08x %2d -> %2d (%u)",
self, myRetainCount, myRetainCount+1, self.retainCount);
myRetainCount++;
return [super retain];
}
- (void)dealloc {
NSLog(@"dealloc %08x %2d (%u)", self, myRetainCount, self.retainCount);
// deallocate self's ivars here...
[super dealloc];
}
Thank you for your time, and such a quick response :).
For the sake of this test, the only place this texture is ever referenced (other than in the interface declaration) is in the code I have posted.
I am getting desperate for an explanation, and I may just try those overrides you have suggested.
However, I think the problem must be something else I don't know about. Maybe it is something specific to Texture2D, the way the iPhone handles memory, or the instruments themselves.
Using the debugger, I have shown that the dealloc methods for Texture2D are being called.
|
STACK_EXCHANGE
|
using System;
using System.Transactions;
using CoreDdd.UnitOfWorks;
using Rebus.Pipeline;
namespace CoreDdd.Rebus.UnitOfWork
{
/// <summary>
/// Support for CoreDdd's unit of work within a transaction scope for Rebus.UnitOfWork package.
/// For a unit of work without a transaction scope, please see <see cref="RebusUnitOfWork"/>.
/// Please note that a transaction scope is not needed to ensure messages published or sent from a message handler
/// are not published or sent when there is an error during the message handling, and
/// using <see cref="RebusUnitOfWork"/> is sufficient for this scenario.
/// This class allows another resource manager to be enlisted in the transaction scope.
/// </summary>
public static class RebusTransactionScopeUnitOfWork
{
private static IUnitOfWorkFactory _unitOfWorkFactory;
private static IsolationLevel _isolationLevel;
private static Action<TransactionScope> _transactionScopeEnlistmentAction;
/// <summary>
/// Initializes the class. Needs to be called at the application start.
/// </summary>
/// <param name="unitOfWorkFactory">A unit of work factory</param>
/// <param name="isolationLevel">Isolation level for the transaction scope</param>
/// <param name="transactionScopeEnlistmentAction">An enlistment action for the transaction scope. Use to enlist another resource manager
/// into the transaction scope</param>
public static void Initialize(
IUnitOfWorkFactory unitOfWorkFactory,
IsolationLevel isolationLevel = IsolationLevel.ReadCommitted,
Action<TransactionScope> transactionScopeEnlistmentAction = null
)
{
_transactionScopeEnlistmentAction = transactionScopeEnlistmentAction;
_unitOfWorkFactory = unitOfWorkFactory;
_isolationLevel = isolationLevel;
}
/// <summary>
/// Creates a new transaction scope and a new unit of work.
/// </summary>
/// <param name="messageContext">Rebus message context</param>
/// <returns>A value tuple of the transaction scope and the unit of work</returns>
public static (TransactionScope TransactionScope, IUnitOfWork UnitOfWork) Create(IMessageContext messageContext)
{
if (_unitOfWorkFactory == null)
{
throw new InvalidOperationException(
"RebusTransactionScopeUnitOfWork has not been initialized! Please call RebusTransactionScopeUnitOfWork.Initialize(...) before using it.");
}
var unitOfWork = _unitOfWorkFactory.Create();
var transactionScope = _CreateTransactionScope();
_transactionScopeEnlistmentAction?.Invoke(transactionScope);
unitOfWork.BeginTransaction();
return (transactionScope, unitOfWork);
}
/// <summary>
/// Commits the unit of work and the transaction scope.
/// </summary>
/// <param name="messageContext">Rebus message context</param>
/// <param name="transactionScopeUnitOfWork">A value tuple of the transaction scope and the unit of work</param>
public static void Commit(
IMessageContext messageContext,
(TransactionScope TransactionScope, IUnitOfWork UnitOfWork) transactionScopeUnitOfWork
)
{
transactionScopeUnitOfWork.UnitOfWork.Commit();
transactionScopeUnitOfWork.TransactionScope.Complete();
}
/// <summary>
/// Rolls back the unit of work and the transaction scope.
/// </summary>
/// <param name="messageContext">Rebus message context</param>
/// <param name="transactionScopeUnitOfWork">A value tuple of the transaction scope and the unit of work</param>
public static void Rollback(
IMessageContext messageContext,
(TransactionScope TransactionScope, IUnitOfWork UnitOfWork) transactionScopeUnitOfWork
)
{
transactionScopeUnitOfWork.UnitOfWork.Rollback();
}
/// <summary>
/// Cleans the unit of work and the transaction scope.
/// </summary>
/// <param name="messageContext">Rebus message context</param>
/// <param name="transactionScopeUnitOfWork">A value tuple of the transaction scope and the unit of work</param>
public static void Cleanup(
IMessageContext messageContext,
(TransactionScope TransactionScope, IUnitOfWork UnitOfWork) transactionScopeUnitOfWork
)
{
_unitOfWorkFactory.Release(transactionScopeUnitOfWork.UnitOfWork);
transactionScopeUnitOfWork.TransactionScope.Dispose();
}
private static TransactionScope _CreateTransactionScope()
{
return new TransactionScope(
TransactionScopeOption.Required,
new TransactionOptions { IsolationLevel = _isolationLevel },
TransactionScopeAsyncFlowOption.Enabled
);
}
}
}
|
STACK_EDU
|
12.2.1. The Plus/Minus (PMplot) Curves in Graphs
The PMplot feature allows the user (you) to plot a range plus and minus around a time series graph from a dataset. The range data comes at each voxel from another dataset, and is plotted plus and minus about the “base” dataset. That is, if the voxel value is 100 and the range value is 10, then the “base” curve will be at level 100 and the plus/minus curves will be at 110 and 90.
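As a quick numeric illustration of this arithmetic, the plus/minus curves are simply the base series shifted up and down by the range series (the voxel values below are made up for the sketch, not taken from an actual AFNI dataset):

```python
# Sketch of the PMplot arithmetic: the base curve plus/minus a range curve.
# The values below are illustrative, not from a real dataset.
base = [100, 104, 98, 101]   # "base" dataset time series at one voxel
rng = [10, 12, 9, 11]        # range dataset time series at the same voxel

upper = [b + r for b, r in zip(base, rng)]  # the "plus" curve
lower = [b - r for b, r in zip(base, rng)]  # the "minus" curve

print(upper)  # [110, 116, 107, 112]
print(lower)  # [90, 92, 89, 90]
```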
Getting this feature to work requires a few steps, which are illustrated in words and screen captures. The example uses the datasets Qorig.nii and Qsqrt.nii, each of which has 242 time points.
You need the two time series (3D+time) datasets; where those come from is entirely up to you.
Then, you open the AFNI GUI, choose Qorig.nii as the Underlay dataset, and open one of the Graph viewers. Below, I've switched the number of sub-graphs shown (the "matrix" value) down to a single curve, using the Opt->Matrix menu (or the 'm' keystroke shortcut).
You use the sub-menu item Opt->Tran 1D->Dataset#N to open the plugin that lets you choose the auxiliary dataset for graphing. After that dataset is chosen, you will make that dataset be graphed as a PMplot rather than the default of just making it an extra graph curve plotted on top of the base graph. The image below shows the menu path from Opt to Tran 1D to Dataset#N.
When you click/release on Dataset#N you will get a plugin popup as shown in the image below. Click on the square box to the left of Input#01 to activate that row, then use the Choose Dataset button to select the plus/minus range dataset; in this example, Qsqrt.nii.
Then click on the Set button in the dataset chooser (lower right in the image above), and then click on the Set+Close button in the plugin controller. These actions will “send” the chosen dataset to the AFNI graph viewer to be plotted as an auxiliary curve. At this point, this auxiliary curve is just plotted as a new curve (in red), along with the base curve. In this example, the range dataset is considerably smaller than the base dataset, so the extra curve appears way below the original curve (shown below):
You use the Opt->Colors, Etc.->PMplot menu items to tell AFNI that this auxiliary dataset is to be plotted as a plus/minus range about the base dataset, rather than as an independent curve (which is the default way to plot an auxiliary dataset time series). The image below shows what the menus look like when you press Opt, then select the Colors, Etc. sub-menu; near the bottom is the PMplot set of items. Note that at this point, PMplot is turned Off.
There are 3 ways to display the plus/minus range:
If you choose Bars (two items below Off), you get vertical bars plotted around the base curve, shown below:
Above, the bars are colored in cyan, which is the default color in the PMplot menu section. You can change that color by going to Opt->Colors, Etc. again.
You can also choose to plot the range as Curves around the base curve (below, in green):
... or as a Fill-ed area (below, in violet):
|
OPCFW_CODE
|
7. Implement password management policies:
Implement strong password policies, such as requiring passwords to be changed frequently and to contain a mix of letters, numbers, and special characters, and store passwords securely.
8. Regularly assess IAM policies: Regularly assess and update IAM policies to ensure that they are still relevant and effective, and to address any changes in the threat landscape or business requirements.
By implementing these identity and account management controls, organizations can improve the security of their cloud environments, reduce the risk of unauthorized access and data breaches, and meet regulatory requirements.
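The "store passwords securely" point above can be sketched with Python's standard library: keep only a random salt and a slow, salted hash, never the plaintext. This is a minimal illustration of the idea, not any particular product's API:

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    # Store only a random salt and a slow, salted PBKDF2 digest,
    # never the plaintext password itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=200_000):
    # Recompute the digest and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("S3cure!passphrase")
print(verify_password("S3cure!passphrase", salt, digest))  # True
print(verify_password("wrong-guess", salt, digest))        # False
```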
Explain Identity issues given
Identity issues refer to various aspects of establishing and verifying the identity of individuals or systems in a secure manner.
1. Identity provider (IdP): An identity provider (IdP) is a system that provides identity-related services to other systems. An IdP is responsible for managing the identities of users and systems, and providing authentication and authorization services.
2. Attributes: Attributes are characteristics or properties of an identity that can be used to make authorization decisions. For example, an individual's role or job title might be considered an attribute.
3. Certificates: A certificate is an electronic document that is used to verify the identity of an individual or system. Certificates are typically issued by a trusted third party and are used to establish trust between two systems.
4. Tokens: A token is a piece of data that is used to represent an identity. Tokens can be generated by an identity provider and passed between systems to allow authentication and authorization decisions to be made.
5. SSH keys: Secure Shell (SSH) keys are used to authenticate SSH connections. SSH keys are generated on a client machine and are used to identify the client to the server.
6. Smart cards: A smart card is a physical device that is used to store identity information, such as certificates and tokens. Smart cards are used to provide a secure and convenient way to store and access identity information.
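To make the "attributes drive authorization decisions" idea from item 2 concrete, here is a minimal sketch; the attribute names ("role", "department") are illustrative assumptions, not drawn from any specific identity provider:

```python
# Minimal attribute-based authorization check (illustrative only).
def authorize(attributes, required):
    # Grant access only if every required attribute is present and matches.
    return all(attributes.get(key) == value for key, value in required.items())

user = {"role": "admin", "department": "IT"}  # attributes from a hypothetical IdP
print(authorize(user, {"role": "admin"}))    # True
print(authorize(user, {"role": "auditor"}))  # False
```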
Explain the following Account types in identity control and management
Account types in identity control and management play a crucial role in ensuring the security and control of information systems and resources.
1. User account: A user account is created for individuals who require access to the information systems and resources. These accounts are assigned to individuals based on their job roles and responsibilities. The access to resources is defined and managed through the authorization and authentication processes.
2. Shared and generic accounts/credentials: Shared accounts are typically used by multiple individuals to access a specific system or resource. These accounts are generally used in situations where multiple individuals need access to the same resource, but it is not necessary to track who performed specific actions. However, the use of shared accounts can create security risks if proper management and control are not in place.
3. Guest accounts: Guest accounts are created for temporary or limited access to resources. These accounts are often used for visitors, contractors, or partners who require access to the organization's resources for a short period of time.
4. Service accounts: Service accounts are used by applications and services that run on information systems. These accounts have specific permissions and access to resources, and are used by applications to perform specific tasks. Service accounts provide a secure and controlled way for applications to access resources, as the permissions and access rights are defined and managed.
In conclusion, these account types play a critical role in ensuring the security and control of information systems and resources. The use of different account types provides the ability to enforce proper access controls, and to ensure that information is protected and available only to authorized individuals.
Explain the following Account policies
Account policies refer to the set of rules and guidelines that organizations put in place to manage and secure their user accounts.
1. Password complexity policies define the minimum strength requirements for user passwords. This can include requirements for the length of the password, the use of a mix of upper-case and lower-case letters, numbers, and symbols.
2. Password history policies determine the number of previous passwords that a user is not allowed to reuse. This helps prevent users from simply rotating through a small number of easily guessable passwords.
3. Password reuse policies restrict the number of times a user can reuse the same password.
4. Network location policies restrict access to user accounts based on their location. For example, an organization may only allow access to their systems from within a specific geographic region or IP address range.
5. Geofencing policies limit the geographic locations from where a user can log in to their account.
6. Geotagging policies require users to tag their location information with each login.
7. Geolocation policies restrict access to user accounts based on the location of the device being used to log in.
8. Time-based login policies limit the hours during which a user can log in to their account.
9. Access policies determine who is allowed to access specific resources and systems within an organization.
10. Account permissions policies determine what actions a user can perform with their account.
11. Account audits track and log all activity associated with a user account, including logins, changes to account settings, and resource access.
12. Impossible travel time/risky login policies restrict logins when the time between the last known location and the current login location makes the travel physically impossible.
13. Lockout policies automatically lock a user account after a specified number of consecutive login attempts have failed.
14. Disablement policies automatically disable a user account after a specified period of inactivity.
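Several of the policies above, such as complexity (item 1) and history (item 2), are mechanical checks. A minimal sketch of how such a check might look, with made-up rule names and thresholds:

```python
import re

def check_password_policy(password, min_length=12, history=None):
    # Return the list of policy violations; an empty list means the password passes.
    violations = []
    if len(password) < min_length:
        violations.append("too short")
    if not re.search(r"[a-z]", password):
        violations.append("missing lower-case letter")
    if not re.search(r"[A-Z]", password):
        violations.append("missing upper-case letter")
    if not re.search(r"\d", password):
        violations.append("missing digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        violations.append("missing special character")
    if history and password in history:
        violations.append("reused from password history")
    return violations

print(check_password_policy("Str0ng!Passw0rd"))  # []
print(check_password_policy("short", history=["short"]))
```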
|
OPCFW_CODE
|
NOTE: first, you may want to confirm that you can build your Dockerfile locally and if so, then move on to the following Velocity-specific troubleshooting steps as needed.
Docker builds can fail for any number of reasons, but Velocity-specific causes may be that the filepath to your Dockerfile or build context (i.e., the directory from which you would run the Docker build command locally) in the run configuration are incorrect. Check that the filepath to both the Dockerfile and the build context align with your current application.
If the above paths are correct, double-check the accuracy of the following elements if they are included in your Docker build process:
Why are my dependencies not being updated on image rebuild?
Velocity leverages advanced caching to speed up container image build times. Sometimes this caching will prevent packages that were added during a Velocity session from being added to the remote container.
You can click the Code Sync button to override Velocity's caching, and rebuild your remote image from scratch based on your local source code.
When creating a run configuration, why isn't the Kubernetes context field being populated?
Velocity uses your Kubeconfig file to populate options in the Kubernetes context dropdown. This file is located by default at ~/.kube/config. Alternatively, a non-default filepath to your Kubeconfig can be set with the environment variable KUBECONFIG.
To confirm that the Kubeconfig is accessible, you can run the following in a terminal window: cat ~/.kube/config or cat $KUBECONFIG. If the file is accessible, you will see its contents printed to the terminal.
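The lookup order described above (KUBECONFIG if set, otherwise ~/.kube/config) can be mirrored in a short sketch. This is the conventional way such tools resolve the path, not Velocity's actual implementation:

```python
import os

def resolve_kubeconfig():
    # Honor the KUBECONFIG environment variable if set; otherwise
    # fall back to the conventional default location.
    return os.environ.get("KUBECONFIG") or os.path.expanduser("~/.kube/config")

print(resolve_kubeconfig())
```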
When creating a run configuration, why aren't the Kubernetes namespace and workload fields being populated?
Velocity requires the same cluster access as kubectl. If the Kubernetes namespace and workload fields aren't being populated when you create a Velocity run configuration, you may not have system-wide access to the cluster from your local machine, or you may not have permission to access a given namespace within the cluster.
To confirm that you have system-wide access from your local machine to your cluster, open a new terminal window and run kubectl get all -n <namespace>. If you don't see output that is similar to the following, you likely need to re-authenticate for Velocity to access your cluster environment, or you don't have permission to access the specified namespace.
Why is the Docker build process taking a long time?
Velocity's build process can be impacted by the size of your Docker image and your internet connection speed. If your build process is taking longer than expected, you may want to look at optimizing your Docker image, and confirming that your internet connection is stable.
What does the "operation cancelled, not enough resources available" error mean?
Velocity-specific resources, i.e. the builder and the registry, as well as any service that is spun up by Velocity during a development session, respect any resource quotas that may exist in your Kubernetes namespace. If resources aren't spinning up because a resource quota would be exceeded, you may need to adjust the configuration of your resource quota.
You can identify the specific resource that needs to be adjusted by reviewing the logs of your resource quota object, and you can learn more about resource quotas in the Kubernetes documentation.
|
OPCFW_CODE
|
When reviewing products, you do get a bit more excited about certain things than others, and today I have one of those products for review. It is the Modiflash 722 Removable HDD Enclosure (SATA version) from ICY DOCK, and it excited me because it just looked so darn cool on the ICY DOCK website (not to mention the fact that when you go to the site you are greeted with a silhouette of a naked woman). The Modiflash 722 is something that everyone can appreciate: it adds a level of security and portability not found in any other enclosure that I know of. Read on to learn a bit more about the Modiflash 722…
ICY DOCK Modiflash 722 Removable HDD Enclosure (SATA)
Reviewed by: Kristofer Brozio AkA Dracos
Sponsor: ICY DOCK
Tech Specs,Features or the Basic Info:
ICY DOCK Modiflash 722 Removable Hard Drive Enclosure (SATA version)
Available Colors: Black
Gamers everywhere needing hard drive security, exchangeability, expandability, and manageability will enjoy the MB722 Series, which has a look that's suitable for every gamer's system. Illuminating through a four-segment motion electro-luminescence faceplate insert, your removable storage gear is no longer dull but sweet looking! Built with an aluminum body for cooling, the unit is also equipped with a back-lit cool blue LCD readout display indicating real-time drive temperature/activity/positioning/overheating and fan failures. The front buttons let you adjust the temperature that you wish to allow the drive to operate under; if the temperature exceeds it, both the visual and audio alarms will indicate accordingly for your caution. Further features include a metallic-finish eject handle for easy drive extraction and a secure key lock for keeping the drive safe. Most importantly, with the MB722 Series, you are able to quickly and easily share massive files among your fellow gamers if they've got a unit installed in their system too!
All gamer/non-gamer systems requiring storage drive security, exchangeability, expandability, manageability with a sweet system exterior.
* Drive Fit : 1 x 3.5" Hot Swappable SATA I or II
* Device Fit : 1 x 5.25" device bay
* Internal drive security, exchangeability, expandability and maintenance capabilities
* Advanced drive monitoring system built internally w/ LCD visual & audio indicators
* Stylish faceplate with aluminum cooling body
* Warm air outtake fan cooling of drive
* Safeguard key lock capable of power control to disable data accessibility from others
* Accessible Master/Slave setting thru the drive tray without having to physically remove the drive
* Compact size version in SATA is available for limited depth integration
Models & Specifications:
Item Number: MB722SKGF-B (Black)
Internal Host: 7-pin SATA
Drive Fit: 1 x 3.5" SATA I / II
Drive Bay: 1 x 5.25"
Transfer Rate: 1.5 Gb/sec.
Insert & Extract Connection: Via 64-pin industrial DIN converter
Structure: Aluminum body w/ partial plastic
Drive Cooling: Aluminum heat dispersion w/ 1 x rear outtake fan
Alarm Indication: Audio & visual
LED Indication: Real-time drive temp., drive activity/overheat/MASTER/SLAVE status, fan working/failure status, device usage time record
LED Display Color: Cool blue
Drive Security: 3-segment key lock
Dimensions (LxWxH): 235.5 x 148.0 x 42.0 mm
Weight: 2.10 lbs.
A Better Look at Things
The packaging of the ICY DOCK Modiflash 722 is your standard cardboard box enclosure, on the front we find a nice picture of the Modiflash 722 and information about optional EL slides to customize your Modiflash. Along the bottom of the front are some features in a picture type format that most people should be able to figure out.
On the back of the box we find a nice in depth look at the ICY DOCK Modiflash 722, along with its specs and features. There is a picture of the enclosure that has numbers on it that correspond to the pictures along the side of the back panel to let you know what things are.
Opening the lid we find the enclosure nice and tightly packed away in styrofoam and wrapped in a plastic bag for extra protection.
After fully unpacking the Modiflash we find only two things in the box, the enclosure itself and the installation guide. (We’ll find later that the other parts are tucked away inside the enclosure)
Let’s take a nice close look at the ICY DOCK Modiflash 722 while it is still together. The front has a small LCD display that shows the temperature inside the housing, the drive status, the drive's activity, and the fan status as well. There are three small buttons located beneath the LCD display that are used for changing the settings of the display. On the far right is a three-position key lock: position one (down) is unlocked, removable, and powered off; position two (about 8 o'clock) is locked, non-removable, and powered off; and position three (9 o'clock) is locked, non-removable, and powered on. Basically, the entire front acts as the handle to remove the enclosure from its docking bay.
If we take a look at it from the right angle, we can see the locking mechanism, and the side of the chassis housing. The enclosure is aluminum, so it will help in heat dissipation. The left side has nothing special except the mounting holes, the same as the right side. If you look behind the front bezel and on the top, you will see a black piece of plastic, that is the anti dust cover that will swing down once the enclosure is removed from its chassis.
Looking at the view from the back we can get a quick look at the cooling fan and the connectors.
On the bottom we can see the mounting holes for the hard drive, and a better look at the actual connectors.
Removing the enclosure from the chassis is very easy, you need to make sure the key is turned to the proper position, and just slide it out with the handle.
When you slide it out the anti dust cover will snap down into place to keep dust out of your computer and the Modiflash chassis.
Here is a closer look at the locking mechanism and the back of the chassis where the connectors are located. This enclosure uses a SATA drive, but notice the standard Molex connection.
The inside of the back of the chassis has what looks to be a female IDE connection, but this is used to connect the enclosure with the chassis.
Here are some shots of the enclosure itself outside of the chassis, I like the shiny aluminum look, even though you really won’t see it inside of your case…. You can get a better look at the actual handle that is used to release the enclosure from the chassis in the first two pictures.
To insert the hard drive you need to remove the top of the enclosure, this is done very easily by pushing a plastic tab/button and sliding the cover toward the rear. Inside we find the two security keys and the mounting screws.
On the rear of the enclosure is the IDE-like connection that docks with the matching one inside the chassis.
Looking at the inside of the enclosure we find that the SATA hard drive will just slide into place into the female power and data ports.
Lifting the locking handle we find ventilation slits, the LCD and the buttons, on the inside we find the back of the LCD and the slits…
My first impression of the ICY DOCK Modiflash 722 was that it was very well made. Then once I got a better look I found this to be true, the Modiflash is very sturdy and designed well, the aluminum should do well at dissipating heat from your drive and the included fan will help that along nicely.
I like the security features of the Modiflash, knowing that I can turn my main drive off and still leave it secured in my case is nice, but of course, unless I bolt the case to the ground there isn’t much to stop someone from removing the drive anyway. I would have liked to have seen some type of security related to the screws, possible using hex style screws or secure hex screws would be nice as well.
Installation, Testing and Comparison
Installation is very easy, I decided to use my main drive, which is a Maxtor DiamondMax 10 SATA 200Gb hard drive as I figured that is the one that needs to be the most secure. Installing it is as easy as slipping it into the female SATA power and data connections inside the enclosure and then adding the screws to the bottom to secure the drive in place.
Once that is done, you can just slide the enclosure into the chassis and install it in your case. This is done the standard way you would install any 5.25" device: slide it in and secure it with screws, then attach the power and data cables. All done; you just need to turn the key to make sure it is in the power-on position and fire up your system.
The LCD display is a bright blue, and I found that the fan is whisper quiet, I was a bit worried about the noise level of the fan, but it is surprisingly silent. In the top left hand corner of the display is the HDD activity icon, to the left of that is the status of your drive, whether it is set to master or slave. There is also a tiny animated fan to indicate the fan is on, and below that is an LCD readout of the temperature inside the Modiflash with a thermometer icon next to it. Above the far left button is the word SET, this is obviously for setting the time and alarm of the Modiflash, above the other two buttons are arrows for making your selections.
I already mentioned that the fan is quiet, that fan also does a good job of keeping the drive cool as well, I only noticed a four degree rise in temperatures from the bare drive. I haven’t had the chance to actually remove the drive from the enclosure to take it anywhere, but I did test it and it slides out smoothly without a hitch, the chassis stays firmly in place while removing or replacing the enclosure.
The only thing I might worry about is the handle that is used to remove the enclosure from the chassis, it is made of plastic, and might break with rough use, but it seems very sturdy so I don’t see it being a problem unless extreme force is used on it or carelessness occurs.
I have to say though I was kind of bummed when I opened the box and didn’t see the cool optional face plates, but oh well, you can check them out on the site if you wish, they do add a bit of personality to the Modiflash 722.
In closing, I can say that I would really like a few more of these, these are just plain cool, the ICY DOCK Modiflash 722 has become a permanent part of my system. The Modiflash 722 is very well designed and manufactured, it seems like a very solid piece of equipment that would be a welcome addition to any system I believe. The Modiflash 722 not only looks good but it provides a level of security and functionality that I think everyone can appreciate, from the standard ‘Joe’ user to the professional who needs to swap drives in a server.
DragonSteelMods gives the ICY DOCK Modiflash 722 Removable HDD Enclosure (SATA) a 5 out of 5 score and our Recommended Award as well.
-Functional and useful
-Added level of security
-Ease of use
I would like to thank ICY DOCK for the chance to review their products.
|
OPCFW_CODE
|
the reply function is terrible and dysfunctional for a plethora of reasons, and it does absolutely nothing to "replace" quotes like what was apparently intended. replies are a barebones function and a very bad stand-in for the functionality of quotes.
for one, quoting a message directly adds all of the message text to your message. with this, you also have the option of deleting parts of the message that are unrelated to what you're saying. it makes it much easier for other people to know precisely what you're responding to. this also has the added benefit of not making people scroll up in a conversation to see what you're responding to.
i know replies have an auto-link to the message at hand, but that's still annoying if you're replying to something from hours ago! you still have to scroll back down to read the "reply" that someone was making!
for two, replies are difficult to read due to their text difference (made smaller, and the message trails off.) it's also annoying functionally to have to click to disable or enable pinging someone - with quotes, you could use it completely from your keyboard to just backspace if you didn't want to ping the person you were quoting. it's also pretty useless to show the icon of who someone is replying to and, as i said before, trailing off what their message actually is.
there was no need to get rid of the whole quote option to forcibly replace it with these "replies" that so many people dislike. bring back the quote feature, even if it's used in tandem with replies. there are many suggestions to add back quotes and improve replies:
- make reply-pings remember your last option, if not making a setting for whether your replies automatically ping someone or not.
- make the reply text bigger, and arguably, flat-out remove the icon of the person that's being replied to. it's useless information.
- add back the quote option to the right-click menu. it's been made deliberately more annoying to use quotes now, because people are being forced to accept the reply replacement. if you want to quote, now, you have to copy someone's entire message, paste it onto a > marked line to get the blockquote, and then have to shift-enter and press backspace to exit the blockquote to type their own response.
- slightly modify the quote function. the only thing i didn't like about quotes, and continue to loathe about replies, is the lack of signifying WHEN a user sent a message. example for my suggestion directly below:
> USER said at TIME:
> user message
> (OPTIONAL: message link)
this is massively more useful than replies for various reasons.
first of all, as stated before, you are showing the whole message that you're responding to, and you can remove chunks of the message that aren't related to what you're saying. some people only respond to specific parts of a message, and with replies, if you want to make it clear you're responding to a specific part of a message you have to manually spell it out yourself!
again, as stated before, you can simply backspace to remove a ping that you don't want to send. it's annoying to have to click on a button to disable a ping every time you want to reply to someone without pinging them, especially if they're already present in the conversation.
it also shows you the precise time that someone sent a message to show exactly how long it's been since they sent it and the user replied to it. this is helpful because some people quote messages and respond hours late (for example, if they're asleep or at work.)
i've seen plenty of people talk about how much they genuinely don't like the replies, and how much they want the quote functionality back. you can say "oh well the quotes are still there! you just have to use >!" but that doesn't change the fact that it's directly been made more difficult to use them because you have to do it manually. before quotes were replaced, all of the work was done for you: you clicked "quote," the message was quoted in full in a blockquote, and a new line was started outside of the quote for you.
|
OPCFW_CODE
|
Single-threaded apartment - cannot instantiate ActiveX control
I need to get information about applied CSS styles in an HTML page. I used AxWebBrowser and iterated IHTMLDOMNode. I'm able to get all the data I need, and I moved the code into my application. The problem is that this part runs inside a background worker, and I get an exception when trying to instantiate the control.
AxWebBrowser browser = new AxWebBrowser();
ActiveX control '8856f961-340a-11d0-a96b-00c04fd705a2' cannot be instantiated
because the current thread is not in a single-threaded apartment.
Is there any way how to solve this or other option than AxWebBrowser?
The problem you're running into is that most background thread / worker APIs will create the thread in a Multithreaded Apartment state. The error message indicates that the control requires the thread be a Single Threaded Apartment.
You can work around this by creating a thread yourself and specifying the STA apartment state on the thread.
var t = new Thread(MyThreadStartMethod);
t.SetApartmentState(ApartmentState.STA);
t.Start();
Thanks, great, this is working. One more question/problem: the class I'm using is just a class, and the AxWebBrowser looks like it needs to be added to this.Controls. Is there a way to fake the Controls, or will I need a separate Form for that?
@martin.malek There's no great way to fake that. The best bet is to create a new form.
Hi, the code should be t.SetApartmentState(ApartmentState.STA);
Confirmed Dawkins' comment, I also had to change it to ApartmentState.STA.
I've met this error when I want to open a Windows Form from my XNA game. I've opened the form with JaredPar's code and it works. Can I carry this code inside the Windows Form's code?
Beware that this code is not enough to properly implement an STA thread, it must also pump a message loop. Particularly WebBrowser will malfunction, it will not fire its DocumentCompleted event. Check this post for an alternative.
How do you do this in the Compact Framework. (i.e. compact framework has no [STAThread] ability.)
I am using Task, is there any solution?
Go ahead and add [STAThread] to the main entry point of your application; this attribute indicates that the COM threading model for the application is single-threaded apartment (STA).
example:
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new WebBrowser());
}
}
doesn't work in framework 4.5 - gives invalid arguments on the last line, since Application.Run expects a Form rather than a WebBrowser control
If you used [STAThread] on the main entry point of your application and still get the error, you may need to make a thread-safe call to the control, something like below. In my case, with the same problem, the following solution worked!
private void YourFunc(..)
{
    if (this.InvokeRequired)
    {
        // Marshal the call back onto the thread that owns the control.
        Invoke(new MethodInvoker(delegate()
        {
            YourFunc(..); // call your method again on the owning thread
        }));
    }
    else
    {
        // Safe to touch the control here.
    }
}
What should be invoked in here? The browser? In which state? In my case the browser is sitting on a dialog.
My problem with this was that I had my Main method marked as async. I fixed it just by making it static void and adding .Wait() to all awaitable calls inside.
|
STACK_EXCHANGE
|
Can independent mode run over all libraries at once and produce single commit (not multiple)?
Guys, I'm sorry for being so bothersome 🙄
Sync mode updates all changelogs and package.json versions at once (with one single version) and produces a single "bump" commit.
Independent mode goes through each library and produces a separate "bump" commit for each one.
Is it possible to run independent mode like sync mode (over all apps and libs) and produce a single commit, not multiple?
Thank you!) 🍺
Hi @glebmachine, you're not bothersome, I like your suggestions! I think it's a completely valid scenario for a monorepo.
To make it work we should rework the versionProject function, which runs standardVersion internally and commits for each project. For now we can't influence when and what gets committed, so we should decompose standardVersion into smaller functions to control the moment it commits.
What do you think @yjaaidi? It seems close to what you had in mind.
Hey y'all 😄
I have the same scenario as @glebmachine, would love to help if needed
Hey @EladBezalel! Happy to see you here 😊
Initially, we liked the idea of not reinventing the wheel and reusing the nx's options like --projects and --affected.
Now, considering this issue + the grouping issue https://github.com/jscutlery/semver/issues/125, there is a common prerequisite: running semver once on the workspace with a global configuration.
Let's discuss this here first then you can go ahead with a PR or we can plan a call or a mob programming party 🎉
Current
The idea is to drop the current independent configuration which looks like this:
{
"name": "my-app",
"root": "apps/my-app",
"architect": {
"version": {
"builder": "@jscutlery/semver:version"
},
},
},
{
"name": "lib-core-a",
"root": "packages/core/a",
"architect": {
"version": {
"builder": "@jscutlery/semver:version"
},
},
},
{
"name": "lib-core-b",
"root": "packages/core/b",
"architect": {
"version": {
"builder": "@jscutlery/semver:version"
},
},
}
Goal
and to move to something like this:
{
"name": "workspace",
"root": ".",
"architect": {
"version": {
"builder": "@jscutlery/semver:version",
"options": {
"projects": ["lib-core-a", "lib-core-b"],
"groupBy": ["packages/core"]
}
},
},
}
[ ] projects & groupBy are optional
[ ] default behavior if projects is not set is to version all projects
[ ] default behavior if groupBy is not set is to version projects independently
[ ] groupBy allows us to sync packages versions (packages can only be grouped by path). Syncing all packages would mean groupBy: ['.']
The main advantage here is that we have one configuration for the whole workspace.
The challenge is overriding the configuration for some projects, for example when plugins become available; we might want to give distinct plugins to each project.
@edbzn what do you think?
As mentioned in #125, we'll be moving to a new configuration syntax that would look something like this:
{
"name": "workspace",
"root": ".",
"architect": {
"version": {
"builder": "@jscutlery/semver:version",
"options": {
"singleCommit": true,
"configs": [
{
"name": "rx-state",
"type": "independent"
},
{
"name": "cdk",
"type": "group",
"path": "packages/cdk",
"packages": [
"packages/cdk/operators",
"packages/cdk/helpers"
]
}
]
}
}
}
}
As you can see in this example, we could add a singleCommit option to handle this feature.
hi guys, I'm looking forward to having this grouping or single-commit feature, do you have a plan for it?
@daton89 Yes, we are planning to do the grouping soon, first, we are focusing on post targets (#317), it will come next.
The single commit will come later because it needs a lot more work (related to #314).
Hi guys, is there any plan update on this request?
@davidren-apt This is the next thing I would like to work on BUT we need Nx to support workspace level executors before being able to introduce this change. So for now it's pending.
@edbzn since you have closed the issue in the Nx repo, could you give an update on how to use this now in combination with @jscutlery/semver?
It's not doable in the current state; we now need to refactor the whole plugin, and this will be handled in the next major release. We started to work on an RFC, so it's in the pipeline.
Thank you very much for clarification. 🚀
|
GITHUB_ARCHIVE
|
Hello guys, in this video/tutorial I will show you the 3 best ways to find tags for your YouTube videos.
Guys, YouTube is not a human; it is basically a computer program that runs on Google's algorithms. For example: when we upload a video to YouTube we need to put in a title, description, and tags, because YouTube never watches our videos. It is not human, so it only understands and reads text, and it ranks your videos according to the text you use in your titles, descriptions, and tags.
Many people use proper titles with proper descriptions, but they leave the tags box empty because they don't know how to find proper tags for their videos. So today I will show you all the possible ways to find tags for your YouTube videos.
1. Tags From Google (use Google search engine)
2. Tags From YouTube (use youtube search engine)
3. Tags From YouTube Tags Generator (it gives you tags automatically)
As you know, Google is the best place to find tags for your YouTube videos, because Google helps you rank videos on Google.
So first go to Google and type in the title of your video.
Now scroll down to the bottom of the results and you will see some suggested searches; these are your tags.
Copy them and put them into your YouTube video's tags box.
OK, now we have ranked our video on Google using Google tags, but we also need to rank the video on YouTube, because that is important for growing your channel.
Now go to YouTube and search your topic title; YouTube will also suggest some titles or keywords that people search for most, so copy these tags too and use them in your YouTube tags box.
OK, I have already covered YouTube search tags and Google search tags, so now here is another way to find tags for your videos. In this method we use a tags generator that gives you tags automatically.
Go to Google and search for a tags generator for YouTube. There are many websites that will generate tags for your videos.
Just open any of these websites, put in your title, and boom, your tags are ready; just copy them and put them on your video's tags.
Personally, I don't recommend this method, because I never use it myself.
NOTE: Use any method, but keep in mind when you use tags not to use wrong tags or other channels' names, like the names of big YouTube channels. Just put proper, relevant tags that match your video content.
How To Properly Tag Your YouTube Videos On Android
ABOUT ME :
I Make Tech Videos Based On SmartPhones especially Android Devices
Easy To Use Tutorials, Cool Android Tips & Tricks, Games & Apps Reviews.I Also showcase interesting accessories & Gadgets
So what are you waiting for S-U-B-S-C-R-I-B-E and Join the best Android Tips and Hacks Channel On Youtube
|
OPCFW_CODE
|
RAID Data Recovery and UNIX Deleted Files
Data recovery is at its most interesting when there are a number of issues to deal with, so combining a RAID failure with the deletion of files from a UNIX UFS file system gives rise to a particularly challenging data recovery.
The first aspect of the work is securing the data. Any reputable data recovery company, and there are many, will religiously secure all accessible data before beginning any work. Working live on the disks from a RAID without first having secured image copies of each one, and risking total data loss should there be any failure or write-back, is morally indefensible and commercially inept. There are many tools available to image-copy working disks.
Define the RAID
There is no standard RAID 5 organization. RAID 5 describes a method of striping data across a number of disks, with the creation of parity XOR data that is distributed across the disks.
The parity calculation for RAID 5 is straightforward, but the order in which the disks are used, the order in which the parity is distributed across the disks, and the size of each block of data on each disk are not. This is where the UFS (and EXT3 and XFS) method of dividing a volume into allocation groups is a great benefit. With NTFS all you really get is the start of the MFT and the MFT mirror, and there can be a number of RAID 5 organizations that result in these being positioned correctly, so there is a great dependence upon analyzing the file system to improve the analysis process. With UFS there is a copy of the superblock, followed by inode tables and allocation bitmaps, at equally spaced positions throughout the volume. This makes determining the RAID configuration relatively easy in most UNIX data recovery cases.
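The XOR parity scheme mentioned above is easy to demonstrate. Here is a minimal Python sketch (the four-byte blocks are made-up values, not from any real RAID): because XOR is its own inverse, the data from any one lost disk is simply the XOR of the surviving data blocks and the parity block.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Three data blocks striped across three disks; parity lives on a fourth.
d0 = b"\x01\x02\x03\x04"
d1 = b"\x10\x20\x30\x40"
d2 = b"\xff\x00\xff\x00"
parity = xor_blocks([d0, d1, d2])

# If the disk holding d1 fails, its block is simply the XOR of the
# surviving data blocks and the parity block.
recovered = xor_blocks([d0, d2, parity])
assert recovered == d1
```

The hard part in practice is not this arithmetic but, as described above, working out the disk order, parity rotation, and stripe size before the XOR can be applied to the right blocks.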
Analyze the data
Having worked out the RAID organization, the next challenge is to track down the required data. There are many who claim that deleted-file data recovery from a UFS volume is not possible, and there are good grounds for this claim, but it is not entirely accurate.
To begin with we must consider the manner in which UFS manages the allocation of data for files. Each file is described by an inode; this is where information pertaining to a file's dates and times, size, and allocation is stored. The allocation is a number of pointers to the blocks of data that form the file, plus some indirect block pointers. When a file is deleted the inode is freed for re-use and the allocation information in it is removed. This means there is no method of scanning the inodes for deleted files in the way that can be done by scanning the MFT entries of an NTFS file system to undelete files.
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Pathfinding;
using GameTypes;
namespace TingTing
{
    /// <summary>
    /// Flattens the tile nodes of several rooms into a single array so that
    /// per-search pathfinding state can be reset across the whole network.
    /// </summary>
    class MultiRoomNetwork
    {
        PointTileNode[] nodes = null;

        public MultiRoomNetwork(IList<Room> pRooms)
        {
            List<PointTileNode> tNodes = new List<PointTileNode>();
            foreach (Room r in pRooms)
            {
                tNodes.AddRange(r._tilesByLocalPositionHash.Values);
            }
            nodes = tNodes.ToArray();
        }

        /// <summary>
        /// Clears the search bookkeeping on every node before a new search.
        /// </summary>
        public void Reset()
        {
            foreach (PointTileNode t in nodes)
            {
                t.isGoalNode = false;
                t.isStartNode = false;
                t.distanceToGoal = 0f;
                t.pathCostHere = 0f;
                t.visited = false;
                t.linkLeadingHere = null;
            }
        }
    }
}
|
STACK_EDU
|
// Foundation not needed
// Note: Placing this enum inside the BinarySearch struct results in a compiler crash
#if swift(>=3.0)
enum BinarySearchError: ErrorProtocol {
case Unsorted
}
#else
enum BinarySearchError: ErrorType {
case Unsorted
}
#endif
struct BinarySearch<T: Comparable> {
    let list: [T]

    var middle: Int {
        return list.count / 2
    }

    init(_ list: [T]) throws {
#if swift(>=3.0)
        guard list == list.sorted(isOrderedBefore: <) else {
            throw BinarySearchError.Unsorted
        }
#else
        guard list == list.sort(<) else {
            throw BinarySearchError.Unsorted
        }
#endif
        self.list = list
    }

    func searchFor(datum: T) -> Int? {
        guard !list.isEmpty else {
            return nil
        }
        let middleItem = list[middle]
        if middleItem == datum {
            return middle
        } else if middleItem > datum {
            // Search the left half, excluding the middle element so the
            // sublist always shrinks and the recursion terminates.
            // try! is safe here: a sublist of a sorted list is still sorted.
            return try! BinarySearch(Array(list[0..<middle])).searchFor(datum)
        } else {
            // Search the right half, excluding the middle element, and
            // offset the returned index back into this list's coordinates.
            // try! is safe here: a sublist of a sorted list is still sorted.
            if let index = try! BinarySearch(Array(list[(middle + 1)..<list.count])).searchFor(datum) {
                return index + middle + 1
            }
            return nil
        }
    }
}
|
STACK_EDU
|
I am Harsh Vardhan Tiwari, a first-year Master's student in Financial Engineering, and I am working on web scraping, which is a technique of writing code to extract data from the internet. There are several packages available for this purpose in various programming languages; I am primarily using the Beautiful Soup 4 package in Python. There are various resources available online for exploring the functionality within Beautiful Soup, but the two resources I found the most helpful are:
- Web Scraping with Python: Collecting Data from the Modern Web by Ryan Mitchell
My project basically involves writing a fully automated program to download and archive data, mostly in PDF format, from about 80 webpages containing about 1,000 PDF documents in total. Imagine how boring it would be to download them manually, and more so if these webpages are updated regularly and you need to perform the task on a monthly basis. It would take hours and hours of work, visiting each webpage and clicking on all the PDF attachments. And even worse if you have to repeat this regularly! But do we actually need to do it by hand? The answer is NO!
We have a powerful tool called Beautiful Soup in Python that can help us automate this task with ease. About a hundred lines of code can accomplish it. I will now give you an overall outline of what the code could look like.
Step 1: Import the Modules
This step imports the modules that parse the webpage and download all the PDFs in it. I used BeautifulSoup, but you can use mechanize or whatever you want.
Step 2: Input Data
Now you enter your data, like the URL (the page that contains the PDFs) and the download path (where the PDFs will be saved). I also added headers to make the request look a bit more legit, but you can add your own; it's not really necessary. BeautifulSoup is used here to parse the webpage for links.
Step 3: The Main Program
This part of the program actually parses the webpage for links, checks whether each one has a .pdf extension, and then downloads it. I also added a counter so you know how many PDFs have been downloaded.
Step 4: Now Just to Take Care of Exceptions
Nothing much to say here, just making your program pretty... that is, crash pretty XD XD
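Since the post describes the four steps without showing them, here is one possible sketch of what they could look like. This is not the author's original code: the URL, path, and header values are placeholders, and the structure is just one reasonable way to follow the outline above.

```python
import os
from urllib.parse import urljoin
from urllib.request import Request, urlopen

from bs4 import BeautifulSoup  # pip install beautifulsoup4


def pdf_links(html, base_url):
    """Step 3, part one: collect every link on the page that ends in .pdf."""
    soup = BeautifulSoup(html, "html.parser")
    return [urljoin(base_url, a["href"])              # resolve relative links
            for a in soup.find_all("a", href=True)
            if a["href"].lower().endswith(".pdf")]


def download_pdfs(url, path, headers=None):
    """Steps 2-4: fetch the page, download each PDF, count successes."""
    headers = headers or {"User-Agent": "Mozilla/5.0"}  # step 2: optional headers
    os.makedirs(path, exist_ok=True)
    html = urlopen(Request(url, headers=headers)).read()
    count = 0
    for pdf_url in pdf_links(html, url):
        target = os.path.join(path, os.path.basename(pdf_url))
        try:                                           # step 4: handle exceptions
            with open(target, "wb") as f:
                f.write(urlopen(Request(pdf_url, headers=headers)).read())
            count += 1
        except OSError as e:                           # skip a bad link, keep going
            print(f"skipped {pdf_url}: {e}")
    print(f"downloaded {count} PDFs")
```

A call like `download_pdfs("https://example.com/reports", "pdfs")` (placeholder URL) would then walk the page and save every linked PDF into the `pdfs` folder.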
This post covers the case where you have to download all PDFs on a given webpage. You can easily extend it to the case of multiple webpages. In reality, different webpages have different formats, and it may not be as easy to identify the PDFs; therefore in the next post I will cover the different webpage formats that I encountered and what I needed to do to identify all the PDFs in them.
Thanks for reading till the end and hope you found this helpful!
|
OPCFW_CODE
|
Is it theoretically possible to design wormhole gates for space travel?
In the story I am writing I would like space travel to be as realistic as possible. Wormholes are a theoretical possibility but they are incredibly unstable as well as difficult to manage, but they are effective in fast travel to faraway locations. I considered the fact that in this story technology is much more advanced so anything is theoretically possible but I don't want things to be too unrealistic and in reality are completely impossible.
I thought about the typical engine attachments that would allow the entire ship to be engulfed in a wormhole but find it more unrealistic for each ship to have a wormhole engine system since it is unlikely each ship would have enough power to create a wormhole. So I thought of the possibility of wormhole gates throughout a solar system. Large, door frame shaped technology that creates and maintains a wormhole that leads to a specific location either in another solar system or somewhere else in that same solar system.
Is it possible to create technology that could stabilize a wormhole as well as create a definite destination?
'Is it possible' questions generally make poor questions, because the only way the answer is no is if you can definitively disprove something, which obviously we can't do when talking about future technology. A lot of things we have today would have seemed impossible 500 years ago. IMO, it's your job as a writer to provide the story and worldbuilding that would make it seem possible.
Wormholes in sci-fi function as a handwave for instantaneous travel in general. As such they are often mixed up with black holes, and neither of them is ever really portrayed realistically. You can basically do whatever you want with it as long as it sounds like some sort of portal in space...
Call 'em "space portals" or just "portals", and you should be good. I wouldn't even try to explain the science of it unless the process of inventing them is core to your story: realistic wormhole travel isn't the realm of science, or even science fiction.
Wormholes are acceptable scifi.
If your company were making wormholes I would be very skeptical. But if your fiction entails use of wormholes - sure! Having a wormhole is a fine, familiar shorthand way to explain how your characters are zipping over enormous distances. I thought the Stargate series was great and they are premised around wormholes exactly as you describe. They handwaved the tech - ancient aliens, you know. Ditto for warp drives, hyperspace; all that. FTL of any stripe is problematic for real physics but for scifi FTL is a means to an end, the end being lots of aliens getting together for a story.
If you are worried about individual egghead spoilers who cannot lose themselves in the witty banter of your characters and your awesome alien scenarios, have such an egghead show up in your story to protest the wormhole as your acknowledgement of those concerns. He can wear a white cravat, and the ladies like his accent.
Is it possible to create technology that could stabilize a wormhole as well as create a definite destination?
Based on our current knowledge wormholes are nothing more than a speculation like time travel.
As such we cannot know of any technology to support them. For that we would need equations describing their behavior.
The equations of general relativity have solutions for wormholes, but those solutions involve negative mass. Quantum physics kind of allows negative mass, maybe a little bit. General relativity and quantum mechanics don't fit together mathematically. In short, we don't know enough physics to say whether or not wormholes are possible.
We can say that if wormholes are possible, they are likely to have similar mass-energy scales as black holes, i.e. expect that you need VAST amounts of energy to create big ones. (You might be able to use mass.) The mass of the sun would make a wormhole a few km across, and the mass used is proportional to the radius. Tidal forces will probably be large enough to rip you apart unless the wormhole is far larger than that. Use tiny wormholes with high-powered lasers for FTL comms.
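The "few km for one solar mass" figure quoted above matches the Schwarzschild radius r = 2GM/c², which is indeed linear in mass. A quick back-of-the-envelope check, using standard SI values for the constants:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    # r = 2GM / c^2, linear in mass
    return 2 * G * mass_kg / c ** 2

r = schwarzschild_radius(M_sun)
print(f"{r / 1000:.1f} km")  # roughly 3 km for one solar mass
```

So a wormhole large enough for a ship (say, hundreds of km) would need on the order of a hundred solar masses, which is why the answer suggests tiny wormholes for communications instead.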
Mind uploading is definitely possible under known physics, and probably a lot easier than wormholes. So make the spacecraft be rocket engines (nuclear or antimatter) plus a computer, and maybe humanoid robots. The computer simulates human minds in whatever virtual world makes sense. This might be a buttons and dials star ship bridge, if that is the best way to control the rockets. Sometimes the human minds get connected to the robot bodies to explore the ground on planets. All the hardware is left when you go through a wormhole, only the digital minds are beamed through, but hardware is cheep, nanotech can make anything in moments.
You do need a reason why this is being done by human minds. (It is hard to make a good story involving only AI's without anthropomorphising them.) Rules for dealing with supersmart AI's 1) The AI always gets what it wants. 2) What it wants is entirely a function of how it was programmed 3) What it was programmed to want might or might not be what the programmer wanted it to want.
Possible reasons for humans running rockets.
AIs can achieve any precisely stated goal with superhuman skill, but they can't make difficult moral decisions and get the same answers a human would without simulating a human. The AI has been well programmed; it cares about human morality. But any part of its code that computes what is moral well enough to reliably get the right answer is actually quite similar to a simulated human. Difficult moral decisions need to be made, perhaps about the moral value of aliens, so the AI simulates some humans to make those decisions: the best moral philosophers humanity has to offer, on a bold mission to seek out alien life and decide if it is sentient.
The AI does what humans want. These humans want to explore space. The AI could make totally realistic virtual planets much more easily, but these humans want to explore real planets, so the AI makes the tech needed. Meanwhile >99% of humanity has decided that virtual worlds are fine, and are living in a vast array of virtual worlds somewhere.
Either way, a copy of the AI will reside in the computer with them and do tasks like steering through an asteroid field without hitting anything. (This is easy, as asteroids are really spread out.) But expect the story to be all character development and moral quagmires, with no parts where the crew struggle with some technical problem.
Just as a note: lots about AI and just a bit of wormholes in your answer.
|
STACK_EXCHANGE
|
Recently, along with BDD in Agile, I have been hearing a lot about CI and CD, and people getting confused about them. I was clear enough in my own understanding, but at one point the different opinions confused me, and I thought: should I step back and revisit my knowledge of CI and CD? Here I am … Read more What is Continuous (Integration, Delivery, Deployment, Testing)?
Here is the list of ChromeDriver command-line arguments. If you are using ChromeDriver for Selenium WebDriver or Protractor, then this is a handy list of command-line arguments that can be used. You may use this to look at the usage: https://code.google.com/p/chromium/codesearch#chromium/src/chromeos/chromeos_switches.cc Run chromedriver –help to see command line … Read more List of Chrome Driver command line arguments
Getting started with Cucumber JVM BDD: it's one of the most trending topics in the test automation world, with the adoption of Agile well on its way. To explain BDD there are lots of other sources around the web, so I am not going to explain it here; rather, I am sticking to explaining only the Java way … Read more Getting Started with Cucumber Java – BDD
This was one old post that was hanging in my drafts… never too late to publish it. The literal meaning of stale: old, decayed, no longer fresh. Exactly that is what is referenced here. Stale element: an element that is found on a web page and referenced as a WebElement in WebDriver, after which the DOM changes (probably due … Read more What is StaleElementException in WebDriver?
Check it out..!!! I am so happy that the efforts put in have paid off. At first it looked like "Is it possible?", but I said to myself, "Yes you can!!" With that Go! attitude I initiated the discussion in Google Groups https://groups.google.com/forum/#!topic/selenium-users/WR5PYDL_j5A Lots of support came in, and Simon Stewart was the first person to … Read more Announcing Selenium Conf ’14: Bangalore, India
It was a nice experience being a part of the GDG Chennai team. My first conference as a speaker, at Google Developers Group DevFest 2013, where I presented on automated testing with Google Chrome. Mainly it was about our Selenium WebDriver tool, but I presented specifically on ChromeDriver, since it was a Google conference and would be … Read more Google Developer’s Conference DevFest 2013
I have always had a craze for the Python language, since my graduation days. Recently I went to PyCon India 2013 and saw many developers/testers who are interested in learning and implementing Selenium with Python. It is also a reason that I wanted to show newbies who want to write some automated tests, both who … Read more Getting Started with Python WebDriver
|
OPCFW_CODE
|
How do I beat the big lizard/snake?
Giant serpent in water.......cavern.......how do you beat it? It's deep in the ground, what's the trick? Ammo doesn't work!!!!! I've used everything I've got
ddamnn - 8 years ago
Have you tried running back and forth between the gas vents on the floor? (You have to use the fire gun to ignite an explosion from the gas coming out of them, and you should be able to see it.) The gas vents will start from the one closest to you and move forward and back between the 3 located on the battle stage. You will be battling the same snake, but with 3 different behaviors. One of these is the head-attacking snake, and I believe this one is the key to beating the monster, because when the snake rears up close to strike with its head, you'll have the closest contact with the beast and the best chance of hurting it. I use the napalms when it gets really close, but you'll run out of them and eventually be reduced to the gas deposits coming from the floor, igniting them with the fire gun as the snake comes in for a head attack. But BE CAREFUL because YOU HAVE TO ANTICIPATE WHEN THE HEAD WILL STRIKE, AND USE THE GAS COMING OUT OF THE FLOOR AND YOUR FIRE GUN TO CAUSE AN EXPLOSION AS CLOSE AS YOU CAN, BUT YOU CAN'T WAIT TOO LONG TO STRIKE BECAUSE HE'S QUICKER. As the snake gets hurt it will retreat into the water, switching between the 2 other behaviors: the whipping snake and the rock-throwing snake. These 2 really kill me, because you have to dodge the whips and rocks, which is really difficult but not impossible. The snake rotates between these forms; it will be the head-striking snake first, then rotate between the other 2, and it turns back into the head-striking one only after you survive an attack from either the whipping or the rock-throwing snake by dodging with X. I have been on this level for about an hour and was looking for the answer too, so I guess you guys are having complications as well.
This is what I've been trying, and I think I got really close to beating him a couple of times, but this is a game of luck, because you can die from just about any attack the snake does. So good luck, and I hope this helps!
xbluexpugx - 8 years ago
When the head comes in to strike, use the flamethrower to ignite an explosion off the gas coming out of the ground. Do it quickly though. Then the thing will try to hit you with its tentacles. You can dodge them by hiding behind the big rocks on the back wall, but when they all break you will have to jump over the tentacles. Then when you burn it again, it will throw the stalagmites off the ceiling at you. They are very hard to dodge, so be careful. Then when it comes in for the kill, burn it again and it will die.
blackninja16 - 8 years ago
|
OPCFW_CODE
|
Take advantage of the bloom filter when deleting terms
Description
Today we delete terms in a forward-seeking fashion with seekCeil. This can short-circuit term seeks by comparing against the next existing term. But seekCeil cannot take advantage of the bloom filter, which has also significantly helped primary-key lookups. I wonder whether it is an acceptable trade-off to use seekExact here for a potential bloom filter implementation, or whether we can find a way to take advantage of both short-cuts (e.g. add a mightContains method to TermsEnum)?
(I'd think this is not only a problem for deleting terms but for all scenarios that need to seek sorted terms.)
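To illustrate why seekExact can exploit a bloom filter while seekCeil cannot, here is a toy Python sketch (not Lucene's implementation; the names BloomFilter, might_contain, and seek_exact are invented for the example). An exact-match lookup can be answered "definitely absent" by the filter alone, skipping the terms-dictionary seek entirely:

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: k hash positions over a fixed-size bit array."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, term):
        # Derive k positions from salted SHA-256 digests of the term.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{term}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, term):
        for pos in self._positions(term):
            self.bits[pos] = True

    def might_contain(self, term):
        # False means definitely absent; True means possibly present.
        return all(self.bits[pos] for pos in self._positions(term))


def seek_exact(bloom, terms_dict, term):
    """Exact-match lookup: the bloom filter short-circuits definite
    misses before the (expensive) terms-dictionary lookup."""
    if not bloom.might_contain(term):
        return None  # terms-dictionary seek skipped entirely
    return terms_dict.get(term)


# Build a segment's terms and its filter.
terms = {"apple": 1, "banana": 2, "cherry": 3}
bloom = BloomFilter()
for t in terms:
    bloom.add(t)
```

A seekCeil-style lookup has no such shortcut: "smallest term >= target" cannot be answered by a membership filter, so the sorted dictionary must be consulted even for absent terms.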
Hi @gf2121 ,
My name is Rohan Jha, I'm a Masters student at UT Austin taking a graduate Distributed Systems course. As part of my course project - contributing to OSS, I'm interested in contributing to Lucene by working on this issue/enhancement.
I appreciate the linked pointers, and would also appreciate any additional info/context you could provide for me to get started.
Thanks!
@robro612 please subscribe to the dev list and post your question there. We are more than happy to help you out from there.
@gf2121 do we have any numbers on whether it actually helps when applying deletes? I think we can assume that we make use of seekCeil in the common case, i.e. all terms have the same field. I would assume that
@gf2121 wild idea, but would it make sense to build an automaton off these terms and intersect it? We could reuse it for multiple segments. I am not sure how big the costs are for that, but it would potentially work in a codec-agnostic way?
do we have any numbers on whether it actually helps when applying deletes?
I made a naive benchmark comparing seekCeil and seekExact when deleting terms on a field with a bloom filter; the result shows the seekExact style is ~9% faster on average. (All terms are 16-byte UUIDs.)
| round | seekCeil (ms) | candidate (ms) | diff |
|-------|---------------|---------------|------|
| 1 | 46211 | 42783 | -7% |
| 2 | 44641 | 40698 | -9% |
| 3 | 44462 | 43471 | -2% |
| 4 | 47059 | 43842 | -7% |
| 5 | 45049 | 43609 | -3% |
| 6 | 49529 | 40953 | -17% |
| 7 | 44707 | 41498 | -7% |
| 8 | 45732 | 40785 | -11% |
| 9 | 45396 | 41321 | -9% |
| 10 | 48759 | 40760 | -16% |
| avg | 46154.5 | 41972 | -9% |
wild idea, but would it make sense to build an automaton off these terms and intersect it? We could reuse it for multiple segments. I am not sure how big the costs are for that, but it would potentially work in a codec-agnostic way?
This sounds great. I think intersecting with an automaton cannot take advantage of the bloom filter either, so would it be a competitive approach compared to seekExact or seekCeil? I'd like to give it a try.
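To make the intersection idea concrete, here is a simplified Python sketch (a real implementation would compile the delete terms into an automaton and intersect it with the TermsEnum; this toy version just merge-intersects two sorted lists, which has the same single-forward-pass access pattern):

```python
def intersect_sorted(delete_terms, segment_terms):
    """Merge-intersect two sorted term lists in one forward pass over
    each, analogous to intersecting a terms enum with an automaton
    built from the delete set (no actual automaton is built here)."""
    hits = []
    i = j = 0
    while i < len(delete_terms) and j < len(segment_terms):
        if delete_terms[i] == segment_terms[j]:
            hits.append(delete_terms[i])
            i += 1
            j += 1
        elif delete_terms[i] < segment_terms[j]:
            i += 1  # delete term absent from this segment
        else:
            j += 1  # segment term not being deleted
    return hits


# The same delete set can be reused across multiple segments.
deletes = ["b", "d", "f"]
seg1_hits = intersect_sorted(deletes, ["a", "b", "c", "d", "e"])
seg2_hits = intersect_sorted(deletes, ["d", "f", "g"])
```

Each segment's terms are visited at most once, independently of the codec, which is the reuse property the comment above is after.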
Hello,
I am planning to work on this issue. Can this issue be assigned to me please?
@SreehariG73 we generally don't assign issues here, but if you have a contribution to make, it would certainly be welcome
|
GITHUB_ARCHIVE
|
Is This Article for You?
- Migration of Riva-synced user mailboxes from Exchange on-premises to Office 365.
Migration tool used or to be used:
- Mailboxes have already been migrated or will be migrated to Office 365 by
- Using a third-party migration tool (like MigrationWiz), or
- Using the Microsoft staged migration method (no AD integration), or
- Importing mailboxes from PST files.
Sync configuration requirement:
- Riva sync policies are configured to either not sync calendar items or to sync calendars in uni-directional mode, from the CRM to Exchange only.
Migrating Instructions for Riva Cloud Corporate
Migrating Instructions for Riva On-Premise
- HIGHLY RECOMMENDED: Before attempting the migration, review the entire process with our professional services team. Contact the Riva Success Team.
- Determine which version is installed. If it is not at least 2.4.37, upgrade Riva On-Premise to the latest public release.
- To request professional services to lead your Riva Admin team through this procedure, contact the Riva Success Team.
- Prepare Office 365.
- Migrate the Riva-synced users to Office 365.
Step 4: Prepare Office 365
To prepare Office 365 for Riva and the Windows host server for connectivity to Office 365:
- Prepare corporate firewalls for connectivity with Office 365.
- Confirm connectivity from the Windows host server to Office 365.
Step 5: Migrate the Riva-synced users to Office 365
To reconfigure Riva as part of migrating the mailboxes to Office 365:
Immediately before migrating the mailboxes, in the Riva Service Monitor application (or Windows services), stop the service.
In the Riva Manager application, on the menu, select Policies. On the right pane, right-click the current sync policy, and choose Disable.
Migrate the mailboxes (including the mailbox for the Riva connection user for Exchange).
Log in to the Riva connection user mailbox to confirm that the mailbox is active.
In the Office 365 Admin Panel, assign the Riva Connection user to the Exchange ApplicationImpersonation Role, which will grant it the correct permissions required in Office 365. For instructions, see Prepare Office 365 Exchange permissions for Riva connections.
In the Riva Manager application, if the menu displays Setup, select Setup. On the right pane, double-click the existing EWS connection to edit it.
On the Connection Details page, change the Host to https://outlook.office365.com/EWS/Exchange.asmx, and reset the password. Change the Impersonation Method option from Delegate Full Access to EWS Impersonation.
Save the connection.
Double-click the existing EWS connection to edit it, and select the Test page.
Select Run Test for Riva to perform a connection test for the connection user. If the test passes, select OK. If the test fails, contact the Riva Success Team.
On the Test page, in the User Email field, add the email address of one of the users in the sync policy. Select Run Test for Riva to perform a connection test for the syncing user. If the test passes, select OK. If it fails, contact the Riva Success Team.
If both connection tests pass, select Save to save the connection.
In the Riva Manager application, on the menu, select Policies. On the right pane, double-click the sync policy to edit it.
Configure the following line-in-the-sand advanced options for the policy. Read about the Line-in-the-Sand procedure. We highly recommend contacting the Riva Success Team for this step.
Key = value
Sync.Crm.MinStartDate.Appointment = yyyy-mm-dd
Sync.Ex.MinStartDate.Calendar = yyyy-mm-dd
Sync.Crm.MinCreateDate.Appointment = yyyy-mm-dd
Sync.Ex.MinCreateDate.Calendar = yyyy-mm-dd
Sync.Ex.SyncStartTimeOverride.Calendar = yyyy-mm-dd
Sync.Crm.SyncStartTimeOverride.Appointment = yyyy-mm-dd
Sync.Crm.FirstSyncExistingItemOption.Calendar = SyncToCrm
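For illustration only, here is the table filled in with a hypothetical cutover date of 2017-06-01; substitute your actual migration cutover date, and confirm the exact values with the Riva Success Team before applying them:
Sync.Crm.MinStartDate.Appointment = 2017-06-01
Sync.Ex.MinStartDate.Calendar = 2017-06-01
Sync.Crm.MinCreateDate.Appointment = 2017-06-01
Sync.Ex.MinCreateDate.Calendar = 2017-06-01
Sync.Ex.SyncStartTimeOverride.Calendar = 2017-06-01
Sync.Crm.SyncStartTimeOverride.Appointment = 2017-06-01
Sync.Crm.FirstSyncExistingItemOption.Calendar = SyncToCrm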
Save the sync policy. When Riva prompts to start the service, select Yes.
Monitor data sync for the users, and ensure that you do not see any sync errors.
|
OPCFW_CODE
|
FACTS: In management science, the term "new normal" is used to refer to the socio-economic-political global situation of the last decades. There has been a significant increase in its use, not only in the general press but also in academic publications.
PROBLEM: The term was never defined; authors using it seem to attach different meanings to it, focusing on different, sometimes contradictory connotations.
RESEARCH PROJECT: Literature Review of scientific publications in management sciences using the term.
SCOPING: Only peer-reviewed journals ranked with A+ or A, only journals pertaining to four sub-disciplines (General Business Administration, Strategic Management, International Management, Organizational Human Resources)
DATA SET (after refinement): 16 journals > 38 papers > 115 hits (paragraphs containing “new normal”)
WHAT I DID UNTIL NOW: I inserted into an Excel document every paragraph containing the term (some paragraphs contain it more than once) and noted the paper it comes from and the paper's keywords for context.
I then proceeded to isolate information around the New Normal and tried to find connections between the pieces of information.
MY PROBLEM: I really need to hand in this assignment ASAP, but I got sick and that “activated” an autoimmune disease, so I am not able to focus or work at my laptop for more than an hour.
I have already done 90% of the mental work behind the research (coming up with an adequate mix of heuristics and methodology, restricting the scope to aim at quality, starting the coding process), but I am having difficulties because the Grounded Theory Method is designed for coding "natural" language, as in survey interviews.
Coding academic papers means coding language that is already at the highest level of sophistication and "compression" (not redundant, but conveying the most meaning in the shortest space possible). That makes it hard to go through the three stages of open, axial, and selective coding, because the ideas are already summed up and organized logically, with the authors already using connectors and presenting information causally (X is a cause of Y, Z is a consequence of K, ...).
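If it helps to mechanize part of the remaining coding, the first two stages can be roughed out in a short script. This is only a toy sketch: all code names and keywords below are invented, not taken from the actual study, and selective coding (choosing the core category) is left to the researcher:

```python
# Toy sketch of the coding stages: tag each paragraph with candidate
# open codes by keyword matching, then relate open codes to broader
# axial categories. All code names and keywords are invented for
# illustration only.
OPEN_CODES = {
    "remote_work": ["remote", "telework", "home office"],
    "uncertainty": ["uncertain", "volatility", "unpredictable"],
    "digitalization": ["digital platform", "online channel", "virtualization"],
}

AXIAL_CATEGORIES = {
    "work_transformation": ["remote_work", "digitalization"],
    "environment": ["uncertainty"],
}

def open_code(paragraph):
    """Open coding: attach every matching open code to the paragraph."""
    text = paragraph.lower()
    return [code for code, keywords in OPEN_CODES.items()
            if any(kw in text for kw in keywords)]

def axial_categories(codes):
    """Axial coding: group open codes under broader categories."""
    return sorted({category for category, members in AXIAL_CATEGORIES.items()
                   if any(code in members for code in codes)})

paragraph = "The new normal of remote work created unpredictable demand."
codes = open_code(paragraph)
categories = axial_categories(codes)
```

A script like this only pre-tags paragraphs; the interpretive work of refining codes and selecting the core category stays manual.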
HOW YOU CAN HELP: I need you to have a look at my Excel database and at the background information I will provide you with (i.e., the text of my research, about 16 pages) and basically do three things:
1. Help me code the rest of the paragraphs. 2. Suggest ways to organize the results and, after discussing the best option, 3. Help me write up the results. I will take care of the Discussion and Conclusion myself.
|
OPCFW_CODE
|
In order to keep the Microsoft Certified Professional program current, Microsoft Learning continually monitors trends and revises exams and certification requirements to keep pace with technology changes. Our goal is to give you several months' notice of upcoming exam retirements. Every quarter, we review our certification program plan, psychometric performance, and our upcoming exam list, and decide what to retire and when. This ensures that certifications reflect the latest Microsoft technologies, while MCPs can still be certified on the technologies they use every day.
Retirements planned for September 30, 2017:
Replacement Exam (if identified)
354: Universal Windows Platform - Application Architecture and UX/UI
355: Universal Windows Platform - App Data, Services, and Coding Patterns
74-678: Designing and Providing Microsoft Volume Licensing Solutions to Large Organizations
(NOTE: Previously scheduled for retirement on 12/31/17 and now on 11/15/17)
705 (in development; planned availability is late October/November)
Retirements planned for December 31, 2017
246: Monitoring and Operating a Private Cloud with System Center 2012
247: Configuring and Deploying a Private Cloud
534: Architecting Microsoft Azure Solutions (This exam is being replaced by 535, which will cover similar content, but the exam content domain is changing significantly enough that we want a clear differentiator for candidates who are preparing for this exam. The exam title is not changing. Keep your eye on Born to Learn for more details.)
535 – targeted to be available late November/early December
74-344: Managing Programs and Projects with Project Server 2013
MB2-709: Microsoft Dynamics Marketing
MB6-705: Microsoft Dynamics AX 2012 R3 CU8 Installation and Configuration
MB6-890: Microsoft Dynamics AX Development Introduction
MB6-892: Microsoft Dynamics AX Distribution and Trade
MB6-893: Microsoft Dynamics AX Financials
Retirements planned for January 31, 2018:
696: Administering System Center Configuration Manager and Intune
Retirements planned for July 31, 2018:
398: Planning for and Managing Devices in the Enterprise
488: Developing Microsoft SharePoint Server 2013 Core Solutions
489: Developing Microsoft SharePoint Server 2013 Advanced Solutions
496: Administering Visual Studio Team Foundation Server
497: Software Testing with Visual Studio
498: Delivering Continuous Value with Visual Studio Application Lifecycle Management
680: Windows 7, Configuring
685: Windows 7, Enterprise Desktop Support Technician
686: Windows 7, Enterprise Desktop Administrator
74-343: Managing Projects with Microsoft Project 2013
74-409: Server Virtualization with Windows Server Hyper-V and System Center
Do you plan to have some beta codes for 535?
Any reference for the cert content domain?
I bought the Azure triple pack (valid until the end of this year) to get certified on 532, 533, and 534. Taking 534 seems pointless to me if it will be replaced by 535. Can I use my voucher for 535 instead? Will you reflect this change in the Azure cert recommendations for 2018? 534 is the most-mentioned cert there.
Hi there, I have been hearing from colleagues that the 70-461 exam (SQL 2014 MCSE Track) will be expiring soon.
However I cannot find any article to confirm this.
Can you please advise.
To clarify the 534 to 535 information above. 535 is NOT a new exam. It's simply a replacement for 534. 535 has the same name and will accrue to the same certifications that 534 does. In addition, if you have passed 534, you will continue to hold any associated certifications that you have earned, and it will continue to count toward the same certifications as 535 will. Anything that applied to 534 will also apply to 535.
We decided to change the exam number to make the transition from the old objective to the new one better for candidates who were preparing for the exam. Rather than an abrupt change, which would have been the experience had we left the exam number the same, we are giving candidates the option to take 534 by the end of the year if they have already been preparing for it. If you are just starting your preparation, though, you should focus on the 535 objective domain.
To learn more about the differences between these exams and why we decided to replace 534 with 535, go here:
Remember that this is not a new exam. It's a replacement; essentially the same content area but a more refined focus on what we now think the architecting role should cover.
@Andreas: no plans to retire 461.
@Kasun: I don't understand your question
What about 70-743? When will it retire?
|
OPCFW_CODE
|
I'm the IT manager for a very small company. When I was brought on about 2 years ago, we had 2 Exchange servers: one running Exchange 2003 and the other 2010 (with 2010 doing the heavy lifting). During my first 9 days, the 2003 Exchange server died. I didn't complain, however, because retiring it would have been one of my first proposals. I wasn't an Exchange "expert" then, and I'm not now. So I let it go and haven't revisited it since.
Fast-forward to today: I'm looking to bring up Exchange 2016 (on a brand-new server). The setup wizard ran its process and threw a dozen-plus errors; the top error was:
One or more servers in the existing organization are running Exchange 2000 Server or Exchange Server 2003. Installation can't proceed until all Exchange 2000 or Exchange 2003 servers are removed.
For more information, visit: http://technet.microsoft.com/library(EXCHG.150)/ms.exch.setupreadiness.Exchange2000or2003PresentInOr...
I'm not even sure where to start with this one. Any help would be greatly appreciated!
Looks like you didn't decommission Exchange 2003 properly.
In the Exchange 2010 PowerShell, type
Brand Representative for Lepide
As mentioned above, first you need to completely remove your server from production environment.
You can follow this guide for step-wise instructions - https://www.lepide.com/blog/decommissioning-exchange-server/
A few of you have mentioned moving to Office 365. Well... to add complication to the matter, I have already done that. We are currently in a "Hybrid" environment with Office 365, whereby the on-prem tenant is the Exchange server that I want to upgrade from 2010 to 2016. Herein lies my problem. For reasons that I established 2 years ago (but cannot currently recall), there was a reason for not doing a full cut-over. That reasoning might bring up other questions and is likely better suited to another thread.
Regardless, I wanted to thank everyone for their contributions. I believe I understand the core concepts of what you are telling me... however, I am a little nervous about performing some of these ADSI edits. I know just enough to be dangerous.
I do not have the luxury of a "test" environment. Every action I take will impact my entire org.
I'll continue scouting for other answers and post my final decision/results in this thread.
I'm currently experiencing this dilemma. We have a server 2008 box with exchange 2010 installed. We have installed a new server 2016 box, and are going to install Exchange 2016 as a VM, but I'm stuck at the prerequisite check. Whoever installed Exchange 2010 did not remove 2003 the correct way (I'm assuming), and I'm not able to install 2016. I upgraded my Forest Functional level from 2003 to 2008, and I installed all of the necessary KB updates, but I'm still stuck with the 'One or more servers in the existing organization are running Exchange 2000 or Exchange 2003' error.
I've scoured through ADSIEdit.msc and found the offending server under CN=First Administrative Group / CN=Servers, but I see it being used in my current Exchange version as a replica for my Public Folder Store, in my System Public Folders under the Schedule+ Free Busy folder. Any ideas?
|
OPCFW_CODE
|
Nested Compositions and the Test Suite Dashboard are distinct features that can be used in tandem to debug functional tests and quickly identify regressions or issues. Both are relevant to functional testers using CloudTest and TouchTest.
The Nested Compositions feature introduced a new clip element type (in SOASTA 53.05), Launch Composition. This clip element can be placed into a test clip just like any other clip element. Using this feature, CloudTest and TouchTest functional testers can launch compositions from other compositions and view Test Suite results in the new Test Suite Dashboard. The ability to leverage Jenkins CI and other CI tools remains undiminished.
Test Suite Dashboard
This System Dashboard (also introduced in SOASTA 53.05) is purpose-built for web and mobile Functional Testing and allows the tester to easily browse through the history of a test suite and the test compositions in it, right down to the action level where validations pass/fail, and where regressions can be detected and fixed.
Using the Launch Composition Clip Element
The Launch Composition clip element is a call to a Composition from another Composition (via a Test Clip in the calling Composition). A Composition is added into a Test Clip via the Clip Editor lower panel, Compositions tab, and the resulting clip element is a Launch Composition clip element.
Adding a Launch Composition clip element to the Clip works precisely the same as adding any other clip element: double-click the item to place it at the insertion point, or drag it into place in the Clip Editor workspace.
More than one Launch Composition clip element can be placed into one or more Test Clips in order to create a Test Suite.
Additionally, the Set Launch Composition context command can be used to set another Composition as a Launch Composition.
When this command is selected, the Choose a Composition box appears. Select the Composition to set as a Launch Composition.
Launch Compositions have all of the following capabilities:
- They can Repeat, be placed in Transactions, have properties, and all of the things that Clip elements do.
- They have properties including "whether or not to wait for the comp to complete", "whether failure should fail the parent", "optional name for the result", etc.
- When the parent Composition that contains the Clip is played its result can be viewed in the Test Suite Dashboard (which loads by default); an individual result is also created for each Launch Composition.
- Results of parent Compositions show the Launch Composition elements in the tree, but not its children.
Note: In this release, Device Selection is done in the nested composition itself and cannot be overridden in the parent composition.
Launch Composition Options
Launch Compositions, like Nested Clips, have their own in situ properties that apply only to the nested instance (these options are distinct from the stand-alone properties that apply to the stand-alone Composition). After adding a Launch Composition to your Clip, expand the new clip element in the Clip Editor's List View to display its Options. Alternately, select the Launch Composition clip element and view it in the Selected panel.
- Location – For non-functional tests, specify the Maestro Location to use. Refer to Specifying Maestro Locations for more information. Locations don't apply to functional testing, so functional testers can accept the default.
- Errors Should – This drop-down specifies which failure action to take if an error occurs in the Launch Composition. Select one of the standard Failure Actions that apply to all clip elements
- If the failure action is set to record only, the validation outcome will be recorded in the result, and the launch composition will continue whether the validation passes or fails.
- If the failure action is set to fail the parent, the parent clip will stop if the validation fails.
- Wait For Completion – Checked by default.
- If checked, this Clip Element will not complete execution until the Composition completes.
- If unchecked, this Clip Element will complete once the Composition has been started, and will not wait for its completion.
- Verify Successful Play – Unchecked by default.
- If checked, this Clip Element will fail if the launched Composition does not complete with status Completed. What then happens is determined by the value of the Failure Action attribute.
- This attribute only has meaning if the Wait For Completion attribute is checked. If that attribute is unchecked, this attribute is not relevant.
- Allow Partial Load – Unchecked by default.
- Checked indicates that Composition play should proceed even if not all of the necessary servers were found. Locations don't apply to functional testing, so functional testers can leave this unchecked.
- Result Name – Optionally, specify an alternate name for the Result name (e.g. other than the default name).
- Result Description – Optionally, specify a description for the result.
Building a Test Suite
Test Suites can be built in a number of ways, including adding all of the relevant Compositions as Launch Compositions into a test clip that resides in a parent test composition with no other test clips.
Playing this parent Composition creates a result for the Test Suite (although individual test composition results are also created).
More sophisticated Test Suites can be built that contain many such Clips, each of which contains one or more Launch Compositions. For example, a Test Suite for mobile apps and devices such as the example below can be arranged by Track, each of which corresponds to a device type, OS version, or to a network type. The Clips that have Launch Compositions are organized into the appropriate Track.
When you play a parent Composition (e.g. your Test Suite), the Composition Editor opens to the Play tab with the Test Suite Dashboard in display.
The Test Suite Dashboard presents functional data as a summary of the output from a test suite. The Test Suite is used to analyze the results of a functional test across many results and, in the TouchTest use case, across mobile devices. The Test Suite Dashboard incorporates key metrics and the results of a functional test, including a Success Summary (the percentage and number of successful Compositions, Clips, and Validations); details about the Compositions and their test results (pass/fail, time/date of the last test run); and the ability to drill down into details.
Test Suite Dashboard
As part of the Test Suite Dashboard, a new Widget Type category—Functional—has been added to the Widget Type list.
The Test Suite Dashboard includes the following widgets:
Functional Summary – This widget displays summary statistics, including the percentage of successful Compositions, Validations, and Clips out of the total. The Functional Summary can also be added to custom dashboards.
Compositions Overview – This widget shows the components of the parent Composition and their history by component.
The Composition Overview widget includes the following columns:
- A Composition column that can be expanded to drill into child components
- A Last Result column that shows the most recent result (pass/fail) of the specific item for that line (e.g. in the result that was opened)
- A Prior 10 Results column that shows the pass/fail result for each row before the Last Result (most recent from left to right).
- A Start Time column that shows the last start time for the item in the row
- A Time to Complete column that shows the total time that the item (a Composition, Clip, or Validation) last took to execute
The Compositions Overview tree starts from the Clips inside the parent Composition. Only Compositions, Clips, and Validations are shown here, while Tracks, Bands, Messages, App, and all grouping Clip Elements are omitted.
The selected dot in Last 10 Results drives the Result Details selection below.
Create a Report Using Composition Overview
As of SOASTA 57, reports render the Composition Overview widget using native tables. You can generate a report to view results in HTML or MS Word.
You can create a report by logging into your CloudTest instance and clicking Compositions on the leftmost panel. In your list of Compositions, double-click your Composition and select Create a New Report.
When the Create a New Report dialog appears, choose between the MS Word and HTML report templates. In the Widgets section, expand All Other Widgets > Functional, and then select Compositions Overview.
For more information on reports, please visit the Creating Report Templates and Reports help article.
Result Details – This is the existing Result Details dashboard for that Result without the cover flow and with the left-hand tree starting at the selected element from above. Users can also select in the Result Details tree itself to navigate through test elements.
As already noted, the Result Details widget is also driven by the selected result dot in the Last 10 Results column (shown below).
In this iteration of the Result Details widget, the Navigation Tree contains all Clip Elements (Groups, Chains, etc.) that are children of the Clip in the existing Result Details dashboard.
Note: Errors indicated in the Result Details UI will "bubble up" to the parent item(s) of the failed element. This was true of errors in the prior release only if the Failure Action for the given element was set to "Also fail the parent." An element such as a Composition may succeed, but if any of its children have failed, then the Composition icon in the tree will show a red-x. This change applies to all contexts of the Result Details widget—whether it is in view in the full Result Details Dashboard, here in the Test Suite Dashboard, or in any custom dashboard that includes the Result Details widget. Additionally, bubble-up errors are shown all the way to the composition-level (in the prior release, they were shown only up to the track-level if the Failure Action was set to fail the parent).
Selection in the Compositions Overview will navigate to the corresponding object in Result Details below. Additionally, the Result Details widget in this context shows only the relevant panels (for example, if the selection is a Validation, the relevant Accessor panels are shown).
Accessing the Test Suite Dashboard
The tester can access the Test Suite Dashboard in any of the following ways:
- Opening a parent Composition in the Composition Editor, Results tab.
- Opening the result of a parent Composition from Central (in which case, the Test Suite Dashboard is shown by default)
- Opening the Build in Jenkins and viewing the test result of a parent Composition
|
OPCFW_CODE
|
Dev-C++ is a lightweight yet powerful open-source C/C++ IDE. It's very popular among C/C++ beginners, but its development stopped in 2015.
Bloodshed Dev C++ For Mac
- Dev-C++ for Mac has not been released by Orwell so far, so you can't use it if you switch to a Mac. However, there are many C/C++ IDEs and compilers that can replace Dev-C++ on a Mac.
- An IDE is a program that makes it easy to manage a programming project without having to use a terminal window on a Mac or a DOS prompt on Windows.
I have used it in classroom teaching since 2012, and decided to pick it up, continue its development, and rename it Red Panda Dev-C++.
Compared with the latest version of Orwell Dev-C++, Red Panda Dev-C++ has the following highlights:
- Greatly improved “Auto Code Completion”:
- Fixed header parsing error. (Can correctly show type hints for std::string, for example)
- Auto code suggestion while typing.
- Use Alt+/ instead of Ctrl+Space to call Code Completion Action.
- Use TAB to finish completion.
- Greatly improved Debugger:
- breakpoints on condition
- Redesigned Debugger panel, add Toolbar / Call Stack / Breakpoints sheet
- Debug Toolbar
- gdb Console
- Info in the Watch View is updated promptly
- Greatly improved ClassBrowser:
- Correctly show #define/typedef/enum/class/struct/global var/function infos
- sort by type/sort alphabetically
- show/hide inherited members
- correctly differentiate static class members / class members
- Greatly improved Code Parser: faster and with fewer errors
- Greatly improved “Auto symbol completion” function (works like in IDEA/PyCharm/CLion)
- GDB 9.2 and GCC 9.2 (from Mingw.org, which is Windows XP compatible)
- View/editing/compile UTF-8 encoding files
- Use regular expressions in find/replace
- Rename symbol in the editing file.
- -Wall -Wextra -Werror is set by default in the Debug profile, to help beginners learn good coding habits.
- Redirect STDIN to a data file while running or debugging (to ease debugging; requires a patched gdb)
- Windows XP/ Windows 7/ Windows 10 Compatible
- Support Windows 7/Windows 10 High DPI (needs configuration)
- Lots of bug fixes
|
OPCFW_CODE
|
The feature’s built in; no additional media is needed.
My machine wasn’t completely broken, but it wasn’t well. Months of turning things on and off, installing and uninstalling, and just generally “fiddling” while researching and documenting Ask Leo! articles left this particular Windows installation a couple of features short of a full package.
This presented a great opportunity to experiment with the “nuclear option” built right into Windows: “Reset This PC”.
There’s also what I’ll call a “light” nuclear option as an option to the traditional “delete everything and start over” approach.
Become a Patron of Ask Leo! and go ad-free!
Reset this PC
Windows includes an option to “Reset this PC” that reinstalls Windows from scratch. You’ll find it in the Settings app. Options include saving your files, or wiping everything and getting the new copy of Windows from a partition on your hard disk (or downloading it from Microsoft) as part of the process. Be sure to back up prior to starting to ensure you won’t lose anything important.
When to reset
It used to be common to consider reinstalling Windows from scratch “every so often”, depending on how you used your machine. More recent versions of Windows have become more stable, so that’s less of a rule of thumb than it once was.
Even so, stuff happens. Sometimes the most pragmatic solution is to start fresh, rather than spending an excessive amount of time looking for and trying random fixes.
When that might be called for is difficult to say as it depends on your specific situation. If your system is just generally unstable, has slowed down excessively, or you’re banging your head against a wall trying to resolve a problem, a reset might be the most expeditious approach.
What this won’t fix
This assumes your system is mostly working.
“Reset this PC” uses information stored on a separate disk partition, so the disk itself has to be working.
In addition, we begin the process from within Windows. This implies, of course, that Windows itself is working, at least enough to get us there.
If you’re replacing a disk or are suffering a more major failure that prevents you from booting, “Reset this PC” isn’t what you want to use. You’ll need to boot from installation or backup rescue media and use those tools to recover.
Speaking of backups…
Step one, no matter what
Take a complete image backup of your system as it is today. What we’re about to do is a massive reset, and there are many things that could go wrong. It’s also possible to encounter unexpected side effects after a reset that may make you regret having done so.
Be it a complete restore to the way things are before the reset or the ability to recover specific files, an image backup is your ultimate safety net. I strongly recommend you not skip this step.
Keep files or remove everything
In the Windows Settings app, search for “Recovery”. On the resulting page, underneath “Reset this PC”, click on Get started.
The first choice to make is whether to “Keep my files” or “Remove everything”.
It’s unclear exactly how much “Keep my files” keeps, but I can make an educated guess. I suspect it preserves the files in your login account folder — meaning everything in “C:\Users\<your login name>”, including Documents, Pictures, Music, and the like. This is what I refer to as the “light” nuclear option.
If you keep data anywhere else on your system drive, it may be deleted.
It’s my uncertainty about exactly what’s preserved, as well as my tendency to keep files outside the account folder, that has me recommending, again, that you always start with a complete image backup so you don’t lose anything, no matter what.
In this example, I’ll click Keep my files, but ultimately the decision is yours.
Next you’ll be asked from where the process should get the copy of Windows to reinstall.
Selecting Cloud download will download Windows components from Microsoft. This might be the only option if, for example, you’ve previously removed the recovery partition.
Selecting Local reinstall will use the copy of Windows stored in the recovery partition. This might be preferable if you have a slow or metered internet connection, as the cloud download is quite large.
You’ll be presented with a confirmation of your choice; click Next to continue.
The process displays a summary of what’s about to happen.
If you like, click the “View apps that will be removed” link.
This will probably be a scrolling list, so be sure to examine all the entries. This is a list of all the applications that will no longer be present after the reset. Of particular note is that any customizations you may have made in these applications will also be removed. This is actually one of the reasons for the reset. Removing applications forces us to cleanly reinstall only the apps we need, which typically results in a more stable system.
After you’ve reviewed the list of apps, click on Back and then on Reset to begin the process.
The process will take some time and proceed through several stages.
Your machine will reboot several times.
This will take some time. The process is reinstalling Windows from scratch, after all.
Eventually, the Windows sign-in screen will reappear.
Note that even though this was a “reset”, sign-in options and accounts were retained. Screen resolution was reset, but the non-standard 125% display scale I had set was preserved.
On initial sign-in, Windows will perform some updates prior to displaying the desktop.
But wait! There’s more! (Updates)
Those updates weren’t enough.
The very first thing to do after resetting your PC is run Settings, navigate to Windows Update, and click on Check for updates, even if it says you are up to date.
Chances are Windows Update will still locate updates, which it will then install.
After the update has completed (which may involve rebooting), be sure to return to Windows Update and confirm that no more updates are available, installing any that are.
Only when there are no more updates available should you proceed.
Reinstall everything else
Next, reinstall the applications you use.
I strongly recommend you install only those applications you actually use. My approach is to use the computer and install applications only as I find I need them. That way, I end up with only the software I really use.
Similarly, restore any data files that were lost. If you selected “Keep my files” at the beginning of the process, confirm that the files you expect to be there actually are. Restore any that are missing from your backup or another convenient location.
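If you want a quick sanity check, a few lines of Python can compare a list of relative paths you saved before the reset against what's actually on disk afterward. This is only a sketch: `missing_files` is a hypothetical helper, and it assumes you captured such a manifest (say, a simple list of paths) before resetting.

```python
from pathlib import Path

def missing_files(manifest, root):
    """Return the manifest entries (paths relative to root) that no longer
    exist on disk -- candidates to restore from the image backup."""
    base = Path(root)
    return [rel for rel in manifest if not (base / rel).exists()]

# Example (hypothetical paths): check expected documents under a profile.
# missing_files(["Documents/notes.txt", "Pictures/cat.jpg"], "C:/Users/leo")
```

Anything the function returns can then be restored selectively from the image backup taken in step one.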
Throughout this process, you’ll reset the various options and customizations you had before the reset. Honestly, the more frequently I do this, the fewer customizations I carry forward, simply because of the time involved. Much like the programs I install, I only apply the customizations that are truly important to me.
Lastly, back up again
Yes: I recommend you back up once more, another full-system image backup.
With that backup, should you ever need to reset your PC again in the future, you can restore the image backup instead. The reason is simple: the image backup represents a clean install of your system, with your installed programs, customizations, and data files …
… essentially everything the reset did not preserve.
Know that “Reset this PC” is a power tool to restore Windows to its initial condition. It’s often a faster, more pragmatic approach to troubleshooting than actually spending hours tracking down individual failures.
When you’re done, subscribe to Confident Computing, my weekly newsletter. Less frustration and more confidence, solutions, answers, and tips in your inbox every week.
|
OPCFW_CODE
|
Learn how to work with tables within OnlyOffice Presentations in this lesson. We’ll see how to insert tables, merge and split cells, add and remove rows and columns, distribute rows and columns, and control your tables’ styling.
1. Introduction (1 lesson, 00:55)
1.1 Welcome to the Course (00:55)
2. Getting Started (3 lessons, 22:49)
2.3 Slides and Slide Settings (09:46)
3. Adding Content (4 lessons, 30:44)
4. Extras (5 lessons, 26:37)
4.1 Themes and Color Schemes (05:21)
4.5 Working With an Elements Presentation (06:28)
5. Conclusion (1 lesson, 03:47)
Hey, welcome back to Up and Running with OnlyOffice presentations. In this video, we're gonna see how you can work with tables in a presentation. Okay, so first inserting tables. To insert a table, you need to go to the Insert tab, and then you have the Table button here. Click it, and it will give you a grid that you can use to determine how many rows and how many columns should be in your table. So we can draw out some rows and columns here. Or alternatively, we can choose Insert custom table, and then we can type in the exact number of columns and rows that we want to use. So I'll just say. Sometimes when you insert a table, it might get inserted over top of another one. So then, you can just click and drag to move into the position that you want it to be. So that covers how you can insert your tables. If you want, you can also resize your tables, and the rows and columns will distribute themselves evenly. Next up, we're gonna see how you can control the styling of your tables. And you remember, I said in previous lessons that the coloring of your tables is completely controlled by your color scheme. So if you want to change the color, you can go up here to Select from Template. And then, you can choose from any of the colors that you see in this drop-down. And these are all pulled from the theme colors that have been extracted from the presentation that you're working with. So if you change your color scheme, let's say to this one, then the colors that you have available for your tables are going to change. So as I mentioned before, make sure that the color scheme that you choose is gonna work for your tables and charts before you commit to using that color scheme. So generally speaking, the styling of your table is controlled by this top portion of the tables panel. Up here, we've got a bunch of toggles that control whether your header is styled differently. Whether you have a total row at the bottom that's styled differently. 
Whether you have alternate coloring or banding in the rows. Whether you wanna highlight the first column, the last column. And whether you wanna have all the columns banded as well. Next up, we can set borders for your table. So I'm gonna choose a pretty thick border so it's easy to see. And we'll choose a fairly high-contrast color. Actually, let's make it really obvious, we'll go with this pinky color. And now, to apply a border you need to choose one of these buttons. So if I just want to put a border around every single cell on this table, I have the whole table selected, and then I can press this button here that applies the border everywhere. Then, if I want to get rid of all those borders, I can press this button here to set no borders at all. So each one of these buttons will apply a border to a different side, or several sides, of the selected cells. So you can see there, how that works when you have the whole table selected. But you can also just select a few cells at once, and then do the same thing and apply borders just to those selected cells. And again, just using whichever button you want to apply the borders where you want. In the same way, you can also apply a background color to cells. So I'm going to select a few cells here. And I'm gonna choose a different background color to apply to just those cells. And again, if I have the whole table selected, and I apply a background color, then it colors everything. And if you have applied custom background and border colors, and then you choose another theme up here, then it's gonna overwrite any custom settings that you have added in. So just be aware of that, and be careful when you're changing up these themes. Okay, the next thing you're gonna wanna know is how you can insert rows and columns. So if you wanna insert a single column or row, just put your cursor into an existing cell. Then, you can either right-click and go to Insert, and insert a row above or below, and a column left or right.
So insert a row below. And we will insert a column to the left. I'm just gonna undo that so I can show you the other way that's available. Or you can go over to this rows and columns section in the sidebar. And then, just choose the same thing. Insert a row below or insert a column. To delete cells, or rows, or columns, it's the same thing. Just make sure that you have all the columns selected that you want to remove. Right-click, and then choose Delete. And to merge cells, let's say we just wanna have one title here along the top. Select the cells you wanna merge, right-click, and choose Merge Cells. Conversely, if you want to split those cells, right-click > Split Cells. And then, let's put this back the way it was by choosing one row and three columns. And we saw before that you could resize the entire table by grabbing these handles and dragging it to the size that you want. But you can also change the cell size here by just increasing or decreasing the amount. On the height or the width. And just like with most of these functions, you can apply that to the whole table. Or you can apply it just to single rows or columns. And then, if you have rows and columns that are different heights or different widths and you want to have them all evenly distributed again, then make sure you have the cursor in any cell. And then, you can press distribute rows to make your rows all the same height, or distribute columns to make your columns all the same width. And that is the key information to help you work with tables in OnlyOffice presentations. In the next video, we're gonna see how you can work with charts using the inbuilt chart system. So for that, I will see you in the next lesson.
|
OPCFW_CODE
|
With the advent of Transformers, large language models (LLMs) have saturated well-known NLP benchmarks and leaderboards with high aggregate performance. However, these models often systematically fail on tail data or rare groups not obvious in aggregate evaluation. Identifying such problematic data groups is even more challenging when there are no explicit labels (e.g., ethnicity, gender, etc.), and is further compounded for NLP datasets by the lack of visual features to characterize failure modes (e.g., Asian males, animals indoors, waterbirds on land, etc.). This paper introduces an interactive Systematic Error Analysis and Labeling (SEAL) tool that uses a two-step approach to first identify high-error slices of data and then, in the second step, introduce methods to give human-understandable semantics to those underperforming slices. We explore a variety of methods for coming up with coherent semantics for the error groups, using language models for semantic labeling and a text-to-image model for generating visual features. SEAL is available at https://huggingface.co/spaces/nazneen/seal.
Training a supervised neural network classifier typically requires many annotated training samples. Collecting and annotating a large number of data points is costly and sometimes even infeasible. The traditional annotation process uses a low-bandwidth human-machine communication interface: classification labels, each of which only provides a few bits of information. We propose Active Learning with Contrastive Explanations (ALICE), an expert-in-the-loop training framework that utilizes contrastive natural language explanations to improve data efficiency in learning. ALICE first uses active learning to select the most informative pairs of label classes to elicit contrastive natural language explanations from experts. Then it extracts knowledge from these explanations using a semantic parser. Finally, it incorporates the extracted knowledge by dynamically changing the learning model's structure. We applied ALICE in two visual recognition tasks, bird species classification and social relationship classification. We found that by incorporating contrastive explanations, our models outperform baseline models that are trained with 40-100% more training data. We found that adding 1 explanation leads to a similar performance gain as adding 13-30 labeled training data points.
Our ability to limit the future spread of COVID-19 will in part depend on our understanding of the psychological and sociological processes that lead people to follow or reject coronavirus health behaviors. We argue that the virus has taken on heterogeneous meanings in communities across the United States and that these disparate meanings shaped communities’ response to the virus during the early, vital stages of the outbreak in the U.S. Using word embeddings, we demonstrate that counties where residents socially distanced less on average (as measured by residential mobility) more semantically associated the virus in their COVID discourse with concepts of fraud, the political left, and more benign illnesses like the flu. We also show that the different meanings the virus took on in different communities explain a substantial fraction of what we call the “Trump Gap”, or the empirical tendency for more Trump-supporting counties to socially distance less. This work demonstrates that community-level processes of meaning-making in part determined behavioral responses to the COVID-19 pandemic and that these processes can be measured unobtrusively using Twitter.
Open Domain dialog system evaluation is one of the most important challenges in dialog research. Existing automatic evaluation metrics, such as BLEU are mostly reference-based. They calculate the difference between the generated response and a limited number of available references. Likert-score based self-reported user rating is widely adopted by social conversational systems, such as Amazon Alexa Prize chatbots. However, self-reported user rating suffers from bias and variance among different users. To alleviate this problem, we formulate dialog evaluation as a comparison task. We also propose an automatic evaluation model CMADE (Comparison Model for Automatic Dialog Evaluation) that automatically cleans self-reported user ratings as it trains on them. Specifically, we first use a self-supervised method to learn better dialog feature representation, and then use KNN and Shapley to remove confusing samples. Our experiments show that CMADE achieves 89.2% accuracy in the dialog comparison task.
We provide an NLP framework to uncover four linguistic dimensions of political polarization in social media: topic choice, framing, affect and illocutionary force. We quantify these aspects with existing lexical methods, and propose clustering of tweet embeddings as a means to identify salient topics for analysis across events; human evaluations show that our approach generates more cohesive topics than traditional LDA-based models. We apply our methods to study 4.4M tweets on 21 mass shootings. We provide evidence that the discussion of these events is highly polarized politically and that this polarization is primarily driven by partisan differences in framing rather than topic choice. We identify framing devices, such as grounding and the contrasting use of the terms “terrorist” and “crazy”, that contribute to polarization. Results pertaining to topic choice, affect and illocutionary force suggest that Republicans focus more on the shooter and event-specific facts (news) while Democrats focus more on the victims and call for policy changes. Our work contributes to a deeper understanding of the way group divisions manifest in language and to computational methods for studying them.
Word embeddings, which represent a word as a point in a vector space, have become ubiquitous in several NLP tasks. A recent line of work uses bilingual (two languages) corpora to learn a different vector for each sense of a word, by exploiting crosslingual signals to aid sense identification. We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) using a principled approach to learn a variable number of senses per word, in a data-driven manner. Ours is the first approach with the ability to leverage multilingual corpora efficiently for multi-sense representation learning. Experiments show that multilingual training significantly improves performance over monolingual and bilingual training, by allowing us to combine different parallel corpora to leverage multilingual context. Multilingual training yields comparable performance to a state-of-the-art monolingual model trained on five times more training data.
|
OPCFW_CODE
|
Adds a suggested-alpha function
This adds a function to the alpha-rank module that'll suggest an alpha large enough for the ranking to have 'settled out'.
The logic behind this is that the transition probabilities can be expanded as a power series, as per Eqn 8 in the paper:
When alpha is sufficiently large, then the first or the last term will dominate. That happens roughly when
This is when the first term is e^2 (approx 10x) larger than the second term.
Empirically, it seems like this dominating-term effect is what gets the ranking to settle down, and I've started to use it as a 'default alpha'. As an exemplar, here's the sweep from Kuhn poker
`payoff_tables = alpharank_example.get_kuhn_poker_data(num_players=2)`
`alpharank.sweep_pi_vs_alpha(payoff_tables, visualize=True)`
for which `alpharank.suggest_alpha(payoff_tables)` gives 10^2.
On a related note, you can use this same idea to come up with a simple approximation to the alpha-rank transitions:
when $f_\tau \ge f_\sigma$, and zero otherwise. This eliminates $m$ as a parameter, and then you've got yourself a parameter-free evaluation method that comes out really close to the full method. Again, on Kuhn poker it's within .006% error of the full method.
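For illustration, here's a rough Python sketch of that clamped approximation. The function names are hypothetical (not part of the alpharank module), and it assumes single-population dynamics with mutations spread uniformly over the k-1 alternative strategies:

```python
def approx_fixation(f_sigma, f_tau):
    """Large-alpha limit of the fixation probability: a mutant strategy tau
    takes over whenever its fitness is at least the incumbent sigma's."""
    return 1.0 if f_tau >= f_sigma else 0.0

def approx_transition_matrix(fitness):
    """Parameter-free Markov transition matrix over k strategies, built from
    the clamped fixation probabilities."""
    k = len(fitness)
    rows = []
    for s in range(k):
        row = [approx_fixation(fitness[s], fitness[t]) / (k - 1) if t != s else 0.0
               for t in range(k)]
        row[s] = 1.0 - sum(row)  # self-transition absorbs the remainder
        rows.append(row)
    return rows
```

The stationary distribution of this chain then plays the role of the alpha-rank scores, with no m or alpha left to tune.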
I haven't put this simplification in the PR though because I thought tearing the guts out of alpha-rank and changing the API'd be a bit much.
Cool! @shayegano @dhennes can you take a look?
Btw @andyljones how do you get latex to render in markdown, that looks really nice! Or are they images from e.g. latex2png?
Images I'm afraid to say! I spent an hour or so looking for a good way to get latex into Github comments, and I eventually gave up. Best I could find was this Chrome extension, but the people reading the comment need to have the same thing installed.
Ah, well thanks for the effort.. it is super nice to see latex here on github! And makes explaining your points rather smooth. (Btw I use latex2png extensively when preparing presentations with Google slides! :))
Quick follow-up, I spoke to Daniel and he says it looks good. Shayegan is the best person to review this, and he's on vacation until the 26th, so he'll take a look when he's back.
Thanks for updating me!
As an aside @lanctot , this is the second entirely-minor PR I've made and the second where you've been notably enthusiastic and encouraging. It's a lovely attitude in a maintainer, and it makes me want to contribute more in future. Cheers!
@shayegano I've fixed all but the readme ask 🙂
As to the readme ask and the .006% error, I got to wondering if I could do better than 'empirically it works!'.
I've sketched a proof that you can get away with clamping all the fixation probs to 0 or 1, though there are still two TODOs to resolve, which of course means it's not actually a proof yet.
Would appreciate your thoughts on it, though there's no urgency - I'll chew those TODOs over and see if I can fix them.
@andyljones Thanks for the changes (approved). Thanks also for the attached doc looking into the proof! I still have to read through it in detail, but the remaining TODO is reminiscent of one we ran across when proving some perturbation bounds in a follow-up paper. I'll send you an email now with more details!
|
GITHUB_ARCHIVE
|
The File or Assembly MySql.Data or a dependency is missing
Before you blame me: yes, I did search the other threads for possible solutions, I uninstalled the package multiple times, added the dependency manually in my App.config file, etc.
I'm creating a Windows Forms app (.NET Framework); since it includes a login, I established a connection to a MySQL server. In debug mode everything works perfectly, but when I build the solution and try to run the .exe file, I just get this annoying error every time:
I want to share this program with my friends, and I don't want them to get this stupid error message, nor do I want to send anyone the MySql.Data.dll manually...
Is there any way to work around this?
Here is a more specific error message:
System.IO.FileNotFoundException: Could not load file or assembly "MySql.Data, Version=<IP_ADDRESS>, Culture=neutral, PublicKeyToken=c5687fc88969c44d" or one of its dependencies. The system cannot find the file specified.
> I want to share this program with my friends, and I dont want them to get this stupid error message nor do I want to send anyone the mysqldata.dll manually
Since your application has a dependency on MySql.Data, the program needs to be able to find that DLL on your friend's computer in order to run.
I can think of three possible options:
1. Zip up your bin\Release folder and send everything in it (including MySql.Data.dll) to your friend.
2. Have your friends download and run the MySQL Installer and make sure they select the "Connector/NET" option. This will put the required DLL in the "Global Assembly Cache".
3. Write and deliver a full-blown Windows installer to install your application and all its dependencies.
Option 3 is probably overkill for what you want to do. Option 2 seems error-prone, requires your friends to install the right components, and will stop working if you ever upgrade your application to MySql.Data 8.0.23. So I'd go with Option 1, even though it requires sending "the mysqldata.dll manually".
Thanks for your answer, Bradley! I guess I'll have to send MySql.Data with the application... It just has me really confused, since I created a login before in the form of a console app, and I never had this error, nor did I have to send the DLL with the app...
I just added the MySql.Data file to the folder; now I get another error that it's missing BCrypt.dll :/ Do I seriously have to send them all the DLLs in order to be able to use my .exe file? Gosh, I thought there was a way to include all those DLLs in one .exe file...
|
STACK_EXCHANGE
|
During the holidays, you may have missed what’s important in the December Service update for Microsoft Dynamics 365 for Finance and Operations.
Here are the top five highlights of the release, which address reporting, efficiency, and global compliance issues, as published on Microsoft’s blog:
- Enhanced Financial and Operational Reporting – The latest service release introduces a new set of default reports built using Power BI. The new financial reporting experience is now embedded within the Dynamics 365 UX, giving users a seamless experience of report generation and allowing them to drill into supporting documents. Key sub-ledger data is available to provide better ledger to sub-ledger analysis. Default reports such as a trial balance, balance sheet, and profit and loss, are shipped out of the box and can be quickly and easily customized using the Power BI desktop. The existing financial reporting using Report Designer within Microsoft Dynamics 365 is still available and fully supported. In addition, Power BI Solution Templates are now available on AppSource. These will empower customers to extend our existing Power BI reports for unique business requirements.
- Global Coverage – India Regulatory Functionality. In this update for Microsoft Dynamics 365, Microsoft has added and updated important regulatory functionalities for India, where a new Goods and Services Tax (GST) is driving sweeping business changes and a strong move to the cloud. Dynamics 365’s strong international accounting, tax, and operations capabilities along with Microsoft’s strong local presence make Microsoft a natural choice for businesses seeking a digital transformation partner. Fixed asset depreciation, Value-added tax (VAT), Withholding tax, Customs Duty and India GST are all supported, with additional Retail regulatory functionality currently in public preview. Customers can now easily configure India GST in the Global Tax Engine without the painful application changes required by many other systems.
- Vendor Collaboration – A new vendor portal provides vendors with self-service capabilities to view and selectively maintain their own company information, such as contact information, business identification data, procurement categories, and certifications. Microsoft also enhanced vendor onboarding with a new workflow-supported process which can facilitate the addition of new vendors using invite-based registration and signup, further improving employee productivity. In addition to onboarding and profile maintenance, Microsoft included support for vendors to view and respond to RFQs with the ability to receive and upload attachments. The collaborative interface allows the vendor to receive information about the awarded or lost bids. In public sector configurations, RFQs can now be published and exported as entities via data management. This enables the data to be consumed and exposed in a customer-hosted public website.
- End-to-End Process Integration – In July, Microsoft introduced Prospect to Cash integration for Microsoft Dynamics 365, leveraging the Microsoft Common Data Service (CDS) to enable deeper business processes integration across Dynamics 365 applications and thereby transform operations. With its latest update, companies can now maintain data synchronization of processes between Dynamics 365 for Sales and Dynamics 365 for Finance and Operations using data integration templates without a third-party data integration tool. These enhancements enable additional end-to-end business process integrations focusing on customer, quote, sales order, invoice, and order fulfillment data. More information about the supported processes integration scenarios can be found here.
- Implementation efficiency – To further improve the implementation experience for customers, Microsoft has enabled a new capability to create legal entities based on configuration data from an existing legal entity, to drastically shorten setup time for new legal entities. In addition, Microsoft continues to make progress towards a full extension-based customization model to improve serviceability. In this release, Microsoft has enabled a ‘soft-seal’ mode for the application functionality, which will help customers and partners to recognize un-desirable customization development patterns as warnings during compilation.
The top five highlights above represent just a few of the highlights of the December service update for Dynamics 365 for Finance and Operations (application version 7.3). A detailed overview of all updates can be found here in Microsoft’s online documentation.
Chief Solution Strategist, Velosio
Robbie Morrison has spent nearly 20 years helping customers build and deploy elegant technology and business solutions. From start-ups to enterprise-class organizations worldwide, his knowledge of the Microsoft Dynamics ecosystem and products helps Velosio customers maximize ROI on technology investments.
Today, Robbie serves Velosio customers in his role as Chief Solution Strategist where he provides thought leadership and manages the development of B2B solutions. Robbie received his MBA from the University of Georgia, Terry College of Business.
|
OPCFW_CODE
|
My wife and I have been filming videos for our YouTube channels for almost 5 years. We are full-time YouTubers; YouTube is our life. We film videos every day for several hours. We have ~50 YouTube channels, covering all the topics we are interested in. Among our channels is ‘Shkola Bloggera’ (Russian for «Blogger’s School»), https://www.youtube.com/shkolabloggera , the biggest Russian-speaking resource about YouTube (93,000 subscribers). All our channels have ~350,000 subscribers in total.
A week ago, 37 of our channels with good reputations (each with fewer than 6,000 subscribers) were blocked without any reason; we didn't even get any emails about strikes. All of our other channels, those with more than 6,000 subscribers, were not affected and are still active.
List of blocked channels:
https://www.youtube.com/ShtukensiaCOM – Vera’s lifestyle channel (in English)
https://www.youtube.com/channel/UCC-Wm63vB2c0svPvhgkS2Wg – Nikolay’s lifestyle channel
https://www.youtube.com/channel/UCLbzTxuFhgyIbyMdEzYIdbA – about health, healthy lifestyle
https://www.youtube.com/channel/UCh0KQnKGJaJb7MaPjqlJtSA – sport and fitness
https://www.youtube.com/channel/UCz_xNqKORcVgeVX0Y92bUOw – about parenting (just our thoughts, we do not have children yet)
https://www.youtube.com/channel/UCoCps5zSUWJTscjbaYgcXFw – space, physics, science
https://www.youtube.com/agroprognozru – agriculture and soil science
https://www.youtube.com/channel/UC2PKmy49kKx0Q50xDZRI8Sg – lessons about games streaming (how to make live videos)
https://www.youtube.com/channel/UCymdG7mO9hMu4Y8PieaQTAQ – our vlogs
https://www.youtube.com/channel/UCuD81WP9nOwHB3ca6BnUtpQ – youtube news and rumors
https://www.youtube.com/channel/UCsPGvhE1EIMfS0tpBkOzyyQ – home and housekeeping
https://www.youtube.com/channel/UCEDEAiMaF3CsOgMITRHH_hw – answers to typical questions
https://www.youtube.com/channel/UCuar17ALf0iT_vSHeT4If0g – reviews of online shops and goods
https://www.youtube.com/channel/UCSLA5pPWpEDodKuA6QKYUEg – IT, web-sites development, coding
https://www.youtube.com/channel/UCTzuV0jjT2MLnDaPA4XpjbQ – reviews of food
https://www.youtube.com/channel/UCZJ5hIb2P7gHaA8R5sjI1UQ – reviews of books
https://www.youtube.com/channel/UCxx-ZMLCaxd5-8mr5SSzT7A – cars and transport talks
https://www.youtube.com/channel/UCo6JFVtL0IjAR70O31-DSBA – Nikolay’s humour
https://www.youtube.com/channel/UCQQPBJYhGlOFpd7Az3nbMzw – happy moments of our life
https://www.youtube.com/livecast24 – videos from the past (our family archive)
https://www.youtube.com/channel/UCAGZT7VOrV34BMroffK9Swg – song covers
https://www.youtube.com/channel/UCha6ju_fQT1mgtQ9OJhRYWQ – piano lessons
https://www.youtube.com/vnesociuma – Nikolay’s songs (I write and sing them)
https://www.youtube.com/channel/UCvHUoNOZRWQsd1D7JaK2yiA – Nikolay’s DJ channel
https://www.youtube.com/channel/UCr0xXjZAqWylm5EPKNaIgOA – videos of nature and poems (each video has a poem we wrote in its description)
https://www.youtube.com/channel/UCIT2G3QfbW5AoSwC4iXgsDQ – sketches and comedy videos
https://www.youtube.com/channel/UCy793gRKUgtFoRjwFI6p1Ug – painting and drawing
https://www.youtube.com/channel/UCWjZeFYRMBKnRqzP9Z1DCAw – sewing lessons (new channel)
https://www.youtube.com/streamguild – Nikolay’s live game streams
https://www.youtube.com/channel/UCLGtPgcln7_zD2PyFTVFA1Q – geek culture (board games, yoyo, toys)
https://www.youtube.com/gameglaz – reviews of online games (in English)
https://www.youtube.com/channel/UCK-d8Z08ElRz0zgKiAla5fg – Nikolay’s live gaming streams (in English)
https://www.youtube.com/channel/UChmnXwnF8NxJH0iFSLf-i5A – Vera’s gaming streams channel (English)
https://www.youtube.com/channel/UCUpNmGpoYWAoDkW5ad16MIQ – Nikolay’s lifestyle channel (in English)
https://www.youtube.com/channel/UC6onYKZxZPOcS7Uhbx4bEEQ – Nikolay’s vlog (English)
https://www.youtube.com/kotmedved – home-made theater
https://www.youtube.com/channel/UCeQhi4rNnt3WkW54JfKn_7w – about the Netherlands
All our channels contain useful, kind videos. For example, among the blocked channels is ‘AgroPrognoz’ ( https://www.youtube.com/agroprognozru ), the biggest Russian-language YouTube channel about agriculture and soil science (1,700 subscribers, which is a lot for such a community). There is absolutely no reason to block it. It is also strange that our other, English-language channel about soil science, ‘SoilMap’, was not blocked ( https://www.youtube.com/channel/UCsniV3tyDTOU8cTUP2SMTJg ).
This block must be some kind of error. We follow the community guidelines very carefully, and none of our videos contain forbidden content. We do not spam and do not do anything illegal. We also have no videos with children (we do not have children).
Finally, please note that all of our ‘big’ (6,000+ subscriber) channels are still active and in good standing; only the channels with fewer than 6,000 subscribers were terminated. But we follow YouTube’s rules and guidelines equally on all our channels, small and big, so this is certainly an error.
More up-to-date information can be found here: http://entr.ru/help
Here is some more information about us, to explain why we have so many channels. We are artists and make many different things: we paint, write books, and play musical instruments. At the same time, we are very active in sports and love to travel. That is why we decided to start many YouTube channels. A few more facts about us:
— we both hold PhDs in Biology and have authored 20+ scientific articles (our latest scientific work was about YouTube). We have written two science books and plan to write one more soon, about YouTube, based on the experience of our channel about blogging — https://www.youtube.com/shkolabloggera
— we work as web developers and graphic designers
— we sometimes write non-scientific articles for various magazines and newspapers (about games, ecology, etc.). We have also started writing fiction books. Example: http://shtukensia.com/thisis/mybooks/office/
— I am a musician, rock band leader, and guitar teacher (my channel ‘Gitarka’, https://www.youtube.com/gitarka , has 50,000+ subscribers). I am also a DJ and write electronic music.
And that is not everything… That is why we have so many YouTube channels. Each channel is unique to us and dedicated to a specific topic from our life.
Please help by reporting this problem to the engineers at Google! This is not a typical problem; it could happen again and again to innocent people! Please help us restore our YouTube channels!
using AutoMapper;
using DocRepoApi.Models;
using System;
using System.Collections.Generic;
using System.Text;
using Xunit;
namespace DocRepoApiTests.ModelTests
{
public class ProductDtoTests
{
private IMapper _mapper = MapperTestContext.GenerateTestMapperContext();
#region Test Compare and Sort
[Fact(DisplayName = "ProductDto.Equals(other, true) should match based on ID and all properties")]
public void ProductDtoEqualsReturnsCorrectValues()
{
ProductDto p1 = new ProductDto
{
Id = 1,
Alias = "ALIAS",
FullName = "Great Product",
ShortName = "GP"
};
ProductDto p2 = new ProductDto
{
Id = 1,
Alias = "ALIAS",
FullName = "Great Product",
ShortName = "GP"
};
ProductDto p3 = new ProductDto
{
Id = 3,
Alias = "ALIAS",
FullName = "Great Product",
ShortName = "GP"
};
ProductDto p4 = new ProductDto
{
Id = 1,
Alias = "SALIA",
FullName = "Bad Product",
ShortName = "BP"
};
Assert.True(p1.Equals(p2));
Assert.True(p1.Equals(p2, true));
Assert.False(p1.Equals(p3));
Assert.False(p1.Equals(p3, true));
Assert.True(p1.Equals(p4));
Assert.False(p1.Equals(p4, true));
}
[Fact(DisplayName = "List<ProductDto>.Sort() should sort Products based on FullName")]
public void ProductDtoSortReturnsListSortedByFullName()
{
List<ProductDto> products = new List<ProductDto>
{
new ProductDto { Id = 1, FullName = "BBB" }, // 2
new ProductDto { Id = 2, FullName = "AAA" }, // 0
new ProductDto { Id = 3, FullName = "DDD" }, // 4
new ProductDto { Id = 4, FullName = "CCC"}, // 3
new ProductDto { Id = 5, FullName = "ZZZ"}, // 7
new ProductDto { Id = 6, FullName = "ABC"}, // 1
new ProductDto { Id = 7, FullName = "DEF"}, // 5
new ProductDto { Id = 8, FullName = "XYZ"} // 6
};
products.Sort();
Assert.True(products[0].Id.Equals(2));
Assert.True(products[1].Id.Equals(6));
Assert.True(products[2].Id.Equals(1));
Assert.True(products[3].Id.Equals(4));
Assert.True(products[4].Id.Equals(3));
Assert.True(products[5].Id.Equals(7));
Assert.True(products[6].Id.Equals(8));
Assert.True(products[7].Id.Equals(5));
}
#endregion
#region Test Mapping
[Fact(DisplayName = "Product is properly mapped to ProductDto")]
public void ProductProperlyMappedToProductDto()
{
ProductDto pDto1 = new ProductDto
{
Id = 1,
Alias = "ALIAS",
FullName = "Great Product",
ShortName = "GP"
};
Product p1 = new Product
{
Id = 1,
Alias = "ALIAS",
FullName = "Great Product",
ShortName = "GP"
};
ProductDto pDto2 = _mapper.Map<ProductDto>(p1);
Assert.NotNull(pDto2);
Assert.True(pDto1.Equals(pDto2));
Assert.True(pDto1.Equals(pDto2, true));
}
[Fact(DisplayName = "ProductDto is properly reversed to Product")]
public void ProductDtoProperlyReversedToProduct()
{
ProductDto pDto1 = new ProductDto
{
Id = 1,
Alias = "ALIAS",
FullName = "Great Product",
ShortName = "GP"
};
Product p1 = new Product
{
Id = 1,
Alias = "ALIAS",
FullName = "Great Product",
ShortName = "GP"
};
Product p2 = _mapper.Map<Product>(pDto1);
Assert.NotNull(p2);
Assert.True(p1.Equals(p2));
Assert.True(p1.Equals(p2, true));
}
#endregion
}
}
To survive and thrive in the current state of persistent threats to IT systems, the chief IT security officer requires more innovative and integrated approaches and products. That's where security intelligence (SI) comes in.
This article explains IBM's concept of security intelligence, describes the security challenges of the mainframe environment and where mainframes intersect with SI, and details four steps to enabling mainframe SI. Part 2 will detail the IBM approach to enabling security intelligence in a mainframe environment and offer some tools to help with that implementation.
The topics in this article are more fully explained in the IBM whitepaper "Get actionable insight with security intelligence for mainframe environments" (see Resources).
What is security intelligence?
IBM defines security intelligence as:
- Threat analysis, real-time alerts, audit consolidation, and compliance reporting integrated into a single view of the risks affecting both mainframe and distributed systems
- Automated analysis and reporting that handle the complexity of event monitoring (involving people, data, applications, and infrastructure) without forcing analysts to work directly with raw log data
- Increased depth of insight and real-time anomaly detection
What are the security challenges in mainframe environments?
These security challenges, and therefore security intelligence itself, intersect the mainframe in the following areas:
- Complexity: The mainframe is an integral component of multiple, often large and complex, business services, which tends to make it difficult to identify and analyze threats.
- Visibility: Mainframe processes, procedures, and reports are often executed in silos; many roles, tasks, and responsibilities in mainframe administration are often highly compartmentalized. This can impede cross-enterprise information sharing that is necessary to combat threats.
- Compliance: Verification of compliance is frequently a manual task; problem alerts are often received only after the problem has occurred.
- Security change control: Change control procedures for security administration often are not followed or not even in place; this can threaten system availability.
The convergence of threat information, the analytics-assisted capabilities to deliver meaningful insights, and the automation of many complex security compliance and analytics tasks are the IT intersection points of security intelligence and the mainframe. There is a cost-oriented business advantage to mainframe security intelligence too; mainframe security management requires highly skilled administrators who may be in high demand and short supply. Automation, a single view of incoming event data, and analytics can act much like an administrator in expert pattern form, supplementing your staff.
How do I enable mainframe security intelligence?
These initial considerations are key to helping you plan how to enable mainframe security intelligence:
- Provide rich context that enables meaningful insights
- Reduce the complexities of mainframe security management
- Employ best practices to detect and prevent exposures
- Put security intelligence task-oriented operations into place
Figure 1 outlines a comprehensive approach based on security best practices that can help detect and prevent security and compliance exposures.
Figure 1. Security best practices approach
Enable meaningful insights
Few security solutions are broad and integrated enough to deliver insights that make a difference. For example, log management and security information and event management (SIEM) solutions typically provide large amounts of data with limited context, and that limited context restricts the insight the data can deliver.
The goal (and the purpose of security intelligence) is to identify who did what and when, recognize what is abnormal, and assess the subtle connections among millions (maybe even billions) of data points. Integration, and the increased visibility it affords, helps you uncover and respond to external, internal, and accidental threats. Integrated SI employs centralized logging, intelligent normalization of security data, visibility into network segments where logging may be problematic, and visibility into asset communication patterns.
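The normalization step can be illustrated with a small sketch. This is a simplified, hypothetical example, not any product's API: the source formats, field names, and common schema below are all invented for illustration. The point is that once heterogeneous events share one schema, "who did what and when" queries can span every source.

```python
from datetime import datetime, timezone

# Hypothetical raw events from two different sources with different field names.
racf_event = {"user": "JSMITH", "resource": "PAYROLL.DATA",
              "result": "VIOLATION", "time": "2014-03-01T10:15:00Z"}
firewall_event = {"src_ip": "10.0.0.5", "action": "DENY", "ts": 1393668900}

def normalize_access(e):
    """Map a mainframe access event into the common schema."""
    return {
        "source": "access-monitor",
        "actor": e["user"],
        "target": e["resource"],
        "outcome": "fail" if e["result"] == "VIOLATION" else "success",
        "timestamp": e["time"],
    }

def normalize_firewall(e):
    """Map a firewall event into the same schema, converting the epoch time."""
    return {
        "source": "firewall",
        "actor": e["src_ip"],
        "target": None,
        "outcome": "fail" if e["action"] == "DENY" else "success",
        "timestamp": datetime.fromtimestamp(e["ts"], tz=timezone.utc).isoformat(),
    }

events = [normalize_access(racf_event), normalize_firewall(firewall_event)]

# One schema means one query correlates failures across all sources.
failures = [ev for ev in events if ev["outcome"] == "fail"]
```

In a real deployment the normalizers would be driven by parser configurations rather than hand-written functions, but the shape of the problem is the same: many input formats, one correlatable output schema.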
Challenge complexities of mainframe security
In a large, cutting-edge, mainframe-based enterprise, it may be impossible for humans to keep up with the complexity and dynamic nature of the infrastructure. SI addresses this through automated monitoring, auditing, and reporting that can help distinguish between normal, baseline activity and suspicious events.
Detect and prevent exposures
Best practices are a useful part of the SI toolbox when it comes to detecting and preventing security exposures. Through advanced data collection, normalization, and analysis, activity outside of normal behavior ranges is flagged as an offense and is presented in a context that makes it easier to understand the incident so you can effect timely remediation.
You will generally try to apply these tools to achieve these three goals:
Accountability: Proving who did what and when comes from the ability to manage security-related information from networks, hosts, and applications across the IT infrastructure. Accountability correlates this information with an accurate picture of activity to achieve the forensic granularity necessary to investigate violations.
Transparency: Insight into business and IT assets that must be protected comes from visibility into security controls. Transparency enables the organization to assess its adherence to policies by extending visibility into network and application traffic and into the sensitive resources events governed by security rules.
Measurability: An understanding of your organization's security risk comes from the ability to assess and measure both compliance and threats. Measurability supports real-time awareness and responsiveness through interactive dashboards and reporting.
Put task operations into place
So how do you implement that accountability, transparency, and measurability triad? The operational tasks that many organizations perform to initially implement SI into their mainframe systems include the following:
- Collect and monitor data from initial data sources such as authentication events, operating system logs, anti-malware logs, firewalls, configurations, and file and directory auditing.
- Define use cases by examining key business challenges.
- Provide the security team and others with role-based access and customizable views into real-time analysis, incident management, and reporting so they can drill down into raw data and summarized security incidents.
- Provide management tools to summarize and analyze access control, remove unused access authorizations, and simulate the effect of new security rules before they are deployed.
- Roll out additional data sources (such as IDS/IPS data, database security logs, application logs, and physical security system logs) for greater context and potential intelligence.
- Build activity baselines for key metrics and monitor for meaningful anomalies.
- Deploy a risk management solution to analyze network and device vulnerabilities; this will help you shift your management style from reactive to proactive.
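The baseline-building step in the list above can be sketched in a few lines. This is a deliberately simple heuristic (a z-score against historical values), not any product's detection algorithm, and the metric and data are hypothetical: collect a history of a key metric, compute its normal range, and flag values that fall far outside it.

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a current value whose z-score against the historical
    baseline exceeds the threshold (a common, simple heuristic)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Daily counts of failed logons for one user over two weeks (made-up data).
failed_logons = [2, 3, 1, 2, 4, 2, 3, 2, 1, 3, 2, 2, 3, 2]

print(is_anomalous(failed_logons, 3))   # within the normal range -> False
print(is_anomalous(failed_logons, 40))  # a burst of failures -> True
```

Production SI platforms use far richer models (per-asset baselines, seasonality, correlation across metrics), but the principle of learning "normal" and flagging deviations is the same.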
- The topics in this article are more fully explained in the IBM whitepaper "Get actionable insight with security intelligence for mainframe environments."
- The IBM Redbook Introduction to the New Mainframe: Security provides a wealth of security fundamentals for the latest generation of mainframe hardware and software.
- Explore the IBM Security Framework for cutting-edge knowledge on IT security issues.
- Visit the IBM Security QRadar SIEM site to learn more about the technology. Learn more at the developerWorks QRadar community.
- Visit the IBM Security zSecure site to learn more about the technology. Learn more at the developerWorks zSecure community.
- Visit the IBM Security Guardium site to learn more about the technology. Learn more at the developerWorks Guardium community.
- Start your journey to implement IT security through pragmatic, intelligent, and risk-based practices at Security on developerWorks.