Aug 12, 2011 ... Hello everybody. I have installed the Windows CE 5.0 SDK and the Windows CE 6.0 SDK, and I have two devices, one with Windows CE 5.0 and one with Windows CE 6.0. The problem is that I can use Visual Studio 2005 to develop application ...
I have created an SDK with the CE 5.0 Platform Builder. It works fine in XP and in Vista, but when I upgraded to Windows 7/64 and installed VS 2005 it knows nothing about my SDK. Does anybody know a solution for this? Thursday ...
Are there any steps that I missed for attaching the Windows CE 6.0 emulator in Visual Studio 2005? THX~ ... You'll need to create an SDK for the images you are trying to use with Visual Studio and then install that SDK. After the SDK is ...
VS2005 offered Pocket PC 2003 + some other ones (I had the Windows Mobile 5.0 SDKs) as SDKs. But I wanted to ... You need VS 2005 SP1 to develop for Windows CE 6.0 SDKs; the errors you see have been fixed in SP1.
I want to develop an application for a Windows CE 6 mobile device. Now I need to configure the Windows CE 6 SDK for VS 2005. How can I do that? Thanks, Naresh Mende. Friday, August 7, 2009 8:14 AM.
Have you installed a CE 6.0 SDK? I think that's your missing piece. You can
circumvent the error you're seeing with a 5.0 target by manually copying the
indicated resource CAB to the device and running it before deploying ...
I have a Windows CE 5.0-based Platform Builder image. It is intended to be
installed on Visual Studio 2005. My team would like to upgrade our build tools to
utilize Visual Studio 2012, but Visual Studio 2012 ...
... undoubtedly also have to generate an SDK for your device so that application
developers can create applications targeting your device. Platform Builder for
Windows CE 6.0 plugs into Visual Studio 2005, but the latest and greatest
version of ...
Visual Studio 2005; Visual Studio 2008; Embedded Visual C++ 4.0. In all cases you need to have installed the STANDARD500 SDK library for Windows CE development. If you don't have it, please download and install the Microsoft SDK. Based on your ...
|
OPCFW_CODE
|
This post was co-authored by Rajeev Jain, Senior Product Marketing Manager at Azure Storage, and Henry Yan, Product Marketing Manager at Azure Storage
There is still time! Register for Azure Storage Day, a free digital event on Thursday, April 29, 2021, 9:00 a.m. to 12:30 p.m. Pacific time. Explore cloud storage solutions for all your workloads with us and learn how you can ensure scalability, security, and compliance with the right Azure Storage solution for every use case.
Here are five reasons to attend the Azure Storage digital event:
1. Find out about cloud storage trends from the experts
In his keynote session, Tad Brockway, CVP of Azure Storage and Networking, will discuss the forces currently driving cloud adoption for enterprise storage needs. For companies looking to the cloud to manage their rapidly growing data footprints, data storage and management are more important than ever. That is why Azure Storage is designed as a single destination for cloud storage solutions: there is a service for every type of enterprise workload.
2. See Azure Storage solutions in action
Check out demos for storage solutions and see how they would work in real use cases. Hear from product experts about a variety of storage services including:
- Azure Disk Storage for block storage.
- Azure Blob Storage and Azure Data Lake Storage for object storage.
- Azure Files and Azure NetApp Files for file storage.
Also, see how customers use these Azure storage services to find the right solutions for their workloads. From gaining insights from big data to reducing latency on mission-critical workloads, customers see great business benefits from using Azure Storage services.
3. Map Azure Storage services to your corporate workloads
Instead of a one-size-fits-all approach, Azure Storage services are designed with different workloads and IT environments in mind. This workload-first approach means that Azure Storage includes a variety of storage services, each intentionally designed to support specific workloads and scenarios. At the event, you will learn about the comprehensive portfolio of Azure Storage services and how to choose the services that best suit your needs.
4. Get answers to your storage questions
This digital event is your opportunity to connect with the cloud storage community, including Microsoft product experts and your storage and infrastructure peers. Use the event's live chat to ask storage questions and get insights from experts and peers on your specific scenario.
5. Learn best practices for migrating and modernizing app development
While “lift and shift” migrations are popular for bringing business-critical apps to the cloud, moving the underlying storage presents its own challenges. That’s why Azure Storage includes migration tools, frameworks, and best practices to help you move your storage to the cloud quickly and reliably. You’ll also learn how to run modern application development patterns like Kubernetes, as well as web, mobile, and serverless apps.
Join us at Azure Storage Day to learn more about these benefits, connect with Microsoft product experts, and find storage solutions for all your business workloads. Get an early look at the agenda, and we hope to see you there.
Register for Azure Storage Day today
Thursday April 29, 2021
9:00 a.m. to 12:30 p.m. Pacific Time (UTC-7)
|
OPCFW_CODE
|
My PortfolioTime 2022-10-05 19:15:53
Web Name: My Portfolio
Artificial Intelligence Engineer
Developed and evaluated Natural Language Processing Contextual QA models for the APEL U.S. Navy search engine. (Classified) [ Top Secret Clearance ]
Founder of web 10
web10 is a cloud platform where users own the products instead of developers. web10 provides encryption, databases, and peer-to-peer functionalities.
Machine Learning Research
Worked with Prof. Yuejie Chi on detecting sleep apnea. Our input features came from an exclusive children's hospital dataset of recorded EM brainwaves.
Developed decision-making algorithms for the U.S. Aegis anti-missile system, similar to Israel’s famous Iron Dome system. (Classified)
Control Systems Engineer
Applied control theory concepts to design a safe braking system for trailers. The system included ABS and ESC safety features. This will save lives.
Designed an aesthetic, eco-friendly, economical wall lantern. Sold 100 units in a Kickstarter campaign. Patented the design (Chochin).
Worked with Professor Sankar on close-range ultrasonic positioning for blind-person navigation (CMU Dept. of Computer Engineering).
Greenstar Group was a software contracting firm I started. We developed applications and software for clients in the financial and healthcare domains.
Software / Data Mining Research
Mined Travis CI/CD build data to inform best development practices. My mentors were Professor Claire Le Goues and Professor Bogdan Vasilescu.
Software Dev. at Uncommon Core
Uncommon Core is a differentiated learning platform providing a pencil-on-paper/tablet math curriculum for students, with all materials graded via machine learning.
Nationally ranked sprinter in high school. Tied the CMU 100m school record (10.7HT) in my freshman year. Three time CMU King of the Hill (2017-2019).
Machine Learning Teaching Assistant
Teaching assistant for the graduate version of Intro To Machine Learning (18-661). Held office hours, recitations, and graded homeworks.
Skills And Resume
A Computer Engineer With A Design Sense
Inventive, with a strong Carnegie Mellon University curriculum.
(15-410) Operating System Design And Implementation.
(18-461) Graduate Intro to Machine Learning for ECE.
(15-213) Introduction to Computer Systems.
(18-491) Digital Signal Processing.
Resume: Jacob Hoffman Resume
E Sport Grindset
I used to run track. Now coding is my sport. In regular sports, your body is the limit. In esports, your mind and hardware are the limits.
<<< Thank you for your visit >>>
|
OPCFW_CODE
|
Read here about the guidelines for suitable barcodes and recommendations for their use.
Barcodes are notably different from Image Targets and are much smaller which affects the conditions for successful detection and tracking.
Choose Suitable Barcodes
If you are printing your own barcodes and placing them, please take the following considerations into account:
- Code density and detection distance: The more data a code contains, the higher the density of bits in the code becomes at a given physical size, which in turn reduces the maximum distance the code can be detected from. If distance is a concern, try to fit in less data, e.g., by shortening URLs used as payloads. Also note that for some codes, such as QR Codes, the data density is a design factor that can be chosen to some extent, trading off detection distance against data capacity or error correction capabilities.
- Type-specific guidelines: Different code types come with their own requirements of how they are placed and printed, including mandatory quiet zones around the code. For optimal performance, these guidelines should be respected.
During barcode scanning, we recommend showing a reticle, such as a frame, to indicate to users that barcode scanning is active.
Scanning of 1-dimensional barcodes works most reliably if the barcode is either horizontally or vertically aligned with the camera image. If the primary scanning targets are 1D barcodes, we also recommend showing a horizontal scanline on the screen to help users align it correctly.
See the Barcode Scene in the Unity Core samples for an example implementation of 1D and 2D scanning reticles.
- Scanning reticle for 1D and 2D codes that allow selection by pointing (see the Pointing bullet in the next section).
- Scanning reticle for 1D barcodes that encourages horizontal alignment for improved recognition.
Single vs. Multiple Barcode Detection
The Barcode Scanner can be configured to run in two different modes:
- Single barcode detection where one barcode detection is active at a time.
- Multiple barcode detection where one or more barcode detections are active simultaneously.
For situations where only a single barcode is expected to be visible in the camera image, using the Single Barcode detection mode reduces CPU and power consumption. When multiple codes are visible, the code closest to the center of the camera view is in most cases the one reported.
If you expect multiple codes to be visible simultaneously, please use the Multiple Barcode detection mode. Since this will potentially produce more than one detection result, a selection mechanism is required to choose the right code automatically or manually. Possible solutions are:
- Automatic filtering by format and type: If you are looking for a barcode with a payload that has a unique format, regular expressions or similar mechanisms can be used to filter out unwanted detections. If possible, you should also configure the Barcode Scanner to detect only expected barcode types.
- Pointing: By showing a pointer or crosshair at a fixed location close to the center of the screen, you can guide users to point at the right barcode by moving the device. In many cases, this type of selection is most convenient and allows for one-handed interaction on mobile phones. For the most reliable selection, put the Barcode Scanner into Multiple Barcode detection mode and programmatically choose the code closest to the pointer.
The Barcode Scene in the Unity Core samples demonstrates this type of interaction.
- Tapping: In this mode, the outlines of all detected codes are drawn on the screen and the user chooses the desired code by tapping on the screen. While this may be more intuitive to some users, it is less convenient to hold the device stable with one hand while tapping with the other.
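The two automatic selection strategies above (filtering by payload format, then choosing the code closest to the pointer) can be sketched in a few lines. The following is a minimal, framework-agnostic Python sketch, not Vuforia API code: it assumes each detection has already been reduced to a payload string and a screen-space center point, which is an assumption about how you would adapt your scanner's actual result type.

```python
import re
from math import hypot

def select_barcode(detections, pointer, payload_pattern=None):
    """Pick one detection from a Multiple-Barcode-mode result list.

    detections: list of dicts with 'payload' (str) and 'center' (x, y)
                in screen coordinates -- a stand-in for whatever
                structure your scanner API actually returns.
    pointer:    (x, y) screen position of the pointer/crosshair.
    payload_pattern: optional regex; detections whose payload does not
                fully match are filtered out before the distance test.
    """
    if payload_pattern is not None:
        pattern = re.compile(payload_pattern)
        detections = [d for d in detections if pattern.fullmatch(d["payload"])]
    if not detections:
        return None
    # Choose the code whose center is closest to the pointer.
    return min(
        detections,
        key=lambda d: hypot(d["center"][0] - pointer[0],
                            d["center"][1] - pointer[1]),
    )
```

In an app, you would run this once per frame over the detections reported in Multiple Barcode detection mode, with the pointer fixed near the screen center as described above.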
Since barcodes are in most cases printed on white backgrounds, they tend to be brighter than the environment. If the barcode only covers a small portion of the image in an otherwise dark area, there is a risk that the camera exposure optimizes for the whole image and the barcode becomes overexposed. On optical see-through digital eyewear, where the camera image is never shown, the user may not even notice this. Therefore, the contrast of the barcode should be adjusted to the environment wherever possible. This is particularly important when showing barcodes on a handheld device’s screen.
To avoid overexposure of the barcode, choose a bright background behind it to encourage the camera to reduce exposure; this provides better conditions for detecting the barcode.
Barcode vs VuMark
VuMarks are an evolution of barcodes and QR codes that lets you embed data as strings, bytes, or numeric values together with a graphic background, such as a logo or icon, that improves detection and tracking. VuMarks can also be generated and retrieved with Vuforia’s web API.
Another difference is that VuMarks are optimized for 6DoF (six degrees of freedom) tracking, allowing you to place content in world coordinates, whereas barcode scanning is limited to 2D coordinates.
|
OPCFW_CODE
|
There has been an unprecedented growth in structured and unstructured data. More companies are contemplating and planning the use of machine learning (as part of artificial intelligence) to capture the value of data as a strategic asset.
Have you ever wondered about using off-the-shelf machine learning models from providers such as Google Cloud or AWS? This question often arises during initial discussions when planning a machine learning implementation.
Machine learning allows companies to learn from data and automate decisions in real time without being explicitly programmed and with minimal human intervention. This helps when companies can't express the problem as a simple (deterministic), rule-based code solution (read more here on machine learning).
To save development time and effort, some providers provide off-the-shelf (machine learning) models. An off-the-shelf model is one that has been implemented by someone else (plug and play solution). Alternatively, there are also bespoke machine learning models. These models are uniquely built for your company's needs.
Deciding between off-the-shelf and bespoke machine learning models largely depends on your finances, state of data and the outcomes you expect to achieve.
Comparison Between the Models
A comparison between off-the-shelf and bespoke machine learning models is as follows:
As a summary:
Off-the-shelf machine learning models have lower accuracy and lower cost. The amount of data required to achieve modest accuracy can be surprisingly small.
Bespoke machine learning models provide high accuracy but at high cost. They are usually used where accuracy is paramount, such as in autonomous vehicles or healthcare.
Off-the-shelf Machine Learning Model
An off-the-shelf model is good as a fast solution and is usually used for common tasks such as face recognition or speech recognition. It is a tool for domain experts with limited data science or machine learning background.
It is sometimes used in the initial phase to guide the development of a bespoke machine learning model. It helps fill the talent gap and greatly improve accessibility to machine learning.
There are, however, limitations to off-the-shelf models. Since the model is trained on an external dataset, its output has lower accuracy/precision when applied to the company's own data.
Companies may also face the risk of relying on a black box without understanding the logic behind its decision-making, which could result in reputational damage when things go wrong.
Companies should be aware that they may fall into the trap of getting things done quickly and at a lower upfront cost, and then spend years fixing solutions that didn't bring the desired results. Off-the-shelf tools usually pay off only in the short term.
Bespoke Machine Learning Model
Bespoke machine learning models are custom-built models. They provide a solution tailored to the unique needs of each specific application. They are more accurate, but they take longer to build, are costlier, and require a large amount of data to train.
Ensemble Machine Learning Model
A technique that combines several base models to produce one optimal predictive model is sometimes used. This is called an ensemble model, and it can increase accuracy. Ensembles of decision trees, such as random forests, are among the most popular and relevant ensemble models in data science today.
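To make the ensemble idea concrete, here is a minimal plain-Python majority-vote combiner. This is an illustrative sketch, not any particular library's API: it assumes each base model contributes one predicted class per sample, and the ensemble returns the most common vote for each sample.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class predictions from several base models.

    predictions: list of per-model prediction lists, all the same
                 length; element j of each inner list is that model's
                 predicted class for sample j.
    Returns one combined prediction per sample (the majority class,
    with ties broken by the first-seen label among the most common).
    """
    n_samples = len(predictions[0])
    combined = []
    for j in range(n_samples):
        votes = Counter(model[j] for model in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three weak classifiers that each make one mistake can still
# produce a correct combined prediction:
model_a = ["spam", "ham", "spam"]
model_b = ["spam", "spam", "spam"]
model_c = ["ham", "ham", "spam"]
ensemble = majority_vote([model_a, model_b, model_c])
```

The same voting principle underlies production ensembles such as random forests, where the base models are decision trees trained on different data subsets.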
It should be noted that there are areas where 99% accuracy is not required. In these cases, companies are happy with, say, 70% accuracy that an average human could never consistently surpass, while getting to automate the process.
Companies now need to unlock value from big data to stay ahead of the competition. They need to weigh the costs and benefits of off-the-shelf versus bespoke machine learning models, or an ensemble model. Striking the right balance requires in-depth thought about the required outcomes and the impact on the business.
What type/ types of machine learning models do you use? Share by leaving us a comment. If you require more information or expert advice to develop machine learning models, contact us. We want to be an extension of our clients. Subscribe to our newsletter for regular feeds.
Did you find this blog post helpful? Share the post! Have feedback or other ideas? We'd love to hear from you.
Emerj, What is machine learning?, https://emerj.com/ai-glossary-terms/what-is-machine-learning/, published 21 November 2019
Raconteur, Are off-the-shelf AI tools a good idea?, https://www.raconteur.net/technology/ai-tools-pre-built, published 12 May 2019
|
OPCFW_CODE
|
Have any of the questions been answered by the last two people? Did I miss their answers somewhere?
Vojta has already answered here.
Dan still hasn't, so far.
Huh. I could’ve sworn I read Hellboy’s responses yesterday.
Yeah, they’re both there now. I checked that page earlier today and didn’t see them so my browser must have been loading from local cache or something. But they’re there now so that’s cool.
Dan hasn't answered yet because of the winter holidays; you can expect his answers in the next couple of days.
Is this YOUR bug or is Henry a duck???
I don't care for the Spanish Inquisition!!!
In the last 2 weeks I have “quicksaved”, more than once!
I will be technical, and I do not have anything against technical answers.
1: In which cases do you rely solely on CryEngine's physics system, and in which cases do you use self-designed physics systems (if any)?
2: Have you modified the CryEngine core? If yes, to what extent?
3: What were the biggest problems you have encountered so far?
- What tools did you create, and for what purposes?
Some architectural/technical questions
1: What languages is your game written in? CryEngine is the core (C++), you have built your systems on top of it with plugins (I presume also C++), and then you have your AI/behavior/game-system scripts; what languages are you using there? I presume Lua, but I also saw Python in the install directory.
2: What do you use Python for?
3: What program did you use to make your animations (Maya? 3ds Max)?
4: What is the format of your animations?
5: What mesh types are your characters composed of (the standard skeleton and static meshes), or more?
6: Do you work with destructible meshes?
7: What IDE are you using? In your videos I saw IntelliJ.
1: What is your approach to optimizing NPC behavior?
2: What will be optimized graphically?
3: How many LODs will you include?
4: What is your approach to LOD changes?
5: In which cases are scripts more performant than using only CryEngine itself (if there are such cases)?
6: Will you use multithreading for AI calculations, or do you already use it to some degree? If so, will you implement it scalably, or will you assume a certain number of cores/threads and set that as a maximum?
Now it´s time to ask questions for our 3D Environment Artist Joukejan “Jouke” Timmermans from the Netherlands:
Haha, I would like to see Jouke's face if there were ever again a winter like we used to have 10-15 years ago… This winter is still nothing; I remember a few meters of snow in front of our house… we could jump from the third-floor roof and be absolutely fine.
Do you like Jan Žižka? Or which historic person do you think is best?
- Since you are making LoDs now, I was wondering what your opinion is about pop-in in games and LoDs in general. Is this something that could be eliminated in the future? Pop-in of geometry and objects is one of the most distracting and annoying visual issues in videogames today. Some games do a pretty good job of hiding it (Just Cause 2 or Witcher 3 come to mind), and some do a horrible job of it (Forza Horizon 3 has foliage popping everywhere; the Kingdom Come tech alpha and tech beta had terrible grass LoD issues).
Is the foliage and grass pop-in going to be well handled in the final version of KCD, so that it is non-distracting?
(1) Of all the assets you created, or know to have been created, that would be usable in the real world, how many are usable in the game?
(2) Are there any sword stands where characters can put their swords, with scabbards?
(3) Please name the conflicts you had with your historian (if there were any), because as an artist, artistic tendency somehow tends to overrule reality in the arts.
Here you can ask questions to our tester Jaroslav “Jantoš” Antoš now:
I’m at a loss. :о)
Damned pink mane.
Different rigs, different fps. We need to test several different cards and setups. The pic I took was on a stronger computer. It also heavily depends on the data you get in the morning. Bugs can create major FPS drops.
Hi, big Warhorsian brother, I am really interested in the hardware you are testing the game on (if you don't mind). I must know because I want to play KCD on high details, and in your screenshots of the game there is a frame rate (so I have something to compare against). Thank you a lot, knight Jantoš, from the kingdom of Ústí nad Labem.
Thank you for the deep and informative look inside WH.
This is actually the nicest part of the work as a tester…
Please describe the bad side of the job…
You really like the game. What kind of DLCs would make the game even better? Is this possible?
Who decides in the end whether this is a balance problem, this is a feature, this is a bug, or we can't fix this problem?
Unfortunately, with the way the game is constantly in development and everything changes quite often…
How many bugs do you think will be in the game in the end (at release)…
Is it possible to make an open world game without bugs?
After release will you play the game? Do you enjoy watching people play the game?
Will you make a video channel, with tips and tricks for playing KCD? ( You know all the hints!)
So… can we expect Daniel Vavra's answers?
Yes, I already have a plan for getting his answers. I remind him quite often…
1: Do you provoke unusual situations just to see whether realistic, or at least believable, NPC behavior appears on screen? For example, standing at one spot for a while just to see whether torches or fires are lit/extinguished by NPCs?
2: How do you behave when you see clipping errors?
3: In percent, how many bugs have you discovered, and how many of your discoveries have been fixed?
4: Are the reports from the beta still valid, or are the reported bugs no longer in the dev version?
|
OPCFW_CODE
|
This week, we check out the recently fixed vulnerability in Google Cloud Deployment Manager, and how to penetration test OAuth 2.0. On a higher level, we have Gartner’s classification of API security technology, and a recording of a panel discussion on API security.
Vulnerability: Google Cloud Deployment Manager
Google Cloud Deployment Manager is an infrastructure management service that makes it simple to create, deploy, and manage Google Cloud Platform resources. Ezequiel Pereira found an API vulnerability in Google Cloud Deployment Manager and collected a $31K reward from Google as a result.
Pereira found a way to make it invoke Google internal APIs that he was not supposed to invoke:
- He could invoke non-production versions of the GCDM API, called staging, that provided him internal information on the workings of the system. A classic example of API9:2019 — Improper assets management in the OWASP classification.
- He used these API versions to figure out how to invoke the APIs of Google’s internal services, including Global Service Load Balancer (GSLB).
- He took advantage of authentication logic that fell back to making calls through the service's own service account when user authentication failed.
Beware of non-production versions of your APIs being accessible externally and having in turn access to production systems and data. Such non-production versions are as much “the real thing” as the production versions and require the same considerations. Also, be very careful how you design your authentication flow.
Resources: PenTester’s Guide to OAuth 2.0 Authorization Code Grant
Maxfield Chen has published an extremely detailed penetration testing guide for the OAuth 2.0 Authorization Code Grant. This is by far the most popular way of using OAuth 2.0, which in turn is the de facto standard for web and API access control. Yet OAuth can be extremely confusing, and there are many ways OAuth implementations can go wrong.
Chen does a good job quickly recapping the flow and its components. Most importantly, he then proceeds to the main exploit scenarios and covers testing steps for each of them:
- Insufficient URI validation
- Referrer header leaking code and state
- Access token stored in browser history
- Other access token leaks
- Client secret leaks
- Lack of state
- Insecure state
- Reused state
- Invalid state validation
- Reusable authorization codes
- Implicit grant coercion
Definitely worth a closer look, if not as a pentester then as a reminder of what could go wrong.
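As a reminder of what correct handling looks like for the three state-related items in the list (lack of state, reused state, invalid state validation), here is a minimal Python sketch. It assumes a dict-like server-side session store; the function names are illustrative and not taken from any specific framework or from Chen's guide.

```python
import hmac
import secrets

def new_state(session):
    """Generate an unguessable state value and bind it to the user's
    session before redirecting to the authorization server.
    `session` is a stand-in for your server-side session store."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state

def check_state(session, returned_state):
    """Validate the state echoed back on the redirect URI.
    Pops the stored value so each state is single-use (guarding
    against reused state), and compares in constant time."""
    expected = session.pop("oauth_state", None)
    if expected is None or returned_state is None:
        return False
    return hmac.compare_digest(expected, returned_state)
```

Rejecting a missing or mismatched state ties the authorization response back to the browser session that initiated the flow, which is what defeats the CSRF-style attacks behind those list items.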
Analysts: Gartner’s Solution Path for Forming an API Security Strategy
A few months ago, Gartner published their report “Solution Path for Forming an API Security Strategy” by Michael Isbitski, Frank Catucci, and Kirk Knoernschild. This report helps identify the different elements in the puzzle of the API security tooling.
API security continues to be top of mind for security practitioners as APIs underpin modern application design, data exchange and system integration. We published a research note towards the tail end of 2019 that provides guidance around API security strategy. There is no shortage of free and paid tooling in this space, but they address specific aspects of the overall API security puzzle. Secure design, testing, discovery, classification, monitoring, mediation and threat protection require a multi-pronged approach that cannot be satisfied with one technology, nor is it one size fits all for organizations. API security is also not just use of TLS to protect data in transit or access control to restrict who can access a given API. These are controls that improve security, but they should not be where your API security strategy begins and ends.
And there is a nice diagram to make sense of the categories into which different API security tools fall. Obviously, some tools (like the API security platform by my employer, 42Crunch) can cover multiple categories:
Podcast: API Academy’s API Security Q&A Panel
The latest episode of API Academy is all about API security. Bill Oaks, Aran White, and Dmitry Sotnikov answer the frequently asked questions and cover a lot of API security ground in the discussion, such as:
- OWASP API Security Top 10
- Upcoming OpenAPI 3.1 release and why standards matter
- DevSecOps and API security
- Minimal steps for API security
- Why web application firewalls (WAFs) are failing for REST API security
- Machine learning / Artificial Intelligence vs defined API contracts and rules
- Schema validation
- Rate limits and quotas
- API responses: why they are also relevant and not just the requests
- IoT device authentication
- OAuth 2.1
- Certificate management
- SAML vs REST
- API key distribution
- API gateway and API firewall location
Get API Security news directly in your Inbox.
By clicking Subscribe you agree to our Data Policy
|
OPCFW_CODE
|
M: To Save the Climate, Look to the Oceans - LinuxBender
https://blogs.scientificamerican.com/observations/to-save-the-climate-look-to-the-oceans/
R: jakozaur
It omits one of the very promising, but somewhat controversial methods of CO2
sequestration:
[https://en.wikipedia.org/wiki/Iron_fertilization](https://en.wikipedia.org/wiki/Iron_fertilization)
Even in the most optimistic CO2 reduction scenarios, we still need negative
emissions that will offset existing CO2 or use-case where eliminating fossil
fuels is not practical (e.g. airplanes).
R: Valgrim
Whales play a crucial part in the natural iron fertilization of the oceans: they dive deep to eat large amounts of iron-rich krill and defecate near the surface, because they need to breathe. Their feces fertilize the phytoplankton that live near the surface. Because they move between deep and shallow water, and because they travel over large distances, whales act as nutrient pumps for the ecosystem.
R: pyronik19
Are you sure we don't just need them to talk to a giant probe ionizing our
atmosphere?
R: gonzo41
Better to keep em around, just in case.
R: austincheney
The article mentions power by wave energy. I could not tell from the article
if this is real or hypothetical. It sounds like a good idea but I have not
heard anything about this. That is a constant source of naturally occurring
kinetic energy. I did find the following on Wikipedia:
[https://en.m.wikipedia.org/wiki/Wave_power](https://en.m.wikipedia.org/wiki/Wave_power)
As a Texan I heard a lot about potentially tapping wind energy from floating
turbines off the Galveston coast but have not heard about this in practice
while the wind farms in west Texas are exploding in volume.
[https://en.m.wikipedia.org/wiki/Wind_power_in_Texas](https://en.m.wikipedia.org/wiki/Wind_power_in_Texas)
R: pjc50
Wave power has been a pipe dream since the 1970s; the problem is that water is
_too_ powerful and tends to destroy structures over time, as well as being
corrosive and full of living things that cause "fouling". It's also not been
very well funded.
Offshore wind turbines on the other hand are practically a mature technology,
while getting larger and larger over time (improves efficiency). There's loads
around the UK, although mostly fixed rather than floating. Floating is a less
mature technology.
R: JoeAltmaier
Perhaps it would work better in estuaries, with clean(er) river/fresh water.
The ecological impact may be prohibitive, but as an engineering task it seems
easier.
R: pjc50
Extracting tidal power from estuaries is another similar pipe dream - people
have been discussing building one in the Severn estuary for as long as I have
been alive.
Tidal power is intermittent but extremely predictable and not tied to the
diurnal cycle, both of which would be advantages.
R: notahacker
They've been successfully extracting power from the Rance estuary for over
half a century.
The trouble with estuaries is that in addition to the ecosystem damage and
change to flood dynamics associated with hydro power in general, they also
tend to be used for ships to navigate, so getting a scheme which suits
everyone is a major challenge.
|
HACKER_NEWS
|
Archer Instance Migration
Hello, Archer Community.
Apologies if this is a re-post. I've done some searching and wanted to ask again, as everyone's situation and approach could be a little different.
Current Environment: Archer 5.4 SP1 P2 HF1
New Environment: Archer 6.5 P1
Version 5.4 Solutions Used:
Incident Management (ACTIVE)
Version 6.5 Environment
Incident Management (ACTIVE)
and many other Use Cases...
We're trying to find the best method to get everything that's in 5.4 over to 6.5. The new 6.5 contains two environments, QA and Production. The approaches we've considered are Database backup/restore (Copy Down), or building and installing packages. We just want to get everything (Applications, Sub-Forms, Workspaces, Dashboards, iViews, Reports, Fields, etc.) into the 6.5 QA environment.
Important Note: Other developers are working in the same 6.5 QA environment, customizing and building new Solutions/Applications.
If we try the DB method, are there any special instructions for the Instance and Config databases? We need to get off the 5.4 environment ASAP (for obvious reasons).
Seeing how the 6.5 QA environment is already implemented, with new Instance and Config DBs, do we restore the 5.4 SQL DBs onto the 6.5 QA SQL Server and reconfigure DB connections in config files and the Archer Control Panel? And if all tests well, delete the newly created 6.5 QA SQL DBs?
Are there any known settings or items that will not be in the new 6.5 QA once the DBs are restored and connected? i.e. Banners, Appearance customizations, any functionality? I'm aware that Data Feeds would need to be reconfigured. Anything else?
Due to others working in the 6.5 QA environment and creating customized solutions, would Creating/Installing Packages be the best option so we don't over-write their work with the Instance and Config DBs?
I might be over-thinking this effort... Just want to make sure we cover everything.
Looking forward to your support, feedback, experiences, and links to any docs.
That is a big topic, and hard to cover all in one post. But I believe there were many such considerations already, so check them out:
Well, it should be like creating a new instance in an existing 6.5 environment. So, basically, aside from HW limitations and possible disruption in performance, it should not affect the existing instances much.
But surely, always take a backup, and perform the work during non-working hours.
That's sound advice!
Creating a new instance in the Archer CP? Then bring over the databases and configure Archer to point to them? What about the Config database? Or will there be no need to bring it over? Any need to re-run the 6.5 installer?
You cannot bring the Config DB over a working and existing environment; you must treat it as a new instance. Meaning you should also consider all company_files, Repository, etc. alongside.
And you do not need to rerun installer.
The quickest way to do this and have all of the data & configuration available in the upgraded version of the environment is to do a backup/restore of the Instance Database, but that would then overwrite anything that is currently being configured in that instance.
You would restore the 5.4 instance db to whatever server you’re going to run the 6.5 environment out of, create a new blank Config db, and then run the 6.5 installer. This will install the tables in the Config db and upgrade the Instance db. Then post-install you could delete the old 6.5 QA databases, if you’d like.
No other settings I know of that would need to be reconfigured except data feeds, and you’ll want to copy over your File Repository (attachments – if there are any), and rebuild your Search Indexes during the upgrade.
To cover what’s being done in QA, you could have them package that up, do your upgrade/db update, and then install their packages back in.
GRC Sales Engineer | RSA
I would upgrade your 5.4 instance in place, with backups of course. Then, once your primary system is on 6.5, move in the new work being done in QA by package. Then copy the back end databases and point QA to those copies, to synchronize the environments.
Truly appreciate your feedback and support, Sheila. So creating another instance (as mentioned by Ilya) won't be necessary. We're trying to avoid this route, as it is preferred to have one Archer URL, with one Config and one Instance DB. Sounds like the most logical method to achieve what we're trying to accomplish.
We cannot upgrade 5.4 in place as it's currently production and we need to move off of the infrastructure ASAP. It's more of a programmatic decision to set up the new 6.5 QA environment as a mirror of the current 5.4 Prod with the new QA customizations & Use Cases.
Appreciate the suggestion!
|
OPCFW_CODE
|
Dell's Ubuntu Linux Strategy Extends to China
From time to time, Dell does a poor job articulating its Ubuntu Linux strategy. But sources close to Dell and Canonical continue to insist the relationship remains healthy and “stronger than ever.” Here’s an update on Dell’s Ubuntu strategy — which includes a dramatic Dell-Ubuntu PC push in China.
First, some background: Dell began shipping Ubuntu preloads in mid-2007 on selected U.S. desktops. Dell’s decision to offer Ubuntu came only a few months after Microsoft launched Windows Vista. That certainly caught my attention.
By July 2007, I jumped on the Dell Ubuntu bandwagon, and hoped to eventually launch an Ubuntu-centric web site that tracked Canonical’s business strategy.
My business partner (Amy Katz) and I discussed the opportunity, and we ultimately funded WorksWithU’s soft launch in May 2008, and a full-fledged launch in November 2008.
So yes: Dell’s initial commitment to Ubuntu influenced our decision to launch WorksWithU. To Nine Lives Media Inc. (WorksWithU’s parent), Dell’s Ubuntu move was a significant watershed event for the desktop Linux market.
Still, Dell’s Ubuntu Linux strategy has suffered from some perception issues. First up, the Dell U.S. web site (www.dell.com/ubuntu) has stopped selling Ubuntu desktops from time to time, and instead emphasizes Ubuntu notebooks and netbooks.
In fact, a quick check of the Dell U.S. site today shows that Dell’s Ubuntu portfolio has been reduced to a single fully-baked device (a Mini 10n netbook).
Plenty of readers have complained to me about Dell’s inability to market Ubuntu desktops in the U.S. and abroad.
Now, The Good News
When it comes to Ubuntu, I still believe Dell deserves the benefit of the doubt. For Dell, Ubuntu has been a grand experiment. I hear the company has tested Ubuntu on everything from mobile internet devices to high-end servers.
That’s no small feat. Novell, Red Hat and Microsoft are entrenched on the server. Microsoft and Apple are entrenched on desktops. Yet Dell has continued to test and refine its Ubuntu strategy for nearly three years now. Most recently, Dell has bet its initial cloud partner program on only three companies. Canonical is one of them. Dell, it seems, has a sincere interest in Ubuntu Enterprise Cloud (UEC).
Has any other major PC vendor shown Ubuntu the same attention since 2007? I think not.
Emerging PC Markets
Generally speaking, I think Dell remains highly committed to Ubuntu. But perhaps not in the ways that some customers and Ubuntu users would like.
In mature PC markets like North America, Europe and Australia, Dell hasn’t done anything really dramatic with Ubuntu lately. But keep an eye on China. For example, visit www.dell.com.cn (Dell’s China web site) and type “Ubuntu” into the search bar.
As of this writing, the online search displayed at least seven Dell systems that offer Ubuntu as a pre-load option in China.
Now, let’s look at Canonical’s potential market opportunity in China. In 2008, total PC shipments in China reached 39.6 million units, up 9.3% from 2007, according to IDC. The research firm estimates PC sales in China rose about 2.7 percent in 2009, and will accelerate to a 21% compound annual growth rate through 2014.
Thanks to Dell, Ubuntu could potentially grab a significant piece of China’s PC market.
Based purely on emotion, I wish Dell would do more to promote Ubuntu in mature markets like the U.S., Europe and Australia.
But based purely on market opportunity, it’s easy to see why Dell has been making Blue Ocean Strategy moves — embracing Ubuntu Enterprise Cloud and Ubuntu netbooks in the U.S., and serving up far more Ubuntu options in China.
|
OPCFW_CODE
|
M: 7M Downloads: Why Angry Birds Is Free on Android - vamsee
http://phandroid.com/2010/11/26/rovio-over-7-million-people-helping-those-angry-birds-out-on-android-christmas-update-coming/
R: cletus
Personally I view this as bad news for the Android ecosystem.
The iOS/iTunes ecosystem is incredibly accessible. You don't need a credit
card (important for minors; a _huge_ market). You can buy credit everywhere.
iOS users seem willing to part with cash.
Google Checkout on the other hand is not available everywhere Android is.
People seem less willing to use it (eg 50k paid from this post). You can't buy
credit in the retail chain. Google gives developers 95%.
The last one is actually a _huge_ problem. Google's 5% will never pay for
retail distribution (Apple's 30% clearly does).
Developers have come to the obvious conclusion: they'd rather have 70% of
(potentially) a lot rather than 95% of much less.
iOS has the option of paid and ad-supported. Android only having ad-supported
(realistically) is a major disadvantage.
The only way Angry Birds could make it to Android is on the back of the
success of iOS. I wonder how much money their experiment is really making.
R: alex_c
A minor point: outside the US, Android prices are listed as (some examples
from featured apps):
~CA$1.96
~CA$5.08
~CA$3.05
~CA$1.01
It's completely irrational, but I actually find myself less willing to buy for
"untidy" prices. I didn't fully understand why Apple has its tiered pricing
system, but now I do.
R: darshan
That's relatively new, but I was glad when they changed it. To be clear, it's
not "outside the US", that's Canada -- the point is that prices are now listed
in local currency, including in the US.
Back to the point: while I'm pretty comfortable with translating Canadian
dollars, British Pounds, or Euros to US dollars, that's about as much as I can
do in my head. I'm much happier seeing ~US$1.17 than ¥99, and I'm much more
likely to buy the app.
So while I can see your point, I think it was definitely a change for the
better, and it probably resulted in increased sales.
R: wzdd
I think the point was that iOS apps have a set of fixed prices in each area
(in local currency). Here in the UK the cheapest non-free app is 59p
(corresponding with a 99c app in the US store), the next is £1.19 ($1.99), and
so on. This makes for a better user experience because people can think in
terms of the price brackets.
Here is a summary of the price brackets: <http://www.mcmnet.co.uk/news/the-app-store-explained-news>
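The fixed-bracket idea described above can be sketched as a simple lookup table. This is only an illustration: the 99c→59p and $1.99→£1.19 figures come from this thread, while the higher tier and the `local_price` helper are made up for the example.

```python
# Sketch of Apple-style fixed price tiers: each US tier maps to a fixed
# local-currency price rather than a raw exchange-rate conversion.
# 0.99 -> 0.59 and 1.99 -> 1.19 are the GBP figures quoted in the thread;
# the 2.99 tier is a placeholder assumption.
US_TO_GBP_TIERS = {0.99: 0.59, 1.99: 1.19, 2.99: 1.79}

def local_price(usd_tier, tiers=US_TO_GBP_TIERS):
    """Return the fixed local price for a US tier; reject non-tier prices."""
    try:
        return tiers[usd_tier]
    except KeyError:
        raise ValueError("not a recognised price tier: {!r}".format(usd_tier))
```

The point of the bracket system is exactly that the right-hand column is fixed per region, so users can think in a handful of tidy tiers instead of converted amounts like CA$5.08.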
R: dpcan
I'd love to know how many active installs there are of those 7M downloads. I
must have downloaded it 3 times to my devices and it didn't successfully run
on any of them.
We have an app with over 1.2M downloads and around 500k active installs. I
think the active install rate is low because it is a LITE version. Only - I
haven't put ads on the game yet, and have been wondering whether it's worth it
or not. If it only makes a little per day, then I'd rather just leave my free
users alone and stick with the paid upgrade.
I'd also like to know what kind of revenue they are making from ads on
Android. I feel like if we made our game completely free, we'd reach over 2-3M
downloads pretty quickly and triple our active install rate, and if that meant
we were making as much in ads as paid downloads, then I think we'd consider
going that direction. It's just too big of a maybe, and we can't seem to get
any inside info about ad revenues to make the leap - even when emailing
Mobclix and Admob, we get no response to this question.
Worst of all - I wonder if we are leaving money on the table by not being an
ad supported app.
Ho hum.
R: J3L2404
If you change from completely free to free w/ads you will incur quite a bit of
wrath in the reviews, at least with current users.
R: dpcan
The free version is VERY limited however. So, going from 10% of a game to 100%
would hopefully sway the reviews my way even with ads, but still - hard to
say.
I could start slow, with just ads on the home screen, then maybe put them
throughout.
R: SoftwareMaven
This seems to say really bad things for the Android marketplace. If a game as
good and popular as Angry Birds can't survive if charged for, it doesn't seem
like a good place for developers. Ad supported doesn't work unless you can get
to millions of downloads.
R: davidedicillo
Something the article doesn't keep in consideration is the fact that Angry
Birds is so famous that most likely those people on Android who downloaded it
have seen their iPhone friends playing it and praising it for months...
R: gregpilling
I have the game on my android phone and like it, so I put it on my wife's new
ipod touch. I am amazed at how much better the graphics and gameplay seem to
be on her ipod. The graphics seem much more detailed, and the controls seem
more responsive.
R: CountSessine
Byte code vs native code? I have a hard time believing that anyone would make
an Android game without the NDK (without which you're basically CPU starved -
Android isn't the platform for heavy CPU work), but maybe Angry Birds is
Dalvik-only?
R: mdaniel
I have it on good authority[1] that Angry Birds is a native-inclusive app.
Also, I wouldn't look down my nose at Dalvik-only apps. The JIT (which, AFAIK,
is the target of the litigation) gets pretty close to optimal code the longer
it runs - and theoretically games run the same code a lot more than your
average application.
In fact, my EUR0.02 is that native-inclusive apps (just like any Java apps
which rely on native libraries) are worse, in my opinion, because it limits
the number of platforms that one can execute the apps upon. And as a huge
Linux fan, current Mac user, and user of x64 Windows at work, I can assure you
that I get left out in the cold a LOT.
It's one level of bad to tie your app only to Windows, which coincidentally
includes 90% (last metric I heard) of the world. It's another thing entirely
to say, "oh, sorry, your Android isn't the same as my Android: too bad for
you."
1 = unzip -l AngryBirds_1.3.5.apk | grep -i '\.so'
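The check in footnote 1 can also be done portably with Python's `zipfile` module, since an APK is just a zip archive. A minimal sketch (the `native_libs` helper name is my own):

```python
import zipfile

def native_libs(apk_path):
    """List the .so entries bundled in an APK. A non-empty result means the
    app ships native (NDK) code rather than being Dalvik-only."""
    with zipfile.ZipFile(apk_path) as apk:
        return [name for name in apk.namelist() if name.lower().endswith(".so")]
```

This mirrors what `unzip -l | grep -i '\.so'` shows, without needing a shell.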
R: CountSessine
_Also, I wouldn't look down my nose at Dalvik-only apps. The JIT (which,
AFAIK, is the target of the litigation) gets pretty close to optimal code the
longer it runs - and theoretically games run the same code a lot more than
your average application_
That's interesting - are you sure about that? There's a big difference between
a static translation JIT and the more modern and memory-intensive
progressively-optimizing JIT's that are used outside mobile. My understanding
was that Dalvik's JIT was a one-time-only translator. It's difficult to find
any comprehensive benchmarks of the 2.2 JIT - there are plenty of benchmarks
showing the 2.1 interpreter eating CPU cycles compared to native code (think
20-to-1), and Google themselves claimed that the 2.2 JIT would mean a 2-5x
improvement in CPU efficiency, but I'm still waiting for a set of
comprehensive cross-platform benchmarks.
R: eli
I don't get why there isn't a version I can pay for to get rid of the ads.
They're annoying.
R: bryne
Because it'd be pirated ad infinitum and the game presumably makes more money
on the Marketplace with the ads included, no?
R: DannoHung
This is neither here nor there, but I'm wondering if anyone else has an
opinion on this: Does anyone find the physics in Angry Birds INCREDIBLY
frustrating? There seems to be very little rhyme or reason to the way momentum
is transferred. And the material modeling just makes me want to bite my tongue
off.
R: irons
I've heard this complaint before, and it puzzles me. Figuring out the odd
physical rules (to manipulate them) is almost the entirety of the game. If you
don't enjoy the physics, then you don't enjoy the game, but blaming the
physics for your non-enjoyment is like criticizing Pac-Man because you die
when you run into a ghost.
R: DannoHung
I don't believe that this is intrinsically correct. Part of what I feel makes
a game good or not is whether the rules are consistent and can be
extrapolated.
If every action is a special case, the game is frustrating to any player
trying to build a mental model of the cause and effect relationships within
the world.
Now on the other hand, the premise of a game and its basic interactions may
be enjoyable, or at least conceptually enjoyable, by themselves. There need
not be the call for the player to accept the gestalt as it is.
Personally, I'd like to see an Angry Birds clone that made it a little easier
to understand what's going to happen when you launch a game object in a
particular way.
R: irons
Inconsistency can certainly break a game, but where's the inconsistency in
Angry Birds?
I've got three stars on every level through the first ten worlds, and only a
couple of levels seemed to require lucky breaks for maximum points. 3-1 was
the one time I gave up and found the three-star solution on youtube.
R: shib71
Rovio could easily release a paid version without ads. It's practically ancient
tradition on the iPhone, which everyone is arguing is completely different to
the Android market. I just assumed that Rovio must be making more money from
ads than they ever could from sales. In what order did Rovio release versions
for alternate platforms? I would be interested in knowing whether there was a
point where they started preferring ad-support.
R: pilif
From a UI perspective I really hate the ads in angry birds on android: the
advertised products are never interesting and the ad is placed in a way that
it can be accidentally tapped during gameplay (happened to me multiple times).
I would GLADLY pay to get the ads to go away. Maybe they should add a premium,
ad-free, version
R: epo
And as lots of people have already pointed out, an ad-free version would be
pirated instantly. This way they make some money out of the Android user base.
Personally I suspect that once the hive mind gets to work even ads will stop
working and Android will simply not have any commercial-grade software
whatsoever.
R: mikek
"Because it makes more money that way" is the short answer. While this
explains why they have a free version, it doesn't explain why they don't have
a paid version as well. There are plenty of people who would pay $.99 to
remove ads. Piracy is the only explanation for this that comes to mind.
R: pasiaj
I think Rovio is putting all their weight into turning Angry Birds into a
global brand.
The mobile app market is still pretty limited. Getting beyond $10 million
revenue over all is incredible but nobody has any experience on sustaining
success in the mobile market in the long run.
Investing in the brand, on the other hand, gives Rovio a lot more options -
licensing deals, a movie deal or whatever. It is a tried and tested model.
If Rovio got 7 million extra fans and a lot of free media coverage by releasing
it for free on Android, that in itself might be a better deal than half a
million in revenue from sales, even if you don't count the advertising
revenue.
R: Indyan
Developer of Angry Birds: "Really Big and New Project Underway"
[http://techie-buzz.com/mobile-news/angry-birds-new-project.html](http://techie-buzz.com/mobile-news/angry-birds-new-project.html)
R: dhughes
Still stuck on 7/11 :(
|
HACKER_NEWS
|
Led by Agile Coach Marcus Ward, a group of procurement managers gathered together on Zoom yesterday to brainstorm the key roadblocks facing procurement and their solutions.
Marcus began the day by asking the group to share one-word answers to the question “If Carlsberg made Procurement”. Referring to a well-known ad campaign by Carlsberg, the question asked what procurement would look like in a perfect world (without any roadblocks). Here are the answers:
“Responsive” was the most common answer, reflecting the theme of the previous day’s PASA Premier Confex and the profession’s agile response to the enormous challenge posed by COVID-19.
Similarly, attendees were asked to send themselves “postcards from the future”; again, indicating what they would like procurement to be. Answers included:
- Procurement included in key strategic decisions early on
- Trusted advisors to the business
- Opinions sought
- Being a part of creating strategy, not just following it
- Problem-solvers who are plugged into the data
- Aligned and delivering meaningful value
- Experimenters with a purpose.
Marcus then introduced the team to MURAL, a digital workspace for visual collaboration. MURAL is packed with features, but for the purposes of the day we made use of its stickynotes to recreate the experience of an Agile workshop (which always goes through reams of post-it notes). First, the team brainstormed the roadblocks facing procurement.
- Procurement not being consulted at the beginning of a project. (Why not?)
- Customers expected an Amazon-like buying experience. (Why does it take so long? Why do I have to fill in these forms? Why is there a conflict between speed and compliance?)
With limited time, the team decided to focus on the first roadblock: procurement being brought in too late.
Five whiskies and a hotel
This headline refers to five question words beginning with “W” (Who, What, When, Why, Where) and one beginning with “H” (How). When applied to the roadblock above, this is what we came up with:
The final session of the day finally gave the team the opportunity to brainstorm solutions to the roadblock. Now that everyone had a great understanding of the topic, the virtual stickynotes flowed fast:
Here are the top five solutions, as voted on by the team:
To get invited into a project earlier, procurement needs to:
- Invest time in building relationships with stakeholders
- Understand business drivers and speak the language of various functions, e.g. Finance.
- Gain a better understanding of enterprise strategy and business challenges
- When invited late to a project, highlight bad outcomes and show stakeholders what could have been possible if procurement was consulted earlier.
- Focus on educating employees on how to get the most value out of working with procurement.
Interested in learning more about Agile Procurement? Download the PASA Agile White Paper here: https://pasaagile.com/download-the-pasa-agile-white-paper/
|
OPCFW_CODE
|
import tkinter as tk
from tkinter import simpledialog, messagebox
from optimizer import Runner
from analyzer import ThetasAnalyzer
from utils import extract_thetas_records
corr_fns = {1: "avg_auto_corr", 2: "max_auto_corr",
3: "avg_cross_corr", 4: "max_cross_corr"}
class GUI(object):
def set_cores(self):
cores = simpledialog.askinteger("Cores", "Number of cores available",
parent=self.root,
minvalue=1, maxvalue=50)
if cores is not None:
print("No of cores was set to {}".format(cores))
self.coresVar.set(int(cores))
else:
print("No multiprocessing is used")
def quit(self):
self.root.quit()
def optimize(self):
dim = self.dimensionVar.get()
corr_fn_name = corr_fns[self.corrChoiceVar.get()]
epochs = self.thetasAmountVar.get()
file_name = self.filenameVar.get()+"__"
path = self.pathVar.get()
stop_criteria = self.stopCriteriumVar.get()
cores = self.coresVar.get()
runner = Runner(dim)
results = runner.optimize(corr_fn_name, epochs, stop_criteria=stop_criteria, cores=cores)
full_name = runner.save_results(file_name, results, file_path=path, file_format="json")
self._recent_files.append(full_name)
def show_recent_files(self):
print(self._recent_files + self._analyzed_thetas)
def analyze_thetas(self, save=True):
path = self.pathVar.get()
cutoff = self.cutoffVar.get()
try:
file_name = self._recent_files[-1]
theta_collections = extract_thetas_records(path, file_name)
        except IndexError:
            raise FileNotFoundError("No thetas were optimized during session")
dim = self.dimensionVar.get()
analyzer = ThetasAnalyzer(dim)
groups = min(len(theta_collections.thetas), int(dim*0.75))
sorted_thetas = analyzer.sort_thetas(theta_collections.thetas, groups)
cov_pca_reductions = analyzer.cov_pca_reductions(sorted_thetas, cutoff_ratio=cutoff)
if save:
path = self.pathVar.get()
file_name = file_name.split("__")[0]
sorted_name = analyzer.save_sorted_thetas(sorted_thetas, file_name + "_sorted__", path)
self._analyzed_thetas.append(sorted_name)
cov_reductions_name = analyzer.save_cov_reductions(cov_pca_reductions,
file_name + "_cov_reductions__",
path)
self._analyzed_thetas.append(cov_reductions_name)
def show_about(self, event=None):
messagebox.showwarning("About",
"DogeHouse Productions NLC (No liability company)")
def show_vars(self):
print(self._recent_files, self._analyzed_thetas, corr_fns[self.corrChoiceVar.get()],
self.pathVar.get(), self.coresVar.get(), self.dimensionVar.get(),
self.stopCriteriumVar.get(), self.filenameVar.get())
def __init__(self, root):
self.root = root
self.root.geometry("600x400")
self.root.title("GDFT Optimizer and Analyzer")
self._recent_files = []
self._analyzed_thetas = []
self.corrChoiceVar = tk.IntVar()
self.coresVar = tk.IntVar()
self.thetasAmountVar = tk.IntVar()
self.dimensionVar = tk.IntVar()
self.stopCriteriumVar = tk.DoubleVar()
self.filenameVar = tk.StringVar()
self.pathVar = tk.StringVar()
self.cutoffVar = tk.DoubleVar()
self.corrChoiceVar.set(1)
self.coresVar.set(2)
self.dimensionVar.set(4)
self.stopCriteriumVar.set(0.5)
self.filenameVar.set("Enter a name")
self.pathVar.set("data/")
self.thetasAmountVar.set(10)
self.cutoffVar.set(0.05)
self.set_menus()
self.set_widgets()
def set_widgets(self):
tk.Label(self.root, text="Dimension").grid(row=1, column=0, sticky=tk.W)
dim_entry = tk.Entry(self.root, width=50, textvariable=self.dimensionVar)
dim_entry.grid(row=1, column=1)
tk.Label(self.root, text="Amount").grid(row=2, column=0, sticky=tk.W)
thetas_amount_entry = tk.Entry(self.root, width=50, textvariable=self.thetasAmountVar)
thetas_amount_entry.grid(row=2, column=1)
tk.Label(self.root, text="Stop criterium").grid(row=3, column=0, sticky=tk.W)
tk.Entry(self.root, width=50, textvariable=self.stopCriteriumVar).grid(row=3, column=1)
tk.Label(self.root, text="File name").grid(row=6, column=0, sticky=tk.W)
tk.Entry(self.root, width=50, textvariable=self.filenameVar).grid(row=6, column=1)
tk.Label(self.root, text="File path").grid(row=7, column=0, sticky=tk.W)
tk.Entry(self.root, width=50, textvariable=self.pathVar).grid(row=7, column=1)
tk.Label(self.root, text="Correlation to be minimized").grid(row=8, column=0, sticky=tk.W)
tk.Radiobutton(self.root, text="Average auto correlation", value=1,
variable=self.corrChoiceVar).grid(row=9, column=1, sticky=tk.W)
tk.Radiobutton(self.root, text="Max auto correlation", value=2,
variable=self.corrChoiceVar).grid(row=10, column=1, sticky=tk.W)
tk.Radiobutton(self.root, text="Average cross correlation", value=3,
variable=self.corrChoiceVar).grid(row=11, column=1, sticky=tk.W)
tk.Radiobutton(self.root, text="Max cross correlation", value=4,
variable=self.corrChoiceVar).grid(row=12, column=1, sticky=tk.W)
optimizeButton = tk.Button(self.root, text="Optimize", command=self.optimize)
optimizeButton.grid(row=12, column=3, sticky=tk.E)
tk.Label(self.root, text="Cutoff ratio for variances").grid(row=15, column=0, sticky=tk.W)
tk.Entry(self.root, width=50, textvariable=self.cutoffVar).grid(row=15, column=1)
analyzeButton = tk.Button(self.root, text="Analyze", command=self.analyze_thetas)
analyzeButton.grid(row=15, column=3, sticky=tk.E)
def set_menus(self):
the_menu = tk.Menu(self.root)
file_menu = tk.Menu(the_menu, tearoff=0)
file_menu.add_command(label="Quit", command=self.quit)
the_menu.add_cascade(label="File", menu=file_menu)
# ----- SETTINGS MENU -----
settings_menu = tk.Menu(the_menu, tearoff=0)
settings_menu.add_command(label="Cores",
command=self.set_cores)
the_menu.add_cascade(label="Settings", menu=settings_menu)
# ----- HELP MENU -----
help_menu = tk.Menu(the_menu, tearoff=0)
help_menu.add_command(label="About",
accelerator="command-H",
command=self.show_about)
the_menu.add_cascade(label="Help", menu=help_menu)
self.root.config(menu=the_menu)
if __name__ == "__main__":
root = tk.Tk()
gui = GUI(root)
root.mainloop()
|
STACK_EDU
|
Typing transcription in various word processing programs - Paul ten Have
The following table summarizes some suggestions which are based on my own experiences, which are, of course, limited.
| Symbol | Example | WordPerfect 5.1 (1) | WordPerfect 6.1, 7 | MS-Word 7 |
|---|---|---|---|---|
| degree sign | soft | Alt-248 | Ctrl-W, (6,36) | Input>Symbol |
| high point | ·hh | Alt-250 | Ctrl-W, (6,32) | Input>Symbol |
| up arrow | high key | Alt-24 | Ctrl-W, (6,23) | Input>Symbol |
| down arrow | low key | Alt-25 | Ctrl-W, (6,24) | Input>Symbol |
| ? and , combined | mild rise | Shft-F8,4,5,1:,? | ‘overstrike’:,? | not available |
When transcribing episodes in which one participant’s talk overlaps with that of another, indicated by the use of square brackets, it helps to align the portions of simultaneous speech as precisely as possible. This creates special difficulties with modern word processors, which tend to use ‘proportional fonts’ (also called ‘variable-pitch fonts’). With such fonts, the horizontal space a letter is accorded on the line varies with its size, ‘w’ getting more than ‘l’, etc., and with the number of letters in relation to the length of the line. This implies that the exact place where an overlap starts or finishes can vary when something is added, when the margins are changed, or when a different font is chosen. As a solution one can try using a ‘fixed-pitch’ or ‘monospaced’ font, but it may require a bit of experimenting with one’s word processor’s fonts as well as one’s printer. An alternative method is suggested by Charles Goodwin (1994), who puts a TAB before the bracket and adjusts the TAB-stop using the ‘Ruler Bar’(2).
Another suggestion of his (cf. Goodwin, 1994) is that it can be useful to use a word processor’s table feature to type the transcripts. One can define columns of different width for different purposes such as ‘line number’, ‘time’, ‘arrows’, ‘speaker’, ‘utterance’, and ‘notes’. A ‘landscape’ format may be helpful so that each row can be longer than usual. This ‘notes’ column may be used to add ‘observations’ on hard to transcribe details, such as tone of voice, or - in the case of video tapes - visual aspects. Alternatively, or in an additional column, one might add ‘analytic’ comments, pointing out remarkable phenomena that deserve attention in a later phase, etc. In presentations or publications, such non-transcript columns can be deleted and the table lines can be hidden (by changing the preferences for line display in the lay out menu to ‘none’).
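The bracket-alignment problem described above can also be handled in plain text before pasting into a word processor. A small sketch, assuming a monospaced font (the `align_overlap` helper is hypothetical, not from the original text):

```python
def align_overlap(lines, col=16):
    """Pad the text before the first '[' so that overlap brackets on short
    speaker lines all start at column `col`. Lines whose text before the
    bracket is already longer than `col` are left as padded by ljust (a
    no-op), and lines without a bracket are returned unchanged.
    Only meaningful when rendered in a monospaced font."""
    aligned = []
    for line in lines:
        head, sep, tail = line.partition("[")
        if sep:  # line contains an overlap bracket
            aligned.append(head.ljust(col) + "[" + tail)
        else:    # no bracket: leave the line untouched
            aligned.append(line)
    return aligned
```

For example, `align_overlap(["A: we were [talking", "B: [yes]"], col=12)` lines both brackets up at the same column, which survives font changes only if the final document keeps a fixed-pitch font.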
1. Using the Numeric Key Pad
2. Consult your word processor’s ‘Help’ for how to use TAB-settings and the Ruler Bar
|
OPCFW_CODE
|
Understanding Skeptics rules!
Why was this question Did Trump say “going loco”? closed as off-topic for “not challenging notable claims such as pseudoscience”, while the following one was not: Did Barron Trump wear a “I'm with Stupid” shirt next to his father?
What notable claims does the latter question challenge which the former does not?
You are right, the other question was also non-notable (even searching for the image leads to our site first...)
In the case of "going loco", the point is: who cares if he said those exact words? Have they any significance that it's worthwhile exploring? If he said "going nuts" or "going crazy" what would change?
There's perhaps a notable claim in there, like "did Trump accuse the Federal Reserve of going crazy?", but asking about the exact quote, in my opinion, is not really notable at all.
I agree, the reason I asked is that I could find only headlines and not the video in which he actually used that expression. Who cares if he really said that? Well, I did... but I probably posted the question on the wrong site.
"Who cares" is a reason to downvote and find a more interesting question to answer, not a close reason. I'm equally baffled that anyone cared to ask or answer that question - but for whatever reason, plenty of people did find it interesting. We have no grounds to say "My opinion of what is interesting is valid, yours isn't". I think you're mixing up (objective) notability with (subjective) interestingness.
On Skeptics, there's a concept of notability which pretty much encompasses "who cares". We only accept questions about topics where a lot of people care - unlike other sites. I did not mean to offend by it. It's literally a legitimate question we need to answer.
Yeah, the definition of notability has changed so often I can't keep track. It used to be "Is this an idea that people are being actively exposed to and believe, not just someone's misunderstanding or pet theory?". There were objective criteria for that. Now apparently it has to be "Is this interesting to every moderator, and every user who sees it and has close privileges, bar four"? That's completely subjective and luck-of-the-draw. Also, voting patterns clearly prove that many people don't share our opinion that the question isn't interesting. I'm baffled as to why, but it's a clear fact.
Notability has never changed definition: it's "something that many people believe to be true". We might disagree on whether something fits the bill, but the definition is what it always has been. And -- as users of the site -- moderators will vote according to conscience and bringing that into the discussion is just pointless
If there are many articles that say that Trump described the Fed using the specific word "loco" (which there are, as can be verified by Google search), isn't it reasonable to suppose that many people believe this to be true? If the exact choice of word is not important and wasn't part of the information that these articles intended to communicate, then why bother to use quotation marks in their titles?
You seem to give different definitions of notability in your answer and your last comment: "who cares if he said those exact words" is different from "do many people believe that he said these exact words".
@sumelic Well, the gist of the article is not "outrage! he said 'loco'" but "outrage! he thinks they are crazy". The emphasis on the specific word is not in the article but it's all in the OP's personal perspective. Of course, perhaps there is such a notable claim, but it's up to the OP to convince us of it.
@sumelic - I guess that this “notable” issue is, on this site, a very fine point which, apparently, only long-standing users have been able to fully understand in all its aspects. As a new user I am confused, but as they said, who cares if.....
@user070221 the reality is that the rule is not a specific, objective criterion; it never has been and probably never will be. As such, there are different opinions among different people.
That only works if n (notability) and c (caring) are unrelated, but in fact I posit that c is necessary for n: if people don't care, it can't be notable, because people don't believe stuff they don't care about.
This obligatory XKCD comic was unfortunately published a bit too late, but I think it captures the spirit of "who cares": In view of the fact that nothing of importance hinges on the truth or falsity of this statement, not much time need be consumed to ascertain whether this is truth or fiction.
|
STACK_EXCHANGE
|
Difficulty is encountered when configuring a remote Distribution Center (DC) for use with a LiveUpdate Administrator 2.x (LUA 2.x) server. What are the common points of failure?
Messages displayed when connections are tested:
"Connection failed." instead of the desired "Connection to lua2313_test was successful." (if lua2313_test is the name of the remote DC)
One LiveUpdate Administrator 2.x server is usually all that is required, even in large environments. A well-resourced LUA 2.x server can successfully distribute content to up to one hundred Distribution Centers (DCs) throughout the corporate network. Endpoints and other Symantec products can then download their necessary updates from these convenient DCs.
LUA 2.x has capabilities built in to ensure it can communicate with its DCs. When "test connection" is clicked in the LUA console, the LUA server will attempt to post a small file called minitri.flg to the Distribution Center using the configuration provided.
There are many possible causes of failure, but the most common are (1) misconfiguration of the path, and (2) permissions.
Links to the official Symantec knowledgebase articles on creating DCs can be found at the bottom of this page. Following these procedures will enable a remote DC to be created.
There is also an excellent illustrated article in the Symantec Connect forums on Configuring Distribution Center in LUA (https://www-secure.symantec.com/connect/articles/configuring-distribution-center-lua) - merely examining how a LUA 2.x server's example DC was correctly configured will help to confirm the correct procedure has been followed.
Is the URL Correct?
One very common cause of failure is that the path to the DC, as configured in the LUA, is wrong. Navigate to the "Edit Distribution Center" screen, and simply copy the URL and paste it into the address bar of an internet browser window. Does that URL actually exist? Can that configured URL be accessed and opened from this computer?
Misconfiguration is common, especially if the DC is a new HTTP or FTP site hosted on a remote IIS server. (When creating a new site on an existing IIS server, a unique port number must be used. Attempting to connect to the pre-existing IIS server site's default port 80 will fail.) In the example shown below, the IP and port were corrected for the environment, and the status of this DC changed to "Ready."
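The same reachability check can be scripted from a command line. This is only a sketch: it assumes `curl` is available, and the DC URL below is a placeholder to be replaced with the URL configured on the "Edit Distribution Center" screen.

```shell
# Placeholder DC URL - substitute the host, port, and root directory
# exactly as configured in the LUA console.
DC_URL="http://10.10.8.94:8080/LUADC/"

# Fetch only the HTTP status code; discard the body.
http_code=$(curl -s --connect-timeout 3 -o /dev/null -w '%{http_code}' "$DC_URL")

# 404 suggests a mistyped Root Directory; 401/403 point at permissions;
# 000 means the host or port could not be reached at all.
case "$http_code" in
    2*|3*)   echo "DC URL is reachable (HTTP $http_code)" ;;
    404)     echo "Path not found - check the Root Directory setting" ;;
    401|403) echo "Reachable but access denied - check credentials" ;;
    *)       echo "Connection problem (HTTP $http_code)" ;;
esac
```

A 2xx/3xx answer here does not guarantee that LUA can also post minitri.flg to the DC, but a failure narrows the problem down to path or network configuration before the logs are consulted.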
Logs from the Remote DC
Examining the LUA server's logs alone will not provide all the answers. Assuming that the remote DC is on an IIS server with IP 10.10.8.94: examining that IIS server's logs may provide some clues. (Also consult Microsoft's article The HTTP status codes in IIS 7.0 and in IIS 7.5 to learn what the code numbers mean!)
In this case there was a misconfiguration. When the administrator attempted to configure the DC ("Edit Location" screen), they entered a "Root Directory" called "LUAtypoDC." These IIS logs show that when the LUA server attempts to test the configured connection, it cannot find any directory of that name.
In the example below, examining the IIS log made it clear that the username was entered incorrectly in the LUA console:
Not all of the fields are mandatory on the "Edit Location" screen where the DC is configured. If the contents are not located in a subdirectory, leave "Root Directory" blank. If a proxy server is not involved, leave those fields blank.
When supplying "login credentials for distribution access to remote location," it is generally just necessary to provide the username. Attempts to also add the domain name may cause the connection attempt to fail.
Playing with the Proxy
After any changes are made to the proxy configuration (Configure > Source Servers), stop and restart the LUA Tomcat service on the LUA 2.x server. This ensures that new credentials and other details about the proxy are in effect at all levels of networking. Changing proxy-related details in the GUI and saving them is not sufficient.
If Technical Support's assistance is required:
This is one of the few situations where it is preferable to put the LUA 2.x server into DEBUG: ON mode. By default, the lua-application logs record only whether there was a success or failure, without any extra details.
Example of a failed connection with DEBUG off:
2011-12-05 15:07:31,000 [http-apr-/0.0.0.0-7070-exec-23] INFO config.ConfigManagerUtil - testServerConnection(Production/Test server) returns:false , server name: lua231_test
Example of a failed connection with DEBUG on (MUCH more information on what configuration LUA is trying to use):
|
OPCFW_CODE
|
Yes! If anyone is interested in helping with this, please see the following as a starting point:
Maybe they are focusing on React and Apollo with the hope that Facebook will acquire the team and technology. Vue would just distract from a strategy like that.
I’m not sure about that, I think they’re just tight on resources with too many fronts. It seems that if the community can support in providing the guide and tutorial material for vue they might endorse it officially.
When talking about view layer integration, at least when it comes to Vue, I think an important aspect is often overlooked - that the integration itself is actually quite thin. @akryum, a core team member of the Vue project, has done the work. vue-meteor-tracker is 261 lines of code and vue-component is approximately 1500 lines of code. The former gets us tracker integration and the latter enables use of Vue’s single file components with Meteor. It seems to me that these two packages really cover the most important elements of what is necessary between Vue and Meteor. And this is a good thing. The connection area between a framework and library should be small and well defined (and I’d call sub 2k LOC reasonably small here). Less chances of things breaking in the future and less worries about maintenance.
A good Vue integration in my opinion is not one which provides a lot of behind-the-scenes automatic connections between Vue’s reactivity and Meteor’s reactivity, but quite the opposite - a good integration is one where the link between Vue and Meteor is as small and deliberate as possible. So that the Vue code you write for a Meteor project would be as close as possible to Vue code you’d write with any other backend, and the small parts of your code where you inject Meteor’s reactivity into Vue’s reactivity would be clearly visible and understandable. This will give you three major benefits. Firstly, you’ll be able to effortlessly onboard new people into your Vue/Meteor project from the vast pool of Vue devs that are not familiar with Meteor. Secondly, if you’d ever want to switch away from Meteor, most of your Vue code would likely be easily transferable to the new backend as it is not coated with a heavy layer of Meteor-specific abstractions (an important insurance policy for most CTOs, I’d say, when an employee tries to sell them on Meteor). And thirdly, you need not worry about the implementation of the integration falling out of date since it’s only a small chunk of code and presumably easy to maintain, even if a completely new maintainer would need to pick it up. It seems to me that these two packages mentioned above have opted for such an approach and this is a really good choice in my opinion by @akryum.
People put value in ‘official support’ of view layers in Meteor to alleviate their fears that at some point an integration will be abandoned and they will be left in a bad place with their project. Taking the above into account, a good first step to address such fears with regard to Vue and Meteor would be a blog post by @akryum on the official Meteor blog describing what are the dots that he has needed to connect between Vue and Meteor to make the integration work. And importantly, describing that the integration is effectively very thin (you will not depend too much on him continuing to maintain that 2k lines of code) and that the integration is thin not because of lack of resources or commitment, but because this is the best solution that makes your Vue/Meteor project future proof and avoids vendor lock in and enables bringing in new developers quickly.
Also, a very helpful resource for me has been this repo by @efrancis . From the little contact I’ve had with him he seems a very nice guy. If we’d be able to convince @akryum to make the initial post describing the Vue-Meteor integration itself, then perhaps @efrancis could follow with a blog post describing the choices he has made in his opinionated starter kit. Two very big ifs. But who knows, maybe if there’s enough peer pressure on the forums the guys will find the time?
This regarding communication. Regarding the Vue integration itself, it is likely that when more people pick up Vue with Meteor there will be a need to improve certain aspects of it. In my opinion this would be a good place for some modest crowd funded action, similar to what took place recently with mup here, if @akryum would be interested. Although I must say that one essential reason why Vue is as good as it is today is in my opinion that Evan You keeps the focus on things and filters out the good feature requests from the many bad ones. Having the same kind of filter on the Meteor integration would be essential to prevent it from bloating up. So crowd funding, if necessary at all, would require someone with authority (be it @akryum or anyone else) to be able to keep things lean and in focus as well.
@vooteles Thank you for taking the time to write this and break things down.
@akryum you have done a lot of the work already, please help us cross the chasm and make meteor the tool of choice. Also for people like me who depend very much on meteor (having learnt it over 4 years) and been helped hugely by the simplicity of blaze, to have a front end layer like vue will just be amazing. The very best of all the worlds!
@hwillson you have always been there when we needed you, we need you again…
@abernix and others, we are waiting for a vue version of the meteor guide, to give us some faith that mdg will at least keep an eye out for vue and projects that run with it.
Thank you all!
Please keep me honest, but here is a proposal for moving forward with this. If the folks in the community who are using vue can show meteor some love by:
Then MDG can:
4. Support the vue/meteor integration effort (closing any open issues etc.)
5. Update their website to add vue to their main page
Thanks for catching this! Sorry, I’m a react user
Read this thread to find the answer:
Taking nothing away from @mitar’s efforts, bottom line, it’s going to be difficult for the community to get any traction on first-class Vue + Meteor integration without help from either the Vue team or MDG somehow (IMO).
I think @maxhodges’ point was, Laravel adopted one of the top 3 front end frameworks and user adoption went up as a result.
Yet Meteor will not devote resources to change out their front end* (for reasons @hwillson points out above), AND the community around Meteor doesn’t have the inclination/resources to do it themselves, hence user adoption is possibly not what it could have been at this point.
I have legacy applications on Meteor + Blaze, so I’m not going anywhere anytime soon. Also, love the enhancements to Meteor being worked on all the time these days, thanks for this!
*Note: I have my own opinions on the shift from Meteor to Apollo, and I’m just now starting to see MDG’s vision.
Thanks for the clarification @aadams I was under the impression that there is a first-class integration already, but looking at this issue it seems that it’s stuck between a rock and a hard place, sigh.
Meteor competes more with LAMP / WebPack and doesn’t compete at all with vue.js. You can just use vue.js now, without any extra effort above what it takes to use Blaze or React.
On Meteor vs. WebPack - I’ve used both, and I prefer Meteor. It’s so much easier to both setup and to keep it working. I just update, and go. To me, that’s the value. And we get things you can’t get at all with other packages - perfect code splitting, the coming dual bundles system. It’s pretty great in Meteor these days.
I’m really not clear what is working and what is missing from the vue integration, I guess because I don’t use it. I’m hearing conflicting messages: on one hand I see people using it and creating starter kits, and on the other hand we have this open issue that seems to be stuck.
That’s for a specific type of vue.js integration (and looks like it was rejected because it dug too deeply into vue.js internals). You don’t need that at all to use Vue.js from npm. Just install vue.js, and get started.
I think that this issue only relates to Tracker integration, required to use Blaze and Vue together. Without Blaze, the work that Akryum has done is sufficient.
Right, that issue is related to “first-class” – drop-in replacement of Blaze – the kind of integration you’d expect in the Meteor community. Right now you can use Vue of course, but there’s going to be edge case issues, bumps in the road so to speak IMO. YMMV, so good luck.
Do we actually need that “first-class” integration? is this even desirable?
In our react project we minimized the use of tracker or any other meteor specific libraries because we want our view to be portable and we want to stick with the rest of the ecosystem, so perhaps this also makes sense for the vue integration. Perhaps we can leave concepts like tracker to Blaze (which I still love and use), and keep the integration thin and pure as @vooteles suggested. It’s really hard to beat meteor/blaze speed and simplicity of development; all you need is basic HTML, CSS and JS knowledge and you’re good to go! It’s really a league of its own. These view layers are targeting more complex apps and are designed to work with REST, Apollo or other data layers, so why are we expecting the meteor/blaze paradigm from this integration?
Furthermore, if you observe the work being done to Meteor last two years, you can clearly notice the focus toward positioning Meteor as a build system with loosely coupled modules, take a look at the minimal flag coming in 1.6.2, it doesn’t even have tracker. Thus having a tight/highly coupled integration to the view doesn’t seem aligned with the overall direction.
I still think Meteor should list vue as a potential integration just like what we’ve got with react and angular, I mean we don’t have an official tracker integration with those view layers either.
Do I understand correctly that if I create a “minimal” app and run my own socket.io server that there shouldn’t be any performance problems, because the Meteor Server itself won’t be running?
Had that problem recently when trying to port the lance pong game onto Meteor.
Core Meteor devs can keep me honest here, but it seems they’ve removed the DDP package with the minimal flag as well, so I think you’ll have the HTTP server but no socket connection opened by default, and if my understanding is correct, yeah, it should be similar to any other node backend in terms of performance.
Really nice to see, hard to beat that landing page, best backend framework ever!
We also need a tutorial, a guide section, and a blog post to officially welcome the vue folks
|
OPCFW_CODE
|
// Copyright 2020 Phyronnaz
#pragma once
#include "CoreMinimal.h"
#include "VoxelMinimal.h"
#include "Async/Async.h"
#include "Engine/World.h"
#include "TimerManager.h"
namespace FVoxelUtilities
{
// Call this when you pin a shared ptr on another thread that needs to always be deleted on the game thread
template<typename T>
inline void DeleteOnGameThread_AnyThread(TVoxelSharedPtr<T>& Ptr)
{
if (!ensure(!IsInGameThread()))
{
Ptr.Reset();
return;
}
if (!ensure(Ptr.IsValid()))
{
return;
}
check(FTaskGraphInterface::IsRunning());
// Always start a task to avoid race conditions
AsyncTask(ENamedThreads::GameThread, [Ptr = MoveTemp(Ptr)]() { ensure(Ptr.IsValid()); });
check(!Ptr.IsValid());
}
template<typename... TArgs, typename T, typename TLambda>
inline auto MakeVoxelWeakPtrLambda(const T& Ptr, TLambda Lambda)
{
return [WeakPtr = MakeVoxelWeakPtr(Ptr), Lambda](TArgs... Args)
{
auto Pinned = WeakPtr.Pin();
if (Pinned.IsValid())
{
Lambda(*Pinned, Forward<TArgs>(Args)...);
}
};
}
template<typename RetVal = void, typename... TArgs, typename T, typename TLambda>
inline auto MakeVoxelWeakPtrDelegate(const T& Ptr, TLambda Lambda)
{
return TBaseDelegate<RetVal, TArgs...>::CreateLambda(MakeVoxelWeakPtrLambda<TArgs...>(Ptr, Lambda));
}
template<typename... TArgs, typename T, typename TLambda>
inline auto MakeVoxelWeakPtrDelegate_GameThreadDelete(const T& Ptr, TLambda Lambda)
{
return [WeakPtr = MakeVoxelWeakPtr(Ptr), Lambda](TArgs... Args)
{
auto Pinned = WeakPtr.Pin();
if (Pinned.IsValid())
{
Lambda(*Pinned, Forward<TArgs>(Args)...);
DeleteOnGameThread_AnyThread(Pinned);
}
};
}
template<typename T>
inline void DeleteTickable(UWorld* World, TVoxelSharedPtr<T>& Ptr)
{
// There is a bug in 4.23/24 where FTickableGameObject gets added to a set of deleted tickable objects on destruction
// This set is then checked in the next frame before adding a new tickable to see if it has been deleted
// See Engine/Source/Runtime/Engine/Private/Tickable.cpp:107
// The problem is that when deleting a tickable, there is a large chance that if we create another tickable of the same class
// it'll get assigned the same ptr (as the memory allocator will have a request of the exact same size, so will reuse freshly deleted ptr)
// This set of ptr is only valid one frame. To bypass this bug, we are postponing the tickable deletion for 1s
// Fixed by https://github.com/EpicGames/UnrealEngine/commit/70d70e56f2df9ba6941b91d9893ba6c6e99efc4c
ensure(Ptr.IsValid());
if (World)
{
// No world when exiting
FTimerManager& TimerManager = World->GetTimerManager();
FTimerHandle Handle;
TimerManager.SetTimer(
Handle,
FTimerDelegate::CreateLambda([PtrPtr = MakeVoxelShared<TVoxelSharedPtr<T>>(Ptr)]() { ensure(PtrPtr->IsValid()); PtrPtr->Reset(); }),
1.f,
false);
ensure(!Ptr.IsUnique());
}
Ptr.Reset();
}
}
|
STACK_EDU
|
Issues: 418 - addresses the issue of init.d not behaving as expected
Fixes #418
Problem:
The init.d script does not exit as expected with the correct printed
output in the CLI for start, stop, and restart. The status action was
already fixed in the past.
Analysis:
Introducing additional logic to test the status of the daemon prior to
any start or stop. If the daemon was already running, then it will
simply state that it was and is running, without 'trouncing' the
original PID.
If it was already stopped when 'stop' is executed, then it will state
that it was already in a stopped state. If it was running when 'stop' is
issued, then it will state that it is stopped, or report the particular
error code type and state that it is still running (if so).
Lastly, if the daemon is called with an unrecognized argument, by an
unprivileged user, or with the improper number of arguments, then it
will exit with the proper exit statement and code.
Tests:
Have tested the following manually as this requires a full devstack or
similar build on ubuntu:
start; restart; stop
$ sudo service f5-oslbaasv2-agent start;sudo service f5-oslbaasv2-agent
restart;sudo service f5-oslbaasv2-agent stop
(0) Service is running
(0) Service is running
(7) Service, f5-oslbaasv2-agent, is not running
from stop, status; start; status; #kill# status; stop; start; start
$ sudo service f5-oslbaasv2-agent status
(3) Service f5-oslbaasv2-agent is not running!
$ sudo service f5-oslbaasv2-agent start
(0) Service is running
$ sudo service f5-oslbaasv2-agent status
(0) Service f5-oslbaasv2-agent is in an OK Status!
$ kill -9 $(cat /var/run/neutron/f5-oslbaasv2-agent.pid)
$ sudo service f5-oslbaasv2-agent status
(1) Service f5-oslbaasv2-agent is dead and /var/run pid file exists!
$ sudo service f5-oslbaasv2-agent stop
Service is already in stopped status
(7) Service, f5-oslbaasv2-agent, is not running
$ sudo service f5-oslbaasv2-agent start;sudo service f5-oslbaasv2-agent
start
(0) Service is running
Service is Already Running: no action
(0) Service is running
@richbrowne
What issues does this address?
Fixes #418
WIP #
...
What's this change do?
Please see above in commit
Where should the reviewer start?
The ./etc/init.d/f5-oslbaasv2-agent file
Any background context?
Please refer to the Linux's standard handling of init.d files for handling start-stop-daemon.
Ok, try the following:
Edit the configuration file
/etc/neutron/services/f5/f5-openstack-agent.init by putting a space before
any option. Try periodic interval. Run the script and see if you get the
error.
Rich
On Wed, Feb 8, 2017 at 3:40 PM, Steven Sorenson <EMAIL_ADDRESS> wrote:
Fixes #610
Problem:
The issue of the init.d script not exiting as expected with the correct,
printed output into the CLI for start, stop, restart. Have already
fixed the status in the past.
Analysis:
Am introducing additional logic to test the status of the daemon prior
to any start or stop. If the daemon was already running, then it will
simply state that it was&is running without 'trouncing' the original
PID.
If it was already stopped when 'stop' is executed, then it will state
that it was already in a stopped state and state that it is stopped. If
it was running when stop is issued, then it will simply state that it is
stopped or the particular error coded type, and state that it is still
running (if so).
Lastly, if the daemon is called from an unrecognized argument,
unprivilidged
user or the improper number of arguments are given, then it will exit
with the
proper exit statement and code.
Tests:
Have tested the following manually as this requires a full devstack or
similar build on ubuntu:
start; restart; stop
$ sudo service f5-oslbaasv2-agent start;sudo service f5-oslbaasv2-agent
restart;sudo service f5-oslbaasv2-agent stop
(0) Service is running
(0) Service is running
(7) Service, f5-oslbaasv2-agent, is not running
from stop, status; start; status; #kill# status; stop; start; start
$ sudo service f5-oslbaasv2-agent status
(3) Service f5-oslbaasv2-agent is not running!
$ sudo service f5-oslbaasv2-agent start
(0) Service is running
$ sudo service f5-oslbaasv2-agent status
(0) Service f5-oslbaasv2-agent is in an OK Status!
$ kill -9 $(cat /var/run/neutron/f5-oslbaasv2-agent.pid)
$ sudo service f5-oslbaasv2-agent status
(1) Service f5-oslbaasv2-agent is dead and /var/run pid file exists!
$ sudo service f5-oslbaasv2-agent stop
Service is already in stopped status
(7) Service, f5-oslbaasv2-agent, is not running
$ sudo service f5-oslbaasv2-agent start;sudo service f5-oslbaasv2-agent
start
(0) Service is running
Service is Already Running: no action
(0) Service is running
@<reviewer_id>
What issues does this address?
Fixes #
WIP #
...
What's this change do?
Please see above in commit
Where should the reviewer start?
The ./etc/init.d/f5-oslbaasv2-agent file
Any background context?
Please refer to the Linux's standard handling of init.d files for handling
start-stop-daemon.
You can view, comment on, or merge this pull request online at:
https://github.com/F5Networks/f5-openstack-agent/pull/578
Commit Summary
Issues:
File Changes
M etc/init.d/f5-oslbaasv2-agent
https://github.com/F5Networks/f5-openstack-agent/pull/578/files#diff-0
(212)
Patch Links:
https://github.com/F5Networks/f5-openstack-agent/pull/578.patch
https://github.com/F5Networks/f5-openstack-agent/pull/578.diff
Does it make sense to:
Store the STDOUT/STDERR into a variable
Reference the return code from start-stop-daemon
Dump into the terminal upon error or...
Remain quiet upon success
Users can then go without the white noise when things are 'okay' and see the dumped terminal output upon discovery of an error.
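Sketched in shell, that proposal might look like the following; `run_daemon` is a hypothetical stand-in for the real start-stop-daemon invocation in the init script.

```shell
# Hypothetical stand-in for the real start-stop-daemon call.
run_daemon() {
    echo "daemon: some diagnostic chatter"
    return 0    # change to a non-zero value to simulate a failed start
}

# Store STDOUT/STDERR into a variable instead of the terminal.
output=$(run_daemon 2>&1)
# Reference the return code from the call.
rc=$?

if [ "$rc" -ne 0 ]; then
    # Dump into the terminal upon error...
    echo "start failed (exit code $rc):" >&2
    printf '%s\n' "$output" >&2
fi
# ...and remain quiet upon success.
```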
I am still using the start-stop-daemon executable quite heavily in that it...
Still initiates and stops the service
Handles the PID and LOCK files accordingly
Checks the status, to report appropriately if/when circumstances are abnormal and what items are present when the service is running
If I were to rewrite it, though, I would suggest Perl. I would not normally suggest this; however, given how complicated the parsing is to grab things, Perl is a bit more stable with log handling than straight shell, even bash, which is quite handy and can handle a good number of things. Further, it would greatly reduce the number of lines we already have in bash, as there are multiple steps to compensate for the lack of intelligence here.
Python is a lot of overhead to start/stop a service; however, it could be used to replace this as well. I'm just worried that at that point it's the equivalent of a sledgehammer for ants...
Have performed the following changes:
Remove the --background flag
Pipe the start-stop-daemon output into a truncated, temp var
Contains only trace, error, and critical strings
If this causes issues down the road, we can always use like a 40K-60K cache
Upon error, dump the piped content
This gives the user:
What happened and the choice to dump the material into a log file of their own
An umbrella "catch net" for errors that the service might run into if/when an unhandled exception occurs
If they want to customize whether or not they want the feedback in the terminal, there's a comment on the line that dumps it in.
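The filtering step described above can be sketched like this; the log text and the pattern list are illustrative only, not the actual strings used by the agent.

```shell
# Illustrative daemon output; the real script pipes start-stop-daemon here.
raw_output="INFO starting agent
TRACE entering main loop
ERROR could not bind to port 5672
DEBUG heartbeat ok"

# Keep only the trace, error, and critical strings in a truncated temp var,
# so the user sees just the interesting lines when something goes wrong.
filtered=$(printf '%s\n' "$raw_output" | grep -iE 'trace|error|critical')
printf '%s\n' "$filtered"
```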
So, I've got a fix for this that should address most of our concerns:
I create the $LOCKFILE
I cast the start-stop-daemon call 2>&1 >> $LOGFILE;rm -rf $LOCKFILE into a string
I call nohup $str >/dev/null 2>&1 &
What this does is:
Stores the STDERR/STDOUT into the /var/log/neutron/f5-oslbaasv2-agent log
Gives the running prompt back to the user
Forks the service job off using nohup followed by a disown
Destroys the lock file if something "bad" happens to the agent's service PID during runtime
This keeps the user from having the running instance going in a prompt that they have to then disown, while still preserving the STDERR
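In outline, that fix looks something like the following sketch, with placeholder paths and `sleep` standing in for the agent process; the real script substitutes the start-stop-daemon call and the agent's actual LOGFILE/LOCKFILE locations.

```shell
# Placeholder paths for this sketch.
LOCKFILE=/tmp/f5-agent-demo.lock
LOGFILE=/tmp/f5-agent-demo.log

# 1. Create the lock file up front.
touch "$LOCKFILE"

# 2. Cast the daemon call, its redirections, and the lock cleanup into a
#    string. `sleep 1` stands in for the long-running agent process.
str="sleep 1 2>&1 >> $LOGFILE; rm -f $LOCKFILE"

# 3. Fork the job off with nohup so the prompt returns immediately, and the
#    lock file is destroyed if/when the agent's PID goes away.
#    (The original also disowns the job; omitted here as a bash-only builtin.)
nohup sh -c "$str" >/dev/null 2>&1 &
```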
|
GITHUB_ARCHIVE
|
This is where nested EVALUATE's (introduced with VS/COBOL II) can replace 6-7 (or greater) nested IF levels ( Yikes ) and can come in real handy. As you say, the IF's just fall off the right side of the page/screen....
In the old days, if I was actually coding a case/evaluate structure, I would do it without the indenting, so it is clearer what it is:
IF condition a
IF condition b
IF condition c
IF condition d
No need anymore. No need for a genuine nested IF not to be indented.
One place I worked we somehow got hold of a fold-out paper Thanksgiving Turkey decoration (this was in the UK, which was why it was odd). If someone did something stupid (like Dick's example) you had to display the Turkey on top of your terminal for a week (or until superseded, whichever was shorter). Helped a little to concentrate minds.
For a real nested IF (or anything) how to avoid? If you are faced with a five-level nest where you need a change, it is not a happy sight.
One sort of nesting, PERFORMed paras/sections, don't march off to the right.
If you spend more time on the design of a program, you can avoid inscrutable code. The hidden secret of a good program is the right mixture of segmentation and the strict use of conditional and unconditional branch instructions like Evaluate and Perform. I'm a hardliner. I won't see any normal Move or Compute instruction in the main program control section. I also use qualifiers in section names,
which tell me the depth of a section within the program flow. The older ones of us will remember the IBM SPL framework using A05/B05/F05 and so on.
The only legal commands for me in a control-flow section are Perform, Evaluate and an If-Then-Else, but not nested.
Perform Until Programm-End
...Evaluate True
......When ...Perform V05-Do-Something-02
......When ...Perform V05-Do-Something-03
......When ...Perform V05-Do-Something-04
......When Other Continue
...End-Evaluate
End-Perform
I use the same design in assembler too. Keeping my programs strictly to this design, it was quite easy for me in former times to modify some modules in the middle of the night after a telephone call, just after a few bottles of red wine and some hours with a horny blonde chick.
Oh god, lucky old days. Nothing is left over. Only the red wine.
In this method, you displace the nesting. From the normal nested structures to the sections. So, your PERFORMs become nested. I'm interested in your section-naming structure, which is the only way I've really identified to deal with this "problem" automatically, otherwise I end up thinking these (the performed sections) are now just too "deep" to be readily understandable. And then you think some more.
So maybe you don't use the inline PERFORM like that, but the same thing is really happening with sections. What I mean is that when you are down at the third level, you also have to be aware of what was relevant at the first and second levels.
Don't get me wrong. I like your structure. It is the one I used, even. I'm as surprised as perhaps you are :-)
I'll (obviously) let UmeySan provide his own response as to how he would name the various sections, but this is how I would do it:
I assign 4-byte name prefixes that reflect both the hierarchical level of the section/paragraph (the first byte of the prefix (a letter)) and the order in which the section/paragraph was first referenced in the source (the second byte of the prefix (also a letter)); the third and fourth bytes of the prefix are numbers, which can be used to group related functions - e.g. for a given file, the last two prefix characters might be 10 for an OPEN, 20 for a START, 30 for a READ NEXT, 40 for a READ DIRECT, 50 for a WRITE, 60 for a REWRITE, 70 for a DELETE, and 80 for a CLOSE. The first two letters of the 'group' would be assigned in accordance with the 'normal' hierarchical/reference assignment logic.
The 4-byte prefix using this standard allows for 26 hierarchical levels and 2600 sections/paragraphs at each of those levels. FWIW, I normally assign I/O routines to hierarchical level 'X' and heavily referenced "utility" routines (e.g. display file-status error messages) to hierarchical level 'Z'.
The other thing I do is to always position the sections/paragraphs in the source in (ascending) prefix order. In this way, I always know where a referenced section/paragraph is located relative to the statement I am looking at - if the prefix is alphanumerically less than the current section/paragraph prefix, the referenced section/paragraph will be positioned earlier in the source; if greater, then later.
Also, VIEWing the source and executing
X ALL;F P'^' 8 ALL;F ' PERFORM ' ALL;X '*' 7 ALL
enables me to see the entire program structure at a glance (e.g. just what follows, minus the actual (non PERFORM) 'statements').
Etc. The numbers show one "axis", the letters another, the "depth" of the level of the performs (my "nesting"). As the letters "march off to the right" too much, I know to look at things again and simplify.
00- for program start-up
10- for opening files
80- for closing files
90- for program shut-down
99- for "routines" used from more than one section. I didn't distinguish IO from non-IO, (I was kind of lucky, and managed to avoid most IO directly) but that is a nice touch from Ronald.
If I wanted to use a "GO TO" I would do 50AD-G-dfjksdf for the paragraph name, thus knowing that it was the target of a GO TO.
All the sections would be in strict sort-sequence. You always knew where to find in the listing, and you always knew what it was performed by (except for the general use 99-s).
In the XREF, all the procedure labels would be in order by line number. Particularly if there were GO TOs, you can look at the XREF and see that they are not outside the range.
I'd always make the names meaningful. You can get a lot in 30 characters, and if you go over, even, it is not the end of the world.
Any more favourites, anyone?
The other thing is maintaining other people's programs. I tended to stick to their convention, unless there was more time available than usual, when, in that lucky instance, the first thing I would do is to reformat my way.
I have an irrational dislike for SECTIONs. It came about from having to debug too many programs that mix paragraphs and SECTIONS improperly.
The thing is, how do you cater for idiots? The same person who adds a paragraph, after a SECTION, and performs the paragraph is probably going to add it in the middle of a perform thru as well. Or GO TO something that is not in the perform range.
OK, so you can get by without sections and without perform thrus. But if you can't, I definitely prefer sections. Even with no paragraphs in the section, I still code sections. Just in case, I suppose, or from habit.
I'd sometimes idly think, why can't the compiler sort this out? "E" levels if you do dumb things. But too many earlier programs do dumb things, so "backwards compatibility" loses out for everyone.
I'd like to see a compiler option like NOSH1, which, with the (T)est sub-parameter would highlight idiotic (outside of logical human readability scope) use of paragraph/section/go to - hey, for me, stick declaratives in as well if you like.
Funny thing is, for NOSH1(T), the optimizer damn well knows about it. List the generated code, look for any "non simple" returns anywhere, and then you know you have dodgy code in the program. Except declaratives; no idea how it handles those, but I suppose they are easy enough to spot anyway.
Naming conventions for sections still depend on personal preference.
First, for my own, I make a difference between Control-Flow-Sections and real Processing-Sections. The real data processing (move, compute, ...) only takes place in these Processing-Sections. The logical flow of the data processing within the program is guided by the Control-Sections.
I described it before. Ok, now naming conventions.
S00 Main Control-Section of the Application
A00 Main Section for Program Start-Up
V00 Main Section for Program Processing
Z00 Main Section for Program Clean-Up
Whereas these sections are only Control-Sections which perform others.
...Set Kto-Ind to 1
...Perform until Kto-Ind > Kto-Max
......move x to y
......set Kto-Ind up by 1
...End-Perform
So, as you can see, the number specifies the depth.
The first letter has a relationship to the kind of work.
A section named A45-blabla would be performed in section A40-blabla,
and will have a close link to A00. This section A45 has something to do with the program init. Same for Znn-blabla: that section must have something to do with the program clean-up at end of work.
Besides the A00, V00, and Z00 main Control-Sections I also have some universal sections for Open, Close, Write, or Declare Cursor, Fetch, Insert, Delete, etc.
So in any of my programs, Bnn sections will definitely deal with opening datasets and Cnn will accordingly deal with closing datasets.
If there's a B25, that's a good indication that this module opens five datasets.
B00 would be the Control-Section for performing B05 through B25.
This dataset would be read in section R25 or written in section W25.
So as you see, there's quite another kind of interrelationship.
Yes, I know, perhaps it seems like something that takes getting used to. But not for me; I've been used to it for the last 35 years, whether Assembler, Cobol, Rexx, CSP, or ABAP.
And, at the end, I agree with Dick, dbzTHEdinosauer.
Gotta love edit macros and REXX scripts.
I have a little edit REXX proc which is executed by pressing a PF key. The
cursor has to be in a coding line. If there is a PERFORM, pressing the PF key will display the section. If the cursor is at a section beginning, pressing it will display all PERFORMs of this section. As a matter of course, this also works for Assembler. Some more functions are included, but that would go beyond the scope of the discussion.
So, everyone of us has his own preferences and his own established-standards.
And I think, with every little new program we are developing, our standard expands and ensures efficiency.
There are at least three main causes of bus errors: a non-existent address (software instructs the CPU to use a physical address that does not exist), unaligned access, and paging errors. In general, a bus error means the program tried to access memory in a way the hardware cannot honor, often a location outside its address space.
You can view memory as one huge array of bytes (chars). Most CPUs can access individual bytes at any address, but they generally cannot access larger units (16 bits, 32 bits, 64 bits and so on) without those units being "aligned" on a matching boundary. Reading or writing a multi-byte value through a misaligned pointer is exactly the kind of access that raises a bus error on stricter architectures; doing pointer math on a pointer and then converting it to a wider type is a classic way to create one. Even if a particular compiler happens to tolerate it, you are writing bad C code.
String literals such as "this is " and "me" may reside in read-only memory, so attempting to modify them can crash. Likewise, if an array has x elements, the only usable indices are 0 to (x - 1); if you pass the array size in as x and then assign to array[x] inside your loop, that out-of-bounds write is a likely cause of the error. To populate an array with strings, make a copy of each one: allocate space for each new string using malloc, then copy it with strncpy. Copying with = only copies the pointer, so every element would just point at whatever happened to be written in one shared buffer, and a short-lived buffer like a local line variable does not survive anyway.
There is no new or delete in C; memory comes from malloc() and is returned with free(). Pass free() only a pointer that malloc() returned. Don't pass it any other pointer, or a pointer that has already been freed, or really ugly things can happen.
To debug, compile with gcc -g myprogram.c -o myprogram and run it under gdb ./myprogram (assuming Linux); you will get a stack trace at the faulting access. To learn about the debuggers, read their manual pages with man dbx or man gdb.
My code is an attempt to teach myself C.
import Block = require('./Block');
import BlockType = require('./BlockType');
import IBlockSpec = require('./IBlockSpec');
import IfBlock = require('./IfBlock');
import RepeaterBlock = require('./RepeaterBlock');
import View = require('./View');
export function fromSpec(view: View, spec: IBlockSpec): Block {
    var block: Block;
    if (spec.type === BlockType.Element || spec.type === BlockType.Text || spec.type === BlockType.View) {
        block = new Block(view, null);
        block.template = processTemplate(block, [spec]);
    } else {
        block = createBlock(view, null, spec);
        block.template = processTemplate(block, spec.children);
    }
    return block;
}
function createBlock(view: View, parent: Block, spec: IBlockSpec): Block {
    var block: Block;
    switch (spec.type) {
        case BlockType.Block:
            block = new Block(view, parent);
            break;
        case BlockType.IfBlock:
            block = new IfBlock(view, parent, spec.source);
            break;
        case BlockType.RepeaterBlock:
            block = new RepeaterBlock(view, parent, spec.source, spec.iterator, spec.children);
            break;
        default:
            // fail fast instead of returning undefined for an unknown spec type
            throw new Error('Unexpected block type: ' + spec.type);
    }
    return block;
}
export function processTemplate(parent: Block, template: IBlockSpec[]): IBlockSpec[] {
    return template.map(function (spec) {
        if (spec.type === BlockType.Element) {
            if (spec.children) {
                // allow two repeaters to share the same blockTemplate
                spec = {
                    type: BlockType.Element,
                    tag: spec.tag,
                    attr: spec.attr,
                    binding: spec.binding,
                    // children has to be unique per repeater since blocks
                    // are processed into comments
                    children: processTemplate(parent, spec.children)
                };
            }
        } else if (spec.type === BlockType.Block || spec.type === BlockType.IfBlock || spec.type === BlockType.RepeaterBlock) {
            var block = createBlock(parent.view, parent, spec);
            if (spec.type !== BlockType.RepeaterBlock) {
                block.template = processTemplate(block, spec.children);
            }
            parent.children.push(block);
            spec = {
                type: BlockType.Comment,
                owner: block,
                value: 'block'
            };
        }
        return spec;
    });
}
Does std::bitset<64> work on 32 bit machines for uint64_t?
The output of the code below on a 32-bit Linux (RHEL6) machine with the Intel compiler and -std=c++11 enabled is shown below. Why?
uint64_t a = UINT64_MAX;
std::cout << std::bitset<64>(a) << std::endl;
Output on 32 bit machine:
0000000000000000000000000000000011111111111111111111111111111111
Output on 64 bit machine:
1111111111111111111111111111111111111111111111111111111111111111
Is the output different on a 64-bit machine?
Because std::bitset is broken before C++11: it has a constructor accepting unsigned long, which is a 32-bit type on 64-bit Windows.
What types are allowed for the constructor of std::bitset?
This is NOT on Windows - it is using Intel compiler on 32 bit Linux with -std=c++11
Can you please provide your compilation command? Also, what compiler are you using? What OS?
RHEL6 was released in 2010. Looks like C++ runtime you are using doesn't fully support C++11 (even if your compiler does). You need a newer runtime.
Does std::bitset<64> work on 32 bit machines for uint64_t?
Yes, since C++11.
Prior to C++11, the constructor accepted an unsigned long, which is only guaranteed to be 32 bits or larger. When the size of the bitset exceeds the number of bits in unsigned long (prior to C++11) or unsigned long long (since C++11), the overflowing bits are initialised to 0.
The larger bitsets can be used, but the high bits cannot be initialised with this constructor nor observed with to_ulong.
@pmoubed Then that language implementation does not conform to C++11 standard.
icpc (ICC) 15.0.3 20150407 - https://software.intel.com/content/www/us/en/develop/articles/c0x-features-supported-by-intel-c-compiler.html
@pmoubed According to that documentation Linux: Note that language features available can depend on gcc* version installed You should check whether your standard library supports C++11.
Is my wallet affected by the Meltdown and Spectre vulnerabilities?
Recently two new vulnerabilities, Meltdown and Spectre, were published which let someone read more memory than they are supposed to be able to. How does this affect my wallets and what can I do to secure my Bitcoin?
If you use a modern computer (i.e. one with a processor that came out in the past 10 years or so), you are affected by the Meltdown and Spectre vulnerabilities. In fact, even if you use an older computer, you may still be affected, as it is theorized that Intel CPUs dating back to 1995 may be vulnerable; CPUs that old were simply not tested. Meltdown primarily affects Intel CPUs, while Spectre affects a wide range of CPUs, including Intel, AMD (including Ryzen), and ARM (used in smartphones) processors.
Meltdown
All wallet software is affected by the Meltdown vulnerability. Meltdown allows malicious software to read any bit of memory that it knows the location of. It is capable of dumping the entire contents of the physical RAM in your computer. This means that any wallet which is currently running and has private keys loaded into memory is at risk of having those keys stolen. Wallet encryption does not help here, as the private keys need to be unencrypted in memory in order for you to be able to sign transactions; any malware exploiting Meltdown will be able to read them.
Mitigations
Meltdown requires that code exploiting the vulnerability be run on your machine, so the usual advice about due diligence and avoiding malware applies. However, it may be possible for the attack to be performed through malicious JavaScript loaded from a webpage. Thus, as usual, you should avoid visiting suspicious websites, and disabling JavaScript entirely would not be a bad idea.
Furthermore, there are operating system upgrades that can mitigate Meltdown and make exploiting the attack almost useless. There are also browser changes planned that will make it much more difficult for JavaScript code to retrieve data from your computer's memory. You should expect to see these patches coming out soon for your browsers and operating systems if they are not already available.
Lastly, Meltdown appears to only affect Intel CPUs, so if you have an AMD CPU, you shouldn't be affected by this vulnerability.
Spectre
Spectre is more limited in scope than Meltdown and targets specific processes. It also requires specific knowledge of the software being attacked, which makes the attack much harder to pull off. Spectre affects every piece of software which receives an input from somewhere, so all wallet software is vulnerable.
Furthermore, the Spectre example attacks have focused primarily on virtual machines and browsers. Spectre allows malicious applications to break out of the sandboxing that VMs and browsers provide. This is particularly bad for web wallets, as malicious JavaScript executed in your browser can result in your private keys (which are held in the browser's memory) being leaked to the attacker.
Mitigations
Spectre affects a wide range of CPUs and has no known complete software patches. It affects all modern Intel and AMD CPUs and some ARM CPUs, which means that both computers and smartphones are vulnerable. Some variants may be mitigated, but other variants may still be exploitable. As usual, you should avoid visiting suspicious websites and downloading suspicious files to your computer. The usual due diligence applies.
Since JavaScript can exploit Spectre, patches will become available from browser vendors to reduce the effectiveness of using JavaScript to exploit Spectre. There will also be other operating system and other software updates which will reduce the effectiveness of Spectre. Unfortunately it cannot go away entirely unless hardware is upgraded. As usual, you should ensure that all of your software is up to date in order to avoid the exploitation of these vulnerabilities.
Patching the vulnerabilities
Unfortunately there are no known ways to patch the vulnerabilities entirely through software. The current proposals are stop-gap measures which only reduce their effectiveness, and do so at the cost of performance. Because these vulnerabilities are rooted in the CPU hardware, the only complete fix is new hardware that is not vulnerable. It is not known whether a microcode update (i.e. the CPU firmware) will fix the vulnerabilities or not.
Keeping your coins safe
The only way to ensure that you are not affected by these vulnerabilities is to use hardware that is not affected by them, or hardware where, even if it is affected, the data cannot leave the device. There are really only two options for this: use a hardware wallet, or use an offline computer solely for your wallet.
Hardware wallets
Hardware wallets do not have these vulnerabilities because they use processors that are not vulnerable. The processors do not feature Out-of-Order-Execution which is what both Meltdown and Spectre exploit in order to read data. Furthermore, even if they were vulnerable, software that runs on the hardware wallet must either be flashed as new firmware or be manually installed by the user. This makes it much more difficult (basically impossible to do without the user noticing) to get malicious software onto the device that could exploit these vulnerabilities. But as said earlier, they are not vulnerable so such software would be useless.
Hardware wallets also do not transmit any secret information (i.e. private keys) to the computer so the private keys are never exposed and thus cannot be stolen.
Offline cold storage devices
Offline cold storage devices that are not hardware wallets typically consist of older, low powered general purpose computers. Such computers are likely to be vulnerable to Meltdown and Spectre. But because they are offline, it is much more difficult for a piece of malware to both get onto the machine and get data off of it.
Although it is harder to infect and exfiltrate data from offline devices, sophisticated malware does exist and can do so. They do so by hiding on the USB drives that are typically used in such setups. By hiding on a USB drive, the malware can go from an infected online computer to the offline computer, infect the offline computer, and transmit data from the infected offline computer to the infected online computer via the USB drive. This would allow an attacker to steal private information (which may be read by exploiting Meltdown or Spectre) from an offline cold storage device.
The only secure way to send data between an offline device and an online device would be something which allows you to inspect the data before it reaches the online machine. Unfortunately this is rather difficult to do.
Conclusion
Meltdown and Spectre are two vulnerabilities that are rooted in the hardware and are difficult to fix through software patches. They have the potential to leak private keys and other secret information from a computer to an attacker whilst leaving little to no trace of it ever happening. The vulnerabilities affect all software wallets (including web wallets) which run on a computer or smartphone. The only way to secure your coins is to store the private keys on a device which cannot leak them without the user noticing. It is thus my recommendation that you use a hardware wallet.
import * as application from "tns-core-modules/application";
import { isIOS } from "tns-core-modules/platform";
import { CLog } from "../services/logging.service";
/**
* Adds margin-bottom to the page. Is not super elegant but works for now.
* Once NS 4.0 releases and we upgrade this will not be needed as the page/frame
* will be defaulted to use the safe area insets for iOS.
*/
export function addBottomSafeAreaForIOS(): void {
if (isIOS && application.ios.window.safeAreaInsets) {
CLog("*** remove this when upgraded to NS 4.0 ***");
const bottomSafeArea: number = application.ios.window.safeAreaInsets.bottom;
if (bottomSafeArea > 0) {
application.addCss(`
Page { margin-bottom: ${bottomSafeArea} !important }
`);
}
}
}
If-statements generate conditional jumps unless you can use conditional moves, but that is more likely something done in hand-written assembly. There are rules that govern the CPU's conditional-jump assumptions (branch prediction), such that the penalty of a conditional jump which behaves along those rules is acceptable. Then there is out-of-order execution to complicate things further :). The bottom line is that if your code is straightforward, the jumps which eventually occur won't wreck performance. You might check out Agner Fog's optimization pages.
A non-debug compilation of your C code should generate four conditional jumps. The logical ANDs (&&) and parentheses usage result in left-to-right testing, so one C optimization could be to test the f32 that is most likely to be >0.0f first (if such a probability can be determined). You have five possible execution variants: test1 true branch taken (t1tbt), test1 false no branch (t1fnb), test2 true branch taken (t2tbt), etc., giving the following possible sequences:
t1tbt                   ; var.m128_f32[0] <= 0.0f
t1fnb t2tbt             ; var.m128_f32[0] > 0.0f, var.m128_f32[1] <= 0.0f
t1fnb t2fnb t3tbt       ; var.m128_f32[0] > 0.0f, var.m128_f32[1] > 0.0f,
                        ; var.m128_f32[2] <= 0.0f
t1fnb t2fnb t3fnb t4tbt ; var.m128_f32[0] > 0.0f, var.m128_f32[1] > 0.0f,
                        ; var.m128_f32[2] > 0.0f, var.m128_f32[3] <= 0.0f
t1fnb t2fnb t3fnb t4fnb ; var.m128_f32[0] > 0.0f, var.m128_f32[1] > 0.0f,
                        ; var.m128_f32[2] > 0.0f, var.m128_f32[3] > 0.0f
Only a taken branch will result in a pipelining disruption and branch prediction will minimize the disruption as much as possible.
Assuming floats are expensive to test (they are), if var is a union and you are well-versed in floating-point ins and outs you might consider doing integer testing on the overlapping types. For example the stored value 1.0f occupies four bytes stored as 0x00, 0x00, 0x80, 0x3f (x86/little-endian). Reading this value as a long integer will give 0x3f800000 or +1065353216. 0.0f is 0x00, 0x00, 0x00, 0x00 or 0x00000000 (long). Negative float values have exactly the same format as positive with the exception that the highest bit is set (0x80000000).
I go over all of the missions from the new Jungle Inferno Contracktor! some of these missions include rewards that can allow you to own the new Pyro items! Leave your thoughts down below! https://mannco.trade/ - The trading site ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ Connect with me! ● Patreon: https://www.patreon.com/PyroJoe ● My Discord: https://discord.gg/fzQ3kx8 ● Twitter :https://twitter.com/pyrojoe_ ● Steam Group: http://steamcommunity.com/groups/PyroJoeYouTube ● Snapchat: Pyro.Joe family friendly pg content 8)
In this epic video we're dodging lasers with our friend Guava Juice to take the Team Edge trophy! Bean Bucket Challenge! ➡ https://www.youtube.com/watch?v=r_HmReIg3mE Subscribe To Team Edge! ➡ https://www.youtube.com/channel/UCaRH3rDr3K3CEfhVqu5mgUQ 🔽MORE LINKS BELOW 🔽 TEAM EDGE GEAR ➡ https://crowdmade.com/collections/teamedge Matthias ➡ https://www.youtube.com/user/matthiasiam?sub_confirmation=1 J-Fred ➡ https://www.youtube.com/user/mrjollywhitegiant Bryan ➡ https://www.youtube.com/channel/UCFN1GcJz2gqBW9dn9IEDQcw Team Edge Gaming ➡ https://www.youtube.com/channel/UC-6Ygz2yrPcPPhl4pAobqNQ Guava Juice ➡ https://www.youtube.com/user/aynakoitsroi Mail Box: 24307 Magic Mtn Pkwy #211 Valencia, CA 91355 Twitter ➡ https://goo.gl/rbKKmG Instagram ➡ https://instagram.com/itsteamedge/ Challenges ➡ Monday - Wednesday - Friday
Muselk Merch: https://muselk-us.myshopify.com/ Twitter (best place to message me): https://twitter.com/mrmuselk Twitch Stream: http://www.twitch.tv/muselk/ Community Discord: https://discord.gg/muselk Friends in this video: Bazza: https://www.youtube.com/channel/UCwyqAtZfXgsdMDnatgCTHrQ Tyrodin: https://www.youtube.com/channel/UCNV9ehrWzPaRCQ1Vl-Fi2Gw Music: our hearts collide (iamsleepless | original song) https://iamsleepless.bandcamp.com/
Patreon: https://www.patreon.com/b4nny Gameplay from Twitch: http://www.twitch.tv/b4nny Twitter: http://www.twitter.com/4G_b4nny Sponsored by https://MarketPlace.TF/?r=76561197970669109 Production by Shounic
After a decade of dustbowls and gravel pits, it’s time to pack your snorkel, find your flip-flops, and endure a series of painful yellow fever vaccinations to your abdomen, because Team Fortress is heading to the tropics! Go to http://www.teamfortress.com/jungleinferno for more information. Play for free on Steam: http://store.steampowered.com/app/440/
Thanks For Watching, Everything You Need to Know about The Jungle Inferno Campaign Pass [TF2]
So in this video i go in to detail and explain about what does the new Jungle Inferno Campaign Pass have and include.
Also I explain some possibilities for the CONTracker, and Blood Money in the Jungle Update.
Ma Twitch: https://www.twitch.tv/theafricanredhead
I couldn't find many discussions around the Linksys Velop. I've grown tired of not having OpenWRT since I moved into a three-story house 12 months ago, and I decided to try something mainstream.
The specs of the Velop and the Lyra look very similar. The Velop has 512MB RAM vs 256MB, 4GB flash vs 128MB, and a different Bluetooth chipset; I didn't see any CSR BT in the supported-hardware database, but I don't use BT. The Velop also has ZigBee, which I don't use either.
In the past I was spoiled because I used a Merlin build for my Asus devices. I see I will need to dust off some cobwebs, and hope my brain hasn't turned into Teflon while I try to absorb how to load OpenWRT on the set of three Linksys Velop WHW03v1 access points.
I'm a complete newbie, so I apologize in advance for all the taboo things I may have just done.
CPU: Qualcomm IPQ4019 (Quad-core Wave2 2T2R abgn/ac WiSoC)
Flash: Samsung KLM4G1FEPD (4GB eMMC), RAM: 512MB (DDR3)
2.4GHz Radio: Qualcomm IPQ4019 + 2x MSC5533621E (2.4GHz PA)
5GHz Radio #1: Qualcomm IPQ4019 + 2x Skyworks SKY85408-11 (PA)
5GHz Radio #2: QCA QCA9886 (2T2R) + 2x Skyworks SKY85408-11 (PA)
ZigBee: Silicon Labs EM3581 (SoC) + SiGe SE2432L (2.4GHz FEM)
Bluetooth: CSR (CSR1021/CSR8510 ?) Bluetooth 4.0 LE SoC
Switch: Qualcomm Atheros QCA8072 GbE (1x LAN/1x WAN)
ipq40xx: add support for ASUS Lyra
SoC: Qualcomm IPQ4019 (Dakota) 717 MHz, 4 cores
RAM: 256 MiB (Nanya NT5CC128M16IP-DI)
FLASH: 128 MiB (Macronix NAND)
WiFi0: Qualcomm IPQ4019 b/g/n 2x2
WiFi1: Qualcomm IPQ4019 a/n/ac 2x2
WiFi2: Qualcomm Atheros QCA9886 a/n/ac
BT: Atheros AR3012
IN: WPS Button, Reset Button
OUT: RGB-LED via TI LP5523 9-channel Controller
UART: Front of Device - 115200 N-8
Pinout 3.3v - RX - TX - GND (Square is VCC)
A Web Application Firewall or WAF is a network security system that helps protect web applications from various types of attacks by making sure that a web server only receives legitimate traffic.
Firewalls are systems that monitor and control traffic entering and leaving a network. They act as a barrier between your network and the open internet.
A web application firewall is a specific type of firewall that focuses on the traffic going to and leaving web apps. Standard firewalls act as the first level of security but today’s websites and web services need more security. This is where WAFs provide specialized capabilities and thwart attacks specifically aimed at the applications themselves.
Looking for a WAF Solution? Check out CDNetwork’s Application Shield.
How Does a Web Application Firewall (WAF) Work?
A WAF works by filtering, monitoring, and blocking suspicious HTTP/s traffic between a web application and the internet.
Implementing traditional firewalls has been a basic cybersecurity practice for a while. These are deployed around networks and operate at Layers 3 and 4 of the Open Systems Interconnection (OSI) model. Their role is limited to inspecting packets over the IP and TCP/UDP protocols and filtering traffic based on IP addresses, protocol types and port numbers.
A WAF, on the other hand, operates at Layer 7 (L7) of the OSI model and understands web application protocols. It is essential for analyzing the traffic going to and from a web application, preventing attacks that might otherwise go undetected by a traditional network firewall, and it can be used as part of a positive or negative security model.
When deployed, a WAF acts as a reverse-proxy shield between an application and the internet. A proxy server is an intermediary that protects a client machine; a reverse proxy, by contrast, ensures that clients pass through it before reaching a server. Crucially, a single WAF can be used to protect multiple applications placed behind it.
A WAF uses a set of rules called policies to block malicious traffic that tries to take advantage of application vulnerabilities, including the OWASP Top 10. These security policies are often based on known web attack signatures, matched against scan points such as the HTTP headers, HTTP request body and HTTP response body. The rules can also be specified to detect patterns in the URL or file extension, to restrict URI, header and body length, and to detect SQL injection, XSS, zero-day exploits and even bots based on signature detection and behavior.
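As a rough sketch of how such signature-based policies work (the rule names and patterns below are illustrative assumptions, not a real rule set; production systems such as the OWASP Core Rule Set are far more extensive), a filter can scan incoming request text against known attack signatures:

```python
import re

# Hypothetical, heavily simplified signatures for two common attack classes.
POLICIES = [
    ("sql_injection", re.compile(r"('|\b)(or|and)\b\s+\d+\s*=\s*\d+|union\s+select", re.I)),
    ("xss", re.compile(r"<script\b|javascript:", re.I)),
]

def inspect(request_text):
    """Return the name of the first matching policy, or None if the request looks clean."""
    for name, pattern in POLICIES:
        if pattern.search(request_text):
            return name
    return None

print(inspect("GET /items?id=1 OR 1=1"))                        # sql_injection
print(inspect("POST /comment body=<script>alert(1)</script>"))  # xss
print(inspect("GET /items?id=42"))                              # None
```

A real WAF normalizes the request first (URL decoding, case folding, charset handling) before matching; naive regex filters like this one are trivially bypassed, which is why commercial rule sets are both larger and continuously updated.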
The key benefit of using a WAF is that these policies can be modified and implemented quickly and easily. Some WAF providers also offer load balancing, SSL offloading, and intelligent automation of policy modifications using machine learning to optimize your cloud security. This makes it easy to adapt and respond to varying attack vectors and provides Distributed Denial of Service (DDoS) protection.
On its own, a WAF cannot protect against all attacks. But it can enhance web application security to protect against these common attacks:
Cross-site request forgery (CSRF): these are attacks that force authenticated users of a web application to take actions that compromise the security of the app. Usually, an attacker tricks the user into clicking a link sent via email. Once the user has authenticated and logged in, they can be forced to perform requests such as transferring funds or changing their profile details and email addresses. If the attack is aimed at an admin account and succeeds, it could compromise the entire web application.
SQL injection (SQLi): these are attacks where attackers try to inject malicious SQL commands into websites and applications through user-input data fields such as contact forms. The injected code can gain unauthorized access to databases and run commands to extract or modify the private information they contain.
Need DDoS Protection and high-performance security solutions? CDNetwork’s Flood Shield is perfect for DDoS attacks mitigation.
What Are The Different Types of WAFs?
A WAF protects web applications by utilizing threat intelligence and blocking attacks that satisfy certain pre-set criteria while allowing approved traffic. WAFs help protect against cross-site request forgery, cross-site scripting, SQL injection, and file inclusion, where attackers try to gain unauthorized access to an application to steal sensitive data or compromise the application itself.
A WAF can be one of three types based on the way they are implemented.
A network-based WAF is usually hardware-based and installed locally. It is placed close to the server and is therefore easier to access. As with most hardware-based deployments, it minimizes latency but can be expensive to deploy and maintain.
A host-based WAF is one that is fully integrated into an application's software. It exists as a module inside the application server. This type of WAF is less expensive than a network-based WAF and is more customizable. On the downside, it can drain local server resources and affect the performance of the application. It can also be complex to implement and maintain.
A cloud-based WAF is more affordable and requires fewer on-premises resources to manage. They are easier to implement and are often delivered as SaaS by a vendor, offering a turnkey installation as simple as changing the DNS to redirect web traffic. Because of the cloud service model, they also have minimal upfront cost and can be continuously updated to keep up with the latest attacks in the threat landscape. CDNetworks offers a cloud-based WAF that is integrated with our global data centers and content delivery network (CDN) and prevents web application-layer attacks in real time.
YouTube and FLV Player for windows mobile
YouTube is the best way to get the latest songs from the internet, so we can easily download these videos as FLV files with the help of YouTube downloaders. We often play FLV files on computers with players like VLC or GOM Player, but the main issue is: how do you play FLV files on mobiles?
This is the ultimate solution for Windows Mobile users who want to play FLV files on their phones. If you are unable to download and watch YouTube videos on your Windows Mobile 6 device, you may also enjoy this application. I personally tested it on my HTC Touch.
YouTube Player allows you to search for and play videos in a nice Windows Mobile application, then plays the full FLV video (not minuscule MPEG4 versions) with its own player.
It requires a high data rate and consumes a lot of bandwidth, so be sure you have an unlimited mobile data plan. Moreover, it won't work well on GPRS, as FLV videos are large.
If you have another mobile like the Nokia E71 or other Symbian-based phones, YouTube generally provides a mobile version at www.m.youtube.com, and you can install the application in a few simple steps. Supported mobiles are Nokia's N73, E51, E61, E61i, E65, N95, 6120c, 6110n and Sony Ericsson's K800i and W880i.
In short, you can install the YouTube application, and if you need more information, watch this video.
Transcript of the video
1. Using your phone browser go to: m.youtube.com/app
2. Follow the instructions to download YouTube
3. Exit your phone browser
4. Find the YouTube icon on your phone and start watching videos
Update on 7 March 2009: temporarily, m.youtube.com/app gives a 404 Not Found error.
It works well on my Jasjar, but it doesn't exit when pressing the exit button. There is no way to exit the program other than resetting the device.
Hi i found this new encoder
There is a link so you can check it out.
Please help: I am also using an HTC TyTN II and can't get it working. How did you guys get yours working? I really need your response. Thanks.
I have exactly the same problem as Sagar F. I have downloaded YouTube Player but the FLV video doesn't play. It shows a blank screen and the cursor moves for about 10 seconds as if it were playing a video, and then nothing. It's a 60 MB video file.
Can someone help plz!!!!!
I have tried both YouTube Player and TCPMP. I have a few FLV files whose sizes vary from 4 MB to 28 MB, downloaded through RealPlayer. Only one video, which is 5.2 MB, runs; with the rest nothing happens.
When I try to play these files, the screen shows blank, the cursor moves as if playing the file, and then the message "stopped" appears. If I try to play again, it doesn't work. I tried GDI and Raw Buffer, and tried increasing the buffer size, but no luck.
Can someone help me? How do I play all the FLV files I downloaded using RealPlayer on my HTC Touch, WM 6?
Waiting for some real solution
I've been using this daily with a set of my FLV files and have some ideas for enhancements:
- Multi-file selection for playback.
- Automatic playback of the next file when the current one reaches the end.
- I'd like to be able to reposition playback, e.g. re-run a section or resume a closed session.
- While an flv file is open, you cannot access any other screen/program on your device.
Still, superb software with an option to get even better.
Wow! I've been looking for this kind of software for a long time. The crappy quality after conversion to 3gp, 3g2 or whatever is no more! I bought CorePlayer after I got the impression that it could play flv files, but it turned out it couldn't handle ON2 VP6 coded files.
I installed Youtubeplayer on my HTC Touch HD2 and moved the flv files to a first-level subdirectory on my storage card. The film plays in portrait and landscape. Finally, the high quality of the screen shines through. I'm happy.
Sorry, not working!
It finds the FLV files stored in any place, but on my Gsmart i120 (WM6) it's not working. Thanks anyway.
how to play flv files on windows mobile http://bit.ly/1YRVOz
This software is junk; it only plays a few FLV files, I don't know why. Use the TCPMP player instead, or for YouTube, use YouTube's own mobile application.
I have tried it with my Asus P527 with Windows Mobile 6.1 and it works just fine. Great thanks, Honey.
I have tried it out on an HTC TyTN II with Windows Mobile 6.1 and it works perfectly well. The video was really helpful, thank you very much.
I too have YouTube software installed on my Windows Mobile. Some files I am able to play, and it works very well, but some files I am not able to view. Please send your response.
Well, I also have tried UtubePlayer. It works, but with heavy files (10 MB) it doesn't work properly; it plays them very slowly. What's the solution? I have an HTC Touch.
Thanks, it works on my HTC Touch just fine. It's right that we have to rename the folder to "flv", otherwise the software detects no FLV files.
>Install the application
>Store your flv files in any folder ( like "flv" )
folder location must be
storage card/"your folder"
Hi, I tried this after downloading it onto my HTC Touch as well, and it does not work with my FLV files. If you have any other software, I'd appreciate it.
Any YouTube video can be played on Windows Mobile using http://m.vtap.com. This site allows you to search and stream almost any video on the internet in MMS format.
YouTube FLV Player for windows mobile ( tested on my HTC Touch) http://tinyurl.com/5e7btz
@Gurpreet: YouTube FLV players are generally for YouTube's FLV files, which range from 4 MB to 20 MB.
Please check that your file is FLV; YouTube Player will not handle larger files of about 60 MB :)
I checked it with general YouTube files and it is working perfectly for me.
@Apple (anonymous)
Which advice are you talking about?
This is the post about the FLV Player application for Windows Mobile, and it is working perfectly fine!
What we’re about
Upcoming events (2)
- The Uncertainty of Web Development Careers in 2023 (Roam, Atlanta, GA)
It's been a tumultuous year in the tech space. We know developers are worried even at what one would consider "safe" jobs. We knew the tide would eventually turn from the crazy job market of 2020 and 2021... but the sky is not falling. Those concerned about their current jobs and those currently looking need to be well-informed about the job market and the reality of what's coming over the next year.
##AWESOME EVENT ALERT##
Why go to a React, Vue, or Angular Conference when you can get it all in one place? CONNECT.TECH Oct 24-26, now in its 11th year, is Atlanta's web dev conference where you can seriously level up your skills.
- Core js
- Serverside js
- Advanced js
No need for FOMO. See all the workshops, talks, and speakers, and save your seat here CONNECT.TECH
- CONNECT.TECH 2023 (Georgia World Congress Center - Building C, Atlanta, GA) $595.00
CONNECT.TECH is a three-day event that will take place on October 24-26, 2023, at the Georgia World Congress Center in Atlanta, GA. The conference is designed for front-end developers, designers, and other web professionals who want to learn about the latest trends and technologies in the industry.
- 2 Keynotes
- 7 Workshops
- 9 Tracks
- 80+ sessions
CONNECT.TECH is the largest and longest-running multi-framework front-end conference in the USA. It is a premium web development conference at a community conference price.
Here are some of the things you can expect:
- Learn from the best in the industry. CONNECT.TECH will feature keynote speakers and workshops from over ninety of the most respected names in the front-end development community.
- Network with other professionals. CONNECT.TECH is a great opportunity to meet other front-end developers, designers, and other web professionals worldwide.
- Get hands-on experience with the latest technologies. CONNECT.TECH will have a variety of workshops and labs where you can learn about and experiment with the latest technologies.
- Stay up-to-date on the latest trends. CONNECT.TECH will cover the latest trends in front-end development, so you can be sure you always use the latest tools and techniques.
This registration is for the two-day conference, October 25-26. If you wish to attend one of the full-day workshops on October 24, please contact us at tickets [at] connectevents.Io
- Lessons Learned from 10 Years in React - Cory House
- Supercharge React Applications with Modern GraphQL Backends - Glenn Reyes
- Leadership Workshop for the Reluctant Leader - David Neal
- Automated Testing Made Easy - Micah Wood
- Remix Fundamentals - Matt Brophy
- Solving Back End Mysteries for Front End Developers - Jeff Linwood
- Accessibility Auditing: Getting Started with Accessibility - Todd Libby
Fixed #25718 -- Allowed using None as a JSONField lookup value.
It's my follow up on #5617.
In general, it provides the ability to use None in queries when the query applies to some key inside the value of a JSONBField.
Thank you!
Is there any way to regain momentum on this issue? I would be interested in seeing this merged for 1.11.2, if possible. It seems that it should only need a rebase.
I'll do the rebase. Also, it will probably be worth it to add a similar feature for ArrayField. Since it is not a bugfix, it will be included only in 2.0 (if it gets merged at all).
@timgraham, could you give some advice here? Should I add something to the PR?
If there is anything, that I can improve, I will be happy to do that :)
Does this now only affect sub fields, and leave the current behaviour to distinguish between 'null' and NULL for the whole value?
I tried to investigate that, however, if it's possible to store a JSON null in JSONField, I'm not sure how.
@mjtamlyn , it should affect only sub fields because of this check. And the previous behaviour is the same for regular fields.
I added some comments and fix for formatting. But I'm not sure if the comments are ok
What's the status of this pull request? With 2.0 being released in the next few days, I assume this won't get merged before?
Correct -- only release blocking bugs (regressions and bugs in new features) are being fixed in 2.0 at this point. The patch needs to be updated to remove usage of QUERY_TERMS as that's been removed in 244cc401559e924355cf943b6b8e66ccf2f6da3a.
@Stranger6667 I added a test for the can_use_none_as_rhs behaviour. This is at least exercised now.
Can you have a look? If you're happy, can you squash and rebase this? Then we'll have a look if there are any last issues. Hopefully we can get it in!
Hello @carltongibson !
For me, everything seems good! Probably only a note in the changelog is missing
I'll add a note to the changelog and do squash & rebase
Squashed & Rebased!
Hm, it is strange, but the builds fail with an ImportError on "QUERY_TERMS", which is not present in the PR
@timgraham Can I ask you to look at the build failure here? It looks like (?) it's picked up the wrong commit after the rebase... (QUERY_TERMS was removed between the PR being opened and now.)
Whilst you're here your comments on the PR would be worthwhile, to save you a trip back later. 🙂
Thanks!
I saw an email notification that suggests a bunch of commits from master were pushed to this branch with different hashes (probably this was fixed in a later push) and the problem might have been caused by that.
As for the changes, I haven't reviewed in detail, but I noticed that the behavior changed in Django 2.0 after 58da81a5a372a69f0bac801c412b57f3cce5f188. The querysets in the new tests are now returning results instead of raising ValueError: Cannot use None as a query value -- the results may be incorrect, but we should understand the current behavior and make sure this change still makes sense (and update the release note to reflect that, although it might be a bug fix rather than a "new feature" at this point). Also, the documentation should have a ``.. versionchanged:: 2.1`` note that explains what has changed.
@Stranger6667 I'm going to assume you can go over this one more time to address Tim's comments. (If not let me know and I'll have a look!) I'm going to mark it Patch needs improvements on the Trac ticket. When you're done uncheck that and we'll take another look.
Thanks for the effort here! I know it's been a long road. 🙂
@carltongibson ,
Please, could you take a look? I've added a ``.. versionchanged:: 2.1`` block with a small explanation of the change, but I'm not sure about the tests and release notes. In the end, it is support for querying JSON "null", which seems to me like a new feature, but I'm not sure.
Thank you for your support :)
How does the behavior change exactly? As I mentioned in my previous comment, None is allowed as a lookup value since 58da81a5a372a69f0bac801c412b57f3cce5f188.
Hey @timgraham — I've just been looking at this...
The current behaviour has the =None query converted to isnull in build_lookup here:
https://github.com/django/django/blob/9b1125bfc7e2dc747128e6e7e8a2259ff1a7d39f/django/db/models/sql/query.py#L1087-L1090
So the test cases run but fail: the =None query is the same as the has-key __isnull query. (The filter in test_none_key returns the same QuerySet as the filter in test_isnull_key above.)
If you then add in the postgres changes, so add and register the JSONExact lookup, but leave the rest as is, you still get the same result, because the lookup_name remains exact. (Thus the old error is not raised there.)
It's only when you add the Lookup.can_use_none_as_rhs changes that the new JSONExact is returned instead of an IsNull and the tests pass.
I think all of the above is correct, i.e. as we want it.
The new docs cover the different cases. The distinction between the isnull has key and the =None case (as interpreted here) is important.
I'm not sure about "Bug" vs "New Feature" — this isn't something it's been possible to do, but the original ticket has "Bug" up the top...
@timgraham: @Stranger6667 has adjusted the docs/release notes as per the discussion last week. I think this is good to go.
I made a few edits and then realized that the problem also affects HStoreField. We should fix both fields together so there isn't an inconsistency. Can you give that a look Dmitry? Don't worry about documentation changes for that, I'll take care of it afterward. Generally, I think these changes make querying intuitive and don't need to be documented in much detail.
Hello Tim!
Thank you for getting back to me. Sure, I'll take a look
Regarding the HStoreField.
For example, if key a has a NULL value, or there is no b key inside the hstore, then the following queries will both return NULL:
SELECT 'a=>NULL'::hstore -> 'a';
SELECT 'a=>NULL'::hstore -> 'b';
Thus, checking for a certain key having a NULL value should be done in a different way. It could be:
- an extra check for key existence in the hstore, which would allow us to distinguish between the two situations described above;
- replacing the value getter (->) with the containment operator (@>); the query would be like SELECT 'b=>NULL'::hstore @> 'b=>NULL';
So, in this aspect, it differs from JSONBField, which returns null and NULL in similar situations.
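The same null-vs-missing collision can be reproduced outside PostgreSQL. As an illustration only (this uses SQLite's JSON1 functions via the Python standard library, not the jsonb/hstore operators this PR targets), json_extract collapses both cases to SQL NULL, while json_type can tell them apart, much like an extra key-existence check or @> would for hstore:

```python
import sqlite3

con = sqlite3.connect(":memory:")
doc = '{"a": null}'

# json_extract: a stored JSON null and a missing key both surface as SQL NULL (Python None)
print(con.execute("SELECT json_extract(?, '$.a')", (doc,)).fetchone()[0])  # None
print(con.execute("SELECT json_extract(?, '$.b')", (doc,)).fetchone()[0])  # None

# json_type distinguishes them: 'null' for a stored JSON null, SQL NULL for a missing key
print(con.execute("SELECT json_type(?, '$.a')", (doc,)).fetchone()[0])  # null
print(con.execute("SELECT json_type(?, '$.b')", (doc,)).fetchone()[0])  # None
```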
I'm not sure how to implement it, for now, I have this:
class HStoreExact(Exact):
    can_use_none_as_rhs = True

    def as_sql(self, qn, connection):
        lhs, lhs_params = self.process_lhs(qn, connection)
        rhs, rhs_params = self.process_rhs(qn, connection)
        params = lhs_params + rhs_params
        if (rhs, rhs_params) == ('%s', [None]):
            return "%s @> '%s=>%s'" % (self.lhs.source_expressions[0].target.name, self.lhs.key_name, rhs), params
        return '%s %s %s' % (lhs, self.operator, rhs), params
But I think that this is very specific code that probably will not work in general. Could you please advise? Or maybe it would be better to take another approach to the problem?
Regards
Prof. Venkatesan Guruswami
Time: 2017-06-09, 14:00-15:00
Given a k-SAT instance with the promise that there is an assignment satisfying at least t out of k literals in each clause, can one efficiently find a satisfying assignment (setting at least one literal to true in every clause)?
Extensions of some 2-SAT algorithms solve this problem when t >= k/2. We prove that for t < k/2, the problem is NP-hard (joint work with P. Austrin and J. Hastad). Thus, SAT becomes hard when the promised density of true literals falls below 1/2. One might thus say that the transition from easy to hard in 2-SAT vs. 3-SAT takes place just after two and not just before three.
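For intuition, the search problem itself is easy to state in code. The brute-force sketch below (illustrative only; the talk concerns when the problem is solvable efficiently) looks for an assignment setting at least one literal true in every clause:

```python
from itertools import product

def true_literals(assign, clause):
    """Count literals of clause set true; literal i > 0 means x_i, i < 0 means NOT x_i."""
    return sum((lit > 0) == assign[abs(lit)] for lit in clause)

def find_satisfying(clauses, n):
    """Brute force over all 2^n assignments; return one satisfying every clause, else None."""
    for bits in product([False, True], repeat=n):
        assign = dict(enumerate(bits, start=1))
        if all(true_literals(assign, c) >= 1 for c in clauses):
            return assign
    return None

# A 3-SAT instance where some assignment makes >= 2 literals true in each clause (t=2, k=3)
clauses = [(1, 2, 3), (-1, 2, 3), (1, -2, 3)]
print(find_satisfying(clauses, 3) is not None)  # True
```

The promise only matters for efficiency: brute force is exponential in n, while the result above shows a polynomial-time algorithm exists iff t >= k/2.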
The talk will sketch the proof of this hardness result, which proceeds by characterizing functions passing the natural "dictatorship test" as "juntas" depending on few variables. We will then elucidate a broader principle based on the paucity of "weak polymorphisms" (generalizing the universal-algebraic approach to studying constraint satisfaction via polymorphisms), which seems to govern the intractability of promise constraint satisfaction problems (PCSPs), a rich class that includes the above example among many other fundamental problems. We will touch upon a body of work (with J. Brakensiek) that shows that the complexity of a PCSP is precisely captured by its associated weak polymorphisms, and applies this framework to prove new hardness results for approximate graph coloring and establish a complexity dichotomy for the case of Boolean symmetric PCSPs.
Venkatesan Guruswami is a computer scientist at Carnegie Mellon University in Pittsburgh, United States. He did his schooling at Padma Seshadri Bala Bhavan in Chennai, India. He completed his undergraduate degree in Computer Science at IIT Madras and his doctorate at the Massachusetts Institute of Technology under the supervision of Madhu Sudan in 2001. After receiving his PhD, he spent a year at UC Berkeley as a Miller Fellow, and then was a member of the faculty at the University of Washington from 2002 to 2009. His primary area of research is computer science, and in particular error-correcting codes. Following 2007, he was on leave from the University of Washington: during 2007-2008 he visited the Institute for Advanced Study as a Member of the School of Mathematics, and during 2008-09 he visited the SCS at Carnegie Mellon University as Visiting Faculty. In July 2009, he joined the School of Computer Science at Carnegie Mellon University as Associate Professor in the Computer Science Department. Guruswami was one of two winners of the 2012 Presburger Award, given by the European Association for Theoretical Computer Science for outstanding contributions by a young theoretical computer scientist.
Is the Wisdom (Survival) skill used for both tracking and finding tracks?
The rules mention that to follow tracks, you need to find them. It is also mentioned that it can take up to an hour outdoors to find tracks you have lost - all under tracking, which is Wisdom (Survival).
The way I read it would be to use Survival no matter the situation (for both finding and following) but I read some people would use perception or investigation to find the tracks. When looking at the table for Sylvan random encounter in DMG p. 87, in one entry it uses Wisdom (Survival) to both find and follow the tracks.
Also, I see a problem with using other skills to find the tracks for a Ranger character, because the Ranger's Favored Enemy feature states you have advantage on Wisdom (Survival) checks to track your favored enemies. It would be very strange for the Ranger not to be able to find tracks he could easily follow thanks to his advantage on the check.
And what if the Ranger for some reason is not proficient in Perception? He would never be able to find tracks, and so would not be able to follow any.
So, the question is easy but I fear the answer is not, as I was not able to find a straight answer to it.
I want to make sure that any character who wants to become a good tracker (either through the Ranger class or a Rogue subclass) can do so. I feel that having to be good at 2 or 3 skills to accomplish one thing (i.e. tracking) is not the common usage of skills in 5e.
Players don't "use skills" in 5e anymore. Instead, they say what they do, and the DM may (or may not) ask a player for an ability check. A relevant question: How to use skills — did this change between editions and how?
@enkryptor thanks for the comment. I have commented on this point in the answer's comments.
There are no skill checks in D&D 5e
Don't feel bad if you missed it, the Player's Handbook sucks at explaining it.
There are Ability Checks in D&D 5e
The first question to ask is which ability score is the correct one for tracking and finding tracks?
Strength, Dexterity, Constitution and Charisma don't immediately suggest themselves, although I can see circumstances where they would: tracking someone through a crowd by reading disturbances and asking questions could use Charisma, for sure. Or Dexterity, if you are tracking someone across rooftops, perhaps?
You are generally left with:
Intelligence "when you need to draw on logic, education, memory, or deductive reasoning", or
Wisdom "how attuned you are to the world around you and represents perceptiveness and intuition."
Now, consider the task of tracking or finding tracks in the particular circumstances: is it primarily analytic, or primarily intuitive?
There is no correct answer and different circumstances can give one answer one time and the other answer another time.
You're the DM, you make the call.
Once you've decided on the ability; is there one or more applicable skills?
If the PC has a skill that is applicable then they can apply their proficiency bonus to the roll.
For tracking: is Survival applicable? Of course, it is.
Perception? Yes.
Insight? Definitely.
Arcana? If the thing being tracked is magical, why not?
Following a wild animal? Nature is applicable.
There are no off-limits skills. If your player can convince you why it should apply; be convinced.
At my table, my call is "Make an [ability] Ability Check?" and I expect and encourage "Can I apply [skill]?"
I highly recommend these "no skill" character sheets to emphasise this.
Following that logic (with which I agree!), I'd argue you might as well go the rest of the way and replace "skill" with "proficiency". While the non-skill proficiencies may be less obvious with tracking, they may well still apply for other activities (e.g., proficiency with Disguise Kits would be applicable while creating a mundane disguise, which would probably otherwise be a Charisma check). Plus, for this player/GM, removing that distinction helped cement the "make a check" / "can I apply X proficiency?" loop.
@Dale M. I didn't take it personally, but I really meant the Survival skill using the Wisdom ability, because the way you described it in your answer assumes I would use the variant of pairing different abilities with different skills, which I didn't plan on using, to keep things simple for players, in the sense of helping them make a strategy while playing or even for character building. Maybe not using this variant will not help me meet my objective, but we'll see. Again, I might have missed something; if I need to clarify anything, please let me know so I can adjust my question.
@Dale M I have modified my question and added how I perceive the different proficiencies so you can maybe modify your answer and add a part to address the specific question. thanks for bringing this point of view of ability checks versus skill checks. I worded my question so that it can be useful for others who play with the variant of different skills with different abilities.
@jonDraco etiquette on this site is not to change a question once there are answers, it’s to write a new question if you find the question you asked wasn’t exactly the question you wanted to ask
@Dale M sorry about that. I'll see if I should delete or put back the original then ask another.
@Dale M. I see you did it already. As your answer is close to what I'm looking for anyway, could you please add whether you would use the same combination of ability check and proficiency bonus for both finding and following tracks? That is the main intent of the question, which is only partially answered by referring to "find tracks". Thanks again.
import java.sql.*;

public class DBAccess
{
    PreparedStatement s;
    Connection con;
    ResultSet r;

    public DBAccess()
    {
        try
        {
            // Legacy driver class name; Connector/J 8+ uses com.mysql.cj.jdbc.Driver
            Class.forName("com.mysql.jdbc.Driver");
            con = DriverManager.getConnection("jdbc:mysql://localhost:3306/employee", "root", "password");
            System.out.println("\t\t\tDatabase Connected");
        }
        catch (Exception e)
        {
            System.out.println("Database Not Connected because " + e);
        }
    }

    public void insert(String id, String n, String a, String sa, String pn)
    {
        try
        {
            s = con.prepareStatement("INSERT INTO Employee VALUES (?,?,?,?,?)");
            s.setString(1, id);
            s.setString(2, n);
            s.setString(3, a);
            s.setString(4, sa);
            s.setString(5, pn);
            s.executeUpdate(); // was missing: without this the row is never written
            System.out.println("Record has been added");
        }
        catch (Exception e)
        {
            System.out.println("Record has not been added because " + e);
        }
    }

    public void delete(String id)
    {
        try
        {
            // The original string ("Delete Employee with id =?") was not valid SQL
            s = con.prepareStatement("DELETE FROM Employee WHERE emp_id = ?");
            s.setString(1, id);
            s.executeUpdate();
            System.out.println("Record has been deleted");
        }
        catch (Exception e)
        {
            System.out.println("Record has not been deleted due to " + e);
        }
    }

    public void modify(String id, String n, String a, String sa, String pn)
    {
        try
        {
            s = con.prepareStatement("UPDATE Employee SET name=?, age=?, salary=?, phno=? WHERE emp_id = ?");
            s.setString(1, n);
            s.setString(2, a);
            s.setString(3, sa);
            s.setString(4, pn);
            s.setString(5, id);
            s.executeUpdate();
            System.out.println("Record has been updated");
        }
        catch (Exception e)
        {
            System.out.println("Record has not been updated due to " + e);
        }
    }

    public void display()
    {
        try
        {
            s = con.prepareStatement("SELECT * FROM Employee");
            r = s.executeQuery();
            System.out.println(" ID | NAME | Age | Salary | PHONE ");
            System.out.println("---------------------------------------------------------------");
            while (r.next())
            {
                System.out.println(r.getString(1) + "\t " + r.getString(2) + "\t " + r.getString(3) + "\t " + r.getString(4) + "\t " + r.getString(5));
            }
        }
        catch (Exception e)
        {
            System.out.println("Records could not be displayed because " + e);
        }
    }

    public void close()
    {
        try
        {
            r.close();
            s.close();
            con.close();
        }
        catch (Exception e)
        {
            System.out.println("Resources could not be closed because " + e);
        }
    }
}
Noise sensitivity in critical percolation
Recall Garban's talk: an algorithm to decide if a crossing exists is to start the exploration from the left corner. If there is a left-right crossing, you cannot reach the upper side.
Introduce a dynamics: flip edges with small probability, once every second. We see that within a short portion of time, the crossing switches from left-right to up-down several times. So there seem to be plenty of pivotal edges, i.e. edges that, when flipped, change the crossing event.
1. Pivotal edges
From SLE technology, we know that the two-arm probability is $\alpha_2(R) = R^{-1/4+o(1)}$,
so the length of the interface is $R^2 \alpha_2(R) = R^{7/4+o(1)}$.
1.1. How many pivotals are there ?
The expected number of pivotals is $R^2 \alpha_4(R) = R^{3/4+o(1)}$, where $\alpha_4(R) = R^{-5/4+o(1)}$ is the four-arm probability. Let $\epsilon$ be the probability to switch a given edge. If $\epsilon R^{3/4}$ tends to $0$ (low noise), we don't hit any pivotals. Indeed, $E[\#\{\text{flipped pivotals}\}] = \epsilon R^{3/4} \to 0$. It implies asymptotically full correlation of successive configurations.
If $\epsilon R^{3/4}$ tends to infinity, we do hit many pivotals, but this does not imply asymptotic independence.
1.2. The Fourier spectrum
Let denote the indicator function of the crossing event. Let denote the noise operator,
is diagonal in the Fourier-Walsh basis,
Define the spectral sample as the distribution on the set of subsets of . Notation:
If for some sequence , tends to , then we have asymptotic independence, since the correlations vanish.
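The correlation computation alluded to here is presumably the standard one (a reconstruction; writing $\mathscr{S}_f$ for the spectral sample and $t$ for the noise/time parameter):

$$\mathbb{E}[f(\omega_0)\,f(\omega_t)] - \mathbb{E}[f]^2 \;=\; \sum_{S \neq \emptyset} \hat{f}(S)^2\, e^{-t|S|},$$

so if $t_n\,|\mathscr{S}_{f_n}| \to \infty$ in probability, the right-hand side tends to $0$ and successive configurations decorrelate.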
1.3. Pivotals versus spectral sample
An easy calculation gives
(but this does not hold for more points).
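The easy calculation is presumably the standard one-point identity (stated here as a reconstruction, with $\mathscr{P}_f$ the set of pivotal points):

$$\mathbb{P}[x \in \mathscr{S}_f] \;=\; \sum_{S \ni x} \hat{f}(S)^2 \;=\; I_x(f) \;=\; \mathbb{P}[x \in \mathscr{P}_f],$$

so the spectral sample and the pivotal set share their one-point intensities, although already the two-point correlations can differ.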
For percolation, , hence there exists such that
1.4. Other events
Dictator has .
Majority has (most of the mass on singletons, very noise stable) and is huge (sharp threshold).
Parity is the most sensitive to noise.
Schramm tried to prove the existence of scaling limits using noise sensitivity.
Theorem 1 (Benjamini, Kalai, Schramm) A sequence of monotone Boolean functions is noise sensitive iff it is asymptotically uncorrelated with all weighted majorities.
Corollary 2 In particular, left-right percolation crossing is noise sensitive.
1.6. Schramm and Steif
Theorem 3 If can be computed by a randomized algorithm with revealment , then
This does not give a good bound for percolation. The best revealment is at least , whereas the conjectured is .
2. Basic properties of the spectral sample
Gil Kalai proposed to study the whole distribution . We do not know how to sample from it (a quantum algorithm exists, Bernstein-Vazirani).
Now we know conformal invariance (Smirnov, Tsirelson, Schramm-Smirnov). Can we use it to describe , percolation ?
2.1. General facts
Lemma 4 (Linial, Mansour, Nisan) (Random restriction Lemma).
This gives a control on the spectral sample in a subset.
Lemma 5 (Kesten 1987, Damron, Sapozhnikov 2009) (Strong separation Lemma). If is far from the sides of the square, conditioned on the interfaces to reach , with a uniformly positive conditional probability, the interfaces are well-separated around .
2.3. Self similarity
The number of -boxes hit by is .
is very different from a uniform random set of the same density. Typically, for , intersects all -boxes.
We would like to estimate how much differs from its expectation. This works for the zero set of random walks on the line. But there is less independence for . For instance, we do not know how independent events hits and hits are. But we understand hits versus .
Theorem 7 (Garban, Pete, Schramm) On the triangular lattice,
So the scaling limit of is a conformally invariant Cantor set with Hausdorff dimension .
We also prove that the scaling limit of dynamical percolation exists as a Markov process. Above theorem implies that this process is ergodic.
- What about other Boolean functions ? Do typical random restrictions of large Boolean functions look generic ? Compare Szemerédi's 1975 Regularity Lemma, Chatterjee-Ledoux 2009 for submatrices of Hermitian matrices.
- Self similarity of and implies that the entropy of these random sets should be at most , so no factors as in the uniform case. This looks like fractal percolation on a -ary tree.
2.6. Connection with Influence-Entropy conjecture
Influence-Entropy conjecture (Friedgut-Kalai 1996).
Considered very hard (much stronger than KKL). Does it hold for ?
|
OPCFW_CODE
|
Gatsby 4 Update!
November 10th, 2021
Hello all, I noticed when I upgraded this demo repo to Gatsby 4 there were one or two small issues with this approach, I’ve now updated the code examples and explanations below to work with Gatsby 4.
— end update —
This post takes a deep dive into MDX frontmatter and how it can be used to store references to local and/or remote images and render them anywhere in the MDX body using GatsbyImage from the new gatsby-plugin-image.
Check out the demo site 👉 https://gatsbymdxembeddedimages.gatsbyjs.io/
Check out the code repo 👉 https://github.com/PaulieScanlon/gatsby-mdx-embedded-images
gatsby-plugin-image is absolutely brilliant to use when returned by JSX. Unfortunately
gatsby-plugin-image doesn’t work in quite the same way when returned by MDX.
TL;DR: it doesn’t render an image 😖
There’s no problem with
gatsby-plugin-image itself! The situation has more to do with how image data is processed, queried and then referenced when returned by MDX. Let’s see how to fix it!
The general approach
The idea behind this approach is to store references to images or image URLs in the MDX
frontmatter , which can later be processed by gatsby-transformer-sharp and used anywhere around the post body. By using
<GatsbyImage /> from
gatsby-plugin-image it’s possible to maintain blazing fast site speed with multiple images found in any MDX post or page body!
Note: you can of course use an HTML <img … /> tag in your MDX. But the images won’t be automatically optimized and you won’t get the cool blur-up effect that you get when using gatsby-plugin-image.
References to images stored on disk and co-located with the MDX file can be stored in frontmatter.
Example of frontmatter with reference to local images:
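A sketch of what such frontmatter might look like (the field name embeddedImagesLocal and the file names are illustrative assumptions):

```yaml
---
title: Some Post
embeddedImagesLocal:
  - image1.jpg
  - image2.jpg
---
```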
Example of image co-location on disk:
If you are working with images stored on a remote URL, references to these remote images can also be stored in frontmatter.
Example of frontmatter with reference to remote images:
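A sketch of the remote variant (the field name embeddedImagesRemote matches the field discussed below; the URL is an illustrative placeholder):

```yaml
---
title: Some Post
embeddedImagesRemote:
  - https://res.cloudinary.com/example/image/upload/v1/image1.jpg
---
```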
The docs will explain that to use
<GatsbyImage /> from gatsby-plugin-image in JSX, you can do something similar to the code example below. However, as mentioned previously, this will only work if the data passed on to
<GatsbyImage /> has been processed and queried in the correct way.
Typically, when using locally sourced image files (files on disk), Gatsby and GraphQL are able to correctly infer the type and process the data using
gatsby-transformer-sharp. Happily, this will work just fine when combined with the getImage helper function.
Example of using
<GatsbyImage /> in JSX:
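A minimal sketch of the usual JSX pattern (the query shape and file name are assumptions, following the gatsby-plugin-image docs):

```jsx
import React from 'react'
import { graphql } from 'gatsby'
import { GatsbyImage, getImage } from 'gatsby-plugin-image'

const Page = ({ data }) => {
  // getImage digs the gatsbyImageData out of the queried node
  const image = getImage(data.file)
  return <GatsbyImage image={image} alt="A locally sourced image" />
}

export default Page

export const query = graphql`
  {
    file(name: { eq: "image1" }) {
      childImageSharp {
        gatsbyImageData
      }
    }
  }
`
```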
However, when querying image URLs from MDX
frontmatter Gatsby and GraphQL need a little help inferring the type.
One possible solution
One way round the problem is to process the
frontmatter fields on the server using
gatsby-node.js then pass the
gatsbyImageData back to MDX using the
<MDXRenderer /> from
gatsby-plugin-mdx and apply it to
<GatsbyImage /> via data made available on a custom prop 🥴
… and the usage for this approach would look something like this:
Example of using
<GatsbyImage /> in MDX :
GraphQL Type Inference
This is a bit of a brain bender but… in order to correctly process images using
childImageSharp, GraphQL needs to understand that the field is of type File.
For the Local Image example, the typeDefs created by Gatsby will automatically and correctly infer that
image1.jpg is in fact of type
Example of GraphQL type inference for images as files:
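Roughly, the inferred type looks like this (a sketch; the actual generated typeDefs are more verbose, and the local field name is an assumption):

```graphql
type MdxFrontmatter {
  embeddedImagesLocal: [File]
}
```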
But! That said…
For the Remote Image example, the typeDefs created by Gatsby will infer that the image URLs are of type String.
Example of GraphQL type inference for images stored on a remote URLs:
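Sketched in the same way, the remote field comes out as plain strings:

```graphql
type MdxFrontmatter {
  embeddedImagesRemote: [String]
}
```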
Can you see the problem? We can’t process a String with childImageSharp!
To correct this, createTypes can be used to manually type a new field and store it using the same name on the parent MDX node. I create a new field of the same name to avoid directly mutating the original node.
Code snippet: createTypes types embeddedImagesRemote as [File] and sets a @link reference to the field name provided when using createNodeField.
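A sketch of that createTypes call in gatsby-node.js (the @link source, fields.embeddedImagesRemote, assumes the field created by createNodeField in the next section):

```javascript
// gatsby-node.js
exports.createSchemaCustomization = ({ actions }) => {
  const { createTypes } = actions
  createTypes(`
    type Mdx implements Node {
      frontmatter: MdxFrontmatter
    }
    type MdxFrontmatter {
      embeddedImagesRemote: [File] @link(from: "fields.embeddedImagesRemote")
    }
  `)
}
```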
createRemoteFileNode and createNodeField
In this demo, the image URLs are stored on Cloudinary and won’t exist as Files on disk until they have been remotely sourced and stored as a Node in the Gatsby data layer.
Using Promise.all and Array.prototype.map to iterate over the
frontmatter, it’s then possible to source the remote image using
createRemoteFileNode for each image URL and once sourced, create a new node field using
createNodeField that represents the image as a File.
Example of an onCreateNode function that uses createRemoteFileNode and createNodeField:
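A sketch of what that looks like in gatsby-node.js (field names follow the frontmatter discussed above; treat the exact shape as an assumption):

```javascript
// gatsby-node.js
const { createRemoteFileNode } = require('gatsby-source-filesystem')

exports.onCreateNode = async ({
  node,
  createNodeId,
  getCache,
  actions: { createNode, createNodeField },
}) => {
  if (node.internal.type === 'Mdx' && node.frontmatter.embeddedImagesRemote) {
    // Source each remote URL, creating a File node in the data layer
    const fileNodes = await Promise.all(
      node.frontmatter.embeddedImagesRemote.map((url) =>
        createRemoteFileNode({
          url,
          parentNodeId: node.id,
          createNode,
          createNodeId,
          getCache,
        })
      )
    )
    // Store the File node ids on a field of the same name
    createNodeField({
      node,
      name: 'embeddedImagesRemote',
      value: fileNodes.map((fileNode) => fileNode.id),
    })
  }
}
```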
This will result in the ability to query
embeddedImagesRemote from GraphQL and see all the returned
gatsbyImageData as you normally would — just as though you were using local files!
Example of GraphQL
Before going too much further, it’s a good idea to configure
gatsby-plugin-sharp and set up some “global” options. This will save duplicating some of the settings in the page query.
Example of setting global config options for gatsby-plugin-sharp:
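For instance (the defaults shown are illustrative, not the demo's exact values):

```javascript
// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: `gatsby-plugin-sharp`,
      options: {
        defaults: {
          formats: [`auto`, `webp`],
          placeholder: `blurred`,
          quality: 70,
        },
      },
    },
  ],
}
```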
Now that the remote image data is available in Gatsby’s data layer, you can work with it in the usual way, e.g. via a page query or a static query.
For the purposes of this demo all data is queried using a page query.
Example of page query:
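A sketch of such a page query, assuming the createTypes @link above so both local and remote images resolve to File nodes under frontmatter (field names are assumptions):

```graphql
query ($id: String) {
  mdx(id: { eq: $id }) {
    body
    frontmatter {
      title
      embeddedImagesLocal {
        childImageSharp {
          gatsbyImageData
        }
      }
      embeddedImagesRemote {
        childImageSharp {
          gatsbyImageData
        }
      }
    }
  }
}
```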
Once the image data has been correctly processed, it can be passed back on to the
<MDXRenderer /> via named props: in this example, one prop carries the local images and one carries the remote images.
The final step is to use the props that have been passed on to the
<MDXRenderer /> in the actual .mdx file and use them with the
<GatsbyImage /> component and the getImage helper function.
Here’s an example of how to use the
<GatsbyImage /> component with remotely sourced images from frontmatter:
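A sketch of the .mdx side, assuming the processed image data was passed to <MDXRenderer /> on a prop named remoteImages (the prop name is illustrative):

```jsx
import { GatsbyImage, getImage } from 'gatsby-plugin-image'

Some regular MDX copy.

<GatsbyImage alt="A remotely sourced image" image={getImage(props.remoteImages[0])} />

And some more copy after the image.
```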
|
OPCFW_CODE
|
Director IT, Business Intelligence
As the Director, Business Intelligence at Slack, you will lead a team of BI Engineers and Analysts and oversee KPI reporting, performance forecasting and analysis, and predictive modeling, including the build-out of technologies that support Business Intelligence at Slack. The role will drive insights to the corporate functions of Marketing, Sales, Finance and HR, helping to drive enhanced performance of our business. The ideal candidate thrives at delivering solutions to complex data problems, has an affinity for data and analytics technologies in support of driving business decisions, is a strong team leader and is comfortable working with executives and stakeholders of all levels.
In this role you will report directly to the VP of Information Technology. You are a high-energy, self-starter with an entrepreneurial spirit and the ability to navigate a rapidly changing landscape. You enjoy and excel at team building and development, as well as collaborating with cross-functional teams. Technical expertise and a passion for innovation are crucial, but you'll balance that with sound decision-making and a solid background in data & analytics.
- Hands-on leader engaging in complex data challenges and information management projects
- Develop and execute a holistic technology vision for Slack’s long-term analytics & data strategy while identifying quick wins that can provide immediate value.
- Develop and maintain relationships with business leaders across the company to prioritize development activities, set strategy and educate the organization on new technologies. Set data standards and ensure alignment. Facilitate an enterprise Data Council and other cross-functional user groups. Effectively communicate with internal and external stakeholders on a regular basis regarding progress on projects, performance and potential business risks
- Build a high-performing team through mentorship, training, and feedback.
- Design roadmaps for BI systems that support our global businesses into the future.
- Collaborate with business teams to identify, prioritize, and develop key features in our physical and logical data models (e.g. marketing, web, sales, and customer success analytics).
- Monitor trends and maintain a philosophy of continuous improvement to identify new technologies and processes, improve the data structure and integrations for long term scalability and efficiency.
- Own the technology stack, including data infrastructure, ETL and Visualization layers and enable our employee base to move to a self service model of access to business insights.
- Lead development through agile methods and test driven development.
- 12+ years of experience in Business Intelligence and integration technologies.
- 5+ years of experience in an IT leadership position.
- Considerable knowledge of integration platforms, service oriented architecture/microservices enterprise programming frameworks.
- Experience developing analytics technology solutions that drive business outcomes.
- Advanced experience with at least one business intelligence tool at enterprise scale.
- Broad technical exposure to various BI disciplines and tools, including Requirements Analysis, Management, ETL concepts, Data Warehouse, Analytics and Dashboards.
- Understanding of modern data pipeline tools (e.g. NiFi, Airflow, Luigi) and modern data warehouses. (e.g. BigQuery, Snowflake)
- Experience with AWS tools & technologies (S3, Redshift, EMR, Kinesis, Lambda, API Gateway, Dynamodb etc).
- Experience rolling out user driven self-service data access to an organization
- Expertise with data visualization tools (For example: Tableau, Domo, Looker) is required to support our unique requirements for visualization, security, data access, etc.
- Superb communication and presentation skills, as well as demonstrated management skills.
- Possess a customer experience attitude while maintaining a business-minded solutions approach.
- Experience with streaming data pipelines using any of Kafka, AWS Kinesis, Spark streaming etc would be a plus.
- Knowledge of Statistics and/or Machine Learning. Familiarity with columnar data would be a plus.
Slack is where work happens. It connects you with the people and apps you work with every day, no matter where you are or what you do. We believe everyone deserves to work in a welcoming, respectful, and empathetic culture. We live by our values and hire accordingly.
Launched in February 2014, Slack is the fastest growing business application ever and is used by thousands of teams and millions of users every day. We currently have eight offices worldwide, in San Francisco, Vancouver, Dublin, Melbourne, New York, London, Tokyo, and Toronto.
Ensuring a diverse and inclusive workplace where we learn from each other is core to Slack's values. We welcome people of different backgrounds, experiences, abilities and perspectives. We are an equal opportunity employer and a fun place to work. Come do the best work of your life here at Slack.
|
OPCFW_CODE
|
require "ipaddr"
module Puppet::Parser::Functions
newfunction(:ip_for_network, :type => :rvalue, :doc => <<-EOS
Returns an ip address for the given network in cidr notation
ip_for_network("127.0.0.0/24") => 127.0.0.1
EOS
) do |args|
addresses_in_range = []
range = IPAddr.new(args[0])

# Facter facts named ipaddress, ipaddress_eth0, etc. hold the node's addresses
facts = compiler.node.facts.values
ip_addresses = facts.select { |key, _value| key.to_s =~ /^ipaddress/ }

ip_addresses.each do |_key, string_address|
  ip_address = IPAddr.new(string_address)
  addresses_in_range.push(string_address) if range.include?(ip_address)
end

# TODO: handle multiple matching addresses instead of
# returning only the first one
return addresses_in_range.first
end
end
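Usage from a manifest might look like this (the network and variable names are illustrative):

```puppet
# Selects whichever of the node's ipaddress* facts falls inside 10.0.0.0/8
$listen_ip = ip_for_network('10.0.0.0/8')
```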
|
STACK_EDU
|
About 7 months ago I posted data comparing two memory dividers (1:1 and 3:5 @ 333 MHz) on my then Q6600/P965 based system and concluded that for the 67 % increase in memory bandwidth, the marginal gains in actual performance weren't worth the extra voltage/heat. Since then I've upgraded my hardware to an X3360/P35 setup and wanted to revisit this issue. Again, two dividers were looked at: one pair running 8.5x333=2.83 GHz, and another running 8.5x400=3.40 GHz:

333 MHz FSB:
1:1 a.k.a. PC2-5300 (667 MHz)
5:8 a.k.a. PC2-8500 (1,067 MHz)

400 MHz FSB:
1:1 a.k.a. PC2-6400 (800 MHz)
4:5 a.k.a. PC2-8000 (1,000 MHz)

I figured there would be a much greater difference in the 333 MHz FSB case since the memory bandwidth increased by 60 % vs. 25 % in the 400 MHz FSB case. All other BIOS settings were held constant with the exception of the divider (and the strap) at the given FSB. Subtimings were set to auto and as such could vary as managed by the board, which I found out was required, since manually setting some of the subtimings led to either an incomplete POST or an unstable system.

The benchmarks were broken down into three categories:
1) "Real-World" Applications
2) 3D Games
3) Synthetic Benchmarks

The following "real-world" apps were chosen: x264, WinRAR, and the trial version of Photoshop CS3. All were run on a freshly installed version of Windows XP Pro x64 SP2 with all relevant hotfixes. The 3D games were just Doom 3 (an older game) and Crysis (a newer game). Finally, I threw in some synthetic benchmarks consisting of the WinRAR self test, Super Pi-mod, and Everest's synthetic memory benchmark. Here is an explanation of the specifics:

Trial of Photoshop CS3 – The batch function in PSCS3 v10.0.1 was used to process a total of fifty-six 10.1 MP jpeg files (226 MB in total):
1) bicubic resize 10.1 MP to 2.2 MP (3872x2592 --> 1800x1200), which is the perfect size for a 4x6 print @ 300 dpi
2) smart sharpen (120 %, 0.9 px radius, more accurate, lens blur setting)
3) auto levels
4) saved the resulting files as a quality 10 jpg
Benchmark results are an average of two runs timed with a stopwatch.

RAR version 3.71 – rar.exe ran my standard backup batch file, which generated about 955 MB of rars containing 5,210 files in total. Here is the command line used:

Code:
rar a -m3 -md4096 -v100m -rv40p -msjpg;mp3;tif;avi;zip;rar;gpg;jpg "f:\Backups\Backup.rar" @list.txt

where list.txt is a list of all the target files/dirs included in the backup set. Benchmark results are an average of two runs timed with a stopwatch.

x264 Benchmark HD – Automatically runs a 2-pass encode on the same 720p MPEG-2 (1280x720 DVD source) file four times in total. It contains two versions of x264.exe and runs it on both. The benchmark is the best three of four runs (FPS) converted to total encode time. Shameless promotion --> you can read more about the x264 Benchmark HD at this URL, which contains results for hundreds of systems. You can also download the benchmark and test your own machine.

3D Games Based Benchmarks

Doom 3 – Ran timedemo demo1 a total of three times and averaged the fps as the result. Settings were 1,280x1,024, ultra quality with 8x AA.

Crysis – Ran the included "Benchmark_CPU.bat" and "Benchmark_GPU.bat", both of which run the pre-defined timedemo, looped four times. I took the best three of four (average FPS) and averaged them together as the benchmark. Settings were 1,024x768, very high for all (used the DX9 very high settings hack), and 2x AA.

"Synthetic" Application Based Tests

WinRAR version 3.71 – If you hit alt-B in WinRAR, it'll run a synthetic benchmark. This was run twice (stopped after 150 MB) and the result is the average of the runs.

SuperPI / mod1.5 XS – The 16M test was run twice, and the average of the two is the benchmark.

Everest v4.50.1330 Memory Benchmark – Ran this benchmark a total of three times and averaged the results.

Hardware specs:

Code:
D.F.I. LP LT P35-TR2 (BIOS: LP35D317)
Intel X3360 @ 8.5x400=3.40 GHz
Corsair Dominator DDR2-1066 (TWIN2X4096-8500C5DF) 2x 2Gb @ 5-5-5-15 (all subtimings on auto)
(tRD=8) @ 667 MHz (1:1) @ 2.100V
(tRD=7) @ 1,066 MHz (5:8) @ 2.100V
(tRD=8) @ 800 MHz (1:1) @ 2.100V
(tRD=6) @ 1,000 MHz (4:5) @ 2.100V
EVGA Geforce 8800GTS (G92) w/ 512 meg
Core=770 MHz Shader=1,923 MHz Memory=2,000 MHz

Note: the performance levels (tRD) are set automatically by the board, which wouldn't POST if I manually tweaked them. Even though they're different, I still feel the data are valid since this is the only way I can run them. In other words, if I'm going to run the higher dividers, it'll be as such or it won't POST!

Without further ado, here are the data, starting first with a 333 MHz FSB comparing the 1:1 vs. 5:8 divider (DDR2-667 vs. DDR2-1066):

Here are the averaged data visualized graphically:

Now on to the 400 MHz FSB comparing the 1:1 vs. 4:5 divider (DDR2-800 vs. DDR2-1000):

And graphically:

As you can see, there was nothing spectacular in either the real-world category or the 3D games category in comparison to the massive increase in memory bandwidth (shown on the graphs in red). In fact, I was surprised to see that there were really no gains by Doom 3 and minimal gains by Crysis. This is probably due to the fact that the video card shoulders the burden of these games, with Doom 3 being the lighter-weight of the two. As expected, the synthetic benchmarks did pick up on the larger bandwidth, but only in the case of the 400 MHz FSB did I see anything approaching the theoretical increase (14 % of 25 % vs. 15 % of 60 %). If you read my first memory bandwidth post, perhaps the same conclusions can be drawn from these new data. One thing I'll add is that this new MB doesn't require extra voltage like my older P5B-Deluxe did to run the higher dividers, so it's not producing that much more heat.

That said, I'm actually running the system with the 4:5 divider, since things seem to feel faster to me (windows opening, responsiveness, etc.), which are unfortunately all intangibles I can't measure.
|
OPCFW_CODE
|
with programs 3, 4, & 5 to create an overlay
demo. If you assemble progs 2, 3, 4 & 5 and then
LINK ROOT,OVLAY (OV1) (OV2)<RET>
The demo will create itself. It only outputs
messages and will not win any prizes, but it proves the point.
Program 6 is an overlay loader written in ‘C’. This
is the one I use for my GSX tester. It uses ‘C’s
pointers to functions that allow you to call things
like overlays at absolute addresses and pass
parameters to them if necessary. Don’t worry if it
all seems like gibberish as ‘C’ can sometimes
confuse, it performs in the same manner as the assembler version.
Using L80 to create overlays
There is to my knowledge no facility for creating
overlay systems from within L80. It can be done but
is somewhat complex and should be avoided. You
have to hack the code around a lot in memory and
I don’t like having to admit that I have done it this
way on occasion.
The basic principle used here is that you load the
root module into memory, and then you load an
overlay in at a predetermined address. All global
references will now have been resolved. You now
exit L80 having saved the program. Using Gemdebug,
ZSID or DDT you take the program file and
move the overlay code down to 100H and SAVE it
as a separate file. Messy but it works. May I suggest
at this juncture that while you are messing about
with the overlay, you create overlays that are
compatible with the LINK format (see previous section for details).
Using the sample assembler programs, let's create
the programs using L80.
- Assemble all programs to REL format.
- Enter the following:
L80 <RET> Run L80
ROOT,OVLAY<RET> Load root module
and overlay loader
At this point you will see where the top of the root
module is by looking at the Data readout from L80
(in this case it is 01BE). We have to decide where
the overlay base is to be. Let's put it at 200H and
keep things nice and simple. To do this enter:
/P:200<RET> Meaning load next bit at 200H.
OV1<RET> and load in the 1st overlay.
ROOT/N/E Save ROOT.COM and exit.
Now we have to create the overlay from the COM
file using a debugger. I have used Gemdebug to
create LINK lookalike overlay files. This is how it is done:
F100,1FF,0 Fill 256 byte header with 0's
Now move the overlay section down to 200H using
the M command. In this case we have been lucky as
the overlay was originally loaded into 200. If it had
started at 0760H for example we would have to use
something like M760,800,200, assuming that the overlay code finished at 800.
To maintain LINK compatibility we now have to
insert the information concerning the length of
code and the base address. To do this use the ‘S’
(set) command to insert the details. Our overlay is
12 bytes long and its base is at 200H. We set 101
and 102 to 12 and 00, then 107 & 108 to 00 and 02.
We now have a .OVL file in memory and we are
ready to save it, enter:
SAVE 2 OV1.OVL
and the process is complete.
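Putting the whole thing together, the debugger session for our 12-byte overlay based at 200H looks something like this (commands as described above; exact prompts vary between Gemdebug, ZSID and DDT):

```
F100,1FF,0          Fill the 256-byte header with 0's
S101                Set 101 = 12 and 102 = 00 (code length, low/high)
S107                Set 107 = 00 and 108 = 02 (base address 200H)
SAVE 2 OV1.OVL      Save two 256-byte pages: header plus overlay code
```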
Repeat this for OV2.
Now create the economy version of the COM file by saving just the root portion,
and it's all ready to go.
Not very pleasant is it? It involves a lot of effort and
no mean amount of Hex arithmetic to calculate the
sizes of the files for the move and save commands.
As all this is done for you by the LINK program, may
I suggest again that you go out and get it and avoid all this effort.
|
OPCFW_CODE
|
Parallel 5 V power supply on 3.3 V board to increase current capacity
I am trying to design a board in M.2 key B form factor to split the USB and PCI-E signals to serve two devices.
However, M.2 key B only provides 2.5 A current (5 power pins, and 0.5 A each), but the total power requirement for the devices is 4 A. The motherboard I am working on has a standby 5 V pin rated at 2 A. I wonder whether there is a way to satisfy the power requirement by sharing current between the two sources.
Some of my "ideas":
Motherboard 5 V->buck converter to 3.3 V->ideal diode->load
M.2 3.3 V->ideal diode->load
The main problem I see in this idea is the step down of 5 V may not really match the 3.3 V generated by the motherboard, and the current sharing does not happen until the load voltage drops below the minimum of the two. I am not sure whether the over-current protection could be triggered before current sharing even starts.
Active current sharing like UCC39002. May work, but seems quite complex and I am not familiar with the relevant field.
Would really appreciate evaluations on my "ideas" and/or alternative solutions to the problem.
Where is this standby pin located? How will you connect it to your M.2 module?
The front panel header of the motherboard provides 5V standby. Was considering simply connecting a jumper to the module.
Although the M.2 Key B has no input for external power other than 3.3V, if you are to connect somehow an external power to your board then you should consider 12V instead of 5V stby. Because:
Every motherboard has (or is supposed to have) 12V supply
12V coming to the motherboard generally has (or is supposed to have) higher output power
Conversion from 12V to 3.3V is easier compared to 5V to 3.3V conversion e.g. the stress will be low due to the lower duty cycle.
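To put a number on the duty-cycle point: for an ideal buck converter the nominal duty cycle is

$$ D = \frac{V_\mathrm{out}}{V_\mathrm{in}}, \qquad D_{12\,\mathrm{V} \to 3.3\,\mathrm{V}} = \frac{3.3}{12} \approx 0.28, \qquad D_{5\,\mathrm{V} \to 3.3\,\mathrm{V}} = \frac{3.3}{5} = 0.66, $$

so the 12 V conversion runs at a much lower duty cycle, leaving more margin and less stress on the switch.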
Active current sharing like UCC39002. May work, but seems quite complex and I am not familiar with the relevant field.
I'm familiar with it. I kindly recommend you stay away from the UCC39002 as it'll bring you headaches, hair loss and sleep disorder. I'm not dissing either the product or TI; it's just the complexity of the sub-system and the potential problems it causes, and it'll be overkill for your application.
That's a good idea. If I am stepping down from 12V and higher, can I get away with simply using the 3.3V as ENABLE signal for the regulator and supply the whole module with it to avoid voltage difference?
|
STACK_EXCHANGE
|
On the 4e forums the subject of random encounters came up. Most complaints really came down to encounter speed in 4e, or just hatred of random aspects in the game.
Random encounters can work great, but remember you are rolling for 'wandering monsters'. If used correctly this adds flavor and doesn't necessarily lead to combat.
In B/X D&D, movement as you might recall was a little different. There was the combat round, but also a unit of time called a 'turn': 10 minutes. Your encounter speed (say your PC has speed 6) is 1/3 of your 'exploration speed', or how far you can go in a dungeon in a turn (10 minutes). This assumes you are being cautious and 'mapping' where you are. So a 4e PC with speed 6 has an exploration speed of 3*6 = 18 squares.
Any encounter is assumed to take a turn even if it doesn't; the rest is made up by the short rest or looking for treasure or what have you. Any time you disable a trap, that takes a turn. Stop to work on a stuck door, that takes a turn, etc., and any time a PC moves 18 squares through a dungeon, that has taken a turn. In B/X, you had to rest once every 5 turns, or 10 minutes of every hour.
Then you say ok, every so many turns, say 3 or 4, I will roll for wandering monsters. Roll a d6, and a 1 (or a 2 if you want more to happen) means they encounter wandering monsters.
Now remember in old editions a monster was really just a blanket term for anything you came across; in 4e it is more accurate to say 'roll for random creatures'. You can have memorable, flavorful random 'encounters' that don't all have to be combat based. There is also some cool random flavor. Here is an example table.
Roll for wandering creatures once every 3 turns.
A roll of 1 on a 1d6 indicates an encounter. Then roll a d8. If you roll the same thing twice, reroll it.
1. 3-8 Eladrin explorers looking for a relic
2. 5-10 Gnomes carrying the carcass of a large creature
3. A sleeping (snoring) adventurer lying in the hallway
4. A large swarm of bats flies by; the walls are covered in guano
5. 2-12 giant rats feeding on dead bodies; they scatter when seen
6. A woman tied up in a corner with a gag over her mouth
7. 5-10 rot grub swarms gestating
8. 3-10 monks meditating in front of a strange image on a stone wall
If you really want to kick it old school and test your ability as a DM to improvise, you can roll the reactions of intelligent creatures randomly.
Roll a d6
1. Run from the PCs / act scared
5. Beg the PCs for help
6. Act friendly and ask to join PCs, then betray them first chance
This is literally just off the top of my head. Hopefully it gives you some ideas on how it could add a little unpredictable flavor to your game.
|
OPCFW_CODE
|
M: Snowden Using Lavabit for Email - tippytop
http://boingboing.net/2013/07/12/so-apparently-edward-snowden.html
R: Dystopian
I've had a 4 char account for years - never did it because they were
inherently secure though.
They're an American company with an American hosting provider. Only pro
accounts use the encrypted email feature set.
Here's Lavabit's whitepaper on their process - pretty standard setup:
[http://lavabit.com/secure.html](http://lavabit.com/secure.html)
R: mieses
Couldn't the NSA easily tap this server? The email is only encrypted at rest.
The datacenter is in Dallas, TX.
R: stugs
About time to update that VMWare install -
[http://status.lavabit.com/export/graphs/graph_401_4.png](http://status.lavabit.com/export/graphs/graph_401_4.png)
R: joejohnson
Does anyone know if you can use Lavabit with your own domain?
R: ra
you can.
You need to get a paid account then email them to ask them to set it up;
there's no web interface to add a domain as you would on eg fastmail.
R: mariuolo
Nice, this will ruin it for the rest of us :(
|
HACKER_NEWS
|
Sanjoy posted on Monday, October 10, 2005 - 4:31 pm
Dear professors ... this is my initial model (where each r's and b's are 5 point ordinal)
R by r1, r2, r3;
B by b1, b2, b3;
I had a hunch for the possibility of cross-loading, and also because of initial EFA results, I went for it completely ... like this way
R by r1, r2, r3, b1, b2, b3;
B by r1, r2, r3, b1, b2, b3;
However it can not run ... saying "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL PROBLEM INVOLVING PARAMETER 12 " ...
I mean our total freely estimable parameters are 10 lambdas, 2 factor variances and 1 factor covariance ... that makes 13 ... and from the sample correlation matrix (lower triangular elements), we have (6*7)/2=21 values ... I'm missing the non-identification point
Q1. I was wondering why?
thanks and regards
bmuthen posted on Monday, October 10, 2005 - 4:43 pm
Remember the basic rule for exploratory factor analysis - with 2 factors you need 4 restrictions on Lambda and Psi. You only have 2 restrictions - the two unit loadings. You can specify an "EFA within CFA model" which we teach how to do in the first day of our 5-day course in Alexandria; you can purchase the handout from this day using the Mplus web site.
Sanjoy posted on Wednesday, October 12, 2005 - 8:27 pm
Thank you Professor... I'm not very sure yet about the reason why you have said so. Yesterday I found Prof. Joreskog's article "Addendum, page 40-43", Advances in Factor Analysis and SEM, 1979; I suppose I can cite his proof as the reference to this identification issue ...
yes, as you have advised, I'm planning to buy that handout ... I have one suggestion in this regard ... why don't you sell it as a ".pdf" document instead of sending via UPS? I mean you can charge a bit more on the handling account; however, selling it as a ".pdf" document will be 1. less time consuming, and besides, 2. we can save the amount we pay to UPS
Dear professors: I'm working with categorical data. I'm interested in defining the thresholds and the intercepts too, but I have problems with the model. Could you tell me what is wrong? I have 10 categorical variables, and so I define 10 latent variables (one for each); I also define one new latent factor (f1).

VARIABLE:
NAMES ARE u1-u31;
USEVARIABLES ARE u2 u3 u4 u6 u9 u10 u12 u19 u28 u31;
CATEGORICAL ARE ALL;
ANALYSIS:
TYPE=MEANSTRUCTURE;
PARAMETERIZATION=THETA;
ESTIMATOR=ULS;
MODEL:
f2 by u2; f3 by u3; f4 by u4; f6 by u6; f9 by u9;
f10 by u10; f12 by u12; f19 by u19; f28 by u28; f31 by u31;
f2@1; f3@1; f4@1; f6@1; f9@1; f10@1; f12@1; f19@1; f28@1; f31@1;
f2-f31*.5
f1 by f2* f3 f4 f6 f9 f10@1 f12 f19 f28 f31;
f1@1;
[f2 f4-f31]
OUTPUT: TECH1; TECH2; modindices;
You can identify the thresholds and intercepts only with multiple group analysis or repeated measures data. So you can't do this for your example. If you had multiple groups or timepoints, then you would not be able to identify the variances in addition to the intercepts. So you would need to eliminate the statement f2-f31*.5.
|
OPCFW_CODE
|
What is the shebang/hashbang for?
Is there any other use for shebangs/hashbangs besides for making AJAX contents crawlable for Google? Or is that it?
possible duplicate of What's the hashbang (#!) in Facebook and new Twitter URLs for?
The hash when used in a URL has existed since long before Ajax was invented.
It was originally intended as a reference to a sub-section within a page. In this context, you would, for example, have a table of contents at the top of a page, each of which would be a hash link to a section of the same page. When you click on these links, the page scrolls down (or up) to the relevant marker.
When the browser receives a URL with a hash in it, only the part of the address before the hash is sent to the server as a page request. The hash part is kept by the browser to deal with itself and scroll the page to the relevant position.
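You can see this split with any URL parser; here's a quick Python illustration (hypothetical URL):

```python
from urllib.parse import urlparse

# Everything after '#' is the fragment: the browser keeps it to itself and
# never includes it in the request sent to the server.
parts = urlparse("https://example.com/article?page=2#section-3")
print(parts.path, parts.query)   # sent to the server: /article page=2
print(parts.fragment)            # kept by the browser: section-3
```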
This is what the hash syntax was originally intended for, so this is the direct answer to your question. But I'll carry on a bit and explain how we got from there to where we are now...
When Ajax was invented, people started wanting to find ways to have a single page on their site, but still have links that people could click on externally to get directly to the relevant content.
Developers quickly realised that the existing hash syntax could do this for them, because it is possible to read the URL's hash value from within javascript. All you have to do then is stop it from scrolling when it sees a hash (which is easy enough), and you've got a bit of the URL which is effectively ignored by the browser, but can be read and written to by javascript; perfect for use with Ajax. The fact that Google includes the hash part of a URL in its searches was just a lucky bonus to begin with, but has become quite important since the technique has become more widespread.
I note that people are calling this hash syntax a "shebang" or "hashbang", but technically that's incorrect; it's just a hash that is relevant -- the 'bang' part of the word "hashbang" refers to an exclamation mark ('bang' is a printing industry term for it). Some URLs may indeed add an exclamation mark after the hash, but only the hash is relevant to the browser; the string after it is entirely up to the site's authors; it may include an exclamation mark or not as they choose, but either way the browser won't do anything with it. Feel free to keep calling it a hashbang or shebang if you like, but understand that only the hash is of significance.
The actual term "shebang" or "hashbang" goes back a lot further, and does refer to a #! syntax, but not in the context of a URL.
The original meaning of this term was where these symbols were used at the beginning of a Unix script file, to tell the script processor what programming language the script is written in.
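For example, the first line of an executable Unix script names the interpreter the kernel should run it with (Python here, purely illustrative):

```python
#!/usr/bin/env python3
# The '#!' pair above is the shebang: when this file is marked executable
# and run directly, the kernel reads this first line and hands the file to
# /usr/bin/env python3 for interpretation. To Python itself it's a comment.
msg = "interpreted via the shebang line"
print(msg)
```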
So this is indeed an answer to your question, the way you've worded it, but is probably not what you meant, since it has nothing to do with URLs at all.
Wow, great explanation, seriously. Thanks!
|
STACK_EXCHANGE
|
[19:05] <cm-t> hi
[19:06] * cm-t using webchat, loged on ubuntu-tv, didnt install clien,
[19:59] <bobweaver> ping tgm4883
[19:59] <tgm4883> bobweaver, pong
[19:59] <bobweaver> tgm4883, can you join google hangout to help with mythtv stuff ?
[20:00] <bobweaver> send me your email and I will send invite
[20:00] <tgm4883> what do you need help with?
[20:00] <bobweaver> france stuff
[20:00] <tgm4883> france stuff?
[20:00] <bobweaver> on remote box for ubuntu-fr
[20:00] <tgm4883> give me a few minutes
[20:00] <bobweaver> Cool
[20:00] <tgm4883> you sent it right before, I just wasn't available
[20:01] <bobweaver> I will send another one
[20:03] <bobweaver> email has been sent take your time tgm4883
[20:04] <cm-t> #ubuntu-tv-fr if you need writting to us
[20:12] <tgm4883> bobweaver, I didn't seem to get it this time :/
[20:12] <bobweaver> ok will send again
[23:09] <bobweaverstv> ping mhall119
[23:10] <bobweaverstv> want to join hangout to tell these people more about the community side of things with Ubuntu TV and explain better then I could about how the community is involved and things like the facebook google+ and what not.
[23:11] <bobweaverstv> I will send invite
[23:14] <mhall119> bobweaverstv: now?
[23:15] <mhall119> bobweaverstv: sorry, I can't do it now, it's the middle of dinner time
[23:15] <bobweaver> yeah we are hanging ou now
[23:15] <bobweaver> cool just thought that I would ask
[23:15] <bobweaver> there will be more. It is just that they say that there is going to be alot people there and I do not want to give out bad info so I thought that you would be + person for job.
[23:16] <bobweaver> maybe in the next couple of days ?
[23:24] <mhall119> bobweaverstv: sure, if I know about it ahead of time I will try to join
[23:24] <mhall119> fwiw, you know as much about the TV as I do
[23:25] <olive> the party is 17-18 (november)
[23:48] <bobweaver> mhall119, we will be meeting tomorrow a couple of hours after noon are time
[23:48] <bobweaver> if you would like I think that it would be a good idea for you to get to know these fine people
|
UBUNTU_IRC
|
Hyper Casual Game: Part 1
As of the 15th of November, I’ve decided to work more on my Hyper Casual Game for the Unit. In this blog I’ll be going over what game I’m trying to make, and what I hope to achieve with it.
The idea for my Hyper Casual Game is to make a simple top-down game that requires the player to avoid rolling boulders that bounce off the walls (see Figure 1). This game features a very simple control scheme in which you just need to move the player.
Hyper Casual games are typically easy to play and feature minimalistic user interfaces. As most of these games are small, they can be downloaded quickly and played without instructions, and thus reach a wider audience of players.
“A hyper-casual game is a mobile video game which is easy-to-play, and usually free-to-play; they also feature very minimalistic user interfaces.”
Here is the progress of what I've made so far. There are only three objects that make up the game: the player, the boulder and the walls. The player simply moves about using WASD, and there is a spherical boulder that bounces off the walls and changes direction.
In the boulder are several overlap events that detect whether it has hit the player or the walls. If the overlap is with the player, it uses a pointer to set the player's collision to false and their death boolean to true.
If it touches a wall, it updates its direction variable to make it face a different direction. So far it only subtracts 111 degrees from its current angle, which is a placeholder; making the ball properly bounce off the wall will require some trigonometry.
What I could try, to get some more realistic bouncing, is to calculate the angle of reflection from the angle of incidence once the ball hits the wall (see Figure 4).
How this could be done in Unreal is that I could simply get the current angle of the direction the ball is travelling in and bounce it back by multiplying the relevant component with a negative value to get the angle of reflection.
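That reflection idea comes down to simple vector math: reflecting the velocity v about the wall's unit normal n gives v' = v - 2(v·n)n, which reduces to "negate one component" for axis-aligned walls. An illustrative Python sketch (in Unreal the same formula applies to the velocity FVector):

```python
# Reflect a 2-D velocity v about a wall's unit normal n: v' = v - 2(v.n)n.
def reflect(v, n):
    dot = v[0] * n[0] + v[1] * n[1]
    return (v[0] - 2.0 * dot * n[0], v[1] - 2.0 * dot * n[1])

# A ball moving down-right hits a floor whose normal points straight up:
bounced = reflect((3.0, -4.0), (0.0, 1.0))
print(bounced)   # (3.0, 4.0): the vertical component flips sign
```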
Hopefully I am able to improve upon this function when I next work on the project to get a more realistic feel for the boulder to bounce off of the walls.
So far I feel as if the project has gone well at least, as I’ve got the game functioning at a prototype level. The gameplay just needs more refining before I can move onto the details and possibly adding in more content.
En.wikipedia.org. 2021. Hyper-casual game – Wikipedia. [online] Available at: https://en.wikipedia.org/wiki/Hyper-casual_game#:~:text=A%20hyper%2Dcasual%20game%20is,feature%20very%20minimalistic%20user%20interfaces.&text=Usually%20featuring%20infinite%20looped%20mechanics,leading%20to%20their%20addictive%20nature. [Accessed 23 November 2021].
O’Reilly Online Learning. 2021. HTML5 Canvas, 2nd Edition. [online] Available at: https://www.oreilly.com/library/view/html5-canvas-2nd/9781449335847/ch05s02.html [Accessed 23 November 2021].
|
OPCFW_CODE
|
// Basic Operating System.
// Printing on the screen.

#include <screen.h>
#include <memory.h>
#include <stdint.h>

uint16_t *video_buffer;
int cursor_column;
int cursor_row;

// Initialize the screen.
void init_screen (void)
{
  video_buffer = (uint16_t *) 0xB8000;
  cursor_column = 0;
  cursor_row = 0;
}

// Clear the screen.
void clear_screen (void)
{
  memset ((uint8_t *) 0xB8000, 0, VBUFFER_SIZE);
  cursor_column = 0;
  cursor_row = 0;
}

// Scroll the screen up by one text row.
void scroll_screen (void)
{
  memcpy ((uint8_t *) 0xB8000, (uint8_t *) 0xB8000 + SCREEN_WIDTH * 2,
          VBUFFER_SIZE - SCREEN_WIDTH * 2);
  memset ((uint8_t *) 0xB8000 + VBUFFER_SIZE - SCREEN_WIDTH * 2, 0,
          SCREEN_WIDTH * 2);
  cursor_row = SCREEN_HEIGHT - 1;
  cursor_column = 0;
}

// Draw a character on the screen at the specified position.
void putch_at (char ch, int column, int row)
{
  int offset = (row * SCREEN_WIDTH) + column;
  // Cast through uint8_t so a negative char can't sign-extend into the
  // attribute byte; 0x07 is light grey on black.
  video_buffer[offset] = (uint8_t) ch | (0x07 << 8);
}

// Go to the next line, scrolling if we run off the bottom.
static inline void newline (void)
{
  ++cursor_row;
  cursor_column = 0;
  if (cursor_row >= SCREEN_HEIGHT)
    scroll_screen ();
}

// Implement backspace: step back, blank the cell, step back again.
static inline void backspace (void)
{
  if (cursor_column > 0)
    {
      --cursor_column;
      putch (' ');
      --cursor_column;
    }
}

// Draw a character at the cursor position, handling control characters.
void putch (char ch)
{
  if (ch == '\n')
    {
      newline ();
    }
  else if (ch == '\b')
    {
      backspace ();
    }
  else
    {
      putch_at (ch, cursor_column, cursor_row);
      ++cursor_column;
      if (cursor_column >= SCREEN_WIDTH)
        {
          ++cursor_row;
          cursor_column = 0;
          if (cursor_row >= SCREEN_HEIGHT)
            {
              scroll_screen ();
            }
        }
    }
}

// Draw a NUL-terminated string starting at the cursor position.
void puts (const char *s)
{
  int i;
  for (i = 0; s[i] != 0; ++i)
    {
      putch (s[i]);
    }
}
|
STACK_EDU
|
Last week Microsoft announced that they would be abandoning the ACE and dynamic entity (“property bag”) model for the SQL Server Data Services cloud data storage system. They would also switch from their REST data API (used in ADO.Net Data Services) to the old-school “Tabular Data Stream” wire protocol.
While Microsoft’s promise of more relational support was always a distinguishing feature of their cloud DB service, and while they tried to spin the news in that direction, it feels a lot more like when they abandoned WinFS and announced that, really, everything you could do with WinFS would work fine using NTFS and a whole heck of a lot of indexing. Maybe sorta true … but feels like a big step back.
Of course, big customers – large enterprises with SQL Server databases and lots of SQL code – would not want to see a change in their data layer and would prefer this move. But accommodating them is assuming that they are ready to become first-version customers of the data cloud at all. And I doubt this for two reasons.
First, any move to the cloud involves a trade-off of control which some companies are loath to make even if they are confident the system will work. Which is problematic because:
Second, anyone who has dealt with big databases knows that there is no magic. Despite the quest for automagic autoscaling self-tuning databases, no one, so far as I know, has made one that does all of this for really large enterprise applications. There are just too many application specific variables, not to mention poorly written app code that can cause trouble in proportion to the amount of resources you give it access to.
I do believe Microsoft has the engineering brainpower to tackle the problem, and is as likely as anyone to succeed. It's just that I haven't seen any evidence of a specific strategy or technology. Maybe if I were a bigger customer ... but seriously, if Redmond had this problem solved (and it's one of the biggest out there), they would either patent it or publish lots of white papers. Either way, it would be publicized and reviewed. A trade secret? Maybe, but which Fortune 500 CIO is going to jump on that bandwagon, and the cloud, and the outsourced data stuff, all at the same time?
To the extent that these large database apps could be made to behave without human intervention, there is likely to be a tradeoff in resources, and when you’re paying per GB or per compute-cycle, that equals a side order of more cost to go along with the entree of new greater risk.
The point is that the ACE/dynamic entity/REST model is well understood, performs, utilizes resources in a known manner. Not appropriate for every app. Not relational in the formal sense if at all. Not easy to migrate to. But it goes like the devil. So you’re getting something concrete in exchange for your risk and your dollars. Unlike a magical SQL Server instance in the sky.
Maybe there is magic in there, and I’ll be proven wrong. Or maybe 99% of the customers’ database needs are so small that it’s a non-issue, and Microsoft is really just competing with the thousands of hosting providers that will host actual individual SQL Server instances for you on a large server. But this change still seems to raise more questions than it answers.
|
OPCFW_CODE
|
We look into the various incentives, the latest online game offerings and coupons. Another difference is that if the player draws until he has five cards without busting, it's known as a Five-Card Trick. Generally, the objective of the game is to obtain cards that add up to as close to 21 as possible without going over.
But playing for free is a wonderful way to practice blackjack. There are of course also plenty of benefits to playing blackjack for real money. If you want to learn how to count cards to improve your chances of winning, check out our card-counting guide.
If you are new to blackjack, you can read our guide on how to play blackjack to learn the basics. Free blackjack games are great for practicing this, as in blackjack your decisions actually matter and influence your long-term performance. Also, players can choose from the many variants of this classic game to make it more fun and exciting. Spanish 21 is more or less the same as real-money classic blackjack. The only notable difference is that the 10s aren't part of the game. As a result, players often hit more with this variant to make up for the smaller hand totals.
Playing online blackjack for free can also help you build your strategy without risking any money. Once you're up to speed, you can play real-money blackjack at one of our top-rated online casinos. You are still playing against the dealer, and the ways of winning and losing are the same as in standard online blackjack games. Much like other free online casino games, online blackjack comes in multiple variations. While you won't be able to play live-dealer blackjack for free, you can find free versions of the most popular blackjack games online here.
Here are some of the most popular variations you can explore. While we outlined the rules of a standard blackjack game above, many casinos have introduced variations on the original game to provide players with more options. For example, the new Las Atlantis casino has a dozen different blackjack games available for players.
Finally, drop "double down" from your vocabulary and get used to saying "buy" if you want to double a bet and receive another card. The objective of both games is the same, but there are plenty of distinct differences in terms of hand values and betting. PayPal – deposits are made to the online betting account as soon as the player clicks the button to send money. Withdrawals usually take a few minutes or hours; otherwise, the money may arrive in the PayPal account within a day.
We provide you with comparisons, detailed reviews and everything related to brand-new US casinos. BlackjackSimulator.net does not intend for the information on this site to be used for illegal purposes. It is your responsibility to ensure that you are of legal age and that online gambling is legal in your country of residence. BlackjackSimulator.net is intended to provide bias-free information about the online gaming industry. The information on this website is intended for entertainment purposes only. This flash-free game can be played on any Mac or Windows computer as well as on your iPhone and Android devices.
|
OPCFW_CODE
|
[07:06] <ctan> hello there
[07:09] <Innatech> heya
[07:15] <ctan> know any mentors for google SoC which I can convince to accept my proposal :P ?
[07:15] <ctan> i have a great marketing pitch!
[07:20] <fabbione> ctan: repeatedly asking in different ubuntu channel will not help
[07:20] <fabbione> and you already got an answer in #ubuntu-devel
[07:20] <fabbione> you have to wait
[07:23] <ctan> i was asking a different question in #ubuntu-devel
[07:24] <ctan> but i get the picture :P
[07:24] <ctan> sorry to have bothered
[02:00] <[miles] > afternoon guys
[02:01] <[miles] > guys, am I going crazy, or is there no pam_ldap in LTS?
[02:02] <Nafallo> [miles] : universe
[02:02] <Nafallo> https://launchpad.net/ubuntu/+source/libpam-ldap/180-1ubuntu0.6.10
[02:03] <[miles] > h
[02:03] <[miles] > jeje
[02:03] <[miles] > ok cheers
[02:03] <[miles] > forgot to add the repo
[12:04] <Burgundavia> ajmitch: do we have a list of cool news thing for ubuntu server in feisty?
[12:04] <ajmitch> nope
[12:04] <ajmitch> I can't really think of much that's cool & new
[12:05] <ajmitch> maybe apache 2.2, the GFS & clustering stuff (some of which is new)
[12:05] <ajmitch> you looking for release note stuff?
[12:05] <Burgundavia> apache 2.2 is worth talking about
[12:05] <Burgundavia> yep
[12:05] <maswan> oprofile and systemtap!
[12:05] <ajmitch> maswan: aha, thanks :)
[12:05] <Burgundavia> systemtap?
|
UBUNTU_IRC
|
This section describes how the RTS interacts with the OS signal facilities. Throughout we use the term "signal" to refer to both POSIX-style signals and Windows ConsoleEvents.
Signal handling differs between the threaded version of the runtime and the non-threaded version (see Commentary/Rts/Config). Here we discuss only the threaded version, since we expect that to become the standard version in due course.
On Posix, the timer signal is implemented by calling timer_create() to generate regular SIGVTALRM signals (this was changed from SIGALRM in #850 (closed)).
On Windows, we spawn a new thread that repeatedly sleeps for the timer interval and then executes the timer interrupt handler.
The interrupt signal
The interrupt signal is SIGINT on POSIX systems or CTRL_C_EVENT/CTRL_BREAK_EVENT on Windows, and is normally sent to the process when the user hits Control-C. By default, interrupts are handled by the runtime. They can be caught and handled by Haskell code instead, using System.Posix.Signals on POSIX systems or GHC.ConsoleHandler on Windows systems. For example, GHCi hooks the interrupt signal so that it can abort the current interpreted computation and return to the prompt, rather than terminating the whole GHCi process.
When the interrupt signal is received, the default behaviour of the runtime is to attempt to shut down the Haskell program gracefully. It does this by calling interruptStgRts() in rts/Schedule.c (see Commentary/Rts/Scheduler). If a second interrupt signal is received, then we terminate the process immediately; this is just in case the normal shutdown procedure fails or hangs for some reason, so the user is always able to stop the process with two Control-C keystrokes.
A Haskell program can ask to install signal handlers, via the System.Posix.Signals API, or GHC.ConsoleHandler on Windows. When a signal arrives that has a Haskell handler, it is the job of the runtime to create a new Haskell thread to run the signal handler and place the new thread on the run queue of a suitable Capability.
When the runtime is idle, the OS threads will all be waiting inside yieldCapability(), waiting for some work to arrive. We want a signal to be able to create a new Haskell thread and wake up one of these OS threads to run it, but unfortunately the range of operations that can be performed inside a POSIX signal handler is extremely limited, and doesn't include any inter-thread synchronisation (because the signal handler might be running on the same stack as the OS thread it is communicating with).
The solution we use, on both Windows and POSIX systems, is to pass all signals that arrive to the IO Manager thread. On POSIX this works by sending the signal number down a pipe, on Windows it works by storing the signal number in a buffer and signaling the IO Manager's Event object to wake it up. The IO Manager thread then wakes up and creates a new thread for the signal handler, before going back to sleep again.
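The pipe mechanism described here is the classic "self-pipe trick": the signal handler does only the minimal async-signal-safe work of writing the signal number to a pipe, and a manager thread sleeping in select() wakes up to do the real handling. A minimal illustrative sketch in Python (POSIX only; not GHC's actual code):

```python
import os
import select
import signal

# Create the pipe the handler will write to; keep the write end non-blocking
# so a flood of signals can never block inside the handler.
r, w = os.pipe()
os.set_blocking(w, False)

def handler(signum, frame):
    # Only async-signal-safe work here: push the signal number into the pipe.
    os.write(w, bytes([signum]))

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)   # simulate a signal arriving

# The "IO manager" side: sleep in select() until a signal number shows up.
ready, _, _ = select.select([r], [], [], 1.0)
signum = os.read(r, 1)[0] if ready else None
print("manager woke up for signal", signum)
```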
RTS Alarm Signals and Foreign Libraries
When using foreign libraries through the Haskell FFI, it is important to ensure that the foreign code is capable of dealing with system call interrupts caused by the alarm signals GHC generates. For example, in this strace output a select call is interrupted, but the foreign C code interprets the interrupt as an application error and closes a critical file
|
OPCFW_CODE
|
[Solved] why lambda, and use cases
2/14/2020 8:27:05 AM · Kode Krasher
8 Answers
[Part 1 of 4] Code Crasher Excellent question and well done setting up the context for what you are trying to reconcile in your understanding. I also dig your approach to following up to share your discoveries. Nice job and thanks for taking the time to help others who hopefully stumble upon this thread. My response to your question is as follows: Ultimately, lambdas are an artifact of functional programming. However, due to limited support for functional programming in Python, the usage of lambdas are a bit awkwardly implemented into the language and consequently, can be less preferred to alternative approaches available in Python. Guido van Rossum provides a lot of his personal insight and context from his perspective regarding support in Python for functional programming in the link below: https://python-history.blogspot.com/2009/04/origins-of-pythons-functional-features.html?m=1 (continued...)
If you need a function only once, e.g. for filtering or mapping, it is a shortcut and keeps the flow. If you need to square a number often, giving a name to the function is the better way.
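For instance (an illustrative sketch):

```python
# One-off key functions are where lambdas shine:
words = ["banana", "Apple", "cherry"]
print(sorted(words, key=lambda w: w.lower()))   # case-insensitive sort

# If the same logic is reused, a named function reads better:
def caseless(w):
    return w.lower()

print(sorted(words, key=caseless))
```

Both print `['Apple', 'banana', 'cherry']`; the difference is only readability and reuse.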
[Part 2 of 4] After reviewing the links you posted, I wasn't satisfied with the content as a reference for learning about "the when and why" to use lambdas in Python. Some seemed to use examples that should be avoided. The Stackoverflow link does highlight a limitation of lambdas in Python, which, in my opinion, is due to some poor language design decisions. I pulled a set of links that I'd suggest on this topic which you may prefer as well. There may be some overlap in these articles. However, each one is worth reviewing as they collectively reinforce many strong talking points I agree with. The significance of some talking points may not register to less experienced programmers for a while. But there are quite a lot of great points made that many will connect with. https://stackabuse.com/functional-programming-in-python/ https://realpython.com/python-lambda/ https://treyhunner.com/2018/09/stop-writing-lambda-expressions/ (continued...)
[Part 4 of 4] Back to Python, if you would like to see some sample code using lambdas compared to for-loop and list compression alternatives, check out the code series I put together in the links below. Experimenting with For Loops vs Lambdas vs List Comprehensions in Python: #1: Using for-loop. - https://code.sololearn.com/cXLf85iK5T4U/#py #2: Using lambdas with calling print() within nested lambda. - https://code.sololearn.com/ca6pN64a79Tx/#py #3: Using lambdas and calling print() function once. - https://code.sololearn.com/cg5H878BmTo5/#py #4: Using List Comprehensions - https://code.sololearn.com/ckl4rbtLFm7h/#py
Code Crasher lambdas are trivial? 😂😂😂😂😂😂😂😂😂😂 maybe you get a better insight by learning filter/map functions. They are very related to lambdas and absolutely useful once you got familiar with them.
Not sure if I am anywhere closer to my answer, but googling made me start to see I can do some useful things in python now, that were only available to me with bash. So if nothing else, I broke thru being given projects to try, to actually thinking of ways to create projects for myself. A little confidence maybe? ----- urlList = ['http://yahoo.com', 'http://sololearn.com', 'http://bing.com', 'http://aol.com', 'http://google.com'] wwwList = [urls.replace('http://', 'https://www.') for urls in urlList] print(wwwList) fullList = list(map(lambda www: str.replace(www, 'http://', 'https://www.'), urlList)) print(fullList) ----- Having multiple ways of doing things never helps my comprehension, but I have ran across a few examples that got the gears turning. I think a parsing utility in python to fix JSON data is in my near future. ;) @Oma Falk -- Thank you for the reply... I am humbled you would spend the time to explain something that is probably extremely trivial for you. For the short while I have been on SL, I have seen your name all over the place, and while I don't understand most of your code, I have thoroughly enjoyed your code examples. Thank you for all your contributions! /me passes Oma Falk a cold one! Cheers!
So I guess we can apply the old Idiom... "the more I learn, the less I know." -Unknown OR if you prefer: "Wisest is he that knows he does not know" -Socrates I have been learning long enough to know this is the Truth! Anyway, here are some links from my journey down the rabbit hole... A good place to start was the official docs.python.org site... concise, and to the point: https://docs.python.org/3/tutorial/controlflow.html#lambda-expressions This is a site I go to a lot for syntax on multiple languages, and it has a web based shell to experiment in: https://www.w3schools.com/python/python_lambda.asp Another brief explanation with examples: https://book.pythontips.com/en/latest/lambdas.html This was a great read, and I think combined with the next link turned on a few light bulbs: https://www.makeuseof.com/tag/python-lambda-functions/ This is probably more advanced, but it is a great article with a lot of examples, and covers the most variety of use cases: https://thispointer.com/python-how-to-use-if-else-elif-in-lambda-functions/ This seems to be the most complete article I found... To be honest, I have not gotten thru it. It will be a weekend read for me, I think: https://www.afternerd.com/blog/python-lambdas/#what-is-python-lambda And lastly, this is just a thread with a "Gotcha" that someone needed help with, but I found it most interesting: https://stackoverflow.com/questions/452610/how-do-i-create-a-list-of-python-lambdas-in-a-list-comprehension-for-loop I have searched the Code Playground for 'lambda' and looked at some of the code, but it seems to be pasted from lessons on SoloLearn or rehashing other's code... If anyone wants to share links to their code in the Code Playground using lambdas, I personally would enjoy studying them. Please, Share a link in this thread. (I am sure other's will benefit as well.) Have I mastered this? No, but it has been enlightening! I hope others find these links useful. Cheers!
|
OPCFW_CODE
|
|A fast and compact algorithm for the normal quantile|
Message #1 Posted by Dieter on 21 Apr 2011, 4:58 p.m.
In earlier threads in this forum various improvements have been discussed concerning the most efficient way to evaluate the inverse normal distribution (quantile) on the 34s. Since this might be of some interest for other calculators and/or implementations I think it's okay to start a new topic here.
As usual, it's a tradeoff between memory usage, execution speed and numeric accuracy. The current implementation starts with two quite good initial guesses - one value slightly high, the other a bit low. Then the solver is called to determine the exact result.
We can do better. ;-)
I have tried the one or other approach, and finally it turned out that there is a simple method that converges very fast and might even use less memory than the current method. The very sophisticated 34s solver is not required here. There is a much simpler, yet effective way to determine the quantile with a different algorithm:
First we guess a good estimate for the quantile. Only one single value is required, for instance this one:
p > 0.2:  u = sqrt(2*pi) * (0.5 - p)
          x = u + u^3 / 6

p < 0.2:  u = -2 * ln p
          x = sqrt(-2 * ln(p * sqrt((u-1)*2*pi))) + 0.2/u
Yes, the second term is a bit more complex than before but it's worth it.
After this, a dedicated though simple solver is used. The preferred algorithm here is the Halley method, which uses both the first and the second derivative of a function. In our case, the first derivative simply is the PDF, and the second derivative is a simple function thereof. This leads to a very compact form of the Halley equation:
Assuming x > 0 and p < 0.5, first the well-known Newton-Raphson quotient f / f' is evaluated:
        cdf(x) - p
    t = ----------
          pdf(x)
The pdf already has been evaluated within the cdf routine, so its value is known and no additional calculation is required. Now the new and improved approximation is:
                     t
    x_new = x + ---------
                1 - t*x/2
Look, Pauli - no slow logs required here. ;-)
Okay, how does it perform? I tried this method in Excel/VBA with 15 significant digits. After just two (!) iterations the final result was achieved. The method seems to converge at least quadratically; theoretically it's even cubic. So a third iteration probably should return something like 30 valid digits.
Pauli, what do you think? This should be faster than before and maybe it even requires less memory since no equation for the solver has to be set up. And only three (hard-coded) iterations should return a result with far more valid digits than actually required.
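For anyone wanting to try this, here is an illustrative Python transcription of the scheme (assuming "cdf" means the upper-tail CDF, which makes the signs work out for x > 0, p < 0.5):

```python
from math import erfc, exp, log, pi, sqrt

def ucdf(x):
    # Upper-tail normal CDF Q(x) = P(X > x).
    return 0.5 * erfc(x / sqrt(2.0))

def pdf(x):
    return exp(-x * x / 2.0) / sqrt(2.0 * pi)

def quantile(p, iters=3):
    # Find x > 0 with Q(x) = p, for 0 < p < 0.5.
    # Initial guess as given in the post:
    if p > 0.2:
        u = sqrt(2.0 * pi) * (0.5 - p)
        x = u + u**3 / 6.0
    else:
        u = -2.0 * log(p)
        x = sqrt(-2.0 * log(p * sqrt((u - 1.0) * 2.0 * pi))) + 0.2 / u
    # Three hard-coded Halley steps, as suggested:
    for _ in range(iters):
        t = (ucdf(x) - p) / pdf(x)
        x = x + t / (1.0 - t * x / 2.0)
    return x

print(quantile(0.025))   # the familiar two-sided 5% point, about 1.96
```

In quick tests this converges to full double precision well within the three hard-coded iterations.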
(Edit: corrected error in first guess formula)
Edited: 22 Apr 2011, 6:06 a.m. after one or more responses were posted
|
OPCFW_CODE
|
Ways to give users some specific education about question quality and topicality
Important: This question is being asked here, and not on the Programmers Meta, because I believe that several SE sites suffer from this same, specific problem.
On Programmers, I see a pattern emerging.
New User sees the "Programmers" site title, and asks their incomplete, "fix my broken code" question.
Community members comment that their question is off-topic, and recommend Stack Overflow.
New User says "thanks" and leaves, leaving their undeleted question for the community to clean up.
What seems clear to me is that:
New User has never seen the "What kinds of questions can I ask about here" page, or if they have, they've clicked through it without reading it.
New User doesn't know how to move their question to the right place properly (i.e. post on new site, delete from old site).
Community cleanup of such questions is onerous. The moderators on Programmers, following the principle that the community should moderate itself for the most part, are not proactive about removing such questions unless they're especially egregious.
Migration is even more onerous.
OK, so we have the Tour page. Here's what it says about topicality, about halfway down the page:
Note to those who are confused about Programmers' site scope: it's all right there, in black and white. However, I do notice a problem. It's circled in red.
That's our fault, the fault of the Programmers community. What we really meant to say was "Don't ask your code troubleshooting questions here; those belong on Stack Overflow." Coding tools deserves its own bullet. We should fix that.
However, what are the chances that the user is actually seeing this, and evaluating whether or not they should ask their question based on this?
So here's my question, in two acts:
ACT I: Is there a way to highlight the pain points of a particular site to new users specifically, so that we can be very clear that we don't want those questions that are clearly and unambiguously off-topic on our site, before they ask their question?
ACT II: Failing that, is there a way that we can fast-track the removal of such questions so that they no longer pollute our front page?
Why, look at this over here http://meta.stackoverflow.com/questions/319980/explaining-stack-overflow-experimenting-with-about-pages
Reading it I'm a bit dizzy. But it seems to suggest encompassing some changes which may be relevant to above.
One of the most frustrating things for me, working my way up to a couple thousand rep on Chem.SE, was a lack of laid-out information about some of the inner workings of the SE site model. I'd go to do something, and then either it wouldn't work, or I'd be told by a mod or by the system that I was doing it wrong. (These things might be "further in to the SE experience" than what you're referring to, though.) The various things are second nature now, though, so I dunno how well I could recall them. Will try...
@Won't: tl;dr: Right now all new users at Programmers really need to know is "Don't ask your code troubleshooting questions here," and "This is how to use the Delete link to remove your broken code question." There's also a bunch of recent history at Programmers Meta, including a failed "three votes to close" experiment (I personally don't think it failed, but).
There is a page they have to tick to agree to before they can ask. I have never read that page - the fact that it's a full A4 page of text doesn't make me want to read it. That needs changing. Maybe with memes - http://imgur.com/mj03Ubd
related: Let's help askers who are trying to circumvent question block at Stack Overflow (because per stats, about 10% of all (all) questions at Programmers are asked by these folks)
...I would also consider tagging this with [meta-tag:se-quality-project], because the flood of debugging questions at Programmers seems to correlate with rolling out features of this project at Stack Overflow. I wouldn't be surprised if other software-related sites are impacted as well
-1, that circle was not freehand :)
somewhat related discussion at Code Review meta: What would Clippy say? "We have a question closure rate of ~30%. That's a significant burden on moderators and users who help triage the questions..."
see also: Improved Help Center - site-specific pages and site-specific edits to all pages
I think one way to accomplish this could be to cement the how to ask section directly above the interface for asking a question when the user has 1 reputation, or less than 50 reputation, or some metric along those lines.
For example, on programmers that could look like this:
An interesting way to test the success of this approach would be the same way being proposed in Explaining Stack Overflow: Experimenting with About Pages, which, as far as I can tell, is the standard way things are tested at Stack Overflow (the company, i.e. the whole exchange).
A split test. I do not have the data for closure rates on Programmers, but on Stack Overflow the rates are much higher for lower-reputation users. Assuming the same holds on Programmers, a split test could compare the closure rates of a population seeing this topicality section against a population not seeing it.
I like this. Short, sweet and impossible to ignore.
You would want to ignore the association bonus for whatever metric is chosen
@JoshCaswell - Yeah that is a good point.
I like this idea, but it would probably work better if the number of dotpoints is reduced. If I was a new user who couldn't be bothered to read the existing popups, I would most likely skip this entire section. It should be short, sweet, and easy to read so that it is not ignored.
This is way too much text to read and it even triggers the TOS part of my brain. Nobody is going to read that.
@isanae: Only the latter half of that text (the questions we don't answer on Programmers) would be essential, and of that, I could live with just "We don't answer code writing or code troubleshooting questions here; those belong on Stack Overflow."
@RobertHarvey I'm not sure I understand what you're trying to accomplish. We all know that users don't read warnings and that putting larger or longer warnings only makes it worse. You really think a user would say "aw shucks, I wanted to post my homework question here, I guess I'll go somewhere else"?
@isanae: That's not what you said. You said the text was "too much to read." I offered a reasonable alternative. If your position is "nobody will read anything, no matter what the size," that's different.
This is definitely a good idea. Of course some won't read it, but more will than if it were somewhere else, because a lot of users won't bother to go through other links before clicking the "Ask Question" button; they're here for that. Are they wrong? Probably, but they'll still do it, so there is no point discussing that. The only way to get around this is to make them read the points that most newbies don't know, right where they try to post.
On Stack Overflow, all new users see this page before they can ask their first question, and must tick the box and press "continue" to proceed. I assume this step was added to prevent exactly these sort of problems.
I'm not sure if this is unique to Stack Overflow, or if this currently also happens on other sites? The problem with this page is that it only links to the on-topic information, but doesn't actually present it. You need to click at least two links: /help/on-topic and /help/dont-ask. There are six "more information" links, plus other links, for a total of 11.
Most people aren't going to click all that.
So:
Present this page, or a variant thereof, before a user asks the first question if this doesn't already happen.
Make sure it contains all the essential information in the body itself, as concisely as possible; don't make it a "link-only answer"!
Only Stack Overflow has this click-through, and the Server Fault folks pushed the dev team to set one up for them. Stack Exchange purposely designs smaller sites so that askers aren't bothered with instructions: "...the idea is that, since they get less traffic than Stack Overflow, there's not as much of a disincentive to prevent people from posting, since the community can help users fix problems with their posts, or close, flag, and delete"
|
STACK_EXCHANGE
|
DNS options for my local network
I have two domains, services.xxxxxxx.com and www.xxxxxxxx.com, hosted on two different machines on my local network. Rather than using port forwarding, I would rather have all traffic for each domain forwarded by the router to the right machine. Would I need to set up a DNS server to accomplish this, or are there other alternatives?
Do you only have one public IP address? What router do you have? Most likely, you need what is called a reverse proxy. "Reverse proxies can be used whenever multiple web servers must be accessible via a single public IP address. The web servers listen on ... different machines and different local IP addresses altogether. The reverse proxy analyzes each incoming call and delivers it to the right server within the local area network."
Yeah, only one IP address. Is this something that is done at the router level, or would I have to set up a machine for this?
Some routers can do it. The simplest solution is probably to pick one of the two machines (the more powerful, busier, or more reliable one), port forward to it, and then configure that machine to act as a reverse proxy for misdirected requests. (The reason to pick the busier machine is so that fewer connections will need to be proxied.)
There are two approaches to this, depending on what your ISP and router can support. However, you should note from the outset that your router cannot tell the difference between different sub-domains; it only routes on IP address and port, not by name.
The least likely approach is that your ISP can provide more than one IP address and that your Router can also handle this (my old Draytek router could, my new Billion router cannot). In that case, you would set up a public DNS (not local) that pointed the two sub-domains at the different addresses and, in the router, made sure that you also routed those addresses accordingly. The router would then need to forward port 80 on each IP address to a single IP address (the hosting PC, I assume they are both on a single box?) but two different ports. This is by far the easiest and most robust approach. If you indicate that this is the desired approach, I can update the answer with more details on how to set up Apache and the PC correctly.
More likely is that you will only have a single external IP address available to you. Now things are a little more difficult since both sub-domains must be directed at a single external IP address and will be NAT'ed to a single internal address/port since the router cannot differentiate the traffic. In this instance, you will need to (as David has indicated) set up a reverse proxy. This can be done using either a dedicated proxy tool or via mod_proxy in Apache (other web servers such as NGINX can also do it). You also need to make sure that your two web apps actually work behind a reverse proxy, some do not.
A reverse proxy will take traffic for each sub-domain and transparently forward it to an internal address/port combination, so the two apps can run on separate ports on the host PC. For simple web apps the configuration is straightforward, but for complex apps it can be tricky to get right.
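As a hedged sketch of the reverse-proxy approach (the answer mentions Apache mod_proxy and NGINX; NGINX is shown here, and the sub-domain names and internal addresses are placeholders, not the asker's actual ones):

```nginx
# Hypothetical NGINX reverse proxy on the single port-forwarded host.
# Replace example.com and the 192.168.1.x addresses with your own.
server {
    listen 80;
    server_name services.example.com;
    location / {
        proxy_pass http://192.168.1.10:8080;   # internal machine #1
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
server {
    listen 80;
    server_name www.example.com;
    location / {
        proxy_pass http://192.168.1.11:8080;   # internal machine #2
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this on the machine the router forwards port 80 to, the proxy relays each sub-domain's traffic to the right internal host based on the Host header, which the router itself cannot see.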
In both cases, you need the help of an external DNS not an internal one.
|
STACK_EXCHANGE
|
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using NUnit.Framework;
using Wikitools.Lib.Primitives;
using Wikitools.Lib.Tests.Json;
using static Wikitools.Lib.Primitives.SimulatedTimeline;
namespace Wikitools.AzureDevOps.Tests
{
[TestFixture]
public class AdoWikiWithStorageTests
{
[Test]
public async Task NoData()
{
var adoWikiWithStorage = await AdoWikiWithStorage(UtcNowDay);
// Act
var actualStats = await adoWikiWithStorage.PagesStats(AdoWiki.PageViewsForDaysMax);
new JsonDiffAssertion(new string[0], actualStats).Assert();
}
[Test]
public async Task DataInWiki()
{
var wikiStats = new ValidWikiPagesStatsFixture().PagesStats(UtcNowDay);
var adoWikiWithStorage = await AdoWikiWithStorage(UtcNowDay, wikiStats: wikiStats);
// Act
var actualStats = await adoWikiWithStorage.PagesStats(AdoWiki.PageViewsForDaysMax);
new JsonDiffAssertion(wikiStats, actualStats).Assert();
}
[Test]
public async Task DataInStorage()
{
var storedStats = new ValidWikiPagesStatsFixture().PagesStatsForMonth(new DateDay(UtcNowDay));
var adoWikiWithStorage = await AdoWikiWithStorage(UtcNowDay, storedStats);
// Act
var actualStats = await adoWikiWithStorage.PagesStats(AdoWiki.PageViewsForDaysMax);
new JsonDiffAssertion(storedStats, actualStats).Assert();
}
/// <summary>
/// Given
/// - wiki page stats for current month coming from wiki via API
/// - and wiki page stats for previous month coming from storage,
///   starting from the earliest available day in the AdoWiki.PageViewsForDaysMax window
/// When
/// - querying AdoWikiWithStorage for page stats for the entire day span of AdoWiki.PageViewsForDaysMax
/// Then
/// - the merged stats of both previous stats (coming from storage) and current stats (coming from wiki)
/// are returned.
/// </summary>
[Test]
public async Task DataInWikiAndStorageWithinWikiPageViewsForDaysMax()
{
var pageViewsForDays = AdoWiki.PageViewsForDaysMax;
var fix = new ValidWikiPagesStatsFixture();
var currStats = fix.PagesStatsForMonth(UtcNowDay);
var currStatsDaySpan = currStats.VisitedDaysSpan;
var prevStats = fix.PagesStatsForMonth(
UtcNowDay.AddDays(-pageViewsForDays + currStatsDaySpan));
var adoWikiWithStorage = await AdoWikiWithStorage(UtcNowDay, storedStats: prevStats, wikiStats: currStats);
Assert.That(
currStatsDaySpan,
Is.GreaterThanOrEqualTo(2),
"Precondition violation: the arranged data has to have at least two days span " +
"between first and last days with any visits to provide meaningful test data");
Assert.That(
prevStats.FirstDayWithAnyVisit,
Is.GreaterThanOrEqualTo(UtcNowDay.AddDays(-pageViewsForDays + 1)),
"Precondition violation: the first day of arranged stats is so much in the past that " +
"a call to PageStats won't return it.");
Assert.That(
prevStats.Month,
Is.Not.EqualTo(currStats.Month),
"Precondition violation: previous month (stored) is different from current month (from wiki)");
// Act
var actualStats = await adoWikiWithStorage.PagesStats(pageViewsForDays);
new JsonDiffAssertion(prevStats.Merge(currStats, allowGaps: true), actualStats).Assert();
}
/// <summary>
/// Given
/// - wiki page stats that were stored earlier than AdoWiki.PageViewsForDaysMax days ago,
///   meaning they cannot be updated from the wiki
/// - and assuming the stats have the following characteristics:
/// - first stored month has no page visits at all
/// - last (current) stored month has no page visits at all
/// - there are months with stored visits
/// - and there is a "gap" month, i.e. a month chronologically in the middle of the stored
/// months that has no visits, but months before and after have visits.
/// When
/// - querying AdoWikiWithStorage for page stats for the entire day span of all the stored stats.
/// Then
/// - all stored stats are returned, merged.
/// - This means stats from beyond AdoWiki.PageViewsForDaysMax were included in the merged stats.
/// - This means the first and last months without any visits were not stripped, i.e.
/// their day span was included.
/// </summary>
[Test]
public async Task DataFromStorageFromManyMonths()
{
var statsInMonthPresence = new[] { false, false, true, false, true, true, false, false };
var storedStats = ArrangeStatsFromMonths(statsInMonthPresence);
Assert.That(storedStats.VisitedDaysSpan > AdoWiki.PageViewsForDaysMax);
Assert.That(storedStats.DaysSpan > 6*31, "Should be more than 6 months");
Assert.That(storedStats.MonthsSpan == statsInMonthPresence.Length);
var adoWikiWithStorage = await AdoWikiWithStorage(UtcNowDay, storedStats);
// Act
var actualStats = await adoWikiWithStorage.PagesStats(storedStats.DaysSpan);
new JsonDiffAssertion(storedStats, actualStats).Assert();
ValidWikiPagesStats ArrangeStatsFromMonths(bool[] pageStatsInMonthPresence)
{
var fix = new ValidWikiPagesStatsFixture();
int monthsCount = pageStatsInMonthPresence.Length;
var months = pageStatsInMonthPresence.Select(
(statsPresent, i) =>
{
DateDay currDay = UtcNowDay.AddMonths(-monthsCount + 1 + i);
return statsPresent
? fix.PagesStatsForMonth(currDay)
: new ValidWikiPagesStatsForMonth(
WikiPageStats.EmptyArray,
startDay: currDay,
endDay: currDay);
});
return ValidWikiPagesStats.Merge(months, allowGaps: true);
}
}
private static Task<AdoWikiWithStorage> AdoWikiWithStorage(
DateDay utcNow,
ValidWikiPagesStatsForMonth storedStats,
ValidWikiPagesStats? wikiStats = null)
=> AdoWikiWithStorage(utcNow, (ValidWikiPagesStats) storedStats, wikiStats);
private static async Task<AdoWikiWithStorage> AdoWikiWithStorage(
DateDay utcNow,
ValidWikiPagesStats? storedStats = null,
ValidWikiPagesStats? wikiStats = null)
{
var decl = new AzureDevOpsDeclare();
var testsDecl = new AzureDevOpsTestsDeclare(decl);
var storage = await testsDecl.AdoWikiPagesStatsStorage(utcNow, storedStats);
var adoWiki = new SimulatedAdoWiki(
wikiStats ?? new ValidWikiPagesStats(
WikiPageStats.EmptyArray,
startDay: utcNow,
endDay: utcNow));
var wiki = decl.AdoWikiWithStorage(adoWiki, storage);
return wiki;
}
}
}
|
STACK_EDU
|
The .NET thread pool is a really amazing piece of technology, suitable for a wide range of usages. RavenDB has been making use of it for almost all concurrent work since the very beginning.
In RavenDB 3.5, we have decided to change that. RavenDB has a lot of parallel execution requirements, but most of them have unique characteristics that we can express better with our own thread pool.
To start with, unlike the normal thread pool, we aren't registering just a delegate and some state for it to execute; we always register a list of items to process, and a delegate that takes either a single item from that list or a section of it. This lets us do a much better job at work stealing, because we have a lot more context about the actual operation. We know that when we are done executing a particular delegate, we get to run the same delegate on the next available item in the list it was passed. That gives us higher locality of code, because we are always executing the same task, as long as there are tasks for it in the pool.
We often have nested operations: a parallel task (execute indexing work) that spawns additional parallel work (index the following documents). By basing this all on our custom thread pool, we can perform those operations in a way that doesn't involve waiting for that work to be done. Instead, the thread pool thread we run on is able to "wait" by executing the work that we are waiting for. We have no blocked threads, and in many cases we can avoid any context switches.
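A minimal Python sketch of the two ideas described above; this is purely illustrative and not RavenDB's actual implementation (which is C#). Work is registered as a delegate plus a list of items, so a worker keeps running the same delegate over consecutive items, and a thread "waits" for a batch by helping to execute queued work instead of blocking:

```python
# Illustrative sketch only, not RavenDB's code: (delegate, item list) batches
# plus "wait by helping" instead of blocking.
import threading
from collections import deque

class Batch:
    def __init__(self, func, items):
        self.func = func
        self.items = deque(items)
        self.pending = len(self.items)
        self.lock = threading.Lock()
        self.done = threading.Event()

    def take(self):
        with self.lock:
            return self.items.popleft() if self.items else None

    def item_finished(self):
        with self.lock:
            self.pending -= 1
            if self.pending == 0:
                self.done.set()

class BatchPool:
    def __init__(self, workers=2):
        self._batches = deque()  # drained batches are left in place, for brevity
        self._cv = threading.Condition()
        for _ in range(workers):
            threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, func, items):
        batch = Batch(func, items)
        with self._cv:
            self._batches.append(batch)
            self._cv.notify_all()
        return batch

    def _try_run_one(self, block):
        with self._cv:
            while True:
                batch = next((b for b in self._batches if b.items), None)
                if batch is not None:
                    break
                if not block:
                    return False
                self._cv.wait()
        # Drain consecutive items with the same delegate: high code locality.
        while (item := batch.take()) is not None:
            batch.func(item)
            batch.item_finished()
        return True

    def _worker(self):
        while True:
            self._try_run_one(block=True)

    def wait(self, batch):
        # "Wait" by helping: run queued work on this thread instead of blocking.
        while not batch.done.is_set():
            if not self._try_run_one(block=False):
                batch.done.wait(0.01)

pool = BatchPool(workers=2)
results, res_lock = [], threading.Lock()

def index_doc(doc):
    with res_lock:
        results.append(doc.upper())

job = pool.submit(index_doc, ["a", "b", "c", "d"])
pool.wait(job)  # the calling thread helps drain the batch
```

The drain loop in `_try_run_one` is what gives the locality the post describes, and `wait()` is the "execute the work you are waiting for" behavior.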
Under load, that means threads won't put a lot of work on the thread pool and then have to fight with each other over who will finish first; we get to run our own tasks, and only when there are enough threads available for other work will we spread to additional threads.
Speaking of load, the new thread pool also has a dynamic load balancing feature. Because we know that RavenDB will use this pool for background work only, we can prioritize things accordingly. RavenDB tries to keep CPU usage in the 60%-80% range by default. If we detect higher CPU usage, we'll start decreasing the background work we are doing, to make sure we aren't impacting front-row work (like serving requests). We do that first by lowering the priority of the background threads, and eventually by stopping work in most of the background threads (a minimum number of threads always remains working, of course).
Another fun thing the thread pool can do is detect and handle slowpokes. A common example is an index that takes significantly longer to run than all the other indexes. The thread pool can release all the other indexes and let the calling code know that this particular task has been left to run on its own. RavenDB will then split the indexing work so the slow index does not slow down the rest of the indexing.
And having split the work between the front-row pool (the standard .NET thread pool) doing request processing and the background pool (our own custom implementation), we get a lot more predictability in the environment. We don't have to worry about indexing jobs taking over the threads required to serve requests, or about requests on the server impacting the loading of a new database, etc.
And finally, like every other feature in RavenDB nowadays, we have a rich set of debug endpoints that can tell us in details exactly what is going on. That is crucial when we are talking about systems that run for months and years or when we are trying to troubleshoot a problematic server.
Reference: What is new in RavenDB 3.5: My thread pool is smarter, from our NCG partner Oren Eini at the Ayende @ Rahien blog.
|
OPCFW_CODE
|
1. What is TensorFlow?
TensorFlow is an open-source machine learning framework developed by Google. It allows developers to build and train machine learning models using data flow graphs, which represent computations as directed graphs.
2. What is a data flow graph in TensorFlow?
A data flow graph in TensorFlow is a graphical representation of a machine learning model. It consists of a set of nodes, which represent mathematical operations, and edges, which represent the data that flows between the nodes.
3. What is a tensor in TensorFlow?
A tensor in TensorFlow is a multi-dimensional array that represents a mathematical object. It can be a scalar, vector, matrix, or higher-dimensional array. Tensors are the basic building blocks of machine learning models in TensorFlow.
4. What is a session in TensorFlow?
A session in TensorFlow is an environment for executing computational graphs. It allows developers to run a graph, evaluate nodes, and update variables in the graph.
5. What is a variable in TensorFlow?
A variable in TensorFlow is a tensor that holds a value that can be updated during training. It is typically used to store the weights and biases of a machine learning model.
6. What is a placeholder in TensorFlow?
A placeholder in TensorFlow is a way to pass data to a computational graph. It allows developers to define the input and output shapes of a graph without specifying the actual data.
7. What is a feed_dict in TensorFlow?
A feed_dict in TensorFlow is a dictionary that maps placeholders to actual data. It is used to feed data into a computational graph during runtime.
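Questions 2 through 7 can be tied together with a toy data-flow graph in plain Python. This is an illustrative sketch of the concepts (nodes as operations, edges carrying data between them, placeholders fed via a feed_dict at run time), not TensorFlow's actual API:

```python
# Toy data-flow graph: nodes = operations, edges = values flowing between
# them, placeholders fed through a feed_dict. Concept sketch, not TensorFlow.
class Node:
    def __init__(self, op=None, inputs=()):
        self.op = op          # None marks a placeholder
        self.inputs = inputs  # edges: the nodes whose outputs feed this one

def placeholder():
    return Node()  # value is supplied later through the feed_dict

def add(a, b):
    return Node(op=lambda x, y: x + y, inputs=(a, b))

def mul(a, b):
    return Node(op=lambda x, y: x * y, inputs=(a, b))

def run(node, feed_dict):
    """A minimal 'session': evaluate a node against fed placeholder values."""
    if node.op is None:
        return feed_dict[node]
    args = [run(n, feed_dict) for n in node.inputs]
    return node.op(*args)

x, y = placeholder(), placeholder()
graph = add(mul(x, y), y)        # the graph computes x*y + y
print(run(graph, {x: 3, y: 4}))  # feeding x=3, y=4 yields 16
```
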
8. What is a loss function in TensorFlow?
A loss function in TensorFlow is a mathematical function that measures how well a machine learning model is performing. It is typically used to optimize the weights and biases of the model during training.
9. What is backpropagation in TensorFlow?
Backpropagation is a method for calculating the gradients of a loss function with respect to the weights and biases of a machine learning model. It is used to update the weights and biases during training.
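A toy example of what the loss/backpropagation loop accomplishes, reduced to a single weight with a hand-written derivative (plain Python, not TensorFlow; in a real network, backpropagation computes this gradient automatically):

```python
# One-weight toy of training with a loss function and gradient descent.
# Illustrative sketch, not TensorFlow code; the gradient is written by hand
# where backpropagation would normally supply it.
def loss(w):
    return (w - 3.0) ** 2       # squared error against a target of 3.0

def grad(w):
    return 2.0 * (w - 3.0)      # analytic derivative of the loss

w = 0.0
for _ in range(200):
    w -= 0.1 * grad(w)          # gradient-descent update, learning rate 0.1
print(round(w, 6))              # w approaches the target 3.0
```
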
10. What is a checkpoint in TensorFlow?
A checkpoint in TensorFlow is a saved version of a machine learning model. It allows developers to resume training from a specific point, or to use the model for inference without retraining it. Checkpoints are typically saved to disk during training.
11. What are placeholder tensors?
Placeholder tensors are tensors to which data is assigned at a later point in time, at graph execution; this gives them an advantage over regular variables.
Placeholders can be used to build graphs without any prior data being present. This means that they do not require any sort of initialization for usage.
12. What are managers in TensorFlow?
TensorFlow Serving managers are entities responsible for handling the full lifetime of servable objects, including:
- Loading servables
- Serving servables
- Unloading servables
13. Where is TensorFlow mostly used?
TensorFlow is used in all of the domains that cover Machine Learning and Deep Learning. Being the most essential tool, the following are some of the main use cases of TensorFlow:
- Time series analysis
- Image recognition
- Voice recognition
- Video upscaling
- Text-based applications
14. What are TensorFlow Servables?
Servables in TensorFlow are simply the objects that client machines use to perform computations. The size of these objects is flexible. Servables can include a variety of information like any entity from a lookup table to a tuple needed for inference models.
15. How does the Python API work with TensorFlow?
Python is the primary language when it comes to working with TensorFlow. TensorFlow provides an ample number of functionalities when used with the API, such as:
- Automatic checkpoints
- Automatic logging
- Simple training distribution
- Queue-runner design methods
16. What are some of the APIs outside of the TensorFlow project?
Following are some of the APIs developed by Machine Learning enthusiasts across the globe:
- TFLearn: A popular Python package
- TensorLayer: For layering architecture support
- Pretty Tensor: Google’s project providing a chaining interface
- Sonnet: Provides a modular approach to programming
17. What are TensorFlow loaders?
Loaders are used in TensorFlow to load, unload, and work with servable objects. The loaders are primarily used to add algorithms and data into TensorFlow for working.
The load() function is used to pre-load a model from a saved entity easily.
18. What makes TensorFlow advantageous over other libraries?
Following are some of the benefits of TensorFlow over other libraries:
- Pipelines: the tf.data API is used to build efficient input pipelines for text and image processing.
- Debugging: tfdbg is used to track the state and structure of objects for easy debugging.
- Visualization: TensorBoard provides an elegant user interface for users to visualize graphs.
- Scalability: It can scale Deep Learning applications and their associated infrastructure easily.
19. What are TensorFlow abstractions?
TensorFlow contains certain libraries used for abstraction such as Keras and TF-Slim. They are used to provide high-level access to data and model life cycles for programmers using TensorFlow. This can help them easily maintain clean code and also reduce the length of the code exponentially.
20. What is a graph explorer in TensorFlow?
A graph explorer is used to visualize a graph on TensorBoard. It is also used for the inspection operations of a model in TensorFlow. To easily understand the flow of a graph, it is recommended to use a graph visualizer in TensorBoard.
21. What exactly do you know about Recall and Precision?
Recall, also known as the true positive rate, is the fraction of actual positives that the model correctly identifies. Precision, the positive predictive value, is the fraction of the model's positive predictions that are actually correct.
Together, the two describe the trade-off between finding every positive and making only correct positive claims.
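The definitions can be made concrete with a few lines of plain Python (the label vectors are made-up illustrative data, not TensorFlow output):

```python
# Plain-Python illustration of precision and recall on made-up labels.
def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # correct positive claims
    recall = tp / (tp + fn) if tp + fn else 0.0     # positives actually found
    return precision, recall

# 4 actual positives; the model claims 3 positives, 2 of them correct.
p, r = precision_recall([1, 1, 1, 1, 0, 0], [1, 1, 0, 0, 1, 0])
print(p, r)  # precision = 2/3, recall = 2/4 = 0.5
```
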
22. Name some products built using TensorFlow?
TensorFlow built the following products:
- Teachable Machine
- Giorgio Cam
23. What are some advantages of TensorFlow over other libraries?
Debugging facility, scalability, visualization of data, pipelining, and many more.
24. How can you make sure that an overfitting situation is not arriving with a model you are using?
Keep the model simple and avoid unnecessarily complex statements. Take variance into account and eliminate noise from the training data. Techniques like k-fold cross-validation and LASSO regularization can also help.
25. What exactly do you know about a ROC curve and its working?
A ROC (receiver operating characteristic) curve plots the true positive rate against the false positive rate as the classification threshold is varied. Represented as a graph, it can be used to compare the trade-offs of different algorithms.
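A sketch of how points on a ROC curve are produced: sweep a decision threshold over the classifier's scores and record (false positive rate, true positive rate) at each threshold. The scores and labels are made-up illustrative data:

```python
# Build ROC points by sweeping a threshold over hypothetical scores.
def roc_point(y_true, scores, threshold):
    y_pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn), tp / (tp + fn)  # (FPR, TPR)

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
for th in (0.2, 0.5, 0.75):  # lower thresholds move up and to the right
    print(th, roc_point(y_true, scores, th))
```
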
|
OPCFW_CODE
|
from python_console_menu import AbstractMenu, MenuItem
class DemoSubMenu(AbstractMenu):
def __init__(self):
super().__init__("Welcome to the demo sub menu.")
def initialise(self):
self.add_menu_item(MenuItem(0, "Exit current menu").set_as_exit_option())
self.add_menu_item(MenuItem(1, "Demo sub menu item", lambda: print("Demo sub menu item selected")))
class DemoMenu(AbstractMenu):
show_hidden_menu = False
def __init__(self):
super().__init__("Welcome to the test menu.")
def initialise(self):
self.add_menu_item(MenuItem(0, "Exit menu").set_as_exit_option())
self.add_menu_item(MenuItem(1, "Demo sub menu", menu=DemoSubMenu()))
self.add_menu_item(MenuItem(2, "Show hidden menu item", lambda: self.__should_show_hidden_menu__()))
self.add_hidden_menu_item(MenuItem(3, "Hidden menu item", lambda: print("I was a hidden menu item")))
def __should_show_hidden_menu__(self):
print("Showing hidden menu item")
self.show_hidden_menu = True
def update_menu_items(self):
if self.show_hidden_menu:
self.show_menu_item(3)
def item_text(self, item: 'MenuItem'):
return "%30s" % item.description
def item_line(self, index: int, item: 'MenuItem'):
return "%d: %s" % (index, self.item_text(item))
demoMenu = DemoMenu()
demoMenu.display()
|
STACK_EDU
|
Let’s create a new playground, and import Foundation at the top.
As we've hinted at previously, importing a module makes its types available in the current file. Unlike a C #include compiler directive, no header sources are copied into the current file. Instead, each module defines what it exports, which we'll cover another week.
First, we'll cover URL:
var url:URL? = URL(string:"https://www.apple.com") //https://www.apple.com
The first thing to notice is that the init(string:) method does not guarantee it will create a URL from the String; the String must be correctly formatted for this to work. Let's see it fail by making the URL invalid with a space character:
var url:URL? = URL(string:" ") //nil
This means that in code you cannot assume creating a URL will succeed, so please don't force-unwrap the optional. However, it also means that if you require a URL in your API, validation will have already taken place.
That works for creating a URL to the internet.
var url:URL = URL(fileURLWithPath:"/users/ben/Documents/Swift 14 Foundation.md")
URLs to files need to be constructed specially if you already have a path.
Notice that we’re making a pretty big deal out of the fact that one is a file and the other is not.
var url:URL? = URL(string:"file:///users/ben/Documents/Swift%2014%20Foundation.md")
You can create a file URL with the init(string:) function, but you'll need to make sure you include the correct scheme, leave out the domain, and convert spaces to percent escape encoding. In other words, if you need to take a string path from another API, use the init(fileURLWithPath:) function to convert it.
There is a property on URL, .isFileURL, which will directly let you know whether a URL is actually a file. This is preferred over, for instance, using .hasPrefix("file:///") on a String:
var isAFile:Bool = stringURL.hasPrefix("file:///") // anti-pattern
var isAFile:Bool = url.isFileURL // thumbsup
For working with paths, URL knows how to pull out path components into an array:
var pathComponents:[String]? = URL(string:"https://developer.apple.com/wwdc/live")?.pathComponents // ["/", "wwdc", "live"]
Or read the extension:
var fileExtension:String? = URL(fileURLWithPath:"/video.mp4").pathExtension
It also knows how to modify a url to add more path components:
var urlWithPath:URL? = URL(string:"https://developer.apple.com/wwdc/live")
urlWithPath?.appendPathComponent("slides")
urlWithPath //https://developer.apple.com/wwdc/live/slides
Many platforms rely on strings to refer to files or internet resources, but Foundation provides a custom value type, URL. When importing a String from another source that should represent a URL, convert it immediately, leaving the interior of your code to work with this canonical type.
URLComponents - a Builder pattern
var scheme:String? = url.scheme
var host:String? = url.host
var query:String? = url.query
The URL struct does provide read-only properties for extracting the pieces of a URL, like the query in this example. But they don't allow direct setting, so let's take a look at a companion struct, URLComponents:
var components = URLComponents()
components.scheme = "https"
components.host = "developer.apple.com"
components.path = "/wwdc/live"
components.user = "jonny"
components.password = "1P40n3"
components.queryItems = [URLQueryItem(name: "skip", value: "20"), URLQueryItem(name: "search", value: "Best Practices")]
URLComponents is what’s known as a "Builder" pattern: an object whose properties are configured one at a time, and which then produces a manufactured object, a URL. This lets us forget about formatting and set each field directly.
Then we let the components type take care of creating a correctly-formatted URL for us:
let url:URL? = components.url //https://jonny:1P40n3@developer.apple.com/wwdc/live?skip=20&search=Best%20Practices
Notice that the query parameters automatically got percent escaping for the right characters. Also notice that the query items are an
Array, not a
Dictionary, since the URL spec does not prevent a query key from appearing more than once.
One might even imagine that instead of passing URLs through various layers of an app, each one tacking on different fields with very carefully calculated string offsets, only the URLComponents would be passed around, with each layer of the app modifying it to its heart's content, only to be turned into a perfectly formatted URL at the very end. In short, this is a grand slam for value types in Swift over the Class / Mutable Class Obj-C architecture.
Foundation provides basic date/time types. There is a Date, which represents an absolute position in time:

```swift
let now: Date = Date()
```
The empty initializer gives us "now".
We can also construct a Date with a Unix system time:

```swift
let someSpecificTime = Date(timeIntervalSince1970: 4567.0)
```
The word "since" in this init's name makes its intention clear: Dates after 1970 have a positive value, and Dates before it a negative one.
Similarly, we construct a date moments into the future using:

```swift
let nearFuture = Date(timeIntervalSinceNow: 0.4)
```
We can compute the TimeInterval between two Dates; TimeInterval is simply a typealias of a Double:

```swift
now.timeIntervalSince(Date(timeIntervalSinceNow: 10.0)) // -10.004
```
You’ll notice you don’t quite get a perfect "10.0" because it takes milliseconds for the playground to move between these computations.
And again, the values are positive if the receiver is after the argument date, negative if it is before the argument date.
Alternatively, if we need to add small time intervals to a given date, Date provides simple math operations:

```swift
var nearFuture = now.addingTimeInterval(0.4)
nearFuture.addTimeInterval(0.2)
```
I recommend you pretty much never do this, however, because a
Date is just a time interval since 1970. Since clocks are constantly changing to try to adjust for various political time changes and minute variations in the inaccuracy of our conventions, like 365-day years, or 24-hour days, merely adding time intervals to dates will not always get you what you want.
Just as we had a URL to represent a single fully-composed URL, and used URLComponents to create them, we'll use DateComponents to build Dates:

```swift
var components = DateComponents()
components.year = 2016
components.month = 3
components.day = 1
components.hour = 17
components.minute = 32
components.second = 12
components.calendar = Calendar(identifier: .gregorian)
components.timeZone = TimeZone.current
let date = components.date // "Mar 1, 2016, 5:32 PM"
```
This constructed a Date for us, without our having to do any math, or know anything about leap years. Notice that we needed to get a
Calendar to make the components work. This is necessary, because although a single point in time represented by a
Date is the same in each
Calendar system, the way to break down the years, months, days, etc. is not the same.
Unlike arrays, which use 0-based indexes, DateComponents is 1-based. In other words, Jan 1 has:

```swift
.month = 1
.day = 1
```

This means that we won't need to alter any numeric values when converting from the user's input or formatting for display.
`DateComponents` is also useful for performing math operations on `Date`s:

```swift
let gregorianCalendar = Calendar(identifier: .gregorian)
var difference = DateComponents()
difference.day = -1
let future = gregorianCalendar.date(byAdding: difference, to: date!) // "Feb 29, 2016, 5:32 PM"
```
Here, we create a blank set of DateComponents, then set a -1 for the .day. Then we ask the calendar to perform the math, giving us a new Date. Without needing to know anything about leap years, Foundation has provided us with the correct preceding date. In general, using components and Calendar-based math in this way helps us entirely avoid Date math problems like leap years and daylight saving time.
Today we learned that the Foundation module, which ships with Swift, has many standard types for representing real-world values. Among those are URLs, which represent both files and web resources, and Dates, which represent moments in time. Both have a Components companion type which provides a "builder" pattern. The components give direct access to the meaningful pieces of the larger types, and assemble one correctly on demand.
A standard library with value types for representing real-world values, now that's Swift!
How does traceroute work?
What is traceroute?
We can use the traceroute (or
tracert on Windows) command available on most hosts
and network devices to 'trace' the route a packet takes through a network. Traceroute
can be very helpful for identifying connectivity problems between devices on a
local network. It can also help you understand what route your traffic takes across
the internet – for example, to a particular domain. Where possible, the command
will return the IP addresses of intermediate hops (usually routers) which packets
transit to reach the given destination.
Although the implementation of traceroute varies between platforms, the information returned typically includes:
- Number of hops to the destination
- The IP address of each hop
- The hostname of each hop (where possible)
- Round trip time to each hop (usually repeated three times)
How does it work?
The magic of traceroute is achieved by using the Time To Live (TTL) field within IP packets. This field is present to prevent routing loops and to stop packets from endlessly traversing networks if they can't find their destination.
The sender sets the TTL to a specific value (such as 64 or 255). The TTL is then decremented by each intermediate device the packet transits. When the TTL reaches zero, the device (e.g. a router) processing the packet drops it and typically sends an ICMP packet back to the original sender. This ICMP packet is a type 11 'Time Exceeded Message' as specified in RFC 792. Whilst routers must drop packets with a TTL of zero, sending the 'Time Exceeded Message' is optional.
Traceroute exploits this behaviour by sending packets to the given destination with different TTL values to identify the routers at each stage. First, it sends a packet with a TTL of 1; the first router (first hop) decrements this to zero, drops the packet and sends back an ICMP message. When traceroute receives this ICMP packet, its source address will be the address of the router which dropped the original packet.
Then traceroute sends another packet to the destination but this time with a TTL of 2. This time, the packet successfully transits the first hop (where the TTL is decremented to 1), but when it reaches the second hop, the TTL is decremented to zero, the packet gets dropped, and the router sends back an ICMP Time Exceeded Message.
The TTL is incremented by one each time until we reach the final destination.
Step by Step
A traceroute from our host a.a.a.a to 8.8.8.8 (Google Public DNS) might look like this:
- Send a packet from a.a.a.a to 8.8.8.8 with a TTL of 1.
- Router b.b.b.b decrements the TTL to zero, drops the packet, and then sends an ICMP message back.
- a.a.a.a receives the ICMP packet and now knows from its source address that b.b.b.b is the first hop.
- a.a.a.a sends a packet to 8.8.8.8 with a TTL of 2.
- b.b.b.b decrements the TTL to 1 and forwards the packet to the next hop.
- c.c.c.c is the second hop: it receives the packet, decrements the TTL to zero and drops the packet. It then sends an ICMP packet back to the original sender (a.a.a.a).
- a.a.a.a receives this ICMP packet and knows that c.c.c.c is the second hop. The time taken to get to c.c.c.c and back is recorded as the round-trip time.
- a.a.a.a sends a packet to 8.8.8.8 with a TTL of 3.
- The TTL is decremented to 2 by b.b.b.b.
- The TTL is decremented to 1 by c.c.c.c and the packet is sent on to 8.8.8.8.
- The packet reaches its destination, 8.8.8.8, which sends back a 'Port Unreachable' ICMP packet.
- a.a.a.a receives the ICMP packet and can see from the source address of that packet (8.8.8.8) that the original packet reached its destination, so the full route has been traced.
In reality, there would typically be a lot more hops as the packet first crosses the local network before crossing the internet.
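The incremental-TTL loop described in the steps above can be sketched as a tiny simulation. This is plain Python with no real networking; the hop addresses are the placeholder names used in the steps:

```python
def trace(path, max_ttl=30):
    """Simulate traceroute over an ordered list of hop addresses.

    `path` lists the routers in order, ending with the destination.
    Returns the hops discovered, mimicking how each probe's TTL
    expires one hop further along the path.
    """
    discovered = []
    for ttl in range(1, max_ttl + 1):
        remaining = ttl
        for hop in path:
            remaining -= 1           # each device decrements the TTL
            if remaining == 0:       # TTL expired: this hop replies
                discovered.append(hop)  # ICMP Time Exceeded (or Port
                break                   # Unreachable at the target)
        if discovered and discovered[-1] == path[-1]:
            break                    # destination reached: stop probing
    return discovered

print(trace(["b.b.b.b", "c.c.c.c", "8.8.8.8"]))
# ['b.b.b.b', 'c.c.c.c', '8.8.8.8']
```

Each pass through the outer loop corresponds to one probe with a larger TTL, which is why each probe reveals exactly one new hop.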
What protocol does traceroute use?
Typically, traceroute uses either ICMP or UDP packets to a high destination port. When a UDP probe reaches the destination, the packet is dropped there, and the destination typically responds with an ICMP Port Unreachable message. Intermediate hops may also respond to the sender with ICMP.
It is also possible to use TCP (SYN packets), usually specified as an option to the traceroute command. Ultimately, the implementation depends on the platform you use.
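On most platforms the probe TTL is set with an ordinary socket option. A minimal Python sketch, which only configures the socket and sends no packets:

```python
import socket

# Create a UDP socket, as a UDP-based traceroute would.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Set the IP TTL for outgoing packets; a traceroute implementation
# repeats this with 1, 2, 3, ... for successive probes.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 3)

# Read the option back to confirm it took effect.
ttl = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL)
print(ttl)  # 3
sock.close()
```

Sending the probe and listening for the ICMP reply requires raw sockets and elevated privileges, which is why real implementations are platform-specific.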
What stops traceroute from working?
Sometimes a firewall will be configured to prevent the ICMP messages from being returned to the sender. Other times a router may be configured not to send the ICMP back, a device may be misconfigured, or a packet may get lost.
If this happens, traceroute will wait a while before timing out and trying again
– it usually represents this with asterisks (e.g.
* * *). After a few attempts
(usually 3) traceroute will increment the TTL and try to reach the next device.
In some cases, subsequent devices may still return an ICMP reply allowing the remainder
of the route to be identified. If the firewall is preventing this, then it is likely
that no more responses will be received. In this case, the command will stop after
a maximum number of attempts – e.g. 30.
By default, traceroute will try to resolve the hostname for each hop by using the returned IP address. However, this may not always be possible, which will result in just the IP address being displayed.
The traceroute command
The command varies depending on the platform. On Windows (with PowerShell or Command
Prompt), it is
tracert whilst on most other devices (such as Linux and other
Unix platforms) it is
traceroute. To run a basic traceroute we give the IP address
or hostname we want to reach.
tracert 8.8.8.8 # Windows
traceroute google.com # Linux
Running the traceroute command without any arguments should print the help pages for that platform. Options may be available to:
- Prevent trying to resolve hostnames from the returned IP addresses
- Specify whether to use IPv4 or IPv6
- Set the maximum number of hops to try to reach the destination
- Specify whether to use ICMP, UDP or TCP
|Stochastic Systems Group|
Dr. Feng Zhao
Xerox Palo Alto Research Center
Collaborative signal and information processing (CSIP) for distributed sensor networks is an emerging research area, drawing upon traditionally disparate disciplines such as low-power communication and computation, space-time signal processing, distributed algorithms, adaptive systems, and sensor fusion and decision theory.
Recent advances in wireless networking, microfabrication (e.g. MEMS), and distributed signal processing have enabled a new generation of sensor networks for a range of tracking and identification problems in both civilian and military applications. Examples range from human-aware environments, intelligent transportation grids, factory condition-based monitoring and maintenance, to battlefield situational awareness. However, unlike centralized sensor-poor systems, distributed sensor nets are characterized by limited battery power, frequent node attrition, and variable data and communication quality. To scale up to more realistic tracking and classification applications involving tens of thousands of sensors, heterogeneous sensing modalities, multiple targets, and non-uniform spatio-temporal scales, these systems have to rely primarily on collaboration among distributed sensors to significantly improve tracking accuracy and reduce detection latency.
At Xerox PARC, we have embarked on a set of projects to take a systemic approach to address key CSIP issues such as scalable distributed algorithms, progressive accuracy, spatial resolution, and high-level information processing for sensor nets. The key insight is to develop a dynamic feedback mechanism between the high-level structure analysis and node-level signal processing so as to focus the sensing and communication on a when-needed basis. The first problem we are addressing is the combinatorial explosion in data association: assigning signal streams to objects in a distributed setting. I will describe a mechanism we have developed that uses a predictive model to temporally and spatially segment signal streams to drastically reduce the number of possible associations. The second problem we are addressing is the multiple hypothesis management problem in sensor nets. I will describe a method for filtering data and discuss issues concerning data exchange, information utility measure, and data consistency.
Feng Zhao is a Principal Scientist in the Systems and Practices Laboratory at Xerox PARC. Dr. Zhao leads the Collaborative Sensing and Smart Matter Diagnostics Projects that investigate how MEMS sensor and networking technology can change the way we build and interact with physical devices and environments. His research interest includes distributed sensor data processing, diagnostics, qualitative reasoning, and control of dynamical systems.
Dr. Zhao received his PhD in Electrical Engineering and Computer Science from MIT in 1992, where he developed one of the first algorithms for fast N-body computation in three spatial dimensions and phase-space nonlinear control synthesis. From 1992 to 1999, he was Assistant and Associate Professor of Computer and Information Science at Ohio State University. His INSIGHT Group developed the SAL software tool for rapid prototyping of spatio-temporal data analysis applications; the tool is being used by a number of other research groups. Currently, he is also Consulting Associate Professor of Computer Science at Stanford.
Dr. Zhao was named a National Science Foundation and an Office of Naval Research Young Investigator, and an Alfred P. Sloan Research Fellow in Computer Science. He has authored or co-authored over 50 peer-reviewed technical papers in the areas of smart matter, artificial intelligence, nonlinear control, and programming tools.
Filling and reading QML UI forms from Python
This PySide tutorial shows you how to create a “classic” form-based UI with the Colibri QML Components and have it filled and controlled by Python code. There are several ways to do this, and depending on your use case, there might be a better method. Please also note that in this example, the controller code knows a bit about the UI (or rather: the UI has to inform the controller which widgets are to be filled), which might not be desired.
Import the required modules
We need the QtCore module for QObject, the QtGui module for QApplication and the QtDeclarative module for the QML view (QDeclarativeView).
Define a Car as QObject
This is simply the Python version of a normal QObject with 4 properties:
- model (String) – The car name
- brand (String) – The company that made the car
- year (int) – The year it was first produced
- inStock (bool) – If the car is still in stock at the warehouse
The controller to fill the form and react to events
This is another QObject that takes a list of cars as constructor parameter. It also remembers the current position in the list of cars. There are three slots that are visible to the “outside” (QML in our case):
- prev – Go to the previous item
- next – Go to the next item
- init – Show the first item
All these slots take a QObject as parameter, and from the QML file, we will pass the root object there, which has a property widgets where we save a dictionary of mappings from name to QML component. The fill function takes care of filling in the data of the current car into the QML widgets.
Here is some example data, so that we can use our example and click through a list of cars.
Putting it all together
We first need to create the controller, which then also knows about our cars. Then, there is some housekeeping that we need to do – create a QApplication, create the QDeclarativeView and set its resizing mode (so that the root object in QML is always as big as the window).
We then get the root context and expose the controller and the cars list to it (if you look closely, we don’t really need the cars themselves). Then, we load the QML file, show the view and start the application.
This is the user interface of our application. We only use the controller in the UI, and we also only use it for initialization and when buttons are clicked.
What the example app looks like
Simply start the resulting app with python CarAnalogy.py and you should get something like this:
July 19, 2005 in ObjectBar
Oh, link is dead, can anyone share it one more time?
it's alright i found it at another page
can anyone please tell me what OB skin is that in the screenshot.....ive been trying to get something like that....thanks
this is a great mod, but i cant get the shadow to work. or actually i can, but i will explain how i got the shadow working: i checked in prefs to block WinFX from applying the shadow to the dock, then unchecked it, applied the prefs and it worked, but afterwards the dock showed thicker. so i went to prefs and tried to give it the exact height (22 pixels) as before, but it just didnt change to 22 px, and when i got back to the prefs, there wasnt 22 px as i typed, but 0 px. i tried this a few more times and it just wouldnt change. could anyone help me? I use OB 1.65
it doesnt work?
i put the 2 files (there are 3 in the rar, but i dont want to replace the actual .exe file) into the install folder for ob but it doesnt work in either panther or in aqua updated
You need to replace the .exe file. Make a backup of your current objectbar.exe file and call it objectbar.bak. That way, if anything goes wrong, you can just rename it again.
I can't modify anything in menus, each time i try to change somthing (I want to translate a theme into french) it shows "Finder" in place of what i've changed. Is there a way to modify my bars and keep "finder when i click on the desktop ??
I've had that same problem, so what I've been doing is just quitting the modded ObjectBar and then loading the actual EXE and making whatever changes I need to, then quitting, and running the modded EXE.
Been having that trouble, too. But OOPS! I replaced my exe with the modded one!
Fix for this thing, plz with a cherry?
Unless you're willing to send a copy of your regular exe by some off chance, qweedle.
Qweedle WTF, thats hilarious
Nice job Local Host !, ill give this a try tommorow
localhost can you try on your computer , chose Chinese as the default language in the "language for non-Unicode programs" ,and try to use OB to show some ctrl + or shift + , it crashes everytime here on my laptop.I am using a windows pro sp2 and OB 1.65 635 /
i can confirm that. my setup is about the same as wishbone's, and your OB mod crashes every time i click on it, so i'm forced to revert back to the unmodded OB.
This is definitely an "I love you, man!" moment. Do you know how many hours I invested trying to tweak objectbar to do just what you have done here? Searching through registry keys, translating hex code, going through reshacker, pulling out every hair on my head and here you come along and... I am freakin high right now on this mod. Seriously, who needs crack when you have this sweet mod? YOU FREAKIN ROCK!!! Thank you so much!
I love you moment indeed. *Adds RK Launcher to list*
Btw, is there anyway you could fix some issues with for example photoshop? It malfunctions a lot with the "replace menu items" option on...
AND (bigass error) every shortcut I make now, is labeled: Finder. Always.
And it (always) makes it open up a Folder.
How on earth did you get it to do that?
The error occurs when renaming (again, anything), it changes it to Finder.
This is working rather oddly for me. I have some Finder-VFinder theme which I beleive is Lie and it works perfectly when using that theme. As far as I know, the mod needs the OBFont to work. Well, the Shinobi objectbar uses OBFont but for whatever reason It wont change Finder to the new active application name. It also is making things act weird when trying to edit my bars info. Any ideas?
@unwritten - To make objectbar+ob_mod display new active application name:
Check to see if you have this option enabled
@localhost or anyone that can help - Anytime when I try to change the display-format of the clock, it changes to "Fin18er" instead, and just stays that way. I think this problem was adressed before, but I've just downloaded OB_mod a few days ago, and I tought it was already fixed. I'm sorry I don't have a screenshot of the problem, but if anyone recognizes it, any help to solve it would be appreciated.
Guys, whenever you want to change settings or configurations for your theme, just temporarily replace the modded objectbar.exe with the orginal, go back in, make your changes, then re-use te modded objectbar.exe to open ObjectBar up again. Otherwise you will get renaming issues and settings not going through because of the mod.
A minor inconvienience when considering how freakin' awsome this is. By the way, LocalHost, perhaps you could do something about Photoshop. If you could do that, we could start talking about organizing a religious cult in your name. How about it?
objectbar 2.0 is coming out in a few weeks.
Quite. And we all implore or dear friend localhost to adapt the mod to ObjectBar 2.0 when it comes out. (Pleeeeeeeease???) And maybe localhost could kick Photoshop CS2 in the nads while he's at it. I can't get the menus to work in ObjectBar! They show up but they're empty! WHY???
Is there a way to change the icons for programs that appear on objectbar?
Hi, I'm COMPLETELY new to WindowsXP modding, and I was wondering, If I'm using objectbar with objectbar mod, is there any way to view my system tray?
Errr... i installed it and it only works with some apps...
Mozilla Firefox=Safari: Working
These apps are NOT working [it still says the original app name]:
Yahoo Widget Engine
Anyone knows how to solve this?
i got a problem. the theme i am using appears as a running task, and it comes up as "finder"....
also i cannot get that nice drop shadow on the Finder Menu text =. any suggestions? anyone?
maybe cause you havent chose the option to replace the text to the open application?
the dropshadow...doesnt seem like a big thing but hey whatever floats your boat
no i have, im talking about that in my taskbar section of the Window menu of OB there is an entry called "Finder" with the OB icon in front of it and i cannot get rid of it
This version of RR cannot resume playing the last song when the song is under a Chinese title or stored in an artist foldername in Chinese. No problem resume playing if it's an English song. I didn't have this problem when I was using DEC-2011 version of RR. Please kindly help. Thank you.
I JUST tested a chinese song in a chinese folder and it plays and when i restart RR, it resumes perfectly
using WINAMP 5.61 as my player
test, with carwings too...case its a skin issue!
As mentioned no such problem in Dec release with any skin I used. I don't have the setup file anymore. Anywhere I can download it again?
you cant(publicly), we dont go backwards, only forwards.... i cannot reproduce your issue... i have plenty of chinese music thats to a few people here, and it always resumes to the same position
did you read debug.txt? it will tell you what its doing
trackposition=-1 means something... i cant remember exactly right now, but it does mean that RR could NOT get the positional data from winamp
like you are killing the winamp process before RR is closed, which there is some safety code in rr, to try to use the last known positional data from winamp
so.. maybe you can FTP me that particular song, and let me see your debug.txt
if there is an issue, ill fix immediately but it has to be reproducible
pm me for FTP information
Trackposition of -1 means, that WINAMP is in STOP, therefore on resume, it would be correct if its STOPPED
this means resume to position 4 in PL, (zero based), but STOPPED, not at any specific location in song...
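In other words, the resume data amounts to a playlist index plus a track position. A hypothetical sketch of that interpretation (the helper and field names are made up for illustration, based on the debug values quoted in this thread):

```python
def interpret_resume(pl_position, track_position):
    """Interpret resume values (hypothetical helper, not RR's code).

    pl_position is zero-based, so 4 means the 5th playlist entry.
    A track_position of -1 means the player was stopped, so resume
    should select the track but not seek into it.
    """
    state = {"entry_number": pl_position + 1}  # 1-based, for display
    if track_position < 0:
        state["mode"] = "stopped"
        state["seek"] = 0
    else:
        state["mode"] = "playing"
        state["seek"] = track_position
    return state

print(interpret_resume(4, -1))
# {'entry_number': 5, 'mode': 'stopped', 'seek': 0}
```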
I will enable debug and post the resulting debug.txt next time. Thanks.
something is stopping winamp...review your debug.txt!
Here're the 3 debug.txt files I just obtained moments ago in the following order:
1. debug(English) - I start RR, click next track button until an English song is playing (I use shuffle mode), then exit RR while the music still playing.
2. debug(Chinese) - I start RR again, it resumes playing the last English song at the exact position. I click next track button until a Chinese song is playing, then exit RR again.
3. debug (stop) - I start RR one more time, it didn't resume playing any music. I exit RR right away.
I hope these info are good enough to nail the problem. Thank you.
no, clearly from the debug STOP, it is working fine...
2/22/12 8:46:27 PM: INFO: LoadResume() - Reading resume data from F:\X-Trail\Documents\RideRunner\Cache\resume.ini
2/22/12 8:46:27 PM: AudPlayerStart: music player: PLPosition=257 TrackPosition=5543
that shows me, the player will be asked to go to playlist position 257 (258th song), and at time index 5543
now, i never see it play, now i NEVER had 258 songs in a PL on my machine (i might have)
try 5 songs!
Recently I was introduced to a highly peculiar but amazing presentation format called PechaKucha. Like many cool things, it started in Japan. The PechaKucha presentation format is this:
- Exactly 20 slides
- Exactly 20 seconds per presentation (slide transitions are on a timer, so no backing up, taking questions or starting over!)
- Mostly graphic content on each slide
You can see PechaKucha (also called “20x20”) described here, as well as run PechaKucha presentations.
As someone who is frequently on both sides of the podium (either in the audience, or giving a presentation) I immediately saw the appeal of PechaKucha. It forces the speaker to be brutal in their content selection and scoping, and to work out in advance their spoken-word narrative that will accompany the slides. Any and all PechaKucha presentations run just under seven minutes in duration (20 slides times 20 seconds per slide equals 400 seconds, or six and two-thirds minutes).
As an audience member, how many presentations have you sat through that you wish could have been limited to six and two-thirds minutes? As a presenter, have you ever worked with a format that forces you to get to the point and stay on message? Well here you go.
When I first discovered PechaKucha, I had two thoughts relating to project management. The first was this: Use the format for project status reports to cover the essentials, then spend minutes seven through N drilling into whatever details are of interest to the audience.
More broadly PechaKucha embodies in the presentation domain some issues that we so often struggle to define in our project plans. I’m referring to scope management and working within the given constraint(s). In the case of PechaKucha, the presenter must scope their spoken narrative to align with the 20-second duration per slide, for 20 slides. Both the duration and number of allowable slides are hard constraints on the presentation; the presenter adds their value by working within those constraints to maximum effect.
In project management, I find great value in building my plans and running the related conversations with reference to the Project Triangle of Triple-Constraints. I’ve described the Triangle previously (see these posts, for example). Being able to identify, quantify and articulate the constraints under which a project must be executed are essential skills of the project manager.
Hands-on with Project Step by Step
To read more about this blog entry's subjects in the two most recent editions of Tim Johnson's and my Project Step by Step books, see the following cross-references.
The Project Triangle model
- Project 2013 Step by Step: "A Short Course in Project Management," pg. 505
- Project 2010 Step by Step: "A Short Course in Project Management," pg. 431
[bug] Explorer's Anchorage is a valid sell station
Describe the bug
When calculating trades, Explorer's Anchorage is considered a valid destination, in spite of the fact that Explorer's Anchorage is several thousand light years outside of the bubble and shouldn't be a profitable destination. I would suggest also similarly filtering Colonia, but generally speaking the Colonia markets are already collapsed and overfed to begin with, so they shouldn't be a problem.
To Reproduce
Version: <IP_ADDRESS>
Steps to reproduce the behavior:
Open the game.
Get a trade.
Trade Destination is Explorers Anchorage.
Expected behavior
Trade Destinations should not be Explorer's Anchorage
Screenshots
For stations like that I added the distance field below the profit, so you can see how far away the trade would take you.
I will try to add a filter option for this stat, but due to the current way the distance is received this requires a bigger re-make, which will not happen soon.
If you add a max distance filter it would also allow exclusion of Rackham's Peak, which always takes the top spots unless I uncheck "include small pads"
Or less elegantly you could add a checkbox for "include outside the bubble" which would exclude those two stations but that will introduce an ongoing maintenance issue as additional stations are added far from the bubble.
The main problem with the distance calculation is that it's only triggered if you check a trade. That's why there is always an UNKNOWN distance when you first look at a trade, and after a few seconds it switches to the actual distance.
In the background the following happens:
User looks at a new trade
Tool checks which systems are involved
Tool checks if local database contains distance information about those systems
If distance is not available locally, the tool makes a web-request which creates a notable delay
That means that if the distance check is done whenever the filter options change or the Reload Trades button is pressed, it would take a few minutes to get the distance between all valid stations.
So the solutions I thought about are:
A filter option for the maximum distance to a specified star system
User can specify to which system the distance should be calculated
User can specify how far away from that system the trade systems can be
This would enable me to add a distance web-request whenever a new system gets stored in the local database
Store the coordinates of each system
This allows me to calculate the distance myself, which wouldn't require a slow web-request
This would allow the user to filter for max distances AND max distance to a specified system
I'll work on this as soon as I did a rework of the database structure
I remade the whole database and added coordinates to the systems, but for performance reasons it's not possible to filter out systems that are too far away from each other, because I would need to calculate all distances between all stations at runtime, which is not possible.
I also tried to add a "Sol"-Filter that ignores systems that are too far away from Sol, but the performance impact for this feature is also way too high.
The last solution I currently see is to implement a black-list where you can add or remove individual stations.
But if you have any other ideas, please let me know.
A user-editable blacklist is probably the best way to go - it lets people pick and choose what stations they don't want to see. But if you don't want to go there I do have an idea for solving this programmatically.
Calculating each station's distance introduces an O(n²) step, which can certainly cause performance problems. But if you only calculate the distance between the two systems you are about to present as the top pick, that should be fairly quick.
proposed algorithm:
If the best price match (#1s/#1b) fails the max distance check, then check the second-best sell against the best buy (#2s/#1b) and the second-best buy against the best sell (#1s/#2b). One of those two will probably fail as well, telling you which station is "way out there". Display the other. If all three fail (#1s/#1b, #2s/#1b, and #1s/#2b), then consider both the seller and buyer stations "way out there".
temporarily black list both #1b and #1s and repeat, or try #3s/#1b and #3b/#1s.
Calculating only the first distance is also a problem, because if the user selects the next station/commodity, I would have to calculate that one next. This would cause some lag in the UI which I want to avoid.
It's also not safe to say that every failed calculation involves stations that are far away: if station 1 is in Colonia, station 2 is in the bubble, and station 3 is at Sagittarius A*, not all of them would need to be blacklisted, even if all three distance checks failed.
So I think I'll stick to the blacklist.
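The blacklist itself can be as simple as a set-membership filter. A sketch of the idea (the trade structure and the non-blacklisted station names are made up for illustration):

```python
blacklist = {"Rackham's Peak"}

def visible_trades(trades, blacklist):
    """Drop any trade whose sell or buy station is blacklisted."""
    return [t for t in trades
            if t["sell_station"] not in blacklist
            and t["buy_station"] not in blacklist]

trades = [
    {"sell_station": "Jameson Memorial", "buy_station": "Rackham's Peak"},
    {"sell_station": "Jameson Memorial", "buy_station": "Davies Station"},
]
print(len(visible_trades(trades, blacklist)))   # 1
```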
Blacklist added to version <IP_ADDRESS>
|
GITHUB_ARCHIVE
|
Precautions and Responses
How to Avoid
Learn to recognize the snake species that are likely to be in the area. Please do not kill a snake – even a venomous one. Snakes serve a valuable function in the environment. The majority of bites result from people taking unnecessary or foolish risks with venomous snakes. Understanding what snakes look for in suitable habitat can help you know when to be wary. Understanding their behavior will help you know what to do if you encounter one. Snakes like tall grass.
- Keep the lawn around your home trimmed low.
- Remove any brush, wood, rock or debris piles from around the residence – they make great hiding places for snakes and their prey – rodents.
- Always wear shoes while outside and never put your hands where you cannot see them.
- Be careful when stepping over fallen logs and rock outcroppings.
- Take care along creek banks and underbrush.
Snakes do not prey on humans and they will not chase you; in fact they usually retreat or escape if given the opportunity. The danger comes when they are surprised or cornered. Do not play around with a dead snake – they have been known to bite and envenomate. Get a good field guide and keep it handy, especially in the field.
Houston Snake Calls Can Be Amusing
I received a call this morning from a woman named Michelle. She needed a Houston Snake Removal service in her back yard and was calling to find out information regarding the services we provide. After talking with her and getting more information about where the snake was, she informed me that someone had come out and put snake poison around her yard, and asked if that would kill a snake… My answer was no, and she did not like that at all. I told her that we can do a snake inspection of her yard to make sure there are no snakes, and check the area around her house to make sure there were no entry points for snakes to get into her house. She asked what preventatives we have to keep snakes from coming in. I told her that we can do snake traps, snake repellent spray or granules, and we can seal her house, if necessary, so snakes and other small wildlife, such as rodents, couldn't get into her house.
After talking with her twice, she decided to think about it and call me back… not the way I wanted to schedule a call. But after an hour she called back and actually scheduled a technician to go out and do an inspection. To say the least I was surprised; she was very short when I was talking to her and acted like she didn't want to hear anything I was saying. But it's all in a day's work!
|
OPCFW_CODE
|
# Advent of Code 2017, Day 18 ("Duet"): a small register machine where
# each instruction returns (jump offset, sent value, opcode name).
import sys
from collections import defaultdict

def snd(val):
    def snddoit(registers, messages):
        if messages is None:
            # Part 1: remember the last played sound
            registers["snd"] = getRegisterOrValue(val, registers)
            return (1, None, 'snd')
        else:
            # Part 2: emit the value onto the other program's queue
            return (1, getRegisterOrValue(val, registers), 'snd')
    return snddoit

def rcv(register):
    def rcvdoit(registers, messages):
        if messages is None:
            # Part 1: recover the last sound if the register is non-zero
            if getRegisterOrValue(register, registers) != 0:
                return (1, registers['snd'], 'rcv')
            return (1, None, 'rcv')
        else:
            # Part 2: block (jump 0) until a message is available
            if len(messages) == 0:
                return (0, None, 'rcv')
            registers[register] = messages.pop(0)
            return (1, None, 'rcv')
    return rcvdoit

def getRegisterOrValue(val, registers):
    # Numeric literals (possibly negative) are values; anything else is a register.
    if val.lstrip('-').isdigit():
        return int(val)
    return registers[val]

def setval(lvalue, rvalue):
    def setdoit(registers, messages):
        registers[lvalue] = getRegisterOrValue(rvalue, registers)
        return (1, None, 'set')
    return setdoit

def add(lvalue, rvalue):
    def adddoit(registers, messages):
        registers[lvalue] = registers[lvalue] + getRegisterOrValue(rvalue, registers)
        return (1, None, 'add')
    return adddoit

def mul(lvalue, rvalue):
    def muldoit(registers, messages):
        registers[lvalue] = registers[lvalue] * getRegisterOrValue(rvalue, registers)
        return (1, None, 'mul')
    return muldoit

def mod(lvalue, rvalue):
    def moddoit(registers, messages):
        registers[lvalue] = registers[lvalue] % getRegisterOrValue(rvalue, registers)
        return (1, None, 'mod')
    return moddoit

def jgz(lvalue, rvalue):
    def jgzdoit(registers, messages):
        jumpval = getRegisterOrValue(rvalue, registers)
        condval = getRegisterOrValue(lvalue, registers)
        if condval > 0:
            return (jumpval, None, 'jgz')
        return (1, None, 'jgz')
    return jgzdoit

def parse(lines):
    instructionlist = []
    for line in lines:
        instructions = line.strip().split(" ")
        if instructions[0] == 'snd':
            instructionlist.append(snd(instructions[1]))
        elif instructions[0] == 'set':
            instructionlist.append(setval(instructions[1], instructions[2]))
        elif instructions[0] == 'add':
            instructionlist.append(add(instructions[1], instructions[2]))
        elif instructions[0] == 'mul':
            instructionlist.append(mul(instructions[1], instructions[2]))
        elif instructions[0] == 'mod':
            instructionlist.append(mod(instructions[1], instructions[2]))
        elif instructions[0] == 'rcv':
            instructionlist.append(rcv(instructions[1]))
        elif instructions[0] == 'jgz':
            instructionlist.append(jgz(instructions[1], instructions[2]))
    return instructionlist

def part1(instructions):
    i = 0
    registers = defaultdict(int)
    while 0 <= i < len(instructions):
        (jump, sndValue, cmd) = instructions[i](registers, None)
        if sndValue is not None:
            return sndValue
        i += jump
    return None

def part2(instructions):
    i0 = 0
    i1 = 0
    registers0 = defaultdict(int)
    registers0['p'] = 0
    registers1 = defaultdict(int)
    registers1['p'] = 1
    messagequeue = {"prog0": [], "prog1": []}
    prog0sndcnt = 0
    prog1sndcnt = 0
    # Run both programs in lockstep; a program blocked on rcv returns jump 0.
    while 0 <= i0 < len(instructions) and 0 <= i1 < len(instructions):
        (jump0, sndValue0, cmd0) = instructions[i0](registers0, messagequeue['prog0'])
        (jump1, sndValue1, cmd1) = instructions[i1](registers1, messagequeue['prog1'])
        # Deadlock: both programs blocked on rcv with empty queues
        if cmd0 == 'rcv' and cmd1 == 'rcv' and jump0 == 0 and jump1 == 0:
            return prog1sndcnt
        if cmd0 == 'snd' and sndValue0 is not None:
            prog0sndcnt += 1
            messagequeue['prog1'].append(sndValue0)
        if cmd1 == 'snd' and sndValue1 is not None:
            prog1sndcnt += 1
            messagequeue['prog0'].append(sndValue1)
        i0 += jump0
        i1 += jump1
    return prog1sndcnt

if __name__ == "__main__":
    instructions = parse(sys.stdin.readlines())
    print(part1(instructions))
    print(part2(instructions))
|
STACK_EDU
|
View Full Version : Anyone know if it's possible to extract Loom's CDDA.SOU?
08-10-2009, 04:50 PM
(The big file distributed with the Steam version, that is.)
I tried a couple of versions of ScummRev and they couldn't even recognise the header somehow. Is there anything else that's likely to work at this moment in time?
08-16-2009, 04:21 PM
There isn't much to this file as far as I can tell. Basically it's a long WAV file (without the header). You can import the data into Audacity using Import Raw.
I also found a WAV-header fix program (don't remember where right now) that you can use to add a header to the file and play it.
If I remember correctly the data was a single track on the CD version of LOOM, so I don't think there are any splits or anything.
I tried using Import raw in different settings and found that it sounded best at 16 bit PCM mono at 44100.
08-17-2009, 10:54 AM
Yes, I discovered it can also be read through GoldWave (a similar program to Audacity), and furthermore I seem to get the correct sound from choosing either 44100Hz Mono or 22050Hz Stereo (both 16-bit), so who knows what the correct specification is supposed to be. Is this a common theme with all headerless/RAW music?
All I know is that it has to be downgraded from the original CDDA somehow - ripping it from an original CD results in a 551Mb WAVE file. Still, does this whole CDDA.SOU business mean that any audio data can effectively have its extension renamed to SOU and work, provided that the application is programmed to read it in that manner?
I'd love to make a size comparison between this file and a FLAC rip of the CDDA track at some point.
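For anyone who wants to do this in code rather than Audacity: Python's standard `wave` module can wrap headerless PCM in a WAV container. A sketch, assuming 16-bit PCM as discussed above (channels and rate are the guesses from this thread, not a confirmed spec):

```python
import io
import wave

def wrap_raw_pcm(raw_bytes, channels=1, rate=44100, sampwidth=2):
    """Return WAV-container bytes for headerless 16-bit PCM data."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(sampwidth)
        w.setframerate(rate)
        w.writeframes(raw_bytes)
    return buf.getvalue()

# Note: 44100 Hz mono and 22050 Hz stereo are both 88200 bytes/sec of
# 16-bit PCM, which is why either guess plays back at the right speed.
```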
08-17-2009, 10:50 PM
Hey samlii, when you checked the audio in Audacity did it sound crackly or dodgy to you at all? I mean besides the otherwise accurate-sounding speech. I'm not too sure how to fix the audio myself.
08-17-2009, 11:46 PM
Yeah it doesn't sound very clean.
I'm trying to run it through some filters, but I haven't found anything useful as of yet.
11-06-2011, 02:37 PM
Someone figured out the format. Read all about it over here (http://forum.scummvm.org/viewtopic.php?t=7783&postorder=asc&start=29).
vBulletin®, Copyright ©2000-2016, Jelsoft Enterprises Ltd.
|
OPCFW_CODE
|
I'm currently trying to scan my network. However, some of our servers are configured with multiple IP addresses in order to connect them to different VLANs. When I try and scan those IP addresses however, Spiceworks only picks up one of the IPs. I have to manually scan the particular IP for SW to "see" it, but then it overwrites the previous entry so I lose the old IP! How do I set it up so that SW is able to scan both IP addresses and register them as active?
Hmmm...The second IP doesn't show on my system either...
I would send a message to email@example.com to see if they can help you...
This topic was created during version 5.1.
The latest version is 7.5.00101.
Have you entered the IP's in your scan range, under Settings --> Network Scan??
Yes, I entered 10.200.5.1-254. The issue is only partly that the IP address doesn't get picked up; the more pressing issue is that SW will overwrite the IP address of the server with the other IP whenever it is scanned. The server IP is 10.200.131.28; when I scan 10.200.5.28, all instances of 10.200.131.28 are gone. However, it doesn't tell me that there was any change to the server under the Timeline widget on the dashboard.
OK...You have the 10.200.5.0/24 subnet entered...Do you have the 10.200.131.x subnet entered in as a range as well?
I haven't had dual NIC's enabled for a bit, but if I remember correctly, it shows 2 instances with the same device name.
I have it set to 2 separate scans, but yes, the 10.200.131.x subnet is entered as well. I tried scanning both IPs at the same time in one scan, but got the same result.
Hmmm...I may not be much help here...I just can't remember what happened when I had this setup... :(
You might try deleting the device (with either IP), and then rescan the subnets...I'm assuming you don't have too many devices, so a full scan won't take long...
Make sure to set the scan speed to slow, and disable incremental scanning, then under the pro settings, set 'Scanner sends all data instead of deltas' to true.
Ah, that's what I've been doing... deleting and scanning over and over again. I've already set the speed to slow and turned incremental off, but I don't know what the 'sends all data' setting does. Regardless, I'll try it tonight and see if it helps. Thanks for your time!
It basically sends all of the collected data over, instead of the changes...Similar to Incremental, really...I don't know why there are 2 settings, and I'm not sure anymore why they have them at all...They should just set it and forget it...
I'm on a pretty small network though. It probably makes a difference on larger networks or remote collector sites, but I don't notice major time lags with it off, so I just leave it...
If I get around to it, I'll enable one of the NIC's on a server, and see what it does...
OH...Make sure you're running the latest version of SW...Version 5.1.68412 I think...
Sorry, I don't see that setting? All I see are:
Always Show Pro settings....false
Update Old Default to New...true
Do not use flash...false
Computer depreciation in years...3
Show quick info boxes...true
Disable intro help tips...false
Restore dismissed help tips...false
Disable CDW search bar...false
Tried running with the Scanner sends all data instead of deltas to true, but still the same result. Only seeing one instance of the server, no trace of the other IP addresses.
Still running into this issue. Anybody else have any idea/experience with this problem?
I've re-enabled a second NIC on one of my servers...Let me see what I can find out...
If you open a command prompt on the machine that is running SW, and type the following, what do you get?
wmic nic get name
Sorry!! Run this from the server that has the dual NIC's...
here's what I get:
WAN Miniport (SSTP)
WAN Miniport (IKEv2)
WAN Miniport (L2TP)
WAN Miniport (PPTP)
WAN Miniport (PPPOE)
WAN Miniport (IPv6)
WAN Miniport (Network Monitor)
Intel(R) PRO/1000 MT Network Connec
Microsoft ISATAP Adapter
WAN Miniport (IP)
Intel(R) PRO/1000 MT Network Connec
RAS Async Adapter
Microsoft ISATAP Adapter #2
Teredo Tunneling Pseudo-Interface
|
OPCFW_CODE
|
NAME
    RT-Extension-ExcelFeed

DESCRIPTION
    This extension allows you to generate RT reports in MS Excel XLSX format. It provides two ways to do this. First, it adds a new MS Excel option to the 'Feeds' menu on the Query Builder search results page. It also adds an option to the Dashboard subscription page that allows you to have scheduled dashboards emailed to recipients as attached MS Excel files rather than inline HTML reports.

RT VERSION
    Works with RT 4.2, 4.4

INSTALLATION
    perl Makefile.PL
    make
    make install
        May need root permissions.

    Patch RT
        The following patches are also needed. Note the versions and only apply the patches needed for your version. Only run these the first time you install this module. If upgrading, install any patches that were not previously applied.

        Apply for both 4.2 and 4.4.0. Not needed for 4.4.1 or later:
            patch -p1 -d /path/to/rt < etc/subscription_callbacks.patch

        Apply for 4.2 and 4.4.0. Not needed for 4.2.13 or later, or 4.4.1:
            patch -p1 -d /path/to/rt < etc/chart_callback.patch

        Apply for 4.2:
            patch -p1 -d /path/to/rt < etc/tabs_privileged_callback.patch

        Apply for 4.4:
            patch -p1 -d /path/to/rt < etc/tabs_privileged_callback_44.patch

    Add this line to /opt/rt4/etc/RT_SiteConfig.pm:
        Plugin('RT::Extension::ExcelFeed');

    Clear your mason cache:
        rm -rf /opt/rt4/var/mason_data/obj

    Restart your webserver.

AUTHOR
    Best Practical Solutions, LLC <email@example.com>

BUGS
    All bugs should be reported via email to bug-RT-Extension-ExcelFeed@rt.cpan.org or via the web at http://rt.cpan.org/Public/Dist/Display.html?Name=RT-Extension-ExcelFeed.

LICENSE AND COPYRIGHT
    This software is Copyright (c) 2015-2018 by Best Practical Solutions, LLC.

    This is free software, licensed under: The GNU General Public License, Version 2, June 1991.
|
OPCFW_CODE
|
Ras Error Value 31
Specialized programs are also available to diagnose system memory issues. Also, check the Phone Configuration page to determine whether the phone is associated with the specified end user in the Digest User drop box. Recommended ActionMonitor for other alarms and restart Cisco CallManager service, if necessary. Reason Code - Enum Definitions Enum Definitions - DeviceType Value Definition 1 CISCO_30SP+ 2 CISCO_12SP+ 3 CISCO_12SP 4 CISCO_12S 5 CISCO_30VIP 6 CISCO_7910 7 CISCO_7960 8 CISCO_7940 9 CISCO_7935 12 CISCO_ATA_186
Possible causes include device power outage, network power outage, network configuration error, network delay, packet drops, and packet corruption. ERROR_ACCOUNT_DISABLED 1332 No mapping between account names and security IDs was done. All error codes are supported in Windows 2000 or later versions of Windows unless specified otherwise. If the node was taken out of service intentionally, bring the node back into service.
ERROR_IRQ_BUSY 1120 A serial I/O operation was completed by another write to the serial port. RPC_S_SEC_PKG_ERROR 1826 Thread is not canceled. ERROR_SLIP_REQUIRES_IP 729 SLIP cannot be used unless the IP protocol is installed. Each hexadecimal code denotes a different memory address location that loaded instructions when the error was generated.
Please check that the card is inserted correctly, and fits securely. Note Deprecated in Windows Vista and later versions of Windows. ERROR_ROUTE_NOT_ALLOCATED 612 The specified route is not allocated. ERROR_REMOTE_DISCONNECTION 629 The specified port was disconnected by the remote computer. ERROR_RESTRICTED_LOGON_HOURS 646 The specified account is not permitted to log in at this time of day.
Note Supported in Windows Vista and later versions of Windows. ERROR_RASMAN_SERVICE_STOPPED 834 The connection was terminated because Remote Access Connection manager stopped. If you want to use this connection at login time, you must configure it to use the user name on the smart card. RPC_S_INVALID_VERS_OPTION 1757 There are no more members. ERROR_NO_WILDCARD_CHARACTERS 1418 Thread does not have a clipboard open.
Note Supported in Windows 7 and later versions of Windows. ERROR_PEAP_SERVER_REJECTED_CLIENT_TLV 845 Server rejected client authentication, due to unexpected TLV or value mismatch for a TLV. These services are required to establish an L2TP/IPSec connection. Note Supported in Windows 7 and later versions of Windows. ERROR_INVALID_PREFERENCES 846 Either VPN destination preference is not selected by the user or it is no longer valid. This documentation is archived and is not being maintained.
ERROR_SIGNAL_REFUSED 157 The segment is already discarded and cannot be locked. For trunks, this alarm should only occur when a system administrator has made a configuration change such as resetting the H.323 trunk. ERROR_HANGUP_FAILED 753 The connection could not be disconnected because it was created by the multi-protocol router. If the device has registered an inconsistent number of lines compared the Multi-Line report for this device, restart the device so that it can reregister all lines.
ERROR_SEM_TIMEOUT 122 The data area passed to a system call is too small. Note Deprecated in Windows Vista and later versions of Windows. ERROR_KEY_NOT_FOUND 627 Cannot find the specified key. ERROR_EAS_DIDNT_FIT 276 The extended attribute file on the mounted file system is corrupt. ERROR_REMOTE_AUTHENTICATION_FAILURE 924 Access was denied to the remote peer because the user name, password, or both is not valid on the domain.
Note Supported in Windows Vista and later versions of Windows. ERROR_USER_LOGOFF 830 The connection was terminated because user logged off. Also, confirm that database replication is working. Also, Ras Error Return Value 31 errors are very common during PC restarts that immediately follow a previous improper shutdown and recent virus or malware infection recovery. ERROR_SPECIAL_ACCOUNT 1372 Cannot perform this operation on this built-in special group.
ERROR_PPP_INVALID_PACKET 722 The PPP packet is not valid. If status shows 2, then replication is working. ERROR_INVALID_SHARENAME 1216 The format of the specified password is invalid.
Cisco recommends that you restart the Cisco CallManager service. Changes will not be effective until the service is restarted. ERROR_CANNOT_FIND_PHONEBOOK_ENTRY 623 Cannot find the specified phone book entry. Note Supported in Windows 7 and later versions of Windows. ERROR_EAPTLS_SCARD_CACHE_CREDENTIALS_INVALID 847 Cached smart card credential is invalid.
ERROR_MAX_WAN_INTERFACE_LIMIT 934 The maximum limit on the number of Demand Dial interfaces supported has been reached. ERROR_NO_SUCH_DOMAIN 1356 The specified domain already exists. ERROR_DDM_NOT_RUNNING 903 The Demand-dial Interface Manager (DDM) is not running. You can also go to the Real-Time Reporting Tool (RTMT) and check the Replication Status in the Database Summary page.
ERROR_ADDRESS_ALREADY_ASSOCIATED 1228 An address has not yet been associated with the network endpoint. If status shows 2, then replication is working.
ERROR_CLASS_ALREADY_EXISTS 1411 Class does not exist. RPC_S_NO_PRINC_NAME 1823 The error specified is not a valid Windows NT RPC error value. ERROR_NO_SUCH_ALIAS 1377 The specified account name is not a member of the local group. RPC_S_BINDING_INCOMPLETE 1820 Communications failure.
Unified CM initiated a restart to the phone to force it to re-home to a single node. Insufficient memory errors are often resolved by merely rebooting the device.
|
OPCFW_CODE
|
Could not construct partition: Cannot accept NaN weights.
Hi, using the following command, I get an error:
pos_patterns, neg_patterns = modiscolite.tfmodisco.TFMoDISco(
    hypothetical_contribs=attrs,
    one_hot=inputs,
    max_seqlets_per_metacluster=2000,
    sliding_window_size=20,
    flank_size=5,
    target_seqlet_fdr=0.05,
    n_leiden_runs=2,
)
The error message is below:
/opt/conda/lib/python3.10/site-packages/modiscolite/affinitymat.py:238: RuntimeWarning: invalid value encountered in true_divide
(Y_ / np.linalg.norm(Y_)).ravel())
/opt/conda/lib/python3.10/site-packages/modiscolite/affinitymat.py:237: RuntimeWarning: invalid value encountered in true_divide
scores_ = np.dot((X / np.linalg.norm(X)).ravel(),
---------------------------------------------------------------------------
BaseException Traceback (most recent call last)
Cell In[33], line 1
----> 1 pos_patterns, neg_patterns = modiscolite.tfmodisco.TFMoDISco(
2 hypothetical_contribs=attrs,
3 one_hot=inputs,
4 max_seqlets_per_metacluster=2000,
5 sliding_window_size=20,
6 flank_size=5,
7 target_seqlet_fdr=0.05,
8 n_leiden_runs=2,
9 )
File /opt/conda/lib/python3.10/site-packages/modiscolite/tfmodisco.py:310, in TFMoDISco(one_hot, hypothetical_contribs, sliding_window_size, flank_size, min_metacluster_size, weak_threshold_for_counting_sign, max_seqlets_per_metacluster, target_seqlet_fdr, min_passing_windows_frac, max_passing_windows_frac, n_leiden_runs, n_leiden_iterations, min_overlap_while_sliding, nearest_neighbors_to_compute, affmat_correlation_threshold, tsne_perplexity, frac_support_to_trim_to, min_num_to_trim_to, trim_to_window_size, initial_flank_to_add, prob_and_pertrack_sim_merge_thresholds, prob_and_pertrack_sim_dealbreaker_thresholds, subcluster_perplexity, merging_max_seqlets_subsample, final_min_cluster_size, min_ic_in_window, min_ic_windowsize, ppm_pseudocount, verbose)
307 if verbose:
308 print("Using {} positive seqlets".format(len(pos_seqlets)))
--> 310 pos_patterns = seqlets_to_patterns(seqlets=pos_seqlets,
311 track_set=track_set,
312 track_signs=1,
313 min_overlap_while_sliding=min_overlap_while_sliding,
314 nearest_neighbors_to_compute=nearest_neighbors_to_compute,
315 affmat_correlation_threshold=affmat_correlation_threshold,
316 tsne_perplexity=tsne_perplexity,
317 n_leiden_iterations=n_leiden_iterations,
318 n_leiden_runs=n_leiden_runs,
319 frac_support_to_trim_to=frac_support_to_trim_to,
320 min_num_to_trim_to=min_num_to_trim_to,
321 trim_to_window_size=trim_to_window_size,
322 initial_flank_to_add=initial_flank_to_add,
323 prob_and_pertrack_sim_merge_thresholds=prob_and_pertrack_sim_merge_thresholds,
324 prob_and_pertrack_sim_dealbreaker_thresholds=prob_and_pertrack_sim_dealbreaker_thresholds,
325 subcluster_perplexity=subcluster_perplexity,
326 merging_max_seqlets_subsample=merging_max_seqlets_subsample,
327 final_min_cluster_size=final_min_cluster_size,
328 min_ic_in_window=min_ic_in_window,
329 min_ic_windowsize=min_ic_windowsize,
330 ppm_pseudocount=ppm_pseudocount)
331 else:
332 pos_patterns = None
File /opt/conda/lib/python3.10/site-packages/modiscolite/tfmodisco.py:254, in seqlets_to_patterns(***failed resolving arguments***)
252 #apply subclustering procedure on the final patterns
253 for patternidx, pattern in enumerate(patterns):
--> 254 pattern.compute_subpatterns(subcluster_perplexity,
255 n_seeds=n_leiden_runs, n_iterations=n_leiden_iterations)
257 return patterns
File /opt/conda/lib/python3.10/site-packages/modiscolite/core.py:153, in SeqletSet.compute_subpatterns(self, perplexity, n_seeds, n_iterations)
150 sp_density_adapted_affmat /= np.sum(sp_density_adapted_affmat.data)
152 #Do Leiden clustering
--> 153 self.subclusters = cluster.LeidenCluster(sp_density_adapted_affmat,
154 n_seeds=n_seeds, n_leiden_iterations=n_iterations)
156 #this method assumes all the seqlets have been expanded so they
157 # all start at 0
158 subcluster_to_seqletsandalignments = OrderedDict()
File /opt/conda/lib/python3.10/site-packages/modiscolite/cluster.py:22, in LeidenCluster(affinity_mat, n_seeds, n_leiden_iterations)
19 best_quality = None
21 for seed in range(1, n_seeds+1):
---> 22 partition = leidenalg.find_partition(
23 graph=g,
24 partition_type=leidenalg.ModularityVertexPartition,
25 weights=affinity_mat.data,
26 n_iterations=n_leiden_iterations,
27 initial_membership=None,
28 seed=seed*100)
30 quality = np.array(partition.quality())
31 membership = np.array(partition.membership)
File /opt/conda/lib/python3.10/site-packages/leidenalg/functions.py:81, in find_partition(graph, partition_type, initial_membership, weights, n_iterations, max_comm_size, seed, **kwargs)
79 if not weights is None:
80 kwargs['weights'] = weights
---> 81 partition = partition_type(graph,
82 initial_membership=initial_membership,
83 **kwargs)
84 optimiser = Optimiser()
86 optimiser.max_comm_size = max_comm_size
File /opt/conda/lib/python3.10/site-packages/leidenalg/VertexPartition.py:456, in ModularityVertexPartition.__init__(self, graph, initial_membership, weights)
452 else:
453 # Make sure it is a list
454 weights = list(weights)
--> 456 self._partition = _c_leiden._new_ModularityVertexPartition(pygraph_t,
457 initial_membership, weights)
458 self._update_internal_membership()
BaseException: Could not construct partition: Cannot accept NaN weights.
Can you advise on what this means? I'm running modiscolite 2.0.7 in Python 3.10.8. inputs and attrs are both numpy arrays of shape (500, 400, 4) and neither contains NaNs.
From those errors, I'm not sure what's happening. I'd need to have access to the underlying data. Are your attributions the full hypothetical contributions?
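For what it's worth, the `true_divide` warnings in the log suggest a zero-norm window somewhere in the pipeline. A minimal illustration (not modiscolite's actual code) of how dividing by a zero norm produces the NaNs that leidenalg then rejects:

```python
import numpy as np

# An all-zero contribution window has norm 0; normalizing it gives 0/0 = NaN.
X = np.zeros(8)
with np.errstate(invalid="ignore"):   # same RuntimeWarning the log above shows
    normalized = X / np.linalg.norm(X)
print(np.isnan(normalized).all())     # True: NaNs then flow into the edge weights
```

So even if the input arrays contain no NaNs, a window whose values are all zero (or whose similarity scores all vanish) can still introduce them downstream.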
|
GITHUB_ARCHIVE
|
Most concise way of updating a python list?
Say I have a list of 100 unique integers ranging from 0 to 99. Now I have 10,000 new integers ranging from 0 to 199 (with duplicates, of course). I want to append every new integer to the list and constantly update the list. Whenever the same integer already exists in the list, the old one should be removed. The updated list should still be a list of unique integers.
My first instinct was to use set().add(), but a set is unordered so it cannot be used to maintain a sequence. I managed to write the following code, which works fine. It takes 47 ms to apply 10,000 new integers to a list of 100 unique integers.
import random
from time import time

random.seed(99)
newlen = 10000
lst = list(range(100))
random.shuffle(lst)
newlst = [random.randrange(200) for _ in range(newlen)]

t0 = time()
for new in newlst:
    idx = next((i for i, n in enumerate(lst) if n == new), None)
    if idx is not None:
        lst.pop(idx)
    lst.append(new)
print(time() - t0)
Output:
0.04717659950256348
However, I feel like there should be faster and more concise way to update a python list. Any suggestions?
How about OrderedDict.fromkeys(list)? This will remove duplicates and sort the list
I may be missing the point here, but it seems to me that all this does is make another list of unique integers in range(100), of the same size, just in a different order. What will your algorithm achieve that a second call to random.shuffle(lst) will not do?
So one must assume that your example is simply designed to show that repeatedly deleting values out of the middle of a list is inefficient.
It is, but what is the point in doing things that way? Is there a use-case that could not be better implemented as a set or a dict?
@BoarGules The lst and newlst here are just to show as an example. In reality I always have new integers generated by a function one by one over time, and I want to constantly update the list in place.
But why does the data structure you are using have to be a list, when the operation you are performing on it (repeated deletions) is known to have a time complexity of O(n)? (https://dev.to/global_codess/time-complexities-of-python-data-structures-3bja)
@BoarGules So, every time the list is updated, it has a new state. I just concatenate all integers to a string to represent this state. Of course the sequence is important.
I didn't say insertion sequence was unimportant. I said that a list is not the only nor necessarily the best way to preserve it. Depending on what you want to do, you might choose OrderedDict, collections.deque, or (in Python ≥ 3.6) simply dict. Try this: dic = {1: 0, 2: 0, 3: 0}; del(dic[2]); dic[2]="new". 2 will now be the last key in dic. All with O(1) operations.
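A minimal sketch of the dict-based approach applied to the question's exact setup (the variable names mirror the question's code):

```python
import random

random.seed(99)
lst = list(range(100))
random.shuffle(lst)
newlst = [random.randrange(200) for _ in range(10000)]

# dict keys keep insertion order (Python >= 3.7); pop and insert are O(1) on average
d = dict.fromkeys(lst)
for new in newlst:
    d.pop(new, None)   # remove the old occurrence, if any
    d[new] = None      # (re-)append at the end
lst = list(d)          # back to a list of unique integers, newest last
print(len(lst))
```

Popping a key and re-inserting it moves it to the end, which reproduces the remove-then-append semantics of the original loop without any O(n) list scans.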
There are various OrderedSet implementations available (but not in the standard library).
|
STACK_EXCHANGE
|
Second version of ESSD figures
Map now uses Stamen maps for ease of reproducibility and includes point sizes to represent data density
New plot showing number of obs by DOY
Still to do:
[ ] fix legend titles on map
[ ] add MAP/MAT plot
@bpbond do not merge yet
Thoughts?
Ooh beautiful! Only possible suggestion might be COSORE points in a more contrasting color? Not sure.
Everything is up to date. I am having trouble sourcing the mat-map.R script in the Rmd, otherwise this is ready to go
> source("mat-map.R")
Error in .getDataPath(path) : path does not exist: essd/
@stephpenn1 When I try to knit the RMarkdown, I get an error in line 403, data_meta not found. 😕
When I knit I get an error about not having the google sheet authors... but line 353 is
csr_table(table = 'data') %>% left_join(db) -> data_meta
That doesn't appear in the diff, so may not have gotten pushed?
@stephpenn1 I'd like to turn back to this later this week...could you push up any code you have by then, and fx me needed WorldClim file(s)? Thank you.
@bpbond Everything is up to date on my end. I addressed your comment abt the path, see above. Also, lines 14 and 15 download the WorldClim data, so you shouldn't need my local files right?
👏 @stephpenn1
|
GITHUB_ARCHIVE
|
The GD Perl module is a collection of methods and constants for reading, manipulating, and writing color GIF files. Although it is more limited in scope than the ImageMagick package, its size and speed make it well-suited for dynamically generating GIF graphics via CGI scripts. GD has become the de facto graphics manipulation module for Perl; other modules such as GIFgraph (described in Chapter 6) extend the GD toolkit to easily accommodate specific graphics tasks such as creating graphs and charts.
The GD Perl module is actually a port of Thomas Boutell’s gd graphics library, which is a collection of C routines created for manipulating GIFs for use in web applications. Early versions of the GD.pm module simply provided an interface to the gd library, but now GD has its own library that is optimized for use with Perl. This module was ported by Lincoln D. Stein, author of the CGI.pm modules.
This chapter starts with an overview and a sample CGI application that implements a web-based “chess server” that interactively manipulates the pieces on a chess board. The remainder of the chapter is a more detailed description of the GD methods and constants, with additional information on more advanced topics such as using GD’s polygon manipulation functions.
Scripts that use the GD module to create graphics generally have five parts, which perform the following functions: importing the GD package, creating the image, allocating colors in the image colormap, drawing on or manipulating the image, and writing the image to a file, pipe, or web browser. After you’ve installed the GD module, just follow these five steps:
First you must import the GD methods into your script’s namespace with the use function. The command
use GD
will give you access to all of the methods and constants of the GD::Image, GD::Font, and GD::Polygon classes:
The Image class provides the means for reading, storing, and writing image data. It also implements a number of methods for getting information about and manipulating images.
The Font class implements a number of methods that store and provide information about fonts used for rendering text on images. Each of the fonts are effectively hard-coded; they are described as a number of bitmap matrices (similar to XBM files) that must be compiled as part of the source during installation on your system. GD provides a limited number of fonts; the GD::Font class exists to make it easier to expand font support in the future.
The Polygon class implements a number of methods for managing and manipulating polygons. A polygon object is a simple list of three or more vertices that define a two-dimensional shape.
Create a new image. To make a new image, you can create a new, empty image object of a given width and height, or you can read an image from a file. To create an empty image, use the new method of the Image class, as in:
# Create a new, empty 50 x 50 pixel image
$image = new GD::Image(50, 50) || die "Couldn't create image";
All image creation methods will return undef on failure. If the method succeeds, it will return a data structure containing the decoded GIF data for the image and store it in the given scalar value. This scalar can only contain one image at a time.
GD supports three stored file formats: GIF, XBM (black and white X-bitmaps), and GD files. A GD format file is a file that has been written with the gd( ) method. To read in the image data from a file, use newFromGif( ), newFromXbm( ), or newFromGd( ), depending on the format of the stored file. Each of these methods takes a filehandle as an argument, so you must open the file before you read the image data from it:
# Read an image from a GIF file
open (GIFFILE, "beatniks.gif") || die "Couldn't open file!";
$image = newFromGif GD::Image(\*GIFFILE) || die "Couldn't read GIF data!";
close GIFFILE;

# Read an image from an XBM file
open (XBMFILE, "ginsburg.xbm") || die "Couldn't open file!";
$image = newFromXbm GD::Image(\*XBMFILE) || die "Couldn't read XBM data!";
close XBMFILE;

# Read an image from a GD file
open (GDFILE, "ferlinghetti.gd") || die "Couldn't open file!";
$image = newFromGd GD::Image(\*GDFILE) || die "Couldn't read GD data!";
close GDFILE;
You are now ready to manipulate the image or write it to another file or to STDOUT.
If you are going to be doing any drawing or manipulation of the image, you will need to get information about the colors available in the image. GD images support a maximum of 256 colors, which are stored in the image’s color table. You may need to add new colors to an image’s color table, or you may need to get color indices of existing colors. Use the colorAllocate( ) method with a list of decimal red, green, and blue values to add a new color to the color table. This method returns the index of the color in the color table, which you should store for use with drawing methods requiring a color index:
$red = $image->colorAllocate(255, 0, 0);
$grey2 = $image->colorAllocate(51, 51, 51);
Use colorExact( ) to determine the color index of a color already in the color table. Use colorsTotal( ) to find out how many colors are currently allocated.
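Both lookups return plain integers, as in this short sketch (the RGB values are just examples):

```perl
# Look up an existing color; colorExact returns -1 if there is no exact match
my $red_index = $image->colorExact(255, 0, 0);

# Count the colors currently allocated in the image's color table
my $total = $image->colorsTotal;
```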
To draw on an image, use one of the graphics primitives. For example, to draw a 100 × 100 rectangle with purple lines in the upper left-hand corner of an image, use:
$purple = $image->colorAllocate(255, 0, 255);
$image->rectangle(0, 0, 100, 100, $purple);
It is possible to use any of the drawing primitives with specially defined brushes by specifying the gdBrushed constant instead of a color. You can also fill areas with a tiled pattern with the gdTiled constant.
GD provides a special class, GD::Polygon, for managing information about polygons. First create a new polygon object with new( ), add points to it with the addPt( ) method, then draw it to an image with the polygon( ) drawing primitive. To draw the same purple rectangle above as a polygon, use:
# Create a new polygon object
my $polygon = new GD::Polygon;

# Add each of the polygon's vertices
$polygon->addPt(0,0);
$polygon->addPt(0,100);
$polygon->addPt(100,100);
$polygon->addPt(100,0);

# Allocate the color purple in the image.
# Note that if this is the first color allocated in $image,
# it will become the background color.
#
$purple = $image->colorAllocate(255, 0, 255);

# Now draw the polygon in purple on the image
$image->polygon($polygon, $purple);
When you are finished manipulating the image, you can write it to a file or to STDOUT. To write to a file, you must first open the file for writing with the open command. If you are working on a platform that makes a distinction between text and binary files (such as Windows 95/NT), be sure that you are writing in binary mode by calling the binmode( ) function first. To write the data as a GIF file, call the gif( ) method, which returns the image data in GIF format:
open OUTFILE, ">output.gif";   # Open the file for writing
binmode OUTFILE;               # Make sure we're in binary mode
print OUTFILE $image->gif;     # Print GIF data to the file
close OUTFILE;
You can also write to a GD format file with the gd( ) method. Note that you can read from XBM files but you cannot write to them; you’ll need an external utility if (for some reason) you want to do that. Also note that you cannot use GD manipulation routines directly on data generated by a gd( ) method call; all method calls should be made on the original GD::Image object.
GD images can also be handed off to other packages such as PerlMagick or GIFgraph for additional manipulation. See Chapter 6 for this discussion.
|
OPCFW_CODE
|
Over the past few days I had the opportunity to participate in a closed alpha playtest of WotC’s new virtual tabletop, currently named “Dungeons and Dragons Digital.” There were some other creators also in the alpha whose names you will likely recognize, but it’s not my place to list them off.
The playtest lasted roughly a week, and all of the playtesters were invited to a Discord server to discuss, schedule games, and share feedback. We were given the option of scheduling a game to run a playtest scenario, and a developer would sit in to observe if scheduling allowed.
I had plans to explore the VTT with other members of the RPGBOT.Podcast team, but the scheduling didn’t line up, so I was only able to test the VTT solo. This involved a lot of clicking buttons and trying to see what I could do and how all of the options worked. Other participants may have more realistic impressions of how the VTT works in practice if they were able to actually run a session.
The playtest gave us access to a single pre-built area which was something like your stereotypical D&D tavern with a few surrounding out-buildings like a stable and a well. Encounters were scattered around the area, as were tokens intended for those various encounters.
My general impressions: It’s very pretty, but this is definitely an alpha.
The tokens, structures, and terrain looked good. The 3d environment is attractive, and clearly quite a bit of work was put into the models. Based on chatter in the discord, the VTT performed well on a variety of devices with varying tech specs.
There is a lot of functionality that still feels like early stages, and the user interface is frequently confusing. It’s not clear what various buttons do, how to accomplish a lot of simple tasks, or how to clean up mistakes after you click the wrong button. There is a lot of room for improvement here, but a lot of it could also be fixed by improving button text.
There appears to be support for playing on a 2d surface, but it’s not clear to me how well it will work in play. You can place a 2d map as an object on the map, then set the content by plugging in a web URL to populate it. Based on a conversation with the dev team, this is intended to support importing a 2d map image. I imagine that you could drop this on a blank area, scale the map, and drop tokens onto it, but it wasn’t obvious to me how to do any of that.
This would allow people to easily play with adventures that don’t already have pre-built maps in the VTT, which is good because there have been long-standing concerns that any non-official adventures would be frustrating to play, if not impossible.
Based on information we were given, the VTT is slated for release in 2025. A lot can change about software in a time window that big. I think it’s possible that WotC could have something really impressive by then. For now, I’m hoping for a beta some time in the future.
|
OPCFW_CODE
|
/ R Programming read.csv
Data is the lifeblood of any computational analysis, predictive modeling, or statistical research. In R programming, a common way to import data from external sources into your workspace is through the use of the read.csv function. This function allows you to read and import data stored in CSV files directly into R. In this article, we will go through the use of the read.csv function in R programming, illustrating its use with practical examples, and discussing potential errors and how to avoid them.
What is read.csv?
The read.csv function is one of the most commonly used functions for importing data in R. It is part of the utils package, and is used for reading in data stored in CSV (Comma Separated Values) files.
The syntax of the read.csv function is as follows:
read.csv(file, header = TRUE, sep = ",")
In this syntax, 'file' is the name of the file we want to read; 'header' is a logical argument indicating whether the first row of the file contains the names of the variables; 'sep' is the field separator character.
Reading a CSV File
Let's say you have a CSV file named 'data.csv' in your current working directory. You can read this file into R using the following command:
data <- read.csv("data.csv")
This will read the 'data.csv' file and store its contents in the variable 'data'.
Setting the Working Directory
By default, R looks for files in the current working directory. To check your current working directory, use the getwd() function. To set a different working directory, use the setwd() function:
setwd("/path/to/your/directory")
Remember to replace "/path/to/your/directory" with the actual path to your directory.
The 'header' Parameter
By default, R assumes that the first row of your CSV file contains the variable names. If this is not the case, you can tell R not to treat the first row as headers by setting header = FALSE:
data <- read.csv("data.csv", header = FALSE)
In this case, R will automatically assign variable names (V1, V2, and so on).
The 'sep' Parameter
In a CSV file, values are typically separated by commas. However, other characters can be used as well. If your CSV file uses a different separator, you can specify it using the 'sep' parameter:
data <- read.csv("data.csv", sep = ";")
This tells R that values are separated by semicolons.
Tips, Tricks and Common Errors
One of the most common errors when using read.csv is forgetting to set the correct working directory. If R cannot find your file, it will give an error. Always make sure that your working directory is set correctly.
Another common error is mismatched data types. R automatically assigns data types to your variables based on the contents of your CSV file. However, it can sometimes get this wrong. To avoid this, you can specify the data types of your variables using the colClasses parameter.
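For example, column types can be pinned down explicitly; the file name and the three types below are assumptions for illustration:

```r
# Force the types of three columns instead of letting R guess them
data <- read.csv("data.csv",
                 colClasses = c("character", "numeric", "factor"))
str(data)  # inspect the resulting column types
```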
Large files can take a while to read. If you're dealing with large CSV files, you might want to consider using the data.table package's fread function instead of read.csv. It has a similar syntax but is usually much faster.
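A minimal sketch of that faster alternative, assuming the data.table package is installed (install.packages("data.table")):

```r
library(data.table)
# fread auto-detects the separator and header row, and is much faster on large files
dt <- fread("data.csv")
```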
The read.csv function is a powerful tool for importing data in R. With a basic understanding of its syntax and parameters, you can start importing your own data and performing analyses. Just remember to always check your working directory and the structure of your CSV file to avoid common errors. Happy coding!
|
OPCFW_CODE
|
With RTA Supernodes alpha testnet release, GRAFT makes a first step into the era of cryptocurrency being a viable option at the point of sale, providing a level of service comparable to that of the credit/debit card networks. With the Supernode release, GRAFT is off to build an ecosystem of Supernodes and Service brokers – some are used to perform quick (credit-card speed) transaction authorizations, some to provide external system connectivity, others to act as service brokers performing currency exchanges and hosting various applications for merchants.
Supernodes for quicker transactions
Supernodes (aka Masternodes) are gaining traction in the blockchain space as a means of expediting transactions. They work by constructing a second-layer network around the nodes that maintain the blockchain itself and are able to provide additional functionality on top of the blockchain such as “off-chain” transaction processing or governance. Supernodes/Masternodes usually run on a Proof-of-Stake model requiring staking for collateralization and provide passive income to their owners, which helps explain their popularity.
The GRAFT supernode is the backbone of its blockchain’s second layer. It enables many different functionalities including real-time authorizations of cryptocurrency payments, the hosting of service brokers and GRAFT’s decentralized exchange, smart contract merchant token and v-chain capabilities, various cryptocurrency transaction types, merchant offline transaction approvals, distributed identity provider services, network participant reputation scores, and much more.
Decentralized payment network: GRAFT’s answer to crypto payment woes
Enabling cryptocurrency to pay for goods and services has been a hot topic recently, with integrated payment providers like BitPay and Coinbase enabling e-commerce payments via a gateway solution of their own. These services act as fully integrated service providers performing settlement and payout functions. Multiple (predominantly online) merchants have rolled out alternative payment options based on these integrated services, a notable example being Expedia. Merchants, however, have been rolling them back, citing high unpredictable fees, high risks and lack of universal coverage. Stripe’s short-lived experiment to offer cryptocurrency payments is another example of the centralized approach failure.
GRAFT is working to give the space a decentralized equivalent of a payment network (such as the ones provided by Visa, Discover, and Mastercard) by offering a network fabric that connects gateways and services together, crossing locales and making sure the network works with existing point-of-sale solutions and payment terminals. In fact, GRAFT utilizes the recently-added ability of leading payment terminals to run 3rd party applications on their platforms. Most notable is GRAFT’s integration with Verifone’s Engage line of terminals.
“GRAFT’s solution makes a lot of sense both technically and economically, staying true to decentralized model of cryptocurrency, and enabling regular people to service various parts of the network. We look at it as people ARE the network,” said Dan I, GRAFT blockchain co-creator. “At the end of the day, the network functions – hosting blockchain nodes, doing authorizations/validations, providing exchange services, ensuring compliance, taking care of distribution and support, and even offering credit is best done by lots of individuals or small businesses and that’s the promise of ultimate decentralization.”
GRAFT (which stands for Global Real-time Authorizations and Funds Transfers) was originally conceived by Slava Gomzin, author of ‘Hacking Point of Sale’ and ‘Bitcoin for Nonmathematicians’ – some of the seminal work in the point of sale and cryptocurrency spaces. The network is designed to address all issues that cryptocurrency faced at the point of sale – speed, fee, and privacy, while taking advantage of the “smart” nature of digital money backed by the smart contract capabilities, and doing it all in a decentralized manner. The project is an open platform / open source and the network is free to use with the low 0.5% network fee paying Supernodes for their service.
GRAFT is the first cryptocurrency that combines the benefits of the CryptoNote protocol, which provides absolute privacy to all participants, with a second-layer network of supernodes, enabling fast authorizations and instant exchanges. This combination of absolute privacy, instant authorizations and exchanges, and a network of service brokers is a unique feature that differentiates GRAFT from all other solutions and enables wide cryptocurrency acceptance in retail environments.
More information about GRAFT blockchain can be found on WWW.GRAFT.NETWORK
Follow BitcoinNews.com on Twitter at https://twitter.com/bitcoinnewscom
Telegram Alerts from BitcoinNews.com at https://t.me/bconews
Image Courtesy: GRAFT
|
OPCFW_CODE
|
Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for Android, Internet of Things, Intel® RealSense™ Technology, and Windows to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathon’s, contests, roadshows, and local events.
This article introduces Android* Studio (Beta), the new Android* integrated development environment (IDE), which will eventually replace the Eclipse* ADT* Bundle. As a use case, this article discusses the flow of moving an Android project currently developed using Eclipse ADT to using Android Studio.
Caution: When this article was written, Android Studio was still in Beta. You may encounter not-yet-implemented features and bugs. If you are not comfortable with a Beta product, you may want to stay with the development environment you are currently using, such as the ADT Eclipse.
In the past several years, Android had been encouraging and enabling developers to use the Eclipse ADT (Android Developer Tools) Bundle as the app development environment. This was changed in recent months after Android Studio (Beta) was announced and available for download. In the past several months, we have seen this new IDE improving. As the Android developer community was informed, Android Studio will eventually be the official Android IDE. For an Android developer currently using ADT, it is good to proactively migrate to the Android Studio IDE since, as we will show you in this article, the migration is fairly straightforward.
Unlike the ADT Bundle, which is based on Eclipse IDE and the Apache Ant* build system, Android Studio is powered by IntelliJ* IDEA and the Gradle* build system. Although the underlying components and technologies are very different, Android provides tools and process flows to support the transition from using ADT to Android Studio.
The discussion in this article is based on JDK version 1.8.0_25, Android Studio (Beta) version 0.9.1, and ADT version 23.0.2, on a 64-bit Windows* 8.1 system.
Installation and Setup
To install and run Android Studio, you are required to have JDK 6 or above. To see if you have the required version of JDK, open a Command Prompt window, and enter "javac -version" as your command. You should see a javac version number. Make sure it is greater than 1.6, otherwise go to http://www.oracle.com/technetwork/java/javase/downloads/index.html to download and install a proper version of JDK. You may need to add a system environment variable "JAVA_HOME" with the JDK installation directory in the value field.
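The version check described above can be scripted; the fallback message here is illustrative:

```shell
# Print the JDK compiler version (JDK 8 and earlier write it to stderr).
# Android Studio requires JDK 6 (1.6) or newer.
javac -version 2>&1 || echo "javac not found - install a JDK first"
```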
There is no Android SDK or SDK tools bundled in the Android Studio download. You may choose to copy an existing Android "sdk" folder from the IDE you were using; for example, from the ADT installation directory, to the same directory you are going to install the Android Studio, such as C:\android. If there is no Android SDK currently on your system, you may visit https://developer.android.com/sdk/index.html?hl=i and go to the "Get the SDK for An Existing IDE" section to download a copy of standalone SDK tools for Windows. For convenience, you may have the SDK Tools installer put everything under the C:\android\sdk folder.
To download Android Studio Beta, visit the official download page:
The package is a .zip file. Simply extract the .zip file to a folder, for example, C:\android. To launch Android Studio, simply go to C:\android\android-studio\bin and run "studio64.exe".
If things go smoothly, you will see an Android Studio start window similar to Figure 1.
Figure 1 - Android Studio Start Window
Migration from Eclipse ADT to Android Studio
This article makes the assumptions that 1) you have been an experienced Android app developer, and 2) you have been using the ADT on Eclipse, the most popular Android app development environment. With the advent of Android Studio, you may want to migrate your current projects under development or under maintenance from ADT to Android Studio. In our case we have a restaurant business app (Figure 2) which explores many advanced features provided by Android SDK, such as animation, sensors, Geolocation, and NFC. The app was developed using ADT.
Figure 2 - A Restaurant Android business app
We will provide a step-by-step guide on how to migrate a project to Android Studio, so that its future development can be continued under the new development environment.
Exporting the Gradle Build Files in ADT Eclipse
You may have been told that in using Android Studio, you can directly import an ADT project. However based on our experiments, exporting the Gradle build files from your ADT project then importing the generated build files into Android Studio is a more reliable way.
On your ADT Eclipse IDE (Figure 3), right click at the open project, which in our case here is the "RestaurantApp" project in the "Package Explorer" window, and select "Export".
Figure 3 - The Eclipse ADT IDE
In the "Export" dialog box, select "Android", then "Generate Gradle build files" (Figure 4). The export dialog boxes will guide you through the process of generating Android Studio build files. On the final step, you will need to check the "Force overriding of existing files" box, then press "Finish".
Figure 4 - In ADT's Export project dialog box, select the option to generate Gradle build files
After completing the export process, we can see there is a build.gradle file generated at the project root "RestaurantApp" directory. This is the file we will use to import the project in Android Studio.
As we have mentioned, while ADT Eclipse uses Apache Ant to handle the project build, Android Studio adopts a different build system called Gradle. In Gradle, project builds are driven by build scripts, such as build.gradle files, which are written in a dynamic language called Groovy*. We can take a look at the build.gradle file (Code Example 1) to get an idea of its structure. The most important line is the first one, which applies the "android" plugin to the project. The plugin adds a number of tasks to the project to accomplish the build requirements.
apply plugin: 'android'

dependencies {
    compile fileTree(dir: 'libs', include: '*.jar')
}

android {
    sourceSets {
        main {
            java.srcDirs = ['src']
            resources.srcDirs = ['src']
            aidl.srcDirs = ['src']
            renderscript.srcDirs = ['src']
            res.srcDirs = ['res']
            assets.srcDirs = ['assets']
        }
    }
}
Code Example 1 - The build.gradle file generated for the RestaurantApp project **
After exporting the Gradle build files, we can close the project and exit ADT Eclipse.
Importing Projects to Android Studio
Now we start Android Studio, and on the Start window, select the "Import non-Android Studio Project" (Figure 5).
Figure 5 - The import project option in Android Studio Start window
On the next screen, we browse to the RestaurantApp project folder, and select the build.gradle file generated by ADT Eclipse (Figure 6) and click OK.
Figure 6 - Select the Gradle file to import
That is it! In Android Studio, we now have our RestaurantApp project (Figure 7). We can continue our development under the new IDE.
Figure 7 - The imported project in Android Studio
Start a New Android Studio Project
After we become familiar with Android Studio, we can see it is a powerful tool for developing apps running on all kinds of Android form factors, which include phones, tablets, Android TVs, and Android Wear (Figure 8).
So, starting to use this tool now provides great advantages for Android developers.
Figure 8 - Android Studio supports the development for new Android form factors
One thing we should note is that Android Studio is still in Beta. Some features are still under development and not yet included. For example, at the time when this article was written, NDK support had not yet been integrated in the tool, but as Android promises, it will be included soon.
Other Related References
Miao Wei is a software engineer in Intel’s Software and Services Group. He currently works on the Intel® Atom™ processor scale enabling projects.
**This sample source code is released under the Intel Sample Source Code License Agreement
Intel is inside more and more Android devices, and we have tools and resources to make your app development faster and easier.
|
OPCFW_CODE
|
<?php
namespace VIITech\Helpers;
use Carbon\Carbon;
use Exception;
use Google_Client;
use GuzzleHttp\Client;
use GuzzleHttp\Exception\GuzzleException;
use Illuminate\Support\Str;
use VIITech\Helpers\Constants\Attributes;
use VIITech\Helpers\Constants\CarbonFormat;
use VIITech\Helpers\Constants\EnvVariables;
use VIITech\Helpers\Constants\Values;
/**
* Google Helpers
*/
class GoogleHelpers
{
/**
* Validate Google reCaptcha
* @param string $google_recaptcha_secret
* @param string $g_recaptcha_response
* @return boolean
*/
public static function validateRecaptcha($google_recaptcha_secret, $g_recaptcha_response)
{
try {
$client = new Client();
$response = $client->post(
'https://www.google.com/recaptcha/api/siteverify', ['form_params'=>
[
'secret' => $google_recaptcha_secret,
'response' => $g_recaptcha_response
]
]
);
return json_decode((string) $response->getBody())->success;
} catch (Exception | GuzzleException $e) {
return false;
}
}
/**
* Validate Google Token
* @param string $google_client_id
* @param string $token
* @return boolean
*/
public static function validateGoogleToken($google_client_id, $token)
{
try {
$client = new Google_Client([Attributes::CLIENT_ID => $google_client_id]);
$payload = $client->verifyIdToken($token);
return (bool) $payload;
} catch (Exception $e) {
return false;
}
}
/**
* Parse Google Calendar
* @param string $url
* @param string|null $calendar_id
* @param string|null $timeMin
* @param string|null $timeMax
* @param string|null $updatedMin
* @param bool $debug
* @return array|false
*/
static function parseGoogleCalendar($url, $calendar_id = null, $timeMin = null, $timeMax = null, $updatedMin = null, $debug = false){
if(is_null($calendar_id)){
$calendar_id = self::validateGoogleCalendarLink($url);
}
if(!$calendar_id){
return false;
}
if(is_null($timeMin)){
$timeMin = GlobalHelpers::now(null, null, 0, -6);
}
if(is_null($timeMax)){
$timeMax = GlobalHelpers::now(null, null, 0, 18);
}
if(!is_null($updatedMin)){
$updatedMin = "&updatedMin=$updatedMin";
}
$json_response = null;
$calendar_events_array = [];
$deleted_events_array = collect();
try {
$client = new Client(['base_uri' => 'https://www.googleapis.com',]);
$google_api_key = env(EnvVariables::GOOGLE_CALENDAR_API);
$singleEvents = "true";
$showDeleted = "true";
if($debug){
dd("https://www.googleapis.com/calendar/v3/calendars/" . urldecode($calendar_id) . "/events?singleEvents=$singleEvents&showDeleted=$showDeleted&orderBy=startTime&timeMin=$timeMin&timeMax=$timeMax$updatedMin&key=$google_api_key");
}
$response = $client->get("calendar/v3/calendars/" . urldecode($calendar_id) . "/events?singleEvents=$singleEvents&showDeleted=$showDeleted&orderBy=startTime&timeMin=$timeMin&timeMax=$timeMax$updatedMin&key=$google_api_key");
$response_body = (string)$response->getBody();
$json_response = json_decode($response_body);
} catch (Exception | GuzzleException $e) {
SlackHelpers::sendSlackMessage($e->getMessage());
}
if(GlobalHelpers::isValidObject($json_response) && isset($json_response->items) && GlobalHelpers::isValidObject($json_response->items)) {
$calendar_timezone = $json_response->timeZone;
if(!GlobalHelpers::isValidVariable($calendar_timezone)){
$calendar_timezone = Values::DEFAULT_TIMEZONE;
}
foreach ($json_response->items as $event) {
// delete the event
if(isset($event->status) && $event->status == Attributes::CANCELLED){
$deleted_events_array->add($event->id);
continue;
}
if(!isset($event->summary)){
continue;
}
$name = $event->summary;
$all_day = false;
$start_date = null;
$end_date = null;
$event_timezone = $calendar_timezone;
if(isset($event->start->dateTime)) {
$event_timezone = $event->start->timeZone ?? $calendar_timezone;
$start_date = Carbon::parse($event->start->dateTime, $event_timezone);
}else if(isset($event->start->date)) {
$start_date = Carbon::parse($event->start->date, $calendar_timezone);
}
if(isset($event->end->dateTime)) {
$event_timezone = $event->end->timeZone ?? $calendar_timezone;
$end_date = Carbon::parse($event->end->dateTime, $event_timezone);
}else if(isset($event->end->date)) {
// Google reports all-day events with an exclusive end date (the next day at 00:00), which made events appear one day too long in the app; subtract a second and zero the time so the end falls on the actual last day
$end_date = Carbon::parse($event->end->date, $calendar_timezone)->subSecond()->hour(0)->minute(0)->second(0);
}
if(is_null($start_date)){
continue;
}else if(is_null($end_date)){
continue;
}
// an event is all-day when both start and end carry date-only values
if(isset($event->start->date) && isset($event->end->date)){
$all_day = true;
}
if($all_day){
$start_date = $start_date->startOfDay();
$end_date = $end_date->endOfDay();
}
$start_date_formatted = $start_date->format(CarbonFormat::C);
$end_date_formatted = $end_date->format(CarbonFormat::C);
$location = $event->location ?? null;
$description = $event->description ?? null;
$calendar_events_array[] = [
Attributes::START_DATE => $start_date_formatted,
Attributes::END_DATE => $end_date_formatted,
Attributes::ALL_DAY => $all_day,
Attributes::GOOGLE_EVENT_ID => $event->id,
Attributes::DESCRIPTION => $description,
Attributes::LOCATION => $location,
Attributes::TITLE => $name,
Attributes::TIMEZONE => $event_timezone
];
}
}
return [
Attributes::GOOGLE_EVENTS => $calendar_events_array,
Attributes::DELETED_EVENTS => $deleted_events_array->values()->unique()->toArray(),
Attributes::TIMEZONE => $calendar_timezone ?? Values::DEFAULT_TIMEZONE
];
}
/**
* Validate Google Calendar Link
* @param $url
* @return string|false
*/
static function validateGoogleCalendarLink($url){
if(is_null($url)){
return false;
}
if(!Str::startsWith($url, "https://calendar.google.com")){
return false;
}
$calendar_id = str_replace("https://calendar.google.com/calendar/ical/", "", $url);
$calendar_id = str_replace("/public/basic.ics", "", $calendar_id);
$calendar_id = str_replace("https://calendar.google.com/calendar/u/0/embed?src=", "", $calendar_id);
$calendar_id = str_replace("https://calendar.google.com/calendar/embed?src=", "", $calendar_id);
if(Str::startsWith($calendar_id, "http")){
return false;
}
if(Str::contains($calendar_id, "&")){
$calendar_id = substr($calendar_id, 0, strpos($calendar_id, "&"));
}
return trim($calendar_id);
}
}
|
STACK_EDU
|
White paper on security and cluster isolation for kubernetes.
Ran info
[ ] Configuration issues
[ ] Disk in use
[ ] UUID issue even though enableUUID seems to be set
{"log":"I0213 09:48:43.502621 1 operation_executor.go:620] AttachVolume.Attach succeeded for volume \"kubernetes.io/vsphere-volume/[netapp01ads02] k8s/myDisk\" (spec.Na
me: \"test-volume\") from node \"k8s.minionpp-01\".\n","stream":"stderr","time":"2017-02-13T09:48:43.502834346Z"}
{"log":"I0213 09:48:43.599644 1 node_status_updater.go:135] Updating status for node \"k8s.minionpp-01\" succeeded. patchBytes: \"{\\\"status\\\":{\\\"volumesAtt
ached\\\":[{\\\"devicePath\\\":\\\"/dev/disk/by-id/wwn-0x6000c2931824b17ebd31f2dc365f0d67\\\",\\\"name\\\":\\\"kubernetes.io/vsphere-volume/[netapp01ads02] k8s/myDisk\\\"}]}}
\" VolumesAttached: [{kubernetes.io/vsphere-volume/[netapp01ads02] k8s/myDisk /dev/disk/by-id/wwn-0x6000c2931824b17ebd31f2dc365f0d67}]\n","stream":"stderr","time":"2017-02-13
T09:48:43.599797118Z"}
{"log":"E0213 09:48:46.750024 1 vsphere.go:1062] disk uuid not found for [netapp01ads02] k8s/myDisk. err: No disk UUID found\n","stream":"stderr","time":"2017-02-13T09:
48:46.750250047Z"}
{"log":"E0213 09:48:46.750107 1 vsphere.go:1044] Failed to check whether disk is attached. err: No disk UUID found\n","stream":"stderr","time":"2017-02-13T09:48:46.7502
7874Z"}
@kerneltime the problem was a permission issue on the vCenter. Do you think it is possible to add to the documentation a list of the permissions that are needed?
@cvauvarin yes of course, can you elaborate on the permissions problem? Last time you mentioned that the disk was attached to the VM; I am not sure why permissions made a difference.
Yes, the disk was attached to the VM but the kube-controller could not get the UUID of the disk. What I did was use another user with full permissions on the vCenter, and it worked without any issue. Then we tried to add permissions to the original user until it worked.
Here is the list of the permissions we applied :
Datastore :
Allocate space
Browse datastore
Low level file operations
Update virtual machines files
Update virtual machines metadata
Virtual machine
Configuration
Add existing disk
Add new disk
Remove disk
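For reference, a role carrying the privileges above could be created from the command line with govc; this is only a sketch, and both the role name `k8s-vcp` and the exact privilege IDs should be verified against your vCenter version:

```
govc role.create k8s-vcp \
  Datastore.AllocateSpace \
  Datastore.Browse \
  Datastore.FileManagement \
  Datastore.UpdateVirtualMachineFiles \
  Datastore.UpdateVirtualMachineMetadata \
  VirtualMachine.Config.AddExistingDisk \
  VirtualMachine.Config.AddNewDisk \
  VirtualMachine.Config.RemoveDisk \
  System.View
```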
I don't think it was a problem of reading the UUID; do you think it could be a problem writing some metadata?
Thank you for the additional info.
This issue tracks the list of privileges the user needs to specify in vSphere UI in order to configure vSphere cloud provider.
Relevant https://bugzilla.eng.vmware.com/show_bug.cgi?id=1791819
A hacky reference between API spec and UI spec
auth-privs.txt
Partial list
Privileges
FindByIp => System.View
MakeDirectory => Datastore.FileManagement # https://bugzilla.eng.vmware.com/show_bug.cgi?id=1791819
The goal here is to have a white paper that explains how the credentials used in Kubernetes can be isolated, what level of isolation is achieved, and what gaps a customer should be aware of.
Updated Getting Started Guide with minimal set of privileges required for vSphere Cloud Provider
https://github.com/kubernetes/kubernetes.github.io/pull/2989
Updated k8s-anywhere prerequisites section with privileges required for Kubernetes-Anywhere.
https://github.com/kubernetes/kubernetes-anywhere/pull/360
Updated documentation with the set of roles and permissions required for the Kubernetes vSphere cloud provider.
|
GITHUB_ARCHIVE
|
The memory of a computer holds (stores) program instructions (what to do),
data (information), operands (affected, manipulated, or operated upon data), and
calculations (ALU results). The CPU controls the information stored in memory.
Information is fetched, manipulated (under program control) and/or written (or
written back) into memory for immediate or later use. The internal memory of a
computer is also referred to as main memory, global memory, main storage, or
primary storage. Do not confuse it with secondary or auxiliary memory (also called
mass storage) provided by various peripheral devices. In newer computers you also
will encounter a number of small and independent local memories that are used for
a variety of purposes by embedded microprocessors. You have already learned
about cache memory that lies between the CPU and main memory.
After completing this chapter, you should be able to:
Describe the organization of memory
Describe the operation of main memory
Recognize the types of memory and describe how they function
TOPIC 1: MEMORY ORGANIZATION
The main memory of a computer is used for storing programs, data, calculations, and operands. Memory is used in all types of computer systems, including mainframes, minicomputers, and microcomputers. The amount of main memory each type of computer has varies according to the configuration. A wide variety of memory types is in use. To simplify our discussion, we have divided memory into two general categories: read/write (random access) memory and read-only memory. Within the read/write group, we discuss magnetic (core and film) memories and semiconductor (static and dynamic) memories. Read-only memory can be subdivided into factory-programmed parts called read-only memory (ROM) and user-programmable devices called programmable read-only memory (PROM). This classification system is illustrated in figure 6-1. Let's take a look at some of the terminology used with regard to the computer's memory.
The following terms need to be explained at this point:
Memory: Memory generally refers to the actual hardware where the programs, data, calculations, or operands are stored.
Memory address: A memory address is a particular location in a larger memory array. Usually one memory address contains one word of data. A word is one packet of information for the computer and is usually composed of many bits. Computers exist that use 1-bit words, 8-bit words, 16-bit words, 32-bit words, and 64-bit words. Handling computer data in 8-bit words is so common that the 8-bit word has its own name, the byte. Half of a byte is called a nibble (4 bits).
Capacity (memory size): Capacity is an important aspect of system performance; it is a useful and convenient way to describe the size of memory. At the individual part level, a computer's memory may be
|
OPCFW_CODE
|
Once, long ago, I wanted to know how many files there were on the File Exchange. This number, as it happens, is easily had. Just look at the top right of the file listing and you’ll see it (look where it says “1 – 50 of 14973”, or some similarly large number). Right away I realized this number wasn’t very interesting by itself. What I really wanted to know was this: How big is the File Exchange compared to how big it was yesterday? This information isn’t hard to get either, but it does require some discipline. First you write down today’s number somewhere, and then you have to remember to do the same thing tomorrow. Unfortunately, I’m not very disciplined. And since ultimately what I really wanted was long-running time series, why not automate this process?
Since I’m a MATLAB programmer, I wrote some MATLAB code to pull the number off the web page and store it in a MAT-file. I kicked off MATLAB every night with a scheduled task on my PC, and it would gather and plot the necessary data. This worked well enough, and soon other people were asking me to track things: sales numbers, file sizes, bug counts, headcount numbers. These data sources are all similar in two important respects: they’re slow-moving trends (gathering data once a day is fast enough), and the information is available on a web page somewhere. At that point I realized we had an opportunity to make a simpler and more general service: Trendy.
Trendy is a web service that makes it easy for you to track and plot slow-moving trends. You only need to give us two little chunks of MATLAB code: one to collect the latest data point for a trend you care about, and one to plot the resulting trend. We take care of the rest. We’ll store your data in a safe place and we’ll remember to run your code every night.
Here, for example, is the data I’ve been collecting on the number of files on the File Exchange. It’s hard to make sense of a list of numbers without plotting it. So here is a plot of the same data.
Notice that separating the trend’s data from its plot has some benefits. For one thing, I can do multiple plots of the same data. I might want to plot the rate at which files are coming in (with data smoothing). Or I might want to use linear extrapolation to predict how long until we hit a certain threshold.
Because everything on Trendy is public, you’re welcome to plot someone else’s data. Teja Muppirala made a cool plot of the number of sushi and ramen restaurants in Tokyo. When I saw Teja’s plot, it occurred to me to plot the ramen-to-sushi index. Noodles are cheaper than fish, so who knows? Maybe this can be used as a leading indicator for the Japanese economy.
Trendy is designed to remove the tedium of data collection, but as a side effect it also gives you something else: data transparency. If you see an interesting plot, you can say "Show me the data." And if you're still curious, you can say "How did you get that data?" The data source is just a click away.
When you first create a trend, it seems maddeningly slow to fill up with data. But then you forget about it for a few days, and the next thing you know, it's revealing some fascinating patterns. Like the time-lapse movie of a sprouting bean, when you put on your slow eyes, you see things you never noticed before. We're used to living in a data-rich world. Numbers are good. But for every number you hold in your hand, Edward Tufte is asking "compared to what?" From Premier League football to the solar system and beyond, Trendy helps you make sense of the numbers you care about.
|
OPCFW_CODE
|
// player.js
// Dependencies:
// Description: singleton object that is a module of app
// properties of the player and what it needs to know how to do go here
"use strict";
// if app exists use the existing copy
// else create a new object literal
var app = app || {};
// the 'player' object literal is now a property of our 'app' global variable
app.player = {
color: "yellow",
init: function(){
this.image = new Image(); this.image.src = app.IMAGES['ellie'];
},
x: 25,
y: 25,
width: 50,
height: 50,
speed: 50,
moving: false,
proportion: 1.0,
level: 1,
image: undefined,
alive: true,
life: 10,
reset: function(){ // resets stats for a new game
this.life = 10;
this.alive = true;
this.level = 1;
this.x = 25;
this.y = 25;
},
loseHP: function(){ // damage is dealt to the player
var sfx = new Audio('assets/oops.wav');
sfx.play();
this.life-=1;
if(this.life <= 0){ this.alive = false; } // the player dies; the old setTimeout callback never worked because 'this' inside it was the global object, not the player
},
direction: undefined,
draw: function(ctx) { // draws the player using the drawLib
var hW = this.width/2;
var hH = this.height/2;
if(!this.image) {
this.drawLib.rect(ctx,this.x-hW,this.y-hH,this.width,this.height,this.color);
}
else if(this.image) {
this.drawLib.drawImg(ctx,this.x-(hW*this.proportion),this.y-(hH*this.proportion),this.width*this.proportion,this.height*this.proportion,this.image);
}
},
// movement functions: move the player by speed * dt (elapsed time in seconds)
moveLeft: function(dt) { this.x -= this.speed * dt; },
moveRight: function(dt) { this.x += this.speed * dt; },
moveUp: function(dt) { this.y -= this.speed * dt; },
moveDown: function(dt) { this.y += this.speed * dt; },
// a double move
jump: function(direction){
if(direction == "l"){ this.shiftLeft(10, 2); }
else if(direction == "r"){ this.shiftRight(10, 2); }
else if(direction == "d"){ this.shiftDown(10, 2); }
else if(direction == "u"){ this.shiftUp(10, 2); }
this.expandShrink();
},
// this grows and shrinks the player image to emulate jumping
expandShrink: function(x){
x = typeof x !== 'undefined' ? x : 0;
this.proportion=-(x*x)+(2*x)+1;
if(x>=2){ this.proportion=1.0; return; }
requestAnimationFrame(this.expandShrink.bind(this, x+0.15));
},
// moves one square in some direction: 'reps' animation frames of 5*speed pixels each,
// aborting early (and resyncing the level) if the board has advanced
shiftLeft: function(reps, speed) {
if(reps <= 0 || this.level != app.tilturn.level) { this.level = app.tilturn.level; this.moving = false; return; }
this.moving = true; this.x -= 5 * speed;
requestAnimationFrame(this.shiftLeft.bind(this, reps - 1, speed));
},
shiftRight: function(reps, speed) {
if(reps <= 0 || this.level != app.tilturn.level) { this.level = app.tilturn.level; this.moving = false; return; }
this.moving = true; this.x += 5 * speed;
requestAnimationFrame(this.shiftRight.bind(this, reps - 1, speed));
},
shiftUp: function(reps, speed) {
if(reps <= 0 || this.level != app.tilturn.level) { this.level = app.tilturn.level; this.moving = false; return; }
this.moving = true; this.y -= 5 * speed;
requestAnimationFrame(this.shiftUp.bind(this, reps - 1, speed));
},
shiftDown: function(reps, speed) {
if(reps <= 0 || this.level != app.tilturn.level) { this.level = app.tilturn.level; this.moving = false; return; }
this.moving = true; this.y += 5 * speed;
requestAnimationFrame(this.shiftDown.bind(this, reps - 1, speed));
}
}; // end app.player
|
STACK_EDU
|
They say imitation is the sincerest form of flattery and in this post, I’m going to share with you my scripts for Veeam style color-coded emails from Rubrik!
For the record, I love Veeam. Back in 2010 there was no better solution for backing up VMware environments; it was my standard go-to for any VMware environment I deployed. Veeam has grown exponentially since because of its VM-level simplicity, but it still hasn't made much headway into enterprise IT, and it's now old tech: no HTML5, bolted-on APIs, Windows management consoles everywhere, no deduplicated storage, not cloud-native. Even when I see it on a larger scale, it's typically alongside another backup product or process handling everything Veeam can't.
So how do you get all the goodness of a modern solution like Rubrik managing not just VM backups but SQL, Oracle, NAS, and physical servers, while keeping a nice VM-level report with green, orange, and red to clearly show whether a VM backup needs further investigation? Rubrik has some great built-in email reporting capabilities, but these haven't yet stretched to color coding of emails. The answer is PowerShell, some basic HTML code, and the REST-API-first architecture of Rubrik allowing us to pull any data we need.
The request for this report came from 5 customers who loved Rubrik but missed their Veeam email reports. Here’s the example they sent me to emulate:
Pretty colors! But it’s still missing some really important information like:
- What was the backup success %?
- Was the backup app or crash consistent?
- Was the VM even powered on?
- How many VMDKs were on the VM?
- If the job is still on-going when do I get the email?
- Do I have to look at 1 email per job?
- What is the status of the backup infrastructure itself?
If you think about the above questions, then the basic color-coded email from Veeam doesn't really tell me if the backup was of any use. To play devil's advocate: what if the backup was crash-consistent and I didn't force app consistency, somebody powered off the VM, removed all the VMDKs, the job overran so I didn't get the email when I expected, I had to look at 20 emails to find this 1 VM, or my repositories are now 99% full and the next backups might fail? The report could be green, but the backup useless, or subsequent backups are going to fail.
Even worse is that someone will typically take the report then go manually start a backup on each VM, one by one, to remediate it being out of compliance. We can certainly do better than that!
Using PowerShell Invoke-RestMethod, Send-MailMessage, some simple HTML, and the Rubrik REST APIs here is the equivalent from Rubrik:
Color-coded with way more useful information! I decided to throw everything into the table so you can reel it in by deleting the table columns of your choosing from the HTML code. It has the following features:
- Pre-built PowerShell script ready to run on a schedule in your environment today
- Each VM goes green, orange, red (configurable) depending on the outcome of last backup and SLA compliance
- Table headers and outcome changes color if any warnings, failures, or not meeting SLA
- Generates 1 report across all protected VMs with the SLA assigned (no more 1 email per job)
- Supports SMTP authentication and SSL, or straight SMTP relay, with multiple recipients
- Specify separate email address for a consolidated list of all failures (so you can use 1 email for all reports, another just for failures to open a helpdesk ticket)
- Shows VM consistency, tools, power status, VMDK count, total backups and OS
- Includes the failure or warning message if a backup wasn’t successful
- Total backup success % and other useful summary statistics
- Exclude SLA domains if required
- Includes Rubrik cluster health, node health, total space and utilization
- Automatic remediation of non-compliant VMs with an on-demand backup (disabled by default), removing the manual process of remediation altogether
To download your copy simply click on the zip file below:
Extract the script to C:\RubrikAdvancedReportingv1\ (or change the $ScriptDirectory variable on each script), edit the -Settings.ps1 file with your defaults. On first run the script will prompt for Rubrik credentials and store them securely in an XML file for subsequent headless runs. There are 2 versions of the email report within the zip file, to explain the difference:
- Uses the $BusinessSLAInHours variable in the settings (default 24 hours) to determine compliance by checking if each VM has a backup within the period specified
- It doesn’t matter if the VM has multiple backups within the period or if a backup failed last week, it just checks the last backup
- Allows you to have lower RPOs on an SLA domain but not be held to that frequency for compliance/reporting purposes
- Bypasses $BusinessSLAInHours from the settings file and instead gets the frequency in hours on the SLA assigned to the VM. I.E an SLA backing up every 5 hours means it looks for a backup within the last 5 hours
- The frequency is then used to determine compliance if the last backup is within that frequency
- Allows you to have VMs backing up hourly, daily, weekly etc, and determine individual compliance
I created both due to different customer requirements. All the columns/ordering can of course all be removed or changed to your wishes by simply editing the HTML in the script.
All feedback welcome and I hope you found this useful in your pursuit of simplifying backup with Rubrik. Happy scripting,
|
OPCFW_CODE
|
I don’t often do this*, but I recently got a question on my YouTube tutorial, Update dropdown list in Google Sheets dynamically based on previous dropdown choice: Data Validation, about whether or not this process can be applied to a column range.
The short answer is yes. The long answer is that it is a bit ugly, but it works.
Let’s first clarify the problem.
*obviously not procrastinating before starting another big project 🤣🐐.
Let’s say in column A of our Google Sheet tab called Main, we want a dropdown menu for each cell in your column from, say A2:A12. We’ll keep it simple and make our options:
If the user selects ‘one’ in say cell A2, then cell B2 will have a corresponding set of values that we will make:
However, if the user selects ‘two’ in cell A2, then B2 will have a different set of values such as:
We also want the same scenario to occur for each row. So the user can select a value in any row in column A and that will update the corresponding Google Sheets data-validation options in column B.
So our range might look like this:
So how did we achieve this? Well, just like in the original tutorial we relied on another Google Sheet tab that, in this example, I have labelled, Notes.
To achieve this dynamic dropdown in column B we need to do a number of steps:
- Create a new sheet tab called Notes.
- Add a list of all the selected items in Column A and Column B.
- Set up our Column A dropdown back in Main.
- Reference Column A, Main and recreate individual selections for each row that we will hide in Notes.
- Update column B in Main drop-down data validation and change their relative and absolute values.
Setting up the dropdown data for Columns A & B
Go ahead and create a new Google Sheet tab and call it Notes. In cell A1, add the header Option 1 and column B1 the header Option 2. Then from column A2 down type in three sets of each of the following:
Finally, from B2 down type in the corresponding number followed by A, B and then C for each number set so that it looks like this.
Set up the first dropdown data validation
Head back to the Main Google Sheet tab. Give A1 the header Option 1.
Next, select the range from, say A2:A12 and then right-click > select Data Validation.
Check that you have selected the correct range (1).
Then ensure the Criteria is set to List from a range. From here, click the little grid symbol and navigate to your Notes sheet tab and select the range from A2:A10, encompassing all your options (2).
Ensure that the dropdown menu is selected (3). And then hit the Save button (4)
Creating column B options specific to each row of Column A selections
Navigate back to our Notes sheet tab. We need to create a new list of column B options for each row of data from column A in our Main sheet tab.
In cell E1 type the header, ‘Cell Ref’. Then in F1, add the header, ‘Selections for column B Sheet 2’.
Next, let’s just add a range of titles to Notes column E indicating each row that we will reference back in Main. Type ‘A2′ to cell E2, ‘A3’ to cell E3, and so on down the column until you get to ‘A12’ in cell E12. This column is just a description too and has no formulaic effect on the actual process. It is just a handy guide.
A unique range of values for each associated cell
Column F is where the real magic happens. We need to create a corresponding list of values for each cell in Column B of the Main sheet tab that changes depending on the choice from column A of the Main sheet tab.
Here is the formula for cell F2 of Notes.
=IF(Main!A2="","",TRANSPOSE(FILTER($B$2:$B$10,$A$2:$A$10 = Main!A2)))
Go ahead and drag it down the column. You can drag it past the last item in the column just in case there are more cells added to Main later.
You can now head back to your Main sheet table and make a change to your selection and see how it affects the values in column B.
Let’s look at how this formula works:
Filtering out only those items with a corresponding selection in Col A of the adjacent row.
First, we need to filter out all the items in our selection range in column B of our Notes. The FILTER function first takes the range you want to display as its first argument. For us, this is $B$2:$B$10. Note the dollar signs beside the column letter and row numbers. This forces those rows and columns to be locked in so when you move them down or across other cells their values won’t change. The values are absolute. More on this here:
The next argument is the range that we want to use to filter our data and how we want it filtered. In our example, we are using $A$2:$A$10 of Notes. We only want those values in column B where column A is equal to the user’s selection in Column A of the Main sheet tab.
This filter will provide us with a vertical list of values.
Making our values run horizontally
We need to now make our values run horizontally so that they can be references in each cell.
This is achieved with the TRANSPOSE function.
We will future-proof our list of formulas so that we can drag it down the column a ways should the user add more cells in Main. We don’t want to display any errors if there is a blank cell in Column B of Main so we can use an IF function here to hide it by saying if Main A2 is blank then we just want to display blank otherwise we want to run our formula.
Setting up the column B dropdown
Head back to the Main Google Sheet tab. Select the range B2:B12. Right-click > select Data validation.
Ensure your range is selected (1) and the Criteria is set to List from a range (2). Select the grid to update the range and head over to the Notes sheet tab.
This time we are going to select from cell F2 across to cell M2. You might have noticed that we have selected far more columns than we have options. This is just to make sure that if we add more options in the future, then we have them covered.
Now that you have your selection you will need to make some modifications to it. Have a closer look at our selection in the image:
You can see here that we are only locking in (making absolute) the columns, but not the rows. This makes the rows relative to each cell as the data-validation dropdown goes down the column. So our dropdown in cell B3 will reference Notes!$F3:$M3, and so on down the column.
To wrap up the data validation dialogue. Ensure that you have selected Show dropdown list in cell and hit save.
You are all done. Go ahead and give it a try.
You can now go ahead and right-click the Notes tab and select Hide sheet so no one can see your working.
As you can see, this is a pretty messy process, but it is effective. Once you set it up, it pretty much maintains itself.
Here is a link to the Google Sheet. Go to File > Make a copy to grab your own copy of the sheet to play around with.
|
OPCFW_CODE
|
On this page:
- MirOS BSD
- The MirOS Project
MirOS is available as a BSD flavour which originated as an OpenBSD patchkit, but has grown very much on its own, though still being synchronised with the ongoing development of OpenBSD, thus inheriting most of its good security history. This variant is also called "MirBSD", but the usage of that word to denote MirOS BSD (plus MirPorts) is deprecated.
A very good general overview about MirOS BSD and MirPorts is available from our information flyers, which are available in English, German, and French. They are distributed on various events by ourselves and/or the AllBSD team.
MirOS started after some differences in opinion between Theo de Raadt, the OpenBSD project leader, and Thorsten Glaser, who is now our lead developer. The main maintainer of MirPorts is BennySiegert. There are several more persons working as contributors on the project.
Why not just use OpenBSD?
MirOS BSD often anticipates bigger changes in OpenBSD and includes them before OpenBSD itself. For example, ELF on i386 and support for gcc3 were available in MirOS first. Controversial decisions are often made differently from OpenBSD; for instance, there won't be any support for SMP in MirOS.
The most important differences to OpenBSD are:
- Completely rewritten bootloader and boot manager without an 8 GiB limit and with Soekris support
- Slim base system (without NIS, Kerberos, BIND, i18n, the BSD games, etc.); BIND and the BSD games are available as ports
- Binary security updates for stable releases
- ISDN support
- IPv6 support in the web server software
- wtf, a database of acronyms
- Some of the GNU tools (like gzip and *roff) were replaced by original UNIX™ code released by Caldera (SCO) under a BSD licence
- Based on OpenBSD (-current and older releases)
- 64-bit time handling routines (time_t)
- Correct handling of leap seconds
- Full GCC 3.4 support: C, C++, Pascal, Objective-C
- Current versions of the GNU developer toolchain (rcs, binutils, gdb, texinfo, lynx etc.)
- GNU CVS 1.12 with custom extensions
- Uses "MirBSD" as its uname
- Binary compatibility with OpenBSD and MirOS #7 via emulation
- Improved random number generator
- Uses sv4cpio with/without CRC instead of tar archives as its package format; support for new formats in cpio
- Improved support for UTF-8 and the Unicode BMP, including wide character support for libncurses ("libncursesw") and friends
In snapshots of MirOS, the installation CD is also a live CD. That means that you can boot a full MirOS system (although without any ports installed) from the CD. For special cases, you can also use dd(1) to write the image (or the mini-ISO, cdrom8.iso) to your hard disk and install from there. Attention: All data on the hard disk will be lost.
Releases do not contain the live CD as we cannot (yet) make it dual-bootable for the i386 and sparc architectures.
For the full copyright statement of MirOS, please refer to the 1stREAD and LICENCE files, summarised in BSD-Licence(7) including the dreaded advertising clauses, and the website licence. We prefer new code and documentation to be placed under our licence template which is compliant to the Open Source Definition and conforms to the Debian Free Software Guidelines. (Don't be scared by the length of the template, the actual licence stops after the first *- followed by instructions only, and is way below 1 Kibibyte.)
MirPorts—a derivative of the OpenBSD ports tree—is our solution for installing additional software packages not contained in the base system.
Using MirPorts is straightforward. After the first checkout or after updates, make setup in /usr/ports automatically installs the package tools and configuration. The ports themselves are in subdirectories, sorted by category. Just executing mmake install in such a directory will download the source code, compile it, create a binary package and install it. Dependencies are automatically installed when necessary. Some ports exist in several "flavours", e.g. with or without X support.
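As a sketch of the workflow just described (the category and port directory below are placeholders, not a real port):

```shell
# one-time setup after the first checkout or after updates:
# installs the package tools and configuration
cd /usr/ports
make setup

# build, package and install a single port;
# dependencies are downloaded, compiled and installed as needed
cd /usr/ports/category/someport
mmake install
```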
Many ports removed for political reasons in OpenBSD (e.g. all the DJB software or the Flash Plugin) have been kept in MirPorts and can continue being used. We also want to be a place for unofficial or rejected OpenBSD ports.
MirPorts does not use the package tools from OpenBSD, which are written in Perl, but continues to maintain the previous C-based tools. New features are in-place package upgrades and installing your own MirPorts instance as a non-root user.
Why use MirPorts
Support for multiple platforms. Out of the box, MirPorts has support for the following operating systems:
- MirOS BSD (-stable and -current)
- OpenBSD (-stable and -current)
- Mac OS X (10.4 and newer) / Darwin
- Interix / SFU 3.5
Even on stable releases, using the newest MirPorts version is recommended.
The support for Darwin and Interix is still fairly new. On Darwin, MirPorts is usable; Interix support is in the alpha stage. Both the BSD build system and the autotools/libtool infrastructure have been ported and support shared libraries on these platforms. Our mid-term goal is to provide at least a part of the MirOS base system as a port or a package.
For all platforms, we are still searching for developers as well as testers to build packages and to submit bug reports to the developers.
MirLibtool. GNU Libtool is used by many packages to build shared libraries in a portable way. However, there are many problems with it—for example, it breaks when no C++ compiler is installed. Therefore, MirPorts contains a modified version nicknamed MirLibtool.
MirLibtool is based on GNU libtool 1.5. It is compatible with all versions of autotools. The MirPorts infrastructure installs it automatically whenever a port uses autoconf to recreate its configure script.
NetBSD® pkgsrc® on MirOS BSD
pkgsrc® on MirOS BSD is an alternative packaging system which provides more up-to-date packages with less integration with the main BSD operating system.
The MirOS Project
The MirOS Project has grown to be an umbrella organisation with many subprojects, such as mksh, the MirBSD Korn Shell. It also acts as an OSS type foundry, "MirOS" (releases). Several individual developers have semi-official subprojects, like jupp, the editor which sucks less, or the image/tiff part of the Issue 9 (golang) standard library. Finally, Evolvis was a supplemental hosting platform site where experimental or detached (CVS), or otherwise non-core (git, Debian APT repository, etc.) publications appeared; the FusionForge/Evolvis system also permitted separate, distinct project setups.
|
OPCFW_CODE
|
This is something that happened to me three days ago when I installed VS 2010 RC for the first time. I have tried to reproduce it several times to no avail, so I haven’t opened a Microsoft Connect report. FWIW, I will post here what happened just in case some of you want to test.
First, some background: when you develop an add-in for commercial or freeware use (not for in-house) chances are that you want to target multiple Visual Studio IDEs, ideally with the same binary DLL, just registering the add-in for several hosts. For example, an add-in built using CLR 2.0/.NET Framework 2.0 can target VS 2005, VS 2008 and VS 2010 (even if VS 2010 doesn’t install CLR 2.0/.NET 2.0 but CLR 4.0 / .NET 4.0). When the setup of the add-in is run, it can do two things:
- To install the add-in and register it only for the actual IDEs that are present on the machine. In this case, if the user later installs a new IDE version (supported by the add-in), he would be forced to run the setup again to get the add-in registration for that new IDE version.
- To install the add-in and register it for all the IDEs supported, even if they are not installed.
- For COM registration, this dirties the registry a bit, creating keys like HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\10.0\Addins\<addin> that should not be there if VS 2010 (10.0) is not installed yet, so I don’t like this very much.
- For XML registration, it happens that you can have a single .AddIn file in a location whose content specifies multiple hosts, so you don’t dirty anything targeting IDEs that are not installed yet. When the user installs a new IDE, the next time that he launches it the add-in will be there. No need to re-run the add-in setup. This is cool and this is the scenario where the problem happened with VS 2010 RC, but only once.
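To illustrate this scenario, a single .AddIn file can declare several hosts by repeating the HostApplication element; the structure below follows the standard Extensibility schema, though the add-in name, paths and class name here are only placeholders:

```xml
<?xml version="1.0" encoding="UTF-16" standalone="no"?>
<Extensibility xmlns="http://schemas.microsoft.com/AutomationExtensibility">
  <!-- One HostApplication entry per targeted IDE, installed or not -->
  <HostApplication>
    <Name>Microsoft Visual Studio</Name>
    <Version>8.0</Version>   <!-- VS 2005 -->
  </HostApplication>
  <HostApplication>
    <Name>Microsoft Visual Studio</Name>
    <Version>9.0</Version>   <!-- VS 2008 -->
  </HostApplication>
  <HostApplication>
    <Name>Microsoft Visual Studio</Name>
    <Version>10.0</Version>  <!-- VS 2010 -->
  </HostApplication>
  <Addin>
    <FriendlyName>My Sample Add-in</FriendlyName>
    <Description>Placeholder description</Description>
    <Assembly>C:\Program Files\MyAddin\MyAddin.dll</Assembly>
    <FullClassName>MyAddin.Connect</FullClassName>
    <LoadBehavior>1</LoadBehavior>   <!-- load on startup -->
    <CommandPreload>0</CommandPreload>
    <CommandLineSafe>0</CommandLineSafe>
  </Addin>
</Extensibility>
```

Each IDE version that finds this file in one of its .AddIn search folders registers the add-in for itself, which is why no re-run of the setup is needed when a new IDE is installed later.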
I had a new version of my MZ-Tools add-in that uses XML registration and targets VS 2005, 2008 and 2010 using a single XML .AddIn file, and it is marked to load on startup. Then I installed VS 2010 RC for the first time on that particular machine. The installation went OK, and when I launched VS 2010 RC for the first time, VS showed the usual message asking for a profile and indicating that it was initializing for first use, which could take a few minutes. The add-in was loaded too (I know because it shows a welcome kind of message).
When VS 2010 RC was done, I noticed that most of the user interface of my add-in was duplicated: two main menus (one of them disabled), two toolbars, etc. I went to the Add-in Manager and unloaded it, which caused a crash in VS, and finally I was able to delete the toolbars, menus, etc. by hand. Once cleaned, the next times the add-in loaded fine as expected.
I don’t know for sure what happened there, but it seems as if VS 2010, as part of its initialization, tried to persist the menus, toolbars and buttons of my add-in (which uses temporary UI) on disk, as it does with packages and add-ins that use permanent UI. So the IDE ended up with duplicated UI.
To reproduce the problem, I thought that the same thing would happen if I created another user account and launched VS 2010 for that user, since VS will need to ask for a profile, initialize user settings, etc. But I was unable to reproduce the issue.
Then I went the long route of installing VS 2010 RC on virtual machines with the add-in already registered, and I was unable to reproduce it too.
So, I don’t know if it is a timing issue (when the add-in is loaded vs. when VS persists UI on disk) or what… but I know what I saw and there is a bug somewhere there… As I explained in other posts, VS doesn’t take much into account that buttons can belong to add-ins that use a temporary UI, not always to packages or add-ins using permanent UI.
To play it safe, maybe I will change the setup to not register the add-in for IDEs not installed yet, but I am almost sure that the problem would happen too if VS is launched by a new user on the same machine.
I hope this helps. Let me know if you encounter this issue to reproduce it and send it to Microsoft.
|
OPCFW_CODE
|
Image initialization speedup
This PR speeds up image driver initialization by only opening and closing each input netCDF file once (instead of opening and closing it when reading every variable). This PR is related to issue #538, issue #380 and PR #564.
Specifically:
Originally, in the filenames structure, all input file information was saved as filenames only. Now in this PR, for those input files in netCDF format (domain, parameter, initial state and forcings), both filename and nc_id are saved in a new nameid_struct in the filenames structure. This change allows nc_id to be stored alongside the filename once an input nc file is opened, and to be used directly later.
Removed the opening and closing netCDF file parts from the low-level nc functions, including get_nc_field_<TYPE>, get_nc_var_attr, get_nc_var_type, get_nc_vardimensions. Instead, these functions assume that the nc file is already open beforehand, and take nameid_struct (which includes both filename and nc_id) directly as input argument. Created separate functions for opening and closing nc files.
For domain, parameter and initial state input files, the files are opened before their first use, kept open, and closed after their final use. For forcing files, the first-year file is opened before being used in get_global_param and kept open; then in vic_force, the previous-year forcing file is closed and a new-year file is opened when the simulation enters a new year; the last-year file is closed after the last simulation timestep.
Further things to do:
Format clean up
Right now, no further checks are done to ensure files are indeed opened/closed correctly (the image driver runs successfully, though). Do we want to, for example, add a check at the end of the VIC simulation (or somewhere else) that all input netCDF files are correctly closed? Or, when reading each variable, double-check that the netCDF file is indeed open? ...
Other updates after code review
Note:
This PR does not include changes for CESM driver. Additional changes are needed for CESM driver to run with the speedup.
This PR only speeds up opening and closing input netCDF files, but not output files.
Some speed tests (times shown below are all walltime):
Test 1 (a very small domain): Stehekin (16 grid cells), 10 days, hourly timestep; hydra master node, 1 processor:

|  | Before | After |
| --- | --- | --- |
| Init time (sec) | 23.01 | 0.08 |
| Run time (sec) | 10.98 | 1.74 |
| Final time (sec) | 0.0014 | 0.0018 |
| Total time (sec) | 33.99 | 1.83 |
| Model cost (pe-hrs/simulated_year) | 0.34 | 0.019 |
Test 2 (a regional-scale domain): Arkansas Red (3999 grid cells), 10 years, 3 hourly timestep; hyak, 1 node (16 processors total):
|  | Before | After |
| --- | --- | --- |
| Init time (sec) | 20.78 | 1.27 |
| Run time (sec) | 5250.93 | 1083.79 |
| Final time (sec) | 0.097 | 0.045 |
| Total time (sec) | 5271.8 | 1085.1 |
| Model cost (pe-hrs/simulated_year) | 2.34 | 0.48 |
Test 3 (a continental-scale domain): whole CONUS (333579 grid cells), 1 year, 3 hourly timestep; hyak, 2 nodes (32 processors total):

|  | Before | After |
| --- | --- | --- |
| Init time (sec) | 76.49 | 15.48 |
| Run time (sec) | 7009.55 | 6914.95 |
| Final time (sec) | 0.20 | 0.21 |
| Total time (sec) | 7166.23 | 6930.64 |
| Model cost (pe-hrs/simulated_year) | 63.70 | 61.61 |
It seems like the initialization time is improved quite a lot; the run time is also improved, since we no longer open and close the forcing file for each variable at each timestep. The speedup is more significant for smaller basins.
@jhamman @bartnijssen @dgergel This PR can use a code review now. The CESM test is failing right now since corresponding changes in CESM driver are needed. Other tests have passed, so I think I can incorporate your comments/suggestions first, and then @dgergel can work on the CESM driver update.
This PR should be merged after PR #685
Have addressed @bartnijssen 's comments
New speed tests with time for vic_force and vic_write_output (times shown below are all walltime):
Test 1 (a very small domain): Stehekin (16 grid cells), 10 days, hourly timestep; hydra master node, 1 processor:

|  | Before | After |
| --- | --- | --- |
| Init time (sec) | 23.08 | 0.087 |
| Run time (sec) | 11.02 | 1.75 |
| Final time (sec) | 0.0016 | 0.012 |
| Total time (sec) | 34.10 | 1.85 |
| Model cost (pe-hrs/simulated_year) | 0.346 | 0.0187 |
| vic_force (sec) | 9.32 | 0.064 |
| vic_write_output (sec) | 0.170 | 0.177 |
Test 2 (a regional-scale domain): Arkansas Red (3999 grid cells), 10 years, 3 hourly timestep; hyak, 1 node (16 processors total):
|  | Before | After |
| --- | --- | --- |
| Init time (sec) | 13.2 | 0.95 |
| Run time (sec) | 1950.8 | 1107.9 |
| Final time (sec) | 0.02 | 0.043 |
| Total time (sec) | 1964.0 | 1108.9 |
| Model cost (pe-hrs/simulated_year) | 0.87 | 0.49 |
| vic_force (sec) | 1420.6 | 548.3 |
| vic_write_output (sec) | 119.8 | 125.1 |
Test 3 (a continental-scale domain): whole CONUS (333579 grid cells), 1 year, 3 hourly timestep; hyak, 2 nodes (32 processors total):

|  | Before | After |
| --- | --- | --- |
| Init time (sec) | 73.6 | 16.1 |
| Run time (sec) | 6989.7 | 6949.6 |
| Final time (sec) | 0.22 | 0.20 |
| Total time (sec) | 7063.5 | 6966.0 |
| Model cost (pe-hrs/simulated_year) | 62.8 | 61.9 |
| vic_force (sec) | 1284.4 | 802.7 |
| vic_write_output (sec) | 13.5 | 11.2 |
The times in this set of tests differ somewhat from the last test because of variation across different VIC runs (even when we use the same specs and machines). But note that the "before" time for the 10-year Arkansas-Red run in our last test showed a much longer running time - I might have made some mistakes last time.
|
GITHUB_ARCHIVE
|
You will also be able to know which sites to avoid. The reason behind this is because since you are in a single parent dating site, it is apparent that you are one. Texting, sexting, dick pics, dating sites are all new since the last time I dated. I met a serious boyfriend on Match, and many great people I know have Match accounts. Hey bigblueeyes74: We'll likely never meet, but me love you long time. Then, read what this dating coach says about spoiler alert: they love them! Keep at it, try new things and keep an open mind. The League This new online dating matchmaking service that bills itself as very elite, as it only accepts a small percentage of applicants, making those accepted seem very special indeed.
Which online dating apps are the best? It is very popular in New York City where I live, but I find it to be a great interface. The website is designed with a First Date Section, which provides your friend or partner or anyone interested in you with a list of preferred activities. Unsure of how dating works in 2018 — with apps, texting, sexting, dick pics, etc? No quality man wants to date a single mom seriously. It has an outstanding success rate for its users. For that, among , we have you covered! Let us help put your past behind you - our aim is to find you a partner who even the kids approve of! Your identity, including contact info, is not shared with the other party. It can be daunting to go back to the dating scene after a long time, that is why these online dating sites will help single parents out.
The site has a good layout and is easy to use. Thank you for using Best Dating Sites! There are some online dating sites that are full of scammers. My mom bod is so fat and saggy! Which dating sites are full of freaks and pervs? Created by a psychologist whose goal was to create an algorithm to find true compatibility that will result in deeply committed, fulfilling partnerships. We hope that we can help you find the perfect Single Parent Dating website and ultimately find your perfect match. You can explore the site and its features for free, and the database includes plenty of profiles for you to browse. ChristianCafe All the sites allow you to search by religion, but a few dating sites specifically focus on different faiths. Single Parent Dating Sites There are a lot of different niches of today, and one of the niches that have been becoming more and more popular are online dating sites for single parents.
You'll find the site offers some nice features that you normally won't find. Tawkify Tawkify bills itself as a personalized matchmaking service, not a dating app. Check out our top 10 list below and follow our links to read our full in-depth review of each single parent dating website, alongside which you'll find costs and features lists, user reviews and videos to help you make the right choice. If you are a single dad or a single mum looking for a genuine partner for a long-term partner, then singleparentlove. You can search for members by constructing a query using pull down menus: You can use some of these to send to anyone who intrigues you! If online dating isn't working for you now, take a break, assess how you might approach dating in general, and then try again in a few months. The end result is really all that matters. Connect with old friends on dating sites.
But before anything else, it would be best to get to know some tips on dating online as a single parent. Potential members are approved based on data from their Facebook and LinkedIn profiles, presumably seeking out daters with higher income and education. We strongly recommend you not join these dating sites listed below. You also need to write something about yourself and the kind of person that you want to meet. There is a free version, but very few people can resist upgrading. In other words, women have since the dawn of time been sick of dudes coming on too strong, cheesy pickup lines, dick pics, stalkers and worse.
Last week my brother, who owns a media company, had a business lunch with a guy introduced him to — someone I'd met online and dated for a minute. Again, check out a few that others recommend, use their free trials or promotions, and see which has the best selection for you. How you choose to employ the service of a company to share your life with, taking a risk to locate that person from the internet, sounds like a better way to gamble today in an endeavor to find happiness. That said, online dating is a boon to single moms. Worried about flaunting your new mom bod on the market? You may care less about physical appearance than you did before becoming a parent. On the plus side, the investment means other paid members are generally serious about meeting someone.
Dating Pregnant Women is an online dating site for single expectant mothers looking to date. It can be frustrating to date as a single parent, and it can even be frightening to look for a match. No need to explain, no need to make excuses. My friend, an accountant, has turned several otherwise dead-end dates with guys she met online into clients. Just Single Parents is a dating site for single parents. I just want to say to all the single moms: if you are struggling, the grass is greener nowhere.
The site is free to be a member however they have monthly subscription charges called Bolt-ons. Single parents are almost always more interested in meeting Mr. For example, Tindr is more of a hookup site in some areas, but full of serious daters in others. If we found that most members usually men were only looking for sex, that site received a negative mark. Since Happn's goal is to connect you with locals, you actually must be within 250 miles to actually send and receive messages from another member. First, while eHarmony does have a very long questionnaire that promises to scientifically match you, several studies have found that to be basically useless.
|
OPCFW_CODE
|
If you need to write a conference report, the first step is to create a template. A template is an excellent way to keep your ideas organized and easy to follow. Once you have chosen a template, the next step is to write the report. Make sure you include the key people who spoke at the conference, as they are the ones with the most authority on the subject matter. Also, include the original goals of the conference, as this can help you make any necessary changes.
A conference report template can make the job a lot easier. When you use a template, it will be easy to keep track of all the details and make the report look professional. It also comes with templates for different kinds of reports, including those for conferences, seminars, and workshops. These types of templates are perfect for business presentations, but they can also be used in educational settings as well. A template allows you to highlight important points, such as the number of attendees, the number of female and male participants, the size of the hall, and so on.
After a conference, you can send a report to the participants. This type of report is usually sent out 72 hours after the event. In addition, it is also useful for other people who were not able to attend the event. Once you’ve finished writing it, you should review it carefully, check for mistakes, and make sure it is easy to read. Once you’ve finished, you can let a friend or colleague read it to ensure everything is clear.
A post-conference report template gives you access to multiple graph charts, as well as a custom table of contents. You can use a post-conference conference report template to share the results of your conference and highlight areas for improvement. The post-event report template can also be used for a parent-teacher conference. Using a conference report template, you can easily summarize the key points from the conference and send them to the parents or attendees.
Besides formatting, conference report templates are designed to be easy to customize. If you’re new to the school, you can download and use the Transparent Classroom template. The conference report template is completely customizable, and you can customize it to your needs. After downloading, you can add graphics and tables to your report, add key points, and insert keywords. Your report template should be able to present all of these features and much more. If you’re a teacher, you can use the parent teacher template to organize the key points.
A post conference report template allows you to include various graph charts. For example, it can list the number of attendees, the number of females and males present, and the overall size of the conference hall. You can also include key points from the conference, such as the participants and the attendees. A parent teacher report template is especially useful in schools and colleges, as it allows you to focus on key points from the meeting. The reports can include data such as the number of students, the number of parents, and the total cost of the conference.
It is important to create a clean and well-written report. It should be free of errors and read smoothly. You should also include a note on the importance of the conference. Remember that the conference is an opportunity to learn and make connections. In the end, conferences are the perfect place for you to learn from other people with expertise in the field. It is also helpful for you to know the different trends and strategies of your peers and colleagues.
A conference report template will help you organize all of your conference details. It will also help you make the document more informative. If you have a conference that is held regularly, you can include the details of the speakers and the attendees in your report. You can also include the topics and the results of the conference. A report is the best way to get the feedback from your attendees. In addition to the evaluation, you can highlight your best points.
|
OPCFW_CODE
|
This site has been built– nokillnetwork.org. Now what I need is for someone to populate the directory with information about organizations in each state. The goal of the project is to list as many organizations for each state as possible in the directory, which should be a total of no less than 200 overall.
Here’s an example of one state that already contains two entries:
[url removed, login to view]
This project involves two main steps:
1. data entry based on web research
a. I will provide instructions for finding the organizations to add to the directory. These instructions will be something like, “go to this web page and add all of the listed organizations to my directory,” or, “do a google search for these keywords and add any organization that meets this criteria.”
b. Adding the information to the directory is done via a web-based interface. Example: [url removed, login to view];location=link_add. Three to four fields are required for each entry.
c. After submitting entries via the form, you will need to login to the admin interface and mark them as “approved” in order to publish them.
d. Here is an example of the end result of the data that needs to be added: [url removed, login to view]
2. email to organization
a. for each organization that is entered into the directory, a corresponding email must be sent to the email address for that organization (meaning that you’ll need to find this email address as you add them to the directory and store it for later use). The email text will be provided, but care must be taken to replace certain elements with the organization’s name, state, and the URL where they are listed in the directory. All emails will be sent from an email address that I set up, and sent copies of each email will be saved for my review. No emails are to be sent until step 1 is completed. No follow-up emails are required, just one per organization.
The majority of this project is copy+paste. Please let me know if you have any questions, and thank you for bidding!
I would love to help you with this project. Please see your PM for details. Thank you.
46 freelancers are bidding an average of $58 for this job
HI we are a team and u can view my ratings since i have done a similar project,can do for 20 cents per shelter added and u can pay me 10 cents per mail sent,open PMB for further discussion and to start the work immed More
Ready to start immediately on your project, as we are the experts in such things. Waiting for your selection. Thanks!
Hi. Thank you for giving us the opportunity to bid on your project. We are quite interested in this project and hope can work with you. I have gone through your detailed description of this project. You listed an ex More
I would love to do this project. The instructions are very clear so I don't see a problem regarding this. It would be nice to hear from you soon. Thanks.
Thanks, I take this great chance to place my bid on this project. I have gone through the project description, and I have adequate experience to support this work and complete to the fullest satisfaction as required by More
Dear sir, I have gone through your requirement and ready to do. Thanks
Hi, I am offering my services for this project. Details through PM please. Thanks
I have spent significant time researching community resources available for non-profit organizations I serve; I am confident that I can effectively research no-kill animal shelters and populate your link pag for each s More
Hi, would like to do this project, please send [login to view URL], Megapixel
Greetings! If you want research I am the one you want to hire. If you want data entry I can do the job! I have an incredible track record as a researcher and can do an excellent job for you. I propose a fee of $ More
I have had to work with no kill organizations many times and, am experienced in data entry. I have a team of four that will fulfill this job to the utmost perfection.
Having gone through your specifications, I could understand everything You want in Your project and I can assure you that my outstanding experience will deliver You 100% satisfaction. I expressly assure You "quality More
|
OPCFW_CODE
|
Returns d × 2^scaleFactor rounded as if performed by a single correctly rounded floating-point multiply to a member of the double value set. See the Java Language Specification for a discussion of floating-point value sets. If the exponent of the result is between Double.MIN_EXPONENT and Double.MAX_EXPONENT, the answer is calculated exactly. If the exponent of the result would be larger than Double.
type inference is activated, which means that even if you use def on a local variable, for example, the type checker will be able to infer the type of the variable from the assignments
Creating a new Java project in Eclipse is fairly easy, but can be confusing if you've already installed Eclipse for a different programming language.
This means that code that is perfectly valid without @TypeChecked will no longer compile once you activate type checking. This is especially true if you think of duck typing:
A meta-analysis describing the health effects on many people due to exposure to electronic cigarette vapour.
So it doesn't matter that you use an explicit type here. It is especially interesting when you combine this feature with static type checking, because the type checker performs type inference.
is the combination of a function and the lexical environment within which that function was declared. Lexical scoping
Not the answer you're looking for? Browse other questions tagged java eclipse or ask your own question. asked
Whenever anyone enters the uid, that person's info will be displayed. Based on that data, security measures can be taken. This website will provide photo identification with fingerprints and eye scanning, so chances of escaping will be slim.
Closures are useful because they let you associate some data (the lexical environment) with a function that operates on that data. This has obvious parallels to object-oriented programming, where objects allow us to associate some data (the object's properties) with one or more methods.
Using the def keyword here is recommended to describe the intent of a method that is supposed to work on any type, but technically, we could use Object instead and the result would be the same: def is, in Groovy, strictly equivalent to using Object.
Many factors, such as the branding effect and many others, are responsible for producing a particular decision when buying mobile phone handsets in London. Download View Sample
Using this system, citizens can travel abroad without carrying a passport. Security checks can be done through online services just by entering the unique identification number of the citizen.
Planning to do your final year project in Python? If yes, then you'll get a well-crafted final year project help service. I have helped 300+ students with their final year projects and most of them got an A+ grade in their final year project. So don't wait. Contact me now.
|
OPCFW_CODE
|
Computer Architecture Part 2: Assembly Language
3 hours 41 minutes
Hello, everyone. And welcome to this section on assembly language. In this session we'll learn about assembly language and how programs interact with computer architectures.
This is a pretty critical skill if we're going to be performing static analysis on malware binaries. So let's get started.
As a quick review, when we write a program we're typically writing it in a high-level language. This could be C, C++ or any other type of language that's compiled by the computer.
Now there's many types of languages out there, and that's our challenge as a reverse engineer, right? We need to figure out which language a program was crafted in.
But for now, let's use C or C++ as an example. During program compilation, the code is going to go through four separate stages: preprocessing, compilation, assembly and linking. Now the stage we're most concerned with is where the preprocessed code is translated into assembly instructions based on the target processor architecture.
Now you can see that we've compiled a sample program here on the left
and we've outputted the assembly code on the right.
Now the goal of this section is to understand assembly language enough so that we can feel confident when we're looking at the Assembly code of a disassembled program. Now, based on the information that we've covered so far, we have almost all the pieces we need to make this possible. However, we just need to add a few more relevant concepts to make this clear.
The malware executable files we analyze are all in machine code format.
Because this is practically impossible for us to read as humans, we use disassemblers. Now these disassemblers are going to take machine code and convert it into assembly language. In this course, we use either IDA Pro or Ghidra to do that conversion for us.
So let's go ahead and open up our malware file. I already have a malware database that I've been using, so I'm just going to double-click this.
So let's go ahead and double-click and execute that. Okay, so to understand assembly, we need to look at the format. Any time we look at assembly, it's displayed line by line with its label first. So here is the label; this will be the address in hex.
And the second column is going to display the machine code, also called the opcode. To the right of that, we have the instruction mnemonic, that's push and mov here, and the instruction mnemonic also takes arguments, and these are known as the operands. So we have two operands right here.
When we read the assembly, we can get an idea of the operation
by reading the instruction. For instance, right here on the second line, we are taking the data in the RSP register and moving it into the RBP register. When we read instruction arguments, they're displayed with the destination first, followed by the source. So: destination first, then source.
In assembly, every instruction consists of an opcode and operands.
Now the opcode indicates what operation the CPU executes, and the operands are the data and values that the operation operates on.
For instance, here we've got three instructions that move data to and from different locations. Their operands are grouped into three types. The first are immediate operands; these have a fixed data value.
For instance, here in our first line of assembly, we're moving an immediate value of 0xC8 into the RBX register. The RBX register here is called a register operand.
Now, in addition to immediate and register operands, we also have indirect memory addresses.
Indirect memory addresses provide data values that are located at a specific memory location.
These are typically shown in square brackets, as you can see in our 2nd and 3rd lines of code.
However, the memory location can be supplied in a few different ways. It can be a fixed value, a register, or any combination of register and fixed value.
For example, here in our second line, RCX in square brackets refers to the data located at the address held in RCX.
So if RCX holds the address 0x40000,
this instruction transfers the value held at that address into RDX.
In our third line,
[EBX+4] refers to the data located at the address held in EBX, plus four. So if EBX holds an address of 0x40000, then the instruction operates on the data located at 0x40004.
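To make the three operand types concrete, here's a small Python sketch. The register values and memory contents are made-up examples, and memory is modeled as a dictionary keyed by address:

```python
# Model registers and memory; all values here are hypothetical examples.
regs = {"RBX": 0, "RCX": 0x40000, "RDX": 0, "EBX": 0x40000}
mem = {0x40000: 0x1111, 0x40004: 0x2222}

# mov rbx, 0xC8      ; immediate operand: a fixed value
regs["RBX"] = 0xC8

# mov rdx, [rcx]     ; indirect: the data at the address held in RCX
regs["RDX"] = mem[regs["RCX"]]

# mov rax, [ebx+4]   ; register plus fixed offset
val = mem[regs["EBX"] + 4]

print(hex(regs["RBX"]), hex(regs["RDX"]), hex(val))
```

The dictionary lookup plays the role of the CPU dereferencing the address, which is exactly what the square brackets mean.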
In assembly language, we've got some common instructions you're going to see when you disassemble programs. Typically, they can be broken down into five categories, and the first ones we'll look at facilitate copying and accessing data.
For example, we've got the mov instruction, and mov instructions read values at a given address, from registers, and so on. In addition to mov, there is also the load effective address (lea) instruction, which is used to get data in the form of a memory address.
The load effective address instruction calculates the source operand in the same way as mov does, but rather than loading the contents of the address into the destination operand, it loads the address itself.
As a note, you might see the lea instruction used for general-purpose arithmetic. Also, the next type of instructions are addition, subtraction, multiplication, and division, and these are indicated by the mnemonics add, sub, mul, and div.
The arithmetic instructions add and sub take two operands: a destination and a source.
The destination can be a register or a memory location, and the source may be either a memory location, a register, or a constant.
Now add and sub add or subtract the source and destination, and the results are stored in the destination.
Now, as a note, we also have increment (inc) and decrement (dec), and these add or subtract one from a register.
When we want to perform multiplication in assembly, we do it using the mul instruction. It only takes one operand, and it's multiplied by the contents of the EAX register. Then the result is stored in the EAX and EDX family of registers.
When we want to perform division in assembly, we do it using the div instruction. It only takes one operand, and the number to divide is stored in the EDX and EAX registers.
In division, the number is split up so that the most significant half is held in EDX. After the division is executed, the quotient is stored in the EAX register, and the remainder is stored in the EDX register.
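As a rough Python sketch of that div behavior, assuming the 32-bit form where the 64-bit dividend is split across EDX:EAX (the register values below are just illustrative):

```python
# Hypothetical register contents for illustration.
edx, eax = 0x0, 100           # 64-bit dividend held as EDX:EAX (here just 100)
divisor = 7

dividend = (edx << 32) | eax  # recombine the two halves into one number
quotient, remainder = divmod(dividend, divisor)

# div stores the quotient in EAX and the remainder in EDX
eax, edx = quotient, remainder
print(eax, edx)
```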
Bitwise instructions are binary logic instructions which operate on bits. These are used because they're fast and they can be used to perform higher math functions like multiply or divide, and they're commonly used in cryptographic, obfuscation, and decoding algorithms.
Now here we've got a PIN number represented as zero in hex.
Now, even though we're working with a byte, the same applies to words, dwords, and so on. Now bits have a position starting from the least significant bit on the right and moving towards the left to the most significant bit, so zero through seven, respectively.
The not instruction is our first bitwise operation. It takes one operand and simply inverts the bits, so eight zeros (00000000) becomes eight ones (11111111). The result is then stored in the same location. It's very useful for inverting values.
The and, xor, and or
functions perform operations on the source and destination, and they store the results in the destination.
These operations are similar to AND, XOR, and OR in C or Python.
Shift and rotate instructions perform these operations on the destination and a count, which we'll see in a second.
The and instruction compares each binary form of two integers and returns a new integer.
The new integer is formed by looking at each bit position of the comparison and setting the new bit position to one if both bits are one; otherwise, it sets the bit to zero.
A useful implementation of the and instruction is to check and see if a number is even or odd.
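In Python the same even/odd test looks like this, since and-ing with 1 isolates the least significant bit:

```python
def is_odd(n):
    # n & 1 keeps only the least significant bit: 1 for odd, 0 for even
    return n & 1 == 1

print(is_odd(20), is_odd(21))
```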
The or instruction compares each binary form of two integers. The new integer is formed by looking at each bit position of the comparison and setting the new bit position to one if either of them is one.
Otherwise, it sets the bit to zero.
The xor instruction compares each binary form of two integers, and this operation sets the new bit to zero if both bits are equal; if not, it sets the bit to one. This is commonly used to clear a register to zero.
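A quick Python sketch: xor-ing a value with itself always yields zero, which is why compilers emit instructions like `xor eax, eax` to zero a register, and xor is its own inverse, which is what makes it popular in simple obfuscation (the value and key below are arbitrary examples):

```python
x = 0xC8
assert x ^ x == 0        # xor reg, reg clears the register to zero

key = 0x5A
encoded = x ^ key        # single-byte XOR "encoding"
decoded = encoded ^ key  # xor-ing with the same key again recovers the value
print(decoded == x)
```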
The shl instruction takes the bits of our binary number and moves them to the left by the count operand. This is the same as multiplying by two to the power of n.
So if the contents of the AL register are 20, which is equal to 00010100 in binary, and the count operand is three, then we shift the bits to the left by three.
The bits on the left fall off and we fill in the new bits with zeros on the right, and so our new binary number becomes 10100000, or 160 in decimal. The shr instruction shifts bits to the right by the count operand.
This is the same as shift left, but instead
it's the least significant bits that fall off on the right-hand side, and this is the same as a division by two to the power of n. Lastly, the rotate left (rol) and rotate right (ror) instructions are similar to the shift instructions, but instead of discarding the shifted bits, they're rotated around to the other end.
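The shift example above can be checked in Python. The mask to 8 bits mimics bits "falling off" an 8-bit register like AL, and since Python has no rotate operator, the rotate is built from two shifts:

```python
AL_BITS = 8
MASK = (1 << AL_BITS) - 1    # 0xFF: keep only 8 bits, like the AL register

al = 20                      # 00010100 in binary
shl = (al << 3) & MASK       # shift left by 3: same as multiplying by 2**3
shr = 160 >> 3               # shift right by 3: same as dividing by 2**3

def rol8(x, n):
    # rotate left: bits that fall off the left re-enter on the right
    return ((x << n) | (x >> (AL_BITS - n))) & MASK

print(shl, shr, bin(rol8(0b10000001, 1)))
```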
Okay, so that was a lot of assembly language. I hope you're still with me. In the next session, we're going to wrap up our computer architecture and assembly language discussion by examining control flow and the stack.
#ifndef AN_UNIFICATION_H
#define AN_UNIFICATION_H
#include "antype.h"
#include "typeerror.h"
#include <tuple>
namespace ante {
using Substitutions = std::list<std::pair<AnType*, AnType*>>;
class UnificationConstraint {
    using EqConstraint = std::pair<AnType*, AnType*>;
    using TypeClassConstraint = TraitImpl*;

    union U {
        EqConstraint eqConstraint;
        TypeClassConstraint typeClassConstraint;
        U(AnType *a, AnType *b) : eqConstraint{a, b}{}
        U(TraitImpl *tc) : typeClassConstraint{tc}{}
    } u;

    bool eqConstraint;

public:
    TypeError error;

    /** Eq constructor, enforce a = b */
    UnificationConstraint(AnType *a, AnType *b, TypeError const& err)
        : u{a, b}, eqConstraint{true}, error{err}{}

    /** Typeclass constructor, enforce impl typeclass args exists */
    UnificationConstraint(TraitImpl *typeclass, TypeError const& err)
        : u{typeclass}, eqConstraint{false}, error{err}{}

    bool isEqConstraint() const noexcept {
        return eqConstraint;
    }

    EqConstraint asEqConstraint() const {
        return u.eqConstraint;
    }

    TypeClassConstraint asTypeClassConstraint() const {
        return u.typeClassConstraint;
    }
};
using UnificationList = std::list<UnificationConstraint>;
/** Substitute all instances of a given type subType in t with u.
* Returns a new substituted type or t if subType was not contained within */
AnType* substitute(AnType *u, AnType *subType, AnType *t, int recursionLimit = 10000);
Substitutions unify(UnificationList const& list);
std::pair<bool, Substitutions> tryUnify(AnType *a, AnType *b);
std::pair<bool, Substitutions> tryUnify(std::vector<AnType*> const& a, std::vector<AnType*> const& b);
AnType* applySubstitutions(Substitutions const& substitutions, AnType *t);
TraitImpl* applySubstitutions(Substitutions const& substitutions, TraitImpl *t);
AnTypeVarType* nextTypeVar();
bool hasTypeVarNotInMap(const AnType *t, llvm::StringMap<const AnTypeVarType*> &map);
AnType* copyWithNewTypeVars(AnType *t, std::unordered_map<std::string, AnTypeVarType*> &map);
llvm::StringMap<const AnTypeVarType*> getAllContainedTypeVars(const AnType *t);
void getAllContainedTypeVarsHelper(const AnType *t, llvm::StringMap<const AnTypeVarType*> &map);
template<typename T>
std::vector<T*> copyWithNewTypeVars(std::vector<T*> tys, std::unordered_map<std::string, AnTypeVarType*> &map);
AnType* copyWithNewTypeVars(AnType *t);
/** Remove any duplicate type class constraints and any constraints that are known to exist. */
AnFunctionType* cleanTypeClassConstraints(AnFunctionType *t);
}
#endif /* end of include guard: AN_UNIFICATION_H */
Can a laser be designed to ionize muonic atoms so as to prevent α-sticking?
Muon-catalyzed fusion is little more than a lab curiosity today, in part because of the limited number of hydrogen nuclei that can be fused before the muon is carried away by an alpha particle. Deuterium+deuterium reactions are ten times more likely than deuterium+tritium reactions to result in a muon sticking to a helium ion. I am wondering if someone can calculate the ionization energy needed to prevent that from happening, and speculate whether a laser can be built to do it.
If it is possible, it may help pave the way to clean low-temperature fusion energy that produces more power than is used to make it.
For what it's worth (I cannot verify the claims): http://www.j.sinap.ac.cn/nst/EN/article/downloadArticleFile.do?attachType=PDF&id=448 (NUCLEAR SCIENCE AND TECHNIQUES 25, 020201 (2014) - I guess this is a Chinese journal). Abstract: "Considering the mixture after muon-catalyzed fusion ($\mu$CF) reaction as overdense plasma, we analyze muon motion in the plasma induced by a linearly polarized two-colour laser, particularly, the effect of laser parameters on the muon momentum and trajectory. The results show that muon drift along the propagation of laser and oscillation perpendicular to the propagation remain after the end of the laser pulse. Under appropriate parameters, muon can go from the skin layer into field-free matter in a time period of much less than the pulse duration. The
electric-field strength ratio or frequency ratio of the fundamental to the harmonic has more influence on muon oscillation. The laser affects little on other particles in the plasma. Hence, in theory, this work can avoid muon sticking to $\alpha$ effectively and reduce muon-loss probability in $\mu$CF."
Muon mean lifetime is 2.2 µs. There's your problem. Muons have a mass of 105.7 MeV/c², about 207 times that of the electron. If you wanted to ionize a hydrogen atom, you would need 13.6 eV. If you wanted to ionize a muonic hydrogen atom, you would need about 2813 eV, or about a 0.441 nm photon. Start building your laser.
Certainly the muon's mean lifetime is a major contributing factor. If 0.441 nm is close, then you are talking about mid to high range X-rays. That doesn't seem feasible to build a laser for.
Odd that someone refers to X-rays using wavelengths, usually one uses keV.
If 2813 eV is personally inconvenient, feel free to divide by 1000 for 2.813 keV. Chemistry and crystallography find distances to be defining.
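As a quick sanity check of the numbers in the answer above, here is a short Python calculation. The constants are rounded, and the ~206.8 mass ratio is applied directly to hydrogen's 13.6 eV, ignoring the reduced-mass correction:

```python
RYDBERG_H = 13.6        # eV, hydrogen ground-state ionization energy
MASS_RATIO = 206.8      # m_mu / m_e (reduced-mass correction ignored)
HC = 1239.84            # eV*nm, photon energy-wavelength conversion factor

e_ionize = RYDBERG_H * MASS_RATIO   # ionization energy of muonic hydrogen, eV
wavelength = HC / e_ionize          # corresponding photon wavelength, nm

print(round(e_ionize), "eV,", round(wavelength, 3), "nm")
```

This reproduces both figures quoted in the answer: roughly 2813 eV, or a 0.441 nm photon.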