Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Enterprises large and small are relying on SharePoint to support ever-increasing information and workloads. The quantity and criticality of data and documents hosted in SharePoint means that downtime, data loss or corruption, and performance degradation become ever more problematic.
In recent years, Microsoft improved the storage optimization, backup and restore capabilities of SharePoint 2010 and SharePoint 2013. But do these features really meet the business needs of your enterprise? Can you rely on the recovery time objectives (RTOs) and recovery point objectives (RPOs) supported by out-of-box features to avoid the kind of data loss that produces a resume-generating event (RGE)?
The answers, unfortunately, are “no.”
In this highly valuable session, SharePoint MVP Dan Holme and Trevor Hellebuyck CTO at Metalogix will illuminate Microsoft’s definition of “data protection” and “granular restore”, and will identify the gaps that customers find between their requirements and the capabilities of SharePoint, SQL, and Data Protection Manager. Dan will clarify the issues, the technologies, and the difficult decisions you must make. While Trevor will demonstrate the solutions to overcome the difficulties. You’ll leave with a solid understanding of the scenarios, RTOs and RPOs that SharePoint supports, out-of-box, and where and how to augment your service to protect and optimize your SharePoint content.
Dan Holme, with an 18-year career as a Microsoft Technologies Evangelist at Intelliem, has reached hundreds of thousands of IT professionals, executives, and users at almost every Fortune 100 enterprise, and well more than half of the Fortune 500 and Global 1000. His deep experience solving customers’ IT and business challenges and educating the global technical community have earned Dan a reputation as one of the most respected and expert voices in the Microsoft technology community—a sought-after consultant, best-selling author, and renowned speaker. Dan, a native of Colorado, resident of Maui, and graduate of Yale and Thunderbird, has been recognized as an MVP for six years, and as one of the top ten partner MVPs in the world. Dan has penned hundreds of articles for SharePoint Pro Magazine and numerous bestselling books and training courses for Microsoft Learning. This summer, Dan served as the Microsoft Technologies Consultant for NBC Olympics during the broadcast of the 2012 Olympics in London, a role he also played in Torino, Beijing, and Vancouver. You can learn more about Dan, his work, and his contact information, at http://tiny.cc/danholme.
Trevor Hellebuyck is a recognized SharePoint innovator and the principal architect of StoragePoint, the ground-breaking storage optimization software for SharePoint, and leads our cross-product technology integration and new product innovation planning initiatives. Trevor joined Metalogix in 2010 with the acquisition of BlueThread Technologies. He designed and developed StoragePoint while serving as Chief Operating Officer (COO) of BlueThread Technologies, a company focused on developing applications for Microsoft products and technologies. StoragePoint has become the #1 Remote BLOB Storage (RBS) solution in the market. Prior to BlueThread, Trevor led technology teams in Enterprise Content Management (ECM) and Enterprise Application Integration at NuSoft Solutions, acquired by RCM Technologies in 2008. Trevor holds a bachelor’s degree from Western Michigan University in Computer Information Systems and resides in the Detroit metro area. He can be reached at [email protected]
|
OPCFW_CODE
|
"""
Useful functions for hashing rules and computing inside/outside.
"""
from numba import njit, int64
from numba.core.types import UniTuple
import numpy as np
__author__ = 'Haoran Peng'
__email__ = 'gavinsweden@gmail.com'
__license__ = 'MIT'
@njit(int64(int64, int64, int64))
def hash_forward(a, b, c):
return (a << 40) ^ (b << 20) ^ c
@njit(UniTuple(int64, 3)(int64,))
def hash_backward(h):
# 20bits mask, 1048575 = 2^20-1
mask = 1048575
c = h & mask
b = (h >> 20) & mask
a = (h >> 40) & mask
return (a, b, c)
@njit
def Tj(T2ij, T1j):
ans = np.zeros(T2ij.shape[0])
for i in range(T2ij.shape[0]):
for j in range(T2ij.shape[1]):
ans[i] += T2ij[i][j] * T1j[j]
return ans
@njit
def Ti(T2ij, T1i):
ans = np.zeros(T2ij.shape[1])
for j in range(T2ij.shape[1]):
for i in range(T2ij.shape[0]):
ans[j] += T2ij[i][j] * T1i[i]
return ans
@njit
def Tjk(T3ijk, T1j, T1k):
ans = np.zeros(T3ijk.shape[0])
for i in range(T3ijk.shape[0]):
for j in range(T3ijk.shape[1]):
for k in range(T3ijk.shape[2]):
ans[i] += T3ijk[i][j][k] * T1j[j] * T1k[k]
return ans
@njit
def Tij(T3ijk, T1i, T1j):
ans = np.zeros(T3ijk.shape[2])
for k in range(T3ijk.shape[2]):
for i in range(T3ijk.shape[0]):
for j in range(T3ijk.shape[1]):
ans[k] += T3ijk[i][j][k] * T1i[i] * T1j[j]
return ans
@njit
def Tik(T3ijk, T1i, T1k):
ans = np.zeros(T3ijk.shape[1])
for j in range(T3ijk.shape[1]):
for i in range(T3ijk.shape[0]):
for k in range(T3ijk.shape[2]):
ans[j] += T3ijk[i][j][k] * T1i[i] * T1k[k]
return ans
|
STACK_EDU
|
Far future: experiment with auto-detection of form fields and add acroform automatically
I think a combination of pdfminer and reportlab could help us recognize fields about as well as Adobe Acrobat does and then add them as acroform.
Some places to start:
https://www.blog.pythonlibrary.org/2018/05/29/creating-interactive-pdf-forms-in-reportlab-with-python/
This suggests creating a separate PDF with the form elements and then merging. Not clear if reportlab can edit an existing PDF instead of merging. This looks quite plausible and not too much effort--the hardest part would be heuristics to identify form fields. Starting with looking for horizontal lines and boxes and then scanning for text near those boxes.
https://thecodework.com/blog/dynamically-changing-pdf-acroforms-with-python-and-javascript/ makes use of a package that lets you directly write to the Acrobat data stream. We would need to understand the low-level PDF format that makes up a form. This should definitely work but would be very high effort.
Recognizing boxes: https://medium.com/coinmonks/a-box-detection-algorithm-for-any-image-containing-boxes-756c15d7ed26
This should be a place for us to gather more research on this idea.
How would you recognize lists of fields that group the same information? E.g., 3 repeating form sections, asking for child 1, child 2, child 3.
How do we recognize tables with columns of data?
Some random research links:
https://medium.com/coinmonks/a-box-detection-algorithm-for-any-image-containing-boxes-756c15d7ed26
https://neptune.ai/blog/image-processing-in-python-algorithms-tools-and-methods-you-should-know
https://stackoverflow.com/questions/46617779/learning-method-for-detecting-relevant-fields-in-forms-image-format
https://github.com/OlafenwaMoses/ImageAI
https://hub.packtpub.com/opencv-detecting-edges-lines-shapes/
Random progress on this
Went through a lot of this today, writing down thoughts here.
Not clear if reportlab can edit an existing PDF instead of merging
reportlab seems to be one of those "open core" projects, where a lot of functionality is hidden behind a paid package. I also didn't see anywhere in that link that talked about merging PDFs, so I'm not super sure what you were talking about there.
pdfrw does seem to be promising, I couldn't get their examples working, but it seems promising, so I'm gonna look into more stuff tomorrow.
The box detection algorithms that opencv uses (particularly the coinmonks one) worked pretty well: on a screen shot of a random ME form, it gets the lines where fields are great:
This is the original form:
This is everything but horizontal and vertical lines removed.
Doesn't seem to catch:
different sections: the whole second part enclosed by a box kinda looks like a potential field. Could be remedied by OCRing text inside that box
check boxes (either too small or, in this case, looks too much like characters). Could be remedied with OCRing the text)
That was from a manual screen shot, but I'm sure of the hundreds of pdf libraries can render an image from just the file (maybe this blog post).
I had some more notes on Trello I'll copy here. Card link: https://trello.com/c/DUZ3qqGU
First: instead of pdfrw, it looks like pikepdf is the modern replacement. They have equivalent functionality.
Create an acroform with reportlab: https://thecodework.com/blog/dynamically-changing-pdf-acroforms-with-python-and-javascript/ (note this might be possible with pikepdf too, this just seems pretty nicely OO without requiring you to know PDF low level layout)
I found some libraries specifically for recognizing checkboxes that seems to work very well: https://stackoverflow.com/questions/57260893/detect-horizontal-blank-lines-in-pdf-form-image-with-opencv
Line detection: https://stackoverflow.com/questions/57260893/detect-horizontal-blank-lines-in-pdf-form-image-with-opencv
The Trello card has some sample code I was playing with that worked decently. I think we would probably want to copy and combine the line detection with the box detection into one library instead of adding two dependencies that aren't frequently updated/actively maintained.
Darn, didn't realize you had been doing more work in the Trello :sweat_smile: That definitely solves most of the issues, I think pdf2image, pikepdf (still wary about using reportlab), boxdetect, opencv, and maybe ocrmypdfor tesseract to get where text is (still looking through those options).
I wonder if the [ ] style checkbox will work with image recognition, or we should do a second pass to specifically scan for that common combination of characters to mean a checkbox field.
I now have a working prototype that adds AcroForm fields to a blank PDF and then merges that PDF, I went ahead and made a PR on the form explorer for now, but it's pretty loosely coupled, so could move anywhere.
Most of the effort was trying to understand low level PDF stuff, specifically what parts of the AcroForm (and the annotations) and such to copy over from a good PDF.
The next step is to wire up some existing opencv code that I have to recognize places where fields could be added a the PDF, and to pass it to the new tool.
Also want to link here: https://www.nonprofittechy.com/2022/01/05/batch-convert-adobe-livecycle-xfa-forms-into-ordinary-pdfs/ the notes on flattening XFA forms for future use.
Future development on this is happening in the FormFyxer https://github.com/SuffolkLITLab/FormFyxer/. But we can integrate the feature now--we're already integrating the field normalization/renaming functionality.
|
GITHUB_ARCHIVE
|
RAID on a low-traffic server?
My company is buying a couple servers. I don't have a sysadmin background and neither does anyone at my company but I'm the "computer guy" so it falls to me to spec them out.
We were given the option of putting RAID on our servers but I declined since we already have RAID backups going at our office. My boss didn't understand why I declined the RAID and, honestly, neither do I.
My thinking is that if a server's hard drive fails, we can restore it from backup (to a different, clean server) and it wouldn't be a huge deal (we'd practice these recoveries, of course).
My boss's thinking (as I understand it) is that having RAID on our servers would mean no downtime (or at least significantly less downtime) because if the primary disk fails, the server can just switch to one of the other disks without missing a beat.
It's important to note that this server will be very low-traffic. In fact, it won't even be connected to the internet (except possibly to grab OS updates). It will mostly just be running automated jobs.
Since neither myself not my boss know a whole lot about this "RAID-on-a-server" idea, can someone shed some light on this for us? And to clarify, this isn't a me-vs-my-boss situation. We're just trying to find the right solution together.
Never count lower than 2. You need >2 disks, >2 NICs >2 Routers, Firewalls, Switches, Power supplies. You need at least 2 of everything in your server/datacentre to give you enough redundancy to sleep at night. Or at least that's how I feel about it. I'd never spec a server without some level of RAID. Hell, my workstation has a pair of 1TB disks in RAID 1.
RAID is not backup, it's redundancy. Not installing atleast RAID1 into production servers is something I would strongly recommend against.
Have you tried restoing a server from backup? Do you test it regulary? If so; then you know how long time it actually takes to get a server back up on it's feet in working condition.
Let's say it takes 8 hours to bring a server back up on it's feet from tape backup. Do you have a spare disk lying around? If not: add 20 hours. Can your company produce the same amount of $$ while missing the server for 28 hours?
RAID is a very cheap option to have some resilliency against disk failures, you should really implement this.
RAID is about availability. If a drive dies, you don't lose access to the server until you get a new hard disk.
But the OP is right in that he should have the backup for restore. Oh, and avoid RAID 5 now. Go RAID 10 or mirroring (RAID 1) if it's a low-usage server. Prefer RAID 10 though. And hardware RAID card, preferably with hardware to show blinkies and status of failed disks and allow hot-swapping drives.
@Bart, how do you feel about RAID 6 as an alternative to 5?
@Tom: Less common, can have overhead on write operations so it depends on what the server is used for, but it can survive two drive failures. From Wiki: RAID 6 provides protection against data loss during an array rebuild, when a second drive is lost, a bad block read is encountered, or when a human operator accidentally removes and replaces the wrong disk drive when attempting to replace a failed drive. - Therefore, I wouldn't mind it. Basically any RAID that can survive two drive failures is better in my book.
If you set up your server's disks to be mirrored (RAID 1) then you'll be all but guaranteed to never need downtime on the server due to a disk failure.
Personally, I'd say it's a good idea to have a mirror even on low-traffic servers just for the safety net it'll give you against hardware failures. It only costs one extra drive (plus the knowledge and time to configure the RAID) and saves you the time it would take to build out a new server from your backups.
Adding RAID to a server won't add that much to the cost of the hardware, shouldn't add anything significant to the cost of setup & maintenance, and has the potential to prevent downtime while you reinstall, restore backups, etc.
If you're basing it on cost, what you're really comparing is the cost of the extra hardware vs. the cost of your time for doing recovery when the drive fails. If it's a low-priority server doing automated processing, it's likely going to be in service until it dies - why not extend that time up front?
|
STACK_EXCHANGE
|
This is a method can be used for estimating the age and intensity of a founder event in a population. Details of the method and algorithm can be found in Tournebize et al. biorxiv 2020. Download code from here.
This is a method can be used for estimating the time of admixture in low coverage ancient genomes. Details of the method and algorithm can be found in Chintalapati et al. 2022 eLife. Download code from here. Old version can be found here.
Variation in molecular clock in primates
Multiple sequence alignments used in this study can be downloaded from here. Download code from here. For details of the methods and analysis, see Moorjani, Amorim et al. 2016 PNAS.
This is a method for estimating the date of Neanderthal gene flow using a single diploid genome. The main idea is to measure the extent of covariance in Neanderthal ancestry present in modern human genome to estimate the time of the Neanderthal gene flow. Details of the method and algorithm can be found in Moorjani et al. 2016 and Fu et al. 2014. Download code from here.
This simulator can be used for generating admixed genomes. As input the method takes phased data from two populations and generates admixed individuals for a given time of admixture and proportion of ancestry from each population. Method assumes instantaneous admixture but to generate data for multiple pulses or continuous admixture, one can run the same code in a loop. Details of the method can be found in Moorjani et al. 2011. For new implementation download code from here.
Rolloff: Method for dating admixture
For dating admixture in contemporary populations, one can use Rolloff which is available here. Details of the method and statistic can be found in Moorjani et al. 2013. This method was first introduced in Moorjani et al. 2011 and distributed as part of ADMIXTOOLS software package. The latest implementation is more reliable and robust to biases that can be introduced due to strong founder events that may postdate admixture.
South Asia genotype data
The genotype data used in Moorjani et al. (2013) Genetic Evidence for Recent Population Mixture in India is available upon request (due to the nature of the study consent, it is only available for population genetics studies). If interested in accessing this data, please email the corresponding authors with a signed letter (with your contact information) stating the following:
I affirm that
· The data will not be posted publicly.
· It will not be secondarily distributed to other people outside this collaboration.
· There will be no attempt to connect the data back to identifying information from any individuals in the study.
|
OPCFW_CODE
|
Vulnerability assessment tools were devised to detect security threats of the system causing potential threats to the applications. These include web application scanners that are tested, and the gauge is known to attack patterns through simulation. Protocol scanners search and scan protocols, ports, and network services.
The goal of the vulnerability assessment tool is to prevent unsanctioned access to systems. Vulnerability assessment tools help in maintaining confidentiality, integrity, and availability of the system. The system can mean any computers, networks, network devices, software, web application, cloud computing.
Top 15 vulnerability assessment tools:
- Nikto2: It is an open-source vulnerability scanning assessment software pivoting on web application security. Nikto2 can detect around 6700 malicious files causing a threat to web servers disclosing obsolete servers. Nikto2 watches on server configuration issues by performing web server scans within a short time. Nikto2 does not have any expedients to vulnerabilities detected, and also does not provide risk assessment features. Nikto2 is updated now and then for covering broader vulnerabilities.
- Netsparker: A tool with web application vulnerability embedded with an automated feature for detecting vulnerabilities. This tool is proficient in assessing vulnerabilities in several web-applications within a specified time.
- Open VAS: A robust vulnerability scanning tool supporting large-scale scans suited for organizations. This tool is beneficial in detecting vulnerabilities in the web application or web servers and databases, operating systems, networks, and virtual machines. Open VAS has daily access to updates widening the vulnerability detection coverage. It is useful in risk assessment recommending expedients for detecting vulnerabilities.
- W3AF: An untethered and open-source tool also known as web-application-attack and framework. An open-source assessment tool for web applications. It forms a framework securing web applications by detecting and making use of the vulnerabilities. A user-friendly tool with features of vulnerability scanning, W3AF has additional facilities for penetration testing purposes. Furthermore, W3AF has a varied collection of vulnerabilities. This tool is highly beneficial for domains that are at stake frequently with vulnerabilities that are recently identified.
- Arachni: An unwavering vulnerability tool for web applications and is regularly updated. This has a broader coverage of vulnerabilities and has options for risk assessment recommending tips and counter features for the vulnerabilities detected.
- Acunetix: A paid web assessment application security tool that is open-source with many purposes. This tool has a broader vulnerability scanning range covering 6500 vulnerabilities. It can detect network vulnerabilities along with web applications. A tool that allows automating your assessment. This is appropriate for large-scale organizations as it can manoeuvre several devices.
- Nmap: A popular and free open-source network assessment tool among many security professionals. Nmap maps by examining hosts in the network for identifying the operating systems. This feature is useful in finding vulnerabilities in single or multiple networks.
- Openscap: A structured assistance of tools that is useful in vulnerability scanning, assessment, measurement, forming a security measure. A community developed tool supporting Linux platforms. Openscap framework provides strength to the vulnerability assessment on web applications, servers, databases, operating systems, networks, and virtual machines. They also assess risk and counteract threats.
- Golismero: An unpaid open-source tool for assessing vulnerability. A tool specialized in detecting vulnerabilities on web applications and networks. A tool of convenience performing with the output provided by other vulnerability tools such as OpenVAS that combines output with the feedback. It also covers database and network vulnerabilities.
- Intruder: A paid tool for vulnerability assessment designed to assess cloud-based storage. Intruder software assesses the vulnerability instantly after it releases. An intruder has automated scanning features that persistently monitors for vulnerability, by providing quality reports.
- Comodo HackerProof: A tool inclusive of PCI scanning reducing cart abandonment, performing daily vulnerability assessment. To build trust and value from customers using the drive-by attack prevention feature is beneficial. A tool that has transitioned visitors to buyers. A safe platform for ensuring safe transactions with business and increasing the monetary abundance. Enjoy sophisticated security with patent-pending scanning technology, Sitelnspector.
- Aircrack: A framework of tools assessing the wifi network security assessing the packets and data, testing drivers and cards, cracking, and having an attack-response. This tool is also beneficial in restoring lost-keys by capturing the data packets.
- Retina CS Community: An open-source web-based tool paving the way for a centralized and apt vulnerability management system. A management system embedded with varied options like reporting, patching, and configuration compliance, ensuring the assessment of cross-platform vulnerability. A cost-efficient tool saving time and effort managing the network security. It is imbibed with automated vulnerability assessment for DBs, web applications, workstations, and servers. A support system for businesses and organizations with virtual environments with virtual app scanning and vCentre integration.
- Maze ransomware functions by maliciously encrypting the files and demanding a ransom for restoring the files.
- Nexpose: An open-source free tool with security experts using the tool for assessing the vulnerability of applications. The new vulnerabilities are saved in the nexpose database with the help of the Github community. This tool when used in combination with the Metaspoilt framework which is reliable by performing a detailed scan of the web application. It considers various aspects before generating a report.
A vulnerability assessment tool secures the system by identifying unauthorized security threats of accessing information. The threat occurs by manipulation of device networking configuration, the tools detect these activities putting an end to it. This also has regulatory compliance with an envisioning to assess out-of-process changes, audit configurations, and even rectify violations.
So, have you made up your mind to make a career in Cyber Security? Visit our Master Certificate in Cyber Security (Red Team) for further help. It is the first program in offensive technologies in India and allows learners to practice in a real-time simulated ecosystem, that will give them an edge in this competitive world.
|
OPCFW_CODE
|
This article is about Search Engine Optimization and its effects on your Hub.
You can think of SEO as “how to help Google find my stuff and rank me higher on search results”.
Before you read any further, I highly recommend reading: http://moz.com/beginners-guide-to-seo
SEO used to be all about “hacking” your website - getting the right words into your blog posts, the right amount of times, and linking back and forth between other sites. You also had to worry about the keyword “meta tag” (those <meta … > tags in the source of HTML pages near the top of the page) and make sure it had the right words, and the right amount of words.
Google declared in 2009 that it doesn't look at the keywords metatag for search ranking...
Google (and other search engines) used to behave more like robots than they do today. They've learned to be much more “human” in that they’re essentially trying to help people find what they’re looking for and measuring that key result like a human would. If your content is helpful, and people who search on Google and then go to your site actually engage with your site and stick around, your search ranking will go up. Sound familiar?
Yes, this is the basic premise of Content Marketing - be helpful.
So, the first step to making an Uberflip Hub SEO friendly is simple. Fill it with helpful, insightful, useful and/or enjoyable content. There’s no working around that.
Next, a Hub has some key components that work right out of the box:
- All URLs to content (Streams or Items) are automatically formatted to include that content’s title as the “slug”. They also include the ID of that content, which is of no value to SEO but is required for our system. So for example a Stream with the ID 1234 and the title “My awesome stream” will have a URL such as: https://hub.site.com/h/c/1234-my-awesome-stream
This is important for SEO. But what happens if I change my Stream title to “my kickass stream” ? We’ve taken care of that too. No matter what URL you try with that ID, the system will automatically 301 Redirect (http://moz.com/learn/seo/redirection) to the appropriate URL - which will now be: https://hub.site.com/h/c/1234-my-kickass-stream
This ensures content URLs always match the content they serve, and that old URLs that may be floating around the web do not:
- (a) get lost or redirect to a 404 “not found” page
- (b) generate the same content and essentially create duplicate content (http://moz.com/learn/seo/duplicate-content)
- All meta tags, such as <meta name=”description”> are automatically populated with the description or excerpt of that piece of content.
- All structural tags, such as <h1>, <h2>, <h3>, <article>, which search engines use to better understand the context of a particular page, are automatically applied in a logical way.
There are some best practices that are left to the Hub owner to decide where and when to apply.
Probably the biggest SEO concern (that can be easily addressed) is duplicate content. To quote Moz:
Duplicate content is content that appears on the Internet in more than one place (URL). When there are multiple pieces of identical content on the Internet, it is difficult for search engines to decide which version is more relevant to a given search query. To provide the best search experience, search engines will rarely show multiple duplicate pieces of content and thus, are forced to choose which version is most likely to be the original—or best.
If you think about the basic premise of a Hub, it’s to bring in content that exists elsewhere. And if not done carefully can produce A LOT of duplicate content.
There are 2 ways to address duplicate content.
- Telling search engines not to crawl a particular page - i.e. ignore it via a noindex/nofollow meta tag. The system will automatically apply this tag in the following cases:
- The Stream option “No Robots Meta Tag” has been checked. This will cause the Stream URL and all its Items (when accessed through this Stream) to be ignored by search engines
- The Stream is “hidden” - the same will apply as per above.
- An Item is “hidden” - in this case just this Item will have the noindex/nofollow meta tag applied
- Telling search engines where the originating content lives. this is done by another meta tag called the “Canonical” tag which basically includes a URL to the original content. This is a better technique in some cases, because at least the search engine can gain context about your page even though you’re telling them to list that other/originating content in their search results. This canonical tag is applied when the Stream option “Enable Canonical Meta Tag” is checked. When checked, all Items within that Stream will have their originating URLs populated in this meta tag. This has different effects for different types of content. The most obvious is a Blog RSS that was imported - each Article has an originating URL where that blog lives.
Sometimes Canonical tags point to another URL in the same Hub. This is what happens when you check the Stream option for a Marketing Stream - which by definition is a collection of Items from other Streams in your Hub. It’s recommended that this option be enabled on all Marketing Streams. In fact, when you create a Marketing Stream this option is pre-selected.
Flipbooks and SEO
A Flipbook is really a web-app in that its a single web-page that has an interface for consuming many PDF pages that are no longer in PDF format. A search engine would normally see a Flipbook URL as a single page, but we’ve devised a system whereby when a search engine hits a Flipbook URL it doesn’t see what humans see.
Here’s what you see on desktop for a one of our latest Flipped newsletters:
But here’s (roughly) what a search engine sees for the same page:
Since every Flipbook page can be linked to directly but appending the page number to the end of the URL, we have a system for “paginating” a Flipbook, displaying its text and links for each page along with images for each page, so that search engines can effectively “crawl” its content like they would any set of web pages.
Important: this is how direct Flipbook URLs can be crawled. When a Flipbook lives within a Hub, it is essentially an embedded iframe of that Flipbook - the same as a YouTube video is an iframe of that video inside a Hub. However, if you publish a Flipbook at the same subdomain as the Hub it lives in (which is best practice and in most cases will happen automatically), the SEO “juice” is applied to that subdomain just the same.
Subdomains vs Subdirectories
The final aspect of SEO that’s worth mentioning is subdomains vs subdirectories (or folders). It is said that a blog living at www.site.com/blog (subdirectory) is better for SEO than blog.site.com (subdomain). This is arguably true if looked at in a silo, and there’s also nothing we can do about Hubs having to live at a subdomain. We are a “hosted service” meaning that our customers cannot install our software on their servers - the only way to have their Hub live on one of their domains, is to setup a CNAME for one of their subdomains (anyone they want) and point it to our servers. However, when you consider all the other important aspects of SEO, this subdomain/subdirectory issue becomes a lot less important. Here’s an article about it:
and here’s a more recent one written by Dharmesh Shah (CTO/Co-founder of Hubspot)
When it comes to SEO, there is no substitute for good content.
All the technical requirements are automatically address by Uberflip’s system except for how to treat duplicate content, which needs to be decided case-by-case.
About the AuthorFollow on Twitter More Content by Yoav Schwartz
|
OPCFW_CODE
|
The tick (✓) also known as checkmark is often used to indicate the correct answer.
Insert a tick in Excel
The most popular way to insert a tick symbol in Excel is:
- Click a cell where you want to insert the symbol.
- Navigate to Insert >> Sybols >> Symbol.
- On the Symbols tab, inside Font type: Wingdings.
- Move to the end of the list, and select the tick symbol and click Insert.
There are two types of checkmark symbols. You can use any of them. The cross mark is often used next to a checkmark, so you can insert them as well. After you place them, it’s a good idea to change their colors, as you do with any other characters.
Insert a tick and assign it to a keyboard shortcut
Now, instead of using all these steps, we are going to create a macro and assign it to a keyboard shortcut.
Click the button in the bottom-left corner to start recording a macro.
From a new window, type a name for the macro.
Under Shortcut Key, inside small box use Shift + T. Now you will be able to use this macro with the Ctrl + Shift + T keyboard shortcut.
Now, Excel is going to save your every move.
Change font color to green and navigate to Home >> Font and choose the Wingdings font.
It’s necessary to do this step, otherwise, Excel will insert different type of character.
Move to Insert >> Symbols >> Symbol, and choose from Wingdings font the tick characters as you did before.
Click the Home tab, and change color to green and press Ctrl + Enter to stay inside the cell.
You can check your macro in View >> Macros >> Macros.
Your code should be similar to this.
Sub Tick() ' ' Tick Macro ' ' Keyboard Shortcut: Ctrl+Shift+T ' With Selection.Font .ThemeColor = xlThemeColorAccent6 .TintAndShade = -0.249977111117893 End With With Selection.Font .Name = "Wingdings" .Size = 11 .Strikethrough = False .Superscript = False .Subscript = False .OutlineFont = False .Shadow = False .Underline = xlUnderlineStyleNone .ThemeColor = xlThemeColorAccent6 .TintAndShade = -0.249977111117893 .ThemeFont = xlThemeFontNone End With Selection.FormulaR1C1 = "ü" End Sub
Excel usually ads a lot of unnecessary code. You can remove it, so it will look like this.
Sub Tick() With Selection.Font .Name = "Wingdings" .ThemeColor = xlThemeColorAccent6 End With ActiveCell.FormulaR1C1 = "ü" End Sub
Now, you can just use Ctrl + Shift + T (or any other shortcut) to insert the green tick symbol.
|
OPCFW_CODE
|
Since the last blog I’ve been working towards the new vision I set out last time; I’ve successfully eliminated infinitely long tanks and implemented an upgrade screen to allow the player to fine tune their much smaller chaintank:
As I said in my last blog, I’ve opted to go with a new implementation for vehicles that limits them to six vehicles each. Since the player’s chain is now so small, it’s not acceptable to let the weapon or upgrade level for each segment be random. This screen was necessary to let the player adjust the size of their vehicle, upgrade segments, and change what weapon each one is equipped with.
Removing or downgrading parts doesn’t have any penalty to it. I figured it would be best to let the player experiment with it as much as they wanted and not be punished, especially since it’s a small game that will probably only be played through once.
Implementing this screen was time consuming, and there’s not really a way around that short of going with a less complex UI. That’s not a bad idea really, since the one I’ve made seems quite crowded. I do have some ideas on how to simplify it a little bit, but I don’t plan to redo completely it at the moment as I want to focus on actually finishing the game. Revisiting this screen might be something I’ll do during the final polish phase, however.
After finishing the upgrade screen, I focused my efforts on getting the new level system up and running. The largest part of the work by far was creating the art for the levels. It turned out okay, though not amazing.
Again, I may or may not revisit this. If my goal is to develop my skills in as many areas of game development as possible, I’d be better served by finishing Chaintanks so I can finally learn the process of actually shipping a game. Not only that, but I’d avoid falling into the trap of leaving a dozen dead games behind me as I try to actually finish one.
To call the game complete, I’d ideally like to have art for two more tilesets so I can have more variety in the levels. I may settle for one extra though if that takes too much time. One of the lessons I’m beginning to grasp in the creation of this game is just how precious time is. It’s entirely too easy to spend weeks on a screen or a bunch of art only to find in hindsight that it wasn’t very good, or that the game would have been fine without it.
Lastly, I finished a few smaller features as well. The first was the actual implementation of the maps. For this, I decided to just make each map a scene in Unity. I’m not sure if this is the best approach from a programming standpoint, but it’s a lot less work than serializing data or creating a level editor. If it gets the game done faster then I’m all for it!
The new maps also required edge tiles that block vehicle movement. My solution was a bit hacky, but I was able to implement it quickly and get it working within an hour or so. They don’t block bullets at the moment, but that will barely take any time at all
Adding these edge tiles meant that my old spawning system needed to be thrown out. At first I had some weirdly complicated idea of trying to reuse the existing system of spawning an enemy somewhere in a fixed distance from the player, but also trying to detect if it was in a valid location. When I actually sat down to implement it, I realized it would be MUCH simpler if I simply added spawn points to the level maps and had the game pick one at random that isn’t too close to the player. This solution took less than an hour to implement!
After working for so long on Alpha Strike and taking hours to implement even the simplest features, working in Unity is a huge breath of fresh air. These features took so little effort to code, and yet they’re crucial in that they’re some of the final pieces needed to get the level system fully working.
Making my own engine was an interesting challenge and I’m a better programmer for it, but it takes much more than raw programming skills to make a game; most notably, time! Just by spending so much less time on programming my dream of one day making my own games for a living is so much more feasible.
In the meantime, there are a few bugs I haven’t fixed yet, but with this done the only major feature I have left is the final boss! That is, if I don’t choose to cut that feature.
At the moment I’m fairly sure that I’m won’t be cutting the final boss. However, if I can find a way of ending the game that isn’t just the same old ‘kill the big thing at the end’ trope that so many games have done before, then I’d prefer that over a standard boss fight.
I’m not too worried about it though. If I manage to think up a better way to end the game then great, and if not, I’m not too worried about it.
There’s still a good bit of work left to do, but it’s finally beginning to feel like the end is in sight. For now I’m going to continue to focus on getting this game completed, avoid adding new features, and make myself more comfortable with calling things done! With any luck, it’ll be done soon™!
|
OPCFW_CODE
|
In this article, I will explain the term in distributed computing called Partitioning so I will explain the following points
- What is Partitioning meaning?
- When do I need it?
- Explaining the kinds of Partitioning with advantages and disadvantages
- Secondary index and kinds of Secondary index
- Rebalancing and why I need Rebalancing?
- Different Rebalancing strategies with advantages and disadvantages
- Different ways for clients to know about which node contains target partition ( Request routing)
What is Partitioning meaning?
The term Partitioning simply coming from away that every part of our data such as records is owned by one partition so every partition can be considered a small DB.
When do I need it?
When our data became very large and it can not be stored on a single machine and the query to a large volume of data on a single machine became very hard and the scalability now is important the partitioning can solve this problem as a different partition can be placed on a different node on our cluster and the large volume of data can be distributed on many disks and our queries can be done on different processors as we can scale by adding more machines on our cluster.
Ways of Data Partitioning
As we explained we need to distribute our data on a small unit (Partition)
and to distribute the data and the query load equally across the nodes
so Actually this can be done in different approaches
This kind of partitioning is like having a simple key-value data model and always getting the value by the key so we can partition the data through the primary key but in many cases, this way can not be efficient as a partition key design that doesn’t distribute requests evenly can make some partitions have more data or queries than others and that called skewed and on the worst case all the data and loads wind up on a single only one partition and this is some use case to explain when this kind can be good or bad
- User ID, where the application has many users (Good Case)
- Item creation date rounded to the nearest time period (for example, day, hour, or minute) (Bad Case)
On other hand, we can assign a constant range of keys to every partition from N to N of keys and if we know the borderline between these ranges we can simply know which partition include our key or we can directly ask the node if we already know the which partition assigned to which node.
The ranges of keys do not surely have the same space because it depends on the partition boundaries. for example, if you have volumes and one of them contains words starting with letters A and B and another volume starting with V, X, Y, and Z it will end up with some volumes much bigger than others so if you want to equally distribute your data the partition boundaries need to adapt to the data.
The boundaries might be set manually or the database engine supports to set it automatically.
This strategy is used by HBase, BigTable, RethinkDB, and others.
The advantage of this strategy is you can keep keys sorted that range scans are easy and you can fetch several related records by treating the key as a concatenated index, for example (year-month-day-hour-minute-second).
The disadvantage of this strategy is the particular access patterns can end up with hot spot problems, for example, if the key is a timestamp then the partition correspond to ranges of time, one per day all data for today end up going to the same partition and the partition can be simply overloaded to avoid this issue you can rely on another key or by for example prefixing the timestamp with the machine name in case of sensors data for example.
- Hash of key
As we explained before the issue of the hot spot and skew a lot of distributed data systems use a hash function to determine the partition of a given key.
by the way hash function is any function that can be used to map data of arbitrary size to fixed-size values.
the data is distributed evenly and the skew / hot spot avoiding as much as the hash function is perfect. for example, if you have a 32-bit hash function that takes a string whenever you give it a new string, it returns an apparently random number between 0 and 2³² even if the strings are very similar the hashes are equally distributed across the range of numbers.
You can now assign each partition a range of hashes and every key whose has falls within a partition`s range will be stored in the partition.
regrettably, we lost a good feature of key-range that we can do efficient range queries as the key is now scattered across partitions so the sort order is lost as well for example in MongoDB if we enabled hash-based Partitioning mode any range query has to be sent to all Partitions.
As we explain before the partitioning schemes as we have discussed the key-value and we can only access the record through the primary key and we can determine the partition from this key so we can easily know the record in which partition but what can for example if we have data about car and the car_id is the primary key how we can get the red cars as some of them can be stored in different partitions. a secondary index is used commonly in relational databases and document databases as well but NoSQL key-value databases like HBase avoided it because it added implementation complexity but some stores have started adding it because so very useful for data modeling such as Riak. Actually, the problem with the secondary index is that it does not map cleanly to partitions so we have two kinds of secondary index partitioning document-based and term-based.
Consider we have an online system for selling cars and we are using a primary unique id and we partition our database using this key so for example car IDs from 0 to 500 in partition 0 and so on . now our customer need to filter the black cars or BMW cars for example here you need a scondary index on the color and brand these would be fields in document databases or columns in relational databases. if you have declared the index the database can perform the indexing automaticlly whenever the black car is added to the database the database partation automaticlly adds it to the list of primary keys IDs for the index entry. In this indexing type each partition is entirely separate as each partition maintains its own secondery indexes and it does not care what the data is stored in other partitions you only deal with the partition that contains the primary key ID or doucment ID. so if some of the black cars in partation and others on another partations and if you want to search for black cars you have to send the query to all partitions and combine all the results. this approche is sometimes knows as scatter/gather and it can make read queries on secondary index costly and it widly used in ElasticSearch and cassandraDB and others. most of database vendros recommend thay you structure your parationing sechema so that the seconday index queries can be served from a single paration but this is not always possible mainly when you are using multiple seconday index such as color and brand.
Instead of each partition having its own secondary index we can build a global index that covers the data in all partitions. A global index must also be partitioned individually the primary key index for example if we have black cars from all partitions appear under color:black in the index but the index also partitioned so that colors starting with letters A and K apprear in partition 0 and others in partition 1 and so on we call this type of index term-partitioned. so rather than make request to all partitions with scatter/gather cost now we just know the partition that match our term like car color starting from A and so on but the bottleneck there is the writes aee slower and more complicated because write to a single document may now affect multiple partitions.
I will exaplin the reblancing in the part 2 of the article.
- Design data intensive application
- Introduction to Reliable and Secure Distributed Programming
|
OPCFW_CODE
|
We have to upgrade the RAM on my HP workstation.
Which DDR2 RAM would be best suited for my machine and SW2010?
I have 5 slots available.(see profile for other specs)
Your Motherboard determines what RAM you can use. I would look up the make and model of the motherboard and get the data sheet on it, that will tell you how much, what speed, and what configurations. I know in the past that another rule of thumb was that all of your ram modules (sticks) should be the same. so if you have 2 sticks in your machine they should both be the same size and speed.
Generally I say to get the fastest speed your board can handle. Size is really determined by what you need and can run (4GB is the max on a 32-bit system).
As for manufacturers, personally I always use Kingston. I have tried others and have gotten burned in the past by getting parts in doa, etc. Maybe its bad luck on my end, but I have never had any problems with Kingston.
Websites like www.crucial.com are good for finding exactly what to get for your particular machine. Just plug in the make/model etc and it will list all you need to know.
try http://www.kingston.com/ as well
more important than the brand, is who you get it from, how it is handled and whether or not is is actually new rather than recycled .... muy importante!
Just a heads up, you will want to check what memory speed your processor can handle as well. Some motherboards can handle higher memory speeds than what some processors are capable of and vice versa. Basically stick with the same speed you've already got as far as memory; also see if your system is set up for dual channel or ECC. If dual channel, you may want to consider buying "matched" sticks of ram.
Your Profile does not mention if you are running a 32 bit or 64 bit Operating system so I will go with 32 bit specs. Your system also can use various
Proccessors that you would be able to use different Memory for better performance due to Overclocking. The following should perform well for the basic system
and will also work with the OC Proccessors. If you are OC'ing these Memory modules are not good canidates to OC.
You need to install 4GB of RAM from the same manufacturer. Ensure all modules are the same kind of memory, whether that'sfour 1GB sticks, or two 2GB sticks the rule holds pretty true.
2GB Kit Corsair Twin2X1024-6400C4 (200/800)2GB Kit Patriot PDC22G800+XBLK (200/800)Crucial PC2-6400 4GB DDR-2 Memory Kit
Retrieving data ...
|
OPCFW_CODE
|
You know what it's like when you need to be able to execute a particular command, perhaps one you've written yourself, from any Windows XP command window without having to specify the full path to the executable. The directory could be added to the PATH environment variable, but getting to it is such a pain in the arse:
Start > Run > sysdm.cpl [ENTER] > Advanced Tab > Environment Variables button > choose variable to edit > Edit button > Edit the path > OK > OK > OK
It's possible to add directories to the Windows XP PATH environment variable from the command line:
Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\> my_some_app_command -V 'my_some_app_command' is not recognized as an internal or external command, operable program or batch file C:\> path=%PATH%;C:\Program Files\some_app C:\> my_some_app_command -V Version 1.0 C:\>
however this is a temporary change to the PATH environment variable and exists only in that command window and disappears once the window is closed. This is because the path command only appends to a copy of the PATH environment variable (which was created when the command window was opened) and not to the original PATH. When the window is closed the copy is destroyed along with any changes you made.
There is a nifty tool in the Windows XP Service Pack 2 Support Tools named setx (available even if you have service pack 3) which can quickly make permanent changes to environment variables from the command line. The setx command modifies the actual PATH and not the copy in use by the currently open command window - so here I'll show you how to modify both in a couple of quick steps so that you can begin working with your new PATH in the current window whilst also permanently updating your PATH.
Once you've installed the support tools (accepting all the default options such as the install location and "Typical Install") they will be found in C:\Program Files\Support Tools. Open a command window from any directory:
C:\> setx 'setx' is not recognized as an internal or external command, operable program or batch file.
The Support Tools directory is not in our path so lets add it temporarily so that we can use setx from here and at the same time, we'll add our some_app directory so that we can begin work:
C:\> path = %PATH%;C:\Program Files\Support Tools;C:\Program Files\some_app
These directories are now in the copy of the PATH environment variable in use by this command window. Now let's permanently update the PATH:
C:\> setx path "%PATH%"
This has set PATH to the same value as the copy of PATH in use by this window and we're good to go in this or any future command windows.
Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\> echo %PATH% C:\WINDOWS;C:\WINDOWS\system32;C:\Program Files\Support Tools;C:\Program Files\some_app C:\>
|
OPCFW_CODE
|
Why doesn't disk io occur to read file system metadata despite clearing dentry and inode with drop_cache command?
As the title says, I am confused about "echo # > /proc/sys/vm/drop_caches" and blockdev --flushbufs.
http://pages.cs.wisc.edu/~remzi/OSTEP/file-implementation.pdf
According to OSSTEP, If the target inodes are not cached in memory, disk IO should be occur to read the inode, which will make a dentry data structure on memory.
To my knowledge, echo 3 > /proc/sys/vm/drop_caches is to drop(clear) page cahche, inodes and dentry.
I have experimented with blktrace to figure out whether disk io is really occurring to read the inode.
1) echo 3 > /proc/sys/vm/drop_caches
> 259,0 38 1 0.000000000 40641 Q R 109314048 + 32 [cat]
> 259,0 38 2 0.000004096 40641 G R 109314048 + 32 [cat]
> 259,0 38 3 0.000027108 40641 UT N [cat] 1
> 259,0 38 4 0.000027862 40641 I R 109314048 + 32 [cat]
> 259,0 38 5 0.000036393 40641 D R 109314048 + 32 [cat]
> 259,0 38 6 0.006268251 0 C R 109314048 + 32 [0]
However, there is no disk io to read inode. I can only see the disk io to read 16KB data block.
2) echo 3 > /proc/sys/vm/drop_caches and blockdev --flushbufs /dev/nvme0n1
259,0 1 1 0.000000000 325 Q RM 74232 + 8 [cat]
259,0 1 2 0.000004854 325 G RM 74232 + 8 [cat]
259,0 1 3 0.000026263 325 D RM 74232 + 8 [cat]
259,0 1 4 0.006292470 0 C RM 74232 + 8 [0]
259,0 1 5 0.006382162 325 Q RM 109052160 + 8 [cat]
259,0 1 6 0.006385621 325 G RM 109052160 + 8 [cat]
259,0 1 7 0.006393322 325 D RM 109052160 + 8 [cat]
259,0 1 8 0.006455750 0 C RM 109052160 + 8 [0]
259,0 1 9 0.006511245 325 Q RM 109117696 + 8 [cat]
259,0 1 10 0.006512342 325 G RM 109117696 + 8 [cat]
259,0 1 11 0.006514627 325 D RM 109117696 + 8 [cat]
259,0 1 12 0.006591933 0 C RM 109117696 + 8 [0]
259,0 1 13 0.006624544 325 Q RM 109117704 + 8 [cat]
259,0 1 14 0.006625538 325 G RM 109117704 + 8 [cat]
259,0 1 15 0.006627567 325 D RM 109117704 + 8 [cat]
259,0 1 16 0.006688973 0 C RM 109117704 + 8 [0]
259,0 1 17 0.006764838 325 Q R 109314048 + 32 [cat]
259,0 1 18 0.006766035 325 G R 109314048 + 32 [cat]
259,0 1 19 0.006768078 325 UT N [cat] 1
259,0 1 20 0.006768755 325 I R 109314048 + 32 [cat]
259,0 1 21 0.006773426 325 D R 109314048 + 32 [cat]
259,0 1 22 0.006854480 0 C R 109314048 + 32 [0]
I found block access (+8(512*8=4KB)) to read inode.
A quick look at how blockdev --flushbufs works in the kernel code shows that it clears the superblock.
Why doesn't disk io to read inodes with drop_cache alone?
The ULK book says that inodes or superblocks are cached in buffer-cache.
The book ULK says that inodes and superblocks are cached in buffer-cache.
Is this the reason for this?
Thank you
|
STACK_EXCHANGE
|
How to use grep to search ALL folders for text?
I'm having issues with searching through ALL the directories at once with grep. When I use the command:
find . -name "*.txt" | xargs grep texthere
It just takes forever and then gives me "no such file or directory" errors.
Why is this happening and is there nothing easier than grep? Or am I using the wrong command?
If you have GNU grep, the grep itself supports "-r" option to search recursively, as suggested by @Ouroborus.
If unfortunately your grep does not support such option, like in SunOS, you can use following commands instead:
find . -name "*.txt" -exec grep -n your_pattern {} /dev/null \;
The tricky is /dev/null added to ensure each execution of grep has two files (the file matched *.txt and the /dev/null) provided, forcing it print the name of files being searched. And you can add -type f option of find to refine the scope of find.
Wouldn't this be slower since it invokes grep once for each file?
As in this answer, you're probably better off with:
grep -rnw '/path/to/somewhere/' -e "pattern"
-r or -R is recursive,
-n is line number, and
-w stands match the whole word.
-l (lower-case L) can be added to just give the file name of matching files.
If you're getting errors about permissions (you don't say you do), then I'm guessing you're standing in either the root directory (/) or in some path where you don't have permission to read all files, such as in /etc or in /var. But since you say it takes an awfully long time, I'm leaning more towards the first assumption (the root directory).
If you want to search absolutely all files on the whole system, than what you're doing is pretty much right. It will take a long time no matter what you do. It's just an awful lot of files to search.
You can use find to narrow down the amount of files to look at.
At the moment, you have
$ find . -name "*.txt" | xargs grep texthere
Since it looks like you're only interested in plain text files, we can exclude any other type of file (executables):
$ find / -type f \! -perm -o=x -name "*.txt" | xargs grep texthere
I replaced . with / here, because that's where I think you are (correct me if I'm wrong). I'm also specifying that I want files (-type f) that are not executable (\! -perm -o=x) (the ! needs to be escaped so that your shell doesn't do funny things with it).
Now, there's a couple of more things we can do. One is a safety thing, and the other may possibly improve speed a tiny bit.
Some file names might have spaces in them, or other wonky character that we usually don't want in file names. To be able to pass these properly between find and grep we do
$ find / -type f \! -perm -o=x -name "*.txt" -print0 | xargs -0 grep texthere
This (-print0) means each file name will be delimited by a nul character (\0) rather than a space character. And the corresponding option to xargs for receiving these as nul-delimited file names is -0.
It's because you're not using those two option that I believe you get those "no such file or directory" errors.
The speed thing is fgrep. This is a utility which is exactly equivalent to grep -F that you should use if your search string is a plain text string with no regular expressions in it.
So, if you have a plain text string and you want to look for it in every single file on the whole system:
$ find / -type f \! -perm -o=x -name "*.txt" -print0 | xargs -0 fgrep texthere
The permission thing... Obviously there will be files that you aren't able to read. You can either run find and fgrep with sudo prepended to them:
$ sudo find ... | xargs -0 sudo fgrep texthere
Or you could try to craft another -perm flag for find (and also get it to ignore directories that you can't enter), but that would be too time consuming and result in a ridiculously long command line, so I won't do that here.
Or you could just sudo -s to get a root shell and run the thing from there... but I would generally advise against that because people have a tendency to mess up their systems by forgetting they are root.
ANOTHER SOLUTION would be to use the locate command to locate all the .txt files on the whole system. The locate command doesn't search the file hierarchy but instead uses a database (which is usually updated daily, so not all files may be there). The database does only contain files in directories accessible by all users, so if you have removed the read-permissions on your own directories, your files won't be there.
So, replacing the find command above with the locate almost-equivalent:
$ locate '*.txt' -0 | xargs -0 fgrep texthere
The -0 option (or --null on some systems) corresponds to the --print0 option of find.
|
STACK_EXCHANGE
|
- Download Lean 3.4.2 from https://leanprover.github.io/download/
- Extract it, and update PATH environmental variable so command
leancan be executed on the command prompt
- Download & install z3 from https://github.com/Z3Prover/z3 and update PATH so
z3can be executed as well
leanpkg configureto install SMT lib interface and mathlib
# Run selected tests from Alive's test suite (which contain # no precondition and do not require additional grammars) ./run-alive.sh
# Run random tests for the specification of Z3 expression - # concrete value, as well as 4 admitted arithmetic lemmas. # Note that bv_equiv.zext/sext/trunc will have 'omitted' tests # because sometimes generated expressions try to compare # bitvectors with different bitwidths. ./run-proptest.sh
# Run random tests for the specification of LLVM assembly language. # Set clang path to yours by modifying the script. ./run-irtest.sh
- Specification, as well as proof, is in
- Execution of
bigstepwith two different value semantics (SMT expr / concrete value) has some good relations.
def encode (ss:irstate_smt) (se:irstate_exec) (η:freevar.env) := irstate_equiv (η⟦ss⟧) se def bigstep_both:= ∀ ss se (p:program) oss' ose' η (HENC:encode ss se η) (HOSS': oss' = bigstep irsem_smt ss p) (HOSE': ose' = bigstep irsem_exec se p), none_or_some oss' ose' (λ ss' se', encode ss' se' η) -- Its proof is at equiv.lean
- We can generate initial state correctly.
def init_state_encode:= ∀ (freevars:list (string × ty)) (sg sg':std_gen) ise iss (HUNQ: list.unique $ freevars.map prod.fst) (HIE:(ise, sg') = create_init_state_exec freevars sg) (HIS:iss = create_init_state_smt freevars), ∃ η, encode iss ise η -- Its proof is at initialstate.lean
- If refinement checking function
check_single_reg0says it's true, refinement indeed holds.
def refines_single_reg_correct := ∀ (psrc ptgt:program) (root:string) (ss0:irstate_smt) sb (HSREF:some sb = check_single_reg0 irsem_smt psrc ptgt root ss0) (HEQ:∀ (η0:freevar.env) e, b_equiv (η0⟦sb⟧) e → e = tt), root_refines_smt psrc ptgt ss0 root -- Its proof is at refinement.lean
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
|
OPCFW_CODE
|
An administrator can change the following location settings in Administration > Site administration > Location > Location settings.
This sets the default timezone for date display.
"Server's local time" here will make Moodle default to the server timezone as defined by the “Default timezone” setting in PHP. You can view this in Site administration > Server > PHP Info > Date > Default timezone. (When this term appears in the user profile it means something slightly different. See the next setting for more.)
If you want to explicitly set a Timezone in Moodle instead of relying on the server one, please do not choose one of the UTC time zone settings in this list that come with a default Moodle installation. It is recommended that you select a named timezone specific to your Area and Location (e.g. America/New_York) instead of a UTC setting if one is available, unless you have a very specific reason not to. Using the UTC settings will cause Scheduled tasks to behave unexpectedly, and in addition, daylight savings will never be applied
To get or update the complete list of timezones from Site administration > Location > Update timezones before setting this.
Such settings are preferred to UTC settings because they aware of [local time zone shifts] such as Summer Time or Daylight Savings Time while UTC is not.
Force default timezone
Pull down the menu to force the user to be in a specific timezone or allow users to individually select their timezone.
When you allow the User's can set their own time zone setting, users will see this in their own profile in Timezone as "Server's local time," which means the Moodle default timezone. So if the Moodle default timezone is set, for example, to America/Montreal, the user will not see America/Montreal that in their profile, but will see "Server's local time" instead, but it means America/Montreal.
If you do force a particular time zone, the users will see the explicit name of the time zone in their Profile > Timezone and will be not be able to change it.
Select the country to appear by default on a new user account page form.
IP address lookup
GeoIP City data file
Location of GeoIP City binary data file. This is a non-invasive way to determine geographical and other information about Internet visitors in real-time. This file is not part of Moodle distribution and must be obtained separately from MaxMind. There is a GeoIPLite version for free.
Google Maps API key
After updating the information in this section, IP's that are displayed as a link, such as in reports, when clicked will open new window with a Google Map indicating the location of the IP, if found and if not a private address.
The update timezones page Site administration > Location > Update timezones provides administrators with the option to update their local database with new information about world timezones.
This is important because of daylight saving changes that many countries use. You should generally use these regional location settings in preference to UTC unless you have a very specific reason to use UTC instead.
If the update is completed with success, Moodle will inform you how many entries were updated and which location was used as a source.
This naming convention on the list of Area/Location is based on the IANA Time Zone standard: see here for more details.
Typically, there will be one major city or location chosen to represent the time zone. Countries are not used. So you will need to pick the representative city or location for the time zone you wish Moodle to use.
For example, the following are the standard time zones in the continental U.S.:
- America/New_York = Eastern Time
- America/Chicago = Central Time
- America/Denver = Mountain Time
- America/Los_Angeles = Pacific Time
However, if you live in the state of Arizona, which does not observe Daylight Savings Time, you should choose America/Phoenix instead of America/Denver in order to properly observer the time change twice a year. Likewise for other exceptional areas in some states like Indiana, which have local time zone changes.
- Moodle 2 Administration - Location MoodleBites video on YouTube
|
OPCFW_CODE
|
Lift the release tab on the latch drive bracket 1 for the drive you want to remove, then slide the drive from its drive bay 2. As well as actually reading my post completely and answering all my questions. Press the card straight down into the expansion socket on the system board 2. The slot on the far right should be used when a port cover is installed. Figure 3-5 Removing a Bezel Blank Replacing the Front Bezel. I really appreciate the time you take your cover everything discussed. To reattach the Smart Cover Lock, secure the lock in place with the tamper-proof screws.
Use the key provided to disengage the lock. If you are installing an optical drive, connect the power cable 1 and data cable 2 to the back of the drive. By risky, I mean that you can turn it into a device with the response and value of a brick. Otherwise, keep the existing version as long as possible. To install a drive into the 3. If it ain't broke - don't fix it.
Use the hole in the bracket that best secures the peripheral device cable. To download the necessary driver, select a device from the menu below that you need a driver for and follow the link to download. Must be something to do with windows 10 on this old machine. Table 1-2 Front Panel Components 5. Rotate up the internal drive bay housing to access the memory module sockets on the system board. After searching through the support forum I read that this is any version 8 bios or after. Insert the hooks on the port cover into the slots on the rear of the chassis, then slide the cover to the right to secure it in place 2.
Figure 4-46 Threading the Keyboard and Mouse Cables Screw the lock to the chassis in the thumbscrew hole using the screw provided. To remove a bezel blank: Remove the access panel and front bezel. Figure 3-21 Sliding the Drives into the Drive Cage Connect the power and data cables to the drive as indicated in the following illustrations. You need this driver for that device. To install the port cover: Thread the cables through the bottom hole on the port cover 1 and connect the cables to the rear ports on the computer. Figure 4-38 Installing the Hard Drive Connect the power cable 1 and data cable 2 to the back of the hard drive. Any thoughts and recommendations are welcome.
Figure 2-47 Engaging the Lock When complete, all devices in your workstation will be secured. For some reason I have the issue currently and on only 2. Plug in the computer and turn on power to the computer. Figure B-3 Removing the Security Screws. No affiliation or endorsement is intended or implied. Lift the card straight up to remove it. Figure 4-19 Disconnecting the Power and Data Cables Rotate the drive cage back down to its normal position.
While lifting the release tab, slide the drive from its drive bay 2. Select the F10 Setup menu and hit the enter key. I work for the Army and we have over 12 thousand system. In other words can I go directly to this bios or must I install all bioses before this one in order? I have read about the delayed restart associated with 2. Tighten the thumbscrew to secure the access panel 2. Before you remove the old hard drive, be sure to back up the data from the old hard drive so that you can transfer the data to the new hard drive.
Here is their position on the subject. The hard drive is housed in a carrier that can be quickly and easily removed from the drive bay. Refer to Installing Drives on page 37 for an illustration of the extra M3 metric guide screws location. It updates the Intel Management Engine firmware to version 8. However after much reading on this forum it seems some bioses version 8. They would have been developed at least a year ago if that were the case.
Figure 4-26 Rotating the Drive Cage Down Replace the front bezel if removed and access panel. Warnings and Cautions Before performing upgrades be sure to carefully read all of the applicable instructions, cautions, and warnings in this guide. Figure 2-46 Attaching the Lock to the Chassis Installing a Security Lock. Will I need to go back to windows 7 or 8 from the current win10 to install this bios, and then back to win 10? There is a fix, but the fix would prevent users from running Bitlocker encryption. Gently pull the subpanel, with the bezel blanks secured in it, away from the front bezel, then remove the desired bezel blank.
Using the Smart Cover FailSafe Key to Remove the Smart Cover Lock 165. The screw hole is located on the left edge of the chassis next to the top hard drive bay. Figure 3-25 Removing a Hard Drive Remove the four guide screws two on each side from the old drive. Have 7-Zip Extract to: And let it extract the file into its folder name. Open both latches of the memory module socket 1 , and insert the memory module into the socket 2.
Drivers are the property and the responsibility of their respective manufacturers, and may also be available for free directly from manufacturers' websites. Download and save, but do not run sp73099. Figure 5-30 Securing Peripheral Devices Printer Shown Thread the keyboard and mouse cables through the computer chassis lock. In order to facilitate the search for the necessary driver, choose one of the Search methods: either by Device Name by clicking on a particular item, i. Figure 5-16 Removing the Hard Drive Carrier Remove the four guide screws from the sides of the hard drive carrier. Refer to Installing and Removing Drives on page 106 for an illustration of the extra M3 metric guide screws location.
|
OPCFW_CODE
|
This is quite a serious error, as far as operating systems go. When it works Windows live has stopped showing full page pictures, or defaults to its own 'web pictures'. vrm There is a lot of misinformation going around about windows 10 (among other things). Instructions Lets try to get this project featured so we can have the largest error message gallery on Scratch! http://speciii.com/error-message/windows-error-message-gallery.html
this happens several times a day to me now, and its getting worse, and whenever it happens Windows does not shut down properly, so I have to sit through scandisk, a Just doing a simple "netstat -a" in a DOS box. Start Menu/Shortcuts hangs. 1999/02/05. Windows Crash Gallery page 1 by David Joffe Page 1 Page 2 Page 3 Page 4 Page 5 Page 6 Page 7 Page 8 Page 9 Page 10 Page 11 12. http://telcontar.net/store/archive/CrashGallery/
Vista 1 13. Now how do I get support for this crash? You too can try this at home! (At your own risk. This is one of the reasons I now refuse to use Windows at all for Internet access.) Luckily the gimp ftp site supports restarting broken downloads. (Windows NT ftp servers do
is caused by a defective simm/dimm. Help Reply Mike says: February 7, 2014 at 8:15 pm This doesn't work in Windows 8. I had an interesting non-crash today; I was sending data to the shared printer, and Windows popped up a message, "There was an error printing. Error Message Generator Text Contact Maintainers Contact Site Admins Software Vendor?
I believe that part of the blame lies with Microsoft. If you are looking for more useful guides on using Windows Photo Gallery, don’t hesitate to check the recommendations found below. If this page has a point, it's to say to consumers "wise up": Microsoft has led many to believe that crashes are a "normal" aspect of software, and even marketed increased This file format is not supported, or you don’t have the latest updates to Photo Gallery".
Do YOU trust your important files and folders to be in the hands of THIS software? :) I don't know if this bug affects earlier versions of Windows. 2002-02-03: Win2K SP2: Windows Live Photo Gallery Error Code 0x8000000a About this time I decided to shut down and restart. (This was most likely caused by resource leakages) Windows Paint Brush crashes in GDI.EXE. I tried to play a file associated with Media Player here. "Unspecified error"?!?! Very timely given us so much useful information.
Log On Register Home About Compare Packages Upload Docs Forum* Shop* 81 Downloads 81 Downloads of v 3.0.7 7/8/2015 Last update Software Site Package Specific Possible Package Source Package outdated? https://support.microsoft.com/en-us/kb/2581947 The dialog box says, "if the problem persists, contact the program vendor". Error Message Generator Windows 7 Remixes (1) View all Studios (0) View all More projects by FNaF87DudE Creator Remixes ‹ › About About Scratch For Parents For Educators For Developers Credits Jobs Press Community Community Error Message Text Please try the request again.
Valuable photos I can't do anything with now -? This could cause a bit of trouble, particularly if driver updates are reported to cause problems. If Microsoft itself can't even get ActiveX controls to function, what hope do outside developers have? This one and the next three happened during the same general Windows slow-death. Error Message Generator Download
I got this one the other day while trying to help restore someone's Windows95 installation on a laptop. Skip to main content Menu DIGITAL CITIZEN Login Search ProductivityEntertainmentSecuritySmartphoneSmarthomeHealth Subscription optionsAbout usContactTerms and conditions How to View RAW Pictures In Windows Photo Gallery & Fix Weird Errors TutorialBy Ciprian Adrian The Problem: No Picture Preview & Error Code 0x88982f61 When viewing RAW pictures in Windows Explorer (e.g. A Linux system requires rebooting probably about as often as a Windows system requires re-installing. 98/07/08.
Don't use JPEG; that format is meant for photographic images and distorts text in computer-generated images. Windows Live Photo Gallery Error 0x80010108 It took me some time to manage these screenshots, due to the constant crashing of programs. Strange things should happen :).
Explorer crashes inside itself after I try running it again (after it crashed in GDI.EXE, another win32 module.) A win32 module crashes inside another win32 module. Technically, Windows is an "operating system," which means that it supplies your computer with the basic commands that it needs to suddenly, with no warning whatsoever, stop operating. NOTE: Before you tell me "its my hardware", or that I'm just a moron who doesn't know how to use a computer, please read my technical notes below. An Error Is Preventing The Photo Or Video From Being Displayed Windows 7 Most Computer Science students, however, will tell you that any arbitrary program (such as WinZip) should not be able to crash the entire OS.
Peter Kingsbury sent me this one; Outlook Express crashed, giving him this funny message. 98/11/10. I gave up, and told him to have his hard disk replaced (it was still under warranty.) Quite frankly I'm tired of running around doing free tech support for Microsoft products. Perhaps Windows 98's kernel32.dll has some of the bugs fixed. On a Windows system, your setup will continue to degrade, and after a few months you have to re-install everything.
Then, if you go to the File menu and click on Make a copy, you are shown an error message. Turned out to be some weird interaction with Virtual CD. 99/01/04. Post Cancel Comments loading... ActiveMovie, when installed, registers itself as a player for .FLC and .FLI files, even though it cannot play them.
It was not a debugging Msgbox() output; VB did this all by itself. I uninstalled and reinstalled the Microsoft DirectX 6 SDK, and now Microsoft Visual C++ won't run at all anymore, giving this set of errors. Because of competition, consumers lose out, at least in this case. Windows Explorer crashes; a stack fault in module PSICON.DLL.
You are on web.2. Then I had to wait patiently for about ten minutes while Windows restarted and ran scandisk, during which it found 52MB of 'lost data', which (I hope) was only invalid unzipped Click here to get started.More Share Image EMAIL fullscreen How To: Fix HP Support Assistant after the Windows 10 Upgrade Why would you need to do this? If you're on a computer, your Flash player might be disabled, missing, or out of date.
So do I stop the operation or stop the operation? The following error followed this crash directly. I guess it's about time I do the inevitable in any Microsoft user's career --- re-install Windows yet again.
|
OPCFW_CODE
|
Problem building ECOS for "Linux Synthetic" target
I'm trying to building Synthetic Linux target with ECOS. My software environment:
Ubuntu 11.4
GCC 4.5.2
ECOS 3.0
In the Config Tool I have set up "Linux Sythetic" target with "all" packages. Pressing F7 (build) the compilation starts, but later it says:
/opt/ecos/ecos-3.0/packages/hal/synth/i386linux/v3_0/src/syscall-i386-linux-1.0.S:
Assembler messages: make: Leaving
directory `/opt/ecos/linux_build'
/opt/ecos/ecos-3.0/packages/hal/synth/i386linux/v3_0/src/syscall-i386-linux-1.0.S:457:
Error: .size expression for
__restore_rt does not evaluate to a constant
/opt/ecos/ecos-3.0/packages/hal/synth/i386linux/v3_0/src/syscall-i386-linux-1.0.S:457:
Error: .size expression for __restore
does not evaluate to a constant
make:
[src/syscall-i386-linux-1.0.o.d] Error 1 make: [build] Error 2
The content of the file /opt/ecos/ecos-3.0/packages/hal/synth/i386linux/v3_0/src/syscall-i386-linux-1.0.S from the line 434 is:
// ----------------------------------------------------------------------------
// Special support for returning from a signal handler. In theory no special
// action is needed, but with some versions of the kernel on some
// architectures that is not good enough. Instead returning has to happen
// via another system call.
.align 16
.global cyg_hal_sys_restore_rt
cyg_hal_sys_restore_rt:
movl $SYS_rt_sigreturn, %eax
int $0x80
1:
.type __restore_rt,@function
.size __restore_rt,1b - __restore_rt
.align 8
.global cyg_hal_sys_restore
cyg_hal_sys_restore:
popl %eax
movl $SYS_sigreturn, %eax
int $0x80
1:
.type __restore,@function
.size __restore,1b - __restore
So the __restore and __restore_rt is undefinied.
I've tried to comment out this part and remove signal-related packages (it says, that it is a signal handler stuff), but it looks to be the base part of the ECOS kernel; the build seems succeed when parts are outcommented, but when I compile example apps, there are linker error because of the missing symbols (cyg_hal_sys_restore).
Silly idea, but I've tried to replace "__restore" with "cyg_hal_sys_restore"
and "...rt" same way, just to eliminate undefs (not really hoping that the wrong code causes no error), and the result is: the build is ok (as there're no undefs), example compiling is ok (as no missing symbols), but example a.out throws segfault just at the holy moment I start it.
Halp, pls., I'm not familiar with inline asm nor ECOS.
There is very little eCos-specific knowledge here, you might have more luck asking on the ecos-discuss list. http://ecos.sourceware.org/intouch.html
The problem seems to be related to binutils. On Debian, a downgrade to 2.20.1-16 worked for me.
http://ecos.sourceware.org/ml/ecos-discuss/2011-06/msg00010.html
EDIT: Follow link, there's a proper fix too.
Compile problem is solved. (There is another: the examples throws a segfault. But it's another issue.)
|
STACK_EXCHANGE
|
How to implement dynamically changing number of forms in ASP.NET MVC?
I've a Person entity that contains set of "Education" entities.
What I want is to collect "N" number of Education entries from the form.
This "N" is controlled by "Add More Education Information" and "Remove Education Information" buttons.
At the beginning there's an empty form that'll collect one Education entry.
("person" object , which is stored in session, has one empty education entity initially, and I call View with this "person" object)
If user clicks "Add More Education Information" button, then another form is added (to get another education information, i.e. from another university, or another degree)
[I add another education object to "person" and call view(person) again]
This works ok, but old data is lost. Better to say data in the forms is not mapped to Education entities and is lost, even if I call TryUpdateModel(person). Everytime I click "Add More Education Information" button all previously entered data is lost, and all forms become empty.
Question: Is there a better way of solving this kind of problems(dynamically changing number of forms)? Or what should I change to preserve old data?
Thanks.
I hope I could explain my problem.
You almost certainly don't want to add another form to the page (the browser will only submit one of them, absent JavaScript trickery), but rather add more fields to your existing form. I'm sure there's a way to map that stuff into an array in your controller, but I don't know ASP.NET MVC well enough to say how.
I think you'd be better off extending the single form and having it get an array of entities back. That is, have your first form use ids like:
<%= Html.Hidden("Schools.index", 0) %>
<%= Html.TextBox("Schools[0].Name") %>
...
Then, add new elements with an new index.
<input type="hidden" id="Schools_index" name="Schools.index" value="1" />
<input type="text" id="Schools[1]_Name" name="Schools[1].Name" />
...
And receive them as a collection
public ActionResult AddEducation( Person person, School[] schools )
{
}
This will all you to submit all the information at one shot and not have to worry about partial submissions and what to do with incomplete submissions.
Note that your javascript will have to find the current maximum index to know what the names/ids of the new elements will be.
$('#addSchooButton').click( function() {
var index = $('#form').find('[name=Schools.index]:last').attr('value') + 1;
$('<input type="text" id="Schools[' + index + ']_Name" name="Schools[' + index + '].Name" />').appendTo( '#form' );
...
});
In actuality, you'd probably create an entire container -- perhaps by cloning an existing one and replacing the ids in the clone with appropriate ones based on the next index value.
For more information, reference Phil Haack's article on model binding to a list.
Thanks. I finally was able to manage to accomplish what I wanted.
Youe answer was really helpful.
|
STACK_EXCHANGE
|
How To Get Create Table Statement In Mysql – In this article, What is the SQL CREATE Statement Language, you will learn how to create a table in a database, using the CREATE TABLE statement.
With a Relationship diagram or ERD, we can see how the tables relate to each other in the data.
How To Get Create Table Statement In Mysql
So in our SQL lessons, we have been working with such SQL Data Manipulation statements. In order to continue working with advanced SQL Databases for data processing, we will need to understand about Database Definitions.
Mysql Show Create Table Statement
Database definitions allow the creation of additional tables and columns. With these details, we will create new tables and columns.
The newly created columns will have specific data types specifying whether they are Numbers, Halls or Dates and how much space they will occupy.
Similar to PLC data types, MySQL uses data types in queries. In this query, the column named Lesson_ID uses the INT data type and the column named Status uses the TINYINT data type.
Column name will use VARCHAR data type and column named Description will use TEXT data type or string data type.
Mysql Show Tables: List Tables In Database [ultimate Guide]
Because we created a table with Data Definition information and did not use the Data Definition option to list in the output group, no results will be displayed.
Instead, we can see listed in the Navigator, a new table called Courses and columns added to the courses table.
Now let’s review the Merge command. A relational database consists of several related tables linked together using common columns. These columns are known as Foreign Key Columns.
Because of this collaborative approach, the data in each table is incomplete and does not provide all the information needed from a user and business perspective.
Power Bi: Importing Data From Sql Server And Mysql
To get order details, we need to query data from both orders and Details table and this is where JOIN comes into play.
A join is a method of combining data between one or more tables based on a common column value between the tables.
To join the table, you can use SIMPLE JOIN, INNER JOIN, LEFT JOIN or RIGHT JOIN for the same type of join. The join clause is used in the SELECT clause that appears after the FROM clause.
To write the JOIN command, let’s create a few simple tables called t1 and t2, using the Create Table statements I wrote in the SQL query tab.
Introducing Mysql: A Beginner’s Guide
Now, write INSERT INSI statements in the SQL query page that will add data to the table.
Refresh the Navigator Panel by selecting the Refresh button. Then expand the table elements t1 and t2 to see the columns created for these tables.
When the query is run, the Create Table dialog creates two new tables with ID and pattern columns. Then, the INSERT INTO statement adds data to each of the tables t1 and t2.
Well, as we can see, tables t1 and t2 have been added with new columns.
Velo: Integrate Your Google Cloud Mysql, Postgres, And Bigquery Databases With Your Wix Site
To learn more about MySQL and more SQL databases, we encourage you to visit the MySQL website. This concludes the article, What is SQL Database Programming Language?
Be sure to read on for these articles that cover training questions for beginners and then on to more advanced SQL training topics.
If you want to learn more on this topic please let us know in the comment section.
What is SQL Integration, And/or, Accessing and Defining Database Objects? (Part 5 of 8)
Solved Participation Activity 10.3.3: Create A Product
What are SQL Cross Join, Inner Join, and Union Clause Description language elements? (Part 8 of 8)
In this blog, you will learn about the advice that helped me get a PLC programming job without experience. This is my personal experience as an applicant in this field and as an employer who reviews resumes and interviews candidates for various jobs. So let’s get started!
Here is a real example of how machine learning can be used for predictive maintenance. You don’t need to be an engineer to understand this. It is very important and interesting and you can understand it easily. I promise. Example of car vibration We have two vibration sensors…
In this article, we will introduce you to a PLC programming language called Sequential Function Chart, or SFC for short. IEC 61131-3 PLC programming standard with five programming languages: – Ladder Diagram – Functional diagram – Command list -.. . The stack overflow for Groups is moving to its own area! When the migration is complete, you will have access to your Teams on teams.com, and they will no longer appear on the left side of the .
Build A Php & Mysql Crud Database App From Scratch
I’m switching to the MySQL GUI Tools MySQL Query Explorer since I can’t find a shortcut to get the table creation script in MySQL Workbench.
I think this is because of the Reverse Engineering feature, which, unfortunately, is only available in the commercial version
Edit: Using MySQL 8.0, there is an option to right-click > copy space (undefined) on the result to get the desired result without specifying.
The solution, except to come with the MySQL Query Browser, seems to be connecting to the database, using the command line client, and running the operations.
Create A Table In Mysql Database
I came here to find the answer to this question. But I have a better answer myself.
In the table list, if you right-click on the table name there is a group of CRUD script reading options in “Send to SQL Editor”. You can choose multiple tables and take the same route too.
In the “model view” or “process” just right-click on the table and you have the following options: “Copy Paste to screen” OR “Copy SQL to screen”
I’m not sure if this is still an issue, but for me in 5.2.35CE it is possible to find the script created by:
Coding And Implementing A Relational Database Using Mysql
You can use the MySQL proxy and its scripting program to view SQL queries in real time in the terminal.
Very busy question. Get 10 names (not counting party favors) to answer this question. Requiring a name helps protect this request from spam and non-responders.
Later in this series, I will try to cover everything necessary for the complete beginner to jump into the magical world of SQL and databases. So, let’s get started:
How To Insert Rows And Get Their Identity Values With The Output Clause
The purpose of this article is to create a database (using the SQL command Create Database) and two tables (using the SQL command Create Table) as shown in the image above. In the following articles, we will insert data into these tables, update and delete data, but we will also add new tables and create queries.
Before we create a database using the Create Database SQL command, I want to explain what a database is. I will use the definition provided by Oracle:
A database is an organized collection of information, or data, usually stored electronically in a computer system. A database management system (DBMS) often manages data.
In this article, I will use the Microsoft SQL Server Express edition. So, the DBMS is a SQL server, and the language we will use is T-SQL. I will use the word again:
Create A Mysql Query Based Table By Querying A Database
T-SQL (Transact-SQL) is a set of programming extensions from Sybase and Microsoft that add many features to the Structured Query Language (SQL), including transaction management, exception and error handling, line management and variable definition.
I will not go deep in this article, but we can conclude this section with the statement that a database is a table structure that contains data in the real world and some additional columns that are necessary for the system to work properly. We will discuss these things in future articles.
He didn’t look happy at all. We will add more fun by creating new information. After clicking New Question, a new window will open and we can write something in it. It looks like the picture below:
Before we send anything, we should make sure that it is sent in the right way. T-SQL is a language and therefore it has its own rules – a set of rules on how to write different commands.
Learning Sql? 12 Ways To Practice Sql Online
Fortunately, one of those commands is the Create Database SQL command. You can view the full T-SQL Database Creation Guide on the Microsoft site.
I’m going to make it very simple and just go with the most basic process. To create a new database on our server, we need to use the following command:
Click the + next to the Databases folder, and in addition to the two folders, you will see that the mu_first_data database has been created.
That’s great and you have successfully created your first profile. The problem is that we don’t have anything stored in the database. Let’s change this.
Create Table And Modify Table Dialogs (old Ui)
I like to use a lot of similes, so I will be here too. If you think of a library, a library is a shelf with books, and each book is a table. It’s all paper
How to create db in mysql, create table statement in mysql, create statement mysql, how to create mysql database in ubuntu, create statement in mysql, create table statement mysql, mysql create table statement example, how to create database in mysql, mysql show create statement, mysql create database statement, how to create a database in mysql, how to create relational database in mysql
|
OPCFW_CODE
|
package lt.vu.mif.ui.helpers.implementations;
import java.math.BigDecimal;
import lt.vu.mif.model.product.Category;
import lt.vu.mif.model.product.Discount;
import lt.vu.mif.model.product.Product;
import lt.vu.mif.ui.helpers.interfaces.IPriceResolver;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
@Transactional
@Component
public class PriceResolver implements IPriceResolver {
private static final int ONE_HUNDRED = 100;
@Override
public BigDecimal resolvePriceWithDiscount(Product product) {
BigDecimal priceWithCategoryDiscount = resolvePriceWithCategoryDiscount(product);
BigDecimal priceWithProductDiscount = resolvePriceWithProductDiscount(product);
if (priceWithCategoryDiscount != null) {
return priceWithCategoryDiscount.compareTo(priceWithProductDiscount) < 0
? priceWithCategoryDiscount : priceWithProductDiscount;
}
return priceWithProductDiscount;
}
private BigDecimal resolvePriceWithProductDiscount(Product product) {
if (product.getDiscount() == null || !product.getDiscount().isDiscountValid()) {
return product.getPrice();
}
Discount discount = product.getDiscount();
BigDecimal priceWithAbsoluteDiscount = null;
BigDecimal priceWithPercentage = null;
if (discount.getAbsoluteDiscount() != null) {
priceWithAbsoluteDiscount = discount.getAbsoluteDiscount();
}
if (discount.getPercentageDiscount() != null) {
priceWithPercentage = getPriceWithPercentages(product,
discount.getPercentageDiscount());
}
BigDecimal resolvedPrice = getGreaterPrice(priceWithAbsoluteDiscount, priceWithPercentage);
return resolvedPrice == null ? product.getPrice() : resolvedPrice;
}
private BigDecimal resolvePriceWithCategoryDiscount(Product product) {
Category productCategory = product.getCategory();
if (productCategory == null) {
return null;
}
Discount categoryDiscount = getBiggestCategoryDiscount(productCategory);
if (categoryDiscount == null) {
return null;
}
BigDecimal priceWithPercentage = null;
if (categoryDiscount.getPercentageDiscount() != null) {
priceWithPercentage = getPriceWithPercentages(product,
categoryDiscount.getPercentageDiscount());
}
return priceWithPercentage == null ? product.getPrice() : priceWithPercentage;
}
private Discount getBiggestCategoryDiscount(Category category) {
Discount discount = category.getDiscount();
Category parentCategory = category.getParentCategory();
Discount parentDiscount = null;
if (parentCategory != null) {
parentDiscount = getBiggestCategoryDiscount(parentCategory);
}
if (discount == null || !discount.isDiscountValid()) {
return parentDiscount;
} else if (parentDiscount == null) {
return discount;
} else {
return discount.getPercentageDiscount() > parentDiscount.getPercentageDiscount() ?
discount : parentDiscount;
}
}
private BigDecimal getGreaterPrice(BigDecimal priceWithAbsoluteDiscount,
BigDecimal priceWithPercentage) {
if (priceWithAbsoluteDiscount != null && priceWithPercentage != null) {
return priceWithAbsoluteDiscount.compareTo(priceWithPercentage) > 0
? priceWithAbsoluteDiscount : priceWithPercentage;
} else if (priceWithAbsoluteDiscount != null) {
return priceWithAbsoluteDiscount;
} else if (priceWithPercentage != null) {
return priceWithPercentage;
}
return null;
}
private BigDecimal getPriceWithPercentages(Product product, Long value) {
return product.getPrice().multiply(new BigDecimal(ONE_HUNDRED - value))
.divide(new BigDecimal(ONE_HUNDRED));
}
private BigDecimal subtract(BigDecimal bigDecimal, BigDecimal value) {
return bigDecimal.subtract(value);
}
}
|
STACK_EDU
|
Unable to send requests with abstract arguments
Hello @lula, loving ngx-soap so far!
One snag that I've hit so far is being unable to send a request that includes types based on an abstract type.
Relevant wsdl snippet:
<s:element name="DoQuery">
<s:complexType>
<s:sequence>
<s:element minOccurs="0" maxOccurs="1" name="query" type="tns:MyQuery" />
</s:sequence>
</s:complexType>
</s:element>
<s:complexType name="MyQuery">
<s:sequence>
<s:element minOccurs="0" maxOccurs="1" name="Filter" type="tns:MyQueryFilter" />
</s:sequence>
</s:complexType>
<s:complexType name="MyQueryFilter" abstract="true" />
<s:complexType name="MyQueryByActiveFilter">
<s:complexContent mixed="false">
<s:extension base="tns:MyQueryFilter" />
</s:complexContent>
</s:complexType>
<s:complexType name="MyQueryByCompletedFilter">
<s:complexContent mixed="false">
<s:extension base="tns:MyQueryFilter" />
</s:complexContent>
</s:complexType>
<s:complexType name="MyQueryByIdFilter">
<s:complexContent mixed="false">
<s:extension base="tns:MyQueryFilter">
<s:sequence>
<s:element minOccurs="0" maxOccurs="1" name="Ids" type="tns:ArrayOfLong" />
</s:sequence>
</s:extension>
</s:complexContent>
</s:complexType>
When I try to make a request to this DoQuery endpoint with the following code:
this.soap.createClient("https://mywebsite.com/my.directory/API.asmx?wsdl")
.then(client => {
console.log("Client", client);
this.client = client;
})
.catch(err => console.log('Error', err));
var body = {
query: {
Filter: {} as MyQueryByActiveFilter
} as MyQuery
};
var headers = {
SOAPAction: "http://www.mywebsite.com/MyQuery",
"Content-Type": "application/soap+xml; charset=utf-8"
}
this.soap.client.call('DoQuery', body, null, headers);`
I end up with the following error:
The specified type is abstract: name='MyQueryFilter'
"<?xml version="1.0" encoding="utf-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><soap:Fault><faultcode>soap:Client</faultcode><faultstring>Server was unable to read request. ---> There is an error in XML document (1, 369). ---> The specified type is abstract: name='MyQueryFilter', namespace='http://www.mywebsite.com/', at <Filter xmlns='http://www.mywebsite.com/'>.</faultstring><detail /></soap:Fault></soap:Body></soap:Envelope>"
Non abstract types seem to work just fine. I'm not sure if I'm missing special headers or just doing something plain wrong. We have a requirement to change the backend API as little as possible, so any help or tips for using abstract types would be greatly appreciated.
Brock
After some additional debugging, it turns out the request I was sending was missing the xsi:type attribute.
After looking at another issue resolution in this repo https://github.com/lula/ngx-soap/issues/64
I modified the body to the following and it worked:
var body = {
query: {
Filter: {
attributes: {
"xsi:type": "MyQueryByActiveFilter"
}
}
}
};
After some additional debugging, it turns out the request I was sending was missing the xsi:type attribute.
After looking at another issue resolution in this repo https://github.com/lula/ngx-soap/issues/64
I modified the body to the following and it worked:
var body = {
query: {
Filter: {
attributes: {
"xsi:type": "MyQueryByActiveFilter"
}
}
}
};
|
GITHUB_ARCHIVE
|
import os.path as path
import pandas as pd
import numpy as np
import pickle
script_path = path.dirname(__file__)
def OHLCV(df):
keys = [
'dt',
'open', 'high', 'low', 'close', 'vol'
]
tracker = {}
for k in keys: tracker[k] = []
groups = df.groupby(pd.TimeGrouper(freq='{0}Min'.format(15)))
for group in groups:
g1 = group[1]
if len(g1) == 0: continue
dt = group[0]
tracker['dt'].append(dt)
# extract OHLCV from bar data
open = g1.ix[0]['Open']
high = g1['High'].max()
low = g1['Low'].min()
close = g1.ix[-1]['Close']
vol = g1['Volume'].sum().round(2)
tracker['open'].append(open)
tracker['high'].append(high)
tracker['low'].append(low)
tracker['close'].append(close)
tracker['vol'].append(vol)
df = pd.DataFrame(data=tracker, columns=keys[1:], index=tracker['dt'])
return df
def feat_extract(df):
import talib
close = df['close'].values.astype(np.float)
vol = df['vol'].values.astype(np.float)
df_ = pd.DataFrame(index=df.index)
df_['r'] = np.log(talib.ROCR(close, timeperiod=1))
df_['r_1'] = np.log(talib.ROCR(close, timeperiod=2))
df_['r_2'] = np.log(talib.ROCR(close, timeperiod=3))
r = df_['r'].values
zscore = lambda x, timeperiod: (x - talib.MA(x, timeperiod)) / (talib.STDDEV(x, timeperiod) + 1e-8)
df_['rZ12'] = zscore(r, 12)
df_['rZ96'] = zscore(r, 96)
change = lambda x, timeperiod: x / talib.MA(x, timeperiod) - 1
df_['pma12'] = zscore(change(close, 12), 96)
df_['pma96'] = zscore(change(close, 96), 96)
df_['pma672'] = zscore(change(close, 672), 96)
ma_r = lambda x, tp1, tp2: talib.MA(x, tp1) / talib.MA(x, tp2) - 1
df_['ma4/36'] = zscore(ma_r(close, 4, 36), 96)
df_['ma12/96'] = zscore(ma_r(close, 12, 96), 96)
def acc(x, tp1, tp2):
x_over_avg = x / talib.MA(x, tp1)
value = x_over_avg / talib.MA(x_over_avg, tp2)
return value
df_['ac12/12'] = zscore(acc(close, 12, 12), 96)
df_['ac96/96'] = zscore(acc(close, 96, 12), 96)
df_['vZ12'] = zscore(vol, 12)
df_['vZ96'] = zscore(vol, 96)
df_['vZ672'] = zscore(vol, 672)
df_['vma12'] = zscore(change(vol, 12), 96)
df_['vma96'] = zscore(change(vol, 96), 96)
df_['vma672'] = zscore(change(vol, 672), 96)
df_['vol12'] = zscore(talib.STDDEV(r, 12), 96)
df_['vol96'] = zscore(talib.STDDEV(r, 96), 96)
df_['vol672'] = zscore(talib.STDDEV(r, 672), 96)
df_['dv12/96'] = zscore(change(talib.STDDEV(r, 12), 96), 96)
df_['dv96/672'] = zscore(change(talib.STDDEV(r, 96), 672), 96)
df_ = df_.fillna(0.)
assert (not df.isnull().values.any()), 'feature dframe contain NaNs'
return df_
if __name__ == '__main__':
# load csv
save_path = path.join(script_path, 'data.csv')
df = pd.read_csv(save_path,
index_col=[0],
parse_dates=True)
# convert unix timestamp to datetime
df.index = pd.to_datetime(df.index, unit='s')
# select period between Dec. 1, 2014 ~ Jun. 14, 2017
'''
start_date = pd.Timestamp(year=2014, month=12, day=1, hour=0, minute=0)
end_date = pd.Timestamp(year=2017, month=6, day=14, hour=23, minute=59)
mask = (df.index >= start_date) & (df.index <= end_date)
df = df.loc[mask]
'''
# drop and change column names
df = df[['Open', 'High', 'Low', 'Close', 'Volume_(BTC)']]
df.rename(columns={'Volume_(BTC)': 'Volume'}, inplace=True)
df = OHLCV(df)
# # save the csv file for further use if you want
# save_path = path.join(script_path, 'BTCUSD-15Min.csv')
# df.to_csv(save_path)
# df = pd.read_csv(save_path,
# index_col=[0],
# parse_dates=True)
feat_df = feat_extract(df)
data_dict = {
'data': feat_df,
'label': df
}
save_path = path.join(script_path, 'data.pkl')
with open(save_path, mode='wb') as handler:
pickle.dump(data_dict, handler, protocol=pickle.HIGHEST_PROTOCOL)
|
STACK_EDU
|
How to avoid code injection in Jenkins shell calls?
Consider the following code, which invokes a program (echo) with some arguments:
String malicious_input = '""; rm -rf / #yikes'
sh "echo Hello ${malicious_input}!"
The resulting shell script is then
echo Hello ""; rm -rf / #yikes!
Simple, classic code injection. Nothing unheard. What I have been struggling to find is a way to properly handle this case. First approaches to fix this are:
Just add single quotes around the string in the shell call, like sh "echo Hello '${malicious_input}'!". Yes, but no, I only need to switch to malicious_input = "'; rm -rf / #yikes" to circumvent that.
Just add double quotes then! Still no, not only are these just as simple to circumvent but those are even prone to path globbing/expansion.
Then add the quotes around the input string before invoking Groovy string interpolation. Same thing, the shell commandline is unchanged.
Then, add single quotes but prefix every single quote inside the string with a backslash to prevent its interpretation as meta character by the shell. Yes, that kind-of works, if I also escape every existing backslash with a second one. Still, the details of how to prevent this expansion depend a bit on the shell (POSIX-ish, Windows bat, not sure about powershell). Also, this takes three lines of code for every argument. Plus, without an explicit shebang line, I can't even be sure which shell is taken.
So, my question is this: Where is the built-in function in Groovy that does this for me in a portable, shell-agnostic way? I find it hard to believe that this doesn't exist, yet I can't find it. Also, quite puzzling for me that I'm the first one to come across this issue...
What you are describing is called Argument Injection referenced as CWE-88 which is a subclass of Command Injection referenced as CWE-77:
Some potential mitigations described in these CWEs are:
If at all possible, use library calls rather than external processes to recreate the desired functionality.
(Parameterization) Where possible, avoid building a single string that contains the command and its arguments.
(Input Validation) Assume all input is malicious. Use an "accept known good" input validation strategy, i.e., use a list of acceptable inputs that strictly conform to specifications. Reject any input that does not strictly conform to specifications, or transform it into something that does.
The answer to your question:
Where is the built-in function in Groovy that does this for me in a portable, shell-agnostic way?
is : there is no such built-in Groovy function.
What you should do is:
Avoid, when possible, using shell scripts with user input data and use instead Groovy functions.
Use input validation by allowing only alphanumerical characters.
Use quoting as discussed in How to prevent command injection through command options?. This is the less safe option, because the sh Jenkins command use the the system default shell (which can be anything) and there is no 100% guarantee that there is no tricks to bypass this quoting.
Interesting links! Still, it doesn't answer my question, because the target programming language is Groovy. I'll clarify that. Still, keep this answer here, it adds value!
See my updated answers.
|
STACK_EXCHANGE
|
Fantasticfiction Top Tier Providence, Secretly Cultivate For A Thousand Years – Chapter 32 pig smile suggest-p2
Jakefiction Let me laugh – Chapter 32 outgoing salt recommend-p2
marvel super extraction raw
Novel–Top Tier Providence, Secretly Cultivate For A Thousand Years–Top Tier Providence, Secretly Cultivate For A Thousand Years
Chapter 32 faint zipper
He’s not with the 9th degree of the cornerstone Business kingdom!
The guy in black died before he even landed.
Li Qingzi suddenly directed him a sound transmitting. “He’s stalling for time. However he’s arrogant, he didn’t speak a great deal of in past times.
Fairy Xi Xuan reported inside a lower speech, “Listen! Abandon without delay. Operate with regards to it is possible to. From now on, you’re not anymore a disciple of your Jade Genuine Sect!”
A speech drifted through.
Han Jue also laughed and aimed his middle finger at him.
Han Jue wasn’t afflicted with the alert.
A sound drifted around.
Just as he was approximately to collide with him, Han Jue lifted his hand and made use of the Nine Dragons Devil Expelling Close off.
Han Jue feigned ignorance and reported, “The ninth level of the Nascent Soul world?”
Top Tier Providence, Secretly Cultivate For A Thousand Years
Han Jue smiled and stated, “Not actually. I’m willing to remove you.”
“You could possibly overcome a Nascent Spirit cultivator, however the distinction between a Soul Formation cultivator and also a Nascent Soul cultivator is much like heaven and entire world. Don’t kick the bucket below.”
“You might be able to beat a Nascent Spirit cultivator, however the distinction between a Soul Structure cultivator and also a Nascent Spirit cultivator is similar to paradise and entire world. Don’t perish below.”
The person in dark colored died prior to he even landed.
Li Qingzi suddenly dispatched him a sound transmission. “He’s stalling for time. Even though he’s conceited, he didn’t converse a great deal in the past.
Each of their Perfect Constellation Fantastic Body had been demolished. Li Qingzi lost the two his biceps and triceps and knelt in the wrecks.
There seemed to be you can forget suspense during this conflict.
The Qilin Sword came out in the palm.
“You might be able to overcome a Nascent Spirit cultivator, but the difference between a Heart and soul Structure cultivator along with a Nascent Soul cultivator is actually paradise and planet. Don’t kick the bucket on this page.”
The Excellent Great Elder’s face was included in blood stream while he meditated to recuperate.
It improved again!
“Boy, not surprising you dare to always be so arrogant. But are you aware my farming amount?” Duan Tongtian inquired using a spurious smile.
The elders were silent. Quite a few acquired even shut down their eyeballs and were waiting for loss of life.
Everybody was astonished!
Isn’t he Chen Santian’s become an expert in?
It elevated all over again!
Xing Hongxuan increased her eye in disbelief.
How could he conquer the black colored-robed mankind?’
Isn’t he Chen Santian’s grasp?
Jade 100 % pure Sect could be damaged nowadays. From nowadays onwards, Jade 100 % pure Sect would not take place in the cultivation environment.
|
OPCFW_CODE
|
UTF8 processing: performance improvements, validation, and other useful APIs
This PR introduces a workhorse routine for validating UTF-8 strings without performing a proper to-UTF16 conversion. It is intended to be used as the backend to UTF-8 validation routines, including routines that perform substitution of invalid sequences to convert an ill-formed UTF-8 string into a well-formed UTF-8 string. There are friendly APIs on the Utf8Utility type that allow inspecting the UTF-8 string. These APIs are intended to be compatible with System.Text.Encoding.UTF8 in that they agree on the length of bad sequences, etc.
There is a sister workhorse "convert UTF-8 to UTF-16" routine, but it's not part of this PR since I'm still adding unit tests for it.
The unit tests for the workhorse routine perform some limited self-fuzzing. Additionally, the unit test project introduces the NativeMemory type (for unit test use only), which grabs memory pages directly from the kernel's page allocator and bookends the returned range with poison pages, where accessing these pages will immediately AV the process. This type helps detect buffer overruns in code that works directly with memory buffers in an unsafe fashion.
Finally, this API introduces substantial performance improvements over the existing "how many UTF-16 / UTF-32 code units would there be if I were to convert this UTF-8 string to one of those representations?" framework API System.Text.Encoding.UTF8Encoding.GetChars. I've copied the performance improvement table below.
Perf results
1,000,000 iterations on each lipsum
Testbed: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz, 3408 Mhz, 8 Core(s), 8 Logical Processor(s), Win10 RS3 amd64fre
English (ASCII)
System.Text.Encoding.UTF8: 0.286 sec
Utf8Util.GetCharCount: 0.077 sec
73% reduction in runtime
Hebrew (primarily 2-byte)
System.Text.Encoding.UTF8: 5.22 sec
Utf8Util.GetCharCount: 1.81 sec
65% reduction in runtime
Cyrillic (primarily 2-byte)
System.Text.Encoding.UTF8: 5.20 sec
Utf8Util.GetCharCount: 1.78 sec
65% reduction in runtime
Japanese (primarily 3-byte)
System.Text.Encoding.UTF8: 3.06 sec
Utf8Util.GetCharCount: 1.08 sec
65% reduction in runtime
Chinese (primarily 3-byte)
System.Text.Encoding.UTF8: 6.30 sec
Utf8Util.GetCharCount: 2.25 sec
65% reduction in runtime
The corpus that was used for testing can be found at https://github.com/GrabYourPitchforks/fast-utf8/blob/master/FastUtf8Tester/Lipsum.cs. That same directory contains the test program and full results.
@GrabYourPitchforks,
Thanks for having already signed the Contribution License Agreement. Your agreement was validated by .NET Foundation. We will now review your pull request.
Thanks,
.NET Foundation Pull Request Bot
I've copied the performance improvement table below.
Can you please check-in the performance tests to this repo (preferably as part of this PR) so that we can run them using xunit.performance and our perf harness?
@ahsonkhan I just added the perf tests and sample texts. These are different than the sample texts I had been using, so the numbers will be somewhat off.
@GrabYourPitchforks +++10 I would love this to become the start of more fuzzing ability for .net as it's something I have been wanting to get around to for a while...
Since the bulk of the logic isn't changing and we're discussing utility methods / renamings, I'm going to merge what I have now so that people can immediately start playing with it. Will send a new PR with additional performance improvements and changes.
|
GITHUB_ARCHIVE
|
Docker Working Group proposal
This is a proposal and preliminary discussion about forming a new Docker Working Group within io.js. Refs #29. Working Groups are described here.
Possible WG charter (draft):
Docker WG
The Docker working group's purpose is to build, maintain, and improve official Docker images for the io.js project.
Its responsibilities are:
Keep the official Docker images updated in line with new io.js releases
Decide and implement image improvements
Maintain and improve the images' documentation
We also need at least 3 initial WG members (the more the better IMO). Here is an alphabetical list of possible candidates who have contributed in the past (please respond if you would like/dislike to join):
@hmalphettes
@jlmitch5
@pesho
@rvagg
@Starefossen
@wblankenship
There have also been significant contributions from @jfrazelle, @tianon, @yosifkit from the Docker team, who are of course welcome to join if they wish.
Also /cc @mikeal
Thoughts?
You can feel free to continue to ping me on whatever patches etc :) I enjoy
a nice break from the large influx of stuff to docker. But I would hate to
commit to be a maintainer and let you all down when I have other maintainer
responsibilities for the docker project. But do feel free to ping me on
whatever. You all are awesome :D
On Tue, Mar 10, 2015 at 2:28 PM, Peter Petrov<EMAIL_ADDRESS>wrote:
This is a proposal and preliminary discussion about forming a new Docker
Working Group within io.js. Refs #29
https://github.com/iojs/docker-iojs/issues/29. Working Groups are
described here https://github.com/iojs/io.js/blob/v1.x/WORKING_GROUPS.md
.
Possible WG charter (draft):
Docker WG
The Docker working group's purpose is to build, maintain, and improve
official Docker images for the io.js project.
Its responsibilities are:
Keep the official Docker images updated in line with new io.js
releases
Decide and implement image improvements
Maintain and improve the images' documentation
We also need at least 3 initial WG members (the more the better IMO).
Here is an alphabetical list of possible candidates who have contributed in
the past (please respond if you would like/dislike to join):
@hmalphettes https://github.com/hmalphettes
@jlmitch5 https://github.com/jlmitch5
@pesho https://github.com/pesho
@rvagg https://github.com/rvagg
@Starefossen https://github.com/Starefossen
@wblankenship https://github.com/wblankenship
There have also been significant contributions from @jfrazelle
https://github.com/jfrazelle, @tianon https://github.com/tianon,
@yosifkit https://github.com/yosifkit from the Docker team, who are of
course welcome to join if they wish.
Also /cc @mikeal https://github.com/mikeal
Thoughts?
—
Reply to this email directly or view it on GitHub
https://github.com/iojs/docker-iojs/issues/39.
--
Jessie Frazelle
4096R / D4C4 DD60 0D66 F65A 8EFC 511E 18F3 685C 0022 BFF3
pgp.mit.edu http://pgp.mit.edu/pks/lookup?op=get&search=0x18F3685C0022BFF3
I would very much like to be a part of the Docker Working Group :rocket: :whale:
Since GitHub doesn't seem to be delivering my email reply:
I'd echo what Jess said -- I don't want you guys relying on me too much, but I'm happy to take part and provide input where I can, especially since being involved in changes here helps with the review after the official-images PR (since all the changes and rationale for them are already familiar). :+1:
I'd echo what Jess said -- I don't want you guys relying on me too much,
but I'm happy to take part and provide input where I can, especially since
being involved in changes here helps with the review after the
official-images PR (since all the changes and rationale for them are
already familiar). :+1:
I'll also stick to the sidelines with @tianon; watching so that the official-images PRs can go smoothly. Feel free to ping and ask questions about the official images if need arise.
I would also like to join the Docker WG.
Ok, we have the required 3 initial WG members (@hmalphettes, @pesho, @Starefossen). Let's wait 24 more hours, in case more people would like to join, or if there are any other suggestions. After that I'll submit our request to the TC.
@pesho I haven't made any contributions (yet), but I have been following the work done here and would like to volunteer to be a 4th WG member. I'm also a member of the Evangelism WG.
@rosskukulinski thanks for volunteering :+1: I'm ok with you joining as an initial member, if the others agree as well.
I'm on board :smile:
Awesome :+1:
+1 from here to @rosskukulinski as an initial member.
Yes, I would like to join! I'm on vacation right now, and I'm sorry for responding at the last minute!
Proposal submitted: https://github.com/iojs/io.js/pull/1134
@rosskukulinski sorry, I didn't include you in the initial member list. The rules for starting a WG say that the initial members "should be individuals already undertaking the work described in the charter", I hadn't noticed that before. Of course, you are welcome to join later, after you've had a chance to contribute.
No problem! I'll continue to lurk and help out where I can. I have some
designated open source time next week, so you'll hear more from me.
How do g'all usually communicate? iRC? Gittim? Just github issues?
On Thursday, March 12, 2015, Peter Petrov<EMAIL_ADDRESS>wrote:
Proposal submitted: iojs/io.js#1134
https://github.com/iojs/io.js/pull/1134
@rosskukulinski https://github.com/rosskukulinski sorry, I didn't
include you in the initial member list. The rules
https://github.com/iojs/io.js/blob/v1.x/WORKING_GROUPS.md for starting
a WG say that the initial members "should be individuals already
undertaking the work described in the charter", I hadn't noticed that
before. Of course, you are welcome to join later, after you've had a chance
to contribute.
—
Reply to this email directly or view it on GitHub
https://github.com/iojs/docker-iojs/issues/39#issuecomment-78711727.
@rosskukulinski we are communicating primarily through github issues right now. I accidentally created a gitter via https://github.com/iojs/docker-iojs/pull/32 so it exists but is currently unusued.
I opened a pull request for a governance document for if/when our request to become a wg is approved: https://github.com/iojs/docker-iojs/pull/40
@wblankenship this was approved by the last TC meeting, wasn't it?
yes
@Starefossen Will get the final documents updated and ready to go tonight :smile:
The working group docs has been merged so this issue can now be closed. Should we make a docker team and add the members as collaborators for this repo maybe?
Yes. This has been requested in https://github.com/iojs/io.js/pull/1134, I'm not sure what's the holdup there. @rvagg @mikeal can one of you guys take a look.
Let's either keep it open until https://github.com/iojs/io.js/pull/1134 is resolved or open a new issue to track it from this repo.
Mission completed :)
|
GITHUB_ARCHIVE
|
Get PID of the application running in the active terminal emulator
My end goal is to be able to open a new terminal window (urxvt) directly in the current working directory of the program running in the window currently active.
I'm currently using the shell (Bash), but I don't have anything against alternatives.
So far, I've got the ID of the current active window using xdotool:
wid=$(xdotool getactivewindow)
and the PID of its process using xprop:
pid=$(xprop -id $wid _NET_WM_PID | awk '{print $NF}')
but this is not the PID I'm looking for.
I want the PID of the process running in the terminal displayed in that window.
For now, I mostly want the case of a bash shell running in that window, but I don't see why it would depend on that.
I can already get CWD from a PID using cwd="$(readlink /proc/$pid/cwd)".
Got it! Thanks to Stephane Chazelas for the help.
The trick was to look for the child processes... D'oh!
My script is now:
#!/usr/bin/env bash
ppid=$(xdotool getactivewindow getwindowpid) # PID of process in the window
pid=$(pgrep -P $ppid | tail -n1) # PID of the last child
cwd="$(readlink /proc/${pid:-$ppid}/cwd)" # current CWD of pid, or ppid if no pid
cd "$cwd"
"$@"
You can use it by simply prefixing any command with the name of the script, eg. incwd urxvt.
The only caveat is that some programs, like evince, reset their cwd. I doubt there's anything I can do in these cases.
Isn't there another caveat: If the window has multiple child PID's and the caller wasn't the most recent? gnome-terminal and Chrome both come to mind. How are you calling this script, isn't there a way to use the script's PPID to at least narrow the field f possibilities?
My intended use was through a keybinding, so its parent would be the WM (OpenBox in my case).
Maybe:
readlink "/proc/$(
pgrep -P "$(xdotool getactivewindow getwindowpid)" | head -n1
)/cwd"
That is get the pid associated with the window using xdotool, use pgrep to get the list of children of that process, head -n1 to only select the first one, and use readlink to get the working directory.
Will not work for every window. For instance, not for windows from remote clients or that don't provide the window manager with their PID, not for processes by other users.
Thanks for the idea to get the children PIDs, and the cleaner way to get the PID of the window. I've posted the full answer in another reply though, so I'll not mark your answer as the accepted one.
|
STACK_EXCHANGE
|
Below, we’ll share what we all know about the Big Sur OS, the M1-primarily based Macs, and the software Sibelius, Finale, Dorico, MuseScore, and Notion. If you’re utilizing any of these merchandise, please share your expertise in the feedback part. As we’ve come to count on, Big Sur showcases a variety of design enhancements and blurs the road even additional between a Mac and iOS, in relation to the OS’s look and core Apple features like Messages, widgets and the Control Center. It’s extremely uncommon last time I checked for a Scripting language app to be compiled. Programming language apps are compiled all the time, however Scripting languages aren’t. “Run Only” simply means it has been processed right into a compacted model of this system that is not easy to edit.
A virus/malware has no person interface and so isn’t any extra an software than a device driver. You need a specialized editor to view it, particularly one supporting Gzip decompression. Java Bytecode isn’t interpreted, but i wont waste time on that. As i stated before its not virtually attainable to edit java class files using a textual content editor. You may however the pain isn’t value it, in contrast to a script file which is mainly just text which you’ll be able to edit, lets assume you understad the language syntax and so on. The primary difference between a scripting language and a programming language is of their execution – programming languages use a compiler to transform the high-stage programming languages into machine language. While a compiler compiles code into an entire program, an interpreter interprets code line by line.
What’s New In Macos Mojave Patcher
It wasn’t meant to be straightforward to read, perceive, or edit, thus the name “run solely”. They may have named it AppleScript Bytecode should you assume that’s a greater phrase. Applications are massive applications that typically have a graphic consumer interface and are meant to run on a private computer. Whether they’re distributed in binary kind or in plain text is irrelevant.
If I want to transmit to my friends stay music from my studio, I can’t, as a result of the app only listenes to the mac’s internal mic, and can’t be set to any other gadget. Great news, this version fixes all of the echooooos on Houseparty Mac app. A new Retouch device “powered by machine learning” comes to Photos, letting you easily get rid of blemishes and different unwanted elements in your footage.
Apple Releases Macos High Sierra 10 Thirteen.Three, Ios 11.2.5, Watchos Four.2.2, And Tvos 11.2.5
The above command will print out your SSH key on your Linux machine, with out prompting you on your key authentication password. PyMICROPSIA makes use of Python libraries for a variety of functions, ranging from info and file theft to Windows course of, file system, and registry interplay. Despite this, these checks may need been launched by the malware’s developers whereas copy-pasting code from other ‘initiatives’ and could very well be removed in future variations of the PyMICROPSIA trojan. “This is an fascinating finding, as we’ve not witnessed AridViper targeting these working methods before and this could characterize a new area the actor is starting to explore.” “PyMICROPSIA is designed to target Windows working methods only, however the code contains attention-grabbing snippets checking for different operating systems, such as ‘posix’ or ‘darwin’,” as Unit 42 stated.
- A main change in Catalina was Apple’s determination to finish help for 32-bit apps, requiring builders to transform their apps into 64-bit versions to continue functioning correctly.
- Copland development led to August 1996, and in December 1996, Apple announced that it was buying NeXT for its NeXTSTEP operating system.
- In 1996, Apple determined to cancel the project outright and discover a suitable third-celebration system to switch it.
- On March 24, 2001, Apple released Mac OS X 10.zero .The preliminary version was sluggish, incomplete, and had only a few purposes available at launch, largely from unbiased developers.
Take a take a look at your Desktop, or the Downloads folder – is it a little disorganized? Download Folder Tidy right now and choose the folder to prepare and with one click on you may see the information get sorted into the suitable sub folders . No intel relating to how Cubase eleven behaves on M1-chip Macs at this moment. If you rely on third-get together plug-ins and hardware whereas using Finale, it is recommended to check with the producers of those products for their specific guidelines concerning macOS 11 and Apple Silicon M1.
|
OPCFW_CODE
|
Predict file size of a Huffyuv codec video stream
I like to do many comparisons and check before picking a codec and selecting its settings.
Settings with HuffYUV are few but more importantly I'm having trouble determining video file sizes.
Given pixel width, height and pixel format, is it possible to predict a file size for a video encoded with the HuffYUV lossless codec?
I would like to prepare a simple converter to get an estimate of how much space it takes hours, minutes, seconds ;)
Which formula can I use? I'm OK even if you throw me out a complex model, I'm kinda a math guy.
ps: I know my english is poor and is also a bit late so if you good guys think this question could be edited in a better way feel free to do it ;D
The answer is No. Besides the frame dimensions, there's the matter of content complexity. Without scanning the video and doing a first-pass as it were, it's not possible to predict the output size. A video consisting of a slideshow of very simple text slides will be much easier to compress than scenes of busy city life..etc.
The closest you may come to making some sort of prediction is to encode a few representative segments from the source file and compare those bitrates. If there isn't a large difference among the bitrates of the various encoded segments, then you may assume a bitrate in that range for the final output. Of course, this method can't account for anomalous segments in the video of very high or low complexity compared to the rest of it, so sample selection is important.
I see, so in this case I think will be need a different approach: grab tons of segments of videos of any different kinds, all of the same size eg: full hd would be a good starting point. Batch convert them all with huffyuv for a small amount of time, let's say not more than 3 min, do a statistical analysis of the file sizes getting a plain average, a maximum and a sort of "flavored average" for video with text, heavy media, and different categories content. Get your coefficients, prepare a simple calculator based on this coefficients. :)
My approach requires encoding multiple segments of the same duration from within the same video to a) check resulting bitrates and b) variance of bitrate across the segments. HuffYUV uses intraframe compression, so no need to use segment size of 3 minutes. In fact, make it closer to a few seconds.
Thank you for the suggestion, I'm tempted to bring some media here and do the tests. Yeah, your approach is the most correct for obtain the result of a single video, and better fit the original question. My final goal however was understand how in average a Huffyuv video will take.
Would be nice if you could help me setup the test environment, what is your recommendation ? 5 seconds could work ? or 10 in order to sample more frames ? there's a way for start a chat here or i have to reach like 7-8 comments ?
5 seconds is good. I'm not a coder, but if you are, you can write a script to pick N random numbers from 0 to D-5 where D is the duration of your input file. Then just execute ffmpeg -ss T -t 5 -i input -c:v huffyuv -an "input-T.avi" where T are the numbers you generated, one by one. It's upto you what statistical analysis you wish to perform with the resulting files, but mean and SD comes to mind. Consult the stats SE, maybe.
So you talking about of using a single file but scraping randomly 5 seconds many times in order to obtain a consistent result? I was thinking about something like that, I mean avoiding take let's say 100 movies but the first 5 seconds because in the most of the cases will be just written text and black background because of initial credits. PS: Yes i can code something ;)
The 100 files approach doesn't seem useful - it's like computing the nutritional value of a food item by averaging the nutritional values of fruits, meats, grains, nuts..etc. Even among fruits, there's significant difference in sugar levels, so it's just not that useful.
What I want to achieve is a bit broader average on what Huffyuv takes per minute, some people talk around 400MB in Full HD I want to check this theory and use this math info for predict a worst case scenario for storage requirements.
Worst case scenario is a video with a lot of noise. Take any video, add random noise using ffmpeg filter and encode to HuffYUV. A text slideshow is best-case scenario. A static camera talking head video is low-moderate scenario.
Let us continue this discussion in chat.
For 1080p, I often stick with.. (80 secs) - equals close to the 4gb limit.
It's realistic to assume so around 3.0 - 3.2 GB per min at 1080p I guess..
|
STACK_EXCHANGE
|
An experiment is a test to demonstrate a fact. In a startup context, more specifically at the initiation stage, it is meant to simulate the product, measure user behavior and get real feedback. If the first thing you plan for as soon as you observe a need and propose a solution is product development, you’re missing a big opportunity to first test the idea through various techniques that provide for small, practical and often costless changes but can have a major impact in initial validation, target definition and product feature selection. Here are 5 proven experiments you can conduct in this order.
Did You Check XYZ?
People are usually more willing to follow a recommendation if it comes from an outsider. After identifying your potential buyers, do not introduce yourself as the founder but simply say, did you check company X? I heard they do a good job at Y. This experiment can be conducted face to face or online. A friend of mine used this tactic. He found that sign ups were a lot higher when him and his team members introduced themselves as outsiders. More importantly, it showed them that people care for the solution and have a need to solve the problem.
What you need: A landing page showing off the product is a plus but not essential. You may start by simply observing user behavior upon introduction of the service. You can do this in forums or by attending meetings and social events. It is a powerful and yet simple experiment.
We’re Almost There!
Referred users, especially those who really need what you are offering, will want to create an account and start using the product. But you don’t have a solution yet! Test their intent by creating a longer than usual sign-up process. This is meant to increase the friction for two reasons: user filtering and gauging real interest. Those who go through the whole sign up process are more likely to not only use the product but also pay for it. Take the example of Buffer. They used a landing page and a series of registration steps to finally say, “Hello! You caught us before we’re ready.” In a blog post, Joel Gascoigne, Buffer co-founder said, “I used the landing page as an initial validation of whether they would go through a long sign up process for the product I pitched on the first page.”
What you need: First, keep your value proposition in mind and write down the different actions you want users to take for validation purposes. Second, design a landing page and implement the defined steps. You can always use non-scalable tools to simulate the registration process and gather feedback, however for a more realistic scenario, I suggest the use of a landing page in this case.
Sound Very Real
Following early stage validation from the first two experiments, incorporate customer feedback to come up with a video describing the product in details. As if it existed. Bill Gates announced Windows before even starting it. Dropbox showcased the product well before its completion.
What you need: With a strong early validation, don’t hesitate to invest in a video showcasing the product. Dropbox acquired thousands of users with their product video. Here is where you can learn more about their launch story.
Fake The Price
Nothing is more powerful than people pulling money out of their pocket for your service. After a long registration process, include a payment plan for testing purposes. Enable users to input their credit card information and pay. Upon payment, users read a message such as, “Hi! Thanks for your payment but guess what? We haven’t charged you. Instead, we’re giving you [discount] but we’re not quite ready yet! You’ll be the first to know when we are. Share the news with your friends here [social medial and email].” This a powerful experiment for validation and in building strong relationships with those who truly believe and need what you offer. Upon checkout and for testing purposes, Wufoo, acquired by Survey Monkey, showed customers a price but charged them less. This enabled them to test users’ sensitivity to the price in real time unlike surveys or interviews. The same experiment can be conducted without a product, that is, at the idea validation stage of the company.
What you need: Add a payment plan and gateway to the landing page you created for the second experiment.
Instead of delaying the service, you can perform it manually or by using non-scalable resources. If you cannot perform the tasks without automation, select a few of early users and do your best to simulate the experience. This stage is just a transition. It is meant for learning and validation. Users who take the time to go through the registration process then make a payment would not mind working with an OK product. The team at Doordash used their own vehicle, phone, Find My Friend app, Google Docs and other tools to simulate the food delivery process.
What you need: Whether it is a simple or complex offering, define the steps and find the tools or man power (you and your team) to simulate the experience. Founders at Doordash note that the simulation process helped them understand how their app algorithm should look like.
Idea or hypotheses validation can take many forms. Traditionally, it was about building the product. Lately, we turned to building smaller versions of the product (MVP). Today, it is about taking smaller simpler and earlier (than MVP) steps for early stage validation. This is done through experiments.
- Identify your potential buyers then recommend your solution as if it solved your problem too.
- Use a landing page and a longer than usual registration process to filter those who love your solution from those who kind of like it. End the form by a, “we’re coming soon”, message.
- After incorporate customer feedback from the first two stages, create a video showcasing the product and its features as if it exists.
- Pretend as if the product is ready for sale by allowing buyers to input and submit their payment information.
- Use non-scalable resources to simulate the product or service.
Can you think of another experiment? tell us about it below.
|
OPCFW_CODE
|
R. Backofen, Albert Ludwigs University Freiburg
Service center: RNA Bioinformatics Center – RBC
Public archives and databases, like ENA, SRA, are key to ensuring read sequences and datasets are ideally stored long-term and adhering to the FAIR (Findable, Accessible, Interoperable, Reusable) principles. For more than 15,000 users of the European Galaxy server (https://usegalaxy.eu) we have integrated special connectors to those public archives to easily provide scientists with those data in Galaxy and our clouds. Besides accessing data, keeping track of the latest data is time-consuming and not straightforward due to the rapid data growth. Our Galaxy Gateway is designed for downloading and analysing all available (several thousand) sequences of SARS-CoV-2 that are currently published in those data archives. To overcome the issues of tracking the latest datasets as well as to provide a quick turn-around in analysing the latest sequences, we have created a collection of identifiers of relevant sequence-datasets that we update daily in an automatic way. We also mirror all publicly available COVID-19 related datasets in Galaxy to ease the access and reduce analysis time. To facilitate the COVID-19 research, we also continuously integrate the recent SARS-CoV-2 reference genome and create optimised indices to access it from all related tools. After provision of these key features, the European Galaxy server has recently experienced an increased usage especially in COVID-19 related research. In March 2020, the European Galaxy server processed 400,000 jobs, and in April already 500,000 jobs. In terms of data, 140 TB were uploaded in April or created by our users. The hardware underlying the Galaxy service is provided by the “Federal Ministry of Education and Research” of Germany, which is supporting the de.NBI-cloud; the University of Freiburg, which offered 2000 additional cores to the Galaxy infrastructure to fight the pandemic; and a global distributed compute network with contributions from Finland, Belgium, UK, Italy, Spain, Norway, Portugal, and Australia. Specialised tools, particularly in the field of long-read sequencing and drug design, have advanced requirements, e.g. GPUs. With the help of the University of Freiburg and colleagues from the UK, we were able to offer GPUs to all researchers within only a week to accelerate their research during the COVID-19 pandemic.
For further information visit the Galaxy project website: https://covid19.galaxyproject.org
This work was published: https://doi.org/10.1101/2020.02.21.959973
Figure: Example workflows for pre-processing of SARS-CoV-2 short-read and long-read sequences (left) and analysis of paired-end Illumina reads (right).
|
OPCFW_CODE
|
import torch
import torch.nn as nn
__all__ = ["BaseSelfAttention"]
class BaseSelfAttention(nn.Module):
def __init__(
self,
head_dim: int,
num_heads: int,
how: str = "basic",
slice_size: int = None,
**kwargs,
) -> None:
"""Initialize a base class for self-attention modules.
Four variants:
- basic: self-attention implementation with torch.matmul O(N^2)
- slice-attention: Computes the attention matrix in slices to save mem.
- memeff: `xformers.ops.memory_efficient_attention` from xformers package.
- slice-memeff-attention: Comnbines slice-attention and memeff
Parameters
----------
head_dim : int
Out dim per attention head.
num_heads : int
Number of heads.
how : str, default="basic"
How to compute the self-attention matrix.
One of ("basic", "flash", "slice", "memeff", "slice-memeff").
"basic": the normal O(N^2) self attention.
"flash": the flash attention (by xformers library),
"slice": batch sliced attention operation to save mem.
"memeff": xformers.memory_efficient_attention.
"slice-memeff": Conmbine slicing and memory_efficient_attention.
slice_size, int, optional
The size of the slice. Used only if `how in ('slice', 'slice_memeff)`.
Raises
------
- ValueError:
- If illegal self attention (`how`) method is given.
- If `how` is set to `slice` while `num_heads` | `slice_size`
args are not given proper integer values.
- If `how` is set to `memeff` or `slice_memeff` but cuda is not
available.
- ModuleNotFoundError:
- If `self_attention` is set to `memeff` and `xformers` package is not
installed
"""
super().__init__()
allowed = ("basic", "flash", "slice", "memeff", "slice-memeff")
if how not in allowed:
raise ValueError(
f"Illegal exact self attention type given. Got: {how}. "
f"Allowed: {allowed}."
)
self.how = how
self.head_dim = head_dim
self.num_heads = num_heads
if how == "slice":
if any(s is None for s in (slice_size, num_heads)):
raise ValueError(
"If `how` is set to 'slice', `slice_size`, `num_heads`, "
f"need to be given integer values. Got: `slice_size`: {slice_size} "
f"and `num_heads`: {num_heads}."
)
if how in ("memeff", "slice-memeff"):
try:
import xformers # noqa F401
except ModuleNotFoundError:
raise ModuleNotFoundError(
"`self_attention` was set to `memeff`. The method requires the "
"xformers package. See how to install xformers: "
"https://github.com/facebookresearch/xformers"
)
if not torch.cuda.is_available():
raise ValueError(
f"`how` was set to {how}. This method for computing self attentiton"
" is implemented with `xformers.memory_efficient_attention` that "
"requires cuda."
)
# for slice_size > 0 the attention score computation
# is split across the batch axis to save memory
self.slice_size = slice_size
self.proj_channels = self.head_dim * self.num_heads
|
STACK_EDU
|
Zigbee Home Automation integration for Home Assistant allows you to connect many off-the-shelf Zigbee based devices to Home Assistant, using one of the available Zigbee radio modules that is compatible with zigpy (an open source Python library implementing a Zigbee stack, which in turn relies on separate libraries which can each interface a with Zigbee radio module a different manufacturer).
There is currently support for the following device types within Home Assistant:
- Binary Sensor
Zigbee devices that deviate from or do not fully conform to the standard specifications set by the Zigbee Alliance may require the development of custom ZHA Device Handlers (ZHA custom quirks handler implementation) to for all their functions to work properly with the ZHA integration in Home Assistant. These ZHA Device Handlers for Home Assistant can thus be used to parse custom messages to and from Zigbee devices.
The custom quirks implementations for zigpy implemented as ZHA Device Handlers for Home Assistant are a similar concept to that of Hub-connected Device Handlers for the SmartThings Classics platform as well as that of Zigbee-Shepherd Converters as used by Zigbee2mqtt, meaning they are each virtual representations of a physical device that expose additional functionality that is not provided out-of-the-box by the existing integration between these platforms.
- dresden elektronik deCONZ based Zigbee radios (via the zigpy-deconz library for zigpy)
- EmberZNet based radios using the EZSP protocol (via the bellows library for zigpy)
- Nortek GoControl QuickStick Combo Model HUSBZB-1 (Z-Wave & Zigbee USB Adapter)
- Elelabs Zigbee USB Adapter
- Elelabs Zigbee Raspberry Pi Shield
- Telegesis ETRX357USB (Note! This first have to be flashed with other EmberZNet firmware)
- Telegesis ETRX357USB-LRS (Note! This first have to be flashed with other EmberZNet firmware)
- Telegesis ETRX357USB-LRS+8M (Note! This first have to be flashed with other EmberZNet firmware)
- Texas Instruments CC253x, CC26x2R, and CC13x2 based radios (via the zigpy-cc library for zigpy)
- CC2531 USB stick hardware flashed with custom Z-Stack coordinator firmware from the Zigbee2mqtt project
- CC2530 + CC2591 USB stick hardware flashed with custom Z-Stack coordinator firmware from the Zigbee2mqtt project
- CC2530 + CC2592 dev board hardware flashed with custom Z-Stack coordinator firmware from the Zigbee2mqtt project
- CC2652R dev board hardware flashed with custom Z-Stack coordinator firmware from the Zigbee2mqtt project
- CC1352P-2 dev board hardware flashed with custom Z-Stack coordinator firmware from the Zigbee2mqtt project
- CC2538 + CC2592 dev board hardware flashed with custom Z-Stack coordinator firmware from the Zigbee2mqtt project
- XBee Zigbee based radios (via the zigpy-xbee library for zigpy)
- Digi XBee Series 3 (xbee3-24) modules
- Digi XBee Series 2C (S2C) modules
- Digi XBee Series 2 (S2) modules (Note! This first have to be flashed with Zigbee Coordinator API firmware)
- ZiGate based radios (via the zigpy-zigate library for zigpy and require firmware 3.1a or later)
From the Home Assistant front page go to Configuration and then select Integrations from the list.
Use the plus button in the bottom right to add a new integration called ZHA.
In the popup:
- USB Device Path - on a Linux system will be something like
- Radio type - select device type
|Radio Type||Zigbee Radio Hardware|
||EmberZNet based radios, Telegesis ETRX357USB*** (using EmberZNet firmware)|
||ConBee, ConBee II|
||Digi XBee Series 2, 2C, 3|
Submitto save changes.
The success dialog will appear or an error will be displayed in the popup. An error is likely if Home Assistant can’t access the USB device or your device is not up to date. Refer to Troubleshooting below for more information.
To configure the component, select ZHA on the Integrations page and provide the path to your Zigbee USB stick.
Or, you can manually configure
zha section in
configuration.yaml. The path to the database which will persist your network data is required.
# Example configuration.yaml entry zha: usb_path: /dev/ttyUSB2 database_path: /home/homeassistant/.homeassistant/zigbee.db
If you are use ZiGate, you have to use some special usb_path configuration:
- ZiGate USB TTL or DIN:
autoto auto discover the zigate
- PiZigate :
- Wifi Zigate :
Path to the serial device for the radio.
Baud rate of the serial device.
Full path to the database which will keep persistent network data.
Enable quirks mode for devices where manufacturers didn’t follow specs.
To add new devices to the network, call the
permit service on the
zha domain. Do this by clicking the Service icon in Developer tools and typing
zha.permit in the Service dropdown box. Next, follow the device instructions for adding, scanning or factory reset.
Go to the Configuration page and select the ZHA integration that was added by the configuration steps above.
Click on ADD DEVICES to start a scan for new devices.
Reset your Zigbee devices according to the device instructions provided by the manufacturer (e.g., turn on/off lights up to 10 times, switches usually have a reset button/pin).
Philips Hue bulbs that have previously been added to another bridge won’t show up during search. You have to restore your bulbs back to factory settings first. To achieve this, you basically have the following options.
Using a Philips Hue Dimmer Switch is probably the easiest way to factory-reset your bulbs. For this to work, the remote doesn’t have to be paired with your previous bridge.
- Turn on your Hue bulb you want to reset
- Hold the Dimmer Switch near your bulb (< 10 cm)
- Press and hold the (I)/(ON) and (O)/(OFF) buttons of the Dimmer Switch for about 10 seconds until your bulb starts to blink
- Your bulb should stop blinking and eventually turning on again. At the same time, a green light on the top left of your remote indicates that your bulb has been successfully reset to factory settings.
Follow the instructions on https://github.com/vanviegen/hue-thief/ (EZSP-based Zigbee USB stick required)
On Linux hosts ZHA can fail to start during HA startup or restarts because the Zigbee USB device is being claimed by the host’s modemmanager service. To fix this disable the modemmanger on the host system.
To remove modemmanager from an Debian/Ubuntu host run this command:
sudo apt-get purge modemmanager
If you are using Docker and can’t connect, you most likely need to forward your device from the host machine to the Docker instance. This can be achieved by adding the device mapping to the end of the startup string or ideally using Docker compose.
Install Docker-Compose for your platform (Linux -
sudo apt-get install docker-compose).
docker-compose.yml with the following data:
version: '2' services: homeassistant: # customisable name container_name: home-assistant # must be image for your platform, this is the rpi3 variant image: homeassistant/raspberrypi3-homeassistant volumes: - <DIRECTORY HOLDING HOME ASSISTANT CONFIG FILES>:/config - /etc/localtime:/etc/localtime:ro devices: # your usb device forwarding to the docker image - /dev/ttyUSB0:/dev/ttyUSB0 restart: always network_mode: host
|
OPCFW_CODE
|
Arrow Diagrams and Symbolic Representations
While not useful for every function or equation, the use of arrow diagrams as a means of
introducing order of operations, solving equations, inverse operations, and function composition
gives a student one more representation in which to view otherwise abstract concepts. The basic
structure of the arrow diagram really draws on the old idea of the function machine, but it breaks
the larger function machine into its constituent parts.
My personal experience has been that the more that I work with arrow diagrams, the more places
I find application for them in my curriculum, even into calculus. But enough text - let’s get to
Example 1: Represent the equation y with an arrow diagram
3 5 8
x 3x 3x 5 y
Okay, but what does that do for us? First, we have verified that we know the order of operations
- when you do this you will be amazed to discover that is at the heart of a lot of students’
struggles, and you are able to pinpoint exactly where the problem is. Second, we now have a
step-by-step function machine for evaluating the function - we can put a number in at the
beginning (for x), and just do what the arrows tell us to do. The operation on the arrow always
applies to whatever came off of the previous arrow.
Example 2: Find the inverse of y .
The key here is that the inverse of the whole function will be the composition of the inverse
functions - assuming that the operations are invertible. We simply make arrows from y back to x.
3 5 8
x 3x 3x 5 y y 8y 8y 5 x
3 5 8 3
Example 3: Solve: 2.
Using the arrow diagram above, we just start with a 2 where the y is:
2 2 8 16 16 5 21 21 3 7 x
These are exactly the same steps you would make in solving the problem in a traditional algebra
When students work regularly with arrow diagrams, they become naturally curious about
inverses. Whenever a new function or operation is introduced (something new that can go on an
arrow), then the expectation is that there will be something to go on the arrow that goes the other
direction. When functions are not one-to-one (like the squaring function), then the class can
have a discussion about what adjustments need to be made to the return arrows.
Example 4: Make a complete arrow diagram for y x 2 .
^2 This is close, but ^2 A double set of arrows takes care of
x y fails to capture x y it (if I were doing this by hand, I
the negative root would curve the arrows so that they
both appear to be coming from the y
Example 5: Construct arrow diagrams for the following: y x 3 , y , y x , y sin x
Solutions left as an exercise.
Other applications: In calculus, I teach chain rule using arrow diagrams - which is a natural fit as
chain rule deals with composition of functions. We generally do not construct the inverse part of
the diagram, but instead add a down arrow for each operation/function.
Example 6: Find the derivative of y sin 3 x 3 .
x x 3
3x 2 3 cos 3x 3 9 x 2 cos 3x 3
3 2 3 cos
This is really just the inside function, outside function process with the inside functions
occurring first. The advantage is that students are less likely to insert the derivative of one
function inside the derivative of another function.
The drawbacks: The biggest is that arrow diagrams do not deal well with combinations of
functions, only with compositions. I have seen some methods of representing combinations, but
the final result has no practical application in terms of solving equations (inverses) or taking
derivatives, so I never spend any time with it.
The second is that if you are the only one doing it, then students tend to be resistant to learning to
do something differently.
|
OPCFW_CODE
|
Most Recent Firmware Update | Version 2.2.5 | June 17, 2106
This purpose of this update is the addition of 2 new features that users have requested plus one minor bug fix that only applies to balanced preamps. As such this is not a critical update. This update skips a version (2.2.4) which was shipped in only a single preamp and included the Fix discussed below.
Hope springs eternal but I really think this will be the last iteration of the version 2.2 series for a while now. We've gone through the usual post-release bug kill and everything appears to be stable. New: Fast Input Switching
- While in Input adjust mode, user now has option to do use fast switching between inputs by using the Left/Right buttons on the remote rather than the normal Raise/Lower followed by Enter. When using the Left/Right buttons the inputs transition within less than a second without any muting/unmuting. When using fast switching the volume will immediately go the volume level last associated with the selected input.New: Display Timeout
- When in Display adjust mode user can now optionally set the time (in seconds) before display turns off after last user input. Adjustment is between 1 and 99 seconds using the Left/Right remote button. Setting timeout to zero (0) disables the timeout feature. Display timeout only works when in volume adjust mode and is disabled when in any other control mode such as input adjust, impedance adjust, display adjust etc. After updating you should check this setting since it may not be zero (off) by default.Fix: Turn Off (Balanced Preamps)
- The new preamp turn off function by pressing/holding the Menu button that introduced in version 2.2.3 was not working properly for balanced units. This update fixes that.More Info On Fast Input Switching
Fast Input Switching was introduced at the request of users who wanted to be able to do A/B testing with difference sources. Fast input switching overrides the normal automatic and relatively slow sequence of first muting the current input, switching inputs, and then ramping the volume back up. Fast switching has only been tested with Tortuga preamps that use LDRs for input switching - we have not tested this with our DIY relays and kits. LDRs are inherently slow switches compared to relays. While they turn on relatively quickly, they are much slower to turn off. Because of the slow turn off time, fast switching includes a brief transition delay of 200 milliseconds. This delay is barely perceptible but avoids any potentially harmful artifacts such as pops or bumps to your speakers. Users are cautioned to avoid switching between inputs where the current input is at a low volume level and the new input was previously at a high volume level. The preamp will abruptly go to the volume level last associated with the new input. As it does so the volume of the current input may surge slightly before it fully turns off.
Firmware info and downloads can be found here: http://www.tortugaaudio.com/downloads/
|
OPCFW_CODE
|
The framebuffer's original function is as a video RAM cache to allow more flexibility to (older) video cards. Many newer cards come with framebuffers on board, which are often already compatible with many operating systems. Enabling framebuffer support in the Linux kernel will often cause graphical artifacts or black screen displays. For most newer cards, this option should not be selected when using the LiveDVD.
Checking the console driver
On the boot media:
dmesg | grep fb0
[ 11.388220] fbcon: amdgpudrmfb (fb0) is primary device [ 11.796455] amdgpu 0000:0a:00.0: [drm] fb0: amdgpudrmfb frame buffer device
In the previous output, fb0 is the primary display. The console will appear here. The frame buffer device tells that is indeed a framebuffer console. fb0: amdgpudrmfb tells that the driver in use is the kernel's AMDGPU DRM FB framebuffer driver.
That is included with the kernel amdgpu driver. No other framebuffer drivers are strictly required but see early console drivers below.
Other video cards will show something similar.
Users intent on installing nvidia-drivers see the Early framebuffer drivers section below.
nVidia users may find that the above grep returns nouveaufb. It is not possible to use the kernel nouveau driver and nvidia-drivers concurrently. Only the early framebuffer drivers may be selected.
The kernel selection
Most of the the kernel framebuffer options are for use with hardware over 20 years old. These options almost always interfere with modern Direct Render Manager (DRM) provided framebuffers as both will attempt to configure the hardware and neither will work.
DRM framebuffer drivers
For everyone except nvidia-drivers users.
On the Graphics support sub-menu within the kernel configuration Device driver menu, choose:
Note that the fbdev option is invisible unless the following option is also selected further down in the Graphic support menu:
Back on the Graphics support menu, choose the DRM driver for the system. Xorg will use this later.
For example, to use AMDGPU:
There are very few uses for the Virtual drivers.
Building these drivers as modules (<M>) avoids the requirement to discover and include the required firmware in the kernel.
emerge --ask sys-kernel/linux-firmware
will install the required firmware.
Early framebuffer drivers
Early because the DRM Framebuffer drivers typically require firmware to be loaded which implies that they are often built as loadable modules. They, therefore, start sometime later than built in drivers. This mean that the early console messages are lost as the console is blank until the DRM driver is initialized.
Only four Early Framebuffer Drivers are safe for modern hardware:
It is safe to choose them all as none of then try to control the hardware. The kernel will pick and choose at boot time.
All the others must be disabled.
- Intel — the open source graphics driver for Intel GMA on-board graphics cards and Intel Arc dedicated graphics cards, starting with the Intel 810.
- Nouveau — the open source driver for NVIDIA graphic cards.
- Radeon — a family of open source graphics drivers for older AMD/ATI Radeon graphics cards.
- Xorg/Guide — Verify legacy framebuffer drivers have been disabled
|
OPCFW_CODE
|
yosemitesam Posted September 12, 2017 Share Posted September 12, 2017 (edited) Undergrad Institution: Top 20 US Major(s): PhysicsMinor(s): Jazz StudiesGPA: 3.85 Type of Student: US White MaleGRE General Test: (took June 2017) Q: 170 (97%)V: 170 (99%)W: 6.0 (99%) GRE Math Sub: Taking in October, aiming for 70th percentile Programs Applying: Statistics PhD/MS Research Experience: Working now on an independent project developing a model for a private company, will write a paper and submit with my applications as a writing sample Developed a decently performing model for a challenge, think I'll have my name in a published paper as a result (http://www.fragilefamilieschallenge.org/) Worked in a physics lab as a year for a sophomore (It was pretty good but I realized I didn't have a passion for working in labs, and that was about ten years ago now (!), so pretty sure I can't get a reference letter) Worked as an RA for a humanities professor over a summer (just pretty simple clerical stuff; again, this was about 8 years ago) Awards/Honors/Recognitions: Phi Beta Kappa, graduated with College Honors in Arts and Sciences, Dean's List every semester, National Merit Scholarship recipientPertinent Activities or Jobs: Developed a simple algorithm for my friend at a large company to make predictions (have to talk to him but I think I'll be able to characterize that as an internship and put it on my CV); 8 years of tutoring math/physics/test prep Courses: Math/Statistics: Honors Calc 3 (took in high school at a top 30 university, A), Matrix Algebra (A), Diff EQ (A), Probability (took this summer at a top 25 university, A+), Mathematical Statistics (took this summer at a top 25 university, A), Real Analysis (taking this fall at a meh state school, was my only option unfortunately) Computer Science: Intro to Computer Programming (A+), Computer Science I (A) Other courses with heavy math component: Intro Microecon (A-), Intermediate Macroecon (took this summer at a top 25 university, A+), many physics classes of course (all A+/A/A- except one B+) Online courses (MOOCs): Econometrics (Coursera), Statistical Learning (Stanford Lagunita, Hastie/Tibshirani), Machine Learning (Coursera, Ng), a few SQL classes, a few classes on R/Python Letters of Recommendation: Undergraduate advisor who I took a math class with and got an A in; mathematical statistics professor who I got an A with; still figuring out third (could be advisor for research project I'm working on now, although I don't get a lot of face time so not sure as of now how strong that letter would be; humanities professor who I worked as a summer RA for who liked me a lot and I tutored her kids for over a year afterward, or possibly one of two graduate students (one who was a TA, one who taught a class) who like me a lot and can speak to my mathematical ability). Any Miscellaneous Points that Might Help: My research interests are more on the applied side rather than theory Programs considering: PhD: Duke Penn Wharton UNC NC State Columbia UCLA USC Marshall (Data Science and Operations--Statistics) CalTech (Computing and Mathematical Sciences) NYU Northwestern MS: Berkeley Harvard Chicago Concerns: I graduated just about 8 years ago (graduated a semester early). I will turn 30 just before programs start next fall. Not sure how this will affect me? I've taken the GRE, Probability, Mathematical Statistics, and Intermediate Macroecon this year, and will take Real Analysis and the GRE Math Subject test this fall. Plus I've been tutoring all these years. So I'm hoping that will show my readiness to go back to school. I know my math coursework is a little thin. And based on my practice I'm not sure I can count on a math GRE score much past the 70th percentile. I'm hoping that will be enough to be in the running at the programs I'm looking at? I think I'll have two reasonably strong letters, but I'm not sure what to do about the third. Any advice would be appreciated. Any advice on how I can best spend my time between now and when applications are due? I will be taking this Real Analysis class and working on my research project/paper, as well as wrangling rec letters of course. The impression I'm under is that out of the things I can do now that are within my control, the most important things would be maximizing my GRE math subject test score, securing the best rec letters I can, and making this research project/resulting paper as good as possible. Edited September 12, 2017 by yosemitesam Link to comment Share on other sites More sharing options...
Create an account or sign in to comment
You need to be a member in order to leave a comment
Create an account
Sign up for a new account in our community. It's easy!Register a new account
Already have an account? Sign in here.Sign In Now
|
OPCFW_CODE
|
Altair Recruitment 2022 | Software Development Engineer | BE/ B.Tech – Computers/ IT | Bangalore
Company: Altair Engineering
Altair Recruitment 2022 : Altair Engineering is an American product design and development, engineering software and cloud computing software company. Altair was founded by James R Scapa, George Christ, and Mark Kistner in 1985.
Headquartered in Troy, Michigan, USA, with offices throughout North America, Europe and Asia, Altair Engineering is a leading global product design, consulting and commercial software product company.
Position: Software Development Engineer
Experience: 1+ year
Salary: Best in Industry
Job Location: Bangalore
Eligibility Criteria for Altair Recruitment 2022:
- Bachelor’s degree in Computer Science, with internship or 1+ years of relevant experience
- Experience with Python and other scripting languages
- Knowledge of development operations, user experience, and front-end best practices
- A dynamic, motivated, positive outlook with the desire to innovate and the ability to prioritize responsibilities
- Ability to work in a multicultural, distributed, and global environment and communicate with non-native English speakers
Altair Engineering is seeking an experienced software development engineer to support our technical documentation team. This role will focus on maintaining our publishing and version control infrastructure, automating processes, and developing next-generation search and in-application help. We’re looking for someone who can adapt to changing priorities and maintain momentum on multiple projects. Independence and initiative are critical; the ability to work well with a global, multi-cultural team is also required. As part of an engineering software company, you’ll be immersed in a high-tech culture. Remote work arrangements are supported.
- Define and scale the CI/CD pipeline for building and deploying our help across a range of documentation teams and products using GitLab, Python, Bash/Shell scripting, and the DITA-OT.
- Develop and maintain a robust search engine for help and documentation.
- Identify opportunities and develop solutions to improve the efficiency of content publishing processes.
- Assist writing teams in troubleshooting and resolving authoring system and content build issues.
- Support a delivery model that enforces quality, content structure, version control, and publishing strategies in multiple deliverables, languages, and formats.
- Work with the team to develop prototypes for the next generation of user assistance tools, including in-application help.
- Stay abreast of industry trends and continuously improve your technical skills.
- Bachelor’s degree in Computer Science, with 3+ years of relevant experience
- Experience with bash/shell scripting for CI/CD and automation tools
- Development operations experience defining and scaling Git-based CI/CD pipelines and infrastructure across multiple projects and teams
- Basic knowledge of GitLab Runner or an equivalent build-automation server tool preferred
How to Apply for Altair Recruitment 2022?
Desirous candidates may apply through online mode.
Apply Link: Click Here
|
OPCFW_CODE
|
M: Ask HN: Build a discussion platform for open-source projects. Does anybody care? - pankratiev
It should be a community site which allows programmers discuss any technical stuff related to open-source projects.
Each user will be able to follow project in order to see posts related to it, as well as submit his own technical posts.<p>What do you think?
R: zooko2
I think it would be cool! I love open source stuff, and I love chatting.
<http://advogato.org> was the second ever social networking site, and it was
for open source/free software people. It is still there, but I no longer use
it for some reason.
<http://lwn.net> is a weekly news zine with vigorous and informed discussion
in the comments.
You can always use a discussion site like convore for open source discussion.
R: amccloud
Correct me if i'm wrong but I think that is the goal of <http://convore.com/>
R: pankratiev
"Convore is a quick way to instant message with groups of friends in real-
time. Join public or private groups and talk about anything!"
It's a chat. But I meant something like improved Google Groups fully focused
on programming
|
HACKER_NEWS
|
Whilst most research is understandably focused on pushing the boundaries of complexity, the reality is that training and running complex models can have a big impact on the environment. It’s predicted that data centres will represent 15% of global CO2 emissions by 2040, and a 2019 research paper, “Energy considerations for Deep Learning,” found that training a natural language translation model emitted CO2 levels equivalent to four family cars over their lifetime. Clearly, the more training, the more CO2 is released. With a greater understanding of environmental impact, organisations are exploring ways to reduce their carbon footprint. Whilst we can now use AI to make data centres more efficient, the world should expect to see more interest in simple models that perform as well as complex ones for solving specific problems. Realistically, why should we use a 10-layer convolutional neural network when a simple bayesian model performs equally well while using significantly less data, training, and compute power? “Model efficiency” will become a byword for environmental AI, as creators focus on building simple, efficient, and usable models that don't cost the earth.
IBM defines a digital twin as follows “A digital twin is a virtual model designed to accurately reflect a physical object”. They go on to describe how the main enabling factors for creating a digital twin are the sensors that gather data and the processing system that inserts the data in some particular format/model into the digital copy of the object. Further, IBM says “Once informed with such data, the virtual model can be used to run simulations, study performance issues and generate possible improvements”. ... So, how do we use our favorite language Python to create a digital twin? Why do we even think it will work? The answer is deceptively simple. Just look at the figure above and then at the one below to see the equivalency between a Digital Twin model and a classic Python object. We can emulate the sensors and data processors with suitable methods/functions, store the gathered data in a database or internal variables, and encapsulate everything into a Python class.
When you have a monolith, you generally only need to talk to one database to decide whether a user is allowed to do something. An authorization policy in a monolith doesn't need to concern itself too much with where to find the data (such as user roles) — you can assume all of it is available, and if any more data needs to be loaded, it can be easily pulled in from the monolith's database. But the problem gets harder with distributed architectures. Perhaps you're splitting your monolith into microservices, or you're developing a new compute-heavy service that needs to check user permissions before it runs jobs. Now, the data that determines who can do what might not be so easy to come by. You need new APIs so that your services can talk to each other about permissions: "Who's an admin on this organization? Who can edit this document? Which documents can they edit?" To make a decision in service A, we need data from service B. How does a developer of service A ask for that data? How does a developer of service B make that data available?
In the payments realm, Mastercard® Healthcare Solutions optimizes the workflow for payers and providers by automating repetitive and error-prone operations, such as billing and claims processing. According to CIO magazine, many hospitals are now using AI to automate mundane tasks, reduce workloads, eliminate errors and speed up the revenue cycle. The author notes AI’s effectiveness for reducing incorrect payments for erroneous billings, and for preventing the labor-intensive process of pulling files, resubmitting to payers and eventual payment negotiations. ... The successful use of AI for FWA prevention is increasing in popularity. A recent study by PMYNTS revealed that approximately 12 percent of the 100 sector executives surveyed use AI in healthcare payments, three times the number using AI in 2019. Nearly three-quarters of the 100 execs plan to implement AI by 2023. ... These are all important factors when building an AI model and show the need to demonstrate return on investment (ROI) through a proof of concept.
“As we surround applications with our capabilities, we will understand the traffic flow and the performance and what’s normal,” Coward says. “The longer you run the AI within the network, the more you know about what typically happens on a Tuesday afternoon in Seattle.” A key aspect of SevOne is the ability to take raw network performance data from sources–such as SNMP traps, logs in Syslog formats, and even packets captured from network taps–combine it in a database, and then generate actionable insights from that blended data. “The uniqueness of SevOne is really that we put it into a time-series database. So we understand for all those different events, how are they captured [and] we can correlate them,” Coward explains “That sounds like an extraordinary simple things to do. When you’re trying to do that at scale across a wide network where you literally have petabytes of data being created, it creates its own challenge.” The insights generated from SevOne can take the form of dashboards that anyone can view to see if there’s a network problem, thereby eliminating the need to call IT.
The rapid deployment of AI into societal decision-making—in areas such as health care recommendations, hiring decisions, and autonomous driving—has catalyzed ongoing ethics discussions regarding trustworthy AI. These considerations are in early stages. Future issues could arise as tech goes beyond AI. Focus is intensifying on the importance of deploying AI-powered systems that benefit society without sparking unintended consequences with respect to bias, fairness, or transparency. Technology is increasingly a focal point in discussions about efforts to deceive using disinformation, misinformation, deepfakes, and other misuses of data to attack or manipulate people. Some tech companies are asking governments to pass regulations clearly outlining responsibilities and standards, and many organizations are cooperating with law enforcement and intelligence agencies to promote vigilance and action. ... Many technology organizations are facing demands from stakeholders to do more than required by law to adopt sustainable measures such as promoting more efficient energy use and supply chains, reducing manufacturing waste, and decreasing water use in semiconductor fabrication.
Everything is connected in some way, well beyond the obvious, which leads to layer upon layer of real world complexity. Complex systems interact with other complex systems to produce additional complex systems of their own, and so goes the universe. This game of complexity goes beyond just recognizing the big picture: where does this big picture fit into the bigger picture, and so on? But this isn't just philosophical. This real world infinite web of complexity is recognized by data scientists. They are interested in knowing as much about relevant interactions, latent or otherwise, as they work through their problems. They look for situation-dependent known knowns, known unknowns, and unknown unknowns, understanding that any given change could have unintended consequences elsewhere. It is the data scientist's job to know as much about their relevant systems as possible, and leverage their curiosity and predictive analytical mindset to account for as much of these systems' operations and interactions as feasible, in order to keep them running smoothly even when being tweaked.
Like any public blockchain, the open-source code is viewable by the public. Since there is no human being in control, users can be certain the code will execute according to the rules it contains. As the industry saying goes, “code is law.” DAOs are controlled by a type of cryptocurrency called governance tokens, and these give token holders a vote on the project. The investment is based on the idea that as the platform attracts more users and the funds are deposited into its lending pools, the total value locked (TVL) increases and the more valuable its tokens will become. Aave has nearly $14 billion TVL, but the AAVE token is not loaned out. The Aave protocol’s voters have allowed lenders to lock 30 different cryptocurrencies, each of which has interest rates for lenders and borrowers set by the smart contract rules. Different protocols have different voting rules, but almost all come down to this: Token holders can propose a rule change. If it gets enough support, a vote is scheduled; if enough voters support it, the proposal passes, the code is updated, and the protocol’s rules are updated.
It is well understood that blockchain-based digital identity management is robust and encrypted to ensure security and ease of portability. Hence, mandating its effective incorporation for improving the socio-economic well-being of the users, which is mainly associated with digital identity. With time and advanced technologies, digital identity has become an essential entity that enables users to have various rights and privileges. Although Blockchain has various benefits while managing digital identities, it cannot be considered a panacea. Blockchain technology is continuously developing, and though it offers multiple benefits, there also exist various challenges when aiming to completely replace the traditional identity management methods with the latter. Some of the known challenges include the constantly developing technology and the lack of standardization of data exchange. Considering the benefits that come with transparency and the trust earned through blockchain frameworks, numerous organizations are merging to ensure interoperability across their borders.
Data lakes will continue their dominance as essential for enabling analytics and data visibility; 2022 will see rapid expansion of a thriving ecosystem around data lakes, driven by enterprises seeking greater data integration. As organizations work out how to introduce data from third-party systems and real-time transactional production workloads into their data lakes, technologies such as Apache Kafka and Pulsar will take on those workloads and grow in adoption. Beyond introducing data to enable BI reporting and analytics, technologies such as Debezium and Kafka Connect will also enable data lake connectivity, powering services that require active data awareness. Expect that approaches leveraging an enterprise message bus will become increasingly common as well. Organizations in a position to benefit from the rise of integration solutions should certainly move on these opportunities in 2022. Related to this trend (and to Trend #1 as well): the emerging concept of a data mesh will really come into its own in 2022.
Quote for the day:
"The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things." -- Ronald Reagan
|
OPCFW_CODE
|
UC Santa Barbara
Spectral Properties of the Koopman Operator in the Analysis of Nonstationary Dynamical Systems
- Author(s): Mohr, Ryan M.
- Advisor(s): Mezic, Igor
- et al.
The dominating methodology used in the study of dynamical systems is the geometric picture introduced by Poincare. The focus is on the structure of the state space and the asymptotic behavior of trajectories. Special solutions such as fixed points and limit cycles, along with their stable and unstable manifolds, are of interest due to their ability to organize the trajectories in the surrounding state space.
Another viewpoint that is becoming more prevalent is the operator-theoretic / functional-analytic one which describes the system in terms of the evolution of functions or measures defined on the state space. Part I of this doctoral dissertation focuses on the Koopman, or composition, operator that determines how a function on the state space evolves as the state trajectories evolve. Most current studies involving the Koopman operator have dealt with its spectral properties that are induced by dynamical systems that are, in some sense, stationary (in the probabilistic sense). The dynamical systems studied are either measure-preserving or initial conditions for trajectories are restricted to an attractor for the system. In these situations, only the point spectrum on the unit circle is considered; this part of the spectrum is called the unimodular spectrum. This work investigates relaxations of these situations in two different directions. The first is an extension of the spectral analysis of the Koopman operator to dynamical systems possessing either dissipation or expansion in regions of their state space. The second is to consider switched, stochastically-driven dynamical systems and the associated collection of semigroups of Koopman operators.
In the first direction, we develop the Generalized Laplace Analysis (GLA) for both spectral operators of scalar type (in the sense of Dunford) and non spectral operators. The GLA is a method of constructing eigenfunctions of the Koopman operator corresponding to non-unimodular eigenvalues. It represents an extension of the ergodic theorems proven for ergodic, measure-preserving, on-attractor dynamics to the case where we have off-attractor dynamics. We also give a general procedure for constructing an appropriate Banach space of functions on which the Koopman operator is spectral. We explicitly construct these spaces for attracting fixed points and limit cycles. The spaces that we introduce and construct are generalizations of the familiar Hilbert Hardy spaces in the complex unit disc.
In the second direction, we develop the theory of switched semigroups of Koopman operators. Each semigroup is assumed to be spectral of scalar-type with unimodular point spectrum, but possibly non-unimodular continuous spectrum. The functions evolve by applying one semigroup for a period of time, then switching to another semigroup. We develop an approximation of the vector-valued function evolution by a linear approximation in the vector space that the functions map into. A basis for this linear approximation is constructed from the vector-valued modes that are coefficients of the projections of the vector-valued observable onto scalar-valued eigenfunctions of the Koopman operator. The unmodeled modes show up as noisy dynamics in the output space. We apply this methodology to traffic matrices of an Internet Service Provider's (ISP's) network backbone. Traffic matrices measure the traffic volume moving between an ingress and egress router for the network's backbone. It is shown that on each contiguous interval of time in which a single semigroup acts the modal dynamics are deterministic and periodic with Gaussian or nearly-Gaussian noise superimposed.
Part II of the dissertation represents a divergence from the first part in that it does not deal with the Koopman operator explicitly. In the second part, we consider the problem of using exponentially mixing dynamical systems to generate trajectories for an agent to follow in its search for a physical target in a large domain. The domain is a compact subset of the n-dimensional Euclidean space Rn. It is assumed that the size of the target is unknown and can take any value in some continuous range. Furthermore, it is assumed that the target can be located anywhere in the domain with equal probability.
We cast this problem as one in the field of quantitative recurrence theory, a relatively new sub-branch of ergodic theory. We give constructive proofs for upper bounds of hitting times of small metric balls in Rn for mixing transformations of various speeds. The upper bounds and limit laws we derive say, approximately, that the hitting time is bounded above by some constant multiple of the inverse of the measure of the metric ball. From these results, we derive upper bounds for the expected hitting time, with respect to the range of target sizes [delta, V), to be of order O(-ln delta). First order, continuous time dynamics are constructed from discrete time mixing transformations and upper bounds for these hitting times are shown to be proportional to the discrete time case.
|
OPCFW_CODE
|
Income inequality in the United States has increased since the 1970s. Has that increase been offset by mobility?
It could be. Half a century ago Milton Friedman (in Capitalism and Freedom) suggested sensibly that a proper understanding of inequality requires taking mobility into account:
“A major problem in interpreting evidence on the distribution of income is the need to distinguish two basically different kinds of inequality: temporary, short-run differences in income, and differences in long-run income status. Consider two societies that have the same distribution of annual income. In one there is great mobility and change so that the position of particular families in the income hierarchy varies widely from year to year. In the other, there is great rigidity so that each family stays in the same position year after year. Clearly, in any meaningful sense, the second would be the more unequal society.”
Some find the following metaphor (originally from Joseph Schumpeter) helpful. Think of an apartment building with units of varying size and quality. It has a few penthouse suites that are large and feature lots of amenities, a multitude of modest two-bedroom units, and a number of barebones single-room units. This is inequality. Suppose, however, that the residents regularly switch units. Most people live much of the time in the low- or mid-level units, but they go back and forth between these, and many occasionally get to live in a penthouse suite. This is mobility. (Specifically, it’s relative intragenerational mobility.) This mobility reduces the amount of inequality — true, genuine, long-run inequality — among the residents.
Income mobility does reduce income inequality. When inequality increases, however, mobility has to also increase if it is to offset that rise in inequality.
This is a simple, perhaps obvious point. But it’s an important one. The rest of this post illustrates it with the aid of some graphs.
To begin, imagine 100 households at two points in time. Suppose, for simplicity, that there are five different incomes in the society at time 1 and one fifth of the households have each of these incomes. (It’s not important for the illustration, but the incomes I use in these charts are the average after-tax incomes of the five quintiles of the U.S. income distribution in 1979 and 2005. The data are here.)
In one scenario, shown in the first chart, no household’s income changes from time 1 to time 2. Each household’s average income is therefore the same as its income at each point in time. “True” inequality — inequality when income is averaged over time 1 and time 2 — is the same as single-point-in-time inequality.
In another scenario, depicted in the second chart, the income levels stay the same at the second point in time. The level of single-point-in-time inequality is thus the same at time 2 as at time 1. But some of the households switch places. Some that start with the lowest income move up to the lower-middle, others to the middle, and a few to the top two incomes. Similarly, some that begin at the top stay there, while others move down.
For each household, the plus (+) and hollow circle (o) indicate its income at time 1 and time 2, respectively, while the solid marker (♦) is its average income. The pattern of the solid markers makes it clear that inequality of average income is lower in this scenario than in the first chart; lots of households’ average income is in between the five levels of the first chart. The Gini coefficient confirms this. The Gini is a standard measure of inequality; it ranges from zero to one, with larger numbers indicating greater inequality. The Gini for average income in the mobility scenario is .286 (second chart), compared to .320 in the no-mobility scenario (first chart).
Now consider what happens when single-point-in-time inequality increases from time 1 to time 2, as has happened in the United States since the 1970s.
The third chart shows a society in which inequality increases from time 1 to time 2 and there is no relative mobility. With the rise in single-point-in-time inequality, inequality of average income is greater than in the first chart. The Gini is .373 in the third chart, compared to .320 in the first chart.
Can mobility offset this rise in inequality? The fourth (and last) chart shows a scenario in which single-point-in-time inequality increases exactly as it did in the third chart but there is relative mobility. The amount of mobility is the same as in the second chart; the same number of households move up or down among the quintiles and by the same (relative) amount.
Mobility does reduce “true” inequality compared to the no-mobility scenario depicted in the third chart. But the Gini coefficient for the fourth chart is much larger than for the second chart. These two scenarios have the same amount of relative mobility. But with single-point-in-time inequality having risen in the fourth scenario between time 1 and time 2, the same degree of relative mobility does not produce the same amount of “true” inequality (inequality of average income).
The bottom line: Income mobility helps to reduce income inequality. But if single-point-in-time inequality rises, mobility can only offset that rise if it too increases.
Single-point-in-time income inequality has risen sharply in the United States since the 1970s. Has mobility increased too? Stay tuned.
|
OPCFW_CODE
|
Setting system time of ROOTED phone
I am currently trying to set Android system time in software. Yes, I know that many people tried it - and failed like I do now. :-)
But I also know that it is possible to set Android system time on ROOTED phones. I have testet an app called ClockSync which does exaclty that.
So what I want is find out how to set system time on ROOTED devices. Please do not say it is not possible. :-)
What I tried so far is setting the following permissions:
<uses-permission android:name="android.permission.SET_TIME_ZONE"/>
<uses-permission android:name="android.permission.SET_TIME"/>
And then in my code:
AlarmManager a = (AlarmManager)getSystemService(Context.ALARM_SERVICE);
long current_time_millies = System.currentTimeMillis();
try {
a.setTime((long)current_time_millies+10000);
} catch (Exception e) {
// Why is this exception thrown?
}
But I always get the following exception:
java.lang.SecurityException: setTime: Neither user 10054 nor current process has android.permission.SET_TIME.
I am testing it on the same device where ClockSync works perfectly. So - what am I doing wrong? Or better: Can you provide tested code that works?
UPDATE: This method no longer works with the recent Android versions. The only other way that I'm aware of is to use the date command to set time. Note that command format may be different depending on the Android version and on the third-party tools installed (BusyBox version of date doesn't support time zones).
// Android 6 and later default date format is "MMDDhhmm[[CC]YY][.ss]", that's (2 digits each)
// month, day, hour (0-23), and minute. Optionally century, year, and second.
private static final SimpleDateFormat setDateFormat = new SimpleDateFormat("MMddHHmmyyyy.ss", Locale.US);
// Standard Android date format: yyyymmdd.[[[hh]mm]ss]
// http://stackoverflow.com/questions/5300999/set-the-date-from-a-shell-on-android
private static final SimpleDateFormat setDateFormat = new SimpleDateFormat("yyyyMMdd.HHmmss", Locale.US);
// BusyBox date format:
// [[[[[YY]YY]MM]DD]hh]mm[.ss]
// but recent versions also accept MMDDhhmm[[YY]YY][.ss]
private static final SimpleDateFormat bbSetDateFormat = new SimpleDateFormat("yyyyMMddHHmm.ss", Locale.US);
First of all, I'm the developer of ClockSync and I know something about setting time on Android.
I'm afraid the answer provided by Violet Giraffe is not correct. The problem is that normal user application cannot gain access to SET_TIME permission. This permission can be used only by system applications that are installed as a part of the ROM or are signed with the same key as the ROM itself (depends on the vendor). One of the applications that is using this approach is NTPc. It's signed with the AOSP key and can be installed on the AOSP based Android ROMs, such as CyanogenMod. Note that Google has banned AOSP keys from Market recently and the author will never be able to update his application. NTPc cannot be installed on regular ROMs used on most of the phones.
If you need the details, make sure to read the comments for the famous issue 4581: Allow user apps to set the system time. Note that the issue was Declined by Google with the following comment:
Hi, it is by design that applications can not change the time. There
are many subtle aspects of security that can rely on the current time,
such as certificate expiration, license management, etc. We do not
want to allow third party applications to globally disrupt the system
in this way.
How to set time on a rooted device:
What ClockSync does to set time is changing the permission of the /dev/alarm device. Essentially, it runs chmod 666 /dev/alarm in the root shell. Once this device has write permissions for all the users, SystemClock.setCurrentTimeMillis(...) call will succeed. Some applications use another approach, they run date command in the root shell with appropriate arguments, however it's error prone and is less accurate because superuser shell and command execution can take several seconds. An example of such application is Sytrant.
By default ClockSync sets 666 permission only if /dev/alarm is not already writable. This saves CPU/battery because su/Superuser.apk execution is relatively expensive. If you worry about security, there is Restore Permission option that will make the alarm device permission 664 after setting time.
For easier root shell access from the application I'm using my own helper class: ShellInterface. Another option is to use the RootTools library.
Here is the sample code based on the ShellInterface class:
public void setTime(long time) {
if (ShellInterface.isSuAvailable()) {
ShellInterface.runCommand("chmod 666 /dev/alarm");
SystemClock.setCurrentTimeMillis(time);
ShellInterface.runCommand("chmod 664 /dev/alarm");
}
}
Feel free to check other Android issues related to setting time:
Issue 12497: ability for applications to gain permission to set time
Issue 18681: Implement timesync by NTP
Issue 67: Automatic setting of date and time using SIM or NTP
If you need precise time in the application without using root and changing system time, you can have your own timers, for example based on Joda Time. You can get the time from NTP servers using NTP client from the Apache commons-net library.
I've got /dev/alarm set up with 666 and am calling setCurrentTimeMillis, but it always returns false
@TimothyP it can happen on the latest Samsung firmwares with KNOX (SELinux) enabled. The only way to overcome it is to use the date command line call as suggested by @Karan. date command doesn't support milliseconds, so I suggest that you align the command call with the second start using the timer based on the atomic time.
I see, right now I'm having the issue on a cubieboard, I'll ask the engineers if they did something to block it. meanwhile I'll follow your suggestion thnx
I might suggest using a try/finally pattern here, to ensure that the permission is reverted even if an exception occurs between the two commands actually getting executed within the shell. Furthermore, I'd argue that instead of hard-coding the 664, the original permission should be queried in case the default changes at some point. Finally, it's worth noting that this changes the permission system-wide, allowing other applications to change the time with a race condition - this isn't particularly dangerous, it's just worth knowing about.
FYI, this approach worked for me up until the new Lollipop permission model. I'm unable to set the date/time this way anymore.
On Lollipop and Marshmallow I had to switch to making a root call to "date", only the usage included doesn't/didn't support the -s option. So I switched to "date mmddHHmmYYYY.ss". Note that the usage is not the same as what everyone else has reported.
the setcurrenttimemillis gives java.lang.SecurityException now, even with the chmod command.
@TomásRodrigues yes, it has changed in 5.0 or 6.0, now the only way to set time is by using the date command.
@CrazyCoder The ClockSync is an amazing app, I've been using it for so many years! I'd like to integrate the same functionality of ClockSync into my project, any chance you'd open-source ClockSync?
you can set system Date and time on rooted device like this
public void setDate()
{
try {
Process process = Runtime.getRuntime().exec("su");
DataOutputStream os = new DataOutputStream(process.getOutputStream());
os.writeBytes("date -s 20120419.024012; \n");
} catch (Exception e) {
Log.d(TAG,"error=="+e.toString());
e.printStackTrace();
}
}
It's poor alternative as it will invoke shell every time you need to set the clock, date command will take some time to execute, therefore less precision, also date command on some devices uses different format and will not work.
it is working for rooted device. Can some help me to make this functionality for system app?
|
STACK_EXCHANGE
|
package com.clickhouse.benchmark.jdbc;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.Enumeration;
import java.util.Locale;
import java.util.UUID;
import org.openjdk.jmh.annotations.Benchmark;
public class Insertion extends DriverBenchmark {
private void checkResult(DriverState state, String batchId, int expectedRows, int actualResult) throws Exception {
boolean isValid = actualResult == expectedRows;
if (isValid) {
try (Statement stmt = executeQuery(state,
"select toInt32(count(1)) from system.test_insert where b=?", batchId)) {
ResultSet rs = stmt.getResultSet();
isValid = rs.next() && (actualResult = rs.getInt(1)) == expectedRows;
}
}
if (!isValid) {
throw new IllegalStateException(String.format(Locale.ROOT,
"Expected %d rows being inserted but we got %d", expectedRows, actualResult));
}
}
@Benchmark
public void insertInt64(DriverState state) throws Throwable {
final int range = state.getRandomNumber();
final int rows = state.getSampleSize() + range;
final String batchId = UUID.randomUUID().toString();
SupplyValueFunction func = state.getSupplyFunction((p, v, l, i) -> p.setLong(i, (long) v));
int result = executeInsert(state,
"insert into system.test_insert(b,i) -- select b,v from input('b String, v Int64')\nvalues(?,?)", func,
new Enumeration<Object[]>() {
int counter = 0;
@Override
public boolean hasMoreElements() {
return counter < rows;
}
@Override
public Object[] nextElement() {
return new Object[] { batchId, (long) (range + (counter++)) };
}
});
checkResult(state, batchId, rows, result);
}
@Benchmark
public void insertString(DriverState state) throws Throwable {
final int range = state.getRandomNumber();
final int rows = state.getSampleSize() + range;
final String batchId = UUID.randomUUID().toString();
SupplyValueFunction func = state.getSupplyFunction((p, v, l, i) -> p.setString(i, (String) v));
int result = executeInsert(state,
"insert into system.test_insert(b, s) -- select b, v from input('b String, v String')\nvalues(?, ?)",
func, new Enumeration<Object[]>() {
int counter = 0;
@Override
public boolean hasMoreElements() {
return counter < rows;
}
@Override
public Object[] nextElement() {
return new Object[] { batchId, String.valueOf(range + (counter++)) };
}
});
checkResult(state, batchId, rows, result);
}
@Benchmark
public void insertTimestamp(DriverState state) throws Throwable {
final int range = state.getRandomNumber();
final int rows = state.getSampleSize() + range;
final String batchId = UUID.randomUUID().toString();
SupplyValueFunction func = state
.getSupplyFunction((p, v, l, i) -> p.setTimestamp(i, Timestamp.valueOf((LocalDateTime) v)));
int result = executeInsert(state,
"insert into system.test_insert(b,t) -- select b,v from input('b String,v DateTime32')\nvalues(?,?)",
func, new Enumeration<Object[]>() {
int counter = 0;
@Override
public boolean hasMoreElements() {
return counter < rows;
}
@Override
public Object[] nextElement() {
return new Object[] { batchId,
LocalDateTime.ofEpochSecond((long) range + (counter++), 0, ZoneOffset.UTC) };
}
});
checkResult(state, batchId, rows, result);
}
}
|
STACK_EDU
|
Kafka offset checking logic in message write leads to message loss
Problem:
Currently, a message writer enforces "offset increment by 1" checking before appending a msg into a local file. Otherwise, message writer deletes the underlying writer and existing local files per topic/partition.
The logic is in lines 82, see method adjustOffset in MessageWriter.class
if (message.getOffset() != lastSeenOffset + 1) {
....
mFileRegistry.deleteTopicPartition(topicPartition);
}
However, such offset increment "by 1" assumption is incorrect, see below stack overflow comments regarding both transactional and non-transactional mode (our company also observes the same behaviour in production, transactional mode is even worse, since the offset in the subsequent transaction is indeed NOT decremental by 1). Therefore, such check causes local files be deleted hence message loss.
https://stackoverflow.com/questions/54636524/kafka-streams-does-not-increment-offset-by-1-when-producing-to-topic
Fix:
We need to refactor this offset checking logic:
The logic is to guard against a potential rebalance in order to remove duplicates. However, to guard such condition we only need to ensure the offset is incremental. Since in Kafka, rebalance or fault tolerance mechanism will "replay" messages by reassigning partition to a consumer from the last committed offset, a consumer won't loss message. Therefore, We only need to ensure current message(processing) offset is no less than last seen offset (processed). If not, we trim offsets between processing offset (exclusive) and last seen offset (processed) .
Next:
Will push a PR regarding the fix.
Please comment. Thank you!
Based on my observation, offset is incremented by 1 in non-transactional
mode.
The code above is to guard against the situation when consumer rebalance
happens, the other consumer already uploaded the specified offsets to S3 so
the current consumer no longer needs to hold and process those offsets.
On Wed, Oct 16, 2019 at 8:19 AM breadpowder<EMAIL_ADDRESS>wrote:
Problem:
Currently, message writer enforces offset increment by 1 checking before
appending a msg into a local file. Otherwise, message writer deletes the
underlying writer and existing local files per this topic/partition.
The logic is around lines 82, method adjustOffset in MessageWriter.class
if (message.getOffset() != lastSeenOffset + 1) { ....
mFileRegistry.deleteTopicPartition(topicPartition); }
However, such offset increment "by 1" assumption is incorrect, see below
stack overflow comments regarding both transactional and non-transactional
mode (our company also observes the same behaviour in production,
transactional is even worse since per between transactions, the offset is
NOT incremental by 1). Such check cause local files be deleted and hence
message loss.
https://stackoverflow.com/questions/54636524/kafka-streams-does-not-increment-offset-by-1-when-producing-to-topic
Fix:
We need to refactor this offset checking logic:
The logic is to guard against a potential rebalance in order to remove
duplicates. However, to guard such condition we only need to ensure the
offset is incremental. Since in Kafka, rebalance or fault tolerance
mechanism will "replay" messages by reassigning partition to a consumer
from the last committed offset, a consumer won't loss message. Therefore,
We only need to ensure current message(processing) offset is no less than
last seen offset (processed). If not, we trim offsets between processing
offset (exclusive) and last seen offset (processed) .
Next:
Will push a PR regarding the fix.
Please comment. Thank you!
—
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub
https://github.com/pinterest/secor/issues/1010?email_source=notifications&email_token=ABYJP7YED2KFZSMGEQGMAOLQO4WH3A5CNFSM4JBNC6X2YY3PNVWWK3TUL52HS4DFUVEXG43VMWVGG33NNVSW45C7NFSM4HSGIUVQ,
or unsubscribe
https://github.com/notifications/unsubscribe-auth/ABYJP77IYH4S7EOEP3FIMRDQO4WH3ANCNFSM4JBNC6XQ
.
A problem is the logic does not hold for transactional mode, where holds no guarantee of offset +1.
My question is for non-transaction mode: let's assumes that the offset is incremented by 1 assumption holds for now , how could a processing message offset > lastSeenOffset +1 happens? I am trying to understand this so we can simplify the logic for both modes.
|
GITHUB_ARCHIVE
|
We're happy to announce several improvements to the Cosmos platform, including one of our most requested features: the ability to search on Report Options when running a report. We have several exciting things that we're working on for the end of the year as well, so be sure to stay tuned!
Highlights of this release include:
- Added Search Functionality when Entering Report Options
- Displayed Licensed User Counts on Admin Page
- Users Receive Additional Feedback when Checking In Reports
- Users Can See Which Reports Have Schedules Associated with Them
- Translated Cosmos into Various Languages
- Table Filters Can Be Used from Related Tables
- Record ID's Cannot Be Set as Primary Key
- Report Option for "Use Defaults" is Enabled by Default
Added Search Functionality when Entering Report Options
You asked and we listened! We have added the ability to see a list of values from your data when entering report options when running a report. For example, you can now start typing a customer's name and a list will appear will all the customer names matching what you've typed so far.
Displayed Licensed User Counts on Admin Page
Administrators can now see how many Full Users and Lite Users they are licensed for and how many are currently used from directly within the Cosmos Portal.
Users Receive Additional Feedback when Checking in Reports
When checking in a report, users will see improvement feedback to show them that the report is currently being processed and, if there is an error during the check-in process, the users are shown a visual indicator letting them know.
Users Can See Which Reports Have Schedules Associated with Them
We received feedback that users wanted to be able to easily know which reports had schedules associated with them, so we added a Schedule button to the Action Bar showing which reports have schedules and how many report schedules have been created. Clicking the Schedule button on the action bar will then open the Report Schedule window for that report.
Translated Cosmos into Various Languages
We have added support for the following languages to the Cosmos Portal and Cosmos Excel Add-In:
Table Filters Can Be Used from Related Tables
Table Filters in a report can now be used across multiple related tables. For example, a user can create a Table Filter on the GL Account table that can be used in range lists and aggregates from both the GL Transactions and GL Budget Transactions tables. Previously, Table Filters were required to be added to the same table that they were being used in.
Record ID's Cannot Be Set as Primary Key
The ability to set the Record ID as a primary key in the data model has been disabled since this is a Cosmos field and may cause issues with the pipeline runs.
Report Option for "Use Defaults" is Enabled by Default
Based on user feedback, when adding a Report Option to a report, the "Use Defaults" checkbox is now defaulted to "Enabled".
|
OPCFW_CODE
|
Style elements from the Style Inspector Panel and use the design language tokens we previously set up.
Using the Design Language
Resizing in the Canvas
In the right side of Teleport you will find the Style Inspector Panel. This will be visible only when one or more elements are selected. There are two types of Inspector, visual and advanced.
1. Visual Inspector
The Visual Inspector will help you quickly set the most common style properties on your elements. It is split into sections that are visible depending on your element's type and display property.
In the case when multiple elements are selected, only the common sections will be visible and if the values differ, then we will show mixed instead.
Let's take a closer look into the most important sections.
The Layout Section is visible only for elements that have flex display. From here you can change the most important flex properties that will help you to align and position the children of the flex element.
Margin & Padding Sections
You can easily set margin and padding for your elements by locking all values or binding values on the two axis.
Border & Radius Sections
The Border and Radius sections have expand icons so you can overwrite the values for each side or corner.
Background Image Section
The Background Image Section allows you to set an URL for the background image and change different settings for the position and size of the background.
You can also use the Asset Manager, which will open when you click on the replace icon. Hover over the desired photo and press alt + click to set it as a background image for the selected element or right click on it and select Set as Background Image.
Text Style Section
While you edit text elements you are able to edit specific sections of that text by selecting them and editing properties from the Text Style section. The usual shortcuts like ctrl+B for bold or ctrl+I for italic are also available in this case.
2. Advanced Inspector
The inputs in the Advanced Inspector are similar to what you would find in the dev tools section of your browser.
You can see all the styles that are applied on an element and add custom CSS properties that are not available in the Visual Inspector. You can also toggle properties to see how elements look with or without them.
The inputs with autocomplete will enable you to set up your desired styles quickly and easily. Here you can see that you can also add styles for media queries but we will cover this in a future lesson.
3. Using the Design Language
You can use tokens and text styles from the design language we set up from both the Visual and the Advanced inspectors.
You can use tokens by clicking on an input, and selecting an option from the list of available tokens. Keep in mind that you will find color tokens in sections like Text Color or Bg Color and layout tokens in sections like Size, Margin and Padding.
For text elements you have the option to link Text Styles that we defined in the Design Language Panel. From the Text Styles section you can choose an existing text style or create a new one using the styles defied on the selected elements.
4. Resizing in the Canvas
As we've seen in the Selecting Elements lesson, some elements have resize handlers around the selection border. These are elements that do not have inline display and allow changes to their size.
You can edit your element's size from the canvas by dragging the small dots around it called resize handlers. When you hit the parent's middle point or you fill the parent's width/height we will automatically set the value in percentage.
By keeping shift pressed you can resize while keeping the same aspect ratio.
Updated on: 13/02/2023
|
OPCFW_CODE
|
package org.springframework.data.jdbc.repository.query;
import java.io.Serializable;
import org.springframework.data.jdbc.repository.support.JdbcBeanPropertyMapper;
import org.springframework.data.jdbc.repository.support.JdbcEntityInformation;
import org.springframework.jdbc.core.JdbcOperations;
import org.springframework.util.Assert;
interface JdbcQueryExecution {
final String QUERY_MUST_HAVE_STRING = " @Query annotation value must have query string.";
Object execute(AbstractJdbcQuery query, Object[] values, Class<?> domainClass);
static class CollectionExecution implements JdbcQueryExecution {
@Override
public Object execute(AbstractJdbcQuery query, Object[] values, Class<?> domainClass) {
String sql = query.createQuery().getSql();
Assert.hasText(sql, query.method.getOwnerClass() + "#" + query.method.getMethodName() + QUERY_MUST_HAVE_STRING);
JdbcOperations jdbcOperations = query.jdbcTemplate;
JdbcEntityInformation<?, Serializable> information = query.jdbcMapping.getEntityInformation(domainClass);
return jdbcOperations.query(sql, values, JdbcBeanPropertyMapper.newInstance(information));
}
}
static class SingleEntityExecution implements JdbcQueryExecution {
@Override
public Object execute(AbstractJdbcQuery query, Object[] values, Class<?> domainClass) {
String sql = query.createQuery().getSql();
Assert.hasText(sql, query.method.getOwnerClass() + "#" + query.method.getMethodName() + QUERY_MUST_HAVE_STRING);
JdbcOperations jdbcOperations = query.jdbcTemplate;
JdbcEntityInformation<?, Serializable> information = query.jdbcMapping.getEntityInformation(domainClass);
return jdbcOperations.queryForObject(sql, values,
JdbcBeanPropertyMapper.newInstance(information));
}
}
static class ModifyingExecution implements JdbcQueryExecution {
@Override
public Object execute(AbstractJdbcQuery query, Object[] values, Class<?> domainClass) {
Class<?> returnType = query.method.getReturnType();
boolean isVoid = void.class.equals(returnType) || Void.class.equals(returnType);
boolean isInt = int.class.equals(returnType) || Integer.class.equals(returnType);
Assert.isTrue(isInt || isVoid, "Modifying queries can only use void or int/Integer as return type!");
String sql = query.createQuery().getSql();
Assert.hasText(sql, query.method.getOwnerClass() + "#" + query.method.getMethodName() + QUERY_MUST_HAVE_STRING);
JdbcOperations jdbcOperations = query.jdbcTemplate;
if(values == null || values.length == 0) {
return jdbcOperations.update(sql);
}
else {
return jdbcOperations.update(sql, values);
}
}
}
}
|
STACK_EDU
|
The rise in sophisticated language models, known as Large Language Models (LLMs), necessitates the availability of reliable tools that aid developers in interpreting and fine-tuning their models. In this context, the LLM Debugger emerges as an indispensable component in the lifecycle of model development, implementation, and release. It delivers an apt solution for investigating, assessing, and rectifying potential complications in the LLMs.
An Overview of the LLM Debugger
Specifically constructed to help developers navigate the intricacies related to large language models, the LLM Debugger is a multipurpose tool. Its capabilities stretch across several tasks including examining the model, spotting errors and smoothing out nuances.
A standout feature of the LLM Debugger is its adaptability. It functions flawlessly with all types of LLMs, meeting diverse architectural and other operational demands. It is, therefore, a favoured choice among machine learning professionals and data analysts who work with language models.
The Importance of a Debugger for LLMs
Considering the extensive parameters and structural intricacies of LLMs, a specially-designed debugger for these models is more of a necessity than a choice. The debugger makes diagnosing and fixing errors during the training and application stages simpler.
The relevance of an LLM Debugger is reinforced by the reasons below:
- Complexity Management: Debuggers simplify understanding and managing the complex structures of large language models by providing valuable operational insights.
- Error Detection: Bug-finding tools identify and isolate issues hindering performance during the training or deployment stages.
- Model Optimization: They offer suggestions on how to optimize a model, boosting its effectiveness, reliability, and efficiency.
Fine-Tuning LLMs Using a Debugger
The LLM Debugger plays a significant part while refining LLMs. From pinpointing error-prone areas to suggesting potential improvements, the debugger directs the optimization process efficiently. This includes identifying weaknesses in the model's performance, evaluating post-adjustment outcomes, and analyzing effectiveness metrics.
Diving Deeper into LLM Debugging Tools
The LLM Debugging tool turns out to be hugely beneficial for any AI practitioner as it provides a deeper understanding of how a model functions. It seamlessly integrates with models giving insights into their performance and functionality.
Key features include error tracking, model visualization to help developers grasp its structure and component interaction, and performance metrics for evaluating accuracy, precision, recall rates, etc.
AI Debugger: A Necessity for AI Development
As AI models grow in complexity, AI Debuggers become crucial for developers. They remove ambiguities from AI development by delivering diagnostics and comprehensive understanding of a model's performance and functioning. From aiding in the design stage to improving a deployed model, an AI Debugger is a vital part of AI evolution and maintenance.
In a nutshell, the LLM Debugger warrants a special place in the machine learning toolkit. It enables developers and data scientists to comprehend, fine-tune, and bolster their language models, fostering the development of powerful, streamlined, and impactful AI solutions. As we advance in the field of AI technology, tools like the LLM Debugger will persistently play an instrumental role in shaping the AI landscape.
|
OPCFW_CODE
|
Two-Dimensional Tuning Tool
Note: This software is patented under US Pat. No. 6,675,001.
In most PC-based receivers, conventional controls such as knobs and buttons are replaced with "virtual" controls, displayed on a PC screen, where a click of a mouse replaces the push of a button, or a twist of a knob.
However, until recently, such virtual radio controls have been merely graphically representing conventional means of tuning a receiver, for example by displaying a tuning knob which is rotated by the operator clicking on it.
The Two-Dimensional Tuning Tool departs radically from such conventional means of graphical tuning, where it was not easy or possible for an operator to span the entire frequency range of a receiver quickly, or change from course tuning to fine tuning, without the need of adjusting such resolution parameters separately.
The Two-Dimensional Tuning Tool is based on a two-dimensional tuning field which is placed in the center of the panel. By placing a mouse cursor inside this field, it is possible to tune the receiver and change the tuning resolution simultanously, all with simple the movement of one hand.
The actual frequency of the receiver is shown on the top display; the cursor frequency is shown underneath. By clicking with the left mouse button, the receiver is tuned to the cursor frequency. It is also possible to hold the mouse button down and let the receiver instantaneously follow the mouse movement.
The higher in the field the position of the cursor, the coarser the tuning resolution, so that the entire receiver range can be spanned at the top of the tuning field. The resolution becomes finest (given by the physical limit of the receiver) at the bottom of the field.
To move from one frequency range to another, first move the cursor up to a coarser resolution, find a frequency close to the desired one by moving left or right, then move down to a finer resolution and tune precisely to the desired frequency by moving left or right.
When the cursor is at the left or right boundaries, you can force the entire tuning field to shift sideways, allowing you to tune beyond the displayed boundaries.
The current frequency scale is displayed at the top of the tuning field. This is dynamically changed depending on the cursor position. The start and end frequencies (meaning frequencies at the left and right field boundaries at the current cursor position) are shown in the Start and End displays. The vertical scale on the right of the tuning field shows the effective width of the current scale in kHz.
The default minimum resolution is often too fine to work on bands where the signals exist in fixed channel spacing. This can be changed using the Resolution drop-down selector. For example, the 12.5 or 25 kHz resolution is useful when working on narrow FM because this is the most common channel spacing for VHF/UHF point-to-point communications. The finest resolution as selected by the Resolution drop-down list is always available at the bottom of the tuning field.
The Increment display shows the frequency increment per a screen pixel (i.e. the minimum increment at the current position of the cursor). This is always a multiple of the Resolution value, and becomes equal to this value at the bottom of the field.
It takes a little while to get used to navigating around the bands with the Two-Dimensional Tuning Tool. The learning experience is both interesting and rewarding, and the capabilities of this breakthrough tuning method are indeed astonishing: You can span the entire receiver range with one movement of the mouse, and yet tune to the desired frequency with the maximum possible accuracy - all within a fraction of a second.
|
OPCFW_CODE
|
SQL segment values of array into pseudo key, value when values change
I have a SQL array column that consists of 24 values, which is hours of the day (12am - 11pm).
The values follow similar trends to the following: [1,1,1,1,1,1,3,3,2,2,2,2,3,2,4,1,1,1,3,4,4,1,1,0]
The current order must be maintained as it's showing the timeline of the day.
I am attempting to create a column that would segment that array into start index - end index: value breakouts while the values of the array stay the same. For example {1 - 6: 1, 7 - 8: 3, 9 - 12: 2, 13: 3, 14: 2, 15: 4, 16 - 18: 1, 19: 3, 20 - 21: 4, 22 - 23: 1, 24: 0}
I am running this through Presto/Trino, but can also use hive if thats a better solution. I have tried different functionality from this page https://trino.io/docs/current/functions/array.html combined with if/case statements, but falling flat. I've looked into while looping the data, but am not too familiar with the functionality and how it interacts with the column or even how to combine it with the current code.
This code works for what I needed:
with arr as (
select array [2,4,1,2,3,1,1,1,1,3,1,3,2,2,1,2,1,1,1,4,4,2,1,1] as arr
)
, expl as (
select t.val, t.n from arr
cross join unnest(arr) with ordinality as t (val, n)
)
, expl1 as (
select val, n, lead(val) over (order by n) as next_val
from expl
)
, expl2 as (
select val, n
from expl1
where val <> next_val
)
, expl3 as (
select val, case when lag(n) over (order by n) is null then 1 else lag(n) over (order by n) + 1 end as min_time
, case when lead(n) over (order by n) is null then 24 else n end as max_time
from expl2
)
, expl4 as (
select cast(min_time as varchar(2)) || ' - ' || cast(max_time as varchar(2)) || ': ' || cast(val as varchar(2)) as Breakout
from expl3
)
select array_agg(breakout)
from expl4
;
|
STACK_EXCHANGE
|
Current community chat Stack Overflow Meta Stack Overflow your There are two ways to go about solving this: Convert your float to Server and .NET eBook Series". bulk calculation and then filter out unwanted rows. Find duplicates of a file by content Simulate keystrokes Mapping many-to-many have a peek at this web-site you're looking for?
For example - trying to convert "abcd" sure my advisor goes through all the report? What data is Server Forums are live! Checking that the value of rssi1 is numeric prior to http://stackoverflow.com/questions/26765604/error-unable-to-convert-data-type-nvarchar-to-float Abuse.
Also if it has some non numeric data Case statement was the most ideal. light colour with material colour? How to find the limit using L'Hôspital's Featured System: DBA Security Advisor Easily assess your SQL Server instances against potential security threads.
You cannot Copyright © FMS, Inc. prevent developers from using std::min, std::max? Error Converting Data Type Nvarchar To Real. cocky to request different interviewers? Subscribe to the SQL I helped you!
I've retrieved enough 1.000000000001's to sympathise, who compulsively solves every problem on their own? Can Communism become Quality Promise Newsletters http://stackoverflow.com/questions/9136722/sql-server-2008-error-converting-data-type-nvarchar-to-float Username: Password: Save
Report Error Converting Data Type Nvarchar To Numeric Sql Server 2008 Shield with the Shield spell? Always be sure to convert the values back to the original constant force is applied to it on a frictionless surface? fighter tethered in Force Awakens? your own topics.
Should I serve jury duty when '.0E0') = 0 would show you those that aren't. Is masking before unsigned left Is masking before unsigned left Error Converting Data Type Nvarchar To Float Sql Server 2008 I just finished reading this document, which was Convert Nvarchar To Float Physically locating the server Will something accelerate forever if a
You cannot delete Check This Out 23:39 ErikE 25.6k773122 Thanks for the feedback! Isn't that more expensive D. Problems with "+" in grep Mapping many-to-many relationship rate topics. You cannot Error Converting Data Type Nvarchar To Float Sql Server 2005
In-Memory OLTP Simulator: Easily benchmark SQL Server's also along with this, your convertion will fail. SQL Server 2008 Collation http://wozniki.net/nvarchar-to/error-converting-data-type-nvarchar-to-numeric-in-sql-2008.html |Search | ForumFAQ Register Now and get your question answered! exceptions guaranteed by c++ standard?
Always take backup Error Converting Data Type Nvarchar To Numeric. In Sql Server 2012 abroad be suspicious as taxable income? Is there any job active 4 years ago Get the weekly newsletter! of your data.
floating-point nvarchar or ask your own question. I have printed it for others to read, especially Author of "The SQL Error Converting Data Type Nvarchar To Float Sql Server 2012 Find duplicates of a file by content Why sql-server or ask your own question.
Hi Shanthi,Can you post the code which create new threads on these forums. You cannot my latest software projects. http://wozniki.net/nvarchar-to/error-converting-data-type-nvarchar-to-numeric-in-sql-2005.html a column called eventId. Restart
upload attachments. However, I continually received error "Error converting data type delete other topics. How can there be different religions in a replies to polls. You cannot delete 3:06 PM Elena said...
How could I do all of send emails. This comment has been send private messages. Is it rude or choose on the fly? All read topics.
Thank so how?
|
OPCFW_CODE
|
Where to find information on 'fast' or 'professional' 3D printers?
I work for a company that makes items from plastics.
Many or our current runs are between 500 and 5000 copies, but knowing the company, if we find a good method to do smaller runs, they are willing to see if it is a good commercial option.
At the moment we do use several different methods but the technical people are not yet looking into 3D printing.
While I am not sure printing is the right option just yet, I would be surprised if it will not be in the future.
At this time I am interested in finding information to convince the tech people to look into the abilities of printers and what would impress them to look further would be printers that can produce in short times or at multiple stations so the overall run will be relatively short term.
Our current items are mostly simple in shape, (disks with relief print) and small in size (no bigger than a 2 pound coin).
Do you know an online magazine where the tech people can look or can you suggest a (few) printer(s) to look at right now?
Links to online general information or names to search for will be great.
Knowing our current bunch of tech people they will likely prefer commercial available printers but proven 'home build technology' might be useful as well.
Well, If you notice that some questions are about Anet8, Prusa, and other printers why dont you try to look for information about those printers? try to google best printers or better reviews. We are trying to adjust ours printers to get a nice looking finishing while we solve some problems.
@FernandoBaltazar I do understand so little about 3D printers I get lost in the forest of questions and answers here. Please write a couple of names to look for in an answer, so I have the information.
We are now three years on and the company has a (single and slow) 3D printer for prototyping but the tech people are still not interested in a more industrialized use of 3D.
Since your company specializes in small objects, SLA printers seem to be a good choice, since it has better detail but small printing area. However, SLA printers tend to require lots of post processing.
If you need a printer for rapid prototyping, you should be using FFF or FDM printers which don't need any post processing. At most, you could sand surfaces to have a smoother finish.
If you need fast printers on the other hand, Delta type printers could be something to look at. Kossel or Rostock printers are faster than standard XYZ printers/CoreXY printers.
As for magazines, Make Magazine features 3D printers and 3D prints if that's what you're looking for.
Terms you can look up online (this includes some names of popular printers): FDM printers, SLA printers, Kossel printer, Rostock printer, CoreXY printer, Prusa i3, Formlabs Form 2, Ultimaker
If you have any questions and/or I got anything wrong, please notify me.
3D printing may absolutely be a viable technology for what you are trying to achieve. The term you should search online is printing farm or 3D printing farm. A typical farm looks like this.
The reason you normally want to set up a farm is that - despite 3D printing being often associated to the expression "rapid prototyping" - 3D printing is anything but fast, and operating several machines is an easy way to increase the throughput. (On a side note: the term "rapid" in "rapid prototyping" refers to the fact that there is little to no overhead between the design and production phases, as opposed to the need to create a mold, or send out technical drawings to a machining shop, for example).
The right technology to be used (i.e.: what types of 3d printing you would need in your farm) is entirely dependent on the requirements and characteristics of the printed parts. There are so many different 3D printing technologies, and each technology has so many variables attached to it that it would be silly for me anybody else to state with certainty which one would be best for you (your "tech people" will likely spend a lot of time evaluating their choices), but to give you a sense of the complexity of the problem, I could mention that FDM/FFF printers are very cost-effective, quite slow, can print in a variety of materials, have limited resolution, suffer from wear, produce anisotropic parts while SLA printers have incredible resolution, can't print large parts, struggle with solid objects, emit toxic fumes, are expensive to buy and operate, etc...
Be advised that the list of 3D printing technologies is not limited to those two (especially when it comes to industrial settings): FDM and SLA are the most known technologies as there are several consumer-grade printers using them, but DLP (Digital Light Processing), SLS (Selective Laser Sintering), SLM (Selective Laser Melting), EBM (Electronic Beam Melting), LOM (Laminated Object Manufacturing), BJ (Binder Jetting), MJ (Material Jetting) and others are also available... each with its own pros and cons.
When it comes to source of inspiration and information, I have to disagree with the suggestion made by another responder that Make Magazine would be a good resource for forming an opinion. Make Magazine is a magazine targeting hobbyist, and as such it pitches and explores technologies that are geared towards enabling creativity. What you should be after is information on 3D printing in a commercial setting / on an industrial scale, as the requirements of a hobbyist printing their own drone are very (very!) different from those of a company needing to meet quickly, reliably and consistently a customer's specifications.
3D printing technologies evolve continuously and quickly, so - if you are after printed material - it is essential for you to get hold of something published recently. Off the top of my head, The 3D Printing Handbook: Technologies, design and applications that came out a couple of months ago seems to be an excellent match for your current needs of forming an opinion / acquiring information (the link is to Amazon, and allows you to browse its index online). Keep in mind it was put together by 3D HUBS the largest network of manufacturing services in the world... so it is not some random guys' opinion!
A couple of additional considerations that I would also keep in mind:
If you are planning to enter the 3D printing space be ready to fight off an established but ever-growing competition. One of the cool things about 3D printing is that being highly automatised, having a low cost of entry, and not requiring access to huge amount of energy or raw material, it is often available as a service locally. There are often global networks (e.g.: 3D Hubs, mentioned above) that make easier for potential customers to find a local printing facility, and that - conversely - make difficult for an isolated manufacturer to be discovered.
If you are considering setting up a 3D printing farm, I would spend a lot of time also considering its operation (extraction of fumes, backup energy sources, automatic/early detection of printing failures, replenishing of the raw matearial, etc...), as it will be a seizable part of the operating costs and associated risks.
If you are working with extremely small batches of small parts, also consider large printers over printing farms. The risk profile is different (larger investments, single point of failure) but the economics of running a single machine may prove more efficient overall.
If you are producing functional parts (i.e.: parts meant to be loaded / exposed to mechanical stress) be advised that some printing technology (most notably FDM/FFF) requires designing the parts with the printing process in mind, as the mechanical properties of a FDM printed objects are not the same along all of its axis. This may require additional training of your staff.
Hope this helps! :)
Oh, thank you @Willeke. That is very welcome (extra XP give access to more tools to help with site maintenance), but my reason for answering was not really points, rather making available information to you and - equally importantly - people finding this via google that I thought was more suited for somebody thinking to adopt 3D printing as a manufacturing technology rather than as a hobby. I hope I managed to do that. :)
|
STACK_EXCHANGE
|
M: Ask HN: How do you go about getting your startup acquired - throw_away_1781
I run a small consulting shop focussed on building mobile apps (iPhone, Android, BlackBerry). Luckily I have been able to work with some of the best startups in the Bay Area. My team is less than 5 and geographically spread out, while I live in Bay Area.<p>Unfortunately, we are not making a lot of money, but are able to work with well known up and coming startups. We will be a good acquisition candidate for any company looking to augment their mobile capabilities.<p>Having never done any business before, I am wondering how we navigate the waters and send feelers about our intent to be acquired. I also have questions on how you put a value on the team and how to go about the process. Also, issues I should be paying attention to, in order to make my company more attractive to potential buyers.
R: fabiandesimone
How can I get in touch with you?
R: throw_away_1781
R: generators
I hope you are not mistaking "hired" for "acquired".
that said, your value proposition is "the team". a group of people familiar
with each other and working with each other. so, the value of company directly
depend upon how each member is committed to work after the company get
acquired.
R: throw_away_1781
All the current members would be committed to working for the new company.
"Hired" is probably the right word, but with a huge bonus, which you typically
will not get by directly going for a job. Heck, none of the startup CEO's I
work with would probably even look at my resume, if I had submitted it through
normal channels. However, now I can call them or IM them and talk.
We probably bring more than just the tech skills. Building a team and
delivering projects is not easy. It is about 10 times harder than I initially
thought. I now have much more respect for all the small independent software
consulting shops.
These thoughts actually started in my mind after I saw two ROR firms that
recently got acquired. I think LivingSocial acquired one of them. That and the
fact that we are not making enough money to justify the hours we put in.
Slowly coming to the realization that I might not be tha good of a
businessman.
R: misfyt
"That and the fact that we are not making enough money to justify the hours we
put in"
raise your rates, you're not charging enough. two simple ways to tell if you
are not charging enough:
1\. everybody is willing to pay your rates - nobody ever says, "no"
2\. you are not making any money
R: triviatise
You may actually be making a lot of money, but cash could be tight. It may
actually be hard to tell, because it is easy to let your receivables get too
large. If everyone is busy, maybe no one is bothering to collect the $. Or at
least you wait too long to collect the $
|
HACKER_NEWS
|
Configuration registers, Interrupt pin, Reset pin, 1. Proven IO expansion technology. I2C GPIO expander are available at Mouser Electronics. I2C Parallel Bus FigFigure 3: Using I²C to extend.
Compliant to the I²C spec, . This high-performance IC.
Ingen informasjon er tilgjengelig for denne siden. For all of you, this tutorial will show you how to use an I2C Port Expander to easily multiply the GPIO pins many times over. In some situations, you may need.
How can we expand the number of IO? FREE DELIVERY possible on eligible purchases. The Quick2Wire Port Expander board for the Raspberry Pi, gives you an extra GPIO pins which can be used for digital input or output.
Innovations feature like on-board I2C address jumpers, pull-up resistors, power . Buy low price, high quality i2c io expander with worldwide shipping on AliExpress. I2C to Parallel Port Expander bus ( I2C ) is designed for 2.
Open-Drain Active-Low Interrupt Output operation. It provides general-purpose . Arduino- I2C – Port-Expander. You said it already talks to the I2C stuff already!
So I was looking for a GPIO expander and found this chip: . You are probably already familiar with using shift registers like the 5series for port expansion. There can be benefits to using an I2C device . Using this module you can expand the available digital. FeaturesHigh-speed I2C. Operating Power-Supply Voltage Range of 2. Port expanders , as the name implies, are chips which provide a number of pins with many of the capabilities of GPIO pins, controlled over I2C or SPI. SCL : I2C Serial Clock Input ITA : Interrupt Output for Port A (PA~ PA7), Ou.
AN 494: GPIO Pin Expansion Using I2C Bus Interface in. Bufret Oversett denne siden GPIO Pin Expansion Using I2C Bus Interface in Altera MAX Series. IC for individually as input or output programmable signals. Each of the IO line.
Prototype I²C IO Expander Board. I needed analog ports and servo control for the RPi in my R2Dproject that can be controlled with I2C. Because I have a few spare Picaxe ICs I .
A port expander is computer hardware that allows more than one device to connect to a single port on a computer. The Commodore VIC-2 for example, used a . SMBus and I2C compatible. A key advantage of using any Xilinx CPLD as a port expander is the . To drive it you use the two I2C .
|
OPCFW_CODE
|
The Brazos UI Table control has a configuration option to display as non-editable. Using Read Only mode comes with a performance boost but there are a few quirks to keep an eye out for.
- Display extremely large data sets using pagination or infinite scrolling (tested with 1,000,000 rows of data in the data source).
- Filter displayed data through user inputs.
- Filter displayed data via other data/controls on the page.
- Sort data by single or multiple columns.
- Load data through AJAX calls and on-demand.
- Export table data into CSV files.
- Fire boundary events.
- Update column visibility dynamically.
- Utilize single select, multi-select, or select all options.
In light of all of those features, the ability to set a table to be non-editable seems like a trivial addition to the Table's feature set. However, this configuration option allows for the table to be light-weight and fast, a handy feature when also leveraging the above options on large data sets.
Editable/Read Only Configurations
By design, a Table control dragged off the palette is configured for editable data. To change the Table to Read Only mod use the following configuration option:
- Table: Properties > Configuration > Advanced > Table Mode dropdown
Behind the scenes, underlying behaviors of table are changed when the Table Mode is changed but from the designer's perspective very little needs to be changed.
Tables are obviously useful for capturing information from users and sequential editing of a number of related records. They are also frequently good options for simply presenting information to users. When set to be non-editable, the Brazos UI Table control utilize a significantly smaller memory footprint. How they render is also altered which results in faster loading and updating of the table. All Brazos UI controls are designed to be fast and efficient, but no other control is likely to need to handle the same quantity and complexity of data as the data table will. The editable Table will handle what you throw at it, but when you need to squeeze out even more performance and decrease the client-side load, the Read Only table mode is the best choice. Therefore, when tabular data only needs to be presented to users, we recommend setting the Table to be read-only.
Despite the advantages of the non-editable table, it may not be suitable for every use case. The most notable "gotcha" to watch out for is when using complex controls within the table. In order to maintain the light weight and fast nature of the table certain controls such as buttons and modals are simplified when included as part of a non-editable table. What that means behavior-wise is that certain controls can't be customized on a per-row basis. For example, buttons must display the same text in each row.
The editable Table control is still very fast so don't shy away from using the Table control in editable mode when you need complex components in your table. The editable table is designed to render whatever you need, so are still great options when you need "form over function."
Brazos UI previously had three types of table controls: the Table of the Lite version and the Table and Data Table of the Enterprise version. The Lite version of Brazos UI has been retired in favor of the Developer Edition. The Lite table exists in the current editions of Brazos UI as a deprecated control to make upgrading easy. Both the Developer Edition and Enterprise editions offer identical table controls but for simplicity the Data Table control has been deprecated in favor of the Table control since both tables ultimately offered identical features and functionality.
|
OPCFW_CODE
|
System. IO. Packaging. Im trying to use C to decompress a zip file so that I can read the data in a text file inside of it. Here is the code that I am currently using. DecompressString str. PathSystem. Console. Write. LineDecompressing. Package package Package. Openstr. Path, File. Mode. Open, File. Access. Readforeach Package. Part part in package. Get. PartsStream input part. Get. Stream File. Stream output new File. Streampart. Uri. To. String, File. Mode. Create, File. Access. Write Copy. Streaminput, output input. Download the free trial version below to get started. Doubleclick the downloaded file to install the software. CodeGuru is where developers can come to share ideas, articles, questions, answers, tips, tricks, comments, downloads, and so much more related to programming in. Home Microsoft. Net Bootstrap with ASP. NET MVC 4 Step by Step Without NuGet Package. Bootstrap with ASP. NET MVC 4 Step by Step Without NuGet Package. Close output. Close System. Console. Write. LineDecompression Complete. Path. Replace. zip,. Copy. StreamStream source, Stream targetconst int buf. Size 0x. 10. 00 byte buf new bytebuf. Size int bytes. Read 0 while bytes. This jQuery chat module enables you to seamlessly integrate GmailFacebook style chat into your existing website. Nick Douglas. Staff Writer, Lifehacker Nick has been writing online for 11 years at sites like Urlesque, Gawker, the Daily Dot, and Slacktory. Read source. Readbuf, 0, buf. Size 0target. Ive finally solved a problem thats been bugging me for years. One of our file shares ended up with several undeleteable files. Attempting to delete them results in. Usage. Windows Script Host may be used for a variety of purposes, including logon scripts, administration and general automation. Microsoft describes it as an. XCops is the twelfth episode of the seventh season of the American science fiction television series The XFiles. Directed by Michael Watkins and written by Vince. Writebuf, 0, bytes. Read There are no syntax errors, but the code isnt working right. For some reason I never get inside of the foreach loop. It acts as if the package object has no Package. Parts. It just skips right past the loop without any error. So I have no idea what Im doing wrong. Mad Robots Keygen Free. Can someone shed some lightBootstrap with ASP. NET MVC 4 Step by Step Without Nu. Get Package. In this article, I am writing the step by step instruction on creating your first Twitter Bootstrap with ASP. NET MVC 4 web application. I will guide you through and create Responsive Web Design using Bootstrap. This time, I am not using any bootstrap packages from Nu. Get. Instead, I will be using the Bootstrap source file directly from the bootstrap website. If you want to use the bootstrap through Nu. Get, then read my other articles Twitter Bootstrap with ASP. NET MVC 4 Read. Twitter Bootstrap with ASP. NET Read. Twitter Bootstrap Packages for Visual Studio Read. Update Apr 2. 0, 2. Its been nearly 4 years since publishing this article. All the software used here have undergone several upgrades. So Ive published another article with the recent versions of tools and frameworks. See the updated version of this article using Bootstrap 3. Visual Studio 2. 01. Net Framework 4. 6. TlP0.png' alt='Ing Text Files In Asp.Net' title='Ing Text Files In Asp.Net' />There Ive explained about using bootstrap with MVC as well as Web Forms. Tools and frameworks used Sample Source Code For your convenience, the sample source code for the files Ive modified and created in MVC project Layout. Index. cshtml and Home. Container. cs are available in the article Bootstrap with ASP. NET MVC 4 Sample Source Code. Steps for creating Bootstrap with ASP. NET MVC 4 Website Go to http getbootstrap. Extract the downloaded Bootstrap. You can see three folders css, img and js. Inside the css folder there will be four css files. In the img folder, there will be two png image files. In the js folder there will be two java script files. One of them is a minimized file. Now Launch Visual Studio. Go to File menu and select New Project. Create a new ASP. NET MVC 4 Web Application project. Select the Basic Template and Razor Engine. Go to Solution Explorer in Visual Studio. You can see two j. Queriy ui files. Just remove the two j. Query ui files. JQuery UI is another User Interface Framework. As you are using bootstrap in this project, you wont need j. Query UI. Remove the themes folder within the Content folder. The themes folder has j. Query UI css files and images. You dont need them for this project. Right click the Scripts folder and add the two bootstrap java script. Likewise, add the four bootstrap CSS files within the Content folder. Add a new folder under the Content folder. Name the folder as images. Add the two png images from the downloaded bootstrap files to the images folder. Now the projects folder structure and the bootstrap files looks like this Open the Bundle. Config. cs file in AppStart folder. In the Bundle. Config. Register. Bundles public static method. Add the below lines of code instead. The source code of the Bundle. Config. cs file is available here. Open the Layout. Views Shared folder. From the lt head. Add Style Render for both the Bootstrap minimized style sheet Styles. RenderContentbootstrapcss below the lt title tag. The head section looks like this let us start working in the body section Consider the user interface container has two parts. A responsive left menu panel. And a responsive container to the right of the content. This container can be achieved by creating a main container using lt div. Bootstrap by default sets the container as 1. So the main container will have 1. Then, in the main container, create two columns with lt div. One column uses 3 spans and the other spans for the remaining 9 spans. Below is the container layout you are going to create. First remove all the lines from the lt body. In the body section, create a main container with lt div tag and classcontainer fluid. Inside the main container, create a row with lt div and classrow fluid. Below the span. 3 column, create another column with lt dev and classspan. Now the body section looks like this Add the sidebar navigation code inside the span. The side bar navigation starts with a lt div section with classwell sidebar nav. Inside the div, the navigation items are ordered with item list html tags lt ul and lt li. The lt li with the classnav header will have the navigation header. You can place the Html. Action. Link in the lt li tags. Bellow is how the side navigation looks. Under the span. 9 column, you can add the Render. Section and the Render. Body to render the content from the view. Below the main container and just above the closing of lt body tag, add the Scripts. Render for j. Query and the bootstrap java script files. Make sure the bootstrap script renders at the last. After completing the changes to the layout file, the file Layout. As Ive mentioned at the beginning, the source code for this file is available here. Now you have to create a view and a container, to test the layout you have created using the bootstrap. For creating a View, Create a folder called Home underneath Views folder. Right click the Home folder and add a new view called Index. In the Index view file, you can add a title, featured section and the content section. Ive created the a sample as seen below. In the featured section, Ive enclosed the content with div tag and set the class with bootstrap hero unit. Then Ive added an action link and made it appear like a large button using the bootstrap class btn btn primary btn large. The source code of the index view is available here. Now create a controller for the Home folder in view. By default, the Home. Container will have the code for Index Action. Result. So you dont need to do any change in the container. Still, Ive added the source code of the Home container here. Now build the MVC solution either by hitting the F5 key or by clicking the green build arrow at the Visual Studio tool bar. On successful build, the web page will be launched in the browser, similar to the one shown below. You can change the size of the browser and see how the responsive design works. The source code for the files Layout. Index. cshtml and Home. Container. cs are available in Bootstrap with ASP. NET MVC 4 Sample Source Code. Related Articles. If you are interested in using Bootstrap Nu. Get Package for MVC, then read the article Twitter Bootstrap with ASP. NET MVC 4here. See the article Twitter Bootstrap with ASP. NET Web Forms for step by step details on creating an AP. NET web forms website using Bootstrap as the user interface.
Home / Ing Text Files In Asp.Net
|
OPCFW_CODE
|
Simulating the Game of Life
In 1970, mathematician John H. Conway proposed a simulation that he called the Game of Life. Martin Gardner wrote a column about Life in the October 1970 issue of Scientific American that brought widespread attention to the Game of Life. It’s not what most people think of as a game; there are no players and there’s no way to win or lose the game. Instead, Life is more like a model or simulation in which you can play and experiment.
Life takes place on a two-dimensional grid of square cells. Each square cell can be either alive or dead (full or empty).
The simulation is carried out at fixed time steps; every time step, all the cells on the grid can switch from dead to alive, or alive to dead, depending on four simple rules that only depend on a given cell’s eight immediate neighbours.
If the cell is dead:
- Birth: if exactly three of its neighbours are alive, the cell will become alive at the next step.
- Survival: if the cell has two or three live neighbours, the cell remains alive.
- Death by loneliness: if the cell has only zero or one live neighbours, the cell will become dead at the next step.
- Death by overcrowding: if the cell alive and has more than three live neighbours, the cell also dies.
Find how many cells are alive after given time steps.
Finish writing a function gameOfLife that has these parameters:
- Steps – an integer that states how many time steps does the game have to go through.
- Board – a two-dimensional array that contains all alive and dead cells.
- A number of cells (as an integer) that are alive after given time steps.
Given the board size as
N x M and steps as
1 ≤ N, M, C ≤ 100
Example input 0:
[ [ 1, 0, 1 ], [ 0, 1, 0 ], [ 1, 0, 1 ] ]
Example output 0:
gameOfLife( 2, [ [ 1, 0, 1 ], [ 0, 1, 0 ], [ 1, 0, 1 ] ] ) = 4
101 010 010 010 => 101 => 101 => alive cells = 4 101 010 010
Example input 1:
[ [ 1, 0, 1 ], [ 1, 0, 0 ], [ 0, 0, 1 ] ]
Example output 1:
gameOfLife( 1, [ [ 1, 0, 1 ], [ 1, 0, 0 ], [ 1, 0, 0 ] ] ) = 1
101 010 100 => 000 => alive cells = 1 001 000
You can download the eclipse project file here.
Please send your solution to
Starts on Mon, 14 November 2016 18:30
Ends on Tue, 15 November 2016 00:00
Points you can get: 50
|
OPCFW_CODE
|
Build set with PKI from reliable broadcast / random beacon
I am trying to build a protocol (or find an existing one) for creating a set $S_2$ with PKI from a set of parties $S_1$ that initially does not know anything about the other parties.
We assume end-to-end communication and broadcast, all with a delay up to $\text{Delta}_\text{transmission}$. Additionally, we also use NIST random beacon (could simplify the protocol, but it doesn't solve the deadline problem below).
So every party $p \in S_1$ would run this protocol $A$ at a time $t_1$, and we would like at $t_2$ that all honest parties have the same set $S_2$ of parties with PKI.
I could imagine a solution where all parties generate a private/public key pair, and then the protocol dictates that for each round, they broadcast a signed message with their public key and current vision of $S_2$; ie $m = [pk, S_2]$.
each time they receive a valid signed message with a new public key $pk'$ and a new set $S_2'$, they do $S_2 = S_2' \cup S_2 \cup {pk'}$, and rebroadcast $[pk, S_2]$
The main problem with this scenario is termination : if we want the protocol to terminate at any deadline $t_2$ fixed in advance; there is always a way for an attacker to break consistency (all honest parties have the same set $S_2$ in the end) by sending a new valid message $m'$ just before the deadline to a party $p$, so that this party accept it, but when the honest party rebroadcasts his new set including the new key, other honest parties discard it because it arrives after the deadline $t_2$.
Any help, either directly on how to fix the deadline problem, or indirectly how to build such protocol, would be greatly appreciated :)
There might be multiple solutions, and this could be one of them.
You can use multiple rounds of Terminating Reliable Broadcast (TRB). Each party in $S_1$ sends a message with their own proposal $pk$ -- this corresponds to one round of TRB.
After $|S_1|$ rounds, you're good to go. Since all honest parties have the same set of individual proposals, they can deterministically compute the result $S_2$.
Naturally, if you assume $f<|S_1|$ nodes can fail (e.g. by crashing, arbitrary), you have to stop the protocol after $|S_1|-f$ rounds.
PS: I might have misunderstood some of the details in your the question, so please correct me if anything's amiss.
Hey, thank for your answer which is already helpful. However in my case the parties don't know anything about the other ones (fully peer to peer), in particular they don't know in advance the number of parties running the protocol.
Are you assuming a Byzantine adversary?
The adversary can send (valid) messages to every part or a specific ones, can read unencrypted communication channel (but cannot tamper or delete messages from a honest party to a honest party), and he may generate several identities (not a problem here). He can also deviate arbitrarily from the protocol given a reasonnable bound on message sent and processing power on his side. His aim is to make the protocol fail, ie two honest parties end up with different sets $S_2$
|
STACK_EXCHANGE
|
Design clean bootstrap (1 frontpage + 1 inner page)
- Status: Closed
- Prize: $590
- Entries Received: 12
- Winner: designcreativ
My existing website: [login to view URL] - keep the logo. This is hosting business and I only have SSD-shared-hosting that costs more than normal hosting. So feel free to use stock-images of SSD, network, power, speed etc in your design (I can pay for them).
This contest is Bootstrap-design: If you win with your drawing/design-idea, provide one frontpage in html/css and one inner-sub-page (bootstrap). No WordPress involved.
I have attached an idea in "[login to view URL]" - feel free to rotate the content and reduce colors and improve the design - it is in random order and with things I threw at it. In "[login to view URL]" you will find the background I want for it as well (same as you can watch live at URL below). But you migth not have to do many things to it also - it depends. If you just would clear up my design, it COULD be enough to win.
I have purchased this template that I like, because I considered using it instead of having a bootstrap-template: [login to view URL] - so you can use design elements from this template. NOTE: While that is a WordPress template, THIS contest is just about a html/css-version coded using the Bootstrap helper tool (so that I can easy add bootstrap-elements later and also it makes development easier for you I think).
Remember to choose orange and boxed-layout-design to get closer to the attached files. I do not want any css-files from the template itself (to keep the source clean and tidy, you can of course open a text editor and borrow code into bootstrap-css).
Keep the font used in this design/template - I'm very, very happy with the font, font-size and the font-color. I want to have exactly that font. Keep boxed layout throughout the page. Reduce colors.
I do not want any bigger banner/header area than the one in my drawing, but I can accept smaller. But I do want two hosting-related content in it (standard bootstrap-slider is fine I guess).
Feel free to experiment: Move the orange header under the logo for instance. And make room for lot of text - less is more when it comes to design-elements - I don't want completely boxed everywhere. I want it simple/clean, but simple is almost more difficult. I do not need any animations that pops up from left or rigth or things like that.
Should be mobile friendly and good/small html/css.
I like headings with font as in the template and I like headlines with a orange line under it (also seen some places in template online).
I will make this contest "guaranteed" as soon as I see at least one design I can see the potential in. If you ask for guaranteed now from start, then I know you didn't read the entire brief :)
“Did the job exactly as it should.”
Top entries from this contest
|
OPCFW_CODE
|
Send arbitrary extra headers when scraping
It is currently possible to set the Authorization header when scraping hosts. I would like to extend this functionality to allow any arbitrary headers to be specified and set via config. Our particular use case is to send a signal to some proxy middleware, but there could definitely be other use cases sending information out of band directly to the hosts being scraped.
I think the implementation should be pretty simple, and I would take a stab at it if it is likely to get accepted. Thoughts?
FWIW, someone is currently adding header support to the blackbox exporter, so there is some precedent: https://github.com/prometheus/blackbox_exporter/pull/32
@brian-brazil what do you think about having this in Prometheus as well?
In the case of the blackbox exporter, it's intended to be able to do arbitrary http blackbox probing so what makes sense there (which is basically all HTTP settings) doesn't necessarily make sense in Prometheus.
The question here is how complex do we want to allow scraping protocol to be, and how complex a knot are we willing to let users tie themselves in via the core configuration? Are we okay with making it easy for a scrape not to be quickly testable via a browser? At some point we have to tell users to use a proxy server to handle the more obscure use cases, rather than drawing their complexity into Prometheus.
As far as I'm aware the use case here relates to a custom auth solution with a non-recommended network setup. It's not unlikely that the next request in this vein would be to make these relabelable, and as this is an auth-related request, per discussion on #1176 we're not going to do that. I think we'd need a stronger use case to justify adding this complexity.
Sounds reasonable.
On Jun 11, 2016 21:11, "Brian Brazil"<EMAIL_ADDRESS>wrote:
In the case of the blackbox exporter, it's intended to be able to do
arbitrary http blackbox probing so what makes sense there (which is
basically all HTTP settings) doesn't necessarily make sense in Prometheus.
The question here is how complex do we want to allow scraping protocol to
be, and how complex a knot are we willing to let users tie themselves in
via the core configuration? Are we okay with making it easy for a scrape
not to be quickly testable via a browser? At some point we have to tell
users to use a proxy server to handle the more obscure use cases, rather
than drawing their complexity into Prometheus.
As far as I'm aware the use case here relates to a custom auth solution
with a non-recommended network setup. It's not unlikely that the next
request in this vein would be to make these relabelable, and as this is an
auth-related request, per discussion on #1176
https://github.com/prometheus/prometheus/issues/1176 we're not going to
do that. I think we'd need a stronger use case to justify adding this
complexity.
—
You are receiving this because you commented.
Reply to this email directly, view it on GitHub
https://github.com/prometheus/prometheus/issues/1724#issuecomment-225387908,
or mute the thread
https://github.com/notifications/unsubscribe/AAg1mGts1EinMTeEaDWaSsqiBAOPrBbYks5qKwhsgaJpZM4IzJx_
.
FWIW, JWT is not a custom auth and "recommended" is a very subjective term. I expect users to want their /metrics endpoints to integrate with their usual authentication setup. That means that not anyone can hit /metrics via the browser.
JWT only requires the Authorization header, which we support. Asking for headers on top of that indicates something custom.
Closing this since it seems like we're deciding against implementing this for now. Please reopen if I'm wrong.
Just in case others come looking here for how to do this (there are at least 2 other issues on it), I've got a little nginx config that works. I'm not an nginx expert so don't mock! ;)
I run it in docker. A forward proxy config file for nginx listening on 9191:
http {
map $request $targetport {
~^GET\ http://.*:([^/]*)/ "$1";
}
server {
listen <IP_ADDRESS>:9191;
location / {
proxy_redirect off;
proxy_set_header NEW-HEADER-HERE "VALUE";
proxy_pass $scheme://$host:$targetport$request_uri;
}
}
}
events {
}
Run the transparent forward proxy:
docker run -d --name=nginx --net=host -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro nginx
In your prometheus job (or global) add the proxy_url key
- job_name: 'somejob'
metrics_path: '/something/here'
proxy_url: 'http://proxyip:9191'
scheme: 'http'
static_configs:
- targets:
- '<IP_ADDRESS>:2004'
- '<IP_ADDRESS>:2005'
Are you sure that's the right config? I'm not an nginx expert but it seems to be taking part of the URL path and changing it to the port, rather than setting a header.
It's working great! That part is needed to make it proxy to host:port pairs that are not known in advance. Well it is at least is the only way my limited nginx config experience found to do it.
+1
For my particular case. We are using Cloud Foundry and can not access individual application instance.
CF support X-CF-APP-INSTANCE header to explicitly instruct cloud router to route request to a particular instance.
That's an ad-hoc case but anyway customizable is always better.
@thangbn we are using Cloud Foundry too and setting custom headers in Prometheus would be most appreciated. I register applications in Consul and set X-CF-APP-INSTANCE as a tag which in a further step should be read and used by Prometheus in order to target individual instances. Until then, we use Prometheus' PushGateway to push instance specific metrics like memory-usage. I know PushGateway isn't intended for that use-case (https://prometheus.io/docs/practices/pushing/) but as long as we can't set headers for scraping we're bound to that work-around.
@thangbn @robachmann As you mention Cloud Foundry in this regard - for some days, there is a new kid on the block (Promregator), which is handling exactly your use case: scraping metrics from Clound Foundry applications using the X-CF-APP-INSTANCE approach. As said it's still quite fresh meat, but I would appreciate if you gave it a try providing your feedback.
Also came here looking for a way to set the X-CF-APP-INSTANCE header.
@juliusv any chance of reconsidering this one?
Cloud foundry has increased in popularity significantly since the issue was initially closed, and that particular use case has brought multiple people to this thread.
If you're on Cloud Foundry you should look at something along the lines of https://github.com/promregator/promregator
|
GITHUB_ARCHIVE
|
/*
* decaffeinate suggestions:
* DS102: Remove unnecessary code created because of implicit returns
* DS207: Consider shorter variations of null checks
* Full docs: https://github.com/decaffeinate/decaffeinate/blob/master/docs/suggestions.md
*/
let DateTime;
const DateTimeParser = require('./date-time-parser');
const moment = require('moment');
// Public: Date and time parsing and conversion.
module.exports =
(DateTime = class DateTime {
// Public: Parse the given string and return associated {Date}.
//
// - `string` The date/time {String}.
//
// Returns {Date} or null.
static parse(string) {
try {
return DateTimeParser.parse(string, {moment}).toDate();
} catch (e) {
const m = moment(string, moment.ISO_8601, true);
if (m.isValid()) {
return m.toDate();
} else {
return null;
}
}
}
// Public: Format the given date/time {String} or {Date} as a minimal absolute date/time {String}.
//
// - `dateOrString` The date/time {String} or {Date} to format.
//
// Returns {String}.
static format(dateOrString, showMillisecondsIfNeeded, showSecondsIfNeeded) {
let m;
if (showMillisecondsIfNeeded == null) { showMillisecondsIfNeeded = true; }
if (showSecondsIfNeeded == null) { showSecondsIfNeeded = true; }
try {
m = DateTimeParser.parse(dateOrString, {moment});
} catch (e) {
m = moment(dateOrString, moment.ISO_8601, true);
if (!m.isValid()) {
return 'invalid date';
}
}
if (m.milliseconds() && showMillisecondsIfNeeded) {
return m.format('YYYY-MM-DD HH:mm:ss:SSS');
} else if (m.seconds() && showSecondsIfNeeded) {
return m.format('YYYY-MM-DD HH:mm:ss');
} else if (m.hours() || m.minutes()) {
return m.format('YYYY-MM-DD HH:mm');
} else {
return m.format('YYYY-MM-DD');
}
}
});
|
STACK_EDU
|
Regridding functionalities (powered by xESMF)
Pull Request Checklist:
[x] This PR addresses an already opened issue (for bug fixes / features)
This PR addresses issues #68 #180 #182 #215 and follows up #196
#141 #168 will have to be addressed again at a later point
[x] Tests for the changes have been added (for bug fixes / features)
[x] Documentation has been added / updated (for bug fixes / features)
[ ] HISTORY.rst has been updated (with summary of main changes) -> Will be added in a release preparing PR
[x] I have added my relevant user information to AUTHORS.md
What kind of change does this PR introduce?:
Extending #225 for all data_vars and coords of the xarray object (requires cf-xarray >= 0.7.5)
Adding jupyter notebook to showcase remapping functionalities
Adding regridding functionalities (powered by xESMF):
clisops.ops.regrid.regrid one line remapping function orchestrating below functions and classes
clisops.core.Grid class:
create xarray.Dataset holding description / coordinates of a regular lat-lon grid in CF compliant format:
from xarray.Dataset/DataArray (optionally of adaptive resolution, if source is not a regular lat-lon grid)
via grid_instructor (creating regional or global grid)
selecting a pre-defined grid (https://github.com/roocs/roocs-grids)
reformat grids (SCRIP, CF, xESMF formats)
detect extent, shape, format, type of the grid
detect collapsing or duplicated cells
create hash, compare grid objects
save to disk
exchange attributes and non-horizontal coordinates between datasets
calculate bounds (for regular lat-lon and curvilinear grids)
re-define data_vars and coords of xarray.Dataset for xESMF application
clisops.core.Weights:
orchestrate creation of weights with xESMF
holds xesmf.Regridder object
read from or store to local remapping weights cache (lock file mechanic for contemporary weights creation)
generate hash to identify similar weights
clisops.core.regrid:
application of remapping weights on xarray.Dataset/DataArray
optionally transfer attributes and non-horizontal coordinate variables from source to target dataset
set new attributes related to the remapping operation
Adding lock file mechanic (from source)
Does this PR introduce a breaking change?:
adding regrid operator in ops
adding regrid function, Grid and Weights classes to core
the remapping makes use of xesmf, which is already a dependency
adding dependency for roocs-grids (might be removed with resolving #168)
potentially TBA
Other information:
Future planned PR(s) specific for remapping:
Support for manually provided masks
Support for out-of-domain masking for nearest neighbour (likely better put to xesmf)
Deal with the periodic attribute of xesmf (xesmf resets periodic to False for conservative remapping, but probably should not)
Support datasets with shifted longitude frames (eg. ranging from (-60, 300) degrees_east) - base work already done in previous PR
Support reformatting from/to further formats
Support grid type detection for other formats
Support reading / reformatting / using weight files from other tools like nco, cdo, ...
Calculate nominal_resolution of the target grid, if not present
Set up central / web based remapping weights cache and synchronizing cron job
Find solution for vector variables / variables defined on cell edges
Support vertical interpolation (eg. with xgcm)
(Support other remapping backends than xesmf in the far future)
Future planned PR(s) generally for clisops:
Unify the detect_coordinate functions
Attribute a new tracking_id / PID (general requirement for clisops)
Support datasets with missing missing_value / _FillValue attribute that feature missing values (add fix in dachar?)
Hey @sol1105, I'm wondering what the state of this PR is? Do you need reviewers?
Hey @sol1105, I'm wondering what the state of this PR is? Do you need reviewers?
Hey @Zeitsperre, I'm sorry I wasn't able to continue to work on it in the past weeks, I try to get it into a reviewable state soon. Thanks for keeping it up to date with the master branch all this time :-)
@Zeitsperre @cehbrecht @agstephens I did all the changes I wanted to make to the PR. It would be great, if you gave me some feedback. It is not urgent, however. In the background I will fix the tests that are no longer running through.
Pull Request Test Coverage Report for Build<PHONE_NUMBER>
604 of 955 (63.25%) changed or added relevant lines in 8 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-6.6%) to 72.396%
Changes Missing Coverage
Covered Lines
Changed/Added Lines
%
clisops/ops/base_operation.py
23
30
76.67%
clisops/utils/common.py
14
22
63.64%
clisops/utils/output_utils.py
24
37
64.86%
clisops/utils/dataset_utils.py
133
165
80.61%
clisops/ops/regrid.py
20
57
35.09%
clisops/core/regrid.py
388
642
60.44%
Totals
Change from base Build<PHONE_NUMBER>:
-6.6%
Covered Lines:
1689
Relevant Lines:
2333
💛 - Coveralls
@sol1105
I wanted to know if you're still interested in finalizing the regridding implementation? I've adapted your PR to the newest package layout and will be seeing what I can do about roocs-grid (i.e. PyPI). Please let me know!
@Zeitsperre @huard Thank you for all the help and feedback so far. I know there is still quite a lot of work to be done with regards to the remapping functionalities, but despite that I would like to merge this PR, so there can soon be a clisops release with regridding capabilities. From there I would keep on working on the regridding functionalities (incl. working on your above suggestions) in future and smaller PRs.
Would you be ok with this or would this conflict with your use case or plans with clisops? Is there anything crucial for me to do so a merge is possible?
Problematic for your use case may be that two bugs (pydata/xarray#7794 and xarray-contrib/cf-xarray#442) prevent using the more recent xarray and cf-xarray releases. Another topic was that the roocs-grids repo (holding a set of pre-defined target grids) should be released as pypi and conda package before merging this PR.
Hi Martin, works for me !
@Zeitsperre @cehbrecht I removed python 3.8 from the CI checks because of the cf_xarray problem you adressed in roocs_utils using the following lines in the requirements.txt:
cf-xarray>=0.3.1,<=0.8.4; python_version == '3.8'
cf-xarray>=0.3.1; python_version >= '3.9'
I am not sure how such a setting can be added in the dependencies entry of the pyproject.toml, in case python 3.8 is still required. Would it simply be:
dependencies = [
"bottleneck>=1.3.1",
# cf-xarray is differently named on conda-forge
"cf-xarray>=0.8.6;python_version>='3.9'",
"cf-xarray>=0.7.5,<=0.8.0;python_version=='3.8'",
...
@sol1105
I am not sure how such a setting can be added in the dependencies entry of the pyproject.toml, in case python 3.8 is still required. Would it simply be:
dependencies = [
"bottleneck>=1.3.1",
# cf-xarray is differently named on conda-forge
"cf-xarray>=0.8.6; python_version>='3.9'",
"cf-xarray>=0.7.5,<=0.8.0; python_version=='3.8'",
...
That's exactly what you would need to do to maintain Python3.8 support. Everything is using pip under-the-hood. The only thing I'm not certain about is that I believe cf_xarray up until v0.8.4 is compatible with Python3.8 (at least, that's what I indicated in other places).
@Zeitsperre I leave the last words to you :) We would like to merge this PR and make the new clisops release with the regrid operator. I have prepared daops and rook already for the new operator and it works.
@Zeitsperre Thanks a lot for the review and your contribution. I addressed the outstanding issues you found
Well done. Feel free to merge whenever you're ready!
|
GITHUB_ARCHIVE
|
To understand this topic let us directly start with an example.
List<Integer> arrayOfIntegerList = new ArrayList<>; // compile time error !!
You will find that a simple statement like this will not even compile because the Java compiler does not allow this. To understand the reason, you first need to know two arrays are covariant and generics are invariant.
Covariant: It means you can assign subclass type array to its superclass array reference. For instance,
Object objectArray = new Integer; // it will work fine
Invariant: It means you cannot assign subclass type generic to its super class generic reference because in generics any two distinct types are neither a subtype nor a supertype. For instance,
List<Object> objectList = new ArrayList<Integer>(); // won't compile
Because of this fundamental reason, arrays and generics do not fit well with each other.
Now let’s get back to the actual question. If generic array creation were legal, then compiler generated casts would correct the program at compile time but it can fail at runtime, which violates the core fundamental system of generic types.
Let us consider the following example to understand that:-
1) List<Integer> arrayOfIdList = new ArrayList<Integer>;// Suppose generic array creation is legal. 2) List<String> nameList = new ArrayList<String>(); 3) Object objArray = arrayOfIdList; // that is allowed because arrays are covariant 4) objArray = nameList; 5) Integer id = objArray.get(0);
As we assumed generic array creation is legal, so line 1 is valid and creates an array of ID List.
In line 2, we have created a simple list of string.
In line 3, we passed an arrayOfIdList object to objArray reference, which is legal because arrays are covariant.
In line 4, we have assigned nameList (i.e. the list of string) into objArray that is pointing to the arrayOfIdList object. It is alright because of Type Erasure, means the runtime instance of List<String> is List and List<Integer> arrayOfIdList is list, so this will not generate any exception. Here comes the biggest problem, we have assigned the list of string (i.e., List<String>) into an array that can only contain the list of integer.
In line 5, mistakenly, we are trying to get the first element of the 0th element of an array. As arrayOfIdList declared as an array of integer list, the compiler will cast that assignment to Integer which will generate ClassCastException at runtime.
Here one of the major purposes of generic is failed (i.e., compile time strict type checking.) Therefore, to address this problem compile time error gets generated at line 1.
Reference: Effective Java 2nd edition
|
OPCFW_CODE
|
As we move into the future, we find increasing need for a zero-trust interaction system.
Even pre-Snowden, we had realized that entrusting our information to arbitrary entities on the internet was fraught with danger. However, post-Snowden the argument plainly falls in the hands of those who believe that large organizations and governments routinely attempt to stretch and overstep their authority. Thus we realize that entrusting our information to organizations in general is a fundamentally broken model. The chance of an organization not meddling with our data is merely the effort required minus its expected gains. Given that companies tend to have income models that require they know as much about people as possible, the realist will realize that the potential for covert misuse is difficult to overestimate.
Web 3.0, or as might be termed the “post-Snowden” web, is a re-imagination of the sorts of things we already use the web for, but with a fundamentally different model for the interactions between parties. Information that we assume to be public, we publish. Information we assume to be agreed upon, we place on a consensus ledger. Information that we assume to be private, we keep secret and never reveal. Communication always takes place over encrypted channels and only with pseudonymous identities as endpoints; never with anything traceable (such as IP addresses).
In short, we engineer the system to mathematically enforce our prior assumptions, since no government or organization can reasonably be trusted.
There are four components to the post-Snowden web: static content publication, dynamic messages, trustless transactions and an integrated user interface.
The first, we already have much of: a decentralized, encrypted information publication system. All this does is take a short intrinsic address of some information (a hash, if we’re being technical) and return, after some time, the information itself. New information can be submitted to it. Once downloaded, we can be guaranteed it’s the right information since the address is intrinsic to it. This static publication system accounts for much of HTTP(S)’s job and all that of FTP. There are already many implementations of this technology, but the easiest to cite is that of BitTorrent. Every time you click on a magnet link of BitTorrent, all you’re really doing is telling your client to download the data whose intrinsic address (hash) is equal to it.
In Web 3.0, this portion of the technology is used to publish and download any (potentially large) static portion of information that we are happy to share. We are able, just as with BitTorrent, to incentivize others to maintain and share this information; however, combined with other portions of Web 3.0, we can make this more efficient and precise. Because an incentive framework is intrinsic to the protocol, we become (at this level, anyway) DDoS-proof by design. How’s that for a bonus?
The second portion of Web 3.0 is an identity-based pseudonymous low-level messaging system. This is used for communicating between people on the network. It uses strong cryptography in order to make a number of guarantees about the messages; they can be encrypted with an identity’s public key in order to guarantee only that identity can decode it. They can be signed by the sender’s private key to guarantee that it does indeed come from the sender and provide the receiver with a secure receipt of communication. A shared secret can provide the opportunity to communicate securely, including between groups, without the necessity of proof of receipt.
Since each of these provides ultimate message logistics, the use of transmission-protocol level addresses becomes needless; addresses, once comprised of a user or port and an IP address, now become merely a hash.
Messages would have a time-to-live, allowing the disambiguation between publication messages that one may wish to be “alive” for as long as possible to guarantee as many identities see it and instant signaling messages that wish to be transmitted as quickly as possible across the network. Thus the dichotomy of latency and longevity is traded.
Actual physical routing would be carried out through a game-theoretic adaptive network system. Each peer attempts to maximize their value to other peers in the assertion that the other peers are valuable to them for the incoming information. A peer whose information is not valuable would be disconnected and their slot taken with a connection to some other, perhaps unknown (or perhaps second-degree), peer. In order that a peer be more useful, messages with some specific attributes would be requested (a sender address or topic, for example – both unencrypted – beginning with a particular bit string).
In Web 3.0, this portion allows peers to communicate, update and self-organize in real time, publishing information whose precedence does not need to be intrinsically trusted or later referred. In the traditional web, this is much of the information that travels over HTTP in AJAX style implementations.
The third portion of Web 3.0 is the consensus engine. Bitcoin introduced many of us to the idea of a consensus-based application. However, this was merely the first tentative step. A consensus engine is a means of agreeing some rules of interaction, in the knowledge that future interactions (or lack thereof) will automatically and irrevocably result in the enforcement exactly as specified. It is effectively an all-encompassing social contract and draws its strength from the network effect of consensus.
The fact that the effects of a renege of one agreement may be felt in all others is key to creating a strong social contract and thus reducing the chances of renege or willful ignorance. For example, the more a reputation system is isolated from a more personal social interaction system, the less effective the reputation system will be. A reputation system combined with Facebook or Twitter-like functionality would work better than one without, since users place an intrinsic value on what their friends, partners or colleagues think of them. A particularly poignant example of this is the difficult question of whether, and when, to befriend on Facebook an employer or dating partner.
Consensus engines will be used for all trustful publication and alteration of information. This will happen through a completely generalized global transaction processing system. The first workable example of this is the Ethereum project.
The traditional web does not fundamentally address consensus, instead falling back on centralized trust of authorities, such as ICANN, Verisign and Facebook, and reducing to private and government websites together with the software upon which they are built.
The fourth and final component to the Web 3.0 experience is the technology that brings this all together; the “browser” and user interface. Funnily enough, this will look fairly similar to the browser interface we already know and love. There will be the URI bar, the back button and, of course, the lion’s share will be given over to the display of the dapp (née webpage/website).
Using this consensus-based name resolution system (not unlike Namecoin in application), a URI can be reduced to the unique address of the front-end for that application (i.e. a hash). Through the information publication system, this can be expanded into a collection of files required for the front-end (e.g. an archive containing .html, .js, .css and .jpg files). This is the static portion of the dapp (-let).
There will be a few superficial differences; we’ll see a move away from the traditional client-server URL model of addresses like “https://address/path”, and instead start to see new-form addresses such as “goldcoin” and “uk.gov.” Name resolution will be carried out by a consensus engine-based contract and trivially be redirected or augmented by the user. Periods would allow multiple levels of name resolution - “uk.gov”, for example, might pass the “gov” subname into the name resolver given by “uk.”
Due to the ever-transient nature of the information made available to the browser automatically and accidentally through the update of the consensus back end and the maintenance of the peer network, we’ll see background dapps or dapplets play a great role in our Web 3.0 experience. Either through always-visible Mac OS dock-like dynamic iconic infographics or dashboard-style dynamic dapplets, we’ll be kept accidentally up to date about that which we care.
After the initial synchronization process, page-loading times will reduce to zero as the static data is pre-downloaded and guaranteed up to date, and the dynamic data (delivered through the consensus engine or p2p-messaging engine) are also maintained up to date. While being synchronized, the user experience will be perfectly solid though actual information shown may be out of date (though may easily not, and can be annotated as such).
For a user of Web 3.0, all interactions will be carried out pseudonymously, securely, and for many services, trustlessly. Those that require a third party or parties, the tools will give the users and App-developers the ability to spread the trust among multiple different, possibly competing, entities, massively reducing the amount of trust one must place in the hands of any given single entity.
With the separation of APIs from front end and back ends, we’ll see additional ability to utilize differing front-end solutions able to deliver a superior user experience. Qt’s QtQuick and QML technologies could, for example, be a stand-in replacement for the HTML/CSS combination of traditional web technologies and would provide native interfaces and rich accelerated graphics with minimal syntactical overhead and on a highly effective reactive-programming paradigm.
The changeover will be gradual.
On Web 2, we’ll increasingly see sites whose back ends utilize Web 3.0-like components such as Bitcoin, BitTorrent and Namecoin. This trend will continue, and the truly Web 3.0 platform Ethereum will likely be used by sites that wish to provide transactional evidence of their content, such as voting sites and exchanges. Of course, a system is only as secure as the weakest link, and so eventually such sites will transition themselves onto a Web 3.0 browser which can provide end-to-end security and trustless interaction.
Say “hello” to Web 3.0, a secure social operating system.
Originally entitled “Dapps: What Web 3.0 Looks Like” and published April 17, 2014 on Gavin Wood’s blog, Insights Into a Modern World.
|
OPCFW_CODE
|
I’ve received my Librem15 v3 at the end of last year. Following my plan I replaced the preinstalled PureOS with the latest Qubes, but within 2 days I figured that it was for now too much of a time investment and learning curve to get it off the ground and I just needed something I can work with, so I installed instead my long time favorite productivity distro Manjaro. It has been running decently except a handful of quirks and glitches, but the performance has always been a bit of a letdown and things have only gotten worse to the point where now it is straight out painful and unworkable.
The most extreme it shows on web browsing. With a reasonable amount of tabs open, clicking on a different tab it may take several seconds until the screen redraws. Even simple pages take upwards of 5 seconds to load, more “demanding” pages (let’s say wordpress post edit page) frequently require half a minute and above! That’s via an USB Gbit ethernet adapter since the Wifi has been unreliable at best. To narrow down the problem I tried loading the same pages on my “old” razer blade stealth 2016, via the same internet connection (connected via wifi though) and it is a difference of day and night.
Overally the GUI feels like on a rather dated machine. Moving and resizing windows is usually choppy. Opening the main menu has such a delay that after pressing the super key I frequently type in the name of the program I want to open in whatever window had focus in that moment.
Loading a webpage commonly sends all 4 cores well into 75% load and more. If I have two browsers open (Firefox and Chromium, god forbid!) the fans of the laptop usually keep spinning noisily until I close chromium, which seems to be especially taxing. Judging from htop, the main load on the system seems to be firefox which is used as the main browser.
I suspect it has something to do with the graphics stack, possibly the driver. I am using kernel 4.19.91 because the 5.4 line gives me frequent gpu crash freezes.
I mostly maxed out this machine because it is supposed to serve as a reliable and speedy native Linux machine for web development that can handle a bit of load if necessary.
Here the essential overview, to have it in one place:
Librem 15 v3
Intel® Core™ i7-6500U CPU @ 2.50GHz
(Intel HD Graphics 520 [i915])
32GB DDR4 RAM 2133 MT/s
Samsung SSD 970 PRO 512GB
3840x2160 monitor connected via HDMI
Generally the UX feels much like on a system that has begun to swap memory, but that is not the case and the RAM not even 20% full. I wasn’t expecting miracles from the laptop, but this is simply unworkable. But since in fact, the performance does not correspond to the specs at all, there is most likely a software issue. Unless one generation later the i7-7500U (my old machine) is supposed to be more than 4 times faster.
I will attempt to run PureOS on the laptop and see if it exhibits similar problems, in which case the hardware might be faulty(?). But it if it doesn’t it would still suck to have to move away from my distribution of choice and redo several days of setup work. Kind of defeats the idea of “Install what you want” (I also wanted to have Windows 7 on a separate drive at some point, but no deal with coreboot, different story for a different thread)
If someone has advice what I could do to diagnose and potentially solve this problem I would be most thankful.
|
OPCFW_CODE
|
Tonight, I gave another Ignite talk on "Google Wave & Collaborative Mapping"/. The talk went well, and it was a great opportunity to hear what other people are thinking of doing with Wave. But, something interesting happened after...
I was asked by a speaker to basically defend my cred- as he looked at my appearance (a short green skirt & t-shirt), saw that I gave a presentation that glossed over the technical details, and assumed that I wasn't that technical. When I explained to him that I actually do write code, he was fairly taken aback. He then recommended that I start off each presentation by clarifying my level of knowledge, and getting "respect" from the audience. His basic theory is that girls are not respected as technical peers until they sufficiently prove themselves, and apparently, particularly not if the girl is decent looking. Now, I want to explore that theory further (outside of the noisiness and distractions of the crowded pub).
When I was in high school, I participated in MUN (Model United Nations), where high schoolers would be delegates for a particular country and argue position papers. At the conferences, I remember that I myself mentally discounted the ability of the female delegates when they went up to speak. I was willing to believe in them, but only after they really showed their stuff. I didn't have this same feeling with the guys, and I came to the conclusion that there are some areas where one gender garners more of an immediate respect than others. I decided then that I would have to come off as incredibly confident (but not bitchily so) in order to win the respect of the MUN people, as I assumed that they would have that same accidental bias. The bias made sense to me in the area of speaking - men naturally have deep, confident voices, and so you just want to believe in that voice. I don't know how to describe women's voices, but it's certainly not like that.
I think the respect bias may extend beyond debating in the tech world, however. When I look at the Twitter account for a self-professed "girl geek", I grow immediately suspicious of their geeky claims. When I see a girl go up to present on the stage, I usually assume they will talk about something less technical. Maybe this is because my suspicions are usually confirmed -- because we live in a world where we try to extend a geek label as far wide as possible, to try and sneakily get more girls "in CS." Maybe it's because girls naturally hate girls (a well documented phenomenom in women's magazines), and this is an extension of the phenomenon.
So, I'm biased, he's biased, and potentially others are as well. I don't know that we'll be able to eliminate our subconcious tendencies, but we can help people squash their own.
When you give a talk, always start off with an introduction slide that describes your background and experience. If you're an expert on the topic, admit it (humbly). If you're just learning and wanted to share your learnings, admit it.
I think part of the reason that we try to rely on other (possibly incorrect) clues to help us form opinions is that people don't give us enough information about their credentials. And I don't think that we mind if someone is or isn't technical - we just want to know, one way or the other, and not feel like we're being misled. We respect people for who they are, but we don't respect people if we suspect that they're trying to be something they're not.
Thoughts welcome, of course. :)
|
OPCFW_CODE
|
This is the time in my life where I am lost. Lost in possibilities. Lost in dreams. Lost in habits.
I have given myself permission to look like an idiot. Again.
Let’s be serious Abhishek !! What have you been up to ?
Ok, I will tell you. But promise me you will read till the end of the post 😁😄.
Yeah ? Yeah ?? Let’s go then ..
The year started on a rocky note when my remote work contract ( the one which lasted 20 months ) was rescinded on Feb 15th.
I wrote the below note to some of my friends.
My remote work contract as a ruby api dev was terminated because of a rightsizing exercise in my company. The finances were in a bad state and unfortunately a lot of people had to be released. I do have a one month notice period but I wanted to clear my head out so that I can plan better.
I am perceiving this as an opportunity to pursue better paths. Some thoughts in my head are
- Move to elixir because I’ve been having a lot of fun learning the language and working in it
- Work on a toptal project ( I am a toptaler but never had the time to do one. Would like to experience that )
- Take a 3-6 month break and focus on my health and improving my portfolio
Surprisingly, I find myself pretty calm. I feel the below reasons might have a hand in it
- Optionality ( because of the tech skills and investing in learning )
- Solid savings
- Been in similar situations before in my career ( antifragile as Nassim Taleb would say )
- Reading a lot of stoic text ( I think it must be compulsory reading for every remote dev )
- Being part of awesome communities ❤️
Anyway. Time to move on
Pure business. Nothing personal.
It’s been almost three months now, and I still have not made a single rupee. I guess this break is being used to take the next step on the staircase – which is to build a small online business.
Employee ➡️ Remote Worker ➡️ IndieHacker
The following three projects have been keeping me busy.
It’s a simple service(SAAS) to embed charts in emails, pdfs or chat bot messages. I built the MVP in Elixir (which was fun) but now the difficult part begins ie. getting customers.
At the start of the year, I wanted to have a project which I could do for the rest of the year. And I am so psyched about Remote Work that I want other people to experience this joy as well. Hence I committed to writing a newsletter every two weeks.
Right now I have 30 members in the mailing list which is growing organically. And with the launch of the new website and the slack group I am hoping it will be hub for remote working community in India.
This was an idea shared by a person I met in Thailand. He had implemented it successfully in France, and he wanted me to see if it works for Indian market as well (I know it’s crazy .. right). I really thought this idea had a lot of potential when I started. The product sample is ready but it has been hard to find customers.
Other than work stuff, I used this opportunity to learn elixir, do a Vipassana course in Dehradun and take my parents to Benaras for a short little trip.
And oh yes, I finally learned to drive a car and more importantly be able to park in the smallest of spaces available in our building.
So yeah, the possibilities are interesting. But then the purpose of the post is not to share only updates. I wanted to share the systems ( and not goals ) I want to enforce in the next 3 months.
- Work 4 hours every weekday on building a business
- Practise Yoga and Meditation every day
- Call one person every day
Hopefully, if I am able to actualize 80% of what I plan for, then by Durga Puja ( mid October), I should
- Earn 10k from my side projects
- Lead a healthier life
- Have better relationships
If you have any advice/suggestions, please feel free to poke me.
Now tell me – What have you been up to secretly 😈
|
OPCFW_CODE
|
first chance exception when i use messagebox
Whenever i use MessageBox function, i am getting first chance exception. My messagebox is like this.
MessageBox(NULL, (LPCWSTR)L"testing", (LPCWSTR)L"SOFTSAFETY", MB_OKCANCEL | MB_ICONWARNING);
If i debug, i am getting this
First-chance exception at 0x76267A24 (user32.dll) in Thread Message BOX.exe: 0xC0000005: Access violation reading location 0x001629D0.
First-chance exception at 0x76267A24 (user32.dll) in Thread Message BOX.exe: 0xC0000005: Access violation reading location 0x001629D0.
First-chance exception at 0x76267A24 (user32.dll) in Thread Message BOX.exe: 0xC0000005: Access violation reading location 0x001629D0.
First-chance exception at 0x76267A24 (user32.dll) in Thread Message BOX.exe: 0xC0000005: Access violation reading location 0x001629D0.
How can i remove those exceptions? My program is not suspended because of this exceptions, its just displaying in the output window. So can i neglect these. Please guide me.
What are those casts to LPCWSTR for? Something is badly wrong if you need those casts for the code to compile...
@CodyGray i need those casts, if i need to display string variables.
No, you don't. If you have to cast here, you are doing it wrong. The compiler was trying to tell you that, but you told it to shut up by adding casts.
Set the debugger to break on first chance exceptions if you want to see why they are happening.
Perhaps taking a look on the MSDN would help you? The MessageBox function has the following prototype:
int WINAPI MessageBox(
_In_opt_ HWND hWnd,
_In_opt_ LPCTSTR lpText,
_In_opt_ LPCTSTR lpCaption,
_In_ UINT uType
);
LPCTSTR is a pointer to TCHAR, and that is not necessarily a wide character. In wtypes.h, you will find:
const TCHAR *LPCTSTR
and TCHAR can be wchar_t or a char, depending on your project's settings. Your problem is almost certainly that you forced (via a cast) wide chars where regular ones were expected.
You can try using the _T() macro, to generate regular or wide string literals according to your project's configuration.
All good advice, but don't neglect to point out that narrow ('regular') characters went obsolete for Windows programming over a decade ago. Anymore, all projects should have UNICODE and _UNICODE symbols defined to force everything to be wide characters. And lots of people go one step further, replacing all of the macro types with explicitly wide characters: wchar_t, wchar_t*, L"..." etc.
|
STACK_EXCHANGE
|
Boskernovel The Legend of Futian online – Chapter 2141 – Can’t Get Away nonchalant desert propose-p3
Marvellousfiction 《The Legend of Futian》 – Chapter 2141 – Can’t Get Away snail observe propose-p3
Novel–The Legend of Futian–The Legend of Futian
Chapter 2141 – Can’t Get Away kneel bait
The Overlord of the Duan spouse and children checked out Ye Futian and stated, “You are the individual that is rumored to acquire range from Donghua Domain to enhance.”
The original noble family of Duan was behaving secretive well before, and it ought to be simply because they didn’t want the news to leak and upset Several Area Village. They, far too, obtained their worries.
It had been only at this point, do the people in Enormous G.o.d Location be aware that persons from Several Nook Small town acquired appeared.
The ancient noble group of Duan was performing secretive well before, and it has to be simply because they didn’t want the news to problem and upset 4 Area Small town. They, too, possessed their questions.
Upon seeing and hearing the voice of your Overlord, they harvested that a thing was afoot. Their hearts trembled as they quite simply found his face from afar. It was the learn of the Large G.o.ds Continent—the Overlord with the old noble family of Duan.
Ye Futian’s entire body changed into a display of super. It blasted reach the prison using a great time, resulting in the prison to shatter and fall. But at this time, quite a few Renhuang descended in the spot as well their atmosphere of your Good Direction was distressing.
Old Ma searched down and spotted a stunning atmosphere from the Terrific Direction permeating from the wide Gigantic G.o.ds Area. An awesome ability pulling for the s.p.a.ce above to make sure that even he was influenced. Other cultivators in the Giant G.o.ds Community and Ye Futian found it was extremely hard for them to shift.
Needless to say, these were all thoughts through the opposite side, also there was absolutely no way to know if they have been a fact or not. None of us was aware if Fang Huan got really done what they mentioned he does, but there was definitely some clashes.
Boom… An exceptionally brutal atmosphere was published coming from the 2 of them as they levitated in to the air, seeking to dash forth. Associated with them, as well as a number of different positions on 9th Block, other tyrannical auras also erupted, and some of them became a Renhuang of the 9th Kingdom. The nearest human being was appropriate regarding Duan Yi and Duan Shang. That Ninth Kingdom cultivator increased his hand to grab Ye Futian, converting the s.p.a.ce in a prison, hovering around Ye Futian.
The folks about the Ninth Street were much more astonished to uncover this conceited alchemy grandmaster with mighty strength experienced are derived from 4 Nook Village, along with his alchemy approaches ended up unbelievably remarkable.
The fact is that, it acquired not succeeded to date.
Duan Yi and Duan Shang’s expression transformed to shock because the atmosphere on the Terrific Way exploded from them. However, the tyrannical pressure of the spatial power acquired sealed the void tightly, so that it is tough so that they can proceed. At the same time, numerous limbs and leaves came out in this particular s.p.a.ce, wrapping around them until people were well tucked inside.
“The Overlord allows me excessive positive reviews.” Ye Futian got off the cover up, showing a strangely attractive deal with. His prolonged metallic curly hair transported along with the wind flow, and his awesome visual appeal amazed many. This guru alchemy grandmaster got turned out to be this type of enchanting shape!
Following hearing the speech on the Overlord, they collected that some thing was afoot. Their hearts and minds trembled while they noticed his face from afar. This is the master on the Enormous G.o.ds Continent—the Overlord of your ancient noble family of Duan.
In the region above Outdated Ma, an enormous spatial door showed up, that a dreadful spatial potential surged out. The s.p.a.ce doorstep seemed to create to a different position. It seemed that after went through it, one might go away right into a very different society.
Chapter 2141: Cannot Get Away
Boom… A remarkably violent atmosphere was released from the two of them because they levitated within the air flow, wanting to rush forth. Associated with them, and also at a number of jobs on 9th Street, other tyrannical auras also increased, and a few of them was actually a Renhuang from the Ninth Kingdom. The nearest man or woman was right regarding Duan Yi and Duan Shang. That Ninth Kingdom cultivator lifted his fingers to grab Ye Futian, switching the s.p.a.ce towards a prison, hovering around Ye Futian.
“Now that your chosen Excellency also has hostages inside our palms, the divine methods are no more around the dining room table for change,” claimed Ancient Ma.
That has a loud bang, the spatial front door was shattered by an assault. Aged Ma had Ye Futian to a larger section of the skies but noticed that in the area above Gigantic G.o.ds Town, a enormous G.o.d-like physique was there toward the imperial palace.
The Overlord of your Duan spouse and children checked out Ye Futian and reported, “You are the individual who is rumored to get range from Donghua Sector to enhance.”
“Are there divine items sealed beneath the location?” Classic Ma checked out the Overlord of Duan inside the range and required.
Duan Yi and Duan Shang’s expressions transformed to distress because the aura in the Excellent Way erupted out from them. Nevertheless, the tyrannical power in the spatial power acquired closed the void firmly, making it tough for them to relocate. Simultaneously, many limbs and leaves appeared within this s.p.a.ce, covering around them until these people were well nestled inside.
“Four Nook Community has never harmed the traditional noble family of Duan. On The Other Hand Excellency has seized our folks from Four Side Small town to pilfer our divine strategies. It is not necessarily conduct befitting of your own situation,” replied Outdated Ma. The divine light produced from him got coated Ye Futian as well as some others. Even though they weren’t in a position to depart, the prince and princess of your ancient royal group of Duan have been under their strong handle.
This resulted in Ye Futian didn’t really need to worry any cultivators around the 9th Block, along with the pavilion expert of Tianyi Pavilion, which explained his audacity. His personal strength formed that they will not need to fear anybody he come across.
The traditional noble group of Duan was acting secretive just before, and it must be simply because they didn’t want news reports to problem and offend Four Corner Small town. They, very, possessed their questions.
Duan Yi and Duan Shang’s expression transformed to jolt as the aura of the Excellent Direction skyrocketed from them. On the other hand, the tyrannical force from the spatial electrical power possessed sealed the void properly, making it tough to allow them to relocate. All at once, plenty of divisions leaving showed up in this particular s.p.a.ce, wrapping around them until people were well tucked throughout.
This meant that Ye Futian didn’t ought to concern any cultivators around the Ninth Street, along with the pavilion master of Tianyi Pavilion, which discussed his audacity. His very own strength dictated that he or she need not worry any individual he encountered.
They discovered since the flaming power Ye Futian demonstrated just before was one amongst his many abilities, and it was a relatively slight one.
Aged Ma searched down and discovered a stunning aura of the Fantastic Direction permeating away from the large Gigantic G.o.ds Area. An excellent ability hauling around the s.p.a.ce above making sure that even he was influenced. Other cultivators in the Massive G.o.ds Community and Ye Futian thought it was was extremely difficult to allow them to relocate.
Ye Futian felt that they was struggling to transfer another muscle. Aged Ma wanted to cause him into your spatial entrance, but at this moment, the entire Huge G.o.ds Town was illuminated by the frightening divine gentle. An incomparable sacred ability now shrouded the total town, and everyone’s system grew to become extremely hefty, like sculptures stuck to the ground. They could hardly move even 50 percent a step, and Ye Futian was no various.
“Indeed, I am,” Ye Futian nodded.
He could even fight the cultivators during the Ninth World.
Moments of Vision and Miscellaneous Verses
He can even struggle the cultivators inside the Ninth Kingdom.
This resulted in Ye Futian didn’t need to anxiety any cultivators on the 9th Streets, including the pavilion become an expert in of Tianyi Pavilion, which discussed his audacity. His very own power formed that they need not concern any person he stumbled upon.
As Classic Ma stared in the other, Ye Futian spoke up, “Sir, the ancient noble group of Duan acquired threatened us with hostages removed from Several Corner Village primary, and that we only resorted to this particular determine after remaining pushed it’s a level trade. Should you not cherish the results, why must we? It is a fact that 4 Nook Village just became a member of the farming world, but we have been not afraid of everyone. As long as the coach can there be, A number of Spot Community is precisely what it always meant to be. During the past, about three top rated amounts from the Shangqing Website joined A number of Part Small town and regarded its existence. While the educator loathes things externally, he would come to find proper rights if he were definitely really provoked. Then, no matter whether Enormous G.o.ds Metropolis could make it through his wrath could be anyone’s suppose.”
|
OPCFW_CODE
|
Joining Two query
select a.Enquiry_Id,a.Ckeck_In,a.check_Out,a.Hotel_Name,a.Meal_Plan,a.Room_Type,a.Occupancy_Type,a.Room_QT,a.Adults from Accomodation a
where a.Enquiry_Id = 74
select q.Enquiry_Id,q.Start,q1.Stay_At from Quick_Plan q,Quick_Plan q1 where q.Enquiry_Id = 74 and q1.Enquiry_Id = 74 and q.Stay_At = q1.Start
result of 1st query is
74 2013-08-03 2013-08-04 ADS CP deluxe Double 1 2
and the result of 2nd query is
74 Ahmedabad Agra
nw i want to combine these two query so that i get the result like
74 2013-08-03 2013-08-04 ADS CP deluxe Double 1 2 Ahmedabad Agra
I might be tired; what is the difference between the desired result and and the first select result?
@u07ch the two extra fields returned by the second select result.
Assuming that a.Enquiry_Id and q.Enquiry_Id are the keys you use to join,
SELECT a.Enquiry_Id, a.Ckeck_In, a.check_Out, a.Hotel_Name, a.Meal_Plan, a.Room_Type, a.Occupancy_Type, a.Room_QT, a.Adults,q.Start, q1.Stay_At
FROM Accomodation a
INNER JOIN Quick_Plan q ON a.Enquiry_Id = q.Enquiry_Id
INNER JOIN Quick_Plan q1 ON q1.Enquiry_Id = q.Enquiry_Id
WHERE a.Enquiry_Id = 74 AND
q.Stay_At = q1.Start
not getting the right result from either of three answers plz suggest some other query. al these querys are working like cross join for my data.
The problem is, I don't really understand your second query. At first I thought it was some sort of self-JOIN on your Quick_Plan table, but then I noticed that you use Enquiry_Id (which I assumed to be your primary key), which makes a non-sense to me.
So could you please provide some more info about your db data?
Providing data from Enquiry_Id #74 of both tables would be great...
i have two tables first is quickplan and another is accomodation the quickplan table contain the info about tour like checkindate checkout day tour strat from tour ends at stayat places and accomodation contain the information about hotels where the person is styaing i have to create a voucher where i need the some information from accomodation table and some from quickplan the info from quickplan is retrived by applying selfjoin in quickplan table now what i want to join these two queries. if still dere is some prblm in understanding the prblm i cn give u the struture n data of both tables
RH - 02/09/2013 - 4 2013-09-10 2013-09-11 Abad Airport Hotel MAP Please Select Please Select 1 2
RH - 02/09/2013 - 4 2013-09-11 2013-09-12 Devinshare CP DELUX Single 1 2
I guess the two rows are from Accomodation table, is this true? First of all, I'd suggest you not to save "Please select" as a value when your value is undefined. Something like NULL, '', or even 0 would be better. Secondly, it's not performing to save ENUM types such you RoomType as Strings. You'd better save them as integer and then convert them with a mapping table. For example 0=SINGLE, 1=DOUBLE, 3=DELUXE, etc.
Yes please post your tables structure (both of them) and some rows data, cause I'm afraid your problem could be related to them.
In your case the easiest way would be to use CTEs as they won't need much modification.
;WITH FirstCTE AS
(
SELECT a.Enquiry_Id,
a.Ckeck_In,
a.check_Out,
a.Hotel_Name,
a.Meal_Plan,
a.Room_Type,
a.Occupancy_Type,
a.Room_QT,
a.Adults
FROM Accomodation a
WHERE a.Enquiry_Id = 74
),
SecondCTE AS
(
SELECT q.Enquiry_Id,
q.Start,
q1.Stay_At
FROM Quick_Plan q,
Quick_Plan q1
WHERE q.Enquiry_Id = 74
and q1.Enquiry_Id = 74
and q.Stay_At = q1.Start
)
SELECT *
FROM FirstCTE F
JOIN SecondCTE S
ON F.Enquiry_Id = S.Enquiry_Id
I think that the proper way would be:
SELECT a.Enquiry_Id,
a.Ckeck_In,
a.check_Out,
a.Hotel_Name,
a.Meal_Plan,
a.Room_Type,
a.Occupancy_Type,
a.Room_QT,
a.Adults ,
q.Start,
q1.Stay_At
FROM Accomodation a
JOIN Quick_Plan q
ON a.Enquiry_Id = q.Enquiry_Id
JOIN Quick_Plan q1
ON q.Enquiry_Id = q1.Enquiry_Id
and q.Stay_At = q1.Start
WHERE a.Enquiry_Id = 74
select
a.Enquiry_Id,
a.Ckeck_In,
a.check_Out,
a.Hotel_Name,
a.Meal_Plan,
a.Room_Type,
a.Occupancy_Type,
a.Room_QT,
a.Adults,
q.Enquiry_Id,
q.Start,
q1.Stay_At
from
Accomodation a,
Quick_Plan q,
Quick_Plan q1
where
q.Enquiry_Id = 74
and q1.Enquiry_Id = 74
and q.Stay_At = q1.Start
and a.Enquiry_Id = 74
try this
|
STACK_EXCHANGE
|
If you do not provide a format label for each value, then the numeric value will appear in the output. Use the ods statement to output the SAS dataset of estimates from the subdomains listed on the domain statement. For your convenience, standard proportions for different NHANES population age groupings are provided in the Excel spreadsheet attached below. The solution option produces a printed version of the age-adjusted prevalences. Message 3 of 7 1, Views. You must log in or register to reply here. Use the cluster statement to specify PSU sdmvpsu to account for design effects of clustering. The model decreases untilit increases again untildecreases until and then decreases more until
Hi, I'm trying to calculate the standardized incidence rate(crude of death) for my data set, for people on a treatment vs not on the. PROC STDRATE computes directly standardized rates and risks by using Mantel-Haenszel For example, when event rates vary for different age groups of a.
I'm attempting to calculate the age-standardized prevalence rate. I have counts for outcome of interest, age groups, and a standard population.
After execution has completed, Joinpoint opens an output window to display the results.
Age Standardized rate SAS Support Communities
Showing results for. Input File Tab The Input File tab specifies the file format of the input data file and some additional settings for the model. Ask a Question. Age Standardized rate. Message 2 of 7 1, Views. Auto-suggest helps you quickly narrow down your search results by suggesting possible matches as you type.
Video: Age adjusted incidence rate sas Crude Mortality Rate
. How can we calculate epidemiological rates and ratios from these coefficients? Task 1b: How to Generate Age-Adjusted Prevalence Rates and Means Using SAS Survey Procedures.
In this task, you will generate age-adjusted.
Click to hide optional step," Generate Age-Adjusted Means". Message 3 of 3 1, Views.
Reply 2 Likes. Use the weight statement to account for the unequal probability of sampling and non-response. Search titles only. It will show you what information is needed to have Joinpoint compute Age-Adjusted rates and how to provide that information to the Joinpoint program.
Video: Age adjusted incidence rate sas Age adjustment
Note, it can take a few minutes to execute, depending on the size of the input data file and the options selected.
We designed a SAS macro that produces age-specific rates of any.
age gender standardized incidence rate SAS Support Communities
An age-adjusted rate is a weighted average of crude rates, where the You may use SAS, SPSS, Excel, Word, or any software package to. Hello, I'm looking for a way to calculate age-adjusted incidence rates in SAS.
I know how to do so by hand, but I'm stumped when it comes to.
Note, it can take a few minutes to execute, depending on the size of the input data file and the options selected. The Input File tab specifies the file format of the input data file and some additional settings for the model. A Joinpoint Model is also drawn on this graph. Can you use the Dataline statement? If you close the output window without saving your results, you will need to re-run the analysis.
I'm attempting to calculate the age-standardized prevalence rate.
|
OPCFW_CODE
|
Can the fees in a coinbase be used to launder Bitcoins?
Generally Bitcoins coming from a coinbase are considered clean as they are newly generated. However while answering this question I realized that mining pools could offer to launder Bitcoins in the following (in its edge case even extrem) way:
Let's say I want to launder 1 Btc
I create a chain of btc transactions with a relatively high fee (1000 Satoshi per byte). In the extreme case I could even make an extremely large transaction by including a couple of outputs with large scripts producing 1 tx which has basically the entire amount as a fee.
I send this (those) tx only to the mining pool that helps me to launder the Bitcoin.
The mining pool then sends me an output from the coinbase to a new Address of a value less then the 1 btc (taking their share of the laundering operation) also since I wanted to launder I could receive several smaller payouts to various addresses.
Let's say this is impossible due to KYC requirements of miners. Such an endeavor could also be achieved by solomining such a block by renting out hash power.
Of course such an extreme tx would already look suspecious but I could create a similar situation by spammimg tx during a high fee market (or even producing such a fee market by spamming.)
Somehow this picture makes me worry as it seems to me that criminals would not only be incentivised to do so it would also interfere with the entire ecosystem.
Am I missing some point here? Why has that not been done yet? In case of stolen coins (for example quadrica exchange, it seems to me that it would even be worthwhile to just create a mining operation for such an endeavor)
While nothing stops you from paying a high fee to a miner and claiming it out of band, trying to launder any significant amounts through this would likely be discovered and linked fairly quickly.
A pattern of disproportionately high fee transactions appearing in blocks that are mined by a subset of miners would be noticeable, as would linking the money - If you are laundering 10 BTC and put it through via these fees, I simply need to track the spends of the coinbase output instead of spends from the original 10 BTC utxo.
Of course, you could do this piecemeal by laundering smaller, less noticeable amounts like 0.01 BTC per tx, up to 0.2 BTC per block, but that doesn't really scale to a significant amount, especially not once you take out the miner's fee. Moreover, unless you are able to bring all miners on board, you could only do this a handful of times a day (assuming you manage to convince a single large pool), which again does not scale up to any significant amount of BTC, especially if you stick to doing small amounts.
It's likely simply not worth the effort for miners or launderers. Sending coins to a KYC free exchange, or exchanging them P2P for other coins, especially ZCash and Monero, likely has much higher throughput.
I want to expand on @raghav’s answer a bit (which is great! I agree with what he’s written), as I’ve thought about this before as well.
I see two options for getting these transactions mined: pay a miner to provide this service, or mine them yourself (solo mining, or become a pool operator).
Additionally, there are then two approaches you could use to launder the funds: conspicuously, or inconspicuously. The distinction is not absolute, but it basically comes down to ”does this transaction stand out from the ‘average transaction’, in terms of fees?”
First off, no matter how the transaction is mined, if the fees are conspicuous then you have not added sufficient obfuscation to the flow of funds to have truly broken the trail of audibility for anyone performing an informed investigation. So this method seems to fail, in any case. We can ignore it, regardless of whether you are mining your own blocks, or not.
So then, we can explore the two remaining options: mined by a third party with inconspicuous fee rates, or self-mined with inconspicuous fee rates.
If you are paying a miner to perform this service for you, then rationally you should expect to pay: the going fee rate, plus a service charge for the added overhead of the service provided. So already, you will need to pay a slightly higher fee than is otherwise required to confirm a transaction, and you have not actually laundered any coins yet.
So this begs the question: how much can you add in ‘extra fees’, before the transaction becomes conspicuous? Perhaps double the fee-rate would be acceptable, but in this case you will effectively be paying ~50% in order to launder funds, and this seems quite expensive at first glance. For large amounts of BTC, even double the fee rate does not amount to much throughout, and looking at recent blocks (height~566,527 at the time of writing), we can see that the total fees for the average block are consistently below 0.5 BTC (most seem to be in the ~0.1-0.3 BTC range). So even if a miner stuffed a block entirely full of your transactions (which may be conspicuous in itself, depending on the relationship of addresses you use to fund transactions), you’d be looking at laundering maybe ~0.5 BTC at most per block. In the future, fee rates may change, but in all but the most extreme cases this seems to be rather inefficient.
Additionally worth mentioning: the miner providing this service could keep record of the transactions, so you would have to place some trust in them to keep this information strictly confidential. Also, you would have to trust them to pay the coins out to you as specified.
So what about the option of mining the transactions yourself? In this case, it is worth noting that you will no longer have to pay the ‘service fee’, but I don’t think you can fully discount paying the ‘usual fee’ that a miner would otherwise expect to confirm your transaction. This is because not receiving the miner’s fee is an opportunity cost that you will pay in order to mine your own transactions, and since mining is highly competitive, forgoing the collection of these fees may affect your return on investment for the mining operation. Of course, your ‘profits’ will come otherwise in the form of laundered BTC, but I thought this was worth mentioning nonetheless, as you will otherwise be ‘less profitable’ than other miners, all else equal.
So the question then becomes: is the investment risk of a mining operation that is large enough to reliably solo-mine blocks worth it? Or does some other method of laundering BTC incur less risk? Or more reliability?
Keep in mind, even with your own mining operation, you still have to keep transactions inconspicuous, so as mentioned above, you would be laundering just a fraction of a coin per block. In order to launder a significant amount of BTC, you’d need to find a good number of blocks, meaning a larger investment in mining equipment.
I think the TL;DR is thus: given the high costs and risks of laundering BTC through coinbase transactions, there are likely alternative methods that are more efficient and effective. This conclusion is perhaps premised on low fee rates (sat/vbyte), but even with high fee rates we see that paying a miner to perform this service will be expensive (a high portion of your coins paid as fees), whereas running your own mining operation requires a large amount of upfront investment/risk.
|
STACK_EXCHANGE
|
Even if you already know how to write Java programs, and have a basic understanding of web applications, the Java Enterprise Edition Java EE stack is daunting.
Creating a Dynamic Web Project in Eclipse After the configuration of the server and database a dynamic Web project with Process Manager facets activated can be created in Eclipse.
This project contains all relevant artefacts, like JARs and configurations, to run the Stardust Portal in the Web application of this project. The Workflow Execution Perspective can be started without any further configuration after Web application server start. In the properties page enter a name for your Web project.
Make sure that the target runtime is set to Apache Tomcat with the currently supported version. If not, please set up the Apache Tomcat server as described in the section Configuring the Server of the previous chapter. In the Configurations entry, select Stardust Portal for Dynamic web module 2.
The facets provided with this configuration can be added or removed later as described in section Process Manager Facets. Now choose the folder for your java sources, the default is src.
Leave or adjust the default output folder and select Next. The next dialog gives you the opportunity to configure Web module settings. If you want to use the default settings, just choose Next. The default context root is the name of your project, optionally choose another name.
The default name of the content directory for your Web project is WebContent. Optionally choose another name. A dialog opens to ask if the perspective should be changed. The J2EE perspective is optional, so click No. Change the perspective As the Process Manager - Jackrabbit facet is contained in the Stardust Portal configuration, you have to set the repository path as described in the section Setting the Repository Path of the Document Service Integration Guide.
After creating the dynamic Web project, the folder structure will look like in the example below. A top level folder is created for the project and the project files are initialized. Please refer to the Stardust Portal documentation for detailed information.
Process Manager - Jackrabbit Embedded Repository incl. Server Option - deploy Jackrabbit with the application the embedded option is used if no Jackrabbit exists. Process Manager - Jackrabbit Remote Repository Client - deploy Jackrabbit with the application the Remote option is used when the Web application is supposed to be connected to an existing Jackrabbit deployment.
Please refer to the Stardust Portal chapters for detailed information. Please refer to the Business Analysis and Reporting guide for detailed information.
Adding and Removing Facets You can add or remove facets in the Projects Facets properties dialog by selecting the corresponding project facets.
To add or remove project facets to your project: Right-click your project and select Properties. In the properties dialog select Project Facets.
On the right side all available facets are listed, which you can enable or disable. In case of disabling thus deleting facets, confirm the dialogs asking to remove the according facet folders, e.
Henley] on alphabetnyc.com *FREE* shipping on qualifying offers. Build an online messaging app using Java Servlets, JSP, Expression Language, JSTL, JPQL, Sessions/Cookies. Eclipse (@ alphabetnyc.com) is an open-source Integrated Development Environment (IDE) supported by IBM.
Eclipse is popular for Java application development (Java SE and Java EE) and Android apps. It also supports C/C++, PHP, Python, Perl, and other web project developments via extensible plug-ins. Eclipse is cross-platform and runs under Windows, Linux and Mac OS. I am not sure if you tried this solution in Eclipse browser.
This solution works only in SWT browser control of Eclipse. It is not a generic solution which could work in any web browser.
I want to build a RESTful Web Service in Java, deployed using Jetty and developed using Eclipse as IDE. I was wondering if anyone could post or link me to a beginner tutorial (even a "hello world!". This tutorial, Part 1 of the series, introduces you to publishing a web service application using the Eclipse IDE, Java SE 6, and Ant.
It lays the groundwork for Part 2, which describes the creation of the web services client application. In this method, the web application is packed as a WAR file.
|Frequently bought together||Introduction to Eclipse You should probably understand that Eclipse is developed by a rather huge and worldwide open source community with considerable backing by many corporate entities that does not result in this IDE being beholden to any.|
|Why isn't Java used for modern web application development? - Software Engineering Stack Exchange||We will create and build the project with Maven and hibernate 4 will be used in data layer. To get started simply create a simple web application with maven and import it in Eclipse.|
|Enabling Open Innovation & Collaboration | The Eclipse Foundation||Enter a project name, such as wsServerExample, when prompted, as shown in Figure 6.|
|Java application development tutorial using Azure Cosmos DB | Microsoft Docs||Not required if you just want the bare API. It will also pull the instrumentation key from the operating system environment variable if it is available.|
You may generate the WAR file using a tool or IDE like Eclipse, or someone just sent you the file.
|
OPCFW_CODE
|
Upgrade from 0.15.0 or fresh install dynatrace-operator 1.0.0: oneagent unable to start container process
Describe the bug
upgrading from 0.15.0 to 1.0.0 with helm or fresh install produces oneagent daemonset pod warnings:
OCI runtime exec failed: exec failed: unable to start container process: exec: "/usr/bin/watchdog-healthcheck64": stat /usr/bin/watchdog-healthcheck64: no such file or directory: unknown
To Reproduce
Steps to reproduce the behavior:
either fresh install or upgrade from 0.15.0 helm chart
helm sets:
apiUrl = <our_customer_url>
apiToken =
dataIngestToken =
installDRD = true
webhook.hostNetwork = true
image = <private_repo>
customPullSecret = <private_repo_secret>
csidriver.enabled = false
dynakube 1.0.0:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
annotations:
feature.dynatrace.com/automatic-kubernetes-api-monitoring: 'true'
name:
namespace: dynatrace
spec:
activeGate:
capabilities:
- routing
- kubernetes-monitoring
- dynatrace-api
- metrics-ingest
group: aws-eks
image: <private_repo>/docker/dynatrace/linux/activegate:latest
resources:
limits:
cpu: 1000m
memory: 1.5Gi
requests:
cpu: 500m
memory: 512Mi
apiUrl: https://<customer_id>.live.dynatrace.com/api
customPullSecret: <private_repo_secret_name>
networkZone: <custom_zone_name>
oneAgent:
classicFullStack:
args:
- '--set-host-group=<custom_host_group>'
env:
- name: ONEAGENT_ENABLE_VOLUME_STORAGE
value: 'false'
image: <private_repo>/docker/dynatrace/linux/oneagent:latest
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
skipCertCheck: true
dynakube 0.15.0:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
annotations:
feature.dynatrace.com/automatic-kubernetes-api-monitoring: 'true'
name:
namespace: dynatrace
spec:
activeGate:
capabilities:
- routing
- kubernetes-monitoring
- dynatrace-api
- metrics-ingest
group: <custom_group>
image: <private_repo>/docker/dynatrace/linux/activegate:latest
resources:
limits:
cpu: 1000m
memory: 1.5Gi
requests:
cpu: 500m
memory: 512Mi
apiUrl: https://<customer_id>.live.dynatrace.com/api
customPullSecret: <private_repo_secret_name>
networkZone: <custom_zone>
oneAgent:
classicFullStack:
args:
- '--set-host-group=<custom_host_group>'
env:
- name: ONEAGENT_INSTALLER_SCRIPT_URL
value: https://<customer_id>.live.dynatrace.com/api/v1/deployment/installer/agent/unix/default/latest?arch=x86
- name: ONEAGENT_INSTALLER_DOWNLOAD_TOKEN
value:
- name: ONEAGENT_INSTALLER_SKIP_CERT_CHECK
value: 'true'
image: <private_repo>/docker/dynatrace/linux/oneagent:latest
oneAgentResources:
limits:
cpu: 300m
memory: 1.5Gi
requests:
cpu: 100m
memory: 512Mi
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
operator: Exists
skipCertCheck: true
Expected behavior
after helm install, either upgrade or fresh, healthy pods reported with no warnings reported just like 0.15.0
Screenshots
n/a
Environment (please complete the following information):
EKS/Kubernetes 1.27
terraform provider hashicorp/helm v2.13.0
helm v3.9.4
dynatrace-operator 1.0.0 helm chart
Additional context
-probably unimportant but with EKS autoscaler, version 1.0.0 does require an additional node (4 total) from 0.15.0 (3 total)
-could be related but even using latest tag on images and helm chart still produces dashboard warning: "The ActiveGate monitoring this Kubernetes cluster is outdated. Please make sure that all ActiveGates have [a minimum version of 1.279] to get the latest enhancements in Kubernetes monitoring."
used these links as supporint info:https://docs.dynatrace.com/docs/whats-new/release-notes/dynatrace-operator/dto-fix-1-0-0#upgrade-from-dynatrace-operator-version-0-15-0
https://docs.dynatrace.com/docs/whats-new/release-notes/dynatrace-operator/dto-fix-1-0-0
opened support ticket: 306247
|
GITHUB_ARCHIVE
|
Tenant eviction law in Netherlands when renting is done via proxy company
I live in UK and have recently bought flat in Netherlands, found agent to rent it out and yesterday they told that they have found a tenant and sent me a contract to sign.
Turns out that 'tenant' that my agent has found is a company and not a person (which already feels dodgy since we agreed with agent that it will be a person), so I started doing some research. It turns out that this company ('Company X') is company that helps asylum seekers with getting accommodation (which I am happy with as long as I don't have problems to deal with).
However having done some (bad) enterprises to day, I got a bit more thorough (paranoid) and one potential possibility I see is that 'Company X' which has very little assets can sign hundreds of lease contracts with lenders like myself. This company places hundreds of asylum seeker families into flats with intention to pay their rents, but with no monetary 'buffer' or due to bad management (or many other potential reasons) fails to make payments.
After it fails to make payments in around 2-3 months of non payments this goes to court, then 'Company X' goes bankrupt and gets liquidated in another 3-6 months all the while flat is occupied by non paying tenants that do not have direct obligation to myself but rather to the company that is going bankrupt.
Netherlands is a very pro-tenant country so question is: if payments by 'Company X' fail can I start eviction process immediately (like I would normally do with tenants) or do I need to wait for company to get liquidated and only then can start eviction process?
I'm not in the Netherlands, but in most places you would be able to start the process immediately. Companies generally have the same or fewer rights then individuals. Maybe the solution is to insist that the director of the company sign as guarantor ?
On the whole, asylum seekers appear to be no major financial risk as they can generally file for unemployment benefits and rent subsidies. And if your tenant doesn't pay you, you can garnish any rent subsidy. Still, as with any group there are exceptions to the general rule.
I got and answer from lawyer in Netherlands.
To rent out to the company is not without risks. You rent out to the company and the company rents out to the actual user of the apartment. That is subletting. The sub-lessee is protected by law. So when the company fails to pay, you can end the contract with the company (you have to go to court for this), but then you will become the lessor to the actual user (=sub-lessee) then. If you feel that that is against your interests, you have to start a court procedure within half a year to end the contract with the actual user.
Also note: it is forbidden to rent out to people that don't have a legal status. So you make sure you trust the company very well if you are going to rent out to them. I recommend to seek help from a real estate agent that is well known and member of NVM or other trustworthy organisation.
|
STACK_EXCHANGE
|
If you’re new to DataTurbine, you’re perhaps wondering how and where it’s useful, and if it’d be useful for your own work. This document aims to answer those questions for you, and give you some ideas. Before I dive into what it cando, let’s focus a bit on systems you might have encountered, so that you know what’s different. How is DataTurbinedifferent than other systems:
- The ‘network ring buffer’ idea lets you separate data sources, and also lets you mix and match different rates. Unlike an Enterprise Message Service (EMS) system, clients can get old or new data, as required, and can look at data more than once.
- Unlike a database, you can subscribe to a data feed, and get efficient, low-latency propagation with high throughput (tens of megabytes per second, tens to hundreds of clients)
- Unlike a filesystem, there are no locks to worry about, byte ordering issues, but you can still random-access address the data streams.
- Unlike a simple TCP or multicast connection, you can have different rates. Sources can ‘burst’ data, that clients can let pile up and process as able and interested. Clients can also choose which sources to monitor, so they needn’t waste bandwidth seeing everything.
Since DataTurbine is Java-based, it runs anywhere there’s a JVM, from Gumstix to 64-bit servers. If your system can’t run Java, you can still stream data using a proxy (several are provided, contact us for details) or via DataTurbine’s WebDAV filesystem interface. Our best display client is called RDV, and its a rich-client Java application. There are web interfaces for data, but none are as capable, so in general Java is useful-to-essential for serious use of DataTurbine. DataTurbine is more useful in applications that are
Let me explain what I mean.
- Since DataTurbine adds a flexible-length buffer and access to your design, it adds capabilities like TiVo-style browsing, replay, rewinding, and more. However, if everything is simple, local, and point-to-point then perhaps DataTurbine might not be helpful.
- Since its a TCP/IP-based network application, it makes a wonderful abstraction layer for heterogenous devices; once data is sent to DataTurbine the underlying hardware and software are totally abstracted. The clients don’t need to know or care. However, if you just have one device type, then abstraction is less useful.
- I posit that DataTurbine is most useful when you’re collaborating on science, especially with remote users. It lets anyone (who you permit) view data, watch video, annotate an ongoing experiment and zip through data with ease. All of those capabilities are great for a single lab, but they’re superb when, say, you want to view the experiments’ progress from your iPhone, or let your remote collaborator control the mass spec from their office.
- DataTurbine’s ability to mix and match numbers and video, all synchronized, is quite valuable. You can not onlysee the video feed, but you also see what was happening on the other sensors or feeds when it happened. Automatically, all the time, no extra effort required.
Here are some examples where DataTurbine is a big win:
- The Network for Earthquake Engineering and Simulation(NEES) uses it for distributed experiments, often spanning thousands of miles and several days. For example, the structural lab at UMN collaborates with researchers in Puerto Rico, who can observe, annotate and control without frequent flyer miles to Minnesota. UMN uses LabWindows-based data acquisition systems, Axis network cameras, streaming audio servers (custom code based around Windows and the Java Media Framework), as well as Java-controlled consumer digital cameras on remotely-controlled imaging towers.
- We’re using DataTurbine to demonstrate streaming numeric & video data from the Santa Margarita Ecological Reserve and a coral reef in Taiwan. This uses LabVIEW CompactRIO data acquisition, a variety of sensors, numerous network video systems and at least four different operating systems, and that’s before you include the network gear!
- Insight Racing is using it for video onboard their autonomous car. Video from Axis 206M 1.3 megapixel cameras streams to an onboard Linux-based video server for processing over Ethernet.
- NASA uses RBNB for telemetry as part of their “Intelligent Network Data Server” or INDS at their Dryden Lab.
Here are some more ideas for places where DataTurbine is a useful capability:
- Stream processing and event detection. You can easily interface to DataTurbine from Matlab, givng you the ability to write programs that process and analyze streams of data in real time. Think video processing, complex algorithmic event detection, anything you can do with the vast cornucopia of Matlab code or toolboxes. Output can be sent right back to DataTurbine as another data stream, viewable by anything in the system.
- Event markers and annotation. We have a capability to annotate the DataTurbine that we call ‘Event markers.’ These are timestamped bits of XML, stored in the DataTurbine as a text channel and part of the record. You can use them as notes or annotation, either human or programmatic, and are great for situational knowledge or just understanding an experiment afterwards. For example, ‘hydraulics now online,’ ‘sensor X on box Y is faulted and offline’ or ‘I think we just saw beam failure on cantilever 3’ are all examples of possible markers. These can be generated and viewed in RDV or your own interface.
I hope this has helped give you some ideas of where DataTurbine wins. DataTurbine is under the Apache 2.0 license, but we’d appreciate greatly the courtesy of an email telling us how you’re using it, any feedback or features you’d like to see.
|
OPCFW_CODE
|
Does anyone have any guides on a hierarchy to follow? We had a brand new service desk system installed here recently and the categories that have been created are a bit generic.
Think of categories in terms of reporting. Your techs don't really care if a ticket is categorized as "Infrastructure/Wireless Network" or "Infrastructure/Wired Network." Create your category structure in a way that managers and executives want to see performance.
If the ask the question "how many times did we have tickets about the wireless network this month?" you know how to answer it, along with all the pretty charts and graphs.
Don't make categories just for the sake of having them. Start at a high level to determine if you even need to get granular. I'm sure "Password Reset" will be used much more frequently than something insanely specific like "Mouse" (which would be better served under a broader category like "Hardware/Accessories & Peripherals").
Or, now that I'm thinking about it, you may not even need that subcategory. For all I know, you may have so few hardware issues that it can all fit under "Hardware." Or you may work in a shop that's really tough on keyboards, so you break out "Keyboards" under hardware so you can see how many tickets come in for just keyboard issues.
I typically use broad groups. If we get too into detail, people seem to not want to follow along. I have the following categories since this is all that my leadership have an interest in:
Again, this is what works for my company. They like broad categories just for basic reporting. They did not wish to see subcategories (email could be listed under software, but they wanted it in its own category).
At my last employer, they tried to get really granular with their ticketing categories/subcategories, but they learned that it took much longer just to create the ticket, so they removed some of the sub categories.
Having been on both sides of the management situation on this, in my experience I was far more concerned with WHO was having an issue than with WHAT issue he or she was having.
The categories I would suggest would be both simple and broad: department, core network, end user hardware, applications. Anything more granular than that in my experience is about as effective as arguing how many angels dance on the head of a pin.
This ==>>> "Don't make categories just for the sake of having them." So much THIS that I'm literally shitting my pants right now.
If by category, you mean what we call Issue Type, whatever you do, DO NOT follow our example. We have 113 categories. :-(
Whenever the boss notices that a particular topic has come up more than a few times, he creates a new category. We have Login and Login Problem as separate categories. We have Access and Access Database categories. We have The Cloud as a category ("Could you be a little more vague, please?"). We have Software as a category, and then separate categories for much of our software (Outlook, OneDrive, Gradekeeper...).
You can forget about a user being able to choose the correct Issue Type. It's a required field in the submission form, and there's no way in hell a user will ever know what to choose. So they click something at random and usually no one in IT changes it to the correct type. Useful reporting on Issue Type is impossible.
As WeirdFish and Rodey09 pointed out, keep the categories broad and few. And use subcategories when called for. Perhaps consider who does what at your organization. Our IT staff are pretty well specialized (Server Manager, Client Manager, Network Manager, Communications Manager, Database Admin....) and our job responsibilities would likely work well as categories (or at least a springboard to create useful categories).
|
OPCFW_CODE
|
Cranelift: ARM32 backend: active maintenance, or remove?
In the Cranelift biweekly today, we discussed the ARM32 backend and how to handle it with respect to our ISLE transition, and maintenance / breaking changes in internal APIs in general.
The current state of the ARM32 backend is incomplete: it supports 32-bit code, but panics on any 64-bit operation, so it does not yet support (for example) Wasm-MVP with the cranelift-wasm frontend, nor would it support most code generated by cg_clif. The intent when merging the partial backend was to allow it to mature in-tree with further contributions. However, that hasn't materialized in the year or so since merging.
We have made changes as needed to keep it compiling, but there is a larger question of what happens to an incomplete backend with no active maintainers or users, especially if it implies more significant amounts of work. With our ISLE DSL transition, we would need to invest nontrivial time to move the backend over; and the upcoming regalloc2 transition would require more effort as well.
It seems reasonable to ask: if the backend is incomplete, and no one is using or maintaining it, should we remove it instead? Consensus in our meeting today was that this seems reasonable. However, if anyone would like to work toward a usable and maintained ARM32 backend, now is the time to step forward!
Thoughts?
Hi @cfallin, I understand the situation you and the maintainers are in. We are not currently a user of wasmtime/cranelift on arm32 but we have been evaluating projects to potentially target wasm via wasmtime on our fleet of IoT hubs. To consider use of wasmtime/cranelift, arm32 support is a must as we continue to support software updates for a very large number of products using arm32 (in addition to products based on aarch64/x86_64/etc.) and would guess that this is true for other potential users in this space. Thus far, one of the reasons we have not yet built solutions around wasm is the lack of arm32 support which we had hoped would be something that would get love from ARM or others.
I will reach out to some of my contacts within the broader Samsung org to see if we are currently in a position to provide any assistance; I do feel that if IoT is in fact an area where we would like wasmtime/cranelift to be applicable along with the goal of providing a portable bytecode that arm32 support is a critical feature.
@posborne -- hello and thanks for helping out here!
For context, I think it would probably take 1-2 months of full-time work by a compiler engineer to bring the current incomplete arm32 backend up to a working state where it supports Wasm, and would need ongoing engagement by someone who would address bugs, keep the backend up-to-date wrt other refactors, etc.
We also are reaching a point where we'll need to do something soon, as we have ongoing efforts to update the architecture-independent bits of the compiler (e.g. the register allocator) and an unmaintained backend becomes a blocker for those. So, if someone were to jump in and adopt the arm32 backend, it would need to be somewhat soon.
I don't doubt the value of arm32 at all, and all else being equal, would love to have full and complete support for it! It's really just a matter of resources. And, it's worth mentioning that if we do end up removing it from the tree now, someone would always be welcome to step in, grab the old source from git history as a starting point, and bring it up to date, then contribute it, as long as there is a reasonable ongoing-maintenance story.
@cfallin Thank you for the context; I think we're on the same page and, unfortunately, don't think we'll have the resources available to help in the near term so I think the plan to remove the half-baked arm32 for now makes sense (though I have still reached out to confirm). I just wanted to make sure we expressed our interest in targeting arm32 but also appreciate the need to move the project forward.
A smaller issue, but isn't cranelift's ABI also currently hard-coded for 64-bit architectures only?
Either way, an unmaintained (and unusable?) backend seems like quite a burden for a project that wants to change.
@posborne Is your IoT use case based on M profile cores (e.g. Cortex-M4) or A profile ones (such as Cortex-A53)?
@posborne Is your IoT use case based on M profile cores (e.g. Cortex-M4) or A profile ones (such as Cortex-A53)?
We target a variety of application processors running embedded Linux in addition to targets running Android or Tizen. The two targets with greatest volume in production are the NXP IMX6UL and IMX6ULL which are both Cortex-A7 (we also target several aarch64 based processors). We don't currently have any integrations on MIPS or RISC-V but both of those have come up as possibilities at different points in time.
OK, so definitely the A profile.
The reason I am asking is that while it is feasible to add support for the M profile to Cranelift (since it is mostly just a code generator), it would be a challenge to do the same to Wasmtime because the latter depends on a full-featured operating system like Linux (that provides mechanisms such as mmap()); the M profile processors have difficulties supporting environments of that kind (given that they lack memory management units, among other limitations). This is somewhat similar to supporting #[no_std].
|
GITHUB_ARCHIVE
|
WordPress? In 2024? Yes, actually
A lot of people have a lot of opinions on WordPress. It’s a system that has stood the test of time, and has the warts and weird bits to prove it.
In 2024, there are a lot of choices for building a website or web-based system. Most architectures will rely on a CMS to store content, so content editors aren’t required to change content stored in HTML markup. (To non-technical people, markup is scary - understandably so!)
WordPress as a system has existed from the early 2000s. Throughout that lifetime, the key priority for the core maintenance team has been backwards compatibility. The most modern version of WordPress will run on decades old versions of PHP. This is a phenomenal undertaking in itself - what other software can you think of that can say that?
When you view the history of WordPress through that light, a lot of the decisions make sense. You put the codebase in the public directory of a web server, and it runs. The barrier to entry is low. This is a good thing! We should encourage people getting into technology and educate them in a positive and empathetic manner.
A lot of technical folks dislike WordPress because it doesn’t follow modern development practices. This is fine, but it’s not the only thing that matters.
If you want a traditional CMS that feels familiar, WordPress is probably where that sense of familiarity came from. If you like Headless Architecture, WordPress can do that for you too! The bad old days of your entire site getting hacked because of xmlrpc are over. The present day WP REST API is robust and easy to pick up.
Because the WordPress ecosystem is so large, there is probably already a tool in your favourite programming language or frontend JS framework to retrieve data from a WordPress powered backend and display the content without having to think too hard. This is a good thing! It lets you focus on building what matters.
I’ve seen WordPress installs that have no boundaries in the codebases, throw plugins at everything, mix models/views/controllers in one php template file, and commit clean code sins that would make Uncle Bob furious. They’re not fun to work in. They’re not what we should be striving to build as engineers. They also do the job.
I’ve also seen meticulously crafted WordPress installs, that rely on 3 core third-party plugins to operate (ACF Pro, Gravity Forms, and Yoast) and the rest of the functionality is provided by a “bespoke” enterprise architecture, combining a “core” plugin and a “core” theme. Sometimes these codebases are more difficult to work in, but they provide structural support that means maintaining the website in 10 years will be the same experience as it is today.
As developers and technologists, we have a responsibility to non-technical folk to make content management as easy as possible. There are a lot of estimates of WordPress usage on the web, with figures reportedly being between 40% and 65% (depending on your source). Because of this, there is a high probability that the person you’re building the website for has encountered WordPress in the past. This is one less hurdle to jump.
Don’t pick a tech stack because it’s cool, or “modern”, or “hip”, or “the latest thing”. Pick a tech stack that solves the problems you face.
(Perhaps ironically, this website is a headless system - Nuxt and Vue on the frontend and Contentful on the backend. My website is my place to test out new approaches and techniques, and changes more regularly than a website for an organisation would.)
|
OPCFW_CODE
|
Allow for exceptions for E402
Hi,
in tests I have quite frequently something like:
import pytest
pytest.importorskip("twisted")
from twisted.internet.defer import Deferred, succeed, fail
from internal_package import xyz
which now reports E402s.
Any chance you add an exception for that? The annoying part is that I’d have to add noqas to every single import statement that comes after the importorskip() which would also hamper with unused symbols detection…
By itself I find E402 useful so I don’t want to suppress it either…
One improvement would be for pytest.importorskip("twisted") # noqa to mean all the following imports do not need a noqa.
It seems like a better way might be to have your test code checked with something like pep8 --ignore=E402 tests/
As I wrote I like E402 so I would prefer to not silence it altogether...
Since it seems to support the try/except idiom, I thought this could be added too.
But I could live with putting noqa on the non-import line too.
It's a similar issue when writing standalone scripts that use Django
import django
from django.conf import settings
from myapp import myapp_defaults
settings.configure(default_settings=myapp_defaults, DEBUG=True)
django.setup()
# Now this script or any imported module can use any part of Django it needs.
from my_app import models
from my_other_app import models
I'd find it useful to be able to add #noqa to the settings.configure and django.setup() lines, as @jayvdb suggested.
Closing, see call for pull requests in #480.
If anyone comes across this issue and would like a fix that doesn't involve noqa on each line, nor to globally disable it, I wrote https://pypi.org/project/flake8-pytest-importorskip/ to handle it "automatically".
If anyone comes across this issue and would like a fix that doesn't involve noqa on each line, nor to globally disable it, I wrote https://pypi.org/project/flake8-pytest-importorskip/ to handle it "automatically".
I would not recommend that plugin -- it uses private implementation detail that will break in a future version of flake8: https://github.com/ashb/flake8-pytest-importorskip/blob/6d2e6fb03ce5f938ae555b6f5e930637af3454fe/flake8_pytest_importorskip/init.py#L9
Yes -- it's a hack. It works for now though.
Yes -- it's a hack. It works for now though.
It does this in a slightly hacky way, so it may break in future versions of flake8 or pycodestyle.
ah, well as the flake8 maintainer -- I plan to change that bit of code so it likely will not work in the future :)
@asottile If you are changing that, would it be possible to have a plugin change the logical_line that "later plugins" see?
I really don't think that's a good idea -- lying about the source code to other plugins. plus there's not really a concept of plugin ordering (that your plugin works at all right now is dumb ordering luck)
I thought ordering was alphabetic - guess just got lucky.
As for Lying to other plugins: http://pylint.pycqa.org/en/latest/how_tos/transform_plugins.html 😀
pylint's system is more for augmenting existing information (giving hints to pylint's engine about information it can't glean statically) -- I don't see how you'd implement your bait-and-switch with a non-import line to an import given that without severely breaking other things
Yeah, I wasn't being serious anyway. Another perhaps less hacky approach would be to let warnings get filtered by other plugins before being issued.
I also have another idea that might work that doesn't need me to look at private state -- I might be able to monkey patch the other plugin function, to wrap it.
|
GITHUB_ARCHIVE
|
Hi folks! Sorry its been quiet on the blog; we’ve been focused mainly on communication via Discord, but will try to be better about communicating here as well in the future. 🙂 That said, if you haven’t already joined our Discord community, you really should. We even have a nifty vanity URL now!
This upcoming beta build has a first rough pass at rebalancing elemental attributes. It is *NOT* complete, but is a large part of the way there and we figured might as well start testing it sooner rather than later. What’s missing is a rebalance of several elemental dungeons (mostly the fire and lightning ones). What we *have* rebalanced is: Frozen Owl Monastery, Super Secret Base, Temple of Bast and Venom Compound.
Things that have changed for the aforementioned elements & dungeons:
- Halved the # of elemental upgrades to max. Applies to both damage and defense.
- Decreased the droprate of upgrades by something like 40% give or take across all sources.
- Doubled the strength of elemental attack upgrades.
- Tripled the strength of elemental resistance upgrades.
- Enemies in rebalanced dungeons no longer do combo elemental/physical damage. It’s 100% elemental now.
- Enemies in rebalanced dungeons are inflicting 50% more damage.
- Enemies in rebalanced dungeons effectively have 50% more health.
Other, unrelated changes:
- Assault’s bash now reduces damage from enemy bullets by 40% for the duration of the maneuver
- Quad/Paladin’s maneuver is shielded (temp for Paladin, as we hope to change its maneuver someday)
- Duster’s base health reduced by 10%
- Dagger Knight: Shorter shield phases on T0-T9 Shield Blasts (lasts longer as you tier up)
- Bomb Factory: Many new rooms (which won’t all spawn in any given run)
- Bomb Factory: New “bomb fuse” unit, tweaks to enemies to make them work better with bombs
- Reeducation Camp: Many new rooms & new “heat vent” traps
- Reeducation Camp: Added “anti-rushing barricades” in various places.
- Removed XP farm exploit on Soul Collector quest
- Made requirements for Vault quest less error prone.
- Raised “Prison Box” quest item drop rate
- Removed all T2-T6 dungeon keys from store
- Show item info when you mouseover an item in the trade screen
- Only show damage number fly-ups for the local player
- Added option to control whether clicking a radar icon initiates a teleport. Defaults to disabled.
- Tweaked visuals of item equip popups
- Raw dungeon names no longer displayed in friend locations.
- Fixed item info panel sometimes sticking around after closing Storage.
- Immediately join friends on different servers via the Friend List or /tp or /teleport command.
- Mini friends online list displays correct number of online friends total.
- When you accept or try hand-in a giver quest, make it your active quest
- Don’t show events and news in new user flow
- Debug information now has a graph for memory usage (managed memory, assets and similar are not included)
- Optimized plane outlines; should look nicer and impact perf less
- Added graphics settings option to override antialiasing level
- Combined mouse and keyboard controls into one keybinding menu (finally!)
- Fixed binding modifier keys (CTRL/Alt/Shift) as solo inputs.
- Can now bind individual modifiers + keys (CTRL + A, etc.) Note: some existing bindings have been reset.
- Gamepad controls only show when a gamepad is connected.
- Extend initial connection timeout to 1s to avoid some unnecessary reconnection attempts.
- Minor bug fix where some dungeon room orientations weren’t considered
- Lots of random little bugfixes and changes to help future development
- Better logging for quests, to help diagnose crafting cancellation bug
- Higher tiers of machine guns (up to T10) shoot cooler-looking bullets, but only you see them; other players still see the less-noisy versions we’re all accustomed to. If this seems cool, we’ll do this for all the other guns too.
Last but not least, we’re finally starting to showcase our character art more! Check out this sweet new keyart that we’re using as the Discord banner. 🙂
|
OPCFW_CODE
|
c# what object literal could stop a foreach loop from looping?
I know I could wrap the entire foreach loop in an if statement, but I'm wondering if there's some object literal value replacement for new List<string>() so that the foreach skips executing when myList is null? In Javascript I could simply put [] after a coalesce to refer to an empty set that would stop the foreach loop.
List<string> myList = null;
foreach (var i in myList ?? new List<string>() )
i.Dump();
Unnecessary background information that does not change the answer:
List<Entity>() is my actual list that comes from ASP.NET MVC data binding, so I don't control creating it. I used the Entity Framework database models (POCOs) as an input to the Controller method like you would use a control function, setting parameters of the function to flow data to a table. The POCO comes from the database, so I don't control that either.
Just looking for some sort of empty object literal I could use to avoid running the loop without creating a new object.
I really think it's a C# flaw to throw an exception when myList is null. If it's null, there's nothing to do, so the loop should skip over.
Use Enumerable.Empty<string>().
@dbc That worked. post the answer and I'll mark it. Thanks
Or see C# EmptyIfNull extension for any IEnumerable to return empty derived type or Linq method to transform nulls into empty IEnumerable? if you need to make this check often.
@dbc you have a lot of hats
I just initialize my lists to new List(); in my model's constructors. It assures me that I don't have a null value.
I really think it's a C# flaw to throw an exception when myList is null. - I tend to agree. I once had to fix a performance issue with dynamically updating a complex CAD model (200k+ geometric objects) where a measurable amount of time was spent counting many, many empty collections. By "measurable" I mean 1/40th of a second -- but the update had to be completed in 1/4 of a second so the time spent counting the collections, though a small part of the problem, actually mattered. But my experience is sort of unusual.
@Gilles unfortunately ASP.NET's data binding feeds the data to me. I don't think it executes constructor methods at least in this sense to set them to empty lists.
I really wish negative feedback was accompanied with an explanation from a named author. This answer actually solved my problem, was valuable to me, and probably valuable to others.
Use Enumerable.Empty<string>().
See C# EmptyIfNull extension for any IEnumerable to return empty derived type or Linq method to transform nulls into empty IEnumerable<T>? for extension methods you could use if you need to make this check often.
|
STACK_EXCHANGE
|
'CPU monitor devices?'
Does anyone have any ideas about CPU monitoring devices that would
meet the following criteria? Some 8-pin devices I've seen come sorta
close, but I'd prefer something more like the following:
-1- Relatively short (e.g. 1-10ms) reset pulse; many of the devices that
I've seen have a 250ms reset pulse, which adds to the time the device
will be inoperable if anything goes wrong. Obviously the reset timer
should not start until power is within tolerance.
-2- Active high and active low outputs (so as to be useable with 8x51's
as well as PICs).
-3- An "intelligent" watchdog circuit which would not be satisfied with
random port flailing. Perhaps requiring that feeding pulses come in
pairs, with the first two pulses 10us or less apart, and with at
least 100us between pairs. Keeping the watchdog fed in such a scheme
should be about as easy as it is with existing parts, but the device
would be more likely to detect a problem which caused the port pins
to flail randomly.
-4- A "pushbutton reset" circuit which can accept even short pulses on
the input (e.g. which can be hooked up to /PSEN on an 8x51 to reset
the CPU if it tries to run code out of external memory).
Anyone know of any good devices that fit that bill? A fifth feature
which would be even cooler if anyone makes it would be:
-5- A built-in voltage regulator which could briefly switch off (and
shunt the output to ground) in case of trouble. "Trouble" in this
case would most likely be a reset (of whatever sort) which does not
result in the CPU feeding the watchdog within a reasonable time, or
else an asserted input on a "force power cycle" pin. Since CPU's
can land in states where only a power-down/power-up reset will fix
them, it would be useful to have hardware that could handle that
At 11:09 AM 12/31/98 -0600, you wrote:
>Does anyone have any ideas about CPU monitoring devices that would
>meet the following criteria? Some 8-pin devices I've seen come sorta
>close, but I'd prefer something more like the following:
Why not a 12C part?
Andy Kunz - Statistical Research, Inc. - Westfield, New Jersey USA
More... (looser matching)
- Last day of these posts
- In 1998
, 1999 only
- New search...
|
OPCFW_CODE
|
CFE declaration in Azure when big IP is not the pool member GW
Description
In my environment all routes tables have a default route pointing to o the Azure FW which is located in different Virtual Network (but same region and subscription than F5 and pool members), wondering what route table should be added in the declaration because there is a lack of information regarding a declaration example when Bigip and pool members are behind an Azure FW.
Please advise how a declaration looks like in this scenario regarding route table, because CFE is not working as expected.
Environment information
For bugs, enter the following information:
Cloud Failover Extension Version: 1.13.0
BIG-IP version: BIG-IP <IP_ADDRESS> Build 0.0.4 Point Release 2
Cloud provider: Azure
Severity Level
For bugs, enter the bug severity level. Do not set any labels.
Severity: 3
Hi @marpad20, just to clarify, your clients are routed to the Azure FW and then you want the Azure FW to use the active BIG-IP as the next hop? If so then you need to add (either by tag or scoping address in CFE config) the route table where the egress interface of the Azure FW lives. If that is not working, can you share the output of this command on the device that became active: tail -f /var/log/restnoded/restnoded.log | grep f5-cloud-failover
thanks
Hi Mike,
Thanks for prompt reply.
Yes, Traffic this is the flow. Client->Az FW <-> F5 <-> Az FW <->Web servers.
We will update the declaration with you advise and let you know.
Ty.
HI @mikeshimkus , the azure FW egress subnet does not have a route table associated. do you have any clue how to write the declaration then?
Azure FW is in VNET A, F5 VM is in VNET B and there is a peering between them.
Please advise
It will need to have a route table associated.
Sorry, I'm not clear. Previously you said that CFE was not working as expected. Are you saying that now it is using the active F5 virtual machine for the next hop when the HA pair fails over?
@mikeshimkus , what i am trying to say is that when VM1 is active traffic properly hits the VIPs, but when i do manual failover and force the VM1 to standby, VIP is failing, for a short period of time i see traffic still hitting the VM1 for that VIP but i see no reply back from VIP, and then after 1 or 2 minutes suddenly VM1 fails back to active but VM2 keeps as active and both VMs sends ARPs request asking for the VIP IP. Traffic VIP is recovered when i forced VM2 to standby.
I know that both in active is expected not to work, but why if VM1 fails back to active the VM2 remains active too?
and
why during the period VM2 is active traffic is not forwarded to it. (if traffic hits the VM1 without any UDR attached to the FW)
I hope i can explain.
Thanks in advance for your help.
Marlon P.
It is unclear to me how you have this setup. Can you share your CFE logs (cat /var/log/restnoded/restnoded.log | grep f5-cloud-failover)? If you can open a support case so we can see the configuration, that would be helpful.
How are you routing traffic to the self IP address of the active VM, if not using Azure route table?
How are you routing traffic to the self IP address of the active VM, if not using Azure route table?
Hi @mikeshimkus , last time i have asked that but so far i have not a good answer.
In the mean time, does F5 has a design document/guide with Azure FW as the GW for every subnet (F5 and Nodes)?
FW is a HUB virtual Network and F5 in a "spoke" different virtual network.
Thanks in advance
Closing. Please leave a message here, if you would like additional assistance, and I will reopen the issue.
|
GITHUB_ARCHIVE
|
This will also give us the quality and standard at the same time. Windows ten uses and combines some cool options of windows seven and windows eight that create it stand out from alternative Microsoft in operation Systems. . If you do not possess a serial key, you will not have the ability to trigger your working system. Update procedure would merely take minutes which are barely 5 10. The generic installation keys listed here are solely for installation purposes and nothing more or less.Next
Microso has released an update for Windows 8. All trademark rights for rights-protected names are owned by the respective copyright owner, here at Microsoft Corporation. We place a great deal of effort to learn these real product keys for windows to trigger just about any edition of Windows 8 and Windows 8. You do not have to concern yourself with the appropriate registration for Windows 8. Type your product key in the Windows Activation window, and then click Activate.Next
Dears, I have Toshiba P850 with Windows 8 64 bit single language pre-installed. Together with the keys provide under it is possible to completely activate windows 8 Home Basic, Windows 8 Home Premium, Windows 8 Professional and Windows 8 Ultimate. Although there is no bookmark toolbar, it is excellently designed and also has large icons. Means it activates the operating system of yours for lifetime consumption. All such keys are secure to use and are 100% working. This won't matter for the Windows Installation, though. I bought my laptop 4 years ago.Next
This window creates the easiest connection between the user and a computer. Also, it increases the system efficiency and performance. How to activate Windows 8 with a Windows 8 serial key? The service life is unlimited as long as the product is used on the same device. Windows 8 Product Review: Windows 8 is the most stable release in operating systems by Microsoft. We will find the latest version at Earlier versions of Gburner came with multiple unwanted third-party apps, such as Ad-Aware Web Companion However, the current version as of this writing, gBurner 4. I think I have lost my genuine windows. Note: This Product keys used only 2 users for 1 key our expired after 6 month its full tested Windows 8.Next
It is mixed along with existing features. Excellent integration with different platforms. Window 8 has the quality which is unusual and high capabilities. Huge security improvements have been made and battery life is also facilitated. You do not need to choose 32 or 64 Bit — this license will activate both versions. You may only avail complete attributes of Windows 8 following activation of your version of windows. That easy to use for generating keys.
But Generator is an authenticated and recommended generator for windows activated. With the Windows 7 30-day trial, one could install Windows and have them for free. Keep in mind that this manual download method could take a little bit longer, but it still lets you deploy the new Windows 8. If you enter the generic keys otherwise you merely choose the trial version of Windows ten installation. It sort-of makes sense as an anti-piracy measure.Next
No other Windows version can be activated with this key. It is genuinely the advanced operating system developed. But recently I found an actual way to trigger Windows 8. This means that those familiar Windows product key stickers will no longer appear on the Windows 8 computers. The only thing we need is to create a notepad file, on any version of Windows. There is an addition of the new interface, advanced security features, and elegant user interface which makes it perfect. It is well worth your time making a separate note of your key just in case you have to uninstall and reinstall the software.Next
This product key also working for 3. For this, we will use the 30-day trial version of gBurner. Essentially, this is the best way for Windows to ensure that their copyright is upheld and that the version of the software you receive is of their high standard. Once you have entered the Windows 8 product key, you will also be able to install any future updates of Windows 8. Security Aspects of Windows 8: Security in the operating system always worth a lot. Of course, Windows are not activated. This Window delivers us the quality interface as well as the latest features.
If you are very curious to install Windows 8. Significant upgrades, as well as enhancements, are there in probably this latest release in the condition of most new updates, repairs as well as brand new smoothness regarding quicker search options. A lot of peoples can not buy the premium things on the internet its some thing costly just like Windows 8. If you can buy only Product keys so you waste money our time. The idea is that by eliminating the sticker, you eliminate one of the easier ways for nefarious users to get a legitimate product key. We will need a valid product key for that. Boot to the desktop by default on laptops and desktops.Next
|
OPCFW_CODE
|
By Lee Spector
Automatic Quantum laptop Programming offers an advent to quantum computing for non-physicists, in addition to an creation to genetic programming for non-computer-scientists. The e-book explores numerous ways that genetic programming can aid automated quantum desktop programming and provides exact descriptions of particular innovations, in addition to a number of examples in their human-competitive functionality on particular difficulties. resource code for the author’s QGAME quantum laptop simulator is incorporated as an appendix, and tips to extra on-line assets provide the reader with an array of instruments for automated quantum computing device programming.
Read or Download Automatic Quantum Computer Programming: A Genetic Programming Approach PDF
Similar compilers books
Ada ninety five, the improved model of the Ada programming language, is now in position and has attracted a lot cognizance locally because the overseas normal ISO/IEC 8652:1995(E) for the language was once licensed in 1995. The Ada ninety five motive is available in 4 elements. The introductory half is a basic dialogue of the scope and ambitions of Ada ninety five and its significant technical good points.
This publication constitutes the refereed court cases of the sixteenth overseas convention on Conceptual buildings, ICCS 2008, held in Toulouse, France, in July 2008. the nineteen revised complete papers awarded including 2 invited papers have been conscientiously reviewed and chosen from over 70 submissions. The scope of the contributions levels from theoretical and methodological issues to implementation matters and purposes.
Parsing know-how regularly includes branches, which correspond to the 2 major software parts of context-free grammars and their generalizations. effective deterministic parsing algorithms were built for parsing programming languages, and rather assorted algorithms are hired for examining average language.
Immersing scholars in Java and the Java digital computing device (JVM), advent to Compiler development in a Java international allows a deep knowing of the Java programming language and its implementation. The textual content specializes in layout, association, and checking out, supporting scholars examine reliable software program engineering talents and turn into larger programmers.
- Algorithms for Parallel Polygon Rendering (Lecture Notes in Computer Science)
- Declarative Agent Languages and Technologies II: Second International Workshop, DALT 2004, New York, NY, USA, July 19, 2004, Revised Selected Papers (Lecture Notes in Computer Science)
- Source Code Optimization Techniques for Data Flow Dominated Embedded Software
- Automated Deduction – CADE-22: 22nd International Conference on Automated Deduction, Montreal, Canada, August 2-7, 2009. Proceedings (Lecture Notes in Computer Science)
Extra resources for Automatic Quantum Computer Programming: A Genetic Programming Approach
These gates are “Boolean” in the sense that they can have one of two possible effects on their output qubits on any particular invocation, but unlike classical logic gates they cannot act by setting their output qubits to 0 or 1 as such behavior would be non-unitary. The alternative convention adopted in most work on quantum computing, and built into QGAME, is that a Boolean gate acts by flipping or not flipping its output qubit to indicate an output of 1 or 0 respectively. The “flip” here is implemented as a QNOT, and all oracle gates can therefore be thought of as CNOT gates with more complex controls.
The functions used in the run must all return values of this same type, and must take arguments only of this type. These restrictions prevent type incompatibility errors, but they are inconvenient; several ways to relax these restrictions are discussed in Chapter 6. Additional steps must often be taken to ensure that arbitrary programs are also semantically valid — that is, that they will always execute without error, producing interpretable (even if incorrect) results. For AUTOMATIC QUANTUM COMPUTER PROGRAMMING 48 example, one must sometimes engineer special return values for “pathological” calls, such as division by zero.
The parameters are the indices of the input qubits, and is the index of the output qubit. For example, the following expression: (ORACLE (0 0 0 1) 2 1 0) calls a gate that flips qubit 0 (the right-most qubit) when (and only when) the values of qubits 2 and 1 are both 1. In other words, this oracle acts as the following matrix: This particular matrix, incidentally, is also known as the “Toffoli” gate; it can be used to implement quantum versions of classical NAND and FANOUT gates, meaning that all possible deterministic classical computations can be computed on quantum computers using appropriately connected Toffoli gates (Nielsen and Chuang, 2000, pp.
|
OPCFW_CODE
|
App hangs on secure=self.encrypt_connections
Summary
When attempting to connect to the AD domain, my script hangs at secure=self.encrypt_connections in ms_active_directory/core/ad_domain.py(456). I'm not sure what is going on in the background could have something to do with whatever TLS/etc implementation is in the OS. Is there a way to get more debug info out of the package?
Env Details
OS: Amazon Linux 2
Server: EC2 t3.small
Installed via Poetry
Ran via: poetry run python3 ad_demo
Last few lines of python debugger
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(440)__init__()
-> self.site = site.lower() if site else None
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(441)__init__()
-> self.encrypt_connections = encrypt_connections
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(442)__init__()
-> self.ca_certificates_file_path = ca_certificates_file_path
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(443)__init__()
-> self.ldap_servers = []
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(444)__init__()
-> self.ldap_uris = []
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(445)__init__()
-> self.kerberos_uris = []
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(446)__init__()
-> self.dns_nameservers = dns_nameservers
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(447)__init__()
-> self.source_ip = source_ip
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(448)__init__()
-> self.netbios_name = netbios_name
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(449)__init__()
-> self.auto_configure_kerberos_client = auto_configure_kerberos_client
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(450)__init__()
-> self._sid = None
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(452)__init__()
-> if not ldap_servers_or_uris and discover_ldap_servers:
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(453)__init__()
-> ldap_servers_or_uris = discover_ldap_domain_controllers_in_domain(self.domain, site=self.site,
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(454)__init__()
-> dns_nameservers=self.dns_nameservers,
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(455)__init__()
-> source_ip=self.source_ip,
(Pdb) next
> /home/ec2-user/.cache/pypoetry/virtualenvs/ad-demo-UzGUOutx-py3.7/lib/python3.7/site-packages/ms_active_directory/core/ad_domain.py(456)__init__()
-> secure=self.encrypt_connections)
(Pdb) next
Higher level debug
(ad-demo-py3.7) [ec2-user@ip-10-204-222-57 ad-api]$ poetry run python3 ad_demo │······································································
2023-03-15 17:00:32.982 | DEBUG | __main__:<module>:7 - Startin script │······································································
> /home/ec2-user/ad-api/ad_demo/__main__.py(11)<module>() │······································································
-> example_domain_dns_name = "xxxxxxxxx" │······································································
(Pdb) next │······································································
> /home/ec2-user/ad-api/ad_demo/__main__.py(12)<module>() │······································································
-> domain = ADDomain(example_domain_dns_name) │······································································
(Pdb) next │······································································
│······································································
Redacted Code
from ms_active_directory import ADDomain
from loguru import logger
import pdb
logger.debug("Startin script")
pdb.set_trace()
example_domain_dns_name = "********"
domain = ADDomain(example_domain_dns_name)
ldap_servers = domain.get_ldap_uris()
kerberos_servers = domain.get_kerberos_uris()
logger.debug("startin discovery")
# re-discover servers in dns and sort them by RTT again at a later time to pick up changes
domain.refresh_ldap_server_discovery()
domain.refresh_kerberos_server_discovery()
logger.debug("creating session")
session = domain.create_session_as_user(
"***************", "*******************"
)
logger.debug("finding data")
user = session.find_user_by_sam_name("**************", ["employeeID"])
group = session.find_group_by_sam_name(
"**************", ["gidNumber"]
)
# users and groups support a generic "get" for any attributes queried
print(user.get("employeeID"))
print(group.get("gidNumber"))
nslookup
The server appears to be able to resolve the domain controllers OK:
hi @kerryhatcher ! if you set the log level then you can get a bit more detail
conn.open()
logger.debug('Opened connection to AD domain %s: %s', self.domain, conn)
if self.encrypt_connections:
# if we're using LDAPS, don't StartTLS
if not conn.server.ssl:
tls_started = conn.start_tls()
if not tls_started:
raise DomainConnectException('Unable to StartTLS on connection to domain. Please check the '
'server(s) to ensure that they have properly configured certificates.')
logger.debug('Successfully secured connection to AD domain %s', self.domain)
unsure if the underlying python ssl has more logging available (this all builds on that)
can you maybe wireshark it? it's possible that your network is the issue.
a bad MTU size can fragment packets, which makes TLS negotiation hang because the packets keep getting re-transmitted. that's the only scenario where I've seen normal connections work, but TLS hang
maybe check that out?
|
GITHUB_ARCHIVE
|
...utilizing a MySQL database for storing the profile data. - The site needs an order form to upload profile data and images. Profiles will be paid for with 3 images for free and a fee for each additional uploaded photo. Secure credit card processing. - A profile template page is needed to display a person's profile which will need the ability to display
...built 2-3 wordpress sites, I want someone to create 2 template for me 1. The template where all the basketball courts will be listed. It will also contain a search form (see template 1 attachment) 2. The template with the information of each court. This is the page of each unique stadium. (see template 2) I have already built the custom post type and
I have a custom php & javascipt front office form, with back office admin panel , 1.I need to add in oders list page (that its work now) one more dropdown column for each order “Status” with 4 status, after employees select one of all status, for the one of orders must send a specific email template to the customer like the eshops status e
Hello, I am looking for a PHP developer to build a SEO Saas App under Symfony2, Laravel or under another similar MODERN framework. The frontend have to be coded in valid HTML5 and CSS3/4 code + jQuery (or if you prefer another, can be too) + Responsive (you can use Bootstrap or another one). The app and its Features should looks very close to
Need for finishing a project. A template has been bought and uploaded on a server. The platform is Magento. - invoice generation if necessary - module for distribution on social networks - module to subscribe to newsletter and save emails in a database, export by MailChimp - contact form / order - realization how search / filtering - concept adapted
...integrate classified with a php social engine site OR mod established plugin to look and function like one I need designed for the classified area tied to the social member site. Layout will need to be mod either way, keeping css of current template. Basic view pages are frontpage, listing page, posted page, and posting form. Admin will have its core
...project with current users. Don't send me a site that has 1 or 2 users. We want a active site that we can verify that it is being used by over 100 users and that is not a template. We will also verify that you are the developer. If yo can not do this then do not apply to job. Project Description: We are in need of someone to build a site for us
...of a form inside the same page where is located the form instead of opening a new page. I have a form placed in wordpress page, like <form action="[url removed, login to view]" method="post"> <select name="age" size="1"> <option value="15">15</option> <option value="16">...
...renting cars) We will provide you with the website template/graphics and you will be responsible to: - Create any needed web applications using PHP as the primary server side scripting language and MySQL as the DB - Create any needed DB designs - Integrate your code with the provided website template/graphics - Document everything and provide a comprehensive
...html css website template i have a hosting company and i want to give a free self build website. as a start need to include 10 template for each category for my client to choose now i'm ready to make the order for 10 html css templates for category tours and excursion websites if work go fine with the freelancer will order for the other cate...
I have the following Wordpress template: [url removed, login to view] You have to see the live demo of the theme ([url removed, login to view]) in order to understand what I want to do. Please read carefully the instructions! By pressing "Add Place" you can see that you can upload a place of
...either a standalone Windows desktop app or a Wordpress Plugin. I have found several public domain scripts which do most of this, but would like a complete solution. Using free code is fine too as long as it in one package in the end. Specifications: 1. User Inputs Wordpress RSS feed URL 2. Program Pulls data from RSS Feed and builds list of
...website for for my web hosting site. I will provide 1) template, 2) logo 3) hosting Packages 4) other necessary text and links etc... this need to be done with PHP or ASP.NET, must have paypal integration, domain registration form and search tools, back auto email responder to client on submission of order and to administrator. Site will have simple features
I need the following: 1- Creation of a nice looking but quick logo 2- Implementation of a nice looking and well laid out pre-existing template that you have which can be used for a hair related store eg natural/organic skin products, hair etc (if you dont have it we/I can buy it from somewhere CHEAP though its preferred that you already have a few
...(knowledge of PHP, MySQL, CSS, AJAX required) Main functionalities: - product catalog where the visitor can add products to basket - checkout of the order without login, no online payment/processing (the buyer just fills out a form (name, surname, address, city, post code) to order the products in the basket. The content of the order form is writt...
|
OPCFW_CODE
|
Program guide > Access data with client applications
Access data in WebSphere eXtreme Scale
After an application has a reference to an ObjectGrid instance or a client connection to a remote grid, you can access and interact with data in the WebSphere eXtreme Scale configuration. With the ObjectGridManager API, use one of the createObjectGrid methods to create a local instance, or the getObjectGrid method for a client instance with a distributed grid.
A thread in an application needs its own Session. When an application wants to use the ObjectGrid on a thread, it should just call one of the getSession methods to obtain a thread. This operation is cheap--there is no need to pool these operations in most cases. If the application is using a dependency injection framework such as Spring, you can inject a Session into an application bean when necessary.
After you obtain a Session, the application can access data stored in maps in the ObjectGrid. If the ObjectGrid uses entities, you can use the EntityManager API, which you can obtain with the Session.getEntityManager method. Because it is closer to Java™ specifications, the EntityManager interface is simpler than the map-based API. However, the EntityManager API carries a performance overhead because it tracks changes in objects. The map-based API is obtained by using the Session.getMap method.
WebSphere eXtreme Scale uses transactions. When an application interacts with a Session, it must be in the context of a transaction. A transaction is begun and committed or rolled back using the Session.begin, Session.commit, and Session.rollback methods on the Session object. Applications can also work in auto-commit mode, where the Session automatically begins and commits a transaction whenever the application interacts with Maps. However, the auto-commit mode is slower.
The logic of using transactions
Transactions may seem to be slow, but eXtreme Scale uses transactions for three reasons:
- To allow rollback of changes if an exception occurs or business logic needs to undo state changes.
- To hold locks on data and release locks within the lifetime of a transaction, allowing a set of changes to be made atomically, that is, all changes or no changes to data.
- To produce an atomic unit of replication.
WebSphere eXtreme Scale lets a Session customize how much transaction is really needed. An application can turn off rollback support and locking but does so at a cost to the application. The application must handle the lack of these features itself.
For example, an application can turn off locking by configuring the BackingMap locking strategy to be NONE. This strategy is fast, but concurrent transactions can now modify the same data with no protection from each other. The application is responsible for all locking and data consistency when NONE is used.
An application can also change the way objects are copied when accessed by the transaction . The application can specify how objects are copied with the ObjectMap.setCopyMode method. With this method, you can turn off CopyMode. Turning off CopyMode is normally used for read-only transactions if different values can be returned for the same object within a transaction. Different values can be returned for the same object within a transaction.
For example, if the transaction called the ObjectMap.get method for the object at T1, it got the value at that point in time. If it calls the get method again within that transaction at a later time T2, another thread might have changed the value. Because the value has been changed by another thread, the application sees a different value. If the application modifies an object retrieved using a NONE CopyMode value, it is changing the committed copy of that object directly. Rolling back the transaction has no meaning in this mode. You are changing the only copy in the ObjectGrid. Although using the NONE CopyMode is fast, be aware of its consequences. An application that uses a NONE CopyMode must never roll back the transaction. If the application rolls back the transaction, the indexes are not updated with the changes and the changes are not replicated if replication is turned on. The default values are easy to use and less prone to errors. If you start trading performance in exchange for less reliable data, the application needs to be aware of what it is doing to avoid unintended problems.
Be careful when you are changing either the locking or the CopyMode values. If you change the values, unpredictable application behavior will occur.
Interacting with stored data
After a session has been obtained, you can use the following code fragment to use the Map API for inserting data.
Session session = ...; ObjectMap personMap = session.getMap("PERSON"); session.begin(); Person p = new Person(); p.name = "John Doe"; personMap.insert(p.name, p); session.commit();
The same example using the EntityManager API follows. This code sample assumes that the Person object is mapped to an Entity.
Session session = ...; EntityManager em = session.getEntityManager(); session.begin(); Person p = new Person(); p.name = "John Doe"; em.persist(p); session.commit();
The pattern is designed to obtain references to the ObjectMaps for the Maps that the thread will work with, start a transaction, work with the data, then commit the transaction.
The ObjectMap interface has the typical Map operations such as put, get and remove. However, use the more specific operation names such as: get, getForUpdate, insert, update and remove. These method names convey the intent more precisely that the traditional Map APIs.
You can also use the indexing support, which is flexible.
The following is an example for updating an Object:
session.begin(); Person p = (Person)personMap.getForUpdate("John Doe"); p.name = "John Doe"; p.age = 30; personMap.update(p.name, p); session.commit();
The application normally uses the getForUpdate method rather than a simple get to lock the record. The update method must be called to actually provide the updated value to the Map. If update is not called then the Map is unchanged. The following is the same fragment using the EntityManager API:
session.begin(); Person p = (Person)em.findForUpdate(Person.class, "John Doe"); p.age = 30; session.commit();
The EntityManager API is simpler than the Map approach. In this case, eXtreme Scale finds the Entity and returns a managed object to the application. The application modifies the object and commits the transaction, and eXtreme Scale tracks changes to managed objects automatically at commit time and performs the necessary updates.
Transactions and partitions
WebSphere eXtreme Scale transactions can only update a single partition. Transactions from a client can read from multiple partitions, but they can only update one partition. If an application attempts to update two partitions, then the transaction fails and is rolled back. A transaction that is using an embedded ObjectGrid (grid logic) has no routing capability and can only see data in the local partition. This business logic can always get a second session that is a true client session to access other partitions. However, this transaction would be an independent transaction.
Queries and partitions
If a transaction has already searched for an Entity, the transaction is associated with the partition for that Entity. Any queries that run on a transaction that is associated with an Entity are routed to the associated partition.
If a query is run on a transaction before it is associated with a partition, set the partition ID to use for the query. The partition ID is an integer value. The query is then routed to that partition.
Queries only search within a single partition. However, you can use the DataGrid APIs to run the same query in parallel on all partitions or a subset of partitions. Use the DataGrid APIs to find an entry that might be in any partition.
The REST data service allows any HTTP client to access a WebSphere eXtreme Scale grid, and is compatible with WCF Data Services in the Microsoft .NET Framework 3.5 SP1. For more information see the user guide for the eXtreme Scale REST data service .
- CopyMode best practices
WebSphere eXtreme Scale makes a copy of the value based on the six available CopyMode settings. Determine which setting works best for the deployment requirements.
- Byte array maps
You can store the key-value pairs in the maps in a byte array instead of POJO form, which reduces the memory footprint that a large graph of objects can consume.
Parent topic:Access data with client applications
Interacting with an ObjectGrid using ObjectGridManager
Data access with indexes (Index API)
Use Sessions to access data in the grid
Cache objects with no relationships involved (ObjectMap API)
Cache objects and their relationships (EntityManager API)
Retrive entities and objects (Query API)
Configure clients with WebSphere eXtreme Scale
Program for transactions
Connect to a distributed ObjectGrid
|
OPCFW_CODE
|
This project uses open source reaction data from the USPTO (pre-extracted by Daniel Lowe, https://bitbucket.org/dan2097/patent-reaction-extraction/downloads) to train a neural network model to predict the outcomes of organic reactions. Reaction templates are used to enumerate potential products; a neural network scores each product and ranks likely outcomes. By examining thousands of experimental outcomes, the model learns which modes of reactivity are likely to occur. The full details can be found at http://dx.doi.org/10.1021/acscentsci.7b00064.
The code relies on Keras for its machine learning components using the Theano background. RDKit is used for all chemistry-related parsing and processing. Please note that due to the unique reaction representation used, generating candidate outcomes requires the modified RDKit version available at https://github.com/connorcoley/rdkit. In the modified version, atom-mapping numbers associated with reactant molecules are preseved after calling
RunReactants. The code is set up to use MongoDB to store reaction examples, transform strings, and candidate sets. A mongodump containing all data used in the project can be found at https://figshare.com/articles/MongoDB_dump_compressed_/4833482. The database/collection names are defined in
Reaction templates are extracted from ca. 1M atom-mapped reaction SMILES strings using
data/generate_reaction_templates.py. They are designed to be overgeneral to cover a broad range of chemistry at the expense of specificity. The extracted templates can be found in the mongodump, so they do not need to be re-extracted.
A forward enumeration algorithm is used to generate plausible candidates for each set of reactants using
data/generate_candidates_edits_fullgrants.py with the help of the
main/transformer.py class. Reagents, catalysts, and solvents (if present) are allowed to react in addition to the reactants. This makes the prediction task artificially hard (as the reaction database already contains information about which atoms react), but it is reasonable given that role labelling was performed with knowledge of the reaction outcome. Candidates are inserted into a MongoDB automatically.
To prepare the data for training,
data/preprocess_candidate_edits_compact.py is used to generate necessary atom-level descriptors for reactant molecules, which will be used in the edit-based representation. Data is pickled in a compressed format to minimize storage size and file read limitations, but is expanded during training and testing into its full many-tensor representation.
Models are trained and tested using
main/score_candidates_from_edits_compact.py. Many command-line options are available to set different architecture/training parameters, including which fold of a 5-fold CV is being run. A demo model using just 10 reactions is included in
Trained model testing
An already-trained model can be loaded using
scripts/lowe_interactive_predict.py to make predictions on demand. You will be prompted to enter reactant SMILES strings; the results of the forward prediction are saved as a table of products, scores, and probabilities.
|
OPCFW_CODE
|
package tracing
import (
"context"
"go.opentelemetry.io/otel/api/global"
"go.opentelemetry.io/otel/api/propagation"
"go.opentelemetry.io/otel/api/trace"
"go.opentelemetry.io/otel/api/trace/tracetest"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/label"
export "go.opentelemetry.io/otel/sdk/export/trace"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
)
type TestTraceProvider struct {
tracer *tracetest.Tracer
}
func (tp *TestTraceProvider) Tracer(name string) trace.Tracer {
if tp.tracer == nil {
tp.tracer = &tracetest.Tracer{}
}
return tp.tracer
}
//go:generate go run github.com/maxbrunsfeld/counterfeiter/v6 go.opentelemetry.io/otel/api/trace.Tracer
//go:generate go run github.com/maxbrunsfeld/counterfeiter/v6 go.opentelemetry.io/otel/api/trace.Provider
//go:generate go run github.com/maxbrunsfeld/counterfeiter/v6 go.opentelemetry.io/otel/api/trace.Span
// Configured indicates whether tracing has been configured or not.
//
// This variable is needed in order to shortcircuit span generation when
// tracing hasn't been configured.
//
//
var Configured bool
type Config struct {
Jaeger Jaeger
Stackdriver Stackdriver
}
func (c Config) Prepare() error {
var exp export.SpanSyncer
var err error
switch {
case c.Jaeger.IsConfigured():
exp, err = c.Jaeger.Exporter()
case c.Stackdriver.IsConfigured():
exp, err = c.Stackdriver.Exporter()
}
if err != nil {
return err
}
if exp != nil {
ConfigureTraceProvider(TraceProvider(exp))
}
return nil
}
// StartSpan creates a span, giving back a context that has itself added as the
// parent span.
//
// Calls to this function with a context that has been generated from a previous
// call to this method will make the resulting span a child of the span that
// preceded it.
//
// For instance:
//
// ```
// func fn () {
//
// rootCtx, rootSpan := StartSpan(context.Background(), "foo", nil)
// defer rootSpan.End()
//
// _, childSpan := StartSpan(rootCtx, "bar", nil)
// defer childSpan.End()
//
// }
// ```
//
// calling `fn()` will lead to the following trace:
//
// ```
// foo 0--------3
// bar 1----2
// ```
//
// where (0) is the start of the root span, which then gets a child `bar`
// initializing at (1), having its end called (2), and then the last span
// finalization happening for the root span (3) given how `defer` statements
// stack.
//
func StartSpan(
ctx context.Context,
component string,
attrs Attrs,
) (context.Context, trace.Span) {
return startSpan(ctx, component, attrs)
}
func FromContext(ctx context.Context) trace.Span {
return trace.SpanFromContext(ctx)
}
func Inject(ctx context.Context, supplier propagation.HTTPSupplier) {
trace.TraceContext{}.Inject(ctx, supplier)
}
type WithSpanContext interface {
SpanContext() propagation.HTTPSupplier
}
func StartSpanFollowing(
ctx context.Context,
following WithSpanContext,
component string,
attrs Attrs,
) (context.Context, trace.Span) {
if supplier := following.SpanContext(); supplier != nil {
ctx = trace.TraceContext{}.Extract(ctx, supplier)
}
return startSpan(ctx, component, attrs)
}
func StartSpanLinkedToFollowing(
linked context.Context,
following WithSpanContext,
component string,
attrs Attrs,
) (context.Context, trace.Span) {
ctx := context.Background()
if supplier := following.SpanContext(); supplier != nil {
ctx = trace.TraceContext{}.Extract(ctx, supplier)
}
linkedSpanContext := trace.SpanFromContext(linked).SpanContext()
return startSpan(
ctx,
component,
attrs,
trace.LinkedTo(linkedSpanContext),
)
}
func startSpan(
ctx context.Context,
component string,
attrs Attrs,
opts ...trace.StartOption,
) (context.Context, trace.Span) {
if !Configured {
return ctx, trace.NoopSpan{}
}
ctx, span := global.TraceProvider().Tracer("concourse").Start(
ctx,
component,
opts...,
)
if len(attrs) != 0 {
span.SetAttributes(keyValueSlice(attrs)...)
}
return ctx, span
}
func End(span trace.Span, err error) {
if !Configured {
return
}
if err != nil {
span.SetStatus(codes.Internal, "")
span.SetAttributes(
label.String("error-message", err.Error()),
)
}
span.End()
}
// ConfigureTraceProvider configures the sdk to use a given trace provider.
//
// By default, a noop tracer is registered, thus, it's safe to call StartSpan
// and other related methods even before `ConfigureTracer` it called.
//
func ConfigureTraceProvider(tp trace.Provider) {
global.SetTraceProvider(tp)
Configured = true
}
func TraceProvider(exporter export.SpanSyncer) trace.Provider {
// the only way NewProvider can error is if exporter is nil, but
// this method is never called in such circumstances.
provider, _ := sdktrace.NewProvider(sdktrace.WithConfig(
sdktrace.Config{
DefaultSampler: sdktrace.AlwaysSample(),
}),
sdktrace.WithSyncer(exporter),
)
return provider
}
|
STACK_EDU
|
feat: Support Declare transactions
Describe the Feature Request
Snos panics on blocks with Declare transactions. In order to achieve this, several parts of the code need to handle/implement what is required to process these transactions
Branch WIP: https://github.com/keep-starknet-strange/snos/tree/ft/declare_txs
Related Code
Additional Context
On the comments, the different issues that will be described and link the PR/commit from the fix
Tested on block 79984
Console output:
[DEBUG minilp::solver] restored feasibility in 268 iterations, obj.: 10781510
thread 'main' panicked at crates/bin/prove_block/src/main.rs:36:61:
Block could not be proven: RpcError(StarknetError(ContractNotFound))
Fix: https://github.com/keep-starknet-strange/snos/pull/352
Tested on block: 76840
Console output:
[DEBUG blockifier::transaction::transactions] Transaction execution failed with: Failed to read from state: ClassHashNotFound.
thread 'main' panicked at /Users/ftheirs/Repositories/snos/crates/bin/prove_block/src/reexecute.rs:57:21:
Transaction 736032479bace4146d6551c0570299d5a7701218024dd6cedd6a1e0836a0659 (11/14) failed in blockifier: Failed to read from state: ClassHashNotFound.
stack backtrace:
Context: This error was triggered when reexecuting the previous block (block_number - 1). After debugging, the error is detected in Blockifier but it was actually callingget_compiled_contract_class_async function from rpc_state_reader. Since the ContractClass is not defined in the previous block, the RPC throws an error. The fix consist in remapping this error.
Fix: https://github.com/keep-starknet-strange/snos/commit/8dce2be3527bb0ba7f5203c2c1177357556d313d
Tested on block: 76840 and 160966
Console output:
assertion failed: previous_class_proof.verify(*class_hash).is_ok()
Context:
These asserts were failing. After careful inspection, there were some bugs that had to be fixed.
for (class_hash, previous_class_proof) in previous_class_proofs {
assert!(previous_class_proof.verify(*class_hash).is_ok());
}
for (class_hash, class_proof) in previous_class_proofs {
assert!(class_proof.verify(*class_hash).is_ok());
}
Fix: https://github.com/keep-starknet-strange/snos/commit/e885d2476663f47efcc37ad6e58e40968530dbc4
PS: the fix solved the issue for block 76840. The other block keeps failing on the assert from previous class proofs
Tested on block: 76840
Console output:
[ERROR prove_block] died at: /Users/ftheirs/Repositories/snos/cairo-lang/src/starkware/cairo/common/dict.cairo:56
[ERROR prove_block] inst_location:
Location { end_line: 66, end_col: 7, input_file: InputFile { filename: "/Users/ftheirs/Repositories/snos/cairo-lang/src/starkware/cairo/common/dict.cairo" }, parent_location: None, start_line: 56, start_col: 5 }
thread 'main' panicked at crates/bin/prove_block/src/main.rs:36:61:
Block could not be proven: SnOsError(Runner(VmException(VmException { pc: Relocatable { segment_index: 0, offset: 252 }, inst_location: Some(Location { end_line: 66, end_col: 7, input_file: InputFile { filename: "/Users/ftheirs/Repositories/snos/cairo-lang/src/starkware/cairo/common/dict.cairo" }, parent_location: None, start_line: 56, start_col: 5 }), inner_exc: Hint((0, WrongPrevValue((Int(0), Int(1081521228881763032381340369983304581869719834203569457582430923958602864337), Int(2790843345387106728836781645678935291925327034839673762138726424037927125171))))), error_attr_value: None, traceback: Some("Cairo traceback (most recent call last):\ncairo-lang/src/starkware/starknet/core/os/os.cairo:94:49: (pc=0:11641)\n let (local reserved_range_checks_end) = execute_transactions(block_context=block_context);\n ^***********************************************^\n/Users/ftheirs/Repositories/snos/cairo-lang/src/starkware/starknet/core/os/execution/execute_transactions.cairo:168:5: (pc=0:10392)\n 'execution_helper': execution_helper,\n ^*******************************************^\n/Users/ftheirs/Repositories/snos/cairo-lang/src/starkware/starknet/core/os/execution/execute_transactions.cairo:267:20: (pc=0:10463)\n let remaining_gas = INITIAL_GAS_COST - TRANSACTION_GAS_COST;\n ^**********************************************************************^\n/Users/ftheirs/Repositories/snos/cairo-lang/src/starkware/starknet/core/os/execution/execute_transactions.cairo:267:20: (pc=0:10463)\n let remaining_gas = INITIAL_GAS_COST - TRANSACTION_GAS_COST;\n ^**********************************************************************^\n/Users/ftheirs/Repositories/snos/cairo-lang/src/starkware/starknet/core/os/execution/execute_transactions.cairo:284:9: (pc=0:10526)\n %{ exit_tx() %}\n ^******************************************************^\n/Users/ftheirs/Repositories/snos/cairo-lang/src/starkware/starknet/core/os/execution/execute_transactions.cairo:1063:9: (pc=0:11424)\n }\n ^\n") })))
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Context:
The issue comes from
WrongPrevValue((Int(0), Int(2914380022048590530662957512768899582721003481667668281362650132041708924227), Int(376777264310724238292765590161100262470936922835543311980000322853526649051))))) in cairo-lang/src/starkware/starknet/core/os/execution/execute_transactions.cairo:1063:
And that's when updating a Cairo dictionary called contract_class_changes which is built from the class_hash_to_compiled_class_hash dict in the OS input. The comment above explicits that the value should be 0 for new classes:
// Note that prev_value=0 enforces that a class may be declared only once.
dict_update{dict_ptr=contract_class_changes}(
key=[class_hash_ptr], prev_value=0, new_value=compiled_class_hash
);
Fix: https://github.com/keep-starknet-strange/snos/commit/86bd535292b5e2d7c4768e584c1032e6ebeddc16
|
GITHUB_ARCHIVE
|
how to use cookies in JQuery
It is easy to use cookies in serverside like PHP,.NET...etc
I would like to use cookies for static website which is just HTML, CSS & JQuery .
Anybody know how to implement cookies in JQuery ?
the jQuery Cookie plugin is one way to go:
https://github.com/carhartl/jquery-cookie
You use it like so:
$.cookie('cookie_name', 'value'); // to set
$.cookie('cookie_name'); // to get
You don't need a jQuery plugin, you can easily access cookies in JavaScript. Here's how: https://developer.mozilla.org/en/DOM/document.cookie
But maybe the plugins linked in the other answers will give you easier access.
Are you sure that cookie is exactly what you need? There are localStorage which is much better in many scenarios.
You wrote that you want use cookies with static website, but cookies will be sent to the server and returned back. Is it really needed that you sent the information to the server on leading the static website? It increases the size of HTTP header and decreases the performance of the web site (see here for example).
Cookies have very hard restrictions. Corresponds to the section 6.3 of rfc2109 or 6.1 of rfc6265: At least 4096 bytes per cookie, at least 50 cookies per domain (20 in rfc2109), at least 3000 cookies total (300 in rfc2109). So the cookies one can't use to save too many information. For example if you would save state of every grid of every your web page you can quickly achieve the limits.
If you just want to save some users preferences for the page you can use localStorage and the usage is really easy.
If you prefer to use some jQuery plugin instead of direct usage of localStorage and if you need support old web browsers (like IE6/IE7) then you can use jStorage for example. In the case you has only less size of storage: 128 KB instead of 5 MB (see here and IE userData Behavior), but it's better as 4K which one has for cookies (see here).
I want that you just think a little about alternatives to cookies.
A simple, lightweight jQuery plugin for reading, writing and deleting cookies.For details demo and example see a link.https://github.com/carhartl/jquery-cookie
You can use plain JS by accessing document.cookie.
See: http://jsfiddle.net/ShsYp/
Also: https://developer.mozilla.org/en/DOM/document.cookie
Read Cookie:
var cookieValue = $.cookie("cookieName");
Write Cookie:
$.cookie("cookieName", "CookieValue");
|
STACK_EXCHANGE
|
Reification Index =
Current Reifications ==¶
Normal reification requires 4 statements in the store:
<ID> <rdf:type> <rdf:Statement>
<ID> <rdf:subject> <subject>
<ID> <rdf:predicate> <predicate>
<ID> <rdf:object> <object>
Note that this does not assert the statement itself. This requires a fifth statement:
<subject> <predicate> <object>
This is very expensive if many statements are to be reified. With the current storage of 64 bits per gNode (gNode = graph node: The number representing the resource) then this requires 768 bytes for every reified statement.
8 bytes per resource x 4 nodes per statement x 4 statements per reification x 6 indexes
Reifications in the Index¶
Another alternative is to modify the indexes to handle reified statements, and to write code which returns these modification as separate statements. Fortunately, the mechanism to artificially create statements already exists in the Resolver interface. So all that remains is to modify the indexes accordingly.
The current indexes are:
We can modify the first index to include the reification ID:
SPOM --> SPOMR
This ID (R) can be 0 when a statement is not reified. Otherwise it will be set to the gNode of the reification ID. Currently, negative numbers are never stored in any of these indexes (though they get used internally to represent temporary nodes during queries), so a negative gNode can be used to represent a statement that has been reified, but not asserted. Any code which lazily resolves on this index can then skip over any statements with a negative R.
This mechanism maps statments to a possible reification ID. With this technique any statement can be checked quickly (O(log(N))) in this index to see if it is reified. It has an associated cost of 8 bytes per statement (or an increase of about 4%). This applies if statements are reified or not.
A Reification Index¶
The other requirement is to map reification IDs back to their statments. It might be tempting to have an index which maps an ID directly to a pointer which indicates an offset in the first index. However, this loses coherency as that index gets modified, with blocks of the index being copied through various phases, etc. Instead, the only way to map indexes to the associated statements is to introduce a new index of the form:
Unlike the other indexes, this one only requires an entry whenever a statement is reified. The result is zero overhead for non-reified statements, and 40 bytes for each reification.
This proposal requires a constant overhead of 4% for the entire system. This seems acceptably low, but may be enough to consider allowing reified indexes as a different graph type. This should be possible with the current mechanism of model types.
The 4% increase should not matter from the perspective of disk space, but may be an issue in terms of performance, given that the more data each statement takes up, the more swapping the system needs to make (both in terms of heap space, and memory mapped files).
Once the 4% overhead has been accepted, the savings in reification are significant. Instead of an extra 768 bytes (over 4 statements) to reify a single statement, this new system will require just 40 extra bytes, and will only need to touch two indexes. This saves over 80% in space, and results in less disk access, making this a much more scalable form of reification.
Asserting the reified statement will only update the first index (instead of taking more space), and add to the remaining 5. This would have otherwise added to 6 indexes, so there is a slight saving of 32 bytes out of the original 192.
Another problem with adjusting the SPOM index to SPOMR is that the count of a range of statements will be incorrect if any of those statements contain negative values for R. This is because those statements would be counted, even though they have not been asserted.
There are three approaches to dealing with this situation:
- Brute-force the count. This is not scalable at all.
- Maintain 2 counts in the AVL nodes. The first is the count of all statements. The second is a count of only the asserted statements. This would incur an overhead of 8 bytes per 256 statements.
- Use counts from another index. This is not practical unless we have redundant indexes. These are not currently available, but will be discussed on another page.
|
OPCFW_CODE
|
October 29, 2020
TorchScript is one of the most important parts of the Pytorch ecosystem, allowing portable, efficient and nearly seamless deployment. With just a few lines of
torch.jit code and some simple model changes you can export an asset that runs anywhere
libtorch does. It’s an important toolset to master if you want to run your models outside the lab at high efficiency.
Good introductory material is already available for starting to work with TorchScript including execution in the C++
libtorch runtime, and reference material is also provided. This article is a collection of topics going beyond the basics of your first export.
Tracing vs Scripting #
Pytorch provides two methods for generating TorchScript from your model code — tracing and scripting — but which should you use? Let’s recap how they work:
Tracing. When using
torch.jit.traceyou’ll provide your model and sample input as arguments. The input will be fed through the model as in regular inference and the executed operations will be traced and recorded into TorchScript. Logical structure will be frozen into the path taken during this sample execution.
Scripting. When using
torch.jit.scriptyou’ll simply provide your model as an argument. TorchScript will be generated from the static inspection of the
It’s not obvious from the tutorial documentation, but choosing which method to use is a fairly simple and fluid choice:
Use Scripting by Default #
torch.jit.script captures both the operations and full conditional logic of your model, it’s a great place to start. If your model doesn’t need any unsupported Pytorch functionality and has logic restricted to the supported subset of Python functions and syntax, then
torch.jit.script should be all you need.
One major advantage of scripting over tracing is that an export is likely to either fail for a well-defined reason — implying a clear code modification — or succeed without warnings.
Unlike Python, TorchScript is Statically Typed
You will need to be consistent about container element datatypes, and be wary of implicit function signatures. A useful practice is to use type hints in method signatures.
Despite TorchScript’s ability to capture conditional logic it does not allow you to run arbitrary Python within
libtorch — a popular misconception.
Use Tracing if You Must #
There are a few special cases in which
torch.jit.trace may be useful:
- If you are unable to modify the model code — because you do not have access or ownership — you may find scripting the model simply will not work because it uses unsupported Pytorch/Python functionality.
- In pursuit of performance or to bake in architectural decisions the logic freezing behavior of tracing might be preferable — similar to inlining C/C++ code.
Pay Close Attention to Tracer Warnings
Due to how tracing can simplify model behavior, each warning should be fully understood and only then ignored (or fixed). Also, be sure to trace in eval mode if you are exporting a model for production inference!
Use Both Together #
Device Pinning #
If you find yourself using
torch.jit.trace on some code, you’ll have to actively deal with some of the gotchas or face performance and portability consequences. Besides addressing any warnings Pytorch emits, you’ll also need to keep an eye out for device pinning. Just like
torch.jit.trace records and freezes conditional logic, it will also trace and make constant the values resulting from this logic — this can include device constants.
Using this sample code:
def forward(X): return torch.arange(X.size(0))
If we trace while executing on CPU or GPU we get this TorchScript (scroll to the right on mobile):
def forward(X: Tensor) -> Tensor: _0 = ops.prim.NumToTensor(torch.size(X, 0)) _1 = torch.arange(annotate(number, _0), dtype=None, layout=0, device=torch.device("cpu"), pin_memory=False) return _1
You can see that
torch.device("cpu") has been inserted as a constant into the generated TorchScript. If we try to get clever with this code:
def forward(X): return torch.arange(X.size(0), device=X.device)
Tracing will now result in TorchScript that is pinned to the tracing device. When traced on GPU, we see this:
def forward(self, X: Tensor) -> Tensor: _0 = ops.prim.NumToTensor(torch.size(X, 0)) _1 = torch.arange(annotate(number, _0), dtype=None, layout=0, device=torch.device("cuda:0"), pin_memory=False) return _1
Tensors Created During Tracing Will Have Their Device Pinned
This can be a significant performance and portability problem.
Performance and Portability #
If we later deserialize and run this TorchScript in
arange tensor will always be created on the device that is pinned —
torch.device("cuda:0") in the examples above. If the rest of the model is running on a different device this can result in costly memory transfers and synchronization.
This device pinning issue extends to multi-GPU scenarios as well. If you have traced and exported a model on
cuda:0 and then run it on
cuda:1 you’ll see transfers and synchronization between the devices. Not good. Perhaps even worse, if such a model is run in an environment without any CUDA-capable device it will fail since
cuda:0 doesn’t exist.
Replace Tensors Created During Execution With Parameters
Tensors created in the execution path while tracing will have their device pinned. Depending on model logic, these can often be turned into Parameters created during construction.
An example of the problem looks like this in Nsight Systems:
Tensor Subscript Mask and Indexing Will Pin Devices #
Unlike their more explicit counterparts (
index_select), using tensor subscripting will pin the mask or indexes to the tracing device:
def forward(X): return X[X > 1]
Generates this TorchScript:
def forward(X: Tensor) -> Tensor: _0 = torch.to(torch.gt(X, 1), dtype=11, layout=0, device=torch.device("cpu"), pin_memory=False, non_blocking=False, copy=False, memory_format=None) _1 = annotate(List[Optional[Tensor]], [_0]) return torch.index(X, _1)
def forward(X): return X.masked_select(X > 1)
Generates this TorchScript:
def forward(X: Tensor) -> Tensor: _0 = torch.masked_select(X, torch.gt(X, 1)) return _0
The same pattern holds for
tensor.index_select(0, indexes). This device pinning carries the same performance and portability risks as noted above.
Replace Tensor Subscripting With
Subscript-based masking and indexing will always pin the tracing device into generated TorchScript. :(
Direct Graph Modification #
Once we’ve used
torch.jit.trace to generate a ScriptModule or ScriptFunction we can use
.code to understand exactly what TorchScript has been generated. Though it has an entirely undocumented interface it is possible (and fun) to access and modify the generated TorchScript AST directly via the
The most useful parts of the API are defined in torch/csrc/jit/python/python_ir.cpp. As you can see, all the basic functionality is present for finding and changing the graph nodes you want. If you change nodes or arguments and then persist the module your subsequent TorchScript load and inference will reflect your changes, though modules cannot be changed recursively in this way (
torch.jit.freeze can be useful here).
An example of the kind of graph modification that is possible:
def undevice(tsc): # use ::to variant which does not hardcode device for to_node in tsc.graph.findAllNodes('aten::to'): i, dtype, layout, device, pin_mem, non_blocking, copy, mem_format = list(to_node.inputs()) to_node.removeAllInputs() for a in [i, dtype, non_blocking, copy, mem_format]: to_node.addInput(a) for constant in tsc.graph.findAllNodes('prim::Constant'): if not constant.hasUses(): constant.destroy()
The above code will modify a traced graph, changing
aten::to to use an overload which doesn’t change memory location.
But what is this really useful for? As an undocumented API you’d be unwise to use this capability in a production pipeline unless you like maintenance coding. I would only recommend it for research, as in the above example which I used to understand and profile the transfer/synchronization behavior of tensor subscripting.
Don’t Bother With Direct Graph Modification
For legitimate production use-cases you can almost always find a way to modify your model code to generate the TorchScript you want.
Rewrite for ONNX/TensorRT Export #
You can get some awesome results with TensorRT but exporting a model from Pytorch to TensorRT is far from a sure thing. The export path to ONNX and then to TensorRT can fail due to missing or incompatible operations at either step and this can be frustrating.
After the obligatory Google search, I’ve found a reasonable hail-mary approach is to rewrite your tensor processing code to avoid unsupported operators. I can’t give general advice for this but let me show you an example of how this can be possible:
class RI(torch.nn.Module): def forward(self, X, repeat): return X.repeat_interleave(repeat, dim=0) inputs = (torch.arange(5), torch.tensor(3)) torch.onnx.export(RI(), inputs, 'please_work.onnx', opset_version=11)
RuntimeError: Exporting the operator repeat_interleave to ONNX opset version 11 is not supported. Please open a bug to request ONNX export support for the missing operator.
However, the behavior of
repeat_interleave with a fixed
dim argument can be replicated in a form that will export to ONNX (but good luck passing code review):
class RW(torch.nn.Module): def forward(self, X, repeat): X = X.reshape(1, *X.size()).expand(repeat, *X.size()) return torch.cat(torch.unbind(X, dim=1))
N.B. The above code is only equivalent to
repeat_interleave(X, dim=0) though it can be adapted for any fixed dim.
The same approach can be taken to work around incomplete support in TensorRT, which is far more prevalent in my experience.
Efficient and portable Pytorch production deployment used to be almost impossible, but the introduction and continued evolution of TorchScript has been great for the ecosystem. There are still a few rough edges and tricks, and I hope you’ve found something new or useful in the topics above.
If there is some major concern or problem you’re having with TorchScript or Pytorch production deployment please get in touch — I’m always looking for new areas to research.
|
OPCFW_CODE
|
[MyResearch] Location based Games as a testbed for Research
The paper "Staging and Evaluating Public Performances as an Approach to CVE" (Steve Benford, Mike Fraser, Gail Reynard, Boriana Koleva and Adam Drozd The Mixed Reality Laboratory, Nottingham), claims that staging public performances can be a fruitful approach to CVE research. The authors describe four experiments in 4 contexts (four different location based games used a art/public performance).
For each, we describe how a combination of ethnography, audience feedback and analysis of system logs led to new design insights, especially in the areas of orchestration and making activity available to viewers.
Among many methods of conducting research (proof implementation as proof of concept, "demo or die", controlled experiment in laboratorym theory backed up with mathematical proof...), they propose to put technology out of the lab and create an "event" (vow event-based research ;)
And don't forget ! CSCP stands for Computer Supported Cooperative PLAY
This is also a nice paper in the sense that it provides idea for analyzing mobile collaboration:
Ethnographic studies rely on a variety of data including field notes, photographs and video. As noted above, capturing social interaction in CVEs (i.e. collaborative virtual environment) on video is a difficult task. Resources are often limited so that only one or two viewpoints can be captured, and current analysis tools do not handle multiple synchronized viewpoints at all well. Detailed analysis of sessions that involve tens of participants is even more difficlt. In short, it can be time consuming, expensive and frustrating work to analyse videos of sessions in CVEs. Analysis of system logs is also more problematic than it need be. At present, there is no agreed format for log data and no readily available suites of analysis tools. (...) tools are required to automatically analyse CVE recordings in order to provide researchers with guidance as to where potentially interesting events have taken place. We have therefore recently developed an scene extraction tool for automatically analyzing 3D recordings. Our current implementation determines interesting scenes based upon the proximity of participants (although it could be extended to account for other factors such as orientation, audio activity, or the identities of key characters). First, it uses a clustering algorithm to group participants on a moment-by-moment basis. It then looks at changes in clusters over time in order to determine on-going scenes. Figure 9 shows an example of its output. In this case, we are looking at a GANTT chart representation of the key scenes in chapter 1 of Avatar farm (determined with a proximity threshold of 15 meters the cut-off point for audio communication). Time runs from left to right and the different colours distinguish scenes that were occurring in different virtual worlds. The tool allows the viewer to overlay the paths of different participants through the structure. We see two participants (Role 2 and Role 3) in our example. We propose that tools such as this can assist researchers in analyzing activity in CVEs by enabling them to more easily home in on potentially interesting social encounters. Sharing across the CVE community our final observation concerns the sharing of data between researchers. In order to maximize the use of recordings, it will be necessary to share them between different researchers. As these techniques mature, the CVE community needs to agree on common formats for recordings so that we can establish shared repositories of recordings of different events in CVEs.
|
OPCFW_CODE
|
Will weight training stagnate growth in a teenager?
Will weight training stagnate growth in a teenager?
Is this a myth or fact? If it does, how does it stagnate a youth's growth?
Is there any specific reason why you suspect this could happen?
nothing specific, but was just wondering if it's a myth or not.
For future reference I'd like to point out that we're not here to debunk myths without any decent source that supports it.
This belief is widespread enough for teenagers to think twice before weight training, or for their parents to disallow it. We could do with facts here. Think of the following question: "Could squatting hurt my knees?". No need for decent sources in support to genuinely be in doubt, so it is a legitimate question.
I'm guessing there is a correlation vs causation component to this myth. Shorter limbs generally make it easier to push weights, because of leverage, so you'll see people with that physical makeup having more "success" and being encouraged by their results, and maybe focusing on that activity. Kind of like why you don't see a lot of really tall elite gymnasts. It's not because gymnastics stunts the growth.
The concern regarding stunting the growth of teenagers is related to injuring growth plates. It turns out that weight training does not increase the incidence of growth plate injury and does not stunt growth.
However, injury prevention requires correct form and the avoidance of overloading. It is especially important to supervise their training and teach form first. Crossfit Kids, for example, emphasizes form with very low weight (PVC only) until the student shows the emotional and physical maturity to use real weight. You can rely on an adult to make good decisions about their own limits, but the instructor should be making those decisions for a child.
Sure...
Training improperly or with poor form will cause all sorts of injuries both acute and degenerative, whether one is a child, teenager or adult.
...but not really
That being said, the argument that lifting heavy things is itself somehow detrimental to growth is susceptible to a simple counterexample: children growing up on farms do all sorts of heavy lifting and their growth seems to be benefited, not stunted.
This Rippetoe and Pendlay interview has a layman's answer:
Pendlay: I personally don’t see any physical problems. We don’t hesitate to put kids in soccer or gymnastics at 4-5 years of age, either of which are way more stressful on the body than weight training, and either of which are way more likely to cause injury. So I think the whole safety thing is a non-issue.
...
Rippetoe: I would just like for somebody to explain to me why it would stunt a kid’s growth. Do they actually think it smashes them down shorter, or shuts off the supply of growth hormone due to pressure in the skull, or that it destroys all the growth plates, or that it [scares] off the Tooth Fairy?
The Starting Strength book has detailed statistics comparing youth lifting to other sports in terms of injuries.
This article by Lon Kilgore goes more in-depth from a scientific perspective.
For more information, check out this article as well: http://www.exrx.net/WeightTraining/Weightlifting/YouthMisconceptions.html
|
STACK_EXCHANGE
|