Which compiler compiled this file?
I am using gnu gcc and armcc to compile a few C files. How can I get the information about which compiler compiled which file?
E.g.: was test.cpp compiled by armcc or by GNU GCC?
The makefile is very complicated and I am looking out for a command by which I can check which compiler compiled which file.
Any ideas?
Do you have access to the intermediate object files?
Perhaps the object file format might give you some clues
Yes, I have access to the .o files
Sometimes you can look at the file with a hex editor and tell if the compiler wrote its name into the file.
I'm not sure if there's an easier way, but you can find it embedded in the binary with gcc (at least on my platform):
$ hexdump -C foo | grep -A2 GCC
00001030 00 00 00 00 00 00 00 00 47 43 43 3a 20 28 55 62 |........GCC: (Ub|
00001040 75 6e 74 75 2f 4c 69 6e 61 72 6f 20 34 2e 37 2e |untu/Linaro 4.7.|
00001050 32 2d 32 32 75 62 75 6e 74 75 33 29 20 34 2e 37 |2-22ubuntu3) 4.7|
This does not work with executable files (or .o files) compiled with gcc here (gcc on OS X). This seems to be platform-dependent.
@RandyHoward- though actually that may be because it's clang behind the scenes?
gcc --version
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00). Regardless, his question was about multiple compilers; I'm not sure this method is a valid answer.
Yup - the llvm bit. I think it's just a gcc frontend to llvm, but I'm not certain.
Some compilers embed the compiler name and version into the binary, but not all. We can also embed whatever information we want into the binary:
gcc -DCOMPILER_DETAILS='"gcc 4.3.3"' temp.c
In temp.c, use the COMPILER_DETAILS macro somewhere its value is actually used, e.g. pass it to printf, so that the string literal gets embedded in the generated binary. Don't assign the string to an unused variable: the compiler will optimize it away and nothing will be embedded.
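A minimal sketch of such a temp.c (the fallback definition and the helper names are mine, added so the file also compiles without the -D flag):

```c
/* Build with:  gcc -DCOMPILER_DETAILS='"gcc 4.3.3"' temp.c  */
#include <stdio.h>

#ifndef COMPILER_DETAILS
#define COMPILER_DETAILS "unknown"   /* fallback when -D is not passed */
#endif

/* Returning the literal (rather than assigning it to an unused variable)
   keeps the optimizer from discarding it, so the string survives into
   the binary where the hexdump pipeline above can find it. */
const char *compiler_details(void)
{
    return COMPILER_DETAILS;
}

void print_compiler_details(void)
{
    printf("built with: %s\n", compiler_details());
}
```

Running the hexdump/grep pipeline shown earlier on the resulting binary should then reveal the tag alongside whatever the compiler itself embeds.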
[Source: STACK_EXCHANGE]
Too many people think that copyrights and patents are good. What they forget is that copyrights and patents are monopoly powers, and monopolies are never good.
This is long enough to recoup any investment on the work and make a profit, but not the objectionable lifetime income that removes the creator of the work from doing productive work in the marketplace.
This means that the creator of the work must be given referential credit whenever the work is used, whether or not the patent or copyright has expired.
Compulsory licensing, now used for executing live or recorded performances of copyrighted musical scores and stage plays, would be required for all protected works of intellectual property. This removes the monopoly powers from the copyrights and patents by allowing others to produce the protected work by paying small royalties. In the case of works intended to be one-of-a-kind items not for sale (such as works of art), the word "copy" must be placed on each copy in an inconspicuous place.
The multiple sourcing of the same textbooks by multiple publishers removes the monopoly power that forces the prices to be too high for students to afford. In addition, revisions and new editions must not preclude students from using older versions in the classroom.
The sale of a patent or copyright means that those with money can buy monopoly power. This is wrongdoing, and must be eliminated.
The person or people who actually created the work must be the owners, not any business hiring the creation work. This prevents businesses from buying up patents and copyrights to amass monopoly powers.
This prevents the fights for control over patents and copyrights that often ensue when collaborating groups break up. Royalties shall be divided according to portion of ownership.
This prevents people from obtaining government grants to create a work, and then making a profit from the government expenditures.
If the beneficiary of the exclusive contract chooses to not use the subject of the contract, the other party is kept from selling it to someone else. This is a monopoly power, and must be prohibited.
A noncompete agreement prevents the person leaving the business from using his skills for income. This is a monopoly power and must be prohibited.
This prevents monopoly situations, and prevents the patent or copyright from outlasting the product it was intended for. People should have the right to obtain a discontinued product after the manufacturer wants to have nothing more to do with it.
The mechanical royalty rate must be used, so the holder of the patent or copyright can make a profit, but also so that there is no lifetime income or lifetime savings that removes the creator of the work from doing productive work.
Since no money is made from making one copy of the work, and since the expense of collecting a mechanical royalty for one copy exceeds the amount of the royalty itself, it must not be collected. Copies made from broadcasts or web sites must not be subject to royalties. Art students making copies of great works of art as practice should likewise not be charged royalties. But if any of these copies are sold, then royalties should be charged.
Illegitimate uses must not supersede the legitimate uses of such a product, in the same way that tools often used for burglary also have legal uses that prevent them from being sold.
This prevents the monopoly situation that occurs when a manufacturer uses its copyrights to charge high prices.
This prevents the monopoly situation that occurs when a manufacturer uses its copyrights to charge high prices. It also protects people with software they need that won't work when a newer version of some other software is installed (such as an operating system).
This prevents the monopoly situation that occurs when a manufacturer uses its copyrights to charge high prices, and also the case where the copy protection outlasts the copyright.
It is not government's place to stop patent and copyright infringements. This is the purpose of the civil courts. If repeated infringement occurs, the judge can award double or treble damages.
Part of the problem is that government made copyrights and patents too lucrative and last too long, forcing people to violate the patent or copyright to obtain a needed item long after it has been discontinued.
This is nothing but the government favoring the rich at the expense of the rest of us. This is a major reason that copyright and patent infringement must not be crimes.
[Source: OPCFW_CODE]
Confusing to have 2 levels to ftp site
Historically the main ftp site we made available to users was
ftp://ftp.pombase.org/pombe/
there was a level above at the EBI, but we didn't really use it. Mark used to put stuff here.
Now we use this as the "entry point"
ftp://ftp.pombase.org/
which lists
archive/ 2/4/19, 8:07:00 PM
external_datasets/ 3/13/19, 10:45:00 AM
nightly_update/ 4/30/19, 2:05:00 AM
pombe/ 9/4/18, 1:00:00 AM
releases/ 3/13/19, 10:19:00 AM
and has no README or instructions.
I suggest we revert to using
ftp://ftp.pombase.org/pombe/
so people see the directories they are interested in when they land. This may partly explain why people struggle to find things?
We should move the other directories (which are not advertised AFAIK) down to this entry point and document them.
in fact
ftp://ftp.pombase.org/pombe/README
says
This is the root directory for fission yeast data files maintained by PomBase.
If you use this data in a publication please cite PomBase as described:
http://www.pombase.org/about/citing-pombase
(i.e. it describes itself as the root directory).
so I think we just linked to the incorrect directory?
It might be useful to design this page
https://www.pombase.org/datasets
so it describes all of the contents here
ftp://ftp.pombase.org/pombe/
(including any moved in from above)
and can also be used to generate the README file so it doesn't need to be maintained in two places (is that possible?). If not, just add a link to the web page describing the directory/subdirectory contents (in a way that maps to the current contents)
1. Change the link on the website https://www.pombase.org/datasets to ftp://ftp.pombase.org/pombe/
2. Move archive into ftp://ftp.pombase.org/pombe/
3. Make external_datasets a linked file from ftp://ftp.pombase.org/pombe/
Move archive into
ftp://ftp.pombase.org/pombe/
Just to check: do we definitely want to do that? We would need to move that directory into the pombe-embl subversion repository because the pombe directory on the FTP site is just a copy of pombe-embl/ftp_site/pombe from SVN. The archive directory is 8GB and it would be added to everyone's checked out copy of pombe-embl next time you do an "svn update".
My vote is to add archive to SVN despite the size because I think it makes sense to keep it in SVN along with the other FTP site contents.
These changes are done:
change link on the website https://www.pombase.org/datasets to ftp://ftp.pombase.org/pombe/
Make external_datasets a linked file from ftp://ftp.pombase.org/pombe/
Make releases a linked file from ftp://ftp.pombase.org/pombe/
Could we just link the archive from here:
ftp://ftp.pombase.org/pombe
as for external datasets?
Is that what you are suggesting?
I was thinking that it makes sense to add the archive to SVN but I wanted to make sure that everyone's OK with the pombe-embl directory tripling in size. A link is the other possibility.
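For what it's worth, the linking option could look something like this on the server (all paths below are made up for illustration; the real layout lives under pombe-embl/ftp_site):

```shell
# Illustrative sketch only: expose 'archive' as a symlink instead of copying
# the 8GB directory into every checkout. Paths are hypothetical.
rm -rf /tmp/pombase_demo
mkdir -p /tmp/pombase_demo/bulk/archive
mkdir -p /tmp/pombase_demo/ftp_site/pombe
ln -s /tmp/pombase_demo/bulk/archive /tmp/pombase_demo/ftp_site/pombe/archive
ls -l /tmp/pombase_demo/ftp_site/pombe/
# In the real repository one would then 'svn add' the link so it is
# propagated with the rest of the FTP site contents.
```

Note that whether the FTP daemon actually follows such links (especially outside its served root) depends on its configuration, so this would need checking on the live site.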
but I wanted to make sure that everyone's OK with the pombe-embl directory tripling in size
for me it would be mildly inconvenient but not worse; I don't really object
I would be happy with linking if that is possible
I've added a link to the archive directory to SVN. I'll check the FTP site on Wednesday and then close this issue.
I would be happy with linking if that is possible
I've added the link. I think this issue is done now.
OK. I wish it didn't show the size as "0"; that always makes me think they don't exist.
There is a random .tmp file in there...
There is a random .tmp file in there...
Thanks. I've removed it.
[Source: GITHUB_ARCHIVE]
Building a Correlation Technology Platform Application
Building a Correlation Technology Platform-Provisioned Software Application:
A Simple Description
This document has been written to help lend clarity to some of the issues that have not been
well understood about using the Correlation Technology Platform to build a powerful Vertical
Market-Specific software application. Let’s start with the basics.
The Correlation Technology Platform is a software product which implements the well
understood “Platform” software architecture typical of the software products sold to and used by
Enterprise scale customers. While able to support Enterprise scale customers, it can be used
for “mid-sized” and “small” applications as well.
A Platform is an “enabling technology”, meaning that it typically does not provide direct “user
facing” or “consumer facing” functionality. Rather, the Platform provides a set of generic
capabilities within its particular domain which are used as foundational software layers by
enterprise software product developers to create specialized software services used by
enterprise corporate users or enterprise corporate customers. One well known example of a
platform is Websphere from IBM. Websphere is used as a platform to build Web-provisioned
software products. Websphere provides generic functions needed by enterprise software
developers when they need to build customized software products and provision them to
enterprise employees and customers. Please note that the software that ends up in front of the
consumer is not written by IBM, but by the developers in the employ of the enterprise that buys
Websphere from IBM.
Likewise, no customer or employee of a Correlation Technology Platform-enabled software
product is likely to see software written by Make Sence. This is the actual sequence:
• A company recognizes that a market exists for a certain software product.
• That company performs a due-diligence exercise that determines that certain types of
software capabilities are necessary or desirable as a foundation to actually create the
envisioned software product as rapidly as possible.
• As part of that due diligence exercise, competing approaches and providers of those
software capabilities are evaluated.
• Very typically, a software product implemented using software platform architecture is
recognized as the superior approach.
• Competing software platforms that provide such capabilities are then evaluated.
• If (as in the case of current CTP licensees) the determination is made that the Correlation Technology Platform is the right enabling technology for the proposed product, the company then licenses the Correlation Technology Platform from Make Sence.
• Then, the company/licensee will use developers on their own staff or developers from consultants (possibly including Make Sence Florida) to build their software product on top of the CTP. In particular, the Applied Analytics components (which we call Refinement components) written to operate on the Correlation output (which we call the Answer Space) will all be unique and possibly patentable products of the respective licensees.
• When the company rolls out their completed software product it can compete in the market with other products, where the competitive advantage delivered by the CTP can be brought to bear.
Copyright 2014 Make Sence Florida, Inc. All Rights Reserved. Page 1
The CTP is therefore like an automobile engine: while all engines share some components and principles in common, and all gasoline engines share even more, almost every model of automobile on the market has an engine expressly designed for the purpose of powering that particular model. One size, one feature set, one set of functional parameters does not fit all.
That is why there are differences in the CTP for a Recruitment application compared to a Risk
Assessment application compared to an Educational/Critical Thinking application. Make Sence
will not build a Recruitment application, nor a Risk Assessment application, nor an
Educational/Critical Thinking application. Companies have licensed the CTP so that they can
adapt the CTP to those particular purposes, exactly as licensees do with IBM and Websphere. IBM doesn't build websites and website applications – companies that license
Websphere build them, using their own developers or consultants – but Websphere behavior is
often extensively modified (via detailed configuration and other means).
So what are the “missing pieces” that a company interested in building a Correlation Technology
Platform-powered application needs to obtain from Make Sence? For companies with strong
technical staffs, the missing pieces are mostly conceptual. For one current licensee, the missing
pieces are mostly in the form of product support, answering questions such as “what’s the best
way to do this with the CTP?”. For companies that have fewer technical resources, such as a
different current licensee, we are able to provide software development consulting services to
help build their proposed product (again, on top of the CTP). For companies that require and
wish such services, we are able to provide management consulting services as well, such as
product definition, external software product integration design, and similar services.
The answers to some typical questions should now be more clear. No, Make Sence/Make
Sence Florida does not have to build your application. Further, a licensee with a strong product
concept and strong technical skills needs relatively little interaction with Make Sence after
purchase of the license. Yes, Make Sence can provide “outputs a smart team can build off” but
that is a choice by the licensee, not a requirement.
With respect to the API: we had already published a first API, but that API became obviously
inadequate as more deployment models for CTP-provisioned applications became known to
us. We are right now (February, 2014) coding a complete overhaul of the API which will
accommodate a large number of use cases. This API will not “create” a Recruitment product, or
any other vertical market product, but will allow licensees to build vertical market products more
quickly and efficiently.
And let’s firmly dispose of the mistaken idea that a licensee needs to be a “big company with a Research and Development arm” to bring a Correlation Technology-powered product to
market. Licensees do not need an R+D arm to follow these steps:
1. Decide what function is wanted (for example, find the best available candidate for an open position)
2. Decide what data you need to collect to support that function (examples are resume
data, job description data, company data)
3. Decide what criteria you want to consider to implement that function (examples are
employee honesty, company social policy, etc)
4. Decide what methods you believe – based upon your own expertise in the domain – are
the best forms of applied analytics to compare and rank candidates and jobs (examples
are statistical, rule-based, semantic, logical)
5. Decide which human or machine end-users will see the outputs of your applied analytics
6. Decide, if humans need to see the outputs, what they will see (the UI).
We know we present a lot of new ideas, but from the implementation view we’re just another
enterprise software platform company. We welcome any questions you may have, and know
that once you are able to “get your arms around” our process you will see it is in fact a very
tractable process with very powerful potential results. We would prefer not to think up the
product for you. We do not know your market’s “pains”. We just want to provide you with a
capability that provides your company with an amazing competitive advantage.
[Source: OPCFW_CODE]
Batch Size and Velocity Fluctuations
January 27, 2009
I recently wrote a post on Velocity Signature Analysis and have been looking at how undertaking large chunks of work as a complete team impacts velocity. We are currently three quarters of the way through a major (4 months long) piece of functionality and velocity is finally rising. This seems a pattern; for the early portion of a new area of work we spend a lot of time understanding the business domain and checking our interpretation using mock-ups and discussions. Velocity, in terms of functionality built and approved by the business is down during this time since many of the team members are involved in understanding the new business area rather than cranking out code.
As project manager I can get jittery: did we estimate this section of work correctly? Our average velocity for the last module was 60 points per month and now we are only getting 20! Weeks and weeks go by as whiteboards get filled and designs get changed, but the tested story count hardly moves. Compounding this Discovery Drain phenomenon is the Clean-up Drain pattern. During the early portions of a new phase, fixing up the niggling issues hanging over from the last phase seems to take a long time. This makes perfect sense: if they were easy they would probably have been done earlier. It is always the difficult-to-reproduce bug, or the change request that necessitates a rework of an established workflow or collaboration among multiple stakeholders, that seems to bleed into the next development phase. While there may only be 3 or 4 bugs or change requests hanging over, they take a disproportionate amount of time to resolve.
I sometimes use a booster rocket analogy for illustrating team cohesion and vision. When team members are not aligned with a common project goal, their individual motivations can result in a suboptimal team vector. By aligning team member efforts through common goals and a way for people to grow and get something valuable for themselves by making the project successful, we align individual vectors and produce a much greater project vector.
There is a parallel with project velocity too. If 30% of the team’s capacity is consumed in better understanding a complex business domain and 30% of the team’s capacity is spent fixing bugs and change requests for which we may earn little velocity credit, then that only leaves 40% for raw velocity-earning development.
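The capacity split above is simple enough to sanity-check (the shares and the raw rate are the illustrative figures from this post, not measurements):

```python
# Rough sketch of the capacity-drain arithmetic described above.
def effective_velocity(raw_points_per_month, discovery_share, cleanup_share):
    """Points/month left over for raw velocity-earning development."""
    feature_share = 1.0 - discovery_share - cleanup_share
    return raw_points_per_month * feature_share

# 60 points/month of total capacity, 30% Discovery Drain, 30% Clean-up Drain:
v = effective_velocity(60, 0.30, 0.30)
print(f"{v:.0f}")  # roughly 24 points/month survive -- the same order of
                   # magnitude as the observed drop from 60 to ~20
```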
When everyone is focused on iteration features the velocity increases.
As these tasks are completed effort can be returned to development and velocity increases. The process leads to lumpy throughput, but seems preferable to the alternatives. We could let our BA’s run ahead with analysis, filling the hopper with story outlines ready for consumption by the development team. We do this slightly, but are conscious of not letting it get too far since we lose the whole team focus on tasks and experience Pipelining Problems.
If the QA and developers are not present for at least the major analysis conversations we lose valuable insights, time saving suggestions, and create the need to reiterate points later. If business users and BA’s are too far ahead then when development questions or bugs arise that need their input there is a task switching overhead as they “park” their current work, reorient themselves in the task at question and help solve the problem. So instead work is undertaken in vertical slices, conducted by the majority of the team.
Like everything, it is a balancing act: we want to exploit role specialization when it brings advantages, but we also see the benefit of a multi-disciplined team tackling and driving discrete units of work through to user acceptance. So, rather than a smooth flow of stories through the production process, we get some slow-downs and speed-ups as the team collectively takes on chunks of learning and then delivery.
Lean production systems teach us that smaller batches can be a way to smooth throughput. If we could find a way to structure the project into smaller chunks rather than 3- or 4-month-long modules, these peaks and troughs would be smoothed out and velocity as a whole increased. Either this is not possible in our project domain or, more likely, I have not been able to find a way to do this yet. Our business domain is complex and naturally divides into chunks. We are replacing a suite of legacy applications, and as we finish replacing one application, disconnect its interfaces, and move our focus to the next one, we experience the learning cycle and tidy-up issues described earlier.
I suspect this is a function of our project - which is really a program of application replacements. So, rather than get overly concerned with the oscillations in velocity, we can just zoom out some more and say, overall our velocity averages 45 points per month. Yet given this is a 4 year program there are millions of dollars of difference between our forecasted end date and spend between the best, average and worst velocities experienced per module.
So is the XP term “yesterday’s weather” really a good indicator? Can we use recent velocity to predict future velocity? I believe so; we have to allow for explainable variations, but estimation based on what has been proved to be achievable seems fairer than speculation on what we expect or would like to happen (traditional planning). It is just that sometimes the weather is a little changeable. Like here in Calgary at the moment, where last Tuesday we were able to go running in shorts on a sunny +12C day and by Thursday we were wrapped up running in the snow with -25C wind-chill. However, on average, I predict the weather for February to be about -5C to -10C, probably.
Interesting post Mike. I've grappled with this problem as well: how to maintain some sense of velocity rhythm given work like bugs, change requests, and new learning activities? I think the approach to maintaining a velocity rhythm lies in assigning the appropriate priority and estimates to all of the work to be undertaken: modeling, bugs, change requests (if treated separately from "stories", as is often the case in a traditional environment), with increased estimates for the work (models, bugs, change requests, stories, etc.) that will likely require significant new learning (one could view work with significant up-front learning as spikes with points for increased complexity). So, the number of stories or features done per iteration will vary, but the amount of work per iteration (points or whatever the preferred measure) is somewhat consistent.
Posted by: Luke | February 07, 2009 at 11:12 AM
Thanks for your feedback, I agree that you want to track (via points or whatever) all kinds of work to better monitor progress internally, even if external business feature based completeness does not seem to be moving that quickly.
I did not go into in my post because it is a little complex, but we actually have two sets of points: developer points and vendor points. Our vendor points are business functionality based, these are what I report on externally and what frequently slow down as changes, bug fixes and learnings occur. They are largely fixed, if a new high priority change request comes through we trade off business priority within our limited total capacity for the project.
However, our developer points are for internal consumption and are created for every bug and change we undertake. Tracking developer points we can see what we are busy on and estimate the work for an iteration, even when we do not get many vendor points done.
This is not my preferred approach, I inherited the project part way through and think the switch to a new consolidated metric would not be worth the disruption right now. I would prefer to see a single, transparent estimated backlog of features with bugs and change requests prioritized amongst the functionality.
Anyway, thanks again for your comment, I believe you are right and that creating estimates for the additional work really helps illustrate a more consistent velocity. As for whether it allows you to more accurately predict final completion time or not, I think is a different matter. That would assume a consistent percentage of work dedicated to changes and the like throughout the project which (in our case) is hard to predict.
Posted by: Mike Griffiths | February 10, 2009 at 03:05 PM
[Source: OPCFW_CODE]
Andrew M Blanks
Rises in intracellular calcium are essential for contraction in myometrial smooth muscle. Calcium is not only an important second messenger for the generation of force via myosin light chain kinase, but also depolarizes the plasma membrane, allowing activation of other voltage-dependent ion channels. This voltage-dependent control of excitability is modulated in a gestation-dependent manner in all mammalian species, such that as gestation progresses the myometrium becomes increasingly excitable. These biophysical changes are mediated by alterations in ion channels, pumps, agonist receptors, and the subcellular architecture of heterogeneous cell types within the uterus. To consider the process of activation of the uterus in its entirety, a combination of molecular, biophysical, and modelling techniques is required.
Modelling the uterus:
We are currently developing a computational model of the human and rodent uterus. The model works on several levels, from classical Hodgkin-Huxley type modelling of time dependent active conductances in a single cell, through to coupled models of heterogeneous networks. We have assembled complete models of all active conductances in single cells for the purposes of drug discovery and investigating higher order phenomena such as functional redundancy. We have also assembled basic models of spatio-temporal patterns of excitability mapped to the full 3D geometry of the pregnant human and rat uteri. Our end goal is to have a computer based simulation of the pregnant human uterus to test quantitatively scientific ideas of cellular function; to model the effects of mutations in key genes; and to simulate complex phenomena in order to improve clinical treatments and diagnosis.
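As an illustration of what the single-cell layer of such a model involves, here is a minimal forward-Euler integration of the classic Hodgkin-Huxley equations (standard squid-axon parameters). This is a generic textbook sketch, not the published uterine model, whose conductances and parameters differ:

```python
import math

def hh_trace(i_ext=10.0, dt=0.01, t_max=50.0):
    """Membrane voltage (mV) under a constant current step (uA/cm^2)."""
    c_m = 1.0                              # membrane capacitance, uF/cm^2
    g_na, g_k, g_l = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387  # reversal potentials, mV
    v, m, h, n = -65.0, 0.053, 0.596, 0.318
    trace = []
    for _ in range(int(t_max / dt)):
        # Voltage-dependent rate constants for the gating variables m, h, n.
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        # Ionic currents and the membrane equation.
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        trace.append(v)
    return trace

vs = hh_trace()
print(max(vs))  # action potentials overshoot 0 mV
```

A full tissue model couples many such cells (with heterogeneous channel complements) over a spatial geometry, which is what the 3D uterine simulations described above do at scale.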
We have active ongoing research programmes with Ferring Pharmaceuticals and GlaxoSmithKline, and formerly with Medical Research Council Technologies.
Medical Research Council, Action Medical Research, Ferring Pharmaceuticals, GlaxoSmithKline.
MRC Centenary award
Examples of Physiology and Disease Mechanism:
1. McCloskey, C., Rada, C., Bailey, E., McCavera, S., van den Berg, H.A., Atia, J., Rand, D.A., Shmygol, A., Chan, Y.W., Quenby, S., Brosens, J.J., et al., 2014. The inwardly rectifying K+ channel KIR7.1 controls uterine excitability throughout pregnancy. EMBO Molecular Medicine, 6(9), pp.1161-1174. doi: 10.15252/emmm.201403944
2. Lutton, E.J., Lammers, W.J., James, S., van den Berg, H.A. and Blanks, A.M., 2018. Identification of uterine pacemaker regions at the myometrial–placental interface in the rat. The Journal of Physiology, 596(14), pp.2841-2852. doi: 10.1113/JP275688
3. Chan, Y.W., van den Berg, H.A., Moore, J.D., Quenby, S. and Blanks, A.M., 2014. Assessment of myometrial transcriptome changes associated with spontaneous human labour by high-throughput RNA-seq. Experimental Physiology, 99(3), pp.510-524. doi: 10.1113/expphysiol.2013.072868
Example of Drug Discovery:
4. Wright, P.D., Kanumilli, S., Tickle, D., Cartland, J., Bouloc, N., Dale, T., Tresize, D.J., McCloskey, C., McCavera, S., Blanks, A.M. and Kettleborough, C., 2015. A high-throughput electrophysiology assay identifies inhibitors of the inwardly rectifying potassium channel Kir7.1. Journal of Biomolecular Screening, 20(6), pp.739-747. doi: 10.1177/1087057115569156
Example of Computational Modelling:
5. Atia, J., McCloskey, C., Shmygol, A.S., Rand, D.A., van den Berg, H.A. and Blanks, A.M., 2016. Reconstruction of cell surface densities of ion pumps, exchangers, and channels from mRNA expression, conductance kinetics, whole-cell calcium, and current-clamp voltage recordings, with an application to human uterine smooth muscle cells. PLoS computational biology, 12(4), p.e1004828. https://doi.org/10.1371/journal.pcbi.1004828
[Source: OPCFW_CODE]
Wordpress archive and container error?
My single.php, archive, index and category.php pages are all the same using this coding below...
<?php
/*
Template Name: Lisa-beauty.co.uk
*/
?>
<?php get_header(); ?>
<?php if (have_posts()) : while (have_posts()) : the_post(); ?>
<div class="headpostedin"><?php the_time('l, F jS, Y') ?> </div>
<div class="content" id="post-<?php the_ID(); ?>">" rel="bookmark" title="<?php the_title_attribute(); ?>"><div class="headtitle"><?php the_title(); ?></div>
<div class="postmetadata"></div>
<?php the_content(__('CONTINUE READING...')); ?>
<?php the_tags('Tags: ', ', ', '
'); ?>
<div class="headposted"><?php comments_popup_link('0', '1', '%'); ?> comments</div>
<?php echo DISPLAY_ULTIMATE_PLUS(); ?>
<?php comments_template(); // Get wp-comments.php template ?>
<?php endwhile; else: ?>
<?php _e('Sorry, no posts matched your criteria.'); ?>
<?php endif; ?>
</div>
<?php get_sidebar(); ?>
<?php get_footer(); ?>
Everything works fine on my whole website lisa-beauty.co.uk, but as soon as I click my archives in my sidebar and then go to a page like http://lisa-beauty.co.uk/lisa-beauty/?m=201509&paged=4, my whole container and sidebar are completely off, though other pages in my archive are OK.
What could be causing the issue?
here is my coding from my header.php and footer.php
<!DOCTYPE html>
<head>
<meta name="description" content="Beauty uk blogger">
<meta name="keywords" content="blogger, beauty, fashion, make up">
<meta name="author" content="Lisa Robinson">
<meta charset="UTF-8">
<title>Lisa's Beauty UK Blog!</title>
<style type="text/css" media="screen"> @import url(/wp-content/themes/twentytwelve1/style.css);</style>
<?php wp_head(); ?>
</head>
<body>
<a name="top"></a>
<div id="container">
<div id="header" onclick="window.location='http://lisa-beauty.co.uk'">
</div>
<div id="content">
<div class="content">
</br>
</div>
footer.php:
<div id="foot">
<div id="footer-wrapper">
</div>
</div>
<div class="top">
<a href="#top" title="Go up"><u>↑ up</u></a>
</div>
<div class="left">
<div class="copyright">
Copyright 2015 www.lisa-beauty.co.uk <u>All rights reserved</u> | Powered by Wordpress | Theme by <a href="http://www.akaleez.co.uk/"><u>Akaleez</u></a>
</div>
</div>
<?php get_sidebar(); ?>
<?php wp_footer(); ?>
</body>
</html>
It's a problem with your mark-up, I guess: somewhere you are closing a div where you should not, but I can't follow the mark-up to tell you where; it is too badly written. I would say that your best option is to rewrite your markup from scratch.
It is very strange: I removed a div from the comments or archive.php, and then one page that had the sidebar and container error was fine, but then my other pages broke. It seems to happen only on specific pages, for example this one: http://lisa-beauty.co.uk/lisa-beauty/?m=201504&paged=2
This is most likely being caused by a rogue closing div somewhere. However, as sticksu says it's very difficult to debug this code.
Some general pointers:
1. Indent your code
It's far easier to debug this
<?php get_header(); ?>
<?php if (have_posts()) : while (have_posts()) : the_post(); ?>
<div class="headpostedin">
<?php the_time('l, F jS, Y') ?>
</div>
<div class="content" id="post-<?php the_ID(); ?>" rel="bookmark" title="<?php the_title_attribute(); ?>">
<div class="headtitle">
<?php the_title(); ?>
</div>
<div class="postmetadata">
</div>
<?php the_content(__('CONTINUE READING...')); ?>
<?php the_tags('Tags: ', ', ', ''); ?>
<div class="headposted">
<?php comments_popup_link('0', '1', '%'); ?> comments
</div>
<?php echo DISPLAY_ULTIMATE_PLUS(); ?>
<?php comments_template(); // Get wp-comments.php template ?>
</div>
<?php endwhile; else: ?>
<?php _e('Sorry, no posts matched your criteria.'); ?>
<?php endif; ?>
<?php get_sidebar(); ?>
<?php get_footer(); ?>
than what you've posted above.
2. If you're using <br /> tags they should be like that rather than </br>
3. Validate your code.
If you're ever having layout issues like this, run your code through the W3C validator and 99% of the time it will give you some great clues as to the underlying issue.
I think the main issue is with this line:
<div class="content" id="post-<?php the_ID(); ?>">" rel="bookmark" title="<?php the_title_attribute(); ?>">
as you have an extra quote and close bracket in there. Try this instead:
<div class="content" id="post-<?php the_ID(); ?>" rel="bookmark" title="<?php the_title_attribute(); ?>">
Also note in the example above I've moved that final closing </div> at the bottom to be inside the loop, as it's good practice to close a div at the same level of abstraction as where you open it (the reason being that if no posts were returned, the opening div would not be output but the closing div would be, which would then break the page).
I think I have figured out the issue: as soon as I deactivate a plugin I installed two days ago, my whole site gets messed up, which is not right since the theme was fine before the plugin was installed. I am using the plugin Ultimate Social Media PLUS.
|
STACK_EXCHANGE
|
Some values are lost if an array contains both integers and strings as keys
Say you have three text inputs:
<input type="text" name="myname[0]">
<input type="text" name="myname[1]">
<input type="text" name="myname[stringkey]">
When serializing this form, the first two fields are lost in the resulting object. I think the reason is in lines 62 through 70. In the first two iterations myname is detected as "fixed" (line 64), while in the third iteration it's detected as "named" (line 69). This leads to the first two values being overwritten by the third.
This is unsupported because the original intention was to support actual arrays. After building myname[0] and myname[1] there will be an array in memory. When the serializer gets to myname[stringkey] it would have to convert that array to an object which would result in wasted work.
There are many ways to solve this problem. A few I can think of are:
possible option: data-* attributes
<input type="text" name="myname[0]" data-serialize="object">
<input type="text" name="myname[1]">
<input type="text" name="myname[stringkey]">
possible option: schema – leave the form the same but pass schema to serializer
$('#form').serializeJSON({
somearray: FormSerializer.array,
myname: FormSerializer.object
});
possible option: builder object
let builder = {
array: {empty: [], builder: function(acc, key, value) { return [...acc, value]; }},
object: {empty: {}, builder: function(acc, key, value) { return Object.assign(acc, {[key]: value}); }}
};
$('#form').serializeJSON({
somearray: builder.array,
myname: builder.object
});
The builder option is obviously the most powerful because any custom function could be used. This would allow the user to do things like
$('#form').serializeJSON({
// convert array of strings to a flat string
arrayOfStrings: {empty: '', builder: (acc, key, value) => acc + value},
// convert strings to numbers
arrayOfNumbers: {empty: [], builder: (acc,key,value) => Object.assign(acc, {[key]: parseInt(value, 10)})}
})
Anyway, I haven't made time to work on this project in a while, so if anyone wants to contribute ideas or implementations here, that'd be helpful.
I had this problem too and I decided to add a new option.
You can use numkeys option from my fork:
$("form").serializeObject({'numkeys':true});
https://github.com/comdvas/jquery-serialize-object/commit/df7871c680f1e12a90a8b5506d7c86a934466654
Hello,
I think the correct solution is to convert the Array to an Object. It's not wasted work; it's the consequence of your lazy algorithm.
When the program sees a bracket with a numeric value, it needs to determine whether it is an array, i.e. one with consecutive numeric keys. Without knowledge of all the keys, it can't determine this. The naive approach is to loop over all keys; the lazy approach is to take the less costly choice until proven otherwise. In this case, the conversion is not wasted work; it's the work you have chosen not to do immediately with the lazy approach.
So even with your hint, you need to correct your algorithm. In my opinion, your hint is premature optimization. JavaScript is fast enough to transform an array into an object in most cases.
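The lazy approach described here can be sketched as follows (illustrated in Python for brevity; the plugin itself is JavaScript, and the function name is my own):

```python
def build_collection(pairs):
    """Lazily build a list for consecutive numeric keys, and promote it
    to a dict the first time a non-consecutive or string key appears."""
    acc = []  # optimistic: assume consecutive numeric keys (a plain array)
    for key, value in pairs:
        if isinstance(acc, list):
            if isinstance(key, int) and key == len(acc):
                acc.append(value)
                continue
            # promotion: pay the conversion cost only when proven necessary
            acc = {i: v for i, v in enumerate(acc)}
        acc[key] = value
    return acc

# pure array input stays an array; mixed input becomes an object
print(build_collection([(0, "a"), (1, "b")]))
print(build_collection([(0, "a"), (1, "b"), ("stringkey", "c")]))
```

With this approach, `myname[0]` and `myname[1]` stay a cheap array, and the conversion is only performed when a key like `myname[stringkey]` actually shows up.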
@neofly
Don't overlook the fact that this kind of mixing behaviour is explicitly defined in the tests and has been discouraged for a long time
https://github.com/macek/jquery-serialize-object/blob/master/test/unit/add-pair-test.js#L49-L55
it("should punish user for mixing pushed array and fixed array fields", function() {
f.addPair({name: "a[]", value: "b"});
f.addPair({name: "a[2]", value: "c"});
f.addPair({name: "a[]", value: "d"});
f.addPair({name: "a[5]", value: "e"});
assert.deepEqual(f.serialize(), {a: ["b", "d", "c", , , "e"]});
});
Also consider the fact that the plugin is essentially 3 years old and didn't understand its use case perfectly at the time of its conception. Since then there have been countless discussions about what improvements can be made to the underlying serializer, but few people have been willing to contribute the hard work to make effective changes, as opposed to the tiny incremental patches that only make the serializer more complex and increase its fragility.
|
GITHUB_ARCHIVE
|
6 More Discussions You Might Find Interesting
1. UNIX for Advanced & Expert Users
Our organization has a lot of IT systems (OLTP, MIS, Exadata, EMV, non-core ...) running different platforms (AIX, Linux, Sun, Win Server, ...), and each system has its own monitoring tool designed by its manufacturer. We also write some shell scripts to do it manually. But we... (3 Replies)
Discussion started by: bobochacha29
I'm looking for the best tool to monitor the Linux system. I've found a lot of interesting tools searching the web, but I didn't find one which can meet all the requirements (like an all-in-one tool). I would prefer it to include a command line interface as well.
Andreea (0 Replies)
Discussion started by: andreea9322
3. UNIX for Dummies Questions & Answers
I want to install cacti (frontend to RRDTool) on my Debian 6 VPS.
My dummy questions please...
The requirements include RRDTool and net-snmp so is there a way to check these are properly installed?
Re the command
# apt-get install cacti
After logging in to my VPS in putty... (1 Reply)
Discussion started by: Juc1
Topas, nmon, vmon & top monitoring tool not working.
We use the above AIX utilities to identify CPU and memory usage. I can execute topas, but on execution I receive an "SpmiCreateStatSet can't create StatSet" message and no output.
I use AIX 5.3, TL3.
Please assist to restore... (4 Replies)
Discussion started by: sumit30
i would like to know your opinion about the best monitor solution that can:
- monitor OS (could be ubuntu, AIX, HP-UX, redhat,...)
- monitor DB (Oracle, MySQL, ...)
- monitor specific services - scipts that i develop for monitoring them
- have parent and child architecture, for... (1 Reply)
Discussion started by: gfca
Anyone out there using any graphing tool for Solaris performance data taken either through the SAR utility or iostat, vmstat, nicstat, etc.? There are a couple that turn up on Googling, like statsview and rrdtool, but I'm not sure if anyone is really happy and satisfied with any of these graphing tools.
... (1 Reply)
Discussion started by: baner_n
mk-build-deps - build a package satisfying a package's build-dependencies
mk-build-deps [options] control file | package name ...
Given a package name and/or control file, mk-build-deps will use equivs to generate a binary package which may be installed to satisfy all
the build dependencies of the given package.
If --build-dep and/or --build-indep are given, then the resulting binary package(s) will depend solely on the
Build-Depends/Build-Depends-Indep dependencies, respectively.
-i, --install
    Install the generated packages and their build-dependencies.
-t, --tool
    When installing the generated package, use the specified tool. (default: apt-get --no-install-recommends)
-r, --remove
    Remove the package file after installing it. Ignored if used without the --install switch.
-a foo, --arch foo
    If the source package has architecture-specific build dependencies, produce a package for architecture foo, not for the system
    architecture. (If the source package does not have architecture-specific build dependencies, the package produced is always for the
    pseudo-architecture all.)
-B, --build-dep
    Generate a package which only depends on the source package's Build-Depends dependencies.
-A, --build-indep
    Generate a package which only depends on the source package's Build-Depends-Indep dependencies.
-h, --help
    Show a summary of options.
-v, --version
    Show version and copyright information.
-s, --root-cmd
    Use the specified tool to gain root privileges before installing. Ignored if used without the --install switch.
mk-build-deps is copyright by Vincent Fourmond and was modified for the devscripts package by Adam D. Barratt <firstname.lastname@example.org>.
This program comes with ABSOLUTELY NO WARRANTY. You are free to redistribute this code under the terms of the GNU General Public License,
version 2 or later.
Debian Utilities 2013-12-23 MK-BUILD-DEPS(1)
|
OPCFW_CODE
|
esp_https_ota() crashes in spi_flash_disable_interrupts_caches_and_other_cpu when stack is in external memory (IDFGH-9029)
Answers checklist.
[X] I have read the documentation ESP-IDF Programming Guide and the issue is not addressed there.
[X] I have updated my IDF branch (master or release) to the latest version and checked that the issue is present there.
[X] I have searched the issue tracker for a similar issue and not found a similar issue.
IDF version.
5.0
Operating System used.
macOS
How did you build your project?
VS Code IDE
If you are using Windows, please specify command line type.
None
Development Kit.
ESP32-D0WD custom board
Power Supply used.
External 5V
What is the expected behavior?
esp_https_ota() should not crash
What is the actual behavior?
when my app is built with:
CONFIG_SPIRAM=y
CONFIG_SPIRAM_ALLOW_STACK_EXTERNAL_MEMORY=y
and I allocate my task's stack in external memory (4MB mapped into malloc()) like so:
uint32_t stackSize = 16 * 1024;
if ((stack = (StackType_t *) heap_caps_malloc(stackSize, MALLOC_CAP_SPIRAM | MALLOC_CAP_8BIT | MALLOC_CAP_32BIT)) == NULL) err = ENOMEM;
if (!err && (taskBuffer = (StaticTask_t *) heap_caps_malloc(sizeof(StaticTask_t), MALLOC_CAP_INTERNAL | MALLOC_CAP_8BIT | MALLOC_CAP_32BIT)) == NULL) err = ENOMEM;
if (!err && (task = xTaskCreateStaticPinnedToCore((void (*)(void *)) taskEntry, name, stackSize, this, priority, stack, taskBuffer, coreID)) == NULL) err = ESP_FAIL;
If I call esp_https_ota() I get a crash in spi_flash_disable_interrupts_caches_and_other_cpu():
Backtrace: 0x400824fd:0x3f825170 0x4008fd01:0x3f825190 0x40096fc9:0x3f8251b0 0x40086c62:0x3f8252d0 0x40086fa7:0x3f825300 0x400fe18a:0x3f825320 0x400fd972:0x3f825340 0x400fdd6a:0x3f825360 0x40128d4f:0x3f825380 0x4012917f:0x3f8253c0 0x400e0eb4:0x3f8253f0 0x400dfd45:0x3f8254d0 0x400e002d:0x3f8254f0 0x400e0069:0x3f825510 0x400e7bc7:0x3f825530 0x400e7c15:0x3f825570 0x4009321d:0x3f825590
0x400824fd: panic_abort at /Users/brian/.espressif/esp-idf/components/esp_system/panic.c:412
0x4008fd01: esp_system_abort at /Users/brian/.espressif/esp-idf/components/esp_system/esp_system.c:135
0x40096fc9: __assert_func at /Users/brian/.espressif/esp-idf/components/newlib/assert.c:78
0x40086c62: spi_flash_disable_interrupts_caches_and_other_cpu at /Users/brian/.espressif/esp-idf/components/spi_flash/cache_utils.c:152 (discriminator 1)
0x40086fa7: spi_flash_protected_read_mmu_entry at /Users/brian/.espressif/esp-idf/components/spi_flash/flash_mmap.c:323
0x400fe18a: spi_flash_cache2phys at /Users/brian/.espressif/esp-idf/components/spi_flash/flash_mmap.c:395
0x400fd972: esp_ota_get_running_partition at /Users/brian/.espressif/esp-idf/components/app_update/esp_ota_ops.c:558
0x400fdd6a: esp_ota_get_next_update_partition at /Users/brian/.espressif/esp-idf/components/app_update/esp_ota_ops.c:586
0x40128d4f: esp_https_ota_begin at /Users/brian/.espressif/esp-idf/components/esp_https_ota/src/esp_https_ota.c:309
0x4012917f: esp_https_ota at /Users/brian/.espressif/esp-idf/components/esp_https_ota/src/esp_https_ota.c:676
If I allocate the task stack in internal memory then esp_https_ota() does not crash and completes successfully.
Steps to reproduce.
#include "esp_crt_bundle.h"
#include "esp_https_ota.h"
esp_http_client_config_t httpConfig = {};
esp_https_ota_config_t otaConfig = {};
httpConfig.crt_bundle_attach = &esp_crt_bundle_attach;
httpConfig.url = firmwareURL;
httpConfig.user_data = this;
otaConfig.http_config = &httpConfig;
esp_https_ota(&otaConfig);
Debug Logs.
No response
More Information.
No response
Did you check https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-guides/external-ram.html#restrictions? For low internal memory try: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-guides/performance/ram-usage.html#
I use CONFIG_MBEDTLS_EXTERNAL_MEM_ALLOC with SPIRAM.
Unfortunately it's not currently possible to perform SPI Flash operations in tasks with stack in external RAM, unless CONFIG_SPIRAM_FETCH_INSTRUCTIONS and CONFIG_SPIRAM_RODATA are also enabled. These options are not supported on ESP32, though.
Closing per the comments and pointers shared above. Please feel free to re-open if you face any further issues.
|
GITHUB_ARCHIVE
|
Spring Cloud : Using routing type filter in Zuul
I have 2 micro-services (Service A and Service B) built using Spring Boot, which gets routed through a Zuul Proxy also built as a Spring Boot app and I have checked that the Zuul proxy works just fine. However, what I am trying to do is to write a custom routing type ZuulFilter which should first route to Service A when a request comes in for Service B. Here is what I need assistance for:
I would like to see an example of what a routing filter looks like, as I could not find anything after searching the internet. What I get are some examples of pre-filters, and Netflix's documentation doesn't help much on that aspect either.
Whether writing a custom route filter would mess up the original routing behavior of Zuul
So... does Zuul call Service A, which then proxies Service B, or does Zuul call Service A first and, once that request has completed, call Service B?
So, the request comes in for Service B which Zuul is proxying, but using a ZuulFilter, Service A is invoked and based on its response Service B should be proxied by Zuul.
I would construct a Feign client in the Zuul filter and make the call to service A using it. Feign will populate a ribbon load balancer to make the call in just the same way that Zuul does when proxying.
Just to understand further: the Feign client will call the Zuul-provided endpoint for Service A, as there could be multiple instances of Service A, correct? If yes, then won't that be an overhead compared to allowing some configuration within Zuul so it routes automatically within itself?
The feign client populates its ribbon load balancer from the same discovery client (sourced from Eureka) that Zuul uses so yes, they are effectively using the same source and the same underlying method of load balancing calls.
Could you please provide some example in order to help me understand how the routing would work?
I got it working by building a Feign client which itself uses service discovery to find instances of Service A, which I intended to call from a Zuul filter.
I had the same issue and this is what I came up with.
public class ServicesLegacyRouteFilter extends ZuulFilter {
private ServiceB serviceB;
public ServicesLegacyRouteFilter(ServiceB serviceB) {
this.serviceB = serviceB;
}
@Override
public String filterType() {
return ROUTE_TYPE;
}
@Override
public int filterOrder() {
return 10;
}
@Override
public boolean shouldFilter() {
RequestContext ctx = RequestContext.getCurrentContext();
if ("serviceA".equals(ctx.get("serviceId"))) {
//call Service B here and use return type to set
//the final destination service
String destination = serviceB.routeWhere();
ctx.set("serviceId", destination);
return true;
}
return false;
}
@Override
public Object run() {
RequestContext ctx = RequestContext.getCurrentContext();
// Or call ServiceB here to make your determination on
// the final destination.
String destination = serviceB.routeWhere();
ctx.set("serviceId", destination);
return null;
}
}
My actual production use case was more complicated on the routing of course, but this is the basics of how I was able to change routes based on what was coming in and how to take advantage of Zuul to get it out to the correct service.
I don't think this solution is relevant to my question. My problem was more based on service B acting as a decision maker on whether to call service A or not. Correct me, if I understood your solution incorrectly.
The routing filter could be used to call Service B either in the shouldFilter method or in the run method to determine the actual destination that is set. I will try to edit my response to make it more clear.
How is the ServiceB class type even defined, and how is it passed/injected into the constructor? Wouldn't this be better as an @EnableConfigurationProperties(...) class at the very least?
|
STACK_EXCHANGE
|
Hi @kossjunk and a warm thanks to @Dent for the @ mention.
There are many paths to answer your question @kossjunk and all roads lead back to your experience with programming and IT in general and also the context of the application you are developing.
Based solely on the tone of your question @kossjunk, it is probably best to approach this in a top down manner, so that’s what I will do below.
Managing chats by multiple users is little different from managing topics and posts by multiple users in this Discourse community we are chatting in now, or how the ChatGPT app manages users, topics and chats, or how a WordPress blog manages multiple users, topics (articles) and replies. I think you can see where I am going with this analogy, or ‘super high-level architecture’.
You need a data model for users, topics and chat entries (posts). The data model for users can be as simple as the numeric id (user_id) and password (stored cryptographic password hash), or it can be a model which includes name, phone number, email address, twitter user name, location, and user preferences.
This means it is important for you to visualize what your data model is for your users. For example, let’s say your application permits individual users to fine-tune GPT-3 models; then you will need to model this for the users DB table. In that case, you may even require a DB table for models where you can associate fine-tuned models with user ids.
In other words, the core of developing an application is to consider the data model based on how you envision the application to work. Normally, developers look at this using a high-level view, such as “model, view, controller” (or something similar based on experience and style), but in general we separate the data model from the visual part of the app the user sees and the logic which glues it all together.
So, while you are considering the data model for your users, you can simply and easily (and cost and time effectively) look at the open source code for the countless forums, blogs and other data driven applications freely available which all have users, topics and posts. There are countless free and open source applications which have this same basic structure. Managing users, chat topics and chat replies are no different. I recently wrote two Discourse plugins which store OpenAI replies (both text and images) as posts in a forum, so I used the core DB tables of the forum software in these relatively simple Discourse plugins.
Normally, you will see a table for users, topics and posts. Generating these DB tables does not require much thought about things like “primary keys” as suggested by someone else at this point. For example, if you develop using Ruby on Rails (just an example), the basic primary keys are generated for you when you generate a model, and later on you may start creating indexes (to speed up the DB if your application grows large), but that is something far down the road if you have not created any models or built any application logic. In other words, there are many tools available to help you generate models depending on the programming language and development environment you are working in, which can range from your personal favorites to what your organization requires you to use.
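As a concrete illustration of the users/topics/posts structure described above, here is a minimal sketch using SQLite (the table and column names are my own, not taken from any particular forum software):

```python
import sqlite3

# Minimal users/topics/posts data model, in memory for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT, password_hash TEXT);
CREATE TABLE topics (id INTEGER PRIMARY KEY, user_id INTEGER REFERENCES users(id), title TEXT);
CREATE TABLE posts  (id INTEGER PRIMARY KEY, topic_id INTEGER REFERENCES topics(id),
                     user_id INTEGER REFERENCES users(id), body TEXT);
""")
conn.execute("INSERT INTO users (name, password_hash) VALUES (?, ?)", ("alice", "…hash…"))
conn.execute("INSERT INTO topics (user_id, title) VALUES (1, 'My first chat')")
conn.execute("INSERT INTO posts (topic_id, user_id, body) VALUES (1, 1, 'Hello, GPT!')")

# One user's chat history: join posts back to their topic and author.
row = conn.execute("""
    SELECT u.name, t.title, p.body
    FROM posts p JOIN topics t ON p.topic_id = t.id JOIN users u ON p.user_id = u.id
""").fetchone()
```

If you later let users fine-tune their own models, a fourth table mapping model ids to user ids slots into this same pattern.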
So, in summary. When you @kossjunk ask “@dent how can one save chat for very many users?”, this implies you are not (yet) a developer and do not (yet) have experience with application development in general. Assuming this is the case, you have to consider your business model and if you want to learn how to develop an application (which takes considerable personal time and effort) or to get someone else to do it for you.
But, in a nutshell, you can look at the open source models for any forum (like this one) or blog which has multiple users, topics and posts and that will illustrate how to think about the data models you need for a basic multi-user application which has topics, like we see on the left sidebar of the ChatGPT application, and posts (chats), which we see very prominently in the ChatGPT app.
Hope this helps.
|
OPCFW_CODE
|
Three bodies, two collisions
I'm struggling with this seemingly simple question regarding elastic collision. I've worked out something but it is not enough.
The problem reads: Consider three small masses A, B and C placed in a line on a frictionless road, with masses $2m$ kg, $m$ kg, and $2m$ kg respectively. At $t=0$ the masses are at rest, and then A starts moving towards B with velocity $v$. If the distance between B and C is $L$ meters, prove that B will collide with A again after $12L/7v$ sec.
My take: I know that when mass X (X kg) and moving at velocity $v$ m/sec collides elastically with a resting mass Y (Y kg), then the velocities after the collision are given by
$v_X=(X-Y)v/(X+Y), \ v_Y=(2X)v/(X+Y)$. Applying this for the collision between A and B, I get the velocities
$v_A=v/3, \ v_B=4v/3$.
Applying this again to the collision between B and C, the velocity of B after the collision will be
$v_B'=-4v/9$.
My question is, how do I determine the time until A and B collide for the second time? I mean B will hit C after $3L/4v$ sec, by which time A will have moved $L/4$ meters, and then... what?
Am I seeing it wrong? Is there another approach I am missing?
What does "weigh 2m kg, ...." mean?
@BobD $2m=2\times m$, where $m$ is some number.
The confusion is that kg is mass, but you say "weigh" which refers to force of gravity.
Yes I will edit it.
Part 1. As you noted, ball A will move with velocity $v_A=\dfrac v3$. Ball B will move at velocity $v_B=\dfrac 43v$. We'd like to know the distance that ball A moves while ball B moves towards C.
Recall that the distance $d=Vt$. We know the speed of ball A. However, we don't know the time elapsed. Luckily, the time can easily be found knowing that ball B will take $t=\dfrac{3L}{4v}$ (by that same distance equation). Therefore, the distance that A travels while B moves towards C is:
$$d=\dfrac{3L}{4v}\cdot\dfrac{v}{3}=\dfrac L4$$
Part 2. Find initial distance between balls A and B.
When A and B first collide, the distance between them is obviously zero. Then, A proceeds to move a distance of $L/4$ rightwards, whereas ball B moves rightwards a distance of $L$. Therefore, the distance between the balls is $\dfrac34L$.
Part 3. And lastly, combine it all:
You know:
Distance between balls A and B.
Ball A has velocity $v_A=\dfrac v3$.
Ball B has velocity $v_B=-\dfrac 49v$.
I think I got it. I didn't put it in terms of distance covered.
Taking $t$ to be zero again at the moment B strikes C, B's distance ahead of A's position at that moment is
$S_B(t)=3L/4-4vt/9$, while the distance covered by A is $S_A(t)=vt/3$. These are equal when $t=27L/28v$, giving a total time of $3L/4v+27L/28v=12L/7v$ :D
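The whole derivation can be double-checked numerically; the sketch below is my own verification, using exact rational arithmetic, and reproduces $v_A$, $v_B$, B's rebound velocity, and the total time $12L/7v$:

```python
from fractions import Fraction as F

def elastic(X, Y, u):
    """Elastic collision: mass X moving at u hits resting mass Y.
    Returns (v_X, v_Y) from the standard formulas quoted above."""
    return (F(X - Y, X + Y) * u, F(2 * X, X + Y) * u)

m, v, L = 1, F(1), F(1)          # work in units where m = v = L = 1
vA, vB = elastic(2 * m, m, v)    # A (2m) hits B (m): v/3 and 4v/3
t1 = L / vB                      # time for B to cross the gap L: 3L/(4v)
gap = t1 * (vB - vA)             # A-B separation when B hits C: 3L/4
vB2, _ = elastic(m, 2 * m, vB)   # B (m) hits C (2m): B rebounds at -4v/9
t2 = gap / (vA - vB2)            # closing speed v/3 + 4v/9 = 7v/9
total = t1 + t2                  # 3L/(4v) + 27L/(28v) = 12L/(7v)
print(total)                     # → 12/7
```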
|
STACK_EXCHANGE
|
catch does not get activated when pdo connect gets an error
I'm trying to connect to a database remotely. There is an IP filter controlling who can connect. Whenever a user who can't connect tries, the error is displayed on the page and everything after the error stops working. I'm trying to pass the error into a variable (with try/catch and other methods) but nothing happens; the catch part seems to be getting ignored.
I tried all mysqli_report flags
How do I make sure the user does not get an error if this happens? I need the error message loaded into a variable so I can display it whenever and wherever I wish, instead of it throwing an error and shutting down the operation.
Also, I just noticed that the error shows the user the database password, host and username, so it is important to disable it.
UPDATE:
current code does not work still
try {
$conn = new \PDO(
"mysql:host=$hostname_db;dbname=myDB",
$username_db, $password_db,
[\PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION]
);
}
catch(PDOException $e)
{
$err = $e->getMessage();
}
catch(Exception $e)
{
$err = $e->getMessage();
}
The error that is displayed (most of it):
The only reason I can think of is linked above
Worked perfectly! Thank you. I did not even notice I missed the \.
You can pass the options directly while creating the instance. Use \ to be sure using the correct namespace.
try {
$conn = new \PDO(
"mysql:host=$hostname_db;dbname=myDB",
$username_db, $password_db,
[\PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION]
);
echo "Connected successfully";
}
catch(\PDOException $e)
{
$err = $e->getMessage();
}
It did not work. The user still receives an error, the entire page stops processing and, I just now noticed, it even displays the database login details (password, host, username).
Please see answer updated. You need to set the attributes directly on creating the new instance. Setting it later, is too late.
Unfortunately it is not too late
Thanks, but sadly this did not work either. I attached a photo of what a user (whose IP is not part of the allowed access list) receives.
Your answer is not helpful as it is based on the wrong premise, no matter what the question is
Sorry, but the downvote was not me.
Try put a slash before exception type. Maybe it is a namespace problem catch(\PDOException $e) or catch(\PDO::PDOException $e).
Yeah, worked perfectly. I forgot to add a \. Thank you all for helping!
Answer is updated. You're welcome.
|
STACK_EXCHANGE
|
Windows 8 offers many advantages compared to Windows XP, Windows Vista and even Windows 7. However, to take full advantage of all the new features in Windows 8, the hardware you run it on needs to be equipped with specific hardware.
How to determine the Windows 8 or Windows Server 2012 version on your USB Install Media or DVD
It’s never been easier to install pre-release versions of Windows and Windows Server than with Windows 8 and Windows Server 2012. Downloadable ISO files were abundant and could be used for virtual machines on all major virtualization platforms, and ISO files could be burned with a built-in tool in the previous version of Windows (Windows 7) … Continue reading "How to determine the Windows 8 or Windows Server 2012 version on your USB Install Media or DVD"
Expiration dates on Windows 8 and Windows Server 2012 Pre-release versions
About two months ago, I wrote a blog post on determining your Windows 8 and Windows Server 2012 pre-release version. With that information you could then determine the (possible) in-place upgrade paths. In this blog post I’ll show you the expiration dates of the Windows Server 2012 and Windows 8 pre-release versions, so you’ll know … Continue reading "Expiration dates on Windows 8 and Windows Server 2012 Pre-release versions"
Determining your Windows 8 and Windows Server 2012 version
With the release of Windows 8 and Windows Server 2012, many people are eager to download and install the latest and greatest Operating Systems from Redmond. For people who have deployed one of the pre-release versions of Windows 8, Windows Server 8 and Windows Server 2012, the big question now is which version they are … Continue reading "Determining your Windows 8 and Windows Server 2012 version"
Windows Server “8” Beta available (build 8250)
Microsoft did not just release the Consumer Preview pre-release version of Windows 8 today, but also released the Windows Server “8” Beta. It is available as a 64-bit (x64) in both the *.iso and *.vhd format for anyone interested. (with an optional sign up for Windows 8 news) Download Windows Server “8” Beta is … Continue reading "Windows Server “8” Beta available (build 8250)"
How To Install Windows Server 8 (build 8102)
Alongside the Windows 8 Developer Preview of the Windows client, Microsoft released the Windows Server bits of the same build as well. Where the Windows client is released to the general public and has probably seen over a million downloads this first day, the Windows Server bits can only be downloaded when you have a … Continue reading "How To Install Windows Server 8 (build 8102)"
|
OPCFW_CODE
|
The Special Interest Group on Collaborative Editing (SIGCE) was formed at the 1998 ACM conference on Computer Supported Cooperative Work.
The mission of SIGCE has been to promote collaboration and communication among researchers working in the area of collaborative editing and specifically
on a technique known as Operational Transformation (OT).
More than a decade has passed since the inception of SIGCE in 1998. During this period, SIGCE has organized annual workshops on collaborative editing in conjunction with major CSCW (Computer Supported Cooperative Work) conferences. Research papers on CE and OT have been published extensively in major ACM and IEEE conferences and journals, including ACM CSCW, GROUP, ECSCW, ACM Transactions on Computer-Human Interaction, the Journal of CSCW, and IEEE Transactions on Parallel and Distributed Systems. OT has evolved, from a technique for concurrency control in real-time group text editing, to include new capabilities, such as group undo, locking, conflict resolution, operation notification and compression, group awareness, HTML/XML and tree-structured document editing, application-sharing, and transparent adaptation. The range of collaborative editing systems enabled by OT has also been expanded from one-dimensional plain-text collaborative editors, to two-dimensional collaborative office productivity tools, three-dimensional collaborative computer-aided media design tools, as well as to mobile and p2p collaboration. Recently, Google has adopted OT as a core technique behind the collaboration features in Google Wave, taking OT to a new range of web-based applications and sparking a wave of industry interest in OT.
Queries regarding SIGCE can be sent to .
"Real-time collaborative editors allow a group of users to view and edit the same text/graphic/image/multimedia document at the same time from geographically dispersed sites connected by communication networks, such as the Internet. These types of groupware systems are not only very useful tools in the areas of CSCW, but also serve as excellent vehicles for exploring a range of fundamental and challenging issues facing the designers of real-time groupware systems in general.
Research on real-time group editors in the past decade has invented an innovative technique for consistency maintenance, under the name of operational transformation. A number of research groups around the world have contributed to the development of the operational transformation technique in their design and implementation of these types of systems. Despite the great deal of interest and activity in this area, research in the past has been conducted in a rather isolated fashion, with little communication or collaboration among different groups. At the ACM Conference on Computer Supported Cooperative Work, Seattle, USA, Nov. 1998, a group of active researchers in this area met and decided to form a Special Interest Group on Collaborative Editing (SIGCE) to promote communication and collaboration in this research area. This Web site has been set up to link existing SIGCE members' sites together and to provide a repository of useful information."
— from The original SIGCE web site
|
OPCFW_CODE
|
This week, we check out the API aspects of the recent SolarWinds and PickPoint breaches. Also, we have a review on how to shift API security left with GitHub and 42Crunch and an introduction video on GraphQL security.
The SolarWinds hacking reported this weekend was not API-related as such. It was a supply chain attack in which hackers (likely a state actor) managed to add their backdoor to one of the DLL files of SolarWinds' IT monitoring and management software, Orion. After a dormant period, the malicious code would contact the command and control center (C2) to get further instructions and execute them. This was in turn used against SolarWinds' customers, including multiple US government agencies.
What did catch our eye was the API angle to the story:
“The C2 traffic to the malicious domains is designed to mimic normal SolarWinds API communications.“
The attackers made an effort to make their traffic look like normal SolarWinds API traffic. This allowed them to mask the activity and avoid getting detected by any anomaly detection systems, like machine learning or artificial intelligence.
Attackers opened 2,732 PickPoint package lockers across Moscow. These are lockers that customers can use to pick up goods they have bought online.
Because this was an actual successful attack rather than ethical research, the details are scant. However, what we know makes it look like an attack against APIs:
- In the videos posted on the internet, one can see that the lockers get opened one by one, rather than all at once.
- The attack was remote and happened across the city, with no attackers physically walking to the locker locations.
- PickPoint is API-driven, with parts of their APIs related to vendor integrations publicly documented.
Considering these factors, this looks very much like an enumeration and API1:2019 — Broken object-level authorization (BOLA/IDOR) attack against the APIs. Attackers likely found a way to authenticate against the API and then enumerate locker or package IDs on the API calls to open the corresponding lockers.
To avoid such vulnerabilities:
- Make enumeration hard, do not use sequential numbers.
- Use rate limiting and monitoring to detect and prevent scripted attacks.
- Implement authorization, not just authentication, to make sure that the caller has legitimate rights to the operation on that particular object.
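The last two points can be sketched as a minimal object-level authorization check. All names here are ours for illustration, not PickPoint's actual API:

```python
import secrets

def new_locker_id():
    # Non-sequential, hard-to-enumerate identifier instead of 1, 2, 3...
    return secrets.token_urlsafe(16)

def open_locker(caller_id, locker_id, ownership):
    # ownership maps locker id -> owner id (a stand-in for a database lookup).
    # Authorization, not just authentication: the caller must own the package
    # in this specific locker, rather than merely hold a valid session.
    if ownership.get(locker_id) != caller_id:
        raise PermissionError("caller does not own this locker")
    return "opened"
```

With random IDs, enumerating lockers becomes impractical, and even a correctly guessed ID fails the ownership check.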
Review: API security with GitHub Code Scanning and the 42Crunch GitHub Action
Security issues are much cheaper to catch and fix early in the development cycle, and API security is no exception.
Mitch Tulloch at TechGenix has posted a review on using GitHub Code Scanning and the GitHub Action from 42Crunch, REST API Static Security Testing, to locate and fix API code vulnerabilities before they reach production.
Video: Finding Your Next Bug: GraphQL
GraphQL APIs are still used significantly less often than REST APIs. GraphQL is gaining traction, however, and many developers are less aware of its potential security implications.
Katie Paxton-Fear has posted a new video tutorial that can be valuable as a quick introduction to GraphQL security. She covers, for example, the following topics:
- The basics of GraphQL
- Typical security bugs and how to find them
Get API Security news directly in your Inbox.
By clicking Subscribe you agree to our Data Policy
|
OPCFW_CODE
|
"warehouse init config.py" failing when setting up
I'm not sure if this is a README bug, a missing dependency, a user error or an actual bug, but after using "mkvirtualenv -p /usr/bin/python3 warehouse" to create a virtualenv and running "pip install -r requirements.txt", I get the following when running "warehouse init config.py":
Traceback (most recent call last):
File "/home/ncoghlan/.virtualenvs/warehouse/bin/warehouse", line 9, in <module>
load_entry_point('warehouse==0.1dev1', 'console_scripts', 'warehouse')()
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python3.4/site-packages/pkg_resources.py", line 356, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python3.4/site-packages/pkg_resources.py", line 2431, in load_entry_point
return ep.load()
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python3.4/site-packages/pkg_resources.py", line 2147, in load
['__name__'])
File "/home/ncoghlan/devel/warehouse/warehouse/__main__.py", line 15, in <module>
from configurations import management
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python3.4/site-packages/configurations/__init__.py", line 2, in <module>
from .base import Settings, Configuration
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python3.4/site-packages/configurations/base.py", line 7, in <module>
from .utils import uppercase_attributes
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python3.4/site-packages/configurations/utils.py", line 5, in <module>
from django.utils.importlib import import_module
ImportError: No module named 'django.utils.importlib'
I initially tried Python 2, and that failed in the same way (just with a misleading exception message that initially made me think it wanted Python 3's importlib module):
Traceback (most recent call last):
File "/home/ncoghlan/.virtualenvs/warehouse/bin/warehouse", line 9, in <module>
load_entry_point('warehouse==0.1dev1', 'console_scripts', 'warehouse')()
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python2.7/site-packages/pkg_resources.py", line 356, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python2.7/site-packages/pkg_resources.py", line 2431, in load_entry_point
return ep.load()
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python2.7/site-packages/pkg_resources.py", line 2147, in load
['__name__'])
File "/home/ncoghlan/devel/warehouse/warehouse/__main__.py", line 15, in <module>
from configurations import management
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python2.7/site-packages/configurations/__init__.py", line 2, in <module>
from .base import Settings, Configuration
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python2.7/site-packages/configurations/base.py", line 7, in <module>
from .utils import uppercase_attributes
File "/home/ncoghlan/.virtualenvs/warehouse/lib/python2.7/site-packages/configurations/utils.py", line 5, in <module>
from django.utils.importlib import import_module
ImportError: No module named importlib
I think you're installing Warehouse from PyPI, which is a really old iteration of it, or your branch is really old. Particularly confusing: there is no requirements.txt in the root directory anymore, nor is there a warehouse init command anymore.
Generally the supported way to run an environment nowadays is via Docker. You can run all the pieces on your own if you want, but it's easier to just do docker-compose build && docker-compose up. See: https://warehouse.pypa.io/en/latest/development/getting-started/
Note, that if you have an old clone of Warehouse, the new master branch is a wholly distinct orphan branch from the previous branches so you'd probably want to just blow away the old repository.
Oh, I bet what happened is I had an old fork on GitHub, so even though the local clone was new, the fork was old.
Yep, my fork is ancient, I just forgot about that because I'd only previously used it for online edits. Sorry for the noise.
|
GITHUB_ARCHIVE
|
You might think that I took a short vacation, but I've just been buried in Real Life. You might also be wondering about where lessons 6-11 went. They'll be published later, but you're not missing anything, as they are edits of the last several lessons from Learning to Program with Haiku with an experienced developer in mind. If you've worked with the previous series, there isn't anything in 6-11 that you haven't seen before – they're more to make the Programming with Haiku series complete on its own.
In an attempt to move on and get on to just the Haiku API, here are the final three lessons on C++. Lesson 3 introduces C++ file streams, formatting and printing using C++ streams, and lightly touches on exceptions. Lesson 4 takes a break from actual coding and spends time on a critical development tool: source control – what it is, how it is used, and why it is used. Lesson 5 ties together all of the C++ concepts covered in this series with a project.
Lesson #2 in my new series of development tutorials continues with a fast and furious course through the rest of the Standard Template Library with some of the Standard C++ library thrown in for spice. We learn about associative STL containers like map and set and examine the C++ string class.
Programming with Haiku, Lesson 2
This weekend was my second year at the Ohio LinuxFest at the Greater Columbus Convention Center in downtown Columbus, OH. I arrived at the convention center at about 7:15am. Unlike last year, there was hardly anyone there outside of the OLF staff doing check-in and a few vendors. Joe Prostko was already there, having stayed at a hotel nearby the night before. It was good to see him again. We talked for a bit and then started getting the table set up.
Since I started publishing my Learning to Program with Haiku lesson series back in January, I have, on many occasions, seen comments asking for lessons aimed at current codemonkeys who want to break into development for Haiku. Here begins a new series of programming lessons aimed at people who already have a basic grasp on C++: Programming with Haiku.
The direction of the series is pretty straightforward. First, we'll be spending some time (i.
The book is finally done! Getting through the proof copy took so much longer than I ever expected. Luckily, right now I'm out of town with a lot more time on my hands, so I had a lot more time to be able to sit down and get through it. It has been published through Lulu.com so that a great deal more of the profit from the book goes to me instead of the pockets of a book retailer.
This lesson finishes up the project that the last two have been about: HaikuFortune, a program which randomly chooses and displays a fortune in a window. It's not a very complicated one, but it exemplifies a reasonably well-coded real-world project. Although it was code complete as of the end of Lesson 22, it was not finished, missing icons and other resources. This concludes the project with adding resources, a basic discussion on source code licensing, and packaging a program for Haiku.
Usability is one of my pet topics. Although less so now than in years past, it is all too often ignored or not given enough priority. This lesson scratches the surface from a developer's point of view. I'm no usability expert, but I do know a thing or two. This lesson is a must-read for any budding developer, and by the end of it, we will have a good real-world program to show off which is just shy of being ready for a release.
This lesson continues with delving into the Storage Kit, reading and writing files. We also start writing code for the final project of the Learning to Program With Haiku series which will be developed over the course of several lessons.
Learning to Program With Haiku, Lesson 21
Moving on from exploring the Interface Kit, we turn our attention to the Storage Kit in this lesson. We take a look at the kit from a broad perspective and also begin using some of its many classes. We take a break from writing GUI applications and, instead, write a console directory-listing program using C++.
Learning to Program With Haiku, Lesson 20 Source Code: 20ListDir.zip
|
OPCFW_CODE
|
Today marks the release of a larger project that I have been doing, called Imaanvas. It has been in the works for months and is meant to be a web based version of a program called MSW Logo, which is now called FMS Logo. It has a few tweaks, though, to make it slightly easier to use. It was originally written for a year 5 class in a primary school.
It also does not come with all the commands present in Logo, so if you find that a command is missing please comment below and I will try to add it for you as soon as I have some spare time. (You can also write the command yourself, and I can add it that way, but you will probably need the original source code for that - gulp is used to compact the code - send me an email if you want the code)
Imaanvas is a horribly complex piece of code - so if you encounter any bugs (which is likely), please either leave a comment below, or send me an email. Remember to be descriptive about the bug that you have found, otherwise I won't be able to track it down and fix it! Also remember that Imaanvas is meant for modern browsers, so if Imaanvas doesn't work in your browser, try upgrading it to its latest version.
This post is late since essential maintenance work had to be carried out to try and reduce the amount of spam that is being posted.
The online tool I am releasing today is another one of the projects I did a while ago (December 2013 in this case). The difference here is that someone asked me to build it for them. The tool allows you to stitch a number of still images into an animated gif.
Having an online version of the tool on a server that I own seems like a good idea (so I can keep it up to date) - so I am releasing it on this site for you to use.
A description of the options and known issues can be found below. If you just want to skip all that and play around with it for yourself, please follow this link:
A description of all the options available can be found below:
The number of repeats. -1 = no repeat, 0 = repeat forever.
The default delay each still image should be given when first dragged into the tool. Remember to set this before you drag all your images in to save time!
Frames per second
An alternative method of setting the default delay. Simply enter the number of frames you want to show per second.
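The conversion behind that option is straightforward. A sketch (assuming the tool stores the per-frame delay in milliseconds, which the post doesn't actually state):

```python
def fps_to_delay_ms(fps):
    # Each frame is shown for 1000/fps milliseconds; round to a whole ms.
    return round(1000.0 / fps)
```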
The number of threads to use to render the animated gif. For optimum performance enter the number of CPU cores you have. This will speed up the rendering process for large gifs.
The quality of the resultant gif. In the code this is referred to as the pixel sample interval. I don't really understand this setting myself - if you do, please leave a comment below and I will update both this post and the tool itself.
A '*' indicates an advanced setting.
The 'x' button to remove an image is buggy. - Fixed! A new image removal system has been implemented to fix this bug.
The size of the rendered gif is not updated when images are removed - Fixed! The maximum image dimensions are now recalculated when an image is removed.
|
OPCFW_CODE
|
If you want to start your own business but you don't have enough cash, there are some solutions you can find these days. The most common one is borrowing money from a bank or another financial institution. Another option is to get help from WeTrust.
WeTrust is an institution that works like a bank. It aims to provide financial solutions from the banking and insurance industries so that you can start your own business immediately.
WeTrust is a collaborative saving and insurance platform that is autonomous, frictionless, and decentralized.
Before you decide to use this service, you might need to learn some important details about WeTrust. Basically, WeTrust service can be divided into two main systems.
- The first one is Current Systems
The characteristics of current systems include profit to stakeholders, excessive risk taking, reliance on third parties, and conflicts of interest involving users.
- The other system in this service is called the Future System
This is the system that the service aims to achieve. The characteristics of this future system include self-reliance, inclusive access, decentralized risk, dividends to participants, and users involved through aligned interests. As you can see, the future system is better than the current system, and this is why you should use this service for building your own business.
One of the products offered by WeTrust is called ROSCA (Rotating Savings and Credit Associations). There are several benefits that you can get if you use this product to build your own business. The first is that this product can serve as both credit and insurance. Another benefit is that the interest stays within your local community. This product also offers lower default rates. Another great thing about this product is that you no longer need a trusted third party once you use it.
Team of WeTrust
I. THE CORE TEAM
- George Li, Product Manager
- Patrick Long, Strategy and Operations
- Ron Merom, CTO
- Tom Nash, Front End Developer
- Shine Lee, Smart Contract Developer
- Mivsam Yekutiel, Ph.D, Research and Partnerships
- An Zheng, Full Stack Developer
- Leon Di, Product Marketing
- Justin Zheng, Marketing Associate
- Fanli Ji, China Community Manager
- Catalina Lastra, Latin America Community Manager
- Jessica Aharonov, Graphic Designer
- Emin Gün Sirer, Blockchain Advisor
- Michael Casey, Fintech Advisor
- Michael Hexner, Business Strategy Advisor
- Fennie Wang, Legal Advisor
- Benedict Chan, Blockchain Advisor
- Daniel Cawrey, Marketing Advisor
or Join Slack channel to communicate with WeTrust team!
Other Links- Official Website: https://www.wetrust.io
- Blog : https://medium.com/wetrust-blog
- Twitter: https://twitter.com/WeTrustPlatform
- Facebook: https://www.facebook.com/wetrustplatform
- Reddit: https://www.reddit.com/r/WeTrustPlatform
- Slack: https://www.wetrust.io/slack-invite
- GitHub: https://github.com/WeTrustPlatform
- Whitepaper: Download
- Announcements: https://bitcointalk.org/index.php?topic=1773367.0
Updates- Feb 22, 2017 : An Interview with Emin Gün Sirer, WeTrust Blockchain Advisor
- Feb 23, 2017 : WeTrust FINALIZES Escrow Partners!
- Feb 28, 2017 : TokenMarket interviews WeTrust
- Apr 04, 2017 : Interview with Jae Kwon, Founder and CEO of Tendermint
- Apr 07, 2017 : WeTrust’s Token Distribution Plan
- Apr 07, 2017 : Interview with Julian Zawistowski, Founder and CEO of Golem
- Apr 09, 2017 : How to use MyEtherWallet to create a private address for receiving TRST tokens
More updates will be posted here, so... stay tuned!
|
OPCFW_CODE
|
From this earlier post we learnt to easily train a specialized image classification model with Transfer Learning without writing a single line of code. With the help of the retrain script provided by the Google Codelab Tensorflow for Poets, all we need is a directory structure containing directories of training images like this:
|- my-training-images/
  |- daisy/
    |- some-image-1.jpg
    |- some-image-2.jpg
    |- some-image-3.jpg
  |- dandelion/
    |- some-image-4.jpg
    |- some-image-5.jpg
    |- some-image-6.jpg
  |- roses/
    |- some-image-7.jpg
    |- some-image-8.jpg
    |- some-image-9.jpg
  |- sunflowers/
    |- some-image-10.jpg
    |- some-image-11.jpg
    |- some-image-12.jpg
  |- tulip/
    |- some-image-13.jpg
    |- some-image-14.jpg
    |- some-image-15.jpg
- each sub-directory takes the name of the training image label (e.g. daisy)
- within the sub-directory, we store the training images belonging to that class. It doesn't matter how we name these images as long as they are stored within that folder.
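The layout above can be created with a few lines of Python. The label names are taken from the example; the function name is ours:

```python
import os

labels = ["daisy", "dandelion", "roses", "sunflowers", "tulip"]

def make_training_dirs(root, labels):
    # One sub-directory per class label, under the training-images root.
    for label in labels:
        os.makedirs(os.path.join(root, label), exist_ok=True)
```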
Say we’d like to obtain some training images of different types of mushrooms, one way to get these images is via ImageNet. Here is the official description of the site:
ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images. Currently we have an average of over five hundred images per node. We hope ImageNet will become a useful resource for researchers, educators, students and all of you who share our passion for pictures.
Each image category is represented in the form of a WordNet ID, also known as a wnid.
Lookup Category and WNID
For starter, go to the ImageNet website.
Search for an image category that we want. Say, fly agaric (a type of mushroom).
- we should see a grid of fly agaric images.
- we should see the WNID of this category, from the URL: http://www.image-net.org/synset?wnid=n13003061
Now, if we scroll down along the hierarchy bar on the left, we should eventually see our fly agaric (note the nested structure).
Note above that our fly agaric is highlighted in blue in the scroll bar.
Let’s return to the main screen:
Click on the Treemap Visualization tab:
The above snapshot tells us that fly agaric is a leaf node, i.e. there are no sub-categories underneath this node. If we click on the icon on the top right, it will copy one WNID (n13003061) to the clipboard. (If however the category is not a leaf node, clicking the icon will copy all the immediate child WNIDs underneath it as well. But that is another story to tell.)
Now, if we click the Downloads tab, we may find the list of image URLs associated with this WNID:
Clicking the URLs button will reveal the list of URLs. Note the API in the address bar: http://www.image-net.org/api/text/imagenet.synset.geturls?wnid=n13003061.
This API is handy. Basically, by providing the API a wnid, it returns the list of image URLs associated with that wnid. If we copy a handful of URLs and paste them in a browser, we can see for ourselves that the images are indeed fly agaric. Warning: ImageNet is not perfect. There could be errors (hopefully only a small portion). It's constantly improving with its internal validation system.
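A minimal sketch of calling this API from Python. The endpoint is the one shown above; note that ImageNet has since restricted public access to parts of its site, so the network call may no longer succeed:

```python
import urllib.request

API = "http://www.image-net.org/api/text/imagenet.synset.geturls?wnid="

def geturls_endpoint(wnid):
    # Build the request URL for the geturls API shown above.
    return API + wnid

def fetch_image_urls(wnid, timeout=30):
    # Network call: returns one image URL per non-empty line of the response.
    with urllib.request.urlopen(geturls_endpoint(wnid), timeout=timeout) as r:
        body = r.read().decode("utf-8", errors="replace")
    return [u.strip() for u in body.splitlines() if u.strip()]
```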
Download Imagenet Images by WNID
Now that we know how to resolve a wnid from a name (e.g. fly agaric) via the ImageNet website, we can download the images for a desired wnid with the help of a very handy tool called ImageNet_Utils, an open-sourced tool published on GitHub by tzutalin. Follow the instructions to download images.
Here are the steps that I follow to download images for fly agaric (wnid = n13003061):
- git clone the repository:
$ git clone https://github.com/tzutalin/ImageNet_Utils
- navigate into the repository:
$ cd ImageNet_Utils
- the script seems to work well with Python 2.7 (and not so well with Python 3.x). So let's create an Anaconda Python environment (feel free to use Python 3.x if you like; I used Python 3.6 originally and bumped into errors, so I'm guessing the scripts aren't Python 3.x compatible yet - maybe):
$ conda create -n py27p13 python=2.7.13
- activate the conda environment (so we have Python 2.7 enabled in an isolated environment):
$ source activate py27p13
- do a one-liner command:
(py27p13) $ ./downloadutils.py --downloadImages --wnid n13003061
- this will start the download. Note that there may be errors / anomalies - which I will describe later.
- the images will be saved to the repository:
- (at a later stage) move the entire image folder to somewhere else. Restructure it to make it suitable for our transfer learning exercise.
For example, move to somewhere else and restructure the directory like this:
|- my-training-images/
  |- n13003061_fly_agaric
    |- image-1.jpg
    |- image-2.jpg
    |- etc...
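That move-and-rename step can be sketched like this. The paths and the folder-naming convention are examples only, not output of the download tool:

```python
import os
import shutil

def restructure(src_dir, dest_root, wnid, label):
    # Move the downloaded folder under our training root and give it a
    # readable "<wnid>_<label>" name (spaces become underscores).
    dest = os.path.join(dest_root, "%s_%s" % (wnid, label.replace(" ", "_")))
    shutil.move(src_dir, dest)
    return dest
```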
Limitation of ImageNet Image Download
So far I've come across some small anomalies / limitations of downloading images from ImageNet via URLs. This is not a significant general problem, though it is worth mentioning here.
If a URL is no longer valid, we may get errors like these during the download (just some examples I've seen).
HTTP Error 403: Forbidden HTTP Error 404: Not Found Fail to download <urlopen error [Errno 51] Network is unreachable>
The download script will simply print the error and move on to the next URL, and so on.
Flickr dummy image
When an image no longer exists on Flickr (where some images are stored), you will see a dummy image that looks like this:
You will see quite a number of these. The strategy is to either remove them manually (by eye), or find a programmatic way to remove them later on.
Update 2017-12-13: I just noticed that images like this have a file size of about 2 KB. Most good images have a size of at least 40 KB. A quick win could be to sort by file size in the Mac Finder window, and filter away the very small images, such as this.
The download process is not perfect. Sometimes an image could be partially downloaded. For instance, when I click on one of these partially downloaded images, it just loads forever (or shows signs of errors). Such as this one:
These images tend to have a really small file size, of less than 2 KB (as far as I could see).
A quick win is probably to just filter out files like this, and only attempt the download again later when we have the time.
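That file-size quick win can be done programmatically. The 10 KB threshold below is our guess, sitting between the ~2 KB bad files and the >= 40 KB good images noted above:

```python
import os

def find_small_files(folder, min_bytes=10 * 1024):
    # Return the (sorted) paths of files below the size threshold -
    # likely Flickr placeholders or partial downloads.
    small = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and os.path.getsize(path) < min_bytes:
            small.append(path)
    return sorted(small)
```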
Non Image Type
You'd also notice that some of the downloaded files are not actually images (.png, etc.), but text files. This is probably because some URLs are no longer valid and the server decided to respond with a text file instead of an image.
This can be filtered away easily by file type. (do a sort in Mac finder, or programmatically using filename extension).
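A sketch of that extension filter; the set of "image-looking" extensions is our assumption:

```python
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".bmp"}

def find_non_images(folder):
    # Flag any file whose extension doesn't look like an image file.
    return sorted(n for n in os.listdir(folder)
                  if os.path.splitext(n)[1].lower() not in IMAGE_EXTS)
```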
In this article we have:
- define our objective: to have a directory structure for storing training images, for performing transfer learning - as required by this earlier post, or Google Codelab Tensorflow for Poets.
- introduce ImageNet: how to resolve a WordNet ID (wnid) given an image category name, and use the API to get the image URLs associated with the selected wnid
- introduce ImageNet_Utils, a handy tool to ease the ImageNet image download process. This downloads the images via URLs.
- repeat the process above, and download training images per mushroom category (e.g. fly agaric, common stinkhorn, scarlet elf cup, etc.)
- split the downloaded images into 3 sets: training, validation, and test. From the readme of ImageNet_Utils, the tool appears to have the feature to accomplish this too.
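If you would rather do the split yourself instead of relying on ImageNet_Utils, here is a simple deterministic sketch; the 70/15/15 ratios are arbitrary defaults:

```python
import random

def split_dataset(filenames, train=0.7, val=0.15, seed=0):
    # Shuffle deterministically (seeded), then cut into train/val/test.
    files = sorted(filenames)
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train = int(n * train)
    n_val = int(n * val)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])
```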
|
OPCFW_CODE
|
High load average is an issue which is familiar to almost all server owners who have popular sites and a lot of traffic. Usually it indicates that server can’t properly handle visitors’ requests. This short article aims to describe several simple items to consider while analyzing high load average on your Linux server.
1. MySQL slow query log
For high-load projects it's always better to keep the MySQL slow query log enabled for analysis. It can be enabled this way:
slow-query-log = 1
long-query-time = 1
Make sure MySQL has permission to write the slow log. Also note that by default the MySQL server doesn't rotate the slow log itself, so you will need to do it - it can grow to many gigabytes. It's convenient to analyze the slow log with the mysqlsla tool.
2. Analyzing slow queries
To analyze the slow queries you can use the SQL operator EXPLAIN. It shows how the MySQL server performs the query, whether it uses indexes, and so on. Here's an example:
mysql> explain SELECT 1 FROM `archive`.`log` LIMIT 1;
3. MySQL settings, InnoDB, MyISAM
MyISAM and InnoDB are the most often used MySQL storage engines. There are too many differences between them to describe here, but one of the most important is that MyISAM doesn't support transactions while InnoDB does. Use mysqltuner.pl to get information on MySQL settings and follow the optimization tips. Make sure you understand a parameter's meaning before changing it; if you don't, you'd better ask someone who does. A first optimization step could be increasing the key buffer and the InnoDB buffer pool. MyISAM keeps only indexes in memory, while InnoDB tries to keep the data along with the indexes. So InnoDB performs best if its data fits into memory (i.e. if you have 4G of InnoDB data, set innodb_buffer_pool_size to at least 5-6G). Besides, you can use mysqlreport and innotop to get more information on what is going on.
4. Swap, IO wait
If your server swaps a lot, it will be one of the sources of performance issues. Since traditional HDDs are much slower than RAM, processes could wait quite a long time for a response. You can check swap usage with the top or free commands. One of the most important parameters here is "IO wait". Here is a sample output of top:
Cpu(s): 12.1%us, 1.8%sy, 0.0%ni, 85.1%id, 0.3%wa, 0.5%hi, 0.2%si, 0.0%st
Mem: 8166644k total, 7773496k used, 393148k free, 165328k buffers
Swap: 8387572k total, 259428k used, 8128144k free, 5404860k cached
If its value is more than 30-40%, you might want to optimize your server and/or add more RAM.
5. Apache and nginx
During the last several years nginx has gained high popularity, and more and more users decide to use nginx instead of Apache, although Apache is still one of the most popular web servers and the most feature-rich one. Since nginx uses epoll for working with socket states and file AIO to serve static files, it often performs much better than Apache. So if your application can be deployed with nginx you might want to consider the migration. At present almost all popular CMS and frameworks can be deployed with nginx: Django, Yii, WordPress and so on. Be aware that nginx doesn't support .htaccess files, so if you have them you will need to rewrite them for nginx. Fortunately there are free online converters which can help you with this.
6. Unnecessary services
If don’t need some services it would be better to disable them. It also better from security point of view. On Centos server you can use chkconfig to disable the service. On latest version of Fedora it can be done by systemctl. If you don’t use NFS you can disable all related services and so on.
7. Use performance monitoring software to track the metrics.
There are a lot of both self-hosted and web-based solutions for tracking performance metrics. Graphs make it much easier to analyze the dynamics.
|
OPCFW_CODE
|
How to get Salesforce standard objects using the API
I'm trying to get all Salesforce system objects, such as Set, List, System, and their methods. I used the Tooling API, namely the SymbolTable class, to get the methods and variables of specified classes, but I can't find a way to get the system objects. Please let me know if there is a way to solve this problem.
Are you trying to make an IDE or something? A recent update of the Force.com IDE does the job of fetching the SymbolTable for you. You can refer to its code on GitHub. Hope that helps,
https://developer.salesforce.com/page/Force.com_IDE
@susanoo Yes, I am making an IDE. I'm trying to build autocompletion, so I need some system objects. SymbolTable doesn't contain system objects, such as Set and Map, or their methods.
The Force.com IDE provides auto completion for system classes like String. I tried the API v35 IDE and autocompletion worked like a charm.
A friend of mine made an auto-complete IDE plug-in for Eclipse; it supports Set, List and Map. Apex Editor LS is its name. https://marketplace.eclipse.org/content/apex-editor-ls
@susanoochidori Thank you for your advice, but I'm developing my own autocompletion.
At present, Apex code and Apex APIs do not support real introspection, and there is no way to extract a list of standard objects/methods/signatures/properties in an automated way like you can with real programming languages.
Besides, if you are building an IDE like product (and thinking about list of suggestions for auto-completion) you will also want to provide some documentation alongside method/property signature, which you would not get even if introspection was supported.
I do not know this for certain, but I would guess that all existing Force.com IDE-like tools use static (hand-made) maps of class/method/doc.
e.g. Mavens Mate completions.json
You could attempt to extract relevant information from Apex Documentation.
Unfortunately, the existing Apex documentation is not ideally structured for automated parsing. With automated parsing you either have to do a lot of manual post-processing or add lots of exceptions to your parser. After that, with each new SFDC release your parser may need to be updated to handle changes to the internal documentation structure. Sometimes those changes are small, sometimes not so small. The good news (for an IDE author) is that Apex does not change very much.
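A minimal sketch of the scraping approach in Python, using only the standard library. The HTML structure here is entirely hypothetical — real Apex documentation pages are structured differently, so the tag and class names would need to be adapted after inspecting the actual pages:

```python
from html.parser import HTMLParser

# Hypothetical doc fragment: assume each method signature is wrapped in a
# <span class="signature"> element. Real Apex doc pages differ.
DOC_HTML = """
<div class="method"><span class="signature">Boolean contains(Object element)</span></div>
<div class="method"><span class="signature">Integer size()</span></div>
"""

class SignatureParser(HTMLParser):
    """Collect the text of every <span class="signature"> element."""
    def __init__(self):
        super().__init__()
        self._in_signature = False
        self.signatures = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "signature") in attrs:
            self._in_signature = True

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_signature = False

    def handle_data(self, data):
        if self._in_signature and data.strip():
            self.signatures.append(data.strip())

def extract_signatures(html):
    parser = SignatureParser()
    parser.feed(html)
    return parser.signatures
```

The extracted signatures could then be dumped into a static completions file, much like the hand-made maps mentioned above.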
Thank you so much. I think I will parse the methods from the Apex documentation and use them for my autocompletion. As I understand it, for now there is no way to get the standard objects using the API.
|
STACK_EXCHANGE
|
Giving each piece of writing an equal opportunity
Many times, a thought has crossed your mind that would be worth sharing with others. Sometimes you started writing something about it, and then it remained in your notebook forever. If you feel you have described your idea / problem / rant / whatever well, you may decide to share it with your Facebook friends, or on your blog if you have one. But what about sharing it with “everybody”? Maybe it's worth it, but in any case it is not easy to reach an audience, and you're not necessarily prepared to spend the time it requires.
Let’s rephrase our problem. Many people can write. Suppose we found a way to encourage them to do so and to post their work. Now, we’ve got tons of small pieces, each one of them may be interesting or not. How do we give every piece of writing an equal access to readers?
A statistical testing approach
The concept I’m about to describe is not new. Yet, I believe it could be used more often and I’m definitely interested in applying it in many of my future projects.
There is absolutely no way (so far, I think) to automatically analyze the content quality and relevance of a text. On the contrary, anyone is able to judge a text he/she reads (subjectively, of course). So why bother with complicated algorithms or full-time paid staff for picking articles when you could simply ask people what they think?
More precisely, the algorithm works that way:
You have a set of pieces as input.
You have readers to which you present a front page with a list of articles, the kind of front page any online newspapers, blogs or RSS readers would have.
Now, here’s the trick: you don’t need to present the same front page to every reader (and here I’m not talking of personalizing the front page with respect to the reader interests because we’ll suppose that you are not tracking the users of your web service, thus you know nothing about them).
Your front page will present each new piece to a statistically relevant number of random readers.
These new pieces will not be singled out from the popular pieces. So each reader will see a front page with a list of articles. Among them most will be already popular and some will be new. Then, when a reader decides to read an article (any article), once he/she reaches the end of it, your system will strongly encourage him/her to give an appreciation (the way in which this would be done can be discussed).
So you will have feedback. Based on that feedback, the algorithm simply gives a popularity score to each piece. The more popular a piece is, the more often it reaches the front page. At some point, the most popular pieces will have gone viral and will appear on everyone's front page.
Of course, many questions remain. What kind of feedback do you ask readers for? How do you compute popularity? What to do with controversial pieces (articles which would get a lot of positive and a lot of negative feedback)? This just means that this algorithm is not as simple as I said. But it is only a matter of parametrization.
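The front-page selection described above can be sketched roughly as follows. All names and the scoring rule are illustrative (one possible parametrization), not a finished design:

```python
import random

def popularity(piece):
    """Naive score: fraction of positive reviews, with a neutral prior
    (Laplace smoothing) so brand-new data isn't over-trusted."""
    total = piece["likes"] + piece["dislikes"]
    return (piece["likes"] + 1) / (total + 2)

def build_front_page(pieces, slots=10, new_slots=3, rng=random):
    """Mostly popular pieces, plus a few randomly sampled new ones.

    Each reader gets a different random sample of new pieces, which is
    how every piece gets shown to a statistically relevant number of
    random readers.
    """
    new = [p for p in pieces if p["likes"] + p["dislikes"] == 0]
    rated = [p for p in pieces if p["likes"] + p["dislikes"] > 0]
    popular = sorted(rated, key=popularity, reverse=True)[:slots - new_slots]
    sampled_new = rng.sample(new, min(new_slots, len(new)))
    page = popular + sampled_new
    rng.shuffle(page)  # don't single out the new pieces
    return page
```

Handling controversial pieces or weighting feedback types would only change the `popularity` function, not the overall loop.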
This sort of testing is already widely used
Of course it is. Statistical testing is everywhere, particularly in scientific research. Now, I also believe that it is used by a lot of social media already (but inside messy and complicated algorithms). As a standalone technique for measuring popularity, it is probably not used enough.
Still, here are some organizations using that kind of technique that I’m aware of:
The world-changing organization Avaaz uses a kind of high-tech democratic principle: “campaign ideas are polled and tested weekly to 10,000-member random samples and only initiatives that find a strong response are taken to scale.”1
This is somehow different but Duolingo, a web and mobile application for learning new languages, uses A/B testing to determine how to improve the learning experience2.
## Presenting a new service. Code-name “Bloc”
Wait a minute, this is only conceptual! At the end of the article, you'll be asked to leave a comment saying whether we should build it.
This was imagined by Jules Zimmermann and myself.
We imagined this website on which anyone could post a new article. Articles would be required to be short (half a page) so that people can read them from beginning to end and give feedback, and no reader would be lost halfway. The articles could link to external references, but we would always ask readers to give us their opinion before following these links (only the main content gets reviewed).
The review could be as simple as one of the four options: Like, Dislike, Uninteresting, Flag for shocking content.
Moreover, any article could be shared on the web but we would not ask people who reached the article via a direct link to give feedback. Only the people we randomly select can give feedback. This is important so that someone who has already a lot of followers on social networks cannot artificially increase the popularity of his/her articles: each piece is judged on its own. (Nota: the popularity of a particular piece would not be displayed anywhere except maybe directly to its author.)
We would not provide any means to follow a particular author but they could provide links to their own webpages, blogs or social media accounts.
Anyone could post anonymously or under pseudonym.
Finally, “Bloc” would have both a front page and categories / topics and we would evaluate if a publication is popular for the general public or for the public of such and such categories only. To do so, we would not track reader preferences. We would ask authors to designate a few categories they feel their content fits in. Then we would apply our algorithm independently for each of these categories. If an article gains popularity in most of these categories, we would then test it also on the front page (which means testing how a general audience reacts to it) by applying our algorithm once again.
So, why do we call it “Bloc”?
“Bloc” is a Blog made by everybody who Collaborates. Bloc represents also the block design we had imagined for it. Each block would link to an article, I mean, a Bloc (that would be the name we would give to posts).
Medium.com is similar but yet different
Evan Williams, one of the founders of Blogger and Twitter, strikes back! I won’t give a full description of “What is Medium” here: there are plenty already available out there3 4 5 6. But I will still say a few things about it:
The core idea is the same3. Everyone has ideas. Giving an audience to everyone. Even if you don’t write often.
Their algorithm to give an audience to everyone seems more complicated (even if they don’t reveal it). And surely, it does not give an equal opportunity to every post.
They have stripped down many social media functions, but not all: in particular, it is possible to follow an author. I don’t see it as a problem though.
Several journalists have tested it and said it was a great writing tool4 5. I feel I should test it too to measure how well it reaches its goal: giving an audience to posts by non-famous people.
So, until further notice, we're not going to build “Bloc”. If you think the two things are different and we should make “Bloc” happen, please leave a comment saying so. Meanwhile, I think I'm going to try writing a post on Medium and see how it spreads (if it does). I would encourage anyone who likes writing to try it too, and if you're nice, you'll leave a comment about it here. In particular, the big question is “Did it help you reach an audience?”
Update: based on this post and the comments it received, it looks like Medium.com is moving away from its initial goal and becoming more or less yet another social network and blogging platform. Also, spoiler: their editor is not that good. Well, maybe for non-tech journalists it is, but I did not like it at all.
Ungerleider, N. (2014). How Duolingo Uses A/B Testing To Understand The Way You Learn. Co.LABS ↩
Williams, E. (2012). What we’re trying to do with Medium. Medium ↩ ↩2
McCracken, H. (2014). What Is Medium? Medium Is Pretty Cool, That’s What. Time ↩ ↩2
|
OPCFW_CODE
|
How to Download Ashampoo UnInstaller 10.00.10 for Free and Remove Unwanted Programs Completely
If you are looking for free and reliable software that can help you uninstall programs without leaving any traces behind, you might want to check out Ashampoo UnInstaller 10.00.10. This is a premium version of Ashampoo UnInstaller that comes with advanced features and supports multiple languages. You can download it for free from the link below and activate it with a path.
In this article, we will show you how to download and install Ashampoo UnInstaller 10.00.10 with Path on your PC. We will also explain the benefits of using this version and how to use its features. Let's get started!
What is Ashampoo UnInstaller 10.00.10?
Ashampoo UnInstaller 10.00.10 is a modified version of Ashampoo UnInstaller 10 that was released by a group of hackers called Gen2. They have added 22 language packs to the original installation files and provided a path that can bypass the activation process.
Ashampoo UnInstaller 10 is one of the best programs for installing, testing and removing software without worries. It monitors each installation extensively and logs all associated registry modifications and file changes. It then uses these logs to completely remove programs that are no longer wanted, with four deletion methods that ensure a more thorough removal than is possible with Windows' default means.
The languages included in this version are: Arabic, Brazilian, Croatian, Danish, Dutch, English, Finnish, French, German, Greek, Hungarian, Italian, Latvian, Norwegian, Polish, Portuguese, Russian, Slovenian, Spanish, Swedish, Turkish and Ukrainian.
Ashampoo UnInstaller 10.00.10 is compatible with Windows 7 or later. It supports both 32-bit and 64-bit systems.
How to Download Ashampoo UnInstaller 10.00.10 with Path?
To download Ashampoo UnInstaller 10.00.10 with Path, you need to follow these steps:
Click on the link below to go to the download page.
Select the language pack you want to download from the drop-down menu.
Click on the "Download" button and wait for the file to be downloaded.
Extract the file using WinRAR or any other software that can handle .rar files.
Open the extracted folder and run the "Setup.exe" file as administrator.
Follow the instructions on the screen to install Ashampoo UnInstaller 10.00.10 on your PC.
Open the "Path" folder and copy the "ash_inet2.dll" file.
Paste the file in the installation directory of Ashampoo UnInstaller 10.00.10 (usually C:\Program Files\Ashampoo\Ashampoo UnInstaller 10).
Replace the existing file if prompted.
Enjoy your free Ashampoo UnInstaller 10.00.10.
The download link is: https://example.com/download
What are the Benefits of Using Ashampoo UnInstaller 10.00.10?
Ashampoo UnInstaller 10.00.10 is not only free but also offers many advantages over other uninstaller software. Here are some of them:
You can enjoy the full features of Ashampoo UnInstaller 10, such as four deletion methods, snapshot technology, system maintenance and optimization tools, uninstall Windows apps and view ratings, and more.
You can remove programs completely and permanently, without any leftover files.
You can try out new software without worries, knowing that you can revert any changes made by installations if needed.
You can clean up your system from junk files, web browsing traces, registry entries, and more.
You can optimize your system performance and storage by removing unwanted programs and services.
You can choose from 22 languages to use in your software interface.
|
OPCFW_CODE
|
TiKV build error!!!
Question
When I build TiKV, something goes wrong.
--- stderr
CMake Warning at cmake/protobuf.cmake:48 (message):
gRPC_PROTOBUF_PROVIDER is "module" but PROTOBUF_ROOT_DIR is wrong
Call Stack (most recent call first):
CMakeLists.txt:118 (include)
CMake Warning at cmake/gflags.cmake:26 (message):
gRPC_GFLAGS_PROVIDER is "module" but GFLAGS_ROOT_DIR is wrong
Call Stack (most recent call first):
CMakeLists.txt:120 (include)
CMake Warning at cmake/benchmark.cmake:26 (message):
gRPC_BENCHMARK_PROVIDER is "module" but BENCHMARK_ROOT_DIR is wrong
Call Stack (most recent call first):
CMakeLists.txt:121 (include)
cc1: warnings being treated as errors
In file included from /root/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.4.0/grpc/third_party/boringssl/third_party/fiat/curve25519.c:38:
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.4.0/grpc/third_party/boringssl/third_party/fiat/../../include/openssl/sha.h:111: error: declaration does not declare anything
/root/.cargo/registry/src/github.com-1ecc6299db9ec823/grpcio-sys-0.4.0/grpc/third_party/boringssl/third_party/fiat/../../include/openssl/sha.h:112: error: declaration does not declare anything
cc1: error: unrecognized command line option "-Wno-free-nonheap-object"
gmake[4]: *** [third_party/boringssl/third_party/fiat/CMakeFiles/fiat.dir/curve25519.c.o] Error 1
gmake[3]: *** [third_party/boringssl/third_party/fiat/CMakeFiles/fiat.dir/all] Error 2
gmake[2]: *** [CMakeFiles/grpc.dir/rule] Error 2
gmake[1]: *** [grpc] Error 2
thread 'main' panicked at '
command did not execute successfully, got: exit code: 2
build script failed, must exit now', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/cmake-0.1.28/src/lib.rs:631:5
note: Run with RUST_BACKTRACE=1 for a backtrace.
Please help me, thank you.
Please provide your cmake version, rust toolchain (cargo, rustc, etc) version, the command line flag you used to build tikv, and maybe the Cargo.toml of your project (in particular the tikv related part), thank you.
Sounds like it's a grpcio related build error, you can go to github.com/pingcap/grpc-rs to see if there's already a solution.
OK,
cmake version 3.5.2
rust version 1.30.0
Cargo.toml
[package]
name = "tikv"
version = "2.1.0-rc.2"
keywords = ["KV", "distributed-systems", "raft"]
publish = false
[features]
default = []
portable = ["rocksdb/portable"]
sse = ["rocksdb/sse"]
mem-profiling = ["jemallocator"]
no-fail = ["fail/no_fail"]
[lib]
name = "tikv"
[[bin]]
name = "tikv-server"
[[bin]]
name = "tikv-ctl"
[[bin]]
name = "tikv-importer"
[[bench]]
name = "raftstore"
harness = false
[[bench]]
name = "benches"
[dependencies]
log = { version = "0.3", features = ["release_max_level_debug"] }
slog = "2.3"
slog-async = "2.3"
slog-scope = "4.0"
slog-stdlog = "3.0.4-pre"
slog-term = "2.4"
byteorder = "1.2"
rand = "0.3"
quick-error = "1.2.2"
tempdir = "0.3"
time = "0.1"
toml = "0.4"
libc = "0.2"
crc = "1.8"
fs2 = "0.4"
protobuf = "~2.0"
nix = "0.11"
utime = "0.2"
chrono = "0.4"
chrono-tz = "0.5"
lazy_static = "0.2.1"
backtrace = "0.2.3"
clap = "2.32"
url = "1.5"
regex = "1.0"
fnv = "1.0"
sys-info = "0.5.1"
indexmap = { version = "1.0", features = ["serde-1"] }
mio = "0.5"
futures = "0.1"
futures-cpupool = "0.1"
tokio-core = "0.1"
tokio-timer = "0.2"
serde = "1.0"
serde_json = "1.0"
serde_derive = "1.0"
rustc-serialize = "0.3"
zipf = "0.2.0"
bitflags = "1.0.1"
fail = "0.2"
uuid = { version = "0.6", features = [ "serde", "v4" ] }
grpcio = { version = "0.4", features = [ "secure" ] }
raft = "0.3"
crossbeam-channel = "0.2"
crossbeam = "0.2"
fxhash = "0.2"
derive_more = "0.11.0"
num = "0.2.0"
hex = "0.3"
rust-crypto = "^0.2"
[replace]
"raft:0.3.1" = { git = "https://github.com/pingcap/raft-rs.git" }
[dependencies.murmur3]
git = "https://github.com/pingcap/murmur3.git"
[dependencies.rocksdb]
git = "https://github.com/pingcap/rust-rocksdb.git"
[dependencies.kvproto]
git = "https://github.com/pingcap/kvproto.git"
[dependencies.tipb]
git = "https://github.com/pingcap/tipb.git"
[dependencies.prometheus]
version = "0.4.2"
default-features = false
features = ["nightly", "push", "process"]
[dependencies.prometheus-static-metric]
version = "0.1.4"
[dependencies.jemallocator]
git = "https://github.com/busyjay/jemallocator.git"
branch = "dev"
features = ["profiling"]
optional = true
[dev-dependencies]
test_util = { path = "components/test_util" }
test_raftstore = { path = "components/test_raftstore" }
test_storage = { path = "components/test_storage" }
test_coprocessor = { path = "components/test_coprocessor" }
criterion = "0.2"
arrow = "0.10.0"
[target.'cfg(unix)'.dependencies]
signal = "0.6"
[workspace]
members = [
"fuzz",
"components/test_raftstore",
"components/test_storage",
"components/test_coprocessor",
"components/test_util",
"components/codec",
]
[profile.dev]
opt-level = 0 # Controls the --opt-level the compiler builds with
debug = true # Controls whether the compiler passes -g
codegen-units = 4
# The release profile, used for cargo build --release
[profile.release]
lto = true
opt-level = 3
debug = true
# TODO: remove this once rust-lang/rust#50199 and rust-lang/rust#53833 are resolved.
codegen-units = 1
# The benchmark profile is identical to release, except that lto = false
[profile.bench]
lto = false
opt-level = 3
debug = true
codegen-units = 1
Maybe my CMake version is not correct
CMake 3.5 is fine.
I cannot find a solution, this is terrible.
Can you try replacing
grpcio = { version = "0.4", features = [ "secure" ] }
with
grpcio = { version = "0.4", features = [ "secure", "openssl" ] }
Sorry, it's in an unpublished version of grpcio.
I've found a similar issue: https://github.com/pingcap/grpc-rs/issues/110
I tried it, and it said:
the package tikv depends on grpcio, with features: openssl but grpcio does not have these features.
I will look at the issue.
@ice1000
I found the fault: the cmake version must be >3.8 now. Thank you very much, bro!
Can you elaborate? What is actually causing the issue?
@ice1000
grpc must be built with a cmake version >3.8. I installed cmake version 3.12.0, and it works.
@ice1000
https://github.com/pingcap/grpc-rs
You can see that :
Prerequisites
CMake >= 3.8.0
Rust >= 1.19.0
By default, the secure feature is enabled, therefore Go (>=1.7) is required.
But pingcap/grpc-rs#250
And in grpc's code base
cmake_minimum_required(VERSION 2.8)
:confused:
I'm happy that CMake >3.8 resolves your issue, but as one of the maintainers of grpcio, I'd like to find the actual cause and try to make it compatible with Ubuntu's default CMake version (3.5, which can be easily installed via sudo apt install cmake)
This is why I commented https://github.com/tikv/tikv/issues/3748#issuecomment-436471489 , because cmake 3.5.1 works fine with grpcio in pingcap/grpc-rs#250 .
@ice1000
If I find the same bug, I will discuss it with you, sir.
Thank you!
|
GITHUB_ARCHIVE
|
How suicidal are electric "suicide showers?"
Some people refer to the showers commonly found in Africa, Latin America and Southeast Asia, which have an electric heating element built into the head, as "suicide showers" due to the apparent risk of electric shock. How high is this risk really? Even if one were to get shocked, how likely is it to be at a dangerous or even fatal level?
You might be interested in this physics.se question on "suicide showers". It doesn't directly answer "how likely is it to get shocked", but contains some interesting information (such as that the current-carrying wire is in direct contact with the water!).
This might be a better fit for Electrical Engineering StackExchange.
Are you looking for a handful of personal anecdotes (at least 2 out of ~7000000000 people claim to have survived) or some more substantial evidence?
@redgrittybrick their existence would suggest the risk is relatively low; I was looking for something more specific.
Almost all the answers are just personal anecdotes. And, of course, the population of people who can leave answers is very biased, as anyone who died from one of these cannot possibly leave an answer.
I live in Brazil and have used this kind of shower for a good part of my life. In the summer we actually remove the gas shower and install the electric shower, because it is cheaper.
So, with that said, I'm pretty confident that it depends more on the skill of whoever did the installation than on the shower itself.
I've been showering this way for twenty years and never had a problem.
I can understand why foreigners would be scared to the point of naming it a 'suicide shower', but in my opinion, there's no need for such fear.
These are common in rural areas of Ecuador and Peru. I have used them for years and only got some mild shocks a few times. More than 95% of the time, they run fine and without risk. I guess it depends on the installation and maintenance, as the ones that shocked me were in very remote areas. The ones in hotels were pretty much always safe.
A shower that only shocks me 1 time in 20? Sign me up!
@jpatokal: Don't forget to sign up for a Macbook then. Mine shocks me more often than 1 time in 20 if I use it while charging: https://apple.stackexchange.com/questions/32417/how-can-i-avoid-my-macbook-pro-giving-me-minor-shocks
@lambshaanxy I mean, some mornings I could do with a good zap to help me fully wake up... Maybe it's a feature, not a bug.
I was shocked by one of these one time, in Costa Rica. It was just a quick jolt, nothing too painful and nothing lasting. Different brands and models are certainly different though, so always worth it to be careful.
I wouldn't stress out about it and I certainly wouldn't avoid staying in a place or visiting a country that is known to use them.
You will find something very similar, not in the shower head but on the wall of your shower, in many bathrooms in Germany. Usually 10-20 kilowatts, with an electronic fuse that shuts down the power within a microsecond if any electrical current is misdirected, and it produces water at the exact temperature that you want, immediately.
https://www.siemens-home.bsh-group.com/de/produktliste/warmwassergeraete/durchlauferhitzer
I'd love one of those in my home, but a UK electrician would probably get a heart attack if you ask them to install it. They are supposed to be very energy efficient, and if you use solar energy to create warm water, they can easily just add that little bit extra temperature that you want.
In the UK you will find many electric showers (water heaters in the shower cubicle) and I doubt a UK electrician would blink an eye at the German ones. I have never seen one like that (German or UK) in the Netherlands.
The link's gone dead, but I suspect that's what's called a "tankless water heater". It uses a different and much safer construction: the heating element heats the pipe the water runs through, requiring a whole cascade of failures for the water to be electrified.
I've used good ones and not-so-good ones. The electric element is in the water, but of course you can avoid getting shocked if you avoid putting yourself in a place where you close the circuit. i.e, don't touch metal plumbing. The best is if the shower floor + wall is plastic and you only touch plastic while the heater is on. The one you pictured would be a lot safer if they moved the on/off switch a few inches over so you could more easily avoid touching the metal pipe until it's off.
I grew up with one of these in Peru. Issues:
(a) if your head hits the shower head, you will get shocked. It is unlikely to be life-threatening, but it is extremely unpleasant.
(b) if water pressure goes down drastically (as in: someone else in the house flushes a toilet) then you will get scalded. The shower head may even be damaged.
That said, I lived for a couple of years in a rented place in the UK with a tankless electric heater (on the shower wall, not above me), and I never had a problem. I'd be curious to understand the difference in how the two kinds of electric shower operate.
I also do not know how well a tankless electric heater of that kind would cope with variations in water pressure. (You can gather that the range of water pressure we are talking about is "between low and nonexistent".)
I am currently staying in Uganda, and I have to deal with this kind of shower again. I am simply taking cold showers. Fortunately there is a switch that cuts off electricity supply to the bathroom altogether.
As for why some Brazilians praise "suicide showers": Brazil produces them and exports them to the rest of South America.
... and to Africa. The shower head in my university guest-house room in Uganda looked familiar - I just looked it up, and, sure enough, it is Brazilian, even if the name sounds Italian.
They are perfectly safe, as long as they are installed properly.
In my 36 years of life, having used them for practically all of it, I have never had any problems.
There are brands in Brazil that develop showers of the highest quality.
Used these showers every day for two years and never had a problem. I noticed that 220 volts produced better hot water than 110. Don't be scared.
|
STACK_EXCHANGE
|
package waypoint
import (
"encoding/json"
"fmt"
"io"
)
// A GeoJSONFormat is a GeoJSON format.
type GeoJSONFormat struct{}
// A GeoJSONWaypoint is a GeoJSON waypoint.
type GeoJSONWaypoint struct {
ID string `json:"id"`
Type string `json:"type"`
Geometry struct {
Type string `json:"type"`
Coordinates []float64 `json:"coordinates"`
} `json:"geometry"`
Properties struct {
Color string `json:"color"`
Description string `json:"description"`
Radius float64 `json:"radius"`
} `json:"properties"`
}
// A GeoJSONWaypointFeatureCollection is a GeoJSON FeatureCollection of GeoJSON
// waypoints.
type GeoJSONWaypointFeatureCollection struct {
Type string `json:"type"`
Features []GeoJSONWaypoint `json:"features"`
}
// NewGeoJSONFormat returns a new GeoJSONFormat.
func NewGeoJSONFormat() *GeoJSONFormat {
return &GeoJSONFormat{}
}
// Extension returns f's extension.
func (f *GeoJSONFormat) Extension() string {
return "json"
}
// Name returns f's name.
func (f *GeoJSONFormat) Name() string {
return "geojson"
}
// Read reads a Collection from r.
func (f *GeoJSONFormat) Read(r io.Reader) (Collection, error) {
var wfc GeoJSONWaypointFeatureCollection
if err := json.NewDecoder(r).Decode(&wfc); err != nil {
return nil, err
}
if wfc.Type != "FeatureCollection" {
return nil, fmt.Errorf("expected FeatureCollection, got %v", wfc.Type)
}
var c Collection
for _, f := range wfc.Features {
if f.Type != "Feature" {
return nil, fmt.Errorf("expected Feature, got %v", f.Type)
}
if f.Geometry.Type != "Point" {
return nil, fmt.Errorf("expected Point, got %v", f.Geometry.Type)
}
if len(f.Geometry.Coordinates) < 3 {
return nil, fmt.Errorf("expected at least 3 coordinates, got %d", len(f.Geometry.Coordinates))
}
t := &T{
ID: f.ID,
Description: f.Properties.Description,
Latitude: f.Geometry.Coordinates[0],
Longitude: f.Geometry.Coordinates[1],
Altitude: f.Geometry.Coordinates[2],
Radius: f.Properties.Radius,
// Color: f.Properties.Color, // FIXME
}
c = append(c, t)
}
return c, nil
}
// Write writes c to w.
func (f *GeoJSONFormat) Write(w io.Writer, wc Collection) error {
return json.NewEncoder(w).Encode(wc)
}
// MarshalJSON implements encoding/json.Marshaler.
func (w *T) MarshalJSON() ([]byte, error) {
o := map[string]interface{}{
"id": w.ID,
"geometry": map[string]interface{}{
"type": "Point",
"coordinates": []float64{w.Latitude, w.Longitude, w.Altitude},
},
"type": "Feature",
}
properties := make(map[string]interface{})
if w.Color != nil {
r, g, b, _ := w.Color.RGBA()
properties["color"] = fmt.Sprintf("#%02x%02x%02x", r/0x101, g/0x101, b/0x101)
}
if w.Description != "" {
properties["description"] = w.Description
}
if w.Radius > 0 {
properties["radius"] = w.Radius
}
if len(properties) > 0 {
o["properties"] = properties
}
return json.Marshal(o)
}
// MarshalJSON implements encoding/json.Marshaler.
func (wc Collection) MarshalJSON() ([]byte, error) {
o := map[string]interface{}{
"type": "FeatureCollection",
"features": []*T(wc),
}
return json.Marshal(o)
}
|
STACK_EDU
|
You are aiming at the wrong target. Google blindly indexes what it finds on the web. The right solution was for the EU to require the sites hosting the source articles to include a "do not index" meta tag which Google would then respect. Put this burden where it belongs - on the author of the stories, not the search engine.
There is a difference between forgiven and forgotten. Until we invent time machines undoing history is impossible. You can pass all of the laws you want but you will never be able to erase the event.
Trying to get Google to stop indexing is just going to result in a giant game of whack a mole like we have with DMCA take down notices. No matter how many million take downs they file the mole can never be erased.
Heck, I can even see source sites generating automated ways to combat this. Continuously keep poking Google to reindex your site. When you see the Google crawler insert meaningless random tidbits into the URLs. Now the other side of the robot war will keep issuing takedowns on these randomized URLs but since there is a cycle time of a week or so you will always have a set of working URLs in the Google index.
The ruling is insane. If the EU really wants to implement this insanity the best way would be for the sites hosting the article to include a 'do not index' meta tag which Google would then respect. Doing it that way places the burden where it belongs - on the author of the stories, not on the search engine.
Going after the URLs directly at Google is a total exercise in a whack a mole since the links are always changing. You're just going to end up with another pile of automated systems generating millions of takedown requests. When the source sites disappear they will change their URLs and then the cycle will endlessly repeat.
The AsiaRF one includes the small PCB antenna in the photo. The PCB antenna has twice the range of those tiny chip antennas. Since there is a jack there you can use larger antenna if you want. Another advantage to the PCB antenna is that you can move it around and aim the signal where you need it.
In Vocore's blog he says that his external antennas did not perform as well as the chip one. I suspect that is because he doesn't own the expensive test equipment needed to adjust his RF path to match the external one he picked. In general, bigger is better with antennas. He's also wrong about needing two antennas for 802.11N support on the RT5350.
I don't believe the RT5350 has a security unit on it so it shouldn't be possible to lock the boot loader.
The $20 Vocore is for the module only. If you want the Ethernet, USB and power connectors you need to order the $40 Vocore+Dock option. Instead, I'd recommend getting the $38 option from the AsiaRF campaign. https://www.indiegogo.com/proj...
AsiaRF is going to ship three months earlier and support for it is already checked into OpenWRT.
These units aren't really the same thing as a RaspPi. The RaspPi is oriented towards having a GUI and screen; these units are oriented towards networking and embedded control. The units are also tiny - about one cubic inch, many times smaller than a RaspPi.
There is another similar project simultaneously up on Indiegogo from AsiaRF
It was put up about a week later so its funding is not as far along. There are still a few Early Birds left.
It is based on the same chip and is around the same price. The main difference is that the AsiaRF module has already gone through CE/FCC testing and is already in production, so there is very little risk of the project not shipping. Support for the AsiaRF unit is already checked into OpenWRT.
I find it interesting that they are offering to design and build 10 custom boards with wifi/Ethernet to your spec for $5000. Similar design work in the US/EU would be $30,000 or more.
Now you're just down to a cost benefit analysis. Is the value add from the publisher worth the money they will be taking? This answer will vary with the perceptions of the author and the cost of the publisher. The trend line on this answer is pointing in the direction of the publisher not being worth the price being asked.
Maybe you are looking at this wrong -- start a side-line business being an editor for these people. Are there places that will proofread a book by email for $250? Maybe $1000 to do major editing on it?
I have to agree with this, the need for a publisher is disappearing just like the need for a recording label. Stross should self publish and then cut a direct deal with Amazon. He'd probably end up with more money that way.
Since he's a well-known author, maybe try putting his self-published books up on Indiegogo first. He might net enough from doing that for each book that the later revenue from Amazon is just gravy.
Maybe do some fact checking first...
All of the Allwinner CPUs will boot from an appropriately formatted SD card and ignore the OS in flash. I don't know what wifi is in there, but 75% of Allwinner A31 based tablets out of China have Broadcom wifi in them and the drivers are in the mainline kernel. I believe KitKat is already available for the A31, and given how standardized these tablets are I don't foresee major problems upgrading.
Allwinner devices are far more hackable than Nvidia based ones. Most features of the Allwinner CPUs are documented except for the usual suspects -- graphics. A31 uses an Imagination PowerVR GPU. And it is not Allwinner that is keeping that GPU secret, it is Imagination.
$85 (with Slickdeals coupon) with free ship is an excellent price for this set of features. Anyway it is already sold out until they can get more from their OEM.
BTW - I do think there is a CPU security feature that can encrypt the boot, but I've never seen an Allwinner device that has turned it on.
It is not worth my time to fight with them. Declare it a loss, move onto the next vendor and don't buy from the previous vendor again. I used to fight with them, now I understand the rewards from the fight are not worth the cost on low priced products. Just blacklist the vendor and move on. Of course there are probably a few vendors that are exceptions to this rule but it is not worth my time to locate them.
You need to put this into perspective. It is unreasonable to expect a company to provide significant human support for a product you spent $30 on at a retail store. The company has probably only made $1-2 profit from the sale, if they provide easy to access support they will lose money on every sale. If you want lots of free support go buy a $3,000 Macbook.
Personally, I don't even bother trying to return or get support on anything under $100 any more. It just goes into the trash and I buy something similar from a different manufacturer and hope it works.
An even more efficient form of this is buying stuff from Aliexpress/DX/etc. Prices there can be as low as 20% of US retail for similar products. Sure I occasionally get junk or the wrong product, but just throw it in the trash and try a different vendor. The overall savings is worth eating the occasional fraud or hassling with Ali's escrow to stop payment. I fully expect little to no support on these purchases and I know returns are almost impossible.
Amazingly content free press release. No clue what these devices are. This is just fluff reporting with no details.
|
OPCFW_CODE
|
Please recommend USB 3.0 eSATA Dual 3.5-Inch SATA III Hard Drive RAID Enclosure
Hardware Clinic - www.hardwarezone.com.sg
As per topic.
Looking for 2 Bay 3.5 HDD Enclosure for RAID 1 with eSATA.
Found these two on Amazon
Thunderbolt 3 4-bay Enclosure only?
Hey, I've been waiting for a Thunderbolt 3 4-bay enclosure to use with my MacBook 15" for a long time now. I wonder if anyone knows of anyone supplying these? Needs to be for 3.5" drives. Found some very expensive options like this,
Looking for a drive enclosure...
Is there a drive enclosure out there that has USB 2.0 and eSATA III (6 Gb/s)? Also needs to be a 2.5 inch drive as that is more portable/smaller. I've been sort of looking (mostly eBay) but haven't had luck. Most of the ones I find are USB 2.0 with eSATA II (3 Gb/s). Now, USB 3.0 and eSATA II...
External Mirrored Array Setup (RAID 1) - eSATA or USB 3.0?
I am going with a Mini ITX setup, so my mirrored array that was once internal (previously 2 x 1TB, new will be 2 x 2TB, 3TB, or 4TB) will now need to be external. So, what's a good setup? I'm thinking an external enclosure to house the two drives, externally powered. My new MB has USB 3.0 ...
eSATA Storage Enclosure plugged right into motherboard?
Do you know if it is possible to bypass the whole eSATA multiplier raid cards and plug the eSATA cable directly into the motherboard? If not, do I need a specific motherboard that supports this, or must I deal with these crappy port multiplier cards? Thanks
Got new HDD's and RAID enclosure. Do I need to stress test the HDD's?
I have heard that it is smart to run some type of test software on new HDDs to force a drive to fail now and not in a few weeks after the warranty expires. I will be putting them in RAID 1 in my RAID enclosure. Should I test them in this setup or plug them in internally and test one at a time? ...
Looking for a good reliable 4 Bay RAID 1 enclosure
Hello all I'm looking for a reliable 4 Bay RAID 1 enclosure. I am not looking for a NAS enclosure, just a USB 3.0 or eSATA III one. This will not be attached to the network or a HTPC, just used for data backup, and then powered off. But I do need data redundancy since I have a large a...
Advice on an external RAID enclosure....
Hi all, I'd like to set up an external RAID array for video editing. I have done some searching, but remain a little confused. I currently have 4 ea. Seagate Barracuda 7200.14 ST3000DM001 3TB 7200 RPM SATA III 6Gb/s drives w/64MB cache to install in the array. My workstation: a...
Enabling eSATA for P8Z77-v Pro
I have a new RAID enclosure that is currently hooked up to my P8Z77-V Pro using USB 3.0. However, even after updating the USB 3.0 drivers, the RAID enclosure often suddenly disconnects and reconnects. I want to switch to eSATA. However, when I connect via eSATA the drive starts up and then sudd...
|
OPCFW_CODE
|
Thank you everyone for liking and sharing my last post on WCF. We are now aware of the assemblies and namespaces required to create the service. In this post we will explore the ABC of WCF.
WCF offers much more functionality than web services; a few of its features are security, multiple transports and encodings, and transaction support. There are multiple binding options available which support different features; let's explore them in more detail. Before creating our WCF service we will look at the ABC of WCF.
ABC(Address, Binding and Contract):
The most important thing to remember in WCF is that the client and service communicate with each other by agreeing on these three.
Address: This let’s you know the location on server. Different bindings support different address types.
Binding: Defines which protocol is being used.
Contract: This defines each method exposed by the service.
Let’s see each one in more detail:
As described above, contracts are nothing but what the service is going to share with the outside world. There are multiple types of attributes, which together complete the service.
1. [ServiceContract] Attribute
An interface is decorated with the ServiceContract attribute to participate in a WCF service. It defines the below things:
- Name and namespace for the service.
- Signature of the service.
- Location of service.
- Data Type of the message.
It offers the below properties which can be used:
- CallbackContract: It is used in the case of duplex bindings, to set up the callback functionality.
- ConfigurationName: The default name is the name of the service implementation class. It is optional.
- ProtectionLevel: Allows us to specify the degree of encryption we need; the default is none. We can require digital signatures as well.
- SessionMode: Used to specify whether sessions are allowed, not allowed, or required by the service.
2. [OperationContract] Attribute
Methods are decorated with the OperationContract attribute so they can participate in the service.
Below are the properties which it provides:
- AsyncPattern: Indicates whether the operation is implemented asynchronously.
- IsInitiating: Specifies whether the operation can be the initial operation in a session.
- IsOneWay: Specifies whether the operation consists of a single input message with no output.
- IsTerminating: Specifies whether the runtime should attempt to terminate the current session after the operation completes.
3. [DataContract] Attribute
A Data Contract is nothing but the entities/classes with properties; we need to decorate them with the DataContract attribute so they can be serialized and deserialized.
The properties within these classes are decorated with the [DataMember] attribute so they can take part in the service's data exchange. Your return type can be any of the basic variable types like string, int, Boolean etc., but if we are using a complex data type, in this case our own class, it should be marked with the [DataContract] attribute.
Below are the properties which it supports:
- IsReference: A value that indicates whether to preserve object reference data.
4. [MessageContract] Attribute
In most cases a Data Contract is sufficient to control the service, but in case we need better control over the SOAP message being created, we can use a Message Contract. Using this we can define the header and body, each of which can be another data contract.
Once we are done with defining the contracts, the next step is to choose a binding. There are several options to choose from, depending on our needs. Each binding serves specific needs; below are some of the characteristics that differ between them:
- The transport layer used
- The channel used
- Encoding mechanism
- Web service protocol support
We can divide the bindings into three types: HTTP-based, TCP-based and MSMQ-based. Let's see each one.
- HTTP-based: If we want our service to be accessed across multiple OSes or multiple programming architectures, HTTP-based bindings are the obvious choice. Let's see the bindings supported.
- BasicHttpBinding: It is used in the case of web services, offers backward compatibility, uses Text/XML message encoding, and supports the WS-I Basic Profile.
- WsHttpBinding: It gives all the functionality which BasicHttpBinding offers; apart from that it offers transaction support, reliable messaging and WS-Addressing.
- WSDualHttpBinding: Offers all the functionality of WsHttpBinding, but its main purpose is to be used for duplex communication.
- WsFederationHttpBinding: This is used when federated security within the organization is the most important aspect.
- TCP-based: If we want to share data in a compact binary format, these bindings are of best use.
- NetNamedPipeBinding: This is the best binding to use if our service and client are both hosted on the same machine; it uses named pipes to exchange data. It supports transactions, reliable sessions and secure communication.
- NetPeerTcpBinding: This binding provides a secure binding for P2P networks and offers all the functionality of NetNamedPipeBinding.
- NetTcpBinding: A secure and optimized binding for cross-machine communication between .NET applications.
- MSMQ-based: If we want to use an MSMQ server to exchange data, we can use these bindings.
- MsmqIntegrationBinding: We can use this binding to send and receive data from existing MSMQ applications that use COM or C++.
- NetMsmqBinding: This is used to communicate between machines using queues. This is the preferred binding when using MSMQ.
Once we are done with defining the contracts and finalizing the binding, the final piece of information left is the address. This is a most important aspect, as the client will not be able to access our service without a proper address. The format of the service address depends upon the type of binding we are using.
Viewed from a high level, below are the things which our address represents:
- Scheme: The transport protocol (HTTP, TCP etc)
- MachineName: Fully qualified domain name of the machine.
- Port: This is optional if we are using the default port 80.
- Path: The path of WCF service.
The information above can be used to define a common template for the address: scheme://machine:port/path. (The concrete URIs below use a hypothetical service path, MyService.)
In case of HTTP-based bindings, this can be represented as: http://localhost:8080/MyService
In case of TCP-based bindings, apart from named pipes, this can be represented as: net.tcp://localhost:8080/MyService
MSMQ bindings are a bit different, as we can choose between a public or private queue: net.msmq://localhost/private/MyService
In case of named pipes, it goes like: net.pipe://localhost/MyService
Now we are aware of the ABC of a WCF service as well as the namespaces and core assemblies used to create one. While creating any WCF service, we need to choose the type of binding carefully, as the list of features we can use depends upon it.
TCP bindings are the most secure and reliable of them, but if we need to expose the service to multiple clients, HTTP would be the obvious choice.
In our next post we will create our first WCF service.
You can follow my official Facebook page, and also subscribe to my blog for more information.
You can also mail me on firstname.lastname@example.org in case you have any questions.
|
OPCFW_CODE
|
The steppe runner lizard is new to the reptile community, so its average life span is not known. These lizards could likely live up to 10 years in captivity.
Most reptiles lay amniotic eggs covered with leathery or calcareous shells. An amnion, chorion, and allantois are present during embryonic life. The eggshell (1) protects the crocodile embryo (11) and keeps it from drying out, but it is flexible to allow gas exchange. The chorion (6) aids in gas exchange between the inside and outside of the egg. It allows carbon dioxide to exit the egg and oxygen gas to enter the egg. The albumin (9) further protects the embryo and serves as a reservoir for water and protein. The allantois (8) is a sac that collects the metabolic waste produced by the embryo. The amniotic sac (10) contains amniotic fluid (12) which protects and cushions the embryo. The amnion (5) aids in osmoregulation and serves as a saltwater reservoir. The yolk sac (2) surrounding the yolk (3) contains protein and fat rich nutrients that are absorbed by the embryo via vessels (4) that allow the embryo to grow and metabolize. The air space (7) provides the embryo with oxygen while it is hatching. This ensures that the embryo will not suffocate while it is hatching. There are no larval stages of development. Viviparity and ovoviviparity have evolved in many extinct clades of reptiles and in squamates. In the latter group, many species, including all boas and most vipers, utilize this mode of reproduction. The degree of viviparity varies; some species simply retain the eggs until just before hatching, others provide maternal nourishment to supplement the yolk, and yet others lack any yolk and provide all nutrients via a structure similar to the mammalian placenta. The earliest documented case of viviparity in reptiles is the Early Permian mesosaurs, although some individuals or taxa in that clade may also have been oviparous because a putative isolated egg has also been found. Several groups of Mesozoic marine reptiles also exhibited viviparity, such as mosasaurs, ichthyosaurs, and Sauropterygia, a group that includes pachypleurosaurs and plesiosaurs.
Reptiles generally reproduce sexually, though some are capable of asexual reproduction. All reproductive activity occurs through the cloaca, the single exit/entrance at the base of the tail where waste is also eliminated. Most reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis, while squamates, including snakes and lizards, possess a pair of hemipenes, only one of which is typically used in each session. Tuatara, however, lack copulatory organs, and so the male and female simply press their cloacas together as the male discharges sperm.
|
OPCFW_CODE
|
Do not append variables missing append_dim
nc2zarr fails when appending variables that do not have the dimension indicated by append_dim, e.g. "time".
Variables that lack the append_dim dimension should be written once and from then on be excluded from appending.
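That exclusion logic can be sketched in plain Python over a name-to-dims mapping (a stand-in for xarray's `Dataset.data_vars`; the helper name and the variable names below are hypothetical, not nc2zarr's actual API):

```python
def split_appendable(variables, append_dim="time"):
    """Partition variables by whether they carry the append dimension.

    `variables` maps a variable name to its tuple of dimension names,
    standing in for {name: ds[name].dims} on an xarray Dataset.
    Variables carrying append_dim can be appended on each new input;
    the rest should be written once and then skipped.
    """
    appendable = {k: d for k, d in variables.items() if append_dim in d}
    static = {k: d for k, d in variables.items() if append_dim not in d}
    return appendable, static


# Example shaped like the products below: an SST field carries "time"
# and should be appended; "field_name" does not and should be written
# only once.
variables = {
    "analysed_sst": ("time", "lat", "lon"),
    "field_name": ("fields", "field_name_length"),
}
appendable, static = split_appendable(variables)
```

In the writer, the `static` set would be dropped from every dataset after the first, before calling `to_zarr(..., append_dim=...)`.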
Here is an example from the SST L4 GHRSST GMP source products:
2021-06-03 11:39:54,121: INFO: nc2zarr: 365 input(s) found:
0: /neodc/esacci/sst/data/gmpe/CDR_V2/L4/v2.0/1982/01/01/19820101120000-ESACCI-L4_GHRSST-SST-GMPE-GLOB_CDR2.0-v02.0-fv01.0.nc
1: /neodc/esacci/sst/data/gmpe/CDR_V2/L4/v2.0/1982/01/02/19820102120000-ESACCI-L4_GHRSST-SST-GMPE-GLOB_CDR2.0-v02.0-fv01.0.nc
...
364: /neodc/esacci/sst/data/gmpe/CDR_V2/L4/v2.0/1982/12/31/19821231120000-ESACCI-L4_GHRSST-SST-GMPE-GLOB_CDR2.0-v02.0-fv01.0.nc
2021-06-03 11:39:54,122: INFO: nc2zarr: Processing input 1 of 365: /neodc/esacci/sst/data/gmpe/CDR_V2/L4/v2.0/1982/01/01/19820101120000-ESACCI-L4_GHRSST-SST-GMPE-GLOB_CDR2.0-v02.0-fv01.0.nc
2021-06-03 11:39:54,300: INFO: nc2zarr: Opening done: took 0.18 seconds
2021-06-03 11:39:56,486: INFO: nc2zarr: Writing dataset done: took 2.16 seconds
2021-06-03 11:39:56,497: INFO: nc2zarr: Processing input 2 of 365: /neodc/esacci/sst/data/gmpe/CDR_V2/L4/v2.0/1982/01/02/19820102120000-ESACCI-L4_GHRSST-SST-GMPE-GLOB_CDR2.0-v02.0-fv01.0.nc
2021-06-03 11:39:56,607: INFO: nc2zarr: Opening done: took 0.11 seconds
2021-06-03 11:39:56,650: ERROR: nc2zarr: Appending dataset failed: took 0.02 seconds
2021-06-03 11:39:56,650: ERROR: nc2zarr: Converting failed: took 3.23 seconds
Traceback (most recent call last):
File "/apps/slurm/spool/slurmd/job56309097/slurm_script", line 33, in <module>
sys.exit(load_entry_point('nc2zarr', 'console_scripts', 'nc2zarr')())
...
File "/home/users/forman/Projects/nc2zarr/nc2zarr/writer.py", line 82, in write_dataset
retry.api.retry_call(self._write_dataset,
File "/home/users/forman/miniconda3/envs/nc2zarr/lib/python3.9/site-packages/retry/api.py", line 101, in retry_call
return __retry_internal(partial(f, *args, **kwargs), exceptions, tries, delay, max_delay, backoff, jitter, logger)
File "/home/users/forman/miniconda3/envs/nc2zarr/lib/python3.9/site-packages/retry/api.py", line 33, in __retry_internal
return f()
File "/home/users/forman/Projects/nc2zarr/nc2zarr/writer.py", line 98, in _write_dataset
self._append_dataset(ds)
File "/home/users/forman/Projects/nc2zarr/nc2zarr/writer.py", line 146, in _append_dataset
ds.to_zarr(self._output_store,
File "/home/users/forman/miniconda3/envs/nc2zarr/lib/python3.9/site-packages/xarray/core/dataset.py", line 1790, in to_zarr
return to_zarr(
File "/home/users/forman/miniconda3/envs/nc2zarr/lib/python3.9/site-packages/xarray/backends/api.py", line 1452, in to_zarr
_validate_datatypes_for_zarr_append(dataset)
File "/home/users/forman/miniconda3/envs/nc2zarr/lib/python3.9/site-packages/xarray/backends/api.py", line 1300, in _validate_datatypes_for_zarr_append
check_dtype(k)
File "/home/users/forman/miniconda3/envs/nc2zarr/lib/python3.9/site-packages/xarray/backends/api.py", line 1291, in check_dtype
raise ValueError(
ValueError: Invalid dtype for data variable: <xarray.DataArray 'field_name' (fields: 16, field_name_length: 50)>
dask.array<array, shape=(16, 50), dtype=|S1, chunksize=(16, 50), chunktype=numpy.ndarray>
Coordinates:
* fields (fields) int32 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
* field_name_length (field_name_length) int32 1 2 3 4 5 6 ... 46 47 48 49 50 dtype must be a subtype of number, datetime, bool, a fixed sized string, a fixed size unicode string or an object
The actual error here is (I think) caused by an xarray bug which I reported here: https://github.com/pydata/xarray/issues/5224 . xarray should allow appending this variable. But of course it's an nc2zarr bug as well: even if xarray could append those |S1-typed variables correctly, it wouldn't make sense to do so.
The actual error here is (I think) caused by an xarray bug which I reported here: pydata/xarray#5224 .
Right!
|
GITHUB_ARCHIVE
|
#ifndef VEC3BA_H
#define VEC3BA_H
#include <assert.h>
#include <iostream>
#include <math.h>
#include "constants.h"
#include "sys.h"
namespace double_down {
struct Vec3ba {
enum { n = 3 };
bool x,y,z;
int a;
__forceinline Vec3ba () {}
__forceinline Vec3ba ( const Vec3ba& other ) { x = other.x; y = other.y; z = other.z; a = other.a; }
__forceinline Vec3ba& operator =( const Vec3ba& other ) { x = other.x; y = other.y; z = other.z; a = other.a; return *this;}
__forceinline Vec3ba( const bool pa ) { x = pa; y = pa; z = pa; a = pa;}
__forceinline Vec3ba( const bool pa[3]) { x = pa[0]; y = pa[1]; z = pa[2]; a = false; } // also initialize a so operator<< never reads an indeterminate value
__forceinline Vec3ba( const bool px, const bool py, const bool pz) { x = px; y = py; z = pz; a = false; }
__forceinline Vec3ba( const bool px, const bool py, const bool pz, const bool pa) { x = px; y = py; z=pz; a = pa; }
__forceinline Vec3ba( const bool px, const bool py, const bool pz, const int pa) { x = px; y = py; z=pz; a = (bool)pa; }
__forceinline Vec3ba( FalseTy ) : x(False),y(False),z(False),a(False) {}
__forceinline const bool& operator[](const size_t index) const { assert(index < 3); return (&x)[index]; }
__forceinline bool& operator[](const size_t index) { assert(index < 3); return (&x)[index]; }
};
__forceinline bool all(Vec3ba v) { return v[0] && (v[0] == v[1]) && (v[0] == v[2]); }
__forceinline std::ostream& operator <<(std::ostream &os, Vec3ba const& v) {
return os << '[' << v[0] << ' ' << v[1] << ' ' << v[2] << ' ' << v.a << ']';
}
} // end namespace double_down
#endif
|
STACK_EDU
|
from .utils import Rect
# EXPORT
class View(object):
    """A rectangular region of the screen; defaults to the full screen."""
    def __init__(self, rect=None):
        if rect:
            self.rect = rect
        else:
            # No rect given: cover the whole screen.
            from .app import get_screen_size
            res = get_screen_size()
            self.rect = Rect(0, 0, res[0], res[1])
    def offset(self, d):
        # Shift the view in place by the vector d.
        self.rect.move(d)
    def get_position(self):
        # Top-left corner of the view.
        return self.rect.tl
    def set_position(self, pos):
        # Rebuild the rect at pos, preserving the current width and height.
        self.rect = Rect(pos.x, pos.y, pos.x + self.rect.width(), pos.y + self.rect.height())
    def relative_position(self, pos):
        # Translate an absolute point into view-local coordinates.
        return pos - self.rect.tl
    def get_rect(self):
        # Return a copy so callers cannot mutate the view's own rect.
        return Rect(self.rect)
|
STACK_EDU
|
Thread: [SOLVED] Grub stage 2 read error, no menu...right after fresh install grub> find /boot/grub/menu.lst find /boot/grub/menu.lst (hd0,0) The menu.lst file> title Ubuntu 8.04.1, kernel 2.6.24-19-generic root (hd0,0) kernel /boot/vmlinuz-2.6.24-19-generic root=UUID=d5728151-a1c2-46a1-9afd-a421819eed1f ro quiet splash initrd /boot/initrd.img-2.6.24-19-generic quiet title Ubuntu 8.04.1, kernel 2.6.24-19-generic (recovery Paste it here.
Otherwise you may have to use command-line grub and install with that. Re: Grub Load Read Errors; Is the drive salvageable? What gets broken? https://ubuntuforums.org/showthread.php?t=937729
Enjoy yourself with F9! One more point: ensure that it matches the size/cyls/tracks. The following is a comprehensive list of error messages for the Stage 2 (error numbers for the Stage 1.5 are listed before the colon in each description): 1 : Filename must
I'm still unsure about what happened when something went wrong, but it's all working now. Have you changed the boot menu in YaST?
appears and the system does not continue the boot procedure. fdisk -l: Code: Disk /dev/sda: 320.0 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders Units = cylinders of 16055 * 512 = 8225280 Disk identifier: 0x1549f232 Device Boot Start End Error 21 - GRUB Loading stage 1.5. When you format a partition you format the filing system.
If you find anything at all, please let me know, because I'm really stuck there. I did the Windows install, I read and formatted the disc, copied the files..
Don't know. Is there anything else that can be done? I have a dual boot with XP. I selected the default layout during installation, which resulted in the layout below: Device Start End Size Type Mount Point VG VolGroup00 156032M VolGroup LV LogVol00 154048M ext3 / LV LogVol01 1984M
I've updated the system with the boot CD and the DVD (it was on my hard disk... I've not burned it onto a real DVD, I've burned only the CDROM). Any help would be great!! Edit the file /boot/grub/device.map (not /boot/boot/grub/device.map) and add all the missing devices (for example, in my system one hard drive and the floppy disk were missing).
When the system came up it booted into Windows no problem. Maybe a problem with the flat (cable)? So I managed to go a pace back instead of forward...
By that I mean, is /boot on the same partition as /? Also, in the tty it says Code: FATAL: Could not load /lib/modules/126.96.36.199-9-default/modules.dep: No such file or directory iptables v1.4.2-rc1: can't initialize iptables table 'filter': iptables who? (do you need to insmod?)
Not really. This is the result of the following commands. Output of fdisk -l: Disk /dev/sda: 18.3 GB ... ...
Start from here: http://www.gnu.org/software/grub/man...-natively.html (or from the home ). Kind Regards HK. Device boot system /dev/sda1 * linux disk dev/sdb: 9175mb Device boot system /dev/sdb1 * linux disk /dev/sdc:9175mb Device boot system /dev/sdc1 * linux swap Output of grub.conf file: #boot=/dev/sda default=0 timeout=5 agpgart:unable to determine aperture size..
Then run: grub-install --root-directory=/ /dev/fd0 I'm in Linux now!!!!!!!!
|
OPCFW_CODE
|
The Task Manager 2
The Task Manager 2 is an almost complete re-write of the original Task Manager
which has been successfully in use for skim production by BABAR since January
2004. Why take the radical step of a re-write instead of an upgrade?
First, the original Task Manager is continuously being upgraded and has been
adapted for various changes in the skim production procedure. But it also has
some significant design flaws. When the Task Manager was designed, BABAR's
Computing Model 2 was still under development. Many
of the features of the current skim production only developed during the transition
from the Objectivity based Computing Model 1 and the Task Manager
had to be adjusted to take these into account. The biggest design flaw (for
which I take full responsibility) is the treatment of
skimming and merging as two separate production steps based on the same production
framework. Whereas skimming and merging are in theory very similar (both take
data collections as input, do some processing on them, and produce new data
collections), the differences are significant enough to make the squeezing of
both processing steps into one framework inconvenient and inelegant. Perhaps
the biggest disadvantage however is the fact that the processing step of merging
depends directly on the skimming.
The Task Manager 2 takes a different approach. Skimming and merging are treated
as two steps of the same processing. This does lead to some replication in
the database layout.
As can be seen in the database table layout, the tables required
for skimming and merging and the joins between them are almost identical.
The same is also true of the class design. However, this falls beautifully
into OO design, since both the relevant classes for skims and for merges extend
the same base class. For example, the classes representing the individual
jobs submitted to the batch system are called BbkTMSkimJob and BbkTMMergeJob
respectively and extend the BbkTMJobs class. So the base class contains all
things common to the jobs, whereas the inherited classes contain the differences.
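The inheritance described above can be sketched like this (the class names come from the text; the language — Python rather than the project's Perl — and all attribute and method bodies are invented for illustration):

```python
class BbkTMJobs:
    """Base class: everything common to jobs submitted to the batch system."""
    step = "job"  # overridden by the concrete job types

    def __init__(self, job_id):
        self.job_id = job_id

    def describe(self):
        # Shared behaviour; the inherited classes only supply what differs.
        return f"{self.step} job {self.job_id}"


class BbkTMSkimJob(BbkTMJobs):
    """Skim-specific behaviour goes here."""
    step = "skim"


class BbkTMMergeJob(BbkTMJobs):
    """Merge-specific behaviour goes here."""
    step = "merge"
```

The point of the design is exactly what the text says: common code lives once in the base class, and each processing step only carries its differences.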
The Job Wrapper Package
The second big difference between the original Task Manager and the Task Manager
2 can be found in the Job Wrapper package. The Task Manager wraps the application
for skimming and merging with a perl script to allow for preparation of the
run environment before processing, and job validation and clean up after the job completes.
The original Task Manager
only had very little support for the job wrappers. But with the requirement
to run the applications in an increasingly autonomous and isolated (local)
manner to allow for scalability, the job wrappers had to take on more responsibility
and the amount of code grew without a proper base. The Task Manager 2 includes
an entire package (BbkJobWrappers) of various utilities required
to run the jobs and support processing in the batch queue.
The figure above illustrates dependencies within the main classes (simple
utility classes are not shown). The base of the package is a set of perl wrappers
around the commonly used commands to interact with the event store and obtain
information on data collections, as well as move data around and checksum them.
The intermediate level classes resemble the metadata objects stored in the
database. Finally the top level classes are built around collections of the
intermediate level classes and deal with the metadata that was created by the jobs.
The Task Manager 2 is in its final development and should be launched
by the end of 2005/early 2006.
|
OPCFW_CODE
|
Error getting the number of tabs in "Terminal" window via Applescript on OSX El Capitan
Basically, I want to change the theme of bash when I open new windows, not new tabs, and then have tabs of a window share the same theme; while themes of separate windows are determined randomly.
After some digging, I found an AppleScript which sets the theme of the current tab of the Terminal window. I created
/usr/local/terminal-color.scpt as:
on run argv
tell application "Terminal" to set current settings of selected tab of front window to some settings set
end run
And I have added the following statement to my bash profile:
osascript /usr/local/terminal-color.scpt
Now this script, of course, runs with every new instance of bash. I cannot do anything about that from bash_profile. However, I should be able to differentiate a new window from a new tab within the AppleScript itself. Therefore, I am looking for an if statement which would let the script run only when new windows are created. Something like:
on run argv
if index of selected tab is 0
tell application "Terminal" ....
end if
end run
But I cannot figure out how to achieve this looking at the applescript documentation and scripting dictionary of the terminal application. Help please
Update
I tried editing the script as follows:
set tabNum to number of tabs of front window
if tabNum = 1 then
tell app ...
This won't work either, giving the error tabs of window 1 doesn’t understand the “count” message.
My approach was correct, but I had made a simple mistake: I was trying to get tab or window data before choosing an application scope. To put it simply: first tell the application, then ask about its properties. Here's the code that worked:
on run argv
tell application "Terminal"
if number of tabs of front window = 1 then
set current settings of selected tab of front window to some settings set
end if
end tell
end run
Even Better
Improving on the previous script, this one doesn't choose a theme randomly; it iterates through your available Terminal themes according to the number of windows you already have open. You can also set the default theme that will be applied on your first launch of Terminal. In my case, it was the 5th one in the settings set. Here goes the code:
tell application "Terminal"
if number of tabs in front window = 1 then
set defaultThemeOffset to 5
set allThemes to number of settings set
set allWindows to number of windows
set themeIndex to (defaultThemeOffset + allWindows) mod allThemes
set current settings of selected tab of front window to settings set themeIndex
end if
end tell
|
STACK_EXCHANGE
|
Wubi, will say Goodbye to Microsoft? Not really…
Well, the title kinda says it all if you already know the news… what are we talking about, you may ask? I’ll explain.
Lately there has been a pair of new, strange pieces of software for Windows XP, very strange and unusual software indeed… the first to appear was Wubi, and it was incredible news to me. I tried it on the PC of one of my friends running Windows; call him Ciro. Ciro was afraid of partitioning the Windows (NTFS) partition, because he basically knows almost nothing about computers and he thought that the Windows partition might be erased or that the PC would run slower ( O_o !!!!!). Yes, really. It was useless to explain to him that this was impossible; you know how these things work, I believe.
Now, Ciro did like my Beryl effects on Ubuntu Feisty, and his brother was on the way to wiping the entire HD and installing Ubuntu just to have Beryl and to say goodbye to DRM. (One of his friends had copied him a number of MP3s with DRM, and he was shocked that his PC was unable to play ‘em… of course I told him, they belong to your friend! You can’t exchange a DRM-protected MP3 without cracking it… and on and on with the lesson…)
So I told the two brothers about Wubi; I had just discovered it and tried it out on my virtual machine with a licensed WinXP on it. They were enthusiastic. Why is that? What does Wubi do?
1) It is a program installable in XP. You run it, tell it how much hard disk space to use, which language, which username and password to use, and which Ubuntu distribution to use, and it downloads Ubuntu and sets it up for you. (You can also give Wubi a pre-downloaded ISO file if you prefer, or if you have no connection on the PC you’re installing Ubuntu onto, and that was my case, but you’ll need the alternate ISO.)
2) Wubi does a trick: it creates a folder where it stores a number of virtual hard disks, actually just large files: one for the system, one for your home dir and one for the swap file. So it does NOT FORMAT OR PARTITION your pre-existing partition. That’s the best advantage of all: every single Windows user I know doesn’t want to install Ubuntu because he’d have to partition the HD… risky indeed, and then it is very tricky and difficult to “uninstall Ubuntu” and restore the partition as it was before, if you don’t like it.
3) You restart the PC; you now have a boot selector, WinXP or Ubuntu (this is a DOS boot selector, absolutely no GRUB or LILO or anything Linux-related). You choose Ubuntu, Wubi finishes the Ubuntu installation automatically, and then you log into Ubuntu normally as if it were on a partition. Only the HD is virtualized, and since it is a file, a physical one, it runs perfectly, at the same speed it would if you had partitioned…
So I went on with this and even created the Italian translation for Wubi (if you install it in Italian, the translation is by me; any complaints about how it was done? Comment here! ;P ). I was about to create a topic here about Wubi to promote it, but then…
Ciro called me, very scared. It appears that his Ubuntu, I don’t know why, was unable to log out on his PC. If you did log out, to change user for example, you got a black screen, frozen. You had to restart the PC manually to get out of this; no command worked (not even Alt + Ctrl + F*)… While this is not an issue on a Linux-formatted FS, since ext3, for example, is very difficult to break, it is a very big problem if you use an NTFS or FAT partition… because these partitions are shitty. How many times did you lose your data in Windows after a crash, or by force-quitting / restarting the system during a crash of some sort? Plenty, in my old memories.
So Ciro restarted the PC and it said there was no available boot disc. Ouch O_O !!! Astonishment. Fear. And then despair. Ciro cried at me: wasn’t this supposed to be safe? Did I lose my Windows data!? Argh!!
He had a heart attack; I had two, because the responsibility, of course, was mine. Plus, you couldn’t even imagine the bad publicity Linux would have gained with this friend of mine from Naples talking about it around… he likes way too much talking about things he can’t understand…
So I downloaded a rescue disc of some sort, flew to his house and started hacking at his HD. Luckily, after a few tries and hints from the MS site itself, I solved the problem, what an irony, with a piece of free software… the disc was repaired, the master boot record made safe, etc. Windows booted once again… phewww!!!! -_-
So I immediately uninstalled Wubi… Ubuntu is difficult to crash, but sometimes it does, and if you restart your system brutally on NTFS, not ext3, your data are likely to be erased and lost!!! My God, Wubi is really a dangerous way to install Ubuntu, believe me; use it with caution, you’re warned. After that I installed Ubuntu by partitioning Ciro’s PC, and he was very happy about it and didn’t notice any slowdown (of course)…
What about this new tool, Goodbye to Microsoft? It’s a Debian installer, but I read on a lot of ignorant people’s blogs that it is like Wubi. It is NOT!
I downloaded and tried it myself on my virtual machine with XP. This Debian installer downloads an installer version of Debian from the net. After that, it installs the Debian installer on the PC. When you reboot you have a menu to choose from: Windows or “Debian installer”. Choose the second and a normal Debian net installer loads! With this installer you’ll have to do everything you would have done with a normal Debian CD! You will have to partition your HD as normal, etc., so it uses NO VIRTUAL DISC AT ALL; it is NOT like Wubi.
This tool is, in my opinion, quite useless. It is not more user-friendly than an Ubuntu Live CD; it’s the opposite. You don’t even have the chance to test the compatibility of Debian with your machine (it’s not a Live CD), but go on with a blind Debian installation! So if it works badly, after partitioning, you’ll hate Linux for the rest of your ignorant life.
Then another problem: since it does a normal install and actually partitions your HD (see the picture up here that I screenshotted if you don’t believe me… and why shouldn’t you??), even with the assisted process there’s always a chance that you format your entire HD by doing something wrong and lose your data. Now, if you did it on purpose with a Debian CD, OK; but if you got lured and enchanted by this tool, thinking of it as an easier way to install Debian, and maybe a safer one, you’re wrong, man. This isn’t safer at all.
Another note: many people out there think of this tool as an official Debian thing and therefore started to insult the Debian developers mindlessly… read the page. It’s from a third party, so the Debian team is not responsible for this shameful and IMHO useless tool.
Conclusions? There is no easier way than a Live CD today to install Linux… unless you run it in a virtual machine, don’t get enchanted by these miracle-promising tools: you’d be really disappointed!
|
OPCFW_CODE
|
Declaring array data inside a class C++
I am creating a class that needs to store several arrays of data. Those arrays will have variable size, but all of the arrays inside the class will have the same size as each other. The arrays will later be used for number crunching in methods provided by the class.
What is the best/standard way of declaring that kind of data inside the class?
Solution 1 – Raw arrays
class Example {
double *Array_1;
double *Array_2;
double *Array_3;
int size; //used to store size of all arrays
};
Solution 2 – std::vector for each array
class Example {
vector<double> Array_1;
vector<double> Array_2;
vector<double> Array_3;
};
Solution 3 – A struct that stores each vertex and have a std::vector of that struct
struct Vertex{
double Var_1;
double Var_2;
double Var_3;
};
class Example {
vector<Vertex> data;
};
My conclusion as a beginner would be:
Solution 1 would have the best performance but would be the hardest to implement.
Solution 3 would be elegant and easier to implement, but I would run into problems when performing some calculations because the information would not be in array format. This means regular numeric functions that receive arrays/vectors would not work (I would need to create temporary vectors in order to do the number crunching).
Solution 2 might be the middle of the way.
Any ideas for a 4th solution would be greatly appreciated.
1 won't have any better performance than 2. When compiling with optimizations, vector pretty much gets optimized away.
I think the rule of thumb is to stick with std containers: std::vector or std::array (fixed size). If you need pointers, try not to expose them; use smart pointers, std::unique_ptr or std::shared_ptr (the latter only as a last resort, as it carries significant overhead).
The fourth solution is to use std::unique_ptr<double[]>. And it's the best if you need a replacement for the first one and nothing more.
The general recommendation is that closely related data should be structured as a single unit. Like your Vertex structure. And using a vector (or array) of vertex structures is very common and often used in "mathematical calculations" using parallelism or CUDA kernels or the like.
std::array or std::vector. Not raw arrays.
AoS vs. SoA is something of a field of research all of its own. Conceptually, parallel arrays are plainly inferior, but that doesn’t always carry the day.
Note: names like Array_1 in your struct Vertex are misleading - these are not arrays and not meant to be arrays.
@JesperJuhl: array is obviously wrong here—this is between unique_ptr and vector, and is really the long-rejected dynarray (plus the struct business).
Most probably you will access the three coordinates x, y, z of one vertex in one bit of code, very close in time; thus the third sample, the array of vertices, is better for CPU caches and prefetching, since the three coordinates of each vertex will be placed in a small memory region.
Don't use raw arrays. Options 2 and 3 are reasonable, the difference depends on how you'll be traversing the data. If you'll frequently be going through the arrays individually, you should store them as in solution #2 because each vector will be stored contiguously in memory. If you'll be going through them as sets of points, then solution 3 is probably better. If you want to go with solution #2 and it's critical that the arrays always be synchronized (same size, etc.) then I would make them private and control access to them through member functions. Example:
#include <vector>
#include <stdexcept>
using std::vector;

class Example
{
private:
vector<double> Array_1;
vector<double> Array_2;
vector<double> Array_3;
public:
void Push_data(double val1, double val2, double val3) {
Array_1.push_back(val1);
Array_2.push_back(val2);
Array_3.push_back(val3);
}
vector<double> Get_all_points_at_index(size_t index) const {
if (index < Array_1.size())
return {Array_1[index], Array_2[index], Array_3[index]};
else
throw std::runtime_error("Error: index out of bounds");
}
const vector<double>& Get_array1() const {
return Array_1;
}
void Clear_all() {
Array_1.clear();
Array_2.clear();
Array_3.clear();
}
};
This way, users of the class aren't burdened with the responsibility of making sure they add/remove values from all the vectors evenly - you do that with your class's member functions, where you have complete control over the underlying data. The accessor functions should be written such that it's impossible for a user (including you) to un-synchronize the data.
One recommendation: Get_all_points_at_index should probably return std::array<double, 3>, or a special Vertex type, just to communicate the size guarantee (not to mention avoiding a performance hit from repeated dynamic allocation).
If you are going to process big amounts of data, then solutions 1 and 2 are pretty much the same - the only meaningful difference is that solution 1 is hard to protect against memory leaks (while solution 2 deallocates your data when needed automatically).
The difference between solutions 2 and 3 is what people often call "Structure of arrays" vs "Array of structures". The runtime efficiency of these solutions depends on what your code does with them. The general principle is locality of reference. If your code frequently does number crunching only on the first component of your vertex data, then use structure of arrays (solution 2). However, any complex code will work on all of the data, so I guess solution 3 (array of structures) is the best.
Note that this example is rather pure. If your data contains elements that are sometimes used in number crunching and sometimes not (e.g. it does some transformation on two coordinates of the vertices, while leaving the third untouched), then you might need to implement some kind of in-between solution - copy only the needed data to some place, transform it and copy the results back.
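The layout distinction between solutions 2 and 3 can be sketched in a few lines of C++ (the type and function names below are illustrative, not from the question):

```cpp
#include <vector>

// "Structure of arrays" (solution 2): each component is contiguous.
struct VerticesSoA {
    std::vector<double> x, y, z;
};

// "Array of structures" (solution 3): each vertex is contiguous.
struct Vertex { double x, y, z; };
using VerticesAoS = std::vector<Vertex>;

// Crunching a single component favours SoA: the loop walks one
// contiguous buffer, which is ideal for caches and vectorization.
double sum_x_soa(const VerticesSoA& v) {
    double s = 0.0;
    for (double xi : v.x) s += xi;
    return s;
}

// Crunching whole vertices favours AoS: all three components of a
// vertex sit next to each other, typically in the same cache line.
double sum_norms_sq_aos(const VerticesAoS& v) {
    double s = 0.0;
    for (const Vertex& p : v) s += p.x * p.x + p.y * p.y + p.z * p.z;
    return s;
}
```

Which layout wins depends entirely on which of these two access patterns dominates your number crunching.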
Forget about approach 1 (as the others have mentioned) and stick to either approach 2 or 3, whichever best fits your needs. To me, your code looks like part of an application/library that manages coordinates/data in a 3D space. So you should think about which operations you need to perform on these 3D coordinates/data and which approach makes your code simpler or more efficient. As an example, if at some moment you need to pass the raw data of one dimension to a third-party library (e.g. for visualization stuff), you should go for approach 2.
As a concrete example, VTK (the Visualization Toolkit) has lots of data structures that keep 3D data both ways, either like your 2nd approach (see vtkTypedDataArray) or like your 3rd approach (see vtkAOSDataArrayTemplate). Taking a look at them may give you some ideas.
|
STACK_EXCHANGE
|
The objective is to clear a full pyramid while building up combos from each match, and to complete the pyramid as quickly as possible to gain extra bonus points for time remaining and cards remaining in the Stock Deck. If the player is unable to match any further cards in the pyramid, the game is over.
Pyramid Cards - Pyramid cards are the cards dealt out by the deck into the initial layout of the pyramid. They are presented as face-up cards that the player can interact with, provided no other cards are on top of them and they are highlighted. Try to pair Pyramid Cards with each other or with Stock Cards.
Stock Cards - Stock cards are cards that are drawn from the deck and presented to the player in sets of three. While the stock cards vary with each draw, the player can only select the card on top of the set and use it to make a match with cards present on the pyramid, the free cell, or the next card in the set drawn from the deck.
Cell - The cell is a free space that is located to the right of the deck and drawn cards that acts as a storage spot for cards to be placed in and used later down the line. Any visible card from the pyramid alongside the first card of the Stock cards can be placed within this space and used. Players can also match cards in that space as if they were matching them on the pyramid or in their hand.
Card Values - Each card type has a value associated with it which will be used to make correct matching when paired with another card.
- K (King) = 13
- Q (Queen) = 12
- J (Jack) = 11
- Cards 2-10 have the same value as on the card
- A (Ace) = 1
Matching Chart - The following is a list of cards that match with each other. Matching cards add up to 13.
- K (Kings) – Don’t need to be matched with anything and act as free selected cards.
- A (Aces) + Q (Queens)
- J (Jacks) + 2
- 10 + 3
- 9 + 4
- 8 + 5
- 7 + 6
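The value table and matching chart above reduce to a single rule: two cards match when their values sum to 13, and Kings clear on their own. A minimal sketch (function names are illustrative, not from the game's code):

```cpp
#include <map>
#include <string>

// Card values as listed above: A = 1, 2-10 face value, J = 11,
// Q = 12, K = 13. Rank is given as a string like "A", "7", "K".
int card_value(const std::string& rank) {
    static const std::map<std::string, int> named{
        {"A", 1}, {"J", 11}, {"Q", 12}, {"K", 13}};
    auto it = named.find(rank);
    return it != named.end() ? it->second : std::stoi(rank);
}

// Two cards match when their values add up to 13.
bool is_match(const std::string& a, const std::string& b) {
    return card_value(a) + card_value(b) == 13;
}

// Kings don't need to be matched with anything.
bool clears_alone(const std::string& rank) {
    return card_value(rank) == 13;
}
```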
Ending the Game - The game ends when time runs out or when you run out of moves you can make. Select End Game from the Settings screen to end the game early when there are no other moves to make.
- As the game starts, the players are presented with a full pyramid of cards alongside a deck of the remaining cards and a free cell. From this, the player can begin to match cards by either dragging a card across the screen with their finger or tapping the cards they want to match.
- If there are no cards that can be matched present, the player can then tap the Stock Deck and obtain a hand of three Stock Cards to play from. Unlike other versions of Solitaire Pyramid, players can match their Stock Cards together if the first two cards in their hand can match.
- Players can utilize the Free Cell that is present within the game by dragging a card, either from the top of the Stock Cards or the Pyramid, to the Free Cell, locking that card there until they want to use it to match with another card.
- With every match that the player makes, the number of points that they gain is increased through the combo streak system. For the first match, it is 50 points. This is then followed by 100 points, 150 points, etc. until it goes for about 5 matches in a row before resetting.
- If the player is unable to make a match and draws from the deck, the combo streak is reset.
- If the player moves a card into the Free-Cell, the combo streak isn’t reset.
- When the player is unable to find a move they could make, after a few seconds either the deck will start to glow (or a finger will point at it), or two cards that can be matched will glow, indicating a move that the player can make.
- Some boards are not solvable. If there are no more moves to make, tap the Settings button to end the game early, submit your score, and receive a time bonus.
- Base Scoring: 50 points for matching a set of cards.
- Combos: The combo system works by increasing the amount of points the player gains due to how much the player can keep the combo streak active. The order in which the combo will rise is as follows: 50 points -> 100 points -> 150 points -> 200 points -> 250 points. If the player is unable to make a match and draws from the deck, the combo streak is reset.
- Time Bonus: With every game, completed or not, the player is awarded bonus points that correlate with the amount of time left over (in seconds) and the number of matches made compared to the total number of cards in the pyramid.
- Remaining Cards in Deck: With every completed puzzle, the player gains 50 points per left over card that remains in the Stock Deck.
- Redrawing From the Deck: When the deck is empty and needs to be redrawn, the player loses 50 points.
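One reading of the combo rules above (each consecutive match is worth 50 points more than the last, the streak resets after the fifth match, and drawing from the deck also resets it, while using the Free Cell does not) could be sketched as follows; the ComboTracker name is illustrative, not from the game's code:

```cpp
// Tracks the combo streak described above: 50 -> 100 -> 150 -> 200
// -> 250 points, resetting after five matches in a row.
struct ComboTracker {
    int streak = 0;

    // Returns the points earned for the next match in the streak.
    int score_match() {
        ++streak;
        int points = streak * 50;
        if (streak == 5) streak = 0;  // streak resets after 5 matches
        return points;
    }

    // Drawing from the deck resets the streak.
    void on_draw_from_deck() { streak = 0; }

    // Moving a card to the Free Cell does NOT reset the streak.
    void on_move_to_free_cell() { /* intentionally no-op */ }
};
```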
- To clear a king from the board, just tap it. It doesn't need to be matched.
- Drag a card into the Free Cell to save it for later.
- The Free Cell must be open to clear cards stacked on top of each other.
- Tap the Settings button if there are no more moves to end the game. Tap End Game to submit your score and receive a time bonus.
- Make multiple matches without drawing from the deck to get a streak bonus.
- If you get stuck trying to find a valid move, the game will give hints by highlighting cards that you can play.
|
OPCFW_CODE
|
Best thought of as a dungeon crawler set in space, "FTL: Faster Than Light" puts you in command of a starship plotting a course through unfriendly territory with a hostile fleet hot on its heels.
FTL draws inspiration from roguelikes, old-school dungeon crawlers with random levels and easy death. In essence, you are trying to get from one end of the galaxy to the other. The various sectors of the galaxy are randomized, with various encounters and ships along the way. In addition to survival (which requires keeping a close watch on one's supplies and systems), the player must also collect "scrap" and items to improve their ship's capabilities so that when they reach the end of their journey they are prepared to take on the enemy mothership. Various dilemmas will arise based on the random events that pop up - whether to protect a civilian ship from pirates or ignore them, or whether to risk saving people from a destroyed ship, and so on. The limitations on resources really make these dilemmas rather important, as each could potentially spell the end of your voyage.
The ship itself is not steered around as per many space games. Rather, you control the ship's various systems and the crew inside it. Your "playspace" is basically a map of your ship, and crewmen can be diverted to problematic areas, or just sent to man a given station. If, for example, a fire breaks out, a crewman must be sent to extinguish it (and one may not be enough). In battle, your weapons are targeted at the enemy ship's sections, and the enemy will target yours. Much of the battling is "automated" so to speak (you don't move the ship around, for example), but there is a certain rhythm necessary to get the best results - for example, timing a heavy laser after a small burst of light ones to penetrate the enemy's shields.
The game can easily get frustrating for a casual player, not because it's "hard" so much as it's "random". Because of the totally random nature of the maps, in some runs you'll find many crewmembers and good systems easily, and in others you'll be practically destitute. The randomization makes for a neat gameplay experience, and the events are varied and have multiple solutions based on the gear and crewmembers you possess. Yet at the same time it also takes control out of the player's hands in many cases - "there's nothing I could really do, this is just a bad run". This can easily lead to frustration, rather than the feeling of a challenge that can be overcome.
In general, though, FTL is at least a fun experience - providing a level of investment on the part of the player by refusing to guarantee success, and thus making each run relatively tense and interesting. Its graphics are functional and its music sets a deep-space sci-fi mood well, though neither is really the star attraction of the game. Despite some issues with difficulty and balance, overall the game is well worth the small $10 cost.
We purchased this game with our own funds in order to do this review.
|
OPCFW_CODE
|
Version information that can help diagnose errors when opening XNA Game Studio projects in Visual Studio
I recently noticed a blog article from the Zman that I wanted to link to here to hopefully make it easier to find. The post, located at http://www.thezbuffer.com/articles/541.aspx, provides some behind-the-scenes details about the contents of XNA Game Studio and XNA Game Studio Express .csproj files. The project GUIDs and XNA Framework version information listed in this post can help you determine exactly what version of XNA Game Studio or XNA Game Studio Express a specific .csproj file was created with.
If you run into an error message stating "The project type is not supported by this installation" when attempting to open an XNA Game Studio project in Visual Studio 2005 or 2008 and are trying to figure out how to fix it, the information in this article can be very helpful. Typically, this error occurs when trying to open XNA Game Studio projects for one of the following reasons:
- Trying to double-click an XNA Game Studio project (.csproj) file on a system that does not have the matching version of XNA Game Studio or XNA Game Studio Express installed.
- Trying to open a project from within a version of Visual Studio that is not supported by the version of XNA Game Studio that the project was created in (for example - opening an XNA Game Studio 2.0 project in Visual Studio 2008).
- Trying to open a XNA Game Studio Express 1.0 or 1.0 Refresh project on a system that has an up-level edition of Visual Studio 2005 installed but does not have Visual C# 2005 Express Edition installed - this will cause the solution to open in the up-level edition of Visual Studio 2005, but XNA Game Studio did not support up-level editions of VS until XNA Game Studio 2.0. Projects for the 1.0 or 1.0 Refresh versions must be opened on a system that has Visual C# 2005 Express Edition and XNA Game Studio 1.0 or 1.0 Refresh installed.
If you run into an error while trying to open an XNA Game Studio or XNA Game Studio Express project on your system, I encourage you to check out this article and use the information listed there to determine the XNA Game Studio version of the project that you're trying to open by looking at your .csproj file in Notepad, then verify that the necessary versions of both Visual Studio and XNA Game Studio are installed on your system.
For reference, here is a list of supported Visual Studio platforms per version of XNA Game Studio:
- XNA Game Studio Express 1.0 - Visual C# 2005 Express
- XNA Game Studio Express 1.0 Refresh - Visual C# 2005 Express with SP1
- XNA Game Studio 2.0 - Visual C# 2005 Express with SP1 and Visual Studio 2005 Standard Edition or higher with SP1
- XNA Game Studio 3.0 CTP - Visual C# 2008 Express and Visual Studio 2008 Standard Edition or higher
|
OPCFW_CODE
|
OpenLDAP invalid credentials immediately after setting credentials
I am having trouble binding as users other than the root DN in OpenLDAP; even if I set the password immediately beforehand, I still get ldap_bind: Invalid credentials (49)
For example, if I use ldappasswd to set the password (authenticating with the root dn), and then immediately use ldapwhoami to try to authenticate, I get the following error:
leif@nixos ~ $ ldappasswd -x -D cn=Admin,dc=leifandersen,dc=net -W -s badpasswd cn=leiftest,ou=users,dc=leif,dc=net
Enter LDAP Password:
leif@nixos ~ $ ldapwhoami -x -w badpasswd -D cn=leiftest,ou=users,dc=leifandersen,dc=pl
ldap_bind: Invalid credentials (49)
(Note that in this case slapd is running on localhost and port 389, so I don't seem to need to specify those.)
I am using ppolicy, configured as:
# ppolicy, leifandersen.net
dn: cn=ppolicy,dc=leifandersen,dc=net
objectClass: device
objectClass: pwdPolicyChecker
objectClass: pwdPolicy
cn: ppolicy
pwdAllowUserChange: TRUE
pwdAttribute: userPassword
pwdCheckQuality: 1
pwdExpireWarning: 600
pwdFailureCountInterval: 30
pwdGraceAuthNLimit: 5
pwdInHistory: 5
pwdMaxAge: 0
pwdMaxFailure: 5
pwdMinAge: 0
pwdMinLength: 5
pwdMustChange: FALSE
pwdSafeModify: FALSE
pwdLockoutDuration: 30
pwdLockout: FALSE
And I set up my initial configuration of OpenLDAP with nix, the relevant bit (I think) being:
"olcOverlay=ppolicy" = {
attrs = {
objectClass = [ "olcOverlayConfig" "olcPPolicyConfig" ];
olcOverlay = "ppolicy";
olcPPolicyDefault = "cn=ppolicy,dc=leif,dc=pl";
olcPPolicyUseLockout = "FALSE";
olcPPolicyHashCleartext = "TRUE";
};
};
the whole config being:
services.openldap = {
enable = true;
settings = {
attrs.olcLogLevel = [ "stats" ];
children = {
"cn=schema".includes = [
"${pkgs.openldap}/etc/schema/core.ldif"
"${pkgs.openldap}/etc/schema/cosine.ldif"
"${pkgs.openldap}/etc/schema/inetorgperson.ldif"
"${pkgs.openldap}/etc/schema/nis.ldif"
"${pkgs.openldap}/etc/schema/ppolicy.ldif"
];
"olcDatabase={-1}frontend" = {
attrs = {
objectClass = "olcDatabaseConfig";
olcDatabase = "{-1}frontend";
olcAccess = [ "{0}to * by dn.exact=uidNumber=0+gidNumber=0,cn=peercred,cn=external,cn=auth manage stop by * none stop" ];
};
};
"olcDatabase={0}config" = {
attrs = {
objectClass = "olcDatabaseConfig";
olcDatabase = "{0}config";
olcAccess = [ "{0}to * by * none break" ];
};
};
"olcDatabase={1}mdb" = {
attrs = {
objectClass = ["olcDatabaseConfig" "olcMdbConfig"];
olcDatabase = "{1}mdb";
olcDbDirectory = "/var/db/ldap";
olcDbIndex = [
"objectClass eq"
"cn pres,eq"
"uid pres,eq"
"sn pres,eq,subany"
];
olcSuffix = "dc=leifandersen,dc=net";
olcAccess = [ "{0}to * by * none break" ]; # read break for readable
olcRootDN = "cn=Admin,dc=leifandersen,dc=net";
olcRootPW = "{SSHA}<SOMEHASH>";
};
children = {
"olcOverlay=ppolicy" = {
attrs = {
objectClass = [ "olcOverlayConfig" "olcPPolicyConfig" ];
olcOverlay = "ppolicy";
olcPPolicyDefault = "cn=ppolicy,dc=leif,dc=pl";
olcPPolicyUseLockout = "FALSE";
olcPPolicyHashCleartext = "TRUE";
};
};
};
};
};
};
};
Does anyone have any idea why I'm getting an invalid credentials error? Is there something wrong with my ppolicy setup? (I know it's not as strict as it should be; I relaxed the requirements a bit in the hope of getting something to work. Also, sorry if this is a bad question; I'm still very new to LDAP and my previous searches didn't turn up an answer.)
During authentication binds, OpenLDAP requires that the identity in effect before authentication (i.e. anonymous) be explicitly granted access to the attributes needed for authentication (particularly userPassword). Try changing the line
olcAccess = [ "{0}to * by * none break" ]; # read break for readable
to
olcAccess = [
"{0}to attr=userPassword by anonymous auth"
"{1}to * by * none break"
]; # read break for readable
in the "{1}mdb" section, as suggested here.
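If you manage cn=config directly rather than through the Nix options, the equivalent ACL change could hypothetically be applied with ldapmodify; a sketch of the LDIF (the database DN is assumed to be olcDatabase={1}mdb,cn=config, matching the config above):

```ldif
dn: olcDatabase={1}mdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to attr=userPassword by anonymous auth
olcAccess: {1}to * by * none break
```

For example, `ldapmodify -Y EXTERNAL -H ldapi:/// -f fix-acl.ldif` on a host where root has external SASL access to the config database.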
|
STACK_EXCHANGE
|
Why does PostgreSQL service restart during autovacuum
We developed an application whose back-end is PostgreSQL. I got a "DB connection lost" error from my application, and when I checked, I saw the following in my PostgreSQL log:
2018-03-07 06:35:14 UTC [1676-2] LOG: received fast shutdown request
2018-03-07 06:35:14 UTC [1676-3] LOG: aborting any active transactions
2018-03-07 06:35:14 UTC [1620-1] postgres@db_prod FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [32367-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [7601-569] postgres@watchtower_db_prod FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [1689-2] LOG: autovacuum launcher shutting down
2018-03-07 06:35:14 UTC [7598-1] postgres@db_prod FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [30875-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [12717-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [12695-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [7956-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [23056-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [19989-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [19993-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [19988-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [19976-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [19990-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [19681-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [20474-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [19669-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [13315-1] postgres@stg FATAL: terminating connection due to administrator command
2018-03-07 06:35:14 UTC [1686-1] LOG: shutting down
2018-03-07 06:35:16 UTC [1686-2] LOG: database system is shut down
2018-03-07 06:35:17 UTC [2819-1] LOG: database system was shut down at 2018-03-07 06:35:16 UTC
2018-03-07 06:35:17 UTC [2819-2] LOG: MultiXact member wraparound protections are now enabled
2018-03-07 06:35:17 UTC [2818-1] LOG: database system is ready to accept connections
2018-03-07 06:35:17 UTC [2823-1] LOG: autovacuum launcher started
My question is: why did PostgreSQL restart during this autovacuum process?
"received fast shutdown request" means someone actively shut down the service.
This has nothing to do with autovacuum.
|
STACK_EXCHANGE
|
Feature request: Fittings Database addition
Hi Caleb,
It would be good to have functionality similar to the piping schedule database, but for piping fitting dimensions. It would be great to see fitting databases much like Pipedata-Pro (https://www.pipedata.com/10-Software/01-Pipedata-Pro/DataSummary/) implemented in this library, so that dimensions, weights, etc. for flanges, elbows, and so on can be pulled directly into calculations and assessments, similar to the "nearest_pipe" function.
Context:
First off, amazing work here. I am currently testing this on an implementation of MicroPython (through Ndless support, not native) on a TI-Nspire CAS CX, as I have been looking for a way to run this type of analysis and have a database of piping on my handheld. I haven't done extensive testing, but most of the features I have tried so far work on the platform, and I am now working out how to make it useful for myself (i.e. either build myself a front end to do full analysis in MicroPython, or find a way for MicroPython to export results to somewhere the calculator can pick up and bring into the rest of my on-calc analysis tools). My model only has aftermarket support for Python, which is a double-edged sword: it makes it somewhat more versatile than the TI-supported implementation on the CX II, but it also cuts off direct integration into the main calculator functions.
Hi,
Do you have a specific fitting in mind? If you identify a specific standard and digitize a table I can see about including the data in Fluids. Otherwise, for a free and open source project provided as is, I think this request is too broad to leave the issue open.
Sincerely,
Caleb
Good call... I just wanted to see if you were open to it first... I should have, or be able to find, some digitised tables to provide. Leave it with me; I will try to return with a more specific request ASAP.
Attached are the tables I am interested in having available in this library. I have had a look and spoken to some more experienced guys, and they have never seen digitised (Excel / .csv) versions of these tables. But if you are interested and willing to code the integration of these tables into the library's functionality, similar to the piping schedule tables, then I can do the manual work of digitising them. I would just need to understand your desired format and layout so I can build them in a way that is easy for you to work with.
onesteel-metalcentre-pipe-and-fittings-data-charts.pdf
https://www.libertygfg.com/media/164103/onesteel-metalcentre-pipe-and-fittings-data-charts.pdf
I don't know where one could upload or send fittings tables but I have an Excel workbook with a lot of info in it for flanges, valves, reducers, swages, etc. etc. etc. These types of workbooks get passed around as it's pretty much publicly available.
Awesome. I had a pile of versions in PDF and was converting some to Excel, but it was hard to make 100% sure the data was clean with no artefacts, so none of it made for an effective CSV for Caleb to incorporate.
I'd love for you to send them through via email if possible? That way Caleb could review whether it's a feature he wants to pursue, and even if he doesn't, getting hold of Excel copies of flange data tables etc. would be a great help to me :)
Regards,
Nathan Staats
Hello,
Feel free to send this data to my personal email. I have not taken action on this topic because the follow-up request was too big. If you are looking for a specific fitting, maybe we can start with one; asking to add all of them at once is too much. If adding one goes well, we can add another, and so on.
Sincerely,
Caleb
Fitting dimensions can be found here: http://hackneyladish.com/DimensionData.aspx
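As a rough sketch of how a digitized fitting table might be exposed for lookup once in the library, mirroring the spirit of the existing nearest_pipe function (all dimension values below are placeholders for illustration, not real standard data):

```python
# Sketch only: how a digitized fitting table might be structured for lookup.
# The dimension values below are PLACEHOLDERS, not real standard data.
from bisect import bisect_left

# NPS (inch) -> centre-to-end dimension "A" (mm) of a hypothetical
# long-radius 90-degree elbow.
ELBOW_90_LR_A = {
    0.5: 19.05,   # placeholder
    0.75: 28.58,  # placeholder
    1.0: 38.1,    # placeholder
    2.0: 76.2,    # placeholder
}

def nearest_elbow(NPS):
    """Return (NPS, A) for the smallest listed size >= the requested NPS."""
    sizes = sorted(ELBOW_90_LR_A)
    i = bisect_left(sizes, NPS)
    if i == len(sizes):
        raise ValueError("No fitting large enough for NPS %s" % NPS)
    return sizes[i], ELBOW_90_LR_A[sizes[i]]
```

A CSV digitised in this shape (one row per NPS, one column per dimension) would be straightforward to load into such a dict.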
|
GITHUB_ARCHIVE
|
I have been trying to code this in Blueprint for 5 or so hours now, with and without physics being applied (using VInterp for the non-physics attempt), to make an object move away from the character when the character overlaps it. "Away" meaning along the direction from the sphere's centre toward where the player currently is relative to the sphere's centre. Help!
The most basic implementation would look something like this:
You can see them getting stuck if blocked by something else; you did not include any details, so I'm not sure whether this is desired behaviour for your case.
You can, of course, implement this in the character and have it push objects away from itself instead.
For a physics based approach, it’d be enough to give them an impulse in the appropriate direction, much simpler solution.
Oh, so it's GetUnitDirection. I swear, Blueprint's naming is so unintuitive that I will never find the parts I need; I mean, an "if" statement on a bool is called "Branch", and it gave me issues just finding that out.
But if GetUnitDirection is what I needed, then what does VInterp do, and why couldn't I use it to achieve this result? To my understanding, VInterp takes the beginning location and the end location, and with a positive speed value computes a new vector location (towards the object); so with a negative speed it should do the trick, no? It seemed to do nothing, however, even at high values, and I don't understand why. Maybe I'm completely misunderstanding its use.
Your problem here is that you have actual programming experience. Not a problem, of course, but blueprint wording and api does introduce illogical and inconsistent wording in places, yes, I agree 100%.
In the example above I used a very ham-fisted approach and the objects simply slide along two axes away from the player in a linear fashion. VInterp would allow you to have them move in a non-linear way:
Notice smooth acceleration and arrival at target.
There’s also const version which is linear.
and with negative speed it should do the trick, no?

No, it would just affect the interpolation speed; a negative speed does not reverse the direction, it makes the interpolation degenerate (effectively instantaneous / infinite).
Here’s a version that should make more sense:
Blueprint looks like this:
edited for clarity (I hope): Each sphere finds a direction from itself to the player, calculates a point along vector that is 400 units in the opposite direction (hence -400), and then interpolates its own movement until that location is reached.
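The math behind that Blueprint can be written out in plain C++ (a standalone sketch of the vector logic, not the actual UE API; Vec2, FleeTarget, and StepToward are illustrative names):

```cpp
#include <cmath>

struct Vec2 { float X, Y; };

// Unit direction from 'from' to 'to' (what Blueprint's GetUnitDirection gives you).
Vec2 UnitDirection(Vec2 from, Vec2 to) {
    float dx = to.X - from.X, dy = to.Y - from.Y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < 1e-6f) return {0.f, 0.f};
    return {dx / len, dy / len};
}

// Point 400 units away from the player on the far side of the sphere
// (the "-400" along the sphere-to-player direction).
Vec2 FleeTarget(Vec2 sphere, Vec2 player, float distance = 400.f) {
    Vec2 dir = UnitDirection(sphere, player);
    return {sphere.X - dir.X * distance, sphere.Y - dir.Y * distance};
}

// One tick of constant-speed interpolation toward 'target'
// (roughly what the constant variant of VInterpTo does).
Vec2 StepToward(Vec2 current, Vec2 target, float speed, float dt) {
    float dx = target.X - current.X, dy = target.Y - current.Y;
    float dist = std::sqrt(dx * dx + dy * dy);
    float step = speed * dt;
    if (dist <= step) return target;  // arrived
    Vec2 dir = UnitDirection(current, target);
    return {current.X + dir.X * step, current.Y + dir.Y * step};
}
```

Each tick, recompute the flee target and step toward it until the sphere arrives.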
You can of course do it the other way round:
So this should be somewhat closer to what you demonstrated in the original image.
Thank you Everynone!
As a beginner, these animated examples and clear BP screenshots are an absolute life-saver. Please keep it up!
I got a simple way of doing this; it doesn't take much processing power, and it stops the sphere from going through walls.
You can also make it go up and down if you connect Z in front of the "Get Rotation X Vector". The code itself is complicated, but only because I hardened it so that nothing can go wrong.
It rotates to face where it's going and stops at walls. Right now it moves away from anything, but you can fix that with tags.
Move Away Code
|
OPCFW_CODE
|
meshes_installer_27.pyc not installable
I am using Linux Mint 20 with Kernel: Linux 5.4.0-52-generic and Python 2.7.18
When executing any test program from an IDE (PyCharm), Python 2.7 says:
The robot meshes and URDFs will be installed in the /home/mara/.qibullet/1.4.0 folder. You will need to agree to the meshes license in order to be able to install them. Continue the installation (y/n)?
As soon as I enter y, it prints an error message:
Installing the meshes and URDFs in the /home/mara/.qibullet/1.4.0 folder...
Python 2.7 detected
E
======================================================================
ERROR: setUpClass (__main__.PepperBaseTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mara/Masterarbeit/Software/qibullet/tests/base_test.py", line 27, in setUpClass
spawn_ground_plane=True)
File "/home/mara/.local/lib/python2.7/site-packages/qibullet/simulation_manager.py", line 145, in spawnPepper
pepper_virtual = PepperVirtual()
File "/home/mara/.local/lib/python2.7/site-packages/qibullet/pepper_virtual.py", line 36, in __init__
tools._install_resources()
File "/home/mara/.local/lib/python2.7/site-packages/qibullet/tools.py", line 145, in _install_resources
import meshes_installer_27 as meshes_installer
ImportError: Bad magic number in /home/mara/.local/lib/python2.7/site-packages/qibullet/robot_data/installers/meshes_installer_27.pyc
----------------------------------------------------------------------
Ran 0 tests in 13.655s
FAILED (errors=1)
When executing any test program in my terminal, the gui starts, but I can't see any robot mesh.
When I execute this: $ sudo python meshes_installer_27.pyc
I get this error message: RuntimeError: Bad magic number in .pyc file
Hi, quick question, did you install the project with pip or from sources ? (seems like a git LFS problem, see this part of the wiki)
I used
pip install --user qibullet
But I installed pip in a different way:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
What do you mean by "git LFS problem"
By "git LFS problem", I'm referring to what is explained in the wiki section (installation from source) that I linked:
You will encounter a bad magic number error when installing the extra resources if git-lfs isn't correctly installed. To solve that error, install git-lfs, go to the repository folder, and type in git lfs pull. This command will download the lfs resources, and you can then resume installing qiBullet.
But since you installed qiBullet via pip, you shouldn't get that issue. I already relaunched a CI job (Ubuntu Xenial, Python 2.7.15), it manages to install the project (and the meshes) from source, and to run the unit tests.
I will try to reproduce your bug, keep me posted if you have more information
@Mara01010011, I can't reproduce your problem...
You should encounter the bad magic number issue if the .pyc file is not a valid pyc, or if your Python version doesn't match the Python version used to generate the file (only the major and minor version number should count).
I checked the pyc files that have been exported to PyPi (for the last qiBullet version), they are valid.
meshes_installer_27.pyc was created with Python 2.7.12 and you're using Python 2.7.18, the version major and minor are the same, it should be compatible. I still wanted to make sure of that, so I launched a CI with different versions of Python 2.7: everything seems to work for Python 2.7.18 (see here)
I don't really understand why you run into that issue. The pyc file might be corrupted, did you try to uninstall everything, remove the extra resources (in /home/mara/.qibullet/), and reinstall qiBullet? Or to install from source ?
I uninstalled qiBullet and pyBullet and reinstalled them with pip. There is an error message when I run a Python program that should spawn NAO:
pybullet build time: Oct 8 2020 00:08:00
startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
X11 functions dynamically loaded using dlopen/dlsym OK!
Creating context
Created GL 3.3 context
Direct GLX rendering context obtained
Making context current
GL_VENDOR=Intel
GL_RENDERER=Mesa Intel(R) HD Graphics 620 (KBL GT2)
GL_VERSION=4.6 (Core Profile) Mesa 20.0.8
GL_SHADING_LANGUAGE_VERSION=4.60
pthread_getconcurrency()=0
Version = 4.6 (Core Profile) Mesa 20.0.8
Vendor = Intel
Renderer = Mesa Intel(R) HD Graphics 620 (KBL GT2)
b3Printf: Selected demo: Physics Server
startThreads creating 1 threads.
starting thread 0
started thread 0
MotionThreadFunc thread started
ven = Intel
Workaround for some crash in the Intel OpenGL driver on Linux/Ubuntu
The qibullet ressources are not yet installed.
Installing resources for qiBullet
The robot meshes and URDFs will be installed in the /home/mara/.qibullet/1.4.0 folder. You will need to agree to the meshes license in order to be able to install them. Continue the installation (y/n)? ven = Intel
Workaround for some crash in the Intel OpenGL driver on Linux/Ubuntu
XIO: fatal IO error 62 (Timer expired) on X server ":0.0"
after 1964 requests (1964 known processed) with 0 events remaining.
Okay, that's different from your original issue, you didn't run (at least not yet) into the bad magic number problem.
The Workaround for some crash in the Intel OpenGL driver on Linux/Ubuntu issue is actually mentioned in the troubleshooting section of the README. Is your GPU an Intel HD Graphics 620, or do you have something else ?
@Mara01010011 any updates on the issue ?
Sorry, no. I tried to reinstall it a few times in different ways, but it didn't work. But I made every adjustment of my python program for the nao with our real nao in the laboratory, so at the moment I am not in need of qibullet.
I think the problem lies within my Linux installation. I reinstalled it but kept the /home partition. Since then I sometimes experience permission issues, mostly with applications that use Java. So the problem in this case might be a user permission problem, too.
Alright. I'll close the issue for now; you can re-open it if you run into this problem again.
With git lfs pull I solved the same issue with Python 3.8, thank you @mbusy
|
GITHUB_ARCHIVE
|
DevLog #1: I’ve been devving without logging!
When I first started this endeavor, I promised myself one thing: no half-assing it. I was going to write a dev log weekly, slowly building up an audience. Eventually, I’d release my first game. One-man-banding an entire startup is rougher than I thought.
So, where are we at? A few months ago, I made a silly little endless space shooter titled “Meteoroids.” I then uploaded it to itch.io. “Meteoroids” was my first step into the wide world of game design. With future projects I would actually sell the games, building up my own little dream studio. “Meteoroids” was just a test, the beginning.
I had already begun writing out my design document for the next project when it occurred to me: why not just remake “Meteoroids” as a mobile game? Make THAT my first officially released game?
Cut to 10 months later and I’ve practically finished this little terror of a game, although I’ve completely neglected this site.
I always intended to have a legit dev log; I just didn't think it'd be necessary with this first one, since I thought I could have it done in a matter of weeks and start the real work on the next project. Like all best laid plans, that didn't happen. I hadn't anticipated the amount of work it would take to move to a completely new device, set up an LLC, learn not one but TWO new formats (iOS and Android), and playtest the thing (still haven't gotten to a beta). Probably the worst realization was that, 3 months ago, when I'd set a deadline to get a beta out, I started to hate my game. I could see the light at the end of the tunnel, and it angered me. I fell into a depression, and stopped all work.
Maybe it was a seasonal thing, but that’s beside the point: I’d lost all motivation, and I didn’t know how to stop this thing and move on to the next one.
Well, I think that funk may be over. Thanks to some advice from my friends Don, Eric, and Jamie (along with some sage advice from the wife), I finally came to the conclusion that this shit just needs to be done. So, I’ve started working on a nice little infinite-spawning script. I’ve stripped out the more console-y elements and streamlined the entire project back into a basic, endless, arcade shooter.
That’s right, I fell into the trap of scope creep… on my first project.
I’ve learned an immense amount so far in my game design journey, and although I’m clearly a novice, it’s still a lot of fun, when I don’t let myself get in the way.
Going forward, I’ll try to be a little better about logging my devving (er, developing, I guess… I like devving), and I’ll try to look out for traps like scope creep in the future. For now, I’m going to finish up this script, and get those beta invites out!
If you’re interested in a beta invite, feel free to contact me at the Facebook page, on the message board, or in the comments.
|
OPCFW_CODE
|
// Fill out your copyright notice in the Description page of Project Settings.
#pragma once

#include "CoreMinimal.h"
/**
 * Triangulation (triangle subdivision)
 */
class FNaTrianglation
{
public:
//! Circle definition (used as the circumscribed circle)
struct FCircle
{
//! Center X
int32 X;
//! Center Y
int32 Y;
//! Radius
float Radius;
//! Squared radius (for distance checks)
float Radius2;
};
//! Edge
struct FSegment
{
//!
FIntPoint Start;
//!
FIntPoint End;
FSegment(){}
FSegment( FIntPoint& p0, FIntPoint& p1 )
{
Start = p0;
End = p1;
}
//!
bool Contains( FIntPoint& p )
{
return Start == p || End == p;
}
//!
bool Equals( FSegment dst )
{
return (Start == dst.Start && End == dst.End) || (Start == dst.End && End == dst.Start);
}
};
//! Triangle
struct FTriangle
{
//! Vertices
FIntPoint Points[3];
//! Edges
FSegment Segments[3];
//! Circumscribed circle
FCircle Circle;
FTriangle(){}
FTriangle( FIntPoint& p0, FIntPoint& p1, FIntPoint& p2 )
{
Points[0] = p0;
Points[1] = p1;
Points[2] = p2;
Segments[0] = FSegment( p0, p1 );
Segments[1] = FSegment( p1, p2 );
Segments[2] = FSegment( p2, p0 );
GetCircumscribedCircle( p0, p1, p2, Circle );
}
//! Compute the circumscribed circle
void GetCircumscribedCircle( FIntPoint& p1, FIntPoint& p2, FIntPoint& p3, FCircle& circle )
{
int c;
int xx1 = p1.X * p1.X;
int xx2 = p2.X * p2.X;
int xx3 = p3.X * p3.X;
int yy1 = p1.Y * p1.Y;
int yy2 = p2.Y * p2.Y;
int yy3 = p3.Y * p3.Y;
int t0 = xx2 - xx1 + yy2 - yy1;
int t1 = xx3 - xx1 + yy3 - yy1;
c = ((p2.X - p1.X) * (p3.Y - p1.Y) - (p2.Y - p1.Y) * (p3.X - p1.X)) * 2;
if ( c == 0 ){
circle.X = 0;
circle.Y = 0;
}
else {
circle.X = ((p3.Y - p1.Y) * t0 + (p1.Y - p2.Y) * t1) / c;
circle.Y = ((p1.X - p3.X) * t0 + (p2.X - p1.X) * t1) / c;
}
t0 = p1.X - circle.X;
t1 = p1.Y - circle.Y;
circle.Radius2 = t0 * t0 + t1 * t1;
circle.Radius = (float)FMath::Sqrt( circle.Radius2 );
}
//!
bool IntersectPoint( FIntPoint& p )
{
int dx,dy;
float dist;
dx = p.X - Circle.X;
dy = p.Y - Circle.Y;
dist = (dx * dx + dy * dy);
return dist < Circle.Radius2;
}
//!
bool Equals( FTriangle& dst )
{
for ( int i = 0; i < 3; ++i ){
if ( !dst.ContainsSegment( Segments[i] ) ){
return false;
}
}
return true;
}
//!
bool ContainsPoint( FIntPoint p )
{
return Points[0] == p || Points[1] == p || Points[2] == p;
}
//!
bool ContainsSegment( FSegment& seg )
{
return Segments[0].Equals( seg ) || Segments[1].Equals( seg ) || Segments[2].Equals( seg );
}
//!
FIntPoint GetAnotherPoint( FSegment seg )
{
if ( !seg.Contains( Points[0] ) ){
return Points[0];
}
else if ( !seg.Contains( Points[1] ) ){
return Points[1];
}
else {
return Points[2];
}
}
//!
void AppendAnotherSegments( FSegment seg, TArray<FSegment>& refVal )
{
if ( !Segments[0].Equals( seg ) ){
refVal.Push( Segments[0] );
}
if ( !Segments[1].Equals( seg ) ){
refVal.Push( Segments[1] );
}
if ( !Segments[2].Equals( seg ) ){
refVal.Push( Segments[2] );
}
}
};
public:
//! Triangulation
static TArray<FTriangle> Exec( TArray<FIntPoint>& points );
protected:
//! Get the outer (super) triangle
static TSharedPtr<FNaTrianglation::FTriangle> GetOuterTriangle( FIntRect& rect );
};
|
STACK_EDU
|
[Error] Info.plist specifies a non-existent file for the CFBundleExecutable key
New Issue Checklist
[ x ] Updated fastlane to the latest version
[ x ] I have read the Contribution Guidelines
Issue Description
Complete output when running fastlane, including the stack trace and command used
xcrun throws an error on the bundle I have in my project. The project compiles and launches fine in Xcode, though.
[14:41:57]: $ /usr/bin/xcrun /Users/nico/.fastlane/bin/bundle/lib/ruby/gems/2.2.0/gems/fastlane-2.3.0/gym/lib/assets/wrap_xcodebuild/xcbuild-safe.sh -exportArchive -exportOptionsPlist '/var/folders/nf/vb_nt19s0sqcygc79bw731z40000gn/T/gym_config20161227-32539-1du5v9t.plist' -archivePath /Users/nico/Library/Developer/Xcode/Archives/2016-12-27/bFan-ios-dev\ 2016-12-27\ 14.37.49.xcarchive -exportPath '/var/folders/nf/vb_nt19s0sqcygc79bw731z40000gn/T/gym_output20161227-32539-1cyl2vb'
+ xcodebuild -exportArchive -exportOptionsPlist /var/folders/nf/vb_nt19s0sqcygc79bw731z40000gn/T/gym_config20161227-32539-1du5v9t.plist -archivePath '/Users/nico/Library/Developer/Xcode/Archives/2016-12-27/bFan-ios-dev 2016-12-27 14.37.49.xcarchive' -exportPath /var/folders/nf/vb_nt19s0sqcygc79bw731z40000gn/T/gym_output20161227-32539-1cyl2vb
2016-12-27 14:42:02.538 xcodebuild[37360:975617] [MT] IDEDistribution: -[IDEDistributionLogging _createLoggingBundleAtPath:]: Created bundle at path '/var/folders/nf/vb_nt19s0sqcygc79bw731z40000gn/T/bFan-ios-dev_2016-12-27_14-42-02.537.xcdistributionlogs'.
1.2.840.1136<IP_ADDRESS>
1.2.840.1136<IP_ADDRESS>
1.2.840.1136<IP_ADDRESS>
2016-12-27 14:42:18.245 xcodebuild[37360:975617] [MT] IDEDistribution: Step failed: <IDEDistributionThinningStep: 0x7ff661dc7320>: Error Domain=IDEDistributionErrorDomain Code=14 "No applicable devices found." UserInfo={NSLocalizedDescription=No applicable devices found.}
error: exportArchive: No applicable devices found.
Error Domain=IDEDistributionErrorDomain Code=14 "No applicable devices found." UserInfo={NSLocalizedDescription=No applicable devices found.}
** EXPORT FAILED **
In the logs, I have the following:
2016-12-27 13:42:18 +0000 error: Info.plist of “bFan - DEV.app/VGPlayer.bundle” specifies a non-existent file for the CFBundleExecutable key
2016-12-27 13:42:18 +0000 [MT] /Applications/Xcode.app/Contents/Developer/usr/bin/ipatool exited with 1
2016-12-27 13:42:18 +0000 [MT] ipatool JSON: {
alerts = (
{
code = 2567;
description = "Configuration issue: platform AppleTVSimulator.platform doesn't have any non-simulator SDKs; ignoring it";
info = {
};
level = WARN;
},
{
code = 2567;
description = "Configuration issue: platform iPhoneSimulator.platform doesn't have any non-simulator SDKs; ignoring it";
info = {
};
level = WARN;
},
{
code = 2567;
description = "Configuration issue: platform WatchSimulator.platform doesn't have any non-simulator SDKs; ignoring it";
info = {
};
level = WARN;
},
{
code = 0;
description = "Info.plist of \U201cbFan - DEV.app/VGPlayer.bundle\U201d specifies a non-existent file for the CFBundleExecutable key";
info = {
};
level = ERROR;
type = "malformed-payload";
}
);
}
The Info.plist file in the bundle does contain the CFBundleExecutable key, though.
My Gym command:
gym(
workspace: "bFan-ios-dev.xcworkspace",
scheme: org + "-ios-dev",
output_name: org + "-ios-dev.ipa",
clean: false,
derived_data_path: "/var/tmp/derived_data",
output_directory: "build"
)
Environment
Please run fastlane env and copy the output below. This will help us help you :+1:
If you used --capture_output option please remove this block - as it is already included there.
Have you tried feeding the arguments to gym into a direct call to xcodebuild?
You should be able to do something like this:
xcodebuild -workspace bFan-ios-dev.xcworkspace -scheme '<org>-ios-dev' build
You may need to add a few more options to make the build succeed (you can see the other options by running xcodebuild -help, and any command-line errors should point you in the right direction), but I'm curious whether xcodebuild will fail with the same error as gym, or whether you will get a successful build.
Thanks. I just tested and it builds fine with your command. No other options required.
What doesn't work is the -exportArchive step with the provided options. That fails.
The archive folder contains a bunch of project files, an Info.plist at the root, my Bundle, and inside the Bundle another Plist file.
Not sure why this bundle creates a problem :(
I think it was a signing issue. I also deleted the CFBundleExecutable key in the Bundle Plist.
Closing
|
GITHUB_ARCHIVE
|
File System Store draft, to be re-read
Condensation objects and accounts can be stored in a folder on any file system. The present document describes structure and operations of a Condensation store as a folder on a file system.
For the remainder of this text, we assume that the base folder is called base-folder.
Objects are stored as files named
base-folder/objects/HH/HH…HH
where H* are the lowercase hex digits of the object's hash. The first two digits are used as a sub folder, while the remaining 62 digits form the file name. For example, an object whose hash starts with ab would be stored under base-folder/objects/ab/ with the remaining 62 digits as the file name.
To get an object, simply read the corresponding file.
To add an object, create the destination folder (base-folder/objects/HH) if necessary, and write the contents as a temporary file within this folder. Then rename that file to its final name.
On all major operating systems, renaming a file in the same folder is atomic. If the object exists already (and its contents match the expected SHA-256 sum), no new file needs to be written, but the existing file must be touched to set its modification date to now.
To book an object, touch the corresponding file (i.e. set its modification date to now).
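The add and book operations above can be sketched in a few lines (Python; this assumes a POSIX-style file system where a rename within the same folder is atomic):

```python
import hashlib
import os
import tempfile

def put_object(base_folder, content):
    """Store `content` as objects/HH/<62 hex digits>, atomically via rename."""
    h = hashlib.sha256(content).hexdigest()  # 64 lowercase hex digits
    folder = os.path.join(base_folder, "objects", h[:2])
    path = os.path.join(folder, h[2:])
    os.makedirs(folder, exist_ok=True)
    if os.path.exists(path):
        os.utime(path)  # object already exists: just touch (book) it
        return path
    # Write to a temporary file in the SAME folder, then rename atomically.
    fd, tmp = tempfile.mkstemp(dir=folder)
    with os.fdopen(fd, "wb") as f:
        f.write(content)
    os.replace(tmp, path)
    return path
```

Calling put_object again with the same content only updates the modification date, which is exactly the booking behaviour described above.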
Accounts and boxes
Each box is a folder named
base-folder/accounts/ACCOUNT/BOX
where ACCOUNT is 64 hex digits and BOX is either messages, private, or public. Each hash within a box is an empty file named after the hash (64 lowercase hex digits).
A store with one account may look as follows:
/srv/condensation
  accounts
    eae220..c6
      messages
        465545..da
        543c50..1a
      private
      public
        29767d..da
To list a box, enumerate the files of the corresponding folder, and report all file names that consist of exactly 64 hex digits.
To add an envelope, put the object onto the store, and create a hash file in the corresponding box folder.
To remove an envelope from a box, simply delete the corresponding hash file from the box folder. Return success irrespective of whether deletion succeeded or not. The envelope remains on the store until garbage is collected.
Path length considerations
A Condensation object path is always 74 ASCII characters long, while a box entry requires up to 165 ASCII characters (message box).
On Windows, it is recommended to use the "\\?\" prefix, as regular paths are otherwise limited to MAX_PATH (260) characters.
On 8.3 type file systems, the present protocol cannot be used.
Recognizing Condensation folders
A folder containing the sub folders objects and accounts (written with lowercase characters) is considered a Condensation store folder. Note that other files and folders may be present as well.
POSIX permissions (private store)
A store used by POSIX user U should use the following permissions and ownership:
|Item|Permissions|Owner|
|---|---|---|
|Object folders|0711 (rwx, ––x, ––x)|User U|
|Object files|0644 (rw–, r––, r––)|User U|
|Account folders|0711 (rwx, ––x, ––x)|User U|
|Message box folder|0700 (rwx, –––, –––)|User U|
|Private box folder|0700 (rwx, –––, –––)|User U|
|Public box folder|0755 (rwx, r–x, r–x)|User U|
|Message box files|0600 (rw–, –––, –––)|User U|
|Private box files|0600 (rw–, –––, –––)|User U|
|Public box files|0644 (rw–, r––, r––)|User U|
In general, everything belongs to user U. To share objects with other people, the object store must be publicly readable.
Note that it is not possible to receive messages from other people through a private store, as they cannot post envelopes. Hence, the message box can remain private.
POSIX permissions (shared store)
To share a store among multiple users, add all users to a group G, and use the following permission scheme:
|Item|Permissions|Ownership|
|---|---|---|
|Object folders|0771 (rwx, rwx, ––x)|Any user, group G|
|Object files|0664 (rw–, rw–, r––)|Any user, group G|
|Account folders|0771 (rwx, rwx, ––x)|Any user, group G|
|Message box folder|0770 (rwx, rwx, –––)|Any user, group G|
|Private box folder|0770 (rwx, rwx, –––)|Any user, group G|
|Public box folder|0775 (rwx, rwx, r–x)|Any user, group G|
|Message box files|0660 (rw–, rw–, –––)|Any user, group G|
|Private box files|0660 (rw–, rw–, –––)|Any user, group G|
|Public box files|0664 (rw–, rw–, r––)|Any user, group G|
Shared folder stores allow users to communicate within the group.
Users must minimally trust each other. They cannot read or modify each other's data (beyond what they share with each other), but they can delete each other's accounts and objects.
To thwart against private data deletion, users may use their private store to store private data, and a shared store for communication only. An actor thereby announces itself on both the private and the shared store. Messages are sent and read through the shared store, while private data is stored on the private store:
Centralized garbage collection through tree traversal
Garbage collection can be performed by an external program (or by any user) when no user is actively writing to the store. For that, start with the boxes, and follow the objects down the tree. Keep a list of seen objects, and delete all other objects once all trees have been traversed.
Since every object needs to be opened to read the header, this procedure can take a few seconds for a store with several thousand objects.
Conceptually, this is a centralized strategy. The garbage collector must be able to follow all trees. If an intermediate node is missing, that whole subtree will be deleted, since the garbage collector is unable to traverse these nodes.
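The traversal described above amounts to a mark-and-sweep pass. A minimal sketch over an in-memory object graph (the dict-based layout and names are illustrative, not the on-disk format):

```python
def reachable_objects(objects, roots):
    """Mark phase of the centralized GC sketch.

    `objects` maps an object id to the list of child ids read from its
    header; `roots` are the ids referenced from the boxes.  Returns the
    set of reachable ids; every other object may be deleted (sweep).
    """
    seen = set()
    stack = list(roots)
    while stack:
        obj_id = stack.pop()
        if obj_id in seen:
            continue
        seen.add(obj_id)
        # A missing intermediate node leaves its children unvisited,
        # so that whole subtree ends up garbage-collected.
        for child in objects.get(obj_id, []):
            stack.append(child)
    return seen
```

For example, with `objects = {"a": ["b"], "b": [], "c": []}` and root `"a"`, the sweep would delete only `"c"`.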
Client-driven garbage collection through stages
With client-driven garbage collection, stage folders are created. Each stage folder is named after its creation date (a UTC timestamp in ISO 8601 format), and contains a Condensation store folder (i.e., the objects and accounts folders as mentioned above). As an example, consider the following store with two stage folders:
base-folder
  20140610T174211Z
    objects
    accounts
  20140712T112958Z
    objects
    accounts

Each user:
- writes new objects to the most recent stage
- moves all of their objects from older stages to the most recent stage, and then moves their account to the newest stage
- looks up objects in all stages (get object)
- deletes old stages that do not contain any accounts
- creates a new stage if the newest stage is more than 30 days old
This garbage collection scheme works in a completely distributed setting, and is fault-tolerant. However, it requires the cooperation of all users. Should a user not connect for a prolonged amount of time, they will block deletion of a stage.
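The stage rules above can be expressed as a pure decision function. A sketch (the folder-name format and the 30-day constant follow the text; everything else, including the function name, is illustrative):

```python
from datetime import datetime, timedelta, timezone

STAGE_FORMAT = "%Y%m%dT%H%M%SZ"

def plan_stage_maintenance(stages, accounts_per_stage, now):
    """Return (stages_to_delete, create_new_stage) for a list of stage
    folder names (UTC timestamps) and the number of accounts left in
    each.  Actual folder operations are left out of this sketch."""
    stages = sorted(stages)
    # Old stages that no longer contain any accounts can be deleted.
    to_delete = [s for s in stages[:-1] if accounts_per_stage.get(s, 0) == 0]
    # Create a new stage if the newest one is more than 30 days old.
    newest = datetime.strptime(stages[-1], STAGE_FORMAT).replace(tzinfo=timezone.utc)
    create_new = (now - newest) > timedelta(days=30)
    return to_delete, create_new
```

With the example store above, once every account has left the 20140610 stage, that stage becomes deletable.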
// How do I declare the errors in here?? I'm supposed to use a using statement to do so
bool operator==(const MyString& rightOp) const;
bool operator==(const char* rightOp) const;
char operator[](int sub) const;
char& operator[](int sub);
const char* c_str() const;
bool empty() const;
int size() const;
MyString(const MyString& s);
int capacity() const;
MyString& operator=(const MyString& rightOp);
MyString operator+(const MyString&) const;
MyString operator+(const char*) const;
Any help would be greatly appreciated. I don't think it's a super hard question :P it's just that my teachers take weeks to respond to questions... and their office hours are non existent. If any more code is needed, I'll be sure to post.
The header file snippet you posted does NOT include declarations for the two at methods defined in your implementation (.cpp) file. In any case, the declaration lines you need are pretty much a cut-n-paste of the first line of the function implementation. You need only strip off the MyString:: and add a semicolon to the end. It should look like this:
Yea, my teacher gave us a hw assignment involving it, so I kinda have to go about it this way lol. For some reason, I had to remove the "out_of_range" text because it was asking for a type-specifier, but thank you for the help good sir and I will definitely give your link a read.
No, you shouldn't have to remove the out_of_range specifier. If you do, then I think you're missing the entire point of what your instructor is asking you to do. Don't you have a book or class notes or something with examples? I think you're in trouble in this class.
I suspect your problem is one (or both) of the following:
1) You didn't include the header file that declares the standard exception out_of_range:
2) The exception out_of_range exists in the namespace std. You either need to refer to this exception by its fully qualified name (std::out_of_range) OR you need to add this statement after your includes:
which tells the compiler to search the namespace "std" in addition to the global namespace when searching for symbols. In other words, when the compiler sees the symbol "out_of_range" in your code and can't find its definition in the global namespace, the above statement will make the compiler also search for the symbol "out_of_range" in the namespace named "std" (where it will happily find it).
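A minimal sketch of how these pieces fit together. The data layout here is assumed, not your assignment's exact class, and copy control is omitted for brevity; the point is the `<stdexcept>` include plus the `std::` qualification on the exception:

```cpp
#include <cstring>
#include <stdexcept>  // declares std::out_of_range in namespace std

// Sketch only: layout assumed, copy constructor/assignment omitted.
class MyString {
    int   len_;
    char* data_;
public:
    explicit MyString(const char* s)
        : len_(static_cast<int>(std::strlen(s))),
          data_(new char[len_ + 1]) {
        std::strcpy(data_, s);
    }
    ~MyString() { delete[] data_; }

    int size() const { return len_; }

    // Bounds-checked access: throws std::out_of_range on a bad index.
    char at(int sub) const {
        if (sub < 0 || sub >= len_)
            throw std::out_of_range("MyString::at: index out of range");
        return data_[sub];
    }
};
```

With `using namespace std;` after the includes, the throw could name `out_of_range` unqualified instead.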
Well, I mean I replaced it with "std::out_of_range", and it works just fine. I dunno, I already had the #include <stdexcept>
I'm not gonna lie, this year made a huge leap as far as difficulty. This is only my second year of programming. Perhaps I'm not studying well enough, but I got a 97% in my first year, now I'm at about a C... I'm trying to catch up, but I don't think I'm that bad where I don't know anything about header files and using things from the library.
But thanks again for your help. I was able to get everything in.
This section describes changes made in each version of the Sakila sample database.
Films without an actor were not returned by the film_list and nicer_but_slower_film_list views.
Fixed MySQL Bug #106951: Accented characters were missing from the country fields; their values were updated using the world database. In addition, the acute accent character itself was also missing.
Fixed MySQL Bug #107158: Removed five rows in the payment table that had a null rental_id value.
Database objects now use utf8mb4. This change caused a "Specified key was too long; max key length is 767 bytes" error in MySQL 5.6 for the film.title column, which was declared as VARCHAR(255). The actual maximum title length is 27 characters, so the column was redeclared as VARCHAR(128) to avoid exceeding the maximum key length.
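The key-length arithmetic behind that error is simple: with a 4-byte-per-character character set such as utf8mb4, an indexed VARCHAR(255) needs up to 1020 bytes, exceeding InnoDB's 767-byte key limit, while VARCHAR(128) needs only 512. A quick check (the constants are MySQL's documented limits; the helper function is illustrative):

```python
MAX_KEY_BYTES = 767   # InnoDB index-key limit that triggers the error
BYTES_PER_CHAR = 4    # utf8mb4 worst case (utf8 uses at most 3)

def fits_in_index_key(n_chars, bytes_per_char=BYTES_PER_CHAR):
    """Can a VARCHAR(n_chars) column be fully covered by an index key?"""
    return n_chars * bytes_per_char <= MAX_KEY_BYTES
```

`fits_in_index_key(255)` is False (1020 bytes), while `fits_in_index_key(128)` is True (512 bytes), which is why the column was narrowed.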
SET NAMES utf8mb4 statement.
sakila-data.sql was converted from DOS (CRLF) line endings to Unix (LF) line endings.
The address.location column is a GEOMETRY column that has a SPATIAL index. As of MySQL 8.0.3, SPATIAL indexes are ignored unless the indexed spatial column has an SRID attribute, so the location column was changed to include an SRID 0 attribute for MySQL 8.0.3 and higher.
The staff.password column was declared as VARCHAR(40) BINARY. This use of BINARY as shorthand in a character column declaration for specifying a _bin collation is deprecated as of MySQL 8.0.17. The column was redeclared as what BINARY is shorthand for, that is, VARCHAR(40) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin.
In the rewards_report() stored procedure, the min_dollar_amount_purchased parameter was declared as DECIMAL(10,2) UNSIGNED. Use of UNSIGNED with DECIMAL is deprecated as of MySQL 8.0.17, so the parameter was redeclared without UNSIGNED.
The film_in_stock() and film_not_in_stock() stored procedures used the FOUND_ROWS() function, which is deprecated as of MySQL 8.0.17. In each procedure, the FOUND_ROWS() query was replaced by a query that uses COUNT(*) with the same WHERE clauses as its associated query. This is more expensive than using FOUND_ROWS() but produces the same result.
InnoDB prior to MySQL 5.6.10, to avoid table-creation failure in older versions. (However, we still recommend upgrading to MySQL 5.6.10 or higher.)
The sakila.mwb file for MySQL Workbench was updated per the preceding changes.
The film_text table, and its FULLTEXT definition, now uses InnoDB. If you use an older MySQL server version (5.6.10 and lower), we recommend upgrading MySQL. If you cannot upgrade, change the ENGINE value for the film_text table to MyISAM.
sakila-spatial-schema.sql into a single file by using MySQL version-specific comments.
Spatial data, such as address.location, is inserted into the sakila database as of MySQL server 5.7.5 (when spatial indexing support was added to InnoDB).
InnoDB full-text search is used as of MySQL server 5.6.10, when it became available.
Added an additional copy of the Sakila example database that includes spatial data with the geometry data type. This is available as a separate download, and requires MySQL server 5.7.5 or later. To use this database, load the sakila-spatial-schema.sql file rather than the sakila-schema.sql file.
Changed the GROUP BY clause of the film_list and nicer_but_slower_film_list view definitions to be compatible with ONLY_FULL_GROUP_BY SQL mode, which is enabled by default as of MySQL 5.7.5.
Changed the upd_film trigger definition to include changes to
Changed error handler for the inventory_held_by_customer function. The function now has an exit handler for NOT FOUND instead of the more cryptic
Added template for new BSD license to schema and data files.
Added READS SQL DATA to the stored procedures and functions where appropriate, to support loading on MySQL 5.1.
Fixed date-range issue in the rewards_report procedure (thanks Goplat).
Fixed bug in the sales_by_store view that caused the same manager to be listed for every store.
Fixed bug in the inventory_held_by_customer function that caused the function to return multiple rows.
sakila-data.sql file to prevent it from interfering with data loading.
Optimized data file for loading (multiple-row INSERT statements, transactions). (Thanks Giuseppe)
Fixed error in the payment table loading script that caused infinitely increasing payment amounts.
sales_by_film_category views, submitted by Jay Pipes.
rewards_report stored procedure, submitted by Jay Pipes.
sakila-data.sql file to load data into the sample database.
Foreign key added for
INT AUTO_INCREMENT, made into surrogate primary key, old primary key changed to
All tables have a TIMESTAMP column with traditional behavior (DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP).
actor_id is now a
address_id is now a
category_id is now a
city_id is now a
country_id is now a
customer_id is now a
customer table are now
customer table now has
customer table is now
customer table has a new
ON INSERT trigger that enforces
create_date column being set to
film.release_year added with type
film.original_language_id added along with
language table. For foreign films that may have been subtitled.
film_category; allows for multiple categories per film.
Trigger added to the payment table to enforce that
payment_date is set to
rental.rental_date and is now
Trigger added to the rental table to enforce that
rental_date is set to
film_list view updated to handle new
nicer_but_slower_film_list view updated to handle new
'''
Pisa stage that pre-computes some quantities
needed for the generalized likelihood, and applies
small adjustments to the weight distributions in cases
where the number of MC events per bin is low.

The code does the following, in order:

- Calculate the number of MC events per bin once,
  at setup time

- Calculate, at setup time, a mean adjustment, based
  on the average number of MC events per bin. If the
  latter is less than one, the adjustment is applied;
  otherwise that quantity is equal to zero

- Populate ANY empty MC bin with a pseudo-weight with a
  value equal to the maximal weight value of a given
  dataset. This corresponds to empty-bin strategy #2
  described in (1902.08831). Note that empty-bin strategy #1
  can still be applied later on, if one provides the bin
  indices where no datasets have any MC events. This step
  runs in the apply function because the value of the
  pseudo-weight will change during minimization.

- Once this is done, compute the alpha and beta
  parameters that are fed into the likelihood

The stage appends / modifies the following:

    weights: changes the individual weight distribution
             based on the empty-bin filling outcome

    llh_alphas: Map (alpha parameters of the generalized likelihood)

    llh_betas: Map (beta parameters of the generalized likelihood)

    n_mc_events: Map (number of MC events in each bin)

    new_sum: Map (sum of the weights in each bin (i.e. MC expectation),
             corrected for the empty-bin filling and the mean
             adjustment)
'''
from __future__ import absolute_import, print_function, division
__author__ = "Etienne Bourbeau (etienne.bourbeau@icecube.wisc.edu)"
import numpy as np
import copy
from pisa import FTYPE
from pisa.core.stage import Stage
# uncomment this to debug stuff
from pisa.utils.log import logging
from pisa.utils.profiler import profile, line_profile
from pisa.utils.log import set_verbosity, Levels
#set_verbosity(Levels.DEBUG)
PSEUDO_WEIGHT = 0.001
class generalized_llh_params(Stage):
    """
    Pisa stage that applies mean adjustment and
    empty-bin filling. Also computes the alphas and betas
    that are needed by the generalized Poisson likelihood.
    """

    # this is the constructor with default arguments
    def __init__(self,
                 **std_kwargs,
                 ):
        # init base class
        super(generalized_llh_params, self).__init__(expected_params=(),
                                                     **std_kwargs,
                                                     )

    def setup_function(self):
        """
        Declare empty containers, determine the number
        of MC events in each bin of each dataset and
        compute the mean adjustment.
        """
        N_bins = self.apply_mode.tot_num_bins

        self.data.representation = self.apply_mode

        for container in self.data:
            #
            # Generate a new container called bin_indices
            #
            container['llh_alphas'] = np.empty((container.size), dtype=FTYPE)
            container['llh_betas'] = np.empty((container.size), dtype=FTYPE)
            container['n_mc_events'] = np.empty((container.size), dtype=FTYPE)
            container['old_sum'] = np.empty((container.size), dtype=FTYPE)

            #
            # Step 1: count the number of MC events in each bin,
            # for each container
            #
            self.data.representation = 'events'
            nevents_sim = np.zeros(N_bins)

            for index in range(N_bins):
                index_mask = container['bin_{}_mask'.format(index)]
                if 'kfold_mask' in container:
                    index_mask *= container['kfold_mask']
                # Number of MC events in each bin
                nevents_sim[index] = np.sum(index_mask)

            self.data.representation = self.apply_mode
            np.copyto(src=nevents_sim,
                      dst=container["n_mc_events"])
            container.mark_changed('n_mc_events')

            #
            # Step 2: Calculate the mean adjustment for each container
            #
            mean_number_of_mc_events = np.mean(nevents_sim)
            if mean_number_of_mc_events < 1.0:
                mean_adjustment = -(1.0 - mean_number_of_mc_events) + 1.e-3
            else:
                mean_adjustment = 0.0
            container.set_aux_data(key='mean_adjustment', data=mean_adjustment)

            #
            # Add hypersurface containers if they don't exist
            # (to avoid errors in get_outputs, if we want these
            # to be returned when you call get_outputs)
            #
            if 'hs_scales' not in container.keys():
                container['hs_scales'] = np.empty((container.size), dtype=FTYPE)
                container['errors'] = np.empty((container.size), dtype=FTYPE)

    def apply_function(self):
        '''
        Compute the main inputs to the generalized likelihood
        function on every iteration of the minimizer.
        '''
        N_bins = self.apply_mode.tot_num_bins

        #
        # Step 4: Apply the empty-bin strategy and mean adjustment.
        # Compute the alphas and betas that go into the
        # poisson-gamma mixture of the llh.
        #
        for container in self.data:
            self.data.representation = 'events'

            #
            # Step 3: Find the maximum weight across all events
            # of each MC set. The value of that weight defines
            # the value of the pseudo-weight that will be included
            # in empty bins. For this part we are in events mode.
            #
            pseudo_weight = 0.001
            container.set_aux_data(key='pseudo_weight', data=pseudo_weight)

            old_weight_sum = np.zeros(N_bins)
            new_weight_sum = np.zeros(N_bins)
            alphas_vector = np.zeros(N_bins)
            betas_vector = np.zeros(N_bins)

            #
            # Load the pseudo_weight and mean adjustment values
            #
            mean_adjustment = container.scalar_data['mean_adjustment']
            pseudo_weight = container.scalar_data['pseudo_weight']

            for index in range(N_bins):
                index_mask = container['bin_{}_mask'.format(index)]
                if 'kfold_mask' in container:
                    index_mask *= container['kfold_mask']
                current_weights = container['weights'][index_mask]

                old_weight_sum[index] += np.sum(current_weights)
                assert np.all(current_weights >= 0), 'SOME WEIGHTS BELOW ZERO'
                n_weights = current_weights.shape[0]

                # If a bin has no weights while other datasets have some,
                # include a pseudo-weight. Bins with no MC events in any
                # set will be ignored in the likelihood later on.
                if n_weights <= 0:
                    current_weights = np.array([pseudo_weight])
                    n_weights = 1

                # write the new weight distribution down
                new_weight_sum[index] += np.sum(current_weights)

                # Mean of the current weight distribution
                mean_w = np.mean(current_weights)

                # Variance of the current weights
                var_of_weights = ((current_weights - mean_w)**2).sum() / float(n_weights)

                # Variance of the poisson-gamma distributed variable
                var_z = var_of_weights + mean_w**2

                if var_z < 0:
                    logging.warn('var_z is less than zero in %s: %s', container.name, var_z)
                    raise ValueError('var_z must be non-negative')

                # If the weights present have a mean of zero, default to
                # an alpha value of PSEUDO_WEIGHT and a beta of 1.0,
                # which mimics a narrow PDF close to 0.0
                beta = np.divide(mean_w, var_z, out=np.ones(1), where=var_z != 0)
                trad_alpha = np.divide(mean_w**2, var_z, out=np.ones(1) * PSEUDO_WEIGHT, where=var_z != 0)
                alpha = (n_weights + mean_adjustment) * trad_alpha

                alphas_vector[index] = alpha
                betas_vector[index] = beta

            # Store the calculated alphas and betas back in apply mode
            self.data.representation = self.apply_mode
            np.copyto(src=alphas_vector, dst=container['llh_alphas'])
            np.copyto(src=betas_vector, dst=container['llh_betas'])
            np.copyto(src=new_weight_sum, dst=container['weights'])
            np.copyto(src=old_weight_sum, dst=container['old_sum'])

            container.mark_changed('llh_alphas')
            container.mark_changed('llh_betas')
            container.mark_changed('old_sum')
            container.mark_changed('weights')
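The per-bin alpha/beta computation in apply_function can be exercised standalone. A sketch with plain NumPy (the function name and defaults are illustrative, not part of pisa):

```python
import numpy as np

def alpha_beta_for_bin(weights, mean_adjustment=0.0, pseudo_weight=0.001):
    """Per-bin alpha/beta of the poisson-gamma mixture, mirroring the
    stage above: beta = mean/var_z, alpha = (n + adjustment)*mean^2/var_z,
    with a pseudo-weight substituted when the bin has no events."""
    w = np.asarray(weights, dtype=float)
    if w.size == 0:
        w = np.array([pseudo_weight])  # empty-bin strategy #2
    n = w.size
    mean_w = w.mean()
    var_w = ((w - mean_w) ** 2).sum() / n
    var_z = var_w + mean_w ** 2
    beta = mean_w / var_z if var_z != 0 else 1.0
    trad_alpha = mean_w ** 2 / var_z if var_z != 0 else pseudo_weight
    return (n + mean_adjustment) * trad_alpha, beta
```

For two identical weights of 2.0, var_w = 0 and var_z = 4, so beta = 0.5 and alpha = 2.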
From the mundane to the exotic, our correspondents have brought together well over two dozen locations, useful for filler encounters, adventure destinations, or campaign backdrops. All entries specifically relate to North American places, but those in other parts of the world should be able to use many entries with only small changes due to local customs.
Game mechanics are at a minimum, so players of other systems should find the maps and location descriptions of some use in their campaigns.
Each entry starts with a generic explanation of the location, followed by one or more subsections that provide further details about the area. These subsections include “Don’t Miss … ,” “Things to See,” “People to Meet,” and “Things to Do.”
Don’t Miss …
Describing one example location, this section gives Game Masters a version of the area suitable for quickly inserting into an adventure. Alternatively, Game Masters can use the example as a source of inspiration for designing their own locations.
Things to See
This section offers a quick list of items (generally of the moveable variety) commonly found in the location. They’re a mere sampling of possible items, to get the Game Master started on a few believable details. Some of the items on the list may not be appropriate for certain settings (for example, while “video cassettes” are given in the generic library’s list, a Wild West library would not contain them.) The exact placement, number, and effects of the items are left to the Game Master.
Should the characters wish to use items in a destructive manner, Game Masters can follow these guidelines: Hard objects generally give a +1, +2, or +1D bonus to Strength Damage, and they have a maximum throwing range of the character's lifting roll (plus any relevant Special Ability bonus). Their Toughness generally equals 2D.
Light items don’t inflict any damage, but they can be distracting. The target of a successful attack has increased difficulties for the rest of the round and all of the next. (The modifier should also lower the initiative roll.) The difficulty modifier can range from +1 (for example, from an empty cup) to +5 (for example, from a cup filled with hot coffee).
To be effective (and thus inflict the modifier), light items may be tossed no more than two meters. However, some items are more aerodynamic (which means that they’ll go farther when tossed), and all items (including heavy ones) can be dropped from a distance.
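The dice codes used throughout ("2D+1" and so on) are a count of six-sided dice plus a pip bonus; a tiny helper makes the ranges concrete (illustrative, not from the sourcebook):

```python
import random

def roll(dice, pips=0, rng=random):
    """Roll a D6 System code such as 2D+1: `dice` six-siders plus `pips`."""
    return sum(rng.randint(1, 6) for _ in range(dice)) + pips
```

So a 2D+1 roll yields between 3 and 13.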
People to Meet
This section provides tips on possible skills and attributes associated with Game Master's characters for the location. When you need a filler character or a base to modify, use the following game characteristics.
Reflexes 2D, melee combat 2D+1, Coordination 2D, piloting 2D+1, throwing 2D+1, Physique 2D, lifting 2D+1, running 2D+1, swimming 2D+1, Knowledge 2D, business 2D+1, scholar 2D+1, tech: computers 2D+1, Perception 2D, streetwise 2D+1, Presence 2D, persuasion 2D+1. Move: 10. Strength Damage: 1D. Body Points: 10/Wound levels: 2.
Things to Do
Wrapping up the location, this section gives one or more scenario hooks or seeds. Not necessarily related to the example location, they show some ways to incorporate that type of location into a Game Master’s own campaigns.
Recently Posted Adventure Locations
See All Adventure Locations
Why would a Linear SVM perform worse than Logistic Regression?
I have a dataset of about 1000 features and 2000 training examples. I use Randomized Search with cross validation to compare Random Forest, Linear SVC, SVC with different non-linear kernels, and Logistic Regression.
I consistently get much worse scores with Linear SVC than with the other models, including Logistic regression (AUC ROC of 0.70 for Linear SVC vs 0.80 for other models).
From what I understand, the performance of the Linear SVMs should be comparable to Logistic Regression. What could be the reason for such poor performance?
Here are the hyperparameters I'm checking:
rf_param_grid = {
'randomforestclassifier__max_depth' : np.random.randint(5, 150, 30),
'randomforestclassifier__min_samples_split': np.random.randint(2, 50, 30),
'randomforestclassifier__n_estimators': np.random.randint(50, 400, 10),
'randomforestclassifier__min_samples_leaf': np.random.randint(1, 20, 30),
'randomforestclassifier__max_features': ['auto', 'sqrt', 'log2', 0.25, 0.5, 0.75, 1.0],
'randomforestclassifier__criterion': ['gini', 'entropy'],
'randomforestclassifier__class_weight': ["balanced", "balanced_subsample", None],
}
linear_svc_param_grid = {
'svc__C': [0.1, 0.2, 0.3, 0.4, 0.5, 1, 5, 10],
"svc__class_weight": ['balanced', None]
}
kernel_svc_param_grid = {
'svc__C': loguniform(1e-1, 1e3),
'svc__gamma': loguniform(1e-04, 1e+01),
'svc__degree': uniform(2, 5),
'svc__kernel': ['poly', 'rbf', 'sigmoid'],
"svc__class_weight": ['balanced', None]
}
lr_param_grid = {
'logisticregression__C': loguniform(1e-5, 1e4),
'logisticregression__penalty': ['l1', 'l2', 'elasticnet'],
'logisticregression__class_weight': ['balanced', None],
'logisticregression__l1_ratio': uniform(0, 1)
}
Tangentially, you don't need to tune the number of trees in a random forest. https://stats.stackexchange.com/questions/348245/do-we-have-to-tune-the-number-of-trees-in-a-random-forest
Maybe the linear SVC grid doesn't cover enough of the parameter space, or the grid is too coarse. You only specify 8 grid values for linear SVC. You don't tell us anything about the random search configuration, but if you're using more than 8 random search iterations, you're just repeatedly testing one or more of those same 8 values for SVC. There might be a hyper-parameter value, not among those 8, that improves the model to be consistent with your expectations.
For other hyper-parameters, you use continuous probability distributions. That means that each random search iteration draws a random value that is different than every value you've already tested. Using the same approach for linear SVC lets you test new hyper-parameter values at each random search iteration.
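Concretely, the linear-SVC search can sample C from a continuous distribution instead of 8 fixed values; a sketch using scipy's loguniform, following the convention of the other grids (the bounds are illustrative):

```python
from scipy.stats import loguniform

# Continuous log-uniform prior over C for the linear SVC, so each random
# search iteration tries a genuinely new value.
linear_svc_param_grid = {
    'svc__C': loguniform(1e-3, 1e3),
    'svc__class_weight': ['balanced', None],
}
```

Passed to RandomizedSearchCV, every iteration then draws a fresh C rather than revisiting the same 8 grid points.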
In addition to @sycorax's answer (+1), another issue is that the SVM is not designed for AUROC maximisation - it is designed to estimate the optimal decision surface for a particular set of misclassification costs, determined by the values of $C$ for each class (typically only a single $C$ value is used, in which case the misclassification costs are equal). How it performs for other sets of misclassification costs (which is what ROC analysis is probing), is not of primary importance.
The logistic regression model on the other hand, aims to estimate the posterior probability of class membership, and rather than concentrating on a single decision threshold (usually $p=0.5$), it tries to estimate the posterior probabilities accurately everywhere. This means that different misclassification costs can be accommodated just by changing the threshold value. As a consequence, there is more reason to expect that logistic regression will be quite good at maximising ROC (at least more so than the SVM).
So if AUROC is the primary performance index, it probably isn't a suitable application for the SVM.
Good point (+1) On the other hand, there is reason to believe that a nonlinear SVM like a radial basis SVM would have a higher AUC than a logistic model that is linear in the features, if the decision boundary for the task is not linear in the feature space. In other words, a more complex model can overcome the limitation you cite. Naturally, a logistic regression with basis expansion could leverage the qualities you point out to out-perform an SVM in terms of AUC.
@Sycorax the kernel trick can be applied to logistic regression as well - Kernel Logistic Regression is one of my favourite machine learning tools. I tend not to use conventional SVMs that often. The Least-Squares Support Vector Machine (LS-SVM) is another good tool, with more reason to expect good ROC performance as it is optimising a proper scoring rule.
The author was a designer of SAX and creator of the SAX Python translation. XSD lacks any formal mathematical specification.
The important thing is to remain consistent and try to be intuitive, so that when others look at your schema they can make sense of it.
Since both the elements and types are defined globally, both are available for reuse. Each document model is a separate language.
XSD is not uniform; for example, restriction of elements works differently from restriction of attributes.
Priscilla introduces powerful advanced methods ranging from type derivation to identity constraints.
The list is shorter than the lists you get from bookstore search engines because it excludes editions earlier than the current one and books that are out of print or unavailable from Web vendors.
Emphasis on design: this is where I had been cutting corners. Different cultures will each use XML in their own special way. The set of XSD datatypes on offer is highly arbitrary.
What are best practices for designing XML schemas?
Guy Baroan is founder and President of Baroan Technologies.
Microsoft’s new Office 365 offering is a game changer, especially for the SMB market. Anyone who has a Microsoft Small Business Server with Exchange included, or decided previously to host Exchange internally, will definitely want to consider moving to Office 365 for several compelling reasons.
For us, cost is at the top of the list of reasons to move. When we do the math for our clients, there is no situation where it makes sense for them to bring Exchange in-house. Not one.
We’ve found having Microsoft host the service at their data centers is great if interruptions occur locally. Anyone that was using Office 365 when Hurricane Sandy hit the east coast in 2012 was able to continue working, while everyone else was dealing with power outages for weeks. And even if Office 365 is not safe from human error or data sync issues, you always have the option of an affordable and reliable cloud backup.
That said, there are many things you’ll have to take into account if you’re considering a migration. Let’s take a look at some of the common issues we’ve encountered in our migrations. For a full listing of the requirements please go to the Microsoft Office 365 site.
Workstation and User Considerations
- Your clients need to use at least Windows 7 or Mac OS X 10.5 with all the latest service packs and patches. Note that Windows XP with SP3 and Vista SP2 are supported for now, but won’t be after January 1, 2014.
- If they will be using Outlook locally, they’ll need version 2007 or newer for Windows (or Office 2011 on Mac), again with all the latest MS Office service packs and patches.
- They will also need the latest version of IE, Firefox, Chrome, etc., especially if they’re going to use the Outlook Web App.
- If your clients have any .PST files, you'll need to find out whether or not they want to move them to Office 365. There is a limit of 25GB per mailbox as part of the service, so if they need more than that, you'll have to figure something out.
- You’ll also want to consider mobile devices. Are they approved devices (iPhone, Blackberry, DROID or Windows Mobile) with the latest software version? If not, will they sync with Active Sync? Your clients may need to wipe their devices to resynchronize with the new service, so make sure they’re prepared for that. You’ll also want to know if administrators should have access to these devices when the migration is taking place.
- Your clients will need to be running Windows Server 2008 with all the latest service packs and patches.
- Find out if your clients have in-house applications that MUST have an internal mail server or access to relay. If so, make sure to go over the Office 365 limits with them.
- They must be running Exchange 2003.
- Unfortunately, if your clients are using public folders, they’re out of luck. Public folders are not currently available. Microsoft may make them available in the future, but for now, your only option is to purchase another mailbox that can work as a public folder.
Active Directory and Single Sign-on Considerations
- If your clients need a single sign-on experience, their Active Directory will tie into the Office 365 setup and allow the administrators to change the password in Active Directory, which will then change it on the Office 365 servers. The "gotcha" here is that to accomplish this, the client will have to have at least five additional servers for this to work: two ADFS 2.0 proxy servers, two ADFS 2.0 servers (the minimum for redundancy), and a DirSync server.
- High availability is also an issue for single sign-on clients. Since outside clients have to authenticate with the active directory servers, they will have to first verify their credentials with the active directory servers at the office. This means that if the office is down, the outside users will have no way to get their emails from Office 365. Clients will need to weigh the pros and cons if they want to achieve a high level of availability.
- Migration will take longer for clients who lack a fast Internet connection. The files first have to be uploaded to the cloud, then brought back down to sync with the Outlook clients. Depending on the connection, it could take a few days or longer for the migration to be completed.
- Your clients may need the extra bandwidth to get the most out of Office 365 so if they don’t have multiple connections they should consider it.
- In the same vein, if their firewall doesn’t support multiple internet connections, they should consider building one that will. Most newer firewalls will support multiple connections.
- Be sure your clients know that with Office 365, internet access becomes much more critical as without it, they can’t see their email. They can use a mobile device for the email, but their local client or Outlook Web App will need Internet service to have access. We normally recommend that our clients have two different services: one from a phone provider and one from a cable provider so their networks are not shared. Also, the chance that the two will be down at the same time is very small.
- Your clients will need to be able to modify their DNS records, so if they don’t have immediate access to them, they’ll need to find out who does.
- If the DNS host does not allow multiple records to be added for the domain, you may want to transfer to one that does.
- You’ll want to know the time-to-live (TTL) setting. Once you are ready to move the service over, make sure you know how long it will take to propagate through the internet.
That was a pretty big list. There’s certainly a lot of work to be done prior to migration but once email is migrated, there’s not much else you’ll need to do. Also, keep in mind that while some of these tasks seem like a lot of work, none of them are out of the ordinary. In fact, if the network was kept up to standards as required for security and functionality, there would really be very little to get done. Unfortunately, there are people that are too busy to keep up with the maintenance work. Going through this process is actually good for the client as it will bring them up to the security and functionality standard they should be at.
|
OPCFW_CODE
|
What will you learn in the Full Stack Web Development with C# OOP, MS SQL & ASP.NET MVC course?
Welcome to the Full Stack Web Development with C# OOP, MS SQL & ASP.NET MVC course.
Do you desire to develop mobile apps, web applications, and games?
Are you looking to make a mark by using clean code and agile design patterns?
If the answer is yes, then you need to learn web development. You are in the right spot.
C# object-oriented programming is the basis of many modern application development techniques. Interfaces and the fundamentals of object-oriented programming are vital. In this course you will master everything from A to Z about C# object-oriented programming on real C# projects.
In this class, we will use interactive programming methods. This means we'll build applications together, and there will be plenty of exercises to complete, followed by questions. You will also be taught tips and techniques for elegant and efficient programming.
SQL is the language of choice for relational database systems. All relational database management systems, such as SQL Server, MySQL, MS Access, Oracle, and Sybase, use SQL as the standard database language. SQL is used for communicating with databases.
In this class, you will get an excellent introduction to SQL with MS Management Studio, which lets you manage databases and retrieve data from them through a graphical interface.
You will also be learning MVC, so you'll need basic C# skills to get the most value out of this course. However, I will go over every piece of code in detail.
The course will begin by learning MVC from scratch, and you will study every aspect one by one through real-world scenarios. We will then create a dynamic web application, page by page, using a four-tier architecture.
Additionally, you will learn how to customize ready-made templates in the project. After that, you'll learn how to work with GitHub from Visual Studio and publish a project on the internet. You will also be taught how to create an Android application from a website using a WebView.
That's why you're in the right place to begin OOP with C#.
What will you be able to learn?
Implementing OOP concepts with C#
How to apply each topic in real-world projects
You'll be able to learn programming languages such as Java and Python in a relatively short amount of time
N-tier architecture
How to create a practical project using three-tier architecture and LINQ
Making use of the Abstract Factory, Observer, and Facade design patterns
Using Entity Framework
Using N-tier architecture, design patterns, and Entity Framework together
How to create professional applications
How to create a personnel tracking system algorithm
How to create a stock tracking system algorithm
How to use the Facade design pattern in a real application
Using the basic SQL commands
Using folder and file operations
How to use delegates and events
How to deal with mistakes and errors in your applications
How to use coding techniques for efficient development
How to install and set up what you need
You will be taught the fundamentals of SQL, including databases, data, DBMS, SSMS, SQL tables, and so on
Retrieving data from the database in various scenarios
You will also learn SQL transactions and transaction commands
Schemas and schema objects
Permission commands, user privileges, and roles
How to apply each topic to real-world projects
Understanding the MVC architectural pattern
Using MVC concepts in full detail
Using partial views, start pages, and JSON
Using data transfer objects such as ViewBag, ViewData, and TempData
Using ready-made templates
Creating real-world projects with ASP.NET MVC and Entity Framework
How to use the database-first approach with Entity Framework
Using Entity Framework for SQL operations
N-tier architecture
How to separate projects into layers
How to design a dynamic web project algorithm
How to create a practical web application using four-tier architecture and Entity Framework
How to modify your project's front pages with ease
Using the basic SQL commands
Using triggers
Creating log operations
How to deal with mistakes and errors within your applications
How to use coding techniques for effective development
How to implement SEO operations for Google
How to get feedback from messages or comments
Becoming familiar with working with GitHub
How to publish a web project
How to make an Android APK from a website
At the end of the course, you'll be able to create any professional web application in full detail using MVC and Entity Framework.
- You want to learn C# OOP, MS SQL, and MVC
- A Windows or Mac computer to install the free tools and software required
- Basic C# knowledge
- Visual Studio 2019
- SQL Server Management Studio
- No prior database or SQL experience is required
- There's nothing else! All you need is your computer and the desire to start today.
Who this course is for:
- Anyone who would like to improve their programming abilities
- Anyone who would like to develop object-oriented or Windows-based applications
- Anyone looking to develop software that follows design patterns
- Anyone who wishes to learn the fundamentals of full-stack software development
- Anyone who would like to begin with SQL Server basics
- Anyone who wishes to know how databases work
- Anyone who plans to work with the Microsoft SQL Server database
- Anyone interested in learning MVC
- Anyone interested in developing .NET applications
- Students who wish to create an impressive web project
- Individuals looking to use N-tier architecture in a real project
- Anyone who wants to master the web backend and how to use it in mobile programming
- Implement OOP concepts with C#
- How to apply each subject to actual C# projects
- You'll be able to master programming languages such as Java and Python in a relatively short amount of time
- Classes, objects, fields, properties, methods, and constructors covered in detail
- How to create an impressive project using three-tier architecture and LINQ
- Using N-tier architecture, design patterns, and Entity Framework
- How to create a personnel tracking system with an algorithm
- Begin with the basics and get to know each MS SQL Server topic with examples
- Learn SQL basics using SSMS (SQL Server Management Studio)
- Use SQL commands to sort, filter, and manipulate dates, strings, and numerical data from different sources
- You will also be taught SQL transactions and transaction commands
- How to design your own functions
- Learn MVC through hands-on demonstrations
- Create secure web applications using ASP.NET MVC and C#
- How to apply each subject to real-world projects
- Understanding the MVC architectural pattern
- Using data transfer objects such as ViewBag, ViewData, and TempData
- Learn how to use the database-first approach with Entity Framework
- Develop real-world applications with ASP.NET MVC and Entity Framework
- How to create a practical web application using four-tier architecture and Entity Framework
- How to modify your project's front pages easily
- How to deal with mistakes and errors in your applications
- How to use coding methods for efficient development
- After completing this class, you'll be able to create a professional web application in full detail using MVC and Entity Framework
Download Full Stack Web Development with C# OOP, MS SQL & ASP.NET MVC from below links NOW!
|
OPCFW_CODE
|
Provide the physics engine for Minecraft entities (MIT License).
A player state is an object containing the following properties:
- onGround : (boolean) is the player touching ground?
- isCollidedVertically : (boolean) is the player collided vertically with a solid block?
- isCollidedHorizontally : (boolean) is the player collided horizontally with a solid block?
- isInWater : (boolean) is the player in water?
- isInLava : (boolean) is the player in lava?
- jumpTicks : (integer) number of ticks before the player can auto-jump again
- jumpQueued : (boolean) true if the jump control state was true between the last tick and the current one
- yaw : (float) the yaw angle, in radians, of the player entity
- control : (object) control states vector with properties
floor()'ing the position values will give you the actual block location.
// simulate 1 tick of player physics, then apply the result to the player
The new physics engine fixes blocks falling through the ground on client-side world reload.
Discussion:
- I recently asked in the IRC channel for MCP if somebody had seen a tutorial series online about voxel engines that I vaguely remembered as being something like "How to write your own Minecraft". The simulation runs in discrete ticks, which are then interpolated into smooth motion. Unlike classic 2D examples, it is in 3D, and the environments are generally expected to be much more dynamic, with interactive lighting, physics, and so on.
- Minecraft is a hybrid voxel engine: each block does not store its own coordinates (making it a voxel engine), but blocks use "states" (water flow, light level, etc.), which voxel engines don't usually have. This is one of my favorite things about Minecraft!
- Also, some people think that our universe is a giant cellular automaton, like Conway's Game of Life and Boulder Dash. As such, it's not "physics" per se, although cellular automata are used as part of many physics simulations. I don't know if you can really call that a "physics engine", though; looking at it as a CA is likely to prove more useful.
- So now I'm wondering: maybe the person who told me that is wrong, and voxels and AABBs are not mutually exclusive?
- All I know is that it uses the Lightweight Java Game Library (LWJGL).
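The per-tick player simulation described above can be sketched as a pure function from one player state to the next. This is only a toy sketch under stated assumptions: the gravity and drag constants, the flat ground plane at y = 0, and the position/velocity field names are illustrative, not the actual engine's API.

```javascript
// Toy sketch of one physics tick for a Minecraft-like entity.
// Constants and field names are illustrative assumptions.
const GRAVITY = 0.08; // downward acceleration per tick
const DRAG = 0.98;    // vertical air drag applied each tick

function simulatePlayerTick(state) {
  // Apply gravity, then drag, to the vertical velocity.
  let vy = (state.velocity.y - GRAVITY) * DRAG;
  let y = state.position.y + vy;

  // Resolve collision against a flat ground plane at y = 0.
  let onGround = false;
  if (y <= 0) {
    y = 0;
    vy = 0;
    onGround = true;
  }

  return {
    position: { ...state.position, y },
    velocity: { ...state.velocity, y: vy },
    onGround,
  };
}

// Example: an entity falling from y = 10 with no initial velocity.
let state = {
  position: { x: 0, y: 10, z: 0 },
  velocity: { x: 0, y: 0, z: 0 },
  onGround: false,
};
for (let i = 0; i < 100; i++) state = simulatePlayerTick(state);
console.log(state.onGround); // true: the entity has landed
```

The real engine would test the entity's AABB against the surrounding voxel grid instead of a flat plane, which is why the collision flags above (vertical/horizontal) exist as separate properties.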
|
OPCFW_CODE
|
We often get asked if applications built with Esri’s ArcGIS APIs can be used with Java(tm) Web Start. The answer is ‘yes’, but people still experience a “trial and error” process, unfortunately. In a sincere effort to try and clarify how to use Java Web Start to deliver your ArcGIS Runtime Java apps, here is a simple deployment scenario that we hope you will find informative.
About Java Web Start
Java Web Start is an application deployment technology that is widely used throughout the Java development landscape. Rich, standalone Java applications can be deployed with a single click over the network using Java Web Start Technology. Java Web Start ensures the most current version of the application will be deployed, as well as the correct version of the Java Runtime Environment (JRE). For more detailed information about how it works and the Java Network Launching Protocol (JNLP) specification, go to http://www.oracle.com/technetwork/java/javase/tech/index-jsp-136112.html
A Simple Example
I developed a simple ArcGIS Runtime Java application that is to always be connected. In other words, it will not use a “local” runtime and will never be disconnected from the internet in any way. Therefore, all of the business logic for the application will be running in Java, accessing a Web Map from ArcGIS Online, and the application is made up solely of a handful of jar files.
I found a machine to act as my target for delivering my app to and installed the Java SE Runtime Environment v7.x.
For my development environment, I started out with the code in Netbeans 7.2. I configured my project to use Web Start as the start-up mechanism by simply right-clicking on the project and selecting “Set Configuration–>Web Start”. By the way, Netbeans is not a requirement for using Java Web Start. I chose this approach because of Netbeans’ built-in tooling for setting up your Java Web Start environment automatically. In other words, I’m lazy!
Now, I was able to run my application from within Netbeans, which performed the needed tasks for making a Java Web Start deployment:
- generated my JNLP file
- compiled my code into an executable jar
- signed the jar and all dependent jars
- copied everything into a single location for distribution.
The jar called “WebStartTest.jar” contains all of my application logic. The “lib” folder contains all of the application-dependent jars, including the ArcGIS Runtime jars. All of the jars are digitally signed, a requirement for Java Web Start to securely execute Java classes on the target machine.
The “launch.jnlp” file is what launches the Java app from a Web URI, for example, “http://mywebserver/weatherapp/launch.jnlp”. It tells the Web Server to grab the specified app jars and stream them to the client machine so they run locally. The JNLP file is simply an XML file, used to provide the Web Server and the target machine’s Java Runtime Environment with the instructions on how to deliver the app to the target and execute the app. I recommend spending a little time reviewing the JNLP specification for an explanation of the tags.
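To make the shape of a JNLP file concrete, here is a minimal example for this kind of deployment. The title, vendor, jar names, and main class are placeholders, not the actual files from the project above; only the overall structure follows the JNLP specification:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://mywebserver/weatherapp/" href="launch.jnlp">
  <information>
    <title>WeatherApp</title>
    <vendor>Example Vendor</vendor>
  </information>
  <!-- Signed jars may request full access to the client machine -->
  <security>
    <all-permissions/>
  </security>
  <resources>
    <!-- Request a compatible JRE and list the application jars -->
    <j2se version="1.7+"/>
    <jar href="WebStartTest.jar" main="true"/>
    <jar href="lib/some-dependency.jar"/>
  </resources>
  <application-desc main-class="com.example.weatherapp.Main"/>
</jnlp>
```

The `resources` section is what tells Java Web Start which jars to stream to the client, and `j2se` declares which JRE versions the application accepts.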
The “launch.html” file is used as a launch site, which redirects to the launch.jnlp file.
After I verified that my application ran properly from within Netbeans using Java Web Start, I copied the generated “launch” JNLP, jars, and the “launch” html file from my Netbeans project “dist” folder into a folder on my Web Server. I named the folder “weatherapp”, the name of my application.
At this point, I should have been ready to launch my app by accessing the launch.html file over HTTP. However, there was one more vital step to take. I needed to edit the JNLP file (now located on the Web Server) to insert the “codebase” attribute into the <jnlp> tag, telling Java Web Start where to find the jars on the Web Server to be delivered:
<jnlp codebase="http://mywebserver/weatherapp/" href="launch.jnlp" spec="1.0+" >
That was it for the setup. In a Web browser of my choice I navigated to http://mywebserver/weatherapp/launch.html.
Clicking on the launch button on this page allowed the app to stream to the client machine and execute.
Your Java applications developed with the ArcGIS Runtime SDK for Java can easily be delivered to your users via Java Web Start Technology. Netbeans is a nice platform to use for jump-starting a Web Start deployment. It is capable of handling the jar-signing and JNLP file creation tasks.
|
OPCFW_CODE
|
HP LaserJet 5000dn Toner Cartridges
The following product is guaranteed to work in your HP LaserJet 5000dn printer:
Black toner cartridges
HP LaserJet 5000dn
Compatible HP 29X Black Toner Cartridge - (C4129X)
Pack of 1 cartridge
0.4p per page
FREE next-day delivery
- Buy more, save money
- 2 £45.20 inc VAT
- 3+ £43.37 inc VAT
Other HP LaserJet 5000 printers
What toner does the HP LaserJet 5000dn use?
The HP LaserJet 5000dn uses Cartridge Save 29X toner cartridges. Cartridge Save 29X toner comes in black; the black cartridge prints 10,000 pages.
HP LaserJet 5000dn Printer Review
Expert review of the HP LaserJet 5000dn printer
The HP LaserJet 5000DN is a large-format printer with networking connectivity. This printer is ideal for offices as it delivers crisp text documents and excellent greyscale images plus offers an HP Jetdirect print server card. The HP LaserJet 5000DN has a monthly duty cycle of 65,000 pages and even comes with the convenience of a duplexer. Print speed is slow, and the HP LaserJet 5000DN monochrome laser printer is a bit outdated by modern standards.
The small control panel on the left-hand side of the LaserJet 5000DN printer consists of a monochrome LCD display and function buttons. The HP LaserJet 5000DN printer has a 36 MB memory and comes with a 100-sheet multipurpose tray, 500 sheet tray, and 250 sheet tray. For connectivity, a parallel port is available, and the HP 5000DN LaserJet also has a print server card for networking.
The HP LaserJet 5000DN mono laser printer is easy to use and install and boasts versatile media handling capabilities. This model is compatible with Macintosh and Windows computers and is network-ready. The HP LaserJet 5000DN laser printer can help reduce paper waste in the office as it has an automatic duplexer. Furthermore, the accuracy of LaserJet 5000DN toners with HP technology further reduces any paper wastage from reprints and the 10,000 page black cartridges are very economical.
When compared to newer laser printer models, the HP LaserJet 5000DN is a bit dated, as it is quite large and bulky. Print speed is slow. The HP 5000DN printer could be better if it had built-in Wi-Fi.
|
OPCFW_CODE
|
Tolerate unexpanded wildcards in items in Razor language service
This was reported as https://developercommunity.visualstudio.com/content/problem/564524/vs-2019-netcore-project-path-too-long-exception-re.html
Describe the bug
microsoft/msbuild#406 means that some wildcard patterns can fail to expand and then be represented as literal * in items. Ideally that would be fixed at the MSBuild level, but a) fully supporting long paths in Visual Studio is under consideration but very difficult and b) the "fix" might be failure to load the project when wildcards fail to expand, which might be worse than the current behavior.
In addition, it would be legal at the MSBuild level for a user to put escaped wildcards into an item include (items aren't necessarily files).
If an item with ** in it is loaded in the Razor language service, it throws
===================
11/26/2019 9:52:04 AM
Recoverable
System.AggregateException: One or more errors occurred. ---> System.ArgumentException: Illegal characters in path.
at System.Security.Permissions.FileIOPermission.EmulateFileIOPermissionChecks(String fullPath)
at System.Security.Permissions.FileIOPermission.QuickDemand(FileIOPermissionAccess access, String fullPath, Boolean checkForDuplicates, Boolean needFullPath)
at System.IO.FileInfo.Init(String fileName, Boolean checkHost)
at System.IO.FileInfo..ctor(String fileName)
at Microsoft.AspNetCore.Razor.Language.DefaultRazorProjectFileSystem.GetItem(String path, String fileKind)
at Microsoft.CodeAnalysis.Razor.ProjectSystem.ProjectState.GetImportDocumentTargetPaths(HostDocument hostDocument)
at Microsoft.CodeAnalysis.Razor.ProjectSystem.ProjectState.WithAddedHostDocument(HostDocument hostDocument, Func`1 loader)
at Microsoft.CodeAnalysis.Razor.ProjectSystem.DefaultProjectSnapshotManager.DocumentAdded(HostProject hostProject, HostDocument document, TextLoader textLoader)
at Microsoft.CodeAnalysis.Razor.ProjectSystem.RazorProjectHostBase.AddDocumentUnsafe(HostDocument document)
at Microsoft.CodeAnalysis.Razor.ProjectSystem.DefaultRazorProjectHost.<>c__DisplayClass7_1.<OnProjectChanged>b__2()
at Microsoft.CodeAnalysis.Razor.ProjectSystem.RazorProjectHostBase.<UpdateAsync>d__17.MoveNext()
To Reproduce
Create a very-deeply-nested folder structure with a file or two in it under an ASP.NET project that is not excluded (so not just $(SPARoot)node_modules). Open the project in VS.
Note: this should not happen if the files with long paths are excluded, so the default dotnet new angular experience with its nested node_modules should not experience this.
Further technical details
VS 16.3.10 with .NET Core SDK 3.0.100.
Thanks @rainersigwald.
@NTaylorMullen can you please look into this? Thanks!
@rainersigwald how does anything work in Visual Studio when file paths get too long? I imagine C# or C++ etc. wouldn't understand ** as part of file paths and also end up exploding.
I understand how this issue is on the MSBuild backlog but doing this work on our end would be quite large and potentially throw away given the amount of places we read/look at file paths.
Things generally fall apart if important project files exceed MAX_PATH. In the case in the original bug, the files in question generally weren't important to design-time scenarios, so failing to load the project because they were globbed in was a bigger problem: if the failed-glob files just failed to expand, the project would be mostly usable, but instead it just blew up.
The ideal thing here would be to manifest devenv.exe as supporting long paths--that's what we did for MSBuild.exe. That's tracked by a feedback item I linked but is a fairly scary change since it affects all VS extensions and workloads and it requires a Windows-level opt-in. If Razor uses any external helper processes that are running on full .NET Framework (.NET Core already does reasonable things) it might be worth considering manifesting them.
Since NPM no longer creates super-deeply-nested folders by default (they moved to flattening the dependency tree years ago) I wouldn't object to won't fixing this or duplicating this against https://developercommunity.visualstudio.com/idea/351628/allow-building-running-and-debugging-a-net-applica.html.
Since NPM no longer creates super-deeply-nested folders by default (they moved to flattening the dependency tree years ago) I wouldn't object to won't fixing this or duplicating this against https://developercommunity.visualstudio.com/idea/351628/allow-building-running-and-debugging-a-net-applica.html.
Love it!
Closing as external: https://developercommunity.visualstudio.com/idea/351628/allow-building-running-and-debugging-a-net-applica.html
|
GITHUB_ARCHIVE
|
M: Smithsonian Releases Apollo 11 Command Module High Resolution Scans - iamjeff
http://3d.si.edu/tour-browser
R: zokier
Well, sadly the 3d player thingy doesn't look particularly high-res; most
switch labels etc are completely illegible.
R: alain94040
If you read the explanation, the 3D player is low-res but they let you
download the 2GB source data if you want to really play with it.
R: starseeker
This is cool, but (as usual with the Smithsonian) they claim copyright and/or
commercial usage restrictions on the downloadable data. I wish they wouldn't
do that...
R: whoopdedo
Shouldn't the copyright be owned by NASA though, which is required to release
everything freely.
R: whoopdedo
Nevermind. This was created entirely by the Smithsonian.
R: beefman
Better link: [http://3d.si.edu/apollo11cm/](http://3d.si.edu/apollo11cm/)
R: mikestew
I don't know what else to say other than it's well-done and extremely cool.
A+++, would click again. Man, not a square or cubic inch of that module went
to waste.
That said, the page does frequently reload "due to a problem" on my iPad Air
2. Haven't tried on a non-mobile device yet.
R: sytse
It hangs for me in Chrome and works in Safari on macOS Sierra. It uses all my
CPU and is initially very unresponsive. But after loading, clicking the steps
(play icon) works well. Cool to see all the control panels and subsystems.
R: ljf
Interesting how browsers react to it, on my phone (xiaomi mi note pro) in
chrome it's stunning and smooth.
R: sytse
Interesting indeed, thanks for posting.
R: totalZero
I'm overwhelmed by the plethora of switches and dials. Hard to believe that
any one person really understood how every last part of the machine worked.
Also..... landing page. Lol.
R: dudeget
wow incredible! i hope they do the LM next!
R: hugs
Scanning the Lunar Module would be _slightly_ more difficult.
The lower half of Apollo's 11 LM (the "descent stage") was left on the surface
of the moon at the landing site, and the upper half (aka "ascent stage", which
rendezvoused with the Command Module) was jettisoned and left to crash back on
the surface of the moon. The exact current location of the ascent stage,
however, is officially "unknown".
Source:
[http://nssdc.gsfc.nasa.gov/planetary/lunar/apolloloc.html](http://nssdc.gsfc.nasa.gov/planetary/lunar/apolloloc.html)
R: willglynn
Note that the lunar module currently on display in the Smithsonian (LM-2, an
Earth-bound test article) was in fact reconstructed to match the _Eagle_
(LM-5, used on Apollo 11) as closely as possible. It's not the original -- as
you say, that would be difficult -- but I'd still be happy to see a scan of
the LM sitting in their lobby.
[https://airandspace.si.edu/stories/editorial/curator's-dilemma-displaying-lunar-module](https://airandspace.si.edu/stories/editorial/curator's-dilemma-displaying-lunar-module)
R: hugs
Yes, a scan of LM-2 is better than nothing.
R: mrfusion
How to get these into the vive?
R: hogrammer
This is beautiful, but there's a more direct link here:
[http://3d.si.edu/apollo11cm/index.php](http://3d.si.edu/apollo11cm/index.php)
|
HACKER_NEWS
|
Ricci flow - MA607
University of Warwick - Spring 2007
We will take a look at the Ricci flow -- introduced in 1982
by Hamilton, following work of Eells and Sampson -- which
deforms a Riemannian metric g in terms of its Ricci
curvature Ric(g) according to the PDE

    ∂g/∂t = -2 Ric(g).

Often, this deforms an arbitrary metric to a
canonical metric. Hamilton's original application was to take
an arbitrary closed 3-manifold with
positive Ricci curvature, and
show that the (renormalised) flow deforms it to a spherical space form.
In particular, a simply connected closed 3-manifold
with positive Ricci curvature must be the 3-sphere.
In the twenty years following its introduction, the Ricci flow was
steadily developed, largely by Hamilton and his school, partly with a view to
proving Thurston's geometrization conjecture (which includes
the Poincaré conjecture).
Starting a few years ago, Perelman released a series of papers
which culminated with a claim of the Poincaré conjecture using the Ricci flow.
This course will cover some of the techniques along these lines
which are required to give such a proof.
More precisely, we hope to cover many of the key ideas used
to study singularity development in smooth flows.
The end of the course will
include an advanced lecture by Bruce Kleiner (Yale) on the 'surgery' argument
required to complete the proof. (Tuesday 27 March 2007.)
Outline of course
- Ricci flow introduction, and the strategy of the proof of the Poincaré conjecture.
- Ricci flow background material, and work of Hamilton: short time existence; evolution of some geometric quantities; maximum principle techniques; Harnack estimates; Hamilton-Ivey pinching; strong maximum principle techniques.
- Perelman's L-length and reduced volume.
- The "weak" no local collapsing result of Perelman, and applications to blowing up.
- Kappa-solutions. Structure and classification.
- Perelman's Canonical Neighbourhood Theorem. Understanding the structure of Ricci flows near points of large curvature.
The absolute prerequisite is a knowledge of Riemannian
geometry -- at least "Differential geometry" MA4C0
from term 1. There are many other ingredients required to
understand the course fully - in particular a knowledge of
PDE theory - but we will try to lighten these requirements as
much as possible. It should be possible to follow
the course concurrently with "Advanced PDE" MA4A2.
Tuesday 11:00, B3.02
Wednesday 10:00, B3.02
Thursday MOVED TO: 12:00, B3.01
All lectures in the Mathematics Institute.
First lecture: Tuesday 9th January 2007
There will be some overlap with my previous lecture notes (although this
course should be heavily adapted to the techniques required
for the Poincaré conjecture):
Lectures on the Ricci flow
LMS lecture notes series vol 325, CUP (2006).
To help with the Riemannian geometry prerequisites:
Riemannian Manifolds: An Introduction to Curvature
(Graduate Texts in Mathematics)
John M. Lee
Gallot, Hulin, Lafontaine
Arthur L. Besse
To help with the PDE theory (as indicated during the course):
Partial differential equations
L. C. Evans
If you have never done a first course in PDE theory (eg our
third year undergrad course "Theory of PDE") then you must
look up the basic theory of the heat equation in any basic
PDE book (or chapter 2 of Evans' book above).
A minimum requirement is to digest the "maximum principle" for
this equation. You must understand the basics of solving the
equation (forwards in time) with given initial conditions.
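For reference (standard statement, not from the course page itself): the weak maximum principle for the heat equation $u_t = \Delta u$ on a bounded domain $\Omega$ says that the maximum over space-time is attained on the parabolic boundary, i.e. at the initial time or on the spatial boundary:

$$\max_{\overline{\Omega} \times [0,T]} u = \max_{(\overline{\Omega} \times \{0\}) \cup (\partial\Omega \times [0,T])} u$$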
Other books we'll refer to:
Collected papers on Ricci flow
Eds: Cao, Chu, Chow, Yau
This collects together some of the main papers which have been written
on Ricci flow, with corrections in footnotes.
This is a pre-Perelman publication, and so will only help with
elements of the course.
Recent texts elaborating on and correcting Perelman's work:
Notes on Perelman's Papers, by Bruce Kleiner and John Lott,
version of May 25, 2006.
A Complete Proof of the Poincaré and Geometrization Conjectures -
application of the Hamilton-Perelman theory of the Ricci flow,
by Huai-Dong Cao and Xi-Ping Zhu, Asian Journal of Mathematics, June 2006.
Ricci Flow and the Poincaré Conjecture,
by John Morgan and Gang Tian, July 25, 2006.
Useful link - to Ricci flow surveys, and other commentary on Perelman's work:
This site includes links to many relevant papers and sets of notes.
|
OPCFW_CODE
|
What I Do
- Export a blog from WordPress.com, getting a WXR file.
- Import that WXR into a self-hosted WordPress 3.0.1 install.
- All categories, tags, and links have disappeared from the frontend.
- A few categories appear in the backend (either in the post editing interface or in the categories menu), but they aren't assigned to any post.
- Tags appear in the Post Tags menu, but they aren't assigned to any post.
- Links don't appear anywhere.
- Most items in my term_taxonomy table show a count of zero (though I didn't have empty categories or tags on the original blog).
- The WXR file does include the relevant tags and categories at the end of each post.
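To double-check that last point, the per-post term assignments can be counted straight from the WXR with a short script (a diagnostic sketch, not part of WordPress; the filename below is a placeholder):

```python
import xml.etree.ElementTree as ET

# In a WXR export, each post's terms appear as children of <item>, e.g.
# <category domain="category" ...> and <category domain="post_tag" ...>.
def count_terms(path):
    counts = {"category": 0, "post_tag": 0}
    for _, elem in ET.iterparse(path):
        if elem.tag == "category" and elem.get("domain") in counts:
            counts[elem.get("domain")] += 1
    return counts

# e.g. count_terms("export.xml")
```

If both counts come back non-zero, the assignments really are in the file, and the loss happens on import.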
What I've Tried
- Went through the same import procedure with a total of three WordPress.com blogs (enkerli, informalethnographer, and selfdeprecating), imported on two different hosts, Bluehost and iWeb. This was done with WordPress installed either manually or through cPanel. (Also tried on three different domain names.)
- Disabled all plugins and used the default theme (Twenty Ten).
- Exported the SQL and imported it in a fresh install after dropping the new tables.
- Moved a multisite blog including the problematic one from one host to another by importing .sql files and wp-content.
- Repaired all tables.
- Added categories and tags to "Hello World," per this advice.
- Tried the cleanup script found here.
- Looked at posts on both WordPress.com and WordPress.org forums (it seems to be more of an issue with self-hosted WordPress).
- Posted on the WordPress Multisite forum.
- My main blog (enkerli.wordpress.com) is the first one I've imported, on an existing WordPress install on Bluehost. As far as I remember, selfdeprecating is the first one I imported on a new WordPress install on iWeb.
- My main blog (enkerli) has a very large number of categories and tags but the other two (selfdeprecating and informalethnographer) don't.
- In case it's not obvious: all the attempts with self-hosted WordPress are with 3.0.1. The first one was with a multisite install which had been created as 3.0 a few months ago and updated automatically.
- Both of my webhosts are shared and I'd be very surprised if they happened to run into MySQL problems at the same time.
- I'm no expert in any of this. For instance, I can run SQL queries in phpMyAdmin or apply some simple operations to tables, but I have a hard time understanding what might be happening with those tables upon import.
- Apart from tags, links, and categories, it seems that the rest of the content has been imported flawlessly in all cases, including media attachments.
Thoughts? Anybody with a similar issue? Anybody tried importing a WXR from WordPress.com recently?
Any help would be appreciated.
|
OPCFW_CODE
|
How to Make Windows 10 Look Like Windows 7
Not everyone likes the look of Windows 10; some still prefer the look of Windows 7. So in this video I will show you how to make Windows 10 look like Windows 7 in a couple of simple steps.
Remember to always backup your data before working on your computer and also create a system restore point.
GET FILES HERE!!
——————— My Social Links:
🔵 View My Channel – http://youtube.com/Britec09
🔵 View My Playlists –https://www.youtube.com/user/Britec09/playlists
🔵 Follow on Twitter – http://twitter.com/Britec09
🔵 Follow on Facebook: http://facebook.com/BritecComputers
🔵 View my Website: http://BritecComputers.co.uk
🔵 My Official Email: email@example.com
This Post Has 45 Comments
Why would you want to change it back?
Who's trying on a VM
windows 10 is much nicer in my opinion.
In another universe….
H O W T O M A K E W I N D O W S 7 L OO K L I K E W I N D O W S 10
What do I do if I didn’t activate Windows?
Dowangrades…… people downgrades
So awesome I love it
Windows 10, has a lot of Windows 7 embedded within it. The big question is how to resurrect it from the depths of Windows 10, so you have the advantages of both Operating Systems in one. Microsoft could have easily given Window 7 users the option of a Windows 7 UFI for Windows 10.
Windows 7 look alik is better than Win10
Bonus for windows 7:
Install all windows 7 sound effects
Is it ok to use this on a real pc not a virtual
Its for windows 7 fan .
the website looks like fake
Thanks this helped
the UIs on win10 is like shet, win7 has far better UIs and it just got thrown to the bin
How do you change it back because I kinda prefer the Windows 10
Is it legal?
I know this video is 3 years old, but just so everyone knows, Classic Shell is now Open Shell and can still be downloaded and updated.
Happy days thanks a mill
Awesome. So glad it got rid of the tiles in the start menu.
bruh I dont have buttons file
remember to create a restore point
@Britec09 pls tell me how to revert it pls say it fast
I want you to do it for android
i remember i had like a silver task bar on windows 7 tryin to find that lol
My issue is the UGLY title bars on Windows. How can I make them look like Windows 7? I thought about getting Windows blinds but there site is too sus.
A group of people took Classic Shell and made a "reboot" of it called Open Shell which acts exactly the same as Classic shell but there is more features that you can mess around with.
the question is : how can i make my Windows 7 look like Windows 7 ;-;
now make the oppsite
Thank you, Microsoft. It has almost become impossible to change the background, let alone anything else
or you can put back the window button and then right click on it then click exit
thank you subbed 🙂
I’m Not Allowed Start taskbar and action centre
A PC guy is building me a Ryzen PC sometime in the next week, installing (by my request) Windows 10,and the first thing I will do after setting it up is to run your video again and make it look like Windows 7. Great video, very easy to understand, thanks!
tema de win11 para win 10
May I ask about the taskbar customization? It will look a lot more like Windows 7.
No need to look like windows 7, windows 10 is just making consumers to pay for expensive compatible software! Because all your software will not be compatible! You just like buying new computer if you upgrade to windows 10.
Dude the start image is wrong you only can choose the default
Also not look like windows 7
|
OPCFW_CODE
|
What is the general limit theorem?
There are simple limit theorems like http://archives.math.utk.edu/visual.calculus/1/limits.18/
But they are just special cases. I am quite sure there is an established general result for them.
In other words,
for what conditions on h does
$$\lim_{x\rightarrow a}h(f(x),g(x))=h(\lim_{x\rightarrow a}f(x),\lim_{x\rightarrow a}g(x))$$
hold?
For what conditions on h does
$$\lim_{x\rightarrow a}h(f(x))=h(\lim_{x\rightarrow a}f(x))$$
hold?
==================================
mixedmath's answer: we have
$$\lim_{x\rightarrow a}h(f(x),g(x))=h(\lim_{x\rightarrow a}f(x),\lim_{x\rightarrow a}g(x))$$
if $h$ is continuous,
and we also have
$$\lim_{x\rightarrow a}h(f(x))=h(\lim_{x\rightarrow a}f(x))$$
if $h$ is continuous.
This is a definition of continuity. See for example Wikipedia's sequence definition of continuity.
The second case is easier to talk about.
Supposing that $f(x) \to f(a)$, then you can think of any sequence of terms $x_n \to a$ and think of $f_n:= f(x_n)$ as a sequence of terms that goes to $f_a = f(a)$, where I'm using subscripts to emphasize that I'm thinking of these as just numbers and not results of a function. Then the statement that $\lim_{n \to \infty} h(f_n) = h(f_a)$ for any sequence $f_n \to f_a$ is exactly the statement that $h$ is continuous at $f_a$.
Similarly for your first question.
So you are correct to think that everything in your linked page falls under a larger umbrella. For example, the fact that the addition function $p(x,y) = x+y$ is continuous (which is easy to show, and a reasonable and approachable exercise if not immediately obvious) gives us that $\lim_{x \to a} p(f(x), g(x)) = p(\lim f(x), \lim g(x))$. Similarly for subtraction, multiplication, etc.
And now you might ask: is there a general result on why functions like addition, subtraction, and multiplication are continuous? You might realize that each of these is a polynomial of the inputs, and all polynomials are continuous [although this is typically shown by first knowing that the sum of continuous functions is continuous, which is circling around being circular]. Your link also uses that exponentiation and taking roots are continuous functions, which don't umbrella as nicely.
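As a purely numerical illustration (not a proof) of the first identity, take the continuous function $h(u,v) = u+v$ together with $f(x) = \sin(x)/x \to 1$ and $g(x) = (1+x)^{1/x} \to e$ as $x \to 0$:

```python
import math

def f(x):
    return math.sin(x) / x          # -> 1 as x -> 0

def g(x):
    return (1 + x) ** (1 / x)       # -> e as x -> 0

def h(u, v):
    return u + v                    # continuous everywhere

# h(f(x), g(x)) should approach h(1, e) = 1 + e as x -> 0
values = [h(f(10 ** -k), g(10 ** -k)) for k in range(1, 7)]
```

The successive values get closer and closer to $1 + e$, exactly as the continuity of $h$ predicts.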
Thank you very much for your kind and detailed answer. I edited my question to clarify what you mean. Thank you again :)
Hi, mixedmath. Am I interpreting your argument correctly? So the only condition required for h is continuity?
Yes. Not only is that the only condition, it is completely equivalent to being continuous.
|
STACK_EXCHANGE
|
2009 Stelson Lecture - Thomas Hou
Professor Thomas Hou is the Charles Lee Powell Professor of applied and computational mathematics at the California Institute of Technology. He is also the director of the Center for Integrative Multiscale Modeling and Simulation at Caltech. He is the recipient of many honors and awards, including a Sloan fellowship, the Feng Kang Prize in Scientific Computing, and the James H. Wilkinson Prize in Numerical Analysis and Scientific Computing. Hou does research in many areas of applied mathematics, including numerical analysis, large scale computations, and multi-scale phenomena. He has published over 85 scientific papers and is the founding Editor-in-Chief of the SIAM interdisciplinary journal Multiscale Modeling and Simulation. Hou has presented his work at numerous conferences, including as a plenary speaker at both the International Congress of Mathematicians and the International Congress on Industrial and Applied Mathematics.
Many problems of fundamental and practical importance contain multiple scale solutions. Composite and nano materials, flow and transport in heterogeneous porous media, and turbulent flow are examples of this type. Direct numerical simulations of these multiscale problems are extremely difficult due to the wide range of length scales in the underlying physical problems. Direct numerical simulations using a fine grid are very expensive. Developing effective multiscale methods that can accurately capture the large-scale solution on a coarse grid has become essential in many engineering applications. In this talk, I will use two examples to illustrate how multiscale mathematical analysis can impact engineering applications. The first example is the development of multiscale computational methods to upscale multi-phase flows in strongly heterogeneous porous media. Multi-phase flows arise in many applications, including petroleum engineering, contaminant transport, and fluid dynamics. Multiscale computational methods guided by multiscale analysis have already been adopted by industry in their flow simulators. In the second example, we will show how to develop a systematic multiscale analysis for incompressible flows in three space dimensions. Deriving a reliable turbulence model has a significant impact on many engineering applications, including aircraft design. This is known to be an extremely challenging problem. So far, most existing turbulence models are based on heuristic closure assumptions and involve unknown parameters which need to be fitted to experimental data. We will show how multiscale analysis can be used to develop a systematic multiscale method that does not involve any closure assumption and has no adjustable parameters.
Blow-up or No Blow-up? The Interplay Between Analysis and Computation in the Millennium Problem on Navier-Stokes equations
Whether the 3D incompressible Navier-Stokes equations can develop a finite time singularity from smooth initial data is one of the seven Millennium Problems posed by the Clay Mathematics Institute. We review some recent theoretical and computational studies of the 3D Euler equations which show that there is a subtle dynamic depletion of nonlinear vortex stretching due to local geometric regularity of vortex filaments. The local geometric regularity of vortex filaments can lead to tremendous cancellation of nonlinear vortex stretching. This is also confirmed by our large scale computations for some of the most well-known blow-up candidates. We also investigate the stabilizing effect of convection in the 3D incompressible Euler and Navier-Stokes equations. The convection term is the main source of nonlinearity for these equations. It is often considered destabilizing, although it conserves energy due to the incompressibility condition. Here we reveal a surprising nonlinear stabilizing effect that the convection term plays in regularizing the solution. Finally, we present a new class of solutions for the 3D Euler and Navier-Stokes equations which exhibit a very interesting dynamic growth property. By exploiting the special structure of the solution and the cancellation between the convection term and the vortex stretching term, we prove nonlinear stability and the global regularity of this class of solutions.
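For reference (standard notation, supplied here rather than taken from the abstract), the 3D incompressible Navier-Stokes equations are

$$\partial_t u + (u \cdot \nabla) u = -\nabla p + \nu \Delta u, \qquad \nabla \cdot u = 0,$$

where $u$ is the velocity field, $p$ the pressure, and $\nu > 0$ the viscosity; the Euler equations are the inviscid case $\nu = 0$, and $(u \cdot \nabla)u$ is the convection term discussed above.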
|
OPCFW_CODE
|
import os
from typing import Callable, Iterable, List, Union, Tuple
import numpy as np
import tensorflow as tf
from tensorflow.contrib.predictor import from_saved_model
from tensorflow.io import gfile
from tfnlp.cli.formatters import get_formatter
from tfnlp.cli.parsers import get_parser
from tfnlp.common import constants
from tfnlp.common.config import get_network_config
from tfnlp.common.utils import read_json, binary_np_array_to_unicode
from tfnlp.feature import get_feature_extractor
class Predictor(object):
"""
General entry point for making predictions using saved models.
:param predictor: TF Predictor
:param parser_function: given a text input, produces a list of inputs for features
:param feature_function: feature extractor, which converts raw inputs to inputs for prediction
:param formatter: formatter for predictor output -- takes inputs from parser function and predictor output
:param batcher: batching function, specifying logic for calling predictor
"""
def __init__(self, predictor: Callable[[Iterable[str]], dict], parser_function: Callable[[str], Iterable[dict]],
feature_function: Callable[[dict], str], formatter: Callable[[object, dict], str],
batcher: Callable[[Iterable[dict], Callable[[Iterable[dict]], dict]], Iterable[Tuple[dict, dict]]]) -> None:
super().__init__()
def _predictor_from_dict(raw_feats: Iterable[dict]) -> dict:
processed = [feature_function(raw) for raw in raw_feats]
return predictor(processed)
self._predictor = _predictor_from_dict
self._parser_function = parser_function
self._formatter = formatter
self._batcher = batcher
def predict(self, text, formatted=True) -> Union[List[str], List[dict]]:
"""
Predict from raw text, applying a parsing function to generate an input dictionary for each instance found.
:param text: raw, un-tokenized text
:param formatted: if True, return a textual representation of output
:return: return a list of results for instances found in text
"""
inputs = self._parser_function(text)
return self.predict_parsed(inputs, formatted)
def predict_parsed(self, inputs: Iterable[dict], formatted: bool = True) -> Union[List[str], Iterable[dict]]:
"""
Predict from a list of pre-parsed instance dictionaries.
:param inputs: input dictionaries
:param formatted: if True, return a textual representation of output
:return: return a list of results corresponding to each input instance dictionary
"""
for processed_input, prediction in self._batcher(inputs, self._predictor):
if not formatted:
yield prediction
else:
yield self._formatter(prediction, processed_input)
def default_batching_function(batch_size: int) -> Callable[[Iterable[dict], Callable[[Iterable[dict]], dict]],
Iterable[Tuple[dict, dict]]]:
"""
Returns a function that performs batching over a list of serialized examples, and returns a list of resulting dictionaries.
:param batch_size: maximum batch size to use (limited by input Predictor's max batch size)
"""
def _batch_fn(examples: Iterable[dict], predictor: Callable[[Iterable[dict]], dict]) -> Iterable[Tuple[dict, dict]]:
curr_batch = []
def _single_batch(_batch):
result = predictor(_batch)
for idx in range(len(_batch)):
single_result = {}
for key, val in result.items():
value = val[idx]
if isinstance(value, np.ndarray) and not np.issubdtype(value.dtype, np.number) and len(value.shape) > 0:
value = binary_np_array_to_unicode(value)
single_result[key] = value
yield _batch[idx], single_result
for example in examples:
if len(curr_batch) == batch_size:
yield from _single_batch(curr_batch)
curr_batch = []
curr_batch.append(example)
if len(curr_batch) > 0:
yield from _single_batch(curr_batch)
return _batch_fn
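The batching contract above can be exercised without TensorFlow; the following is a condensed, self-contained sketch of the same loop driven by a dummy predictor (all names here are illustrative, not part of tfnlp):

```python
from typing import Callable, Dict, Iterable, List, Tuple

def batch_fn(examples: Iterable[dict],
             predictor: Callable[[List[dict]], Dict[str, list]],
             batch_size: int = 2) -> Iterable[Tuple[dict, dict]]:
    curr: List[dict] = []

    def flush(batch):
        result = predictor(batch)
        for i in range(len(batch)):
            # pair each input with its per-example slice of the batched output
            yield batch[i], {key: val[i] for key, val in result.items()}

    for example in examples:
        if len(curr) == batch_size:
            yield from flush(curr)
            curr = []
        curr.append(example)
    if curr:
        yield from flush(curr)

# dummy predictor: returns the character length of each example's text
dummy = lambda batch: {"len": [len(x["text"]) for x in batch]}
pairs = list(batch_fn([{"text": "ab"}, {"text": "cde"}, {"text": "f"}], dummy))
```

Each yielded pair couples a raw input dict with its single-example result, which is the shape `predict_parsed` consumes.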
def from_job_dir(job_dir: str) -> Predictor:
"""
Initialize a predictor from the output directory of a trainer.
:param job_dir: output directory of trainer
:return: initialized predictor
"""
path_to_savedmodel = get_latest_savedmodel_from_jobdir(job_dir)
path_to_vocab = os.path.join(job_dir, constants.VOCAB_PATH)
path_to_config = os.path.join(job_dir, constants.CONFIG_PATH)
return from_config_and_savedmodel(path_to_config, path_to_savedmodel, path_to_vocab)
def get_latest_savedmodel_from_jobdir(job_dir: str) -> str:
"""
Return the latest saved model from a given output directory of a trainer.
:param job_dir: output directory of trainer
"""
export_dir = os.path.join(job_dir, constants.MODEL_PATH, 'export', 'best_exporter')
latest = os.path.join(export_dir, max(
[path for path in gfile.listdir(export_dir) if not path.startswith('temp')]))
return latest
def from_config_and_savedmodel(path_to_config: str, path_to_savedmodel: str, path_to_vocab: str) -> Predictor:
"""
Initialize a savedmodel from a configuration, saved model, and vocabulary.
:param path_to_config: path to trainer configuration
:param path_to_savedmodel: path to TF saved model
:param path_to_vocab: path to vocabulary directory
:return: initialized predictor
"""
config = get_network_config(read_json(path_to_config))
tf.logging.info("Loading predictor from saved model at %s" % path_to_savedmodel)
tf_predictor = _default_predictor(path_to_savedmodel)
parser_function = get_parser(config)
feature_function = _get_feature_function(config.features, config.heads, path_to_vocab)
formatter = get_formatter(config)
return Predictor(tf_predictor, parser_function, feature_function, formatter, default_batching_function(config.batch_size))
def _get_feature_function(feature_config: object, heads_config, path_to_vocab: str) -> Callable[[dict], str]:
feature_extractor = get_feature_extractor(feature_config, heads=heads_config)
feature_extractor.read_vocab(path_to_vocab)
return lambda instance: feature_extractor.extract(instance, train=False).SerializeToString()
def _default_predictor(path_to_savedmodel: str) -> Callable[[Iterable[str]], dict]:
base_predictor = from_saved_model(path_to_savedmodel)
return lambda features: base_predictor({"examples": features})
|
STACK_EDU
|
VMworld 2018 US, taking place August 26 – 30 in Las Vegas, Nevada is quickly approaching and we’re looking forward to sharing five days of innovation, education, and collaboration with our partner community.
This year, partners with a full conference pass have complimentary access to the Partner Forum sessions, as well as perks of the Partner Lounge. Below is a snapshot of what’s included within the Partner Forum, beginning Sunday. For those attending, make sure to add these partner sessions to your conference agenda today.
SUNDAY, August 26
- 2:00 PM – 4:00 PM — Partner Sales Training Sessions [Mandalay Bay Ballrooms B-D, Level 2]
These sales sessions cover key topics to help you sell, grow, and win more business through VMware Cloud on AWS, VMware NSX, and VMware vSAN/HCI solutions. Note, these sessions will also be offered on Thursday morning.
- 4:00 PM – 5:00 PM – Partner General Session [Mandalay Bay Ballroom E, Level 2]
Pat Gelsinger, CEO, Brandon Sweeney, SVP Worldwide Commercial & Channel Sales, Jenni Flinders, Worldwide Channel Chief, and a Special Guest Speaker will share their insights into VMware’s strategy and vision and highlight the cloud and services opportunity with VMware.
- 5:05 – 5:45 PM – Americas Regional Partner Session [Mandalay Bay Ballroom E, Level 2]
Join Frank Rauch, VP of Americas Partner Organization at VMware, as he discusses why we need to think differently to maintain a mindset of accelerated growth. Partners across all regions are welcome.
THURSDAY, August 30
- 10:30 AM – 12:30 PM — Partner Sales Training Sessions [Breakers Rooms, Level 2]
These sales sessions cover key topics to help you sell, grow, and win more business through VMware Cloud on AWS, VMware NSX, and VMware vSAN/HCI solutions.
The VMware Partner Lounge [Located in VMvillage, Bayside C, Level 1] opens Sunday at Noon. It’s the ideal place for you to host a customer meeting, or simply network, relax, refuel, or recharge your devices throughout the week. And, new this year are the partner-only theater sessions, spanning various programmatic topics including a session on Master Services Competencies, a PKS Partner Primer, special Partner Spotlight sessions, multiple VMware Cloud on AWS sessions and more. You do not need to pre-register for any of these sessions. Pro tip: Stop by the lounge on Sunday to see the schedule, or browse sessions online, and add your favorites manually to your calendar so you don’t miss this bonus partner-only content running Monday – Wednesday. Seating will be limited.
Following are additional partner sessions, training, tools, events, and activities being offered throughout VMworld….
Big Savings on Training & Certification for Partners
Develop your skills and build your VMware Master Services Competencies with special discounts on training and certification at VMworld. Sign up for as many as you’d like and easily add them to your conference agenda:
- 25% off exclusive-On Demand courses – including NSX, vSAN, vSphere, and vRealize Automation
- 25% off a VMware Learning Zone (our 24/7 training hub) Premium Subscription
- 50% off VMware Certified Professional (VCP) and VMware Certified Advanced Professional (VCAP) certification exams and practice tests
This year VMworld will be home to Cloud City – a fully-interactive event, taking you on a journey through the ultimate hybrid cloud experience. Our Cloud City representatives will guide you through the features of the multi-cloud solution and show you how it can transform your business. You will have the opportunity to explore the benefits of the cloud, gain invaluable hands-on training, and discover the ideal multi-cloud infrastructure solution for your organization.
VMware Cloud on AWS Updates at VMworld US 2018
Partners are flocking to join the VMware Cloud on AWS partner programs. In just 6 months, we have over 150 channel partners and over 100 validated software solutions available on VMware Cloud on AWS. Find out how VMware is helping our partners build their hybrid cloud business. If you’ve completed the VMware Cloud on AWS Solution competency, get the latest updates on our partner programs. And come learn about the latest features and capabilities available on VMware Cloud on AWS.
VMware’s Assessments and TestDrives
VMware’s Assessments and TestDrives provides systematic processes and sets of tools to approach your customers, evaluate their environment, and assess gaps between where they are and where your customers want to be in a hyper-connected world. Visit the Assessment Lounge in the VMvillage [Bayside C, Level 1] to meet and talk with VMware experts.
Veeam Availability Suite to be Showcased at VMworld 2018
Veeam, a VMware Global Strategic Technology Partner and VMworld Platinum Sponsor, will be showcasing Veeam Availability Suite and how it complements VMware technologies such as vSAN and VMware Cloud on AWS. Click here to view or register for the Veeam sessions and activities available to you throughout the week.
You can download the VMworld 2018 Mobile App from Google Play (Android) or the App Store (iOS) and use it to find all the latest information, update your schedule in real time, and network with other attendees.
Be sure to check back next week for Partner Forum general session highlights and launch announcements.
|
OPCFW_CODE
|
My name is Jason McGee (aka aquietinvestor). I am a long time ZEC holder and active member of the Zcash community. I have a strong background in finance, accounting, and management. I am the former Head of Operations for a well-known hedge fund in NYC where I managed a team responsible for all daily activities related to trading operations and fund accounting.
Working in this capacity, I designed and implemented a framework of policies, procedures, and controls to create efficient organizational workflows. I also served as a project manager overseeing the development and maintenance of technology systems including a firm-wide trade order management system, security master database, and automated portfolio performance/risk management reporting suite. I am very organized and process driven. I believe my experience and skills would add an important and valuable perspective to ZOMG.
I left my job at the end of 2017 to focus on crypto full time. I currently work as a “self-employed trader and investor.” Said differently, I do not have a day job or any other professional obligations. As such, if elected, I am able to dedicate myself as a full-time ZOMG committee member, if needed. I could perhaps be the designated point of contact for all projects.
Within the Zcash community, I am best known for being vocal about the Zcash Foundation. On multiple occasions, I’ve asked the Foundation to operate with increased efficiency and better communication, honor its commitment to transparency and accountability, and better serve Zcash users and the Zcash ecosystem. From what I hear, at least one member of the Foundation considers me the Bill Lumbergh of Zcash (“Yeah…about those quarterly transparency reports…”), which, admittedly, I think is both hilarious and somewhat fitting in this instance.
My interactions with the Foundation, both online and via telephone, have been respectful and constructive. I believe the Foundation generally (more or less) agrees with my assessment; we mainly disagree on what should be prioritized.
To be clear, I am a team player. I work well with others, and as a ZOMG committee member will work well with Foundation. I believe my track record speaks for itself. For example, when Chris Burniske recently raised concerns about how ZOMG is structured, I proposed an amendment to ZIP 1014 that would allow ZOMG to use its own funds to pay committee members a stipend and hire contractors without turning it into an independent operating company. The amendment maintains the integrity of ZIP 1014 and is a win-win for both ZOMG and the Foundation.
I hope you’ll agree, I don’t just talk the talk. I identify real organizational problems and offer realistic, concrete solutions that benefit all parties.
My vision for Zcash:
- The mainstream alternative to BTC. A private store of value and medium of exchange.
- The privacy layer for crypto (i.e. ZSAs/UDAs)
- “Greener than fiat:” ESG is becoming increasingly important in the world. The narrative that proof of work harms the environment will be incredibly difficult to overcome. As such, I would like to see Zcash gradually move to proof of stake.
My vision for ZOMG:
I would like to see the Zcash ecosystem become more decentralized. Currently, Zcash is a 2-of-2 multisig governance model between ECC and the Zcash Foundation. ZCAP exists, but only as an “advisor” to the Foundation, which can vote against ZCAP’s recommendation. We need to do more to continue to diffuse power and become more decentralized.
I am currently conducting an informal poll, and so far the majority of respondents (1) would like to see ZOMG receive more independence and autonomy and (2) would not oppose an amendment to ZIP 1014 to make ZOMG a fully independent third entity. ZIP 1014 represents the community’s sentiment at a particular point in time. It can be changed. I would like to see more serious conversation about amending ZIP 1014 to address the concerns that have been raised by current committee members.
Having said that, if the current and incoming committee members agree to go forward with Dodger’s plan as a near term solution, I will support that. I don’t necessarily believe that the issues raised warrant ZOMG being put on hiatus, but I leave that for the current committee members and the Foundation to decide. Nevertheless, I would like to see ZOMG and the Foundation work together to improve ZOMG and evolve its structure in the future.
- More standardized process development around reviewing and assessing grant proposals
- Increased transparency and accountability for both ZOMG and grantees
- Create an Advisory Board to mentor current committee members comprised of (1) prior committee members and (2) members of similar organizations (e.g. Ethereum, Tezos, and Algorand Foundations).
- Aggressive outreach to encourage smart people to build on Zcash
- More networking with existing crypto relationships to source ideas and talent
- Experiment with Gitcoin, DAOs, etc.
- Actively support projects that bring more users to Zcash
- Actively support projects that add utility to Zcash and value to ZEC
- More focus on non-technical ways to bring more users to Zcash
- Fund the publication of education materials and position papers on relevant topics to educate the public on why they should be using Zcash and how to use Zcash
- Fund the creation of regulatory-friendly marketing campaigns
Two additional projects I think would be valuable:
- Regulatory fellowship: The biggest risk to Zcash is a global regulatory crackdown. I propose creating and funding a regulatory fellowship program (2-3 year terms) focused specifically on Zcash. The fellow could work at an established nonprofit, like Coin Center, and focus on global and domestic regulatory issues. The ideal candidate would be an established attorney, like Jake Chervinsky or Preston Byrne, currently focused on regulatory issues related to crypto.
- Global ambassador program: Zcash is too US-centric. I would like to see ZOMG fund a program that sponsors “ambassadors” in European, Asian, and South American countries to host meetups and other educational events focused on Zcash. Perhaps these ambassadors could be future ZOMG committee members.
What’s impressed me most about Zcash is the team and community. The development team is incredibly smart and passionate, they operate at a higher level of integrity and intellectual honesty, and they are very thoughtful and careful. The community is open and vibrant. When the other folks (i.e. Monero) slander or troll and resort to ad hominem attacks, we take the higher road and focus on the issues. I very much appreciate that and am proud to be a member of this community.
Thank you to everyone who reached out to encourage me to run. I believe I have a lot to contribute to the Zcash community, and I would be honored to be selected as a ZOMG committee member. Or, as Bill Lumbergh might say, “Umm, yeah…if you could just go ahead and vote for me…that’d be great.”
|
OPCFW_CODE
|
Cana is a team of scientists, engineers, and designers, building products to redefine the future of beverages. We aim to inspire a more thoughtful approach to everyday consumption by redesigning how the world’s most popular beverages are created and delivered. If you want to join a passionate team, working on challenging but highly-impactful problems for our planet, we would love to hear from you.
Cana aims to create a workplace where you feel valued and can do your best work. We welcome candidates with backgrounds that are traditionally underrepresented in science and technology and hope you will apply.
About the Role:
As a (Sr.) Electrical Engineer at Cana, you will work with a small and focused cross-functional team as the owner of all aspects of hardware development across the lifetime of a Cana product. You will drive electrical design, prototyping, manufacturing, scaling, quality control and vendor management. You will be responsible for and have key input on:
- Designing PCA/PCBs
- Managing and performing functional subsystem and system-level testing
- Design for functionality, reliability, testability and manufacturability
- Working with Product teams and Technicians
- Communicating with vendors and CMs
- 3-5 years of experience, consumer electronics and electromechanical components experience a plus
- Strong understanding and proficiency with CAD software, preferably Altium Designer
- Experience with DC/DC switching regulators, I2C/SPI, USB and Ethernet comm interfaces, microcontroller implementation, audio, sensors and electromechanical devices (e.g. motors and solenoids)
- Familiarity with typical lab equipment: oscilloscopes, DMMs, power supplies, waveform generators, soldering iron, microscope
- Knowledge - You demonstrate the ability to solve complex problems from first principles and can show others how to do it
- Entrepreneurship - You’re a self-starter who loves to own things end-to-end. You don’t wait for other people to tell you what the problems are; you already have multiple solutions in mind
- Team player - You know how to make those around you better and feed off their energy. You take care of your teammates
- You have a BS or MS in electrical engineering or a related field
Cana offers highly competitive benefits, including medical and dental insurance, paid time off, and a 401(k).
The pay range we reasonably expect to pay for this full-time position is $140,000 - $185,000. Individual compensation and level are based on various factors, including experience, education, skill set, and geographic location. This range is for our Bay Area, California location and may be adjusted to the labor market in other geographic areas. We offer equity in the company, a robust benefits package, and the opportunity to be part of groundbreaking product development.
Legal authorization to work in the U.S. is required. We are not able to sponsor individuals for employment visas for this job.
In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire.
This position is based in our Redwood City, CA location - there is no relocation assistance offered at this time.
|
OPCFW_CODE
|
Do you want to be a full-stack web developer in Lahore? Are you curious about the skills required and the pay? Then you’ve come to the right place. Full-stack web development is an excellent choice if you enjoy coding and are searching for a rewarding career. Indeed, the finest full-stack web development businesses in Pakistan pay full-stack engineers competitive compensation.
However, what exactly is full-stack Web Development in Lahore? What are full-stack developers responsible for?
Let’s find out via this blog.
This blog covers the fundamentals of full-stack web development and the technologies you need to be familiar with before diving into this field.
What exactly is full-stack web development?
Full stack Web Development in Lahore entails creating an online application’s front-end and back-end.
What does this new terminology mean?
- Development of the front-end: The front-end is the visible portion of a website or web application that handles user interactions. Furthermore, web browsers are only used to communicate with the web application’s front end.
- Development of the back-end: The back-end is concerned with how the website functions. It executes queries and returns the desired data.
What Are the Full Stack Web Development Technologies?
The computer languages and tools used in front-end and back-end development to construct a fully functional and dynamic website or online application are called full-stack technologies by Web Development in Lahore. The client side of your website is referred to as front-end development. It primarily focuses on the user interface and the website’s components, such as graphics, buttons, text, navigation menus, etc.
Front-end development tools and programming languages used:
- HTML and CSS
You can learn about alternative frameworks by reading our blog on the best front-end frameworks. The server side of your website is referred to as back-end development. This section of the development is in charge of database management. The database keeps your customers’ data, which can be obtained via queries and APIs.
Back-end development tools and programming languages used:
- Frameworks like Spring, Laravel, and Django
- Version control systems
- Scripting languages
- AWS, Azure, and Heroku are examples of cloud services.
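The front-end/back-end split described above can be sketched minimally in Python (the table, fields, and `get_user` helper are all hypothetical, chosen only for illustration): the back-end owns the database and executes queries, and whatever renders the page only consumes the returned data.

```python
import sqlite3

# Hypothetical back-end layer: it owns the database and runs the queries.
def make_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'Aisha')")
    return conn

def get_user(conn, user_id):
    """Executes the query and returns the desired data for the front-end."""
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return {"id": user_id, "name": row[0]} if row else None
```

The front-end never touches SQL; it only receives the dictionary (in practice, JSON over an API) and displays it.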
Need help determining which back-end framework to learn? Please read our blog to learn the distinctions between the finest back-end frameworks.
What Is a Full Stack Web Developer?
A front-end developer is someone who knows a lot about front-end technologies, and a back-end developer is someone who knows a lot about back-end technologies. On the other hand, companies are looking for full-stack experts or full-stack programmers these days. So, what exactly is a full-stack developer?
Full stack developers understand the complete depth of a computer system program and can create both the front-end and back-end of Web Development in Lahore. For a more in-depth understanding of both frameworks, see our extensive blog on the difference between front-end and back-end web development.
Full-stack developers can develop the front-end or the back-end and assist in the smooth operation of things after development. As a result, they are frequently referred to as jacks of all trades, which increases their demand.
How Does a Full Stack Web Developer Work?
Let’s look at the roles and responsibilities of a full-stack developer in any firm.
- Create dynamic, visually appealing, end-to-end, and unique software/apps with front-end and back-end capabilities.
- Capable of creating user experiences, interactions, responsive design, and overall architecture.
- It is necessary to have a working knowledge of databases, servers, APIs, version control systems, and third-party apps.
- Provide input on continual improvement and, as appropriate, add/remove functionality.
- Create a continual improvement, performance optimization, stability, and scalability strategy.
- Maintain awareness of new development tools, frameworks, methods, and architectures.
- During the testing and production phases, ensure cross-platform compatibility and problem resolution.
- Manage a team of engineers and designers and effectively interact with them to improve the product roadmap and performance.
Web development companies in Lahore typically seek to hire full-stack engineers who can handle the entire project with the above-mentioned duties.
|
OPCFW_CODE
|
How do I smooth edge loops in Maya?
Smooth Geometry using Edge Flow in Autodesk Maya
- Select an edge loop (Double click on any edge)
- Go to ‘Edit Mesh’ > ‘Edit Edge Flow’
How do I make my Maya model smooth?
How to Smooth an Object in Maya
- Create a polygon object or add a polygon primitive to the scene (Create > Polygon Primitives > Cube).
- Select the object and choose Mesh > Smooth.
- The amount of smoothing can be controlled by adjusting the Division Levels and Continuity sliders.
How do you loop cut in Maya?
Insert an edge loop with the Multi-Cut Tool
- Ctrl-click to insert an edge loop anywhere on the mesh.
- Ctrl + middle-click to insert a centred edge loop.
- Ctrl + Shift-click to insert an edge loop that snaps according to the Snap Step % increment, as explained in Multi-Cut Tool Options.
How do I loop an animation in Maya?
Create a walk cycle with Progressive Looping
- Import an animation clip into the Time Editor.
- Select the clip on the track.
- In the Attribute Editor, check that Clip Loop Before and Clip Loop After Modes are set to Progressive.
- Click the Loop icon on the Time Editor toolbar.
- Drag the beginning or end of the animation clip.
What does smoothing do in Maya?
Lets you set how you want creasing applied to boundary edges and vertices as you smooth the mesh.
What does pressing 3 do in Maya?
Press the 3 key to display the selected polygonal mesh in this mode. Only the smoothed preview version of the mesh is displayed in this mode. You select and edit components on the smoothed preview when working in this mode.
How do you add an edge between two vertices in Maya?
Add edges between polygon components with the Connect Tool
- From the Tools section of the Modeling Toolkit window, click .
- From the main menu bar, select Mesh Tools > Connect.
- From the marking menu, select Connect Tool. (To open the marking menu, Shift + right-click when an object, vertex, edge, or face is selected.)
What is Edge Loop in Maya?
The Insert Edge Loop Tool (in the Modeling > Mesh Tools menu) lets you select and then split the polygon faces across either a full or partial edge ring on a polygonal mesh. It is useful when you want to add detail across a large area of a polygon mesh or when you want to insert edges along a user-defined path.
How does insert edge loop work in Maya?
The Insert Edge Loop Tool lets you insert one or more edge loops across a full, partial, or multidirectional edge ring. You can turn many of the Insert Edge Loop Tool options on or off while the tool is active using a marking menu. This assists your workflow by letting you continue your work without having to re-open the tool options window.
What to do with insert edge loop tool?
If you inserted an edge loop and then undid it, make sure to go into edge selection mode, select all of the edges, then deselect them. After that the tool should run fine for quad faces.
When to use edge ring in Autodesk Maya?
It is useful when you want to add detail across a large area of a polygon mesh or when you want to insert edges along a user-defined path. For a description of an edge ring, see Select an edge ring.
When to use edge loop in a mesh?
Edge loops only work with quads and when all the vertices are merged. If you have open vertices anywhere in the possible path of the loop, it won’t work, so make sure that your mesh is clean. A few problems also arise from extruding: it creates unconnected faces, which will break the loop tools.
|
OPCFW_CODE
|
Error with BarcodeReader when StartCameraAutomatically is true
Setup: NET 7, Blazor Server-side (use the project BarcodeReaderIssue.zip)
Steps to reproduce:
Refresh the page with reload button or press F5
Scan a QR code
Expected results: No error should occur on receiving barcode text
Actual results: An error occurs on StateHasChanged() at the callback function: The current thread is not associated with the Dispatcher. (see the video https://user-images.githubusercontent.com/76152571/218969612-6a2f57c3-0587-4134-b1d5-e61e178ce117.mp4)
Are you able to get a barcode to read? I am only able to get a QR Code to read.
Are you able to get a barcode to read? I am only able to get a QR Code to read.
Yes, I get the QR code but then it throws an exception on the StateHasChanged() call.
I'm using the tool to read only QR codes, no barcodes.
Ok. Well, I am not having your issue. It seems to work just fine for me with scanning QR codes and the camera turning on automatically. Maybe it's a camera/driver issue? Any chance you could test to see if you can read a barcode and let me know if it works for you?
I tested with a barcode and I got the same error. I don't think is a camera issue because I hosted the website publicly (https://barcodereaderissue.azurewebsites.net/) and I tested with the camera on my smartphone, and I have the same problem. I think initially the issue is coming after the page is reloaded and I have the following exception:
System.NullReferenceException: Object reference not set to an instance of an object.
at BlazorBarcodeScanner.ZXing.JS.BarcodeReader.StopDecoding()
at BlazorBarcodeScanner.ZXing.JS.BarcodeReader.Dispose()
at Microsoft.AspNetCore.Components.RenderTree.Renderer.Dispose(Boolean disposing)
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Components.RenderTree.Renderer.Dispose(Boolean disposing)
at Microsoft.AspNetCore.Components.Rendering.RendererSynchronizationContext.<>c.b__8_0(Object state)
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Components.RenderTree.Renderer.DisposeAsync()
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.<DisposeAsync>g__Await|22_0(Int32 i, ValueTask vt, List`1 toDispose)
at Microsoft.AspNetCore.Http.Features.RequestServicesFeature.<DisposeAsync>g__Awaited|9_0(RequestServicesFeature servicesFeature, ValueTask vt)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.<FireOnCompleted>g__ProcessEvents|242_0(HttpProtocol protocol, Stack`1 events)
Tested your code with a webcam and it worked fine with qr codes.
Did you reload the page before scanning? Do you see any error in the Visual Studio Output window after the reload?
Probably shouldn't be calling StateHasChanged() while Disposing which is currently being done via StopDecoding()
Ref
public void Dispose()
{
StopDecoding();
}
...
...
...
public void StopDecoding()
{
BarcodeReaderInterop.OnBarcodeReceived(string.Empty);
_backend?.StopDecoding();
IsDecoding = false;
StateHasChanged();
}
Hi @diegorod , thank you for your comment. Yes, the issue is calling StateHasChanged() while disposing and I think this should be fixed in the next version.
Now I realized that we have a similar error on calling await OnBarcodeReceived.InvokeAsync(args) at method ReceivedErrorMessage(ErrorReceivedEventArgs args):
System.InvalidOperationException: The current thread is not associated with the Dispatcher. Use InvokeAsync() to switch execution to the Dispatcher when triggering rendering or component state.
at Microsoft.AspNetCore.Components.Dispatcher.AssertAccess()
at Microsoft.AspNetCore.Components.RenderTree.Renderer.AddToRenderQueue(Int32 componentId, RenderFragment renderFragment)
at Microsoft.AspNetCore.Components.ComponentBase.StateHasChanged()
at Microsoft.AspNetCore.Components.ComponentBase.Microsoft.AspNetCore.Components.IHandleEvent.HandleEventAsync(EventCallbackWorkItem callback, Object arg)
at BlazorBarcodeScanner.ZXing.JS.BarcodeReader.ReceivedBarcodeText(BarcodeReceivedEventArgs args)
at System.Threading.Tasks.Task.<>c.b__128_0(Object state)
at Microsoft.AspNetCore.Components.Rendering.RendererSynchronizationContext.ExecuteSynchronously(TaskCompletionSource completion, SendOrPostCallback d, Object state)
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
at Microsoft.AspNetCore.Components.Rendering.RendererSynchronizationContext.ExecuteBackground(WorkItem item)
And I'm not quite sure if this is only a local issue.
This is more of a JS interop problem use this instead to avoid that :
private async Task LocalReceivedBarcodeText(BarcodeReceivedEventArgs args)
{
    await InvokeAsync(() =>
    {
        this.LocalBarcodeText = args.BarcodeText;
        StateHasChanged();
    });
}
|
GITHUB_ARCHIVE
|
Are you a recruiter facing the following challenges?
Many organizations have been determined to make drastic changes in their business operations in 2020, mainly due to the global COVID-19 pandemic. Cost-cutting measures hit many departments, and Human Resources certainly wasn’t an exception. While some companies had to cut down recruiting budgets, others brought hiring to a complete stop.
If this sounds familiar to you, then it’s time to look for strategies that can increase the efficiency of the recruitment process.
Budget constraints are not the only challenge, though. There are also time constraints, especially when the number of candidates for certain positions is overwhelming. And then there are times when finding the perfect hire proves nearly impossible because of the company’s very specific tech stack. In all of these situations, discovering the most relevant candidates is essential.
If you’re wondering how to make your recruitment process more cost-effective, one immediate solution is to stop wasting time in technical interviews with irrelevant candidates. But then, how can you tell what makes developer A better than developer B?
We are introducing to you the Slashscore recruiter module
In the near future, a tech recruiter’s ability to find top talent will depend a lot on their ability to automate a great part of their workflow. Using machine learning algorithms for this is certainly one of the best solutions, as these can already outperform humans by over 25%, as revealed in an NBER study on 300,000 hires.
Slashscore approaches tech hiring from several different angles, to ensure that recruiters can always find the best person for the job as a very effective way to reduce hiring time that translates eventually into lower expenses.
Currently, recruiters are confined to looking at developers' LinkedIn profiles to determine the quality of their work. However, while LinkedIn is an available resource, it is not sufficient for making decisions about a developer's quality. LinkedIn provides a subjective picture of a developer, whereas Slashscore gives an objective evaluation.
Measuring developers’ performance should instead be based on scores that are calculated using machine learning and integrations such as Github, StackOverflow, Meetup, Medium, etc.
Having all the information centralized and organized is going to make the life of any developer and recruiter easier. Machine learning has enabled us to find patterns that helped in generating fair scores for everyone who joins Slashscore, based on the effort they put into their current projects.
By automating 90% of the hiring process we estimate that you will spend 60% less sourcing hours for finding the best tech candidates for the job. Considering that HR specialists spend on average 18% of the time sourcing the candidates and 26% of the time in screening them, Slashscore can really streamline the recruitment process.
Slashscore makes your hiring efforts cost-effective because you’ll have a smart filtering system at your disposal. Based on the selected filters and the machine learning-powered matching algorithm, Slashscore will generate a list of potential candidates that you should approach.
You’ll also have the option to filter candidates based on very specific tech criteria, so Slashscore saves you a lot of time that would otherwise be wasted on technical interviews with irrelevant applicants. Slashscore also provides deeper insights into a developer’s personality, skills, and coding preferences, just by glancing at their profile and scores.
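The filter-and-rank flow described above can be sketched roughly as follows. This is only an illustration: the candidate fields, scores, and availability statuses are hypothetical stand-ins, not Slashscore's actual data model or matching algorithm.

```python
# Hypothetical candidate records; "score" stands in for the ML-generated score.
candidates = [
    {"name": "A", "stack": {"python", "django"}, "score": 82, "status": "actively looking"},
    {"name": "B", "stack": {"java"},             "score": 91, "status": "not looking"},
    {"name": "C", "stack": {"python", "react"},  "score": 74, "status": "open to work"},
]

def match(candidates, required_stack):
    """Keep available candidates covering the required stack, best score first."""
    available = {"actively looking", "open to work"}
    hits = [c for c in candidates
            if c["status"] in available and required_stack <= c["stack"]]
    return sorted(hits, key=lambda c: c["score"], reverse=True)
```

Filtering on availability first mirrors the note above that only profiles marked as looking (or open to) work are listed, and sorting by score gives the recruiter a ranked shortlist rather than a raw pool.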
Hiring developers using Slashscore recruiter module
1. Create a recruiter account by adding basic information and hiring preferences including technologies you are looking for in potential candidates.
2. Start publishing jobs. You have the option to save them as final or as draft.
3. Based on the published jobs, Slashscore will generate matches according to your preferences and order them by scores. Note: Only the developer profiles that have “Actively looking for work” or “Not looking but open to new work” will be listed.
4. Send offers to the selected developers including the offered salary, extra benefits and a custom message to let them know what makes your company stand out.
After receiving offers from multiple companies, the developers have the option to either:
- Accept the offer and schedule a call with the recruiter
- Refuse the offer, including a comment on why it wasn’t accepted.
What makes a recruitment strategy cost-effective is not how much money you’re spending while looking for a candidate, but getting the absolute best talent for the spent amount. Slashscore can definitely be used as a way to optimize the hiring process, since it will ensure that your vacant roles are filled without going over the allocated budget.
To have a better understanding of how Slashscore works, we invite you to a demo. All you need to do is leave your contact information and we’ll reach out to you as soon as possible.
|
OPCFW_CODE
|
RPGLE Dynamic SQL with Select clause does not work
After reading several articles about SQLRPGLE and retrieving data and storing them in data structure arrays, I came up with dynamic sql statements.
This works fine as long as I am using the ? placeholders in my where condition. But as soon as I use a ? parameter in the select part, or in general as a replacement for database field names, the result is blank.
Here is the DDS definition and the program:
TESTPF
A**************************************************************************
A*
A*-------------------------------------------------------------------------
A*
A R TESTPFR
A
A FLD01 2S 0
A FLD02 20A
A
A**************************************************************************
I have already filled this file with some dummy data. Here is what's inside:
runqry () qtemp/testpf
FLD01 FLD02
000001 1 Text 01
000002 2 Text 02
000003 3 Text 03
000004 4 Text 04
000005 5 Text 05
000006 6 Text 06
000007 7 Text 07
000008 8 Text 08
000009 9 Text 09
000010 10 Text 10
And this is the program:
TST001I
D**********************************************************************************************
D* Standalone Fields
D*---------------------------------------------------------------------------------------------
D stm s 500a inz(*blanks)
D fieldName01 s 10a inz(*blanks)
D fieldName02 s 10a inz(*blanks)
D fieldName03 s 2a inz(*blanks)
D text s 20a inz(*blanks)
D
C**********************************************************************************************
C* M A I N   P R O G R A M
C**********************************************************************************************
stm = 'SELECT fld02 FROM testpf WHERE fld01 = 1';
exec sql prepare s1 from :stm;
exec sql declare c1 cursor for s1;
exec sql open c1;
exec sql fetch c1 into :text;
exec sql close c1;
dsply text; // Prints 'Text 01'
text = *blanks;
stm = 'SELECT fld02 FROM testpf WHERE fld01 = ?';
exec sql prepare s2 from :stm;
exec sql declare c2 cursor for s2;
fieldName03 = '2';
exec sql open c2 using :fieldName03;
exec sql fetch c2 into :text;
exec sql close c2;
dsply text; // Prints 'Text 02'
text = *blanks;
stm = 'SELECT ? FROM testpf WHERE fld01 = 3';
exec sql prepare s3 from :stm;
exec sql declare c3 cursor for s3;
fieldName01 = 'FLD02';
exec sql open c3 using :fieldName01;
exec sql fetch c3 into :text;
exec sql close c3;
dsply text; // Prints ' '
text = *blanks;
stm = 'SELECT ? FROM testpf WHERE ? = ?';
exec sql prepare s4 from :stm;
exec sql declare c4 cursor for s4;
fieldName01 = 'FLD02';
fieldName02 = 'FLD01';
fieldName03 = '4';
exec sql open c4 using :fieldName01, :fieldName02, :fieldName03;
exec sql fetch c4 into :text;
exec sql close c4;
dsply text; // Prints ' '
text = *blanks;
*inlr = *on;
C**********************************************************************************************
This is the output:
DSPLY Text 01
DSPLY Text 02
DSPLY
DSPLY
DSPLY
May someone help me and explain why this is the case?
Not sure what you mean, show some example code, and maybe someone can help explain it to you.
When using a prepared statement, you can use ? as a parameter marker wherever you can use a host variable in a static statement. Of your four sample prepared statements, the first 3 should work, though the third one will not return what you seem to expect as it is equivalent to:
SELECT 'FLD02' FROM testpf WHERE fld01 = 3
I would expect to receive the value 'FLD02' as the result, not the value in column FLD02. This is because the ? is not a string replacement marker, but a parameter field marker. You can't use it to select a column, but you can use it to provide a value for comparisons, or a constant to be output.
The fourth sample is valid SQL, but it is equivalent to:
SELECT 'FLD02' FROM testpf WHERE 'FLD01' = '4'
This will return nothing since 'FLD01' does not equal '4'.
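These parameter-marker semantics are not specific to embedded SQL on IBM i; any SQL interface with prepared statements behaves the same way. As a quick illustration in Python's sqlite3 (using a hypothetical table mirroring TESTPF), a ? in the select list yields the bound literal, not the named column's contents:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testpf (fld01 INTEGER, fld02 TEXT)")
conn.execute("INSERT INTO testpf VALUES (3, 'Text 03')")

# The ? supplies a *value*; it cannot name a column.
row = conn.execute("SELECT ? FROM testpf WHERE fld01 = 3", ("FLD02",)).fetchone()
print(row[0])  # prints the literal string 'FLD02', not 'Text 03'
```

To vary the selected column at run time, the column name has to be spliced into the statement text before PREPARE (with appropriate validation), exactly as the original program already does for the statement as a whole.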
Another consequence of this is that the ? can be used to provide a numeric value to the prepared statement. So you can do this:
dcl-s seqno Packed(5:0);
exec sql declare c2 cursor for s2;
stm = 'SELECT fld02 FROM testpf WHERE fld01 = ?';
exec sql prepare s2 from :stm;
seqno = 2;
exec sql open c2 using :seqno;
Also notice that I removed the declaration of the cursor to somewhere outside the logic flow as the declaration is not an executable statement. I see programs where the declare is in a subroutine that is called before a separate subroutine containing the open for the cursor. This is semantically incorrect. The DECLARE CURSOR statement is more correctly equivalent to an RPGLE dcl- statement. But because the SQL precompiler processes the source linearly, largely without regard to subroutines or sub-procedures, the requirement is for the DECLARE CURSOR to be physically before the OPEN in the source.
Generally I like to put my SQL Declares at the head of the program right after the SET OPTION statement which must be the first SQL embedded in the program. This is where I put the declares when I am using prepared statements. I also declare the statement name as well though this isn't strictly necessary. There is a little gotcha for this though, and that exists when using static SQL with locally scoped host variables. To deal with this, I declare static cursors a bit differently when using sub-procedures. The SQL precompiler recognizes that sub-procedures use locally scoped variables, so if you are declaring a static cursor with locally scoped host variables, the host variables and the cursor declaration must be in the same scope. That means I must declare my static cursors in the same sub-procedure as the open. I still declare the cursor up near the RPGLE dcl- statements to keep the declarations together.
|
STACK_EXCHANGE
|
package edu.kit.pse.osip.core.model.base;
import edu.kit.pse.osip.core.SimulationConstants;
import java.util.EnumMap;
import java.util.Observable;
/**
* Groups all tanks in the production site together. This is the entrance point of the model because you can get every
* tank and, through the tanks, every pipe in the production site.
*
* @author David Kahles
* @version 1.0
*/
public class ProductionSite extends Observable {
/**
* Stores the mixtank.
*/
private MixTank mixTank;
/**
* Stores all tanks.
*/
private final EnumMap<TankSelector, Tank> tanks = new EnumMap<>(TankSelector.class);
/**
* Saves the input temperatures of all tanks.
*/
private final EnumMap<TankSelector, Float> inputTemperature = new EnumMap<>(TankSelector.class);
/**
* Constructs a new ProductionSite.
*/
public ProductionSite() {
initTanks();
}
/**
* Initializes all tanks.
*/
private void initTanks() {
int halfFull = SimulationConstants.TANK_SIZE / 2;
mixTank = instantiateMixTank(SimulationConstants.TANK_SIZE, new Liquid(halfFull,
TankSelector.MIX.getInitialTemperature(), TankSelector.MIX.getInitialColor()),
new Pipe(SimulationConstants.PIPE_CROSSSECTION, SimulationConstants.PIPE_LENGTH, (byte) 100));
byte count = 0;
for (TankSelector selector: TankSelector.valuesWithoutMix()) {
byte threshold;
if (count == 0 || count == 1) {
threshold = (byte) 50;
} else {
threshold = (byte) 0;
}
count++;
Liquid l = new Liquid(halfFull, selector.getInitialTemperature(), selector.getInitialColor());
Pipe inPipe = new Pipe(SimulationConstants.PIPE_CROSSSECTION, SimulationConstants.PIPE_LENGTH, threshold);
Pipe outPipe = new Pipe(SimulationConstants.PIPE_CROSSSECTION, SimulationConstants.PIPE_LENGTH, threshold);
tanks.put(selector, instantiateTank(SimulationConstants.TANK_SIZE, selector, l, outPipe, inPipe));
}
/* Make sure we're in the correct state */
reset();
}
/**
* Template method to allow subclasses to create objects of subclasses of Tank. The parameters are the same
* parameters as in the Tank constructor.
*
* @param capacity see Tank.
* @param tankSelector see Tank.
* @param liquid see Tank.
* @param outPipe see Tank.
* @param inPipe see Tank.
* @return The created Tank.
*/
protected Tank instantiateTank(float capacity, TankSelector tankSelector, Liquid liquid, Pipe outPipe,
Pipe inPipe) {
return new Tank(capacity, tankSelector, liquid, outPipe, inPipe);
}
/**
* Template method to allow subclasses to create objects of subclasses of MixTank. The parameters are the same
* parameters as in the MixTank constructor.
*
* @param capacity see MixTank.
* @param liquid see MixTank.
* @param outPipe see MixTank.
* @return The created MixTank.
*/
protected MixTank instantiateMixTank(float capacity, Liquid liquid, Pipe outPipe) {
return new MixTank(capacity, liquid, outPipe);
}
/**
* Gets the input temperature of an upper tank.
*
* @param tank Specifies the tank.
* @return The input temperature.
*/
public float getInputTemperature(TankSelector tank) {
return inputTemperature.get(tank);
}
/**
* Sets the input temperature for an upper tank.
*
* @param tank Specifies the tank.
* @param temperature Temperature to set.
*/
public void setInputTemperature(TankSelector tank, float temperature) {
if (temperature > SimulationConstants.MAX_TEMPERATURE) {
throw new IllegalArgumentException("Tank input temperature must not be greater than "
+ SimulationConstants.MAX_TEMPERATURE);
}
if (temperature < SimulationConstants.MIN_TEMPERATURE) {
throw new IllegalArgumentException("Tank input temperature must not be smaller than "
+ SimulationConstants.MIN_TEMPERATURE);
}
inputTemperature.put(tank, temperature);
setChanged();
notifyObservers();
}
/**
* Gets one of the upper tanks.
*
* @param tank Specifies the tank.
* @return the requested tank.
*/
public Tank getUpperTank(TankSelector tank) {
return tanks.get(tank);
}
/**
* Gets the mixtank.
*
* @return the mixtank of the production site.
*/
public MixTank getMixTank() {
return mixTank;
}
/**
* Resets the whole production site to its default values: Every tank with 50% infill, valves putting the site to
* a stable state.
*/
public synchronized void reset() {
for (TankSelector selector: TankSelector.valuesWithoutMix()) {
tanks.get(selector).reset();
inputTemperature.put(selector, selector.getInitialTemperature());
}
mixTank.reset();
inputTemperature.put(TankSelector.MIX, TankSelector.MIX.getInitialTemperature());
setChanged();
notifyObservers();
}
}
|
STACK_EDU
|
Clock Speed Requirements
Apologies if I missed anything in the README, I sometimes skim things a little too vigorously.
I'm looking to use this in an application where the system clock frequency is 48MHz. Are there any issues with either SPI or SDIO at this frequency? I can use either of the above in this application, with SDIO's speed being preferable (the application involves high-speed data acquisition). Will the SDIO version work with a 48MHz system clock?
For SPI, you can set the SPI baud rate in the hardware configuration. This library uses the SDK call spi_set_baudrate which uses the SDK's clock_get_hz to get the current frequency of the peripheral clock for UART and SPI (see src\rp2_common\hardware_spi\spi.c in the pico-sdk). I'd have to look in the Datasheet to see what sets the peripheral clock's frequency. But, anyway, you could always scale the baud rate in the H/W config. if necessary.
For SDIO, it's on my list of things to do to make the SDIO clock rate configurable in the H/W config. It is currently hard coded in sd_sdio_begin in [sd_driver\SDIO\sd_card_sdio.c](https://github.com/carlk3/no-OS-FatFS-SD-SPI-RPi-Pico/blob/sdio/src/sd_driver/SDIO/rp2040_sdio.c).
// Increase to 25 MHz clock rate
// Actually, clk_sys / CLKDIV (from rp2040_sdio.pio),
// So, say, 125 MHz / 4 = 31.25 MHz (see src\sd_driver\SDIO\rp2040_sdio.pio)
// rp2040_sdio_init(sd_card_p, 1, 0); // 31.25 MHz
rp2040_sdio_init(sd_card_p, 1, 128); // 20.8 MHz
so you would have to modify that directly. When I get around to making it configurable in the H/W config., I should use a scheme like that used in spi_set_baudrate and take the clk_sys into account.
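As a rough sketch of that scheme (assuming, based on the code comments above, that the PIO program takes 4 system-clock cycles per SDIO clock and that CLKDIV is an integer part plus fraction/256; these are inferences from the comments, not confirmed against the SDK):

```python
def sdio_clock_hz(clk_sys_hz: int, clkdiv_int: int, clkdiv_frac: int) -> float:
    """Estimate the SDIO clock from the system clock and PIO clock divider.

    Assumption (inferred from the rp2040_sdio comments): the PIO state
    machine spends 4 cycles per SDIO clock, and CLKDIV = int + frac/256.
    """
    clkdiv = clkdiv_int + clkdiv_frac / 256
    return clk_sys_hz / (4 * clkdiv)

# 125 MHz / (4 * 1.0) = 31.25 MHz, matching the commented-out fast setting
fast = sdio_clock_hz(125_000_000, 1, 0)
# 125 MHz / (4 * 1.5) is roughly 20.8 MHz, matching the hard-coded default
default = sdio_clock_hz(125_000_000, 1, 128)
# With a 48 MHz system clock, the same divider of 1.0 would give 12 MHz
at_48 = sdio_clock_hz(48_000_000, 1, 0)
```

A configurable version would presumably pick the largest divider-achievable rate at or below the requested baud rate, similar to what spi_set_baudrate does.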
There isn't a lot of computation going on, but it's possible that a slower system clock will slow down something to the point that you might see problems. The only thing that comes to mind is the verification of the CRCs. (For example, see sdio_verify_rx_checksums in rp2040_sdio.c). I got a great suggestion for how to do that in hardware ( #63 ), but that's another thing on the to do list.
Thanks for the info! I will look into this. Modifying it is certainly not an issue.
I have verified that it works reliably at a system clock frequency of 48 MHz and an SDIO baud rate of 5 MHz.
Awesome, thank you! Does 10MHz work? I see you noted it can't be greater than 1/4 of the system clock. Either should work for my purposes, though.
Awesome, thank you! Does 10MHz work?
Setting the system clock to 10000 kHz is "not possible" according to check_sys_clock_khz.
I see you noted it can't be greater than 1/4 of the system clock. Either should work for my purposes, though.
48 MHz system clock with 10 MHz baud rate works fine:
> set_sys_clock_48mhz
> mount 3:
> bench 3:
Type is FAT32
Card size: 31.95 GB (GB = 1E9 bytes)
Manufacturer ID: 0x0
OEM ID:
Product: USD
Revision: 1.0
Serial number: 0x302c
Manufacturing date: 8/2022
FILE_SIZE_MB = 5
BUF_SIZE = 20480
Starting write test, please wait.
write speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
3196.9,13015,5826,6384
3226.4,11195,5811,6335
Starting read test, please wait.
read speed and latency
speed,max,min,avg
KB/Sec,usec,usec,usec
4045.4,6709,4659,5061
4051.7,6310,4638,5054
Done
> big_file_test bf 10 1
Writing...
Elapsed seconds 7.14
Transfer rate 1434 KiB/s (1468 kB/s) (11745 kb/s)
Reading...
Elapsed seconds 6.57
Transfer rate 1558 KiB/s (1596 kB/s) (12767 kb/s)
Closing as you've answered my question and implemented a convenient solution. Thanks!
|
GITHUB_ARCHIVE
|
● Operating system: Windows 7 32-bit/64-bit
● Oracle base location: C:\u01\app\oracle [if location doesn’t exist, create it]
● Download location: C:\My_Downloads
● Users referenced: system, oracle, your local laptop’s username

Step-1: Download
Option-1: Download from Oracle’s Website
● From: http://download.oracle.com/otn/nt/oracle11g/112010/win32_11gR2_client.zip
● To: C:\My_Downloads on your laptop.
Option-2 [Only for my students.] Download from my network
● From: Open “My Computer” and copy paste the following in the address bar: \\192.168.68\MLC_Shared_Files\11g R2_Window_32-bit_Client
● Copy win32_11gR2_client.zip To: C:\My_Downloads
● Remember: it may ask you for “student” domain credentials.

Step-2: Extract the Software
Using default Windows Extract Utility (~20 minutes)
● Browse to C:\My_Downloads
● Right Click on “win32_11gR2_client” and click “Extract All”.
OR Using WinRar Extract Utility (~3 minutes)
● Browse to C:\My_Downloads
● Right Click on “win32_11gR2_client” and click “Extract Files...”
○ Click “Ok”

Step-3: Install Oracle 11g R2 Client
● Browse to C:\My_Downloads\win32_11gR2_client\client\install
● Right click on “oui” and click “Run as administrator”
● On the new window of “User account control”, click “Yes” and follow the screenshots below.

On the Step 1 of 7 window:
● Select “Administrator (1.02GB)” and click “Next” as shown below.
On the Step 2 of 7 window:
● Keep English as the selected language and click Next.
On the Step 3 of 7 window:
● Replace ORACLE_BASE
○ From c:\app\YourLoginName
○ To c:\u01\app\oracle
● Software location will automatically change to C:\u01\app\oracle\product\11.2.0\client_1
On the Step 4 of 7 window:
● The installer will start checking for pre-requisites.
On the Step 5 of 7 window:
● Click on Finish.
On the Step 6 of 7 window:
● Installer will start installing Oracle 11g Client on your client machine (laptop).
On the Step 7 of 7 window:
● Click Finish to wrap up the installation.

Congratulations, you have just installed 11g R2 Client on Windows 7. The next steps are to add the tns entry to connect to the PrimeDG database, verify connectivity, and finally de-install everything from the Windows laptop.
1. Add the tns entry called PFO with the following values. To see an example of how to add the tns entry, click here.
a. Hostname = ssh.com
b. Port = 6223
c. Service_Name = PrimeDG
d. Protocol = TCP
Note: the tnsnames.ora file is located in $ORACLE_HOME/network/admin or, in simple variable terms, under the $TNS_ADMIN location. The three main database variables which are set in .bash_profile are:
ORACLE_BASE → /u01/app/oracle
ORACLE_HOME → $ORACLE_BASE/product/11.2.0/db_1
TNS_ADMIN → $ORACLE_HOME/network/admin
2. Connect to your schema using PFO:
a. Start cmd on your local laptop
b. sqlplus scott@pfo
c. Create a table called “XYZ_Was_Here” as follows. Make sure you replace XYZ with your name. For example, if I have to create my table, I will name it “Moid_Was_Here”. The command to create your table is: Create table Moid_Was_Here (id number);
3. Now since we have verified the connectivity, it is time to DE-INSTALL Oracle 11g R2 Client from Windows 7. Click here for instructions on how to perform deinstallation of the Oracle Client you just installed.
--Moid
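For reference, a tnsnames.ora entry using the PFO values mentioned above would look roughly like this (standard tnsnames.ora local-naming syntax; the hostname shown is as recovered from the text above, so double-check it against your environment):

```
PFO =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ssh.com)(PORT = 6223))
    (CONNECT_DATA =
      (SERVICE_NAME = PrimeDG)
    )
  )
```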
|
OPCFW_CODE
|
📁 Strategy Info¶
- Folder: /hummingbot/strategy/cross_exchange_mining
- Configs: cross_exchange_mining_config_map_pydantic.py
- Maintainer: bsmeaton
🏆 Strategy Tier¶
Community strategies have passed the Minimum Voting Power Threshold in the latest Poll and are included in each monthly release. They are not maintained by Hummingbot Foundation but may be maintained by a community member.
The Cross Exchange Mining strategy creates buy or sell limit orders on a maker exchange at a spread wider than that of the taker exchange. Filling of the order on the maker exchange triggers a balancing of the portfolio on the taker exchange at an advantageous spread (The difference between the two spreads being equal to the
min_profitability) thereby creating profit.
The strategy tracks the amount of base asset across the taker and maker exchanges for
order_amount and continually seeks to rebalance and maintain assets, thereby reducing any exposure risk whereby the user has too much quote or base asset in falling or rising markets.
🏦 Exchanges supported¶
- SPOT CLOB CEX
🛠️ Strategy configs¶
||string||True||Enter your maker spot connector (Exchange)|
||string||True||Enter your taker connector (Exchange/AMM)|
||string||True||Enter the token trading pair you would like to trade on
||string||True||Enter the token trading pair you would like to trade on
||decimal||True||What is the minimum profitability for you to make a trade? (Enter 1 to indicate 1%)|
||decimal||True||What is the amount of
||decimal||5||True||How much buffer do you want to add to the price to account for slippage for taker orders?|
||decimal||5||True||Time interval between subsequent portfolio rebalances?|
||decimal||0.05||True||What percentage below the min profitability do you want to cancel the set order?|
||decimal||0.05||True||What percentage above the min profitability do you want to cancel the set order?|
||decimal||120||True||The period in seconds to calculate volatility over?|
||decimal||3600||True||Time interval to adjust min profitability over by using results of previous trades in last 24 hrs?|
||decimal||0||True||What is the minimum order amount required for bid or ask orders?|
||decimal||1||True||Multiplier for rate curve for the adjustment of min profitability based on previous trades over last 24 hrs?|
||decimal||0.25||True||Complete trade fee covering both taker and maker trades?|
The strategy operates by maintaining the 'order amount' base balance across the taker and maker exchanges. The strategy sets buy or sell limit orders on the maker exchange; these orders are set when sufficient quote or base balance exists on the taker exchange to be able to complete or balance the trade on the taker exchange when a limit order on the maker exchange is filled.
The strategy can balance trades immediately when an imbalance in base asset is detected, and although the taker trade will be acted upon immediately after an imbalance is detected, subsequent balances will be spaced by at least the
balance_adjustment_duration variable, just to ensure the balances are updated and recorded before the balance is retried erroneously. In this way the strategy will exactly maintain the 'order amount' in terms of base currency across the exchanges, selling base currency when a surplus exists or buying base currency if short.
The strategy seeks to make profit in a similar way that cross exchange market making operates: by placing a wide spread on the maker exchange that, when filled, will allow the user to buy back base currency at a lower price on the taker exchange (in case of a sell order fill on the maker exchange) or sell base currency at a higher price on the taker exchange (in case of a buy order fill on the maker exchange). The difference in price between these two transactions should be the
min_profitability variable. Setting this variable to a higher value will result in fewer trade fills due to a larger spread on the maker exchange, but also a greater profitability per transaction, and vice versa.
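To make the mechanics concrete, here is a hypothetical sketch (the helper name and the prices are illustrative, not from the strategy source) of the round-trip profit when a maker sell is filled and hedged with a taker buy:

```python
def round_trip_profit_pct(maker_fill_price: float,
                          taker_hedge_price: float,
                          trade_fee_pct: float) -> float:
    # A maker SELL filled at maker_fill_price; the base is bought back on
    # the taker at taker_hedge_price. The gross edge is the price
    # difference as a percentage, and the combined fee for both legs
    # comes out of it.
    gross_pct = (maker_fill_price - taker_hedge_price) / taker_hedge_price * 100
    return gross_pct - trade_fee_pct

# Selling at 101 on the maker and hedging at 100 on the taker with a 0.25%
# combined trade_fee leaves a 0.75% net edge; min_profitability targets
# this net figure.
net = round_trip_profit_pct(101.0, 100.0, 0.25)
```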
When an order is set with a spread that meets the
min_profitability variable at that time it is then monitored each tick. The theoretical profitability of the trade will vary over time as orders on the taker orderbook changes meaning the cost of balancing the filled trade will constantly change. The order is cancelled and reset back at the
min_profitability amount when the profitability either drops below the
min_profitability minus min_prof_tol_low point or rises above the min_profitability plus the upper tolerance point.
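The cancel-and-reset band described above can be sketched as follows (a hypothetical helper; the two tolerance arguments correspond to the cancel-threshold configs listed in the table above):

```python
def outside_band(current_prof_pct: float, min_prof_pct: float,
                 tol_low: float, tol_high: float) -> bool:
    # The order is cancelled and reset once its theoretical profitability
    # drifts out of the band [min_prof - tol_low, min_prof + tol_high].
    return (current_prof_pct < min_prof_pct - tol_low
            or current_prof_pct > min_prof_pct + tol_high)

# With min_profitability = 1% and the default 0.05 tolerances, the order
# survives while theoretical profitability stays between 0.95% and 1.05%.
```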
In addition to this basic logic a leading and lagging adjustment to the
min profitability figure is made during the strategy run.
Short term, Leading adjustment:
The strategy looks at the current volatility in the maker market to adjust the
min profitability figure described above. The function looks at the standard deviation of the currency pair prices across a time window equal to
volatility_buffer_size. The standard deviation figure is then converted by taking the three sigma percentage away from the mid price over that range and adding it to the
min profitability. In this way a higher volatility or standard deviation figure would increase the min profitability, creating a larger spread and reducing risk during periods of volatility. The adjustment is set for a time period equal to the
volatility_buffer_size unless a higher volatility adjustment is calculated in which case its set at the higher adjustment rate and timer reset.
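Read literally, the leading adjustment could be sketched like this (a hypothetical rendering of the description above, not the strategy's actual code; the mid price is approximated here by the mean of the window):

```python
from statistics import mean, pstdev

def volatility_adjusted_min_prof(prices: list, base_min_prof_pct: float) -> float:
    # Three standard deviations of the buffered prices, expressed as a
    # percentage of the mid price, are added on top of the base
    # min profitability, widening the spread when the market is choppy.
    mid = mean(prices)
    three_sigma_pct = 3 * pstdev(prices) * 100 / mid
    return base_min_prof_pct + three_sigma_pct

# A flat window adds nothing; a choppy window widens the spread.
flat = volatility_adjusted_min_prof([100.0] * 10, 1.0)
choppy = volatility_adjusted_min_prof([99.0, 101.0] * 5, 1.0)
```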
Long term, Lagging adjustment:
The strategy looks at the previous trades completed and balancing trades in order to understand the success of the strategy at producing profit. The strategy will again adjust the 'min_profitability' figure by widening the spread if the user is losing money and tightening the spread if the trades are too profitable. This is due to the strategy aiming to essentially provide a break-even portfolio to maximise mining rewards, hence the name 'cross exchange mining'.
The previous trades in the user's
hummingbot/data file are read by the strategy at intervals equal to the
min_prof_adj_timer. When this function is called, it looks at trades recorded within the last 24 hours in the file and, based on timestamp, seeks to match the filled maker and taker orders that make up a full balanced trade.
The strategy uses the
trade_fee variable in this calculation to take into account the amount of money paid to both exchanges during these trades. The calculation returns the average profitability of the trades and balance pairs completed in the previous 24 hours. This figure is then converted into an adjustment: a 0% profitability (based on order amount) would lead to 0 adjustment.
Positive or negative percentages are converted into an adjustment using the relationship
(Percentage * rate_curve)**3 + min_profitability. The cubed figure exponentially penalises large profit or loss percentages gained thereby greatly reducing the min_profitability (In case of large gains) or greatly increasing the min_profitability figure (In case of large losses). The
rate_curve variable acts to provide a multiplier for this adjustment; it is recommended to keep this in the 0.5-1.5 range. The higher it is set, the more the min_profitability adjustment is affected by previous trades.
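The relationship above, written out as a function (the helper name is illustrative; note that the prose is slightly ambiguous about sign convention, so as written the sign of the input percentage determines whether the target is raised or lowered):

```python
def lagging_adjusted_min_prof(avg_profit_pct: float, rate_curve: float,
                              min_profitability: float) -> float:
    # (Percentage * rate_curve)**3 + min_profitability, as given in the
    # text. Cubing leaves small profit/loss figures almost neutral while
    # large ones move the target spread sharply.
    return (avg_profit_pct * rate_curve) ** 3 + min_profitability

# Break-even trades (0%) leave min_profitability unchanged, while a large
# percentage swings the target hard because of the cubic term.
unchanged = lagging_adjusted_min_prof(0.0, 1.0, 1.0)
swung = lagging_adjusted_min_prof(2.0, 1.0, 1.0)
```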
From a personal perspective I have used the XEMM strategy for a number of years and my motivation for this strategy comes not from improving how effective the strategy is at making money but it is to increase the reliability of the strategy in maintaining a hedged position of base assets even during wild market swings. The code is entirely rewritten from the XEMM strategy aimed at making a more logical progression and removing elements that I find add complexity, reducing reliability without benefitting the user.
The strategy is intended for use with the same pairs on both taker and maker centralised exchanges. The strategy utilises market trades to fill on taker side.
|
OPCFW_CODE
|
When I export to a PDF file, the TrueType fonts box is checked and I have to uncheck it; every time the dialog opens it is checked again. Is there a config parameter so that it is not checked?
Not that I know of.
I use a mapkey to create pdf exports
for WF4, I use:
mapkey caf @MAPKEY_NAMECreate Acrobat Reader File (current sheet);\
mapkey(continued) @MAPKEY_LABEL>Create PDF (current sheet)(caf);~ Command `ProCmdModelMkPdf`;\
mapkey(continued) ~ Select `intf_pdf` `pdf_color_depth` 1 `pdf_mono`;\
mapkey(continued) ~ Open `intf_pdf` `pdf_linecap`;~ Close `intf_pdf` `pdf_linecap`;\
mapkey(continued) ~ Select `intf_pdf` `pdf_linecap` 1 `2`;\
mapkey(continued) ~ Select `intf_pdf` `PDFMainTab` 1 `PDFContent`;\
mapkey(continued) ~ Select `intf_pdf` `pdf_font_stroke` 1 `pdf_stroke_all`;\
mapkey(continued) ~ Activate `intf_pdf` `pdf_btn_ok`;
In the config.pro, i have:
pen_table_file <file location>/<filename.pnt>
This may work with C2 but no guarantees.
I have one configured for C2 but it is more involved, so I can't give you this one right now.
You should be able to save the PDF settings that you want and then recall them before saving the PDF. Once selected, they stay active for the length of your session. If you're using a mapkey to save PDFs, you could always have the mapkey select the desired profile first.
This article should provide some information on how to create the profile:
In case you are unable to access the article, here is the information from the Resolution area:
Take the steps below to create a PDF export profile:
The 2D PDF export profile has a .dop (Dex Output Profile) extension and will be stored in the current working directory
Config.pro option intf_profile_dir can be defined to specify the default profile folder for selection from drop down list
Yes, I would also like to know how we can set a particular profile as default. Please let us know how to do this or add this feature as soon as possible. thanks.
It is not possible to set a profile as default. The Profile must be selected from the drop down selection.
- Profile must be selected from drop down list for every export
Not exactly. Once a profile has been selected the settings will stay active for the duration of that session.
Regardless, selecting different profiles during export via mapkeys is the approach we use here and it works fine.
|
OPCFW_CODE
|
The Rutles are a rock band known for their visual and aural pastiches and parodies of the Beatles. This originally fictional band, created by Eric Idle and Neil Innes for 1970s television programming, became an actual group – whilst remaining a parody of the Beatles – which toured and recorded, releasing many songs and albums that included two UK chart hits.
Originally created as a short sketch in Idle’s British television comedy series Rutland Weekend Television, the Rutles gained notice after being the focus of the mockumentary television film All You Need Is Cash (1978, aka The Rutles). Former Beatle George Harrison appeared in the film and assisted in its creation. Encouraged by the positive public reaction to the sketch, featuring Beatles’ music pastiches by Innes, the film was written by Idle, who co-directed it with Gary Weis. It had 20 songs written by Innes, which he performed with three musicians as the Rutles. A soundtrack album in 1978 was followed in 1996 by Archaeology, which spoofed the then recent Beatles Anthology series.
A second film, The Rutles 2: Can’t Buy Me Lunch – modelled on the 2000 TV special The Beatles Revolution – was made in 2002 and released in the US on DVD in 2003.
George Harrison was involved in the project from the beginning. Producer Gary Weis said, “We were sitting around in Eric’s kitchen one day, planning a sequence that really ripped into the mythology and George looked up and said, ‘We were the Beatles, you know!’ Then he shook his head and said, ‘Aw, never mind.’ I think he was the only one of the Beatles who really could see the irony of it all.”
Harrison said, “The Rutles sort of liberated me from the Beatles in a way. It was the only thing I saw of those Beatles television shows they made. It was actually the best, funniest and most scathing. But at the same time, it was done with the most love.”
Ringo Starr liked the happier scenes in the film, but felt the scenes that mimicked sadder times hit too close.
John Lennon loved the film and refused to return the videotape and soundtrack he was given for approval. He told Innes, however, that “Get Up and Go” was too close to the Beatles’ “Get Back” and to be careful not to be sued by ATV Music, owners of the Beatles catalogue’s copyright at the time. The song was consequently omitted from the 1978 vinyl LP soundtrack.
Paul McCartney, who had just released his own album, London Town, always answered, “No comment.” According to Innes: “He had a dinner at some awards thing at the same table as Eric one night and Eric said it was a little frosty.” Idle claimed McCartney changed his mind because his wife Linda thought it was funny.
The Rutles is a soundtrack album to the 1978 telemovie All You Need Is Cash. The album contains 14 of the tongue-in-cheek pastiches of Beatles songs that were featured in the film.
The primary creative force of the Rutles’ music was Neil Innes, the sole composer and arranger of the songs. Innes had been the “seventh” member of Monty Python, as well as one of the main artists behind the Bonzo Dog Doo Dah Band in the late 1960s, who had been featured in the real Beatles’ Magical Mystery Tour film performing “Death Cab for Cutie”.
Innes credits the three musicians he recruited to assist him on the project as having been important in helping him capture the feel of the Beatles. Guitarist/singer Ollie Halsall and drummer John Halsey had played together in the groups Timebox and Patto. Multi-instrumentalist Rikki Fataar had played with the Flames before joining the Beach Boys in the early 1970s.
Eric Idle, who devised the Rutles concept and co-wrote the film, did not play or sing on any of the recordings. He lip-synced “Dirk” vocals that were in fact sung by Halsall. Innes says that Idle, who had recently had an appendectomy, offered to help but was encouraged to recuperate. Having encouraged Idle and Innes to make a film that satirised the Beatles’ history, and lent them archival footage for inclusion in the film, George Harrison facilitated the album’s release by introducing them to the chairman of Warner Bros. Records, Mo Ostin. (by wikipedia)
Pop culture, comedic satire, and rock music have always made for strange bedfellows. With all due respect to the collective genius involved in the Spinal Tap saga, it is safe to say no other artists have been able to repeat or re-create the delicate balance exhibited in the Rutles’ multimedia parody. This venture included a made-for-television mockumentary titled All You Need Is Cash. On this 1990 CD release, the contents of the original 1978 soundtrack — which incidentally bore the same name as the show — are included, as are an additional half-dozen recordings made for the film that ultimately became victims of the time limitations inherent in the vinyl medium.
The Rutles began with Monty Python’s Flying Circus member Eric Idle. His initial flash on the concept was as a short-lived BBC series, titled Rutland Weekend Television. Joining Idle on a regular basis was former Bonzo Dog Band member Neil Innes — whose seemingly innate musical abilities would also adorn latter-era Monty Python performances. According to Idle, “His [Innes] contributions [to the program] were Beatley,” thus inspiring the concept of a full-blown Beatles spoof. After previewing a demo reel to Lorne Michaels — producer of Saturday Night Live — Idle was convinced to develop the idea for NBC TV.
The Rutles are: Ron Nasty, who is played by Innes (guitar/keyboards/vocals) and is the John Lennon character; Barry Wom (aka Barrington Womble), portrayed by John Halsey (percussion/vocals), who presents a dead-on caricature of the deadpan Ringo Starr; Stig O’Hara, depicted by Rikki Fataar (guitar/bass/vocals/sitar/tabla), who flawlessly emulates George Harrison; and Idle — the only non-musician — who spoofs Paul McCartney as Dirk McQuickly.
The soundtrack takes on a whole other existence as each and every composition is deeply and sincerely ingrained in the Beatles’ music.
Because of the practically sacred nature Beatles music shares in almost every life it graces, Innes penned and produced spoofs that were so eerily similar in structure they could easily be mistaken for previously unearthed tracks from the real fab four.
There are obvious put-ons such as “Ouch!” and “Help!” or “Doubleback Alley” and “Penny Lane.” However, the real beauty inherent in many of these tunes comes via the subtle innuendos. These ultimately involve multiple listenings in order to locate the origins of a particular guitar riff, vocal inflection, or possible lyrical spoof. The best of these include “Hold My Hand,” which references “I Wanna Hold Your Hand” in title, and “All My Lovin” in song structure. “Piggy in the Middle” is a sly reworking of “I Am the Walrus,” and “It’s Looking Good” could be considered a variation of the Rubber Soul cut “I’m Looking Through You” right down to the repeated lyrics at the song’s coda.
The band reunited (minus Idle) in the mid-’90s for a few one-off gigs, and in 1996 Archaeology — a send-up of the Beatles’ six-disc Anthology — was released to critical acclaim. Additionally, a various-artist album titled Rutles Highway Revisited — which featured an all-star cast including: Syd Straw, Tuli Kupferberg, Bongwater, Shonen Knife, and Galaxie 500 — recorded their favorite Rutles tunes and the disc was issued on the ever-eclectic Shimmy Disc label in 1990. (by Lindsay Planer)
Rikki Fataar (guitar, bass, sitar, tabla, vocals)
John Halsey (drums, percussion, vocals)
Ollie “Barry” Halsall (guitar, vocals, keyboards)
Neil “Nasty” Innes (guitar, keyboards, vocals)
Andy Brown (bass)
01. Hold My Hand 2.35
02. Number One 2.54
03. With A Girl Like You 1.53
04. I Must Be In Love 2.06
05. Ouch! 1.53
06. Living In Hope 2.39
07. Love Life 2.56
08. Nevertheless 1.31
09. Good Times Roll 3.07
10. Doubleback Alley 2.59
11. Cheese And Onions 2.43
12. Another Day 2.13
13. Piggy In The Middle 4.15
14. Let’s Be Natural 3.27
All songs written by Neil Innes
I got this collector's item from Mr. Sleeve — and I had to say thanks again!
And here are the one and only Rutles in their movie “All You Need Is Cash”
|
OPCFW_CODE
|
Better support for emoji wcwidth
Change the implementation of wcwidth in vty taking the implementation from musl.
Make wcwidth compatible with Unicode 12.1.0.
Add code that generates the C tables from the Unicode data, also from musl.
Add a utility executable that runs through the Unicode codepoints and compares what wcwidth thinks with the actual implementation of the terminal.
Add a demo that prints a bunch of emoji character combinations from various versions of Unicode along with the width calculated by the new implementation of wcwidth.
Fix some test cases that deal with wide characters.
I think the new implementation is simpler and more efficient, although I have to admit that I don't understand all the bitwise logic.
I've tested this on a few terminals on Linux with overall good results.
This should bring us one step closer in closing #175.
I could not make cabal sdist include the C header files under unicode_tools by mentioning the files in install-includes in the executable. For whatever reason I had to mention the files in the library as well: https://github.com/jtdaugherty/vty/pull/179/commits/79955363e056fd42ecf8517bfc03780f8a95390e
Not sure if there is a better way of doing this.
@u-quark thanks so much for working on this patch. I'm sorry I haven't had time to look at it in depth until now. I took a look and the patch definitely improves emoji handling in Matterhorn, a project I work on. I'm curious if @glguy can see how this impacts his IRC client.
A thing I found, after seeing this tool run for a few seconds:
$ vty-wcwidth-tester
00d800: vty-wcwidth-tester: <stdout>: hPutChar: invalid argument (invalid character)
Also: I tried running vty-emoji-demo. When I ran it, nothing appeared in the terminal. I checked top and the program was using 100% of a CPU and steadily consuming more RAM. I killed it when it got to about 8 GB of residency.
@jtdaugherty Thanks for looking at this!
$ vty-wcwidth-tester
00d800: vty-wcwidth-tester: : hPutChar: invalid argument (invalid character)
I can confirm that I get the same error. I should have filtered out more character categories:
shouldConsider c =
case generalCategory c of
Control -> False
+ NotAssigned -> False
+ Surrogate -> False
_ -> True
I will fix!
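For reference, the same filtering idea in Python form (a loose analogue of the Haskell fix above, using the standard unicodedata module; the category codes Cc, Cn, and Cs correspond to Control, NotAssigned, and Surrogate):

```python
import unicodedata

def should_consider(ch: str) -> bool:
    # Skip control characters, unassigned codepoints, and surrogates;
    # lone surrogates in particular cannot be encoded to UTF-8, which is
    # what triggered the hPutChar "invalid character" error.
    return unicodedata.category(ch) not in ("Cc", "Cn", "Cs")
```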
vty-emoji-demo consuming 100% CPU and consuming memory is more surprising as the program is very simple and is supposed to print a fixed length text and exit upon pressing Esc, as the other demo in the code of vty.
Could you tell me the versions of a few components to try and narrow the bug down and reproduce it: versions of distribution, terminfo library, terminal emulator and ghc compiler. Could you also tell me the encoding/locale you are using? Do you have any idea which part of the code consumes 100% CPU and what it allocates in memory?
I did some digging and it looks like the problem arises in EmojiDemo.hs when showEmoji goes to call wcswidth on a particular value of e. The value is "\128992" and the value of wcswidth e is <PHONE_NUMBER>, which results in replicate (wcswidth e) '.' and the corresponding CPU/memory behavior.
That value is "\x1F7E0" -- Orange Circle. Even if I comment that one out, others also cause this issue for me.
Distribution: macOS Catalina 10.15.2
Terminfo: terminfo-<IP_ADDRESS>
Terminal emulator: iTerm2 3.3.7
GHC: 8.6.5
Ah I see. Not so fixed length string after all! I will have a look and try to fix. It looks like some integer overflow. Thanks for looking into this.
@jtdaugherty I fixed the two bugs you mentioned and squashed/refactored some of the commits.
I am a bit confused though as to why wcswidth "\128992" would return <PHONE_NUMBER>/-1 in your case. It shouldn't. Can you check that with the new code you get the correct value (2)?
The fixes seem to have improved things. Both the width tester and emoji demo programs now seem to work. Thanks!
I do want to mention that I get some junk output from some of the wcwidth tests. It looks like this:
$ vty-wcwidth-tester
02fa1d: 𪘀^[[36;11R
(The output at the end of the line after the wide character changes periodically as the program runs through the tests.)
This is probably ok as long as you get a sensible output.txt file at the end. It probably has to do with how the terminal interprets the escape sequences. Can you share with me this file that got generated on your machine after the program finished? vty-wcwidth-tester is supposed to do something very similar to the code from @glguy: https://gist.github.com/glguy/a802eaaac5a00ab41e3c3ca33507634b
Closing as per my comment on https://github.com/jtdaugherty/vty/issues/175. Thank you again for your work on this!
|
GITHUB_ARCHIVE
|
If you've ever looked for a way to share your special moments with loved ones, you've probably come across Artifact Uprising. They enable anyone to create high-quality framed prints and photo albums online. Started in 2012, they've grown by focusing on high-quality design that lasts a lifetime.
At Artifact Uprising, they had been capturing analytics using Google Tag Manager and Segment. They had been plagued by data issues that caused the company to lose trust in their analytics: everything from inconsistent data across platforms to development changes breaking analytics in production.
Carly, who is responsible for Business Intelligence at Artifact Uprising, wanted to solve this problem once and for all and enable the company to use data to drive growth. At Artifact Uprising, analytics is used not just for business intelligence but for marketing, personalization, and advertising.
While Carly initially tried to clean things up using a spreadsheet, this quickly got out of hand, and she went looking for a better solution to help govern analytics.
Her primary goal was to find a solution that:
Carly says, "We wanted to improve our overall data quality and standardize how we capture analytics across our products. This was important to enable all teams with clean data." Once Carly discovered Iteratively, she kicked off a pilot in late 2019.
At the same time, Artifact Uprising was rewriting its website and this provided an excellent opportunity to rethink their analytics taxonomy. Carly and the team spent a few weeks migrating their analytics to Iteratively and adding instrumentation across platforms. A few of the features that helped were:
Sergio Mendoza, a developer at Artifact Uprising, had the following to say, "Iteratively has not only taken the guesswork out of our analytics process, but provides an indisputable source of truth that is visible by our entire organization. Any data inconsistencies are immediately caught in our CI/CD pipeline before getting to production which prevents headaches and awkward conversations down the line. The developer toolkit ties in nicely with our existing workflow such that we are able to see exactly what changes were made within our codebase."
Another added benefit that Sergio mentioned was how Iteratively improved the developer relations with the data analytics team. He says, "Now we know exactly what data analysts want to see, and how they plan to use the data without the added time spent having a back and forth through different communication tools." Now analysts can spec out changes to the tracking plan and the team gets notified with exactly what data they need to capture.
Now that Artifact Uprising has Iteratively in place, Carly estimates that they're saving 6 hours a week as a team and preventing future analytics bugs from impacting the business financially.
"With Iteratively, we have a process that's easy for the entire team and ensures that we're capturing good data that we trust."
Director of Business Intelligence
Carly and the team now have a single source of truth for analytics across the organization that enables everyone with clean data that they can trust. Better yet, they now have a workflow that will help them scale and iterate on their analytics as their business evolves.
|
OPCFW_CODE
|
Only thing i did was activate dx11 and set tessellation to flat. Wonder why its running everything for me in 64 bit. I did not click on 64 bit exe, i clicked the 32 bit exe to start the game. Here is my shortcuts target.
C:\UDK\UDK-2014-02\Binaries\Win32\FOPB.exe < Win32 folder. I remember yesterday starting the game checking the header in windowed mode and it said 64 bit dx 11. i was shocked, as i knew i clicked the 32 bit exe. Once i get the game to work again, i will check its header in windowed mode. See what version i’m running.
I’m checking this out right now, yesterday it ran fine. Today i’m getting this error when trying to load. lol, my pak got too big. 1.2 gig :). Now i know how big i can take it before it causes me errors. I could just increase my virtual memory on my disk but that is not the proper fix. Plus it is taking 2 to 4 minutes to compile shaders in that pak before anything will start.
[0107.09] Log: Compressing '..\..\UDKGame\Content\LocalShaderCache-
PC-D3D-SM5_save.tmp' to '..\..\UDKGame\Content\LocalShaderCache-
[0107.33] Critical: appError called: Ran out of virtual memory. To prevent
this condition, you must free up more space on your primary hard disk.
[0107.33] Critical: Windows GetLastError: The system cannot find the file
[0121.44] Log: === Critical error: ===
Ran out of virtual memory. To prevent this condition, you must free up
more space on your primary hard disk.
Guess it is time to figure out how i am going to store these files so my paks stay under 300mb. This is going to be a pain to keep them under 300mb. Will have 1 map, 1 pak for its art, that is about all you can get into a pak with a 300 mb limit. 4k to 8k textures will eat up that much space.
Once i get game to load a map, i will check version in game. In menu it says 32 bit dx11. Yesterday i remember it saying 64 bit, which shocked me as i only have 1 shortcut on my desktop and it is 32 bit dx9.
Look at my pics with blue colored gun thats in game and its header says 32bit dx11. So it is in 32 bit when in game. Must been the other screen shots in the editor that i seen at 64 bit dx 11. You can see that in all the different pics in its header what mode its in.
I can load the editor just not the game, fixing now.
|
OPCFW_CODE
|
Integration testing with Maven has always been a bit painful. Often the integration tests are scattered alongside the unit tests, or a separate module has been created for them. Both of these approaches are problematic.
Scattering integration tests in the same directory structure as unit tests is an awful idea because integration tests and unit tests are completely different creatures, and this approach forces us to mix them together. This is a minor annoyance, but it has a nasty side effect: running unit tests from the IDE becomes a pain in the ass. When the tests are executed, our IDE wants to run all tests found in the test directory, which means that both unit tests and integration tests are executed. If we are “lucky”, this means that the tests take a bit longer to run, but often it means that the integration tests fail every time. Not nice, huh?
The second approach is a bit more feasible but, to be honest, it feels like total overkill. It forces us to turn our project into a multi-module project only because we want to separate our integration tests from our unit tests. Also, if our project is already a multi-module project and we want to write integration tests for more than one module, we are screwed. Of course we could always create a separate integration test module for each tested module, but it would be less painful to shoot ourselves in the leg instead.
This blog entry describes how we can separate the source directories of unit tests and integration tests while keeping them in the same module. The requirements of our build process are the following:
- Integration and unit tests must have separate source folders.
- Integration and unit tests must have separate configuration files.
- Only unit tests are run by default.
- It must be possible to run only integration tests.
- Failing integration tests must cause build failure.
We can implement these requirements by following these steps:
- Create a separate profile for development and integration testing.
- Create profile specific configuration files.
- Add a new source directory to our build.
- Configure the Surefire Maven plugin.
- Configure the Failsafe Maven plugin.
These steps are explained in more detail in the following sections.
Creating New Profiles for Development and Integration Testing
First, we have to create two profiles: one that is used for development and one that is used for running the integration tests. We have two goals: first, we want to disable integration tests when the development profile is used; second, we want to disable unit tests when the integration test profile is used. In order to achieve these goals, we will introduce three new properties:
- The skip.unit.tests property specifies whether unit tests are skipped or not.
- The skip.integration.tests property specifies if integration tests are skipped or not.
- The build.profile.id property identifies the used profile.
The configuration of the new build profiles has two steps:
- Add the default values of the specified properties to our POM file.
- Create the new profiles and extend the default property values in the integration-test profile.
The configuration of the Maven profiles is given below:
<!-- Used to locate the profile specific configuration file. -->
<!-- Only unit tests are run by default. -->
<!-- Used to locate the profile specific configuration file. -->
<!-- Only integration tests are run. -->
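A minimal sketch of these profiles, consistent with the comments above, could look like the snippet below (the profile ids dev and integration-test, and the exact property defaults, are assumptions rather than the original listing):

```xml
<properties>
    <!-- Used to locate the profile specific configuration file. -->
    <build.profile.id>dev</build.profile.id>
    <!-- Only unit tests are run by default. -->
    <skip.integration.tests>true</skip.integration.tests>
    <skip.unit.tests>false</skip.unit.tests>
</properties>

<profiles>
    <!-- The development profile keeps the property defaults above. -->
    <profile>
        <id>dev</id>
    </profile>
    <!-- The integration-test profile overrides the defaults. -->
    <profile>
        <id>integration-test</id>
        <properties>
            <build.profile.id>integration-test</build.profile.id>
            <!-- Only integration tests are run. -->
            <skip.integration.tests>false</skip.integration.tests>
            <skip.unit.tests>true</skip.unit.tests>
        </properties>
    </profile>
</profiles>
```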
Creating Profile Specific Configuration Files
In order to create the profile specific configuration files, we have to use a concept called resource filtering. If you are not familiar with this concept, you might want to check out my blog entry, which explains how profile specific configuration files are created. The configuration of resource filtering has two steps:
- Configure the location of the configuration file that contains profile specific configuration (The value of the build.profile.id property identifies the used profile).
- Configure the location of the resource directory.
The required Maven configuration is given below:
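A sketch of this filtering setup follows (the profiles/${build.profile.id}/config.properties path is an assumption about the project layout, not taken from the original listing):

```xml
<build>
    <filters>
        <!-- Points to the profile specific configuration file. -->
        <filter>profiles/${build.profile.id}/config.properties</filter>
    </filters>
    <resources>
        <!-- Enables filtering for the standard resource directory. -->
        <resource>
            <filtering>true</filtering>
            <directory>src/main/resources</directory>
        </resource>
    </resources>
</build>
```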
Adding New Source Directory for Integration Tests
Since Maven does not support multiple test source directories, we have to use the Build Helper Maven plugin. This plugin has a goal called add-test-source that is used to add a test source directory to a Maven build. In order to add the source directory of our integration tests to our Maven build, we have to follow these steps:
- Ensure that the add-test-source goal of the Build Helper Maven plugin is executed in Maven’s generate-test-sources lifecycle phase.
- Configure the source directory of our integration tests.
The configuration of the Build Helper Maven plugin is given below:
<!-- States that the plugin's add-test-source goal is executed at generate-test-sources phase. -->
<!-- Configures the source directory of integration tests. -->
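Such a plugin declaration might be sketched as follows (the src/integration-test/java directory is an assumption; adjust it to your own layout):

```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>add-integration-test-sources</id>
            <!-- Runs at the generate-test-sources lifecycle phase. -->
            <phase>generate-test-sources</phase>
            <goals>
                <goal>add-test-source</goal>
            </goals>
            <configuration>
                <!-- The source directory of our integration tests. -->
                <sources>
                    <source>src/integration-test/java</source>
                </sources>
            </configuration>
        </execution>
    </executions>
</plugin>
```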
Configuring the Surefire Maven Plugin
We will use the Surefire Maven plugin to run our unit tests. We can configure this plugin by following these steps:
- Configure the plugin to skip unit tests if the value of the skip.unit.tests property is true.
- Exclude our integration tests. We will assume that the name of each integration test class starts with a string “IT”.
The configuration of the Surefire Maven plugin is given below:
<!-- Skips unit tests if the value of skip.unit.tests property is true -->
<!-- Excludes integration tests when unit tests are run. -->
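Putting the two steps together, a sketch of the plugin configuration could look like this (the **/IT*.java pattern follows the naming convention stated above):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <!-- Skips unit tests if the value of skip.unit.tests property is true -->
        <skipTests>${skip.unit.tests}</skipTests>
        <!-- Excludes integration tests when unit tests are run. -->
        <excludes>
            <exclude>**/IT*.java</exclude>
        </excludes>
    </configuration>
</plugin>
```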
Configuring the Failsafe Maven Plugin
The Failsafe Maven plugin is used to execute our integration tests. We can configure it by following these steps:
- Configure the plugin to run its integration-test and verify goals.
- Configure the plugin to skip integration tests if the value of the skip.integration.tests property is true.
The configuration of the Failsafe Maven plugin looks as follows:
<!-- States that both integration-test and verify goals of the Failsafe Maven plugin are executed. -->
<!-- Skips integration tests if the value of skip.integration.tests property is true -->
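A sketch of such a Failsafe configuration could be:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <executions>
        <execution>
            <!-- Both integration-test and verify goals are executed. -->
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
            <configuration>
                <!-- Skips integration tests if skip.integration.tests is true -->
                <skipTests>${skip.integration.tests}</skipTests>
            </configuration>
        </execution>
    </executions>
</plugin>
```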
Running Unit and Integration Tests
That is all. We have now configured our pom.xml and it's time to run our tests. The commands used to run unit and integration tests are described below:
- We can run our unit tests by executing the command mvn clean test on the command line.
- Our integration tests are executed by running the command mvn clean verify -P integration-test on the command line. If we want to run our integration tests from our IDE, we have to manually mark the source directory of our integration tests as a test source root directory.
I have also created a very simple example project that contains one integration test and one unit test. This example project can be used to demonstrate the concept that is described in this blog entry. As always, the example project is available at Github.
|
OPCFW_CODE
|
Revisit a merged PR which broke our Travis CI and the support for Caddy v2
See details at: https://github.com/casbin/caddy-authz/pull/7#issuecomment-757506634
@closetool can you work on this?
ok, I'll try to fix it
I guess the go version in travis.yml should be 1.14.x at least
and the coveralls service should be opened
@closetool coveralls is enabled: https://coveralls.io/github/casbin/caddy-authz?branch=master
@hsluoyz yes, but some packages require go version over 1.14 like github.com/caddyserver/certmagic
@closetool good to know!
Please make a PR to fix the Go versions here: https://github.com/casbin/caddy-authz/blob/master/.travis.yml#L6
We usually test against several popular Go versions including tip. Please refer to Go Casbin repo for it.
@closetool merged: https://github.com/casbin/caddy-authz/pull/9
@closetool good to see CI recovered! But I saw the code coverage has dropped to 0%, can you investigate why?
sure
@hsluoyz I compared v2.0.0 and v1.0.2, and I found there's no authz_test.go in v2.0.0
perhaps that's why coverage dropped to 0%
Good find! I think this PR: https://github.com/casbin/caddy-authz/pull/7/files deleted authz_test.go. We need to add it back and make it right in the new Caddy v2 scenario, as our code needs testing to make it behave as expected.
ok, I'll work on it
|
GITHUB_ARCHIVE
|
Why wouldn't a shower hose/shower head come with a rubber washer?
I have a shower sprayer in my shower. There's the bracket that's attached to the shower arm and a hose attached to the bracket, with the sprayer on the other end of the hose. I had put plumbers tape around all threaded parts to help it not leak.
My shower hose is 6 ft long if that matters. For the past 4 years, there was no leaking but recently it started dripping from the bottom of the nut. I unscrewed the nut, expecting to find a rotten rubber washer and get a replacement. However, to my surprise, there was no rubber washer. The leak I'm seeing is where the nut meets the hose (I can take a photo if I'm not clear enough).
I checked on the sprayer side and it too did not have a rubber washer, yet it was not leaking on the sprayer side.
I then went to Amazon and Home Depot to look at shower sprayers and shower hoses and it looked like most shower hoses and sprayers had rubber washers. There might have been 1-2 that didn't.
I found this link on installing a shower head and step 4 says:
If your shower head did not come with a washer, skip this step. If it
did come with a rubber washer, insert it into the shower arm
connection nut and push it down flat.
So back to my initial question, why wouldn't a shower hose/sprayer/head come with a rubber washer?
EDIT: Here are some photos to hopefully demonstrate better what I'm trying to ask/describe.
This is a pic of the setup. I pointed to where the drip/leak is coming from, which is at the bottom of the nut on the shower arm side.
Here are pictures of the hose after I've removed it. As you can see, both ends have no rubber washer. If I can re-use this hose by getting a replacement washer, I will. But if the original design was to not need or use a washer, then I'll just toss it.
I tried to expose both ends of the hose to show what it looks like.
Does your sprayer fit in a cradle so it can be used as a fixed head or can be hand-held?
@JimStewart, the sprayer can be handheld but it can also fit in the bracket and used as a fixed head.
Is it leaking at the bottom of the nut when the shower is on or only right after the shower is turned off?
@JimStewart, yes, it's leaking at the bottom of the nut when the shower is on. When it's off, there's no leaking.
You should be able to get washer seals for that. I think the perforated disc is a flow restrictor to limit the flow to a set amount, usually given in small print on the face of the head. Maybe the flow restrictor also acted as a seal but got compressed over time.
The black magic is the Teflon tape. NPT threaded pipes and hoses are probably what you’ve got for the shower fixture and NPT threads don’t deform as they are tightened to make a seal.
So you would usually see a rubber washer OR you would use Teflon tape. If it’s leaking and there’s no washer, you should put new Teflon tape on the threads and that should hopefully take care of it.
Thanks for your interest in my question. So, it's not leaking where the threads are. I don't know how to explain without a video but the nut is cone shaped. Where it's leaking/dripping is at the bottom of the cone. I think normally, if there's a rubber washer, the washer would prevent this drip but the funny thing is I don't remember it leaking for the last 4-5 years until now and it has no washer which is why I'm so confused.
@Classified post a photo to help us see. Sometimes a valve fails and starts leaking after years of use. It’s possible it’s just got a part inside that’s worn out.
Thanks again for continuing to try to help me. I posted some photos but I can take more if these are not clear. I tried to annotate where the leak is coming from. Hopefully it's clear enough.
|
STACK_EXCHANGE
|
Creating an archive - Save results or request them every time?
I'm working on a project that allows users to enter SQL queries with parameters, that SQL query will be executed over a period of time they decide (say every 2 hours for 6 months) and then get the results back to their email address.
They'll get it in the form of an HTML-email message, so what the system basically does is run the queries, and generate HTML that is then sent to the user.
I also want to save those results, so that a user can go on our website and look at previous results.
My question is - what data do I save?
Do I save the SQL query with those parameters (i.e the date parameters, so he can see the results relevant to that specific date). This means that when the user clicks on this specific result, I need to execute the query again.
Save the HTML that was generated back then, and simply display it when the user wishes to see this result?
I'd appreciate it if somebody would explain the pros and cons of each solution, and which one is considered the best & the most efficient.
The archive will probably be 1-2 months old, and I can't really predict the amount of rows each query will return.
Thanks!
Specifically regarding retrieving the results from queries that have been run previously I would suggest saving the results to be able to view later rather than running the queries again and again. The main benefits of this approach are:
You save unnecessary computational work re-running the same queries;
You guarantee that the result set will be the same as the original report. For example if you save just the SQL then the records queried may have changed since the query was last run or records may have been added / deleted.
The disadvantage of this approach is that it will probably use more disk space, but this is unlikely to be an issue unless you have queries returning millions of rows (in which case html is probably not such a good idea anyway).
If the query returns millions of rows (it doesn't, just out of curiosity), then what would be a good format to use rather than HTML?
This really depends on what format whoever is analysing the data find easiest to work with, but as HTML is intended to display data in web browsers, this wouldn't be a very useful way to look at really large data sets.
Typical approaches would be some text delimited format such as a .csv file or xml or json. A text delimited file such as csv would be the sparsest i.e. no field names with every row, so would probably be the most sensible, xml or json are more useful when the data is hierarchical and cannot be represented well in a single table.
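As a sketch of the delimited option (the file name, field names, and rows below are illustrative, not from the original discussion):

```python
import csv

# Hypothetical result set already fetched from the database.
rows = [
    {"id": 1, "total": 9.50},
    {"id": 2, "total": 12.00},
]

# One header row followed by the data rows -- far sparser than
# repeating field names per record, as XML or JSON would.
with open("report_snapshot.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "total"])
    writer.writeheader()
    writer.writerows(rows)
```

The saved file can later be re-served to the user without re-running the query.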
The crucial difference is that if data changes, new query will return different result than what was saved some time ago, so you have to decide if the user should get the up to date data or a snapshot of what the data used to be.
If relevant data does not change, it's a matter of whether the queries will be expensive, how many users will run them and how often, then you may decide to save them instead of re-running queries, to improve performance.
If I were creating this type of application, then:
I would have some common queries, like get by current date, current time, date ranges, time ranges, and others based on my application, for the user to select easily.
Some autocompletions for common keywords.
If the data gets changed frequently there is no use saving the HTML; generating a new one is the better option.
|
STACK_EXCHANGE
|
Main
Issue number #122
This pull request introduces the Customizable Meal Options feature to the Alimento platform, allowing users to tailor their meals according to specific dietary preferences and restrictions, such as vegan, gluten-free, or low-carb options. This enhancement aims to provide a more personalized dining experience for our users.
Changes Made
Database Updates:
Created two new tables: meals and dietary_preferences to store meal information and user preferences.
Created an orders table to save user orders with customizations.
Frontend Updates:
index.php:
Implemented a display of available meals with buttons linking to the customization page.
customize_meal.php:
Developed a user-friendly interface allowing users to select dietary preferences and add custom ingredients.
Included form validation using JavaScript to ensure users provide necessary information before submission.
Backend Logic:
process_order.php:
Handled the form submission to process orders with selected preferences and custom ingredients.
Integrated error handling to manage potential issues during order processing.
Styling:
Created a style.css file to provide consistent styling across pages and enhance the visual appearance of meal options and customization forms.
Included Bootstrap for responsive design and quick styling of components.
JavaScript Enhancements:
Added a script.js file for form validation and user feedback during the customization process.
Benefits
Enhanced User Experience: Users can customize their meals based on dietary needs, leading to a more inclusive platform.
Increased User Satisfaction: Tailored meal options are likely to improve customer satisfaction and loyalty.
Broader Market Reach: Catering to various dietary lifestyles attracts a wider audience, including those with specific dietary restrictions.
Type of Change
[ ] Bug fix
[✓] New feature
[ ] Documentation update
[ ] Refactoring
[ ] Other (please describe):
Checklist
[✓] I have tested my changes and ensured they work as expected.
[✓] I have added necessary documentation (if applicable).
[✓] I have reviewed my code for style and linting issues.
[✓] I have ensured that any dependent changes are merged before this PR.
@snehas-05, have you updated the db schema as well?
Yes I have uploaded my whole code and made all the necessary changes and combined them in a separate folder for better understanding.
@snehas-05 I see you have added a sql.php for updating the schema. Instead of that can you export the schema file from your phpmyadmin dashboard and replace it with the current homemade.db file ?
ok give me some time I will do the required changes and commit them.
@snehas-05, Have you made the changes ? if yes then please commit the changes and push the code.
I'm so sorry I was busy the whole day in college. I will definitely do it tomorrow. I hope you will understand.
No worries, take your time. Please make sure you complete it before the end of gssoc deadlines
Yes sure
Hey @Vimall03 I have committed the changes. Do I need to make a new pr?
@snehas-05, No you don't have to open a new PR. I'll merge this. I'll assign the labels once i finish reviewing the code.
Hey @snehas-05 Can you share some screen shots of your implementation ?
I'm so sorry as I can't share screenshots because for that I have to update the whole code of the website on my system and that's very complicated.
But I assure you my code is correct and will definitely work.
@Vimall03 please provide me with the necessary labels otherwise my points will not be counted
@snehas-05, I won't be able to assign this any labels since this PR does not have any functionality. The code you have pushed is not relevant to the project until it is functional. I'm sorry, I will revert this PR. Thank you for understanding.
|
GITHUB_ARCHIVE
|
Getting error on setup-miniconda@v2 with no base environment activated
Using the below yaml:
name: QA - Precommit Checks
on: [push]
jobs:
Precommit-Checks:
runs-on: self-hosted
steps:
- run: echo "Checking branch precommit hooks ..."
- run: echo "Now running on a ${{ runner.os }} server ..."
- run: echo "The name of your branch is ${{ github.ref }} ..."
- name: Check out repository code
uses: actions/checkout@v2
- name: Setup Miniconda
uses: conda-incubator/setup-miniconda@v2
with:
activate-environment: sigma-sealevel
environment-file: environment.yml
python-version: 3.10
auto-activate-base: false
- run: |
pre-commit run --all-files
I get a very strange error at Setup Miniconda with
Run conda-incubator/setup-miniconda@v2
Gathering Inputs...
Creating bootstrap condarc file in /home/ubuntu/.condarc...
Ensuring installer...
Error: No installed conda 'base' enviroment found at
Even with the enviroment typo, not sure what that is about, some guidance would be appreciated!! Thank you
Hi, I've encountered identical error but when I've specified particular url of installer to use with installer-url parameter it seemed to work ok. I don't know why default setting does not work but this may be a temporary workaround.
I have encountered the same issue on a self-hosted Github runner (Ubuntu Azure VM).
In my case, I made sure to install Miniconda on the VM and have the env variable CONDA and PATH properly set by adding the script /etc/profile.d/set_conda_envs.sh with the following content:
export CONDA="/home/adminuser/miniconda3"
export PATH="$CONDA/condabin:$PATH"
If I execute a step where I run the commands:
- name: Check Conda local install
run: |
echo $PATH
echo $CONDA
conda -h
conda env list
everything works as expected, but when I use conda-incubator/setup-miniconda@v2 I get the error:
Run conda-incubator/setup-miniconda@v2
Gathering Inputs...
Creating bootstrap condarc file in /home/adminuser/.condarc...
Ensuring installer...
Error: No installed conda 'base' enviroment found at
@czyzi0: what does your conda-incubator/setup-miniconda@v2 step look like? What did you put in the installer-url parameter?
My step with setup-miniconda is:
- name: Set up env
  uses: conda-incubator/setup-miniconda@v2
  with:
    installer-url: https://repo.anaconda.com/miniconda/Miniconda3-py38_4.11.0-Linux-x86_64.sh
    activate-environment: envname
    environment-file: environment.yml
    auto-activate-base: false
And it works fine like that.
Hope this helps.
@mist2410 could you have a default conda installation in your runner so that can be used?
@goanpeca , I tried to install miniconda on my runner, but conda-incubator/setup-miniconda fails to find it, even if I try to properly set the env variables CONDA and PATH (I tested it by printing them out in my Github workflow).
I get the same error mentioned by @ajcost and me in our reports (https://github.com/conda-incubator/setup-miniconda/issues/218#issuecomment-1128675476).
@goanpeca , yes, I tried (see https://github.com/conda-incubator/setup-miniconda/issues/218#issuecomment-1128859867) but a slightly different custom installer than the one reported in the documentation.
I followed the suggested version from @czyzi0 (see https://github.com/conda-incubator/setup-miniconda/issues/218#issuecomment-1128728941).
Should I try something different? Do you have other suggestions?
Thanks @micstadsb. I guess the solution would be to remove the lines between # >>> conda initialize >>> when the process finishes... or when it starts (or both)
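A sketch of that cleanup, demonstrated on a throwaway file (in practice PROFILE would point at the runner's real ~/.bashrc; the marker comments are the standard ones conda init writes):

```shell
# Strip the block that `conda init` appends between its marker comments.
PROFILE=sample_bashrc
printf '%s\n' 'export PATH=...' \
  '# >>> conda initialize >>>' \
  'eval "$(conda shell.bash hook)"' \
  '# <<< conda initialize <<<' \
  'alias ll="ls -l"' > "$PROFILE"
# Delete everything between (and including) the two marker lines.
sed -i '/# >>> conda initialize >>>/,/# <<< conda initialize <<</d' "$PROFILE"
cat "$PROFILE"
```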
The URL version fails when you restart a job (it tries to re-download miniconda and somehow gets confused).
This worked for me:
- uses: conda-incubator/setup-miniconda@v2
with:
environment-file: ...
activate-environment: ...
python-version: ...
miniconda-version: latest
auto-activate-base: false
- shell: bash --login {0}
run: ...
|
GITHUB_ARCHIVE
|
2-Methoxy propylene (MOP) Reactivity Analysis
This project holds the files of the paper entitled Graph-based machine learning predicts and interprets diagnostic isomer-selective ion-molecule reactions using tandem mass spectrometry. It is divided into two directories that hold the two types of models described in this work: one based on quantum chemical proton affinity calculations (QM), and the other based on graph-based machine learning (ML). Applications of such methods include identification of impurities and drug metabolites in complex mixtures.
Quantum Mechanics (QM)
This model is based on calculating the proton affinity (PA) of a molecule and comparing it to that of MOP. Note that since protonation of a compound is always exothermic, the proton affinity is typically written as a positive quantity, making it equal and opposite to the free energy change of the reaction. If the PA of the analyte is greater than that of MOP, this model predicts that the diagnostic ion will form; otherwise it will not. To calculate this value, an isodesmic reaction scheme is used; a description is given below.
Isodesmic reaction scheme
An isodesmic reaction is described as a reaction where the bonds broken and formed are of the same type. In this case, the 'bond type' is that between a proton and an atom with an available lone pair of electrons. This bond is 'formed' in the analyte being studied and 'broken' in a reference compound whose proton affinity has been measured previously. This allows us to use a free energy cycle (shown below) to calculate an accurate measure of the proton affinity for the analyte.
Free energy cycle for the isodesmic reaction method
In the following equations, the f superscript denotes a quantity calculated using a QM methodology; it is equivalent to the free energy of formation of the given species. The A subscript denotes a quantity for the analyte and R a quantity for the reference compound whose proton affinity is known. The protonated forms of these compounds are notated AH+ and RH+, respectively. The known (measured) quantity is denoted by an M superscript. Similarly, the C superscript denotes a quantity calculated using Density Functional Theory (see the next section for details).
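In symbols, the cycle can be sketched as follows (a reconstruction from the definitions above, assuming the isodesmic reaction A + RH+ → AH+ + R; signs should be checked against the original paper):

```latex
% Isodesmic reaction: A + RH+ -> AH+ + R
\begin{align}
  \Delta G^{C}_{\mathrm{rxn}} &=
      \left( G^{f}_{AH^{+}} + G^{f}_{R} \right)
    - \left( G^{f}_{A} + G^{f}_{RH^{+}} \right) \\
  \mathrm{PA}_{A} &= \mathrm{PA}^{M}_{R} - \Delta G^{C}_{\mathrm{rxn}}
\end{align}
```

A negative reaction free energy (the analyte is more basic than the reference) thus yields a proton affinity higher than the reference value, as expected.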
Performing the calculation
The Perl script calculate_proton_affinity_isodesmic.pl can be used to perform this calculation. The arguments are as follows: the known proton affinity (in kcal/mol) of a reference analyte, the log file for the neutral reference analyte, the log file for the protonated reference analyte, the log file for the neutral analyte, and the log file for the protonated analyte. The reference analyte should be protonated on the same atom as the analyte (for example, ammonia should be used to calculate the proton affinity of an amine). An example invocation is given below:
perl calculate_proton_affinity_isodesmic.pl 204.0 ammonia.log ammonia_p.log 01.log 01_p.log
The result will be in kcal/mol. The NIST WebBook is a great source of proton affinity values. Ammonia is used for all nitrogen protonations, methanol is used for all oxygen protonations, benzene is used when an aromatic ring is protonated, and 2-methyl propene is used for calculating the proton affinity of MOP.
Density Functional Theory Details
To obtain values for the free energies of formation for all species shown above, we used the program Gaussian16, the M06-2x functional, and the 6-311++G(d,p) basis set. Since previous calculations used the B3LYP functional and 6-31G(d) basis set, we have included reference calculations for the analytes in question. These two types of calculations are labeled as large_basis_set and small_basis_set, respectively. The Perl script check_for_negative_freq.pl is provided to ensure that all minimized structures do not contain negative frequencies.
Machine Learning (ML)
The ML directory contains all the training data, testing data, machine learning code, and machine learning results after bootstrapping. Each subdirectory holds a subset of these components.
This directory contains 3 SMILES files:
first36.smi The original 36 reactions used to train all models.
test_set_14.smi The 14 test reactions.
full_training_set.smi A concatenation of the above two files.
The scripts in this directory are written in a combination of Julia and Python. The Julia scripts require the DecisionTree.jl and CodecBase packages to function properly. The Python scripts require [RDKit](http://rdkit.org/). A description of each script is given below:
convert_to_morgan_custom.py Takes two arguments: the SMI file to fingerprint and the radius for the Morgan algorithm. It outputs compressed fingerprints of bit length 2048 in CSV format to stdout.
make_fp_svg_custom.py Takes three arguments: the SMI file used for input, the radius used for fingerprinting, and the bit for which you want to make an SVG image.
make_predictions.jl Used to create the decision tree models and bootstrap them. It expects two files to be present: train.csv and test.csv. This script should be run in a unique directory for each experiment, as it outputs several files (described in the next section). It should be run directly by the julia interpreter, not via the include mechanism.
bootstrap.jl, decode.jl, and prepare_data.jl: these are support files for the make_predictions.jl script and should be left in place. Their names are self-descriptive.
model.jl and decision_tree.jl: legacy files that can be removed and are provided for reference.
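The folding step behind a fixed-length Morgan fingerprint can be sketched without RDKit. This is a toy Python illustration of hashing substructure identifiers into a 2048-bit vector; the identifiers and function name are made up for the example:

```python
def fold_fingerprint(substructure_ids, n_bits=2048):
    # Map arbitrary (possibly very large) substructure hashes onto a
    # fixed-length bit vector by taking each hash modulo the bit length.
    bits = [0] * n_bits
    for sid in substructure_ids:
        bits[sid % n_bits] = 1
    return bits

fp = fold_fingerprint([7, 2055, 123456789])
print(sorted(i for i, b in enumerate(fp) if b))  # → [7, 1301]
```

Note that 7 and 2055 collide onto the same bit: folding trades a small collision rate for a compact, fixed-width representation suitable for CSV output.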
The train36_predict14 subdirectory
This directory contains results for four different radius cutoffs using the decision tree model. The input files (train.csv and test.csv) are created with the convert_to_morgan_custom.py script. All results in these directories (one per fingerprint radius) are created by the make_predictions.jl script; a brief description of each output is given below:
results.csv Each column represents a bootstrapped prediction for the test compounds (14 in total), the self score of the model (accuracy on the training set), the Kappa value for the training set, and the diagnostic product ratio cutoff used to train the model. Each row uses a different diagnostic product ratio cutoff.
set_bits.csv The bits set during fingerprinting for the training set. This file is required to run other machine learning models.
set_bits_test.csv The same bits as above, but for the test set. Also required for other models to run.
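The bootstrapping performed by make_predictions.jl can be sketched in Python as a language-neutral illustration of resampling with replacement; fit_predict stands in for training a decision tree and predicting on held-out data, and is not a real function from this repository:

```python
import random

def bootstrap_predictions(train, fit_predict, n_rounds=100, seed=0):
    # Draw n_rounds resamples of the training set (with replacement)
    # and collect one prediction per resample.
    rng = random.Random(seed)
    preds = []
    for _ in range(n_rounds):
        resample = [train[rng.randrange(len(train))] for _ in range(len(train))]
        preds.append(fit_predict(resample))
    return preds

# Toy model: "predict" the mean of the resampled training labels.
preds = bootstrap_predictions([0, 1, 1, 0, 1], lambda s: sum(s) / len(s), n_rounds=10)
print(len(preds))  # → 10
```

The spread of the collected predictions gives an empirical uncertainty estimate for each test compound, which is what the per-column variation in results.csv captures.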
The other_models subdirectory
This directory contains R code used to generate the results for models other than decision trees. commands.R requires three libraries: tidyverse, parallel, and caret. It must be run after the make_predictions.jl script, as it requires the set_bits.csv and set_bits_test.csv files for all fingerprint radii. This script creates CSV files for all the models tested in our work: glm.csv, knn.csv, pls.csv, and reglog.csv. Additionally, it reads the results.csv created by the Julia scripts to produce a dt.csv file. Each file has the following columns: compound ID, radius value, and the 9 cutoff values. Each row contains the test compound ID, the radius used, and the probability calculated by each model trained with a cutoff. The Kappa value for each model is also reported as a row. These files compose the tables shown in the supporting information of the paper.
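The Kappa statistic reported alongside each model is Cohen's kappa. A minimal Python sketch of the computation, written from the standard definition rather than taken from the repository's R code:

```python
def cohens_kappa(y_true, y_pred):
    # Observed agreement minus chance agreement, rescaled so that
    # 1.0 is perfect agreement and 0.0 is chance-level agreement.
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    p_exp = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

print(cohens_kappa([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.0 (chance level)
```

Kappa is preferred over raw accuracy here because the diagnostic product ratio cutoffs produce class-imbalanced labels, for which accuracy alone is misleading.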
Problem with Internet Connection
This may or may not be a router problem, but recently I've been having issues staying connected to the Internet. The whole time I'm on the net, at random moments, my connection will just die. It acts as if I'm not connected at all, telling me all the pages I wish to view have timed out or whatever. After a bit of waiting, everything goes back to normal. Then it will just quit working again. This happens about every 15 minutes... I know it's not my internet provider; my friends and I have the same provider and they have no problems with it, just me. So I figured it could be a router problem, and if it is (shrug) I need some help with it. lol I'm a total newb when it comes to routers and networking, so if anyone has any suggestions I'm open to them.
Thanks in advance.
11-04-05, 02:07 PM
:( What type of internet connection are we talking about: cable, wireless, DSL? What router or switch are you using, and do you have a firewall on the router?
The first place I would check is your ISP: see if they are doing any maintenance. Then I would check my firewall logs to see if you are getting port scanned, and then I would run some spyware checker software to make sure my machine was clean.
If the above comes back clean, use ping during one of those slowdowns. Open up the Windows command line and type ping www.google.com or your favorite website. If it times out, it is probably your ISP; then use tracert on the same address and see if you get any problems.
Also, if you have another computer on your network, such as a laptop, ping that as well to make sure it's not your network card or your wireless card. If your home network comes back clean with no slowness, then it must be your ISP.
Just because your buddy uses the same ISP doesn't mean your connection is fine; it is possible to use different DNS servers for lookup. Traceroute will tell you at which hop the connection is stopping.
Also, ISPs are known to oversubscribe and do have connection problems at certain times of day. If it happens at the same time of day, I would suspect your ISP.
Try your connection without the router.
Gnu_Raiz: My connection is DSL. I'm using a cheap Network Everywhere NR041 router. Not 100% sure about the firewall on it, though. I ran ping like you said during slowdowns. Nothing works. The home network is fine, so I guess it's my ISP.
What worries me is what Gn00b told me to do: run it without the router. Well, when I use just the modem it doesn't work, period. lol I can't connect to the net with just my modem. It will connect with the router, but again, soon after connecting it will die. So... now what? :afro2: lol
MX records being used to block spam
Posted on Apr 20, 2009 by Paul White
If you run any kind of bulk mail server, you have probably already dealt with Yahoo's temporary blacklisting of your IP. They force you to fill out a bunch of forms asking about your setup. For one of my clients we are running a unique setup:
- Windows 512 MB VPS - for the website, newsletter, and automated emails
- Linux shared hosting account - for SmarterStats, SmarterMail (corporate emails), and MySQL

A 512 MB VPS just isn't big enough for everything, and it's more cost-effective to offload services to shared hosting.
What makes this unique is that we are running two mail servers:
- SmarterMail on the shared Linux box (the MX record points here)
- Virtual SMTP on the VPS (no MX record)
We do have reverse DNS set up correctly, along with SPF records for both servers.
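An SPF record covering two sending servers might look like the following DNS TXT record. This is a hedged example with placeholder hostname and documentation-range IPs, not the author's actual record:

```
example.com.  IN TXT  "v=spf1 a mx ip4:203.0.113.10 ip4:203.0.113.20 -all"
```

Here `mx` authorizes the MX host, the two `ip4:` mechanisms authorize the additional sending server, and `-all` tells receivers to reject mail from any other source.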
The problem is that after we do a blast to our 6000 recipients, we are only able to track about 450 of them. The first time we did a blast we had almost 700 confirmed reads. The number has been dropping slowly over time.
After a while I started to assume that most of the emails were getting blocked or ending up in people's spam folders, where they would be destined for deletion. I thought a 10% read rate was pretty good.
Recently, after setting up another client of mine, I was having trouble using the HMS list server; emails were not going through. This led me to believe that the IP of the list server had become blacklisted, or something similar that would cause the big mail servers to reject the emails. This client only had 60 emails in his list, so I set up his newsletter to relay through his SmarterMail server (which his MX record points to). Normally the hosting company recommends we not do this, but with such a small number of emails I figured it wouldn't affect the server. Amazingly, I was able to track that 40 of the 60 emails were opened within 24 hours of sending.
So sending bulk emails from the server that my MX record points to seems to get a much higher success rate. This leads me to believe that many mail servers now consider any email that comes from a non-MX mail server to be bulk or spam.
To test this I will be doing a blast with my larger client this week. I will be sending them through our MX pointed mail server, to see if we have a higher success rate.
I sent out our last newsletter via our MX based mail server and there was no improvement in the open rates. So I guess my theory has been busted.
from .Base import Base
from pegasus.check_sample_indexes import run_check_sample_indexes
class CheckSampleIndexes(Base):
"""
Check for index collision between 10x scRNA-seq index sets and CITE-Seq/hashing indexes.
This command can also be used to find the maximum number of mismatches allowed among HTO/ADT barcodes.
Usage:
pegasus check_indexes [--num-mismatch <mismatch> --report <n_report>] <index_file>
pegasus check_indexes -h
Arguments:
index_file Index file containing CITE-Seq/hashing index sequences, one per line. Multiple columns are allowed, but they must be comma-separated and the first column must be the index sequence.
Options:
--num-mismatch <mismatch> Number of mismatch allowed for each index sequence. [default: 1]
--report <n_report> Number of valid 10x GA indexes to report. Default is not to calculate valid GA indexes. [default: -1]
-h, --help Print out help information.
Outputs:
If --report is not set, <index_file> should include all scRNA-seq/CITE-seq/hashing indexes. This program first reports the minimum Hamming distance between any pair of indexes, as well as the maximum number of mismatches that can be set [(hamming_dist - 1) // 2]. If the maximum number of mismatches is smaller than <mismatch>, an index collision error message will be generated.
If --report is set, assume <index_file> only contains CITE-seq/hashing indexes. If there is no index collision within <index_file>, up to <n_report> valid 10x scRNA-seq indexes will be printed to the standard output.
Examples:
pegasus check_indexes --report 8 index_file.txt
"""
def execute(self):
run_check_sample_indexes(
self.args["<index_file>"],
n_mis=int(self.args["--num-mismatch"]),
n_report=int(self.args["--report"]),
)
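The mismatch bound described in the docstring can be sketched as follows. This is a standalone Python illustration of the (hamming_dist - 1) // 2 rule, not the run_check_sample_indexes implementation:

```python
def hamming(a, b):
    # Number of positions at which two equal-length index sequences differ.
    return sum(x != y for x, y in zip(a, b))

def max_allowed_mismatches(indexes):
    # The smallest pairwise Hamming distance bounds how many per-index
    # mismatches can be tolerated without risking a collision.
    d_min = min(hamming(a, b) for i, a in enumerate(indexes) for b in indexes[i + 1:])
    return (d_min - 1) // 2

print(max_allowed_mismatches(["AAAA", "AATT", "TTTT"]))  # → 0 (d_min = 2)
```

Intuitively, if two valid indexes are d apart, allowing more than (d - 1) // 2 mismatches would let a single read be within tolerance of both, making demultiplexing ambiguous.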
Add section for guix environments
Add support for showing the current guix environment.
For example:
➜ guix environment --ad-hoc git
substitute: updating substitutes from 'https://ci.guix.gnu.org'... 100.0%
The following derivation will be built:
/gnu/store/1gqvpr8axng8vz5r0kly7gs000cyggab-profile.drv
The following profile hooks will be built:
/gnu/store/1988yq4ipbiflbrpdq515hgv7dzxhbk3-manual-database.drv
/gnu/store/5mk8s3rvr2ldd4vksrv859k30k44kg6w-fonts-dir.drv
/gnu/store/pbv6l8dsq2ah8h95dar5ljf4vj6xysaw-info-dir.drv
/gnu/store/w71nwvq4nj6r4igqrqvlij2bfqbm52w3-ca-certificate-bundle.drv
building CA certificate bundle...
building fonts directory...
building directory of Info manuals...
building database for manual pages...
building /gnu/store/1gqvpr8axng8vz5r0kly7gs000cyggab-profile.drv...
~ via 🐐 g85cjc3dkgn1wwbq2nv8z5l252ig1jb4-profile
➜
Signed-off-by: Collin J. Doering<EMAIL_ADDRESS>
Description
An additional section was added: sections/guix.zsh. When a guix environment is entered, the environment variable GUIX_ENVIRONMENT is set to the guix store path for the profile, for example /gnu/store/g85cjc3dkgn1wwbq2nv8z5l252ig1jb4-profile. The added guix section simply shows the basename of $GUIX_ENVIRONMENT if the variable is set.
Screenshot
Thank you for contributing @rekahsoft. Please see CONTRIBUTING.md#sections
Will it clutter the prompt?
Having too much in the prompt looks ugly; a section shouldn't take up much space or be shown too often.
Good: 🚀 v1.2.3
Bad: 🚀 spasheship#c3BhY2VzaGlw
Please also consider the following questions:
Will it clutter the prompt?
Is it worth to be aware of it?
Will it slow down the prompt?
@salmanulfarzy I did consider these questions. Here are my thoughts:
Will it clutter the prompt?
Instead of showing the entire guix store path to the profile, only the basename is shown. Yes, this is longer than one would like, but it is essential information when using guix.
Is it worth to be aware of it?
Certainly useful when using guix to be aware of being in an environment that is not the normal user profile.
Will it slow down the prompt?
No. All that needs to be done is inspecting the GUIX_ENVIRONMENT variable and taking its basename, which takes a very small amount of time.
To address the concern of prompt clutter, I have shortened the guix store path to the environment which is displayed in the prompt to the first 8 characters. This makes sense for guix as the items in the store (like profiles) have the form /gnu/store/<base32-encoded-hash>-.... Specifically, environments will have store paths of the following form: /gnu/store/<base32-encoded-hash>-profile. Here the suffix -profile is not necessary to show in the prompt, nor is the entire base32 string, which is an encoded hash, so it is shortened similar to how git hashes are shortened.
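The shortening described above can be sketched in plain shell. This is an illustrative snippet, not the actual sections/guix.zsh code; the function name is made up:

```shell
# Show the first 8 characters of the base32 hash of the active guix profile.
guix_env_prompt() {
  [ -n "$GUIX_ENVIRONMENT" ] || return 0
  base="${GUIX_ENVIRONMENT##*/}"   # basename, e.g. <base32-hash>-profile
  printf '%.8s\n' "$base"          # keep only the first 8 characters
}

GUIX_ENVIRONMENT=/gnu/store/g85cjc3dkgn1wwbq2nv8z5l252ig1jb4-profile
guix_env_prompt   # prints: g85cjc3d
```

Using `printf '%.8s'` keeps the snippet POSIX-portable, while the real zsh section could equally use `${base:0:8}`.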
being in an environment that is not the normal user profile.
How often is this environment switched? I fail to see how base32 values would help identify the environment!
PS: New sections are on hold until we decide how we handle custom sections. See https://github.com/denysdovhan/spaceship-prompt/pull/443#issuecomment-447303543 and #491
An environment can be switched to at any time and this is a common usage of guix. When guix switches to an environment, it sets the environment variable GUIX_ENVIRONMENT to the profile path (which is an item in the guix store). Items in this store are always stored in /gnu/store/<base32-encoded-hash>-profile, and can be uniquely identified by the <base32-encoded-hash>. The shortened hash listed in the prompt can easily be used to find the profile in the guix store. Something like this would suffice:
ls /gnu/store | grep <shortened-base32-encoded-hash>
I chose the first 8 characters of the base32-encoded hash, which is enough to reliably determine the profile from the guix store (there are 32^8 = 1,099,511,627,776 possibilities, so the likelihood of collisions is low).
For more details regarding guix environment .. you can also visit its documentation, though I hope the above description will suffice.
PS: New section are on hold until we decide on how we handle custom sections. See #443 (comment) and #491
Thanks for pointing this out! Really looking forward to custom sections :)
Hey, thanks for contributing! I finally got my hands on this PR. This looks nice, but I think your use case is too narrow to include it in spaceship's core.
You can publish this section as an external section. Here is a guide how to do so: https://spaceship-prompt.sh/advanced/creating-section/