| url (string, 13–4.35k chars) | tag (string, 1 class) | text (string, 109–628k chars) | file_path (string, 109–155 chars) | dump (string, 96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k) |
|---|---|---|---|---|---|---|
https://appsource.microsoft.com/sr-latn/product/office/WA104379660?tab=Overview
|
code
|
Workday for Outlook allows you to complete simple Workday tasks from directly within Outlook and without launching Workday. For example, you can approve time off requests or view a sender’s worker profile all within your Outlook mailbox.
Use Workday for Outlook to:
Workday for Outlook requires a Workday application account. The app must first be enabled in "Workday Tenant Setup System" by your Workday administrator.
Workday for Outlook is available from the Microsoft Office Store at https://store.office.com/.
Workday for Outlook does not support Internet Explorer 9, Safari, or the Outlook client on OS X.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057973.90/warc/CC-MAIN-20210926205414-20210926235414-00475.warc.gz
|
CC-MAIN-2021-39
| 608
| 5
|
https://darianbjohnson.com/category/amazon-web-services/
|
code
|
Spending $200 on a smart lamp is not very smart when you can build one yourself with a simple and cost effective DIY project
A major manufacturer recently announced a smart lamp that will set you back about $200. While GE’s lamp is cool, neither my budget (nor my wife) supports a purchase of that size — so I set out to design and build a “smarter lamp” for more budget-conscious consumers.
In order to compete with GE, the design of my smart lamp needed to include voice activation by Alexa. Since my wife wanted the ability to control the lamp using a standard switch, the design also needed to be practical and functional.
To meet my needs, the smart lamp had to account for state — regardless of whether a voice command or a physical switch was used for power.
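A minimal sketch of that state requirement (hypothetical names and logic, not the actual build) is a single source-of-truth state that both input paths mutate:

```python
# Hypothetical sketch: one authoritative lamp state, updated the same way
# whether the trigger is an Alexa command or the wall switch.
class LampState:
    def __init__(self):
        self.is_on = False

    def voice_command(self, command):
        # e.g. an Alexa intent resolved to "on" or "off"
        self.is_on = (command == "on")

    def physical_toggle(self):
        # A standard switch can only toggle, so flip the current state
        self.is_on = not self.is_on

lamp = LampState()
lamp.voice_command("on")   # Alexa turns it on
lamp.physical_toggle()     # the wall switch turns it off again
```

Because both paths write through the same object, neither input can get out of sync with what the lamp is actually doing.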
Read more on Medium…
In my second IoT project, I tackle feeding my cats by voice commands.
My family owns three cats; for the most part, they are well behaved – unless they are hungry. When it’s time for them to eat, they get a little crazy – constantly meowing and running under/between our legs, or waking us up at night.
We used to keep extra food in their dishes, but they would just overeat – resulting in cat throw-up (which, without fail, I seemed to step in every morning on my way to the kitchen).
We’ve been living in this “claw-ful” situation for a few years, and never really considered resolving the problem. My oldest daughter suggested that we (and by we, she really meant me) build an automated cat feeder. I told her that I didn’t have the time to build one… but then, I figured, why not give it a try.
Full instructions are on the write up at Hackster – https://www.hackster.io/darian-johnson/alexa-powered-automated-cat-feeder-9416d4
In a span of a few hours, I successfully migrated my WordPress blog from an EC2 instance to Amazon Lightsail.
Of all the new releases announced at AWS re:Invent, I was most excited about Amazon Lightsail. I love AWS, but sometimes it’s too complicated. If someone wants to run a blog, then they shouldn’t have to learn about VPCs, subnets, etc…. they should, in a few clicks, be up and running.
So, I spent a few hours this weekend migrating this blog from the t2.small EC2 instance I’ve been running (with RDS and Memcache) to a new, smaller Lightsail instance.
The migration was straightforward (instructions here: https://docs.bitnami.com/aws/how-to/migrate-wordpress); the biggest challenge was re-installing my WordPress plugins (they did not migrate over).
Will this be better than running my own VPC and EC2 instance? I’m not sure. I still have my old instances available if I need to switch back. I’m hoping it will be; I was spending about $20 a month running my t2.small (I know, I know... I should have been running on an RI to reduce cost). The small Lightsail instance is only $5/month.
Some people spend their vacations traveling, or relaxing, or visiting family. I spent my two weeks off building an Alexa enabled, Raspberry Pi device for Hackster’s Internet of Voice challenge.
But, to paraphrase Madonna: “Don’t Cry for Me, Internet.” I really enjoyed those two-plus weeks of coding. I learned a ton about AWS IoT and MQTT (and reinforced some “non-sexy” skills – like security and IAM).
And the device that I decided to build…. a magic mirror. Why a magic mirror? Well, I am the guy that:
- Never checks for delays in his work commute until he is stuck in a four-lane accident
- Forgets his umbrella when the forecast calls for afternoon showers
- Doesn’t find out about a major news event unless the story breaks on ESPN
- Always forgets to pull his trash bins to the curb on garbage pick-up day
In short, my morning routine is a mess (#firstworldproblems). An Amazon Echo (or a phone, for that matter) would resolve most of those problems. Unfortunately, I never seem to have my phone with me as I’m getting ready in the morning (it’s usually charging). And I’m usually not asking Alexa for these details (I don’t have an Alexa device in my bathroom).
60% of my morning routine is centered in and around the bathroom or bedroom, so I decided to build an Alexa skill and Alexa Voice Service-enabled magic mirror – which I’ve titled the Mystic Mirror.
Continue reading “Building a Magic Mirror using Alexa, AWS, and a Raspberry Pi”
Integration with Alexa allows a user to obtain a workout recommendation (and create a machine learning model) all by voice command.
[su_note note_color=”#d3d3d3″]Note: This is the third post about using Amazon Machine Learning to predict workout intensity. Check out Part 1 (Overview) and Part 2 (Building the Machine Learning Model) for background. A working model is available via web and Alexa. Code can be found/downloaded from my Hackster site.[/su_note]
After I was able to build a working model, I needed to come up with a way to automate it. I originally planned to allow access through my website, but decided to use Alexa in addition to the website link.
Note: The process of creating an Alexa skill isn’t too complicated (if you have experience building Lambda functions). That being said, I suggest you start by building a sample skill – such as the Fact Skill example. Also, be sure to read and follow the certification requirements.
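As a rough illustration of the shape such a Lambda-backed skill takes (the intent name and the spoken responses below are invented, not the actual skill's code), the handler is essentially a dispatcher over Alexa request types:

```python
# Sketch of an Alexa skill Lambda handler in the spirit of the Fact Skill
# sample. "GetRecommendationIntent" and the response texts are illustrative.
def build_response(speech_text, end_session=True):
    # Minimal Alexa response envelope with plain-text speech
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome. Ask me for a workout recommendation.",
                              end_session=False)
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "GetRecommendationIntent":
            # The real skill would call out to the machine learning model here
            return build_response("Your best workout time is 6 AM.")
    return build_response("Goodbye.")
```

The certification requirements mostly concern how these responses behave (handling launch, help, and stop requests gracefully), which is why starting from a sample skill pays off.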
Alexa, AWS, and the exposed Fitbit APIs provided a mechanism to build a model and return results for a specific user – all initiated by voice.
Step 1 – Linking the user’s Fitbit account to the skill
A user has to link his/her Fitbit account to the skill before s/he can (a) build a specific machine learning model based on their history and (b) get a workout recommendation. Step 1 covers the logic for this functionality.
Click image to enlarge
Continue reading “Using Amazon Machine Learning to Predict the Best Time of Day for Exercise – Pt 3: Automating the Model with Alexa”
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817181.55/warc/CC-MAIN-20240417204934-20240417234934-00070.warc.gz
|
CC-MAIN-2024-18
| 5,892
| 34
|
http://www.sequana.com/en/non-classe/opera/
|
code
|
Internationalization and localization are terms used to describe the effort to make WordPress (and other such projects) available in languages other than English, for people from different locales, who use different dialects and local preferences.
The process of localizing a program has two steps. The first step is when the program’s developers provide a mechanism and method for the eventual translation of the program and its interface to suit local preferences and languages for users worldwide. WordPress developers have done this, so in theory, WordPress can be used in any language.
The second step is the actual localization, the process by which the text on the page and other settings are translated and adapted to another language and culture, using the framework prescribed by the developers of the software. WordPress has already been localized into many other languages (see WordPress in Your Language for more information).
This article explains how translators (bi- or multi-lingual WordPress users) can go about localizing WordPress to more languages.
Before you start translating WordPress, check WordPress in Your Language (and resources cited there) to see if a translation of WordPress into your language already exists. It is also possible that someone (or a team) is already working on translating WordPress into your language, but they haven’t finished yet. To find out, subscribe to the polyglots’ blog, introduce yourself, and ask if there’s anyone translating into your language. There is also a list of localization teams and localization teams currently forming, which you can check to see if a translation is in progress.
Assuming that a WordPress translation into your language does not already exist and that no one is working on one, you may want to volunteer to create a public translation of WordPress into your language. If so, here are the qualifications you will need:
- You need to be truly bilingual — fluent in both written English and the language(s) you will be translating into. Casual knowledge of either one will make translating difficult for you, or make the localization you create confusing to native speakers.
- You need to be familiar with PHP, as you will sometimes need to read through the WordPress code to figure out the best way to translate messages.
- You should be familiar with human language constructs: nouns, verbs, articles, etc., different types of each, and be able to identify variations of their contexts in English.
A locale is a combination of language and regional dialect. Usually locales correspond to countries, as is the case with Portuguese (Portugal) and Portuguese (Brazil).
You can do a translation for any locale you wish, even other English locales such as Canadian English or Australian English, to adjust for regional spelling and idioms.
The default locale of WordPress is U.S. English.
WordPress’s developers chose to use the gettext localization framework to provide localization infrastructure to WordPress. gettext is a mature, widely used framework for modular translation of software, and is the de facto standard for localization in the open source/free software realm.
gettext uses message-level translation — that is, every “message” displayed to users is translated individually, whether it be a paragraph or a single word. In WordPress, such “messages” are generated, translated, and used by the WordPress PHP files via two PHP functions. __() is used when the message is passed as an argument to another function; _e() is used to write the message directly to the page. More detail on these two functions:
- __($message): Searches the localization module for the translation of $message and returns the translation via PHP's return statement. If no translation is found for $message, it simply returns $message.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190236.99/warc/CC-MAIN-20170322212950-00566-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 3,747
| 15
|
http://askubuntu.com/questions/435815/usb-log-history-overwritten
|
code
|
I need to know how some files were saved on my computer 7 months ago. I know that they were copied from a USB peripheral, and I know the exact date and time they were saved; I want to discover the model or some other information about that USB key.
I know that Ubuntu archives logs, but unfortunately the oldest archived logs are at most 6 months old, and I can't find anything older. (I checked kern.log.gz and syslog.gz.)
Could you suggest a different way to check the model or brand of the USB key from which the files were copied?
The system is Lubuntu (very similar to Ubuntu).
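For reference, a search over any surviving rotated kernel logs might look like the following sketch. It assumes the usual kernel log lines for USB attachment, which carry idVendor/idProduct/Manufacturer fields:

```python
import gzip
import glob
import re

def find_usb_lines(paths, pattern=r"idVendor|idProduct|Manufacturer"):
    """Scan (possibly gzipped) kernel logs for USB device identification lines."""
    hits = []
    rx = re.compile(pattern, re.IGNORECASE)
    for path in paths:
        # Rotated logs like kern.log.2.gz are gzip-compressed; recent ones are plain
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt", errors="replace") as f:
            hits.extend((path, line.rstrip()) for line in f if rx.search(line))
    return hits

# e.g. find_usb_lines(sorted(glob.glob("/var/log/kern.log*")))
```

This does not help once the archives have been deleted, which is exactly the asker's problem, but it shows what information the logs would have held.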
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464088.46/warc/CC-MAIN-20150226074104-00168-ip-10-28-5-156.ec2.internal.warc.gz
|
CC-MAIN-2015-11
| 563
| 4
|
http://forums.roadfood.com/Help-Joining-Please-m681461.aspx
|
code
|
Help Joining, Please
I'm trying to become an insider, and it seems it only accepts Paypal. I no longer have a PayPal account nor do I want another one. Is there any way to pay for a membership without PayPal? Even when I click on the link to use a credit card, it still tries to use PayPal.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809695.97/warc/CC-MAIN-20171125071427-20171125091427-00400.warc.gz
|
CC-MAIN-2017-47
| 290
| 2
|
https://flylib.com/books/en/2.343.1.213/1/
|
code
|
Figuring Out What's Wrong
If you're reading this section, it's because the quick fix techniques listed in the "The Three Basic Suitability Tests" section haven't solved your problems. In this section, I hope to help you out, but please bear in mind this rather brutal truth about PCs in general:
Sometimes there may not be a solution to your problem.
The problem with PCs is simply that not all hardware is created equal; not all the hardware that costs megabucks is guaranteed to be the best; and sadly, not all PCs are destined to ever become NLE workstations.
This all might sound a little on the bleak side, but it's worth bearing this important rule in mind before you start endlessly trying to reconfigure your system. Otherwise you could end up stuck in a cycle of reformatting the drive and reinstalling the OS, trying to reconstruct a little sanity in your relentlessly PC-persecuted life.
Try to remember that drivers for graphics cards and sound cards can be, and often are, buggy; hard drives can fail just after you buy them; and PC motherboards are updated so often that compatibility lists are often pretty meaningless.
Pretty scary, huh? But if you know about this in advance, then you're prepared (at least mentally) for the road ahead.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00213.warc.gz
|
CC-MAIN-2021-25
| 1,285
| 7
|
https://startupfreakgame.com/2016/02/06/basic-game-loop-tiled-maps-and-more/
|
code
|
This week I focused on two goals: 1) Get a basic game loop going 2) Start narrowing down on the visual direction of the game so I can begin putting in placeholder art and also look into sourcing the artwork.
The core of this is now done and consists of the following:
- Your potential market consists of a “customer acquisition funnel”, i.e. the total number of people in that market, those who have visited your product, those who have signed up for a trial, those who are paying customers (in the case of a paid product), and those who are leaving (churn).
- At the end of each turn (sprint), a score is calculated for each of these segments. This is the complex part, and involves lots of factors. For example, each market has a sensitivity factor towards the various tech, design and marketing efforts. Some markets respond better to social marketing while others put more emphasis on your features. Another important factor I will need to build in later is the effect of competition.
- Once the segment scores are calculated, they are used to determine how many customers move from one segment to the next for that sprint.
I have also added a very basic finance structure with outgoings (salaries, rent, etc.) and income (paying users) to determine the player’s net position.
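Putting the pieces above together, a toy version of one sprint might look like this (the segment names follow the post; the score formula, sensitivity weights, and churn rate are invented for illustration, not the game's actual code):

```python
# Toy sketch of one sprint of the customer-acquisition funnel.
FUNNEL = ["market", "visited", "trial", "paying"]

def sprint_step(segments, efforts, sensitivity):
    # Sprint score: how appealing this market finds the product right now
    score = sum(efforts[k] * sensitivity[k] for k in efforts)
    conversion = min(0.9, score / 100.0)   # cap conversions at 90%
    new = dict(segments)
    # Move a score-dependent fraction of each segment one step down the funnel
    for upper, lower in zip(FUNNEL, FUNNEL[1:]):
        moving = int(segments[upper] * conversion)
        new[upper] -= moving
        new[lower] += moving
    # Churn: a fixed fraction of paying customers leaves each sprint
    new["paying"] -= int(segments["paying"] * 0.05)
    return new

segments = {"market": 10000, "visited": 500, "trial": 50, "paying": 10}
efforts = {"tech": 30, "design": 20, "marketing": 50}
sensitivity = {"tech": 0.2, "design": 0.3, "marketing": 0.5}
segments = sprint_step(segments, efforts, sensitivity)
```

Even this toy version shows why trial and error isn't enough: the conversion curve and churn rate interact, so plotting the functions (as with Desmos) is the sane way to see late-game behavior.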
As a side note, once I realized how much complexity is involved in some of the above and that doing it by trial and error wasn’t going to cut it, I searched for an online graphing calculator and came across a really handy one called Desmos. It let me project some of my functions out to see where they would end up in the late game. Definitely check it out if you are looking for a graphing tool.
I have been trying hard to get away from the top-down or isometric view for this game, for a couple of reasons: several other games I have seen in this space use that style, which makes it harder for me to differentiate; additionally, the top-down view is more appropriate for a game where you can build your own offices (it almost creates that expectation). While I’ll definitely be adding elements of customization, and purchasing of new items and goodies for your office, I don’t want to add a full-blown office-building feature, as it would take away from the core of the game.
This week I experimented a little with rendering a tile-based side view of an office and some backgrounds. I have read advice here and there that I shouldn’t share very early screenshots with place-holder art because they can “stick”. But it’s too annoying not to share so here goes.
Please note: the following are only placeholder graphics and will not appear in the final game.
A side view of an office. I’m using the very awesome Tiled Editor to generate the tile layers, as well as object layers for “anchor points” where various items will be positioned like workstations.
A loosely named “Product Backlog” which consists of high level tasks. The player will focus on these during the course of the game.
Beginnings of an email client that will be a primary source of information for the user in the game. It’ll hopefully also be a good source of humor.
Some sort of Profile view for hiring candidates. If I have time I really want to build a mini interview game where you can ask the candidates questions. They may even decide not to take the job based on your terms.
The side view also lends itself well to fun and interesting backgrounds and weather effects. I’m thinking of having seasonal effects like rain and snow, color changes, and so on. Here is a simple haze/fog using the Unity particle system:
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107917390.91/warc/CC-MAIN-20201031092246-20201031122246-00294.warc.gz
|
CC-MAIN-2020-45
| 3,614
| 15
|
https://www.jhsph.edu/faculty/research/map/MW/1866
|
code
|
Improving adolescent and adult mortality data in developing countries
In low-income countries with limited vital registration systems, the trends and causes of adolescent and adult mortality are measured retrospectively during surveys, but these surveys often yield inaccurate data. We propose to improve the accuracy of survey-based estimates of adolescent and adult mortality through a) innovative data collection techniques (e.g., event history calendars, recall cues) and b) integrated Bayesian methods that account for sampling and non-sampling errors. Results from this study will help develop and target adolescent and adult health interventions in low-income countries, and evaluate the effectiveness of global health initiatives focused on preventing premature deaths in those age groups (e.g., PEPFAR).
- Dhaka, Bangladesh: 86 projects
- Bissau, Guinea-Bissau: 3 projects
- Chilumba, Malawi - selected city
- Rakai, Uganda: 37 projects
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00369.warc.gz
|
CC-MAIN-2021-21
| 944
| 6
|
https://magnimindacademy.com/magnimind-machine-learning-quiz/
|
code
|
In L1 regularization, we penalize the absolute value of the weights while in L2 regularization, we penalize the squared value of the weights.
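A minimal sketch of the two penalties, where lam is the regularization strength:

```python
# L1 penalizes the absolute value of each weight; L2 penalizes its square.
def l1_penalty(weights, lam):
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam):
    return lam * sum(w * w for w in weights)

w = [0.5, -2.0, 0.0]
# l1_penalty(w, 1.0) == 2.5 and l2_penalty(w, 1.0) == 4.25
```

Note how L2 punishes the large weight (-2.0) far more than the small one, while L1 treats all magnitudes linearly, which is why L1 tends to drive small weights to exactly zero.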
The scikit-learn Python machine learning library provides the ColumnTransformer that allows you to selectively apply data transforms to different columns in your dataset
It is good practice to use MinMaxScaling on a feature with a few extreme outliers.
The parameter k should take odd values in kNN, so that there are no ties in the voting.
Treating a non-ordinal categorical variable as a continuous variable would result in a better predictive model.
One-hot-encoding increases the dimensionality of a data set.
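That statement is easy to verify with a hand-rolled encoder: k distinct categories turn one column into k binary columns.

```python
def one_hot(values):
    # One output column per distinct category, in sorted order
    cats = sorted(set(values))
    return [[1 if v == c else 0 for c in cats] for v in values]

# One categorical column with 3 levels becomes 3 binary columns:
one_hot(["red", "green", "red", "blue"])
```

Here categories sort as blue, green, red, so "red" encodes as [0, 0, 1].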
OLS Regression is expected to have more overfitting (lower bias) than Ridge.
Fitting your scaling transformation separately to your training and the test sets improves the model performance.
The more features that we use to represent our data, the better the learning algorithm will generalize to new data points.
It is not a good machine learning practice to use the test set to help adjust the hyperparameters of your learning algorithm
Say there are two kids, Jack and Jill, taking a maths exam. Jack only learnt addition, and Jill memorized the questions and their answers from the maths book. Now, who will succeed in the exam? The answer is neither. In machine learning lingo, Jack is blank1 and Jill is blank2.
You are working on a classification problem. For validation purposes, you have randomly sampled the training data set into train and validation. You are confident that your model will work incredibly well on unseen data since your validation accuracy is high. However, you get shocked after getting poor test accuracy. What might have gone wrong?
Which of the following models is more acceptable than the others to be implemented on new data points?
Which of the following is not correct regarding LogisticRegression and LinearSVM?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943698.79/warc/CC-MAIN-20230321131205-20230321161205-00798.warc.gz
|
CC-MAIN-2023-14
| 1,897
| 14
|
https://forum.zdoom.org/viewtopic.php?t=72966
|
code
|
Spoiler: Link
To begin with, the mod is for now compatible with both Zandronum and GZDoom, but I don't know if I will keep the compatibility with Zandronum in the future, as it is quite outdated and many bugs can appear online that never appear offline. For this project I was inspired by Laggy Blazko, author of "Demon Counter Strike", but somehow I felt that I didn't have enough control over which monster/item is allowed to spawn and at what specific moment. So I decided to make my own project with the same kind of goal. Anyway, let's begin the explanation.
So the project is divided into 3 parts: monster spawners, item spawners and prop spawners.
Spoiler: Endless Battle General Options
As you can see, you have here access to different settings allowing you to modify how the world will adapt to your situation.
Let's begin with monsters.
Spoiler: Monsters Global Options
Here are the following options:
- monsters spawn limit defines the maximum number of each monster that the mod can spawn on a map.
- monsters spawn range defines how far they are allowed to spawn from the player.
- the three multipliers can adjust the spawn rate of monsters by changing the chances, the cooldown or the spawning amount.
- spawned monsters ambush defines if the spawned monsters can hear the player firing from the other side of the map or not.
- spawned boss announcement will play a custom sound and display a message if a cyberdemon/spider mastermind has spawned.
- monsters condition log will tell you if the criteria are met to spawn the specified monster.
- spawned monsters log will tell you each time the mod tries to spawn a specific monster.
Spoiler: Monsters Specific Options
Following the monsters global options, you can now deal with the monsters specific options menu. Here you can edit every single monster of Doom2 individually. The first option, the encounter type, defines under which conditions the monster is allowed to spawn. There are a total of 5 different options:
- Never which means that the monster will just never spawn no matter what are the other settings.
- Present means that the monster can spawn for the entire map, but only if it was present on it.
- Progression means that the monster can spawn for the entire rest of the game once it has been placed on a map.
- Killed means that the monster can spawn for the entire game once it has been killed by the player.
- Always which means that the monster will always spawn without checking any previous condition.
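The five encounter types can be paraphrased as a small decision function (Python is used here purely for illustration; the mod itself is written for Zandronum/GZDoom):

```python
# Paraphrase of the five encounter types. Note that Present and Progression
# share the same trigger (the monster appeared on a map) and differ only in
# scope (current map vs. the rest of the game), which a boolean cannot show.
def can_spawn(encounter_type, was_on_map, was_killed):
    if encounter_type == "Never":
        return False
    if encounter_type == "Always":
        return True
    if encounter_type in ("Present", "Progression"):
        return was_on_map
    if encounter_type == "Killed":
        return was_killed
    raise ValueError(f"unknown encounter type: {encounter_type}")
```

The item spawners follow the same scheme with "Picked Up" in place of "Killed".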
Now regarding the item options, the settings are much the same. The only difference is that the "Killed" criterion is replaced by the "Picked Up" criterion, which activates the spawning of the item once the player has picked it up at least once during gameplay.
And for the props menu, well, you have probably figured it out: yes, it concerns only barrels for now. So if you like barrels and want tons of them on your map, then you are welcome, haha.
So this is a small project that I have been working on for about a week. It was quite hard to find a way to keep compatibility with both Zandronum & GZDoom, and also to find a way for monsters to spawn on the map without breaking anything. The only solution that I found was to replace the DoomNum of the actors of Doom2, which means that each time you start the game, you will get a warning message about it, but do not worry, as it should work properly.
I have also tested the mod's compatibility with randomizers such as Complex Doom, Brutal Doom or Pandemonia, and it should work fine without problems, so have fun.
Regards, and any comment is welcome. Sorry for my English, haha.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335303.67/warc/CC-MAIN-20220929003121-20220929033121-00695.warc.gz
|
CC-MAIN-2022-40
| 3,623
| 23
|
https://gis.stackexchange.com/questions/460323/creating-arrow-effect-using-qgis-geometry-generator
|
code
|
You can download the project + data used to create the screenshot from here: https://drive.switch.ch/index.php/s/6hj4F3EO9NCxuds so you can inspect the details, based on the following description, and try to tweak the values yourself.
Reducing the line's length
The expression you see in the screenshot creates a variable @ll that stores the line connecting the points in hierarchical order (as described in the answer), and then uses the function line_substring() to shorten the line a bit. In fact, the expression line_substring(@ll, 0, length(@ll)*0.98) returns 98% of the length of the input line. This creates a small gap between the tip of the arrow and the point, so that different arrow heads and points do not overlap too much.
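For a straight two-point segment, the effect of line_substring(@ll, 0, length(@ll)*0.98) can be pictured with simple coordinate arithmetic (a real QGIS line can have many vertices, which needs arc-length interpolation instead):

```python
# Shorten a straight segment to a fraction of its length, keeping the start.
# This mimics line_substring(line, 0, length(line) * 0.98) for the simple
# two-point case only.
def shorten_segment(p0, p1, frac=0.98):
    x0, y0 = p0
    x1, y1 = p1
    return (x0, y0), (x0 + (x1 - x0) * frac, y0 + (y1 - y0) * frac)

# shorten_segment((0, 0), (100, 0)) ends approximately at (98.0, 0.0),
# leaving a 2-unit gap before the destination point.
```

The 2% that is trimmed off is exactly the gap between the arrow head and the point it targets.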
Styling the line/arrow with data driven overrides
I then used several data driven overrides in the layer styling panel to style the arrows differently: arrow width, head length, head thickness and fill color. For this, I used the Assistant (see next screenshot). As Source, use any attribute or expression that gives you a range of numbers; in this specific case, I used (based on the answer) the expression array_find(array_sort(array_distinct(array_agg(hierarchy))), hierarchy).
Then load or manually define values from/to for the possible range of the output of this expression, and finally set a value output from/to: in the screenshot, the smallest arrow head has a size of 2, the largest a size of 10. In a similar way, I defined values for the other style elements. You can check it in detail in the project linked above.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100227.61/warc/CC-MAIN-20231130130218-20231130160218-00403.warc.gz
|
CC-MAIN-2023-50
| 1,573
| 13
|
http://t-t.dk/gekko/roadmap/
|
code
|
[As of June, 2020]
Gekko 3.0 was released as a stable version in the spring of 2019, and new users are advised to use versions in the 3.1.x series, which can be thought of as a “stable” development series (cf. the versions overview page). A stable version 2.4 exists, but in general, the work on the 2.x series is discontinued. Development-wise, at the moment the focus is on stabilization of the 3.1.x version, including help systems etc. If some syntax or other choices turn out to be unfortunate, this will be dealt with in a future “risky” 3.3.x development series that is not even at the drawing board yet (and may not be until 2021).
Aims regarding the further development of the 3.1.x series:
- Stabilizing 3.1.x, continuing to fix bugs and glitches. Better error messages in Gekko 3.1.x.
- Improved user manual, with more guided tours etc. Perhaps a guide like “The sun is always shining in Gekko”, adapted from the similar guide for Gekko 2.4 (in Danish).
- Improved solver capabilities. Blocks of equations, equation objects, model objects. More means than goals. Improved tracking when simulations fail. (Handling blocks of equations has been implemented in the 2.5.x series, but regarding the 3.1.x version, this will be built differently, using equation objects etc. Still, the experiences from model blocks in 2.5.x will be useful).
- Static simulation possibilities (in a sense removing lags and solving the model for one period to obtain long-run values).
- Improving daily and weekly frequencies. Quite a bit of work on this has already been done in 3.1.x.
- Improved DECOMP, work is underway.
- More advanced PLOT windows, work is underway.
- Better translator from Gekko 2.0/2.2/2.4 to 3.0 and 3.1.x.
- Databank API? It would be nice to be able to separate the databank read/write part of Gekko into a clean API that can be called from .NET languages like C#, but maybe also from languages like Python, R or other. Regarding Excel, see below.
- Improved Excel integration (the so-called ‘Gekcel’ project). A proof of concept is already up and running. The idea is to use an Excel add-in (.xll) that enables Excel to call Gekko (and Gekko databanks) easily. In essence, this works like an Excel API for Gekko.
- Better Python and R interfaces. Dataframe objects in Gekko would facilitate communication with Python and R.
- Implementing some of the “missing” functions/procedures from AREMOS that deal with holes in data, interpolation, extrapolation, or conversion between frequencies.
- More advanced seasonal correction (JDemetra+ etc.).
- Improved and up-to-date source code documentation + method API for the Gekko C# source code. There is a rather large project underway regarding this, starting in the fall of 2020.
- 64-bit version of Gekko. GAMS has gone 64-bit only now, and perhaps it would be timely to do the same for Gekko. Perhaps the migration from 32-bit to 64-bit ought to be done at the same time as Gekko migrates to .NET Core. Also, it would be nice to improve the digital signing of gekko.exe to avoid warnings from Windows etc.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988753.91/warc/CC-MAIN-20210506083716-20210506113716-00181.warc.gz
|
CC-MAIN-2021-21
| 3,086
| 18
|
https://experienceleaguecommunities.adobe.com/t5/user/viewprofilepage/user-id/9938590
|
code
|
It can be for a lot of reasons. If you see more than 2 image requests show up in the DigitalPulse Debugger, it is likely that you have existing page code on the site; there might be an extra s.t() call on the page that's firing the request.
There are tons of options for doing what you do, and each has pros and cons. Adobe Analytics doesn't care what your site domain URL is, and you don't have to specify it in order for the page to be tracked; however, there are settings that will need to be configured, such as the internal URL filters and the linkInternalFilters setting, which need to be modified based on the URL you are tracking. I suggest you talk to an implementation engineer, either from Adobe or an Adobe Certified partner.
I know what 'None' means, but it just looks very bad in the report. I have to filter this value out every time I create a dashboard. I wish there were a setting in the reporting interface that allowed me to hide 'None' on a per-report basis.
In SiteCatalyst, when you click Add Metric, there is a pop-up that shows a list of metrics with a drop-down and a search filter. Now that there are 1000 custom metrics, it's time to make some tweaks. I think we should start by categorizing metrics. Categories should be applied to metrics and appear in the metric drop-down, so users can select a category to immediately see what's available in it. It will also help educate users on what's out-of-the-box, and what's customiz...
Thanks Ben, it will be for postmortem reporting. In the metric report, we can view data by hour, day, week, or month; it would be nice to be able to view it by minute, so I could say something like "the campaign went out at 3:35; at 3:55, the clicks started coming in".
I think this would be very beneficial for media companies, especially during live events, where each minute makes a difference. By the way, I think Time of Day, Day of Week, Day of Month, and Minute of Hour should all be out-of-the-box dimensions, similar to Day of Week as a lifecycle metric for mobile apps.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00038.warc.gz
|
CC-MAIN-2022-40
| 2,044
| 31
|
https://www.binance.com/en/research/analysis/eos-governance
|
code
|
Owing to its high market cap, EOS, a delegated Proof of Stake consensus network, has oftentimes been picked out and labeled as a victim of its governance, where the largest EOS holders further consolidate their position and hold all the power.
The question of how decentralised the governance of EOS really is can best be assessed by looking at proxies thereof.
The assessment is conducted in a three-fold manner, by looking at the performance of EOS in regards to Collusion Resistance, Fault Tolerance and Attack Resistance.
In regards to collusion resistance, the insights are:
The governance of EOS lacks mechanisms to avoid or structure the process of vote trading.
The incentive structure of EOS reinforces consolidation, by promoting vote trading and selfish acts.
Individual parties, such as proxies or block.one, have the influence to drastically change votes.
In regards to fault tolerance:
Operational metrics that measure reliability and the capability to react show that ⅔ of the exchanges operating as block producers (BPs) had the worst performance among the 21 BPs.
There were furthermore two incidents where failures occurred.
In general, EOS problems seem to be enabled and aggravated by a number of issues such as low voter turnouts, little resistance to Sybil attacks, and coherently little transparency, the 1-token-30-votes system, as well as the changed block rewards.
Lastly, in regards to attack resistance, it appears as if there were two clusters of block producers, evidently displayed in correlations between voting patterns and regional distribution.
“If the potential of a startup is proportionate to the size times the incompetence of its competitors, the most promising startup of all would be one that competed with national governments. It's not impossible; this is what cryptocurrencies do.”
Paul Graham, Y-combinator
It is what cryptocurrencies do, or at least aspire to do. There are various different approaches on how to coordinate and agree on the network status. For this reason, the governance of EOS will be singled out in the scope of this case study.
After a brief general introduction to EOS, the concept of decentralisation is introduced and subsequently tested against the governance of EOS.
1. Description of EOS and its governance
1.1 Description of EOS
Block.one raised more than USD 4.1bn in a year-long ICO that ended in June 2018. This constitutes the largest ICO on record, exceeding the second-largest ICO of 2017 (i.e., Filecoin with $257mn) by almost 16 times and the second-largest ICO of all time by 2.5 times1. Subsequently, Block.one’s product, EOS.io, officially launched on mainnet on the 1st of June, 2018.
EOS is a third-generation blockchain - for more information on the applied methodology, refer to chapter 2.1 of the report on the Telegram Open Network - and was quick to gain traction after the launch. The latter is also displayed in Chart 1, which shows the development of the USD-denominated EOS closing price from the 1st of July 2017 to the 10th of February 2020.
Chart 1 - Historical EOS closing price (in USD) from July 1st 2017 to February 10th 2020
Sources: CoinMarketCap, Binance Research
Not only the price but also the ecosystem of EOS was quick to grow. EOS now constitutes one of the most popular blockchains for dApp development. As of writing2, data from Dapp.Review suggests that 676 dApps chose to build on EOS, a figure that is only exceeded by Tron (693 dApps) and Ethereum (2,195).
These EOS dApps are also being used. In fact, their activity far exceeds the activity of dApps on Tron or Ethereum. As of writing, Dapp.Review recorded almost 99% (or ~32mn txs) of all transactions involving a dApp smart contract on EOS. This number of transactions is two orders of magnitude higher than for dApps on Tron (~330k txs), which is itself still more than double the number of dApp txs on Ethereum (~150k txs).
This insight is even more surprising when including the number of users per chain. While EOS had almost 85k users three months ago (30/10/2019), EOS now has the fewest users (~10k) among the three compared blockchains3. Ethereum is clearly leading with the largest user base (~50k users), more than double the number of Tron’s users (~24k users).
This proxy measurement for activity - transactions involving dApp smart contracts - can be interpreted in several ways and may give insights in regards to:
dApp complexity, as more complex dApps might leverage multiple smart contracts.
dApp types like games, for example, are likely to require more frequent user interactions.
Most relevant to the interpretation of this metric is, however, the underlying technical infrastructure. Unlike Ethereum, which (still) uses a Proof of Work consensus mechanism, EOS uses a delegated Proof of Stake consensus (dPoS) mechanism. While dPoS enables higher network throughput, it comes at the cost of decreased decentralisation, as it is based on the “institutional reputation” of a small set of actors.
The EOS infrastructure uses a set of 21 delegates, also referred to as supernodes, that may vote on new blocks in a round-robin model. These delegates are elected by EOS token owners out of a larger set of candidate block producers. Since block producers get rewarded per block validation, they have an incentive to get elected as a block producer, which puts them in direct competition for votes with each other. The block rewards are paid from annual token inflation.
EOS token owners execute their vote for a certain block producer by staking their tokens for them for a period of 3 days. Votes “decay” over time and are void after two years. For votes to maintain a high vote strength, it is required to resubmit votes on a weekly basis. While token ownership and voting power generally increase linearly, at a one to one ratio, it is possible to vote for up to 30 block producers simultaneously. Effectively, this means that 1 token may equal to 30 votes.
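The voting mechanics described above — weight that "decays" unless resubmitted weekly and voids after two years — can be sketched as a simple exponential decay. Note that the function shape, the one-year half-life, and the hard cut-off are illustrative assumptions for this report, not the actual on-chain formula:

```python
def vote_weight(staked_eos: float, weeks_since_vote: float,
                half_life_weeks: float = 52.0) -> float:
    """Illustrative decay of an EOS vote's strength over time.

    Assumption: the weight halves every `half_life_weeks` and is void
    after two years. The real on-chain formula differs in detail.
    """
    if weeks_since_vote >= 104:  # votes are void after two years
        return 0.0
    return staked_eos * 0.5 ** (weeks_since_vote / half_life_weeks)

# A fresh vote carries full weight; a year-old one has lost half of it.
print(vote_weight(1000, 0))    # 1000.0
print(vote_weight(1000, 52))   # 500.0
```

Under this toy model, the incentive to resubmit votes weekly follows directly: stale votes lose relative weight against freshly cast ones.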
Besides voting rights, token owners are usually compensated with a pro-rata share of the block rewards their block producer reaps, and may similarly claim network resources such as RAM, CPU, and bandwidth (in jargon, “NET”).
To avoid an inefficient resource allocation from inactive - “hodling” - token owners that are not utilizing their allocated resources, additional market-driven allocation mechanisms have been introduced4.
1.2 Is EOS (too) centralised?
Even though this particular problem of optimizing the resource utilisation of the EOS network was overcome easily by stakeholder-driven innovation, other problems have turned out to be more persistent. In particular, one concern has consistently accompanied EOS from the initial ICO to the current state - the fear that EOS may be too centralised, as “too few people own too many tokens” (WeissCrypto, 2019).
Generally speaking, all blockchains have been touted (Hacker et al. 2019) to be:
“prone to patterns of re-centralisation: they are informally dominated by coalitions of powerful players within the cryptocurrency ecosystem who may violate basic rules of the blockchain community without accountability or sanction”.
To answer the question of EOS potentially being too centralised, one must start by defining the opaque concept of centralisation.
1.3 Measuring decentralisation
The definition and measurement of “centralisation” have long garnered a lot of interest. Especially in the early years of Bitcoin and crypto-asset adoption, being “decentralised” has been a guiding goalpost.
Buterin’s blog posts [1, 2, 3] in 2017 kickstarted a discussion that was coined by a less dogmatic and increasingly pragmatic stance towards the purpose and benefits of decentralisation. Until then, the goal of being decentralised was considered an irrefutable necessity. A notion that is furthermore well represented in early debates about the Bitcoin block size - for more information on this refer, for example, to this medium article. The majority of the discourse was not framed within a “value from decentralisation” or cost-benefit perspective but was categorically ideological5.
Buterin’s efforts reinforced previously existing research efforts, such as the popular position paper “On Scaling decentralised Blockchains” (2016), that attempted to rethink the design of blockchains by splitting them into the parts network, consensus, storage, view, and side planes.
This general idea of having identifiable sub-components of blockchains was adopted and enriched by transferring the Gini coefficient concept onto it. The Gini coefficient is the most common measurement of inequality, but is based on several conditions that drastically reduce its value in this context6.
Srinivasan (2017) subsequently popularized the Minimum Nakamoto Coefficient that measures the Gini coefficient of subsystems to ultimately derive a total score on the equality of token ownership of a chosen crypto-asset. This concept experienced wide popularity and was implemented in a public Python library and further developed in the Minimum Nakamoto Coefficient “2.0”.
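The core of the Minimum Nakamoto Coefficient can be sketched in a few lines: for a given subsystem (token holdings, BP votes, mining pools, etc.), count the smallest set of actors that jointly control more than a chosen threshold. The figures below are made up purely for illustration:

```python
def nakamoto_coefficient(holdings, threshold=0.5):
    """Smallest number of actors whose combined share of `holdings`
    exceeds `threshold` of the total."""
    total = sum(holdings)
    running = 0
    for count, share in enumerate(sorted(holdings, reverse=True), start=1):
        running += share
        if running / total > threshold:
            return count
    return len(holdings)

# A single whale holding 60% of supply yields a coefficient of 1,
# while an even split across ten accounts yields 6.
print(nakamoto_coefficient([60, 10, 10, 10, 10]))  # 1
print(nakamoto_coefficient([10] * 10))             # 6
```

The minimum of this value across all subsystems is then taken as the chain-wide score.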
However, any attempt to quantify the level of centralisation of a blockchain has rightfully been criticized as being flawed. “Proposals such as the Minimum Nakamoto Coefficient try to quantify exactly this, but run the risk of providing an illusion of measurability” (Walch, 2019). Instead of thus entertaining the vain effort of applying one of these well-intentioned frameworks onto EOS, EOS governance is best assessed by looking at proxies thereof. A proxy measurement can be understood as an indirect measurement of the subject of interest.
2. Current status of EOS governance
2.1 General introduction
To reiterate on the core principles of EOS governance: 21 block producers (BPs) are elected by EOS token holders from a broader set of block producer candidates. These BPs follow and sign (in every transaction) a Ricardian constitution that now “acts as a peer-to-peer end-user license agreement”. In practice, however, EOS governance was already undergoing drastic changes.
The first (interim) constitution of EOS was published in May 2018. The constitution was enforced by the EOS Core Arbitration Forum (ECAF), which also served to settle disputes among EOS token owners. While the ECAF and its “analogue” approach of requesting BPs to sign and follow endorsements attracted some criticism, its purpose was to balance the influence of BPs. This is something that was deemed necessary, shown, for example, in circumstantial evidence that points to EOS users perceiving block producers as too powerful (c.f. Reddit, 2019). As one user put it, “block producers control all decisions made on EOS, from validating blocks to seizing funds from under your private key.”
Nonetheless, the BP EOS New York spurred previous efforts to discontinue the ECAF by publicly declaring that it would no longer follow ECAF decisions. This followed two questionable incidents: (1) the ECAF ruled to freeze 27 accounts without providing any reasoning (Coindesk, 2018), and (2) a popular, but fake, ECAF ruling demanded the reversal of an EOS transaction (c.f. Hoskinson, 2018). The declaration nonetheless represented a clear divergence from the previously agreed-upon code of conduct.
The next development of EOS governance was coined by not only abandoning the ECAF but also replacing the constitution with an EOS User Agreement (EUA). Effectively, this means that at this point, all original guidelines of the governance of EOS have been substituted. The process of replacing the constitution was, once again, fairly chaotic. The original proposal to replace the Interim Constitution with the EUA required a voter participation rate of 15%; it ended with a voter turnout of merely 2%. Nonetheless, EOS New York suggested going ahead anyhow and received support from 15 of the 21 BPs. Ever since, the 21 elected BPs and the EUA have been at the core of EOS governance. Some additional governance tools have recently been implemented, such as the EOS Enhancement Proposal and the BP System Upgrade Proposal, which have, however, a merely supportive function.
With a better understanding of the current and former fundamentals of EOS governance, it is possible to move to the practical implementation thereof. The chosen way of doing this is to assess the supposedly greatest fear of EOS being “too centralised” in its governance. This will be done by testing EOS against the three main goals of decentralisation (Buterin, 2017): (1) collusion resistance, (2) fault tolerance, and (3) attack resistance.
(1) Collusion resistance describes the ease of system participants to organize in ways that benefit them at the expense of others. Arguably, it is thus the most relevant metric against which to assess EOS’ governance.
A paper published by Whiteblock (2019) found that block producers have formalized incentives to collude. These incentives are originating in the substantial revenue from validating blocks and are only possible because of the following factors:
Block rewards from inflation: originally an annual inflation of 5% was split to fund a community pool (4%) and to fund block rewards (1%). It was, however, decided to discontinue the community pool and reduce the inflation to 1% to completely funnel it toward block producer and standby block producer spots. This further consolidated control of BPs, as it effectively increases their revenue.
1 token - 30 votes: the ability to vote for 30 block producers with a single token facilitates vote trading and vote sharing incentives for the largest BPs. Effectively, the largest BPs may build a moat of votes by coordinating in networks of up to 30, respectively 21 parties.
Vulnerable to Sybil attacks: the current economic incentives and voting structure are very susceptible to Sybil attacks. A single actor may register multiple block producer accounts and multiply their voting weight at a negligible cost. Simultaneously, having multiple BP entities allows the underlying actor to allocate more block rewards to voters, increasing its competitiveness (much like a mining pool).
The role of proxies further aggravates the threat by Sybil attacks: a proxy is entrusted to vote for BPs on behalf of EOS token holders. Proxies are usually led by community contributors that are deeply entrenched within the ecosystem. Allegedly, several proxies have been contacted by BPs to continue acting as a front-facing intermediary, but let the BP gain control over the votes in exchange for a monetary reward (see @ColinTCrypto).
Low voter turnouts: similar to other systems requiring network user participation, voting turnouts in EOS are generally low. This makes it easier for large individual players to coordinate and dominate votes as their relative control over “active tokens” is higher than their control over all tokens.
Tokens stored on exchanges: lastly, several custodial exchanges can vote with entrusted tokens, which also leads to a significant consolidation of voting rights.
Within this setting, an economically rational agent must collude with others in order to maintain and maximize their profit (c.f. Whiteblock, 2019). This situation is further aggravated by an open, unregulated market for votes.
Generally speaking, the process of vote-buying is not inherently bad, as any purchase can only happen if the buying party values a vote more than the selling party and thus has a stronger interest in expressing a position that is subjectively perceived as more relevant.
However, it may lead to an aristocracy, as rich individuals are in a “virtuous circle” where they can amass an increasing number of votes, as they earn income from using them. Coherently, the 21 BPs earn almost seven times as much as other highly placed BP candidates (avg. daily reward of 978 EOS for the top 21 BPs vs. 142 EOS for BP candidates 31 to 51). Posner and Weyl (2018) further assess the subject of vote buying in a book titled “Radical Markets” and suggest that votes should be subject to a cost function, where votes may indeed be tradeable and purchasable, but using or purchasing them is coupled to decreasing utility.
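A minimal sketch of such a cost function, loosely following the quadratic-voting idea popularized in “Radical Markets” (the credit figures are purely illustrative, not an EOS mechanism):

```python
def vote_cost(votes: int) -> int:
    """Quadratic cost function: buying v votes costs v**2 credits,
    so each additional vote is more expensive than the last."""
    return votes ** 2

# Doubling one's voice quadruples the price, which blunts the
# "amass ever more votes" strategy available to the largest holders.
print(vote_cost(10))  # 100
print(vote_cost(20))  # 400
```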
In EOS, however, any such cost-function is absent. The market for votes is furthermore very intransparent and structured via a lot of informal “quid pro quo” of BPs. It is nonetheless possible to conduct a basic assessment of voting patterns, which reveals the following picture.
2.2 Voting patterns
Out of the original 21 block producers, only five of them are still producing blocks at all. One of the remaining BPs completely unregistered as a BP and stepped away from EOS, while the rest are in standby and not part of the top 21. This is indicative of the significant changes in EOS governance participants. Chart 2, for example, shows the distribution of all casted votes for BPs (as of 07/02/2020).
Chart 2 - Distribution of EOS votes BPs as of February 12th 2020
Sources: EOSAuthority, Binance Research
The most apparent insight from this chart is that the largest 164 voters carry 72% of the weight of all cast votes. Additionally, a large number of voters (~480k) have less than one EOS staked and, therefore, very little impact. This preliminary insight indicatively displays the large influence of the largest EOS holders.
Chart 3 - Voting pattern of EOS whales against all EOS accounts
Sources: CoinMarketCap, Binance Research
Out of these 164 whales, 84% (123) voted simultaneously for 30 BPs - a figure that is considerably higher than the 52% (38,858) of all accounts that chose 30 BPs. One way to interpret this figure is that large EOS holders could simply be more attractive targets for vote trading schemes. A second notable voting pattern is to vote for ~20 block producers. The votes around 20 BPs are also likely to originate from collusion and vote trading. The last insight is that many - presumably smaller - voters only voted for one BP.
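The voting-pattern breakdown above boils down to counting, per account, how many BPs each ballot selected. A sketch with entirely hypothetical ballots:

```python
from collections import Counter

def ballot_size_distribution(ballots):
    """Given ballots (each a list of BP names), count how many
    accounts voted for 1, 2, ..., up to 30 block producers."""
    return Counter(len(bps) for bps in ballots)

# Hypothetical ballots: two accounts max out at 30 BPs, one picks a single BP.
dist = ballot_size_distribution([
    [f"bp{i}" for i in range(30)],
    [f"bp{i}" for i in range(30)],
    ["bp1"],
])
print(dist[30], dist[1])  # 2 1
```

Spikes at 30 (and around 20) in such a distribution are what the analysis above reads as indicative of vote trading.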
When it comes to the BPs themselves, the following BPs are currently producing blocks.
Chart 4 - The 21 block producers with the most received votes
Sources: CoinMarketCap, Binance Research
Chart 4 shows, for example, that the founding BP EOSHuobiPool has the largest number of votes, with 336mn votes. In comparison, the largest proxy, “colintcrypto”, controls roughly 10mn EOS, with the largest 21 proxies controlling over 94mn votes. Unlike BPs, the overarching idea behind proxies is similar to representative democracies, where voters may decide to elect chosen individuals to act on their behalf.
Even though this ignores the fact that at least three of these proxies, “bitfinexvp13, bitfinexvp21 and bitfinexvp33”, are, for example, controlled by Bitfinex, it shows that proxies have considerable influence. This is especially clear when considering that for eosrapidprod to become the largest BP, it would only require 13mn more EOS (~70mn USD). Self-evidently, this analysis is nonetheless overly simplistic.
2.3 Fault tolerance
A theoretical definition of (2) fault tolerance may describe it as the number of failures a system can endure while maintaining its function. Having a high number of separate components, i.e., high redundancy, generally increases fault tolerance. In line with the chosen methodology of observing outcomes, the fault tolerance of EOS can be assessed by looking for metrics or events that describe failures, and the respective outcomes.
There are two prominent examples of BP failures:
(i) One BP failed to update the list of blacklisted accounts. The result of this failure was the loss of USD 7.2mn. Even though these funds were later recovered by Huobi, the EOS blockchain showed no fault tolerance, as the funds were mismanaged and only reappeared because of the actions of an external actor.
(ii) The second example relates to a “bad allocation” error that forced the nodes of several BPs to go offline. These BPs were only temporarily replaced after 30mins, leaving the EOS blockchain exposed by having a reduced amount of BPs.
Besides these two high-profile events, BPs can be assessed on two additional metrics: reliability and capability to react.
The reliability of the 21 block producers (as of 10/02/2020) is displayed in Chart 5 via block and round availability. These metrics describe the number of produced blocks (i.e., completed rounds) divided by the scheduled number of blocks (rounds).
Blocks may not be produced for various reasons, and a missed block may not always be the fault of the block producer. Hence, this metric gives a general idea of the availability of BPs, but must be complemented with a metric that keeps BPs more accountable: round availability. A round is a series of 12 blocks; missing 12 blocks in a row is thus likely the fault of the respective BP.
Chart 5 - Historical block and round availability of BPs as of February 10th 2020
Sources: AlohaEOS, Binance Research
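Both availability metrics reduce to the same simple ratio; a sketch with hypothetical figures (not taken from the chart):

```python
def availability(produced: int, scheduled: int) -> float:
    """Share of scheduled blocks (or 12-block rounds) actually produced."""
    return produced / scheduled if scheduled else 1.0

# A BP scheduled for 1,000,000 blocks that missed 2,500 of them:
print(availability(997_500, 1_000_000))  # 0.9975
```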
The second metric to assess BPs is ‘capability to react’. This ability can be measured as the execution time of custom EOS contracts. On-chain data is gathered by EOS Mechanics via a smart contract calculating Mersenne prime numbers.
Chart 6 - Historical box plot data for BP’s CPU performance as of February 10th 2020 7
Sources: AlohaEOS, Binance Research
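The idea of benchmarking CPU performance via Mersenne primes can be illustrated with the classic Lucas–Lehmer primality test. This is a generic sketch of the technique, not the actual EOS Mechanics contract code, and the default exponent is an arbitrary choice:

```python
import time

def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime p, 2**p - 1 is prime iff
    iterating s -> s*s - 2 (mod 2**p - 1) from s = 4 for p - 2 steps
    ends at zero."""
    if p == 2:
        return True
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

def cpu_benchmark(p: int = 2203) -> float:
    """Wall-clock seconds for one Lucas-Lehmer run, usable as a crude
    proxy for a node's available CPU headroom."""
    start = time.perf_counter()
    is_mersenne_prime(p)
    return time.perf_counter() - start

print(is_mersenne_prime(13))  # True  (8191 is prime)
print(is_mersenne_prime(11))  # False (2047 = 23 * 89)
```

Running the same fixed workload on every BP and comparing wall-clock times yields the kind of box-plot data shown in Chart 6.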
Except for Bitfinex, EOS WIKI, and EOSHuobiPool, all 21 BPs have a low and consistent execution time, indicating a sufficient resource allocation to their activities as BPs. While it is interesting that two exchanges are amongst the least performant (i.e., perhaps least invested) BPs, all of them performed reasonably well on the metrics of block and round availability.
2.4 Attack resistance
The last assessment of EOS governance targets the (3) attack resistance of EOS. This attack resistance can manifest in various ways such as, for example, censoring attacks. Generally speaking, decentralised systems are supposedly more expensive to attack due to their lack of central points of failure.
However, circumstantial evidence points to significant consolidation of the EOS network. EOS New York, for example, reported that one entity registered six different BPs. Similarly, many BPs are voting for themselves through proxies. Some of these proxies are openly associated with a particular BP; for example, Huobi has 5 proxies with a total of 5mn EOS, BigOne has 15 proxies with a total of 1mn EOS, and Bitfinex has 13 proxies with a total of 40mn EOS. Additionally, it appears that BPs may operate up to 50 different covert proxies.
Another attack vector is of a geographical nature. One third of all block producers are based in China, and more than half of all block producers are based in Asia. There is thus a strong regional focus, similar to the geographical distribution of mining pools (c.f. Wang et al., 2019). Correlation analyses from EOSAuthority furthermore suggest that there are two major clusters: one centered around EOSAuthority and another around EOSHuobiPool. This insight mirrors the previously mentioned geographical divide.
Lastly, the issue of attack resistance generally links back to the chapter on voting patterns and in particular vote trading, as well as the influence of large token holders.
3. No immutable problems
EOS has a function called regproducer that is a “mutually agreed-upon guide” to enforce on-chain standards among block producers. One BP submitted a referendum to update this contract and thus raise the bar for BPs. As any change must be implemented with approval from at least 15/21 BPs and this update only received 13 out of 21 votes, it was destined not to get implemented.
However, large proxies may change the ranking of candidate block producers by re-allocating their votes. This is what happened in the previously described case and led two candidate block producers to move up the ranking, become BPs, and thus obtain voting rights.
Subsequently, it can be concluded that BPs may have the sole decision-making power, but are nonetheless dependent on votes to become and stay a BP.
Large proxies and accounts thus have considerable influence. Similarly, the company launching EOS, block.one, has over 96 million EOS and could theoretically use this anytime to vote and change the order of BPs8. To put this amount into perspective: block.one holds almost ten times as much EOS, as the proxy that moved the ranking of seven BPs and BP candidates. Similarly, the holdings of block.one and the next four largest EOS holders already amount to one quarter of the entire circulating supply.
Another promising approach to improving EOS governance could be to formalize the process of vote trading. By introducing a formal, transparent mechanism with a cost function for votes, it might be possible to reduce informal coordination by introducing diminishing returns on purchasing EOS votes. This would maintain the ability of large token holders to have more sway, but avoid the self-reinforcing cycle of further EOS consolidation. Alternatively, such a cost function could also be applied to the cost of casting a vote.
Other previously suggested ideas to improve EOS are:
Including a random shuffle to choose the 21 BPs out of a set of the 100 largest BP candidates.
Introducing a universal inflation based on the amount of staked EOS.
Introducing negative votes and/or a voting cap.
Introducing a proxy with block.one funds that votes according to community preferences for BPs.
Introducing BP diversity guidelines to restrict the amount of regional concentration.
These ideas are described in more detail by EOS Go.
“While Web2 was defined by philosophies like ‘Move Fast, Break Things,’ Web3 should be guided by mantras like ‘Do it the Right Way This Time.”
Andrew Keys, DARMA Capital
This inspirational appeal by Keys can, more pragmatically, be understood as a basic need to get the fundamental crypto-economics right: as soon as possible, and ideally before the inception of a network.
This being said, it remains to be seen if EOS can overcome its very own structural problems. Unfortunately, EOS’s vulnerability to Sybil attacks reduces transparency and thus makes it difficult to draw definite conclusions in regards to the voting patterns of BPs and BP-associated proxies.
Two rather unrelated issues that may, however, indirectly aggravate the situation of EOS governance relate to dApps on EOS. While EOS has extensive documentation for developers, only a few APIs are provided. The costly provision of APIs is completely voluntary and does not constitute any obligation of the BPs. Simultaneously, the number of EOS dApp users has been declining strongly over the course of the last six months. This being said, the upcoming beta launch of “block.one’s Facebook”, Voice, is widely treated as a milestone for general adoption.
While it is generally unclear to what extent collusion among the block producers does occur, circumstantial evidence points to a problematic consolidation of the network that appears to be rooted in the fundamentals behind EOS: a governance with an intransparent, poorly understood voting market, aggravated by the use of a dPoS consensus system with incomplete incentive allocations.
This problem is, however, not restricted to EOS alone, but appears to be an inherent problem of dPoS blockchains and is generally aggravated by custodial ownership of tokens (e.g., via exchanges). As the largest dPoS blockchain, EOS naturally encounters the highest amount of scrutiny and must identify and adopt pioneering solutions.
Buterin (2017). The Meaning of Decentralisation. Available online at: https://medium.com/@VitalikButerin/the-meaning-of-decentralisation-a0c92b76a274
ColinTCrypto (2019). Tweet. Available online at: https://twitter.com/ColinTCrypto/status/1172580098370953217?s=20
Coindesk (2018). EOS’ Blockchain Arbitrator Froze 27 Accounts. Available online at: https://www.coindesk.com/eos-blockchain-arbitrator-orders-freeze-of-27-accounts
Cointelegraph (2019). ICO Market 2017 vs 2018. Available online at: https://cointelegraph.com/news/ico-market-2018-vs-2017-trends-capitalization-localization-industries-success-rate
EOS Authority (2020). Tools. Available online at: https://eosauthority.com/schedule_first?network=eos.
EOS New York (2019). Tweet. Available online at: https://twitter.com/eosnewyork/status/1199813240307568641
EosRex (2020). Website. Available online at: https://eosrex.io/
Hacker et. al (2017). Corporate Governance for Complex Cryptocurrencies? A Framework for Stability and Decision Making in Blockchain-Based Organizations. Last revised at: 19 Sep 2019. Available online at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2998830
Hoskinson (2018). Tweet. Available online at: https://twitter.com/IOHK_Charles/status/1010820002533036033?s=20
Posner & Weyl (2018). Radical markets: Uprooting capitalism and democracy for a just society. Princeton University Press.
Song (2018). Why Bitcoin is Different. Available online at: https://medium.com/@jimmysong/why-bitcoin-is-different-e17b813fd947
Walch (2019). Deconstructing 'Decentralisation': Exploring the Core Claim of Crypto Systems. Crypto Assets: Legal and Monetary Perspectives (OUP, forthcoming 2019).
Wang et al. (2019). Measurement and analysis of the bitcoin networks: A view from mining pools. arXiv preprint arXiv:1902.07549. Available online at: https://arxiv.org/pdf/1902.07549.pdf
Whiteblock (2019). EOS: An Architectural, Performance, and Economic Analysis. Available online at: https://whiteblock.io/wp-content/uploads/2019/07/eos-test-report.pdf
Seven-day average with data from 05.02.2020 to 11.02.2020. The same methodology is applied for all data from Dapp.Review.↩
Even though EOS might have the fewest users among these three chains, this number is still well above average and reflects the high market capitalisation of the EOS blockchain.↩
While RAM was always priced and distributed via a smart contract using the ratio of available RAM to EOS tokens in the contract, CPU and NET have only become tradeable with EOS REX, the EOS Resource Exchange. Before that, the only way to optimize resource usage was a surplus allocation that allowed dApps to exceed their allocated resources and draw from unused resources.↩
This is based on circumstantial evidence, well represented in several Medium articles, as for example in Song (2018). The concept of decentralisation as a scale was only developed and popularized later on.↩
The Gini coefficient is indicative of the wealth inequality of a population by measuring the ownership of what is, by and large, the only store of value / means of payment within that population. As crypto-assets do, however, not constitute the only assets of their owners, and the population is difficult or impossible to estimate, the Gini coefficient is a poor measurement of wealth inequality in this context. While it does show the ownership structure of a crypto-asset in a highly accessible manner, few insights can be derived from it.↩
Data over the last 12 months is considered as of 10/02/2020. Data source: AlohaEOS.↩
Until now, 13/02/2020, b1 has always abstained from voting.↩
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00598.warc.gz
|
CC-MAIN-2023-40
| 31,165
| 137
|
https://arnoldlau.wordpress.com/category/dessert/
|
code
|
Category Archives: dessert
In the middle of a Pennsylvania farm, Premise Maid is a great break on a drive through Pennsylvania
Magnifico has some great ice cream and Italian ice.
Afternoon tea at Azabuya
Got some coconut drink at tianzifang
Always up for fresh egg waffles
After a great dinner at Hakkassan, we wrapped up downstairs at Robuchon
Great gelato served in chocolate dipped cone at Venchi
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251773463.72/warc/CC-MAIN-20200128030221-20200128060221-00127.warc.gz
|
CC-MAIN-2020-05
| 399
| 8
|
https://www.overclock.net/forum/6412437-post10.html
|
code
|
got my new side panels and face panels finished and picked them up.
there are actually 2x of each panel. whats great with how i have things set up is that i can use either air or water cooling relatively easily.
here are the same panels with the brass linings. whats next is to get the case powder coated.
started sleeving the PSU. finished the 8 pin and 1/6th of the 24pin
4 pins left for the 24 pin. man.. reminds me how much work individually sleeving each wire is.
will be finished by the next update. then gotta sleeve 2x 6pin PCIe connectors, 1x SATA cable, and maybe a couple of Molexes for LEDs.
and we have a test fit of the side panel... PERFECT!
* This Worklog post was generated using WorklogCreator - Version: 22.214.171.124
* Free Download: http://www.mod2software.com/worklogc...logcreator.zip
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144979.91/warc/CC-MAIN-20200220131529-20200220161529-00235.warc.gz
|
CC-MAIN-2020-10
| 804
| 9
|
https://www.cakeresume.com/companies/robert-walters/jobs/data-scientist-79b797
|
code
|
* Prototype network analyses, working with AI team members and PM to identify business value.
* Train and test supervised and unsupervised models for identified tasks, anywhere from logistic regression to neural networks and deep learning, or clustering methods.
* Identify and refine methodologies for clustering and trend detection.
* Develop scalable graph analyses of social media, news and other consumer and market data or any other scalable machine learning solutions.
* At least 3 years of software industry experience owning and driving Data Science or NLP projects.
* Experience with graph development and visualization tools (Gephi, GraphML, Neo4j, JanusGraph, etc), or in machine learning techniques applied to NLP, data mining and content discovery
* Experience writing production quality code in Python or Java.
* Experience with big data ecosystems like Hadoop/Spark.
* Master’s degree in relevant field (Computational Linguistics, Computer Science, NLP, etc.)
* Understanding of machine learning (including deep learning) algorithms and workflows
* Familiarity with libraries like Tensorflow, SparkMLlib, and Scikit-learn
* Experience with social media data
1. online technical test
2. onsite interview
3. English interview
Established in 1985, Robert Walters is a world-leading specialist professional recruitment consultancy and the core brand of the Group. Our clients range from the largest corporates world-wide through to SMEs and start ups. We recruit people for permanent, contract and interim roles across the world.
Robert Walters Taiwan established in 2011, we offer a highly professional and specialised service to candidates and clients alike and is proud of the long established track record we hold with many of the world’s leading institutions. Working closely with our offices in mainland China and Hong Kong, we are able to offer a one-stop shop service for your recruitment needs in Greater China.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991288.0/warc/CC-MAIN-20210518160705-20210518190705-00118.warc.gz
|
CC-MAIN-2021-21
| 1,936
| 17
|
https://forum.pycom.io/topic/4773/pymakr-is-gone
|
code
|
pymakr is gone
I don't know what to do. My pymakr plugin is installed, but it looks like it's gone.
I can't select the terminal window (Python is gone too, I'm not sure if it's related), it's not in the dropdown, and I can't run the commands.
I need to continue my work, but I'm stuck on this bug. Please help me.
Version: 1.33.1 (user setup)
OS: Windows_NT x64 10.0.17763
It's worth mentioning that you should turn off "Extensions: Auto Update" or it'll automatically put 1.1.0 back! This is also probably why it suddenly happened in the first place. I had to restart VS Code once I changed this setting too, as it still auto updated a couple of times for me before I did.
(Preferences > Settings > Features > Extension Viewlet > Auto Update; you can also search for "auto update" to get to this.)
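For reference, the same thing can be done directly in settings.json; a minimal fragment, assuming VS Code's standard auto-update key:

```json
{
  "extensions.autoUpdate": false
}
```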
Thanks, that solved it.
(Should ask first next time, I even reinstalled my pc... )
@tttadam There is an ongoing issue with pymakr:
I've solved it temporarily by installing version 1.0.7
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643462.13/warc/CC-MAIN-20230528015553-20230528045553-00755.warc.gz
|
CC-MAIN-2023-23
| 970
| 12
|
https://github.com/hhallman/photoupload
|
code
|
Web control for uploading picture files to the www.minakort.com website. It can be adapted to upload to your own photo site instead. Designed to provide a very smooth user experience, where the user can annotate pictures while the upload is in progress.

About the repository: This repository was first created in September of 2009, no less than four years after the project was last touched. So please excuse that some generated files are included. The generated files should be removed and some cleaning up is bound to be needed. However, I don't have a build environment for these files right now. I created the repository to get the files under control, instead of having them lying around. (Note that I did have version control on the files, but this has been lost, as the version control resided in MS SQL Server in some proprietary format.)

The initial commit was created with the commit date changed to that of the last-changed source file, using the command:

env GIT_AUTHOR_DATE="`ls -rt *.cpp|tail -1|xargs date -u -r`" git commit -m "Old sources retaining old change-dates of last changed file: `ls -rt *.cpp|tail -1`, actual commit date: `date`"

All the change dates from before the commit are recorded in the file .changedates
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824217.36/warc/CC-MAIN-20160723071024-00055-ip-10-185-27-174.ec2.internal.warc.gz
|
CC-MAIN-2016-30
| 1,285
| 4
|
https://thechainlink.org/group/bikewinter/forum/topics/waterproof-gloves-suggestions?page=2&commentId=2211490%3AComment%3A765932&x=1#2211490Comment765932
|
code
|
I've been riding around with simple, knitted gloves, and on days like today when it's wet/raining, well, yikes!
I've never been into the idea of biking gloves - I'm cheap? - but obviously I need some sort of waterproof glove. Be it a thin waterproof layer that I can put over my other gloves, or something thicker/warmer for when the cold kicks in, I'll take either. Both! All I know is that I need gloves that aren't my snowboarding gloves, because I need flexibility, but again, something that's waterproof/weatherproof!
Anyone have any suggestions?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663006341.98/warc/CC-MAIN-20220527205437-20220527235437-00278.warc.gz
|
CC-MAIN-2022-21
| 542
| 3
|
https://merl.com/news/talk-20200507-1312
|
code
|
Date & Time:
Thursday, May 7, 2020; 12:00 PM
In the context of science, the well-known adage "a picture is worth a thousand words" might well be "a model is worth a thousand datasets." Scientific models, such as Newtonian physics or biological gene regulatory networks, are human-driven simplifications of complex phenomena that serve as surrogates for the countless experiments that validated the models. Recently, machine learning has been able to overcome the inaccuracies of approximate modeling by directly learning the entire set of nonlinear interactions from data. However, without any predetermined structure from the scientific basis behind the problem, machine learning approaches are flexible but data-expensive, requiring large databases of homogeneous labeled training data. A central challenge is reconciling data that is at odds with simplified models without requiring "big data". In this talk we discuss a new methodology, universal differential equations (UDEs), which augment scientific models with machine-learnable structures for scientifically-based learning. We show how UDEs can be utilized to discover previously unknown governing equations, accurately extrapolate beyond the original data, and accelerate model simulation, all in a time and data-efficient manner. This advance is coupled with open-source software that allows for training UDEs which incorporate physical constraints, delayed interactions, implicitly-defined events, and intrinsic stochasticity in the model. Our examples show how a diverse set of computationally-difficult modeling issues across scientific disciplines, from automatically discovering biological mechanisms to accelerating climate simulations by 15,000x, can be handled by training UDEs.
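The core UDE idea, augmenting a known mechanistic model with a learnable term recovered from data, can be illustrated with a deliberately tiny sketch. (The talk's actual tooling is Julia-based; here a simple polynomial fit stands in for the neural-network term, and the model constants are illustrative assumptions, not from the talk.)

```python
import numpy as np

# Known physics: dx/dt = r*x (exponential growth).
# True system secretly also has a saturation term -b*x^2 (logistic growth).
r_known, b_true = 1.0, 0.5

# "Observed" derivative data generated from the true model.
x = np.linspace(0.1, 2.0, 50)
dxdt_observed = r_known * x - b_true * x**2

# UDE-style step: learn only what the known model misses.
residual = dxdt_observed - r_known * x

# Learnable structure: residual ~ c2*x^2 + c1*x (polynomial basis as a stand-in
# for a neural network).
A = np.column_stack([x**2, x])
(c2, c1), *_ = np.linalg.lstsq(A, residual, rcond=None)
print(c2, c1)  # c2 recovers -b (about -0.5); c1 is about 0
```

The point is data efficiency: only the missing interaction is learned, while the validated scientific structure is kept fixed.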
Christopher Rackauckas is an Applied Mathematics Instructor at the Massachusetts Institute of Technology and a Senior Research Analyst at University of Maryland, Baltimore, School of Pharmacy in the Center for Translational Medicine. Chris's research is focused on numerical differential equations and scientific machine learning with applications from climate to biological modeling. He is the developer of over many core numerical packages for the Julia programming language, including DifferentialEquations.jl for which he won the inaugural Julia community prize, and the Pumas.jl for pharmaceutical modeling and simulation.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571959.66/warc/CC-MAIN-20220813142020-20220813172020-00521.warc.gz
|
CC-MAIN-2022-33
| 2,376
| 4
|
http://www.princeton.edu/geosciences/tromp/people/
|
code
|
Jeroen Tromp joined the Department of Geosciences in July 2008 as Blair Professor of Geology and Professor of Applied & Computational Mathematics. He comes from the California Institute of Technology, where he was the Director of the Seismological Laboratory and McMillan Professor of Geophysics. From 1992 to 2000, he was a faculty member of the Department of Earth & Planetary Sciences at Harvard University. His Ph.D. (1992) and M.S. (1990) in Geophysics are from Princeton University, and he received his B.Sc. (1988) in Geophysics from the University of Utrecht in the Netherlands, of which he is a native.
Tromp’s primary research areas are in Theoretical & Computational Seismology. Research topics include: surface waves, free oscillations, body waves, seismic tomography, numerical simulations of 3-D wave propagation, and seismic hazard assessment. In collaboration with the late Princeton Geosciences faculty member Tony Dahlen he published the book Theoretical Global Seismology.
Ebru joined Jeroen Tromp’s group as a postdoctoral research associate in September 2009. She obtained a Ph.D degree in Seismology from Utrecht University, the Netherlands, under the supervision of Jeannot Trampert, and MSc/BSc degrees in Geophysics from Istanbul Technical University, Turkey. Ebru's research interests lie in computational seismology, more specifically focusing on full waveform tomography at global and regional scales. During her PhD, using 3-D numerical wave simulations, she investigated crustal effects in global mantle tomography and worked on defining new misfit functions for full-waveform tomography such as instantaneous phase and envelope measurements. Ebru's current project is dedicated to obtaining a global tomographic model using adjoint techniques by inverting crust and upper-mantle together to avoid any bias introduced in upper-mantle images due to "crustal corrections."
Collaborations: Hejun Zhu, Daniel Peter, Yang Luo, and Jeroen Tromp.
Shravan Hanasoge received his doctorate from Stanford University in 2007. He continued on for a brief stint as a postdoctoral scholar, and spent a few months visiting Monash University, Australia and Indian Institute of Astrophysics, Bangalore. He then pursued a joint appointment between Max-Planck Institute for Solar System Research in Germany and the Department of Geosciences, Princeton University.
His primary research area is in helioseismology - specifically using adjoint methods to invert for the structure and dynamics of 3D phenomena such as sunspots, convection etc. He also maintains an active interest in turbulence and convection, numerical methods, and terrestrial seismology.
Collaborations: Aaron Birch (CORA/NWRA), Thomas Duvall, Jr. (NASA/Stanford University), Laurent Gizon (Max-Planck Instiute), Steven Orszag (Yale University), Katepalli Sreenivasan (New York University), and Jeroen Tromp (Princeton University).
Matthieu joined Jeroen Tromp’s group as a postdoctoral research associate in January 2013. He received a Ph.D. in mathematics from University Paris Nord, in France, under the supervision of Pr. Claude Basdevant, while studying how to accelerate computational fluid dynamics algorithms on GPU at ONERA. Matthieu holds a Ms. Sc. in computer sciences from ENSEIRB Bordeaux, France, with a specialization in high performance computing.
His research focuses on how to accelerate science, in particular numerical simulations, on hardware architectures at the cutting edge of technology.
Collaborations: Ebru Bozdag, Wenjie Lei, Herurisa Rusmanugroho, James Smith, and Jeroen Tromp.
Hom N. Gharti joined Jeroen Tromp's group as a postdoctoral research associate starting in January 2012. Gharti received a Ph.D. in geophysics from the University of Oslo and NORSAR, Norway, and a M.Sc. in earthquake engineering from University of Tokyo, Japan. Gharti holds a B.E. degree in civil engineering from Tribhuvan University, Nepal. His primary research interests include computational (geo)mechanics including glacial rebound, wave propagation, inverse problems, and microseismicity.
Collaborations: Daniel Peter, Ebru Bozdag, Hejun Zhu, Yang Luo, and Jeroen Tromp.
Heru joined Jeroen Tromp’s group as a postdoctoral research associate in January 2013. He received his Ph.D. (2011) in Geosciences from the University of Texas at Dallas. He received his M.Sc. (2005) and B.Sc. (2003) in Geophysics from Institute of Technology Bandung, Indonesia.
His primary research area is in Exploration Seismology including wave propagation, imaging, and inverse problems. During his Ph.D., he researched anisotropic 3D, 9-C seismic modeling and inversion, under the supervision of George McMechan.
Collaborations: Matthieu Lefebvre, Ryan Modrak, and Jeroen Tromp.
Wenjie Lei is a graduate student at the Department of Geosciences, Princeton University. He joined Prof. Jeroen Tromp's group in 2012. Wenjie has studied at the University of Science and Technology of China from 2008, and obtained his B.S. degree in Geophysics in 2012. His primary research interests include global seismology, tomography, and seismic wavefield simulation.
Collaborations: Jeroen Tromp, Ebru Bozdag, James Smith, Hejun Zhu.
Ryan Modrak has been a graduate student in Jeroen Tromp’s group since Fall 2010. Prior to joining Princeton, he received B.Sc. degrees from Penn State University in math and geosciences and worked at Los Alamos Laboratory. His research interests include optimization and data inversion applied to both seismology and exploration geophysics. He is currently working in seismic interferometry / noise tomography and has worked in the past on joint inversion and event location problems.
Collaborations: Jeroen Tromp, Yang Luo (Tromp group), Hejun Zhu (Tromp group), Daniel Peter (Tromp group), Monica Maceira (Los Alamos), and Stephen Arrowsmith (Los Alamos).
James Smith is a graduate student in the Department of Geosciences. He joined Jeroen Tromp's group in August 2012. James received a B.S. in geosciences from Colorado State University and a B.A. in mathematics from Knox College. His research interests are in seismic imaging and inverse problems. He is currently working with Ebru Bozdag and Jeroen Tromp on the global tomographic model.
Collaborations: Jeroen Tromp, Ebru Bozdag, Wenjie Lei, Derek Schutt (CSU)
Hejun Zhu is a Ph.D. candidate in Geophysics at the Department of Geosciences. He has a M.A. in geophysics from Princeton, a M.S. in geophysics from Peking University, Beijing, China, and a B.S. in geosciences from Sun-Yat-Sen University, Guangzhou, China. Since 2008, Hejun's research interests include imaging and tomography, seismic wavefield and dynamic rupture simulations at Princeton. Hejun was a research assistant at the Department of Geophysics, at Peking University, China, from 2005 to 2008. At that time, his studies included seismic wavefield simulation and dynamic rupture by using finite difference method.
Collaborations: Jeroen Tromp, Christina Morency, Daniel Peter, Ebru Bozdag, Yang Luo, Shravan Hanasoge, and Ryan Modrak. Before coming to Princeton University, Hejun worked with Prof. Xiaofei Chen at Peking University.
Visiting Student Research Collaborator
Rafael Abreu is a VSRC (visiting student research collaborator) in the Geosciences Department, Princeton University. Abreu is currently a graduate student in the Andalusian Institute of Geophysics at the University of Granada, Spain. He holds a B.S. degree in petroleum engineering and M.Sc. degree in stochastic models from the Central University of Venezuela. He also holds a M.Sc. degree in geophysics and meteorology from the University of Granada. Before joining the Andalusian Institute of Geophysics in Spain, he worked at FUNVISIS (Venezuelan Foundation for Seismological Research) for two years on seismic microzoning projects, including numerical modeling and field work.
Abreu's current research focuses on numerical simulation of seismic wave propagation, using spectral elements, finite differences and complex variable methods, source inversions and rotational seismology.
Collaborations: Daniel Stich (University of Granada), Michael Schmitz (FUNVISIS), Luis Dalguer (ETH), Stephan Nielsen (INGV), Apostolos Papageorgiou (University of Patras) and Jeroen Tromp (Princeton University).
Huub has been a visiting researcher in the Princeton seismology group since summer 2010. He is currently employed by ION Geophysical as a senior research geophysicist. In this role he is actively seeking collaborations with the global seismology community at universities, since global seismology and exploration seismology seem to be converging at an ever increasing rate; due to the advance of array-based acquisition methodology in global seismology and passive seismic monitoring in exploration seismology, both fields are currently overlapping more than ever. Huub's visit to Princeton University is an example of the fact that converging research fields can benefit from converging communities.
Huub obtained his M.Sc. in geophysics from Utrecht University in 1996, and subsequently, worked from 1997-2001 as a field seismic analyst, data processing analyst, staff geophysicist, and research geophysicist for Western Geophysical from 1997-2001. He obtained his Ph.D. in geophysics from the Center for Wave Phenomena (CWP) at Colorado School of Mines under the guidance of Roel Snieder. Subsequently he was a Hess postdoctoral research fellow at Princeton University where he collaborated with the late Tony Dahlen, Guust Nolet, and Ingrid Daubechies. His research interests span a wide range of topics. Currently, he is mostly working on surface wave inversion, seismic interferometry and reciprocity theorems, adjoint tomography, and passive seismic monitoring.
Collaborations: Jeroen Tromp (Princeton University), Matthew Haney (Alaska Volcano Observatory), Kees Wapenaar (Delft University of Technology), and Roel Snieder (Center for Wave Phenomena/Colorado School of Mines).
Former Postdoctoral Scholars:
Richard Allen, Berkeley
Juliette Artru, CNES
Emanuele Casarotti, INGV, Rome
Dimitri Komatitsch, CNRS, Marseille
Swaminathan Krishnan, Caltech
Carene Larmat, LANL
Konstantin Latychev (with Prof. Jerry Mitrovica), Toronto University
Hong Ma (with Prof. Adam Dziewonski), Bank Boston
Alessia Maggi, University of Strasbourg
Christina Morency, LLNL
Tarje Nissen-Meyer, ETH, Zürich
Daniel Peter, ETH, Zürich
Anne Sieminski (with Prof. Jeannot Trampert), University of Grenoble
Christiane Stidham (with Prof. John Shaw), SUNY
Peter Süss (with Prof. John Shaw), unknown
Mark Tamisiea (with Prof. Jerry Mitrovica), Proudman Oceanographic Laboratory
Cédric Vonesch, University of Lausanne
Zheng Wang, unknown
Ying Zhou, Virginia Tech
Former Graduate Students:
Min Chen, Rice University
Vala Hjórleifsdóttir, UNAM, Mexico
Miaki Ishii (with Prof. Adam Dziewonski), Harvard
YoungHee Kim (with Prof. Rob Clayton), Seoul University
Qinya Liu, University of Toronto
Yang Luo, Chevron
Carl Tape, University of Alaska, Fairbanks
Former Visitors and Friends:
Karolin F. Elcomert, Istanbul Technical University
Irene Molinari, University of Bologna
Federica Magnoni, University of Bologna
Matthias A. Meschede, LMU München
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701153213/warc/CC-MAIN-20130516104553-00033-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 11,231
| 63
|
https://www.ibm.com/developerworks/community/blogs/messaging/date/201502?lang=en
|
code
|
Updated 23rd November 2015 to reflect current details.
We’re pleased to announce the availability of a technical preview of an MQ Advanced for Developers image for Docker. The source code for this image is available on GitHub. This allows you to run an MQ queue manager inside a Docker container, which, as those familiar with Docker will know, can be useful for several reasons:
- MQ is running inside a container managed by the Linux kernel, which helps you to isolate MQ from the rest of your system:
- Process isolation – all the processes associated with MQ are run in their own process space, and can’t see any other processes running on your server
- Resource isolation – you can limit the amount of memory and CPU you allocate to a container
- Dependency isolation - all software which MQ depends on is included in the MQ image, except the Linux kernel itself. You don’t have to worry about having other incompatible software installed, as the MQ processes will see their own private filesystem. This also means that even though the MQ image uses an Ubuntu Linux filesystem, you can run it on a server with a different Linux distribution (as long as it has a kernel capable of running Docker).
- The efficient use of images and containers can be very helpful with continuous delivery (see Understanding Docker for more information).
Check out this short demo video.
Building an image and running a queue manager
After extracting the code from the GitHub repository, you can build the image using the following command:
sudo docker build --tag mq-for-developers ./8.0.0/
This build step downloads a minimal Ubuntu Linux image, then downloads and installs MQ for Developers. Next, you're going to have to apply your own configuration to allow secure access. The recommended way to do this is to create your own Docker image, using this image as a parent. The first thing to do is to create a new directory, and add a file called config.mqsc, with the following contents:
DEFINE CHANNEL(PASSWORD.SVRCONN) CHLTYPE(SVRCONN)
SET CHLAUTH(PASSWORD.SVRCONN) TYPE(BLOCKUSER) USERLIST('nobody') DESCR('Allow privileged users on this channel')
SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) DESCR('BackStop rule')
SET CHLAUTH(PASSWORD.SVRCONN) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(CHANNEL) CHCKCLNT(REQUIRED)
ALTER AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) AUTHTYPE(IDPWOS) ADOPTCTX(YES)
REFRESH SECURITY TYPE(CONNAUTH)
These MQSC commands were taken from Morag Hughson's recent blog post. You can, of course, apply any security configuration, but this simple user/password authentication is a good place to start. The next thing to do is to create a file called Dockerfile, with the following contents:
FROM mq-for-developers
RUN useradd alice -G mqm && \
    echo alice:passw0rd | chpasswd
COPY config.mqsc /etc/mqm/
You can then build your custom Docker image using the following command (where "." is the directory containing the two files we've just created).
sudo docker build -t mymq .
Docker then creates a temporary container using that image, and runs the remaining commands. The RUN command adds a user named "alice" with password "passw0rd", and the COPY command adds the config.mqsc into a specific location known by the parent image.
You can now run your new customized image as follows:
sudo docker run \
  --env LICENSE=accept \
  --env MQ_QMGR_NAME=QM1 \
  --volume /var/example:/var/mqm \
  --publish 1414:1414 \
  --detach \
  mymq
This command creates a new container, with the disk image we just created. Your new image layer didn't specify any particular command to run, so that has been inherited from the parent image. The parent's entrypoint (code available on GitHub) creates a queue manager, starts it, creates a default listener, and then runs any MQSC commands from /etc/mqm/config.mqsc. So what are those parameters doing?
- The first --env parameter passes an environment variable into the container, which acknowledges your acceptance of the IBM license for MQ Advanced for Developers. You can also set it to "view" to view the license.
- The second --env parameter sets the queue manager name to use.
- The --volume parameter tells the container that whatever MQ writes to /var/mqm should actually be written to /var/example on the host. This is so that we can easily delete the container later, and still keep any persistent data. It also makes it easier to view logs.
- The --publish parameter maps ports on the host system to ports in the container. The container runs by default with its own internal IP address, which means that you need to specifically map any ports that you want to expose. In this case, that means mapping port 1414 on the host to port 1414 in the container.
- The --detach parameter runs the container in the background.
You can view running containers using the docker ps command. You can view the MQ processes running in your container using the docker top command.
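As a quick sanity check once the container is up, you can probe the published listener port from the host. A minimal sketch using only the Python standard library (the helper name is our own, not part of the MQ tooling, and assumes the port mapping shown above):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After `docker run ... --publish 1414:1414 ...`, the MQ listener should be
# reachable on the host's port 1414:
print(port_open("localhost", 1414))
```

This only confirms the port mapping; for an end-to-end check you would still connect a real MQ client over the PASSWORD.SVRCONN channel.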
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986685915.43/warc/CC-MAIN-20191018231153-20191019014653-00263.warc.gz
|
CC-MAIN-2019-43
| 4,891
| 37
|
https://www.manosphere.tv/watch/are-modern-women-hopelessly-corrupt-dr-shawn-t-smith_b6gzTwqqB6MWlUp.html
|
code
|
Are Modern Women Hopelessly Corrupt? – @Dr. Shawn T. Smith
Dr. Shawn T. Smith returns to The 21 Convention stage at 21 Summit in Orlando Florida. This is a free preview of his latest speech, now playing early, ad-free, and censorship-free at 21 University. Watch the full video now with a free 30 day trial https://21university.com/progr....ams/shawn-smith-rela
Exclusive to the 21 Studios Locals (like Patreon) is the full audio of this speech, join and listen to it now at https://21studios.locals.com/p....ost/1710543/vet-the-
#Relationships #Psychology #Feminism #RelationshipAdvice
Download the free 21 University app to watch our content early, ad-free, and censorship free. iPhone link https://apple.co/3qOhbGX Android link https://bit.ly/3hGM7Vl
Get on The 21 Convention VIP list https://the21convention.org
Make Women Great Again℠ https://22convention.com
Positive Videos for Men https://21university.com
Help fight woke feminist big tech censorship:⠀
1) Subscribe and click the bell.
2) Like this video.
3) Comment to feed the algorithm.
4) Share on your social.
5) Follow us on free speech platforms
6) Buy merch or donate to support https://www.the21store.com
Follow ADJ on Twitter https://twitter.com/beachmuscles
Follow us on Spotify https://open.spotify.com/show/....1kVFMU2dysjBwase4W7S
Follow us on Bitchute https://www.bitchute.com/21studios/
Follow us on Odysee https://odysee.com/@21:7
Follow us on Gab https://gab.com/21studios
Follow us on Twitter https://twitter.com/21Convention
Follow us on Instagram https://www.instagram.com/21Convention/
Follow us on Facebook https://www.facebook.com/the21convention
Subscribe to 21 Studios https://t21c.com/12YTr3X
Become a channel member: https://t21c.com/21sytm
Subscribe to Red Man Group: https://t21c.com/rmgsub21
Direct donation to 21 Studios https://t21c.com/donate21
Support our sponsors: https://21studios.com/sponsor
Follow us: https://21studios.com/connect
#MGTOW #Manosphere #MensRights #RedPill #men #man #menshealth
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00553.warc.gz
|
CC-MAIN-2022-40
| 1,997
| 30
|
http://it.slashdot.org/story/07/11/20/2139227/dan-geer-on-trusting-pcs-in-botnets
|
code
|
walk*bound writes "In an essay published by ZDNet, security scientist Dan Geer has an interesting proposal for e-commerce sites to evaluate the trustworthiness of clients that try to connect. Assume that end users either always say 'Yes' or always say 'No' to security dialog boxes. Then make the decision one of two ways: 'When the user connects, ask whether they would like to use your extra special secure connection. If they say "Yes," then you presume that they always say "Yes" and thus they are so likely to be infected that you must not shake hands with them without some latex between you and them. In other words, you should immediately 0wn their machine for the duration of the transaction — by, say, stealing their keyboard away from their OS and attaching it to a special encrypting network stack all of which you make possible by sending a small, use-once rootkit down the wire at login time, just after they say "Yes."'"
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709101476/warc/CC-MAIN-20130516125821-00011-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 937
| 1
|
https://www.millioncenters.com/ernakulam/fitness-centers-in-jasola----
|
code
|
Tell us what you want to Learn
Avail 90% Discount on all products on firstcry.com
only for new users, and use same number for registration to get the offer.
“How do I make money wit ...read more
“When was the last time ...read more
7 Common Mistakes by Learning Centres &a ...read more
If you haven't been living under a r ...read more
Being a mother is no easy task; some say ...read more
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206133.46/warc/CC-MAIN-20200922125920-20200922155920-00600.warc.gz
|
CC-MAIN-2020-40
| 393
| 8
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/harryadel/meteor-galaxy-roadmap-22hf
|
code
|
More great news hits the Meteor community: Meteor Galaxy, the official platform for hosting Meteor applications, gets a roadmap.
The new plan features:
Native app build and publish
Deploy from Git push
New Pricing Plan
They took into consideration many of the suggestions provided on the Meteor forums.
With a plan for Meteor and Galaxy one can only anticipate the awesome future that awaits us!
Top comments (1)
Is there any data on Meteor Galaxy's income, yearly or otherwise?
I'm curious how the Meteor or MDG group growth is progressing whether I'll be a big fan as usual ;)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652149.61/warc/CC-MAIN-20230605153700-20230605183700-00660.warc.gz
|
CC-MAIN-2023-23
| 585
| 10
|
https://www.moddb.com/news/fuzzy-horde-ludum-dare-33-you-are-the-monster-entry
|
code
|
This weekend was the 33rd iteration of the three-times-yearly Ludum Dare competition. Participants spend a couple of weeks submitting, slaughtering and ultimately picking themes, until one single theme is left. That theme gets announced and then developers have 48 hours to make a game based on that theme, or 72 for the Jam version. For the competition, you must work solo and develop all assets during the 48 hours (it's pretty brutal), while the Jam lets you use pre-made assets and work in teams. I'm obviously hardcore so for the third time, I entered the competition. In true me-style, my monsters were not towering, huge world-eaters, nor were they creepy undead beings. No, they were cute fuzzballs instead.
These cute little fuzzballs, called Fuzzies, are your minions.
Fuzzy Horde: A reverse tower defence game
The basic idea behind the game is that you're in a tower defence game, but you play as the waves of monsters. It's an idea I'd really like to flesh out and refine, because I haven't seen it very much elsewhere. Your Fuzzies live under the rule of Lord Fuzzbon, and the aim is to make it past the Raiders' defences in order to steal the treasure. Y'know, because I definitely didn't forget to put the treasure sprites at the end of the level. In response to some of the criticisms of my previous Ludum Dare entry, I made this one a lot harder (maybe too hard, but everyone loves a challenge, right?) and there's no confusion about ammo, because there is no ammo (the purple variety of Fuzzy - Ballistic Fuzzy - can shoot, but has infinite ammo). I think this entry falls short of the previous one in terms of art; although I prefer the individual pieces of art in Fuzzy Horde, I feel like I Will Be Happy had a much more full and well-rounded world in terms of its art direction.
Textures are more detailed and larger than I Will Be Happy, but there are fewer of them.
At the start of each wave, you are given a quota of Fuzzies and can freely place them on a 6x6 grid. This allows you some degree of freedom with your tactics - maybe I send in my TNT Fuzzies first to blow the crap out of the turrets, or maybe my Eaty Fuzzies should go first and soak up some bullets while I shoot with my Ballistic Fuzzies from a distance. Once the wave has started, you control all Fuzzies simultaneously; moving left moves all Fuzzies left, and attacking with one Fuzzy means you're attacking with all of them. This means you'll need to watch out when using TNT Fuzzies to make sure you don't blow holes in your base when you only meant to shoot with a Ballistic Fuzzy. The movement setup opens up some interesting tactics, such as leaving some barriers intact so that some Fuzzies can walk into them and let others go on ahead.
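The shared-control scheme described above can be sketched in a few lines. This is purely illustrative Python, not the game's actual source - the function name, data layout, and barrier handling are all my assumptions:

```python
# Illustrative sketch of the "one input moves every Fuzzy" control scheme.
# Not the game's real code; names and grid layout are assumptions.
def move_all(fuzzies, dx, dy, blocked):
    """Apply one movement command to every Fuzzy at once."""
    for f in fuzzies:
        nx, ny = f["x"] + dx, f["y"] + dy
        if (nx, ny) not in blocked:   # a Fuzzy facing a barrier stays put
            f["x"], f["y"] = nx, ny

fuzzies = [{"x": 0, "y": 0}, {"x": 1, "y": 0}]
move_all(fuzzies, 1, 0, blocked={(2, 0)})
# The first Fuzzy advances; the second is held back by the barrier at (2, 0).
```

This is also how the barrier tactic works: a blocked Fuzzy simply stays behind while the rest move on.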
What even is a tutorial?
Once you've deployed your Fuzzies, you'll be tasked with avoiding and destroying various turrets placed along the way between Fuzzy Base and the treasure, which is definitely there at the end of the level. There are two kinds of turret - one that casts a blue search laser and shoots the instant it sees a Fuzzy with a short cooldown afterwards, and one that locks onto a Fuzzy from far away and shoots if it stays locked on for long enough. There are also landmines that blow up a Fuzzy the moment it walks into their explosion radius. Sometimes it's a useful strategy to walk into the mines with an Eaty Fuzzy so you can rush ahead with hordes of TNT Fuzzies next wave and not have to worry about more valuable Fuzzies being destroyed. Besides, Eaty Fuzzies are disposable.
You only have a limited number of waves to steal the treasure.
You have 16 waves of Fuzzies at your disposal, with later waves generally gaining more Fuzzies to work with, and the aim is to steal the treasure while ensuring the Raiders have the lowest score possible; Raiders earn points for every Fuzzy they kill and every intact turret they have at the end. I'd really like to build upon this idea in many ways, first of all by improving the level design and adding more varieties of Fuzzy. I'd have Armoured Fuzzies that move slow but have high health, Fast Fuzzies that run quickly, Barrier Fuzzies that act as a blockade to hold some Fuzzies back - the potential variety is endless. I'd also have more enemy types, as the turrets feel pretty generic - maybe if I could find a way to add some kind of personality to the turrets somehow, perhaps by having Raider characters who act functionally like turrets, I could then flesh out the story surrounding the Fuzzy-Raider rivalry and explain why both sides want the treasure so much. I'd also put a lot more effort into the sound design, especially since there is such a lack of audio in this version (I did have a bunch of voice recordings for the Fuzzies, but I ran out of time to properly implement them).
I also need to improve the logo.
While I'm away thinking up ideas for improving the competition version, you can play and rate the mostly-coherent masterpiece (ahaha) on the Ludum Dare site. Check out all the other games entered in the competition too - after all, they're all free and there's bound to be a lot of talent! You can also see the timelapse video I made using Chronolapse of almost the whole 48 hours, minus sleep and breaks, condensed into 11 minutes.
https://conference.unri.ac.id/index.php/unricsce/article/view/278
Training on developing online learning media using the Wix application for volunteers of the Rumah Impian (Dream House) Foundation, Yogyakarta
This Community Service (PkM) program aims to improve the knowledge and skills of the volunteers of the Dream House Foundation in Yogyakarta in assisting street children during online learning, covering basic knowledge about Wix and the steps to create, use, and develop Wix as an online learning medium. The activity was implemented using training methods as a form of Participatory Learning and Action, with the following stages: (1) counseling on the benefits of Wix to support online learning, (2) training and practice in creating Wix as an online learning medium, (3) FGD, (4) creating Wix sites in groups, (5) group work mentoring, (6) result presentation and improvements, and (7) evaluation. Evaluation ran from the beginning of the activity, with attendance, participation in each stage of the activity, and the presentation of the Wix site design as criteria. The evaluation results show that 90% of the participants understood and could design a Wix site to develop web-based online learning media. The Wix sites created according to the needs of the children in the assisted areas followed the Wix site developed by the facilitator team and support the availability of online learning materials for studying from home.
Copyright (c) 2021 Christmastuti Nur, Arida Susyetina, Rama E Darmayanan, Karolas Wijaya
This work is licensed under a Creative Commons Attribution 4.0 International License.
http://www.codeguru.com/cpp/v-s/48/
MSBuild is one of the major new features in Visual Studio .NET 2005. Discover the motivation for MSBuild, how it works, and how Visual C++ developers can get their hands on it.
Latest Visual Studio Articles - Page 9
A "how to" guide for creating a custom C++ appwizard using the IDTWizard interface.
An easy-to-use tool to find the description of various error codes, especially the ones returned by Platform SDK APIs, but also extendable for application-specific codes.
Improvements to the IDE are one of the givens in any new release of Visual C++; hence, they are often overlooked. Take a closer look at some of the new features that the Visual C++ 2005 IDE delivers.
Learn about a simple class that shows you how to center text vertically in a single-line edit control.
Visual C++.NET supports the automatic detection of stack-based buffer overruns through the use of the /GS compiler switch. Learn why stack-based buffer overruns are so serious, and how /GS and other Visual C++ settings can combat them.
Latest CodeGuru Developer Columns
Become more proficient on the usage of statements to control the flow of execution through a C++/CLI application.
Learn how to use .NET code to configure email. There's one "gotcha," but it's thoroughly explained.
Visual Basic gives you plenty of tools to work with external windows. Here, we manipulate Notepad.
Have you ever wanted to control your garage door from your smartphone? Here is your chance. Get your hands dirty with an Arduino Garage Door Controller.
https://docs.bmc.com/docs/bcm2008/hardware-inventory-attributes-930382776.html
Hardware Inventory Attributes
Hardware inventory results will vary depending on the operating system installed on the managed device, that is, Windows, UNIX or Mac OS, and, of course, on the administrator's choice. When you select one of the objects all its properties will be displayed in tabular format in the right window pane.
The most commonly displayed objects with some examples of their properties are the following:
Displays information about the BIOS, such as the name and manufacturer, the installable languages, the status, version or release date, and so on.
Displays information about the Cache Memory, such as associativity, block size, installed size, level and location, purpose and write policy, and so on.
Displays information about the CDROM Drive, such as availability, drive, ID, media type, status and system name, and so on.
Displays information about the Desktop Monitor, such as display type, name, screen width and height, status and system name, and so on.
Displays information about the Disk Drive, such as caption, index, interface type, media type, SCSI bus, sectors per track, size, status or the total number of cylinders, and so on.
Displays information about the Display Configuration, such as the device name, the display flags and frequency, dither type, the driver version or specification version, and so on.
Displays information about the Floppy Drive, such as the manufacturer name, the status or system name, and so on.
Displays information about the Keyboard, such as the layout, the number of function keys, the power management supported or the status, and so on.
Displays information about the Logical Disk, such as drive and media type, system name, file system, free space, size, volume name and serial number, and so on.
Displays information about the Motherboard Device, such as availability, caption, primary and secondary bus type and the system name, and so on.
Mouse / Pointing Device
Displays information about the Mouse or Pointing Device, such as the device interface, manufacturer, number of buttons, pointing type, status and system name, and so on.
Displays information about the Network Adapter, such as the adapter type, index, MAC address, product and service name, and the time of the last reset, and so on.
Displays information about the Parallel Port, such as availability, caption, operating system - autodiscoverable, supported protocols and system name, and so on.
Displays information about the Physical Memory, such as the bank label, capacity, device locator, form factor, memory type and type details, and so on.
Displays information about the Printers attached to the device, such as attributes, availability, default priority, driver name, location, print processor, status and vertical resolution, and so on.
Displays information about the Processor, such as architecture, CPU status, L2 cache size, load percentage, processor type, role, socket designation and stepping, and so on.
Displays information about the Sound Device, such as availability, caption, manufacturer, name and status, and so on.
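As an aside, a few of these attributes can be sampled on any machine with Python's standard platform module. This sketch is illustrative only and is not part of BMC Client Management:

```python
# Illustrative hardware/OS summary using only the Python standard library.
import platform

inventory = {
    "System Name": platform.node(),
    "Operating System": platform.system(),
    "Processor Architecture": platform.machine(),
    "Processor": platform.processor(),
}
for attribute, value in inventory.items():
    print(f"{attribute}: {value}")
```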
https://discuss.circleci.com/t/ssh-permissions-not-working-on-circleci-v2-0/15962
My CircleCI project checks-out from a GitHub repo (i.e. that’s what the build is linked to) and also checks out another repo from GitHub. I have configured a “machine key” for the other repo via SSH Permissions at the project level.
However, I receive the following error when git clone-ing the secondary repo:
Please make sure you have the correct access rights and the repository exists.
When I check which SSH key is being used to authenticate w/ GitHub, I see it’s the “deploy key” (which doesn’t have access to the secondary repo):
ssh -T firstname.lastname@example.org
Hi owner/repo! You've successfully authenticated, but GitHub does not provide shell access.
To fix this issue, I remove the “deploy key” during build startup:
ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub
ssh-add -d ~/.ssh/id_rsa.pub
Then I get the correct response:
ssh -T email@example.com
Hi machine-account! You've successfully authenticated, but GitHub does not provide shell access.
git clone works too
This a nasty hack though! Is it possible to fix this?
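One less hacky route, assuming a CircleCI 2.0 config, is to install only the machine key explicitly with the documented add_ssh_keys step, using the key's fingerprint from the project's SSH Permissions page. The fingerprint below is a placeholder:

```yaml
version: 2
jobs:
  build:
    steps:
      - add_ssh_keys:
          fingerprints:
            - "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
      - checkout
```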
https://recipelovers.net/chicken-recipes/98-how-to-make-appetizing-chilli-chicken/
Chilli chicken. Chilli chicken is a sweet, spicy & slightly sour crispy appetizer made with chicken, bell peppers, garlic, chilli sauce & soya sauce. This crisp fried saucy chilli chicken recipe is hands down the best that.. Chilli Chicken Dry Recipe Ingredients for Chilli Chicken Recipe: For marination: – Boneless chicken (preferably – Add the fried chicken pieces and mix well so that the pieces are coated with the sauce.
In the same pan, add the garlic, red chilli, soy sauce, tomato purée, and water. Chilli chicken is an Indo-Chinese version of making chicken which has become quite popular in India. It has been posted along with a video procedure. You can have Chilli chicken using 8 ingredients and 7 steps. Here is how you achieve it.
Ingredients of Chilli chicken
- You need 600 gram of boneless chicken.
- It’s 2 of eggs.
- Prepare 1 of green and red bell pepper each.
- You need 1 cup of mixture of sauces (soy, tomato and chilli).
- You need 1 of onion cubed.
- Prepare 1 tsp of corn flour.
- It’s To taste of salt and black pepper.
- Prepare As needed of white oil.
It is basically found on streets of India in every fast. Chilli chicken – one of the favorite dishes of non vegetarian lovers. Chilli chicken is indigenous to China; however, it is prepared all over the world. Get yourself through the week with Ina Garten's Chicken Chili recipe from Barefoot Contessa on Food Network; it's low in calories but high in Ina makes a traditional chili with chicken instead of beef.
Chilli chicken step by step
- Marinate the chicken pieces with salt, black pepper, sauce, eggs and corn flour; keep aside.
- Heat oil and fry the chicken pieces keep aside.
- Now add the onion slices and garlic saute well adding salt.
- Add bell pepper slices saute well.
- Add the sauce mixture and boil.
- Now add the fried chicken and simmer over low heat till the chicken gets soft and moist.
- Serve it with Chinese fried rice.
Chilli chicken is perhaps the most popular Indo-Chinese dish found both in restaurants and street side stalls. The gravy chilli chicken is more popular as a side to either chow mein or fried rice. Was chilli chicken the first thing that you used to order in a restaurant during your school/college days?
https://osu.ppy.sh/community/forums/topics/537385?start=5702849
looks fresh af
This is a really nice skin, although I did find a small issue: menu-back-12 is twice the size as all of the other menu-back frames and it causes the back button to appear twice the size for a frame. Just a few minutes ago, I fixed it and it is now updated in the download link (hence the post name). Thanks for addressing the issue.
I'll be using this skin for a while, I really like it.
Edit: I wrote this post literally at the same time as you updating the skin xD
Gameplay is clean and simplistic, really easy to play with. One thing I found is that one element of the spinner is off centre and wobbles (I'm not sure which one or if it's just me), great skin, keep up the great work! Yes, it does wobble (the blue circle thing in the middle) because it is not symmetrical. I will be changing that to non-wobbly lol. I do apologise for this.
I feel like the scorebar + background you changed in v1.2 is really unclear. It's hard to see how much health I have with the score bar and background being about the same color and thickness. In my opinion, it's quite clear, but it all comes down to preference so I will not agree/disagree with your point.
https://desk.draw.io/support/solutions/articles/16000101596-how-to-fix-svg-images-if-you-re-using-a-proxy-server
Instructions for Apache HTTP Server
Some Confluence installations are using Apache HTTP Server as a proxy between the browser and the Confluence server.
Sometimes the Apache server is not configured to properly serve SVG files. As a result, SVG icons in the draw.io diagram editor will look like the screenshot below.
In that case, edit the server's configuration file and add the AddType directive to allow SVG files to be served with the proper MIME type. Below is a snippet of an example httpd.conf file.
AddType directive needs to be added within the <IfModule mime_module> block.
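A minimal version of that block might look like the following (the exact layout of your httpd.conf will differ):

```apache
<IfModule mime_module>
    # Serve SVG files with the correct MIME type
    AddType image/svg+xml .svg .svgz
</IfModule>
```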
Save the changes and restart the Apache HTTP server. To confirm the SVGs are now served properly, re-open the draw.io editor.
https://discussions.apple.com/thread/4769008?tstart=0
The first statement is wrong. Don't know what you're doing but you apply transitions in secondaries the same way you do in the primary. No idea what the position tool has to do with it. You select the edit point in the secondary and double-click the transition you want or press Cmd-T for the default. The clips do have to be in line and adjacent to each other, but that's the same in any storyline or any NLE for that matter.
I think the other answers have pretty much nailed it - but just to confirm in point form:
- There is no such thing as a "secondary timeline" (the beginning of the confusion)
- What I believe you have is a bunch of clips "connected" to the main story line
- What you need, to create a transition between these attached clips, is a (secondary) "storyline" - by selecting the clips, Clip | Create Storyline
https://kirang.in/post/building-an-open-source-python-application-the-right-way/
If you love Python and love open source like I do, you’d probably be open sourcing something new every day/week/month. Sure that there are quite a lot of articles online that tell you the best practices of writing Python code, testing, packaging, distributing etc, I haven’t really found a good article that highlights what are the best practices/conventions to be followed while building a full fledged and open source Python application. So I decided to write one.
While I know that this was a good idea, I also wanted some sort of template code that I could reuse in every project. Hence, I decided to write one. Meet bootstrapy, a bootstrap python application that takes the pain out of setting up a sample application and lets you focus on writing code and tests for them. As a follow up of the application, I’ll try and explain the purpose of various files in as simple manner as possible.
First, let’s take a look at the directory structure of bootstrapy:
- AUTHORS.rst - This is where you would add yourself as well as the names of other contributors to your project
- CHANGELOG - Contains the list of changes in your application for each release you do. This serves as a quick overview of what has changed in your application/project for both developers/users. This file may be optional or mandatory, depending on the license you choose.
- CONTRIBUTING.rst - Contains the instructions to be followed that instructs developers on how they can help out with your project.
- MANIFEST.in - While this is not mandatory, it is fairly common to list the contents of your distribution in this file. This file indicates what files need to be included in the source distribution but does not directly affect what files are installed. In short, the packages that need to be installed should be mentioned in setup.py, and the extras needed in the final binary of your application should be mentioned in the MANIFEST file.
- Makefile - Makefiles are generally used to organise code compilation. But we can abuse them a bit and use them to do various other things like setting up an environment, installing dependencies, cleaning up……you get the picture.
- requirements.txt - This is where you put all the dependencies/packages needed by your project/application. Doing so, you can install them using make deps which internally runs pip install -r requirements.txt
- setup.py - Indicates that the package/module you're about to install has been packaged and distributed using Distutils, which is the standard for distributing Python modules. This allows for easy installation of Python projects by just running python setup.py install
- docs - This is the directory to be created to store all the documentation to be used by Sphinx. You should basically store the documentation files in .rst(reStructured Text) format.
- mypackage - This is the main package of your project. This is where your project/application code goes into. You can see some sample, useless code in myapp.py .
- tests - This is where all your tests go. Add all your test cases/suites in this directory and execute them by calling make test . I have currently configured the project to run tests using Nose. It automagically detects where all your tests are located and then runs them. It is recommended that you follow the pattern ‘test_<feature-to-be-tested>.py’ while naming files containing test cases for <feature-to-be-tested>.
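As a sketch, the Makefile targets referred to above might look like this (the deps and test targets follow the article; clean is an assumed extra):

```make
.PHONY: deps test clean

# Install the dependencies listed in requirements.txt
deps:
	pip install -r requirements.txt

# Run the test suite with Nose, which auto-discovers tests in tests/
test:
	nosetests tests

# Remove Python bytecode artifacts
clean:
	find . -name '*.pyc' -delete
```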
So now whenever I need to work on a Python application, all I need to do is to clone the bootstrapy repo, and voila ! I can start writing code. Sweet, isn’t it ?
This is the project structure that I would recommend, but there may be different opinions about the same. If you have a suggestion, feel free to raise an issue on github. If you have a question/feedback instead, ping me directly via email.
https://www.itsmearunchandel.co.in/linux/how-to-add-and-delete-users-on-centos-7.html
In the previous tutorial, I explained how to set the date and time. Now you will learn how to add and delete users on CentOS 7, and also where user records are kept.
Add a New User
- To add a new user, use the adduser command. Here I am adding the user “centos” and also assigning a password.
[root@localhost ~]# adduser centos
[root@localhost ~]# passwd centos
The newly added user's entry goes into the /etc/passwd file. To check the newly added user, view the output of the /etc/passwd file.
[root@localhost ~]# cat /etc/passwd
How to Delete a User
Use the below command to delete a user. Now the user centos will not be found, as we can check using cat /etc/passwd.
[root@localhost centos]# userdel centos
[root@localhost centos]# cat /etc/passwd
- If we go to the deleted user's home directory, we can still find all of the centos user's data
- To completely remove a user along with the user's home directory data, use the -r option:
[root@localhost centos]# userdel -r centos
- /etc/passwd: keeps records of all the added users. We can check registered users from this path.
- /home: User’s home directory path by default.
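As a side note, the colon-separated records in /etc/passwd can also be read programmatically. This illustrative sketch (not part of the tutorial) uses Python's standard pwd module on Linux:

```python
# Read a user's /etc/passwd record via the standard library (Linux/Unix only).
import pwd

entry = pwd.getpwnam("root")   # look up the root account's record
print(entry.pw_name)           # user name field
print(entry.pw_dir)            # home directory field
```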
Perfect!! You have learned how to add and delete a user.
https://bitotoken.com/the-relationship-of-beatwith-numbers-fractal-beats/
If anyone claims to be both a mathematician and a musician, or claims that a mathematician can be a great musician, he may be declared insane. Yet nothing about this is false or doubtful. The man asserting such things is absolutely correct, and this became an undeniable fact when Pythagoras, the famous mathematician best known for the Pythagorean theorem, invented the Fractal Beat with his mathematical mind.
During the fifth century, the Greek mathematician explored the relationship of mathematics and music, in which musical intervals are represented as ratios of whole numbers. This was not the only mathematical musical system; there were many other theories about the same, but the Pythagorean system was far better and gained acceptance.
An Intro to Fractals:
Fractals Are a Rather interesting idea for Those People Who Have an Eye for amounts and equations and also for those who are able to look past the infinity. Discovering fractals is complicated because everybody else has their understanding of fractals.
For one, these are visual representations of how particular mathematical Works while some other fractals are contours which can be complex in their specifics along with their total variant.
The idea of the Fractal Beat is as simple as a mathematical equation. For many people, fractal equations can be rough going, as they are created from non-linear equations, which means there will always be repeated solutions. This feels complex because most people are mainly familiar with linear equations.
Learning the Trick of Fractal Beats:
The secret behind fractal music is a concept called mapping. Mapping, in simple terms, is creating a connection between a mathematical equation and certain parameters that generate direct fractal images - that is, producing images by mapping the output of equations to coloured pixels.
When fractal music is sonic rather than visual, the mapped parameters are pitch, rhythmic values, and dynamics.
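As a concrete illustration of mapping (my own sketch, not from the article): iterate a simple non-linear equation, the logistic map, and map each output in [0, 1) onto a pitch range:

```python
# Map iterates of the logistic map (a non-linear equation with endlessly
# repeated solutions) onto MIDI-style pitch numbers. Illustrative only;
# all parameter names and the pitch range are my own choices.
def logistic_pitches(r=3.9, x=0.5, n=16, low=60, high=84):
    pitches = []
    for _ in range(n):
        x = r * x * (1 - x)                          # non-linear iteration
        pitches.append(low + int(x * (high - low)))  # map [0, 1) to a pitch
    return pitches

print(logistic_pitches())
```

Swapping the mapped parameter from pitch to rhythmic values or dynamics works the same way: the equation stays the same, only the target of the mapping changes.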
The whole idea of the Fractal Beat shows that a true lover of mathematics can create fantastic music.
https://bugs.php.net/bug.php?id=47757
I think inconsistent naming is quite annoying.
To compile GD with JPEG support you have to do something like ./configure --with-gd --with-jpeg-dir. However in phpinfo pages JPEG support is displayed as "JPG Support enabled".
So basically, when I actually successfully compiled GD with JPEG support, I thought it had failed because I was looking for "JPEG" in phpinfo, and not "JPG".
Not quite an essential bug, but perhaps worth fixing in the future.
'./configure' '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-apxs2=/usr/sbin/apxs' '--with-ldap=/usr' '--with-kerberos=/usr' '--enable-cli' '--with-zlib-dir=/usr' '--enable-exif' '--enable-ftp' '--enable-mbstring' '--enable-mbregex' '--enable-sockets' '--with-iodbc=/usr' '--with-curl=/usr' '--with-config-file-path=/etc' '--sysconfdir=/private/etc' '--with-mysql-sock=/var/mysql' '--with-mysqli=/usr/local/mysql/bin/mysql_config' '--with-mysql=/usr/local/mysql' '--with-openssl' '--with-xmlrpc' '--with-xsl=/usr' '--without-pear' --with-jpeg-dir=/usr/local/lib/ --with-gd
Add a Patch
Add a Pull Request
This bug has been fixed in CVS.
Snapshots of the sources are packaged every three hours; this change
will be in the next snapshot. You can grab the snapshot at
Thank you for the report, and for helping us make PHP better.
So unfortunately this fix had a side effect of breaking various scripts that checked for JPEG image format support in GD by calling gd_info() and looking for the key 'JPG Support'.
I'm surprised that the source of this breakage was just this complaint about compiler flag labeling.
Support for existing runtime behavior and avoiding breaking currently working scripts should easily trump worries about compiler flag consistency, it would be cool to take that more into account in the future.
https://orionmartsintl.com/pages/about
At Orionmartsintl, we're passionate about cool gadgets and technology. We believe that everyone should have access to the latest and greatest innovations in the tech industry, and we're committed to making that happen.
Our team of experts scours the globe to find the most exciting and innovative cool gadgets and technology products on the market. From smart home devices to cutting-edge gaming gear, we have everything you need to stay ahead of the curve and make the most of your tech.
At Orionmartsintl, we're also committed to providing our customers with the best possible shopping experience. That's why we offer personalized support, a comprehensive FAQ section, and a newsletter packed with exclusive offers and product updates. We want to make it easy and fun for you to shop for cool gadgets and technology.
https://www.fr.freelancer.com/projects/php/adding-few-search-fields-adding-18001465/
Hello, I need someone to work on this who is available for consistent communication, without excuses about time differences.
Adding a few search Fields, adding import, and export csv and export pdf doc with data
php js mysql pdf csv project.
10 freelancers are bidding on average $30 for this job
Dear, I am GangLee, WEB developer . I'm a certificated freelancer with over 1000 good reviews from clients. I have great deal of experience in node.js,angular.js,monogdb,ionic,react , php framework site optimization…
I read your project description ……Adding a few search Fields, adding import, and export csv and export pdf doc with data php js mysql Tnx …..and we are very exited to work on it We are an group and we have an expe…
Hi greetings, I will add the fields on filter and also export that data In export CSV , PDF,doc . And show the result accordingly and also provide the import CSV option only.
Key Technologies: * React Native * Android * iOS * Redux * React * GraphQL * Node.js I’m a keen, fast learning developer and have a passion for all aspects of programming with a particular focus recently on…
Hello there, Greetings...! A pleasure to submit the proposal for your kind consideration. We have studied your post and understood the requirement. We would like to let you know that I have more than 4 years…
I know CSV import and export; I am also an expert in PHP, SQL, and Ajax, plus basic jQuery. This project can be finished in 1 day, but with any other issues or source-code reading time added, it is 3 days. Thanks
I'm a beginner on Freelancer, but I have worked on this, so I'm interested in working with you. I will give my best.
Hello, We understood your requirement and able to complete your job. We have experience of more than 4+ years with PHP platform and we have faced many challenges to achieve our clients business obstacles. Based u Plus
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662690.13/warc/CC-MAIN-20190119054606-20190119080606-00156.warc.gz
|
CC-MAIN-2019-04
| 1,872
| 12
|
https://directions4partners.com/events/directions-asia-2023/schedule/?tid=482039
|
code
|
Directions ASIA 2023
Ask Microsoft: Application and client
Date: 28-04-2023 | From: 14:10 to 14:55 | Room: Wind
Microsoft product team is interested in hearing your experience, questions, ideas, and suggestions related to the application and client features. Come share your feedback and questions to help the Business Central team grow and improve the application and client to match your customers' needs.
Principal Group Product Manager, Microsoft
Bio: Jannik Bausager is Principal Group Program Manager for the Dynamics 365 Business Central team, which defines the scope and the future of the Business Central application (finance, sales, purchase, inventory, warehouse, manufacturing, and project) and its clients (web client, mobile apps), as well as the integrations with the Power Platform and M365 products. Jannik has 29 years of experience in the IT industry, ranging from product development to consultancy, sales, and strategy development. He also spent 3 years in Asia as a country sales manager.
Bio: Aleksandar is a Program Manager for Dynamics 365 Business Central, responsible for regulatory features and geo-expansion. In his previous role at Microsoft, he helped partners build new knowledge related to business applications. Before joining Microsoft he was an MVP for Dynamics NAV. He is an experienced lecturer at many global conferences.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651815.80/warc/CC-MAIN-20230605085657-20230605115657-00014.warc.gz
|
CC-MAIN-2023-23
| 1,382
| 7
|
https://www.linux-mips.org/archives/linux-mips/2008-01/msg00151.html
|
code
|
On Tue, Jan 15, 2008 at 12:24:20PM +0100, Thomas Bogendoerfer wrote:
> we are facing a strange problem with lenny/sid chroots on IP28. The
> machine locks up after issuing a few ls/ps commands in a chroot
> bash. This only happens with a lenny/sid chroot, but not with etch.
> The major difference is probably the update to glibc 2.7. Since
> IP28 isn't really a nice R10k machine, it would be good, if someone
> with a working IP27/IP30 could try a lenny/sid chroot and tell us,
> if it's working/not working.
Simple testcase for me is:
/chroots/chroot-sid/lib/ld.so.1 --library-path /chroots/chroot-sid/lib /bin/bash
then the machine locks up hard ... This is with
Linux ip28 2.6.24-rc7-g0f154c48-dirty #38 Fri Jan 11 17:03:25 CET 2008 mips64
Florian Lohoff firstname.lastname@example.org +49-171-2280134
Those who would give up a little freedom to get a little
security shall soon have neither - Benjamin Franklin
Description: Digital signature
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945724.44/warc/CC-MAIN-20180423031429-20180423051429-00627.warc.gz
|
CC-MAIN-2018-17
| 946
| 16
|
https://scholarcommons.sc.edu/etd/5073/
|
code
|
Date of Award
Open Access Dissertation
One of the most ubiquitous steps in neuroimaging is the normalization of brain images. The process of normalization attempts to match any given brain to a standardized template image (e.g. the MNI 152 image). However, clinical images such as those from stroke participants present many challenges when we attempt to warp them to the space of template images, which are typically representative of neurologically healthy individuals. Many software packages exist to facilitate normalization of brain images, but most have limited options available to compensate for brain injury, which is often disruptive to these algorithms. Of the injury compensation methods that do exist, they are varied across software packages. The current study aimed to assess the contemporary methods available in state-of-the-art software commonly used across the field. Specifically, we assessed SPM12’s new tissue filling procedure on masked clinical images, and LINDA, a fully automated lesion segmentation algorithm combined with ANTs normalization. Across normalization methods, we compared each software package’s default injury compensation strategy to the nonstandard enantiomorphic lesion healing procedure. We created an artificial dataset of more than 10,000 images representing stroke related injury, and assessed each normalization method (SPM’s unified segmentation, DARTEL, ANTs) on multiple performance metrics. Overall, we found that the optimal injury compensation strategy for clinical images varied by the normalization method used, and the metric it was evaluated on. Finally, we present evidence of each normalization method and brain injury compensation technique’s effect on predicting behavior deficits from brain injury using support vector regression. Our results show that prediction accuracy (and error) can be affected by the normalization technique used.
Hanayik, T.(2018). An Investigation Of Brain Normalization And Lesion Compensation Techniques Applied To Stroke. (Doctoral dissertation). Retrieved from https://scholarcommons.sc.edu/etd/5073
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00179.warc.gz
|
CC-MAIN-2022-49
| 2,100
| 4
|
https://help.pdq.com/hc/en-us/community/posts/360050590871-Multi-Domain-Environment?page=1#community_comment_360007710472
|
code
|
Multi Domain Environment
We are currently using a trial version of PDQ Inventory and PDQ Deploy for testing purposes. Since we have two Active Directory Domains, we would need to install the PDQ products on two servers. However, only one IT administrator would access the software on both servers. Is one license for PDQ Inventory and one license for PDQ Deploy enough for this configuration?
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817674.12/warc/CC-MAIN-20240420184033-20240420214033-00078.warc.gz
|
CC-MAIN-2024-18
| 392
| 2
|
https://yanonis.com/observationclub
|
code
|
Had great fun meeting up with the colleagues for @olia.ernst.elt
book club, discussing 'Developing teacher' by Duncan Foord.
Covered such essential thing as giving feedback, peer support, surviving the teaching-online-issues, taking it to the pub (whaaaat?)
Managed not to rant too much about the book✅
Looking forward to the next meet-up, ladies❤️
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00229.warc.gz
|
CC-MAIN-2022-21
| 354
| 5
|
https://www.ezinedirector.com/index.cfm?fa=marFeature.dedicatedVsShared
|
code
|
For your information, I have switched from Microsoft BCentral to your company because your prices are almost two-thirds cheaper than theirs. Keep your prices low and I'll be a customer for a long time. The fact that mailings can be scheduled ahead of time is also great.
Just a quick note to say a big “Thank You” to Heath, who helped me one-on-one via phone support for a file import issue. Issue solved in under 5 minutes (I think I’m even rounding up) thanks to his patience and clarification of the Ezine Director process!
This is great. Simple and cheap. Thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708046.99/warc/CC-MAIN-20221126180719-20221126210719-00848.warc.gz
|
CC-MAIN-2022-49
| 570
| 3
|
https://sourceforge.net/directory/home-education/os:emx/license:afl/
|
code
|
Pc Calculator is a clever note and formula editor combined with an advanced and strong scientific calculator. Being an editor it is extremely user-friendly, allowing all possible typing and other errors to be easily corrected and fast recalculated. (2 weekly downloads)
Solved classic algorithms in Pascal for University Education. (1 weekly download)
Registration Description: This project is to show people that you can make a compiler using BASIC. Since this project is for educational purposes only, there will be no standard command set. The compiler may need to be changed depending on CPU...
Slick Linux by Cellocity. Based on Fedora Core 6, using kernel 2.6.18, with only the basic core elements needed to operate. Slick is just that: it's slick because it's stripped down. It will have full compiling and running features. It will come with YUM.
This project will add,subtract,divide and multiply numbers.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823947.97/warc/CC-MAIN-20160723071023-00021-ip-10-185-27-174.ec2.internal.warc.gz
|
CC-MAIN-2016-30
| 1,093
| 13
|
https://mail.python.org/pipermail/distutils-sig/2005-November/005385.html
|
code
|
[Distutils] Mac OS 10.4
robin at jessikat.fsnet.co.uk
Fri Nov 18 13:54:09 CET 2005
Bob Ippolito wrote:
> On Nov 17, 2005, at 9:54 AM, Robin Becker wrote:
> The extensions you build may not be compatible with previous versions
> of OS X, and you may need to use GCC 3.3 to compile some extensions
> (gcc_select makes it easy to do that).
Thanks for the info.
So is it impossible to compile dylibs etc for earlier versions of darwin? Or can
that also be achieved using some special environment?
More information about the Distutils-SIG
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945433.92/warc/CC-MAIN-20230326044821-20230326074821-00321.warc.gz
|
CC-MAIN-2023-14
| 533
| 12
|
https://www.libhunt.com/compare-gomega-vs-godog
|
code
|
| | gomega | godog |
|---|---|---|
| Last commit | about 16 hours ago | 1 day ago |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Writing tests for a Kubernetes Operator
3 projects | dev.to | 7 Oct 2023
Gomega is a test assertion library and a vital dependency of Ginkgo.
Learning Go by examples: part 6 - Create a gRPC app in Go
7 projects | dev.to | 18 Aug 2021
Gomega is a Go library that allows you to make assertions. In our example, we check if what we got is null, not null, or equal to an exact value, but the gomega library is much richer than that.
Tips to prevent adoption of your API
2 projects | news.ycombinator.com | 16 Jun 2021
Depends on the API and how much testing you need. You want to test your code, not the API's availability or correctness.
But it can be as easy as using a fake http library and mocking the responses, or using a httptest server: https://onsi.github.io/gomega/#ghttp-testing-http-clients
If the API is complicated and you have to write your own fake server, that might not make sense for small projects.
fluentassert - a prototype of yet another assertion library
7 projects | /r/golang | 28 Mar 2021
Go generics beyond the playground
6 projects | dev.to | 25 Mar 2021
If we do the count, we gather that subtests appear to solve five out of the six problems we identified with the assert library. At this point though, it's important to note that at the time when the assert package was designed, the sub-test feature in Go did not yet exist. Therefore it would have been impossible for that library to embed it into its design. This is also true for when Gomega and Ginkgo were designed. If these test frameworks were created now, then most likely some parts of their design would have been done differently. What I am trying to say is that with even the slightest change in the Go language and standard library, completely new ways of designing programs become possible. Especially for new packages without any legacy use-cases to consider. And this brings us to generics.
What's your favourite part of unit testing?
2 projects | /r/golang | 19 Jan 2023
I also use BDD (Gherkin with godog in particular) to verify and document the expected behaviour of a product from an end user's perspective when needed. I usually do this when the product also contains untested code that I have no control over when I'm working on a problem - this gives me peace of mind over something I can't control while doubling as documentation.
Behaviour Driven Development (BDD) boilerplate tests generator
3 projects | /r/golang | 20 Mar 2022
It looks like it is not possible to share steps between scenario's or features. In https://github.com/cucumber/godog it is possible to share steps.
Behaviour Driven Development (BDD) boilerplate tests generator for Golang
2 projects | /r/golang | 21 Jan 2022
Differences between gherkingen and godog are:
BDD (Behavior-driven development) with Go
2 projects | dev.to | 18 May 2021
What are some alternatives?
ginkgo - A Modern Testing Framework for Go
Testify - A toolkit with common assertions and mocks that plays nicely with the standard library
GoConvey - Go testing in the browser. Integrates with `go test`. Write behavioral tests in Go.
venom - 🐍 Manage and run your integration tests with efficiency - Venom run executors (script, HTTP Request, web, imap, etc... ) and assertions
assert - :exclamation:Basic Assertion Library used along side native go testing, with building blocks for custom assertions
Gauge - Light weight cross-platform test automation
goblin - Minimal and Beautiful Go testing framework
go-vcr - Record and replay your HTTP interactions for fast, deterministic and accurate tests
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100677.45/warc/CC-MAIN-20231207153748-20231207183748-00848.warc.gz
|
CC-MAIN-2023-50
| 3,981
| 41
|
https://www.dotnetnuke.ru/rrd-files-not-updating-9003.html
|
code
|
RRD files not updating
The format of the value acquired from the data source is dependent on the data source type chosen.
Normally it will be numeric, but the data acquisition modules may impose their very own parsing of this parameter as long as the colon (:) remains the data source value separator.
It can be useful when re-playing old data into an rrd file and you are not sure how many updates have already been applied.
If given, RRDtool will try to connect to the caching daemon rrdcached at address.
The v stands for verbose, which describes the output returned.
Note that depending on the arguments of the current and previous call to update, the list may have no entries or a large number of entries.
The order of this list is the same as the order the data sources were defined in the RRA.
If there is no data for a certain data-source, the letter U (e.g., N:0.1:U:1) can be specified.
if the third data source DST is COMPUTE, the third input value will be mapped to the fourth data source in the RRD and so on).
This is not very error resistant, as you might be sending the wrong data into an RRD.
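The colon-separated update value described above (a timestamp or N, then one value per data source, with U for unknown) can be sketched as a small parser. This is an illustration of the syntax only, not part of RRDtool itself:

```python
def parse_rrd_update(value):
    """Split an rrdtool update argument like 'N:0.1:U' into a timestamp
    and per-data-source values, mapping 'U' (unknown) to None."""
    parts = value.split(":")
    timestamp, raw_values = parts[0], parts[1:]
    values = [None if v == "U" else float(v) for v in raw_values]
    return timestamp, values
```

For example, `parse_rrd_update("N:0.1:U")` yields the current-time marker `"N"` plus one known and one unknown data-source value.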
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703533863.67/warc/CC-MAIN-20210123032629-20210123062629-00363.warc.gz
|
CC-MAIN-2021-04
| 1,147
| 11
|
https://community.ptc.com/t5/Windchill-Systems-Software/How-to-maintain-a-sub-project-with-the-possibility-of-each/td-p/570629
|
code
|
The main project (or parent project) I use has my application and also a couple of sub-projects that are delivered externally. The sub-projects' folders and source files (mainly the .h files; I use the libraries from them) can change with each release, i.e. a folder can be renamed or removed, and similarly header files can be removed, renamed, or added. Currently I create a new sub-project whenever I get a new delivery, so I always have to link the new sub-project to the parent project.
Is it possible to have one sub-project with a checkpoint, check the new delivery in on top of the old checkpoint, and have the files and folders based only on the new delivery? That way I wouldn't have to create a new sub-project every time, or do the linking.
Provided you have external_LIB1\project.pj
What do you mean with "linking"? Something like this
+ working subproject as Integrity share to external_LIB1_Vx\project.pj or a build/checkpoint of it
(see Subproject->Add Shared)
or "link" to new directory in your build system?
I think you are nearly right; however, I'll explain the scenario a bit more (I'm also new to Integrity, so please bear with me).
In the above Main_Project, External_app_V1.0 will have different contents to the External_app_V2.0 (this being the latest delivery of the same sub-project)
Therefore for every new delivery of External_app, there is a new sub_project created (with different names, mainly by using the version number attached to the name).
Ideally I want to use the one sub project External_app - so I don't have to create new sub-project and link them to Main_Project.pj every time.
You can have just one subproject External_app. With each new version, you can either update just the changed files, or you could just drop everything in the External_app subproject and add in the files and folders for the new version. It depends how different the versions are.
I recommend creating a checkpoint with a descriptive label whenever you update the subproject. That way, it's easier to retrieve older versions of External_app if necessary.
In detail, I would imagine a workflow like the following on new app releases:
1.) Checkpoint External_app\project.pj with LABEL like "BACKUP_BEFORE_NEW_VERSION_V2_0"
Just to make sure to have a well-defined checkpoint in case somebody made changes to the subproject.
2.) Remove working files from subsandbox/folder of External_app\project.pj
3.) copy new sources of external app to folder External_app
4.) Drop members with missing working files
5.) Check in changed members (say no when asked about checking in unchanged files)
6.) Add new members (e.g by using Sandbox-View Nonmembers)
7.) Checkpoint External_app\project.pj with LABEL like "APP_V2_0"
8.) optional Configure Subpproject to the build number that hast the Label „APP_V2_0“
(eg External_app\project.pj (1.3))
Thus you can see at glance in the Client that you are using a specific Project Revision.
Furthermore nobody who is working with the parent Main\project.pj will be able to modify
subproject External_app\project.pj unless the subproject is configured again to default.
When Main\project.pj gets checkpointed no new (potentially dummy) project revision will be generated
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510326.82/warc/CC-MAIN-20230927203115-20230927233115-00678.warc.gz
|
CC-MAIN-2023-40
| 3,254
| 28
|
https://cheesecakelabs.com/blog/android-automated-ui-tests-baby-steps-2/
|
code
|
Since I started building apps I was sure of only two things: one is that I love seeing users enjoying my apps; and the second is that I hate seeing users clicking everywhere and crashing them. So how can I be sure that my users will be able to have a joyful experience (even with those features that are hidden 15 clicks away and that I don’t even remember they exist anymore)? By testing them all!
UI testing is nothing more than simulating the user’s environment, performing the available actions and verifying if everything is behaving correctly.
Of course anyone can test features by simply having a human do this job, but how time-consuming, boring, and error-prone can that be? By having automated UI tests we ensure that all the features will be tested reliably and quickly. We also become more confident creating or refactoring features, being sure that if something gets broken we will know what and where the problem is, leading to faster development, fewer bugs, and better design decisions.
If it’s not clear yet, by using UI automated tests a specific user’s action or input can be simulated and we can check if it returns the correct UI behavior. Besides that, interactions between our app and a third-one can also be tested to be sure that features like content-sharing are working perfectly. All of it can be achieved by using testing frameworks like Espresso or UI Automator.
It’s also important to use a framework to mock the data while testing the app, like Mockito, for example, since by relying on real APIs during testing we cannot control the scenario, exposing tests to failures in the external APIs. It’s also good to mention that by relying on external APIs we are essentially writing integration tests, which is not our objective here.
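Espresso and Mockito are JVM tools, but the same principle the paragraph above describes, replacing the real API with a controlled fake so tests stay deterministic, can be sketched in Python with `unittest.mock`; the `fetch_profile` client method here is a hypothetical example:

```python
from unittest.mock import Mock

def greeting_for_user(api_client, user_id):
    """Code under test: fetches a profile and formats a greeting."""
    profile = api_client.fetch_profile(user_id)
    return f"Hello, {profile['name']}!"

# In a test, the real API client is replaced by a mock with a canned
# response, so the outcome never depends on network or server state.
fake_api = Mock()
fake_api.fetch_profile.return_value = {"name": "Ada"}

assert greeting_for_user(fake_api, 42) == "Hello, Ada!"
fake_api.fetch_profile.assert_called_once_with(42)
```

Because the mock records its calls, the test can also verify *how* the API was used, not just the final result.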
Another important point to think about is to choose a test-friendly architecture (aka modular ones) that will allow us to change from mock data during tests to real data during production.
A particular point about testing is that it's endless, so it's very important to decide when and where to stop, mostly because software has a huge testing scope and, even though it's possible to cover all the problematic points it can have, that would take forever. So we need to reach a point where we can say that the existing tests are enough to prove that the app works as intended, serving its purpose.
To wrap it up, keep in mind that automating UI testing is a great way to ensure that actions available in the app will have the expected behavior, which can lead to a better user experience and more relaxed, enjoyable programming time.
By automating UI testing we simulate user actions and assure that everything behaves as we expect. Use testing frameworks to make it easier, like Espresso or UI Automator, as well as Mockito to mock your data and not get fooled by unpredictable external API content. Embrace the reliability it can offer and be more confident making changes to your code. Enjoy!
Has an incurable crush for experiencing and building fascinating mobile apps, enjoys learning how everything works and how to keep cute flowers alive.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948217723.97/warc/CC-MAIN-20240305024700-20240305054700-00226.warc.gz
|
CC-MAIN-2024-10
| 3,245
| 12
|
http://philmelito.com/?portfolio=jack-adams-logo
|
code
|
I've been very fortunate to work on some great projects throughout the years for every type of business and industry you can imagine. I enjoy all types of creative mediums and it's my pleasure to share 20+ years of my work here on my website portfolio.
Jack & Adams Logo
Jack & Adam’s Bicycles is a popular triathlon store in Austin. When they opened a location in Fredericksburg, Texas, I created an alternate version of their logo featuring iconic area landmarks and scenery.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886112682.87/warc/CC-MAIN-20170822201124-20170822221124-00686.warc.gz
|
CC-MAIN-2017-34
| 480
| 3
|
https://practonet.com/describe-remote-access-and-site-to-site-vpns/
|
code
|
To build the Internet, Internet service providers (ISP) need links to other ISPs as well as links to the ISPs’ customers. The Internet core connects ISPs to each other using a variety of highspeed technologies. Additionally, Internet access links connect an ISP to each customer, again with a wide variety of technologies. The combination of ISP networks and customer networks that connect to the ISPs together create the worldwide Internet.
For these customer access links, the technologies need to be inexpensive so that a typical consumer can afford to pay for the service. But businesses can use many of these same technologies to connect to the Internet. Some WAN technologies happen to work particularly well as Internet access technologies. For example, several use the same telephone line installed into most homes by the phone company so that the ISPs do not have to install additional cabling. Some use the TV cabling, whereas others use wireless. While consumers typically connect to the Internet to reach destinations on the Internet, businesses can also use the Internet as a WAN service. First, the enterprise can connect each business site to the Internet. Then, using virtual private network (VPN) technology, the enterprise can create an Internet VPN. An Internet VPN can keep the enterprise’s packet private through encryption and other means, even while sending the data over the Internet.
Internet VPN Fundamentals
Private WANs have some wonderful security features. In particular, the customers who send data through the WAN have good reason to believe that no attackers saw the data in transit or even changed the data to cause some harm. The private WAN service provider promises to send one customer’s data to other sites owned by that customer, but not to sites owned by other customers, and vice versa. VPNs try to provide the same secure features as a private WAN while sending data over a network that is open to other parties (such as the Internet). Compared to a private WAN, the Internet does not provide for a secure environment that protects the privacy of an enterprise’s data. Internet VPNs can provide important security features, such as the following:
- Confidentiality (privacy): Preventing anyone in the middle of the Internet (man in the middle) from being able to read the data
- Authentication: Verifying that the sender of the VPN packet is a legitimate device and not a device used by an attacker
- Data integrity: Verifying that the packet was not changed as the packet transited the Internet
- Anti-replay: Preventing a man in the middle from copying and later replaying the packets sent by a legitimate user, for the purpose of appearing to be a legitimate user.
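As one concrete illustration of the data-integrity and authentication goals (this is a keyed hash from Python's standard library, not IPsec itself), a receiver that shares a secret key can detect any tampering in transit:

```python
import hashlib
import hmac

def tag_packet(key, payload):
    """Sender: compute an authentication tag over the payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_packet(key, payload, tag):
    """Receiver: accept only if the tag matches (constant-time compare)."""
    return hmac.compare_digest(tag_packet(key, payload), tag)

key = b"shared-session-key"
tag = tag_packet(key, b"original packet")
assert verify_packet(key, b"original packet", tag)       # untouched: accepted
assert not verify_packet(key, b"tampered packet", tag)   # modified: rejected
```

An attacker without the key cannot forge a valid tag, so a changed packet fails verification, which is the essence of the data-integrity check a VPN endpoint performs.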
To accomplish these goals, two devices near the edge of the Internet create a VPN, sometimes called a VPN tunnel. These devices add headers to the original packet, with these headers including fields that allow the VPN devices to make the traffic secure. The VPN devices also encrypt the original IP packet, meaning that the original packet’s contents are undecipherable to anyone who happens to see a copy of the packet as it traverses the Internet.
Figure shows the general idea of what typically occurs with a VPN tunnel. The figure shows a VPN created between a branch office router and a Cisco firewall. In this case, the VPN is called a site-to-site VPN because it connects two sites of a company.
The figure shows the following steps, which explain the overall flow:
- Host PC1 (10.2.2.2) on the right sends a packet to the web server (10.1.1.1), just as it would without a VPN.
- The router encrypts the packet, adds some VPN headers, adds another IP header (with public IP addresses), and forwards the packet.
- An attacker in the Internet copies the packet (called a man-in-the-middle attack). However, the attacker cannot change the packet without being noticed and cannot read the contents of the original packet.
- Firewall FW1 receives the packet, confirms the authenticity of the sender, confirms that the packet has not been changed, and then decrypts the original packet.
- Server S1 receives the unencrypted packet.
The benefits of using an Internet-based VPN are many. The cost of a high-speed Internet access connection as discussed in the last few pages is usually much less than that of many private WAN options. The Internet is seemingly everywhere, making this kind of solution available worldwide. And by using VPN technology and protocols, the communications are secure.
Site-to-Site VPNs with IPsec
A site-to-site VPN provides VPN services for the devices at two sites with a single VPN tunnel. For instance, if each site has dozens of devices that need to communicate between sites, the various devices do not have to act to create the VPN. Instead, the network engineers configure devices such as routers and firewalls to create one VPN tunnel. The tunnel endpoints create the tunnel and leave it up and operating all the time, so that when any device at either site decides to send data, the VPN is available. All the devices at each site can communicate using the VPN, receiving all the benefits of the VPN, without requiring each device to create a VPN for themselves.
IPsec defines one popular set of rules for creating secure VPNs. IPsec is an architecture or framework for security services for IP networks. The name itself is not an acronym, but rather a name derived from the title of the RFC that defines it (RFC 4301, “Security Architecture for the Internet Protocol”), more generally called IP Security, or IPsec. IPsec defines how two devices, both of which connect to the Internet, can achieve the main goals of a VPN as listed at the beginning of this section: confidentiality, authentication, data integrity, and anti-replay. IPsec does not define just one way to implement a VPN, instead allowing several different protocol options for each VPN feature. One of IPsec’s strengths is that its role as an architecture allows it to be added to and changed over time as improvements to individual security functions are made.
The idea of IPsec encryption might sound intimidating, but if you ignore the math—and thankfully, you can—IPsec encryption is not too difficult to understand. IPsec encryption uses a pair of encryption algorithms, which are essentially math formulas, to meet a couple of requirements. First, the two math formulas are a matched set:
- One to hide (encrypt) the data
- Another to re-create (decrypt) the original data based on the encrypted data
Besides those somewhat obvious functions, the two math formulas were chosen so that if an attacker intercepted the encrypted text but did not have the secret password (called an encryption key), decrypting that one packet would be difficult. In addition, the formulas are also chosen so that if an attacker did happen to decrypt one packet, that information would not give the attacker any advantages in decrypting the other packets. The process for encrypting data for an IPsec VPN works generally as shown in the figure. Note that the encryption key is also known as the session key, shared key, or shared session key.
The four steps highlighted in the figure are as follows:
- The sending VPN device feeds the original packet and the session key into the encryption formula, calculating the encrypted data.
- The sending device encapsulates the encrypted data into a packet, which includes the new IP header and VPN header.
- The sending device sends this new packet to the destination VPN device.
- The receiving VPN device runs the corresponding decryption formula, using the encrypted data and session key—the same key value as was used on the sending VPN device—to decrypt the data.
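The matched encrypt/decrypt pair in the steps above can be sketched with a toy stream cipher built from a hash-derived keystream. This illustrates only the symmetry of the two operations around a shared session key; it is not a real IPsec cipher and must never be used for actual security:

```python
import hashlib

def keystream(session_key, length):
    """Derive a deterministic keystream from the shared session key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(session_key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_transform(session_key, data):
    """One formula serves both directions: XORing with the keystream
    encrypts plaintext, and XORing again decrypts ciphertext."""
    ks = keystream(session_key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared session key"
packet = b"original IP packet"
encrypted = xor_transform(key, packet)            # sender encrypts
assert encrypted != packet                        # unreadable in transit
assert xor_transform(key, encrypted) == packet    # receiver decrypts
```

Both endpoints must hold the same session key, which is why real IPsec deployments spend so much effort on key exchange before any data flows.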
While the above describes the basic encryption process, the figure below shows a broader view of IPsec VPNs in an enterprise. First, devices use some related VPN technology like Generic Routing Encapsulation (GRE) to create the concept of a tunnel (a virtual link between the routers), with three such tunnels shown in the figure. Without IPsec, each GRE tunnel could be used to forward unencrypted traffic over the Internet. IPsec adds the security features to the data that flows over the tunnel. (Note that the figure shows IPsec and GRE, but IPsec teams with other VPN technologies as well.)
Remote Access VPNs with TLS
A site-to-site VPN exists to support multiple devices at each site and is typically created by devices supported by the IT staff. In contrast, individual devices can dynamically initiate their own VPN connections in cases where a permanent site-to-site VPN does not exist. For instance, a user can walk into a coffee shop and connect to the free Wi-Fi, but that coffee shop does not have a site-to-site VPN to the user’s enterprise network. Instead, the user’s device creates a secure remote access VPN connection back to the enterprise network before sending any data to hosts in the enterprise. While IPsec and GRE (or other) tunnels work well for site-to-site VPNs, remote access VPNs often use the Transport Layer Security (TLS) protocol to create a secure VPN session.
TLS has many uses today, but most commonly, TLS provides the security features of HTTP Secure (HTTPS). Today’s web browsers support HTTPS (with TLS) as a way to dynamically create a secure connection from the web browser to a web server, supporting safe online access to financial transactions. To do so, the browser creates a TCP connection to server well-known port 443 (the default) and then initializes a TLS session. TLS encrypts the data sent between the browser and the server and authenticates the user. Then, the HTTP messages flow over the TLS VPN connection.
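The browser-style connection described above (TCP to port 443, then TLS on top) can be reproduced with Python's standard `ssl` module. A minimal sketch follows; the hostname is a placeholder and the actual network connection is left commented out so the snippet runs offline:

```python
import socket
import ssl

# The default context enables certificate verification and hostname
# checking, the same guarantees a browser relies on for HTTPS.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)   # certificate checking on
print(context.check_hostname)                     # hostname checking on

# Establishing the session would look like this (needs network access):
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())  # negotiated protocol, e.g. "TLSv1.3"
```

Everything written through the wrapped socket is encrypted before it reaches the TCP connection, which is exactly the property the HTTPS and AnyConnect scenarios in this section depend on.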
The built-in TLS functions of a web browser create one secure web browsing session, but each session secures only the data sent in that session. This same TLS technology can be used to create a client VPN that secures all packets from the device to a site by using a Cisco VPN client. The Cisco AnyConnect Secure Mobility Client (or AnyConnect Client for short) is software that sits on a user’s PC and uses TLS to create one end of a VPN remote-access tunnel. As a result, all the packets sent to the other end of the tunnel are encrypted, not just those sent over a single HTTP connection in a web browser.
The figure compares a remote access VPN session that secures all traffic from a computer to a site versus one that secures a single HTTPS session. The figure shows a VPN tunnel for PC A, which uses the AnyConnect Client to create a client VPN. The AnyConnect Client creates a TLS tunnel to the firewall that has been installed to expect VPN clients to connect to it. The tunnel encrypts all traffic, so that PC A can use any application available at the enterprise network on the right.
Note that while the figure shows a firewall used at the main enterprise site, many types of devices can be used on the server side of a TLS connection as well. The bottom of Figure shows a client VPN that supports a web application for a single web browser tab. The experience is much like when you connect to any other secure website today: the session uses TLS, so all traffic sent to and from that web browser tab is encrypted with TLS. Note that PC B does not use the AnyConnect Client; the user simply opens a web browser to browse to server S2.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100545.7/warc/CC-MAIN-20231205041842-20231205071842-00482.warc.gz
|
CC-MAIN-2023-50
| 11,315
| 36
|
https://www.tanitjobs.com/job/614288/senior-full-stack-developer-javascript-django/
|
code
|
About Sigma Unit
We specialise in placing remote developers into UK teams. Our UK branch has been serving customers for over 10 years (under a different name). Our Tunisian branch was founded in 2019 and we are in the process of establishing the core team. All our developers work in English speaking environments following Agile methodologies. We are looking for serious developers who are looking to take their career to the next level.
Benefits of working with us
If your application is successful we will offer you specialist coaching to help you thrive in your new team, we'll also help you develop and refine your English skills. We work remotely so you'll be free to choose where you work from. We pay very well and we provide the best equipment.
Quality-focused, you’re encouraged to do your best work
About the role
The team you will be joining is formed of developers from the UK and Europe. You will join an existing remote team who are using agile methodologies in English. Communication is key.
You will be expected to work autonomously and take on leadership responsibilities such as mentoring mid-level and junior developers.
Problem solving skills
Minimum 3 years of commercial experience
Mobile optimisation & responsive design expert
Professional level (or above) of spoken English
Good unix shell / local devops skills
Familiarity with unit testing and its concepts.
Deep understanding of code patterns and how to code for performance
Based in Tunisia
Pragmatic, Ability to work with old and new technology (eg. jQuery and React)
Database management experience
Version control with git
Experience working with Django and/or Python
Back-end development experience
Remote work experience
Team lead background
This role is suited to senior developers who want to take their career to the next level.
To us, a senior developer is someone who is able to work autonomously, review other developers’ work, and contribute system design ideas.
You are confident in your abilities to deliver and are comfortable picking up new technologies.
You have the ability to communicate complex technical issues clearly.
You pay attention to detail in your communication, your work and the way you present your work.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540551267.14/warc/CC-MAIN-20191213071155-20191213095155-00288.warc.gz
|
CC-MAIN-2019-51
| 2,219
| 28
|
https://teamrelated.com/videos/how-automate-your-work-slack
|
code
|
How to automate your work with Slack
Workflow Builder is a tool in Slack that allows you to automate tasks without needing to know how to code.
You can use it to create workflows that send messages, create forms, or integrate with other apps.
0:11 - Why you need to automate your work
0:28 - What is Workflow Builder?
0:42 - Things you should automate
0:54 - Step-by-step: build your 1st automation
3:53 - Best practices for automations
Here are some examples of common tasks you could automate with the workflow builder:
- Collecting and providing feedback
- Requesting help including IT, engineering or creative help
- Onboarding processes for new employees or partners
- Prompting status updates such as for a big project
Let’s work together and create a workflow using Workflow Builder, showing you how to get started + some best practices.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473401.5/warc/CC-MAIN-20240221070402-20240221100402-00871.warc.gz
|
CC-MAIN-2024-10
| 846
| 14
|
http://sourceforge.net/projects/obiblio/develop?source=navbar
|
code
|
sosley, I am getting the same warning: Warning: Missing argument 1 for BiblioCopy::validateData(), called in C:\xampp\htdocs\openbiblio\catalog\upload_csv.php on line 275 and defined in C:\xampp\htdocs\openbiblio\classes\BiblioCopy.php on line 43 It only happens when my .csv file includes a column for "barCo". When I remove that column, the file tests just fine. The problem is...
2013-05-22 05:42:48 PDT by michaeldroush
Thanks. Yes, adding the necessary .layout line in the report file was indeed all that was wrong. I'll need to make a note of that somewhere for the next time I can't remember! For copies.rpt I'm using the "Copy Search (Extended)" from 0.6.1 rather than the one included with 0.7.1.
2013-05-06 20:05:44 PDT by https://www.google.com/accounts
[excerpt from direct communication] OpenBiblio downloads are offered through SourceForge and these are mirrored automatically. http://sourceforge.net/apps/trac/sourceforge/wiki/Mirrors Help is still appreciated, especially for answering messages in the forums and improving the documentation in our wiki.
2013-05-06 03:25:59 PDT by infinite-mnkz
Correct. An easy fix is overwriting the 0.7.1 .rpt files with the files from the 0.6.1. installation, except for the reports that were updated for 0.7.1 (copies.rpt, members.rpt, popularBiblios.rpt).
2013-05-06 03:18:28 PDT by infinite-mnkz
I think I may have found the problem. I had completely forgotten that the .rpt files need to be edited as well to include which printable layouts can be accessed after the report is run. Will try this.
2013-05-05 19:48:26 PDT by https://www.google.com/accounts
Update ... This is still stumping me. I thought for a second I had it figured out when I realized I had renamed my custom label file and then not changed the class name to match, but after editing that I still had no success. After I run a report, the only options for printing that will display under "Report Results" are from labels.php and list.php. Why can't I get anything else from...
2013-05-05 17:42:42 PDT by https://www.google.com/accounts
I just upgraded from 0.6.1 to 0.7.1 and now when I go to print labels from a report, no options are showing as before. I have a customized label file in ./layouts but neither that nor any of the included in ./layouts/default are showing in the menu. I sort of feel like I've been in this spot here before a year or two ago with the previous installation. Is there some configuration setting...
2013-05-04 18:37:25 PDT by https://www.google.com/accounts
Good afternoon, I have a home web server running Debian Stable with 250GB of disk space. I'm not going to be using all of that myself, so I'm wondering if the devs would mind if I use part of the drive to create a publically-accessible mirror of OpenBiblio. I have no knowledge of PHP or any kind of coding, so this seems to be the most obvious way I can help. I'm in southern California, if...
2013-05-04 12:58:14 PDT by zlanvok
Hi, This is for a not for profit organisation looking at serving it's members with an online library facility. We are looking for some customisation in the area of look and feel and few other things. Any one interested, please mail me to bapajan at yahoo dot co dot uk Regards.
2013-04-30 03:43:05 PDT by bapajan
Hello, Thank you for your support. I'm gonna try!
2013-04-22 02:15:37 PDT by llimoner
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702447607/warc/CC-MAIN-20130516110727-00031-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 3,344
| 20
|
https://www.modalplus.net/2022/11/28/livejasmin-support-staff-jobs-employment/
|
code
|
For us the health of our colleagues and candidates is the most important, due to this fact our recruitment and the whole onboarding course of is solved through online instruments (e.g. video interview) without private conferences. We are nonetheless very pleased to receive and review your utility. Before getting started, learn the assessments of other members.
- Also you presumably can be taught the essential reviews of the site’s founders or perhaps different customers referring to the web.
- The us tab gives a sensible way for prime degree informal get together websites.
- Aside from in search of somebody who shares associated pursuits and values, nearly all the websites present video and voice connection choices.
- Naturally this suggests there’s completely no assume movement in any other case statement that’s not regarding now.
- Uncommon Giving helps registered nonprofits expand their humanitarian attain, and people with new ways to help make an actual distinction.
Clients now have better control of selections as the brand new resolution makes it possible for users to settle subscription rates using crytocurrencies without having to reveal their identities. Merchants can also keep away from reversal of transactions while lowering transaction expenses. Subscriptions can additionally be cancelled instantly using the PumaPay pockets app. People continually getting hired/fired, workers sleeping collectively, no clear firm path, no advertising plan, and no qualified managers. Upper level administration doesn’t value creativity or new concepts.
You can use your account to request a substitute Social Security card, examine the status of an software program, estimate future advantages, or handle the advantages you already obtain. Perhaps one to corresponding to for example a life or occurring retreats are designed that technique. Are you ready to grasp a e-book all via the today? All patch, character growth and so forth won’t exists, simply the time period you had been learning, if it.
Close Hooks For Evaluation Paperwork Authorship Optimum Connect For Ones
The most interesting way to avoid these types of is to keep away from these issues altogether. Co-workers were great to work with and had a unbelievable time working there for essentially the most part. I would go in to work and get the job achieved and assist be Support member for their Sonicbox Team. Provide customer support utilizing the system and operating system of your alternative. Make your assist flexible and connect live jasmen com with clients utilizing instruments of your selection, as you like. The property belonged to her paternal grandmother, Marjorie, who used to run a small general store out of one of many buildings on site. Ms. Anderson purchased it from her a long time ago so that, she mentioned, her grandmother may have the market worth in money to distribute to her youngsters, and the land might keep within the household.
It’s an exciting alternative for an experienced developer who is looking for a model new problem with a fast-growing startup concentrating on the fastest-growing region on the planet. The place is on a contract foundation for 6+ months, on a milestone foundation with the choice to potentially turn into a full-time staff member. We hope that you’re able to conducting research, data analysis and growth identification, and you’ll help to produce tales and evaluation to help data-driven dedication making for our analysis staff. We favor that you’re college students based mostly in the US and enrolled in an undergraduate or graduate … You may have administrative duties in rising and implementing marketing strategies. As a advertising intern, you’ll collaborate with our marketing group in all levels of selling campaigns.
Tel Avivs Biking Revealing Suppliers Tel
With this partnership, LiveJasmin customers now have a broader system for crypto payments. We are Sygnific International Consulting, an skilled data analysis staff with proficient and passionate members who’ve work experiences in top-tier consulting firms. Our motto is «Politics First.» Our team is solely as forward-thinking as our evaluation. By offering expertise on how political developments move markets, we help purchasers anticipate and reply to dangers and alternatives. We are literally in search of a Researcher to join out Global Macro follow. The utility reveals the quickest approach to the vacation spot after taking up-to-date site guests information into consideration, corresponding to crashes and visitors jams.
Non Secular Relationship Websites Reviews
Your personal knowledge shall be used to support your experience all through this website online, to handle entry to your account, and for different functions described in our privacy policy. Hi Niels T., I observed your profile and wish to give you my project. Do you could have time and data to help us to port a smaller GUI library? It is written 100% in C and you should use XML along with CSS to describe interface structure and style. It originated from Linux however we want to use this library together with FreeBSD.
Exclusive: 21shares President On Large Success Of $sol And $dot Products And Why They Use Cryptocompare’s Value Data
Yeah, in fact it’s, we obtained that message already from everything else she’s had to endure and we all know that she’s capable of overcoming it. Mulan doesn’t sing it again at the end after she saves the day or anything, she proves her value via her actions, which is what Jasmine’s speech should characterize. Jasmine’s music coming on the end of the movie and right before a speech that’s much more significant to her character is irritatingly distracting. It’s superbly sung, but it’s a lame track that’s hollow and just feels like Disney pandering to its audiences.
Love And Marriage
We are on the lookout for a graphic designer with B2B experience to create engaging and on-brand graphics for a selection of media. Your graphics ought to capture the eye of those that see them and communicate the right message. For this, you want to have a creative flair and a strong ability to translate necessities into design. If you’ll be able to communicate well and work methodically as a part of a staff, we’d like to satisfy you. The .exe will include my modules, Cefpython, wxPython, a few knowledge files, Python 3.9. (CEF is Chromium Embedded Framework to show HTML files in another application, and CefPython connects CEF with Python.) I assume you’ll use Pyinstaller or py2exe. Obviously that is beneath Windows however bonus factors for supporting Linux too.
A Educated Free Relationship Websites! To Personal Swingers And You’ll Threesomes
The us tab offers a practical means for top degree casual get collectively sites. You can also view real-life profiles of folks that have tried the service. Even although this method is definitely not perfect for every single individual, they’ve an effective approach to satisfy folks inside the consolation of your private home. And because it’s unknown, there’s no factor to worry about any unfavorable reviews. America tab is the most effective platform for locating informal hookups.
Be positive to learn consumer evaluations before subscribing a web dating site. Be sure to read the critiques of others as this assists you cut up the wheat from the skin. While the software program might have some flaws, it has the nonetheless a incredible place to fulfill somebody designed for informal hookups and internet courting. It’s positively price a peek, but anticipate to pay a tiny fee.
Professionals who provide this service could perform quite lots of administration providers, corresponding to planning all companies and checking legislation for tasks. Quality control Quality management is a sort of services administration that ensures that every one merchandise created by a company meet the required standards. Facility administration companies focusing on high quality control may examine tools to see if it meets the standard and prepare for contractors to fix it when required. Security services Security services amenities administration focuses on keeping staff members of a specific firm secure. The company took the idea and expertise from LiveJasmin and used it to provide numerous Internet-based services. The umbrella of corporations that have been created and developed via Docler Holding is greater than diverse.
In truth, I thought that the love interest with him and Jasmine’s servant was a welcome addition to the film, albeit and unnecessary one. It’s nothing particular, but it at least does one thing to set itself other than the unique in a means that doesn’t involve lame pandering. PumaPay is an open supply blockchain-based protocol that provides a extensive range of transaction choices. The platform’s robust and flexible protocol permits users to hold out transactions with just about each existing billing kind using blockchain-based solutions. The launch of its PullPayment protocol attracted a substantial quantity of consideration when it launched earlier in 2018. PumaPay allows recurrent funds which have been beforehand inconceivable on blockchains platforms.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510319.87/warc/CC-MAIN-20230927171156-20230927201156-00607.warc.gz
|
CC-MAIN-2023-40
| 9,368
| 24
|
https://products.aspose.com/cells/net/conversion/mhtml-to-tsv/
|
code
|
Convert MHTML to TSV in C#
High-speed C# library for converting MHTML to TSV. This is a professional software solution to import and export MHTML, TSV, and many other formats on .NET Framework, .NET Core or Mono Platforms.
Convert MHTML to TSV Using C#
How do I convert MHTML to TSV? With Aspose.Cells for .NET library, you can easily convert MHTML to TSV programmatically with a few lines of code. Aspose.Cells for .NET is capable of building cross-platform applications with the ability to generate, modify, convert, render and print all Excel files. .NET Excel API not only convert between spreadsheet formats, it can also render Excel files as images, PDF, HTML, ODS, CSV, SVG, JSON, WORD, PPT and more, thus making it a perfect choice to exchange documents in industry-standard formats. Open NuGet package manager, search for Aspose.Cells and install. You may also use the following command from the Package Manager Console.
Package Manager Console Command
PM> Install-Package Aspose.Cells
Save MHTML to TSV in C#
The following example demonstrates how to convert MHTML to TSV in C#.
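The example block itself appears to have been lost from the page; a minimal sketch of what it would contain, based on the steps described in this article (file names are placeholders; `Workbook` and `SaveFormat.Tsv` are the Aspose.Cells types this article names):

```csharp
using Aspose.Cells;

class MhtmlToTsvConverter
{
    static void Main()
    {
        // Load the source MHTML file into a Workbook instance
        Workbook workbook = new Workbook("input.mhtml");

        // Save the workbook in TSV format
        workbook.Save("output.tsv", SaveFormat.Tsv);
    }
}
```

The same pattern applies to the other conversions listed later on this page: load the source into a `Workbook`, then pick a different `SaveFormat` member when calling `Save`.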
Follow the easy steps to convert MHTML to TSV. Upload your MHTML file, then simply save it as TSV file. For both MHTML reading and TSV writing you can use fully qualified filenames. The output TSV content and formatting will be identical to the original MHTML document.
How to Convert MHTML to TSV via C#
Need to convert MHTML files to TSV programmatically? .NET developers can easily load & convert MHTML to TSV in just a few lines of code.
- Install ‘Aspose.Cells for .NET’.
- Add a library reference (import the library) to your C# project.
- Load MHTML file with an instance of Workbook.
- Convert MHTML to TSV by calling Workbook.Save method.
- Get the conversion result of MHTML to TSV.
C# library to convert MHTML to TSV
There are two alternative options to install “Aspose.Cells for .NET” onto your system. Please choose the one that best suits your needs and follow the step-by-step instructions:
Before running the .NET conversion example code, make sure that you have the following prerequisites.
- Microsoft Windows or a compatible OS with .NET, .NET Core, Windows Azure or Mono Platforms.
- Development environment like Microsoft Visual Studio.
- Add reference to the Aspose.Cells for .NET DLL in your project.
MHTML What is MHTML File Format?
Files with MHTML extension represent a web page archive format that can be created by a number of different applications. The format is known as archive format because it saves the web HTML code and associated resources in a single file. These resources include anything linked to the webpage such as images, applets, animations, audio files and so on. MHTML files can be opened in a variety of applications such as Internet Explorer and Microsoft Word. Microsoft Windows uses MHTML file format for recording scenarios of problems observed during the usage of any application on Windows that raises issues. The MHTML file format encodes the page contents similar to specifications defined in message/rfc822 which is plain text email related specifications.
TSV What is TSV File Format?
A Tab-Separated Values (TSV) file format represents data separated with tabs in plain text format. The file format, similar to CSV, is used for organization of data in a structured manner in order to import and export between different applications. The format is primarily used for data import/export and exchange in Spreadsheet applications and databases. Each record in a TSV file is contained in a single line of text file where each field value is separated by a tab character. Media type for TSV file format is text/tab-separated-values.
Other Supported Conversions
You can also convert MHTML to many other file formats including few listed below.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816942.33/warc/CC-MAIN-20240415045222-20240415075222-00511.warc.gz
|
CC-MAIN-2024-18
| 3,807
| 28
|
https://galileoandeinstein.phys.virginia.edu/Elec_Mag/2022_Lectures/EM_09_Reciprocation_Theorem.html
|
code
|
9. Green’s Reciprocation Theorem
What It Is
One simple theorem George Green published in his 1828 paper is his Reciprocation Theorem. (This is Jackson's term, Wikipedia calls it reciprocity.) It seems almost trivial, but often leads to surprising results with very little effort. Here it is:
Consider two different volume and surface charge distributions, in the same identical geometry.
Charge distribution A has volume charge density $\rho_A$ and boundary surface charge density $\sigma_A$, generating electrostatic potential $\Phi_A$.

In that same space (meaning with the same surfaces) charge distribution B, with densities $\rho_B$, $\sigma_B$, would give potential $\Phi_B$.

Then the Reciprocation Theorem is:

$$\int \rho_A \Phi_B \, d^3r + \oint \sigma_A \Phi_B \, dA = \int \rho_B \Phi_A \, d^3r + \oint \sigma_B \Phi_A \, dA.$$

In words: If the A charge densities were at rest in the B potential, the total potential energy would be the same as that of the B charge densities held at rest in the A potential.
To prove it, we’ll just consider volume charges (the surface charges could be taken as volume charges in a thin layer limit anyway).
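The elided line of algebra, reconstructed by writing the potential $\Phi_B$ in terms of its source density $\rho_B$:

```latex
\int \rho_A(\mathbf{r})\,\Phi_B(\mathbf{r})\,d^3r
  = \frac{1}{4\pi\varepsilon_0}\iint
    \frac{\rho_A(\mathbf{r})\,\rho_B(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,
    d^3r\,d^3r'
```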
But this is symmetric in A, B, proving the theorem: it’s that simple!
Earnshaw’s Theorem from the Reciprocation Theorem
Earnshaw’s theorem is often stated simply as: In a charge-free region, the potential cannot have a maximum or minimum. (Because locally the field in all directions would have to point all inwards or all outwards, so nonzero divergence, meaning there must be charge present.)
A more informative version is: in a charge-free region, the potential at a point is the same as the average potential on any spherical surface centered at that point, provided only that the spherical surface is itself within the charge-free region.
To prove this using the Reciprocation Theorem, take:
System A: the existing setup, with charge distribution all outside the region of interest, generating potential $\Phi_A$.

System B: a spherical surface, radius $R$, with uniform surface charge density $\sigma_B = q/4\pi R^2$ (so the total surface charge is $q$) and a point charge $-q$ at the center of the sphere.
The spherical surface is taken to be in the charge-free region of system A.
The B system, the charged sphere plus the equal but negative central point charge, gives zero potential outside the sphere (which is where all the A system charge is), so the A side of the theorem, $\int \rho_A \Phi_B \, d^3r + \oint \sigma_A \Phi_B \, dA$, vanishes.

Therefore, from the theorem and taking the center of the sphere as the origin for convenience, $q\,\Phi_A(0) = \oint (q/4\pi R^2)\,\Phi_A \, dA$; this becomes, integrating over the area of the spherical surface (and cancelling $q$ out from both sides):

$$\Phi_A(0) = \frac{1}{4\pi R^2} \oint_{r=R} \Phi_A \, dA.$$
Note: if this looks too easy, there are many more difficult proofs on the web.
Exercise: Suppose we take a new system B: a sphere centered at the origin with surface charge density proportional to the coordinate $z$. We replace the center point charge with a point dipole, such that the potential from this system is zero outside the sphere. System A is as before: what does the Reciprocation Theorem tell you this time? Hint: you can think of that surface charge as two equally-sized solid spheres of charge of opposite sign, their centers a very small distance apart.
Revisiting an Earlier Result Using the Reciprocation Theorem
We established in the Electrostatics II lecture that if different parts of a spherical surface are held at different potentials, the consequent potential at any point P in space outside the sphere can be found by integrating over the spherical surface with a weighting factor equal to the density of induced charge on a perfectly conducting grounded sphere with a unit charge at the point P.
In fact, this result follows immediately from the reciprocation theorem!
Consider two systems A, B with identical geometry: just a sphere plus one point outside it.

System A is our spherical surface divided into parts held at different potentials, the potential $\Phi_A(\mathbf{r}')$ being specified at every point $\mathbf{r}'$ on the surface of the sphere; system A also includes the point P, at position $\mathbf{r}_P$ say, at which we want to find the potential from the charge distribution on the spherical surface (but there is no volume charge in A).

System B is the geometrically identical sphere, but now a fully connected conducting surface, grounded and so at zero potential, and now there is a unit charge at the point P, that is, $\rho_B(\mathbf{r}) = \delta^3(\mathbf{r} - \mathbf{r}_P)$.

The left-hand side of the theorem is identically zero: $\rho_A = 0$ in the volume, $\Phi_B = 0$ on the surface.

The right-hand side, using $\rho_B(\mathbf{r}) = \delta^3(\mathbf{r} - \mathbf{r}_P)$, gives

$$\Phi_A(\mathbf{r}_P) = -\oint \sigma(\mathbf{r}')\,\Phi_A(\mathbf{r}')\,dA',$$

where $\sigma(\mathbf{r}')$ would be the surface charge density induced at $\mathbf{r}'$ on a grounded conducting sphere by unit charge at $\mathbf{r}_P$.
Note also that this proof generalizes from a conducting sphere to any closed conducting surface.
Symmetry of the Dirichlet Green's Function
We’ve shown the Dirichlet Green’s function is symmetric, $G(\mathbf{r}_1, \mathbf{r}_2) = G(\mathbf{r}_2, \mathbf{r}_1)$.

This also follows easily from the Reciprocation Theorem: take two systems A, B having the same set of grounded conducting surfaces, one with a single unit charge at $\mathbf{r}_1$, the other with a single unit charge at $\mathbf{r}_2$. Now, by definition, $G(\mathbf{r}_2, \mathbf{r}_1)$ is the potential at $\mathbf{r}_2$ from the single unit charge at $\mathbf{r}_1$ plus the charges induced on the grounded surfaces, and vice versa. Symmetry follows from the Theorem. (The result is certainly not intuitively obvious: think of an odd-shaped conductor, say a sphere but with a tall thin conical "mountain" somewhere. Now put $\mathbf{r}_1$ just above the mountain peak, and $\mathbf{r}_2$ above the plane on the other side.)
Exercise: Write this out explicitly, in terms of
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00183.warc.gz
|
CC-MAIN-2023-14
| 5,184
| 34
|
http://www.tomshardware.com/forum/263994-31-barebones-screen-light
|
code
|
MSI 17" barebones: screen does not light up
The power and battery lights work, but an external monitor will not work either. How do I tell if it is the computer or perhaps the CPU I installed?

I got the MSI Wind and am so impressed with the clarity of the screen I thought I wanted the larger laptop. I have been using 390X and A31 ThinkPads and it is time for something new and snazzy, or maybe keen.

I have experience replacing or changing components in ThinkPads, and the MSI 17" barebones system seemed simple; I only had to add memory, CPU, hard drive and OS. I have done memory and hard drives dozens of times, and the CPU was not hard after I learned which ones I could use.

The critical issue is whether the CPU is bad or the barebones computer is bad. I can get the wireless Wi-Fi to activate and light up the buttons, and the hard drive light is on but not blinking.

I am not sure if the CPU activates the screen; if not, then the barebones is bad.

I just found out from the vendor of the hard drive that PayPal put a hold on his payment. I just contacted the vendor of the barebones and he says he is not a techie and cannot help me figure it out. He is only an hour's drive away, and I hope he has another unit we could put the CPU in, or another CPU to put in mine, so we know which is bad. But he may not do this. I do not like to hold up paying the guy for the CPU, or returning it if it is not bad, so I hope to find out where the problem is from what the laptop is doing.
i have read comments that this laptop is the best some people ever had.....please help!!!
try the motherboard outside the shell with an aftermarket fan and see if the fan revs up. if it does, connect a monitor to the graphics output and see if it works then.
If it does work then the barebones screen is faulty, if it doesn't you have a faulty graphics processor.
If the fan doesn't come on then the barebones motherboard is faulty and/or your cpu doesn't work anymore.
Thank you for your help. The motherboard is part of the barebones that I can't get to, but if the problem is from the other parts you name, I can either have the guy replace it or refund my money under the 90-day guarantee.

Since the supplier is within driving distance, if he lets me put the parts in a different unit, that would show whether the CPU is working or not...
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815843.84/warc/CC-MAIN-20180224152306-20180224172306-00165.warc.gz
|
CC-MAIN-2018-09
| 2,369
| 13
|
https://angryweasel.com/blog/year-end-clearance/
|
code
|
I’m on vacation, and this post is auto-generated. See, you can trust automation sometimes…
Another year gone by, and another few dozen posts. Here are the top viewed posts of the last year (note – not all of these were written last year – this is just what people read the most last year).
In order of views:
- Titles for Testers
- Dichotomy for Dummies
- Debugging for Testers
- Coding, Testing, and the A Word
- Exploring Testing and Programming
- Why Information is Important
- Tear Down the Wall
Thanks to everyone who reads, comments, or reacts on Twitter to my rants and ramblings. Plenty of big announcements coming up, as well as more thoughts on what I see happening with testing in the future.
Happy New Year.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522741.25/warc/CC-MAIN-20220519010618-20220519040618-00588.warc.gz
|
CC-MAIN-2022-21
| 727
| 12
|
https://lika.be/wp/2008/12/permission-hierarchy-for-trac/
|
code
|
How to implement a hierarchical permission tree in the project-management tool Trac.
After installing Trac, the default permission settings are rather ‘permissive’.
I tend to lock things down in the following way: I create a set of groups, and every higher-level group has additional permissions compared to the lower-level group. The following groups are created:
- guests are able to view the site
- readers additionally have access to the source browser
- developers have read permission, next to ticket creation, wiki access, report and log access
- managers additionally can admin milestones and roadmap
- admins have full access.
Adding a new user to the permission list is then simply a matter of adding her/him to the correct ‘group’.
Setting up this permission hierarchy can be done by executing trac-admin on the folder that contains your site database.
All required actions are listed in a text file.
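The actions in that text file can also be generated programmatically. Below is a minimal sketch that emits the trac-admin commands for the group chain described above; the group names match the post, but the specific permission actions and the environment path are illustrative, so check them against your Trac version's action list before running the output.

```python
# Emit trac-admin commands that build the hierarchical permission tree
# described above: each group is made a member of the group below it,
# so higher levels inherit all lower-level permissions.

# Permission actions per group (illustrative subset of Trac's action names).
HIERARCHY = [
    ("guests",     ["WIKI_VIEW"]),
    ("readers",    ["BROWSER_VIEW", "FILE_VIEW"]),
    ("developers", ["TICKET_CREATE", "TICKET_VIEW", "WIKI_MODIFY",
                    "REPORT_VIEW", "LOG_VIEW"]),
    ("managers",   ["MILESTONE_ADMIN", "ROADMAP_ADMIN"]),
    ("admins",     ["TRAC_ADMIN"]),
]

def build_commands(env="/var/trac/mysite"):
    cmds = []
    previous = None
    for group, actions in HIERARCHY:
        if previous:  # chain groups so this level inherits the one below
            cmds.append(f"trac-admin {env} permission add {group} {previous}")
        for action in actions:
            cmds.append(f"trac-admin {env} permission add {group} {action}")
        previous = group
    return cmds

if __name__ == "__main__":
    print("\n".join(build_commands()))
```

Adding a new user to a group is then a single command of the same shape, e.g. `trac-admin /var/trac/mysite permission add alice developers`.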
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00031.warc.gz
|
CC-MAIN-2021-39
| 919
| 11
|
https://www.dnnsoftware.com/answers/adding-fb-app-id-and-like-to-new-blog-in-dnn-7x
|
code
|
I am having a problem with the newer DNN FB integration for the blog. I have DNN 7.01.01 and the blog module is v5.0.0. I have set up a FB app under developers.facebook.com for website integration and entered the ID.
My problem is this: the like button now shows but when I go to click it the share box is cut off and only shows the first inch or so on the left of the box. I am trying to make sure I set this up correctly and that the blog is supposed to work with just plugging in the APP ID.
http://osptgroup.com.dnnmax.com/EducationBlog/tabid/138/EntryId/4/Big-News.aspx is an example of this. Any help is greatly appreciated!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257731.70/warc/CC-MAIN-20190524184553-20190524210553-00213.warc.gz
|
CC-MAIN-2019-22
| 696
| 4
|
http://mirror0.alcancelibre.org/aldos/1.4/updates/source/repoview/glibc.html
|
code
|
glibc - The GNU libc libraries
License: LGPLv2+ and LGPLv2+ with exceptions and GPLv2+
Vendor: Alcance Libre, Inc.
The glibc package contains standard libraries which are used by multiple programs on the system. In order to save disk space and memory, as well as to make upgrading easier, common system code is kept in one place and shared between programs. This particular package contains the most important sets of shared libraries: the standard C library and the standard math library. Without these two libraries, a Linux system will not function.
glibc-2.17-260.fc14.al.5.src [25.0 MiB]
by Joel Barrios (2019-05-16):
- Use versioned Obsoletes: for nss_db (#1704593)
- ja_JP: Add new Japanese Era name (#1693152)
- elf: Fix data race in _dl_profile_fixup (#1661242)
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998238.28/warc/CC-MAIN-20190616122738-20190616144738-00497.warc.gz
|
CC-MAIN-2019-26
| 779
| 7
|
https://ko.ifixit.com/Answers/View/191099/iPhone+5+stuck+in+not+restoring+DFU+mode!
|
code
|
iPhone 5 stuck in not restoring DFU mode!
I have an iPhone 5 (A1429) that randomly went into DFU mode... I wasn't able to get out of DFU nor do a restore (iTunes would hang on "waiting for device").
After charging the battery for a night (it holds a charge now, checked it with a voltmeter), I got it to a stage where it is displaying the iTunes logo with the USB cable (recovery mode), but I can't get out of that either (tried holding home and power for 10 sec; it restarts and goes back into the same recovery mode).
If I connect it to iTunes now, I get it to actually start restoring, but 10 seconds after the status bar displays on both my laptop and iPhone, iTunes stops with error 14. I already tried re-installing iTunes, deleting old IPSWs, and trying another wifi/computer. I'm desperate, so please help; I'm willing to open up my iPhone, and I'm in possession of a Mac, Windows and Ubuntu PC.
here is the log i get (only error section):
[15:25:29.0492] Failure Description:
[15:25:29.0492] Depth:0 Error:AMRestorePerformRestoreModeRestoreWithError failed with error: 14
[15:25:29.0492] Depth:1 Error:The operation couldn’t be completed. (AMRestoreErrorDomain error 14 - Failed to handle data request message)
[15:25:29.0492] Depth:2 Error:The operation couldn’t be completed. (AMRestoreErrorDomain error 14 - Failed to handle image request)
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706285.92/warc/CC-MAIN-20221126080725-20221126110725-00448.warc.gz
|
CC-MAIN-2022-49
| 1,335
| 9
|
https://2023.splashcon.org/profile/kartiksinghal
|
code
|
Registered user since Thu 11 Jul 2019
PhD student at UChicago CS since September 2017. Working on PL design and program verification for practical-scale quantum computation.
Affiliation: University of Chicago
Personal website: https://ks.cs.uchicago.edu
Research interests: Programming Languages, Quantum Computing, Type Theory, Hoare-like Logics
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506420.84/warc/CC-MAIN-20230922134342-20230922164342-00331.warc.gz
|
CC-MAIN-2023-40
| 366
| 6
|
https://sourceforge.net/p/graphicsmagick/feature-requests/30/
|
code
|
DjVu is an interesting file format:
- Better than JPEG for photos. (see c44)
- Can compress with limited amount of colors (see cpaldjvu)
- Incredible compression ratio for black&white (see cjb2)
- Support multipage (like TIFF or PDF).
- The amount of memory required is minimal (fast zoom-in and zoom-out). Huge pictures are easily managed (thanks to wavelet compression).
- It supports OCR.
All this makes it very suitable for many purposes (I use it for both storing pictures and documentation). It is one of the supported formats at http://www.openlibrary.org
More information here:
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295854.33/warc/CC-MAIN-20160823195815-00267-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 582
| 9
|
https://pahe.sbs/becky-2020/
|
code
|
A teenager’s weekend at a lake house with her father takes a turn for the worse when a group of convicts wreaks havoc on their lives.
Download Becky (2020)
All files and contents are hosted on third-party websites. PaHe does not accept responsibility for contents hosted on third-party websites. We just index links which are already available on the internet.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710918.58/warc/CC-MAIN-20221203011523-20221203041523-00665.warc.gz
|
CC-MAIN-2022-49
| 359
| 3
|
https://helpwiki.evergreen.edu/wiki/index.php?title=Subscribe2_-_Wordpress&direction=next&oldid=8107
|
code
|
Subscribe2 - Wordpress
From Help Wiki
Revision as of 11:37, 26 February 2010 by Greenea
Subscribe2 is a plugin that, once activated, will allow your readers to receive an email notification whenever a new post is published to the site.
Activate the plugin
- Activate the Subscribe2 plugin in your plugins panel
- Create a new page (probably called something like "Subscribe")
- Click the S2 button that should now appear in your quickbar to automatically insert the subscribe2 token. Ensure the token is on a line by itself and that it has a blank line above and below. This token will automatically be replaced by dynamic subscription information and will display all forms and messages as necessary.
- If you're interested in changing some of the default behaviors of the plugin, go to Tools > Subscribers and change your Subscriber settings here.
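For reference, the token inserted by the S2 button in step 3 is, if I recall Subscribe2's convention correctly, an HTML comment (verify against your plugin version's documentation). A subscription page body might look like this, with the token on a line of its own and blank lines around it:

```html
Enter your email address below to be notified of new posts.

<!--subscribe2-->

Thanks for subscribing!
```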
For your readers to subscribe:
- Readers simply need to navigate to your subscription page and enter their email address.
- They will receive an email that they must confirm the subscription.
- Note: if a potential subscriber is already logged into their blog at blogs.evergreen.edu and they go to subscribe to another blog, they will be directed to manage their subscriptions via the Subscribe2 panel in their Dashboard. They will need to activate the Subscribe2 plugin first, and then they will have a list of all subscribable blogs at blogs.evergreen.edu.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00274.warc.gz
|
CC-MAIN-2022-33
| 1,425
| 13
|
http://eprints.soton.ac.uk/271542/
|
code
|
Cruz, Fabiano; Barreto, Raimundo; Cordeiro, Lucas; and Maciel, Paulo. ezRealtime: A Domain-Specific Modeling Tool for Embedded Hard Real-Time Software Synthesis. In: Design, Automation and Test in Europe (DATE), Munich, Germany, 10-14 Mar 2008. IEEE Computer Society.
In this paper, we introduce the ezRealtime project, which relies on the Time Petri Net (TPN) formalism and defines a Domain-Specific Modeling (DSM) tool to provide an easy- to-use environment for specifying Embedded Hard Real-Time (EHRT) systems and for synthesizing timely and predictable scheduled C code. Therefore, this paper presents a generative programming method in order to boost code quality and improve substantially developer productivity by making use of automated software synthesis. The ezRealtime tool reads and automatically translates the system's specification to a time Petri net model through composition of building blocks with the purpose of providing a complete model of all tasks in the system. Hence, this model is used to find a feasible schedule by applying a depth-first search algorithm. Finally, the scheduled code is generated by traversing the feasible schedule, and replacing transition's instances by the respective code segments. We also present the application of the proposed method in an expressive case study.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00174-ip-10-171-10-70.ec2.internal.warc.gz
|
CC-MAIN-2017-04
| 1,344
| 7
|
https://answers.sap.com/questions/6564173/no-bom-explosion-for-sub-contract-purchased-materi.html
|
code
|
I have searched google and this forum and have not found any info on this. Hopefully someone has run across this same situation and can help.
We do not want a BOM explosion to happen for externally procured materials (using CU51). We are using user exit CCUX0800 (EXIT_SAPLCUKO_008) to set no_expl_ext_procurement = 'X'. This works well, except if the material is set up for external procurement (marc-beskz = "F") and also has a special procurement type (marc-sobsl) of "30". Include LCUKOFFB says to still explode the BOM if the special procurement is "30".
Does anyone know why it does this? We don't want the bom to explode for externally procured parts even if they are set up as subcontracting ("30"). Any way we can do this with a different user exit?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00438.warc.gz
|
CC-MAIN-2022-40
| 758
| 3
|
http://dandyhorsemagazine.com/blog/
|
code
|
Image - Bloor Street Pilot Metrics
Bring on the Data!
Story by Robert Zachowski
This story was originally posted on Robert's blog Two-Wheeled Politics
Throughout my years in cycling advocacy, I gained an understanding about how external factors such as budget funding, design guidelines, inspiration from other cities, and partnerships with residents, businesses, schools, and community groups can influence road safety improvements. Another area Toronto must improve on is data collection in determining how effective cycling projects are. During the Winter Cycling Congress in Montréal (see recap and Montréal cycling posts), I attended their “A Matter of Data” workshop to learn about data collection in Anchorage, Montréal, and Ottawa.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189471.55/warc/CC-MAIN-20170322212949-00095-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 746
| 5
|
https://www.raspberrypi.org/blog/adapting-culturally-relevant-computing-resources-primary-school-research-study/
|
code
|
We are looking for primary schools in England to get involved in our new research study investigating how to adapt Computing resources to make them culturally relevant for pupils. In a project in 2021, we created guidelines that included ideas about how teachers can modify Computing lessons so they are culturally relevant for their learners. In this new project, we will work closely with primary teachers to explore this adaptation process.
This project will help increase the education community’s understanding of ways to widen participation in Computing. The need to do this is demonstrated (as only one example among many) by the fact that in England’s 2017 GCSE Computer Science cohort, Black students were the most underrepresented group. We will investigate how resources adapted to be culturally relevant might influence students’ ideas about computing and contribute to their sense of identity as a “computer person”.
This study is funded by the Cognizant Foundation and we are grateful for their generous support. Since 2018, the Cognizant Foundation has worked to ensure that all individuals have equitable opportunities to thrive in the jobs driving the future. Their work aligns with our mission to enable young people to realise their full potential through the power of computing and digital technologies.
What will taking part in the project involve?
This project about culturally adapted resources will take place between October 2022 and July 2023. It draws from ideas on how to bridge the gap between academic research and classroom teaching, and we are looking for 12 primary teachers to work closely with our researchers and content writers in three phases using a tested co-creation model.
By taking part, you will gain an excellent understanding of culturally relevant pedagogy and develop your knowledge and skills in delivering culturally responsive Computing lessons. We will value your expertise and your insights into what works in your classroom, and we will listen to your ideas.
Phase 1 (November 2022)
We will kick off the project with a day-long workshop on 2 November at our head office in Cambridge, which will bring all the participating teachers together. (Funding is available for participating schools to cover supply costs and teachers’ travel costs.) In the workshop, we will first explore what culturally relevant and responsive computing means. Then we will work together to look at a half-term unit of work of Computing lessons and identify how it could be adapted. After the workshop day, we will produce an adapted version of the unit of work based on the teachers’ input and ideas.
Phase 2 (February to March 2023)
In the Spring Term, teachers will deliver the adapted unit of work to their class in the second half of the term. Through a survey before and after the set of lessons, students will be asked about their views of computing. Throughout this time, the research team will be available for online support. We may also visit your school to carry out an observation of one of the lessons.
Phase 3 (April to May 2023)
During this phase, the research team will ask participating teachers about their experiences, and about whether and how they further adapted the lessons. Teachers will likely spend 2 to 3 hours in either April or May sharing their insights and recommendations. After this phase, we will analyse the findings from the study and share the results both with the participating teachers and the wider computing education community.
Who are we looking for to take part in this study?
For this study, we are looking for primary teachers who teach Computing to Year 4 or Year 5 pupils in a school in England.
- You may be a generalist primary class teacher who teaches all subjects to your year group, or you may be a specialist primary Computing teacher
- To take part, your pupils will need access to desktop or laptop computers in the Spring Term, but your school will not need any specialist hardware or software
- You will need to attend the in-person workshop in Cambridge on Wednesday 2 November and commit to the project for the rest of the 2022/2023 academic year; funding is available for participating schools to cover supply costs and teachers’ travel costs
- Your headteacher will need to support your participation in the study
We will also give preference to schools with culturally diverse catchment areas.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100989.75/warc/CC-MAIN-20231209233632-20231210023632-00442.warc.gz
|
CC-MAIN-2023-50
| 4,405
| 19
|
http://www.sandiego.edu/webdev/notices/systems/styleguide/troubleshooting.php
|
code
|
When your code isn’t doing what you think it ought to be doing, you need to examine and test your code to pinpoint where the problem lies. You need to be able to describe what the code is doing wrong, in addition to what is being displayed wrong in the browser.
These tests are easy and quick, and will help you discover where the problem lies in order to ask a coworker for assistance, and even perhaps to discover where the solution lies.
- Indent your code according to our style guidelines. This can show you where the logic is failing, whether it’s a missing close brace, a misplaced ENDIF, or an ELSE attached to the wrong IF. It is also a necessity if you want to ask someone else to look at your code and understand the expected logic and flow enough to assist you.
- If it involves PHP, add the PHP test lines to the very top of your code. If the page doesn’t display at all, run it through command-line php. But be careful for errors that only show up on the command line, such as MySQL connections that are forbidden except from the server. These errors can become red herrings that waste your time. Only trust the command line over warnings displayed by PHP in the browser if the page isn’t displaying any warnings at all.
- If it involves a PHP variable, use print_r to make sure that the variable contains what you think it contains. echo '<pre>VariableName:'; print_r($variable); echo '</pre>';
- When asking a colleague for assistance, describe the problem in terms of the HTML and the code that generates it. Since you’ve viewed source and you know what is generating that source, you should now be able to describe the problem in terms of what the code is doing, in addition to or rather than how the code is displaying. Often, describing the problem shows you where the error is.
- If you choose to troubleshoot further, make guesses, not assumptions. Assumptions permanently restrict your ability to find the true error. Guesses focus your attention temporarily on a specific part of the code. Many of your assumptions will be correct, and will not trip you up; but many will also be incorrect, and will make finding the error impossible until you lose the assumption. As a rule of thumb, if you’re ignoring PHP’s warnings and/or errors, or you’re disbelieving what print_r tells you a variable contains, you have made an unsupportable assumption. Turn it into a guess, and test it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159031.19/warc/CC-MAIN-20160205193919-00095-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 2,418
| 7
|
https://github.com/M4rtinK/monav-data-generator
|
code
|
Monav data generator
This simple script generates modRana compatible Monav routing data packs from OpenStreetMap data files (both plain osm files and pbf are supported).
You need to have monav-preprocessor installed (as this script is basically a wrapper around it). The monav-preprocessor package should be available from the default repositories in Debian, Ubuntu, Fedora and other major distributions.
./generate.py osm_data_file [output_directory_name]
If no output_directory_name is provided, the filename of the osm_data_file without extension will be used instead.
./generate.py czech_republic.osm.pbf Czech_Republic_2012
The Monav preprocessor can run part of the conversion in multiple threads, speeding up the whole process quite a bit. By default, 4 threads are used.
To set the thread number, just edit the THREADS variable in the generate.py file. The number of threads should roughly correspond to the number of logical processing cores on your machine.
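The wrapper's core logic — defaulting the output directory to the input filename minus its extension, then shelling out to monav-preprocessor — can be sketched as below. The actual generate.py may differ; in particular, the preprocessor flags here are hypothetical, so check `monav-preprocessor --help` before relying on them.

```python
import os
import subprocess
import sys

THREADS = 4  # roughly the number of logical processing cores on your machine

def output_dir_for(osm_file, explicit=None):
    """Default the output directory to the input filename without extension."""
    if explicit:
        return explicit
    name = os.path.basename(osm_file)
    for ext in (".osm.pbf", ".pbf", ".osm"):  # both plain osm and pbf supported
        if name.endswith(ext):
            return name[: -len(ext)]
    return name

def generate(osm_file, out_dir=None):
    out = output_dir_for(osm_file, out_dir)
    os.makedirs(out, exist_ok=True)
    # Hypothetical flag names -- consult the real monav-preprocessor usage.
    subprocess.check_call(
        ["monav-preprocessor", "-i", osm_file, "-o", out, "-t", str(THREADS)]
    )

if __name__ == "__main__" and len(sys.argv) > 1:
    generate(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else None)
```

With this shape, `./generate.py czech_republic.osm.pbf` would write into a `czech_republic` directory unless a second argument names another one.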
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662893.38/warc/CC-MAIN-20190119095153-20190119121153-00471.warc.gz
|
CC-MAIN-2019-04
| 964
| 8
|
https://www.giteshportfolio.com/
|
code
|
I am an experienced Sitecore & Kentico Certified Developer, working as a Solution Architect with AKQA. AKQA gives me the platform to work on different technologies on the web. Before joining AKQA, I worked with Kudos Web for 5 years in Auckland as a lead back-end developer. The purpose of this website is to share knowledge through different blogs and learn new stuff.
I started my web development career in 2006. It all started with the basic programming languages called "C" & C++. In 2007, I started learning the basics of web development in India. At that time, making a 5 page HTML site looked like a big deal. In 2008, I came to Auckland for my higher studies at AUT (Auckland University of technology). Here, I learned more about Databases, CMS and different coding platforms.
Following that, I worked at different companies like Tentronix, Zeald, Phoenix Books, Kudosweb and currently working at AKQA.
Gitesh means achieving real business results that allow you to transform and not just maintain your operations. You will experience requirements that are met on-time, within budget and with high quality, greater efficiency and responsiveness to your business.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506676.95/warc/CC-MAIN-20230925015430-20230925045430-00363.warc.gz
|
CC-MAIN-2023-40
| 1,170
| 4
|
https://containerjournal.com/features/containerx-adds-vmware-vsphere-integration-container-platform/
|
code
|
ContainerX hopes easing container orchestration and deployment for enterprises will give it a leg up in the market. That’s what the company aims to do by integrating its platform with VMware vSphere, a move it announced this week.
The ContainerX product pitch centers on making container deployment seamless, simple and — above all — enterprise-friendly. The company is now doing that, it says, by allowing companies to run its container platform on top of vSphere. That makes it easy to integrate containers into the data center infrastructure that enterprises already have in production.
This approach also makes the ContainerX platform compatible with a broad set of platforms. Those include bare-metal servers and virtual machines, Linux and Windows environments and private and public clouds.
ContainerX announced beta support for the integration of its platform with bare-metal servers and AWS clusters in November 2015. The vSphere integration, however, takes the product’s portability and enterprise-readiness to the next level.
This is an important strategy given that container providers of all kinds — both open source and proprietary — are still struggling to come up with deployment tools that enterprises will really want to use. Solutions like Docker and CoreOS currently require a fair amount of set up and infrastructure investment for most types of implementations. In contrast, by offering a container platform that can run in the vSphere environments that many organizations already have, ContainerX is introducing a container solution that promises to appeal to enterprises that might otherwise be reluctant to make the jump into the container space.
ContainerX hopes to stand out by offering other advantages, too, which go beyond simplified deployment and management. Its platform is designed to optimize container efficiency and stability by automatically allocating infrastructure resources to containers and preventing rogue containers from compromising the performance of other ones.
ContainerX was launched about a year ago by veterans of VMware, Citrix and Microsoft. The company is funded by General Catalyst, Greylock and Caffeinated Capital.
Currently, all ContainerX products remain in beta. The company says its platform will go into general availability in May 2016.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348517506.81/warc/CC-MAIN-20200606155701-20200606185701-00181.warc.gz
|
CC-MAIN-2020-24
| 2,313
| 8
|
http://www.mobius.co.uk/network-automation-with-cloud-assembly-and-nsx-part-2/
|
code
|
In the first post, I covered how to add Cloud Accounts and view discovered compute and network resources in Cloud Assembly. If you missed the first post or need a refresher, click here – part 1 – to check it out. Each of the posts in this series assumes you have access to Cloud Assembly with the pre-reqs for provisioning handled. You will also want NSX configured and running with uplink connectivity to your gateway.
Let’s dive in. Network Profiles control which network constructs are used for placement decisions during a deploy. They also control the level of isolation a workload will have when deployed. If you recall from the first post, I added vc-south and nsx-south environments and will continue working with those entities. In the examples, two network profiles have been created for nsx-south, the NSX-T environment. Each profile widget includes a summary of the compute and networking entities associated with the profile. The options for NSX-v and NSX-T are very similar, with some differences behind the scenes, but essentially the same end results. The first networkTypes I will cover are Existing networks and Outbound on-demand networks. I’ll cover Private on-demand security group options, public and routed networks in the third post.
The best place to begin learning how Cloud Assembly interacts with networking is by looking at a blueprint. Selecting the Blueprints option allows you to add and interact with blueprints. I’ll start by opening the example blueprint called Single-VM-Nat. As the blueprint name suggests, a single machine object and network with a NAT rule will be created, among other configurations, during the deployment.
To create a similar blueprint, simply drag the Cloud Agnostic machine object and NSX Network object to the canvas. I highlighted the portion of yaml code which tells Code Assembly what network capabilities to look for during a deployment. In the example below, I’ve selected an NSX network entity. The networkType: outbound code will create a new network with outbound access and NAT configured. The networkType setting instructs the placement engine to look for a network profile that matches the request. But I’m getting ahead of myself, I’ll provide more detail on network profile and blueprint interactions later in the post.
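A stripped-down version of such a blueprint's YAML might look like the following sketch. The resource names, image, and flavor values are illustrative placeholders, not taken from the original screenshot:

```yaml
formatVersion: 1
inputs: {}
resources:
  Cloud_Machine_1:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      networks:
        - network: '${resource.Cloud_NSX_Network_1.id}'
  Cloud_NSX_Network_1:
    type: Cloud.NSX.Network
    properties:
      networkType: outbound
```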
Note: you could also drag a Cloud Agnostic or vSphere network onto the canvas. For this example, I chose NSX in order to display all available networkTypes in the blueprint.
Before moving forward, I removed outbound from the code panel and selected networkType. Doing so displays available networkType options as shown below. We’ll go through each type and I’ll show you how to configure Cloud Assembly network profiles to accommodate each blueprint setting.
networkType: existing uses a discovered network or deployed network Origin type. I know that can be confusing so let me explain.
- Discovered origin: manually created network objects found through the Cloud Assembly discovery service
- Deployed origin: network objects provisioned by Cloud Assembly
For example, a provider network would appear as deployed. A provider network is created from a blueprint where the only object on the canvas is a network. No machines are associated. There are scenarios where an organization may want to create a static provider network for developers to use without creating an on-demand network each time a deployment occurs. I’ll spend more time on that topic in another blog post. The key thing to remember is both Origin types can be used for existing networkTypes provisioning.
When NSX is involved, deployed and discovered networks can be a switch configured for VXLAN or Geneve overlay networking. The switch could be VLAN backed too. Workloads deployed, using networkType: existing, with either Origin type, will use static or dynamically assigned IP addresses depending on your NSX, VM, and potentially cloudConfig setup.
Switching from my blueprint to a network profile, I’ve selected the Networks option. I added existing networks, notice the switch configured manually (outside Cloud Assembly) shows Discovered for an Origin, whereas the Cloud Assembly created switch shows Deployed, as I mentioned.
In another example, I added existing networks that will be used for placement decisions. I chose App and DB with the tags net:app and net:db for each switch. These tags allow me to assign specific networks to each tier of my application in the blueprint. The switches have their own DHCP server and assigned IP range, importantly this is configured in NSX.
So using this configuration, App and DB will have their own distinct IP ranges. We could take it further and create tags in NSX as part of the provisioning process to help with identification of Cloud Assembly deployed resources. I’m working on another blog post covering how to tag objects in NSX with Cloud Assembly. Keep an eye out for that post.
Next, switch to the Network Policies option and confirm the Do not create on-demand network or on-demand security group radio button is selected, as shown in the screenshot. Don’t forget to confirm the Existing network, T-0 logical router, and Edge cluster provide the desired network connectivity for your VMs.
networkType: outbound creates on-demand networks. For NSX-T, a new T-1 gateway, L2 switch, one-to-many SNAT rule, DHCP server with IP pool, and the proper uplinks/downlinks will be created for each blueprint deployment. Allocated IPs and DHCP IP pools are based upon the CIDR and subnetting configuration specified in the network policies and network options.
Outbound networkTypes require that we have a network configured with a routable network range. Well, it’s ‘required’ if you want outbound network connectivity! In the example, I used Management – Switch and a pool of routable IPs. The correct routed network CIDR must be used for placement decisions. Click the checkbox for the switch and choose Manage IP Ranges.
Manage IP Ranges creates a pool of routable IPs Cloud Assembly will allocate for use as translated external IPs. The IPs are assigned to each SNAT rule that is created in NSX.
Remember Cloud Assembly requires a fully configured NSX environment with uplink connectivity to an underlay network for this scenario to work properly.
Cloud Assembly will track the allocation of IP addresses using the built-in IPAM capability. Click the IP range name to view allocation details. If you are running out of IP addresses, simply modify the start and end IP addresses to increase the pool size. Alternatively a new IP range can be added.
Switching to the Network Policies option, in a network profile, select the Create an on-demand network radio button. Then you must add the discovered transport zone, external switch, T-0 gateway, and Edge cluster. You will need to add a desired private network CIDR and subnet range, in this case I chose a /29 that will be assigned to each on-demand network.
Clicking the dropdown presents the available subnet size options. Subnet size instructs Cloud Assembly to create a new network based on the configured /24 CIDR, in this case part of 220.127.116.11. Remember with NSX, a T-1 gateway, switch, DHCP server and pool, plus SNAT rule is configured for each subnet Cloud Assembly adds. The VM(s) are assigned to the switch as part of the deployment process. This is a really cool option, and is similar to how a network team might carve up private network ranges in an underlay network, but with far less effort and a much shorter time for creation.
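The carving Cloud Assembly does here is standard CIDR arithmetic; a quick illustration with Python's ipaddress module, using the documentation range 192.0.2.0/24 in place of a real private CIDR:

```python
import ipaddress

# Carve a /24 network profile CIDR into /29 on-demand subnets, the way
# the placement engine allocates one subnet per deployment.
cidr = ipaddress.ip_network("192.0.2.0/24")
subnets = list(cidr.subnets(new_prefix=29))

print(len(subnets))  # 32 deployable /29 networks
print(subnets[0])    # 192.0.2.0/29 -> 8 addresses, 6 usable hosts
```

Each deployment consumes one /29 until the pool of 32 is exhausted, which is why the subnet size dropdown matters for capacity planning.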
In summary, each time a blueprint with networkType: outbound is deployed, Cloud Assembly is instructing NSX to spin up networking entities and make numerous configuration changes. In fact, it’s so easy to create on-demand networks (and delete them) in NSX, you will want to provide governance over their creation and use to minimize potential network sprawl. This situation is also where Provider networks often come into play.
The awesome news is VMware provides a number of options allowing you to manage network creation, including instance limits, and leases in Cloud Assembly. Also you can see what has been deployed through Cloud Assembly and NSX. Of course, if you want the best visibility into your NSX and vSphere networking and security configurations, there is no better product than vRealize Network Insight!
Now that you see how easy it is to create and use networks in NSX, don’t hesitate to try it out in your environment. In the next post I’ll go through the other networkTypes and ability to use NSX security groups. For the final post, existing and on demand load balancers are on tap. You can expect to see part 3 soon!
The post Network Automation with Cloud Assembly and NSX – part 2 appeared first on VMware Cloud Management.
Powered by WPeMatico
https://posidev.com/blog/category/software/
This week Facebook open sourced a project called osquery, which offers the ability to access low-level operating system information through simple SQL queries (more precisely SQL as understood by SQLite). More information for how to navigate through the tables can be found in the github page.
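For a flavor of what those queries look like, here are two examples against well-known osquery tables (run from the interactive osqueryi shell); exact column sets can vary between osquery versions:

```sql
-- Top memory-consuming processes
SELECT pid, name, resident_size
FROM processes
ORDER BY resident_size DESC
LIMIT 5;

-- Which local users have a real login shell?
SELECT username, uid, shell
FROM users
WHERE shell NOT LIKE '%false' AND shell NOT LIKE '%nologin';
```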
Installing/building osquery on Linux (in my case Ubuntu 14.04 LTS) is as follows:
git clone https://github.com/facebook/osquery
make deps will take care of installing everything you need to compile osquery.
Testing the project: make test
Deploying and running it: make install
If you have any errors in your sources list, make deps will end with errors and osquery will not be installed, because the required packages are not available. Therefore make sure that you have the latest package lists and no errors from sources.list: sudo apt-get update (and optionally sudo apt-get upgrade). In case of errors, you can fix the list by editing it: sudo gedit /etc/apt/sources.list
Here is another good tutorial on installing and using osquery.
https://coderanch.com/t/278061/java/Stall-reading-variable
I'm currently having an issue with a piped input/output situation. The program stalls every time it gets to the .read() and never reaches the "finished read" statement. I could use some help explaining why this is the case. Here is the code:
[ June 29, 2006: Message edited by: Brian Duncan ]
From the doc: Typically, data is read from a PipedInputStream object by one thread and data is written to the corresponding PipedOutputStream by some other thread. Attempting to use both objects from a single thread is not recommended, as it may deadlock the thread. If you're not comfortable with threads, the threading forum is just a line or two away on the forum menu.
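A minimal sketch of the recommended pattern, with the PipedOutputStream written from a second thread while the main thread reads; the class and method names here are made up for the example:

```java
import java.io.*;

public class PipedDemo {
    // Writes msg through a pipe from a second thread and reads it back
    // on the calling thread, so reader and writer never deadlock.
    static String roundTrip(String msg) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // connect the pair
        Thread writer = new Thread(() -> {
            try {
                out.write(msg.getBytes());
                out.close(); // close so the reader sees end-of-stream
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        writer.start();
        String result = new String(in.readAllBytes()); // returns once writer closes
        writer.join();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));
    }
}
```

If both ends live on the same thread, the read blocks waiting for data that the same (now blocked) thread was supposed to write — exactly the stall described above.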
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
http://forum.conartistgames.com/index.php/topic/15659-raiding-ai-compounds-suggestion/
Hi all, I've decided to start this topic because it's been on my mind for some time, and reading the roadmap 2016 again ( http://deadzonegame....oadmap-2016.jpg) made me want to do it.
The thing I want to talk about is "raiding AI compounds". What Con already wrote in the roadmap is good enough as it is; the only changes I would make are:
-1) Instead of replacing existing compounds in the map, utilize the "raiding practice icon", making it into a menu with a list. In it you would find your compound in the first position (or an appropriate button), and other entries named by difficulty, similar to the "quest menu" (with better graphics). Once you complete one, the next unlocks or appears with increased difficulty. Once completed they don't disappear, so you can redo a level any number of times you want. The only restriction I would apply would be the same exp reduction as doing missions; that would help players train against the types of defences they have problems facing.
-2) Looting resources would be beneficial for finding rare resources like water or ammo for all kinds of players. For the quantity I'd use building level requirements, since they are already present in the game. The number of storages or drop-offs increases with difficulty, or at determined difficulty levels.
-3) To make it more interesting for all players, it would be nice if a fuel generator were there (not at all levels, only at some degrees of difficulty).
The amount of fuel inside rises with difficulty (an upgrade-level requirement would be nice), but since I don't think you'll make 50 levels that's impossible. My suggestion is increasing the fuel inside by a determined amount each level; if you make 10 levels it would be 3 each (calculated from a NOT researched generator). And since you don't want to give too much fuel away for free, I would add a "generating" period like we have on our generators (if I'm not mistaken, 1 fuel every 80-90 min) to make it more similar to a user compound.
That's all for now; I hope my suggestions help. Thank you and keep up the great work.
Edit 2: add of point 3
Edited by survivor_i, 29 October 2016 - 07:04 PM.
http://lists.idyll.org/pipermail/testing-in-python/2009-September/002291.html
[TIP] including tests in packages, or not
robertc at robertcollins.net
Sun Sep 20 21:01:53 PDT 2009
On Sun, 2009-09-20 at 20:14 -0700, C. Titus Brown wrote:
> On Mon, Sep 21, 2009 at 12:56:07PM +1000, Robert Collins wrote:
> > As for your CI server; if it speaks Pandokia or Subunit, you could just
> > toss the tests at an anonymous RPC.
> > For subunit that would be
> > python -m subunit.run tagnabbit.test_suite | <something that pipes stdin
> > to your RPC>
> Hi Rob,
> could you explain? I have lots of RPC - RPC is coming out of my ears
> ;). However, all of my RPC speaks some specific protocol (XML-RPC,
> JSON-RPC, etc.) and wants to connect to some specific URL...
There was a certain amount of fiction, as all the uses I've seen made of
Subunit have been either client driven (e.g. my using ec2 to distribute
bzr test runs and show the results locally) or static reporting (e.g.
+Build;host=snab;tree=samba_3_master;compiler=checker) so far.
The amount of fiction is pretty small though. Subunit already defines a
wire protocol, so if you had a server at http://example.com/submit-tests
which accepts a POST and interprets the body of the POST as Subunit, it
could then insert that into whatever CI format you're using [by using
whatever listener/testresult/$foo your CI uses].
On the client side, it just needs something to take the subunit stream
and do an HTTP Post of it. For instance:
--- subunit-submit ---
cat - > postdata.tmp
curl $1 -d "@postdata.tmp"
echo "test report submitted"
$ python -m subunit.run tagnabbit.test_suite | subunit-submit
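A minimal sketch of such a receiving endpoint in Python (standard library only); here the handler just collects the POSTed bytes, where a real CI bridge would feed them to a subunit parser:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class SubmitHandler(BaseHTTPRequestHandler):
    """Accepts a POST whose body is a subunit stream."""
    received = []  # collected bodies, for demonstration only

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        SubmitHandler.received.append(body)  # real code: hand off to a subunit parser
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"test report submitted\n")

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve_once(port=0):
    """Serve exactly one request on a background thread (port 0 = any free port)."""
    server = HTTPServer(("127.0.0.1", port), SubmitHandler)
    thread = threading.Thread(target=server.handle_request)
    thread.start()
    return server, thread
```

The client side then stays exactly as simple as the `subunit-submit` shell script above: pipe the stream into an HTTP POST.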
https://community.sonarsource.com/t/sonar-analysis-fails-in-travis-claiming-not-authorized/31410
We added the AxonFramework extension-reactor using the instructions provided, but while this has worked for all AxonFramework projects so far, this one keeps failing with “Not authorized. Please check the properties sonar.login and sonar.password.”
The build file contains the token as instructed. None of the projects use “sonar.login” or “sonar.password”, all use a token. Only this one fails. I have regenerated tokens, properly encrypting it, but it still fails.
https://www.fixya.com/support/t9905144-need_remote_control_control_philips
Well, DVDs from region 3 are from these parts of the world:
And a few others in that area
Region 1 is what we use for DVDs in the USA.
From Wikipedia, the free encyclopedia
DVD region code
"Region 1–8" redirects here. For the ITU regions, see International Telecommunication Union region.
DVD RegionsDVD video discs may be encoded with a region code restricting the area of the world in which they can be played. Discs without region coding are called all region or region 0 discs.
The commercial DVD player specification requires that a player to be sold in a given place not play discs encoded for a different region (region 0 discs are not restricted). The purpose of this is to allow motion picture studios to control aspects of a release, including content, release date, and, especially, price, according to the region. Many DVD players are or can be modified to be region-free, allowing playback of all discs.
I looked on a few sites for region codes for your player, but they all came up with the same answer as this one: http://www.videohelp.com/dvdhacks/magnavox-mdv453/3134
There doesn't seem to be any code available for your model to set it to region 0 (region-free).
https://support.awork.io/hc/en-us/articles/360016875419-All-for-the-team-04-10-2020
Teams for your workspace
Create teams and assign users and projects to the teams to limit the visibility of projects and manage larger numbers of users more easily.
More powerful task bundles
The task bundles have been upgraded and you now can also assign users or project roles already in the templates, attach files, etc.
Filter task views by status name
In the task views or on the dashboard you now have the possibility to filter by the name of a task status if you only want to see the tasks with a specific status.
Edit dependencies in tasks
Task dependencies can no longer be edited in the timeline only, but can be set in the task details for all project tasks - even for tasks without start and end date.
Better display of the workload
In the team planning, the actual workload is now displayed for the users in mouse-over.
The user profile settings have moved to the first tab of the profile, making them more accessible.
If you create another workspace, there is a new intermediate step to avoid accidentally creating a new workspace.
https://greenchapel.dev/2021/10/13/getting-the-pull-requests/
So the task for the evening is to allow the user to drill down into a selected repository from within the list repo’s page.
To start with a little UI tweak to capture the user clicking the card seems the sensible way to drill down into that clicked repo.
A new component and addition to the router is needed for this. Adding the router is simple as these pages don’t have authentication yet. Hooking up navigation is also a simple 1 line addition.
Angular has some easy to use functions in the router library, you can simply say what page you want to navigate to and also pass the query parameters you need. For now the only parameter we need to pass is the repository name as that’s all that is needed by the service to drill down into a specific repository.
In this new component I'll just pop in a simple Material table for now, with some fixed headers and some simple "do we have data" logic. I will make this component more dynamic at a later date; for now it's more for testing.
Now that I have the table I can test that the flow is working correctly. I don't need too many fields in this higher-level view; at the moment I will use pull request id, author and title. It will be easy to add more at a later date. I will also need to style the table at a later date, that's if I continue to use this type of table.
I did get a little stuck with this not showing any data. I forgot that forEach-style loops don't await, so the variable was not set with data when the table rendered. It took me a while to work out why I didn't have any data, but once I realised my mistake a quick change to using for...of style loops worked nicely.
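That pitfall generalises; here is a small sketch (with a stand-in async service call, not the project's real service) showing why forEach never waits while for...of does:

```typescript
// Stand-in for an async service call (e.g. fetching a pull request title).
async function fetchTitle(id: number): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulate latency
  return `PR #${id}`;
}

// Buggy: forEach fires the async callbacks and returns immediately,
// so the array is still empty when the caller's await resolves.
async function loadWithForEach(ids: number[]): Promise<string[]> {
  const titles: string[] = [];
  ids.forEach(async (id) => {
    titles.push(await fetchTitle(id));
  });
  return titles;
}

// Fixed: for...of awaits each call before moving on.
async function loadWithForOf(ids: number[]): Promise<string[]> {
  const titles: string[] = [];
  for (const id of ids) {
    titles.push(await fetchTitle(id));
  }
  return titles;
}
```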
Checkout some of my previous progress If you missed it
- For the full list – https://greenchapel.dev/category/git-project/
- Adding a favourites option
- CodeCommit app – Market research
- Routing the app
- Checking the SDK works
- App Base UI Layout
- Starting of a new app
Leave a Reply
http://www.activestate.com/blog/2012/01/adding-paas-openstack-kvm-xenserver-stackato
Diane Mueller, January 19, 2012
The ever-expanding OpenStack ecosystem now has an enterprise-ready PaaS. Stackato, built on Cloud Foundry and hardened for the enterprise, now supports vSphere, Amazon/AMI, HP Cloud Services, OpenStack, Citrix XenServer, and KVM infrastructure models.
ActiveState joined the OpenStack community in 2011, and committed publicly to making PaaS a reality for OpenStack deployments. As showcased with Stackato’s successful deployment to HP’s OpenStack-based cloud, we’ve made good on this commitment, and are now pleased to offer OpenStack-ready Stackato for download with our latest release of Stackato.
Stackato—ActiveState’s commercial-ready, secure, multi-tenant platform for creating a private PaaS—is now the linchpin connecting Cloud Foundry and OpenStack. Stackato’s enterprise-ready OpenStack support ensures that these two important ecosystems can be securely deployed together.
With Stackato, customers can deploy applications to either a private internal cloud (like ones powered by vSphere, XenServer, KVM, or OpenStack) or one hosted with a third-party cloud-hosting provider (like those powered by Amazon, RackSpace, or HP Cloud Services).
Stackato also now has SSH support, so you can have a secure interactive shell in any of your application instances. (Personally, this is my favorite new feature in this latest Stackato release.) This is possible (and safe) because each application instance runs in its own para-virtualized container using LXC, providing more secure multi-tenancy.
In addition to OpenStack deployment and ssh support, Stackato has a new Management Console that replaces the Admin Dashboard with a new, improved user interface:
The Management Console offers deep visibility into the activity and events in a private cloud to help administrators better manage usage. This view includes showing activities of developers who deployed an application, number of instances deployed, memory usage, data services deployed, and languages used.
And there’s more: Additional updates in this latest release include improved application lifecycle management, improved Perl deployment speed, and new pre-staging setup hooks.
Take a test drive with Stackato on a microcloud (VM) on your own desktop or deploy directly to the Stackato Sandbox today!
Tags: citrix xenserver, cloud computing, cloud foundry, cloud security, linux kvm, lxc, mulit-tenancy, openstack, PaaS, stackato
https://cryptocurrencyjobs.com/jobs/principal-software-engineer-blockchain--9
One of the first companies ever to use Blockchain technology to make a social impact is now looking to grow their engineering team here in Los Angeles. While their mission is to help use technology to help make the world a better place, their executive team is second to none.
Therefore, if you're looking to work in the blockchain industry while also making an everlasting impact, please read on. This Principal Engineer will be collaborating with the CTO on a daily basis and handling everything Node, AWS Serverless with some DevOps. You will also be helping them build out a team in house.
Required Skills & Experience
Desired Skills & Experience
What You Will Be Doing
- 5+ years of software engineering experience
- 2+ years minimum of Node.js
- AWS experience
- Familiarity with DevOps
- Ability to communicate with stakeholders
- 80% Hands On
- 10% Management Duties
- 10% Team Collaboration
- Competitive Salary: Up to $200K/year, DOE
You will receive the following benefits:
- Medical Insurance & Health Savings Account (HSA)
- Paid Sick Time Leave
- Pre-tax Commuter Benefit
Applicants must be currently authorized to work in the United States on a full-time basis now and in the future.
https://fontsarena.com/cispeo-by-lucas-descroix/
Cispeo by Lucas Descroix
Cispeo is a monospaced typeface with 2 styles and extensive language support (Latin Extended, Cyrillic, Greek, Hebrew). Initially started as a custom typeface, Cispeo is now 100% free for both personal and commercial use.
A version of the font family is available under OFL license on Bonjour Monde’s Gitlab repo.
Language support: Latin Extended, Cyrillic, Greek, Hebrew (Cispeo Regular); Latin Extended (Cispeo Bold)
Formats: EOT, OTF, TTF, WOFF, WOFF2
License: SIL Open Font License → Licenses explained
https://www.keyworddensitychecker.com/search/verification-vs-validation-testing
Difference Between Verification and Validation with Example
Jun 04, 2022 · Example of verification and validation. Now, let's take an example to explain verification and validation planning. In software engineering, consider the following specification for verification testing and validation testing: a clickable button with the name "Submet". Verification would check the design doc and correct the spelling mistake.
http://i.document.m05.de/2008/08/07/instantmini-and-beta5/
we are about to release our x3d player for the iPhone! patrick just finished the OpenGL ES based application today. it is built with the official iPhone SDK and will be available in the AppStore soon.
beta5 of instantplayer will be released today, too. the experimental BrowserTexture is one of my favorite features. XMLHttpRequest was postponed to beta6. meanwhile i found a solution via TCPClient backend for loading and parsing XML.
all news and interesting demos will be presented at WEB3D and SIGGRAPH next week. see you there..
http://elitistjerks.com/f31/t15257-melee_combat_riddle_me_parry_mechanics/
So I buckled down and did about 5700 seconds worth of testing on a blasted lands mob (level doesn't matter). I used a 3.6 speed 2 hander, and did a /combatlog to capture the results. The result, with respect to parry effect on the swing timer, is shown in the graph below.
When a player Parries, their next normal melee swing becomes a counter-attack sped up by as much as 50%. There does not appear to be a discernible pattern which determines the amount of swing-time reduction a player receives, and it may indeed be possible for the attack to be sped up even faster depending on the circumstance.
It was previously posted that a Parry reduced swing-time by a flat amount of 40% and could not reduce it to less than 20%. Combat log parsing (taking server latency into account) shows this does not appear to be the case.
The above shows a timer reduction range between 14% and 50% and does not abide by the 40/20 rule. With more extensive combat log parsing, timer reductions do not appear to line up to set numbers but instead cover the entire range. It appears based on server delay and the time the Parry occurs -- in many cases, it appears the counter-attack occurs about 1 second after the Parry (though this does not always hold true, either).
To me, it seems like there is a pretty discernible pattern of a constant 40 to 45% next-swing reduction, but maybe my eyes deceive me.
1) Lag - obviously if lag weren't an issue, we would never have more than 3.6 seconds left between auto-attacks. We do, however (hence the negative numbers on the x axis). This confounds many things.
2) There are a few really weird points on the far right, where the swing should be reduced down to about 2 seconds (a parry right after an auto-attack), but instead it's reducing the swing timer far more. I'm not sure if these outliers are lag, or some other odd mechanic.
3) The curve appears to flatten between 0.5 and 2 seconds on the x axis. Is this a fluke within the data set, or something that is being affected by actual game mechanics?
If anyone else is willing to do combat log generation, I'd be really appreciative. The more data the better, and spending over 1.5 hours doing nothing but dropping totems started to wear on me. On the positive side, at least I finally had a use for my gorehowl.
Alright, thanks to spades's willingness to do the mindnumbing work, I think we can confirm some of how this works. Check out the graph from his combat log (much cleaner than mine in terms of lag):
So, what we have here is actually really close to the original statement regarding how parry works. In other words, my understanding of it was pretty darn wrong. I thought I was going to find something new, but the data doesn't bear that out:
When parry gets a % reduction, the swing timer is reduced by 40% of the orginal weapon speed, except in the event that it would take the weapon speed faster than 20% of the total weapon speed. That's why the slope of the line before the flat part is 45 degrees. Each one of those points is 3.6 (weapon speed) * 40% = 1.44 faster than the swing otherwise would be.
We see the cap at .72 seconds (20% of the weapon speed).
The interesting parts, which were what threw me to my initial suspicion of the parry hypothesis being wrong is that stuff that you see when the non-parry swing timer is already lower than 20% of the original weapon speed (which is why i assumed the 20% cap hypothesis had to be incorrect).
Apparently at that point the game realizes that you can't cap a swing at a longer time than it should take, and a new rule of parry mechanics takes over which simply says "hey, parry can't reduce you below 20%, so just let the swing land like it's supposed to".
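The rule as reconstructed above can be written down directly; here is a small illustrative sketch (my reading of the thread's conclusion, not anything official):

```python
def swing_after_parry(weapon_speed, remaining):
    """Remaining swing time after a parry, per the thread's conclusion:
    reduce the timer by 40% of the base weapon speed, floored at 20% of it,
    unless the swing is already below that floor (then it lands as scheduled)."""
    floor = 0.2 * weapon_speed
    if remaining <= floor:
        return remaining  # already past the cap: parry changes nothing
    return max(remaining - 0.4 * weapon_speed, floor)
```

For the 3.6-speed weapon in the tests, a parry with 3.0s left gives 1.56s (the 45-degree line), anything between 0.72s and 2.16s floors out at 0.72s, and anything already below 0.72s is untouched (the flat segment on the far left of the graph).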
So, there we have it, in a neat little bundle - The original theory was right, and I feel a bit dumb. But at least now I'm convinced of the original theory. I suppose someone could try to figure out why there are trace bands of extra haste in little bits, but the overall picture is pretty solid now imo.
The next step is to verify what happens parry wise when dual wielding - is it next swing haste, or always the main hand?
Again, super big thanks to Spades and his sacrifice at the hands of a warrior who thought it would be cool to gank someone doing testing :-p It made the graphs extremely easy to read.
My pleasure. I blame the trace bits of haste on my fiddling about with other programs and causing lag spikes, including one very noticeable time where I tried to open a second instance of WoW (bad idea) and locked everything up for about seven seconds.
Re: dualwield testing, I think your shaman would be better suited to this than my rogue, unless I can find a paladin willing to hang around for a few hours in the Blasted Lands and autoattack to keep Light up while healing me occasionally.
"Existence has no pattern save what we imagine after staring at it for too long."
No doubt, I'll use my shaman for that. I'm jealous of your incredibly consistent ping times, though, I'll admit. I have no idea if your average ping is slower or faster than mine, but it's incredibly consistent.
Hold on, what's going on at the extreme right hand side of the graph there? You have a second line of data points, with much more extreme reduction. Looks to me like this is the line you get when you have two parries in quick succession. The line is about 0.8 - 0.9 below the main line, which is what you'd expect from two successive 40% reductions.
Could you check the logs to see if this is indeed the case, and that these points are down to double parries?
The interesting point is that this line isn't capped at 20% - the majority of the points are at less than .72 seconds on the y axis. So that's why parry streaks can insta-gib the main tank :-)
That suggests that there's a second cap to be reached there, at 4% (20% of 20%). In this case, that would mean that double parries can't reduce your swing time below 0.144 seconds. Interestingly, there's a pair of points at around (1.75, 0.3ish) that could be evidence for this. I would predict that these points come from double parries which got floored out at the second cap.
@ Disquette: Please excuse my slight change in topic here, but I was just curious as to what program you use to graph your data. I'm compiling similar data on mace stuns and different procs and I really like how your data is output. I tried Googling to find some things, but I just figured I'd ask you as well. Thanks in advance!
Rezarel - if it's easy to tell what hand is what (MH/OH), I'd be happy to look at your data. mail to disco at discofiend dotcom please
Songster - interesting observation - I'll go back through spades's log to see if I can confirm your theory.
I went back through the logs - there were no double parries like that preceding the super-hasted attack. Something consistent is happening, however, because each of the 15 instances of superhaste were a speed up between 2.345 and 2.477 seconds. That's a *really* tight band, considering lag effects. The next closest speed up to the 2.345 was 1.648 seconds, so I'd agree that there is something that triggers them. I've uploaded the excel file in case anyone wants to check them out.
Soultroll - I took the combat log in text format, opened it with MS Excel, and used the built in charting features (the graph type is "x y scatter", or something like that). I'm not sure, but I'd guess that Open Office has similar capabilities.
https://grabduck.com/s/t2vXzg6A
Cross-posted from the Google Cloud Platform Blog Editor's note: This post is a follow-up to the announcements we made on March 25th at Google Cloud Platform Live.
Last Tuesday we announced an exciting set of changes to Google BigQuery making your experience easier, faster and more powerful. In addition to new features and improvements like table wildcard functions, views, and parallel exports, BigQuery now features increased streaming capacity, lower pricing, and more.
1000x increase in streaming capacity
Last September we announced the ability to stream data into BigQuery for instant analysis, with an ingestion limit of 100 rows per second. While developers have enjoyed and exploited this capability, they've asked for more capacity. You now can stream up to 100,000 rows per second, per table into BigQuery - 1,000 times more than before.
For a great demonstration of the power of streaming data into BigQuery, check out the live demo from the keynote at Cloud Platform Live.
Users often partition their big tables into smaller units for data lifecycle and optimization purposes. For example, instead of having yearly tables, they could be split into monthly or even daily sets. BigQuery now offers table wildcard functions to help easily query tables that match common parameters.
The downside of partitioning tables is writing queries that need to access multiple tables. This would be easier if there was a way to tell BigQuery "process all the tables between March 3rd and March 25th" or "read every table which names start with an 'a'". You can do this with this release.
TABLE_DATE_RANGE() queries all tables that overlap with a time range (based on the table names), while TABLE_QUERY() accepts regular expressions to select the tables to analyze.
For more information, see the documentation and syntax for table wildcard functions.
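For instance, the two scenarios above might look like this in BigQuery SQL (dataset and table names are placeholders):

```sql
-- Process all daily tables between March 3rd and March 25th
SELECT COUNT(*)
FROM (TABLE_DATE_RANGE(mydataset.events_,
                       TIMESTAMP('2014-03-03'),
                       TIMESTAMP('2014-03-25')))

-- Read every table whose name starts with an "a"
SELECT COUNT(*)
FROM (TABLE_QUERY(mydataset, 'REGEXP_MATCH(table_id, r"^a")'))
```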
Improved SQL support and table views
BigQuery has adopted SQL as its query language because it's one of the most well known, simple and powerful ways to analyze data. Nevertheless BigQuery used to impose some restrictions on traditional SQL-92, like having to write multiple sub-queries instead of simpler multi-joins. Not anymore, now BigQuery supports multi-join and CROSS JOIN, and improves its SQL capabilities with more flexible alias support, fewer ORDER BY restrictions, more window functions, smarter PARTITION BY, and more.
A notable new feature is the ability to save queries as views, and use them as building blocks for more complex queries. To define a view, you can use the browser tool to save a query, the API, or the newest version of the BigQuery command-line tool (by downloading the Google Cloud SDK).
Now you can annotate each dataset, table, and field with descriptions that are displayed within BigQuery. This way people you share your datasets with will have an easier time identifying them.
JSON parsing functions
BigQuery is optimized for structured data: before loading data into BigQuery, you should first define a table with the right columns. This is not always easy, as JSON schemas might be flexible and in constant flux. BigQuery now lets you store JSON encoded objects into string fields, and you can use the JSON_EXTRACT and JSON_EXTRACT_SCALAR functions to easily parse them later using JSONPath-like expressions.
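A quick sketch of both functions against a string field holding JSON (table and field names are placeholders):

```sql
SELECT
  JSON_EXTRACT_SCALAR(payload, '$.user.name') AS user_name,
  JSON_EXTRACT(payload, '$.tags')             AS tags_json
FROM mydataset.raw_events
```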
Fast parallel exports
BigQuery is a great place to store all your data and have it ready for instant analysis using SQL queries. But sometimes SQL is not enough, and you might want to analyze your data with external tools. That's why we developed the new fast parallel exports: With this feature, you can define how many workers will be consuming the data, and BigQuery exports the data to multiple files optimized for the available number of workers.
Check the exporting data documentation, or stay tuned for the upcoming Hadoop connector to BigQuery documentation.
Massive price reductions
At Cloud Platform Live, we announced a massive price reduction: Storage costs are going down 68%, from 8 cents per gigabyte per month to only 2.6 cents, while querying costs are going down 85%, from 3.5 cents per gigabyte to only 0.5 cents. Previously announced streaming costs are now reduced by 90%. And finally, we announced the ability to purchase reserved processing capacity, for even cheaper prices and the ability to precisely predict costs. And you always have the option to burst using on-demand capacity.
I want to take this space to celebrate the latest open source community contributions to the BigQuery ecosystem. R now has its own connector to BigQuery (and a tutorial), as does Python pandas (check out the video we made with Pearson). Ruby developers can now use BigQuery with an ActiveRecord connector, and send all their logs with fluentd. Thanks all, and keep surprising us!
Felipe Hoffa is part of the Cloud Platform Team. He'd love to see the world's data accessible for everyone in BigQuery.
- The thrust of a rotor is determined by the mechanical power delivered to it and by the rotor's diameter, and it can be calculated with a simple equation. In real life the thrust is significantly lower because of the inefficiency of each of the components: a) the rotor, b) the motor, c) the ESC. The total efficiency (figure of merit) is the product of these partial efficiencies, and it can be estimated from hovering time. For example (measurements of my tiny whoops):
a) A tiny whoop with 0716 (17000 Kv) brushed motors looks like this. Its total efficiency (FOM) is about 0.14.
b) A tiny whoop (UR65) with 0603 (17000 Kv) brushless motors looks like this. Its total efficiency (FOM) is about 0.10.
- The efficiency (FOM) of the motor can be calculated from the motor parameters (R, Kv, I0). Current, thrust, torque, rpm and efficiency are mutually interconnected; all possible values form the following curves. When the motor is loaded with a propeller, a single point corresponds to this load (example). In particular, this point gives the efficiency of the motor. To estimate the efficiency of the propeller, the rotational-efficiency coefficient should be adjusted to match the experimentally measured thrust.
a) For 0716 (17000 Kv) brushed motors the plots look similar to these. The efficiency of the 0716 brushed motor is about 0.45 (when loaded with a 31 mm prop at 4 V), and the propeller's efficiency is 0.36 (as seen before).
b) For the 0603 (17000 Kv) brushless motor the plots look like this. Motor efficiency is about 0.7.
(This is more or less obvious: both motors have the same Kv (and therefore torque), and the 0603 brushless motor should be more efficient because of its lower winding resistance.)
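Both estimates can be reproduced with a short sketch. The battery capacity, winding resistance and no-load current below are my guesses for illustration, not measured values:

```python
import math

def hover_figure_of_merit(mass_g, prop_diameter_mm, n_rotors,
                          batt_capacity_mah, batt_voltage, hover_time_s,
                          rho=1.225):
    """Total efficiency (FOM) of the whole craft, estimated from a hover test.

    Momentum theory: ideal hover power per rotor is T**1.5 / sqrt(2*rho*A);
    total efficiency = ideal power / electrical power drawn from the battery.
    """
    thrust = mass_g / 1000 * 9.81 / n_rotors                 # N per rotor
    area = math.pi * (prop_diameter_mm / 2000) ** 2          # disk area, m^2
    p_ideal = n_rotors * thrust ** 1.5 / math.sqrt(2 * rho * area)
    p_elec = batt_capacity_mah / 1000 * 3600 / hover_time_s * batt_voltage
    return p_ideal / p_elec

def motor_efficiency(i, v, r_ohm, i0):
    """Classic brushed-DC motor model: shaft power = (I - I0) * (V - I*R)."""
    return (i - i0) * (v - i * r_ohm) / (v * i)

# 25 g whoop, 31 mm props, 1S 250 mAh pack, ~4 min hover -> FOM around 0.10
print(round(hover_figure_of_merit(25, 31, 4, 250, 3.8, 240), 2))

# Sweep current to trace the efficiency curve; the propeller load picks one point.
for i in (0.5, 1.0, 2.0, 3.0):
    print(f"I = {i:.1f} A -> eta = {motor_efficiency(i, 4.0, 1.0, 0.25):.2f}")
```

The hover estimate lands near the 0.10 measured for the brushless whoop above, while the motor model alone predicts a much higher motor efficiency, which is exactly the contradiction discussed next.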
So we find that the total efficiency of the tiny whoop with brushless motors is lower, in contradiction with the higher calculated efficiency of the brushless motor itself.
In contrast to big motors (e.g. for 5″ quadcopters), micro motors cannot be described by the existing theory, or some of the parameters (like ESC efficiency) are not taken into account. This was also shown in the report where motor efficiency was plotted against motor weight.
First, I would like to know why this happens. Second, it would be nice to be able to predict the characteristics of tiny whoops. Third, there are no published data on micro-motor parameters except thrust.
My goal is not just to obtain some specific parameters of a specific motor, but to get a more general picture of what is happening and whether it can be improved.
That is why I decided to build my own test stand. Here is some preliminary testing:
Parts are printed on a 3D printer. The load cells are 1 kg and 0.2 kg for thrust and torque, respectively. Most of the interface boards are readily available: the HX711 for the load cells, and the INA219 board for voltage and current at the power source. Unfortunately they are not fast (about 1 ms per sample). This is enough for average values of current and voltage, but not for measurements of voltage and current at the motor (where it is also good to know possible phase shifts, waveform shapes, and the voltage drop across the MOSFETs). For that purpose I developed my own board based on the ACS739 current Hall sensor and a couple of ADA4522 op-amps. Here is the board:
The accuracy of current Hall sensors is not very good, but they are fast (1 µs) and do not add extra resistance (1 mOhm). For better accuracy, the zero offset can be adjusted before each measurement, since they have some hysteresis.
For rotational speed, a laser module is used together with a photodiode (of unknown origin). Teensy's comparator is good enough to count pulses even without amplification. The photodiode is loaded with a 6.8 kΩ resistor to give a bandwidth >20 kHz.
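Converting the counted pulses to rpm is simple; the sketch below assumes a 2-blade prop, where each blade interrupts the beam once per revolution:

```python
def rpm_from_pulses(pulse_count, window_s, blades=2):
    """RPM from photodiode pulse counts: revolutions = pulses / blades."""
    return pulse_count / blades / window_s * 60.0

# 1000 pulses counted over 1 s with a 2-blade prop
print(rpm_from_pulses(1000, 1.0))
```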
All sensors are read with a Teensy 3.2 at the maximal (for each sensor) rate. Each sensor's output is filtered with a low-pass filter with an adjustable cut-off frequency (1 Hz in the video above). The motor is controlled by the MultiWii MSP_SET_MOTOR command sent to the FC via a separate COM port.
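A one-pole IIR low-pass with an adjustable cut-off, like the per-channel filter described here, can be sketched as follows. This is an illustration of the idea, not the actual Teensy firmware:

```python
import math

class LowPass:
    """One-pole IIR low-pass filter with an adjustable cut-off frequency.

    alpha = dt / (RC + dt), with RC = 1 / (2*pi*fc).
    """
    def __init__(self, cutoff_hz, sample_dt_s):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        self.alpha = sample_dt_s / (rc + sample_dt_s)
        self.y = None

    def update(self, x):
        if self.y is None:
            self.y = x                       # seed with the first sample
        else:
            self.y += self.alpha * (x - self.y)
        return self.y

# 1 Hz cut-off at a 1 kHz sample rate, as used for the slow display values
lp = LowPass(cutoff_hz=1.0, sample_dt_s=0.001)
```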
The Teensy 3.2 easily does the job of data collection. The baud rate of Teensy's virtual serial port is 12 Mbps, so it is quite comfortable to exchange data between the MCU and the PC program.
In oscilloscope mode, current and voltage from the fast sensor are recorded into an internal RAM buffer and can later be retrieved by the PC program over the serial link.
The PC program is written in Delphi 10.2.
Recently, a series of widespread power outages in Venezuela has made international headlines, as the lack of electricity has escalated the current political and economic instability. While electricity shortages have been a recurring issue since 2010, a major blackout occurred on March 7, 2019, caused mainly by years of government mismanagement. This was the first of three major blackouts that took place during March across the vast majority of the country, including the capital city of Caracas. The power outages led to the deaths of dozens of people as hospitals, schools and businesses were unable to function.
Thinking about Visualizing Change
How can we find quantifiable data on the electricity crisis in Venezuela, and then use it to visualize how the situation has changed over time? While NOAA has created this interactive VIIRS DNB nighttime imagery that is meant to show changes in nighttime lights on a global scale (including Venezuela), there is so much data that the map is very slow and hard to use, especially when trying to hone in on a specific region. While this visualization from ESRI draws a connection between changes in nighttime lights and population density, the data is from 2017 and does not show the 2019 blackouts.
Because this situation is ongoing and intensifying, there is a need for relevant and geographically specific geovisualizations that show the nature of the power outages. To date, I have not been able to find a geographically specific geovisualization of this nature. The most recent one I could find was this side-by-side visualization of the day of and after the March 7 blackout on Wikipedia, but I felt there was a need for an animated component.
To create my visualization, I used VIIRS DNB Nightly Mosaic data from the Earth Observation Group. I selected the tile that includes Venezuela (Tile 1) and downloaded the corresponding .tif. I opened each frame in qGIS Desktop 3.6.0. I then downloaded the GADM shapefile of Venezuela and used the raster extraction tool in qGIS to clip the large nighttime raster files to the shapefile. I then calculated a logarithmic scale to create a black-to-white color gradient with five classifications and a linear interpolation. I used qGIS' print layout feature to adjust the background and exported each frame as a .tif. Finally, I imported these .tifs into Adobe Photoshop CC 2019, which I used to create the animated .gif.
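The logarithmic classification step can be sketched in a few lines. The radiance range below is a made-up example, not the actual VIIRS values used:

```python
import math

def log_breaks(vmin, vmax, classes=5):
    """Logarithmically spaced class boundaries for a black-to-white ramp,
    mirroring the five-class symbology step described above."""
    lo, hi = math.log10(vmin), math.log10(vmax)
    step = (hi - lo) / classes
    return [10 ** (lo + i * step) for i in range(classes + 1)]

print([round(b, 2) for b in log_breaks(0.1, 1000.0)])
```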
One difficulty with working with nighttime data is the inevitable cloud coverage – this was particularly visible in the time period after the first blackout and before the second. This makes it difficult to make any sort of quantitative comparison. With access to the proper data, I also would have liked to analyze the specific regions within Venezuela and see how different populations were impacted by the blackouts. While this visualization was good at painting a general picture, it could have provided more political insights.
By Veronica Correa
This website was created as part of my final project for my “Geovisualizing Change” course at the University of North Carolina at Chapel Hill. I am an undergraduate student pursuing a degree in environmental science with a minor in journalism. My primary goal for this project was to use my multimedia and geovisualization skills to tell a unique story.
Copy without formatting
Is there a way to copy just the text from a webpage to the clipboard in Opera 21 by default, so I can use Ctrl+C/Ctrl+V instead of Ctrl+Shift+V? Not only in Opera, but in other programs too. I never need the fonts, colors and spacing, just the text.
If it does the same outside of Opera, then it's not an Opera feature, so this is not the right place to ask.
You can use a plain-text editor like Notepad2 (or anything that doesn't support text formatting) to work with; that way the formatting won't get copied.
Plain text is all that I get when copying text from this web page with Opera 21.
Where do you get formatted text?
It definitely happens with all WebKit-based browsers. For example, I copy a nickname from this topic in Opera 21. When I paste it into any WYSIWYG editor (like the one in WordPress) or into some text editors (such as Microsoft Word or Mars Notebook), I get a hyperlink with a bold attribute. That never happens if I repeat the same procedure in Opera 12.17.
Now do you understand me? That feature is so annoying that I want a proper way to get rid of it.
Now you understand me?
I can confirm.
I don't know what you'd be able to do to fix it, short of finding an extension that makes a non-formatted copy of the selected element, selects the new text and then copies that to the clipboard. I don't know if there's an extension for that or not, though.
In Word (at least in 2010), you can set the default paste option to just plain text so that ctrl + v always just pastes plain text. That won't help for programs that don't have an option like that though.
Here's an extension in the Chrome store called, "Copy without Formatting." 160 reviews, 4 star rating.
Here's a discussion of the extension. http://www.trishtech.com/2011/04/copy-unformatted-text-in-google-chrome/
Hope it works for you.
You know that to install a Chrome extension, you need the Opera extension called, "Download Chrome Extension."
Best of luck!
Thank you. But this is a half-measure. I would need to install two extensions, which means more processes and more memory usage. And that extension has a bug: it allows copying with shortcuts but ignores the context menu.
A single checkbox in opera:flags would be much preferable. But it's not for me to decide.
If you don't use it that often, you could keep it deactivated until you need it. And the memory use of this and the "Download Chrome Extension" app is probably slight. But you know your computer's resources best. The reason I passed along the extension was that Burnout426 specifically mentioned the idea of an extension, and based on its summary in the Chrome store this seemed to be what you were looking for, so I wanted you to be aware of it. I'm not sure what you mean by a bug. Do you mean a bug in that it fails to work, or just that it doesn't do everything you want? Isn't a plain-text copy, even if only via shortcuts and not the context menu, still useful? Better than nothing? If it's not, well, I did try to find something for you.
Yeah, not a bug, just an omission. Anyway, I'm trying to use it.
Would pasting into the address bar, copying it all again, and then pasting into your target location help remove the font and paragraph formatting?
In the old days, I would paste it into the To: field (I had to open a compose-email window) and copy it back from there. The formatting seems to be removed, so I could paste it anywhere without odd formatting.
Hello all. I have one question regarding Jibri recordings. Jibri saves files under a folder named after the session ID:
// FileRecordingJibriService.kt
private val sessionRecordingDirectory = fileSystem.getPath(recordingsDirectory).resolve(fileRecordingParams.sessionId)
So in a single meeting, let's say Start Recording is triggered 4 times. There will be 4 different folders, and inside each of those folders a roomName_date.mp4 file. Is it possible to group/identify the recordings that happened in a particular meeting, not just in a room? Let's say the room is empty and someone joins it; a new meetingId is created at that point. I want to differentiate all recordings made until a new meetingId is created.
I have checked the prosody modules and added a component that listens for the muc-occupant-joined hook. I can inspect room._data and can tell that the occupant is Jibri; however, there is nothing related to the sessionId in the prosody object(s).
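One post-processing workaround I can sketch (not an answer from the thread, and purely an assumption about the folder layout: recordings_dir/sessionId/roomName_date.mp4) is to group session folders into "meetings" whenever the gap between consecutive recordings exceeds an idle threshold:

```python
import os

def group_sessions(recordings_dir, idle_gap_s=600):
    """Group Jibri session folders into 'meetings' by modification-time gaps.

    A heuristic only: it does NOT use the real meetingId, and the idle-gap
    threshold (10 min here) is an arbitrary assumption.
    """
    sessions = sorted(
        (os.path.getmtime(os.path.join(recordings_dir, d)), d)
        for d in os.listdir(recordings_dir)
        if os.path.isdir(os.path.join(recordings_dir, d))
    )
    meetings, current, last_t = [], [], None
    for t, name in sessions:
        if last_t is not None and t - last_t > idle_gap_s:
            meetings.append(current)   # a long gap starts a new "meeting"
            current = []
        current.append(name)
        last_t = t
    if current:
        meetings.append(current)
    return meetings
```

A proper solution would still need the real meetingId from prosody; this only approximates the grouping from the filesystem.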
Does anyone have an idea about this?
Thanks in advance