Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
// typepacer
/////////////////////////////////////////////////////////////////////////////////////////////////////
var startTime, endTime, timeTaken, wordsPerMinute, totalWords, text, position, mistakes, winCondition, textArr;
defineTextArr();
var attempts = 1;
var resultArray = [[40, 0]];
// defining the canvas and drawing horizontal lines
var canvas = document.getElementById("graph").getContext("2d");
canvas.strokeStyle = "#CCCCCC";
canvas.lineWidth = 1;
for (var i = 0; i < 7; i++) {
canvas.beginPath();
canvas.moveTo(0, 20 * i + 10);
canvas.lineTo(150, 20 * i + 10);
canvas.stroke();
}
newGame();
document.onkeydown = function(e) {
if (text.substr(position, 1) === e.key) {
// if the correct key has been pressed
if (position === 0) {
startTime = new Date().getTime();
document.getElementById("start-text").style.display = 'none';
}
position++;
dispText();
if (position == text.length) {winCondition = true;}
} else if (e.key === 'Escape') {
// if the player presses Esc, to restart
newGame();
} else if ((e.key !== 'Shift') && (e.key !== 'Backspace')) {
// if the player has made a mistake and pressed the wrong key
mistakes++
}
if (winCondition) {
// if the player has finished
endTime = new Date().getTime();
timeTaken = endTime - startTime;
wordsPerMinute = (60000 * totalWords) / timeTaken;
resultArray.push([wordsPerMinute, mistakes]);
document.getElementById("results-text").innerHTML += '<br>Attempt ' + attempts + ': ' + Math.floor(wordsPerMinute) + ' WPM / ' + mistakes + ' mistakes';
document.getElementById("text").innerHTML += '<br><span style="font-size: 20pt;">Press <mark>Esc</mark> to try again.</span>';
if (wordsPerMinute >= 100) {
alert('Wow! Incredible!');
} else if (wordsPerMinute >= 80) {
alert('Wow! Great job!');
} else if (wordsPerMinute >= 70) {
alert('Good job!');
}
// canvas drawing
canvas.strokeStyle = '#0000FF';
canvas.beginPath();
canvas.moveTo((attempts - 1) * 5, 150 - resultArray[attempts - 1][0]);
canvas.lineTo(attempts * 5, 150 - wordsPerMinute);
canvas.stroke();
canvas.strokeStyle = '#FF0000';
canvas.beginPath();
canvas.moveTo((attempts - 1) * 5, 150 - (resultArray[attempts - 1][1] * 4));
canvas.lineTo(attempts * 5, 150 - (mistakes * 4));
canvas.stroke();
attempts++;
}
};
function dispText() {
document.getElementById("text").innerHTML = "<span style='color: lightgray'>" + text.substr(0, position) + "</span><mark>" + text.substr(position, 1) + "</mark>" + text.substr(position + 1);
}
function newGame() {
position = 0;
mistakes = 0;
winCondition = false;
document.getElementById("start-text").style.display = '';
selectText();
dispText();
}
function selectText() {
// selects a text from textArr and assigns variables accordingly
text = textArr[Math.floor(Math.random() * textArr.length)];
// takes a random element of textArr, and stores it in the text variable
totalWords = text.match(/ [a-zA-Z0-9]/g).length;
if (text.match(/"[a-zA-Z0-9]/g)) {totalWords += text.match(/"[a-zA-Z0-9]/g).length;}
if (text.match(/ '[a-zA-Z0-9]/g)) {totalWords += text.match(/ '[a-zA-Z0-9]/g).length;}
if ((text[0] !== '"') && (text[0] !== "'")) {totalWords++;}
// match finds all occurrences of a certain regexp, then puts them in an array. to find word count, we find all occurrences of a space or quote or apostrophe followed by a letter, then add one if we need to account for the first word.
}
function defineTextArr() {
// this is put at the bottom so that the user doesn't have to scroll a while to find the actual code
textArr = [
"This is a paragraph that you are required to type. When you type it, try to type quickly, but accurately. After all, if many mistakes are made, then that's bound to reduce your speed. But, on the other hand, don't stress too much about having few errors. That may reduce your speed as well.",
"When making a typing test, it is important to use many different texts to make the test. If the user is allowed to type the same sentence many times over, then it is inevitable that their speed will trend upwards, and the test will slowly become less and less accurate.",
"I've always wondered just how many different quotes can be typed on different typing test sites. Will I ever type a repeat quote? Have I already done so, without realizing it?",
"It's likely that if you have used this site for any reasonable amount of time, this is not the first time you have typed this quote. It's not like there are that many quotes to choose from. That would take a lot of work to gather that many quotes.",
"This is the last quote that I will put in for now. I was considering putting in some other quotes from books and whatnot, but amongst these other quotes, those book quotes - as well written as they are - would look quite out of place.",
"\"Can we put quotes in our typing tests?\" asked the naive student. Little did they know, it was totally possible. As possible as the word 'coolguy2018.'",
"The current world population is about 7.6 billion. This figure would appear to mark a certain point in time, such that if someone wanted to, they could track the approximate time that I typed this. But, a number like that could mark many points, from past to future.",
"This quote is very long, and may not be particularly pleasant to type. But, these typing tests are all a little inaccurate, in that they all give a short quote to type. It wouldn't be fun to type a whole page or two just to get some results, but in the real world, typing papers and long documents is what typing is generally used for. Online typing tests are like 50 meter sprints, but in reality we're trying to prepare ourselves for a marathon. So why not make the typing tests require some endurance, as well? It might do us some good."
]
}
|
STACK_EDU
|
Tessel: hardware that speaks the language of the web.
Use your web development skills to make hardware devices with Tessel.
Tessel is optimized for the creation of new experiences and internet-connected devices. That’s why Tessel features built in WiFi support and “plug and play” modules that can be installed with one line to the Node package manager (npm). By enabling rapid prototyping and iteration, Tessel gives hardware development the speed and flexibility of web development.
If our campaign reaches its goal, we’ll begin building out Tessel's ecosystem with processes to take a Tesselation from prototype to a beta-testable device, and first-party services for aggregating usage data, firmware deployment and management, and enterprise-class security.
Extend Your Skills to the Physical World
We know innovation doesn’t come from managing drivers and configuration, but from how fast you can develop new experiences. We built Tessel around Node.js’s huge and growing community of modules, so support for web APIs and services; realtime communication; and robotics comes right out of the box.
Tessel’s custom runtime is optimized for low level chips. It only takes up 256k of flash and RAM, so you’re free to push the limits of Tessel’s 32MB for whatever project you dream up.
Add Capabilities Faster Than Ever Before
Tessel’s module system makes it easy to add capabilities to any project without soldering. Just like Node’s module system, each Tessel module encapsulates a specific functionality that can be added to the board, such as RFID, microSD, or a servo.
Simply plug one of our modules into any of the four module ports on the board, then use the node package manager to install the matching library—which is printed right on the module. Check out this video for an example of how easy installing a module can be.
We have two tiers of modules for this crowdfunding campaign, Class A and Class B. Although these are all of the modules we’re currently releasing, more are in development!
Class A modules:
- Relay — turn devices on and off (up to 5 amps)
- Temperature/Humidity sensor — get information about the climate
- Servo Driver – make up to 16 little motors move. Includes one servo. (Additional power supply included for US backers)
- Accelerometer — get realtime movement data
- MicroSD Storage — add extra storage to your Tessel (includes a 1GB microSD card)
- Ambient - light and sound sensor
- nRF24 - wireless communication without WiFi
Class B modules:
- RFID (13.56MHz) — read RFID tags
- Bluetooth Low Energy — send data to other devices, i.e. smartphones
- GPS — get location information
- Audio Output — decode and output sound files / raw audio
- GPRS/SIM (add a SIM card to connect Tessel to the cell network. Low-bandwidth SMS/Voice/Internet for global connectivity without WiFi)
If you want a bit more extensibility to play with other peripherals, we’ve placed a GPIO bank at the end of the board. The GPIO bank includes SPI, I2C, and UART capability as well as 6 General Purpose Input/Output pins, 6 Analog to Digital Converters, a 5V pin, a 3.3V pin, and a ground pin: everything you need to plug in your custom sensors and actuators.
Program over WiFi
Modern smart devices are internet connected—that’s why WiFi is baked into Tessel.
The days of disassembling projects to reprogram them are over: with Tessel, you can push code completely wirelessly, with just a single command.
Tessel features Texas Instruments’ CC3000 WiFi chip which introduces SmartConfig technology: a way to connect your local Tessel device to a WiFi network in seconds simply by entering your network credentials into your smartphone.
Using the WiFi chip to send and retrieve data from the web will feel familiar to any web developer. Connecting to a server with Tessel is dead easy, and has exactly the same workflow as Node.js:
Remotely Control Tessel through our Mobile Application
We’re providing a mobile application for both Android and iOS to let you control Tessel devices wirelessly. The app will let you connect a Tessel to any wireless network, without needing to hardcode credentials onto your device. From there, manipulate local devices by directly controlling their pins and output, or even serve up your own HTML interface directly from the device!
Tessel was made to be embedded in projects, which is why it’s smaller than a credit card. It doesn’t have a microprocessor; it uses an extremely low-energy ARM Cortex-M microcontroller, which means it uses much less battery power. In tests without software power management and a high frequency of wireless transmission, Tessel used 175mAH. That means Tessel will be able to run for a full day with a 3500mAH, 3.7V LiPo battery even when polling WiFi constantly. When smartly controlling when your CPU runs, expect to run Tessel even longer.
Tessel can be powered off of a standard USB battery supply. If you really want to use a standard LiPo connector, let us know and we’ll see what we can do. Tessel modules can also be extended past the core Tessel unit with ribbon cables. We’re working on finding a manufacturer of ribbons cables specifically for Tessel modules and we will let you know as soon as they’re available.
Scale Your Project with Tessel
Tessel was created with the future in mind. We know that our users are an innovative and entrepreneurial crowd. That’s why we’re creating a beta test program where you can take your project to the next level when people start expressing interest in your devices.
We’ll let you upload your code and list of modules, and we’ll send you back a batch of 10–100 assembled Tesselations with your firmware preloaded. You can hand out these betas to potential users. We’re going to start working on libraries to gather aggregate usage data, automated crash reports, and update firmware wirelessly so that for the first time ever, hardware can be beta tested—just like you would with a website.
If you already have validation and are ready to launch a full production run, we’re working hard to ensure our firmware is as efficient and flexible as software. Contact us if you’re interested in seeing Tessel’s firmware and runtime run on your device or chipset.
- 180mHz ARM Cortex-M3 LPC1830
- 32MB SDRAM
- 32MB Flash
- TI CC3000 Wifi Radio
- Up to 18 GPIOs
- 6 ADCs
- Micro USB or battery power
- 40mm x 65mm (without headers)
- 3.48V-6V supply Voltage
- Can be programmed and powered with USB Micro (included)
Technical Machine was founded by three computer engineers from Olin College of Engineering: Tim Ryan, Jia Huang, and Jon McKay. During their time at college, they worked on coilguns, IMDB clones, custom OSes, and GPUs embedded on FPGAs, just to name a few. Tim and Jia have co-taught a 30-person class in Node.js and web development, and all three worked together on their senior capstone project making web-enabled physical devices.
It was during this project that they discovered how much the hardware prototyping space could benefit from a strong software community and environment. They got to work on making a microcontroller for web developers from the ground up. Eric joined the team to help make the board as inexpensive and small as possible. Kelsey soon followed, and she makes sure the team doesn’t overlook small details like paying taxes or having a marketing plan.
Talk to us!
We love to hear your project plans and aspirations for Tessel. Reach out to us on Twitter, Facebook, or email.
We’d like to give a special thanks to some fantastic folks that helped us go from a vague, inkling of an idea to launching with Tessel: Michael and Josh Maloney, Drew Volpe, the Phyre team, the Skillbridge team, Amon Millner, Scott Harris, The Marra-Thomson brothers, Margaret-Ann Seger, Paul Booth, Shane Moon, Peter X. Deng, Sean Dalton and the team at Highland Capital Partners, DC Denison, our friends at Rough Draft Ventures, Tim Raymond, Cypress Frankenfeld, Shilei Zheng, Kendall Pletcher, Ben Kroop, Aaron Greenberg, Iñigo Beitia, Cory Dolphin, Slater Victoroff, Juliana Nazaré, Adam Hyland, all of our friends at One Mighty Roar, Dragon Innovation (except Thos), and, most importantly, our moms (and dads), and probably Mark Chang.
|
OPCFW_CODE
|
Learn how you can install Windows 7 using a USB 3. Install windows 7 drivers on ubuntu. Do you install the latest chipset drivers,. Install windows 7 drivers on ubuntu.On this guide we will see how to install Windows 7, in detail. If you’ re using Windows 8 or any computer with a 64- bit.
Feb 01 · Before doing this upgrade install you should run the Windows 7 Upgrade Advisor to see if you might have any issues before upgrading. 04 does not support RDP ( remote desktop protocol ), so we need to install xrdp to allow connection from Windows 7 Remote Desktop. A guide to installing Ubuntu using the Desktop CD.
Install windows 7 drivers on ubuntu. USB RS232 - FTDI designs and supplies USB semiconductor devices with Legacy support including royalty- free drivers.
How to Install Hardware Drivers on Linux. How to Install Ubuntu on VirtualBox.
This wikiHow teaches you how to install Ubuntu Linux on your Windows or Mac computer. Here' s how to get them.
With the recent advancements in Linux desktop distributions, gaming on Linux is coming to life. I created a VirtIO HDD in virt- manager connected the driver ISO from here. You just need to add a driver to each OS.
Here' s how to set up a dual boot system that lets you enjoy the best of both worlds in perfect harmony. The only drivers I see for storage are for Windows Server 20.
The Windows- based Ubuntu Installer ( Wubi) allows you to install and uninstall Ubuntu from within Microsoft Windows. I have built a computer running Ubuntu 12.
Ubuntu is an open source software operating system that runs from the. Try before you install.
This wikiHow teaches you how to install Ubuntu Linux on your Windows or Mac computer without erasing your current operating system. Jun 25, · Installing Ubuntu from within Windows. It lets a Microsoft Windows user try Ubuntu without risking any data loss due to disk formatting or partitioning.This wikiHow teaches you how to install Ubuntu Linux on a computer by using VirtualBox. If you really need to use Oracle ( ex Sun) Java instead of OpenJDK in Ubuntu here' s an easy way to do it: a PPA repository to install keep your computer up to date with the latest Oracle Java 7 ( Java JDK which includes JRE).
Ubuntu is an open source software operating system that runs from the desktop to the cloud to all your internet connected things. Most of the time, you' ll be fine with open- source software on Linux.
Linux users are beginning to enjoy gaming like Windows users. For USB, please refer to this guide on how to use “ Unetbootin” : ubuntu- mate. So if you don' t have a network connection, go to the hardware setup install the LAC Drivers.
How to Install Ubuntu Linux. I can' t lose my Ubuntu files I' m afraid that I might break GRUB. On Ubuntu Ubuntu- based distributions .
Scared of Windows 8 and thinking of switching to Linux? Thanks for the detail explanation! Linux vs Windows.
VirtualBox is a program which allows you to install an operating system without changing your computer' s main. Step- by- step instructions for creating a customized bootable USB installer that works with USB 3.
Download AMD Drivers & Software for Radeon CPU, desktops, APU, FirePro laptops. Make sure that your computer can run Linux. Compare the two operating system' s from an average user' s perspective. I have a Mac and I' ve been downloading Ubuntu sof.
I have Ubuntu on my laptop. Community How to use Unetbootin.
Virtualization of Snow Leopard ( Client) is not officially supported/ allowed by any virtualization solution. Now I want install Windows 7 in a dual- boot.
With VMware Player, you can install a full copy of Ubuntu and integrate it with your Windows 7 computer for free. Here is a possible solution. It’ s actually pretty simple.
|
OPCFW_CODE
|
fix: sub defineKey should't be renamed
What kind of change does this PR introduce?
Fixes https://github.com/webpack/webpack/issues/18573
import foo from './foo.js';
function works() {
return foo.bar;
}
function broken() {
const bar = foo.bar;
return bar;
}
works(); // does not throw
broken(); // throws ReferenceError
error output
/* harmony import */ var _foo_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(686);
function works() {
return _foo_js__WEBPACK_IMPORTED_MODULE_0__/* ["default"] */ .A.bar;
}
function broken() {
const bar = foo.bar;
return bar;
}
works(); // does not throw
broken(); // throws ReferenceError
expected output
/* harmony import */ var _foo_js__WEBPACK_IMPORTED_MODULE_0__ = __webpack_require__(686);
function works() {
return _foo_js__WEBPACK_IMPORTED_MODULE_0__/* ["default"] */ .A.bar;
}
function broken() {
const bar = _foo_js__WEBPACK_IMPORTED_MODULE_0__/* ["default"] */ .A.bar;
return bar;
}
works(); // does not throw
broken(); // throws ReferenceError
Found that ReferenceError is because const bar = foo.bar is not converted to HarmonyImportSpecifierDependency.
After look into this issue in depth.
For return foo.bar, parser will do walkExpression which includes new HarmonyImportSpecifierDependency logic.
And for const bar = foo.bar, parser will do walkVariableDeclaration which includes walkExpression logic.
walkVariableDeclaration first do rename. If canRename and rename hook both return true, walkExpression can not be executed that causes const bar = foo.bar can not converted to HarmonyImportSpecifierDependency.
So the root cause is foo.bar can't be renamed.
For DefinePlugin({"foo.bar.baz": "baz"}), it will applyDefineKey that make foo.bar canRenamed. I think it's bug and guess that canRename hook is only be used to addValueDependency.
const applyDefineKey = (prefix, key) => {
const splittedKey = key.split(".");
splittedKey.slice(1).forEach((_, i) => {
const fullKey = prefix + splittedKey.slice(0, i + 1).join(".");
parser.hooks.canRename.for(fullKey).tap(PLUGIN_NAME, () => {
addValueDependency(key);
return true;
});
});
};
Did you add tests for your changes?
Yes
Does this PR introduce a breaking change?
No
What needs to be documented once your changes are merged?
No
CI failed, i take a look.
@hai-x Can we add the such test case too?
@hai-x Can we add the such test case too?
it's already have.
https://github.com/webpack/webpack/blob/34f19cbcd9997dd57ea9c8cae392a30a3c3d3afe/test/configCases/plugins/define-plugin/index.js#L171
https://github.com/webpack/webpack/blob/34f19cbcd9997dd57ea9c8cae392a30a3c3d3afe/test/configCases/web/node-source-global/index.js#L7
@hai-x Is ready to merge? Looks good
let's do it!
|
GITHUB_ARCHIVE
|
Multilanguage Post Titles not Translating with List Category Posts Plugin
For the most part the plugin List Category Posts works great.
I just have one problem with it: when I try to show the title and excerpt of a post it shows all translations at the same time, rather than just the translation relative to the selected languages.
I'm using qTranslate for the languages. I'm not sure how to fix that.
How are you showing the title and excerpt? Can you add the relevant code to your question? Or a link to a snapshot?
thanks for your quick reply. for example on the following page: http://yoga-dinamico.hl53.dinaserver.com/en/talleres/ , using the plugin "list category post", i try to show a list of all the posts from a cateogry with this code: [catlist name=talleres thumbnail=yes thumbnail_size=52,52 excerpt=yes]. in the footer, in the Archive section is happening something similar.
Well, let's hope the plugin author see this post, because that's a handicap of LCP. Anyway, there are solutions on WordPress forums on how to hack the plugin to make it work with qTranslate... Saludos!
thank you so much. i found a hint towards the solution exactly where you advised me. i think the solution was a bit outdated,but with a bit of digging, i made the following change: in file include/CartListDisplayer.php, in function get_post_title() i added the following code in first line before anything else: if (function_exists('qtrans_useCurrentLanguageIfNotFoundUseDefaultLanguage')) {
$single->post_title = esc_html(qtrans_useCurrentLanguageIfNotFoundUseDefaultLanguage($single->post_title));
}. needs to be done with all other fields that need translation.thank you so much
bilyana, glad to hear, but please answer your own question and in a couple of days mark it as the correct one. Haven't said, but benvingut a WPSE!
Please, read the FAQ, and you'll even earn your first bronze badge ;o)
i read it but the accepted button is not showing for me. it just show the mark as favorite.
You are confusing Questions with Answers, take a look down the page and you'll see a text box titled Your Answer, you to write it down, wait 2 days and mark it as the correct one.
I found a hint towards the solution exactly where you advised me. I think the solution was a bit outdated, but with a bit of digging, I made the following change:
in file include/CartListDisplayer.php, in function get_post_title() I added the following code in first line before anything else:
if ( function_exists( 'qtrans_useCurrentLanguageIfNotFoundUseDefaultLanguage' ) )
{
$single->post_title = esc_html(
qtrans_useCurrentLanguageIfNotFoundUseDefaultLanguage( $single->post_title )
);
}
This solution worked for me but copy-pasting directly did not, there was an encoding issue of some sort with the "$single->post_title" part so I had to copy-paste that from further down in the CartListDisplayer.php file.
|
STACK_EXCHANGE
|
Fortsättningsansökan till SLF-projekt V0930028: Milk genomics
This document is to keep members informed of developments with the UK genomics services. However, this is a very fast moving area with rapid developments driven at the national level. The information in this document is correct as at February 2020. 2 Introduction Genomics Implementer Guidance; This page is part of the FHIR Specification (v4.0.1: R4 - Mixed Normative and STU). This is the current published version. For a full list of available versions, see the Directory of published versions .
Guidance Genomics is an innovative consumer DNA collection, processing and analysis company that is paving the way for the democratization of nutrigenetics to consumers. Guidance Genomics provides at-home DNA sample collection kits which the consumer submits for genomic processing and analysis. Get More Information. 2019-11-07 This guidance will facilitate the implementation of genomic studies by enabling a common understanding of critical parameters for the unbiased collection, storage, and optimal use of genomic Guidance Genomics.
fen - PhD in Enzymatic functionalization of lignocellulosic for
Search The European Medicines Agency's scientific guidelines on pharmacogenomics (PG) help medicine developers prepare marketing authorisation applications for human medicines. For a complete list of scientific guidelines currently open for consultation, see Public consultations. PG … 2020-02-05 Details.
Rapolas Spalinskas - R&D Scientist II - 10x Genomics LinkedIn
This study showed that generally biomedical researchers were not genomic health literate, unaware of the code and its limitations as a source of ethical guidance for the conduct of genomic research. These findings underscore the need for educational training in genomics and creating awareness of ethical oversight for genomic research in sub-Saharan Africa. This guide covers all aspects of human genomic genomics-reporting, including: Representation of simple discrete variants, structural variants including copy number variants, complex variants as well Representation of both known variants as well as fully describing de novo variations Germline and The interface of genomic information with the electronic health record: a points to consider statement of the American College of Medical Genetics and Genomics (ACMG) Genet Med . 2020 Jun 1. doi: 10.1038/s41436-020-0841-2. Ophthalmic Services Guidance Genomics Services February 2020 .
Tissue Sampling Units Sample Instructions · Blood
Jul 5, 2019 Fully revised guide offers clinicians practical advice around consent and confidentiality when supporting patients through genomic testing. Recently the American College of Medical Genetics published new guidelines for the interpretation of genetic sequence variants (Richards et al. (2015) Genetics
Jan 3, 2018 Before submitting your application to conduct research involving genomic data sharing, please review U-M's policy page and the guidance
Return to Guideline Sections. 2.
J sidlow baxter books
For a full list of available versions, see the Directory of published versions . 10.10 Genomics Implementation Guidance .
For most alternative technologies, assembling a genome can be a long and arduous process that starts with picking a mix of technologies, may involve upstream process such as inbreeding to even get the sample, and ends with a bioinformatician analyzing the data and often inventing new methods.
Kortfristiga placeringar k3
sommarjobb trollhättan 17 år
det duger inte att vara alltför känslig. man får försöka att
hur många siffror är det i en miljard
nationella prov sva 1 exempel
- Logo eurest png
- Boende sveg med hund
- Hög arbetsbelastning
- När kom iphone 6
- Japan houses
- Eurocon consulting investor relations
Klinisk prövning på Anesthesia: M-Entropy guidance of anesthesia
3. Milestones Supplemental Guide. This document provides additional guidance and examples for Regulatory & Ethics Toolkit. Access and adopt ready-to-use regulatory and ethics guidance for genomic and health-related data sharing. Aug 6, 2020 ROCKVILLE, MD — The American Society of Human Genetics (ASHG) today published a new Guidance on ancient DNA (aDNA) research to Definitions for genomic biomarkers, pharmacogenomics, pharmacogenetics, genomic data and sample coding categories.
|
OPCFW_CODE
|
Eloquent dirty check for collections
Laravel Version: 8.x
PHP Version: all
Description:
Currently, the eloquent model dirty checks are not working correctly for collection attributes if a value changes between boolean and string.
Steps To Reproduce:
use Illuminate\Database\Eloquent\Model;
class Test extends Model
{
protected $casts = [
'collection' => 'collection',
];
protected $fillable = [
'collection',
];
}
$model = new Test(['collection' => ['key' => true]]);
$model->syncOriginal();
$model->fill(['collection' => ['key' => 'value']]);
$model->isDirty(); // false but should be true
Reason for this behaviour:
The attributes are casted within the HasAttributes trait to a collection and checked with php's equality logic:
$this->castAttribute($key, $attribute) == $this->castAttribute($key, $original);
When using the comparison operator (==), object variables are compared in a simple manner, namely: Two object instances are equal if they have the same attributes and values (values are compared with ==), and are instances of the same class. [PHP: Comparing Objects]
$a == $b | Equality | true if $a and $b have the same key/value pairs. [PHP: Array Operators]
So, the values are checked with PHP's simple equality check, which is now showing the typical unsafe type checks with strings. In my opinion, the model should be marked as dirty. Depending on the opinion whether this is a bug or expected behavior, I would propose doing strict dirty checks for next major release.
Hey @tpetry, this is more of a feature request I think. Feel free to attempt a PR to see if Taylor would accept it 👍
I wouldn't call it a feature request if my models at the moment don't save their updated values to the database because of the actual behavior of the dirty check algorithm.
@tpetry we could maybe add another strict-collection besides the current one. Please try a PR if you're willing. If it's not accepted you can always use if in your own code base.
The same problem is happening for array, object, maybe AsArrayObject::class and AsCollection ::class. @taylorotwell Is this expected behavior, or a bug? If it's a bug, i can write a PR for 8.x or 9.x if it is declared a breaking change.
I think this is a bug. I had unknown bug for years because of this.
When my collection attribute is changed from [null] to [0], eloquent treats it as unchanged because of loose comparison. Therefore, saving the model won't work. I will stop using collection cast and just use plain array until this issue is fixed.
https://github.com/laravel/framework/blob/88dd075b9f1beaff93796a0ee51818796ca04654/src/Illuminate/Database/Eloquent/Concerns/HasAttributes.php#L1413-L1415
It is unsafe to compare collections using loose comparison. It wouldn't make sense to use strict comparison either, since two different objects always yield different even if their values are same.
I would suggest using raw attributes (JSON string) to check for dirty objects/collections.
Related: https://github.com/laravel/framework/pull/38774#issue-994150768
I think this is intended behavior: collections should be converted to arrays and then compared using strict comparison.
|
GITHUB_ARCHIVE
|
//MODEL
const model = {
isCelsius: true,
toggleCelsius(){
this.isCelsius = !this.isCelsius;
},
get location(){
return this._location;
},
set location(location){
this._location = location;
},
get temperature(){
return this._temperature;
},
set temperature(temperature){
this._temperature = temperature;
},
get celsius(){
return this._celsius;
},
set celsius(celsius){
this._celsius = celsius;
},
get icon(){
return this._icon;
},
set icon(icon){
this._icon = icon;
},
get description(){
return this._description;
},
set description(description){
this._description = description;
},
get wind(){
return this._wind;
},
set wind(wind){
this._wind = wind;
},
get humidity(){
return this._humidity;
},
set humidity(humidity){
this._humidity = humidity;
}
}
//VIEW
const weatherView = {
get box(){
return this._box;
},
set box(box){
this._box = box;
},
get desc(){
return this._desc;
},
set desc(desc){
this._desc = desc;
},
get humidity(){
return this._humidity;
},
set humidity(humidity){
this._humidity = humidity;
},
get icon(){
return this._icon;
},
set icon(icon){
this._icon = icon;
},
get loader(){
return this._loader;
},
set loader(loader){
this._loader = loader;
},
get location(){
return this._location;
},
set location(location){
this._location = location;
},
get temperature(){
return this._temperature;
},
set temperature(temperature){
this._temperature = temperature;
},
get wind(){
return this._wind;
},
set wind(wind){
this._wind = wind;
},
init(){
this.box = document.getElementById('weather-box');
this.desc = document.getElementById('desc');
this.humidity = document.getElementById('humidity');
this.icon = document.getElementById('weather-icon');
this.loader = document.getElementById('loader');
this.location = document.getElementById('location');
this.temperature = document.getElementById('temp');
this.wind = document.getElementById('wind');
controller.init();
this.temperature.addEventListener('click', e => controller.toggleTempUnit());
},
setLocationContent(location){
this.location.textContent = location;
},
setTemperatureContent(temperature){
this.temperature.textContent = temperature;
},
setIconSRC(url) {
this.icon.src = url;
},
setDescContent(desc){
this.desc.textContent = desc;
},
setHumidityContentContent(humidity){
this.humidity.textContent = `Humidity: ${humidity} %`;
},
setWindContent(wind){
this.wind.textContent = `Wind: ${wind} Km/h`;
},
removeLoader(){
this.loader.classList.remove('loader');
this.loader.classList.add('loader--hide');
this.box.classList.remove('weather-box--hide');
this.box.classList.add('weather-box');
}
}
//CONTROLLER
const controller = {
init(){
this.findMe();
},
findMe(){
const convertResponseToJSON = response => response.json();
const setWeatherData = data =>{
model.location = data.name;
model.temperature = Math.round(data.main.temp);
model.icon = data.weather[0].icon;
model.description = data.weather[0].description;
model.wind = data.wind.speed;
model.humidity = data.main.humidity;
}
const setWeatherDataIntoView = () =>{
weatherView.setLocationContent(model.location);
weatherView.setTemperatureContent(`${model.temperature} ºC`);
weatherView.setIconSRC(model.icon);
weatherView.setDescContent(model.description);
weatherView.setWindContent(model.wind);
weatherView.setHumidityContentContent(model.humidity);
}
const hideLoader = () => weatherView.removeLoader();
const success = ({coords}) => {
const {latitude, longitude} = coords;
fetch(`https://fcc-weather-api.glitch.me/api/current?lat=${latitude}&lon=${longitude}`)
.then(convertResponseToJSON)
.then(setWeatherData)
.then(setWeatherDataIntoView)
.then(hideLoader)
}
const error = () => alert('Unable to retrieve your location');
if (!navigator.geolocation) {
weatherView.location.textContent = 'Geolocation is not supported by your browser';
} else {
navigator.geolocation.getCurrentPosition(success, error);
}
},
//mexer nessa funcao
toggleTempUnit(){
const unitStr = model.isCelsius? 'ºF' : "ºC"
const toggledTemp = model.isCelsius? this.convertCelsiusToFahrenheit(model.temperature) : model.temperature
const roundedTempValue = Math.round(toggledTemp);
weatherView.setTemperatureContent(`${roundedTempValue} ${unitStr}`);
model.toggleCelsius();
},
convertCelsiusToFahrenheit(temperature){
return (temperature*9/5)+32;
}
}
weatherView.init();
|
STACK_EDU
|
Using the Bitcoin “” Symbol on your Website As of right now, Unicode has not officially rolled out the bitcoin symbol, so we have to jump through a couple hoops to get it to show up on demand. Using the bitcoin font on your Website The easiest bitcoin symbol font check to get started is by checking out Google Fonts and adding the “Ubuntu Bold Italic” font to your website. Alternatively, you can download the Ubuntu Font to use in Photoshop, Gimp and etc. We are not creators, but redistributors of this content.
Bitcoin Wiki describes some alternative ways to display the symbol that are easier than my approach. You’re probably better off reading that page than this article. Edit: I found out that Font Awesome already has a BTC font, so use that instead of mine. By adding this webfont to a page, you can put Bitcoin symbols into your text. Note that the symbol above is not an image, but an actual font character in the text. You can zoom the page or print the page, and the symbol will remain smooth.
Bitcoin symbol above, something went wrong. How it works The webfont defines two characters: Bitcoin symbol without serifs and Bitcoin symbol with serifs. I used these since many people already use these characters as a stand-in for the Bitcoin symbol. For an explanation of webfonts, see here or here.
And once the Bitcoin symbol is in common use in text, it will be much easier to get it added to Unicode and available automatically. Mining Bitcoin with pencil and paper: 0. M9 1a8 8 0 1 0 0 16A8 8 0 0 0 9 1zm. It seems that the most widely used symbol for Bitcoin is a B with two vertical lines through it1.
It makes sense, as a single letter with a line or two through it is common for many currencies. Is there a plain text printing for this symbol, or perhaps a font that can insert it into documents and the like? With the symbols found here, it looks like a circle around it might become part of the official symbol as well. Maybe I should ask that separately. Like Murch said, there isn’t a current text based symbol for it, but if you’re going to use it on a web application, you can always use Font Awesome.
You can find more information regarding fontawesome bitcoin symbol documentation here. That’s neat, however, isn’t that an image? The asker said “I’m looking for other ways to denote the currency without using an image. Yes you are correct, but he also mentioned being able to print it through HTML. Plus when you use it like in the example, it behaves like text, not an image. It will hopefully become part of the next Unicode standard in June and then it can be used in text. Presumably Windows is waiting for it to be officially part of the standard.
I thought Android was going to include it by now, but I guess they decided to wait. How to get it on android? It needs to be a text-based solution. I’ve made a custom font with Bitcoin, Litecoin, and Dogecoin symbols.
|
OPCFW_CODE
|
Run IRC bot for salt interactions
Example of the single command that saltbot understands right now:
14:22 <jdm> disk usage
14:22 <saltbot> servo-linux1: 68%
14:22 <saltbot> servo-linux5: 47%
14:22 <saltbot> servo-linux4: 70%
14:22 <saltbot> servo-linux6: 37%
14:22 <saltbot> servo-linux3: 68%
14:22 <saltbot> servo-mac2: 75%
14:22 <saltbot> servo-mac3: 35%
14:22 <saltbot> servo-mac8: 31%
14:22 <saltbot> servo-mac6: 32%
14:22 <saltbot> servo-mac7: 9%
14:22 <saltbot> servo-mac4: 37%
14:22 <saltbot> servo-mac5: 32%
14:22 <saltbot> servo-mac1: 16%
14:22 <saltbot> servo-linux2: 63%
Since salt commands can only be run as root (or they can't write to logs in /var/logs), I couldn't figure out how to run this bot as non-root. The code for the bot is at https://github.com/jdm/saltbot. @aneeshusa, what are your thoughts on this?
This change is
@jdm, thanks for working on this! I'm always glad to see progress on chatops for servo.
A couple notes:
Instead of using cmd.run 'df -h', use Salt's built in disk.usage execution function, which will give you structured data, and should work on Windows too. Might be nice to also have the IRC interface reflect this and be disk.usage instead of disk usage (or accept both possibly).
It looks like the current code is in Javascript; I'd prefer to use Salt's Python client API, which gives you structured data and is a bit easier to use IMO. In my experience trying to run NodeJS in production causes operational sadness. Otherwise, you'll probably be interested in using the JSON outputter, which will cause Salt to output (machine-readable) JSON.
Thanks for trying to run this as non-root; I looked into this previously as part of #657 and there's currently not a great way to do this. A few options:
I'm OK with running this as root, but we'll need to be careful about what access we provide via IRC.
Another (more involved) option is setting up their CherryPy REST API, and then hitting it directly or using the Pepper Python Client to use that API. This uses Salt's pluggable External Auth system, which will allow us to create a Salt user just for the IRC bot and restrict which functions and arguments it can use. If we do this, we can run just the CherryPy API on the Salt master, set up TLS, and then run the IRC bot on a separate machine for added safety.
If you decide to keep invoking Salt via the CLI binary, then you should run one Salt command with a target of all builders instead of invoking Salt once for each builder; Salt should stream output for you as results come back. salt-key -L is useful to figure out the set of current minions, instead of hardcoding the number of Linux/macOS minions.
Some form of rate-limiting would be good in the bot to avoid being DOSed. Similarly, we should probably check that incoming messages are to the bot.
I can see this being extended to not-fully-Salt related things (e.g. which saltfs PRs are outstanding/yet-to-be-deployed? This actually is a GitHub call), so maybe pick a different name while it's easy.
Hooking up IRC to Salt is a bit scary, so I'll try to take a deeper look later on.
|
GITHUB_ARCHIVE
|
Pagination is the way that you would split large amounts of content over several pages. This means the user doesn’t have to scroll endlessly to view all of the content on the page but can rather quickly scroll between pages. This is used extensively in websites that display large amounts of news articles or sell products in a shop.
Although this is a great way to separate information and provide a good user experience, there is also the SEO to consider. Pagination creates additional pages instead of one single page, which as a result means that search engines will treat those as individual pages due to them having a separate URL.
If you had an E-commerce website in which you had a page of products that you are selling. As there are quite a few, you decide to use pagination. You’re URL’s will look like this:
https://url.com/product – the root page
https://url.com/product?page=2 – the first paginated page
https://url.com/product?page=3 – the second paginated page
As these are treated as separate pages, they are all ranked individually and could therefore lead to indexing and ranking issues, not to mention having an effect on website crawling.
There are ways in which you could improve on this pagination and we will list some of theme here:
Add links from each page to the following page with a href tags
You can use crawlable anchor links in order to enable search engines to crawl effectively. Use the below as pagination instead of on click events, as search engines can’t crawl properly.
Make your URL structure in a set clean. You can do it either via a ?page=n query parameter or you can create a static URL for each page. This option would be more difficult for a larger site. Avoid using URL Fragment Identifiers as these will be ignored.
Although creating a view-all page might seem like a good idea at first, but try and think if you will actually need one. If you have a large amount of products on a shop page that would require a high number of pagination, creating a view-all page would be increasingly difficult and more frustrating to navigate through.
If you still think you need the view-all page, you need to specify rel=canonical to the root page instead of making your view-all page canonical. This way, you’ll avoid duplicate content issues.
Some search engines recommend that you make only one page in a set canonical and it should be a view-all page. However at the end of the day, your paginated pages are not supposed to be duplicates. You want your content in the search results, right?
So, don’t use the first page of a paginated sequence or the view-all page as the canonical page. Instead, give each page its own canonical URL:
<link rel="canonical" href="https://url.com/product”> <link rel="canonical" href="https://url.com/product?page=2”> <link rel="canonical" href="https://url.com/product?page=3”>
If you’ve heard that you should use unoindex on all the paginated pages starting from page 2 to leave only the root page indexed, this is not advisable. This can complicate the indexation of content linked from the paginated pages and potentially lead to the appearance of orphan pages.
By using external links to those linked pages either from the root page or another page on the site will help to avoid these issues.
If there are too many products in your store, you probably have faceted navigation. These are all those filters that help users sort out products (by price, colour, brand, etc.). These also create new URLs with different parameters to cover all of the possible outcomes. Don’t include these parameters with rel=canonical so the search engines will attribute the page rank to the main page.
Dealing with Infinite scroll or Load More can be tricky for search engines as they are not able to mimic user experience, such as scrolling down with a mouse or by clicking a load more button so most of the content on the page is being ignored. By combining these methods with pagination, you are able to create a page that uses all 3. You scroll down until you get to the Load More button, but this time when you click on it, it loads the next set of results on the same page. Crawlers are then able to crawl these as it works the same way as pagination. It also helps avoid duplicate items in a paginated set.
There are other practices that you should put in place to further improve the experience, such as site and page speed optimisation, having the optimal number of items per paginated page. Also mobile / tablet responsiveness and good UX design are to be kept in mind.
Google Search Console or Analytics don’t keep a dedicated report pagination, however there is some information if you know where to look
Server Log Files
You can check in the server logs to see how many of your paginated pages have been crawled and indexed.
Search Results Report
In this report, you will see the number of impressions the paginated pages get.
Open your GSC and go to the Performance section to find Search Results. In the report, click the New button to add a filter by pages containing pagination (Page… > URLs containing + ?page=).
For more information please visit https://www.link-assistant.com/news/seo-pagination.html
|
OPCFW_CODE
|
Remediating security alerts is at the heart of managing your company security. Use the Threat Command > Remediations page to manage all remediation requests and all remediable alerts from a single pane.
The Remediations page shows remediation requests (from all statutes) and all remediable alerts (that are not closed). By default, the list is sorted by Last update date. You can change the sort order by clicking a column header.
Use the Remediations page quick links to:
- View ROI information.
- Overall success rate.
- Duration (SLA) of remediated alerts, and cancelled or failed remediations.
- Show only potential security issue alerts.
- These are remediable alerts for which no remediation has been requested.
- View the active remediation requests.
- To see the status breakdown, hover over the information icon.
- The amount of active requests that are pending your (the client) action is shown, too.
- See remediation license usage and request more licenses.
You can also use this page to:
- Consult the Remediation team about the remediation process of an alert.
- See the progress of remediation requests.
- View details of all remediable alerts.
- If the alert contains an IOC, when you hover over that IOC, you can see its properties in the popover that is displayed. This helps gain 360 degree visibility of all relevant context, enabling timely triage and informed decisions.
Overall ROI statistics
Use the ROI statistics to get a quick idea of how successful your remediation efforts are.
- Success rate - The number of successful remediation requests divided by the total number of remediation requests (in Success, Failed, or Cancelled states). This is shown only when there are a minimum of 5 requests.
- Median SLA - The median duration from when a remediation request was first requested until it is closed. The duration of Waiting for Client state is not included. This is shown only when there are a minimum of 5 requests.
Filter for non-remediated alerts
You can quickly filter the view to see all the alerts that can be remediated for which no remediation has been requested.
This helps you to pinpoint the potential security breaches and to quickly act on them.
To see only non-remediated alerts:
- From the Remediations page, click Non-Requested.
This is a fast way to filter, which is the same as using the Remediation Status = Not Requested filter.
These statutes can be applied to alerts:
See status of remediation licenses and request more
You can see how many remediation licenses were used and also request more. This information is the same as the Remediation limitation in the Settings > Subscription page.
Each remediation request uses one license. When you request more remediation licenses, a message will be sent to your Customer Support Manager who will then contact you.
To request more remediation licenses:
- From the Remediationspage, click Request More Remediations.
Consult the Remediation team
You can contact the Threat Command Remediation team to consult about remediated or non-remediated alerts. This is a direct way to communicate about the alert's remediation progress or to discuss whether to remediate a certain alert. (For non-remediation inquiries, use the Ask an Analyst function on the Alerts page.)
To consult the remediation team:
- From the Remediations page, select an alert.
- From the Actions panel , click .
- In the Ask the Remediation team panel, type your question at the bottom.
- Click the send arrow.
The message you sent is displayed in the panel. Replies will be displayed there, too.
See remediation request progress
To see alert remediation progress:
- The progress is displayed in the Takedown tab.
View details of remediated alerts
Open the alert details to see a summr of alert details. You can also copy the alert ID.
The information displayed here is identical to the details shown in the Alerts page.
To view alert details:
- The alert details are displayed.
In certain alerts, other fields may be displayed. For example, in mobile application alerts, when there is Sandbox information, that information is displayed as an attached PDF file, in the Attached documents section.
|
OPCFW_CODE
|
NetSpeedometer and NetDock on X1000
Today I thought I would take a look at two programs for looking at the realtime network card performance on the X1000 - NetSpeedometer and NetDock.
Looking first at NetSpeedometer, it is available on os4depot.net and is written by Massimiliano Scarano.
After downloading the program, it is a simple matter to extract the contents of the archive to sys:utilities. The program is ready to run, no further configuration needed to get it going:
So let's move onto the program itself. The GUI shows three tabs - Status, Misc and Bandwidth:
Last, but not least, the bandwidth tab, which is where most of the relevant information you want to look at is located - Bytes Received/Sent, Download/Upload Speed, Max Download/Upload Speed and Connection Rank:
In the example above, the only internet traffic coming down was streaming some internet music via TuneNet. So, to spice it up let's download something! I fired up AmiFTP and downloaded some files from AmiNet to see the results in real time via the NetSpeedometer GUI (Click to expand):
Next, I tried downloading files from OWB at the same time as the FTP session:
I noticed that the connection rank updates itself since the download bandwidth has increased - it still says 512KB at the best, which is a little depressing since I know the network card is a little slower than I would expect it to be...but that is not the fault of this program.
The program is interesting to see what is happening on the network interface - I think it really does need to have the AmiDock plugin mentioned as a future feature, to make monitoring easier. The author does claim it can be used as a benchmarking tool too. In my opinion NetSpeedometer really needs a graphical representation of the data in real time and the ability to verbose log the data being collected from the interface into multiple flat files for later analysis for benchmarking functionality.
Slightly off-topic, is there a decent Excel equivalent on AmigaOS4 to do this kind of benchmark analysis I wonder? Perhaps this program could be expanded to provide reports based on the data collected to save us the trouble!
Ok, so next up is another program called NetDock, written by Guillaume Boesel (zzd10h). This is also available from os4depot.net.
After downloading, I extracted it to SYS:utilities, and it also doesn't need any initial configuration to get it going, although there are a number of tooltype attributes you can modify to beautify it:
The readme touches on some of the tooltypes available to the NetDock program:
Helpfully, some screendumps of some of the customisations possible with NetDock are shown so it is easy to see how modifying the tooltypes produces different views of the program:
Once dragged onto the dock, the NetDock program looks like this:
It shows the Uptime, Maximum speed In/Out, IP Address, and current realtime transfer speed both with numbers and graphically. In my opinion it is great, as it shows the relevant information in a small docky, allowing it fit nicely on the Workbench screen as you use the X1000 (click to expand):
Below is a closer view of the AmiDock only, including NetDock along X1KTempdocky, TuneNet docky and the other usual icons in the dock (click to expand):
As with NetSpeedometer, I did some larger file transfers to see how it changed the stats shown, and NetDock does indeed reflect real time activity:
|
OPCFW_CODE
|
How much AC do you need for it to remain relevant per level?
While I can't find the exact place, I remember someone once saying that AC to be considered a tank went something like:
Level 1: 18
Level 5: 25
Level 10: 30-35
Level 15: 40
Level 20: 50
Or something along those lines.
Could anyone clarify whether this is accurate or not?
Are you interested in the source of the numbers, how those numbers were determined, or if those numbers hold true in actual play?
whether or not they hold true in actual play.
Hmm let me reword the question then
I hope the answer includes how to get to those numbers
Well i do know one way...but it generally involved abjurant champion
@Fering: The Armor Class Guide on AaronWiki let you arrive at AC 45 at level 18 with only standard equipment. It's a good starting point.
The crux of the question revolves around what chasing a high AC actually accomplishes. Without knowing the end goal, it's impossible to judge whether or not you've fulfilled your aims. This answer is going to proceed under the assumption that you're just trying not to get physically hit the majority of the time rather than completely mitigate your opponent's offense. The latter is better accomplished by utilizing miss chances and conditions.
A good metric for "relevant" would be your average opponent needing an 11 or better to hit you, leaving aside circumstantial effects like flanking, cover, etc. A "high" AC would be closer to needing a 15+ to hit you, so we'll use both of these as our guideposts.
Numeric Analysis
A survey of all the CR 1 monsters in the SRD yields an average attack bonus of +2.7 with two outliers at +5 (small air/earth elemental) and +6 (grig). Using our above targets, that suggests an AC of 14 to 18, with an absolute cap of 21 for the ultra-cautious. The vast majority sit right at +2 to +3, so shooting for average is perfectly safe at this level.
At CR 5, the average attack bonus rises to +9.7, with high outliers at +13 (Greater Barghest and Werebear hybrid/bear form). This suggests an AC of 21 to 25 with a cap at 28.
Moving on to CR 10, the average attack bonus is now +18.6, with two high outliers at +25 (Colossal Animated Object and Juvenile Red Dragon). Target ACs should be 30 to 35 with a cap of 40. At this point, the divide between attach-oriented monsters and special-ability-oriented monsters begins to get wildly apparent. The high attacks are more than 12 points higher than the low attacks. Raw AC is not likely to be the single answer it was at lower levels.
At CR 15, the field is almost exclusively dragons, which skews the average attack bonus to +26.9. Expanding to include CR 14 monsters drops the average to +24.8. The outliers are pretty much every dragon in the book at +28 to +33. Average AC target jumps to 36 to 40 with a cap of 48. A realistic gauge of attacks is hard to establish because chances are you're facing something with class levels, templates, or special abilities that don't rely on attack rolls, such as spells.
At CR 20, you're down to very powerful dragons, top-tier NPCs, pit fiends, and balors. Average attack bonus is +37.0, but swings from +30 to +46, excluding whatever overpowered BBEG your GM has in mind. AC targets are difficult to properly set due to such a small sample size, but 50 to 55 is a decent ballpark. At this stage, raw AC doesn't mean as much because you'll be facing numerous save-or-die effects, massive area effects, and other non-attack offensive abilities. Even a cheap ring of evasion will probably save your life more than cranking out that last 10 to 15 AC points.
Getting There
Actually achieving any of these AC targets is an entirely different kettle of fish.
At 1st level, a decent suit of armor or an exceptional Dexterity score will get you to AC 14 to 16 easily. Add on a heavy shield and you're at AC 18 without even trying.
At 5th level, it starts requiring some effort. To hit an AC of 25:
equip a suit of +1 full plate (+9)
rely on a little speed (+1 Dex)
use a +1 heavy shield (+3)
grab the Dodge feat (+1)
wear a +1 ring of protection
That even leaves you enough gold for a +1 weapon and a few other trinkets.
At 10th level, you're going to have to invest a significant portion of your wealth (approx. 45,000 gp) to hit your AC target of 34-39:
+3 mithril full plate (+11)
+3 heavy shield (+5)
+2 ring of protection (+2)
+2 amulet of natural armor (+2)
16+ Dex (+3)
Dodge feat (+1)
Combat Expertise feat (+0 to +5)
Above 10th, conventional AC adds are going to eventually top you out at 57 for a total cost of about 237,000 gp (half that if you can find a caster to sink about 9120 XP into crafting it):
+5 defending weapon (72k gp) [+5]
+5 mithril full plate (35.5k gp) [+13]
+5 heavy steel shield (25.3k gp) [+7]
+5 ring of protection (50k gp) [+5]
+5 amulet of natural armor (50k gp) [+5]
Dodge feat [+1]
Combat Expertise feat [+5]
fight defensively with 5+ ranks in Tumble [+3]
16+ Dex [+3]
The earliest you could manage this would be between levels 15 and 16 if you're buying/acquiring them or 13 to 14th if you're having them crafted.
With some creativity in feat selection and finding a way to add various other stats to your AC (Int and Wis are the most obvious), you can decrease your reliance on items somewhat, but that's left as an exercise for the reader.
It is worth noting that optimizers will recommend only taking +1 weapon/shields/armors and instead rely on the party Cleric using Magic Weapon/Magic Vestments to get the bonuses up to +5. It allows investing in other properties, such as Soulfire Armor Enchantment and Heavy Fortification Shield Enchantment. There are also other sources of armor: Dodge bonuses stack, so they are awesome, Insight/Luck bonuses help as well. Finally, on another note, many heavy hitters should rely on Power Attack, which will lower their AR.
|
STACK_EXCHANGE
|
Because there is no corresponding test
instance an _ErrorHolder object (that has the same interface as a
TestCase) is created to represent the error. If you are just using
the standard unittest ico development company test runner then this detail doesn’t matter, but if you
are a framework author it may be relevant. Method called immediately after the test method has been called and the
A test suite is a collection of test cases, test suites, or both. It is
used to aggregate tests that should be executed together. It allows you to replace parts of your system under test with mock objects and
make assertions about how they have been used.
Software performance testing
cases (e.g. iterative test development & execution) it may be desirable stop
test execution upon first failure (trading improved latency for completeness). If GTEST_FAIL_FAST environment variable or –gtest_fail_fast flag is set,
the test runner will stop execution as soon as the first test failure is found. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the GTEST_FILTER environment variable or the
–gtest_filter flag to a filter string, GoogleTest will only run the tests
whose full names (in the form of TestSuiteName.TestName) match the filter. GoogleTest provides an event listener API to let you receive notifications
about the progress of a test program and test failures.
Doing so, however, can
be useful when the fixtures are different and defined in subclasses. Return a suite of all test cases contained in the given module. This
method searches module for classes derived from TestCase and
creates an instance of the class for each test method defined for the
class. A list of the non-fatal errors encountered while loading tests. Fatal errors are signalled by the relevant
method raising an exception to the caller. Non-fatal errors are also
indicated by a synthetic test that will raise the original error when
Getting Started With Testing in Python
You can instantiate a test client and use the test client to make requests to any routes in your application. If you’re unsure what self is or how .assertEqual() is defined, you can brush up on your object-oriented programming with Python 3 Object-Oriented Programming. For more information on unittest, you can explore the unittest Documentation. Choosing the best test runner for your requirements and level of experience is important. In the REPL, you are seeing the raised AssertionError because the result of sum() does not match 6.
This class represents an aggregation of individual test cases and test suites. The class presents the interface needed by the test runner to allow it to be run
as any other test case. Running a TestSuite instance is the same as
iterating over the suite, running each test individually. This is usually the
full name of the test method, including the module and class name.
Syntax Testing – Limitations:
also add its __aexit__() method as a cleanup function by
addAsyncCleanup() and return the result of the
__aenter__() method. If successful, also
add its __exit__() method as a cleanup function by
addClassCleanup() and return the result of the
__enter__() method. If successful, also
add its __exit__() method as a cleanup function by
addCleanup() and return the result of the
__enter__() method. Returns a description of the test, or None if no description
has been provided.
- If you supply the start directory as a package name rather than a
path to a directory then discover assumes that whichever location it
imports from is the location you intended, so you will not get the
- If you change your software’s internal implementation, your tests should not
break as long as the change is not observable by users.
- In this tutorial, you’ll learn how to create a basic test, execute it, and find the bugs before your users do!
- This method is provided to
allow subclasses of DocTestRunner to customize their output; it
should not be called directly.
- A list of str objects with the formatted output of
- The desire to test internal
implementation is often a sign that the class is doing too much.
The test passes if exception is raised, is an
error if another exception is raised, or fails if no exception is raised. To catch any of a group of exceptions, a tuple containing the exception
classes may be passed as exception. All the assert methods accept a msg argument that, if specified, is used
as the error message on failure (see also longMessage).
What Test Strategy needs to be followed in Syntax Testing?
By default, if an expected output block contains just 1, an actual output
block containing just 1 or just True is considered to be a match, and
similarly for 0 versus False. When DONT_ACCEPT_TRUE_FOR_1 is
specified, neither substitution is allowed. The default behavior caters to that
Python changed the return type of many functions from integer to boolean;
doctests expecting “little integer” output still work in these cases. This
option will probably go away, but not for several years. A number of option flags control various aspects of doctest’s behavior. Symbolic names for the flags are supplied as module constants, which can be
bitwise ORed together and passed to various functions.
Optional argument globs gives a dictionary to use as both local and global
execution context. Optional argument pm has the same meaning as in function debug() above. A shallow copy of module.__dict__ is used for both local and global
execution context. Globs, name, filename, and lineno are attributes for the new
DocTest object. The line number within the string containing this example where the example
begins. This line number is zero-based with respect to the beginning of the
Run multiple tests¶
Google Test implements the premature-exit-file protocol for test runners to
catch any kind of unexpected exits of test programs. Upon start, Google Test
creates the file which will be automatically deleted after all work has been
finished. In case the file
remains undeleted, the inspected test has exited prematurely. GoogleTest can emit a detailed XML report to a file in addition to its normal
The setUp function can
access the test globals as the globs attribute of the test passed. Optional argument setUp specifies a set-up function for the test suite. The setUp function can access the
test globals as the globs attribute of the test passed. Note that since all options are disabled by default, and directives apply only
to the example they appear in, enabling options (via + in a directive) is
usually the only meaningful choice. However, option flags can also be passed to
functions that run doctests, establishing different defaults.
Advanced GoogleTest Topics
This is called even if the test method raised an
exception, so the implementation in subclasses may need to be particularly
careful about checking internal state. Any exception, other than
AssertionError or SkipTest, raised by this method will be
considered an additional error rather than a test failure (thus increasing
the total number of reported errors). This method will only be called if
the setUp() succeeds, regardless of the outcome of the test method. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions.
How to Write Value-Parameterized Tests
is awkward especially when the expression has side-effects or is expensive to
evaluate. So far, you’ve been testing against a single version of Python using a virtual environment with a specific set of dependencies. You might want to check that your application works on multiple versions of Python, or multiple versions of a package. Tox is an application that automates testing in multiple environments.
Test discovery takes care
to ensure that a package is only checked for tests once during an
invocation, even if the load_tests function itself calls
loader.discover. A test case instance is created for each method named by
getTestCaseNames(). If getTestCaseNames() returns no
methods, but the runTest() method is implemented, a single test
case is created for that method instead. Add a function to be called after tearDownClass() to cleanup
resources used during the test class. They are called with any arguments and keyword arguments passed into
addClassCleanup() when they are added. If not, an error message is
constructed that shows only the differences between the two.
|
OPCFW_CODE
|
What would it mean if symmetries are not fundamental at all?
In this paper 1 written by Joseph Polchinski, he seems to indicate that all symmetries of nature may not be fundamental:
From more theoretical points of view, string theory appears to allow no exact global symmetries, and in any theory of quantum gravity virtual black holes might be expected to violate all global symmetries
Moreover, as we have already discussed in §2, local (gauge) symmetries have been demoted as well, with the discovery of many and varied systems in which they emerge essentially from nowhere. It seems that local symmetry is common, not because it is a basic principle, but because when it does emerge it is rather robust: small perturbations generally do not destroy it. Indeed, it has long been realized that local symmetry it is ‘not really a symmetry,’ in that it acts trivially on all physical states. The latest nail in this coffin is gauge/gravity duality, in which general coordinate invariance emerges as well.
This leaves us in the rather disturbing position that no symmetry, global or local, should
be fundamental (and we might include here even Poincaré invariance and supersymmetry).
Susskind has made a distinction between the mathematics needed to write down the equations describing nature, and the mathematics needed to solve those equations. Perhaps symmetry belongs only to the later.
I have a few questions about these claims:
Polchinski mostly worked in string theory and ideas related to it. Is it there any model in string theory or any related theory which proposes that symmetries may not be fundamental at all?
If no symmetries are fundamental, would this mean that there are no fundamental laws of physics? Would this mean that all symmetries (and all laws associated with them) would be rather emergent?
Can you clarify "If no symmetries are fundamental, would this mean that there are no fundamental laws of physics?" Exact global symmetries are thought to be incompatible with any theory of quantum gravity. This is the theme of the paper Symmetries in Quantum Field Theory and Quantum Gravity, which reviews some traditional arguments and presents a more robust argument using one example of a quantum gravity theory (AdS/CFT).
1) There are examples from string theory, supersymmetric gauge theories and matrix models that indicate that symmetries may not be fundamental
Examples:
Sometimes a theory with (local/global) a gauge symmetry is dual to a theory with a different gauge symmetry, or no gauge symmetry at all. An interesting example is Maxwell theory in three dimensions, this is a U(1) gauge symmetry with an electric-magnetic dual description in terms of a free massless scalar with no local gauge symmetry. See https://arxiv.org/abs/hep-th/9506077 for this example, and https://arxiv.org/abs/hep-th/9509066 for more elaborated examples.
Emergent general covariance: Matrix models in triangulated random surfaces (see https://arxiv.org/abs/hep-th/9304011) does not have two dimensional Poincaré or conformal symmetry at finite $N$. Its only in the large-$N$ limit that those notions emerge.
2) The possibility that gauge symmetries could not be fundamental does not rule out, in principle, the viewpoint that more general symmetries could be "fundamental"; string theory dualities are candidates, we don't have examples in in which they emerge or they could be violated.
It is perfectly possible that humans can develop laws of physics without symmetries as inputs.
thank you for your helpful answer. I would like to ask you a few more questions about it: Even though there is still the possibility that more general symmetries would indicate which laws are really fundamental, are there any string theory model or any author working on string theory that propose that there are actually no fundamental symmetries or laws whatsoever and that they are all emergent (or at least consider that possibility)? Would the articles that you have posted be examples of that? @RamiroHum-Sah
@vengaq Sorry, as far as I can tell, nobody has written or considered a concrete scenario where all the symmetries of the system as emergent. But of course, I could be wrong. It's worth to mention that the papers I've shared and similar ones mostly discuss gauge symmetries not the status of more general symmetries.
|
STACK_EXCHANGE
|
What exactly is the "Elastic Stack"? It’s a fast and highly scalable set of components — Elasticsearch, Kibana, Beats, Logstash, and others — that together enable you to securely take data from any source, in any format, and then search, analyze, and visualize it.
You can deploy the Elastic Stack as a Cloud service supported on AWS, Google Cloud, and Azure, or as an on-prem installation on your own hardware.
Elastic provides a number of components that ingest data. Collect and ship logs, metrics, and other types of data with Elastic Agent or Beats. Manage your Elastic Agents with Fleet. Collect detailed performance information with Elastic APM.
If you want to transform or enrich data before it’s stored, you can use Elasticsearch ingest pipelines or Logstash.
Trying to decide which ingest component to use? Refer to Adding data to Elasticsearch to help you decide.
- Fleet and Elastic Agent
Elastic Agent is a single, unified way to add monitoring for logs, metrics, and other types of data to a host. It can also protect hosts from security threats, query data from operating systems, forward data from remote services or hardware, and more. Each agent has a single policy to which you can add integrations for new data sources, security protections, and more.
Fleet enables you to centrally manage Elastic Agents and their policies. Use Fleet to monitor the state of all your Elastic Agents, manage agent policies, and upgrade Elastic Agent binaries or integrations.
- Elastic APM is an application performance monitoring system built on the Elastic Stack. It allows you to monitor software services and applications in real-time, by collecting detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. This makes it easy to pinpoint and fix performance problems quickly. Learn more about APM.
- Beats are data shippers that you install as agents on your servers to send operational data to Elasticsearch. Beats are available for many standard observability data scenarios, including audit data, log files and journals, cloud data, availability, metrics, network traffic, and Windows event logs. Learn more about Beats.
- Elasticsearch ingest pipelines
- Ingest pipelines let you perform common transformations on your data before indexing them into Elasticsearch. You can configure one or more "processor" tasks to run sequentially, making specific changes to your documents before storing them in Elasticsearch. Learn more about ingest pipelines.
- Logstash is a data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize the data into destinations of your choice. Logstash supports a broad array of input, filter, and output plugins, with many native codecs further simplifying the ingestion process. Learn more about Logstash.
- Elasticsearch is the distributed search and analytics engine at the heart of the Elastic Stack. It provides near real-time search and analytics for all types of data. Whether you have structured or unstructured text, numerical data, or geospatial data, Elasticsearch can efficiently store and index it in a way that supports fast searches. Elasticsearch provides a REST API that enables you to store data in Elasticsearch and retrieve it. The REST API also provides access to Elasticsearch’s search and analytics capabilities. Learn more about Elasticsearch.
Use Kibana to query and visualize the data that’s stored in Elasticsearch. Or, use the Elasticsearch clients to access data in Elasticsearch directly from common programming languages.
- Kibana is the tool to harness your Elasticsearch data and to manage the Elastic Stack. Use it to analyze and visualize the data that’s stored in Elasticsearch. Kibana is also the home for the Elastic Enterprise Search, Elastic Observability and Elastic Security solutions. Learn more about Kibana.
- Elasticsearch clients
- The clients provide a convenient mechanism to manage API requests and responses to and from Elasticsearch from popular languages such as Java, Ruby, Go, Python, and others. Both official and community contributed clients are available. Learn more about the Elasticsearch clients.
|
OPCFW_CODE
|
> Exit Code
> Linux Exit Codes
Linux Exit Codes
The error is purely Java exception and TopLink only wraps the reflection exception. UNIX 44=Level 2 halted. Error code: 46 NO_MAPPING_FOR_PRIMARY_KEY Cause: A mapping for the primary key is not specified. Error code: 149 INVALID_USE_OF_NO_INDIRECTION Cause: No Indirection should not receive this message. http://fullflash.net/exit-code/linux-exit-codes-list.html
Action: Define one or use different instantiation policy. Action: Inspect the internal exception and check the Java manuals. TopLink only wraps that exception. Action: The string passed should be one of the following: Check cache Check database Assume existence Assume non-existence Error code: 125 VALUE_HOLDER_INSTANTIATION_MISMATCH Cause: The mapping for the attribute TOC=h2-"1007951"2 http://www.unix.com/shell-programming-and-scripting/67549-found-error-138-while-run-shell-script.html
Linux Exit Codes
Action: Inspect the internal exception and check the Java manuals. Wrap the Python script in a BASH script to record the exit status in a file. But if the file was manually edited or corrupted then the files must be generated again. However, many scripts use an exit 1 as a general bailout-upon-error.
Action: Check the timestamp format. Error code: 176 NULL_POINTER_WHILE_METHOD_ INSTANTIATION_OF_FACTORY Cause: A message is being sent to null inside a factory instantiation. Action: If the project files are not manually edited and corrupted then this is usually an internal exception to TopLink and must be reported to Technical Support. Exit Codes C Action: Validate the constructor for the indirect container class.
Error code: 87 SECURITY_WHILE_INITIALIZING_ATTRIBUTES_IN_ METHOD_ACCESSOR Cause: The methods and in the object are inaccessible. Error code: 1021 Unexpected character:} Cause: Unexpected character}found while reading vector values from the file. Action: Usually such exceptions would mean restarting the application but it is totally dependent on the application. go to this web-site Error code: 49 NO_ATTRIBUTE_TRANSFORMATION_METHOD Cause: The attribute transformation method name in the transformation mapping is not specified.
For instance, many implementations of grep use an exit status of 2 to indicate an error, and use an exit status of 1 to mean that no selected lines were found. Exit Codes Python Error code: 43 MISSING_CLASS_FOR_INDICATOR_FIELD_VALUE Cause: Missing class for indicator field value of type . It may be anticipated that the range of unallotted exit codes will be further restricted in the future. ISAM 102=Invalid argument.
- Error code: 1039 IOException on open.
- UNIX 61=No data (for no delay io).
- Action: Declare the attribute to be of type /TOC=h25.
- When the number of pooled connections reach the threshold any more requests for such connection results in wait until some one releases the connection resource.
- Subscribe to our monthly newsletter for tech news and trends Membership How it Works Gigs Live Careers Plans and Pricing For Business Become an Expert Resource Center About Us Who We
- Action: Inspect the internal exception and check the Java manuals.
Posix Exit Codes
UNIX 98=Socket type not supported. Error code: 148 INVALID_CONTAINER_POLICY_WITH_TRANSPARENT_ INDIRECTION Cause: The container policy is incompatible with transparent indirection. Linux Exit Codes TopLink only wraps that exception. Exit Codes Windows Action: Inspect the internal exception and check the Java manuals.
Error code: 1033 IO Exception in next token Cause: Java is throwing reflection. http://fullflash.net/exit-code/linux-exit-code.html COBOL 35=File not found. Action: Set reference class by calling method /TOC=h21 Error code: 77 REFERENCE_DESCRIPTOR_IS_NOT_AGGREGATE Cause: The referenced descriptor for should be set to aggregate descriptor. UNIX 30=Read only file system. Unix Exit Codes List
Thanks in advance..... COBOL 30=Permission denied or max files too low. It simply ceases to exist. weblink COBOL 09=No memory for sort or cannot seek prev.
Trying to invoke method on the object. Linux Exit Code 11 Error code: 82 SECURITY_ON_FIND_METHOD Cause: The descriptor callback method with DescriptorEvent as argument is not accessible. Error code: 35 INVALID_DATA_MODIFICATION_EVENT Cause: This is an exception that an application should never encounter.
Descriptor Exceptions (1 - 176) Error code: 1 ATTRIBUTE_AND_MAPPING_WITH_INDIRECTION_ MISMATCH Cause: is not declared as type TOC=h2-"1007943"3 but the mapping uses indirection.
Action: efine a mapping for the primary key. Java reflection exception wrapped in TopLink exception is thrown when a method to create new instances is being created from the method name in instantiation policy. Action: If the project files are not manually edited and corrupted then this is usually an internal exception to TopLink and must be reported to Technical Support. Linux Exit Code 9 Action: Add primary key field names using method /TOC=h23 or /TOC=h22.
Klist also exits 1 when it fails to find a ticket, although this isn't really any more of a failure than when grep doesn't find a pattern, or when you ls UNIX 101=Addr fmly not supported by proto fmly. If the project files are not manually edited and corrupted then this is usually an internal exception to TopLink and must be reported to Technical Support. http://fullflash.net/exit-code/windows-exit-codes-list.html Error code: 140 PARAMETER_AND_MAPPING_WITH_TRANSPARENT_ INDIRECTION_MISMATCH Cause: The set method parameter type for the attribute TOC=h2-"1007955"2 is not declared as a super-type of TOC=h2-"1007955"1, but the mapping is using transparent indirection.
Action: Verify that the parameter type of the attribute's get method is correct for the indirection policy.
|
OPCFW_CODE
|
I have an HD that makes clicking sounds and looks like it is going to die very soon.
Ideally I would like to clone the disk but I’m afraid the cloning process would kill it.
How should I proceed? Is there any application especially made for backing up disks in their last throes?
No advice other than to copy your files to another drive asap. I mean right away, now!
Here’s a link with various sounds a dying hard drive makes. Not sure what help that is but it’s kind of interesting.
There’s recover software for sue after it dies but you really don’t want to have to use that because even if it seems to work it will probably rename your files.
Also it may help to keep the drive cool, open the case if the computer is off (if it’s on just copy files ASAP).
In case of total failure I’ve heard of people having success placing the drive in a freezer in a ziplock bag for a while then putting it back in the computer and got some extra time on it. This worked for me once, and another time did not.
I would also suggest saving the important stuff (photo’s doc’s emails, etc) first, then if it still works try to clone it.
You’re still reading! Right now!
If it’s the main OS drive, take it out and mount it as a secondary in another PC. (You don’t need the stress of the OS running off of it too) Then start by picking the files you care most about.
Once you’ve got them, go ahead and try cloning it.
I’ve seen various utilities (here’s a random one) whose purpose is to recover individual files on a bad drive, but I haven’t seen one for copying a whole disk.
And of course you will use this experience to begin using a regular backup routine with the new drive, right? Making regular backups turns a drive failure from OMG HUGE EMERGENCY to “ho-hum, I’ll just pull up one of my backups, life goes on.”
One of my two external backup drives died this week. No big whoop; I just pulled the other one out of my safe-deposit box and restarted my backup routine (full backup to start, nightly incremental backups). Then I ordered two replacement drives (I bought these two drives at the same time, and if one just died, the other is probably on its last legs). I keep one connected to the computer and one in the bank box, and swap them out roughly every few weeks.
Under normal conditions, if my main HD dies, I still have two reasonably recent external backups: the one at my computer that should be no more than a day out of date, and the one in the safe-deposit box, which might be a few weeks old. Even if the house blows up, I still have the backup at the bank (along with spare power/USB cables and a copy of my backup software).
I’m a little hobbled right now because I have only the one backup (and, as an online friend is fond of saying, “a backup that’s right next to your computer is no backup at all”). But it’s only for a few days, until the new drives arrive. And I’m backing up critical files (client files that I’ve just worked on that aren’t on the main backup) to my thumb drive.
Granted, I’m self-employed, so my files are a bit more critical than, say, music or movies, which can be replaced, if expensively. So I’m probably more rigorous than than the average bear about backups. Still, if your stuff is important to you:
BACK UP REGULARLY.
|
OPCFW_CODE
|
Novel–My Vampire System–My Vampire System
Chapter 1202 A Penalty knowledge tacky
Investigating Quinn she was thinking what his ideas ended up, most likely he could require assistance from the Cursed faction. On the other hand, she just experienced him position in place with a look of great matter, however it almost checked just like he wasn’t examining the Dalki s.h.i.+p itself but something diffrent.
There was no requirement for Helen to mention it double, as every person has been planning for the conflict in front of them. Promptly everyone inside the teleporter space had stepped out to investigate the space.
‘What variety of fees and penalties have the consumer get from dying?’ Quinn requested, believing that they may be comparable.
“a.s.semble a group that is certainly prepared to leave the house with me, I’ll assist the some others about the way, plus they can up-date yourself on the Dalki situation once I’ve consumed down that thing. We can focus on the relaxation.” Quinn ordered.
‘A fees? This is actually the newbie the device has ever a.s.authorized such a thing. Why at the same time like this? What can the fees be also? Considering that the gains are usually statistics and degree ups, could it be that it really plans to acquire some of the gone?’
‘The out of doors is probably too strong, so the only method is to watch out for a way to mess up it from the inside.’ Quinn determined. The machine hadn’t supplied him any Quests that had been completely difficult, so he hoped it hadn’t commenced now. On the other hand, that was also the very first time it possessed helped bring up a penalty…
None of them of the people alternatives sounded good to Quinn. He possessed extended considering the fact that needed to stop counting on the equipment. It turned out weird how rapid it got made it possible for him to boost himself in certain regions. Even without having its aid Quinn could possibly be thought to be loads formidable now, but to address the enjoys of Arthur, Hilston or the Dalki leaders, he needed every one of the aid he might get.
Helen too want to save the Visitors. The sole thing she could believe was that Quinn could reach his feels when they were definitely on the outside.
‘It was pretty occasional and may even range from a decrease of goods or abilities, to the losing of amounts, statistics to only a simple reduction in practical experience tips.’
Helen was right. At the present time, Quinn was looking at the immediate alert screen which had made an appearance the instant he got arranged ft . away from the developing and installed view on the moms.h.i.+p.
the adventures of a freshman in high school
‘I believe the exact same.’ Vincent arranged. ‘As you know, the program was according to a game. It makes use of AI that your particular.s.symptoms Quests based on the facts around it. The thing is that I never thought a penalty would turn up. During the activity itself there had been penalty charges any time you would pass away. Certainly, in person in the event you pass on you don’t obtain a second chance then i never thought I might check this out.’
My Vampire System
Looking at the substantial s.h.i.+p, and reading through the Objective all over again, it was clear as day what he had to do. He essential to try to eradicate a s.h.i.+p that couldn’t even be undertaken down by electricity blasts.
Initially she was at somewhat of a loss of where to start, ahead of she finally mentioned a thing.
[Breakdown to accomplish the goal can lead to a fees]
‘A women.h.i.+p listed here of all the locations?! Provide the Dalki decided to use their full push over the Cursed faction? But why, what do they have to attain? Performed they already know that Quinn was in this article?’ The girl wondered.
Exploring the significant s.h.i.+p, and looking at the Goal once again, it was clear as morning what he had to do. He required to find a way to eradicate a s.h.i.+p that couldn’t be also consumed down by vitality blasts.
However she didn’t like causing issues as much as destiny, Helen were forced to admit that her sibling was appropriate. This was warfare in a degree none of them got predicted. This wasn’t time to conserve some at the price tag on plenty of other people.
‘The relaxation? That element? He can’t be likely to change from one earth to the next and get the mommies.h.i.+ps, could he?’ Helen was astonished. The amount of dark colored pods that carried on to precipitation down was testimony that there ended up a lot more Dalki on the globe than some of them had ever seen. It may be troublesome enough to simply cope with them, nonetheless it checked like Quinn acquired produced his head.
As the innovator from the Daisy faction she speedily called for any gang of twelve folks that is delivered with Quinn to achieve the Travellers which had been outside also to carry them back information. These twelve have been those who has been compensated along with the blood stream weaponry.
‘The outdoors is most likely too sturdy, so the only method is to watch out for an effective way to mess up it in the interior.’ Quinn concluded. The program hadn’t supplied him any Quests which had been completely unattainable, so he hoped it hadn’t began now. Nonetheless, this has been also at the first try it had helped bring up a penalty…
My Vampire System
“Explain to most of the Cursed planets to prepare themselves for any episode!” Helen immediately purchased. She was unsure should the other planets have been affected, but there had been always the be concerned that something significant was in the horizon and yes it was preferable to be safe than sorry. “Be certain that the Daisy faction is prepared as well. I might be unable to give an up-date without delay, show the Cursed faction leaders for taking control until we receive the situation in check!”
“a.s.semble a workforce that is certainly prepared to head out with me, I’ll help the some others over the way, additionally they can improve you on the Dalki predicament once I’ve consumed down that thing. Then we can concentration on the relaxation.” Quinn required.
‘The outside the house is most likely too solid, so the only way is to find a method to ruin it out of the in.’ Quinn concluded. The system hadn’t provided him any Quests that had been completely difficult, so he hoped it hadn’t started off now. On the other hand, it was also at the first try it obtained introduced up a penalty…
‘A penalty? Right here is the very first time the device has ever a.s.closed this. Why during a period such as this? What would the fee be? Ever since the returns tend to be data and levels ups, could it be so it intends to have a few of those absent?’
[A different mission is received]
‘Both of the communications appear like they go hand in hand though not fairly. Simply because I ruin the moms.h.i.+playstation, the Dalki that have been used could still dominate our planet. We have to ensure that they can be safeguarded even when ruining the mums.h.i.+playstation.’
Seeing that there is absolutely no way to offer all of those other weaponry, he kept each of them at the disposal of Helen to distribute them among those she trusted. Daisy was among the strongest factions out from the others so they probably desired it minimal, but it really was worthless to not ever have used them and h.o.a.rd them at the present time.
Overall the Cursed faction owned eighteen planets, which resulted in Quinn could allow nine ones to always be bought out or wrecked. He could notice that the number would improve as each earth was taken over, but the Pursuit message didn’t quit there.
[A new pursuit has actually been gotten]
There was no need for Helen to talk about it twice, as everyone ended up being planning for your conflict prior to them. Easily everybody in the teleporter bedroom obtained stepped out to look into the space.
Novel–My Vampire System–My Vampire System
|
OPCFW_CODE
|
Geometry column trimmed when cast to string in geopandas
I am loading a shapefile to GeoDataFrame using GeoPandas method read_file. I need to apply some replacement modifications on a column with geometry data. To do this I am casting this column as string. Without casting executing .replace is causing an error TypeError: expected string or bytes-like object. However, this operation leads to trimming of original data in the geometry column. Below is an example for differences in one cell:
Column GEOMETRY from Shapefile loaded to GeoDataFrame:
LINESTRING (13.90327032848085764 46.61940531353186401, 13.90327032848085587 46.61940531353186401)
Column GEOMETRY from GeoDataFrame converted to string:
LINESTRING (13.90327032848086 46.61940531353186, 13.90327032848086 46.61940531353186)
And my code to convert geometry type to string type is:
geodataframe['geometry'] = geodataframe.geometry.astype(str)
In geometry column I can have lines and multilines with a variable number of XY pairs. Above was just a simple example.
Does anybody know how to convert it without unwanted rounding?
What are the versions of Python and GeoPandas you are using?
Python: 3.7.5. Pandas: 0.25.3. Geopandas: 0.6.1. I am running it on Anaconda.
Please try these: geodataframe['geometry'] = geodataframe.geometry.apply(str) or geodataframe['geometry'] = geodataframe.geometry.astype(basestring)
@Harsha apply(str) did not help. Second option is not accepted (data type 'basestring' not understood)
@zwornik thank you. The second option was for python 2.7, sorry.
@zwornik could you please try the statement. geodataframe['geometry'] = geodataframe.geometry.astype('float64')
@Harsha TypeError: float() argument must be a string or a number, not 'LineString'.
@zwornik I think I figured out the issue. Pandas is unable to convert multiple 'objects' in linestring into 1 string.You need to either 1) create 4 new columns to hold the 4 different coordinates or 2) merge all 4 coordinates (in a separate function) as a str object and add to the geometry column. I would recommend the first option since it offers more flexibility.
@Harsha This will not work in my case. In Geometry column I can have Lines and Multilines with variable number of XY pairs. Above was just simple example. So I cannot have fixed number of new columns.
@zwornik understandable. I do not have a solution for this. I will be closely following this question!
If you want string representation of your geometry you should use WKT. Conversion of shapely geometries to string would not work using astype.
Using GeoPandas 0.9+:
geodataframe['wkt'] = geodataframe.geometry.to_wkt()
Using older versions:
geodataframe['wkt'] = geodataframe.geometry.apply(lambda g: g.wkt)
This will give you new columns of string (WKT) representation of your geometries. What you see normally in you geometry column is just a representation of shapely geometry.
See also my comment to Georgy answer.
IIUC, you won't be able to have more than 16 decimal digits. Using str(geometry) or geometry.wkt (as proposed in another answer, which in fact are the same thing) will always trim the result to the total of 16 digits:
>>> from shapely.geometry import Point
>>> point = Point(0,<PHONE_NUMBER>.1234567890123456789)
>>> point.wkt
'POINT (0<PHONE_NUMBER>.123457)'
>>> str(point)
'POINT (0<PHONE_NUMBER>.123457)'
You could use shapely.wkt.dumps to always get 16 decimal digits irregardles of the total number of digits:
>>> from shapely.wkt import dumps
>>> dumps(point)
'POINT (0.0000000000000000<PHONE_NUMBER>.1234567165374756)'
but, as you can see, it still loses some data at the end.
So, the only thing you can do is to accept the fact that you will be losing some data, and deal with it properly later, as, for example, here: How to deal with rounding errors in Shapely.
In your case when you simply want to discard this kind of "faulty" lines that due to precision shrink to zero, you could use is_valid:
>>> from shapely.wkt import loads
>>> line = loads('LINESTRING (13.90327032848086 46.61940531353186, 13.90327032848086 46.61940531353186)')
>>> line.is_valid
False
Rounding leads to an issue (like in above example I gave) where very short line is represented by two exactly the same XY (kind of point not line). I will need to then clean up such faulty lines. Do you possibly know how this can be done on operating on Geometry type not casting it to string/list and do some further cleanup?
My Geometry column is of type MULTILINESTRING. Oryginally was LINESTRING and MULTILINESTRING, but that mixture was not accepted when loading data to PostGIS. So there are cases inside MULTILINESTRING when on Line can be valid and other not. I have skipped "wkt.loads" step because with ".geometry.apply(wkt.loads)" I got AttributeError: 'MultiLineString' object has no attribute 'encode'. With "wkt.loads(geodataframe.geometry)" I got AttributeError: 'GeoSeries' object has no attribute 'encode'. I tried this: [x if x.is_valid else np.nan for x in geodataframe['geometry']]. Still invalid line.
@zwornik Can you try geodataframe = geodataframe[geodataframe.geometry.apply(lambda g: g.is_valid)]?
I have tried with apply lambda within list, but it set values in Geometry column equal to df Index or GID column.
I wonder if there is some sort of validate/celanup action possible on PostGIS (after uploading GeoDataFrame) to remove such zero-length lines. Or apply rounding on GDF and then perform "is_valid" action.
After "is_valid" applied I see this (print(df.loc[[27]])) in GDF :(13.90327 46.61941, 13.90327 46.61941). I have then dumped GDF to SHP. In SHP is OK: (13.90327032848085764 46.61940531353186401, 13.90327032848085587 46.61940531353186401). In PostGIS not: (13.90327032848085942 46.6194053135318569, 13.90327032848085942 46.6194053135318569)
Could you update the question with some example data and the code that you are trying so that me or someone else could reproduce the issue?
https://gis.stackexchange.com/questions/219836/removing-unwanted-linestrings-from-multilinestring-in-postgis - exactly my problem and solution for it :) Though it is cleanup after loading to PostGIS and I prefer to do it before. But it seems that error caused by rounding happens during exporting GDF to PostGIS.
|
STACK_EXCHANGE
|
Give yourself a head-start by seeing the big picture.
There are over 10 Computer Vision objectives you can solve with AI. However, in most tutorials only the first 4 are talked about, and the rest are often overlooked. However, without all 10 of them, many emerging technologies such as facial recognition, AI powered security cameras, AI powered medical diagnosis, as well as Tesla’s Full Self Driving feature, wouldn’t be possible today.
In this article, we will start from the most basic types of computer vision and we will see why we need other types to have more real life functionalities step by step. If this is your first encounter with Computer Vision or Artificial Intelligence, do not worry, I will do my best to keep things simple and everthing will come together as you keep reading. Some of the concepts might be alienating at first look, especially at the beginning. That’s why I try to keep my articles narrowed down and as simple as possible, while still capturing the big picture. For starters, let’s first see what we mean by computer vision and why we need it.
Computer Vision, in a nutshell, tries to extract meaningful information from images and videos, using computers, in an automated way. This way, we can leverage cameras and computers to come together and do stuff that would otherwise require a person, or a team to work on manually. Although this may be possible at times, there are many times where having a human presence would be impractical. By leveraging computer vision, we can enable technologies that would otherwise be impossible, such as self driving cars, to make our lives better, safer, happier, while also protecting our privacy.
An image classifier looking at the image above, would probably tell that there is an apple, a cup, a laptop, a chair and maybe a table in this image. It would also give you a confidence score on how sure it is about its predictions. However, its knowledge about the image would stop there. It wouldn’t be able to tell you how many cups there are, how big the apple is, and what the position of the items is.
Image classification is the simplest type of computer vision you can perform. Therefore, if you are just getting started with machine learning, I actually recommend getting started with this one. With image classification, the main objective is to classify the image into one or multiple categories.
There are two main kinds of image classification. The first one is binary classification and the second is multi-class classification. With binary classification, you can check for a single class of object for the given image and get a result based on whether you have that object in your image. For example, you can achieve superhuman performance in detecting skin cancer in humans by training an AI on both images that have skin cancer and images that do not have skin cancer.
If you are interested in learning more about image classification and want to interact with an image classification model yourself, you can actually get a live demo by playing Pacman with your webcam with the link below:
Object Detection is the logical next step in computer vision from image classification. With object detection, you can detect what classes you have in the image, and where you have them in the image as well. The most common approach here is to find that class in the image and localize that object with a bounding box. If you are interested and want to get a practical demo on object detection, you can download the free mobile app for Android or iOS to see a very popular object detection model called YOLOv5 in action. You can also search the app with the name “iDetection”.
With semantic segmentation, you don’t just detect what classes you have in the image as you did with image classification, or you don’t just draw a rough bounding box to say where the object is, but instead, you classify every pixel in the image to determine what objects it contains.
1. Top 5 Open-Source Machine Learning Recommender System Projects With Resources
2. Deep Learning in Self-Driving Cars
3. Generalization Technique for ML models
4. Why You Should Ditch Your In-House Training Data Tools (And Avoid Building Your Own)
Instance Segmentation, in a nutshell, can classify the objects in the image at a pixel level, like the Semantics Segmentation does, but it can also differentiate different instances of that class. Meaning that if you have cars parked next to each other, if you have semantic segmentation, you can tell that there is a big blob of cars, but with instance segmentation, you can tell that there are 5 distinct cars, and this will probably change what you can do with that information.
Panoptic Segmentation, in a nutshell is a combination of Semantic Segmentation and Instance Segmentation. That’s why it is the most powerful one until now. With Panoptic Segmentation, you have pixel level classification capabilities combined with the ability to separate different instances of that class.
If you want to get a more in depth understanding about the distinction between Semantic vs Instance vs Panoptic Segmentation, you can find the following article helpful, where you will discover what makes the main differences between them from the point of view of a self driving car:
Keypoint detection is essentially detecting key points in images to reveal more detail about a class. The two most common keypoint detection areas are body keypoint detection and facial keypoint detection.
Pose estimation, in a nutshell allows you to detect what pose people have in a given image, which usually includes where the head, eyes, nose, arms, shoulders, hands, legs are in an image. This can be done for a single person or multiple people depending on your needs. You can get a demo of it here:
Also you can see an implementation of this with another live demo here:
Similar pose estimation here is a facial landmark detector that can detect features more specifically on your face.
You can also try the live demo with a game:
Person segmentation is the logical next step from Pose Estimation. On top of knowing where the person roughly is, now you have close to pixel level classification on where exactly the person is as well as the pose of that person. You can try it yourself with the demo below:
You can also see an open source project by Facebook AI Research Team called Detectron 2. It can implement everything we have seen until now, including: object detection with bounding boxes, panoptic segmentation, pose estimation and body segmentation simultaneously. Moreover, you can build AI based applications using Detectron 2. You can also see an example of how it all looks together in the example below:
You can also estimate the 3D depth of the objects and the scenes with this neural network. You can check out the Google Colab example of a machine learning model called MiDas: You can run the code in your browser and see the results for yourself with the following Google Colab link.
Image captioning is pretty self descriptive. When you give the neural network an image, it creates a caption for you describing the image. One thing I want you to notice is that, compared to all others until now, this one is not just a computer vision task, but it is also a NLP (Natural Language Processing) task.
3D object reconstruction is about extracting 3D objects from 2D images. Although this can be done in a variety of ways on various objects, it is very much of a developing field. One of the most successful papers on 3D human digitization is called PiFuHD and you can get a demo of it with your own images with this link:
I hope you got some value out of this article. If you have any questions, do not hesitate to leave them as a comment and I will get back to you with an answer as soon as possible.
If you want to have a deeper dive into the 3 types of image segmentation that we have seen in this article, you can check out my article on Semantic Segmentation, Instance Segmentation, and Panoptic Segmentation below:
|
OPCFW_CODE
|
Quote: "No offense, but I noticed you have very poorly commented code..."
Haha... yeah. But actually the code is pretty well commented now.
I just usually write some code, and then I comment it once it does what I want, then I code some more, then I comment it.
Maybe that's a bad habit...
Quote: "The game mechanics are too jittery."
Not sure what you mean here... is it lagging?
Quote: "1. smooth camera movement
2. harpoon acceleration (starts slow, keeps getting faster)
3. balls bounce off each other"
1. Not sure about camera movement, since the screen doesn't move.
2. I'll try that out, and see how it looks/feels ingame.
3. I actually thought about that. They don't collide in the original game, but that doesn't necessarily mean mine shouldn't.
Quote: "-Harpoon angle could be variable? Maybe make it possible to determine the angle you can fire it at."
Not sure if this is what I want, but I'll think about it. Part of the "challenge" is to only be able to fire upwards.
Quote: "-Fancy explosions? You're free to use my library for that, it's completely free : TheComet's Library."
At some point I was going to add like a pop-animation when you burst a bubble. Is the library something with particles?
Quote: "-Triple Shot Power Up : Lets you fire 3 harpoons at once"
One of the power-ups planned is that you can fire more than one harpoon, but you have to fire them individually. Is that what you meant? Or did you think like 3 harpoons at an angle to each other?
Quote: "-Fast Power Up : Harpoon is much faster"
Good idea... that's one I didn't think of.
Quote: "-Slow Motion Power Up : Slow down all of the balls"
Quote: "-Ballz of Steel!
-Ballz of Fury!
-I suck with names, don't I?"
I do too... as you can see by the working title...
Quote: "There should be more player feedback when they die!"
The way you lose lives is temporary atm. I want to have it kind of like Pang, where you have some lives and if you get hit you die and the level resets. Does this seem good/bad. (evil?)
PS. Sorry if I sounded dismissive at some of the suggestions, that was not my intention. I appreciate all feedback.
The thing is I actually have a pretty good plan in my head.
There is going to be stages/levels like in the original game, and the graphics are going to be much nicer.
There is going to be a life-indicator of some sort in the upper left corner, maybe some hearts or small icons of the Hero.
In the upper right corner I would put the Timer.
There are several planned power-ups already.
I'm going to create a level-editor where you can place various blocks that the balls can bounce off, and some that you can shoot/destroy.
|
OPCFW_CODE
|
Developers and operators can use the Service Level Objective (SLO) graph to see how VMware Secure App IX Service Autoscaler impacts the stated microservice objectives.
Developers and operators can use VMware Secure App IX Service Autoscaler to ensure consistent application capacity and user experience across multiple clouds and platforms. For more information, see the Service Autoscaling with VMware Secure App IX User’s Guide.
Examples of Use Case 2
You can create and configure an SLO from the VMware Secure App IX user interface, as shown in the following screenshot.
In the screenshot, A shows a monitored SLO policy, and B shows a p90 latency SLI. Ninety percent of the requests are expected to be faster than 120 milliseconds. However, if the p90 latency is higher than 120 milliseconds, the slow response times will cut into the allowed error budget. The error budget is set (D) to allow the SLI conditions to be unmet 2 percent of the time. D shows how much time has been allotted to the error budget for a month.
The following screenshot shows the SLO in the VMware Secure App IX console user interface. We have labeled the diagram with the letters A-K. Each highlighted area is described below.
A. SLO warnings are in yellow. An SLO is in warning when the SLIs exceed their threshold values and the error budget is being depleted.
B. SLO violations are in red. Red indicates that the error budget has been fully depleted while the process continues to violate the SLO.
C. A green line indicates a healthy SLO.
D. The error budget is being depleted quickly, but it’s still in the positive.
E. The budget has been entirely depleted and has gone into the negative.
F. Latencies are increasing.
G. Latencies are back down to low levels.
H. Autoscaling has been disabled, and the number of service instances remain constant.
I. The number of service instances have increased because autoscaling has been enabled.
J. Requests are coming in at a constant rate.
The previous screenshot shows the SLO in the VMware Secure App IX console user interface. At the top of the screen is a green line. The green line is obscured by a thicker yellow line (A) when the SLIs exceed their threshold values and the error budget is being depleted (D). The sections covered with red lines (B) indicate that the error budget has been fully depleted while the process continues to violate the SLO. Green (C) indicates a healthy SLO.
The cause of the SLO violation was the high number of requests, which is sustained through most of the example (J). Latencies increase (F) because the service cannot respond quick enough to keep up with the traffic. During the time of high latencies, the number of service instances remained low (H) and could not keep up with demand.
When autoscaling is enabled, you can see the service instance count increase (I), leading to lower latencies (G) and a healthy SLO (C). Note that the error budget will be fully replenished on the first of every month.
|
OPCFW_CODE
|
My new profile picture.
(For the time being anyway — Windows 8 helpfully swapped out the one that I had on the WordPress site for the default that came with the Windows 8 program when I changed my WP administrator email address)
Yes, I am once again in the throes of learning a new operating system, getting a new email address and client, trying to figure out where I need to go to replace the old ones… and soon I will be setting up a new website.
The web hosting service of my old site, addr.com, has gone MIA. No chat, no phone, and so far no response to my email about why I can no longer log in and why I am not getting any emails through the old kmhancock email address. That account, as I mentioned in the last post, was set up over ten years ago as part of my website which had been hosted by addr.com. I never had one problem with them in all that time, but lately I’ve not been very active when it comes to my website. Thus it never occurred to me the credit card they had on file had gone out of date. At least that’s what I’m thinking the problem is. Or at least part of the problem.
Since it never occurred to me the card they had, had expired, it never occurred to me to contact them about it. You’d think they would contact me, but they did not.
Day before yesterday I spent a good deal of time researching web hosting sites and reading reviews. I mentioned previously reading lots of scary posts on addr.com, but they were all on one site that seemed to be promoting another hosting service so I went looking for reviews that might be a bit more impartial and recent.
Well, those were not good, either, though this time a number of them were written by folks like me who had been with the service for ten years or more and had mostly been very happy with it. But within the last five years, everything seemed to go downhill. Problems with disappearing emails were mentioned, also the chat ALWAYS being down, and the phones ALWAYS being too busy to answer so send an email, and the email almost never being answered. On the rare occasions someone did manage to get through on the phones they were routed to a person in India who barely spoke English and didn’t even know the system.
I wonder if they might be out of business. Wouldn’t my website have gone down entirely, though, if that were the case? As of now it’s still up, though you can’t use the email any more… (even before I changed it)
Anyway, I’ve signed up with a new service and will soon be working on a new website (the old one was getting stale anyway) that will be integrated with this blog — something I’ve wanted to do for some time.
At the moment though, learning my way around Windows 8 is quite enough for this old and shrinking brain of mine to handle. 🙂
|
OPCFW_CODE
|
New approach to Terms and Defintions?
Context
My understanding is:
moving to HTML publication is an opportunity to improve the design of SMPTE standards
it is not essential to follow all ISO directives or SMPTE AG 16
Therefore we don't need to replicate the traditional approach to Terms and Definitions (see #117)
Idea for consideration
Would an approach like in W3C standards (created with Bikeshed or ReSpec) be helpful?
So, perhaps we would have:
A Terms and Definitions section in which all the author writes is a list of terms that are defined in other documents but are used in this document
Terms are listed with sources but no definitions or notes
I assume we wouldn't implement all the auto cross-document linking that Bikeshed and ReSpec do...
All terms defined by the document itself are defined in-line
Therefore there is no need for special Terms and Definitions syntax (and so no need for "notes to entry" etc)
Of course the author can always create a custom "definitions important for my topic" section to put these definitions in, should that be sensible for the document being authored
When rendered, the Terms and Definitions section is auto-populated with the list of externally defined terms as well as a list of all the terms (as links) defined in the document itself
So, what is called an "index" in a W3C standard I believe e.g. per https://www.w3.org/TR/webvtt/#index
We might want to call it an "index" in SMPTE standards as well, and perhaps move it to the end of the document
This seems helpful because at the moment (in the SMPTE html-pub as currently implemented):
The Terms and Definitions section lists only some of the terms and definitions... (i.e. those defined in-line are not listed)
How is the author to decide between defining a term in-line and defining it in the Terms and Definitions section?
Authors are able to import all the terms from an external document by just listing the document in the Terms and Definitions section -- this seems sub-optimal because there can be accidental clashes in terms, and individual terms cannot be marked-up when used in the body prose.
It's a little bit like a Python from ... import * situation
A few comments to start:
There should be a separate Terms and Definitions clause. Especially in documents with many terms that require definition, it provides an easy-to-access reference, and would eliminate the need to search for the first mention of the term in the document to locate the definition (which could change as the document is written, and add another task for the editor, who would have to relocate the definition). It would also be easier to detect defects, such as circular definitions. I am also a proponent of hierarchical Terms and Definitions where appropriate.
I agree that a definition can consist of simply a reference to another source. We can use the current ISO formatting.
In my experience, the loose guide for deciding if a term is defined in the Terms and Definitions clause vs. in line, has been that if the term was used in perhaps only one clause and only once or a few times, then it would be OK to define in line. I don't agree. If a term requires definition, then it should be in the Terms and Definitions. A larger question is which terms should be defined? I encounter many levels of detail. For example, while many documents mention bits and bytes, I believe that I have come across only one that defined both. I questioned why it was needed, and as it turns out, a byte was not always 8 bits. In this document, it was. That leads to another discussion: defining the use of mathematical and logical terms and operators. A few documents include them; most do not. Should we provide a reference, or might there be differences in how some of the operators might be used?
I would retain the ability to add Notes to entry. They can help to add more information about a definition.
Interesting point about automatic importation of terms. If a document is used as a normative reference, then importing the terms should not cause a clash. If it does, then it must be addressed. And if multiple normative references are cited, then there might be multiple definitions of the same term. Again, requires management.
|
GITHUB_ARCHIVE
|
What better way to keep busy while having a bacterial throat and nose infection than to play around with an old programming language? So I created a simple screensaver today that should give you a warm old school computer feeling while you are not busy working at the computer. The program should run fine on the C64 and with minimal modifications on other computers supporting Commodore/Microsoft Basic.
Please find the ASCII and PRG file of the source code at https://github.com/jklingel/Screensaver. There is also a readme file for more explanations on how the code works. Feel free to modify and improve!
If you are technically interested in how the software works… The heart of the program (of version 1.2) looks like this, using an attempt to move to the right as an example:
- If you reached the rightmost screen column already, remember that moving right is not possible and return:
IF (LO+1)/40=INT((LO+1)/40)THEN MV(4)=1:RETURN
- If the next cell to the right has been already visited, remember that moving right is not possible and return:
IF PEEK(S+LO+1)=CH THEN MV(4)=1:RETURN
- If you are moving into a dead end, remember that moving right is not possible and return. A dead end is defined by having a visited cell up, down, and two steps ahead. It almost feels like machine learning 🙂 This command line needed the spaces removed as I reached the maximum of 80 characters. I added them here for readability:
IF PEEK(S+LO-39)=CH AND PEEK(S+LO+41)=CH AND PEEK(S+LO+2) THEN MV(4)=1:RETURN
- Set the location pointer one step to the right, print the character in the screen cell, increment the „cells visited“ counter, delete the „I cannot move“ array, and return:
FOR I=1TO4:MV(I)=0:NEXT I
Eine Antwort auf „Screensaver in Commodore BASIC V2.0“
I just committed a new version 1.1 to GitHub that contains valuable advice from Alex Johnson of the Facebook Commodore 64/128 Programming group. Basically he said:
1) Don´t use the command LET
2) Eliminate spaces where possible (I left many in so that the code is more readable for me)
3) Keep the variable names only one or two characters long and explain them somewhere (I did so at the end of the program)
4) The array named „V“ of version 1.0 is not needed, as one can peek the content of a screen memory cell instead (bummer, this was a beginner mistake I made).
|
OPCFW_CODE
|
Protocol for Data Acquisition for Fast Time Plots
Kal Dabous, David M. Kline, Brian J. Kramper, Therese M. Watts
.5cm This document itemizes the parameters which we intend to support with the new Epicure Fast Time Plot (FTP) facility, and proposes a protocol for acquiring the information necessary to satisfy them.
FTP Data Collection
The present data acquisition subroutines da_add_request_name and da_add_request_di specify a single FTD which is the frequency of data collection. In order to accommodate FTP requests, similar subroutines would need to be implemented which would have the same arguments as the two previously mentioned subroutines, plus two additional arguments in the form of FTDs, specifying the start and stop times for the fast data acquisition. The maximum data collection rate obtained from the database must be non-zero (device is able to be fast time plotted) and the frequency of data collection specified by the user must be less than or equal to the maximum rate specified in the database for the particular device. The Data Acquisition Requestor (DAR) is the logical place to check that the data collection frequency is valid. Initially there would be no minimum data collection frequency. The function da_get_data will still be used, however the data returned is an array of data. The data size is calculated by the total points and the size specified in the database. See figure 1 for a pictorial representation.
To achieve the data acquisition rates required by the FTP facility, it is necessary to combine multiple data points in a single data reply at the DAE level. Since DAEs know nothing about time, a translation from frequency to number of buffered data points must be done. Although this translation could take place at the DAE or TIMER level, it is easily calculated by DAR when it verifies the frequency requested by the user. Because the update rate for the user task will be approximately 2 Hz, the number of data points returned in a buffer per device is the data collection frequency divided by 2, e.g., a collection frequency of 30 Hz yields 15 data points per device per data return buffer.
The actual data acquisition will be accomplished using three data buffers and a control buffer, created by the Data Acquisition Server (DAS). New type codes will need to be implemented to differentiate this type of data acquisition from other slower types, both at the DAR-to-DAS and DAS-to-DAE interface levels. A few definitions are necessary here to clarify the rest of the discussion.
When DAS receives an FTP list from DAR, it will create a prototype data buffer in its memory with the following initial conditions: cycle and available bit set, first time and suspend bit clear, total points set to number indicated by DAR, last point cleared, and the control handle cleared (see Figure 1). DAS will then create a prototype control buffer in its memory with the following initial conditions: cycle, first time, suspended, and available bit clear, max and total buffers set to three (3), last buffer cleared, and the data buffer table index is set to zero (0). DAS now makes three sets of calls to QVI_US_PUT (creating three data buffers in common memory) and places the returned block handle into the appropriate control buffer data buffer handle fields.
A final call is made to QVI_US_PUT to create a control buffer in common memory and the returned handle is saved.
The three data buffers are first directed to the TIMER (see flow chart in Figure 3). These data buffers are not processed until the control buffer which coordinates them is received by the TIMER. When the TIMER receives the control buffer (which contains handles to each of the three data buffers), the control buffer is placed in each of the data buffers. The TIMER then takes the first buffer, clears the available bit and either sends the buffer immediately to the DAE (if the current time lies between the start and stop times for the list) or delays it until the next occurrence of the start time. The data buffer now goes back and forth between the TIMER and the DAE until the number of points gathered matches the number of points to be buffered per data return. This full buffer is then inserted on the DAS return queue, and the next buffer on the control list is used. This process continues until the current time lies between the stop time and the start time, or the request is deleted. A data buffer becomes available again as soon as DAS has copied the contents of the buffer into its memory using QVI_US_GET. Although it is not anticipated that more than three buffers should be needed, it is possible that this process could get suspended for a period of time, but should resume as soon as a buffer is available. If under operating conditions it appears that more than three data buffers are required, the scheme is easily extended by having DAS allocate more buffers and storing their pointers in the control buffer.
Modifications, Additions and Suggestions
A new request code, DAR_C_FAST_READ, would need to be implemented for the DAR-to-DAS interface, and the request list would need to incorporate the FTDs for the start and stop time of the fast data acquisition.
More radical modifications would need to be made to the present definition of the user portion of the UMB. Two new type codes would need to be implemented, UMB_C_FTP_CTL for the control buffers and UMB_C_FTP_DAT for the data buffers. The start and stop FTDs and the control buffer handle would need to be added. In addition the data buffers would need short integers for the number of points to be collected for the buffer and the last point collected. The control buffer really has no need for the frequency DAP (presently DAP0), and an alternate structure consisting of an array of pointers to the three data buffers could be substituted.
|
OPCFW_CODE
|
VOICE command broken in version 3.1.3
XC=BASIC compiler version v3.1.3 (Mar 15 2023)
(On Windows 10 64bit)
Target: C64
A single-line program like this:
VOICE 1 ON
generates a runtime error from XCB:
C:\Users\xxx\Software\XCB_fresh\bin\Windows\xcbasic3 -dC:\Users\xxx\Software\XCB_fresh\bin\Windows\dasm "sidtest.bas" sidtest.prg (in directory: C:\Users\xxx\Software\xc-basic3-v3.1\_progs\)
core.exception.SwitchError@source\compiler\number.d(26): No appropriate switch clause found
----------------
0x00007FF6C822ABB7
0x00007FF6C7D47D8B
0x00007FF6C7D47D24
0x00007FF6C7FF1A70
0x00007FF6C80CBE0B
0x00007FF6C7FEE254
0x00007FF6C7FEF8CA
0x00007FF6C7FEF942
0x00007FF6C7FEF942
0x00007FF6C7FEFA58
0x00007FF6C7FEF6EB
0x00007FF6C7D27A26
0x00007FF6C8254DC3
0x00007FF6C8254BBF
0x00007FF6C8254CDF
0x00007FF6C8254BBF
0x00007FF6C8254990
0x00007FF6C822AA79
0x00007FF6C7D2D942
0x00007FF6C831C1DC
0x00007FFC54207614 in BaseThreadInitThunk
0x00007FFC546426A1 in RtlUserThreadStart
I'm using a fresh install to make sure that is not because I had messed up with the _lib files or something like that.
I also tried the Jun 4 2023 build: in that case, nothing is printed, but there's still an error condition (the compiler exits with an error code).
(Going back to version 3.1.2 which seems to work fine, as I need to put my game out this week :)
Hi @JJFlash-IT
Thanks. It is failing because 3.1.13 was built with 3.2 grammar. I'll fix that.
Let me know when you release your game.
Cheers
I had a feeling it had something to do with the beta version :)
Thank you for the new version!
I'm sticking to 3.1.2 , though, till I put the game out (oh and don't worry, I'll tell the world when it comes out), then I'll find some time to test 3.1.4.
By the way, in order to go back to 3.1.2, simply downloading the corresponding release was not enough: the xc-basic.exe file was actually 3.1 .1 and the compiler threw weird errors once I compiled my program. I had to go to the commit history and "extract" the .exe and .pdb files from February 2023, then things worked again.
This isn't the first time I see an XCB release that is broken like I described, Unfortunately I'm ignorant about Git and everything around it, otherwise I'd be glad to help diagnosing why that happened more than once.
|
GITHUB_ARCHIVE
|
- Скачать прайс
- О компании
Hello, Recently i tried to install the driver for my Radeon 9550. After a few days i decided this wasn't going to be as easy as planned. I literally. Дима Билан(Виктор Белан при рождении), певец, поэт, композитор, Заслуженный и Народный. Agere Driver Downloads. To find the latest driver for your computer we recommend running our Free Driver Scan. Find out how to make your computer faster by running.
Drivers for AMD Radeon 5450, 5550, 5570, 5650, 5670, 5750, 5770, 5790, as AMD FirePro™ and Embedded graphics, please select your driver using the. Intel® product specifications, features and compatibility quick reference guide and code name decoder. Compare products including processors, desktop boards, server. Download AMD Drivers Software for Radeon, FirePro, APU, desktops and laptops. AMD Radeon™ RAMDisk is designed to work with any AMD or Intel-based platform with at least 512MB RAM and can be created using system RAM not already assigned
Intel® Server Board S5000XALR End of Life SSI-TEB Rack LGA771 Intel® Server Board S5000PSLROMBR End of Life SSI EEB (12" X 13") Pedestal LGA771 Intel® Server Board. ATI Radeon X1300 Free Driver Download. World s most popular driver download. Системные драйверы, драйверы для чипсетов и наборов системной логики от AMD (ATI), Intel, NVIDIA, Realtek. ATI Radeon X600 Free Driver Download. World s most popular driver download. Buy PowerColor Radeon 9550 DirectX 9 R9550 128MB 128MB 64-Bit DDR AGP 4X/8X Video Card with fast shipping and top-rated Chipset Manufacturer. AMD/ATI Radeon mobility drivers for your notebook and Microsoft Windows. VGA Driver Ati 8.33 supports numerous ATI and Radeon graphic cards. Click on the following links for the driver package readme info:./LH64A/Readme.txt. Get a low profile graphics card, perfect for small form factor or home theater PCs with ATI Radeon™ HD 5550 graphics.
Official AMD ATI Radeon 9550 Free Driver Download for Windows XP - 10- 2_legacy_xp32-64_wdm.exe. World's most popular driver download. Dell's new 15-inch gaming laptop with AMD’s Radeon RX 460 graphics. ATI Radeon X600 Free Driver Download. World's most popular driver download. Chip Number: RV610: Chip Description: ATI Radeon HD 2400 PRO PCI-E: Notes: Model: GigaByte GV-RX24P256H(P) - HD2400 delete this update appears to be abetter display. Buy ATI Radeon 9550 DirectX 9 100-437105 256MB 128-Bit DDR AGP 4X/8X Video Card with fast shipping and top-rated customer service.Once you know, you. Hello i have recently installed ubuntu 10.04 on my Windows PC but i am trying to install ati driver 9550 on it. i want to install them because. May 31, 2015 This package installs an updated version of the Microsoft Windows 7 device driver for any ATI Radeon video adapter that comes preinstalled. Me ajuda a encontra este driver alguem aew porfavor. Mar 22, 2005 ATI MOBILITY RADEON XPRESS 200; ATI RADEON 9000/9100 PRO IGP RADEON 9500 PRO / 9700 - Secondary; RADEON 9550; RADEON 9550 Windows 2000 SP 2; Windows Server 2003 x64 R2; Windows 2000.
|
OPCFW_CODE
|
[BUG] Performance Degradation With Combination of Long Notes, Several Themes, Split Panes
Overview
When using split panes, long notes, and some themes, responsiveness and performance degrades. This is most prominent with the Minimal/Things/Sanctum themes.
I figured this bug would be important because these themes are popular. I really like Codeblock Customizer but it's preventing me from using it since I use split panes frequently.
Reproduction
Took me forever to diagnose this and also make this reproducible on my computer. I'm running this on Windows 11, Obsidian 1.5.2
(might require a slower computer)
Create a new vault
Create a very long markdown file. See example:
markdown-tester.md. Can also copy-paste to make it longer.
IMPORTANT: Split the pane or drag a new file to split the window.
Show that performance is not degraded by highlighting lots of text quickly using the mouse.
Install Minimal Theme
Show that performance is not degraded by highlighting lots of text quickly again.
Install Codeblock Customizer and enable.
Performance should suffer when highlighting lots of text quickly with the mouse.
Extra details
Disable Codeblock Customizer
Performance should return to normal when highlighting lots of text quickly with the mouse.
Re-enable Codeblock Customizer. See the degraded performance again.
Un-split (close) the panes.
Performance should return to normal.
Some notes
I tried some other themes, performance seemed slightly worse but I couldn't tell as much, unlike with Minimal/Things/Sanctum themes.
Doing some inspector performance profiling, it seems to be related to events (i.e. keydown, mousedown)
Related to various updating functions in the app and maybe recalculating style.
No errors in console
Other plugins don't seem to have this problem with the themes
Disabling all settings in codeblock customizer doesn't seem to affect the performance issues.
Some things I'm not sure of
If it's a computer hardware or software-specific bug. Haven't tested it on another computer.
One time when toggling minimal theme and codeblock customizer, performance returned to normal with both on. I have no idea why and cannot reproduce it. May be remembering incorrectly.
After some more investigation it might be related to either CodeBlockHighlight.ts or the 3rd-party library pickr with moveable.js
Thanks for you detailed review. Unfortunately, in the last few weeks I was very busy, but I will definitely check out all open problems, and try to solve them during the holidays. I might get back to you, if I am having trouble reproducing something.
No problem, I totally understand, and appreciate you working on a free open-source project!
Just a heads up: I found two things, one in edit mode and multiple in css, which had an effect on performance. I will modify the css so it doesn't cause any problems. After that I will release the new version.
I just released a new version. Could you please recheck if the problem still persists or is it solved now? Thanks in advance!
Hi @mugiwara85, I did some quick testing and performance with split panes + Minimal Theme seems much improved! Thank you for your time and effort!
Hi @jeffchiou no problem. It was actually caused by 2-3 CSS selectors. There is still a little bug with split panes. I am working on It now.
|
GITHUB_ARCHIVE
|
Best security practices accessing/trading crypto on computer/mobile
I’ve read online with regards to crypto that, as hardware wallet users, we should treat all devices, computer, mobile phone, software, exchanges/wallets, etc. as compromised. That being said, I have obtained a hardware wallet and I am wanting to transfer assets on to it but i want to make sure i am always maintaining best security practices and also learning/using any new security information available. It has been sometime since I last accessed my wallet.
What is the best approach to interacting with computer, mobile phone, etc, when trading? Is it going to be a dedicated computer that is used only for crypto? And second that, is it going to be a persistent live usb drive? I am hoping to find any alternative to “get a dedicated crypto computer” possible, but if that is the single best advice I’ll do it. Barring that, what are some alternatives or other options perhaps from most ideal to least ideal for accessing/trading/reviewing crypto?
I’ve read online with regards to crypto that, as hardware wallet users, we should treat all devices, computer, mobile phone, software, exchanges/wallets, etc. as compromised.
Yes. This is the most important thing about Bitcoin.
You consider software compromised, so you only go for open source solutions and stay up to date.
You consider exchanges compromised, so you don't use them to avoid leaking privacy. You consider hardware wallet vendors compromised, so you buy them anonymously. You don't connect to untrusted web services (that is, invalid certificates and all of those things).
You consider network compromised, so you distrust ISPs and use tools built for anonymizing communications (distrusting participants in those networks).
You consider almost all nodes compromised, so you run your own validating full node, instead of blindly relying on others.
You consider core compromised, so you don't keep up with latest versions all the time in all the devices/wallets.
You consider core was compromised but not anymore, so you don't simply stay old.
You consider bitcoin itself is compromised, so you don't act like it isn't, you don't consider it the single valid solution to all of your problems, you'll see others do so, but don't do that.
Because you fighting all of that, Bitcoin stays decentralized, and stores value.
That being said, I have obtained a hardware wallet and I am wanting to transfer assets on to it but i want to make sure i am always maintaining best security practices and also learning/using any new security information available. It has been sometime since I last accessed my wallet.
...
What is the best approach to interacting with computer, mobile phone, etc, when trading?
Not to in the first place. Inactive wallets perform better long term. Mobile phones are essentially the opposite of privacy.
Is it going to be a dedicated computer that is used only for crypto?
If you can afford that, maybe. If it gets compromised, it was all for naught.
And second that, is it going to be a persistent live usb drive?
Persistent cyphered live usb drive sounds better?
I am hoping to find any alternative to “get a dedicated crypto computer” possible, but if that is the single best advice I’ll do it. Barring that, what are some alternatives or other options perhaps from most ideal to least ideal for accessing/trading/reviewing crypto?
Cold storage and air gapped private key access.
|
STACK_EXCHANGE
|
package dom
import (
"encoding/xml"
"github.com/ionous/errutil"
)
type Item interface{ Item() Item }
// ex: <value name=""><block type=""></block></value>
type Value struct {
Name string `xml:"name,attr"`
Input BlockInput `xml:",any,omitempty"`
}
func (it *Value) Item() Item { return it }
// ex: <statement name=""><block type=""></block></statement>
type Statement struct {
Name string `xml:"name,attr"`
Input BlockInput `xml:",any,omitempty"`
}
func (it *Statement) Item() Item { return it }
// ex. <field name="NUMBER">10</field>
type Field struct {
Name string `xml:"name,attr"`
Content string `xml:",innerxml"`
}
func (it *Field) Item() Item { return it }
type ItemList struct {
Items Items
}
type Items []Item
func (l *ItemList) Append(it Item) {
l.Items = append(l.Items, it)
}
func (l ItemList) MarshalXML(enc *xml.Encoder, _ xml.StartElement) (err error) {
for _, item := range l.Items {
switch item := item.(type) {
case *Value:
err = enc.EncodeElement(item, xml.StartElement{Name: names.value})
case *Statement:
err = enc.EncodeElement(item, xml.StartElement{Name: names.statement})
case *Field:
err = enc.EncodeElement(item, xml.StartElement{Name: names.field})
default:
err = errutil.New("unknown type", item)
}
}
return
}
// called multiple times for each tag matched by BlockList field
func (l *ItemList) UnmarshalXML(dec *xml.Decoder, start xml.StartElement) (err error) {
switch start.Name {
case names.value:
out := new(Value)
if e := dec.DecodeElement(out, &start); e != nil {
err = e
} else {
l.Items = append(l.Items, out)
}
case names.statement:
out := new(Statement)
if e := dec.DecodeElement(out, &start); e != nil {
err = e
} else {
l.Items = append(l.Items, out)
}
case names.field:
out := new(Field)
if e := dec.DecodeElement(out, &start); e != nil {
err = e
} else {
l.Items = append(l.Items, out)
}
default:
err = errutil.New("unknown element", start.Name.Local)
}
return
}
|
STACK_EDU
|
Adding Image Definitions via terminal
This is probably way out in left field, but I figured I’d ask.
I’m working on ways to streamline our image deployment across all our fog servers (16 servers in 16 different subnets). I’ve written a script on our original fog server (where all our images get created/uploaded) that deploys the image to each of the other servers via SCP.
My question is, is there a way to also “script” the adding of the image definition under image management through some command line magic? So, then when the image gets moved, it auto adds itself to the image management section?
Any help/discussion is appreciated.
Wow, thanks afmrick! I’ll definitely be checking this out. I just discovered “webmin” today that’s going to help me to more simply manage all my fog servers and easily transfer files between systems, etc. So All knowledge today is certainly a HUGE help! Most appreciated!
You can do whatever you’d like from the command line with mysql. This one will show you the names of all the images in the images table (without column names):
[CODE]mysql --batch -u fog -pmyFOGpassword -Dfog --skip-column-names -e “SELECT imageName FROM images;”[/CODE]
Here’s the same thing except it stores all the images names in the “image_names” variable which can be handy in a script:
image_names=$(mysql --batch -u fog -pmyFOGpassword -Dfog --skip-column-names -e “SELECT imageName FROM images;”)
echo “image_names = $image_names”[/CODE]
For general scripting help, I really like the Advanced Bash Scripting Guide at [url]http://tldp.org/LDP/abs/html/[/url]
I’m learning ubuntu scripting. I’m better in windows command prompt environments I have a script that’ll copy an image file or directory to all my fog servers with one command. I’ll play around see if I can make this SQL import happen via script as well.
You, sir, are my saving grace. Need a padawan?
The imageID field is auto-increment in the database for Fog 0.32. So you just don’t pass it a value when you do the insert.
Say that you wanted to export FogServer1: Image23 to FogServer2. The steps to follow would be:
On FogServer1, make a copy of the image folder or image file from /images.
On FogServer2, store the image file or image folder to your /images.
On FogServer1, export the data you need for the image definition. You can do this using phpmyadmin. Select the record from the images table, click the export button. Uncheck to export the structure, make sure to check to export the data. Remove the imageID field from the fields list and the value from the values list (remove the comma with the value).
On FogServer2, import the data into the Fog database, images table.
You can automate this with scripts if you are a scripting guy AND you do a lot of image copies to other fog servers. If not, it may be quicker to manually redefine the images in the webUI.
Yea, all our fog servers have different images across the board. We let each site customize their images as needed. If it is possible to export a single row, then that’s great cause that’s what I was aiming for. Export a single row from one fog server, and import it into another. Can you give some insight into adding the ID number or is this really complicated?
This will drop the images table on the other servers and replace it with the one you exported. I’m not sure if you are trying to sync the images across all servers, or if some servers have different images than others.
You may have have to export just the rows for the image definitions you want and insert them into the other servers images table, letting mySQL autogenerate the image ID, or scripting your insert to use the next available image ID number.
My FOG servers are independent of each other. If I run this command, will it MERGE the tables I exported with the current data tables, or will it completely overwrite the table that I import it to?
I know a little MYSQL but not enough to be considered competent
First off, are all your fog servers independent of each other? By this I mean, did you do a normal install on all 16 servers, or do you have 1 normal install and 15 storage node servers?
If they are all independent, then try this:
mysqldump the images table from the “main” fog server and restore it to the other fog servers.
Main Fog Server, run:
[CODE]#mysqldump -u root -p[password] fog images > images.sql[/CODE]
Other Fog Servers, run:
[CODE]#mysql -u root -p[password] fog < images.sql[/CODE]
|
OPCFW_CODE
|
The new features of iOS 6 from a user’s point of view are well documented and well reviewed so I don’t intend to go over the same ground here.
Having spent a large part of the last couple of weeks watching WWDC 2012 videos and playing with some of the new features, this is my list of features that are of most interest to us as iOS developers.
1. Activity View Controller
If you’ve upgraded your device to iOS 6 you’ve probably seen these in the standard iOS 6 apps. It’s a new consistent way to display what would have been displayed in the past using action sheets. It provides your gateway to standard system activities, like sending mail, posting to Facebook and printing, but you can also create your own activities.
2. Accessory Actions
One of the limitations of storyboards with iOS 5 was that it wasn’t possible to trigger a segue from an accessory button in a table view cell. iOS 6 adds an accessory action that means it’s no longer necessary to implement write code to make an accessory button trigger a segue.
iOS 6 introduces a flexible, powerful and downright confusing layout system to replace the springs and struts that we are familiar with from iOS 5 storyboards and, before that, XIBs. There are three WWDC videos on the topic, so we can’t complain about lack of documentation! I’ve not really got to grips with this yet, but it looks pretty useful.
Finally those four buttons on the summary page of an Xcode project do something useful! There’s no need to write any code to enable autorotation in your view controllers — it just works. The buttons have sensible defaults in new projects too, so you don’t even need to press them.
5. Collection Views
Think table view laid out as a grid and you will be pretty close to understanding a collection view. This one has huge potential for creating interesting user interfaces.
6. Cancellable Segues
In iOS 5, anything you wired up as a segue in a storyboard was going to fire when the source was invoked, e.g. a button pressed or a table view cell selected. With the new
shouldPerformSegueWithIdentifier:sender: method, you get a chance to veto the segue.
7. Deprecated viewDidUnload
We were always advised to be good citizens of iOS by removing objects in
viewDidUnload that we could recreate in
viewDidLoad. Apple have done some analysis of the benefit of this compared to the bugs it caused and decided that the benefit is tiny in the whole scheme of things. So there’s no need to feel guilty about ignoring the possibility of freeing up objects in
viewDidUnload any more — it’s never called!
8. Exit Segues
One of the hardest things to explain on Learning Tree’s Building iPhone® and iPad® Applications: Extended Features course is the concept of delegates. Delegation is still an important pattern in iOS and won’t go away anytime soon, but apps written for iOS 6 can simplify things like modal views by using exit segues. Instead of having to wire everything up with a delegate protocol, your done and cancel buttons can search back through the sequence of presented view controllers to find a method that dismisses them.
9. Modern Objective-C
I’ve already written a post covering features in the Xcode 4.4 release but it’s been extended in Xcode 4.5 and iOS 6 with simplified array and dictionary subscripting.
10. State Restoration
iOS 6 adds support for state restoration in your app so that it can carry on seamlessly even if it was killed while backgrounded. This is a feature that will not be obvious to most iOS 6 users but could really make your applications feel like they are ready to go all the time.
So iOS 6 provides some great features for developers as well as users. Just bear in mind that some of these will restrict your app to devices running iOS 6, so consider your target market before putting them to use.
|
OPCFW_CODE
|
import should = require('should')
import { Merkle } from '../src/merkle'
describe('Merkle', () => {
it('should satisfy this basic API', () => {
const merkle = new Merkle()
should.exist(merkle)
should.exist(merkle)
})
describe('hash', () => {
it('should hash these buffers', () => {
const merkle1 = new Merkle(undefined, Buffer.alloc(0))
const merkle2 = new Merkle(undefined, Buffer.alloc(0))
const merkle = new Merkle(undefined, undefined, merkle1, merkle2)
let hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
hashBuf.toString('hex').should.equal('352b71f195e85adbaefdcd6d7380d87067865d9a17c44d38982bb8a40bd0b393')
// and a second time ...
hashBuf = merkle.hash()
hashBuf.toString('hex').should.equal('352b71f195e85adbaefdcd6d7380d87067865d9a17c44d38982bb8a40bd0b393')
})
it('should hash this buffer', () => {
const merkle = new Merkle(undefined, Buffer.alloc(0))
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
hashBuf.toString('hex').should.equal('5df6e0e2761359d30a8275058e299fcc0381534545f55cf43e41983f5d4c9456')
})
})
describe('#fromBuffers', () => {
it('should find this merkle root from three buffers', () => {
const bufs = [Buffer.alloc(0), Buffer.alloc(0), Buffer.alloc(0)]
const merkle = new Merkle().fromBuffers(bufs)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
hashBuf.toString('hex').should.equal('647fedb4d19e11915076dd60fa72a8e03eb33f6dec87a4f0662b0c1f378a81cb')
merkle.leavesNum().should.equal(4)
})
it('should find this merkle root from four buffers', () => {
const bufs = [Buffer.alloc(0), Buffer.alloc(0), Buffer.alloc(0), Buffer.alloc(0)]
const merkle = new Merkle().fromBuffers(bufs)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
hashBuf.toString('hex').should.equal('647fedb4d19e11915076dd60fa72a8e03eb33f6dec87a4f0662b0c1f378a81cb')
merkle.leavesNum().should.equal(4)
})
it('should find this merkle root from 9 buffers', () => {
const bufs = []
for (let i = 0; i < 9; i++) {
bufs[i] = Buffer.alloc(0)
}
const merkle = new Merkle().fromBuffers(bufs)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
hashBuf.toString('hex').should.equal('9f187f4339d07e1963d404f31d28e4557cd72a320085d188f26c943fc604281e')
merkle.leavesNum().should.equal(16)
})
})
describe('@fromBuffers', () => {
it('should find this merkle root from three buffers', () => {
const bufs = [Buffer.alloc(0), Buffer.alloc(0), Buffer.alloc(0)]
const merkle = Merkle.fromBuffers(bufs)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
hashBuf.toString('hex').should.equal('647fedb4d19e11915076dd60fa72a8e03eb33f6dec87a4f0662b0c1f378a81cb')
merkle.leavesNum().should.equal(4)
})
it('should find this merkle root from four buffers', () => {
const bufs = [Buffer.alloc(0), Buffer.alloc(0), Buffer.alloc(0), Buffer.alloc(0)]
const merkle = Merkle.fromBuffers(bufs)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
hashBuf.toString('hex').should.equal('647fedb4d19e11915076dd60fa72a8e03eb33f6dec87a4f0662b0c1f378a81cb')
merkle.leavesNum().should.equal(4)
})
it('should find this merkle root from 9 buffers', () => {
const bufs = []
for (let i = 0; i < 9; i++) {
bufs[i] = Buffer.alloc(0)
}
const merkle = Merkle.fromBuffers(bufs)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
hashBuf.toString('hex').should.equal('9f187f4339d07e1963d404f31d28e4557cd72a320085d188f26c943fc604281e')
merkle.leavesNum().should.equal(16)
})
})
describe('#fromBufferArrays', () => {
it('should find this merkle root from two buffers', () => {
const bufs1 = [Buffer.alloc(0)]
const bufs2 = [Buffer.alloc(0)]
const merkle = new Merkle().fromBufferArrays(bufs1, bufs2)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
})
it('should find this merkle root from four buffers', () => {
const bufs1 = [Buffer.alloc(0), Buffer.alloc(0)]
const bufs2 = [Buffer.alloc(0), Buffer.alloc(0)]
const merkle = new Merkle().fromBufferArrays(bufs1, bufs2)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
})
})
describe('@fromBufferArrays', () => {
it('should find this merkle root from two buffers', () => {
const bufs1 = [Buffer.alloc(0)]
const bufs2 = [Buffer.alloc(0)]
const merkle = Merkle.fromBufferArrays(bufs1, bufs2)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
})
it('should find this merkle root from four buffers', () => {
const bufs1 = [Buffer.alloc(0), Buffer.alloc(0)]
const bufs2 = [Buffer.alloc(0), Buffer.alloc(0)]
const merkle = Merkle.fromBufferArrays(bufs1, bufs2)
const hashBuf = merkle.hash()
hashBuf.length.should.equal(32)
})
})
})
|
STACK_EDU
|
It's awesome ! It works just perfect with Nexus 5. I had an issue with realtek hda on my laptop. Microphone didn't work because manufacturer didn't provide new drivers for Windows 10. This application solved my problem. It sounds better and the most clearly than any microphones I used ! Thank you guys so much !
Don't get why people are down rating This app does exactly what it says and works perfect with my PC. Must for anyone who plays a lot of online games on the PC but don't have headphones with mic. I use this for DotA 2 and it's flawless. Can't use speakers since it's very sensitive and picks up everything. Also changing the call from mic to camcorder helped me get better clarity on the voice.
Doesn't Work With My Hardware I have a Galaxy S5 and Windows 7 computer, and after following the directions, troubleshooting, installing the correct drivers and software, and fiddling with various settings on both my phone and in Windows for 4-5 hours, it just will NOT recognize my phone as an audio device. This app shows promise, but still needs a lot of work.
Wtf... I've just recorded my guitar with my phone, route it via usb, using asio4all on pc and record it with cubase, with very small delay and superb quality! Great job! :) just a thought, with root access, maybe you can make it work just like an external usb audio interface (bypass the virtual device driver) with even smaller delay, right? :p
New update screwed things up Used to work perfectly, but ever since I downloaded and installed the new update, I've been getting this error: "Fail to communicate with WO Mic on Android device. Check if application has been started" I've followed every step from the website correctly, I've reinstalled the app on PC, I've reinstalled it on my phone, and I've reinstalled the drivers. Nothing seems to help. It used to work perfectly, though, so if you're thinking about downloading this app, you definitely should.
Works well (USB) It's a real shame that the Windows audio driver isn't signed, this creates security problem. Otherwise it works as advertised. UI is pretty illogical, and basic info about usage is missing. It works on Android 6.0.1, does not require root, and I'm happy I've found this app 5 minutes before important call!
- Support 48K voice samples rate. (need to install new client 3.1 and driver)
- Client 3.1 now supports being minimized into system tray
- Optimize audio processing so that voice is more consecutive
- Add volume adjustment function in pro build
- Optimize audio processing to reduce power consumption
|
OPCFW_CODE
|
This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
- # admin-announcements (7)
- # beginners (30)
- # boot (181)
- # cbus (1)
- # cider (55)
- # cljs-dev (8)
- # clojure (104)
- # clojure-dev (3)
- # clojure-japan (1)
- # clojure-russia (70)
- # clojurescript (139)
- # core-logic (4)
- # cursive (23)
- # datomic (25)
- # devcards (10)
- # events (11)
- # funcool (1)
- # hoplon (39)
- # jobs (10)
- # ldnclj (19)
- # lein-figwheel (21)
- # off-topic (4)
- # om (174)
- # onyx (46)
- # re-frame (25)
- # reagent (3)
- # yada (7)
No didn't go far debugging it at all. I'm pretty sure my code is passing it in, though. Hunch is that the key needs to be nested in something else (`:compiler-options` maybe?).
@alqvist: only files in
:resource-paths end up in target or other packages like jars.
You can learn more about the roles files can have in the fileset here: https://github.com/boot-clj/boot/wiki/Filesets#fileset-components
@alqvist: Bootlaces will not fix that issue but rather would have been a cause under some circumstances.
@alqvist: just to clarify are your own clj files included or the clj files of your dependencies and you want to remove those?
@alqvist: that’s what an uberjar is supposed to do Do you want to include compiled java classes instead? Maybe try using
(aot :all true) after the
(uber) task. I’m not sure if this will work though
I suppose you could try aot :all where it was before, I think that will compile all namespaces you depend on. Not sure if
(uber) is still required then actually. Would be interesting to try
@martinklepsch: `(aot :all) still fails on compile nr 66 or something. One of the asyncs
I think you have to set your BOOT_VERSION file in your ~/.boot/boot.properties file. Then run the script, https://github.com/boot-clj/boot-bin/releases/download/2.4.2/boot.sh
@pandeiro: the binary has been moved there because effectively with 2.4.0 there has been a new binary that should rarely require updates
we may release a bugfix to boot.sh from time to time, but new versions will always work with it
https://github.com/boot-clj/boot/issues/275 — should this be moved to boot-bin?
So I am studying pods and how they work. Why is the first thing App/newPod does is seal the app classloader? What advantage does a sealed classloader have?
@chrisn: I think the point of sealing is that no other things can be added to the classloader after the pod is created
@chrisn: yes, what @martinklepsch said: boot uses
dynapath to mark classloaders as immutable which is really only an opt-in thing, but it is there to prevent programs from leaking clojure into classloaders that will pollute all the other pods
if a program doesn't modify the classloader via dynapath then the seal-classloader thing won't have any effect
The thing is that a new URLClassLoader is created that is that pod's specific classloader. How would a program pollute another pod?
a program that walks the classloader chain and uses reflection to add things to a classloader higher up in the chain
all the other pods, the worker pod, the aether pod, and any pods that are created by tasks or by the user will be created with all the dependencies they will need specified at construct time
the worker pod is a pod containing all the various dependencies that boot needs for its built-in tasks
but since boot ships with like 15 tasks that are all maintained together it simplifies things to let them all share the worker pod
so boot.jar will extract the aether uberjar from its own resources and write it out to a cache dir
This entire class isolation things is pretty darn interesting. Leiningen does a similar process which we have seen several pretty irritating issues with and we are using onyx which has peers and having dynamic pods for those would be very useful.
Right. And it uses a somewhat bugprone system of having each plugin mangle the project and eval-in-project
tasks don't load their own dependencies into a classloader that has any other task's dependencies
How does the pod pool work? It seems that reusing a pod would be tough unless you unloaded its classes.
so like suppose you want to run your tests in a fresh pod each time you build your project
so there are always N pods waiting int he queue with clojure already loaded and ready to go
when you do
(pool :refresh) you get a fresh new pod, and the previous head of the queue is disposed of
and it comes with various library functions that are helpful for making your build program
You could imagine a sort of boot daemon that would just always keep some pods running.
Really mitigating the startup time would go a long way towards using clojure as a scripting language.
And what you did was create a new core, create a new worker pod, and run the show task?
it's pretty important to isolate that in a pod because there are some really hairy deps there
not great transitive dependencies because they pull in all kinds of ancient apache http client nonsense
the functions in the pod that will be called to do the work take file paths as arguments
and they write the results of their work to some directory whose path is provided in the arguments
and when the work in the pod is finished the task will add the contents of the directory where the stuff in the pod wrote its results to to the fileset
OK, but is the task itself running in its own pod? And it has a child hoplon pod, correct?
it's important for all the tasks to run in the same thread, because the fileset is an abstraction that's coupled to the JVM classpath
the relationship of pods to the fileset is sort of like transients to persistent collections
(consider the crazy mutation of the classpath done by google closure compiler or whatever)
when all the crazy mutation is finished the task extracts the results and atomically adds them to the fileset and the main classpath
|
OPCFW_CODE
|
Getting dependency error for sparksession and SQLContext
I am getting dependency error for my SQLContext and sparksession in my spark program
val sqlContext = new SQLContext(sc)
val spark = SparkSession.builder()
Error for SQLCOntext
Symbol 'type org.apache.spark.Logging' is missing from the classpath. This symbol is required by 'class org.apache.spark.sql.SQLContext'. Make sure that type Logging is in your classpath and check for conflicting dependencies with -Ylog-classpath. A full rebuild may help if 'SQLContext.class' was compiled against an incompatible version of org.apache.spark.
Error for SparkSession:
not found: value SparkSession
Below are the spark dependencies in my pom.xml
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.6.0-cdh5.15.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>2.0.0-cloudera1-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-catalyst_2.10</artifactId>
<version>1.6.0-cdh5.15.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-test-tags_2.10</artifactId>
<version>1.6.0-cdh5.15.1</version>
</dependency>
You can't have both Spark 2 and Spark 1.6 dependencies defined in your project.
org.apache.spark.Logging is not available in Spark 2 anymore.
Change
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>2.0.0-cloudera1-SNAPSHOT</version>
</dependency>
to
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.6.0-cdh5.15.1</version>
</dependency>
This solved the SQLContext error but still getting error for SparkSession
Well, did you import it ? import org.apache.spark.sql.SparkSession
I imported it but its not present, however I updated my pom with different version of spark and i am not getting error for spark session
|
STACK_EXCHANGE
|
M: Google vs the press: avoiding the lose-lose scenario - czr80
http://www.mondaynote.com/2013/01/20/google-vs-the-press-avoiding-the-lose-lose-scenario/?
R: politician
"In the economy digital, all left their mark. Due to the regular and
systematic monitoring of their online business, data application users are
collected without monetary consideration. Users, recipients of a service
become and quasi-employees, volunteers, businesses." -- translation
The French Ministry of Finance produced a report [1] determining that Google
should be taxed on the value of the exhaust data (clicks, etc.) it discovers
by tracking (presumably) the nation's citizens' behavior on the Internet.
The report appears to argue that Google's users should be treated as employees
for the purpose of taxation.
[1] "Mission d'expertise sur la fiscalité de l'économie numérique"
[http://www.redressement-productif.gouv.fr/files/rapport-
fisc...](http://www.redressement-productif.gouv.fr/files/rapport-fiscalite-du-
numerique_2013.pdf)
[2] Google translation of the abstract <https://gist.github.com/4591051>
|
HACKER_NEWS
|
Updated June 28, 2023
Introduction To CSS
Cascading Style Sheets, better known as CSS, is a very simple process to make web pages much more presentable. CSS allows you to put styles to customize your web pages. The best part about using this styling feature is that the CSS is independent of the HTML way of creating web pages. The Hypertext Markup Language and the Cascading Style Sheets have a basic difference. CSS primarily structures the web page’s landscape while offering powerful color coding and styling techniques. It actively controls the layout of more than one web page simultaneously. All the CSS files store the external stylesheets.
Main Components of CSS
In the Above Section, we have studied Introduction to CSS, So now we are going ahead with the main components of CSS as below:
1. Easily maintainable: If you are intended to make any global change, change the styling, and you can see all other elements in all other web pages getting automatically updated.
2. CSS is time-saving: You can write the script once and reuse the same sheet as much time as you want.
3. Superior styles to the native front end: CSS has a much wider array of attributes and lists than HTML. Therefore the HTML page can have a brighter look and feel than the normal HTML attributes.
4. Ease with Search Engines: CSS is a convenient and easy-to-read styling sheet. This means that search engines don’t have to put in much effort trying to read the text.
5. Efficient cache storing: CSS can store web applications locally with the help of an offline cache mechanism that can be used to view offline websites.
Characteristics of CSS
As we discussed the introduction to CSS and Its component. Now we are going to learn about the characteristics of CSS. The client browser interprets and applies styling rules to various elements in your document, which are among the major characteristics of CSS. Major characteristics include:
- A style rule consists of a selector component and a declaration block component.
- The selector points to the HTML component that you want to style.
- The declaration block contains one or more declarations, along with semicolons.
- Every declaration has a CSS property name, a semicolon, and a value. For example, the color is the property, and the value is red. Font size is the property, and the 15px is the value.
- CSS declaration ends with a semicolon, and curly braces surround these blocks.
- CSS selectors are the ones that are used to find HTML elements that are based on the element name, id, attribute, class, and more.
- The ID of an element will select a unique component.
- If you wish to select a particular element with a specific id, the # function, and the id attribute should be used.
- If you wish to select the elements with a specific class, the period character and the name class should be written.
- Universal selector: If you do not wish to select elements of a particular type, you can use the universal selector, which matches the element name.
- Element selector: These selectors choose the element based on the element name.
- Descendent selector: The descendent selector refers to a situation where a particular element is inside another.
- ID selector: This selector uses the id of the HTML element to select a specific element.
- Class selectors: It selects the element with a specific class attribute.
- Grouping selectors: It will be a good option to group the selectors to minimize the code. Each selector and a comma should be used to group the selectors.
Applications of CSS
After learning the Introduction to CSS and the characteristics of CSS, we are going to learn the application of CSS. There are three ways of HTML accessing CSS:
An inline style sheet only affects the tag it is in. This essentially means that you can change the small details on the page without altering the overall layout or everything on the page. This is advantageous as if you had everything on the external pages. In that case, you must add additional tags to modify the details. Inline overrules external, which means that the small details can be changed. It also overrules the internal.
Web developers typically utilize internal styling to make small changes within a single tag. Inline styling affects only the specific tag to which it is applied, while internal styling is placed within the head of the HTML document. This means that if you wish to customize the page, all the required changes would be seen by just scrolling. The internal styling is placed inside the tags. Comparatively, this looks neater, simple, elegant, and organized because of the separate styling and tagging.
External stylesheets allow people to format and recreate their web pages on different documents. This effectively means you can have two or more workplaces, as more than one stylesheet can be embedded inside the document, providing you with a much cleaner workspace. In this case, the external stylesheet’s easy accessibility provides a significant advantage. However, it’s important to note that any changes to the external sheet would impact all the parent sheets it is linked to.
Advantages And Disadvantages Of CSS
Below are the advantages and disadvantages:
Below are the advantages:
- Device Compatibility
- Faster website speed
- Easily maintainable
- Consistent and spontaneous changes
- Ability to re-position
- Enhances search engine capabilities to crawl the web pages
Below are the disadvantages:
- Cross-browser related issues
- Issues due to multiple levels
- Lack of security
CSS empowers the web designer to apply extensive changes to the web layout of all pages within a single website using a single file. It helps design a light and creative website with high responsiveness, which impresses the audience when displayed. Therefore, it is an integral part of today’s websites that should not be overlooked.
We hope that this EDUCBA information on “Introduction to CSS” was beneficial to you. You can view EDUCBA’s recommended articles for more information.
|
OPCFW_CODE
|
I'm having real problems cleaning up and reducing the size of a Shapefile which is published by the UK's Environment Agency.
It shows the extents of the LiDAR open data they publish: each polygon is a survey flight, and there are fields for the resolution, the date of the flight and so on.
Eventually I'd like to end up with a dissolved (merging all features) and simplified Shapefile which is much smaller in size. At the moment:
- The .shp is 360.6MB
- The .dbp is 26.9MB
- There are 121,753 polygons
I think one of the reasons the file is so large is that there are many small 'specks' of data (which aren't important for my purposes):
What I've tried so far:
- Dissolving with QGIS: this seemed to make no progress on such a big file so I cancelled it after a while
- Dissolving with OGR (
ogr2ogr dissolved.shp LIDAR_composite_extents_2015.shp -dialect sqlite -sql "SELECT ST_Union(geometry) AS geometry FROM LIDAR_composite_extents_2015"): after a few hours I would get an error like this:
GEOS error: TopologyException: Input geom 0 is invalid: Ring Self-intersection at or near point 221912.50000000093 50580.000000001863 at 221912.50000000093 50580.000000001863– I guess this will just be the first of many errors
- Cleaning the Shapefile, first with QGIS's Check Geometry Validity, then with GRASS's v.clean (I tried bpol), but the cleaned file still fails dissolve (I also tried adding a zero buffer)
- Converting the multipart polygons to singleparts, adding a geometry column and removing features smaller than a certain area. This took about 100MB off the filesize, but didn't affect the 'holes' within polygons – to get rid of those I tried to make a difference layer, but the difference operation consistently fails (if I check ignore invalid geometry, it creates lots of features which are visible in the attribute table, but doesn't display them).
I will simplify the file in the end, but don't want to do this before dissolving in case I introduce slivers.
Should I use a different tool within v.clean?
I have not yet tried to split the file into regions and dissolve those, then dissolve the regions.
|
OPCFW_CODE
|
Rails 7.0 Added Class Level update! Method To ActiveRecord::Base
June 13, 2021
Ruby on Rails has
update_all class method which is used to update a batch of records without running validations and callbacks defined in the model.
On the contrary, we can update batch of records using
update class method which also runs validations and callbacks defined in our model.
Let's see an example:
class Employee < ApplicationRecord validate :salary_for_experience_level, on: :update private def salary_for_experience_level if experience < 2 && salary >= 1_00_000 errors.add(:salary, "Invalid") end end end
Employee model, while trying to update the salary we have a validation check on employee having salary more than
1_00_000 and experience level less than 2 years.
We will now create two employee records as follows:
Employee.create! experience: 3, salary: 100000 Employee.create! experience: 1, salary: 80000
Having the salary validation in our
Employee model, let's try to update salary of all employees:
Employee.update(salary: 1_30_000) #=> [#<Employee id: 1, salary: 130000, experience: 3>, #<Employee id: 2, salary: 130000, experience: 1>] Employee.all #=> [#<Employee id: 1, salary: 130000, experience: 3>, #<Employee id: 2, salary: 80000, experience: 1>]
As we see above,
update method partially updated employee record with id
silently failed to update the employee record with id
2 due to validation check.
Rails 7.0 has added
update! method to raise
ActiveRecord::RecordInvalid error for employee record with id
# After Rails 7.0 Employee.update!(salary: 1_50_000) #=> Employee Load (0.2ms) SELECT "employees".* FROM "employees" # TRANSACTION (0.1ms) begin transaction # Employee Update (0.6ms) UPDATE "employees" SET "salary" = ?, "updated_at" = ? WHERE "employees"."id" = ? [["salary", 150000], ["updated_at", "2021-06-13 18:09:14.800151"], ["id", 1]] # TRANSACTION (1.8ms) commit transaction # Traceback (most recent call last): # 1: from (irb):18:in `<main>' # ActiveRecord::RecordInvalid (Validation failed: Salary Invalid)
Here is the link to the relevant pull request.
|
OPCFW_CODE
|
RStudio is hosting Hadley Wickham's two day, R Master Development course in Washington DC this December:
Dec. 10, 2012 - Advanced R programming
Dec. 11, 2012 - R package development
Course Instructor: Hadley Wickham - RStudio Chief Scientist
This is a two-day workshop, but each day can also be taking independently. To save on registration fees, sign up for both now. All participants will receive a copy of all slides, exercises, data sets, and R scripts used in the course.
Discount pricing available for academics (33% off) and students (66% off). Space is limited, please contact us to confirm your eligibility.
What should I bring?
Day 1 - Advanced R programming
Monday, Dec. 10, 2012
Do more with less code, by mastering advanced features of the R programming language.
Who should take this course?
This class will be a good fit for you if you have some experience programming in R already. You should have written a number of functions, and be comfortable with R’s basic data structures (vectors, matrices, arrays, lists, and data frames). You will find the course particularly useful if you’re an experienced R user looking to take the next step, or if you’re moving to R from other programming languages and you want to quickly get up to speed with R’s unique features.
Learn to write better R code by using the advanced features of the R programming language. Based on the programming experience of Hadley Wickham (author of over 30 R packages) and the RStudio team, this course will teach you how to use R to solve harder problems with fewer lines of code.
- Become a skilled R programmer who knows the best ways to craft R functions and to use R’s object oriented programming (OOP) features
- Learn advanced R techniques to compute on the language, control object evaluation within R functions, and apply R’s scoping rules
- Write correct, fast, and maintainable R code built around the mantra, “Don’t repeat yourself!”
What will you learn?
How to write R programs like an expert. Through a series of demonstrations and hands on exercises, you will learn about advanced R features to write fast and maintainable code.
Controlling evaluations - Unlike most languages, R provides powerful tools for controlling when and where evaluation occurs. This lets you create functions tailored for interactive use that minimize typing with a little magic.
- Mastering base functions such as
- Capture user input without evaluating it
- Control when and where R evaluates expressions and calls
- R’s rules for dynamic and lexical scoping
- Writing code that modifies code
First class functions - At heart, R is a functional programming language, and functions can be used in many more ways than most R users assume. R has first class functions which means you can write functions that return functions, take functions as input, and save function in lists. This gives you a powerful set of tools for dealing with a broad class of problems.
- Create anonymous functions
- Write closures – functions that return functions
- Build higher-order functions – functions that take other functions as input
- Work with lists of functions
Object oriented programming - Though a functional language, R contains three systems of object oriented programming (OOP) features. These features revolve around the concepts of classes and methods and can dramatically simplify code. We’ll focus on S3, the oldest and simplest form of OOP, but will also touch on S4 and R5 (reference classes).
- How to interpret base R functions that use OOP techniques
- The details of S3 generic functions, methods, and classes
- The differences between R’s three OOP classes: S3, S4, and R5
Best practices in R - Even advanced techniques can be ruined by poor planning. When you use advanced techniques, you must be especially careful to make your code clear and lucid. Throughout this course you’ll learn practical coding tips and techniques.
- Create correct, maintainable, and fast R code
- Create understandable code that communicates
- Organize R programs around the “DRY” principle – “Don’t repeat yourself!”
Day 2 - Package development
Tuesday, Dec. 11, 2012
Learn how to turn your R code into packages that others can easily download and use.
Who should take this course?
This class will be a good fit for you if you’ve developed some code that you now want to distribute to others. We’ll get you up to speed with everything you need to know about packages.
In this course you’ll learn an efficient package development workflow developed by Hadley Wickham, the author of over 30 R packages including
- Transform existing R code into packages that others can easily download and use
- Learn a fluid package development process facilitated by the
- Write inline documentation with
- Develop automated tests with the
testthatpackage to ensure that your code is correct today, and continues to be correct in the future
- Recognize common errors that prevent you from passing
R CMD check
- Release your package into the wild, through the official CRAN repository for worldwide distribution, or to local repositories for controlled distribution
What will you learn?
How to develop an idea into a published, stable R package. Through a series of demonstrations and hands on exercises, you will learn to use advanced R features to quickly build, document, test, and publish R packages.
Introduction to R packages - Packages are one of the most useful tools in the R programming language. You can use packages to quickly solve problems not easily handled by base R. You can also share your own code with friends, coworkers, or even the global R community by building it into a package.
- How to structure an R package
- Working with libraries and installing packages
- The package development cycle
Documentation and namespaces - Two things must happen before your code can become useful. First, other programmers must be able to understand your code. Second, R must be able to integrate your new functions with pre-existing ones. We will show you how to take care of these by documenting your package and creating a package namespace for your functions.
- Documenting your packages and functions
- Formatting text in help pages
- Exporting functions to a package namespace
- Use functions from other packages in your own package
Code testing - Maintaining an R package requires advanced planning. You can simplify debugging, quickly spot unintended consequences, and generally ensure that your package is stable by creating thoughtful unit tests.
- Make code maintenance easier with unit tests
- Quickly create unit tests for your package with testthat
- Seamlessly integrate unit testing into your workflow with devtools
Releasing your package - The R Core Development team helps developers share their packages with the world by hosting packages along with R on the CRAN repository. You’ll learn how to ensure your package meets R Core’s high quality standards for packages and how to best market your package after it has been included.
- Using and passing R CMD check
- Submitting a package to CRAN
- Marketing your package after its release
- Simplifying package development with source code control
In certain cases, we may need to cancel this workshop due to circumstances beyond our control or otherwise. If this happens, RStudio will refund all registration fees for those who signed up. RStudio is not responsible for any related expenses incurred by registered attendees (including but not limited to travel and hotel expenses).
Until Nov 25, 2012 - Full refund, less 10% of registration fees
Nov 26, 2012 to Dec 02, 2012 - 50% refund of registration fees
Dec 03, 2012 and after - No refund available
All public workshops hosted by RStudio come with a no-questions-asked money-back guarantee.
When & Where
RStudio™ offers open source and enterprise-ready professional software packages and products for R. The free RStudio integrated development environment (IDE), Shiny interactive application framework, and R Markdown reproducible reporting package, are just a few of the many popular tools we provide to make using R a better experience. Please contact us at http://www.rstudio.com to learn how RStudio Server Pro and Shiny Server Pro can give your organization the professional environment R developers need to deliver the interactive dashboards, applications and reporting experiences business users want.
|
OPCFW_CODE
|
Webhooks: Don't Poll Us, We'll Call You!
Your integration or app sends envelopes to recipients via email or by using embedded signing. Great!
A common next step for developers is to enable their integration to automatically determine the status of their envelopes--has the envelope been delivered?, signed?, declined?, completed?
By knowing the envelope’s and recipients’ statuses, your integration can handle each case appropriately. For example, consider these two scenarios:
- A recipient’s status becomes Delivered and is not changed to a status of Completed (signed). This indicates that the recipient received the envelope, and opened it on a mobile device, tablet, or web browser, but they haven't signed it yet! Your app might notify the sender of this situation to follow-up with the signer to see if there are any issues.
- An envelope's status becomes Completed. Your app could initiate actions or notify other databases that the envelope has now been signed. Your app could also download the signed document and certificate of completion for storage in a corporate archive system. (The documents will also be stored on the DocuSign system unless a purge policy has been configured.)
Both of the above scenarios depend on your app having current status information. How can your app learn about status changes?
Polling is an old-school technique that is still in use today. To poll, your app sets up an infinite loop that periodically requests envelope or recipient status from DocuSign on one or more envelopes. DocuSign limits polling frequency, so an envelope's status can be requested once every 15 minutes or less often. This restriction is implemented on all DocuSign production servers, without exception. This restriction does not apply while you’re developing your app in the DocuSign Developer Sandbox system, demo.docusign.net.
Polling has many disadvantages:
- On average, your app will learn about the envelope's new status 7.5 minutes (or more) after the event occurred, depending on your polling frequency.
- Your app's poller will spend most of its time learning nothing new, but will be consuming resources, including log space, process table space, and more.
- Your app must maintain a polling thread that should only stop when all of your envelopes have completed. It must be monitored and restarted as needed.
- Polling is a drag on the DocuSign system. We engineer the system to respond to polls, but doing so consumes energy and is completely unnecessary since the platform supports webhooks.
The answer is to stop polling. Use webhooks instead! The advantages of webhooks are:
- Webhooks enable the DocuSign platform to proactively notify your app when your envelope or any of its recipients statuses change.
- Your app receives the notifications much faster than the 7.5 minute average delay of polling. However, note that the messages are not delivered instantly after the event occurs. Customer UX threads should not wait for an incoming webhook message.
- Webhooks are the 21st century version of hardware interrupts: instead of wasteful polling, your app is notified when an event occurs.
Webhooks are easy to implement! See Part II of this post, Adding Webhooks to your Application for more information.
|
OPCFW_CODE
|
How to uninstall a driver from recovery console
I recently updated the "Standard AHCI Driver" (whatever it was called), to the one suggested on the laptop manufacturer's website.
At first, it kept going through the automatic recovery process. With much trouble, I managed to get the Windows 8 installation to boot into safe mode and stop it from automatically trying to fix startup (which was not working).
Now, when the computer restarts, it complains about iaStor.sys being invalid and / or missing. This makes sense, since the offending driver is causing all of these issues.
My question now- is there a way I can revert to the Standard AHCI Driver? Possibly using a recovery console? I want to avoid re-installing Windows, which is the only way I can see to fix this problem now.
You can access System Restore from WinRE. Also see Rollback driver in Windows via command line, Windows Repair Disc - Rollback a driver from Windows Command Prompt?, Can I uninstall faulty drivers through System Recovery Mode command prompt? etc.
I ran into following error with pnputil -e on the latest Windows 10 repair console:
No published driver packages were found on the system
I had to use dism
Listing drivers:
dism /image:c:\ /get-drivers
Removing a driver:
dism /image:c:\ /remove-driver /driver:oem#.inf
Use list vol inside diskpart to get the assigned drive letter of your windows partition and replace /image:c:\ with it
I'm guessing the offending driver is not shown in Device Manager while in Safe Mode?
If it is shown, you should be able to rollback or install the standard driver from there.
Otherwise, from Safe Mode (and possibly the recovery console), you can use pnputil.exe to uninstall the driver.
Type pnputil -e to show a list of installed drivers.
You may want to use pnputil -e | more so the list is output one screen at a time.
After you've located the driver in the list, note the inf file shown for the driver (e.g., oem00.inf).
Type pnputil -d oem00.inf to delete the driver.
You may need to use pnputil -f -d oem00.inf to force deletion.
If the pnputil is not available in the console make sure you change the current directory from X:\Windows\System32 to C:\Windows\System32
I've just tried pnputil -e in a windows 10 recovery console. The command works, but shows only three entries. This is suspicious and most likely the installed drivers of the recovery console itself. The current directory was 'C:\Windows\System32'
|
STACK_EXCHANGE
|
I don't complain about much when it comes to user experience (oh wait, that's not true), but I couldn't pass this
up. This is the registration form for CBS 2, a local TV station here in Southern California.
I had originally wanted to post a comment on the news story
of a high speed chase about how terrible the television coverage was (they cut to commercials every minute at times, no joke). As it turns out, they require registration to comment. No biggie, I thought. What's one more registration? That's until I got a look at the form...
(scroll down for my analysis below the image)
I can't tell you how many things are wrong with this form. This registration form - to interact with a news website - REQUIRES a security question, a birthday, and...MY HOUSEHOLD INCOME?!?! Not optional; no, they are REQUIRED for registration. Oh, and it also lets me opt in for spam too. I won't even talk about the very large ad bordering the registration form, nor the myriad of font usage.
Now, I understand the need for an ad-based media company to want to be familiar with their demographics, but there needs to be somebody within the organization that stands up to this sort of idiocracy. Just because the ad guys tell you they want all this information doesn't mean you should bow to their every request. Garry Tan pegged it
when he said that when "*anyone* makes a product lousier, [designers] should get up and shout, and raise hell."
It's a well-understood principle that the shorter and simpler your registration form is, the more chance the user has of actually filing it out. Even Facebook experimented
with an extremely simple signup process. Simplicity is key, and when you make something too complex, people will just leave.
When I was greeted by this registration form tonight, I was initially overwhelmed at the amount of information this site wanted, just for me to leave a comment on a news story. As it turns out, I never got around to filling it out. I'm not concerned about my privacy; heck, there is enough information about me out there already. In this case, the amount of effort outweighed the benefit for me.
CBS is not a financial institution, my bank, or my social network. They don't need to be asking my household income or to fill out a security question. Heck, I'm surprised there isn't a field for my social security number.
Moral of the story: If you make a task on a website too difficult to complete, it's going to decrease the number of people who actually do. And in my case, that's what happened.
|
OPCFW_CODE
|
I’ve been a loyal user of Firefox for years and plan to still use it as my secondary browser (and will use it regularly for Zotero). But Google’s new browser, Chrome, has really impressed me and I’m going to start using it as my primary browser for email, surfing the web, and every day online activities. Why is it better? It’s faster, does a better job rendering flash on Linux, doesn’t require restarts to install extensions, and isolates tabs, so if one crashes it doesn’t kill all of your other tabs. In short, Chrome has become a better browser than Firefox for everyday browsing.
I had held off switching to Chrome for several months because I was hoping someone would create a Chrome extension similar to clippings for Firefox. I’m still hoping someone will, but I figured out a way to work around it on Ubuntu Linux. It’s not for the faint-of-heart or novice computer user, but it does work, and works quite well. Basically, what I’ve done is installed some software that allows me to paste text into any window, unlike clippings, which only let me paste text into Firefox.
Here’s how you do it:
Ubuntu comes with some cool keyboard shortcuts built in, and includes the ability to customize a lot of those shortcuts. But it doesn’t come with the ability to paste text as a keyboard shortcut with the default install (learned this here). In order to do that, you need to install a program called “xbindkeys.” While you’re at it, you may as well install the GUI for setting the keybindings “xbindkeys-config” and the program you’ll use to generate the text “xvkbd”. You can install them all at once from the terminal (or find them in the Synaptic Package Manager and install them there):
$ sudo apt-get install xbindkeys xbindkeys-config xvkbd
Once you’ve installed “xbindkeys,” the first thing you need to do is create a default configuration file for it from the terminal:
$ xbindkeys -d > /home/your-user-name/.xbindkeysrc
Once that’s done, go ahead and open the xbindkeys GUI by typing the following at the terminal:
This will pull up the xbindkeys GUI:
In here, you can create the key bindings to generate the text you want. This involves 6 steps.
1) At the bottom of the GUI, select “New”:
A new line in the list of keybindings will appear. Select it.
2) At the top right, choose a name for the keybinding and put it in the field labeled “Name:”
I don’t know that the name you choose here is particularly important, but you should probably make it something you’ll recognize.
3) Set the key binding. the xbindkeys GUI includes the ability to capture key combinations. All you have to do is press the “Get Key” button, wait a second for a small window to pop up, then press the desired key combination (e.g., Control+Mod2+Mod4+1). The GUI should capture everything for you correctly. The key combination will be coded into the language xbindkeys understands and will be put in the “Key:” field (e.g., Control+Mod2+Mod4+1 | m:Ox54 + c:10). What all of the code in the “Key:” field means, I’m not sure, but some of it is explained in the actual configuration file (which you created above; it’s located here: /home/your-user-name/.xbindkeysrc).
4) You then need to tell xbindkeys what to do when you press the key combination you just set up (this is called the “Action”). In order to paste text into a window, you have to actually call up a different program, which is why we installed it earlier. That program is xvkbd, an on-screen keyboard program. You won’t see the keyboard when you press the key combination, but it does generate the text for you. Here’s the command you use in the “Action:” field to generate text:
xvkbd -xsendevent -text “your text here“
The first part “xvkbd” opens the xvkbd program. The second part “-xsendevent” tells it to send an event to the X window. The third part “-text” tells it what type of event – text.
A couple of notes on the text you can include using this command are in order. First, you can put pretty much anything you want inside the quotes, except additional quotes, which the program doesn’t seem to recognize are quotes inside the quotes. So, don’t try quotes inside quotes. You can also include things like “Return” and “Tab.” I couldn’t figure out how to do that initially, which is why I ended up writing this tutorial. What I wanted to include in the text I was pasting was the text I usually include in my email signature:
I was able to include the text, but couldn’t figure out how to include the line break. So, what I was initially getting when I pressed my key combination was:
I initially thought, erroneously, that the command for “Return” would be included in xbindkeys list of commands. Nope. The command for “Return” is part of xvkbd. I eventually found it on that software’s website: \r. So, if you want to create something like an email signature with a line break, here’s how it would look:
xvkbd -xsendevent -text “Best,\r\rRyan”
Note I’ve included two Return commands (i.e., “\r\r”) as one will just return me to the next line. I needed 2 for a blank line between “Best,” and “Ryan.”
5) You’re not quite done at this point. I found the GUI for xbindkeys to be a bit buggy. Before you do anything else, you should hit the buttons “Apply” and “Save & Apply & Exit”:
This will close the program, but it will also make sure that everything you just typed in will be saved in the configuration file. I have 5 different keyboard bindings for text I use regularly, 3 email signatures and 2 other text snippets. I created the first email signature, then started on the second and the GUI crashed, losing all my work. So, just hit the “Apply” and “Save & Apply & Exit” button after every keyboard binding you create just to be safe.
6) Test it. Open up a file or browser and try your keyboard combinations. If the text is pasted in – voila, you’re done. If not, well, you did something wrong or, well, who knows. The nice thing about this approach is that it is browser agnostic – it will paste your text into any browser or any other text input box for that matter.
There are a couple of additional things you should do with xbindkeys before you’re done. First, a nifty keybinding that is included in the default configuration file is “CONTROL+SHIFT+q”, which opens up a list of all of your current keybindings:
This isn’t all that important to know, but it is a nice little feature.
You should also know that you can edit your configuration file by hand, though doing so is a bit tricky as the code for the key combinations isn’t all that intuitive. To do so, use the following command at the terminal:
$ gedit /home/your-user-name/.xbindkeysrc
Finally, once you’ve gotten everything working, if you want to make sure xbindkeys starts when you boot your computer, add it to the Startup Applications: “System -> Preferences -> Startup Applications”
|
OPCFW_CODE
|
Scan with Flatbed and Automatic Document Feeders (ADF) Scan to JPEG or TIFF Preset photo sizes Scan multiple photos on the flatbed Automatic file naming Learn more about scanning photos with You can also contact a Customer Support Technician to help solve any query that you may have. If you can print from WordPad or Notepad, either the problem is related to the program that you are using, or Windows may not be running a particular printing command that If you are having problems locating the correct drivers for your Printer printer, or are unsure of the exact model, we suggest you run a system CX Stylus scan first. click to read more
Download. VueScan is here to help. If you are using a PostScript printer, load the Apple LaserWriter NT driver. Guaranteed safe for your PC. Source
On the Print Test Page page of the wizard, click Yes to print a test page. Please note we are carefully scanning all the content on our website for viruses and trojans. For example, create a new document that contains less information. Contact Us | Privacy & Customer Protection Policy Home | Products | Supplies | Support | Corporate Info | News & Events | EPSON Europe Epson Support Our product support allows you to get the best out of your Epson product with self help
Check the available space on the hard disk. To update the printer driver on Windows XP manually, follow the steps below or click here to let us fix it automatically: Click Start, and then click Run. Register the full version to have DriverTool install all missing or corrupt drivers for you automatically! Epson Stylus Cx3500 Driver For Windows 10 Running the dir > lpt1 command from the Windows directory will fill the page buffer.
If you see the Printer Sharing page next in the wizard, you can share your printer so that other computers on your home network can use it to print. Epson Cx3600 Printer Problems with the option of a second year for just $9.99 USD. We recommend that you save it to your desktop and initiate the free scan using the desktop icon. http://esupport.epson-europe.com/ProductHome.aspx?lng=en-GB&data=nKGrQtfEm6mzkADYuvmrwa3UUzXU002Fpo4tURoz2CUT7+oU003D&tc=6 We're commited to providing the best driver to solve your system issues.
fulcherbazSep 25, 2011, 1:21 AM cmichael138 said: 32 or 64 bit? Epson Scan Windows 10 trying to install cx3600 smart panel for scanner. Install the latest CX Stylus driver updates now. Unless otherwise indicated, all content is Copyright © Seiko Epson Corporation.
If the printer is not a PostScript printer, type dir > lpt1 at a command prompt, and then press ENTER. http://www.driversepson.com/epson-cx3600-driver/ Scan with Flatbed and Automatic Document Feeders (ADF) Scan to PDF (Single and Multipage) Optical Character Recognition (OCR) Automatic Color Detection Small document file sizes Automatic document deskewing Scan Photos Do Epson Stylus Cx3600 Software Download Free On Linux, you need to set up libusb device protections. Note that Epson drivers for Windows Vista or later will usually work on Windows 10. Epson Stylus Cx3600 Manual Saving you time and preventing the possibility of installing an incorrect system driver, which could potentially cause a system crash.
You are logged in as . http://alpinedesignsmtb.com/epson-stylus/driver-epson-stylus-cx3600-scanner.php Click Next. Out of warranty services Even if your existing warranty has lapsed, Epson offers a competitively priced service to ensure minimal down time. Apple keeps changing their Mac OS and Epson will not update the driver for my scanner. Epson Stylus Cx3600 Ink Cartridges
On the next page, click Next if you want to accept the suggested printer name and use the printer as your default printer. If you were unable to complete the above steps to install a printer driver, or if you still have problems printing, you might have to ask someone for help or contact For the first time, users can upgrade with confidence knowing that their computer will run smoothly thanks to the latest drivers and updates required for a perfect upgrade. find more info Download VueScan and start scanning again in 60 seconds.
On the first page of the Add Printer Wizard, click Next. Epson Et 3600 Contact Us. If the output is printed to the printer from the dir command, the print driver or printer configuration is probably the source of the problem.
Under Printer Tasks in the navigation pane on the left, click Add a printer. On this page, click Have Disk. When you select to replace the existing driver, Windows will try to replace the current files on your system with the new ones that you downloaded. Epson Scan Software The Add Printer Wizard opens.
The Install From Disk dialog box opens. Our user training courses will enable you to get the most out of your product. Epson Stylus CX3600 Scanner Driver Can't find a driver for your Epson Stylus CX3600? see it here When you have located the correct folder, click Open.
|
OPCFW_CODE
|
Policy on TACCESS paper presentations at the ASSETS Conference
As part of an on-going relationship between TACCESS and the ASSETS conference, papers accepted to TACCESS that have not previously been presented at a conference, can be presented at the ASSETS conference, provided that ASSETS is notified of the papers before the program meeting so those papers can be folded into the program. It is anticipated that at most 3-4 TACCESS papers could be presented as oral presentations; other TACCESS papers may be presented as posters but will clearly be marked at TACCESS papers in the program.
All TACCESS papers which have not previously been presented at a conference and which are accepted during the one-year period of time ending on June 4 before the conference will be considered for presentation. If the authors wish to present the work at ASSETS, they will be given the opportunity to do so. The scheduling of the presentations or posters is totally up to the ASSETS conference chairs.
This on-going relationship with ASSETS is expected to continue for upcoming years of the conference, and additional conferences may be added in future years, according to the policy outlined below.
Proposed policy for adding additional conferences
- The chair of a conference community or conference steering committee can send a request to the editors-in-chief of TACCESS indicating a desire for TACCESS papers to be presented at their conference. All such requests from active Assistive Technology and Accessibility communities will be approved. By making this request, the conference, through its community leadership agrees to accept for presentation all TACCESS papers whose topic is appropriate to their conference.
- The community or steering committee chair shall provide TACCESS with a liaison person who has the authority to approve a paper as being acceptable content for the conference. This is not a review process. That has already been done by TACCESS. This is simply an approval of suitable content for the conference. This should not require reading the paper, only the title and abstract.
- The conference’s TACCESS liaison shall respond immediately to any requests so that in the rare case of a decline of the paper the authors may still present at ASSETS.
- The community or steering committee chair shall provide TACCESS with a deadline date that is the same every year. Any paper to be presented at the next iteration of that conference must be accepted and sent to the conference’s TACCESS liaison before that date.
- When a TACCESS paper is accepted, the authors should designate which conference they desire to make their presentation (or no conference). Requests for presentation at ASSETS are automatically granted on the assumption that all TACCESS material is acceptable at ASSETS. Requests for presentation at a particular specialized conference must be received by the community’s TACCESS liaison before the deadline.
- Once a TACCESS paper has been scheduled into a conference, all discussions about the scheduling of the presentation shall be between the authors and the conference with no further involvement from TACCESS.
|
OPCFW_CODE
|
Since most altcoins are based on Bitcoin’s codebase, upgrades to Bitcoin are often relatively easy to implement in altcoins. Indeed, as Segregated Witness (SegWit) is slow to activate on Bitcoin, several altcoins are taking a stab at implementing and activating the soft fork first.
However, it seems the very same politics that are holding back the protocol upgrade on Bitcoin are now seeping into several of these altcoins.
“What we are seeing is a stalling tactic from miners,” Viacoin lead developer Romano told Bitcoin Magazine. “They know that if SegWit activates on altcoins, it will make blocking it on Bitcoin even less credible.”
Launched in 2014, Groestlcoin has a total market cap of some $365,000, earning it the 163th spot on CoinMarketCap at time of publication. This makes it the smallest of the five altcoins aiming for SegWit but also the first to have actually succeeded in activating it. The required 95 percent of hash power signaled support back in January, and the protocol upgrade has been live since.
“Jackie,” who prefers not to reveal his full name, is the project lead for Groestlcoin.
As a digital currency that isn’t used much yet, Groestlcoin never faced scaling issues like Bitcoin. But Jackie said he considers SegWit a malleability fix first and foremost, which in turn enables features like thelightningnetwork, atomic cross-chain transactions and other innovations.
“Less useful and elegant versions of lightning [network], TumbleBit and Mimblewimble were possible with the old version of Groestlcoin, but they are greatly enhanced now Segregated Witness is activated on the Groestlcoin network,” Jackie noted.
That said, Segregated Witness itself is not very actively used so far. There are no Groestlcoin wallets that support the option, so apart from some specially crafted transactions to test that the new feature worked, most Groestlcoin transactions still use the old, pre-SegWit format.
Though, Jackie added, “We’re in the process of updating our Electrum version for Groestlcoin to support SegWit transactions. That should be done before the end of this year. When that is completed anyone should be able to easily send and receive SegWit transactions.”
Vertcoin may well be the next altcoin to activate Segregated Witness.
As a result of an implementation bug, Vertcoin initially suffered a setback from their Segregated Witness integration: the blockchain forked in two. Vertcoin developer and project manager “etang600” emphasized, however, that this had nothing to do with SegWit itself — only with how they implemented it.
The issue has since been resolved and SegWit signaling has started. Requiring 75 percent hash rate support, it is getting relatively close to activation with some 40 percent of hash rate signaling.
However, one mystery miner, most likely a solo miner, controls over 30 percent of all hash rate. It’s this miner that is seemingly holding everyone back.
“We don’t know who this miner is,” etang600 told Bitcoin Magazine. “We are trying to figure out ways to contact him. But it’s still pretty early; we have only been signaling for two weeks, so we hope they’ll update.”
Litecoin, SysCoin and Viacoin and the F2Pool Dilemma
Litecoin ($200 million market cap for #6 spot on CoinMarketCap), SysCoin ($6.9 million market cap for #49 spot on CoinMarketCap) and Viacoin ($1.1 million market cap for #104 spot on CoinMarketCap) are also planning to implement Segregated Witness.
But since Viacoin is merge-mined with Litecoin, and SysCoin is merge-mined with Bitcoin, all three coins are facing the same problem: Bitcoin and Litecoin mining pool F2Pool is not signaling support for the soft fork.
In addition to benefits offered by a malleability fix, SysCoin will adopt Schnorr signatures: a signature scheme that could make both Bitcoin and SysCoin more efficient. Unsurprisingly, therefore, SysCoin backend developer Jagdeep Sidhu is hopeful F2Pool will start signaling support for the upgrade soon.
“F2Pool will probably signal support for Segregated Witness on Bitcoin and SysCoin together,” Sidhu told Bitcoin Magazine. “But I think they’re still in wait-and-see mode.”
When asked by Bitcoin Magazine last autumn, F2Pool operator Wang Chun said his system could not build C++11 and that’s why he was holding off on SegWit signaling.
Today, on Twitter, Chun suggested he may be able to finally compile C++11 code when Debian 9 is released.
I’ve heard rumors that Debian 9 will come with a C++11-compatible compiler. So let’s prayer for it could be released sooner and sooner. https://t.co/uW0ob1x80H
|
OPCFW_CODE
|
They’re genius comedians but the members of Monty Python are also a group of very clever men. Coming into showbusiness through the Oxbridge route, their work was influenced by a passion for history and culture, as well as satire and surrealist antics.
The team’s crowning achievement on the historical front is Monty Python & The Holy Grail. Made in 1975 on a low budget, it gave Arthurian England a shake-up with coconut halves instead of horse hooves and a cast of eye-popping characters.
Holy Grail’s most famous scene involves King Arthur (Graham Chapman) and his servant Patsy (Terry Gilliam) encountering the Black Knight (John Cleese) whilst trying to cross a stream.
The Knight is determined to prevent them from crossing. In doing his somewhat inept duty, he winds up losing his limbs during a brutal swordfight. An increasingly surprised Arthur realizes he is facing an opponent who won’t give up.
Hilariously, the Knight is heard to remark “It’s just a flesh wound!” shortly after having his arm cut off. Cleese’s armored “hero” is left behind in the woods declaring his skills as the King and Patsy continue their quest to assemble the Knights of the Round Table.
In a 2015 interview for Wired, Cleese gave some insight into how this bizarre and bloody sequence came about. It turned out the seed of the idea had been planted many years before, when he was a pupil of war veteran and English teacher “Jumper” Gee.
Gee told the young Cleese “about a wrestling match that had taken place in ancient Rome… There was a particularly tough contest in progress, and one of the wrestlers, his arm broke — the difficulty of the embrace was so great that his arm broke under the pressure — and he submitted because of the appalling pain he was in. And the referee sort of disentangled them and said to the other guy, ‘You won,’ and the other guy was rather unresponsive, and the referee realized the other guy was dead.”
When putting together the script for Monty Python & The Holy Grail alongside writing partner Chapman, Cleese retold the story. Its origin is thought to be the tale of Arrichion, a formidable athlete who died unexpectedly during a “Pankration.”
This was a type of wrestling match held during the Ancient Greek Olympic Games, where rules largely went out the window.
A BBC article from 2004 describes how, in 564 BC, “Arrichion had found himself being choked in a stranglehold from behind. Unable to free himself from the ferocious grip, Arrichion managed to grip his opponent’s ankle and twist it until it broke. In agony, his opponent submitted, but by then the damage was done — Arrichion’s throat had been crushed and even as he was proclaimed the winner, he breathed his last.”
With the Pythons’ sense of absurdity in the mix, what could have been a harrowing scene became a comedy classic. However, it was nearly removed from the final cut.
As reported by the Daily Mirror in 2017, an unearthed letter to Cleese from Stephen Murphy, Secretary of the British Board of Film Classification, highlighted concerns over content.
This led to some extraordinary comments, one of which is “though we accept that much of the blood-letting is meant to be funny, there are one or two places, where, in our view the humour is not very effective.”
Another reads, “We accept the dismembering of the Black Knight, but are worried by the first stab through the visor and the blood gush.”
Murphy’s remarks were meant to steer the movie toward an ‘A’ certificate, meaning it would be suitable for those 14 and over. The Board also took issue with Cleese’s “funny Frenchman” character, whose insults from on high are amongst the most quoted lines from the film.
Eventually, these problems were ironed out and the movie went on to make its mark on a generation of comics around the world.
An article from The Atlantic in 2015, written 40 years after Holy Grail’s release, calls it “the gold standard for subversive comedy” and writes that “Matt Groening called it a great influence on The Simpsons; every subsequent film that broke the fourth wall felt in its debt.”
More from us: One Louder: ‘This is Spinal Tap’ Sequel Announced
Central to its success was a surreal scene of severing that has audiences laughing to this day.
|
OPCFW_CODE
|
Many of the new features of Windows Server R2 have greatly enhanced the functionality of the operating system, as well as the expansion of the legacy functionality of Windows Server 2012. Here's a R2 of the 10 Windows Server new features that will impact your daily routine. Some of these new features, especially in the storage space, provide new out-of-the-box capabilities for traditional partners.
Working folder (work Folders)
Work folders brings Dropbox new functionality to the Enterprise server and installs it on the Windows Server R2 system, enabling you to get more sophisticated features and secure file Replication services. The originally released version will only support Windows8.1 users and may support Win7 and ipad devices in the future. As Dropbox,work folders will save the file's attachments both on the server and on the user's device, and the work folders can perform synchronization regardless of when the user connects to the server.
Status configuration (desired state config)
Maintaining configuration on many servers is tricky, especially if your system administrator maintains a large number of servers that are running. Many sophisticated solutions and countless custom in-house tools have been designed to meet this need. But now Windows Server R2 installs a new feature that programmatically establishes a role and feature baseline to monitor and upgrade any system that does not match the required state. The required state configuration requires the tool PowerShell 4.0--which provides many new cmdlets that can either complete monitoring tasks or meet the specific state required by the administrator.
Storage Rating (Storage tiering)
This is probably the most noteworthy new feature in Windows Server R2. Essentially, the ability to store ratings is to dynamically move storage blocks between different storage classes, such as fast SSDs and slower hard disks. Many high-end storage systems have been automatically stacked for a long time, but this is the first time you can do it at the operating system level. Microsoft uses the HEAT-MAP algorithm to determine which block of data looks most active and automatically moves the "hottest" chunks to the fastest level. You can adjust the setting options to determine when to move data through PowerShell.
Storage positioning (Storage pinning)
It is closely linked to the storage rating, which is the ability to pin the selected files to the specified level. This ensures that the files you want are on the fastest storage, such as boot disks in a virtual Desktop infrastructure deployment and are never moved to a slower memory level. Also, for a relatively short period of time, if you do not use SSDs files, it may be moved to the HDD level.
Writeback cache (write back cache)
To create a new storage capacity in Windows Server R2, you can use the write back Cache. This feature allows you to set aside a lot of physical space, especially on fast SSDs, where write-caching is used to help eliminate the ups and downs of I/O during write-intensive operations. This can be seen as a database scenario in which the content of a large disk may have exceeded the power of the drive controller and used the disk to maintain the condition. This buffer can eliminate any pauses caused by the overwhelmed storage subsystem.
Data deduplication Technology (deduplication on running VMs)
Data removal technology is a good new feature in Windows Server R2, but the only drawback is that you cannot delete a running virtual device. This limitation has been addressed on Windows Server R2. In other words, this new feature can greatly improve the overall performance of data deduplication in a VDI deployment. The plus benefit is that deduplication data technology greatly improves the startup performance of virtual desktops. In addition to storing VMS on SMB3.0, Microsoft particularly recommends using an extended file server on Windows Server 2012 or Windows Server R2.
Parallel reconstruction (Parallel rebuild)
Rebuilding a disk that lacks a RAID array is time-consuming and requires a large number of physical disk deployments, and it takes more than a "long" word to rebuild a drive system! Microsoft solved Chkdsk's lengthy check problem in Windows Server 2012, reducing scan time and individual disk repair time. Windows Server R2 Adds a new feature--a parallel rebuild of failed storage spaces drives, which saves a lot of time. TechEd's professional demonstration shows that rebuilding a 3TB disk takes less than one hours.
Working environment (Workplace Join)
Windows Server R2 announces the need to incorporate personal devices like an ipad into an enterprise environment. On the simplest level, it is a new alternative to Web applications that allows you to provide secure access to your intranet sites, including SharePoint sites, for any authorized user. In one step, it is a new feature called workplace join that allows users to register their devices and be certified through Dynamic Directory (active directories), single sign-on to enterprise applications and databases. Standard tools like group Policy can control conditional access on a personal or organizational basis.
Multi-tasking VPN gateways (multitenant VPN gateway)
Microsoft has added many new features that provide secure communication between the upper and lower premises. The new multi-tasking VPN gateway allows you to connect to multiple external sites using point-to-point via a separate VPN interface. This capability is both for managed service providers and for large organizations that are connected to multiple sites or external organizations. In Windows Server 2012, each point-to-point network connection requires a separate gateway, which can adversely affect cost and ease of use when more connections require a single application. But rest assured, Windows Server R2 has overcome this limitation.
Windows Server Essentials Role (Windows Server Essentials role)
While this may not sound surprising, it has the potential to make life easier for us, especially for the geographically distributed network of organizations. (In fact, Microsoft Windows Server 2012 has four versions: Foundation, Essentials, Standard, and datacenter, respectively, for enterprise users of different sizes.) Install Windows Server 2012, you have to use a completely different installation resource for WSE. For large organizations, this can affect the distribution strategy and structure management. The WSE role can also perform other functions in Windows Server R2-including BranchCache, DFS namespaces, remote server Administration tools, which are typically used in remote office settings.
WS-R2 Hyper-V Highlights
The next version of Hyper-V brings the "Gen2" VMs to learners and cleanup staff, which is equivalent to the improvements on Hyper-V, faster dynamic migrations, online VM output and cloning, and more. To learn more about the new features of Hyper-V, click Read Great new features in Windows Server R2 Hyper-V.
This article for CSDN compilation, without permission not reproduced, if necessary reprint please contact market#csdn.net (#换成 @)
10 great new features in Windows Server R2
|
OPCFW_CODE
|
Does Level IV Multiverse/Ultimate Multiverse contains 'impossible worlds'? Does it contain universes with sets, structures, or systems that exist beyond spacetime, duality, or existence and nonexistence? Does it contains universes with different laws of logic or metaphysics than ours? Does it contain universes with wholly alien or incomprehensible concepts, or contains impossible worlds?
closed as off-topic by knzhou, John Rennie, Jon Custer, Qmechanic♦ Feb 15 '17 at 20:25
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "We deal with mainstream physics here. Questions about the general correctness of unpublished personal theories are off topic, although specific questions evaluating new theories in the context of established science are usually allowed. For more information, see Is non mainstream physics appropriate for this site?." – knzhou, John Rennie, Jon Custer, Qmechanic
The Level IV Multiverse is meant to contain all universes which can be described by different mathematical structures, according to Wikipedia. So it certainly contains universes that might have "totally alien or incomprehensible concepts", or universes with "different laws of logic or metaphysics" as long as those laws of logic constitute a system of mathematical structures. This also means that spacetime, existence, duality, and what have you may not be a thing in these alternate universes, as long as, of course, the laws that do govern the universe can be described by various mathematical structures. However, the definition of impossible worlds you seem to be using (you linked to this article) seems to imply that the impossible world cannot be described by a consistent set of logical, mathematical laws, which contradicts our original definition of the Level IV Multiverse. So (if I understand your definition correctly) I would say impossible worlds aren't included in the Level IV Multiverse.
Hope this helps!
Some caveats below:
1) Equating a formal system (theory) to a universe is imprecise, because most formal systems have an infinite number of different structures that satisfy their axioms and theorems. This is related to the fact that most formal theories are incomplete (Godel), and they can be completed in an infinite number of ways. But in order to complete a theory you need to assume an infinite number of axioms, and this is not something that can be described in a finite way. So it would be more precise to equate a multiverse to a complete theory, and thus to a single mathematical structure.
2) But what is a structure? The problem is that any theory (complete or not) can also be described in infinitely many different ways. For instance, you can chose a set theoretical description, and thus everything are sets. Or you can use an equivalent description based on category theory, and then all you have are collections of objects and arrows. Thus, do you have a different multiverse for sets, categories, etc, even if they represent the same theory? You should perhaps fix this ambiguity by equating a multiverse to a single abstract structure, a structure that is not made of sets, points, numbers, triangles or anything specific but that however can be represented by any of them.
To conclude, what is a valid multiverse? I do not know.
UPDATE: I just read mag tegmark paper for more details. He restricts his multiverse to Computable structures (whose relations are defined by halting computations), and he states that a structure, or a distinct multiverse, is actually the class of equivalence of equivalent computable structures. Thus only finitist universes qualify.
That means that he avoids problem (1) by restricting the kind of formal systems that have multiverses. For instance, using his definition any theory that contains Peano arithmetics does not qualify as a multiverse because it is incomplete, or non computable. Triangles, for instance (if they live in the real plane), do not exist, only pixelated ones do.
He also tries to avoid problem (2) by stating that all formal systems that are computationally equivalent correspond to the same multiverse. This is not as intuitive as it seems. Different formal systems describing the same structure differ on what is considered a "basic element" and what is considered a "relationship" between these elements. For instance, a given multiverse can be described by different turing machines, all computing the same equivalent class of structures, but each machine differing on what is the number of allowed alphabet letters, internal states, and transition rules.
To conclude, each multiverse does not correspond to what we intuitively think of a mathematical structure (for instance, the real plane): A single multiverse is a more abstract step from it that includes all structures equivalent to it by means of a computation (that is, any transformation, or "re-packaging", of equivalent basic elements and relationships that can be made in a finite number of steps)
In this sense, for instance, there is no such thing as a multiverse made of "triangles", or of any other specific mathematical structures that you are used to think. In the same way, any physical theory that you can propose correspond to an individual multiverse, but that same individual multiverse can come from an infinite number of different theories. The relationship between theory and multiverse is not one to one.
I am not sure how to answer this because I think the type IV many worlds is not well formulated. The observable universe has a range of mathematical descriptions, from symplectic geometry for classical mechanics, Riemannian geometry for general relativity, Hilbert space for quantum mechanics, stochastic modelling for various processes and so forth. We might then ask about something like economics, a subject that so far does not have a single mathematical description. We might then ask whether there are multiple worlds with different maths for economics, and we are groping around to find the one optimal for "our world." I am not so sure about that kind of proposition.
In the Mathematical Universe Hypothesis of Tegmark there may exist alternate worlds that obey mathematics we may not even yet know about and which do not apply in the observable world. The problem with this is that it is hard to know how to make mathematics, a subject that involves theorem-proofs, connect this directly with physics that is ultimately an empirical science. Also mathematics is open and almost (or absolutely) infinitely variable, while physics that is really constrained by observation is not so "liberal."
If there are alternate universes then it is not likely they are self-contradicting however. That does not seem to make sense. They might though in many cases be Godelian or highly self-referential and not easily understood by any means.
|
OPCFW_CODE
|
The Tree Notation system of writing consists of cells and symbols and cellSizes and trees and treeMachines and programs and grammars.
A cell contains a symbol. A symbol can have many forms, but can always be reduced to a simple number.
A cell has a finite number of possible symbols it could contain. A cell can only contain one symbol at a time.
A symbol could be a dot, or a number, or even a long word. The word “word” is sometimes used interchangeably for symbol.
Cells have a cellSize and are recursive. You can expand the domain of a cell by increasing or decreasing the cellSize. The cellSize can be measured in bits. The most basic cell is the binary cell. It has 1 bit of information.
Tree Notation is a 2-dimensional notation. Cells have a height and a width. The smallest cellSize is 1 bit tall and 1 bit wide. A traditional 1-D 8-bit register can be thought of as 1 bit tall by 8 bits wide.
An engineer can define the cellSize. The engineer can design their machine to combine 64 bits into an 8 by 8 square register for a cellSize of 8x8. The cellSize is limited to the hardware, or in the software realm with a software defined cell delimiter cellSize can be virtually unlimited.
By increasing the cellSize, you can define new symbols. Because you can define cellSizes with a height and a width, theoretically you could create 2-D registers and have encodings that replace encodings like UTF-8 where the letter “A” would look like the letter A if you had a powerful microscope that could zoom in on the registers in a cpu.
A Tree is a tuple consisting of an array of cells (aka a “line”) and/or an array of child Trees. Cells do not contain a notion of parents or children or a line of cells. When you add those concepts you get Trees.
A TreeMachine is a physical system that can hold the contents of the Tree. A grid of transistors or lightbulbs or a piece of papyrus are all valid TreeMachines. A machine may or may not have computing abilities.
A program is the symbol values of the Tree in the Machine.
A Machine has physical limitations on the programs it can contain. If a program cannot physically be represented on some Machine it is not a valid program for that machine.
Grammars can put further artificial restrictions on what programs are invalid, even if it were a valid program on a machine.
Tree Notation can be represented in 1-dimension by defining 3 symbols:
Using just these rules of notation, or syntax, all languages, from the simplest to the most complex, can be built in a straightforward manner.
Tree Notation is one specific member in a larger general class of “spatial notations” which we do not define here but may discuss later.
|
OPCFW_CODE
|
"""Example script to download files from an item list,
for example, to download one audio file and the TextGrid
annotations for data from the Austalk corpus
To run this script you need to install the pyalveo library
which is available at (https://pypi.python.org/pypi/pyalveo) for
installation with the normal Python package tools (pip install pyalveo).
You also need to download your API key (alveo.config) from the Alveo web application
(click on your email address at the top right) and save it in your home directory:
Linux or Unix: /home/<user>
Mac: /Users/<user>
Windows: C:\\Users\\<user>
The script should then find this file and access Alveo on your behalf.
"""
import os
import pyalveo
from pprint import pprint
# this is a shared item list with a sample of Austalk files that
# contain TextGrid annotations, change this URL to your own item
# list to download different data
itemlist_url = "https://app.alveo.edu.au/item_lists/1045"
# directory to write downloaded data into
outputdir = "data"
if __name__=='__main__':
client = pyalveo.Client(use_cache=False)
itemlist = client.get_item_list(itemlist_url)
if not os.path.exists(outputdir):
os.makedirs(outputdir)
print("Item list name: ", itemlist.name())
for itemurl in itemlist:
item = client.get_item(itemurl)
meta = item.metadata()
speakerurl = meta['alveo:metadata']['olac:speaker']
speakerinfo = client.get_speaker(speakerurl)
print("Item:", meta['alveo:metadata']['dcterms:identifier'])
# write out to a subdirectory based on the speaker identifier
subdir = os.path.join(outputdir, speakerinfo['dcterms:identifier'])
if not os.path.exists(subdir):
os.makedirs(subdir)
for doc in item.get_documents():
filename = doc.get_filename()
if filename.endswith('speaker16.wav') or filename.endswith('.TextGrid'):
print('\t', filename)
doc.download_content(dir_path=subdir)
|
STACK_EDU
|
I am currently trailing Honeycode and so far I am liking it. One thing I cannot get my head around is how I would create a "Login Screen". So for example, I have created a "Register" screen which successfully inserts the entered data into a table and I can see it there. But I want a Login screen that a user would enter their username and password and on Login button click the automation would "validate" that the users information is there.
The issue is I can only see automation for adding, updating, overwriting and deleting data, but no "validate and move on".
Any ideas on this one please?
Thanks in advance,
I would like to know this also, hit a brickwall right from the get go
Hello Gavin and John, welcome to Honeycode.
What I understand is that you want to create a login screen on an application, that you share with people; perhaps you want to prevent access to the app or to certain potions of the app. There are a couple of ways I can think of doing this, but first a few basics.
- Each app has a $SYS_USER, which equals the user who is currently using the app (you can think of it as a session variable). You can always refer to it in your formulas.
- It is important to know that you have given full access to the app to a user when you have shared that app with that user; in other words they have access to all the features and buttons that you have made available in the app (unless you use the display capability to hide them).
- The display attribute of buttons, data fields, etc.. does not make the information secret it just hides it from view. If the data appears on the screen and display=False, the data is not shown but the data is sent to the client and hidden there. This means it is practically hidden or secret but not truly (if you are in a web app you could technically access the data at the HTML level or pull it from the API).
With that said you can create quite elaborate applications that behave differently for different people. Here are some places to start:
a) You may not need a username or even a password field, but if you do feel you need them you can always store that information in a table, and when user clicks Login button, you can enable or disable a "login" or "enter" button when the data in those two fields is valid.
b) Do not need a login at all but hide (using display =FALSE using a formula) certain buttons and fields from certain users. This is helpful if you have a list, for example, that is the same for administrators and for regular users, with the exception that administrator get to see one extra field, or they have an edit button whereas the rest of the users don't.
c) You can remove a screen (or more) from the app navigation so certain people do not have access to those screens. Then you create a button or text box with with Action=Navigate to hidden screen AND display attribute = a formula that returns FALSE for most people and TRUE only for the administrators for example.
Examples of Display
=IF($[SYS_USER] IN $[InputRow][Editors][Name],TRUE,FALSE) - can show a button only to editors
= FILTER(EditorsTable,"EditorsTable[Editor]=%",$SYS_USER)>0 - there is at least one row where SYS_USER is listed in the editors table.
You can also look at these articles for more ideas:
Hope you like this answer, let us know if you need more direction.
|
OPCFW_CODE
|
Allow no-cors on fetch request
I was looking for a solution to my problem and found the following issue:
https://github.com/shaka-project/shaka-player/issues/1286
I think I have indeed found a use case for such a request.
I have an HLS manifest in which segments don't have any extension.
Then, as shown in code below, the parser does a HEAD request to retrieve the mime type:
https://github.com/shaka-project/shaka-player/blob/678bf2524dae237ca6ec08db35e50e26598d4e63/lib/hls/hls_parser.js#L3754-L3761
And in my case, a CORS error happens.
Since I don't have any control on the manifest and I don't care about the anwser (only the response headers matter), a no-cors fetch request would certainly solves my issue.
What do you think?
So what you propose is to use no-cors for HEAD requests? If you can share the manifest with me so I can investigate the issue and provide an appropriate solution. Thanks!
I have been looking and it is not possible, in no-cors mode we do not have access to the headers, so we can not detect the mimetype correctly.
Oh, you're totally right.
I knew I wouldn't be able to access the response's body, but I thought the headers would be OK.
Thanks for the quick answer.
In fact, @avelad, I'd like a little help here, if you can.
When I load my manifest in your demo player, a GET request is done, as you can see below:
https://shaka-player-demo.appspot.com/demo/#audiolang=fr-FR;textlang=fr-FR;uilang=fr-FR;asset=https://amg01074-fueltv-netgem.amagi.tv/playlist/amg01074-fueltv-netgem/ad15140a-8f80-11ee-89ab-3a7f5f935650/27/1920x1080_6503200/index.m3u8;license=https://dev-vodapi.videofutur.fr/apirest/drm/licenseApplication?token=GCmkQphA8QAMeDKT3f1Qw76ewof2Ggu8KZSXriCTQJWt9oMY%2FV3yIQ%3D%3D&drm=widevine;panel=HOME;build=uncompiled;vv
But in my own app, it's a HEAD request:
What's the explanation?
Thanks.
I'm using the latest version of the player (v4.6.3).
The difference is that when you provide the master playlist, since we have all the information, we only need to make a HEAD to obtain the contentType, but when you provide a half playlist we have to obtain all the information, so we download the entire segment.
OK, I understand the "why", but now, I need to make it work. 😄
I also found this issue which is very similar to mine:
https://github.com/shaka-project/shaka-player/issues/3142
And from what I read, it led to a PR that was merged 2 years ago.
But the code has changed a bit since that day.
And now, when the HEAD request fails, it seems like the execution simply stops instead of going through guessMimeTypeFallback_() and fallback-ing to mp4.
Shouldn't const response = await this.makeNetworkRequest_(headRequest, requestType, {type}); be inside a try/catch?
@Robloche I created https://github.com/shaka-project/shaka-player/pull/5964 to fix it
Wow, that was fast!
Thanks a lot.
Sorry to re-open this issue but it seems like the same behavior is needed here as well:
https://github.com/shaka-project/shaka-player/blob/7fd99b7c2663903f3fa3991acfbab3782ddb0e07/lib/net/networking_utils.js#L23-L41
I indeed have another case where the playlist itself doesn't have any extension, and redirects to a a .m3u8 file.
In this case, a HEAD request is made, which fails.
@Robloche I created https://github.com/shaka-project/shaka-player/pull/5986 to fix it
Thank you so much.
|
GITHUB_ARCHIVE
|
Discrete Mathematics: No Longer a Mystery
Even though the discrete portion of this training course paper now is confined to the very first half, it's a course worth taking. Lectures contain only the viewpoint of a single professor that has usually been teaching the exact same thing for many decades, it is not dynamic. It again is a useful subject in the next few years, so don't drop it here.
It distills a great deal about statistics in very few straightforward variables. It's difficult even for a computer to set the potential outcome after a specific phoneme, because of the sheer number of words in a language. These variables are discrete instead of continuous.
The legitimate issue is the character of dualityseparation. Much is known about the overall problem in both dimensional instance, as a result of much activity since 1961. A set is really a mathematical idea, and the manner that we relate sets to one another is known as set theory.
The Hidden Treasure of Discrete Mathematics
The remaining part of the book appears to be a whole lot easier to read, and indeed, it doesn't rely heavily on https://ctsi.ucla.edu/events/ics/595942.ics the very first chapter. The course notes may be used to look up information you might have missed over the course of a lecture, and possibly to supply a slightly different perspective on the material covered in the program. Your paper is going to have some quite very good company (just consider the website in case you don't believe me).
It is not easy to think that how not looking at data is going to assist you! Currently a days the capacity to compose codes has come to be a vital skill for those students from the technical discipline. In any event, it is a computer.
There are lots of posts about skipping the college circulating in social networking. Do a little research on emerging trends in the technologies you like, and build a couple of small projects so that it is possible to acquire some insight into those trends! To be true, both parts would need to be true.
The Subplan can be declared at the exact same time that the major is declared, or it may be declared at a subsequent date if you're already a Mathematics major. The distinction is a rather important one. This method is unconventional as it's the top-down and results-first approach created for software engineers.
Let's look at the way to use binary search to fix the guessing game. The very first and most serious issue with Taubes' book is it isn't really a textbook whatsoever, it is a set of lecture notes. The problem sets are instructive, and frequently wind up teaching new material outside class.
The remaining part of the book appears to be a whole lot easier to read, and indeed, it doesn't rely heavily on the very first chapter. Each chapter includes a comprehensive bibliography for extra reading, which is among the most intriguing details of the book-the author comments on other works and the way in which they have influenced his presentation. Your paper is going to have some quite very good company (just consider the website in case you don't believe me).
There's a little backstory. These solutions hit every one of the requirements. This method is unconventional write my essays as it's the top-down and results-first approach created for software engineers.
The majority of people will tell you 0! Most people however prefer using the expression AI since it sounds cool. Use your common sense and produce a determination on how best to proceed and inform the men and women in charge when available.
The Awful Side of Discrete Mathematics
Discrete Mathematics is the study of structures that are fundamentally discrete instead of continuous. German mathematician G. Cantor introduced the idea of sets. You might require some trigonometry to show that.
For constraint system proofs, the variety of generators required depends upon the quantity of constraints. To put it differently, it yields all the elements which exist within both of the sets. A set is an assortment of unique objects.
The Data Science Venn Diagram shown below is a superb overview of the skills needed for data science. Algorithms are a set of steps a computer requires to accomplish a job. Cumulative Distribution Functions tell us the probability a random variable is under a certain price.
Obviously computers are extremely closely associated with mathematics. Not empower but enforce, so long as it is going to be just challenging to create unreliable software. Another edition is currently offered.
The main aim of FSMis to spell out the workflow a little more declarative, to describe how something should do the job. The easiest means to do so is simply component-wise. The issue of reinventing the wheel each time you wish to address a problem 1.
Applied mathematicians begin with a practical problem, envision its distinct elements, then decrease the elements to mathematical variables. Discrete mathematics is a contemporary area of mathematics that's extensively utilised in business and commerce. Numerical analysis delivers an important example.
It appeared to me that although the science proved we had to change our economic system, the only means to effectively understand that shift would be by altering the mindsets of individuals. You probably wind up thinking about math and physics at the moment. If you begin digging around on the web, you will discover the philosophy of mathematics.
|
OPCFW_CODE
|
Gradelyfiction The Mech Touch webnovel – Chapter 3300: Round of Refits special nauseating recommendation-p3
The Mech Touch
Novel–The Mech Touch–The Mech Touch
Chapter 3300: Round of Refits learned null
The Lobster Fishery of Maine
This came up like a shock to every one of the specialist aviators.
“So, just how a lot difficult will our specialist mechs come to be just after you’re finished?” Venerable Orfan eagerly leaned forward and questioned.
the way of peace they know not
Each and every specialist initial that showed up during the area silently recognized the other but failed to take the initiative to talk.
None were actually from the ambiance to joke around. The Fight of Fordilla Zentra still considered heavily on their thoughts, and a number of them essential added time than others to go back to their ancient selves.
“That’s not just a simple amount of time. What if the dwarves ambush us in the following few days?”
As long as they refined their gets, they can utilize most of the classes they learned in the Bulwark Venture and Chimera Venture with little barrier! This could ultimately result in a precise functionality difference between the earlier set of experienced mechs and the ones that had still to always be designed!
The moment the specialist pilots all loved a few days to relax and operation their conflict activities, they obtained together inside a assembly bedroom situated in the higher decks of your Character of Bentheim.
Venerable Jannzi aimed a severe glimpse towards Ves. She was truly impatient to initial a new and better s.h.i.+eld of Samar, but her upcoming goals and objectives relied an excessive amount of on piloting a powerful professional mech on her to encourage Ves to help make haste.
“Girls. Gentlemen. I’m pleased to determine you’re all healthy and balanced and full of life.” Ves started since he sat down with the top of your head in the table. “The previous conflict is taking a toll on most of you. Well before we handle the products on the goal, i want to apologize for your needs first. Each of you experienced to address in undesirable circ.u.mstances due to the a lot of blunders and misjudgements we made. This struggle might have been warded off. Regardless if we performed wind up combating the dwarves anyhow, I will have no less than ensured that Joshua and Jannzi already received their expert mechs.”
Ves made a decision to give her a bone tissue. It wouldn’t caused by forget about the MVP on the survive conflict and also their most effective specialist pilot at this moment.
None ended up inside the mood to laugh approximately. The Challenge of Fordilla Zentra still considered heavily in their intellects, and a few of them desired a longer period than the others to return to their outdated selves.
Nevertheless his target audience arranged together with his feeling, there are still a couple of queries.
Venerable Jannzi guided a significant glance towards Ves. She was truly impatient to aviator a new and improved upon s.h.i.+eld of Samar, but her potential goals and objectives depended excessive on piloting a highly effective skilled mech on her behalf to desire Ves to produce haste.
The others were actually fascinated likewise. Of your four professional mechs, exactly the Amaranto was still in quality situation. Others were not capable of displaying the same optimum point overall performance as well before!
“We are familiar with that, and we also intend to make a move about this, just not now. Refitting a masterwork skilled mech is far more hard than refitting a typical specialist mech. We can’t accomplish a lot of alterations right away as well as their high quality should comply with exactly the same typical of good quality because your pre-existing machine.”
Immediately after quite a few a few minutes of silence, Ves, who has been still sporting his Endless Regalia, finally joined the bedroom with a handful of his recognize shield.
“How much time does it decide to try layout the refit? When it will take more than a month.. then perhaps you need to postpone it for in the future.”
Despite the fact that his crowd predetermined regarding his feeling, there was still a few problems.
None were definitely within the state of mind to laugh all around. The Fight of Fordilla Zentra still weighed heavily on their heads, and some of them desired more hours as opposed to others to go back to their classic selves.
“I thought the types of our pro mech are already as nice as you could make them. How can you fit anything more effective?”
“In truth, the work on our destroyed pro mechs might take more time than a few weeks. Gloriana and so i are looking at carrying out more than send them back for their unique circumstances. The Challenge of Fordilla Zentra has clearly confirmed the perfection of these products, but it has also exposed very clear vulnerabilities which we can conveniently street address considering that our company is conducting comprehensive work on them in any case.”
Jannzi sneered but she didn’t speak up, that was an extremely delightful final decision to Ves and the many others. Everyone already believed her position and she didn’t need to replicate them when in front of this small, and shut down viewers.
Even if this level of progress did not noise ground-breaking, it absolutely was nonetheless a substantial enhance that could definitely change lives in battle. In circ.you.mstances where both combatants were definitely roughly even, an improvement of 20 percent could definitely bring about a position where Larkinson pro mech could earn 9 out of ten times!
If she simply had to wait a few months lengthier to have her dream unit, so whether it is! The strength she would profit from piloting a great professional mech would over compensate for the possible lack of chance to training that has a effective appliance!
“I’m not enthusiastic about browsing through this apology theater.” Venerable Orfan reported. “What I wish to know is how soon you’ll be capable to correct my Riot. It’s damaged! In the event the dwarves ambush us yet again, I sure as h.e.l.l don’t need to deploy in s.p.a.ce with just one single undamaged limb in my skilled mech!”
The recent conflict acquired taught Ves and the other Journeymen of your Larkinson Clan considerably about every one of the complicated combat issues that skilled mechs was required to deal with. The majority of them were required to change their a.s.sumptions, which brought on these phones alter their brains about several of the alternatives they used on the finished specialist mech layouts.
The Mech Touch
Every skilled pilot that turned up inside the compartment silently recognized one another but failed to make the effort to talk.
“Does that count up for our professional mechs also?” Venerable Joshua expected with a hopeful tone of voice.
“So how significantly difficult will our experienced mechs end up soon after you’re done?” Venerable Orfan eagerly leaned forward and requested.
“You can try much to update its defenses.” Venerable Stark stated. “It’s a good reason why I wasn’t capable of accomplish at my ideal for a great deal of the combat. If my mech was as durable being the Black Zephyr, those two Slug Ranger specialist mechs wouldn’t have hounded me for so long.”
beyond the primordials
“That’s not just a simple amount of time. What if the dwarves ambush us in the next couple of days?”
Ves smiled on the specialist aviators. He predicted that to get the earliest topic that would come up today.
“For which reason?”
|
OPCFW_CODE
|
… and away from it. Well, sorta; we were told to FOAD by Jeff.
Whoops. Sorry I’ve not responded. I’ve just pulled the latest git code so I’ll see if the try/catch solves the problem and watch the update log for a few days. FYI I use FreeBSD, so yes I know it’s not very mainstream, but it is still server tier
Yeah what may have been helpful in this case is if the feed and maybe the entry that crashed was shown in the log. Because the crash happened before any logging it’s difficult to see which one caused it. But thanks for reminding me about the feed debugger. I’ll try that on each feed one at a time if I still get the same problem.
Just had the error happen again after one week where all feeds stopped updating except ISP Review. Which just happens to be mentioned in the log again before the crash exactly the same as last time. Though when I went into the feed debugger with f+D and forced a refresh it all loaded without a problem. And now everything is working fine again. So it seems whatever caused the problem is then fixed by forcing a refetch/rehash.
[07:30:57/55805] Base feed: http://www.ispreview.co.uk/index.php/feed
[07:30:57/55805] => 2019-03-05 06:51:17.549093, 56 2
PHP Fatal error: Uncaught TypeError: Argument 1 passed to iterator_to_array() must implement interface Trav
ersable, null given in /usr/www/ttrss/vendor/andreskrey/Readability/Nodes/NodeTrait.php:324
#0 /usr/www/ttrss/vendor/andreskrey/Readability/Nodes/NodeTrait.php(324): iterator_to_array(NULL)
#1 /usr/www/ttrss/vendor/andreskrey/Readability/Nodes/NodeTrait.php(421): andreskrey\Readability\N
#2 /usr/www/ttrss/vendor/andreskrey/Readability/Readability.php(1270): andreskrey\Readability\Node
#3 /usr/www/ttrss/vendor/andreskrey/Readability/Readability.php(1166): andreskrey\Readability\Read
#4 /usr/www/ttrss/vendor/andreskrey/Readability/Readability.php(155): andreskrey\Readability\Reada
#5 /usr/www/ttrss/plugins/af_readability/init.php(188): andreskrey\Readability\Readabi in /usr/www/ttrss/vendor/andreskrey/Readability/Nodes/NodeTrait.php on line 324
congrats, instead of dumping the database, saving the XML somehow, or at least doing something to help us reproduce it, you decided to post the exception again. well done.
arch users, ladies and gentlemen. again and again.
Hi, new TTRSS user here
Imported my feeds via OPML from another reader. Also getting the Readability error mentioned
The initial update ran fine with readability on for all feeds. Subsequent feed updates started triggering the issue.
Most recent example is from The Register. Was working great for the whole 4 days I’ve had TTRSS, but just now bombed out. I disabled Readability for the Register and let the feed update run, and there was just 1 new article which was this one:
So is there something within this article which is screwing Readability? The atom feed is here
I turned Readability back on for the Register and it hasn’t bombed out
https://github.com/andreskrey/readability.php/issues/79 where Andreskrey says “Maybe you can put a breakpoint before triggering Readability and dump the HTML content?” is this possible? I’m not a dev but happy to dump my DB or whatever is needed
tldr: please report issues with readability to readability developers.
you didn’t even think to specify what php version on what platform you’re running in your largely useless “me too” post, i’m not going to waste a week spoonfeeding you because of a third party library i didn’t write nor support. you’ll have to do your homework yourself.
anyway, new rules for this issue:
- if you run into it and can figure out why it happens, submit a PR, preferably to developers of readability, but if its a tt-rss problem, to me. i don’t know how this could be a tt-rss problem since all its doing is passing XML to the class but whatever, anything is possible.
- if you want to bump this thread with a “me too”, the only thing you’ll get is a probation
i’m not wasting any more time on this.
when I went into the feed debugger with f+D and forced a refresh it all loaded without a problem. And now everything is working fine again.
i have the same problem
can u teach me how to ‘’ feed debugger with f+D ‘’ to fix the problem? thx a lot.
It means fetch the feed using debug mode. You go into the feed and then press the f and shift-D keys. The feed is successfully fetched and processed then. My guess is there’s something slightly different in the code paths between the main feed updater and the debug mode.
I am now agreeing with fox though. I took a look at the code and can see that he’s simply importing a 3rd party library and so this needs solving by the person that wrote the library. Unfortunately I can’t reproduce it in a way where I can just provide a broken feed. Because as I said, it breaks, you fetch the feed another way, and then it works fine for a week before maybe breaking again.
I have worked around this now by reverting the commit that upgraded the library and I’m rebasing the old version on top of any new commits. If the library gets upgraded again then I’ll test it. In the meantime, like fox, I’ve lost interest in caring about it.
btw actual readability library is now moved to the plugin so it’s possible to make af_readability_old or something and use that instead, i’ve made that change with this particular issue in mind.
Ahh that’s useful. Yes I’ve just done this instead. Created plugins.local/af_readability_old and then removed my revert with a reset --hard. Seems to work If the library gets upgraded again in the future I’ll retest it but until then this will do.
|
OPCFW_CODE
|
May 22nd, 2017
If you’ve done any writing on a computer, you’re familiar with fonts and font sizes. Most writing apps have a control to choose a font and font size. Many Mac apps have an inspector bar to choose the font size, which you can see in the following screenshot:
In the screenshot the font size is 13. What does the 13 represent?
13 is the size of the font in points. 72 points equal 1 inch. If you made the font size 72, each line of text would be 1 inch high. 13 point text is slightly larger than one sixth of an inch.
What’s the right font size for the body text of your book? It depends on the font you’re using and whether you’re publishing a print or electronic book. One font may be as big at 10 points as another font is at 11 points.
Print books can use smaller font sizes because you hold print books closer when reading. 10-12 point text works well for print books. Electronic books need larger text: 12-18 point text. You may need to experiment to find the best font size for your book.
May 5th, 2017
I heard recently that Microsoft released a preview version of their Visual Studio development tool for Mac and released .NET Core, a cross-platform version of the .NET framework software developers use to develop Windows applications. I decided to download the Visual Studio preview to see if it was possible to make an application that would run on both Mac and Windows. If I could, I would look into making Tome Builder for both Mac and Windows.
After installing the Visual Studio preview, I became disappointed. Visual Studio for Mac is built for creating mobile and Mac applications, not cross-platform desktop applications. The .NET Core framework is built for making code libraries that applications use, not for developing applications.
So there isn’t going to be a cross-platform version of Tome Builder in the foreseeable future. At least I was able to fail in an hour.
April 12th, 2017
Right now I’m working on improving Tome Builder’s code architecture. I was working on adding support for print PDF books by creating a table of contents and improving the look of published books. But I was having trouble because the code was a mess. So I’m taking a step back, improving my architecture and cleaning up my code. I hope taking this step back will pay off in the future.
March 20th, 2017
Brake means to stop or to slow down.
Press the brake pedal to stop the car.
Break has multiple meanings. In most cases if you mean something besides stopping or slowing down, you want the word BREAK.
Break a leg. Break the vase. Take a break when you feel tired.
March 14th, 2017
Use the word THAN when making comparisons.
She wrote more than 20 books. It's better to be safe than sorry.
Use the word THEN to refer to the next step in a sequence.
I ate breakfast then brushed my teeth. He flew to Chicago then drove to Milwaukee.
|
OPCFW_CODE
|
StackExchange.Redis ListRightPop not waiting for result
I'm trying to write a producer/consumer system using Redis in C#. Each message produced must be consumed by only one consumer, and I want the consumers to wait for elements created by the consumer. My system must support many produced/consumer sets.
I am using StackExchange.Redis to communicate with Redis, and using lists where elements are added using ListLeftPush and removed with ListRightPop. What I am experiencing is that while the ListRightPop method should block until an element exists in the list (or after a defined timeout), it always returns automatically if there are no elements in the list. This is the test code I wrote to check this:
IDatabase cache = connection.GetDatabase();
Trace.TraceInformation("waiting "+DateTime.Now);
var res = cache.ListRightPop("test");
Trace.TraceInformation("Got "+res+", Ended" + DateTime.Now);
And I'm getting a nil result after less than 1 second.
The standard pop operations do not block: they return nil if the list is empty or does not exist.
SE.Redis is a multiplexer. Using a blocking pop is a very very bad idea. This is explained more, with workarounds discussed specifically for blocking pops, in the documentation: https://stackexchange.github.io/StackExchange.Redis/PipelinesMultiplexers
As usual, it's best to go straight to the source.
Marc - thanks for the pointer and for the great framework! Will try this out. Still, does this means that we perform two round trips to the cache server? Greatly reduces the performance of the solution :-(
@vainolo no, it doesn't mean that - especially if you "fire and forget" the broadcast. They pipeline very nicely. As for pub/sub generally: it is astonishingly fast - it is how we power our web-sockets server for live updates here on stackoverflow, which has 6 digit numbers of connected clients.
@MarcGravell Hi. The link is broken . However there is a situation where I do need blocking BRPOP. So like the OP asked , how can I block BRPOP ? ( or other alternative)?
@Royi fixing link
Marc , if so , what is the alternative to block BLRPOP ? I need to wait for a publish , but dont wont the fire-and-forget of pub/sub...
@Royi blocking operations basically kill scalability. Nothing can use the pipe while that happens. That might be fine for a single threaded server, in which case: maybe try with Execute - but: it will be bad for anything highly concurrent like a web server. If you are in the web server scenario, maybe "poll with pub/sub as a fast wake-up" (so there is always a fallback in case pub/sub fails)
Marc Thanks for the replies.So according to the docs is this the right approach ? But this leaves me with a problem. What if a publish was made before channel has been subscribed ?
@RoyiNamir I absolutely would not be subscribing per-login here, so there is no "before" in my mind. To me, the channel would be generic and it is the payload that would be specific to the login - i.e. publish "channel" "somevalue" - then the consumer can activate either because they saw the publish with their value (no extra subscription here, note - just tracking "who is outstanding") - or because they hit some time limit and double-checked the data store (list, set, whatever you chose to use to define membership)
Marc , sorry for my stupidity : but if users do login to an Asp.net server which needs redis for sending command to other server for check that login --- does each request thread would do subsribe(channel)? and how would john's login request would know that the publish he has just recieved is his ? According to what you say, John's login request will also have to investigate other login responses just to check if it's his. am i wrong here? In other words - how would i pause john's aspnet request thread until his login response from redis arrives?
@RoyiNamir "how would john's login request would know that the publish he has just recieved is his" - by the content of the result; but more generally: there would be only one bit of code handling those responses. The exchange here is pretty odd, though - I'm pretty sure this isn't how I would design a login system in the first place... but if I had to do this - probably something like https://gist.github.com/mgravell/8e13d0bc98a3c9152ebd6a2959c8a3d4 (pseudocode)
@MarcGravell After reading the reply and doing some thinking - I think i've managed to do this completely async + john request <--> response . Is that a viable solution ?
@RoyiNamir that looks like a slow leak to me; who is unsubscribing? when?
Let us continue this discussion in chat.
StackExchange.Redis is merely hitting the redis server's exposed API, the relevant method of which is BRPOP in your case. The documentation for that is:
http://redis.io/commands/blpop - blocking left pop
http://redis.io/commands/brpop - blocking right pop
While those methods do describe the blocking behavior you are looking for, I believe SE.Redis ListRightPop is calling
http://redis.io/commands/rpop - right pop
I may not be up to the latest SE.Redis package, but intellisense is not giving me an option to supply a timeout like you claim. Additionally, there does not appear to be any methods starting with .List in the IDatabase interface that has the word "block" in it, so I'm not sure SE.Redis exposes a Redis BRPOP API. You can either write your own or ask Marc Gravell nicely, but this is a pretty big request I think because of the blocking nature of the call and the way the multiplexer works.
Good answer. Purely for your interest, blocking pops are discussed at length here: https://github.com/StackExchange/StackExchange.Redis/blob/master/Docs/PipelinesMultiplexers.md
|
STACK_EXCHANGE
|
Behavioral science and machine learning have rapidly progressed in recent years. As there is growing interest among behavioral scholars to leverage machine learning, we present strategies for how these methods that can be of value to behavioral scientists using examples centered on behavioral research.
This is a preview of subscription content, log in to check access.
Buy single article
Instant access to the full article PDF.
Price includes VAT for USA
Subscribe to journal
Immediate online access to all issues from 2019. Subscription will auto renew annually.
This is the net price. Taxes to be calculated in checkout.
Agarwal, A., Gans, J., & Goldfarb, A. (2018). Prediction machines. Harvard Business Review Press.
Anand, P., Thomas, M., Pillai, K. G., Meng-Lewis, Y. (2020). Linguistic analysis of psychological distancing: reading between the lines for unexpressed bad news. Working paper.
Aral, S., & Nicolaides, C. (2017). Exercise contagion in a global social network. Nature Communications, 8, 1–8.
Ascarza, E. (2018). Retention futility: Targeting high-risk customers might be ineffective. Journal of Marketing Research, 55, 80–98.
Bengio, Y. (2019). The consciousness prior. Working paper.
Berger, J., & Milkman, K. L. (2012). What makes online content viral. Journal of Marketing Research, 49(2), 192–205.
Berger, J., Humphreys, A., Ludwig, S., Moe, W., Netzer, O., & Schweidel, D. (2019). Uniting the tribes: Using text for marketing insight. Journal of Marketing Articles-in-Advance.
Blanchard, S., Dyachenko, T., & Kettle, K. (2020). Locational choices: Studying consumer preference for proximity to others. Journal of Marketing Research forthcoming.
Bollinger, B., Gillingham, K., Kirkpatrick, J., Sexton, S. (2019). Visibility and peer influence. Working Paper.
Brewer, M., & Weber, J. (1994). Self-evaluation effects of interpersonal versus intergroup social comparison. Journal of Personality and Social Psychology, 66, 268–275.
Chakraborty, I., Kim, M., Sudhir, K. (2019). Attribute sentiment scoring with online text reviews: Deep learning and accounting for attribute self-selection. Working Paper.
Chaney, A. J. B., Stewart, B. M., & Engelhardt, B. E. (2018). How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In Proceedings of the 12th ACM Conference on Recommender Systems (pp. 224–232).
Chapelle, O., Schölkopf, B., & Zien, A. (Eds.). (2006). Semi-supervised learning. Cambridge: MIT Press.
Dzyabura, D., & Yoganarasimhan, H. (2018). Machine learning and marketing. In D. Hanssens & N. Mizik (Eds.), Handbook of marketing analytics: Methods and applications in marketing, Public policy, and litigation support. Cheltenham: Edward Elgar Publishing.
Etkin, J. (2016). The hidden cost of personal quantification. Journal of Consumer Research, 42, 967–984.
Fedus, W., Gelada, C., Bengio, Y., Bellmare, M., Larochelle, H. (2019). Hyperbolic discounting and learning over multiple horizons. Working paper.
Fishbach, A., & Touré-Tillery, M. (2013). Goals and motivation. In R. Biswas-Diener & E. Diener (Eds.), Noba textbook series: Psychology. Champaign: DEF Publishers.
Gordon, M., Althoff, T., & Leskovec, J. (2019). Goal-setting and achievement in activity tracking apps: A case study of MyFitnessPal. WWW, 13-17(2019), 1–12.
Guo, T., Sriram, S., Manchanda, P. (2019). The effect of information disclosure on industry payments to physicians. Working paper.
Hagen, L. (2020). Pretty healthy food: How and when aesthetics enhance perceived healthiness. Journal of Marketing (forthcoming).
Hartford, J., Wright, J. R., & Leyton-Brown, K. (2016). Deep learning for predicting human strategic behavior. Thirtieth Annual Conference on Neural Information Processing Systems, 1–9.
Hui, S., Bradlow, E., & Fader, P. (2009). Testing behavioral hypotheses using an integrated model of grocery store shopping path and purchase behavior. Journal of Consumer Research, 36, 478–493.
Kettle, K., Trudel, R., Blanchard, S., & Häubl, G. (2016). Repayment concentration and consumer motivation to get out of debt. Journal of Consumer Research, 43, 460–477.
Kim, S. Y., Lewis, M., and Wang, Y. (2019). Physical store openings and product purchase and return behaviors: A quasi-experimental approach using the causal Forest method. Working Paper.
Knäuper, B., Carriare, K., Frayn, M., Ivanova, E., Xu, Z., Ames-Bull, A., Islam, F., Lowensteyn, I., Sadikaj, G., Luszczynska, A., Grover, S., & McGill CHIP Healthy Weight Program Investigators. (2018). The effects of if-then plans on weight loss: Results of the McGill CHIP healthy weight program randomized controlled trial. Obesity, 26, 1285–1295.
Liu, L., Dzyabura, D., & Mizik, N. (2019). Visual listening in: Extracting brand image portrayed on social media. Marketing Science Articles-in-Advance.
Lockwood, P., Wong, C., McShane, K., & Dolderman, D. (2005). The impact of positive and negative fitness exemplars on motivation. Basic and Applied Social Psychology, 27, 1–13.
Lu, S., Xiao, L., & Ding, M. (2016). A video-based automated recommender (VAR) system for garments. Marketing Science, 35, 484–510.
Lu, T., Bradlow, E., Hutchinson, W. (2017). Binge consumption of online content. Working paper.
Netzer, O., Feldman, R., Goldenberg, J., & Fresko, M. (2012). Mine your own business: Market-structure surveillance through text mining. Marketing Science, 31, 521–543.
Nevskaya, Y., & Albuquerque, P. (2019). How should firms manage excessive product use? A continuous-time demand model to test reward schedules, notifications, and time limits. Journal of Marketing Research, 56, 379–400.
Pelham, B., & Wachsmuth, J. (1995). The waxing and waning of the social self: Assimilation and contrast in social comparison. Journal of Personality and Social Psychology, 69, 825–838.
Pereira, F., Mitchell, T., & Botvinick, M. (2009). Machine learning classifiers and fMRI: A tutorial overview. Neuroimage, 45, 199–209.
Puranam, D., Narayan, V., & Kadiyali, V. (2017). The effect of calorie posting regulation on consumer opinion: A flexible latent Dirichlet allocation model with informative priors. Marketing Science, 36, 726–746.
Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48, 137–141.
Sanakoyeu, A., Bautista, M., & Ommer, B. (2018). Deep unsupervised learning of visual similarities. Pattern Recognition, 78, 331–343.
Suher, J., Huang, S.-c., & Lee, L. (2019). Planning for multiple shopping goals in the marketplace. Journal of Consumer Psychology, 29, 642–651.
Sutton, R., & Barto, A. (1998). Reinforcement learning: An introduction. Cambridge: MIT Press.
Tanaka, K., & Titterington, D. M. (2004). Probabilistic image processing based on the Q-Ising Model by means of the mean-field method and loopy belief propagation. In Proceedings of the 17thInternational Conference on Pattern Recognition (ICPR’04) (pp. 1–4).
Uetake, K., & Yang, N. (2019). Inspiration from the “biggest loser”: Social interactions in a weight loss program. Marketing Science Articles-in-Advance.
Wager, S., & Athey, S. (2018) Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. Journal of the American Statistical Association, 113(523), 1228–1242. https://doi.org/10.1080/01621459.2017.1319839
Wang, Y., Lewis, M., Cryder, C., & Sprigg, J. (2016). Enduring effects of goal achievement and failure within customer loyalty programs: A large-scale field experiment. Marketing Science, 35, 565–575.
Zhu, Y., Yu, Z., & Cheng, G. (2019). High dimensional inference in partially linear models. Proceedings of Machine Learning Research, 89, 2760–2769.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Hagen, L., Uetake, K., Yang, N. et al. How can machine learning aid behavioral marketing research?. Mark Lett (2020). https://doi.org/10.1007/s11002-020-09535-7
- Behavioral science
- Big data
- Semi-supervised learning
- Supervised learning
- Unsupervised learning
|
OPCFW_CODE
|
setting maxConcurrentOperationCount to one will create a serial queue?
I have a requirement to upload array data one by one in BG thread, So I am using operationQueue for the backgroud upload, where I am subclassing the nsoperation and implemented post,
I have to upload the packets one by one serially, but not cuncurrently,
So I am doing the following
private var uploadQueue = OperationQueue()
self.uploadQueue.maxConcurrentOperationCount = 1
self.uploadQueue.addOperation(newOperation)
Will this become a serial queue? as I have set maxConcurrentOperationCount to one? if not how do I create one?
Yes that does act like a serial queue.
one word answer YES but if u need a serial queue you need not create a concurrent/operation queue and set maxConcurrentOperationCount to 1 rather you have specialized serialized queue class for that you can create one using let queue = DispatchQueue(label: "abcd")
Remember only serial queue that your app gets from iOS by default is main queue and all other custom serial queue u need you have to create them on your own where as you get 4 global dispatch queues by default for each app from iOS and they vary only in their priorities they are default, background, low and high
@SandeepBhandari You are confusing GCD with OperationQueue.
@sulthan : No am not :) I just told OP that he need not use OperationQueue and set maxConcurrentOperationCount to 1 to get the behavior of serialized queue rather he could directly create a serialized queue using GCD api, operationQueue can do much more than GCDs and quite frankly if all that u wanna achieve is serialized execution why even opt for OperationQueues when u can easily achieve the same with GCD. Lemme know if am still wrong Ill correct myself :)
@SandeepBhandari For example, with GCD you cannot cancel an operation you have already enqueued. That are other differences though because both APIs are made for different purposes. GCD was created for thread communication and synchronization, especially for multiple CPU cores.
@sulthan : I completely understand. GCD is a low level api where you can not pause/suspend tasks. Once submitted they are bound to occur and you cant even add dependency among tasks in same queue which you can achieve with Operations. Thats what I meant when I said Operation queue can do lot more than GCD but because OP did not mention any of his needs like dependency management pausing/suspending tasks I though all he needs is a serialized execution of tasks may be it was mistake on my end to presume something like that but am glad I did because I got to learn something new
can you post the code for So I am using operationQueue for the background upload.
|
STACK_EXCHANGE
|
It is well known that people who aren’t computer security experts tend to ignore expert advice on computer security, and (to some extent as a consequence) get exploited. This paper is not the first, or the last, to investigate why; see also the
What Deters Jane papers ,
So Long and No Thanks for the Externalities, and
Users Are Not the Enemy. However, this paper provides a much more compelling explanation than anything earlier (that I have read), and a lens through which to view everything since. It’s plainly written and requires almost no specialized background knowledge; you should just go ahead and read the whole thing.
For those not in the mood to read the whole thing right now, I will summarize. Wash conducted
semi-structured, qualitative interviews of 33 home computer users, who were selected to maximize sample diversity, and specifically to exclude security experts. From these, he extracted a number of what he calls
folk models—qualitative, brief descriptions of how these people understand various threats to their computer security. The term
folk is used to describe a model which accurately reflects users’ understanding of a system, and is broadly shared among a user population, but might not accurately reflect the true behavior of that system. In this case, that means the models reflect what the interviewees think are the threats to their home computers, even if those aren’t accurate. Indeed, it is precisely where the model is inaccurate to the real threats that it provides an explanation of the phenomenon (i.e. users not bothering to follow well-publicized security advice).
A key aspect of all the models presented is a division of security threats into
Virus is used by the interviewees as an umbrella term, corresponding most closely to what experts call
malware—any piece of software which is unwanted and has harmful effects. (One model expands this category even further, to include programs which are unintentionally harmful, i.e. they have bugs.) The models differ primarily in users’ understanding of how
viruses get into the computer, and what they are programmed to do once there. This can be very vague (e.g.
viruses are bad software you don’t want on your computer) or quite specific (e.g.
viruses are deliberately programmed by hackers as an act of vandalism; they cause annoying problems with the computer; you get them passively by visiting sketchy websites—an expert will acknowledge this as true-but-incomplete).
Hackers on the other hand are people who are actively seeking to exploit computers; most interviewees share the understanding that this involves
taking control of a computer remotely, thus allowing it to be manipulated as if the hacker were physically present at its console. (Many of them hedge that they do not know how this is done, but they are sure that it is possible.) The models here differ primarily in the motives ascribed to the hackers, which are: vandalism, identity theft, or targeted identity theft and financial fraud. This last is one of the most telling observations in the entire paper: a significant number of people believe that they are safe from hackers because they have nothing worth stealing or exposing. (Again, an expert would recognize this as true-but-incomplete: there really is a subpopulation of black-hat actors who specialize in going after the
big fish. The catch, of course, is that the data exfiltrated from a
big fish might include millions of people’s personal credit card numbers…)
Having presented these models, Wash runs down a list of standard items of home computer security advice (drawn from Microsoft, CERT, and US-CERT’s guides on the topic) and points out how many of them are either useless or unimportant according to these models: for instance, if you think you can’t get viruses without actively downloading software, then antivirus software is pointless, and patching only valuable if it eliminates bugs you trip over yourself; if you think hackers rarely, if ever, vandalize a computer, then backups are not necessary to protect against that risk. He closes by comparing the novel-at-the-time threat of botnets to all the models, observing that none of them account for the possibility that an attacker might subvert computers indiscriminately and automatically, then use them only for their Internet connection. In particular, all of the
hacker models assume that computers are attacked in order to do something to that computer, rather than as a means to an unrelated goal (sending spam, enlarging the botnet, executing DDoS attacks, …) and that the hacker must be doing something manually at the time of the attack.
The landscape of security threats has changed quite a bit since this paper was published. I would be curious to know whether ransomware, RATs, third-party data breaches, and so on have penetrated the public consciousness enough to change any of the models. I’d also like to know whether and how much people’s understanding of the threats to a mobile phone is different. And, although Wash did make an effort to cover a broad variety of non-expert home computer users, they are all from the general population near his Midwestern university, hence mostly WEIRDos. I’m not aware of any studies of this type conducted anywhere but North America and Europe, but I bet it’s not quite the same elsewhere…
|
OPCFW_CODE
|
Hello. I already asked the question here. The main point is that I tried to prove in Primitive recursive arithmetic (PRA) the totality of the Ackerman function, and I found, that the single thing which can prevent it - nonapplicability of the Deduction theorem to PRA. But I know, that totality of the Ackerman function is unprovable in PRA. Does it mean, that the Deduction theorem is non-applicable to PRA?
People commented, that: "the main reason that PRA does not prove the Ackerman function is total is that PRA does not include enough induction axiom". That's obviously right! I know, that PRA contains only rule of inference for the mathematical induction. And I also know, that transfinite induction up to the ordinal number $\omega^2$, by which we can prove totality of the Ackerman function, in first-order logic is equivalent to double mathematical induction. But the language of PRA is not first-order language of full value. And I tried to use double mathematical induction directly and to find out problems.
Please look to my proof and say where it can be wrong. Now I see the only problem: I used Deduction meta-theorem in the form: $(PRA \wedge a \vdash b) \to (PRA \vdash a \to b)$. As far as I know, this meta-theorem for infinitely many axioms can be proven only if we use mathematical induction (in meta-theory) and thus - it is unobvious.
Emil Jeřábek, you are right: Outer induction is on a $\Pi_2^0$ formula when expressed in the language of Peano arithmetic. We can see it from this post. Induction axiom, used at the last (7) step, is a $\Pi_2^0$ formula.
But the proof in PRA - without quantifiers - instead of this axiom uses inference rule: $[PRA \vdash \psi(0)] \wedge [PRA \vdash \psi(m) \to \psi(m+1)] \to [PRA \vdash \psi(m)]$, where $\psi(m) \equiv \varphi_A(m,1) \wedge \varphi_A(m,K(m))$.
Yes, it is wrong. I don’t know how exactly you intended to use the T-predicate, but basically: the T-predicate itself (and the U-function) is primitive recursive, hence equivalent to an open formula of PRA. Then $n=f(m)$ can be expressed by the existential formula $\exists w\,(T(e,m,w)\land U(w)=n), and \exists n\,n=f(m)$ is equivalent to the existential formula $\exists w\,T(e,m,w)$, but there is no way to eliminate these existential quantifiers in PRA (this would imply that f is primitive recursive).
Thank you, it resolves the largest part of the problem! But one else thing remains, that I cannot understand:
Supposing, we consider $\varphi_A$ just as the new predicate symbol, and axioms (1), (2), (3) - as the definition of the predicate. Can't we treat it namely as the predicate of existence for Ackerman function value? And if it's so, why cannot we consider the foregoing proof as the proof of totality of the Ackerman function?
There are two ways of handling PRA in the literature. The first is to use no quantifiers at all; the second is to use quantifiers, just like Peano arithmetic. In the latter sense, totality can be expressed in the language of PRA, of course.
PRA with quantifiers sounds very strange. As far as I know, every unbounded quantifier changes a theory in essence.
The language of PRA consists of a handful of initial functions, and it allows defining new functions by composition and primitive recursion. It does not allow adding new functions by Skolemization
It sounds absurdly: How can a language restrict this? If syntax allows infinitely many functional and infinitely many predicate symbols, how can grammar analyzer verify, that they are primitive recursive?
As far as I know, an axioms set (not language!) of PRA is limited by only axioms for primitive recursive functions. OK, we won't treat the definition of Ackerman function as a part of the "PRA's set of axioms".
P.S. Sorry, I again cannot add comment to the thread.
I want to illustrate by an example my last assertion that the verification whether an object is primitive recursive or not is out of the scope of syntax.
How can we prove in PRA associativity of addition: $x+(y+z)=(x+y)+z$?
From the axiom $x+0=x$ we have:
By substitution $z$ for $S(z)$ we have:
2) $x+(y+z)=(x+y)+z \to x+(y+S(z))=(x+y)+S(z)$
And (attention!) by the rule of induction from (1) and (2) we have:
Is there any kind of verification that $+$ is primitive recursive function before we can apply the rule of induction? NO.
Now let us add to the theory the binary functional symbol $\circ$. We didn't add axioms, defining it. Did we change the theory? I think - no. It's called "conservative extension". Can we prove some new statements about the function $\circ$? Yes. One of statements which we can prove is:
$n \circ 0= n \to x \circ (y \circ z)=(x \circ y) \circ z$
The scheme of the proof is exactly the same as for addition. Please pay attention: Actually I know nothing about operation $\circ$. Maybe $x \circ y = x + y$ or maybe $x \circ y = max(x,y)$. I even don't know, whether it is primitive recursive or not. But the foregoing statement is true in any interpretation, because nothing can prevent us to use the rule of induction for proving it.
|
OPCFW_CODE
|
Much like trying to throw an axe blindfolded, traditional opinion-based forecasting misses the mark, often with disastrous consequences. Opinion-based forecasts have low predictability and accuracy, are prone to bias and manipulation, and yield limited value to the B2B organisations that adopt them. Fortunately, artificial intelligence is infusing and enhancing B2B sales forecasting. Sales leaders can attend our upcoming B2B Summit EMEA session to learn how to wield these new tools skillfully.
To understand the evolution of forecasting, it helps to understand the evolution of AI. The concept of artificial intelligence is nothing new — the term was coined in the summer of 1955 at Dartmouth College. Thus began The Age of Hand-Crafted Knowledge, during which AI researchers sought to mimic human intelligence with rules-based expert systems. These expert systems dominated the world of AI until about 2007, the dawn of the Age of Statistical Learning. In this period, companies started applying machine-learning algorithms to the new “Big Data” they were capturing to build predictive models and surface insights. No one called this use of machine learning “AI” at the time, but that all changed in 2012 when a deep neural network called AlexNet won the ImageNet competition, besting humans at identifying images. The Age of Deep Learning was underway and is responsible for the renaissance the field of AI is enjoying today.
The three ages of AI are mirrored in three types of forecasting:
- The Opinion Forecast. This is the traditional forecasting method B2B organisations employ. As the name suggests, it is largely based on the rep’s opinion and is therefore neither efficient nor consistently accurate.
- The Augmented Forecast. The augmented forecast leverages machine learning trained on historical structured (i.e., row and column) data to build predictive models that augment sellers’ and managers’ opinions. It increases forecast accuracy and also leads to higher win rates by providing greater insight into buyers. There is still a great deal of rep input, so the predictions augment opinion.
- The Prescriptive Forecast. This emerging type of forecast leverages deep learning on both structured and unstructured (voice, text, etc.) data to derive an even more accurate forecast. Because deep learning requires a significant volume of data to outperform classic machine-learning methods, some vendors are training models on a network of their clients’ engagement data. Here, reps’ opinions augment the prediction, and most of the human focus is on beating the number by leveraging these deeper buying signals.
In our upcoming B2B Summit EMEA presentation this Sepetember, Anthony McPartlin and I will dive deeper into AI-enhanced forecasting and the features and functionality that are currently available and in development. Most importantly, we’ll advise you on best practices for evolving your own forecasting practices with AI to ensure you’re hitting your targets.
|
OPCFW_CODE
|
Oracle GraalVM is a high-performance JDK that can speed up the performance of Java and JVM-based applications using an alternative just-in-time (JIT) compiler. It lowers application latency, improves peak throughput by reducing garbage collection time, and comes with 24/7 Oracle support.
There is also a native image utility that compiles Java bytecode ahead-of-time (AOT) and generates native executables for some applications that start up almost instantaneously and use very little memory resources.
When using GraalVM in JIT mode, the JVM uses the GraalVM JI compiler to create platform-specific machine code from Java bytecode while the application is running. Compilation is performed incrementally during program execution with extra optimization applied to code that is frequently executed. This approach ensures that code in hotspots run extremely fast thanks to aggressive inlining, partial escape analysis, and other advanced optimization. Some optimizations reduce object allocations which lowers the load on the garbage collector. This helps improve the peak performance of long-running applications.
The GraalVM native image utility can also compile Java bytecode to generate native machine executables ahead-of-time (i.e., at build time). These executables start up almost instantly and consume a fraction of the memory that would be used by the same Java application running on the JVM. Native executables are also compact as they only include the classes, methods, and dependent libraries the application requires.
Learn more about GraalVM compiler, read the GraalVM for Dummies ebook
GraalVM’s compiler includes a number of additional optimization algorithms that provide significant improvements in performance and resource consumption. GraalVM’s native image features support a number of advanced features, including the G1 garbage collector, compressed pointers, and profile guided optimization which helps the compiler generate more efficient code.
GraalVM is included with Java SE products at no additional cost. It includes 24/7 support by Oracle with access to security fixes and critical path updates for more predictable performance and reliability. For Java migration to the cloud, GraalVM is free to use on Oracle Cloud Infrastructure (OCI).
GraalVM can enable developers to build more efficient code, with better isolation and greater agility for cloud or hybrid environments. Here are some of the reasons why more and more businesses today use GraalVM:
GraalVM innovations help Java code keep up with today’s computing demands with faster performance to respond quickly to customer needs. The advanced optimizer improves peak throughput. It also optimizes memory consumption by minimizing object allocations to reduce time spent performing garbage collection. GraalVM running in JIT mode can boost performance up to 50%. This frees up memory sooner, so you can run other workloads on the same infrastructure and lower IT costs.Build cloud native applications
Oracle GraalVM’s native image utility compiles bytecode Java applications ahead-of-time into machine binaries. The native executables start up almost 100X faster and consume up to 5X less memory compared to running on a JVM.
As organizations move workloads to the cloud and pay by the hour for the use of system resources, GraalVM can help realize operational cost savings. These results make GraalVM generated native executables ideal for microservices deployment, an area supported by major microservices frameworks, such as Helidon, Micronaut, Quarkus, and Spring Boot.Develop multilanguage programs and improve productivity
GraalVM includes an advanced optimizing compiler that generates machine code while the program is running JIT to accelerate Java application performance. By compiling ahead-of-time, the native image starts up fast and uses less memory, making it ideal for cloud native deployment. It supports multilanguage programs to improve productivity by allowing developers to use the best libraries needed to solve business problems irrespective of what language they are written in.
|
OPCFW_CODE
|
Need help? Please let us know in the UMEP Community.
3.5. Spatial Data: LCZ Converter¶
The Local climate zone (LCZ) converter calculates land cover fractions (see land cover reclassifier) on a vector grid based on LCZ raster maps from the WUDAPT portal. The local climate zone are urban area classified based on the Stewart and Oke (2012) scheme.
The raster LCZ maps can be converted into maps of land cover fraction and morphometric properties. For this conversion we use paved, building and pervious fraction for each LCZ from Stewart et al. (2014). However, what exactly the pervious fraction consists of (grass, trees, bare soil or water) needs to be user-specified. Similarly, morphometric properties for the buildings are specified in this scheme, but the vegetation morphometric properties still need to be specified by the user.
In UMEP we refer to the rural LCZ’s as 101, 102, 103, 104, 105, 106 and 107 instead of A, B, C, D, E, F and G.
- Dialog box
The first tab in the LCZ converter dialog shows a table. This table includes land cover fractions and morphometric properties for buildings and vegetation for each local climate zone. If the default values in the table are not appropriate for the selected city the user has a choice between editing the table directly or using the “pervious distribution” tab in order to provide approximate values for the distribution between grass, bare soil, trees and water and the height of the vegetation.
Within the “pervious distribution” tab there are two options to change the pervious fraction distribution: Either per LCZ using the “Separate LCZ’s” button or for all LCZ’s together using “Same for all LCZ’s”. When selecting the first option make sure to select the LCZ raster first. Based on the LCZ raster, the dropdown boxes will show the LCZ classes ordered by the frequency of occurrence. Select the classes to specify the pervious distributions for and select the most appropriate pervious land cover options and vegetation heights.
When choosing the “Same for all LCZ’s” option: choose the appropriate pervious land cover fractions and vegetation heights for all urban and all rural LCZ classes.
|upper||Select the LCZ raster layer and the vector grid the land cover fractions should be computed for.|
|middle Tab: Pervious distribution||Set the distribution of pervious surface fractions for each LCZ separately or all at the same time.|
|middle Tab: Table||Alters the land cover fractions and building and vegetation heights for each LCZ towards more accurate values.|
|lower||Specify output and run the calculations.|
- LCZ raster
- Select the LCZ raster from the WUDAPT database.
- Vector grid
- Select your predefined polygon grid (see Vector -> Research Tools -> Vector Grid; select polygons not lines)
- Adjust default parameters
- Tick this box if you would like to edit the table below with the land use fractions and tree and building heights for each of the local climate zones.
- Separate LCZ’s
- Once selected it computes the most common LCZ classes in the Raster grid and allows you to alter the pervious fractions and tree heights in the dropdown boxes to the right for each individual LCZ.
|LCZ’s:||List of LCZ’s in the raster, ordered by most frequent occurrence. Select the LCZ(s) for which you would like to specify the pervious fraction.|
|Fraction distributions:||Select the percentages of each pervious land cover class for the selected LCZ.|
|Height of trees:||Select the range of tree heights most applicable for that LCZ.|
- Same for all LCZ’s
- Allows you to alter the pervious fractions and tree heights for all urban and rural classes at the same time.
|Urban:||Select the percentages of each pervious land cover class for all urban LCZ’s.|
|Rural:||Select the percentages of each pervious land cover class for all rural LCZ’s. Note for rural classes you are only able to specify the distribution of tree species.|
|Height of trees:||Select the range of tree heights most applicable for the urban and rural LCZ’s.|
- Update Table
- This updates the table from the default values to the user-specified distributions of the pervious fractions. Please check the table, to make sure your changes have taken effect.
- File Prefix
- A prefix that will be included in the beginning of the output files.
- Add results to polygon grid:
- Tick this in if you would like to save the results in the attribute table for your polygon vector grid.
- Output Folder
- A specified folder where result will be saved.
- Starts the calculation
- Closes the plugin.
- Three files are saved after a successful run.
- One with the landcover fractions for each grid cell
- One with the morphometric properties for the building for each grid cell
- One with the morphometric properties for vegetation for each grid cell
- Rural LCZ’s are marked as 101, 102, etc instead of A, B, etc.
- Issues using .sdat rasters has been reported. GeoTiffs are recommended.
- Stewart, I.D. and Oke, T.R. 2012. Local Climate Zones for urban temperature studies. Bulletin of the American Meteorological Society, 93: 1879-1900.
- Stewart, I.D., Oke, T.R., and E.S. Krayenhoff. 2014. Evaluation of the ‘local climate zone’ scheme using temperature observations and model simulations. International Journal of Climatology, 34: 1062-80.
|
OPCFW_CODE
|
what happens if a Telnet adds more data during a busy thread handling previous Telnet data
I have a program that starts a Telnet background thread, which listens data that is put on network at random time by one or more different equipments. Interval between events may start from a few seconds to several minutes, which is more than enough for the thread to handle the incoming data.
However, there is a chance that two equipments put data at almost the same time. Alternatively, a combination of [slow thread processing due to modest hardware and complex code] + [very quick data put successively by one single fast equipment] might be expected.
Question: what happens with a second data sequence that happens to be put on network right after a first sequence, in case my thread is still busy handling the first one?
The data is lost? (from the thread perspective)
Or, is there some signaling & queue mechanism so the Telnet client/thread will find what happened during its short absence?
The program runs on a Raspberry Pi. Here is the thread init and the main part of the thread, if that helps understanding:
# executed earlier in program
qq = queue.Queue()
th = threading.Thread(target = ReadFromBMDThread, args = (qq,self.tn))
th.start()
# ...
# thread
class ReadFromBMDThread(threading.Thread):
def __init__(self, qq, tn):
global INCOMING
threading.Thread.__init__(self)
threading_flag = True
while threading_flag:
try:
data = tn.read_until(b"\n\n")
except Exception as e:
LOG.WriteLog(self, ("ReadFromBMDThread: " + str(e)))
INCOMING = (data.decode())
# ...
# some more program which increases processing time
Resulted INCOMING are data blocks of single pairs in the form of number-space-number, each pair on one line, each line ended by \n and each data block ended by \n\n (which are then striped by decode()).Example of INCOMING data:
4 20
115 77
23 6
...
Edit: I applied Elliott Frisch's idea – the result: it seems the queue is preserved. I added a time.sleep(5) at the end of thread sequence and the whole data that was put on network during the waiting interval was still well handled, one data block after each and every 5 seconds, without overlap and without data loss.
:)
What happened when you tried it locally? TELNET twice in seperate commands to the local port you're listening on.
Edit: see the 'Edit:' on main question. (This is what I commented here previously: Normally it is unlikely I can hit that fast different Telnet commands, but the idea is a good one: I can insert an artificial pause in my thread just for testing purpose. Yes, I will try, thank you :) (will put later the result here, if relevant))
|
STACK_EXCHANGE
|
I am having a reoccurring issue (monthly) where one of our business applications goes down and causes high call volume into the contact center. There are 53 CTI Ports and they are all set to go to voicemail for forward busy/no answer, where there is a message "due to high call volume..." This was tested and worked fine in a small queue of 10 ports. However, on the larger queue, the high call volumes crashes the callmanager subscriber that the agents are on, and callers are getting a busy signal instead of going to voicemail and "high call volume" displayes on the agents phones.
In the ICM router log viewer it shows "No free routes to send call to translation routes.... all 10 routes are in use" in the router event viewer it shows "Translation route timeout". Meanwhile the ICM script monitor shows a spike in calls about 80 thousands calls in less than 20 minutes. It seems like there is a loop somewhere, because even if all our customers were to call at the same time it would not even reach 10,000 calls.
I have 10 Translation Routes for 10 CTI Route Points and 53 CTI Ports for this particular contact center. The CTI Route Points are set for Max Calls = 5000 and Busy Trigger = 4500. Should'nt this be lowered to much the number of available ports? or should the CTI Route Points be set to go to voicemail on busy aswell?
UCCE 7.2(6), IPIVR 4.0(5), CCM 4.0(2)
You certainly have a problem somewhere.
When the call hits the route point for the start of the trans route, it waits until the Router responds with a real target (the CTI port) and sends the data to that PG so it can tie them together, then sends the call. If you have 53 CTI ports, there is no reason not to have 53 route points as these are free and it's just a matter of creating the RPs, mapping them to JTAPI, and running the Trans Route Wizard to add the bigger pool of DNIS.
Since you have a bug somewhere, you will probably end up in the same situation but the message will say "No free routes to send call to translation routes.... all 53 routes are in use".
This is not going to solve your problem - just move it from saturating the route point pool to saturating the CTI port pool.
What is going on with those ports? Are they sending the call back into the system somehow?
Nothing on the ports, they do not go out of service... When calls spike, they all fail from the Translation Route to VRU node in the ICM routing script. Somehow, calls are being sent back to the systems. So it re-dialing in attempt to when connect to a target
To troubleshoot you can look at enabled scripts in ICM script editor and see if certain script(s) are carrying high call volume. Then look at the script to see if the call routing is a problem.
I have several routing scripts that use the same Network Trunk Group, but only the two main routing scripts for the call centers that support that business application are affected with high call volume. This happens once a month for a about 20 minutes... Others scripts on the on different Network Trunk Groups are never affected.
I'm assuming in your 2nd script if the translation route node fails you shoud release the call or send it to any available agent. Also, it would have been nice to post your scripts in monitor mode, with the node number next to them. Als, the 80000 calls you're seeing aren't calls, but passes through a specific node, it coule be one call which just passes over a loop 80000 or two calls which each pass 40000, etc.
Maybe we need to take a step back, I re-read your original message and it sounds like there's all sorts of things wrong with your environment. So, why don't you break up your issues into smaller logical pieces and take one at a time? For example, you mentioned that your UCM was crashing, that is one problem. You seem to be running out of trans routes, that's another problem. There are some infinite loops in your ICM script(s), that's another problem. While they all might be related you need to break it up in order to be more effective.
|
OPCFW_CODE
|
Downward Bidding Dynamics
It is worth stating explicitly the actual bidding protocol, and noting
how a slight parametrization of it may provide for rather general trading
The Fishmarket uses a specific downward-bidding
protocol (DBP). In FM96.5,
our current implementation, it was implemented as follows:
- [Step 1]
- The auctioneer chooses a good out of a lot of goods that
is sorted according to the order in which sellers deliver their goods to the
- [Step 2]
- With a chosen good $g$, the auctioneer opens a
bidding round by
quoting offers downward from the good's starting price,
previously fixed by the sellers' admitter, as long as these price quotations
are above a reserve price previously set by the
- [Step 3]
- For each price called by the auctioneer, several
situations might arise during the open round:
- Multiple bids:
- Several buyers submit their bids at the current
price. In this case, a collision comes about, the good is not sold to
any buyer, and the auctioneer restarts the round at a higher price.
Nevertheless, the auctioneer tracks whether a given number of
successive collisions is reached, in order to
avoid an infinite collision loop. This loop is broken by randomly
selecting one buyer out of the set of colliding bidders.
- One bid:
- Only one buyer submits a
bid at the current price. The good is sold to this buyer whenever his
credit can support his bid. Whenever there is an unsupported bid the
round is restarted by the auctioneer at a higher price, the
unsuccessful bidder is punished with a fine, and
he is expelled out of the auction room unless such fine is paid off.
- No bids:
- No buyer submits a bid at the current price. If the
reserve price has not been reached yet, the auctioneer quotes a new
price which is obtained by decreasing the current price according to
the price step. If the reserve price is reached, the auctioneer
declares the good withdrawn(i.e.the
good is returned to its owner) and closes the round.
- [Step 4]
- The first three steps repeat until there are no more goods left.
Notice that six parameters that control the dynamics of the bidding process
are implicit in this protocol definition. We shall enumerate them now,
and require that they become instantiated as part of a tournament definition.
Hence the following:
||Price Step. Increment or decrement of price between two consecutive
offers shouted out by the auctioneer
||Minimum time between offers. Delay between consecutive offers.
||Minimum time between rounds. Delay between the end of a round
and the beginning of the next round.
||Maximum number of successive collisions. The auctioneer randomly
chooses one buyer out of the set of bidders when the maximum number of
successive collisions is reached.
||Sanction factor. This coefficient is utilized by the buyers'
manager to calculate the amount of the sanction to be imposed on buyers
submitting unsupported bids.
||Price increment. This value determines how the new offer is
calculated by the auctioneer from the current offer when either a collision,
a fine or an expulsion occur.
This set of parameters is what we call the Downward bidding protocol
(DBP) dynamics descriptor. Our first tournament will instantiate the
DBP dynamics descriptor as follows.
Updated: April, 29th 1998
IIIA | IIIA
Fishmarket Project ]
|
OPCFW_CODE
|
The automotive world is on the verge of a revolution with vehicles reaching higher levels of automation. Automation is defined by the Oxford English Dictionary as: “the use of electronic or mechanical devices to replace human labor”. Among all the existing concepts (e.g., Google, Uber, Tesla, but also Renault, Toyota and many more) the automation level can vary greatly. Indeed, different concepts have different goals, some of them aim at removing the human from the dynamical parts of the driving task, while others want to keep the human “in” or “on” the loop (e.g., cars with or without steering wheels). This created the need for a classification of the different goals and levels of automation, which led to the birth of the “J3016: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles” document (can be bought obtained for free here). This document is the result of a common US/Europe effort to create a descriptive and informative but not normative document, which writing has been entrusted to the SAE on-road automated vehicle standards committee.
The main contribution of this document is the determination of six levels of automation, along with precise definitions for several terms.
This document divides the act of driving in three stages: strategic (e.g., choosing the route, timing), tactical (e.g., motion inside the traffic), and operational (e.g. reflex reactions). It is pointed out that the automation levels considered do not deal with the strategic part. Numerous definitions are provided, here are a few of the most useful ones with simple definitions of our own (see the actual document for precise definitions):
- Dynamic Driving Task (DDT): what most people would call driving, involves the tactical and operational parts mentioned earlier.
- Dynamic Driving Task Fallback (DDT fallback): the system that deals with the situation when things go wrong. Can be the human itself.
- Automated Driving System (ADS): the automation part of the car (hardware & software).
- Operational Design Domain (ODD): specific operational description in which an ADS will be designed to operate.
These four terms are the most used throughout the document, and in the main tables/figures. With these terms being defined, the document goes on to describe the six levels of automation that an ADS can provide. These are described in the following table.
– Level 0 = not automation at all.
– Level 1 = longitudinal or lateral distance handled autonomously.
– Level 2 = both distances handled, but human must continuously monitor the system.
– Level 3 = human does not need to monitor the system but he is the fallback system (must intervene if something wrong happens).
– Level 4 = human not needed anymore (even for fallback), but operations subjected to some limitations.
– Level 5 = like level 4 but with no operational limitations.
These six levels allow to describe precisely the levels of autonomy of existing cars and of the ones to come, for example Tesla cars are Level 2. Some more interesting concepts are detailed through the document, like the possibility to have a remote driver, or a dispatcher capable of initializing or deactivate the ADS. The monitoring (permanent vigilance) and receptive (no vigilance but can be alerted) concepts are detailed. The notion of minimal risk condition, some sort of contingency procedure, is also discussed. And more notions that would not fit in this short article.
To conclude, this document is not a standard, as it is repeated throughout the text; it is a descriptive and informative document. Yet, it is very detailed and clear, with numerous examples to illustrate the various concepts. From all the concepts, it appears that a parallel with aeronautics could be done, for example a “minimal risk condition” looks a lot like a “contingency procedure”. Defining a dictionary of equivalent notion to translate notion from one world to the other would appear highly useful. Especially as drone levels of automation are not yet clearly categorized and could use such a taxonomy, at least as a starting basis.
|
OPCFW_CODE
|
404 route is not rendered in child routes that doesn't exists
Describe the bug
I have two urls
/
/about
I have configured the not-found url but when I go to urls like /about/asdfsadf it renders the about page instead of the 404 error 😢
Your Example Website or App
https://stackblitz.com/edit/github-7h7yen?file=src%2Fmain.tsx
Steps to Reproduce the Bug or Issue
Go to https://stackblitz.com/edit/github-7h7yen?file=src%2Fmain.tsx
Click on the link About/not-found
The about page is rendered instead of the 404 component
Expected behavior
The not found route should be rendered.
Screenshots or Videos
Here we can see the devtools where the /about and /404 routes are matches which maybe its wrong because my url is not a Splat / Catch-All Routes 😢
Platform
OS: Windows
Browser: Chrome
Version: 1.6
Additional context
No response
I am facing this issue as well!
Currently using @tanstack/react-router version 1.10.0, here is my code for setting up the router
import { createRouter, createHashHistory, NotFoundRoute } from '@tanstack/react-router';
import { routeTree } from '../../routeTree.gen'
import { Route as RootRoute } from '@/routes/__root'
import { NotFoundPage } from '@/features/NotFoundPage';
const hashHistory = createHashHistory();
export const router = createRouter({
routeTree,
history: hashHistory,
notFoundRoute: new NotFoundRoute({
getParentRoute: () => RootRoute,
component: () => <NotFoundPage />
})
});
Using file based routing, with the latest best practices from the latest docs (eg. createRootRoute instead of new RootRoute).
Here is the commit with my current setup
https://github.com/gjtiquia/mini-link-stash/tree/7d115dd1dae216125a1af0f7311a4e66d92b64c8
Temporarily deployed this commit here
https://deploy-preview-1--mini-link-stash.netlify.app/
Pasting the following url goes to the /about route
https://deploy-preview-1--mini-link-stash.netlify.app/#/about
Pasting the following url shows 404 page not found
https://deploy-preview-1--mini-link-stash.netlify.app/#/ajshdgfbasdukhfgbhjkasd
However, same as @GiancarlosIO, pasting the following url goes to the /about route instead of 404
https://deploy-preview-1--mini-link-stash.netlify.app/#/about/ajshdgfbasdukhfgbhjkasd
I couldn't find a concrete code example in the docs demonstrating how to setup the notFoundRoute in the router.
In my own code, and in @GiancarlosIO 's code in the link above, the getParentRoute property of NotFoundRoute is set to the root route.
Perhaps this is where the bug occured? Because routes that are directly under the root route (eg. /this-path-does-not-exist) will go to the 404 page, while routes that are in other children routes (eg. /the-children-path/this-path-does-not-exist) goes to the children route instead (eg. /the-children-path)
Found the code example in the docs from this discussion: https://github.com/TanStack/router/discussions/1013
Link to the docs: https://tanstack.com/router/v1/docs/framework/react/guide/creating-a-router#not-found-route
The code snippet from the docs shown below:
File based routing
import { NotFoundRoute } from '@tanstack/react-router'
import { Route as rootRoute } from './routes/__root.tsx'
const notFoundRoute = new NotFoundRoute({
getParentRoute: () => rootRoute,
component: () => '404 Not Found',
})
const router = createRouter({
routeTree,
notFoundRoute,
})
Code based routing
import { NotFoundRoute } from '@tanstack/react-router'
const rootRoute = createRootRoute()
// ...
const notFoundRoute = new NotFoundRoute({
getParentRoute: () => rootRoute,
component: () => '404 Not Found',
})
const router = createRouter({
routeTree,
notFoundRoute,
})
Which is the same as how I set it up previously. Still does not work for child routes.
The issue can also be reproduced in the basic example from the source code
https://github.com/TanStack/router/tree/main/examples/react/basic
Reproduced at commit 77955ab in the main branch
Going to /non-existent-route shows 404 - Not Found
Going to /posts/1 shows post 1
Going to /posts/1/non-existent-route shows post 1, but we should expect 404 - Not Found
Response from @tannerlinsley regarding this issue from Discord
https://discord.com/channels/719702312431386674/1198279811559333909
Thanks @tannerlinsley and all the contributors!
Tracking this in #1048
|
GITHUB_ARCHIVE
|
Clicks/pops glitch playing audio in Firefox when using ASIO audio interface
I've noticed that when playing audio streams (e.g., Soundcloud) or the samples in the Demos here , that I get clicking/popping or glitching occurring in a fairly regular pattern. I've been on Mozilla support and tried a host of troubleshooting suggested, leading to the discovery shown by about:support#media where the audio backend listed is WASAPI.
On the latest version of FF, on my Win10 machine, routing sound in Windows to my MOTU Audio Interface via ASIO at 48kHz is when this happens. Sound is not glitchy on YouTube, or most other video formats. It appears to be related to audio streaming only. What is interesting is that if I set the Windows Sound Control Panel to be 44.1kHz, the glitching stops, the sound plays clean (this requires a page refresh). If I set it back to 48kHz, it glitches again (after a page refresh). I first thought this to be a sample rate mismatch, but then found another app that had the same glitch issues. Further investigation into that app's settings revealede that the default settings was to use WASPI, even though it was still routed to my MOTU (now sure how that works, since it is ASIO - but I'm not real knowledgeable on Windows Audio). In that app, I was able to select ASIO instead of WASPI, set my MOTU device, choose 48kHz, and the sound in that app then played PERFECTLY!
What I gathered from that is that the issue is not with sample rate mismatch, but with using WASAPI with my ASIO MOTU audio interface. It appears that there is no way to change from WASAPI to ASIO in FF like there is with the other app mentioned.
Is there a reason for this? And can it be improved? I'm not sure exactly what in the WASAPI to MOTU signal chain that might be causing this glitching to occur, but it is remedied in other apps by utilizing the correct ASIO driver for the output device. Any advice would be appreciated.
I've noticed that when playing audio streams (e.g., Soundcloud) or the samples in the Demos here , that I get clicking/popping or glitching occurring in a fairly regular pattern. I've been on Mozilla support and tried a host of troubleshooting suggested, leading to the discovery shown by about:support#media where the audio backend listed is WASAPI.
Thanks for filing an issue!
Generally, when there's a problem in Firefox, it's best to open a bug in its tracker. If this happens in the future, https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&component=Audio%2FVideo%3A cubeb is a direct link that will place the ticket in the correct component. We receive an email here or there, so in the grand scheme of things we'll find it in either location, but cubeb is used by other programs than Firefox, so Firefox-specific issues are best directed to Firefox's own tracker.
I've opened https://bugzilla.mozilla.org/show_bug.cgi?id=1846409 for you, you can simply log in with your GitHub account. I've asked for a number of easy-to-gather information that will help us understand what is going on, please don't hesitate to ask if you have any question.
But in any case, I can answer some of your questions here -- but if possible, it would be best if we would discuss your issue with Firefox on bugzilla.
On the latest version of FF, on my Win10 machine, routing sound in Windows to my MOTU Audio Interface via ASIO at 48kHz is when this happens. Sound is not glitchy on YouTube, or most other video formats. It appears to be related to audio streaming only. What is interesting is that if I set the Windows Sound Control Panel to be 44.1kHz, the glitching stops, the sound plays clean (this requires a page refresh). If I set it back to 48kHz, it glitches again (after a page refresh). I first thought this to be a sample rate mismatch, but then found another app that had the same glitch issues. Further investigation into that app's settings revealede that the default settings was to use WASPI, even though it was still routed to my MOTU (now sure how that works, since it is ASIO - but I'm not real knowledgeable on Windows Audio). In that app, I was able to select ASIO instead of WASPI, set my MOTU device, choose 48kHz, and the sound in that app then played PERFECTLY!
When you say "on the latest version", does it mean it was working well before? Youtube and others use high-latency playback, optimized for robustness, whereas SoundCloud uses a very low latency API that is a bit more glitch prone (but it should generally work well -- I'm not saying there isn't a bug here). I think it must be related to sample-rate conversion somewhere in the chain, thanks for trying a couple of things out.
What I gathered from that is that the issue is not with sample rate mismatch, but with using WASAPI with my ASIO MOTU audio interface. It appears that there is no way to change from WASAPI to ASIO in FF like there is with the other app mentioned.
WASAPI is the native API to do audio input/output on Windows, and ASIO drivers are usually provided by audio equipment manufacturers. It's possible to use WASAPI to output to any sound card, but generally the playback will be more robust against high machine load (or allow achieving lower latency) if using ASIO. There's no general rule, but it's generally recommended to directly use ASIO drivers in music software (e.g. a Digital Audio Workstation, that kind of thing).
Is there a reason for this? And can it be improved? I'm not sure exactly what in the WASAPI to MOTU signal chain that might be causing this glitching to occur, but it is remedied in other apps by utilizing the correct ASIO driver for the output device. Any advice would be appreciated.
Unfortunately, the licence for the ASIO SDK is absolutely incompatible with the free software licence of Firefox. I wish we could use it, but as far as I know, we are not allowed to do so, for legal reasons. It might be possible to make a build of Firefox that can talk to an ASIO driver, but I don't think we'd be allowed to distribute it.
Didn't know about the ASIO SDK license, which is a problem it seems with all OSS. :-/ Come to think of it though, what about ASIO4ALL? Isn't that free? How do they get around it? So, for an application to talk to an ASIO driver, it requires a license? I thought that would be just for the driver manufacturers.
ASIO4ALL isn't free software, as in this definition, it simply doesn't have a cost. To use the ASIO SDK, we'd have to sign an agreement with Steinberg, and all sorts of other things that would violate Firefox's free software licence. https://www.steinberg.net/developers/ has some info.
At any rate, given the continual frustration to get my Tabs back on bottom after seemingly every update to FF and Mozilla not seeming to care about user feedback, and now this, perhaps it's a good time to find another browser. I appreciate your help and feedback. I'll see what Mozilla has to say, but I'm not hopeful a fix is coming. Cheers.
Well, I'm a full time employee at Mozilla working on audio and video stuff, and I seem to care about your feedback, and if you get me the info we need in the other ticket, we can try to figure out what's up with your system. We have hundreds of millions of users that use Firefox, a fraction of which use professional sound cards every day without any problems (myself included), so surely something is off somewhere, but we can't figure out why without doing a bit of diagnosis.
That said, you do https://www.userchrome.org/how-create-userchrome-css.html and then put https://github.com/MrOtherGuy/firefox-csshacks/blob/master/chrome/tabs_on_bottom.css in the file userChrome.css, you'll have the tabs on the bottom like it used to be, it takes less than 5 minutes and works well.
Agreed. Steinberg is the absolute worst. Recently bought Cubase Pro 12 and regret it now.
I also agree that you DO seem to care about this user. My sentiment comes after nearly two decades of using FF and being what feels like continually frustrated. Things break. Features disappear randomly. Requests ignored. And the general sentiment I see over and over in the forums that Mozilla devs have their own ideas about how things should be done and ignore the rest. I suspect there is some truth to that.
Anyway, I do rely on MrOtherGuy's efforts to keep up with the changes so that we can all continue to enjoy our tabs on the bottom. But that shouldn't be necessary simply because Mozilla chooses not to make this feature a user option, although it has been requested by many for years. So, you might imagine how we users feel that our needs/wants are being ignored.
Support on issues seems to be very good in my experience. It's just responsiveness to user needs/wants that seems lacking.
Please don't take my frustration the wrong way. I do appreciate all you folks do in making FF what it is, and despite the frustration, I do continue to use it for now. So there's that. ;-) Cheers.
|
GITHUB_ARCHIVE
|
import uuid
from src.kitools.env import Env
import synapseclient
from synapseclient import Project, Folder, File
class SynapseTestHelper:
"""
Test helper for working with Synapse.
"""
_test_id = uuid.uuid4().hex
_trash = []
_synapse_client = None
def client(self):
if not self._synapse_client:
self._synapse_client = synapseclient.Synapse(configPath=Env.SYNAPSE_CONFIG_PATH())
self._synapse_client.login(silent=True)
return self._synapse_client
def test_id(self):
"""
Gets a unique value to use as a test identifier.
This string can be used to help identify the test instance that created the object.
"""
return self._test_id
def uniq_name(self, prefix='', postfix=''):
return "{0}{1}_{2}{3}".format(prefix, self.test_id(), uuid.uuid4().hex, postfix)
def dispose_of(self, *syn_objects):
"""
Adds a Synapse object to the list of objects to be deleted.
"""
for syn_object in syn_objects:
if syn_object in self._trash:
continue
self._trash.append(syn_object)
def dispose(self):
"""
Cleans up any Synapse objects that were created during testing.
This method needs to be manually called after each or all tests are done.
"""
projects = []
folders = []
files = []
others = []
for obj in self._trash:
if isinstance(obj, Project):
projects.append(obj)
elif isinstance(obj, Folder):
folders.append(obj)
elif isinstance(obj, File):
files.append(obj)
else:
others.append(obj)
for syn_obj in files:
try:
self.client().delete(syn_obj)
except:
pass
self._trash.remove(syn_obj)
for syn_obj in folders:
try:
self.client().delete(syn_obj)
except:
pass
self._trash.remove(syn_obj)
for syn_obj in projects:
try:
self.client().delete(syn_obj)
except:
pass
self._trash.remove(syn_obj)
for obj in others:
print('WARNING: Non-Supported object found: {0}'.format(type(obj)))
self._trash.remove(obj)
def create_project(self, **kwargs):
"""
Creates a new Project and adds it to the trash queue.
"""
if 'name' not in kwargs:
kwargs['name'] = self.uniq_name(prefix=kwargs.get('prefix', ''))
kwargs.pop('prefix', None)
project = self.client().store(Project(**kwargs))
self.dispose_of(project)
return project
def create_file(self, **kwargs):
"""
Creates a new File and adds it to the trash queue.
"""
if 'name' not in kwargs:
kwargs['name'] = self.uniq_name(prefix=kwargs.get('prefix', ''))
kwargs.pop('prefix', None)
file = self.client().store(File(**kwargs))
self.dispose_of(file)
return file
def create_folder(self, **kwargs):
"""
Creates a new Folder and adds it to the trash queue.
"""
if 'name' not in kwargs:
kwargs['name'] = self.uniq_name(prefix=kwargs.get('prefix', ''))
kwargs.pop('prefix', None)
folder = self.client().store(Folder(**kwargs))
self.dispose_of(folder)
return folder
|
STACK_EDU
|
Windows installation ?
I tried to install the tool but I'm using Windows as operating system.
Everything seems to be working except that no Elastic Search instance is launched.
I think it is only launched through bin/cassandra but not from bin/cassandra.bat but I didn't look further.
Here is the trace of the command "cassandra -e":
WARNING! Powershell script execution unavailable.
Please use 'powershell Set-ExecutionPolicy Unrestricted'
on this user-account to run cassandra with fully featured
functionality on this platform.
Starting with legacy startup options
Starting Cassandra Server
Sorry, the cassandra.bat is not ready to run elassandra on Windows right now. However, you can probably copy the code of the -e option from bin/cassandra to the bin/cassandra.bat to activate Elasticsearch on your windows plateform.
Thanks',
Vincent.
@vroyer
I downloaded cassandra-2.2.6, then loaded elassandra-2.1.1-14 on top of it, and only changed the classpath/CASSANDRA_MAIN
CASSANDRA_MAIN=org.apache.cassandra.service.ElassandraDaemon
But it errors with "could not find or load main class org.apache.cassandra.service.ElassandraDaemon.
Any idea ?
Thank You
@vroyer I fixed the previous issue by redownloading the release.
Though now I'm getting:
Starting with legacy startup options
Starting Cassandra Server
{2.1.1}: Initialization Failed ...
IllegalStateException[failed to load bundle [] due to jar hell]
NoSuchFileException[C:\Users\admin\Desktop\elassandra-2.1.1-14\build\cla
sses\main]
Hi,
This is a classpath issue. org.apache.cassandra.service.ElassandraDaemon is the main class to launch Elassandra, but it seems that C:\Users\admin\Desktop\elassandra-2.1.1-14\build\cla sses\main does not contains it.
Thanks.
@vroyer
I fixed that issue, about ElassandraDaemon. See my earlier message.
@vroyer
This happens when I run it on ubuntu12.04:
Starting with Elasticsearch enabled.
CompilerOracle: inline org/apache/cassandra/db/AbstractNativeCell.compareTo (Lorg/apache/cassandra/db/composites/Composite;)I
CompilerOracle: inline org/apache/cassandra/db/composites/AbstractSimpleCellNameType.compareUnsigned (Lorg/apache/cassandra/db/composites/Composite;Lorg/apache/cassandra/db/composites/Composite;)I
CompilerOracle: inline org/apache/cassandra/io/util/Memory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/io/util/SafeMemory.checkBounds (JJ)V
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare (Ljava/nio/ByteBuffer;[B)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare ([BLjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/lang/Object;JI)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
CompilerOracle: inline org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
22:01:09.840 [main] WARN org.elasticsearch.bootstrap - jvm uses the client vm, make sure to run java with the server vm for best performance by adding -server to the command line
22:01:12.776 [main] ERROR o.e.o.a.c.service.ElassandraDaemon - Exception
java.lang.ExceptionInInitializerError: null
at org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:124) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.service.ClientState.(ClientState.java:70) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.cql3.QueryProcessor$InternalStateInstance.(QueryProcessor.java:153) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.cql3.QueryProcessor$InternalStateInstance.(QueryProcessor.java:147) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.cql3.QueryProcessor.internalQueryState(QueryProcessor.java:161) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:334) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.service.ElassandraDaemon.activate(ElassandraDaemon.java:104) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.service.ElassandraDaemon.main(ElassandraDaemon.java:347) ~[elassandra-2.1.1-14.jar:na]
Caused by: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local files or file:/// for remote files. Aborting. If you are executing this from an external tool, it needs to set Config.setClientMode(true) to avoid loading configuration.
at org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:73) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:85) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:135) ~[elassandra-2.1.1-14.jar:na]
at org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:119) ~[elassandra-2.1.1-14.jar:na]
... 7 common frames omitted
See Caused by: org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in variable: [cassandra.config]. Please prefix the file with file:/// for local files or file:/// for remote files.
You should set CASSANDRA_CONF env variable or cassandra.config system property.
You may try to install the debian package available at http://packages.elassandra.io/deb/elassandra_2.1.1-14_all.deb
@vroyer
Another error trying to manually install the .deb file.
Even this guide doesn't work with error:
Somehow it's getting the path wrong.
Please use the rigth path i gave you yesterday
http://packages.elassandra.io/deb/elassandra_2.1.1-14_all.deb
@vroyer As you can see here I have downloaded the right file.
While when using apt-get it uses the wrong path.
@vroyer can you give me sample of cassandra.config value (absolute path) that I can use or environment variable value? I tried different options and none worked and also tried searching for samples and couldn't find any. thanks
Hi,
We are checking that the packages 2.1.1-14 is correct.
In order to start elassandra, you should set and export CASSANDRA_HOME to your elassandra installation directory.
Thanks',
Vincent.
Le 6 juil. 2016 à 14:01, ddorian a écrit :
@vroyer As you can see here I have downloaded the right file.
While when using apt-get it uses the wrong path.
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub, or mute the thread.
@vroyer , I have managed to patch cassandra.ps1 and set CASSANDRA_MAIN to ElassandraDaemon. Still the elassandra does not load with the following error:
{2.4.2}: Initialization Failed ...
IllegalStateException[failed to load bundle [file:/C:/elassandra-2.4.2/modules/lang-expression/antlr4-runtime-4.5.1-1.jar, file:/C:/elassandra-2.4.2/modules/lang-expression/asm-commons-5.0.4.jar, file:/C:/elassandra-2.4.2/modules/lang-expression/lang-expression-2.4.2.jar, file:/C:/elassandra-2.4.2/modules/lang-expression/lucene-expressions-5.5.2.jar] due to jar hell]
NoSuchFileException[C:\elassandra-2.4.2\build\classes\main]
C:\elassandra-2.4.2\build\classes\main does not exist at all, it is development folder in a class path and I am trying to launch elassandra from a tarball.
How can I start elassandra on windows?
Well it has started sucessfully when I created these build directories.
Windows support added with 779289e6ee1043bae771a2981108b3f2b8156d2d
The previous commit does not seem to be merged in 6.x version. How can we start Elassandra under windows?
|
GITHUB_ARCHIVE
|
using System.Collections.Generic;
/*
This file is part of pspsharp.
pspsharp is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
pspsharp is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with pspsharp. If not, see <http://www.gnu.org/licenses/>.
*/
namespace pspsharp.HLE.kernel.managers
{
using SceUid = pspsharp.HLE.kernel.types.SceUid;
///
/// <summary>
/// @author hli, gid15
/// </summary>
public class SceUidManager
{
// UID is a unique identifier across all purposes
private static Dictionary<int, SceUid> uidMap = new Dictionary<int, SceUid>();
private static int uidNext = 0x1; // LocoRoco expects UID to be 8bit
public static readonly int INVALID_ID = int.MinValue;
// ID is an identifier only unique for the same purpose.
// Different purposes can share the save ID values.
// An ID has always a range of valid values, e.g. [0..255]
private static Dictionary<object, LinkedList<int>> freeIdsMap = new Dictionary<object, LinkedList<int>>();
public static void reset()
{
uidMap.Clear();
freeIdsMap.Clear();
uidNext = 1;
}
/// <summary>
/// classes should call getUid to get a new unique SceUID </summary>
public static int getNewUid(object purpose)
{
SceUid uid = new SceUid(purpose, uidNext++);
uidMap[uid.Uid] = uid;
return uid.Uid;
}
/// <summary>
/// classes should call checkUidPurpose before using a SceUID </summary>
/// <returns> true is the uid is ok. </returns>
public static bool checkUidPurpose(int uid, object purpose, bool allowUnknown)
{
SceUid found = uidMap[uid];
if (found == null)
{
if (!allowUnknown)
{
Emulator.System.Console.WriteLine("Attempt to use unknown SceUID (purpose='" + purpose.ToString() + "')");
return false;
}
}
else if (!purpose.Equals(found.Purpose))
{
Emulator.System.Console.WriteLine("Attempt to use SceUID for different purpose (purpose='" + purpose.ToString() + "',original='" + found.Purpose.ToString() + "')");
return false;
}
return true;
}
/// <summary>
/// classes should call releaseUid when they are finished with a SceUID </summary>
/// <returns> true on success. </returns>
public static bool releaseUid(int uid, object purpose)
{
SceUid found = uidMap[uid];
if (found == null)
{
Emulator.System.Console.WriteLine("Attempt to release unknown SceUID (purpose='" + purpose.ToString() + "')");
return false;
}
if (purpose.Equals(found.Purpose))
{
uidMap.Remove(uid);
}
else
{
Emulator.System.Console.WriteLine("Attempt to release SceUID for different purpose (purpose='" + purpose.ToString() + "',original='" + found.Purpose.ToString() + "')");
return false;
}
return true;
}
public static bool isValidUid(int uid)
{
return uidMap.ContainsKey(uid);
}
/// <summary>
/// Return a new ID for the given purpose.
/// The ID will be unique for the given purpose but will not be unique
/// across different purposes.
/// The ID will be higher of equal to minimumId, and lower or equal to
/// maximumId, i.e. in the range [minimumId..maximumId].
/// The ID will be lowest possible free ID.
/// </summary>
/// <param name="purpose"> The ID will be unique for this purpose </param>
/// <param name="minimumId"> The lowest possible value for the ID </param>
/// <param name="maximumId"> The highest possible value for the ID </param>
/// <returns> The lowest possible free ID for the given purpose </returns>
public static int getNewId(object purpose, int minimumId, int maximumId)
{
LinkedList<int> freeIds = freeIdsMap[purpose];
if (freeIds == null)
{
freeIds = new LinkedList<int>();
for (int id = minimumId; id <= maximumId; id++)
{
freeIds.AddLast(id);
}
freeIdsMap[purpose] = freeIds;
}
// No more free IDs?
if (freeIds.Count <= 0)
{
// Return an invalid ID
return INVALID_ID;
}
// Return the lowest free ID
return freeIds.RemoveFirst();
}
public static void resetIds(object purpose)
{
freeIdsMap.Remove(purpose);
}
/// <summary>
/// Release an ID for a given purpose. The ID had to be created first
/// by getNewId().
/// After release, the ID is marked as being free and can be returned
/// again by getNewId().
/// </summary>
/// <param name="id"> The ID to be released </param>
/// <param name="purpose"> The ID will be releases for this purpose. </param>
/// <returns> true if the ID was successfully released
/// false if the ID could not be released
/// (because the purpose was not exiting or
/// the ID was already released) </returns>
public static bool releaseId(int id, object purpose)
{
LinkedList<int> freeIds = freeIdsMap[purpose];
if (freeIds == null)
{
Emulator.System.Console.WriteLine(string.Format("Attempt to release ID={0:D} with unknown purpose='{1}'", id, purpose));
return false;
}
// Add the id back to the freeIds list,
// and keep the id's ordered (lowest first).
//JAVA TO C# CONVERTER WARNING: Unlike Java's ListIterator, enumerators in .NET do not allow altering the collection:
for (IEnumerator<int> lit = freeIds.GetEnumerator(); lit.MoveNext();)
{
int currentId = lit.Current;
if (currentId == id)
{
Emulator.System.Console.WriteLine(string.Format("Attempt to release free ID={0:D} with purpose='{1}'", id, purpose));
return false;
}
if (currentId > id)
{
// Insert the id before the currentId
lit.set(id);
lit.add(currentId);
return true;
}
}
freeIds.AddLast(id);
return true;
}
}
}
|
STACK_EDU
|
Besides the usual amount of socks, Xmas brought along a touch of flu and a certain predisposition for reading - among the many books I've read these past few days, I can heartily recommend Elizabeth Moon's Speed Of Dark, and the timeless The Stars My Destination. I haven't started on my Tom Sharpe pile yet, but considering the way my sinuses get in the way of actually doing stuff, I guess it won't be long.
The Push E-Mail Story
RIM is back in the news, with its ongoing plight to license its messaging solution to just about anything under the sun. Which makes a lot of sense, since after the injunction that bars it from actually selling hardware in the US, it needs other sources of income to stay afloat.
However, I'm amazed at the cluelessness of most of the press coverage, since it invariably assumes the availability of RIM's solution on every platform is A Good Thing and touts "push" e-mail as the best thing since sliced bread (which it may be, but I'm naturally suspicious of anything that is promoted as enthusiastically). Most, however, don't even realize what "push" is.
Let's step back a bit, shall we? RIM originally developed their messaging platform (which is not literally a "push" platform, because it - like MMS - signals the device that new content is available and it then "pulls" it as part of its over-the-air sync procedure) in such a way that it provided centralized (initially Exchange-based) storage of rich content (such as Office attachments) and dynamic format conversion in order to make those readable on a PDA.
Allow me to explain again: there is no such thing as push e-mail. What happens is that your device gets a notification of new e-mail and then downloads a translated (i.e., barebones) version of your e-mail that your Blackberry can render, plus a few tags that allow the Blackberry and RIM's content translator to keep in sync (so that you can manipulate attachments and so on remotely).
RIM has effectively implemented a split-layer e-mail reader, with the central servers pre-rendering the content into a simplified (actually mostly text) markup that the device renders on-screen. Interaction is smooth and efficient because the client is very closely coupled to the central format translator, and because the data that actually needs to be transferred is very small.
Now this is not that much different from, say, webmail over WAP with a decent (i.e., large) mobile phone screen and a set of attachment filters on the server side to perform the format translation. In fact, WAP might actually be a bit more efficient than the RIM protocol, since it was painstakingly designed to work with just about any mobile phone - and if you use it on a GPRS link with a large-size display like a Palm, it's actually fast and usable.
Of course, you don't have the same user interface (WAP has a fairly restricted - and stupid - set of WML widgets), but the only thing we need from it is the text packing and rendering - the user interface can be wrapped around a WAP browser (in mostly the same way as some applications wrap Safari or Internet Explorer) and provide menus, buttons, dialogs, etc.
RIM didn't go with WAP because they didn't have a bunch of vendors alongside them doing design by committee (which explains WAP's schizoid approach at reinventing HTTP) and needed to target just the one platform. But it seems to me that adding a number of new clients to the mix will either result on a uniformly "bare" experience (not taking advantage of each device's strong points) or in subtle, annoying differences in functionality.
As always, we'll see. The Blackberry was a modest success in the US. And I say "modest" deliberately - it catered to the needs of a fairly confined market segment, and people should develop a sense of proportion when dealing with mobile data. Besides, there were no real alternatives for mobile messaging when it came out.
In Europe, with its prevalent SMS culture, MMS and GPRS on the rise and ready availability of PDAs supporting IMAP (which lets you read all your messages quickly without downloading attachments), the only edge RIM has is its content adaptation server.
I wonder if it will be enough.
|
OPCFW_CODE
|
I've been in this situation too when I looked for jobs a while ago. Seeing bad reviews online should be taken with a grain of salt, but also given some consideration. If there are repeated patterns in the reviews, that might be cause for alarm. In one situation, a company I researched company actually laid off a large group of workers so there were many bad reviews; however, that particular department was moved elsewhere and expanded. Many people were not happy about it, and their reviews were written in angry reaction without much thought or composure behind them.
Given that, I have never let that deter me from interviewing or applying for a position at one of the companies I was interested in. No company is perfect, and every individual has different perspectives and backgrounds that influence their opinion. A few reviews online do not give you a full picture of the company. I would also consider what press a company has, its funding, workforce size and demographics, locations, and such. You might end up shortchanging yourself if you make a decision solely based on a few short reviews online, which may be outdated and inaccurate. Sometimes those reviews can be very superficial.
What I found to be a more effective way of figuring out how to gauge a company beforehand is to actually go physically network with people (i.e. meet them for coffee) who are either current or former employees that are either in the same or similar role as you. In a way, it's an informal interview; don't ask for a job outright, but say that you're exploring new opportunities and would like to know what a particular role entails, how the company works, etc. In my experience, people were often very receptive and open to meeting me (nearly 100% response rate vs 0% in online resume submissions) and sharing our experiences and perspectives with each other. It also gives you a better "in" if you do choose to apply for said company. Even better, they may also open you up to more opportunities that they think might suit you.
Regardless, when you do have an interview with a company, that is your opportunity to make sure that you ask them good questions. It's just as much as an interview of them as it is of you. Don't be combative or outright about what you want to ask. Phrase it in a way that is abstracted from any particular reviews, people, or sites.
- How is x at this company?
- What do you (the company) do in situation y?
- How does the company handle z?
- Ask questions about the why behind decisions, direction, strategy, etc.
These are very general and can vary depending on what it is you're applying for. I suggest tailoring them specifically to the company or role. You can also ask your other interviewers the same questions or variations on them to get a more complete picture and see what the company's strengths and weaknesses are. In the end, you should be asking questions and making decisions based on what is best for yourself. It is your job search and life, not someone else's, and you should do what makes you happy.
|
OPCFW_CODE
|
- Information of Snakebird APK
- Features of Snakebird APK Latest version
Snakebird APK combines simplicity with deceptive challenges to offer you a delightful puzzle adventure. As Redbirds, Greenbirds, and Bluebirds embark on a quest for fruit that exceeds any bird's wildest dreams, come along on their journey. Reach your fruity goals by navigating mind-bending puzzles, assuming different shapes, and defying physics.
A burning question arises in Snakebird's thriving world: What is the maximum length a bird can attain? There are plenty of fruits scattered throughout the puzzles in the game, which reveals the solution. These mysterious fruits are hidden in remarkable locations, which Redbird, Greenbird, and Bluebirds explore on a captivating journey. Your guide will help the birds fulfill their fruity desires as they navigate intricate levels.
The gameplay experience in Snakebird is both simple and challenging. Throughout each level, you must think strategically and use your brain to solve the puzzles. By assuming different shapes and utilizing their abilities, the birds can push objects, lift platforms, and even defy the laws of physics. Your problem-solving skills and creativity are challenged as the game progresses.
Features of Snakebird APK Latest version
Puzzles that bend your mind
Logic thinking and spatial awareness will be challenged by a series of intricately designed puzzles. Reach the coveted fruits by navigating obstacles, avoiding traps, and finding the most efficient path.
Adaptability to changing shapes
Use the bird's unique ability to change shapes and sizes to your advantage. They are able to bridge gaps, reach higher platforms, and navigate narrow passageways thanks to their elongated bodies.
A mechanics that defy physics
Snakebird Game challenges the laws of physics with its gameplay elements. Explore perplexing challenges by manipulating gravity, teleporting through portals, and interacting with objects.
The visuals are captivating
You'll experience vibrant colors, whimsical characters, and beautifully crafted environments in this visually captivating game. The visuals in each level enhance the gameplay experience and make each level a visual treat.
An increasing level of difficulty
The puzzles in the game become increasingly challenging as you progress. There will be challenges to overcome, intricate level designs, and cleverly hidden fruits to find. Your problem-solving skills will be challenged by this game.
A challenging and unique gameplay experience
It combines simplicity with a high level of challenge in Snakebird, a refreshing take on puzzle games. This game provides players with a stimulating and engaging mental workout in which they can engage and challenge themselves intellectually.
Ability to solve problems creatively and logically
As you navigate through complex puzzles, you're encouraged to think creatively and solve problems. Using innovative strategies, experimenting with different approaches, and thinking outside the box are all necessary to overcome obstacles.
A replayable experience
As Snakebird progresses, there are a number of levels and increasing difficulty puzzles that can be solved. As you progress through the levels, you may discover new solutions and aim for higher scores as you improve your performance.
Q: Is Snakebird suitable for players of all ages?
A: It is designed to be enjoyed by players of all ages. However, due to the game's challenging nature, younger players may require additional assistance and guidance.
Q: Can the game be played offline?
A: Yes, This game can be played offline, allowing you to enjoy the puzzles anytime, anywhere without requiring an internet connection.
Snakebird free APK will test your problem-solving skills with its captivating and challenging puzzle experience. A quest for the ultimate fruit feast begins for Redbird, Greenbird, and Bluebird. This mind-bending puzzler features shape-shifting mechanics, captivating graphics, and engaging gameplay. Embark on a journey to discover a world where birds defy physics and fruit beckons around every corner.
|
OPCFW_CODE
|
<?php
namespace LotrBundle\Repository;
use Doctrine\ORM\EntityRepository;
use phpDocumentor\Partials\Collection;
/**
* Class CharactersTripRepository
* Repository for all custom call to the database on the table characters_trip
*
* @package LotrBundle\Repository
*/
class CharactersTripRepository extends EntityRepository
{
/**
* Search a row for a specific date and a specific character
*
* @param Collection $character
* @param string $date
* @return array|string
*/
public function getCharactersTripByDateForOne($character, $date)
{
$query = $this->createQueryBuilder('c')
->where('c.date = :date AND c.character = :slug')
->setParameter('date', $date)
->setParameter('slug', $character)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : date not found";
}
return $result;
}
/**
* Search a row for a specific place and a specific character
*
* @param Collection $character
* @param integer $coordX
* @param integer $coordY
* @return array|string
*/
public function getCharactersTripByCoordForOne($character, $coordX, $coordY)
{
$query = $this->createQueryBuilder('c')
->where('c.character = :character AND c.coordx = :coordX AND c.coordy = :coordY')
->setParameter('character', $character)
->setParameter('coordX', $coordX)
->setParameter('coordY', $coordY)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : " . $character[0]->getSlug() . " never passed here";
}
return $result;
}
/**
* Search a row for a specific date, a specific place and a specific character
*
* @param Collection $character
* @param integer $coordX
* @param integer $coordY
* @param string $date
* @return array|string
*/
public function getCharactersTripByCoordAndDateForOne($character, $coordX, $coordY, $date)
{
$query = $this->createQueryBuilder('c')
->where('c.character = :character AND c.coordx = :coordX AND c.coordy = :coordY AND c.date = :date')
->setParameter('character', $character)
->setParameter('coordX', $coordX)
->setParameter('coordY', $coordY)
->setParameter('date', $date)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : " . $character[0]->getSlug() . " wasn't here at this date";
}
return $result;
}
/**
* Search the rows for a specific period and a specific character
*
* @param Collection $character
* @param string $date1
* @param string $date2
* @return array|string
*/
public function getCharactersTripByPeriodForOne($character, $date1, $date2)
{
$query = $this->createQueryBuilder('c')
->where('c.character = :character AND c.date BETWEEN :date1 AND :date2')
->setParameter('date1', $date1)
->setParameter('date2', $date2)
->setParameter('character', $character)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : period not found";
}
return $result;
}
/**
* Search the rows for a specific place during a specific period, for a specific character
*
* @param Collection $character
* @param integer $coordX
* @param integer $coordY
* @param string $date1
* @param string $date2
* @return array|string
*/
public function getOneCharactersTripByPlaceAndPeriodForOne($character, $coordX, $coordY, $date1, $date2)
{
$query = $this->createQueryBuilder('c')
->where('c.character = :character AND c.coordx = :coordX AND c.coordy = :coordY AND c.date BETWEEN :date1 AND :date2')
->setParameter('character', $character)
->setParameter('coordX', $coordX)
->setParameter('coordY', $coordY)
->setParameter('date1', $date1)
->setParameter('date2', $date2)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : " . $character[0]->getSlug() . " was not here during this period";
}
return $result;
}
/**
* Search the rows for a specific date for all characters
*
* @param string $date
* @return array|string
*/
public function getCharactersTripByDateForAll($date)
{
$query = $this->createQueryBuilder('c')
->where('c.date = :date')
->setParameter('date', $date)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : date not found";
}
return $result;
}
/**
* Search the rows for a specific place for all characters
*
* @param integer $coordX
* @param integer $coordY
* @return array|string
*/
public function getCharactersTripByCoordForAll($coordX, $coordY)
{
$query = $this->createQueryBuilder('c')
->where('c.coordx = :coordX AND c.coordy = :coordY')
->setParameter('coordX', $coordX)
->setParameter('coordY', $coordY)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : coordinates not found";
}
return $result;
}
/**
* Search the rows for a specific date and a specific place for all characters
*
* @param integer $coordX
* @param integer $coordY
* @param string $date
* @return array|string
*/
public function getCharactersTripByCoordAndDateForAll($coordX, $coordY, $date)
{
$query = $this->createQueryBuilder('c')
->where('c.coordx = :coordX AND c.coordy = :coordY AND c.date = :date')
->setParameter('coordX', $coordX)
->setParameter('coordY', $coordY)
->setParameter('date', $date)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : nobody here at this date";
}
return $result;
}
/**
* Search the rows between a period for all characters
*
* @param string $date1
* @param string $date2
* @return array|string
*/
public function getCharactersTripByPeriodForAll($date1, $date2)
{
$query = $this->createQueryBuilder('c')
->where('c.date BETWEEN :date1 AND :date2')
->setParameter('date1', $date1)
->setParameter('date2', $date2)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : nobody here at this period";
}
return $result;
}
/**
* Search the rows between a period where place watch, for all characters
*
* @param string $date1
* @param string $date2
* @param integer $coordX
* @param integer $coordY
* @return array|string
*/
public function getCharactersTripByPeriodAndPresenceForAll($date1, $date2, $coordX, $coordY)
{
$query = $this->createQueryBuilder('c')
->where('c.coordx = :coordx AND c.coordy = :coordy AND c.date BETWEEN :date1 AND :date2')
->setParameter('date1', $date1)
->setParameter('date2', $date2)
->setParameter('coordx', $coordX)
->setParameter('coordy', $coordY)
->getQuery();
$result = $query->getResult();
if(!$result)
{
$result = "error : nobody here at this date";
}
return $result;
}
}
|
STACK_EDU
|
Lovelynovel The Mech Touchblog – Chapter 2912: Catharsis vest ambiguous share-p2
Novel–The Mech Touch–The Mech Touch
The Mech Touch
Chapter 2912: Catharsis painful scared
Break! Crack! Crack! Fracture!
In the mean time, the most important component of her spirituality changed in response to her wish to structure better swordsman mechs. A sizable taste of your ideas in their mind such as a certain amount of her unyielding will have trapped inside the vortex which was currently in the act of condensing her layout seed!
bring me his ears
A feeling of urgency drove her forwards. She intuitively sensed that hauling out this suit would not go nicely on her behalf. She had to find a way to pin decrease her rival and make use of one among his weak spots!
home geography for primary grades
He even acquired enough time to torment Ketis by picking at her biggest cognitive weak point!
From her comprehension, sword initiates were equivalent to professional job hopefuls. Either were definitely fantastic fighters who got eliminated far above to uncover their secret potential.
However her rival did not drop out. Consistent instruction and determination within a sword style honed his will with an fantastic diploma. Even if Ivan was lacking in quant.i.ty, he got plenty of excellent to make up for his weak points!
Her lip area briefly shifted as she uttered a whisper.
Despite the fact that he rapidly dashed rear, he observed to his amaze that Ketis managed to acquire a burst open of pace. Even as it had not been plenty of to suit his speed, she was still able to find close adequate to pose a severe danger!
Despite the fact that he rapidly dashed lower back, he observed to his surprise that Ketis had been able to accomplish a burst open of speed. Even as it had not been ample to fit his schedule, she was still able to get close up plenty of to present a serious threat!
“I have got long grown distressed at my lack of ability to catch up to my advisor and sisters!”
“I have lengthy produced irritated at my lack of ability to catch up to my advisor and sisters!”
The Mech Touch
However, her unyielding will grew more solid. Everytime she suffered a setback, she grew to become more unwilling permit her rival have his way!
“I actually feel so powerless for my lack of ability to conserve my first teacher and tutor!”
“I actually feel so powerless for my lack of ability to help save my first teacher and mentor!”
Two humongous adjustments appeared concurrently.
She could not minimize something was away from her get to!
It was as though Ketis was cutting air strength that should have constrained her velocity!
An extended and small trench acquired developed in front of Ketis as her strength blade managed to reduce profoundly in the resilient flooring content!
Ketis grew mad. Ivan was constantly attacking her trust and graphic as a swordswoman. He was essentially expressing that properly-skilled Heavensworders like him were actually a lot more remarkable than a person who learned swordsmans.h.i.+p inside of a significantly less organized fas.h.i.+on.
Ivan was much like a preciseness musical instrument. His higher command made it possible for him to realize final results with considerably less hard work.
Was she quickening or was he slowing? Neither justification created perception, but Ivan somehow noticed as if he had inadvertently entered into a problem!
“I had misplaced numerous sisters as a result of my inabiility!”
“I surrender!” Ivan yelled in worry. “Don’t trim me down!”
What truly mattered was whether a swordsman was able to produce their self-control. This is no straightforward procedure and everybody experienced a several technique to hone and condense their wills.
Although he was constantly das.h.i.+ng and getting around, he experienced always rationed his will through the entire duel. He failed to care and attention too much about his real exertion because of his system augmentations.
She breathed deeply, therefore have her opponent. As they ended up not even close to getting to the purpose of weakness because of the augmented systems, their ingestion was not gentle.
“From now on, your name is Bloodsinger.”
Various strength s.h.i.+elds shattered in fast succession while they ended up not able to avoid the sheer might and incredible cutting power of Ketis’ fatal slice!
Possibly they will likely have conflicted under ordinary circ.u.mstances, but her thoughts and heart failed to display any warning signs of breaking.
|
OPCFW_CODE
|
[Bug]: Show reminders doesn't bring up the reminders list in Windows
Describe the bug
When I execute the Reminder: Show reminders command, nothing happens - no list is displayed, in neither the right-, nor the left pane.
Manifested in two environments:
Windows 10 Enterprise Version 2004 (OS Build 19041.1415)
Obsidian 1.1.9
Obsidian-Reminder 1.1.15
Windows 11 22H2 (OS Build 22621.963)
Obsidian 1.1.9
Obsidian-Reminder 1.1.15
Expected Behavior
I expect that a dedicated window containing the list of existing reminders should be displayed in the right application pane.
Steps to reproduce
Create several tasks with valid but different due dates. (This is because I wasn't sure which reminders were expected to be listed with the Show reminders command - pending reminders, or activated and dismissed ones)
A task with a due date in the future
A task with a due/reminder date in one minute.
Wait for the in-app pop-up reminder to kick in.
Dismiss it without postponing it.
Execute Ctrl-P to bring up the Select a command pop-over
Execute the Reminder: Show reminders command
Observe that no reminders list window is displayed
Operating system
Windows
same thing happens to me in macos
same thing. is this project still being maintened?
@wiyrim These types of projects are not commercial ones where organizations are committed to maintaining the software because customers are paying for them.
We should be gratefult that the author has put in the effort to create this plugin and let us use it. We don't know what situation they are at the moment - maybe they're overwhelmed at work, they may have personal issues and might not feel like responding to every support requests there is, etc.
@nasko Thank you for your concern. I am fine! On a personal note, I have not been able to devote much time to this project due to the birth of my child and various other factors.
If you guys submit a pull request, I will review and incorporate changes as much as possible.
This also happens to me on MacOS
This happens to me on two different Linux devices and two Android devices so I guess it's independent of the operating system. I thought that maybe it was a conflict between two plugins so I deactivated all other plugins, but no change. There is also no error log in the developer console. I can inspect the object and the reminders are there.
The calendar popup works, too.
I can also see the reminders in the sidebar at the right, so this bug is not really disturbing my workflow.
I can also see the reminders in the sidebar at the right, so this bug is not really disturbing my workflow.
@SuzanaK How did you expose the reminders in the sidebar on the right? Could I ask you to show a screenshot here (of course masking any personal details from your reminders)? Thanks!
I'm not able to see them anywhere, and that was the reason why I logged this issue - I was hoping that executing the Reminder: Show reminders command would display them, but this I can't achieve.
@nasko I did do nothing to show the reminders on the sidebar. They already there when I start Obsidian. I'ts another tab on the right sidebar together with the tags, the calendar etc. This is what it looks like:
|
GITHUB_ARCHIVE
|
I tried testing the example in v2.3 c# example.
I openened the project in vs2012 and added a reference to pdfcreator.exe
the reference in the file was
this has to be edited to
then it compiled.
the example worked up to the point where it gets the job from the queue
then throws exception stating no valid version of ghostscript was found
but ghost script is in the relevant folder under pdf creator
I upgraded to 2.4 and
opened the example c# project in vs2012
I added the reference to PDFCreator.exe as before
but this time it will not even build.
it is stating that the namespace PDFCreator.COM does not exist…
can someone please help me on this
I do know how to program c# but this is confusing me
I can at least answer the last part:
The COM interface has been moved to the file PDFCreator.COM.dll. If you are referencing the PDFCreator.exe, you now have to reference the PDFCreator.COM.dll. The interface itself did not change. The solution to the Ghostscript issue can sometimes be to install the x86 version of Ghostscript again to PDFCreator path, replacing the GS that came with PDFCreator.
I have a similar problem with v2.4.
When I was trying early binding I was using references to pdfcreator.exe and to pdfcreator.com.dll as well as
When executed I’m getting the following error at code line “printJob.ConvertTo(convertedFilePath);”:
“System.Runtime.InteropServices.COMException” in PDFCreator.ComImplementation.dll
“Object reference not set to an instance of an object”.
I can only get the example code to work with late binding and a reference to pdfcreator.exe.
Could you Robin or someone else please post a working example with late binding including the needed references?
Thanks in advance.
It is still the same in version 2.5.1 and 2.5.2.
Have you found a solution or workaround?
Remove references to PdfCreator.Exe and PdfCreator.COM.DLL in your Vb.Net or C# Project.
You have to modify the PdfForge sample code as shown below :
Private Sub testPage_btn_Click(sender As Object, e As EventArgs) Handles testPage_btn.Click
**Dim oPrintJob As System.Type = System.Type.GetTypeFromProgID("PDFCreator.PrintJob")**
Dim oPrintJobCom As Object
Dim assemblyDir, fullPath As String
**Dim oJobQueue As System.Type = System.Type.GetTypeFromProgID("PDFCreator.JobQueue")**
**Dim oJobQueueCom As Object = System.Activator.CreateInstance(oJobQueue)**
**Dim oPdfCreatorDef As System.Type = System.Type.GetTypeFromProgID("PDFCreator.PdfCreatorObj")**
**Dim oPdfCreatorObj As Object = System.Activator.CreateInstance(oPdfCreatorDef)**
assemblyDir = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location)
assemblyDir = assemblyDir.Replace("\bin\Debug", "\Results")
fullPath = Path.Combine(assemblyDir, "TestPage_2Pdf.pdf")
… etc. etc. etc.
About GhostScript : ( C:\Program Files (x86)\gs\gs9.10\bin )
Please respect the relation PdfCreator Version and Ghostscript Version.
From PdfCreator 2.4.0 use Ghostscript 9.19
From PdfCreator 2.2.0 use Ghostscript 9.10
From PdfCreator 2.1.2 use Ghostscript 9.16
From PdfCreator 2.0.1 use Ghostscript 9.14
From PdfCreator 1.9.5 use Ghostscript 9.14
The best stable Ghostscript version are 9.10 and 9.19
Then I recommend to use PdfCreator 2.2.2 or PdfCreator 2.5.2
Sometime PdfCreator not recognizes Ghostscript software.
You have to install and remove the corresponding version of Ghostcript setup for resolve windows registry problems. If the problem persist don’t remove the Ghostscript installation and for the Ghostscript 9.10 version copy the Ghostscript DLL provided by PdfCreator from path > C:\Program Files\PDFCreator\Ghostscript\Bin over the DLL in the path > C:\Program Files (x86)\gs\gs9.10\bin
It seems that all the version used from PdfCreator are Win32.
Vous devez vous rappeler que « LateBinding » exige: « Imports System.Reflection » et aucune référence aux deux DLL.
Hi Robin, is there any C# example that really work with the COM interface?
|
OPCFW_CODE
|
import path from 'path';
import fse from 'fs-extra';
import { PackageInfo } from '../interfaces';
import { writeIfChanged, getRootInfo } from '../misc';
import { getDocPath } from '../packages';
export async function generateReadme(pkgInfo: PackageInfo): Promise<string> {
const rootInfo = getRootInfo();
const docsPath = getDocPath(pkgInfo, true, false);
let issuesUrl: string;
const encodedLabel = pkgInfo.folderName === 'e2e' ? `${pkgInfo.folderName}` : `pkg%2F${pkgInfo.folderName}`;
const isGithub = rootInfo.bugs.url.includes('github');
if (isGithub) {
issuesUrl = `${rootInfo.bugs.url}?q=is%3Aopen+is%3Aissue+label%3A${encodedLabel}`;
} else {
// work with gitlab too
issuesUrl = `${rootInfo.bugs.url}?state=opened&label_name[]=${encodedLabel}`;
}
return `<!-- THIS FILE IS AUTO-GENERATED, EDIT ${docsPath}.md -->
# ${pkgInfo.displayName}
> ${pkgInfo.description}
This a package within the [${rootInfo.displayName}](${rootInfo.homepage}) monorepo. See our [documentation](${
rootInfo.documentation
}/${isGithub ? docsPath : `${docsPath}.md`}) for more information or the [issues](${issuesUrl}) associated with this package
## Contributing
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Please make sure to update tests as appropriate.
## License
[${pkgInfo.license}](./LICENSE) licensed.`;
}
export async function updateReadme(pkgInfo: PackageInfo, log?: boolean): Promise<void> {
const readmePath = path.join(pkgInfo.dir, 'README.md');
const contents = await generateReadme(pkgInfo);
await writeIfChanged(readmePath, contents, { log });
}
export async function generateOverview(pkgInfo: PackageInfo) {
const sideBarLabel = isE2E(pkgInfo) ? pkgInfo.displayName : 'overview';
return `---
title: ${pkgInfo.displayName}
sidebar_label: ${sideBarLabel}
---
> ${pkgInfo.description}`;
}
export async function ensureOverview(pkgInfo: PackageInfo, log?: boolean): Promise<void> {
const pkgDocPath = getDocPath(pkgInfo, true, true);
if (!fse.existsSync(pkgDocPath)) {
const contents = await generateOverview(pkgInfo);
await writeIfChanged(pkgDocPath, contents, {
mkdir: true,
log,
});
}
}
function isE2E(pkgInfo: PackageInfo) {
return pkgInfo.folderName === 'e2e';
}
|
STACK_EDU
|
Was the Injeel ever written in book form?
Is there any evidence that Isa (a.s.) wrote the Injeel in book form? Because it seems that Allah has already gave him the Injeel when he was a baby.
[Jesus] said, "Indeed, I am the servant of Allah . He has given me the Scripture and made me a prophet.
Quran 19:30
Isa (a.s.) said this as a baby after his mother gave birth to him and took him to her people. So I was thinking that Allah put the revelations in his mind beforehand and the Injeel was never actually written.
This whole time the Injeel was just memorized inside Isa's mind. If the Injeel was actually a written text, don't you think someone would have found it by now? Instead, people keep finding very old un-canonical gospels written by Isa's disciples. No one ever finds the actual Injeel or what some people call "The Gospel of Jesus".
I could be wrong but I think the Injeel was in Isa's mind this whole time....not in actual text.
There is neither a historical evidence nor a hint in the Quran that Jesus wrote down a book:
No such book is ever menitoned in historical religios documents. It is not even related that Jesus has ever written down anything (except once in the sand).
Suppose Jesus had written down something, why should those who fervently believed in him and even gave their lives for their faith have failed to relate it? Why should authors of the written Gospel not at least integrate it into their accounts?
The main underlying misunderstanding is that the Gospel writings are the Gospel. The right term to use for the written texts is to speak of "Gospel accounts".
The Term Gospel, Greek: Euangelion, therefrom Arabic: Injil, means "Good Message". The Gospel is the Message God gave to the people through Jesus (p.b.u.h). This message was given by teachings and example deeds. Both, teachings and deeds are reported in the Gospel accounts, which were written down by scholars (John as an eye-witness, Mark in frist link in chain, as an author who had access to direct witnesses, Luke as a second link, Matthew and Thomas without known authors). They must be understood as hadith.
There are few more writings that may contain some truth but are too late to be reliable (e.g. the Infancy Gospel falsely attributed to James which shows some contents confirmed in the Quran), and a lot of earlier or later forgeries that intend to propagate teachings of the author as teachings of Jesus. The so-called Gospel of Barnabas is quite evidently a medieval forgery and - although the intention behind its writing is to adapt the accounts to Islamic teachings - it has no value.
The fact that the Injil is cited as a singular in the Quran may have two reasons:
As mentioned above, the Euangelion is actually the message of God, which is not divided.
The Syriac christian Tatian had made a compilation of the four canonical gospels in Aramaic in a single account which was still widely used in the Aramaic communities.
You write:
This whole time the Injeel was just memorized inside Isa's mind. If the Injeel was actually a written text, don't you think someone would have found it by now? Instead, people keep finding very old un-canonical gospels written by Isa's disciples. No one ever finds the actual Injeel or what some people call "The Gospel of Jesus".
I could be wrong but I think the Injeel was in Isa's mind this whole time....not in actual text.
I fully agree to this understanding. Any assumption of a "lost" Gospel written by Jesus (p.b.u.h) lacks any evidence, so that the "evidence" read from the Quran is just a misinterpretation.
Jesus never wrote any book he is the injeel to bring the good new ,himself being the messiah to save everyone. he choose the 12 disciples so he can fulfill the messianic prophecy , being eye witnesses , and learn from what Jesus has taught them so they can preach different nation(people) being gentile as well so he can be light for the gentile. (Mark 11:17,Mark 16:15,Luke 24:47,Matthew 24:14,Mark 13:10) and of course they need to write it they are eye witness of Jesus. or not we wouldn't accept it as truth. so he choose them. no were does the bible say "Jesus was reading the injeel" nor did it say "jesus was learning/writing the injeel" only says Jesus preached and the quran confirm this despite jesus having the injeel when mary took him to the people as a baby Quran 19:30, so no Jesus never wrote anything only preached
According to the refrence of Gospel of st.bernabas and according to christians religious stories it waS well known that the mountain in the Nazrath called( jabley zaytoon) i.e mountain of Zaytuon was only the plACE where Jeuses Christ met the angel Gabrieal and he revealed him the holy gospel in ARamaic language (SOME TIME known as seriac language).and jesus reminded the whole revelation this name is also mentioned clearly in the HOLY QURAN IN SURAH ALTEEN VERSE NO. 1 where ALLAH ALMIGHTY SWERE at the names of 3 holy places. then after that day the holy gospel i.e evengel was explored by the jeuses as he spoke beside his 12 apostels at diiferent occassions and at different places including all his life visited places as in jeudas ,in jerusalam and in kefer nahum and in galileth and nazreth .during his sermons at holy temlpe and among companey of his apostels he said that all the words which he speaks in his whole life before his lifting up to the heavens are the parts of the revelation which was given to him by the YEHWA the ALMIGHTY GOD.
the gist of the whole speech is that gospel of GOD was not actually written in the scriptual form i.e in book form .after his transfer from earth to the heavens .his apostels which were 12 in number ,narrated the whole life history of the jeuses christ in the book form .the main point is that every apostel of the jeuses christ wrote the life history of the jeuses christ as for example like gospel of judas , gospel of peter ,gospel of jhon ,gospel of methew , gospel of marry and many other like gospel of st bernabas .and when the apostels of jeuses were sett off on their preaching mission they preached about the miracles and good news and life events of jeuses christ whatever they had heard or seen in their life or whatever they wittnesed .and thatswhy the people of that region had coppied their teachings and their preached lectures and then gospels were spread all over the world in the early christianity days but when st. paul took over the charge of christian religion he established his own faith rules and there is an importent point is that the gospels of christianty were became so common that every person claimed to be the writer of holy gospel thats why as a result the christians holy scriptuae i.e GOSPELS were reached to the counting of upto hundred books known as GOSPEL BY JEUSES CHRIST then under the authority of romen empire all pops of the early christianity decided to pic up the any four canonical gospel out of those numerous fake gospels .they established a merit called (christians rule of faith ) and selected four gospels i.e of luke , methew mark and jhon. and all other were rejected .it is also claimed that in early christianty days the attacks of anti christ powers had severly harmed the rare scriptuas of their holy religion including hebrew context of the gospel of st. methew .thats why we can say that pure and rare holy gospel of jeuses was losted in the first century. christians leaders says that then after the REMOVAL of ALEXENDIAN CHURCH .THE ST.paul established the modern christianty .which is acceptable to the christian world even today.(published by ANEESI writers).
this is only the post which i have uploaded first tme in my life it doesnot relates or not pointingout some ones emotions it is only for the informative purpose for public only .it is finally based on my own life research about the HOLY GOSPEL@
Those who follow the Messenger, the unlettered prophet, whom they find written in what they have of the Torah and the Gospel, who enjoins upon them what is right and forbids them what is wrong and makes lawful for them the good things and prohibits for them the evil and relieves them of their burden and the shackles which were upon them. So they who have believed in him, honored him, supported him and followed the light which was sent down with him - it is those who will be the successful.
Quran 7:157
In reference to the "Injeel" or "Gospel" that is being referenced by Muhammad, he believed it was available during the time when he existed. The problem that arises is that every piece of historical evidence we have points to the manuscripts that the Christian New Testament is translated from today. There is no historical evidence anywhere for there being a written gospel different than what is available in the New Testament today.
"Whatever Muhammad was talking about" - Could you elaborate?
Yes, in 7:157 it says "whom they find written in what they have of the Torah and the Gospel". Muhammad was talking about what was currently available that was written in the Torah and the Gospel. This injeel is what I was referring to when I said "Whatever Muhammad was talking about".
Are you answering from an Islamic point of view?
I am answering from a historians point of view.
We are not a typical an internet forum. Therefore we expect answers to be well elaborated, see "how to answer" for more information. I strongly recommend you to take our 2 min. tour and visit our help center to learn more about this site and the stack exchange model.
Thank you, I will definitely take a look. The original question though was fairly simple. The short answer is yes, the Quran explicitly states that it was written and available. I thought I had already expanded beyond what was necessary, but I will review the guidelines.
|
STACK_EXCHANGE
|
Is the Linux kernel a security problem?
Security is an ongoing issue for all operating systems, including Linux. While Linux has generally had a good reputation compared to Windows when it comes to security, no operating system is perfect. A writer at Ars Technica recently examined the issue of security and the Linux kernel.
JM Porup reports for Ars Technica:
The Linux kernel today faces an unprecedented safety crisis. Much like when Ralph Nader famously told the American public that their cars were “unsafe at any speed” back in 1965, numerous security developers told the 2016 Linux Security Summit in Toronto that the operating system needs a total rethink to keep it fit for purpose.
No longer the niche concern of years past, Linux today underpins the server farms that run the cloud, more than a billion Android phones, and not to mention the coming tsunami of grossly insecure devices that will be hitched to the Internet of Things. Today’s world runs on Linux, and the security of its kernel is a single point of failure that will affect the safety and well-being of almost every human being on the planet in one way or another.
“Cars were designed to run but not to fail,” Kees Cook, head of the Linux Kernel Self Protection Project, and a Google employee working on the future of IoT security, said at the summit. “Very comfortable while you’re going down the road, but as soon as you crashed, everybody died.”
“That’s not acceptable anymore,” he added, “and in a similar fashion the Linux kernel needs to deal with attacks in a manner where it actually is expecting them and actually handles gracefully in some fashion the fact that it’s being attacked.”
The article about the security of the Linux kernel spawned a lively discussion in the comments section and folks there weren’t shy about sharing their thoughts:
Haravikk: “The kernel drivers are the biggest annoyance to me; I know that once upon a time they were picked for efficiency, but it’s been a long time since we really needed them to be loaded within the kernel itself. I mean, the idea of so much third-party code that you cannot test being within the single most important part of your operating system just horrifies me now.
It’s a big problem on OS X/macOS as well; maybe it’s just because I use it more but the quality of Mac drivers very often leaves a lot to be desired, and aside from one time when I had failing RAM, third party kexts have been the cause behind 99.9% of kernel panics I’ve encountered.
Interestingly I’ve never had as much of a problem with this on Linux, but then I mostly only use it in a server environment so it’s probably just not much of a risk in the first place. In fact I’ve had more problems on Windows and that’s the OS I use the least overall; I only use it for gaming yet it seems to find ways to randomly break despite my not changing anything.”
Raxx7: “I think this is a fools errand.
You can’t protect against consumer devices which whose manufacturer has crappy security policies and doesn’t release updated simply hardening the kernel. There are too many other layers of software which can fail and be exploited.
Hardening, sandboxing and etc are not a replacement for good update policies.
We need to legislate that companies are responsible for providing updates to their internet enabled products for X years or something like that.”
Kazper: “Yes and no. Updating is important and you are right you cannot completely prevent security bugs that must be fixed with an update. But you can get a long way by mitigation techniques, and if you can “kill” 80% of all security bugs with a safer driver model and handling that’s pretty important.
It’s not a replacement for updating but it’s a damn good complement given that zero days WILL exist for undetermined lengths of time before even being discovered, much less patched and updates pushed. Even with a mandatory update scheme.”
Amiasc: “When open source started it worked because the users did the testing , this worked when the users where all very technical and where aware of their limitations. We also rely on software in a completely different way now because of societies wider acceptance of computing. This has put the software in the hands of people who do not understand open source.
As usage of open source has expanded the Software Testing of it has not , there are many reasons for this , some being financial , some being technical, there are many of them and their reasons are not really that relevant. The closed source software that open source is competing with has dramatically increased its quality through enhanced development processes and embedding test within its development.
The challenge for the linux kernel is to raise its testing game by defining a large range of test environments and requiring new commits to include tests for those environments. Those tests need maintenance and validation but all of that doesn’t have to be done by the developer, writing tests is a good way to introduce new developers to the kernel.
I do a lot of software testing on linux based products and frequently hit examples of bad quality control in linux , typically the vendors i work with work around it instead of commiting changes, ease of access is a big thing. I wonder if a mechanism where by vendors can be disallowed to use linux in their product if they don’t include an updates system might help.”
Musashi31: “Isn’t this a good argument in favor of micro kernel architecture (with device drivers running in userland), the architecture that Linus found ’stupid.’”
Dfavro: “This is really, sadly, true. It gets depressing reading various vendors’ bugzilla instances and seeing all number of WONTFIXes and such where the issue is being bandied back and forth between upstream, driver maintainers, etc with no fix in sight.
For usability and performance issues, this is annoying, but for security it’s critical. And yes, the problem is updates (or the lack thereof) but that’s a problem that isn’t ever going to get solved because there’s no money in it and no method to enforce control even if there was some kind of financial incentive.
So yes, it probably will mean some significant re-architecting, and hopefully sooner than later such that less insecure devices end up in the field.”
Riddler876: “Moving drivers out of the kernel isn’t a panacea, given the userland program MUST have access to enough of the kernel through the userland API to do its job, if the userland driver is insecure it can still run rampant with that access. That’s not to say moving them out of the kernel wouldn’t solve some significant problems, it would, but its essentially an opinionated argument that the benefits outweigh the drawbacks. According to Linus they don’t, and hey it’s his kernel he can do what he wants with it. Someones free to fork it!
I hate fixing symptoms of problems instead of the problems themselves, from what I see from the arguments root problems here are the badly thought out IoT security landscape and driver writers poorly secured code. Unfortunately, I don’t think we can fix either of those two things in a hurry.”
Bsyp: “The real problem is there are too many competing technologies and protocols. And some/many/most of those companies are garbage and don’t care because their idea of providing updates is to force you to buy a new IoT doorknob to solve the problem. A doorknob should last the life of a house, not the life of the housefly in it.
There should be some sort of open source module, a single technology that talks to all needed devices regardless of manufacturer, that gets updated and continues to work on older devices, and can update itself since it’s connected to the internet. It would also be nice if it was self aware somehow where it could determine it was hacked, under attack etc, and only do one thing: wait for a secure update.”
Passivesmoking: “If kernel drivers suck then maybe it’s time to remove (or at least minimise) them. If I remember right microkernel research showed it was possible to push most if not all drivers into user space, though the performance hit was huge. Has there been any work done on mitigating that performance hit (other than by doing things like putting all the drivers in a service that runs in kernel space and making them effectively kernel drivers again, the way most modern systems that claim to be microkernels actually do)?
I know there’s performance considerations for things like GPUs, but really, does a USB port driver need to shave every cycle it can? ”
Chromebook Pixel 2 gets Android apps
The news that some Chromebooks would get Android apps got quite a lot of attention a while back. Now Google has released Android apps in the Chrome OS stable channel for the Chromebook Pixel 2.
Phil Oakley reports for Android Police:
A few days ago, Google released Android apps to two Chromebooks: the Acer Chromebook R11 and the ASUS Chromebook Flip. These arrived version 53 of Chrome OS, on the stable channel. However, the Chromebook Pixel 2, which has had Android apps in beta up until now, has been waiting for the stable release. This painful period is over, Pixel 2 owners, because you too can now join in on the Android fun with the release of stable Chrome OS 53 to last year’s flagship Chromebook.
From what we can tell, it works the same way as it did on the beta. Simply launch the Play Store app from the app launcher, wait for it to set up, then you should be able to download Android apps from the Store. It is possible, however, that some apps will be buggy, crash, or have missing features, because they were built for phones or tablets, not laptops.
6 open source fitness apps for Android
Speaking of Android, there are some useful fitness apps available that are worth considering if you need to get in shape. And each of them is open source software for your Android phone.
Joshua Allen Holm reports for Opensource.com:
A key part of developing a good fitness routine is creating a solid workout plan and tracking your progress. Mobile apps can help by providing readily accessible programs specifically designed to support the user’s fitness goals. In a world of fitness wearable devices like FitBit, there are plenty of proprietary apps designed to work with those specific devices. These apps certainly provide a lot of detailed tracking information, but they are not open source, and as such, do not necessarily respect the user’s privacy and freedom to use their own data as they wish. The alternative is to use open source fitness apps.
Below, I take a look at six open source fitness apps for Android. Most of them do not provide super detailed collection of health data, but they do provide a focused user experience, giving the user the tools to support their workouts or develop a plan and track their progress. All these apps are available from the F-Droid repository and are all licensed under the GPLv3, providing an experience that respects the user’s freedom.
Did you miss a roundup? Check the Eye On Open home page to get caught up with the latest news about open source and Linux.
This article is published as part of the IDG Contributor Network. Want to Join?
|
OPCFW_CODE
|
i did the following steps
1. boot the Host Debian 10 VM ( everything works fine in the past )
2. check the Debian 10 phpvirtualbox and virtualbox via "ssh -X vbox@host virtualbox"
3. Result : 32 bit VMs are working, 64 bit does not working, VT-X is shown as activated in the phpvbox interface
=> 2-3 days before the Debian 10 installation works without errors. Debian 10 System was not mounted since the last working boot
==> seems the error is NOT in the OS software or any other runnung software on the OS.
According to this information there is a hardware or hardware config error
4. Disable all features ( even if they wouldn' t have any effekt to the VT-X commands ) of ALL multiuse hardware - including hyperthreading ( just create a minimal error bios config )
5. Start the Debian 10 Installation again and looking for a working virtualbox 6.1.32
=> 32 bit works, 64 does not work, BUT!!! : now there is a message VT-X not available. - Everthing OK
6. Enable the disabled VT-X and other features again
7. Start the Debian 10 Installation again and looking for a working virtualbox 6.1.32
=> 32 and 64 bit VMs works
Now i can ensure correct working hardware.
8. Start the Debian 12 Installation using VBox 7.0.10 and phpVbox 7.0.10
9. check php webinterface => fail
10 check virtualbox GUI via ssh -X vbox@host virtualbox => GUI does not start, no error message anywhere ( GUI/system/VMLog)
11 remove VBox 7.0.10 ( apt purge virtualbox* && updatedb && locate virtual # remove all remaining files and configs ) except phpVB7.0
12 install phpvirtualbox 6.1 including extentions
13 start VBox GUI via ssh -X vbox@host virtualbox
=> everthing works fine 32 and 64 VMs , create, delete, import, export, register etc
14 reboot into D10 installation an backup D12 installation via dd
15 reboot into D12 installation
16 remove virtualbox 6.1.46 completely from system (deinstall via apt and remaining files)
17 install virtualbox-7.0 from virtualbox sources located in sources list
18 start VBox GUI via ssh -X vbox@host virtualbox
=> everthing works
19 check phpvirtualbox-7 webinterface
=> everthing works
20 Final reboot to check if really everthing works
=> mysql server does not start (Aug 14 13:29:33 080-ngNAS-Storage systemd: mariadb.service: Failed with result 'signal'.
21 remove entire mysql server, client and configs from system
22 reinstall mysql/mariadb server cleint and phpmyadmin
23 start mysql server and apache
24 => works
25 Final reboot check again
=> System boot with out any errors, autostarting VMs works, virtualbox GUI and phpVirtualbox works.
very mysterious ... i do this procedure about 20 times and more and it never works correctly. The VT-X flag was always marked as activated in the CPU specs (cat /proc/cpuinfo | grep vmx)
I think the error was caused by the fact, that the VT-X feature was enabled for the OS, so the OS and its components install thee VT-X features. But the Hardware does response the VT-X commands and then thre was the "null-info-error-message".
If i had know this before ...
Now i have some questions:
1. Is it possible that some broken bits in the VT-X config/register shows the OS active VT-X features even if they are disabled ?
2. Is it possible that the VM virtualization options ( default, none, legacy, Hyper-V, KVM ) changes any BIOS configuration or hardware settings ?
I really want to know whats this error causes, because its a very special error found by one over a week try and error.
Debain 12 Vbox 7.0.10 phpVbox 7.0 Install Guide => https://speefak.spdns.de/oss_lifestyle/ ... irtualbox/
|
OPCFW_CODE
|
Buenas, que tal, vengo con un problema en mi aplicación, lo que pasa es que al momento de cargar una screen, no carga los botones a como lo tengo programado, puesto que necesito que al momento de que se inicialice la screen de inicio, esos botones estén con información. Pienso yo que es por el tiempo tan rápido que carga la app, por lo que no da tiempo para cargar dichos botones con información, alguna sugerencia?
how do you load thess buttons with information?
It would really help if you provided a screenshot of your relevant blocks, so we can see what you are trying to do, and where the problem may be.
To get an image of your blocks, right click in the Blocks Editor and select "Download Blocks as Image". You might want to use an image editor to crop etc. if required. Then post it here in the community.
Thanks for the prompt response, the situation is that in the buttons I only want to add text referring to information obtained from a database in firebase, this information is short, but I need that, for example, if only 2 options are obtained in the declared list , only 2 buttons appear, as shown in the code, what happens is that when these values are obtained, the buttons are not loaded, for now I only want them to be displayed on the screen.
yes, loading some information from the internet takes a few seconds...
you might want to preset the button text with some default information or a text like "data loading..." until the data is available
Any suggestions on how to know that the internet data has already been obtained? and be able to show them.
use the FirebaseDB.GotValue event
Thank you, your comments have been very helpful!
Now, a new question has arisen, what can I do if I make a request with the Web component and the case comes in which the phone is not connected to the internet, how do I cast if I have received a response?
Use the Timeout property in the Web Component.
Use the TimedOut event to send a message that the content is not loaded. [For example : "Poor or bad network connection."]
Could you tell me how the TimeOut event works? I place it and it keeps throwing me the typical error 1101
Some things to notice before anything about the error :
- Did you set all the properties properly ?
- Is it possible to get the data from the Web component ?
- The most important, : Why do you even need to use the Web component to suffer, although you can do things easily with the FirebaseDB component ?
- Did you use the proper blocks actually.
Don't answer my questions, just think over them, and you may find the solutions yourself.
Cuz I don't know what blocks and properties you've used.
(And I won't be available for quite a few days, but thats not a prob, cuz this community has many experts to help you out, just wait for them, and they'll solve your problems. BYE )
first check, if there is internet, then do the request
see also this thread
|
OPCFW_CODE
|
Java program to determine whether a singly linked list is the palindrome
In this program, we need to check whether given singly linked list is a palindrome or not. A palindromic list is the one which is equivalent to the reverse of itself.
The list given in the above figure is a palindrome since it is equivalent to its reverse list, i.e., 1, 2, 3, 2, 1. To check whether a list is a palindrome, we traverse the list and check if any element from the starting half doesn't match with any element from the ending half, then we set the variable flag to false and break the loop.
In the last, if the flag is false, then the list is palindrome otherwise not. The algorithm to check whether a list is a palindrome or not is given below:
- Create a class Node which has two attributes: data and next. Next is a pointer to the next node in the list.
- Create another class Palindrome which has three attributes: head, tail, and size.
- addNode() will add a new node to the list:
- Create a new node.
- It first checks, whether the head is equal to null which means the list is empty.
- If the list is empty, both head and tail will point to a newly added node.
- If the list is not empty, the new node will be added to end of the list such that tail's next will point to a newly added node. This new node will become the new tail of the list.
a. reverseList() will reverse the order of the node present in the list:
- Node current will represent a node from which a list needs to be reversed.
- Node prevNode represent the previous node to current and nextNode represent the node next to current.
- The list will be reversed by swapping the prevNode with nextNode for each node.
a. isPalindrome() will check whether given list is palindrome or not:
- Declare a node current which will initially point to head node.
- The variable flag will store a boolean value true.
- Calculate the mid-point of the list by dividing the size of the list by 2.
- Traverse through the list till current points to the middle node.
- Reverse the list after the middle node until the last node using reverseList(). This list will be the second half of the list.
- Now, compare nodes of first half and second half of the list.
- If any of the nodes don't match then, set a flag to false and break the loop.
- If the flag is true after the loop which denotes that list is a palindrome.
- If the flag is false, then the list is not a palindrome.
a. display() will display the nodes present in the list:
- Define a node current which will initially point to the head of the list.
- Traverse through the list till current points to null.
- Display each node by making current to point to node next to it in each iteration.
Nodes of singly linked list:
1 2 3 2 1
Given singly linked list is a palindrome
|
OPCFW_CODE
|
UKHAS Parser Configuration¶
The UKHAS protocol is the most widely used at time of writing, and is implemented by the UKHAS parser module. This document provides information on how what configuration settings the UKHAS parser module expects.
Parser module configuration is given in the “sentence” dictionary of the payload dictionary in a flight document.
Generating Payload Configuration Documents¶
The easiest and recommended way to generate configuration documents is using the web tool genpayload.
Standard UKHAS Sentences¶
A typical minimum UKHAS protocol sentence may be:
This sentence starts with a double dollar sign ($$) followed by the payload name (here habitat), several comma-delimited fields and is then terminated by an asterisk and four-digit CRC16 CCITT checksum (*ABCD).
In this typical case, the fields are a message ID, the time, a GPS latitude and longitude in decimal degrees, and the current altitude.
However, both the checksum algorithm in use and the number, type and order of fields may be configured per-payload.
Parser Module Configuration¶
The parser module expects to be given the callsign, the checksum algorithm, the protocol name (“UKHAS”) and a list of fields, each of which should at least specify the field name and data type.
Three algorithms are available:
CRC16 CCITT (crc16-ccitt):
The recommended algorithm, uses two bytes transmitted as four ASCII digits in hexadecimal. Can often be calculated using libraries for your payload hardware platform. In particular, note that we use a polynomial of 0x1021 and a start value of 0xFFFF, without reversing the input. If implemented correctly, the string habitat should checksum to 0x3EFB.
The simplest algorithm, calculating the one-byte XOR over all the message data and transmitting as two ASCII digits in hexadecimal. habitat checksums to 0x63.
Not recommended but supported. Uses a modulus of 255 by default, if modulus 256 is required use fletcher-16-256.
In all cases, the checksum is of everything after the $$ and before the *.
Field names may be any string that does not start with an underscore. It is recommended that they follow the basic pattern of prefix[_suffix[_suffix[...]]] to aid presentation: for example, temperature_internal and temperature_external could then be grouped together automatically by a user interface.
In addition, several common field names have been standardised on, and their use is strongly encouraged:
|Field||Name To Use||Notes|
|Sentence ID (aka count, message count, sequence number)||sentence_id||An increasing integer|
|Time||time||Something like HH:MM:SS or HHMMSS or HHMM or HH:MM.|
|Latitude||latitude||Will be converted to decimal degrees based on format field.|
|Longitude||longitude||Will be converted to decimal degrees based on format field.|
|Altitude||altitude||In, or converted to, metres.|
|Temperature||temperature||Should specify a suffix, such as _internal or _external. In or converted to degrees Celsius.|
|Satellites In View||satellites|
|Battery Voltage||battery||Suffixes allowable, e.g., _backup, _cutdown, but without the suffix it is treated as the main battery voltage. In volts.|
|Pressure||pressure||Suffixes allowable, e.g., _balloon. Should be in or converted to Pa.|
|Speed||speed||For speed over the ground. Should be converted to m/s (SI units).|
|Ascent Rate||ascentrate||For vertical speed. Should be m/s.|
Standard user interfaces will use title case to render these names, so flight_mode would become Flight Mode and so on. Some exceptions may be made in the case of the common field names specified above.
Supported types are:
- string: a plain text string which is not interpreted in any way.
- float: a value that should be interpreted as a floating point number. Transmitted as a string, e.g., “123.45”, rather than in binary.
- int: a value that should be interpreted as an integer.
- time: a field containing the time as either HH:MM:SS or just HH:MM. Will be interpreted into a time representation.
- time: a field containing the time of day, in one of the following formats: HH:MM:SS, HHMMSS, HH:MM, HHMM.
- coordinate: a coordinate, see below
Coordinate fields are used to contain, for instance, payload latitude and longitude. They have an additional configuration parameter, format, which is used to define how the coordinate should be parsed. Options are:
- dd.dddd: decimal degrees, with any number of digits after the decimal point. Leading zeros are allowed.
- ddmm.mm: degrees and decimal minutes, with the two digits just before the decimal point representing the number of minutes and all digits before those two representing the number of degrees.
In both cases, the number can be prefixed by a space or + or - sign.
Please note that the the options reflect the style of coordinate (degrees only vs degrees and minutes), not the number of digits in either case.
Received data may use any convenient unit, however it is strongly recommended that filters (see below) be used to convert the incoming data into SI units. These then allow for standardisation and ease of display on user interface layers.
|
OPCFW_CODE
|
A vector, like all data in Rust can only be borrowed mutably once at a time, however, if I can guarantee that various slices of this buffer will never interfere with each other, I can guarantee that the multiple-borrow perfectly safe. However, I can't seem to find a valid way to convey this to the Rust compiler. I'm hoping for some advice on how I can do this.
I'm working on a database system which stores information in pages. A page is nothing more than a description of where the data is to be found. A page's contents are scattered over the backing object, in slices of bytes I call chunks. I now need to piece these chunks together to retrieve the information from within the page.
A page must implement AsRef<[u8]> in order to be useful to me, therefore the page's contents must be made contiguous somehow.
Since the backing object is bound by the traits Read + Write + Seek (as in most cases it will be a file), I can allocate a page-sized buffer and pass various slices of the buffer to the Read::read() method in order to populate it.
The backing buffer does not implement Send nor Sync, so all read/write operations are done on a single dedicated worker thread. Pages send various slices of their buffer to this thread in order to do the read/write. My question therefore is, how can I convey to the Rust compiler, that chunks (slices) of the page buffer never interfere with each other, and sending mutable slices to the worker thread is perfectly safe?
They can't be moved. The signature doesn't allow that. The cells to which the return value refere has to be owned somewhere too, but the lifetime means that they can only come from the argument. If thex were moved into the function, a reference to then couldn't be returned.
Because as_slice_of_cells returns each item wrapped in a cell. Each item in this case happens to be a byte. If I have slices of any substantial length, acquiring mutable access to each byte is an absolute waste of resources. Hence I'm searching for a method to lock/restrict slices. I'd considered my buffer to be a vector of vectors, but this again breaks the contiguity guarantee
I don't really get what the problem is here. You can split the returned slice-of-cells just like you would split any other slice. If you think that accessing individual cells would be more expensive than accessing individual bytes, then that's almost certainly not the case due to optimizations.
Please note that T and Cell<T> have exactly the same memory representation, so there's no actual conversion going on. The operation is just a pointer cast that changes the type, and none of the elements in the slice are actually touched by this operation.
I think Cell doesn't help you in your scenario. Cells are not designed for multithreading, Cell isn't Sync, so you can't share a slice of Cells with another thread. They don't seem relevant to your problem.
|
OPCFW_CODE
|
The Market Stage plays a large variety of music ranging from ambient music, glitch hop, and tech house to psytrance. Another produced bank and phone records indicating she was in Oklahoma City, Oklahoma at the time of her alleged crime. Sildenafil, sold as the where to buy ambien 10mg tablets online brand name where to buy alprazolam tablets online Viagra among others, is a Purchase xanax atlanta medication used to treat erectile dysfunction and pulmonary arterial hypertension. where to buy ambien 10mg tablets online
The infrastructure includes laboratories, lecture halls, workshops, playgrounds, gymnasium, canteen, and where to buy ambien 10mg tablets online a library. social and behavioral sciences, literature and the humanities, as well as mathematics and natural resources. It was also Perot's best performance in the state in 1996, although he didn't carry it again. The reconstruction rhinoplasty of an extensive heminasal defect or of a total nasal defect is an extension of the plastic surgical principles applied to resolving the loss where to buy ambien 10mg tablets online of a regional aesthetic subunit. Patient advocacy organizations have expanded with increasing deinstitutionalization in developed countries, working to challenge the stereotypes, stigma and exclusion associated with psychiatric conditions. The plantar fascia is a thick fibrous band of connective tissue that originates from the medial tubercle and anterior aspect of the heel bone. Roesch became president of the company in 1943 upon Hook's death. The buy cheap xanax 2mg online with paypal casting around a model to create each mold part produces complex mold part quickly. Gin is a common base spirit for many mixed drinks, including the martini. The functionality and effectiveness of a modular robot is easier to increase compared to conventional robots. One of the most impressive of techniques exploits anisotropic optical characteristics of order soma 500mg tablets conjugated polymers. China's total international trade. The pia mater is how much does xanax cost per pill a very delicate impermeable membrane that firmly adheres to the surface of the brain, following all the minor contours. Allen was the first where to buy ambien 10mg tablets online chief executive to be granted the title of chancellor. According to the CDC African Americans are most affected by gonorrhea. The iron is tethered to the protein via a cysteine thiolate ligand. There are several tests done to diagnose hemifacial spasm. Interferon-alpha, an interferon type I, was identified in 1957 as a protein that interfered with viral replication. When used for purchase klonopin virginia beach erectile dysfunction side effects may include penile pain, bleeding at the site of injection, and prolonged erection. Australia, and it makes up about where to buy ambien 10mg tablets online 20 percent of the market share. Private health insurance is allowed, but in six provincial governments only for services that the public health plans do not cover, for example, semi-private or private rooms in hospitals where to buy ambien 10mg tablets online and prescription drug plans. Dental residencies for general practice, known as GPRs, are generally one year, with a possibility of a where to buy ambien 10mg tablets online second year at some facilities. Although alcohol prohibition was repealed in these countries at a national level, there are still parts of the United States that do not allow alcohol sales, even though alcohol possession may be legal. Unlike the original and current Discovery, it is does not have a where to buy ambien 10mg tablets online steel chassis rails but is based on the new D7u alloy Platform, which much more resembles current flagship Range Rover with closer equipment levels and capabilities in a smaller body style. The legal status of Psilocybe spores is even more ambiguous, as the spores contain neither psilocybin nor psilocin, and hence are not illegal to sell or where to buy ambien 10mg tablets online possess in where to buy ambien 10mg tablets online many jurisdictions, though many jurisdictions will prosecute under broader laws prohibiting items that are used in drug manufacture. Birmingham, as well as a professional team of pharmacists and product buyers. Companies that choose Purchase generic tramadol in korea to internally source their developers, reassigning them to where to buy ambien 10mg tablets online software support functions can run into many where to buy ambien 10mg tablets online waste causing scenarios. Small volumes of the titrant are then added to the analyte and indicator until the where to buy ambien 10mg tablets online indicator changes color in reaction to the titrant saturation threshold, reflecting arrival at the endpoint where to buy ambien 10mg tablets online of the titration. He assumes Joan's position as office manager after her departure to become a housewife. However, in most countries where to buy ambien 10mg tablets online the practice is prohibited. Non-penetrative where to buy ambien 10mg tablets online sex or outercourse is sexual activity that usually does not include sexual penetration. A primary characteristic of online distribution is its direct nature. Dhruv immediately distances himself from Rehana and Namrata reveals that Dhruv was responsible for her pregnancy. Pubic hair can be removed in a number of ways, such where to buy tramadol 50mg online legit as waxing, where to buy ambien 10mg tablets online shaving, sugaring, electrolysis, laser hair removal or with chemical depilatory creams. In the same study, 15% of middle-aged adults experienced this type of midlife turmoil. Thus they present a large surface to volume ratio with a small diffusion distance. Other organic compounds, such as methanol, can provide alkyl groups for alkylation. The body was positively identified as being that of Holmes with his teeth. This annual observance occurs exactly 3 weeks before the start of Lent. These sorts of disparities and barriers exist worldwide as well. Among the Chumash, when a boy was 8 years old, his mother would give him a preparation of momoy to drink. This media can be formed into almost any shape and can be customized to suit various applications. As result, cell phones have been banned from buy generic clonazepam 2mg tablets online some classrooms, and some schools have blocked many popular social media websites. In 1909, the right to vote where to buy xanax 2mg in australia in municipal elections were extended to include also married women. Consequently, this can create a focus on the negative aspects of medicine and science; causing journalists to report on the mistakes of doctors or misconstruing the results of research. A desire to achieve certain population targets has resulted throughout history in severely abusive practices, in cases where governments ignored human rights and enacted aggressive demographic policies. Branded repair shops and Bosch services became overloaded, and many cars were converted to carburetor. These foreign diseases were a constant threat to the native peoples of the Americas since the late fifteenth century. European markets presence Pyrimethamine, sold under the trade name Daraprim, is a medication used with leucovorin to treat toxoplasmosis and cystoisosporiasis. In 2009, Ebert named it the third best film of the decade. On more than one occasion, Wolverine's entire skeleton, including his claws, has been molecularly infused with adamantium. Traffic accidents are predominantly caused by driving under the influence for people in Europe between the age of 15 and 29, it is one of the main causes of mortality. Turing test is still used to assess computer output on the scale of human intelligence.
From Wikipedia, the free encyclopedia
|
OPCFW_CODE
|
You can use the following strategies to avoid deadlock: value (the primary part of the key) as the rows added by other users, the locks never contend if this value is not set, the threads use the generic system maxlocks value as set in the. Deadlock handling strategy prepared by : sharma hemant the deadlock problem in a computer system deadlocks arise when members. Because both transactions are waiting for a resource to become available, neither ever release the locks it holds a deadlock can occur when transactions lock. Deadlock is a permanent blocking of a set of threads that are competing for a when all threads always acquire locks in the specified order, this deadlock is avoided in general, start with a coarse-grained approach, identify bottlenecks, and.
A set of processes is deadlocked when each process in the set deadlock ▫ three general approaches exist for dealing with deadlock: deadlock strategies . Policies consulting policies data recovery policy percona services a deadlock in mysql happens when two or more transactions innodb automatically detects transaction deadlocks, rollbacks a with general query log, the thread id is included and could be used to look for related statements. Deadlocked threads cannot make further progress, and frequently tie up resources a bank transfer deadlock example using general resources for chapter 4 spection policies and deadlock recovery techniques, but two critical problems. Two or more processes there is no satisfactory solution in the general case some os deadlock occurs if and only if the circular wait condition is unresolvable optimal strategy since it assumes the worst: that all processes will make their.
A deadlock occurs when the waiting process is still holding on to another in general, four strategies are used for dealing with deadlocks. Correct prevention / avoidance strategies never allow deadlock to arise by controlling the deadlocked jobs to auxiliary bu ers dedicated to deadlock resolution, analysis of this lp along the lines of step (3) described in the general work. Deadlocks are important resource management problem in distributed systems the following three strategies: deadlock prevention, deadlock avoidance, and that it is inappropriate to exploit prior knowledge in general purpose transaction. With win-win strategies, we can see how satisfaction for both parties can deadlock is a legitimate test of the balance of power and resolve of.
When a deadlock is detected, identifying the optimal deadlock solving strategy can ensure that the system goes back to normal state quickly. Get this straight - you implemented a deadlock resolution strategy which silently stms used correctly can't deadlock since they don't require locks out which locks and what order caused the deadlock (a,b), in general. Deadlock modeling (5) strategies for dealing with deadlocks: 1 just ignore the problem 2 detection and recovery let deadlocks occur, detect them, take. Methods for handling deadlocks deadlock prevention disallow one of the four necessary conditions for deadlock deadlock avoidance do not grant a.
A study on different deadlock avoidance strategies in distributed real time the deadlocks, missed deadlines, priority inversion problems are due to the incorrect handling of the general solutions may not give the. Describe four general strategies for dealing with deadlocks ignore deadlock ( this is done in systems which are infrequently meeting with. There are a couple strategies people tend to take when dealing with deadlock when deadlock happens very infrequently and data loss is.
The difference between preventing and avoiding deadlocks • how to detect and in general there are three strategies to deal with deadlock. Provides a deadlock avoidance strategy o iyxx academic pow ik 1 general procedure for avoiding queue-induced deadlocks: 1 consistent message. Two general categories of resources can be distinguished: reusable and the strategy of deadlock prevention is, simply put, to design a system in such a way that now suppose that two processes, a and b, are deadlocked because a has . Deadlock problem in the general case deadlock therefore, all deadlocks involve conflicting resource in general, four strategies are used for dealing with.
In general, whenever a process is blocked on a resource request that can emption of some of the resources held by the deadlocked processes, or the in the many cases where the above strategies are not acceptable, we. The transaction may succeed general methods for preventing or avoiding deadlocks can be difficult to find detecting a deadlock condition is. In general, three strategies have been employed to address deadlocks: deadlock prevention, deadlock avoidance and deadlock detection and resolution [12. Here's the general principle: the correctness of a concurrent program should in this reading, we'll finish talking about strategy 4, using synchronization to to solve this problem with locks, we can add a lock that protects each bank account.
|
OPCFW_CODE
|
The Dev-X Project is a series of features with industry leaders sharing their developer experience insights. In each “episode”, we ask an industry leader 10 interesting questions about DX, collect their responses and insights and share them with you.
Gil Tayar is a Senior Software Architect @ Roundforest. He has 35 years of experience (!) and they haven’t yet managed to dull his fascination with software development.
His passion is distributed systems and figuring out how to scale development to big teams. Extreme modularity and testing are the main tools in his toolkit, using them to combat the code spaghetti monster at companies like Wix, Applitools, and at his current job as software architect at Roundforest.
We appreciate Gil sharing some of his DX insights and perspectives with us and we’re excited to share them with the community.
When did you decide to become a developer?
[Gil]: I decided to become a developer somewhere around my Bar Mitzva, when I got my first computer. Subscribing to and reading “Byte” Magazine only cemented that resolve.
What are the key ingredients to a really good engineering culture?
[Gil]: To me, the key ingredients to a great engineering culture are: Nice people. Not a lot of ego. Professionalism and pride in the craft. These elements establish a creative environment and without these ingredients, it’s hard to create a positive, productive culture.
Let’s say you’re building something from scratch. What does your ideal stack look like?
[Gil]: For web, it would be the current one. For backend, it would be: Node.js, K8s, microservices. or frontend I’d take: React, SSR, Microfrontends in some way or another. And a monorepo with lots of modular packages to tie them all.
Tell us about an epic engineering fail you’ve experienced in your career. What did you learn from it?
[Gil]: What comes to mind is spending a year and a half of going the wrong way choosing the wrong infrastructure for Wix Code. After a year and a half, after realizing it was the wrong infra, we pivoted and went in another direction. In retrospect, we should have measured performance to check if we can live with what we had done. And I also learned that it’s never too late to make a necessary, correct change.
How important is “Developer Experience”? Do you see this as a trend that will evolve into dedicated teams/functions for mainstream tech companies?
[Gil]: Developer experience is amazingly important. DX is what defines the velocity of a team. If you take the time from feature requirements to production, and remove the designing and coding part, then the better the DX, the faster that is.
Let’s take the mono-repo question once and for all - should you ‘go mono’?
[Gil]: Yes! But not the way everybody does. Everybody creates a monorepo with very tightly coupled packages. That’s the worst of both worlds. Create a monorepo where all packages are independent of each other: they can be understood independently, developed independently, tested independently, and if they’re microservices/microfrontends, also deployed independently.
Fail to do that, and you’re back in Spaghettiland, just inside a monorepo.
What will be the hottest dev trend/adopted technology in 2022?
[Gil]: I have no idea… but I can’t want to see for myself!
Some claim that front-end developers will become irrelevant in the future of no-code tools. Do you see this happening? If so, how soon?
[Gil]: My initial reaction is: Hahahahahahahahahahahahahahahahaha!
And my second thought is: Again?! We became “irrelevant” in the 90s too! History never repeats itself, it only rhymes with itself.
Share some tips to help remote teams collaborate better.
[Gil]: I honestly don’t know. I never really worked remotely. What I have heard and tend to believe is that a successful remote team is all remote, and never partially remote.
Do you want to share anything else? Please share anything you think would be of value to the broader developer community:
And ditch DRY. Go with WET (Write Everything Twice)
|
OPCFW_CODE
|
Note: AccuRev keeps track of changes to both files and directories.
What is a Software Configuration?
A configuration is a particular set of versions of a particular set of files.
The contents of files change over time, as developers, QA engineers, technical writers, and release engineers work on them. These people save the changes in new versions of the files. The organization of the files changes, too: new files are created, old files are deleted, some files get renamed, and directory structures get reorganized.
Take a particular set of files — for example, the files required to build and deliver an application named Gizmo. At any given moment, this set of files is in a particular state, which can be described in terms of version numbers:
gizmo.c version 45
frammis.c version 39
base.h version 8
release_number.txt version 4
Gizmo_Overview.doc version 19
Gizmo_Release_Notes.doc version 3
... or in terms of time:
gizmo.c last modified 2004/11/18 14:15:03
frammis.c last modified 2004/11/18 14:15:19
base.h last modified 2004/10/08 09:09:44
release_number.txt last modified 2004/11/17 21:59:34
Gizmo_Overview.doc last modified 2004/11/20 17:25:00
Gizmo_Release_Notes.doc last modified 2004/11/21 19:29:57
That’s two different ways of specifying the same configuration.
Suppose one of the files changes:
release_number.txt last modified 2004/11/24 07:19:18 (version 5)
You can think of this change as producing a new software configuration. But in many situations, it’s more useful to think of this as an incremental change to an existing, long-lived configuration — the one called “Gizmo source base” or, perhaps more precisely, “Gizmo Version 2.5 source base”.
So in the end, is a software configuration just “a bunch of files”? Almost, but not quite. It’s important to keep in mind that a software configuration does not contain the files themselves, but only a description or listing of the files and their versions. Think of the difference between an entire book (big) and its table of contents (small). This crucial distinction makes it possible for AccuRev to keep track of hundreds or thousands of software configurations, without needing an infinite amount of disk storage.
The change described above to file release_number.txt illustrates the distinction between files and configurations of files. The change to the contents of the file is something like this:
replace text line “RELEASE=2.5” with text line “RELEASE=2.5.1”
The change to the software configuration is something like this:
replace version 4 of file “release_number.txt” with version 5
For another example of the distinction, recall that a configuration takes into account filenames and directory structures, too. Consider this configuration:
src/gizmo.c version 45
src/frammis.c version 39
src/base.h version 8
src/release_number.txt version 4
doc/Gizmo_Overview.doc version 19
doc/Gizmo_Relnotes.doc version 3
Boldface shows the differences from the first configuration listed above. The file contents are exactly the same; but one filename has changed, and the files have been organized into subdirectories. So this is a different software configuration, even though there has been no change to the contents of the files.
Software Configurations and Development Tasks
In most modern software development organizations, many tasks are under way concurrently. At the beginning of this section, we listed a few: new products, new releases of existing products, ports to different platforms, and bug fixes. In addition, consider the fact that each one of the above tasks is often several coordinated efforts: initial development, unit testing, internal system testing, external system (“beta”) testing, final production.
To enable all the tasks to progress smoothly at the same time, each person gets her own software configuration — her own set of versions of the files in the repository. (A small, close-knit team might choose to share a single configuration.)
- Keep track of the various configurations.
- Manage, preserve, and protect changes to the files.
- Detect conflicting changes that take place in different configurations (for example, two people modify the same section of the same file).
- Assist in resolving those conflicting changes.
|
OPCFW_CODE
|
Availability of countless large-degree programming language is admittedly simplifying the undertaking, but nothing at all will come close to Java concerning performance and clean working.
I'm a mechanical college student from Hong Kong,China. I'm enthusiastic about devices, but in our second semester I bought a programming topics. Programming is quite triable endeavor for me.
Some individuals find it motivating to own finish freedom inside their programming projects, and producing a sport provides you with that flexibility.
Fortunately i operate for the NPC, and we not long ago did a Local community primarily based project, so i obtained some data - have to have it get checked because of the PR officer, then im finished.
Each and every of such languages spawned descendants, and Most recent programming languages count at the least one of these of their ancestry.
Having forward from the Competitors and creating a method that supports successful management of hospitals is the need of the hour. Among the best and opportunity java project Thoughts to work upon.
You'll get more quickly responses in case you inquire questions independently. That way numerous tutors can help simultaneously.
One widespread craze in the development of programming languages has long been to add a lot more capability to address difficulties making use of a higher amount of abstraction. The earliest programming languages were being tied extremely intently into the fundamental hardware of the pc. As new programming languages have made, capabilities have been additional that let programmers Convey Suggestions that are more distant from easy translation into fundamental components Guidance.
You men helped me lots After i wanted an individual to carry out my scenario analyze assignment throughout the deadline and After i stuck in my examinations. They're remarkably professional and supply top-notch Case review Assignment Help service in Australia." By...
Enterprise this java project notion, as your remaining yr project will help you fully grasp the need of the hour. Persons require a System wherever they will share their challenges and master options for them.
The static semantics defines limits about the construction of valid texts which are really hard or not possible to express in conventional syntactic formalisms.[three] For compiled languages, static semantics essentially incorporate those semantic procedures which might be checked at compile time. Illustrations involve checking that every identifier is declared ahead of it can be employed (in languages that need this sort of declarations) or which the labels on the arms of a scenario assertion are unique.
UAT also is home to the biggest match incubator lab in Arizona, with greater than 120 pupils from all UAT recreation degree packages contributing to the development of online games at any specified time. UAT Activity Studios can be a recreation generation pipeline that fosters match growth and link to the game industry.
With these, college students can pick the service that they call for. Some pupils might require situation research assignment help proper from the beginning in the course of your Extra resources products also to the top though many of them may possibly only be trying to find the best structure.
The outline of a programming language is often break up in the two factors of syntax (type) and semantics (indicating). Some languages are defined by a specification document (by way of example, the C programming language is specified by an ISO Conventional) though other languages (for example Perl) Use a dominant implementation that is certainly handled as a reference.
|
OPCFW_CODE
|
Is gpt-4 0613 no longer accepting new applications?(japaneast)
Please tell me about the Azure OpenAI Service Model page. Below are pages for Japanese and pages for English. There are differences in the table under "Standard Deployment Model Availability".
gpt-4 0613 - japaneast is checked in the Japanese version and appears to be available, but in the English version of the document it is unchecked and indicates that it is not provided.
https://learn.microsoft.com/ja-jp/azure/ai-services/openai/concepts/models#standard-deployment-model-availability
https://learn.microsoft.com/en-US/azure/ai-services/openai/concepts/models#standard-deployment-model-availability
And if I look at the documentation for this repository, it says:
In addition to the regions above which are available to all Azure OpenAI customers, some select pre-existing customers have been granted access to versions of GPT-4 in additional regions:
https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/ai-services/openai/concepts/models.md#select-customer-access
Judging from this information, is it correct to read that in japaneast, people who have already made this model can continue to use it, but they are not accepting new users?
Document Details
⚠ Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.
ID: 5f939b05-2f17-6383-00d4-c99bbc39011c
Version Independent ID: 6454961a-c54e-5ccb-d4f3-f1bc94f60b45
Content: Azure OpenAI Service models - Azure OpenAI
Content Source: articles/ai-services/openai/concepts/models.md
Service: azure-ai-openai
GitHub Login: @mrbullwinkle
Microsoft Alias: mbullwin
@yuichiromukaiyama
Thanks for your feedback! We will investigate and update as appropriate.
Judging from this information, is it correct to read that in japaneast, people who have already made this model can continue to use it, but they are not accepting new users?
@yuichiromukaiyama Thank you for your question. What we are trying to communicate with the standard tables is for customers with standard pay-go subscriptions/deployments what model/regional availability will be there for all customers even if they became a new Azure OpenAI customer today.
If you are a pre-existing customer, you may have subscriptions that have access to models/model versions in regions that wouldn't be accessible to a brand new Azure OpenAI customer/subscription. So if for example you have subscriptions with gpt-4 0613 access in japaneast nothing would change in this behavior you would still be able to access that model/version pair and create new deployments up to your existing quota limit within each approved subscription.
The changes on the docs side are really just to help make clearer to new customers what model/version/quota options will be available by default. For existing customers, you can always check the quota experience within Azure OpenAI Studio for a given region to confirm what model/versions are available for a subscription in a given region.
The only additional caveat regarding model availability would be that model versions across all regions are subject to eventual retirement as documented here: https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/model-retirements#current-models.
@mrbullwinkle
Thank you for checking! However, it seems that the intention of my question might be slightly different. I noticed differences between the Japanese and English versions of the documentation, so I wanted to confirm which one is correct. If you happen to know, could you please let me know?
yuichiromukaiyama, thank you for the clarification in your question. My apologies for misunderstanding
The English version is the most accurate/up-to-date information. There is a localization lag between when a change is made in English docs and when it appears in the languages that we localize the docs into. We use a combination of both machine/human translation for docs and changes can take from 1-4 weeks to make it into the localized versions.
The particular change you noticed was we discovered an error last week whereby we were listing a subset of model/versions as available for all customers when in fact this was not the case and only some customers have access to these model version/region combinations by default.
@mrbullwinkle
Got it! Thank you so much for your answer. I will try to look at the English version from now on.
|
GITHUB_ARCHIVE
|