| | |
|---|---|
|Author:|Chris von See|
|Subcategory:|Networking & Cloud Computing|
|Publisher:|McGraw-Hill Professional (May 16, 2008)|
|Category:|Technologies and Computers|
|Other formats:|mobi mbr lrf lit|
XSLT Developer's Guide (Applications Development), paperback, May 16, 2008, by Chris von See and Nitin Keskar. Chris von See is a Senior Technical Director at eFORCE, Inc., a global provider of strategic eBusiness solutions for both Global 1000 organizations and digital innovators. He has seventeen years' experience in information systems, working with diverse technologies on platforms ranging from large mainframes to palmtop devices.
This title is a guide to building and deploying XSLT solutions for enterprise-level applications. It provides readers with multiple programming options: all programming examples appear in both Java and Perl.
XSLT Developer's Guide. Inside, you'll find full details on designing and building complex, data-driven applications with XSLT that transform XML source documents into interoperable business objects, a variety of presentation formats, and more. The book explains how to work with XSL and XPath and how to customize and reuse stylesheet logic. Deliver flexible, repurposed information using XSLT with help from this definitive guide. Use XPath expressions to locate data in an XML document and perform operations on string, numeric, and boolean values. Code template rules to process structured XML data.
Further XSLT references: XSLT - MDC Docs by Mozilla Developer Network; XSLT Reference (MSDN). The XSLT recommendation recommends the more general media types text/xml and application/xml, since for a long time there was no registered media type for XSLT; during this time text/xsl became the de facto standard. The earlier XSLT recommendation did not specify how the media-type values should be used. IBM offers XSLT processing embedded in a special-purpose hardware appliance under the Datapower brand.
XDK and Application Development Tools. Using Oracle XML-Enabled Technology. Introducing Oracle XML Developer's Kit. Oracle XML Developer's Kit (XDK) is a set of components, tools, and utilities that eases the task of building and deploying XML-enabled applications. Release notes for XDK are found in /xdk/doc/readme. This XML data is optionally transformed by the XSLT processor, viewed directly by an XML-enabled browser, or sent for further processing to an application.
ZK Developer's Guide. ZK Developer's Guide. Jurgen Schumacher, Markus Stäuble. Developing responsive user interfaces for web applications using Ajax, XUL, and the open source ZK rich web client development framework.
com Platform Fundamentals: An Introduction to Custom Application Development in the Cloud.
Modeling Reactive Systems with Statecharts. Heroku Postgres (PDF).
This title is a guide to building and deploying XSLT solutions for enterprise-level applications.
|
OPCFW_CODE
|
- Why is pandas so fast?
- What is role of Python in big data?
- Is Python better than Excel?
- Why do we use pandas in Python?
- Why is Python good for data analysis?
- Will Python replace Excel?
- Can Python handle large datasets?
- Can Python handle millions of records?
- Can pandas be used for big data?
- Can we use Python in Excel?
- Should I use pandas or NumPy?
- Why do we use NumPy in Python?
- Why do pandas go over NumPy?
- Which is better Hadoop or python?
- What’s the difference between Numpy and pandas?
- Why is Numpy so fast?
- Is NumPy faster than pandas?
- HOW BIG CAN data frames be?
- Which is better R or Python?
- Can I use Python in Excel?
Why is pandas so fast?
Pandas is so fast because it uses numpy under the hood.
Numpy implements highly efficient array operations.
Also, the original creator of pandas, Wes McKinney, is kinda obsessed with efficiency and speed.
Use numpy or other optimized libraries.
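As a rough illustration of why that vectorized foundation matters, here is a minimal sketch comparing a plain Python loop with the equivalent NumPy operation (the array size is arbitrary):

```python
import numpy as np

# Summing a million squared values: a pure-Python loop versus the
# vectorized NumPy expression that pandas relies on under the hood.
values = list(range(1_000_000))
arr = np.array(values, dtype=np.int64)

# Pure-Python loop: one bytecode-interpreted iteration per element
loop_total = sum(v * v for v in values)

# Vectorized: a single call into optimized, compiled array code
vec_total = int((arr ** 2).sum())

assert loop_total == vec_total
```

Both produce the same number; the vectorized form is typically orders of magnitude faster because the whole loop runs in compiled code.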
What is role of Python in big data?
Python has an inbuilt feature of supporting data processing. You can use this feature to support data processing for unstructured and unconventional data. This is the reason why big data companies prefer to choose Python as it is considered to be one of the most important requirements in big data.
Is Python better than Excel?
Python is faster than Excel for data pipelines, automation and calculating complex equations and algorithms. Python is free! Although no programming language costs money to use, Python is free in another sense: it’s open-source. This means that the code can be inspected and modified by anyone.
Why do we use pandas in Python?
Pandas is mainly used for data analysis. Pandas allows importing data from various file formats such as comma-separated values, JSON, SQL, Microsoft Excel. Pandas allows various data manipulation operations such as merging, reshaping, selecting, as well as data cleaning, and data wrangling features.
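A minimal sketch of those operations, using two small made-up frames (the column names here are illustrative, not from any particular dataset):

```python
import pandas as pd

# Two tiny frames standing in for imported data (CSV, SQL, Excel, ...)
orders = pd.DataFrame({"order_id": [1, 2, 3],
                       "customer": ["ann", "bob", "ann"]})
totals = pd.DataFrame({"order_id": [1, 2, 3],
                       "total": [9.5, 3.0, 12.25]})

# Merging on a shared key, then selecting rows as a simple cleaning step
merged = orders.merge(totals, on="order_id")
big = merged[merged["total"] > 5.0]

print(big["customer"].tolist())  # ['ann', 'ann']
```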
Why is Python good for data analysis?
Python is focused on simplicity as well as readability, providing a host of helpful options for data analysts/scientists simultaneously. Thus, newbies can easily utilize its pretty simple syntax to build effective solutions even for complex scenarios. Most notably, that’s all with fewer lines of code used.
Will Python replace Excel?
“Python already replaced Excel,” said Matthew Hampson, deputy chief digital officer at Nomura, speaking at last Friday’s Quant Conference in London. “You can already walk across the trading floor and see people writing Python code…it will become much more common in the next three to four years.”
Can Python handle large datasets?
There are common python libraries (numpy, pandas, sklearn) for performing data science tasks and these are easy to understand and implement. … It is a python library that can handle moderately large datasets on a single CPU by using multiple cores of machines or on a cluster of machines (distributed computing).
Can Python handle millions of records?
The 1-gram dataset expands to 27 Gb on disk which is quite a sizable quantity of data to read into python. As one lump, Python can handle gigabytes of data easily, but once that data is destructured and processed, things get a lot slower and less memory efficient.
Can pandas be used for big data?
pandas provides data structures for in-memory analytics, which makes using pandas to analyze datasets that are larger than memory datasets somewhat tricky. Even datasets that are a sizable fraction of memory become unwieldy, as some pandas operations need to make intermediate copies.
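One common workaround when a file is larger than memory is to stream it in chunks and aggregate as you go. A sketch using pandas' chunksize option (the in-memory CSV here just stands in for a large file on disk):

```python
import io
import pandas as pd

# Simulated CSV; in practice this would be a path to a multi-GB file.
csv_data = "x\n" + "\n".join(str(i) for i in range(10))

total = 0
for chunk in pd.read_csv(io.StringIO(csv_data), chunksize=4):
    # Each chunk is a small DataFrame; aggregate it, then let it go.
    total += chunk["x"].sum()

print(total)  # 45
```

Only one chunk is resident at a time, so peak memory stays bounded regardless of file size.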
Can we use Python in Excel?
Excel is a popular and powerful spreadsheet application for Windows. The openpyxl module allows your Python programs to read and modify Excel spreadsheet files.
Should I use pandas or NumPy?
Numpy is memory efficient. Pandas has better performance when the number of rows is 500K or more; Numpy has better performance when the number of rows is 50K or less. Indexing of a pandas Series is very slow compared to numpy arrays.
Why do we use NumPy in Python?
NumPy is a Python library used for working with arrays. It also has functions for working in the domains of linear algebra, Fourier transforms, and matrices. NumPy was created in 2005 by Travis Oliphant.
Why do pandas go over NumPy?
NumPy library provides objects for multi-dimensional arrays, whereas Pandas is capable of offering an in-memory 2d table object called DataFrame. NumPy consumes less memory as compared to Pandas. Indexing of the Series objects is quite slow as compared to NumPy arrays.
Which is better Hadoop or python?
Hadoop is a database framework which allows users to save and process Big Data in a fault-tolerant, low-latency ecosystem using programming models. … On the other hand, Python is a programming language and has nothing to do with the Hadoop ecosystem.
What’s the difference between Numpy and pandas?
Similar to NumPy, Pandas is one of the most widely used python libraries in data science. It provides high-performance, easy to use structures and data analysis tools. Unlike NumPy library which provides objects for multi-dimensional arrays, Pandas provides in-memory 2d table object called Dataframe.
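A small sketch of that difference, showing positional indexing on the NumPy side and labeled indexing on the pandas side (the data is arbitrary):

```python
import numpy as np
import pandas as pd

arr = np.array([[1.0, 2.0], [3.0, 4.0]])    # homogeneous 2-d array
df = pd.DataFrame(arr, columns=["a", "b"])  # labeled 2-d table

# NumPy indexes by position only; pandas layers labels on top.
assert arr[0, 1] == 2.0
assert df.loc[0, "b"] == 2.0

# The DataFrame still wraps array storage underneath.
assert df.values.shape == arr.shape
```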
Why is Numpy so fast?
Because the Numpy array is densely packed in memory due to its homogeneous type, it also frees the memory faster. So overall a task executed in Numpy is around 5 to 100 times faster than the standard python list, which is a significant leap in terms of speed.
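A rough way to see that dense packing, assuming CPython on a 64-bit platform (the exact per-object sizes vary by interpreter build, so treat the numbers in the comments as typical rather than guaranteed):

```python
import sys
import numpy as np

n = 1_000
arr = np.arange(n, dtype=np.int64)
lst = list(range(n))

# The array stores raw 8-byte integers contiguously...
print(arr.nbytes)  # 8000

# ...while the list stores pointers to boxed int objects, each of
# which carries its own object header on top of the value
# (typically 28 bytes per small int on 64-bit CPython).
container_bytes = sys.getsizeof(lst)   # the list of pointers alone
per_int_bytes = sys.getsizeof(lst[0])  # one boxed int object
```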
Is NumPy faster than pandas?
As a result, operations on NumPy arrays can be significantly faster than operations on Pandas series. NumPy arrays can be used in place of Pandas series when the additional functionality offered by Pandas series isn’t critical. … Running the operation on NumPy array has achieved another four-fold improvement.
HOW BIG CAN data frames be?
There is no hard-coded row limit; you simply call a constructor such as pandas.DataFrame.from_records with a collection of records to instantiate a new DataFrame. The only limit is memory.
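A small sketch of checking a DataFrame's actual footprint before growing it further (the single float64 column is arbitrary):

```python
import numpy as np
import pandas as pd

# A million-row frame is fine as long as it fits in RAM; memory_usage
# reports the real footprint, column by column plus the index.
df = pd.DataFrame({"a": np.zeros(1_000_000, dtype=np.float64)})

bytes_used = int(df.memory_usage(deep=True).sum())
print(bytes_used >= 8_000_000)  # True: 8 bytes per float64, plus the index
```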
Which is better R or Python?
Since R was built as a statistical language, it suits much better to do statistical learning. … Python, on the other hand, is a better choice for machine learning with its flexibility for production use, especially when the data analysis tasks need to be integrated with web applications.
Can I use Python in Excel?
It is officially supported by almost all of the operating systems like Windows, Macintosh, Android, etc. It comes pre-installed with the Windows OS and can be easily integrated with other OS platforms. Microsoft Excel is the best and the most accessible tool when it comes to working with structured data.
|
OPCFW_CODE
|
class TaskForm {
constructor(paramSettings) {
let defaults = {
// callbacks allowing external code to hook the save/cancel/delete actions
onSave: function () {
console.log("onSave Default");
},
onCancel: function () {
console.log("onCancel Default");
},
onDelete: function () {
console.log("onDelete Default");
},
showNaoRealizadas:function(){
}
};
// $.extend(true, target, ...) deep-merges the given objects, so
// caller-supplied callbacks in paramSettings override the defaults above
this.settings = $.extend(true, {}, defaults, paramSettings);
this.buildForm();
this._data = {};
}
buildForm() {
this.$form = $(this.html());
this.$save = this.$form.find("[name='save']");
this.$cancel = this.$form.find("[name='cancel']");
this.$delete = this.$form.find("[name='delete']");
this.$titulo = this.$form.find("[name='TITULO']");
this.$descricao = this.$form.find("[name='DESCRICAO']");
this.$exibirNaoRealizadas = this.$form.find("[name='naoRealizada']");
this.applyEvents();
}
html() {
let html = "";
html += "<div><label name='lblTitulo'>Título</label></div>";
html += "<div><input type='text' name='TITULO' /></div>";
html += "<div><label name='lblDescricao'>Descrição</label></div>";
html += "<div><textarea type='text' name='DESCRICAO'></textarea></div>";
html += "<input type='submit' name='save' value='Salvar'></input>";
html += "<input type='button' name='cancel' value='Cancelar'></input>";
html += "<input type='button' name='delete' value='Excluir'></input>";
html += "<label><input type='checkbox' name='naoRealizada'></input>Exibir somente não realizadas</label>";
return "<form>" + html + "</form>";
}
fill(data) {
// 'private' variable used to cache the form data
this._data = data;
this.$titulo.val(this._data.TITULO);
this.$descricao.val(this._data.DESCRICAO);
}
removeBotoesRealizados(id) {
// hide the "done" and "edit" buttons of a completed task; these
// buttons live outside this form, so search the document for them
// rather than rebuilding the form (which would discard bound events)
let btnDone = $("[name='btnDone" + id + "']");
let btnEdit = $("[name='btnEdit" + id + "']");
btnDone.hide();
btnEdit.hide();
}
clear() {
// reset the 'private' cache of the form data
this._data = {};
this.$titulo.val("");
this.$descricao.val("");
this.enableButton(false, false, false);
}
applyEvents() {
let _this = this;
//CLICKS
this.$save.click(function (e) {
// a submit normally POSTs the form to a URL;
// preventDefault suppresses that default behavior
e.preventDefault();
_this.save();
});
this.$cancel.click(function (e) {
e.preventDefault();
_this.cancel();
});
this.$delete.click(function (e) {
e.preventDefault();
_this.delete();
});
this.$exibirNaoRealizadas.click(function (e) {
// e.preventDefault();
_this.listarNaoRealizadas();
});
//keyup
this.$titulo.keyup(function () {
if (_this.checkIsFill(_this.$titulo.val()) && _this.checkIsFill(_this.$descricao.val())) {
_this.enableButton(true, true, true);
}
});
this.$descricao.keyup(function () {
if (_this.checkIsFill(_this.$titulo.val()) && _this.checkIsFill(_this.$descricao.val())) {
_this.enableButton(true, true, true);
}
});
//CHANGE
this.$titulo.change(function () {
_this._data.TITULO = $(this).val();
});
this.$descricao.change(function () {
_this._data.DESCRICAO = $(this).val();
});
}
checkIsFill(field) {
return (field !== '');
}
enableButton(stSave, stCancel, stDelete) {
// .prop() toggles the live disabled state; .attr() only writes the
// attribute and does not reliably re-enable a control once disabled
this.$save.prop("disabled", !stSave);
this.$cancel.prop("disabled", !stCancel);
this.$delete.prop("disabled", !stDelete);
}
save(e) {
// apply() binds 'this' to the object that originated the call,
// passing the current object as context and the parameters as an array
this.settings.onSave.apply(this, [e]);
this.clear();
console.log("Salvo!");
}
cancel(e) {
this.settings.onCancel.apply(this, [e]);
console.log("Cancelado!");
this.clear();
}
delete(e) {
this.settings.onDelete.apply(this, [e]);
console.log("Apagado!");
this.clear();
}
listarNaoRealizadas(e) {
this.settings.showNaoRealizadas.apply(this, [e]);
console.log("Exibindo somente não realizadas");
this.clear();
}
// returns the cached form data to the TaskList
getData() {
return this._data;
}
}
|
STACK_EDU
|
At Tidelift, we care deeply for open source software.
For our founders and early employees, open source has long been both a personal preoccupation and an actual occupation at organizations like Red Hat, Wikimedia, GitHub, Mozilla, and Google.
But even though we’ve seen open source accomplish so much over the last two decades, along with many of you, we’ve felt that its foundation is increasingly shaky. With compounding usage amplifying the demands placed on its creators, open source risks becoming a victim of its own success. If you use or contribute to open source, you’ve probably had this feeling, too.
And we also had the sense that there’s a better way.
But before we got too far ahead of ourselves, we decided to do our homework, by talking directly with users and creators of open source.
Here’s what we heard, and our first steps toward doing something about it.
What we heard
Over the last several months, we engaged with over 1000 professional users and maintainers of open source software through surveys and live conversations. We wanted to learn what’s working for them and what’s not.
Turns out, people had a lot on their minds.
From professional software teams building open source into their applications, we heard:
- Most teams lack visibility and process around their use of open source. While it’s clear that open source is everywhere, we found that the majority of organizations don't even have a complete list of the open source components they are incorporating into their applications, much less a rigorous process in place for managing those components throughout the lifecycle of their software.
- Security, licensing, and maintenance are paramount. Open source, while it often feels ethereal, is ultimately still just software and thus subject to the earthly realities of security vulnerabilities, license compliance, and ongoing maintenance. Professional teams we spoke to understand the risks that come along with the opportunity, and were doing their best (without much help) to balance some of the potential downsides of open source with the fantastic upsides.
- 83% of organizations will pay for well-maintained open source software. Although it’s sometimes a matter of pride for technologists to self-maintain the open source projects they use, we found that a substantial percentage of professional teams already pay for commercial assurances around some of their open source (typically from companies like Red Hat or Cloudera). At the same time, we frequently heard the frustration that there’s simply no vendor to go to for most of the open source landscape, even when users have the appetite to pay.
Similarly, we had a wide-ranging set of conversations with open source project maintainers, contributors and supporters, who told us:
- Building open source software is hard work. This should surprise nobody, but building rock-solid foundational technology that will work across platforms, deployment environments, and use cases is real work. Open source maintainers have multiple motivations that propel them forward—personal fascination, reputational rewards, and a sense of contribution and community, among others. But none of those make the burden lighter.
- Today, open source contributions are mostly a side pursuit. More than three quarters of open source contributors have no formal financial support for their open source work. They contribute their own personal time or squeeze some work into their day job, as circumstances permit. Less than a quarter pursue direct business models to support their project work.
- Many would like to work on open source as their full-time job. On the other hand, we found a huge appetite to work on open source software as a full-time profession, with about half of maintainers interested in working on open source full time.
- Specifically, contributors are open to doing maintenance for pay. Assuming they would be fairly compensated for doing so, we found that many open source contributors are interested in tackling issues like security, license compliance, and ongoing maintenance for their projects (and others).
You can probably see where we’re going with this.
Here's the win-win proposition we see.
Rather than having professional software teams cobble together solutions from multiple vendors and unsupported “free range” projects, what if we had one destination for professionalized open source; a single place to go for uniform assurances about the security, licensing, and maintenance of open source projects, regardless of the specific language, package manager, or ecosystem. On a paid subscription basis.
Given the breadth of open source, it would be impossible for one company to staff an engineering team large enough to fulfill that demand. Unless… one could enlist a subset of the vast existing community of open source contributors and maintainers to fulfill those professional assurances. Each maintaining their part, in exchange for a share of the paid subscriptions.
The role of Tidelift? We think we can help by providing many of the sales, marketing, finance, software development, and organizational aspects of making this happen.
That, in a nutshell, is the idea behind the Tidelift Subscription.
Introducing The Tidelift Subscription
And so today we’re launching the Tidelift Subscription.
We’re starting with support for three widely used front-end frameworks: React, Angular, and Vue.js.
The core idea of the Tidelift Subscription is to pay for “promises about the future” of your software components.
When you incorporate an open source library into your application, you need to know not just that you can use it as-is today, but that it will be kept secure, properly licensed, and well maintained in the future. The Tidelift Subscription creates a direct financial incentive for the individual maintainers of the software stacks you use to follow through on those commitments. Aligning everyone’s interests—professional development teams and maintainers alike.
Critically, the Tidelift Subscription covers not just core libraries, but the vast set of dependencies and libraries typically used in common stacks. For example, a basic React web application pulls in over 1,000 distinct npm packages as dependencies. The Tidelift Subscription covers that full depth of packages which originate from all parts of the open source community, beyond the handful of core packages published by the React engineering team itself.
Learn more about open source dependencies and the Tidelift Subscription in the definitive guide to professional open source.
Start with a free dependency analysis
Since we heard that many professional software teams struggle to know where to get started, we also built a free open source dependency analysis service—which also launches today.
Our analysis is powered by Libraries.io, Tidelift’s open data service that comprises the most comprehensive index of open source components ever assembled, and builds on the foundation of the earlier Dependency CI tool from the Libraries.io team.
Just sign in and link your GitHub.com account to get started. (If you’re not using GitHub.com, we’re working on support for additional platforms—get in touch if you’d like a preview.)
We’re continuing to add subscription coverage for more parts of the open source landscape all the time. When you use Tidelift to monitor your open source dependencies, you’ll be alerted to the availability of support that covers the packages you use.
Maintainers: consider becoming a lifter
Along with the launch of the Tidelift Subscription, we’re reaching out to maintainers and core teams—we call them lifters—interested in helping build a sustainable business around their own projects.
Tidelift provides a means for maintainers to band together in a scalable model that works—for everyone. Those who build and maintain open source software get compensated for their effort—and those who use their creations get more dependable software, delivered via a Tidelift subscription.
Bottom line: We connect the software development teams using open source with the maintainers creating it, in a win-win way.
We’re particularly interested in hearing from open source contributors in the React, Angular, and Vue.js communities, given our initial focus.
But our ambitions are broad, with Tidelift already supporting the following package manager communities: npm, Maven, RubyGems, Packagist, PyPI, NuGet, Bower, CPAN, CocoaPods, Clojars, Meteor, CRAN, Cargo, Hex, Swift, Pub, Carthage, Dub, Julia, Shards, Go, Haxelib, Elm, and Hackage.
If you are an open source maintainer or contributor, learn more about becoming a lifter on our web site, download our lifter guide and get in touch.
At Tidelift, we want to make open source work better—for everyone.
We’ve got a lot more on the way, but we’re excited to get started on this journey together.
If you’re like-minded:
- Try out the free Tidelift dependency analysis for your application
- Try out the Tidelift subscription
- If you’re an open source maintainer, learn about becoming a lifter
- Follow us on Twitter and sign up for updates via email
- Check us out on Product Hunt
|
OPCFW_CODE
|
When a page request is sent to the Web server, the page is run through a series of events during its creation and disposal. In this article, I will discuss the ASP.NET page life-cycle events in detail.
(1) PreInit The entry point of the page life cycle is the pre-initialization phase called "PreInit". This is the only event where programmatic access to master pages and themes is allowed. You can dynamically set the values of master pages and themes in this event. You can also dynamically create controls in this event.
(2) Init This event fires after each control has been initialized, each control's UniqueID is set, and any skin settings have been applied. You can use this event to change initialization values for controls. The "Init" event is fired first for the bottom-most control in the hierarchy, and then fired up the hierarchy until it is fired for the page itself.
(3) InitComplete Raised once all initializations of the page and its controls have been completed. At this point the view-state values are not yet loaded, so you can use this event to make changes to view state that you want to be persisted after the next postback.
(4) PreLoad Raised after the page loads view state for itself and all controls, and after it processes postback data that is included with the Request instance.
(5) Load The important thing to note about this event is that by now the page has been restored to its previous state in the case of a postback. Code inside the page Load event typically checks for a postback and then sets control properties appropriately. This method is typically used for most code, since this is the first place in the page life cycle where all values are restored. Most code checks the value of IsPostBack to avoid unnecessarily resetting state. You may also wish to call Validate and check the value of IsValid in this method. You can also create dynamic controls in this method.
(6) Control (PostBack) events ASP.NET now calls any event handlers on the page or its controls that caused the PostBack to occur. This might be a button's Click event or a dropdown's SelectedIndexChanged event, for example. These are the events whose code is written in your code-behind class (.cs file).
(7) LoadComplete This event signals the end of Load.
(8) PreRender Allows final changes to the page or its controls. This event takes place after all regular PostBack events have occurred, but before view state is saved, so any changes made here are persisted. After this event you cannot change any property of a button or any view-state value, because SaveStateComplete and Render are called next.
(9) SaveStateComplete Prior to this event the view state for the page and its controls is saved. Any changes to the page's controls at this point or beyond are ignored.
(10) Render This is a method of the page object and its controls (and not an event). At this point, ASP.NET calls this method on each of the page's controls to get its output. The Render method generates the client-side HTML, Dynamic Hypertext Markup Language (DHTML), and script that are necessary to properly display a control in the browser.
(11) Unload This event is used for cleanup code. After the page's HTML is rendered, the objects are disposed of. During this event, you should destroy any objects or references you created while building the page. At this point, all processing has occurred and it is safe to dispose of any remaining objects, including the Page object. Cleanup can be performed on:
(a) Instances of classes, i.e. objects
(b) Closing opened files
(c) Closing database connections.
|
OPCFW_CODE
|
- Grab the APK from another phone of the SAME model running the EXACT SAME ROM as yours:
The only thing you may be able to do is grab a settings.apk from another phone with the exact same ROM.
- Re-Flash just the System partition:
Otherwise you will need to re-flash your ROM. Don't Panic!! If you just format the following via the Recovery menu:
- ART/Dalvik cache
- and Cache
you can retain all your apps and everything else in /data.
Note: You may wish to back up your launcher settings if you use a custom home launcher such as Nova, ADW, Apex or Go (stock launchers won't have a layout backup function). The only settings you may lose are system settings that do not save their settings somewhere in /data or /sdcard (though MOST will be backed up for you anyway, as most apps in /system use an area of /data to store their settings files; some do not, which is why I mention it).
Other than that, as long as you do not delete /data, this process will retain all your apps and their settings. The first boot after the flash will take longer than usual, since it has to regenerate the cache for all the extra apps (you should see a countdown as it runs through them), but an extra couple of minutes is a small price to pay to not have to re-set-up everything IMHO ;-). AND since you will be flashing the exact same ROM, there is no chance of problems [when changing ROMs that use a different code base, you can run into issues where you are forced to format /data as part of the install, BUT that is NOT THE CASE when flashing the exact same ROM].
Tech Note on How This May Have Happened and Ways to try and defend against this sort of thing (nothing is a guarantee)
Something can only mess with the /system partition (including the system settings menu) if it is an app that has root access. So be careful which apps you grant SuperUser access to, and be VERY VERY VERY careful if you download APKs from outside the Google Play Store. It may seem like a good idea, but crackers can take ANY apk, add a virus to it, and put it on a random site. Then you think you just got the newest Angry Birds, and in fact you did, BUT they added a nasty surprise to it.
You could also try out a virus scanner (ONLY download this via the Google Play Store, and ONLY use reputable big-name brands to be safest; if unsure, fire up Google, search the name of the antivirus, and read user reviews and related material to get a feel for whether it is a good and safe option). Of course, even if you have a virus scanner, something may still get past it, so it is still best to keep the advice in the paragraph above in mind at all times.
|
OPCFW_CODE
|
feature request: ability to configure RAM and VRAM for profiles (e.g. archlinux profile with 1gb ram)
see title
The controls below (including (video) memory size) already work for most profiles. Unfortunately, this is not the case for the profiles that are restored from a memory dump (Arch, Windows, ReactOS, Haiku, the BSDs and 9front) since from the OS point of view, memory size would change while it's running.
Do you actually need 1GB in the Arch profile?
Yeah I'm running Minecraft in it and 1.8+ need more than 512mb ram. I'd set it to 1gb ram or 2gb ram and either 16mb vram or 32mb vram.
Yeah I'm running Minecraft in it and 1.8+ need more than 512mb ram. I'd set it to 1gbram or 2gbram and either 16mb vram or 32mb vram.
minecraft is likely unplayable in v86.
It's kinda playable on beta versions. I have a goal of making it playable.
What about OS'es with hot swap support?
That might work too.
I'm a bit hesitant to increase the memory size of the Arch profile. It might cause issues in mobile browsers.
Isn't memory hotplug a thing? I know you can hotplug CPUs, at least.
Try this: https://copy.sh/v86/?profile=archlinux-boot&m=1024&vram=32
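For anyone embedding v86 themselves rather than using the copy.sh query parameters, the same knobs are constructor options. This is a sketch based on v86's documented `memory_size` / `vga_memory_size` options (both in bytes); the ISO URL is a placeholder, not a real image:

```javascript
// Sketch only: option names follow v86's documented constructor API;
// "archlinux.iso" is a placeholder for a real disk image URL.
const config = {
    wasm_path: "v86.wasm",
    memory_size: 1024 * 1024 * 1024,   // 1 GB of guest RAM
    vga_memory_size: 32 * 1024 * 1024, // 32 MB of VRAM
    cdrom: { url: "archlinux.iso" },
    autostart: true,
};
// In the browser: const emulator = new V86Starter(config);
```

Note that this only works for profiles booted fresh; as explained above, profiles restored from a memory dump cannot change memory size after the fact.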
|
GITHUB_ARCHIVE
|
Implement container endpoints from devfile v2 spec
We need to implement the endpoints section from devfile v2 container definition.
Acceptance criteria
[ ] odo push against OpenShift should create resources based on described mappings with Route
[ ] odo push against Kubernetes will create Services based on the described mappings. If an Ingress needs to be created, we have a problem with missing host information. In this case, odo should print a warning explaining what the problem is and how the user can create the URL (the example command should be automatically constructed with all flags based on the endpoint definition), for example: "Devfile defined exposed port 8080. In order to make this port accessible via URL you need to run odo url create --port 8080 --path /api --host <yourIngressHost>"
[ ] Update the devfiles in https://github.com/odo-devfiles/registry to include detailed configuration where needed (especially exposure)
https://devfile.github.io/devfile/_attachments/api-reference.html
Mapping to Kubernetes Resource:
exposure:
public - create Route/Ingress and Service (default)
internal - create only Service
none - don't create Service or Route/Ingress
protocol:
http - protocol for the port in the Service will be TCP
https - protocol for the port in the Service will be TCP; the Ingress/Route (when exposure: public is set) should be secured with SSL (the same way that odo does when odo url create --secure is executed)
ws - protocol for the port in the Service will be TCP
wss - protocol for the port in the Service will be TCP; the Ingress/Route (when exposure: public is set) should be secured with SSL (the same way that odo does when odo url create --secure is executed)
tcp - the Service port will be TCP
udp - protocol for the port in the Service will be UDP
example of endpoint definition:
endpoints:
- name: web
path: /api
exposure: public
protocol: http
secure: false
targetPort: 8080
Based on this definition odo should create:
Service for container port 8080
Ingress or Route for Service on port 8080 with path: /api
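To make the mapping concrete, here is a hypothetical sketch of the two Kubernetes resources odo might generate from the endpoint above; the resource names, selector label, and host are assumptions for illustration, not odo's actual output:

```yaml
# Illustrative only: names, labels, and host are assumed values.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    component: my-component
  ports:
    - name: web
      port: 8080
      targetPort: 8080
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: my-component.example.com   # the missing host information mentioned above
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
```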
/priority high
/area devfile
This issue is blocked by: https://github.com/openshift/odo/issues/3544
This issue is blocked by: #3544
#3544 was closed
This issue will require updating the devfile JSON schema embedded in odo and updating the parser structs.
@kadel
Since secure for each targetPort is defined in the devfile endpoint, do we still need the --secure flag for url creation?
I think we can safely remove the flag and just use the endpoint definition in the devfile, to avoid unnecessary validation of the secure setting.
After thinking about this more deeply:
Since all the properties in Endpoint other than Name and TargetPort are optional, do we accept the scenario where all the optional properties are empty, and the user uses url create to set the values in env.yaml?
i.e., in the devfile:
endpoints:
- name: web
targetPort: 8080
The user runs odo url create --host <IP_ADDRESS>.com --port 8080 --ingress --secure
Now the env.yaml file contains
Url:
- Name: nodejs-8080
Port: 8080
Secure: true
Host: <IP_ADDRESS>.com
Kind: ingress
If we support this case, then do we support the user modifying the devfile to update the endpoint entry?
This may cause conflicts between the devfile and the env.yaml file. And when there is a conflict, which definition should we use? Or should we even delete the previously pushed ingress/route if we find a conflict between the devfile and env.yaml?
In addition, since this issue allows URL creation using definitions in devfile.yaml, the logic of url list and url describe also requires updates. Issue created: https://github.com/openshift/odo/issues/3615
If URLs defined in env.yaml and devfile.yaml conflict, how should url list and url describe act in that case?
After discussion with the team, we are now proposing to remove --secure and --path from url create. If the path/secure is specified in the endpoint definition, we'll just use it, so that we can avoid conflicts for these two settings.
--port in url create should match a targetPort defined in the devfile; if the specified port is not found in the devfile, the CLI should report an error. (This is already our current behavior.)
And if different urlNames are defined for the same port in devfile.yaml and env.yaml, we will use the one defined in the env.yaml file, since that value is specified by the user from the CLI, and env.yaml is considered the "local" settings, which should have precedence over devfile.yaml.
@kadel Any thoughts on that? Or any concerns?
@yangcao77 I think we can remove these flags for devfile but --secure flag is being used for s2i components.
@adisky Sorry for the confusion. What I meant was to remove those flags only if experimental mode is on.
In addition, do we support the user modifying the devfile to update the endpoint entry?
This may cause conflicts between the devfile and the env.yaml file. And when there is a conflict, which definition should we use? Or should we even delete the previously pushed ingress/route if we find a conflict between the devfile and env.yaml?
Most of the options from env.yaml should move to devfile.yaml.
The only things that will need to stay in env.yaml are the host, kind, and name, to match the configuration from env.yaml to the endpoint from devfile.yaml.
|
GITHUB_ARCHIVE
|
OK, back to the topic. There are a lot of networking places. During functions, talks, seminars... all sorts. Including funerals. As I mentioned, went to a funeral just now. Since Feng was too tired to accompany me (杨少 is both our friend, you see), I went down alone. I knew Meng was going to the funeral, so decided to go down since I'd know someone. But Meng was with his poly classmates. So I sat with them at the same table. I was sitting in between Meng and another guy who is working for Hitachi Cable. Meng was asking me if my company could supply some components that his customer is looking for (networking #1). I told him to visit my company's website for the catalogue. As time went by, I started chatting with the rest of the people at the same table. All were classmates of Meng and 杨少. The Hitachi guy was real funny. He asked Meng for his namecard and they exchanged (networking #2). Then Meng gave me his also. And I always wondered what my namecards were for. Haha. Anyway I didn't bring my namecards down lah.
The Hitachi guy (sorry, didn't know his name and didn't get his namecard. He claimed that namecards are for him to write notes on. LoL) went off and Meng went to get something from another friend. So the table left me and another guy. We were chatting about studying.. he's studying in SIM under RMIT, and I was mentioning to him that I am starting my classes at SIM in Jan. (Oops.. think I forgot to update you guys. I was accepted into UniSIM under the course of BSc in Electronics. Yeah me.) Then we were chatting about work, and he gave me his namecard (networking #3). His company is dealing with cables, and my company does provide some components that he can use. So we started discussing how he would get an RFQ from me and he'll outsource and let my company be in their company's AVL (think it stands for Authorised Vendor List). So I came home after the funeral, and I emailed him. Using my company's email of course. I can access my company's email from home by the way. I emailed him and Meng my company's website and told them to look at our catalogue. If need anything call me at my office and/or email me. =P
Too bad I didn't bring my namecards. Note to self: bring namecards wherever I go.
I have too many namecards that I need to
So to all out there who has namecards, drop me a tag, and maybe we can exchange namecards and network. Haha! Everywhere also can network one. And I'm not being disrespectful to my friend's mother. I'm just making a random blog.
- 19th October 2007, Friday -
|
OPCFW_CODE
|
I would need 2 simple scripts:
[url removed, login to view] which automatically subscribes the customer's email
and name to the Getresponse autoresponder via their API
[url removed, login to view] which can query my MySQL database and, if
it confirms that the typed email address / transaction ID exists
in the database, redirects the user to a specific page.
I've got 99% of the stuff already in place – I need only a couple of lines of code from you,
as I don't know PHP / any scripting language to write it myself, so this
is a super easy and fast project. I need only the raw code that does the job.
AD 1. I've got a page with a PayPal form. The user types his email and
name, clicks "Pay" and is redirected to PayPal – the typed email
and name automatically populate the PayPal fields.
Now I would need a script which could be put in between,
so it catches the email and name, adds the user to the Getresponse
autoresponder and then redirects the user to PayPal.
Basically this script needs to generate and automatically
send a GET string to Getresponse as follows:
[url removed, login to view];email=
… this string subscribes the user via the API. Other methods are available
(see the attached Getresponse API).
AD 2. I've got a script which gets all the payment data from customers,
puts everything into a MySQL database and automatically sends
links to the downloadable product. Now I need a script for customers
so that, in case they didn't receive the download link, they can type their details,
and if these match an entry in the database they are redirected to the download page:
a) The customer can type his email address
b) The customer can type his transaction ID
The script must find an exact match with the full entry, so if
somebody enters "mike" it cannot match all the entries in
the database which contain "mike", such as "".
Additionally, if the script finds a perfect match, it checks which
product was bought (using the "control" or "custom" parameter which
is already in place with each transaction in the database) and then
redirects the customer to the predefined page.
Table name: transakcje
Fields: "txn_id" AND "t_id" – contain the transaction ID
"payer_email" AND "email" – contain the customer email
"custom" AND "control" – contain the unique product ID
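For illustration, the exact-match lookup described above could be a single parameterized query. This is a sketch only; the `?` placeholder style and the choice of columns to compare are assumptions to be adapted in the actual PHP database code:

```sql
-- Match the full value exactly ('='), never a substring (LIKE '%...%'),
-- so a partial entry like "mike" cannot match longer stored values.
SELECT custom, control
FROM transakcje
WHERE payer_email = ? OR email = ?
   OR txn_id = ? OR t_id = ?
LIMIT 1;
```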
I would need from you only the code that does the work, with all
the fields to fill in and customize.
If you have questions, send them to me via PM – we can talk
via Skype as well.
With your bid, state how much you will charge for this and
when the scripts will be completed.
|
OPCFW_CODE
|
The capability of eliciting high-quality information is crucial for many applications. For instance, the quality of a trained machine learning model depends heavily on the quality of its input data. We'd like to understand the capability and power of elicitation and aggregation approaches in several critical application settings, including forecasting systems, reproducibility scoring systems, and peer review/grading systems.
Hybrid Forecasting Competition
This is an IARPA-funded project aiming to build a hybrid forecasting system that is able to acquire high-quality predictions from human participants and create an active interaction loop between human participants and machine algorithms. A set of relevant questions and challenges includes: eliciting high-quality predictions from human participants without waiting for the outcome of an event to be realized; engaging human participants in forecasting long-term events; information aggregation/verification without ground truth; and eliciting other useful information from human participants.
Information Market for Reproducibility Scoring
The reproducibility crisis has troubled many research communities. Reproducing results from scratch is very expensive, and even when it is affordable to carry out reproducibility studies, it is natural to ask how reproducible those reproducibility studies themselves are. Machine learning algorithms would not receive enough high-quality training labels to build automatic predictors. We seek a cheap way to gather human-judgement information to help us label the reproducibility of a particular research article. The involved questions include: building a combinatorial prediction market to elicit opinions on reproducibility; aggregating information to generate confidence scores for each research article; robust aggregation in the face of uninformative information; and truth discovery when the majority opinion is wrong.
This is a jointly funded project with Yiling Chen at Harvard.
No training data is perfectly clean, even if it is close to being so. Take ImageNet as an example: while people often take ImageNet as ground-truth data, that is not always the case. Aggregated labels from people contain different (endogenous or exogenous) sources of noise. This is troubling: without understanding the noise in the training data, a good number of results are only validated on a set of biased ones.
We argue that by default a learning algorithm should assume the existence of noise (the clean setting then corresponds to the case of zero noise). This project aims first to understand the robustness of machine learning models under noisy and adversarial inputs. We will then develop approaches to learn a robust model by learning and leveraging knowledge of the noise.
The fast progress of machine learning techniques has also raised concerns about fairness, transparency, and accountability when the outcomes can significantly affect people and our society. For instance, many of the issues mentioned above are due to existing bias in the collected data. Machine learning systems are known to have the property of "garbage in, garbage out": when the collected training data is sampled in a biased way that misrepresents a population, the only thing a machine learning algorithm can do is reinforce this bias.
There is a series of questions awaiting answers such as how to make a machine-made decision fair, how to explain a machine-made decision to people, and who should hold accountability when an error is made by the machine.
|
OPCFW_CODE
|
Conflict between TikZ package and memoir package - all packages updated
I am getting an error when I run memoir with the tikz package. I updated my TeX Live distribution to the 2012 version, so there should be no conflicting-version problems. Memoir and tikz are both in my 2012 TeX Live path, but the log file shows some odd dates. I pared everything down to the simplest example I could make and the error still remains. I am running TeXShop on OS X and have tried typesetting both as pdflatex and TeX+DVI.
Here's the minimum working example. If I comment out \usepackage{tikz} it works fine; if I leave it in, it spits out an error (listed below). The log file, due to its length, can be found here.
\documentclass[]{memoir}
\usepackage{tikz}
\begin{document}
\frontmatter
\mainmatter
Test
\backmatter
\end{document}
The error I get is this:
...
(/usr/local/texlive/2012/texmf-dist/tex/latex/latexconfig/color.cfg))
! Extra \endcsname.
\@namelet ...fter \endcsname \csname #2\endcsname
l.14
Any assistance would be greatly appreciated as I am attempting to write some lecture notes up for a class that begins soon. Thanks!
I get no problems with your code, but in the log file I see that you are using babel and maybe other things. So the best would be to delete the aux file and try again. If you include \listfiles in your preamble it will show you the version info of each package. I have xcolor 2.11 and PGF/TikZ 2.10.
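For reference, a sketch of where \listfiles goes in the minimal example above; the file list with version info is appended at the end of the run, in the .log, rather than shown among the packages as they load:

```latex
\listfiles  % request a summary of every loaded file and its version in the .log
\documentclass[]{memoir}
\usepackage{tikz}
\begin{document}
Test
\end{document}
```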
I just trashed my aux files and am still getting the error. Potentially dumb question: does the version info show up in the log? Because adding \listfiles didn't seem to change anything in any of the output.
Indeed the version infos show up in the log file.
Hmm. Well, they did regardless of whether or not I included \listfiles in my preamble. At any rate, the output in the log file looks exactly the same. PGF is 2.10. For whatever reason it's not showing a version for xcolor. Incidentally, before doing all this, I had reinstalled xcolor last night in an effort to solve the problem (before updating my entire TexLive distribution in pure frustration).
If you simply go past all of the errors, do you get a file list in the .log? If so, can you edit it in?
The log I posted at the above link is the entire file - absolutely everything. Your comment has me worried that there is a deeper problem here...
Sorry, the log as posted has no reference to color.cfg
I think there is indeed a problem with the xcolor installation. You can find instructions for manual installation on TL 2012 by searching the main site as well. As @JosephWright comments, you can run the compilation from a command window and pass over the errors until they are exhausted; then you should have a log file even though the compilation finished with errors.
Yeah, I thought it was odd that the log didn't include anything about color.cfg. For some reason the saved log file is not the same thing as what shows up in the compiler window in TexShop.
Anyway, changing xcolor to a more recent version worked today so apparently the steps to solve the problem were to update both texlive and xcolor (though I had thought the latter was included with the former). Thanks for the assistance everyone!
On that basis, this looks 'too localized' (there must have been some version issue on your machine).
Probably, though a closed question with a similar problem is what drew me here. I think this may be a more common occurrence that at first it appears.
|
STACK_EXCHANGE
|
Notify Snap users of confinement limitations
Binaries in snaps can't "directly call" binaries from other snaps: https://forum.snapcraft.io/t/newsboat-browser-does-not-open/3991 This means that users effectively can't use anything but "xdg-open" as their "browser" (xdg-open is whitelisted, apparently).
What I think we should do about it:
Patch the snap to change the default browser from lynx to xdg-open, so things work out of the box.
Change our config-parsing code to emit warnings if user re-defines "browser", saying something like "You're running a Snap, confinement prevents you from using anything but xdg-open as your browser".
Emit the same warning whenever a macro changes the browser setting.
(Later) Look into exact reasons behind the limitations, to see if we can somehow deal with it.
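For step 1, the Snap-friendly default amounts to a one-line setting in Newsboat's config (a sketch; %u is Newsboat's placeholder for the URL being opened):

```
browser "xdg-open %u"
```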
Minutes from IRC discussion:
Lyse suggested adding a note to the FAQ; I disagree, because I think the FAQ shouldn't contain platform-specific things;
noctux suggested adding a note to Snap description. I'm not sure that's an appropriate place for such notes.
Here's an example of patching a Snap: https://github.com/snapcore/snapcraft/pull/1915/files
Thanks for publishing newsboat as a snap!
I'm super happy to have found this reader, and snaps are a pretty nice user experience compared to manually installing all the :)
Out of curiosity, is there a strong case for not defaulting to xdg-open over lynx?
Many distributions provide Newsboat natively, and if you can, you'd be better off switching to that. Snap is still quite young and rough around the edges; for example, they don't provide a semi-automatic way to rebuild a snap if one of its dependencies gets updated. The only way to rebuild is to install Snap locally and compile new snaps for all the platforms. Yet they do provide auto-builders that build new snap (for all platforms) every time a new commit is pushed to our repository. Bewildering.
This is not a theoretical concern, either: curl got updated recently, but I haven't re-built the snap against that new version because I don't have the energy to set up a Snap build environment and go through the motions. Luckily there's a Newsboat release in a couple of weeks; everything will be rebuilt then.
Out of curiosity, is there a strong case for not defaulting to xdg-open over lynx?
Backwards compatibility. Newsboat's predecessor, Newsbeuter, defaulted to lynx, so Newsboat does as well. If we change that to xdg-open, stuff might stop working for people who relied on the default.
The only reason I'm considering patching the Snap and changing the default there is that lynx never worked there, so I'm positive I won't be breaking anyone's workflow.
Interesting about snap dependencies....
And yeah, I see newsboat in apt now that I look for it :upside_down_face:
Would it be helpful if I update the README to mention alternate installs? (snap/apt/etc)
Frankly, I'm not too keen on keeping such a list up-to-date, even though I see some other programs mention this in their docs.
Newsboat is pretty widespread, with packages in all major Linux distributions, a few BSDs, and Homebrew. I'm just assuming that if a person wants a terminal reader for RSS (both of which are pretty niche technologies nowadays), they're already trained to search their distro's repository first. That assumption was wrong in your case, but luckily it worked out because you wrote a comment here. So I can just keep pretending the assumption is true :)
Let's consider this closed by #1470, and continue with patching in #1476.
|
GITHUB_ARCHIVE
|
IIS 7.5 - Generate a certificate FTPS site
We want to automate the creation and renewal of a certificate for the IIS FTPS server. We tried to use "advanced mode", but in the "installer" section WACS doesn't show any FTP options, only HTTP bindings.
We tried with the latest release (v<IP_ADDRESS>) and with an older one (v<IP_ADDRESS>); the issue is the same.
An example with the latest release :
[INFO] A simple Windows ACMEv2 client (WACS)
[INFO] Software version <IP_ADDRESS> (RELEASE)
[INFO] IIS version 7.5
[INFO] Please report issues at https://github.com/PKISharp/win-acme
N: Create new certificate
M: Create new certificate with advanced options
L: List scheduled renewals
R: Renew scheduled
S: Renew specific
A: Renew *all*
O: More options...
Q: Quit
Please choose from the menu: M
[INFO] Running in mode: Interactive, Advanced
1: Single binding of an IIS site
2: SAN certificate for all bindings of an IIS site
3: SAN certificate for all bindings of multiple IIS sites
4: Manually input host names
<Enter>: Abort
Which kind of certificate would you like to create?: 4
Enter comma-separated list of host names, starting with the common name: ftps.domain.com
[INFO] Target generated using plugin Manual: ftps.domain.com
Suggested FriendlyName is '[Manual] ftps.domain.com', press enter to accept or
type an alternative: <Enter>
1: [dns-01] CNAME the record to a server that supports the acme-dns API
2: [dns-01] Manually create record
3: [dns-01] Run script to create and update records
4: [http-01] Host the validation files from memory (recommended)
5: [http-01] Save file on local or network path
6: [http-01] Upload verification file to WebDav path
7: [http-01] Upload verification files via FTP(S)
8: [http-01] Upload verification files via SSH-FTP
C: Abort
How would you like to validate this certificate?: 2
1: Elliptic Curve key
2: Standard RSA key pair
What kind of CSR would you like to create?: 2
1: IIS Central Certificate Store
2: Windows Certificate Store
3: Write .pem files to folder (Apache, nginx, etc.)
How would you like to store this certificate?: 2
1: Create or update https bindings in IIS
2: Do not run any installation steps
3: Run a custom script
C: Abort
Which installer should run for the certificate?:
Windows Version : Windows Server 2008 R2 SP1
IIS Version : 7.5
My issue is similar to #894
The FTP installation plugin currently only enables itself for IIS 8 and up. I did some research and it should actually work for IIS 7.5 as well (even 7.0 is possible, but we'd have to detect whether or not the extension has been installed).
I'll make sure to enable the plugin for IIS 7.5 in the next release.
Released in 2.0.5
|
GITHUB_ARCHIVE
|
Throughout the course we shall make use of MATLAB, but please understand me right when I say that this is not a MATLAB course. If a and b are not yet assigned you will get an error message. External in this context means the procedure has its own namespace where the variables are given a local meaning.
The following tables describe the elements of regular expressions. This time the velocity profile becomes flat. Fortunately saving data is also very easy: just issue the command save yourFileName -ascii to save data with 8 digits, or save yourFileName -ascii -double to save data with 16 digits.
This is OK for day-to-day use, but you should know about one possible pitfall: you need MATLAB installed to view the information in a .mat file. For example: how can a regular expression be used to handle nested parentheses in a math equation?
Exclude newline characters from the match using the 'dotexceptnewline' option. My hope is therefore that many of you shall come to appreciate computer programming as a main productivity tool.
There are numerous ways to produce labels, multiple plots, 3-dimensional plots, etc. The list is complete in the respect that nothing more will be needed in this course.
Select the control configuration and decide on the type of controller to be used; formulate a mathematical design problem which captures the engineering design; and synthesize a corresponding controller. Now, this is good programming practice! You are told to solve the equations and should focus on your task without too much hesitation.
In this case the variable x is first asked to hold a copy of the number 3. Each output is in its own scalar cell. Chemical formula arithmetic: consider a simple chemical formula like H2O.
In the worst case you may get fatally wrong results. However, I am quite sure you have never reflected over how close a chemical formula is to an arithmetic expression! Translate this phrase into a regular expression (to be explained later in this section) and you have it.
The first three functions are similar in the input values they accept and the output values they return. The return is an array with all the matches made. Therefore, always check your result by back-calculating the right-hand side!
A series of pitot-tube measurements gave the following result, where the dynamic pressure is in mm fluid head. Return the starting indices in a scalar cell.
An important concept in this chapter will be the functional that is discussed more carefully in Chapter 5. This means we shall not be interested in the magnitude of the number, just the fractional part of it. There is no B in the feed. Therefore, you would have to manually sort the names.
In the function call I have chosen to define a struct for copying the parameters. Note that the minimization is subject to n only, keeping T and p constant during the calculation. Use named tokens to identify each part of the date.
Jon, John, Jonathan, Johnny: regular expressions provide a unique way to search a volume of text for a particular subset of characters within that text. I am quite exhausted, as I have not found anything on MATLAB's website that suggests how to do this.
I have a set of strings, e.g. 'AGBC(1)', and am trying to do a regexp on them so that all the strings… Remarks: the Regex.Split methods are similar to the String.Split(Char[]) method, except that Regex.Split splits the string at a delimiter determined by a regular expression instead of a set of characters.
The string is split as many times as possible. If no delimiter is found, the return value contains one element whose value is the original input string.
Dynamic expressions allow you to execute a MATLAB command or a regular expression to determine the text to match.
The parentheses that enclose dynamic expressions do not create a capturing group. MATLAB operators that contain a period always work element-wise.
The period character also enables you to access the fields in a structure, as well as the properties and methods of an object. newStr = regexprep(str,expression,replace) replaces the text in str that matches expression with the text described by replace.
The regexprep function returns the updated text in newStr. If str is a single piece of text (either a character vector or a string scalar), then newStr is a single piece of text of the same type.
regexp does not support international character sets. Examples.Download
|
OPCFW_CODE
|
Do you have to be mega-rich to invest in companies pre-IPO?
Say there's a company which is known with certainty to be doing very well, but is privately held and is expected to go public within a few years. When such companies do investment rounds, are the only people permitted to participate those with high net worths?
Is there any way to invest in companies pre-IPO without being quite wealthy?
(And without just getting a job at the company that comes with stock options.)
Have you seen this question?
Yes, but I knew that. I'm particularly interested in pre-IPO stock, not during-IPO stock. The downvote is very confusing to me.
Agreed, different enough. This is a unique question. This question is actually asking how one can buy shares of stock in pre-IPO companies, or those that are still private.
@Aerovistae Not my downvote, but fair point; your question is referring to an earlier timeframe.
I love how a question with 6 upvotes, a favorite, and two productive answers has three close votes. What the #$^&? How do you justify that?
It should be noted that some small companies which never go public still sell shares to investors, and the investment may not be huge if there are enough interested people. I know someone who had an opportunity to invest in a burlesque troupe when they were looking at buying their own venue; it would have cost about $10,000 to buy in as a supporter given the number of interested investors and the funds needed, and would almost certainly have been profitable (the troupe had already established a strong following), not to mention amusing. Alas, the venue purchase didn't happen.
Short answer: No, but being connected is very helpful, and securities regulators impose no consequences on an investor who figures out how to acquire pre-IPO stock.
Long answer: Yes, you generally have to be an "Accredited Investor," which basically means you EARN over $200,000/yr yourself (or $300,000 jointly) and have been doing so for several years and expect to continue doing so, OR have at least $1 million of net worth (this is joint worth with you and your spouse).
The Securities Exchange Commission and FINRA have put a lot of effort into keeping most classes of people away from a long list of investments.
+1 - I'd only add, it's typical for even low wage workers of the company to get shares as part of their pay package. Many who worked at Google even as janitors were rich after the IPO.
It would be useful to add that there are good reasons for the SEC to keep the everyman out of pre-IPO purchases. First and foremost, if that were allowed, then the date/time when it is allowed basically is the IPO. Second, dramatically increased volume and volatility in the pre-IPO period would make institutional investors wary, and when the big guys get shaky, that's when the market takes a dump. While we generally view "insider trading" among the well-heeled as a bad thing, in this instance it's actually a stabilizing measure.
@KeithS everything the SEC does to "protect investors" and "maintain market integrity" has questionable utility. So I try to avoid talking about all the useful-ish things to add, because we could talk about this all day.
Note that the $1 million in net worth can't include the value of your house.
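The income and net-worth thresholds discussed above can be sketched as a simple check. This is a simplification for illustration only: the real SEC accredited-investor definition has additional criteria, the multi-year income history requirement is ignored, and the function name and parameters here are made up:

```python
def is_accredited(individual_income, joint_income, net_worth_excluding_home):
    """Illustrative sketch of the accredited-investor tests described
    above: income over $200k individually (or $300k jointly), OR at
    least $1M net worth excluding the primary residence."""
    income_test = individual_income > 200_000 or joint_income > 300_000
    net_worth_test = net_worth_excluding_home >= 1_000_000
    return income_test or net_worth_test
```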
No you don't have to be super-rich. But... the companies do not have to sell you shares, and as others mention the government actively restricts and regulates the advertising and sales of shares, so how do you invest?
The easiest way to obtain a stake is to work at a pre-IPO company, preferably at a high level (e.g., Director/VP of underwater basket weaving, or whatever). You might be offered shares or options as part of a compensation package.
There are exemptions to the accredited investor rule for employees and a general exemption for a small number of unsolicited investors. Also, the accredited investor rule is enforced against companies, not investors, and the trend is for investors to self-certify. The "crime" being defined is not this: investing in things the government thinks are too risky for you. Instead, the "crime" being defined is this: offering shares to the public in a small business that is probably going to fail and might even be a scam from the beginning.
To invest your money in pre-IPO shares is on average a losing adventure, and it is easy to become irrationally optimistic. The problem with these shares is that you can't sell them, and may not be able to sell them immediately when the company does have an IPO on NASDAQ or another market. Even the executive options can have lock up clauses and it may be that only the founders and a few early investors make money.
There are a couple of ways to buy into a private company. First, the company can use equity crowd funding (approved under the JOBS act, you don't need to be an accredited investor for this).
The offering can be within one state (i.e. Intrastate offerings) which don't have the same SEC regulations but will be governed by state law.
Offerings by small companies (under $1 million in assets) can be made under Regulation D, Rule 504. For assets under $5 million, there is Rule 505, which allows a limited number of non-accredited investors. Unfortunately, there aren't a lot of 504 and 505 issues.
Rule 506 issues are common, and it does allow a few non-accredited investors (I think 35), but non-accredited investors have to be given lots of disclosure, so often companies use a Rule 506 issue but only for accredited investors.
There are some new fundamentals related to this question.
For instance, mutual funds have begun investing in companies before the IPO.
And the Softbank holding company has funds for accredited investors that invest in companies before the IPO. However, anyone can invest in the Softbank company itself.
Finally, non-accredited investors can invest in small private companies under SEC rule 504. That's a very small company because investment is currently limited to a total of $5 million every 18 months.
Also, there are crowd-funding laws that lighten up the traditional SEC rules.
|
STACK_EXCHANGE
|
Many people are using Box.com Cloud Storage and taking advantage of the benefits of the IU Box Service. Even though there is no Box Sync or Box Drive for the Linux platform, it can still be used effectively via HTTPS, FTPS clients (like lftp), and rclone, as explained on this page.
Remmina Remote Desktop Client is an open-source, free, and powerful remote desktop sharing tool for Linux and Unix-based systems. It offers feature-rich, useful tools for administrators and travelers who need easy and smooth remote access.

Mar 27, 2012 · Here is how you can connect to Box from Linux. Note: This tutorial is based on Ubuntu, GNOME Shell, and Nautilus. 1. Open Nautilus. Go to "File -> Connect to Server". 2. Under the Type dropdown, select the option "Secure WebDav (HTTPS)". Next, enter the URL "www.bo

On Linux and Oracle Solaris hosts, the Oracle VM VirtualBox installation provides a suitable VRDP client called rdesktop-vrdp. Some versions of uttsc, a client tailored for use with Sun Ray thin clients, also support accessing remote USB devices. RDP clients for other platforms will be provided in future Oracle VM VirtualBox versions.

Nov 06, 2017 · There is a script that makes uploading and downloading files to Dropbox from the Linux terminal much easier. Officially, the script lets you use Dropbox from the Linux command line in any Linux distro, BSD, and any other operating system that has a Unix-like terminal structure.

Oracle VM VirtualBox Extension Pack: free for personal, educational, or evaluation use under the terms of the VirtualBox Personal Use and Evaluation License on Windows, Mac OS X, Linux, and Solaris x86 platforms.
The Microsoft Teams desktop client is a standalone application and is also available in Microsoft 365 Apps for enterprise.Teams is available for 32-bit and 64-bit versions of Windows (8.1 or later) and Windows Server (2012 R2 or later), as well as for macOS and Linux (in .deb and .rpm formats).
Access all your Box files directly from your desktop, without taking up much hard drive space. Box Drive is natively integrated into Mac Finder and Windows Explorer, making it easy to share and collaborate on files. Download Box Drive for Mac Download Box Drive for Windows (64 bit) Download Box Drive for Windows (32 bit) I know Box.com does not have a non-server client for Linux. Is there, then, a way to download my files to my PC using an unofficial client? box-linux-sync - A naïve Box.com Linux Client. An unofficial attempt to create a Linux synchronization client because Box.com does not provide one. NOTICE. MUST READ: DEPRECATION WEB DAV SUPPORT JANUARY 31, 2019
Download Box Drive to your Windows or Mac for an incredibly simple way to work with all of your files — right from your desktop, taking up very little hard drive space.
|
OPCFW_CODE
|
# encoding=utf-8
from pupa.scrape import Jurisdiction, Organization

from .people import SacramentoPersonScraper


class Sacramento(Jurisdiction):
    division_id = "ocd-division/country:us/state:ca/place:sacramento"
    classification = "legislature"
    name = "Sacramento City Council"
    url = "http://www.cityofsacramento.org/"
    scrapers = {
        "people": SacramentoPersonScraper,
    }

    legislative_sessions = []
    for year in range(2016, 2018):
        session = {"identifier": "{}".format(year),
                   "start_date": "{}-07-01".format(year),
                   "end_date": "{}-06-30".format(year + 1)}
        legislative_sessions.append(session)

    def get_organizations(self):
        org = Organization(name="Sacramento City Council", classification="legislature")
        org.add_post('Mayor of the City of Sacramento',
                     'Mayor',
                     division_id='ocd-division/country:us/state:ca/place:sacramento')
        for district in range(1, 9):
            org.add_post('Sacramento City Council Member, District {}'.format(district),
                         'Member',
                         division_id='ocd-division/country:us/state:ca/place:sacramento/council_district:{}'.format(district))
        yield org
|
STACK_EDU
|
What does NOCODE mean?
NOCODE simply means that the participant completed the study manually and not via the completion code you provided. If some participants didn't get redirected to the correct completion URL, they'll be listed in your submissions with NOCODE, or you may see the wrong completion code against the submission.
Why am I seeing NOCODE on my submissions?
This usually occurs for one of two reasons:
The participant reached the end of your study but was not redirected back to Prolific via the completion URL for some reason. Therefore, they had no way to access the completion code and had to submit without one, or instead submitted the wrong code.
These cases should be approved as normal if there is no issue with the submission; you should check your survey data to ensure that they fully participated in your study, and you can also use their completion time to gauge the likelihood that they did reach the end of your survey.
The participant decided to leave your study early or could not proceed because of some technical issue. Instead of returning their submission, they have submitted without a completion code or with the wrong completion code by mistake, and appear in your 'awaiting review' list.
You can identify these cases by checking your survey data to see that they have provided incomplete (if any) data, and those that experienced technical issues may also have short completion times (i.e. a few seconds).
In these cases we ask that you send a message to these participants asking them to return their submission on Prolific or find out if they experienced an issue and would like the opportunity to participate again.
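If you review submissions programmatically, the triage described above might be sketched like this. The field names (`code`, `seconds`, `has_complete_data`) and the time threshold are assumptions for illustration, not Prolific's actual export format:

```python
def triage_nocode(submissions, min_seconds=60):
    """Hypothetical triage of NOCODE submissions: approve those with
    complete data, and flag those with incomplete data or very short
    completion times for a follow-up message."""
    approve, follow_up = [], []
    for sub in submissions:
        suspicious = not sub["has_complete_data"] or sub["seconds"] < min_seconds
        if sub["code"] == "NOCODE" and suspicious:
            follow_up.append(sub)  # likely left early or hit a technical issue
        else:
            approve.append(sub)    # complete data: approve as normal
    return approve, follow_up
```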
Incorrect completion codes
If participants are completing your study as normal but the completion code is incorrect then it could be due to one of two reasons:
- If all of the participants are completing with the same incorrect code, then it is likely that your survey's redirect to the completion URL is incorrect, or the completion code you are providing in your survey is incorrect. You should check that your survey is set up correctly (i.e., providing the right code), and that the latest version of the survey is published.
- If most participants have the correct code but one or two do not, then those participants may have accidentally copied and pasted an incorrect code into Prolific if you are providing the code manually (i.e., not automatically redirecting participants). It is worth messaging these participants to ask if anything went wrong during their participation, and to clarify that they have indeed completed your study.
In all cases, as long as you have complete data from the participant in your external survey, and there is no other reason to doubt that their submission is genuine (e.g., they have spent the correct amount of time taking part), then these submissions should be approved and paid as normal.
If you have suspicions about a submission with NOCODE or an incorrect completion code then please get in touch using the button below, and we'll be happy to help.
I need further help
|
OPCFW_CODE
|
Serial I/O Routines Using the 8051's Built-In UART
This 8051 code library is intended to provide a variety of
commonly used I/O functions found in high-level languages
to 8051 assembly language programs. All I/O is assumed to
be though the 8051's built-in UART, connected to a terminal
or terminal emulation program running on a PC or workstation.
All of the routines within this library make calls to CIN and COUT.
These two simple routines are responsible for all the I/O
performed by this library; none of the other routines
directly access the 8051's built-in UART.
These may be replaced with
Interrupt Driven Serial I/O Routines
or custom routines if a different type of I/O device is to be used.
This library is available in plain text
or in a ZIP file. I hope you find
it useful. --Paul
Using the Serial I/O Routines
This information is believed to be correct, but errors or
oversights are possible. In particular, these routines may
overwrite registers, though an attempt has been made to
indicate which routines will overwrite which registers.
When in doubt, just peek at the source code.
- Basic character input. Simply waits for a byte and
returns it in Acc.
- Basic character output. Waits for the UART's transmitter
to be available and then sends the character in Acc.
- Sends a CR/LF sequence. While this seems simple, the
required code is 8 bytes whereas an ACALL is 2 bytes, so
this routine can save quite a bit of code memory. Acc
is not altered, either.
- Gets a two-digit hexadecimal input, which is returned in
Acc. If <ESC> is pressed, Carry is set, Carry is clear otherwise.
If return is pressed with no input, PSW.5 is set, PSW.5 is
clear otherwise. R2 and R3 are altered.
- Gets a four-digit hexadecimal input, which is returned in
DPTR. If <ESC> is pressed, Carry is set, Carry is clear otherwise.
If return is pressed with no input, PSW.5 is set, PSW.5 is
clear otherwise. R2, R3, R4 and Acc are altered.
- Converts an ascii value ('0' to '9', 'A' to 'F') into a
number (0 to 15). Carry is set if Acc contained an invalid
character. Lowercase letters ('a' to 'f') are considered
invalid, so a call to UPPER should be
made if the character might be lowercase.
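The described conversion can be modeled in Python. The routine itself is 8051 assembly; this sketch only mirrors the documented behavior, with the returned boolean standing in for the Carry flag:

```python
def asc2hex(char):
    """Python model of the ASC2HEX routine: convert '0'-'9' or
    'A'-'F' to a number 0-15. Returns (value, carry); carry True
    means the character was invalid. Lowercase letters are invalid,
    as in the original, so callers should uppercase first."""
    if "0" <= char <= "9":
        return ord(char) - ord("0"), False
    if "A" <= char <= "F":
        return ord(char) - ord("A") + 10, False
    return 0, True  # carry set: invalid character
```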
- Prints the value in Acc as a two-digit hexadecimal number.
Acc is not changed. This routine tends to be very useful for
troubleshooting by simply inserting calls to PHEX within
troublesome code. Note: The version of PHEX which originally
appeared in PAULMON1 destroys the value of Acc!
- Prints the value of DPTR as a four-digit hexadecimal number.
- Prints a string located within code memory. DPTR must be
loaded with the beginning address of the string before calling
PSTR. Strings may be larger than 256 bytes.
The string may be terminated with a 0 (which would not
be transmitted). The string may also be terminated by setting
the most significant bit of the last character. In this latter
case, the last character is transmitted with its most significant
bit cleared. For example:
do_msg: mov dptr, #mesg1
        acall pstr
        mov dptr, #mesg2
        acall pstr
        ret
mesg1:  .db "This is a test messag", 'e'+128
mesg2:  .db "testing: 1.. 2.. 3..", 0
- A simple chunk of initialization code for the 8051's
built-in UART. The constant "baud_const" must be defined
according to this simple formula:
baud_const = 256 - (crystal / (192 * baud))
where crystal is the 8051's crystal oscillator frequency (in Hz) and
baud is the desired baud rate (in bits per second).
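The formula is easy to check numerically. For example, with a common 11.0592 MHz crystal at 9600 baud (example values, not from the original text), the reload constant comes out to 250 (0xFA):

```python
def baud_const(crystal_hz, baud):
    """Evaluate baud_const = 256 - (crystal / (192 * baud)) using
    integer division, as the 8051 firmware would."""
    return 256 - crystal_hz // (192 * baud)

# 11.0592 MHz crystal, 9600 baud -> 256 - 6 = 250 (0xFA)
```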
- Prints the value in Acc as an unsigned (base 10) integer (0 to 255).
Leading zeros are suppressed. Acc is not changed, but Carry and
PSW.5 (f0) are changed.
- Prints the value in Acc as a signed (base 10) integer (-128 to 127).
Leading zeros are suppressed and the '-' is printed if necessary.
Two's complement format is assumed: 0xFF = "-1", 0x80 = "-128",
0x7F = "127".
- Prints the 16 bit value in DPTR as an unsigned (base 10)
integer (0 to 65535). R2, R3, R4, R5 and PSW.5 are altered. This
code uses a (very compact) successive-subtract divide routine
to get around limitations with the 8051's DIV instruction when
generating the first three digits. For polled I/O (such as the
CIN/COUT routines provided here) this shouldn't be a problem,
but for a timing critical interrupt driven I/O system the CPU time
required for this routine may need to be considered carefully.
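The successive-subtract idea can be illustrated in Python. This is a model of the described approach (peeling off decimal digits by repeated subtraction of powers of ten, avoiding a general divide), not the assembly itself:

```python
def print_uint16(value):
    """Model of PINT16U's digit extraction: repeatedly subtract
    powers of ten to find each decimal digit of a 16-bit value,
    then suppress leading zeros as the routine does."""
    digits = []
    for power in (10000, 1000, 100, 10, 1):
        count = 0
        while value >= power:   # successive subtraction in place of DIV
            value -= power
            count += 1
        digits.append(count)
    return "".join(str(d) for d in digits).lstrip("0") or "0"
```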
- Changes the value of Acc to uppercase if Acc contains a
lowercase letter. If Acc contains an uppercase letter, digit,
symbol or other non-ascii byte, the value of Acc is not altered.
- Prints the value of Acc as an eight digit binary number.
- Returns the length of a string in code memory. The length
is returned in R0. DPTR must point to the beginning of the
string, as with PSTR. Strings may be
longer than 256 bytes, but only the eight least significant
bits will be returned in R0.
- Gets a string of input. The input is finished when
a carriage return (ascii code 13) is received. The carriage
return is not stored in the string and the string is
terminated with a zero. Non-printing characters are not
accepted, but backspace (ascii code 8) and delete (ascii code
127) are handled. R0 and Acc are altered. The string is
stored in a buffer within internal ram, beginning at
str_buf. The value of max_str_len determines
the maximum size string that can be entered. A total of
max_str_len+1 bytes must be reserved in internal ram
for the buffer, to allow for the null termination. Care
must be taken to prevent the string's buffer from overwriting
registers, program variables, or the stack by selecting
values for str_buf and max_str_len which will
not conflict with other internal RAM usage. All access to
the buffer uses indirect addressing, so the upper 128-byte bank of
memory in the 80x52 may be used without affecting the special
function registers.
- Prints the string within the internal ram buffer used by
GETSTR. In contrast with PSTR,
no specification for the buffer's location is required (the value
defined by str_buf is used). Acc and R0 are altered.
- Gets an unsigned integer input, which is returned in Acc.
The input is terminated when
a carriage return (ascii code 13) is received.
Non-numeric characters are not
accepted and input which would exceed the allowed range (0 to 255)
is also not accepted. Backspace (ascii code 8) and delete (ascii code
127) are handled. R0, R1, R2 and B are altered. No internal
RAM buffer is required as with GETSTR.
- Returns with Carry set if Acc contains a printable ascii
character (32 to 126) or Carry clear otherwise.
|
OPCFW_CODE
|
TL;DR – Writing maintainable unit tests starts with treating test code differently than your production code.
In my experience, introducing unit tests to a project can come with its fair share of resistance. An argument I hear often is that test suites rarely provide an acceptable return on investment; the cost of maintenance is just too high.
Developers who feel this way aren't necessarily wrong. A poorly written test suite can create a huge maintenance overhead while providing very little valuable feedback. Such a suite could cost a company more time and money than it's worth.
But what about properly written test code? Is it even possible to write maintainable unit tests? I struggled with this question for quite a while when I first started out. And it took a lot of practice to finally come to an answer I was satisfied with.
Creating a Maintenance Nightmare
Most developers learn software testing on the job, and I was no exception. The first tests I ever wrote were for an application that was already in production. The test code looked no different than my production code. It utilized encapsulation, abstractions, and the DRY principle to be clean and concise.
Having a test suite left me feeling confident about the application. And I actually learned more about the business requirements and domain in the process.
But almost immediately the tests began to fail unexpectedly. At first it was only two or three failures triggered by minor changes. But then, as my project continued to grow, things spiraled wildly out of control.
Small changes in one class led to failures in a multitude of completely unrelated classes. And on top of that the failure results were often obfuscated by multiple assertions and generic messages.
Before I knew it I was spending more time debugging failed tests than adding anything of value to my project. Despite all of my best efforts, I had created a brittle, unmaintainable, and ultimately costly set of unit tests.
Learning to Write Maintainable Unit Tests
Few things can kill your motivation faster than seeing a bulk of your tests suddenly go red. Many developers take an experience like mine as an opportunity to quit while they’re ahead. But I couldn’t accept that unit testing was just another buzzword.
I’d read classic programming books written by the greats, and they all mentioned unit testing. Surely the likes of Martin Fowler or Kent Beck couldn’t have gotten it wrong. This pushed me to keep exploring.
I spent a lot of time researching the subject, but practice is what really drove it home for me. I wrote and re-wrote countless tests searching for the right formula. Eventually I started to discover patterns that I hadn’t noticed before.
I found that tests for classes with dependencies and collaborators were the most susceptible to maintenance headaches. Classes like this tend to require complicated setup which I would encapsulate within my test classes for reuse. They also delegate logic to member variables, coupling themselves to implementations that they can’t control.
These abstractions make for clean and concise production code, but they serve to make unit testing a maintenance nightmare. Let’s take a look at a couple of examples to see why.
Don’t Let Collaborators Control Your Tests
Cascading failures occur when tests expect concrete collaborators to return specific values or behave in a certain way. These failures crop up in groups, forcing you to scour an excessive amount of code to locate the source of the problem.
Let’s see cascading failures in action. Our example application is a simple game. It has a player object that returns weapon damage. There is also a player view that returns the player’s damage.
The unit tests work but they hold a dangerous expectation about the strong weapon’s damage. We’ll fast forward a couple weeks into development to see how this can hurt us. Game testers have been reporting that the strong weapon is overpowered so we dial back the damage to 4.
This was an easy adjustment, which is a good thing because balancing our game will require a lot of tuning. But something went wrong with our tests. Two tests for completely unrelated classes have suddenly begun to fail.
After you get over the initial shock, dismay, and temptation to just call it quits for the day, you sift through these tests and discover that they both expected strong weapon to return 5 damage. In this case the fix is simple. We just have to update every test that directly or indirectly references the strong weapon to expect the correct damage.
But our example only deals with two classes. In the real world you’ll be dealing with applications that have complex inter-class relationships and hierarchies. We’re going to have to come up with a better solution in order to write maintainable unit tests.
Tame Collaborators with Mocks
Martin Fowler defines mocks as "objects pre-programmed with expectations which form a specification of the calls they are expected to receive." Our previous tests failed because they were coupled to an arbitrary value that could change without our control. Let's gain back that control with mocks.
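A sketch of the mock-based approach using Python's `unittest.mock` (a hypothetical reconstruction, not the post's original code; the `Player` class is included so the example runs standalone):

```python
import unittest
from unittest.mock import Mock

class Player:
    """Minimal Player, repeated here so the sketch is self-contained."""
    def __init__(self, weapon):
        self.weapon = weapon

    def get_damage(self):
        return self.weapon.get_damage()

class MockedPlayerTest(unittest.TestCase):
    def test_damage_comes_from_weapon(self):
        weapon = Mock()
        weapon.get_damage.return_value = 7  # an arbitrary value WE control
        player = Player(weapon)
        # Assert the delegation, not the concrete weapon's tuning
        self.assertEqual(player.get_damage(), 7)
        weapon.get_damage.assert_called_once()
```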
Our improved player tests now use a mock weapon instead of a concrete implementation. This decoupling allows us to focus on the class under test.
We can assert that damage comes from the weapon without coupling ourselves to its implementation. We do the same for the player view tests, and our cascading failures are gone, leaving our tests green and more maintainable.
Abstractions Will Hurt Your Test Code
Eventually you’re going to miss something that a unit test is going to catch by failing. This type of failure means that your tests are doing their job. But let’s not get ahead of ourselves, feedback is great but it needs to provide value.
For our next example we’ve added power-ups to our player class that multiply weapon damage. In order to keep gameplay balanced, we’ve set a 4 power-up maximum to the damage calculation.
Looking at these tests, something might feel off to the untrained tester; there’s a lot of duplication. This seems like the perfect time to utilize the DRY principle.
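A hypothetical sketch of such a DRY refactor (the `Player` class here is an assumed reconstruction, included so the example is self-contained): one looping test replaces several explicit ones, and a failure report no longer names the broken case:

```python
import unittest

class Player:
    """Assumed reconstruction, repeated so the sketch runs standalone."""
    MAX_POWER_UPS = 4

    def __init__(self, weapon_damage):
        self.weapon_damage = weapon_damage
        self.power_ups = 0

    def add_power_up(self):
        self.power_ups += 1

    def get_damage(self):
        return self.weapon_damage * max(1, min(self.power_ups, self.MAX_POWER_UPS))

class DryPlayerDamageTest(unittest.TestCase):
    def test_power_up_damage(self):
        # Clean and concise, but when one case fails, the report only
        # says an assertEqual broke somewhere inside the loop.
        for power_ups, expected in [(0, 5), (2, 10), (4, 20), (6, 20)]:
            player = Player(5)
            for _ in range(power_ups):
                player.add_power_up()
            self.assertEqual(player.get_damage(), expected)
```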
Our test code is more clean and concise. But I wouldn’t have created this example if there wasn’t some sort of drawback. Let’s make a small change that will break our tests, and compare the results. Four power-ups is way too exploitable so we’re going to dial down the maximum power-up limit to 2.
Take a look at the test feedback below. Our refactored code is much cleaner but the new results do little to tell us what went wrong. However, the results of the initial tests point us in the right direction before we even have to look at the source code.
What we gained in cleaner code we lost in feedback. Having to debug a couple of tests like this every day is a surefire way to kill your motivation. Tests should provide obvious results when they fail in order to be valuable.
The Woes of Maintaining Clean Tests
The abstraction added to the refactored tests also makes them more difficult to maintain. Every time business requirements change you'll have to re-conceptualize the test logic in order to add to it.
Imagine patching the damage calculation 6 months after release. You’ll have to take valuable time away from the implementation in order to relearn what your test is doing. This is a classic argument against testing I hear all the time.
The solution is to treat your tests like their own universes. Each test method should be concrete and have its setup logic self-contained. You'll notice that the initial tests follow both of those rules, making them very easy to understand.
Final Thoughts On Writing Maintainable Unit Tests
Throwing out good coding techniques is a challenge. It's even tempting to use this as an argument against the merits of unit testing as a whole. But the reality is that production code and test code serve two very different purposes.
Yes, breaking OOP rules in the name of unit testing feels wrong at first. But doing so will enable you to create maintainable unit tests that serve a purpose greater than quality assurance. Their true power will become more apparent as you begin utilizing your test suite for experimentation and even documentation.
If you have any thoughts about the ideas shared in this article or any experience writing maintainable unit tests, I’d love to hear your feedback. Please feel free to leave a comment below!
|
OPCFW_CODE
|
The team behind the Gutenberg plugin shipped version 8.0 yesterday. The update adds some nice user-facing changes, including a merged block and pattern inserter, new inline formatting options, and visual changes to the code editor. Over two dozen bug fixes were included in the release, along with several enhancements.
Designers on the project updated the welcome box illustrations to match the current UI. Because the welcome modal should already be dismissed for current users, only new users should see these changes.
For theme authors, the post title and placeholder paragraph text for the block appender will inherit body font styles. Previously, they had specific styles attached to them in the editor. The current downside is that the post title is not an
<h1> element so it cannot automatically inherit styles for that element. However, that will change once the post title becomes a true block in the editor.
The editor also now clears centered blocks following a floated block. This is an opinionated design change, but it should not negatively affect most themes. However, theme authors should double-check their theme styles to be sure.
Updated Block and Pattern Inserter
The development team added patterns to the existing inserter. Now, both blocks and patterns have an individual tab within a unified interface. This is yet another step in the evolution of the pattern system that should land in core WordPress this year.
Right now, the experience is a two-steps-forward-one-step-back deal. The inserter’s behavior has improved and it is great to see patterns merged into it. However, all blocks and patterns are within long lists that require scrolling to dig through. Block categories are no longer tabbed in version 8.0, which is a regression from previous versions. I am certain this will be resolved soon enough, but it is a little frustrating locating a block in the list at the moment.
Merging patterns into the inserter is an ongoing process. There is still a lot of work to do before the final product is polished and included in core WordPress.
The following are some key items that need to be addressed in upcoming versions of Gutenberg:
- Patterns should be categorized the same as blocks.
- The block search box should switch to a pattern search box when viewing patterns.
- Pattern titles should be reintroduced in the interface (removed in 8.0).
Of course, there is a host of other minor and major issues the team will need to cover to nail down the user experience. For now, the interface for patterns continues to improve.
Subscript and Superscript Formats
Gutenberg developers added two new inline formatting options to the editor toolbar: subscript and superscript. These options allow users to add text such as X₂ and X². They work the same as bold, italic, inline code, and other options.
The two formatting options represent their respective inline HTML tags,
<sub> for subscript and
<sup> for superscript. With the addition of these elements, the toolbar now covers most of the widely-used inline HTML tags. The only other tag that is low on my wish list is
<ins>, but I could live with it remaining firmly in plugin territory.
Improved Code Editor
The code editor received a much-needed overhaul in the 8.0 update. Everything from the post title to the content is set in a monospace font, and the width of the code editing box spans the editing area. It should be a welcome change for those who need to switch to code view once in a while.
The next step to polishing the code editor (and the HTML block) would be to add syntax highlighting. In the current version, the HTML output is plain text. Given the extra markup that the block editor produces, it can be a bit of a jumbled mess to wade through. Basic syntax highlighting would improve the experience several times over. There is a GitHub ticket for adding the feature, but it has not seen any movement in several months.
|
OPCFW_CODE
|
There’s been a drastic evolution in the field of web development in these recent years. And, ever since the dawn of HTML5 and CSS3, the realm of website designing and development has become even more complex yet fun to explore than it ever was.
For starters, CSS3 is no longer a mere markup for styling web content, but it is much more than that! CSS3 comes loaded with more firepower than its predecessor. It is packed with some amazing animation and interactivity features that exceed the barriers of Adobe Flash and Microsoft Silverlight.
As long as you know your way around CSS3, harnessing its innate potential won’t be much of a challenge. Nevertheless, in today’s post I’ve brought you a list of some spectacular examples of CSS3 animations that will feed your inspiration and help you get started with creative animations.
(Note: The following list contains experimental CSS3 animations, which is why most of the examples work only in specific browsers.)
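Under the hood, most of these effects come down to CSS3 @keyframes paired with the animation property. Here is a minimal, hand-rolled sketch (the class name and timing values are illustrative, not taken from any of the demos below):

```css
/* A pulsing dot: @keyframes defines the states, animation plays them. */
.pulse {
  width: 20px;
  height: 20px;
  border-radius: 50%;
  background: #e74c3c;
  animation: pulse 1.5s ease-in-out infinite;
}

@keyframes pulse {
  0%, 100% { transform: scale(1); opacity: 1; }
  50%      { transform: scale(1.4); opacity: 0.6; }
}
```

At the time of writing, many of these properties still need vendor prefixes (-webkit-animation, -moz-transform, and so on), which is partly why the demos below are browser-specific.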
Single DIV Weather Animated Icons
Created by Fabrizio Bianchi, this example supplies a stunning set of animated weather icons that are all neatly nested in a single DIV. The icons are created purely with CSS3, i.e., no JavaScript was used.
Matrix Effect Using WebKit CSS3
Remember those effects in Matrix? Well, guess what! You can produce the same effects using the neat properties of WebKit CSS3.
Bonfire Night Safety (CSS3 Infographics)
If you are bored of creating the same old static infographics, it is high time you fine-tuned them and made them even more engaging. Yes, you can transform your static graphics into subtle animations using the innate abilities of CSS3. Check out this remarkable animated infographic that was created to spread awareness of bonfire safety among kids in the UK.
Indatus CSS Animation
CSS3 Based Google Doodle
CSS Walking Man
Andrew Hoyer brings you an amazing example of CSS3 animation in this Walking Man tutorial, which features Andrew himself walking endlessly on a plain track.
It’s a vibrant clock that uses CSS and a custom jQuery plugin, tzineClock, created by the guys at Tutorialzine.
3D CSS Rotating Cube
10 Stunning CSS3 Hover Effects
Alessio Atzeni brings you a remarkable set of 10 stunning CSS3 transitions that will blow your mind away. No kidding! The hover effects will make your website’s visitors feel delighted and needless to say bring more interactivity to your images.
Animated CSS Bicycle
So, you can design a Bicycle. Big Deal! Can you make it run endlessly using pure CSS? Not sure about you, but the CSS geek at GitHub, Gautam Krishnan, certainly can do it using only CSS. Go ahead, try it out!
Ever heard anyone say, “It’s been raining Types outside”? Well, you will when you see this amazing CSS example created by Andrew Hoyer of Walking Man fame. In this example, you can set the gravity, the drop rate, as well as the size of the fonts that are showered from a dark cloud of typefaces.
This is one of the great and simple examples of how you can play around with CSS3 transform and transition properties to create awesome image galleries. Just hover over the images!
7 Animated Buttons Effects
Created by Mary Lou, at Tympanus, this example presents 7 different hover effects and styles all purely coded on CSS3. Each animation is simply astounding and best part is that each works smoothly.
Build a simple yet cute, animated submarine on CSS3 and search the unexplored depths of the Seas!
CSS 3D Dropdown
Justin Windle presents a fantastic dropdown menu built with CSS. It breaks away from traditional dropdown menus and offers a more playful look.
As I’ve mentioned earlier, the examples listed here won’t work smoothly in every browser, but most modern browsers such as Chrome, Safari, and Firefox support them. If you also have some fabulous CSS3 animation examples of your own, do share them with us!
Learn More: Top 10 Cool JQuery Animation Plugins
However, when mincing digital data you're probably pursuing one of two goals. One goal is encryption, where someone will eventually feed the puree backwards through the mincer to reconstitute the original turnip. This is possible because algorithmic mincing can be infinitely and precisely repeatable, and in these cases, as Mr Haynes tells us, reassembly is the reverse of disassembly.
Your other typical goal is to use a one-way hash function, which, as the name suggests, is meant to be as irreversible as mincing a real turnip. Hash functions are frequently used in authentication systems, e.g. to turn short, memorable passwords into blobs of pseudorandom goo which may then serve as cryptographic keys. This impacts RIM because they've tried to use this mechanism to protect Blackberry backups - but they haven't done it properly. Russian crypto outfit Elcomsoft has announced a tool for breaking into Blackberry phone backups, which works by virtue of RIM's misapplication of the PKCS#5 PBKDF2 Password-Based Key Derivation Function.
To generate some crypto keys to protect your backups PBKDF2 recommends that you repeatedly mince and re-mince your password at least 1000 times - yielding a password soup - however as Elcomsoft's Vladimir Katalov writes:
Where Apple has used 2000 iterations in iOS 3.x, and 10000 iterations in iOS 4.x, BlackBerry uses only one.
The effect of this is to greatly reduce the time/cost of applying brute-force and dictionary-based password cracking to Blackberry backups; the effort is simply multiplicative - if protected by only 1 rather than 1000 iterations, backup passwords may be broken in 1/1000th of the time and effort. Katalov is citing time-to-crack figures in the order of hours or a few days, so if you pick your target carefully you'll soon have access to everything they've stored. It's worth observing that the more senior the Crackberry user, the more likely that compliance and other regulation will demand regular backups of their data be made - so the bigger the name the greater amount of target data available.
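The effect of the iteration count is easy to demonstrate with Python's standard hashlib, which implements PBKDF2-HMAC (a sketch; the password, salt, and iteration counts here are purely illustrative):

```python
import hashlib

password = b"correct horse"
salt = b"per-backup-salt"

# PBKDF2 derives a key by iterating an HMAC; every extra iteration
# multiplies an attacker's per-guess cost by the same factor.
one_iteration = hashlib.pbkdf2_hmac("sha1", password, salt, 1)
many_iterations = hashlib.pbkdf2_hmac("sha1", password, salt, 10000)

# The derivation is deterministic, which is exactly what makes
# offline dictionary attacks possible once the backup is in hand.
assert one_iteration == hashlib.pbkdf2_hmac("sha1", password, salt, 1)
assert one_iteration != many_iterations
```

A cracker testing a dictionary against a 1-iteration scheme does roughly 1/10000th of the work needed against a 10000-iteration one, which is the asymmetry Katalov is exploiting.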
In the light of RIM's recent discussions/agreements with the UAE, India, Saudi Arabia and Kuwait to permit limited law-enforcement access into Blackberry traffic - the revelation of this weakness will do nothing positive for RIM's current security story.
But in the meantime CSOs of Blackberry-friendly enterprises should look to the potential exposure of their backups, because there's suddenly a multi-year corpus of data that probably requires better protection than it was thought natively to have. E.g.: If you're outsourcing your storage or are "hosting it in the cloud" without extra crypto, you may have an additional reason to review whether the security that you're contracting is commensurate with the sensitivity of any/all data being stored.
And on the flipside if this article makes you wonder whether you're not actually backing up your Blackberry data, that's something else you should check.
Re: [Python-ideas] enhance filecmp to support text-and-universal-newline-mode file comparison
[Please keep the discussion in the list. Also, please avoid top posting (corrected below)]
On 6/20/09, Gabriel Genellina firstname.lastname@example.org wrote:
En Thu, 18 Jun 2009 11:04:34 -0300, zhong nanhai email@example.com
So is it a good idea to enhance filecmp to support universal-newline mode? If so, we can compare files from different operating systems, and if they have the same content, filecmp.cmp would return true.
With aid from itertools.izip_longest, it's a one-liner:

py> print repr(open("one.txt","rb").read())
'hello\nworld!\nlast line\n'
py> print repr(open("two.txt","rb").read())
'hello\r\nworld!\r\nlast line\r\n'
py> import filecmp
py> filecmp.cmp("one.txt", "two.txt", False)
False
py> from itertools import izip_longest
py> f1 = open("one.txt", "rU")
py> f2 = open("two.txt", "rU")
py> print all(line1==line2 for line1,line2 in
...           izip_longest(f1, f2))
True
Currently filecmp considers both files as binary, not text; if they differ in size they're considered different and the contents are not even read.
If you want a generic text-mode file comparison, there are other factors to consider in addition to line endings: character encoding, character case, whitespace... All of those may be considered "insignificant differences" by some people. A generic text file comparison should take all of them into account.
--- On Fri, 19 Jun 2009, zhong nanhai firstname.lastname@example.org wrote:
Thanks for your suggestion. You are right, and there are a lot of things to consider if we want to make filecmp support text comparison. But I think we can just do some little feature enhancement, e.g. only the universal-newline mode. I am not clear on the way filecmp implements the file comparison, so could you tell me more about that? And if, in the source of filecmp, it compares files just by reading them line by line, then we can do some further comparisons when encountering a newline flag (meaning the end of a line).
You can see it yourself, in lib/filecmp.py in your Python installation. It does a binary comparison only -- and it does not read anything if file sizes differ. A text comparison should use a different algorithm; the code above already ignores end-of-line differences and breaks as soon as two lines differ. One could enhance it to add support for other options as mentioned earlier.
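For what it's worth, in modern Python 3 the same idea reads like this (izip_longest became zip_longest; this is a sketch of the approach discussed above, not what filecmp itself does):

```python
from itertools import zip_longest

def cmp_text(path1, path2):
    # Text mode with newline=None translates \r\n and \r to \n on read,
    # so files that differ only in line endings compare equal.
    # zip_longest pads the shorter file with None, so a missing
    # trailing line still makes the comparison fail.
    with open(path1, newline=None) as f1, open(path2, newline=None) as f2:
        return all(a == b for a, b in zip_longest(f1, f2))
```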
Features/6 dnx compatibility
Backed out some changes to ReadMe as per feedback on #5.
Backed out solution structure changes as per feedback on #5.
Switched TFM to net40 to support dnx451 as per #6. This gives us much wider compatibility than is required, but I don't believe that will be an issue.
Note, I switched the version in the project.json to alpha9 in order to test this locally. Let me know if you want the version reverted.
Sorry, looks like I did conflict you when manually backing out those changes :disappointed:
Looks good (yes to alpha9 and removing lock file). I'll get the new version up on NuGet for testing, and check that the existing sample still works. Once that's up, do you mind adding a sample for dnx451?
I mean, once you've fixed the conflict (sorry).
Sure. So, for reference, I did the following to test this locally (in case it helps you).
First, build the solution including the PR changes to produce output (or quickly replicate the change to project.json if that's easier).
In any solution that currently references SpecFlow.Dnx version 1.0.0-alpha8:
Add a Nuget.config at the root level of the solution.
Add the following configuration to the file:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<add key="local" value="D:\\\\Projects\\Resources\SpecFlow.Dnx\\src\\artifacts\\bin\\SpecFlow.Dnx\\Debug\\" />
<add key="api.nuget.org" value="https://api.nuget.org/v3/index.json" />
</packageSources>
</configuration>
Note, replace the long path in there to point to wherever you store SpecFlow.Dnx but making sure to keep the escaped backslashes.
Simply reference 1.0.0-alpha9 in the project consuming SpecFlow.Dnx.
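For reference, the consuming project's project.json would then look something like this (a sketch: only the dependency and framework entries relevant to this PR are shown):

```json
{
  "dependencies": {
    "SpecFlow.Dnx": "1.0.0-alpha9"
  },
  "frameworks": {
    "dnx451": {}
  }
}
```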
The above sounds longwinded but in practice takes less than a minute to achieve and is very handy for testing packages pre-publication and without running a full private NuGet repo.
Anyway, I'll see about these merge conflicts and take yours for the most part. I can add a complete example to the samples directory if you want, but would it be easier to simply add the dnx451 targets to the existing projects? The TFMs would actually make one another redundant, but it does prove that SpecFlow.Dnx works in both cases.
I've merged those changes, just sorting samples now.
Good idea on the Nuget.config. I was manually doing an Add > Existing Project... in VS.
Hmm, regarding the samples... my original thought process was that it would be better to have explicit examples per framework, but I can be convinced otherwise if you think it's easier.
I'll create a new PR for samples. The difficulty of course (without the aforementioned Nuget.config trick) is that I won't be able to reference alpha9 until it's published. I'll see if I can get relative local paths working so that I could add a samples Nuget.config that looks at the build output? It would assume that the core project was built before running the samples, or that the local build is available on NuGet.
Gah, I'm suffering some problems with local testing against my consuming solution :/
I'm not sure net40 is working for me.
I'm about to go incommunicado for a while, sorry, work calls and then weekend plans. I'll look to pick up in a couple of days (unless I can squeeze in some time in the weekend).
Try out the samples PR. Should work and show it off for you?
Okay, I've got a little bit of time before the weekend, I'll see if I can get this out. I'm on a different machine now and not experiencing the same problem that I had before when testing (whatever it was), so I'll assume it's all good and push out a new NuGet. (Feel like there should be a :poop: joke there...)
I'll continue the discussion in the open issue/PR.
As part of our recent Microsoft Defender for Cloud Blog Series, we are diving into the different controls within Secure Score. In this post we will be discussing the control of Enable audit and logging.
Log collection is a relevant input when analyzing a security incident, business concern or even a suspicious security event. It can be helpful to create baselines and to better understand behaviors, tendencies, and more.
The security control "Enable auditing and logging" contains recommendations that will remind you to enable logging for all Azure services supported by Microsoft Defender for Cloud, as well as resources in other cloud providers such as AWS and GCP (currently in preview). Upon remediation of all these recommendations, you will gain a 1% increase in your Secure Score.
The number of recommendations will vary according to the available resources in your subscription. This blog post will focus on some recommendations for SQL Server, IoT Hub, Service Bus, Event Hub, Logic App, VM Scale Set, Key Vault, AWS and GCP.
Enable auditing is suggested to track database activities. To remediate, Microsoft Defender for Cloud has a Quick Fix button that will change the Microsoft.Sql/servers/auditingSettings property state to Enabled. The logic app will request the retention days and the storage account where the audit will be saved. The storage account can be created during that process, the template is in this article. Nonetheless, there is also a manual remediation described in the Remediation Steps. The recommendation can be Enforced, so that Azure policy's DeployIfNotExist automatically remediates non-compliant resources upon creation. More information about Enforce/Deny can be found here. To learn more about auditing capabilities in SQL, read this article.
This enables you to recreate activity trails for investigation purposes when a security incident occurs or your IoT Hub is compromised. The recommendation can be Enforced, and it also comes with a Quick Fix where a Logic App modifies the Microsoft.Devices/IotHubs/providers/diagnosticSettings metrics AllMetrics and the logs Connections, DeviceTelemetry, C2DCommands, DeviceIdentityOperations, FileUploadOperations, Routes, D2CTwinOperations, C2DTwinOperations, TwinQueries, JobsOperations, DirectMethods, DistributedTracing, Configurations, and DeviceStreams to "enabled": true. To learn more about monitoring Azure IoT Hub, visit this article.
This recommendation can be Enforced, and it has a Quick Fix that will remediate the selected resources by modifying Microsoft.ServiceBus/namespaces/providers/diagnosticSettings "AllMetrics" and "OperationalLogs" to "enabled": true. It is necessary to provide the retention days to deploy the Logic App. To manually remediate it, follow this article. To learn more about the Service Bus security baseline, read this article.
The Quick Fix has a Logic App that will modify for selected resources the Microsoft.EventHub/namespaces/providers/diagnosticSettings metrics AllMetrics and the logs ArchiveLogs, OperationalLogs, AutoScaleLogs to "enabled": true, with the retention days input. This recommendation can be Enforced. For manual remediation steps, visit this article. To learn more about the Event Hub security baseline, read this article.
The recommendation can be Enforced and it comes with a Quick Fix where a Logic App modifies the Microsoft.Logic/workflows/providers/diagnosticSettings metrics “AllMetrics” and logs “WorkflowRuntime” to "enabled": true. The retention days field has to be input at the beginning of the remediation. For manual remediation steps, visit this article. To learn more about Logic Apps monitoring in Microsoft Defender for Cloud, read this article.
This specific recommendation does not come with the Enforce feature nor a Quick Fix. To configure the Azure Virtual Machine Scale Set diagnostics extension follow this document. The command az vmss diagnostics set will enable diagnostics on a VMSS. To learn more about the Azure security baseline for Virtual Machine Scale Sets, read this article.
The recommendation can be Enforced and it also comes with a Quick Fix where the Logic App goes to the resource Microsoft.KeyVault/vaults/providers/diagnosticSettings and sets the metrics AllMetrics and logs AuditEvent to "enabled": true including the retention days input. For manual remediation steps, read this article. To learn more about monitoring and alerting in Azure Key Vault, visit this article.
By directing CloudTrail logs to CloudWatch Logs, real-time monitoring of API calls can be achieved. A metric filter and alarm should be established for changes to security groups. Recommendations for AWS resources do not have the Enforce feature, Quick Fix button, or Trigger Logic App. To remediate them, follow the AWS Security Hub documentation.
Ensure that Cloud Audit Logging is configured to track read and write activities across all supported services and for all users. Configured this way, all administrative activities, and attempts to access user data, will be tracked. Recommendations for GCP resources do not have the Enforce feature, Quick Fix button, or Trigger Logic App. To remediate them, follow the Manual Remediation Steps. For more information, visit the GCP documentation.
P.S. Consider joining our Tech Community where you can be one of the first to hear the latest Microsoft Defender for Cloud news, announcements and get your questions answered by experts.
Yuri Diogenes, Principal PM Manager (@Yuri Diogenes)
These are a collection of Tips and Tricks to help you get the most from the Fresh Bundle Master plugin.
You can also Get Help in the Community Help Forum
Issues with Bundles Not Displaying Correctly
Depending on the particular theme you are using for your site, you might find that some of the bundle layouts like “Focus” or “Photo Tiles” do not display properly. This is usually caused when a theme has special functions that are forcing the output of various styles on the contents in your posts and pages.
The first thing you can try is to wrap the entire shortcode from our Fresh Bundle Master plugin between [raw] [/raw] tags. Some themes have a function built in that allows you to use these [raw] wraps and makes sure the bundle HTML displays properly. Here is an example of how it will look in your page editor:
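A hypothetical shortcode wrapped in the [raw] tags (the shortcode name and attribute below are illustrative; use the actual shortcode the plugin generates for your bundle):

```
[raw][fresh_bundle id="example-bundle"][/raw]
```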
If that does not work, there is a WordPress plugin that will add this [raw] wrapper functionality to your site. Once you install and activate the plugin, then you can wrap the plugin shortcodes on your posts and pages in the [raw] tag.
Price Last Updated Time Zone
We have set the “Price Last Updated” to display the time based on your blog settings. If you are seeing the time in the wrong TimeZone, make sure you have gone to SETTINGS => GENERAL => TIMEZONE and selected the correct option.
Finding Custom Button CSS using Firebug
If your theme comes with specialized button shortcodes, you will most likely want to match this style in the Fresh Bundle Master plugin and find the CSS classes in your theme that style the buttons.
The first thing we recommend is to install Firefox browser and install the Firebug extension. The Firebug extension is the best tool you can have to figure out what you want to change in the CSS of a site. Yes, there is a Firebug “Lite” extension for Chrome, but it does not work as well. Even if you only use the Firefox browser and Firebug when you want to work on a site design, it will be worth it.
Once you have it installed, it puts a little “bug” icon in the top bar of Firefox, and clicking that will bring up the firebug panel. When you look at the Firebug panel, the second icon at the top says “Click an element in the page to inspect”.
You will need to have a button displaying already on your site, or you can also go to your theme developer’s demo page, as they will usually show all the buttons in the theme demo. You want to inspect the button, and look for the CSS class. Here is an example:
In the above screenshot, we used the element inspector to select the Orange Button. The classes on that button are defined in this part of the html:
class="button large orange"
We already have the “button” class on our Buy Now button, so the “large” class and “orange” class need to be added to the “Button Custom CSS Class” in your Display Settings. This will then add those additional classes to our Buy Now button, and it will pick up the styling built into our WordPress theme.
The whole point of a service level agreement is what you and your supported base agree is reasonable and acceptable. What my users expect may be wholly different than what yours would.
- If I said that high-urgency requests must have initial contact made within 4 hours and resolution or escalation within 3 days, would that be acceptable for your business?
- Conversely, would a low-urgency/low-impact request with a turnaround of 6 months be acceptable?
- Or does your user base expect high-urgency to mean 15 minutes or 1 hour? Or low-urgency/low-impact to mean 3 days?
- What's the client:support ratio? 100 users to 1 tech? 200:1? 1,500:1?
So you can see that there will be wild variations of options and opinions. What is considered "urgent" for my business might not be for yours and if I have greater travel requirements to provide support, my response times will be different from yours. Likewise, if I have more remote support capabilities than you do, or if my system deployment strategy differs from yours, our SLAs for the same thing will also be different, as will our user expectations.
Before tackling the issue of "for THIS ticket, the SLA says response within XX time," have you established your impact/urgency matrix? This will clarify how to prioritize various tickets that come in, and will also make setting those SLAs a lot more efficient and easier.
It's kinda' like managing AD permissions or GPOs with groups instead of individual users. Set your permissions or GPOs and apply to groups, then add users to those groups. Fewer changes need to be made at any given time, and you have greater consistency.
Same with defining SLAs. Establish the impact/urgency prioritization, apply your SLAs to those priorities, then assign various incident types to those priorities from the other direction. Now you're not nitpicking every single possible incident type and randomly throwing time frames at each one.
A mangled OS update for one user who happens to be going on vacation that afternoon will be a lower priority (low impact, medium urgency) than the same OS update getting mangled for three people in accounting on the day before payroll is submitted (high impact, high urgency). You're not assigning the SLA to the OS update. You're assigning it to the impact/urgency. For that one person, you might push that off until after you've dropped everything and called an all-hands-on-deck to take care of the accounting department (then take what you learned and apply it to the user who's off on vacation after you've cleared up payroll).
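The lookup above can be sketched in a few lines of Python (the matrix and hour values are illustrative placeholders, not recommendations; as stressed above, the real numbers must come from what your user base agrees to):

```python
# SLAs attach to priorities, and priorities come from impact/urgency --
# never from the incident type directly. Priority 1 is most critical.
PRIORITY = {
    ("high", "high"): 1,   ("high", "medium"): 2,   ("high", "low"): 3,
    ("medium", "high"): 2, ("medium", "medium"): 3, ("medium", "low"): 4,
    ("low", "high"): 3,    ("low", "medium"): 4,    ("low", "low"): 4,
}

# Illustrative initial-response targets, in business hours.
RESPONSE_HOURS = {1: 1, 2: 4, 3: 24, 4: 72}

def response_sla(impact, urgency):
    """Map an incident's impact/urgency to its response-time target."""
    return RESPONSE_HOURS[PRIORITY[(impact, urgency)]]

# The mangled-OS-update example: one vacationing user vs. payroll week.
assert response_sla("low", "medium") > response_sla("high", "high")
```

New incident types then get slotted into the matrix once, instead of each being assigned an ad-hoc time frame.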
I'm not giving you specific times because it's not reasonable or useful. I typed this giant wall of text to back up and tackle the underlying foundation of your question so you're better equipped to be able to not only answer your own question but also explain and justify your decisions to your higher-ups (which is important, because without their sign-off, none of this matters when a user violates their end of the SLA by whining, complaining, and threatening, and neither you nor your supervisors have the backing to stand firm).
This page shows items released in 2017-10.
When a TMX is imported, the i attribute is used to pair the <bpt> elements with <ept> elements. Per the standard, this attribute must be a number, but that is not the case in some TMX files imported by clients. We have therefore improved this feature to accept those deviations from the standard.
Administration, Project Managers, Administrators - WM-814
When creating a picklist custom field and immediately clicking "add missing value" before saving, the system returned an error.
Administration, Administrators - WM-801
The "Statistics" tab is now hidden when the subscription plan does not include the "Business Analytics" component
Business-Analytics, Reports, Administrators - WM-606
When working on linguistic resources (translation memories, terminologies...), it is now possible to delete individual segments within the interface of the new editor.
Editor, Translators, Project Managers - WM-767
You can now add custom fields to service in a price list. Fields are configured by the administrator at "Settings" > "Custom fields" > "Price list services".
Price Management, Project Managers - WM-481
Orders list: Links added to each line to directly navigate to project details, jobs list, cost and more.
Client Orders, Project Managers, Clients - WM-743
A new user option has been added to the new instant translation app allowing users to submit their translations directly by pressing the ENTER key.
MT Hive, Translators, Project Managers, Clients, Administrators, Freelancers, Leads - WM-784
New plugin version 1.01.25 available for download. When submitting a post edit request for a machine translation, the deadline selector does not format dates in the PC's local date format. This has been fixed.
Office Plugins, Clients - WM-811
Several new languages have been added to the system:
- Native Hawaiian
Core, Translators, Project Managers, Clients, Administrators, Freelancers, Beebox, Leads - WM-794
Using the new editor, it is not possible to import tags while importing an XLIFF file; they are ignored and not updated.
Editor, Translators - WM-807
In some cases, it is impossible to create a new segment in the new editor, although it is possible in the old editor ("Creation not authorized for this resource").
Editor, Translators - WM-788
An issue with line breaks in some reconstructed YAML files has been identified and corrected.
Folded-style text nodes (starting with the > character) were not properly managed. These nodes are now converted to literal-style nodes (starting with the | character) in the reconstructed document.
File Formats, Translators, Project Managers - WM-772
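For context, here are the two YAML scalar styles involved (an illustrative fragment): a folded scalar (>) joins its lines with spaces when parsed, while a literal scalar (|) preserves the line breaks, which is why converting folded nodes to literal style keeps the reconstructed document's layout stable.

```yaml
folded: >
  Folded style joins these
  two lines with a space.
literal: |
  Literal style keeps
  the line break.
```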
The new Instant Translate app has been updated to work with all the latest browsers (including Internet Explorer 11).
MTPlus, Translators, Project Managers, Administrators, Freelancers - WM-771
The "upload new background" button isn't reachable for Firefox and Edge users, even when scrolling down to the bottom of the list.
Editor, Translators, Project Managers, Clients, Administrators, Freelancers, Leads - WM-781
When exporting a Word document from the new editor that has an Arabic column, the content of this column is set to left-to-right rather than right-to-left, and aligned to the right.
Editor, Translators, Project Managers - WM-783
The IDML preview in the new editor now downloads a PDF file rather than an IDML file, as in the old editor.
Editor, Translators, Project Managers, Administrators, Freelancers, Leads - WM-716
It is currently impossible to see reference materials in the new editor; only administrators are allowed to see them.
Editor, Translators - WM-787
A new fully responsive interface has been added to the MT Hive to translate plain text
MTPlus, Translators, Project Managers, Administrators, Freelancers - WM-473
The user preferences now permit disabling the background image in the new user interface, which minimizes the first loading time.
Editor, MT Hive, Translators, Project Managers, Clients, Administrators - WM-785
CSV files typically use commas, semicolons or tabs as column delimiters. For a special use case we have now added an option for the delimiter "^". CSV file filters are configured under Settings > CSV.
File Formats, Project Managers, Administrators - WM-754
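As a sketch, Python's standard csv module shows how such a delimiter behaves (the sample data is made up):

```python
import csv
import io

# Parse content that uses "^" as the column delimiter, as in the
# special-case option described above.
sample = io.StringIO("id^source^target\n1^Hello^Bonjour\n")
rows = list(csv.reader(sample, delimiter="^"))
# rows[0] is the header row, rows[1] the first record.
```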
You will find a new "Target language" filter in the orders list. This permits finding orders for a specific target language.
Client Orders, Project Managers - WM-729
The API methods are documented here:
API, Administrators - WM-497
Taking a Standard job that has the status "proposal" has been improved and now creates a new invoice.
Editor, Translators - WM-738
We added the possibility to update the MinAmountGroups using the API.
API, Administrators - WM-736
"Insert" button in special characters feature is now closing the window.
Editor, Translators - WM-759
It is now possible to insert several characters at once in the special characters feature.
Editor, Translators - WM-758
Saving an empty mandatory field in the edit mode of a segment now triggers a modal pop-up asking whether you are sure that you want to save.
Editor, Translators - WM-757
Action validation on languages has been improved: it is now possible to apply an action to a language that cannot be edited, depending on the action.
The following actions cannot be processed if the chosen locale cannot be edited:
- Custom Field
- Find And Replace
- Length Constraints
- Remove Language
Editor, Translators - WM-740
Clients with a custom domain for Wordbee Translator will see that the Account ID field is now removed from the login dialog (of the new translation editor). This field is not required with custom domains.
MT Hive, Translators, Project Managers, Clients - WM-774
It is now possible to deselect all languages in the edit segment pop-up: clicking the checked "All" button deselects all languages.
Editor, Translators - WM-773
The segment text length constraints feature has been reworked and now lets you choose between different modes.
Editor, Translators, Project Managers - WM-768
Clicking the character count in the new editor toggles the way it's displayed. This new display option shows how many characters are left before hitting the minimum and maximum character limits.
Editor, Translators, Project Managers, Clients, Administrators, Freelancers - WM-770
Job custom fields are now displayed in the job notifications for unassigned or revoked jobs.
Administration, Administrators - WM-735
We fixed an issue in our API that prevented the Beebox from checking the status of the Pass-Through jobs on Wordbee when more than 2000 jobs were submitted at the same time.
Beebox - WM-733
Fixed an issue that prevented the editor from opening a page if a segment contained an invalid character.
Editor, Translators - WM-756
Fixed: reconstructing a document in a target language that had no previous version triggered a crash, preventing the file from being downloaded or previewed.
Editor, Translators, Project Managers - WM-739
Fixed: taking a Codyt job with the status "Proposal" did not change it to "In Progress" and forced the supplier to confirm the action twice.
Editor, Translators - WM-737
Fixed: when a client was removed from the platform, the client was not removed from the manager's client list.
Administration, Project Managers, Administrators - WM-765
Fixed an issue that prevented the editor from displaying segments when a document was split into multiple jobs and contained a high number of segments.
Editor, Translators, Project Managers - WM-780
Unclear descriptions, values, and texts have been changed to clearer wording.
Editor, Translators - WM-769
Fixed: when a document had a high number of paragraphs and was split into multiple jobs, the editor sometimes failed to open.
Editor, Translators - WM-763
If the client is not authorized to view the work progress of the project attached to the order, the button is not displayed. This means the client cannot track the status of any job.
Client Orders, Clients - WM-761
|
OPCFW_CODE
|
Checkout from SVN to remote location with Eclipse
I need to set up Eclipse so that I can connect to an SVN repository and check out projects or files to a remote location. The remote location is Linux-based; the clients run Windows.
I read a few threads and it seems that it works on the console with svn+ssh, but I am struggling badly to make this scenario run in Eclipse.
Any hints? I appreciate your help.
Philipp
Why do you need to check out to a remote location? The point of SVN is that you have a local working copy. Is this for deployment? In that case, there might be better options.
@Roger Lipscombe +1, maybe the OP can run "svn co URL" via ssh rather than from Eclipse. However, I still don't understand why the OP needs such a thing.
@rkosegi in which case, they want this: http://stackoverflow.com/questions/159152/check-out-from-a-remote-svn-repository-to-a-remote-location
You need to check out to a remote location? That doesn't exactly seem like a scenario a VCS is built for. Don't you need to export to that location? In fact, different SVN implementations could handle metadata completely differently. Even different versions of SVN could cause problems this way. Anyway - any chance to access the remote location in a way transparent to your client's OS, like a network share (SMB), FTP or similar?
Because local development is not possible. The ERP system is not able to support that, so we need to develop on the server, and now I want SVN with that. That's why.
Or, more to the point, why do you need to do this from inside Eclipse? Surely this is a Continuous Deployment issue? In which case, you want something like Hudson or TeamCity...
Your question sounds to me that you try to solve something, that we don't know yet. So I speculate here a little bit, and I will change my answer if the question gives indication that I was wrong.
(Part of your) development has to live on the server, so there are resources you have to use during development, which are necessary for development.
Possibly these resources are (only) necessary for testing (unit tests?), or for functional tests.
You have experience with Eclipse and want to use that.
So here are sketches of possible solutions that may work for you.
Using Eclipse on the server
You install an appropriate eclipse distro on the linux machine you have to develop on.
You install locally e.g. Cygwin with the XWin packages that allow you to start an X-windows server locally.
You open up an xterm locally (just to get the display variable correct).
You start from that xterm the eclipse installed on the Linux machine: ssh <user-id>@<ip-of-linux-server> <path to eclipse> -display $DISPLAY
Pros and cons
+ You work on the machine and have the display locally.
+ You are able to checkout directly on the machine, no need of a local copy.
- You are not able to work without the connection to the Linux machine.
Using Eclipse locally
There are two variants, and both are valuable:
Have the sources on the server (only)
Have the sources locally
Sources on server, Eclipse locally
The easiest way is to mount the file system of the server, so you have access to them locally through a different drive letter. Ask your system administrator how that could be accomplished.
Pros and cons
+ Everything works as normal.
+ You don't have to install Subversion on the server.
- Latency for the remote file system may be annoying.
- You are only able to work with network connection to the server.
Sources locally, Eclipse locally
That is the normal way to do it. Install Eclipse with Subversion plugin as usual, checkout from the repository, work locally (even disconnected), commit your changes.
You are then able to test by doing a checkout on the server, build the system there, and do your unit and integration tests there.
Pros and cons
+ Easier to install and maintain.
- No tests during development without a build process in between.
- Tests can only be done with committed code, not with changes that are not committed.
My recommendation
I like the solution best with Eclipse on the server, so you use everything that is available on the server, and Eclipse under Linux is totally the same as under Windows. You don't have any steps in between for doing tests, everything is done locally (on the server).
See as well the following questions (and answers):
Is it possible to work on remote files in Eclipse?
PS: What I forgot: I think svn+ssh is just a different Subversion protocol for doing the checkout, update and commit. It is in no way different from using the protocols file://, svn://, http:// or even https://.
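As a sketch of that console approach, the checkout can be run directly on the server over ssh. All hostnames, users and paths below are placeholders, not taken from the question:

```shell
# Run the checkout on the Linux server itself, so the working copy
# lives where the server-side ERP build needs it (names are examples):
ssh devuser@erp-server \
  'svn checkout svn+ssh://devuser@svn-host/repos/project/trunk ~/work/project'

# Later, update and commit from the same remote shell:
ssh devuser@erp-server 'cd ~/work/project && svn update'
ssh devuser@erp-server 'cd ~/work/project && svn commit -m "server-side change"'
```

These commands cannot be driven from Eclipse's Subversion plugin directly; they are the ssh-based fallback the comments above refer to.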
|
STACK_EXCHANGE
|
Simple text classifier: classification taking forever?
I work for a small tech startup, and I want to classify our users into demographics based on the domain of their email address. When users sign up to our site, they can enter a job category or pick "other". The goal is to classify as many of the "other" type as possible using a bag-of-words approach.
To do this, I have written some code in Python. For each user, I look at the domain name of their email address and scrape the text from their homepage (using Beautiful Soup). I also look for an "About Us" page, which I also scrape. What I'm left with is a map of domains to text. Some domains are classified (i.e., users whose email address comes from this domain have self-classified their job types), and some aren't (those users who have self-classified as "other"). The total data set for classified users is about 2000 (neglecting domains like gmail and hotmail and [I can't believe I'm about to type this] aol). I'm using a train/test split of 75/25.
Using scikit-learn, I'm trying to implement a simple classifier, but there seems to be an issue with either convergence or performance. The data set doesn't seem particularly big, but the two classifiers I've tried (Perceptron and RidgeClassifier) seem to be having some issues finding a fit. I haven't really tried to change the parameters for the classifiers, and it's not clear to me which knobs I should be turning.
I lack intuition about this problem, and it's difficult for me to tell whether the issues I'm having are due to not having enough data, or something else. I'd like to know:
Am I barking up the wrong tree? Has anyone tried something like this and made it work?
Do other ML packages for Python do a better job of text classification? (I'm looking at you, nltk.)
Is my data set large enough? Are there any "rules-of-thumb" for how much data you'd need for something like this (~5-10 categories)?
What's a reasonable amount of time for the learning to take? Are there any hints that will tell me the difference between "this is really hard" and "this isn't going to work"?
I've tried to follow the examples here and here. These examples are pretty speedy, so it makes me worry that I don't have enough data to make things work nicely. Is the "20 newsgroups" classification problem typical, or does it show up because it's easily solvable?
Any guidance here would be appreciated!
As an update: the huge performance hit seems to come from the "vectorizer": that is, the thing that maps a vector of words to the reals. For some reason, Tfidf was taking a long time to do its thing---I switched to a different vectorizer, and now things run quite quickly.
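The cheaper kind of vectorizer hinted at here can be sketched in a few lines of pure Python: a hashing vectorizer maps tokens to fixed buckets without ever building a vocabulary, so it needs no pass over the corpus. The bucket count and hash function below are illustrative, not what scikit-learn uses internally:

```python
def hash_vectorize(words, n_buckets=16):
    """Map a bag of words to a fixed-length count vector by hashing
    each token into one of n_buckets slots (no vocabulary pass needed)."""
    vec = [0] * n_buckets
    for w in words:
        # Stable string hash (Python's built-in hash() is randomized per run).
        h = sum(ord(c) * 31 ** i for i, c in enumerate(w))
        vec[h % n_buckets] += 1
    return vec

v = hash_vectorize(["cloud", "api", "api"])
print(len(v), sum(v))  # 16 3
```

The trade-off versus TF-IDF is that hash collisions merge unrelated words into one feature, but for a first pass the speedup is usually worth it.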
In regards to the actual learning, I've found that the Naive Bayes routines work pretty well out of the box (f-score around 70-75%, which is good enough for now). The model that I found works the best, however, is one based on a linear SVM (scikits.svm.LinearSVC), which gets me somewhere in the 80-85% range with a bit of tinkering.
Can you be more specific about what issues your classifiers appear to be having? Do they perform above chance?
@jerad: This is my first real machine learning exercise, and I just feel like I can't distinguish between "this problem is taking a long time to solve" and "the fact that this is taking so long is a sign of a bigger problem". So, as a first pass, I was asking about performance---what should I expect from a text classification algorithm? The samples I have are cooked up things from the scikits website that run very quickly, and it's not clear to me whether these examples are representative, or if they are just simple textbook exercises that work well.
I'd agree with @Peter's suggestion below - Naive Bayes looks like an obvious first approach to try in your setting - it's designed for bag-of-words data and generally it works pretty well on text data. If it doesn't give good enough results then you'll want to try something smarter but that will require digging deeper into the problem and, for better results than NB, you'll need to spend some time tuning your classifier which usually brings in quite a bit of time overhead.
I think the issue here may be the use of term vectors. Your instances (bags of words) are translated to a vector of probably 150 to 10000 dimensions. Each word that occurs in your corpus (the websites) is one dimension, and the value for each instance is the frequency (or the tf/idf score) of that word in the given website.
In a space with that many dimensions, most machine learning algorithms will suffer. You've chosen fairly lightweight algorithms, but they may still take a while to converge, depending on how they're implemented.
The most common classifier in this scenario is naive Bayes, which doesn't see the instance space as a high dimensional space, but just as a collection of frequencies (from which it estimates, using Bayes theorem, the probability of each class). Training this classifier should take as long as it takes to read the data once, and classification should take as long as it takes to read the instance. Since it shouldn't have any parameters, it will at least give you a good baseline. Nltk almost certainly has this algorithm (it's the mother of spam detection).
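To illustrate why training is a single pass over the data, here is a toy multinomial Naive Bayes with add-one smoothing. The categories and words are made up, standing in for the scraped per-domain bags of words:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """One pass over (label, words) pairs; the counts ARE the model."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)  # label -> word -> frequency
    vocab = set()
    for label, words in docs:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def classify_nb(model, words):
    """Pick the label maximizing log prior + log likelihoods (add-one smoothing)."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

# Toy stand-ins for scraped homepage text, keyed by job category:
docs = [
    ("tech", ["software", "cloud", "api"]),
    ("tech", ["api", "developers", "platform"]),
    ("finance", ["loans", "banking", "rates"]),
    ("finance", ["banking", "investment", "rates"]),
]
model = train_nb(docs)
print(classify_nb(model, ["api", "cloud"]))      # tech
print(classify_nb(model, ["banking", "rates"]))  # finance
```

Note there is no iterative fitting loop anywhere: training cost is exactly the cost of counting, which is why it makes a good baseline here.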
Another option, if you want to use more traditional ML algorithms, is to reduce the dimensionality of the dataset to something manageable (anything below 50), using PCA. This will take more time, and make it more difficult to update your classifier, but it can lead to good performance.
If OP has 10K dimensional data and 2K examples PCA probably won't give great estimates of the principal eigenvectors though. Random projection will be much quicker and is likely to perform as well as PCA here.
I hadn't considered that (10k term vectors is actually a bit much). Does this hold even for sparse data? You can expect most features to be zero for most instances, and only selected features to be correlated at all...
The quality of eigenvector estimates mainly depends on how fast the eigenvalues decay (and the sample size of course), so sparsity can help. FWIW I agree that Naive Bayes is the obvious first approach to try here - it's designed for bag-of-words data.
Dimensionality reduction could also be done using feature selection, e.g. via chi-squared analysis or mutual information. Both are described here: http://nlp.stanford.edu/IR-book/html/htmledition/feature-selection-1.html
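The random-projection alternative mentioned above can be sketched in a few lines. This is a toy dense version; in practice sparse projection matrices are used for speed, and the dimensions below are illustrative:

```python
import random

def random_projection(vectors, k, seed=0):
    """Project d-dimensional rows down to k dimensions with a random
    Gaussian matrix (distances are approximately preserved for modest k)."""
    rng = random.Random(seed)
    d = len(vectors[0])
    matrix = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
    return [
        [sum(m * x for m, x in zip(row, v)) for row in matrix]
        for v in vectors
    ]

# Five sparse-ish 200-dimensional rows shrink to 10 dimensions:
vecs = [[1.0 if (i + j) % 97 == 0 else 0.0 for i in range(200)] for j in range(5)]
low = random_projection(vecs, 10)
print(len(low), len(low[0]))  # 5 10
```

Unlike PCA, nothing is estimated from the data, which is why it stays cheap even when the sample size is small relative to the dimensionality.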
|
STACK_EXCHANGE
|
Double check that the username and password as well as the MiaB hostname were entered correctly on the droplet with the Discourse install. Also be sure that you followed the rest of the instructions in that section (step 4 of the relaying section).
[quote=“hekubas, post:1, topic:4099”]
System Status Check Errors
The SSH server on this machine permits password-based login.
The IP address of this machine is listed in the Spamhaus Block List
Nameserver glue records should be configured at your domain name registrar as having the IP address of this box. They currently report addresses of [Not Set]/[Not Set].
Your box’s reverse DNS is currently [Not Set][/quote]
Is the proper droplet name set (wait, is MiaB on DO?) or the PTR set?
This can be safely ignored with your setup. Formatting is all out of whack, sorry.
## TODO: The domain name this Discourse instance will respond to
## Required. Discourse will not work with a bare IP number.
## Uncomment if you want the container to be started with the same
## hostname (-h option) as specified above (default "$hostname-$config")
## TODO: List of comma delimited emails that will be made admin and developer
## on initial signup example 'firstname.lastname@example.org,email@example.com'
## TODO: The SMTP mail server used to validate new accounts and send notifications
# SMTP ADDRESS, username, and password are required
# WARNING the char '#' in SMTP password can cause problems!
#DISCOURSE_SMTP_ENABLE_START_TLS: true # (optional, default true)#
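For reference, this is roughly how those app.yml lines look once filled in for a Mail-in-a-Box relay. Every value below is a placeholder, not taken from this thread:

```yaml
env:
  DISCOURSE_HOSTNAME: discourse.example.com
  DISCOURSE_DEVELOPER_EMAILS: 'admin@example.com'
  DISCOURSE_SMTP_ADDRESS: box.example.com        # the MiaB hostname
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: discourse@example.com
  DISCOURSE_SMTP_PASSWORD: "a-password-without-hash-chars"
  DISCOURSE_SMTP_ENABLE_START_TLS: true
```

As the comment in the template warns, a '#' character inside the SMTP password can break parsing, which is one thing worth ruling out when you see authentication errors.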
I’ll double check the relay guide steps and add more to this post or reply again.
Additionally, I found the Discourse server IP is on a blacklist. I tried a new droplet with a non-blacklisted IP and got the same error. Not sure if that matters or not.
Got it working on fresh installs with non-blacklisted IPs.
Also didn’t miss the TLSA DNS records this time. Thanks alento.
Went back through the relay settings and made sure I did them correctly. I sent you a message alento.
Checked back over things and found I had named the Discourse email address wrong. Fixed this, but I'm still getting an authentication error. It should at least have given me a wrong-credentials error instead of an authentication error, I would think.
Been looking at the postfix config and some guides and thinking there may be something I need to do there. Probably going to fresh install on two droplets and try again to see if maybe I screwed something up along the way.
Also, both the original IPs of the Discourse and MiaB servers had shown up on a blacklist at mxtoolbox.com, so I switched both server IPs in DNS to ones that are not blacklisted. Thinking this may still be causing problems - another reason I'm going to try again.
|
OPCFW_CODE
|
I have successfully flashed openwrt on my two APs. However, something went wrong with the second attempt and now it is no longer accessible.
Reset does not seem to work. How can I recover the AP?
Do I need to flash over serial and how?
Is recovery still working? Set the IP of your computer to 192.168.0.10, keep the reset button pressed while powering on the device, and check if you can reach http://192.168.0.1 with a browser.
If it works, you can flash the recovery image there.
No, it is not working; I cannot ping 192.168.0.1.
Looks like the exact same implementation as in my tool, the encrypted payload digest is still okay, but the decrypted image is weird in terms of padding, no idea what D-Link did to generate this...
I remember from DAP-X1860 that ping didn't work but the recovery web interface was working anyway, can you check?
What was the last thing you did with the device when it was working?
sysupgrade -f openwrt-ramips-mt7621-dlink_covr-x1860-a1-squashfs-recovery.bin was the last I did
I wouldn't want to go back to the original firmware, but unfortunately the mesh doesn't want to work: it's greyed out when creating it, and I can't rewrite the SSID.
Blinking red?
only red, no blinking
openwrt-ramips-mt7621-dlink_covr-x1860-a1-squashfs-recovery.bin is not intended to be used with sysupgrade, openwrt-ramips-mt7621-dlink_covr-x1860-a1-squashfs-sysupgrade.bin would be the correct one in this case.
Not sure what is happening when you sysupgrade with the wrong image, but my expectation is that recovery mode is still possible afterwards because the boot loader is not modified.
Also no red blinking when you try to enter recovery mode?
What are the settings to connect over TTL serial?
You need to hold reset, power the device on, and wait for the blink.
OK, now it is blinking red and the network card shows the link is up,
but I cannot open any website.
You need to manually set a LAN IP like 192.168.0.2 with mask 255.255.255.0.
OK, found the problem: the recovery page is on 192.168.0.1. Now I can upload a file - which one is the right one for this?
OK, that was successful, but OpenWrt is only reachable over SSH. The red LED is flashing fast.
Did you reset the IP configuration to DHCP?
No, I set a static IP 192.168.1.10; DHCP is not working. Ping to 192.168.1.1 is possible, and SSH as well.
BusyBox v1.36.1 (2023-05-24 21:53:33 UTC) built-in shell (ash)
  _______                     ________        __
 |       |.-----.-----.-----.|  |  |  |.----.|  |_
 |   -   ||  _  |  -__|     ||  |  |  ||   _||   _|
 |_______||   __|_____|__|__||________||__|  |____|
          |__| W I R E L E S S   F R E E D O M
================= FAILSAFE MODE active ================
- firstboot reset settings to factory defaults
- mount_root mount root-partition with config files
- passwd change root's password
- /etc/config directory with config files
for more help see:
This will erase all settings and remove any installed packages. Are you sure? [N/y]
/dev/ubi0_1 is not mounted
/dev/ubi0_1 will be erased on next mount
|
OPCFW_CODE
|
So we bumped into an issue on our store: we found that the residual code left by a few apps after they were uninstalled is now slowing down our page load speed. On top of that, some residual code also conflicted with our theme. I reached out to someone at Shopify and realized it might be a real pain for some:
Apparently, Shopify doesn't regulate this very well at the App Platform level: they only tell third-party app developers to inform merchants about it, but don't have a specific requirement in place in terms of "how". We tried a few apps, including the first-party apps developed by Shopify; none of them do a good job of explicitly telling merchants: A) whether they will leave residual code after being uninstalled; B) if they will, what impact it has; C) how to remove it.
This is the info we got from the Product Reviews app developed by Shopify. We double-checked with the Shopify support team: if we uninstall this app, we will then need to remove all this code manually. But they don't tell you that explicitly.
And the effects of residual code left by apps can include:
- Unnecessary billing charges
- Conflict with theme
- Slow down page load speed
And the current solution to fix this? We were told by Shopify that we would have to track all the apps we tried and uninstalled and reach out to every single one of them asking for a check/fix. We assume a typical growing store at an early stage has probably tried 20-30 apps? So removing residual code alone can be a full-time job - not something we can afford at the moment.
We simply want to share this in this community to see if anyone else has had issues with this. If it's truly affecting many, perhaps we should protest and have Shopify optimize their platform policies? Thanks!
You are not alone.
The solution should not be getting back to the App Developers one by one and asking them to scan my website.
We started off the store by piloting many different apps - lots of installation and uninstallation. Over time, our team found that our page speed was getting slower. The root cause is the same - the code left over from deleted apps. I believe most small and medium businesses will do the same thing; someone please help with this issue.
You can leverage this tool to better understand the speed of your page - Google Page Speed Analyzer: https://developers.google.com/speed/pagespeed/insights/
Most app developers should create a clean duplicate of the theme before they install the app so the store can easily revert to that copy if they decide to stop using the app. You can always reach out to the app developer to get them to remove the code. Some apps like Bold tend to make major changes to the theme when installing their app, I would always make a duplicate of the theme before installing any app in case it conflicts with another app or doesn't work as expected. If the theme developer or Shopify isn't able to help you remove the code, you can reach out to a Shopify Expert: https://experts.shopify.com/w3trends?ref=w3trends
There should be no extra billing charges. If an app is uninstalled, there is no more billing (unless you already started a new month with the app's charges). Most apps, if built right, will just stop working when you uninstall them (but code left behind can cause issues, as you have noticed). You can check the billing cycle of your apps on your invoice.
As @MeganD_94 said, a Developer can help you get your theme cleaned up.
The downside to app developers making a fresh copy of each theme for easy replacement is that if you have other apps installed or custom code, this would wipe it - which I think is why they do not do it.
This is Song Fei from Giraffly, a company which develops apps for Shopify. The problem you've met is a pain point of Shopify app development. We are a professional team developing Shopify apps, but we still took a lot of time to learn the Shopify development documentation. I'm proud to say we don't have this problem now.
I guess the best thing you can do is still to contact those app developers. They are much more familiar with the code than you are. And just FYI, if you still want to boost your page speed after you fix those code issues, you can try our app Page Speed Boost.
It uses latency cheating, which preloads a page right before a user clicks on it. There is a gap between a user hovering the mouse over a link and actually clicking on it. That time is so quick that people won't even notice, while our app detects it and takes advantage of that window to do a just-in-time preload of the page. So when you click on that link, the pre-loaded page opens instantly. This principle works on mobile as well: the moment between a user touching a link and releasing it leaves Page Speed Boost enough time to preload that page.
And you can see more instruction here: Page Speed Boost. It's an amazing app!
|
OPCFW_CODE
|
Am i being scammed on eBay?
I am selling some Adidas sneakers on eBay because they’re a size too big. This will be my first time selling anything on eBay.
The original price is $94 CAD and I'm selling them for $70 excluding shipping.
Today someone offered to pay $450 for these shoes. Do you think that's legit? I don't really know how these things work, so I'm not sure, and I don't think anyone would pay that much money for simple sneakers. Someone else offered to pay $75, which I am willing to accept.
What should I do?
Do you think it's legit? There are many ways that you can get scammed on ebay. Search "ebay scam" on this site and see what might be happening here.
Smells like a scam to me.
Did you consider why they didn't offer to pay the $75 asking price? Answer: It is a scam.
Unfortunately, none of us can know for certain that this offer is a scam. However, it clearly seems out of bounds and it's a good thing to be cautious whenever you feel nervous about a transaction.
Considering that you have another offer that is acceptable, it's probably wise to take that offer. But, here are some guidelines to help avoid being scammed on eBay:
Read eBay's official policies and follow them to the letter. Although some scammers will try to exploit the policies, if you follow the policies you will have a better chance of getting official help if or when something goes wrong. If you don't follow policies, and you get scammed, you are on your own.
Don't accept payment in any form that isn't verifiable and protected. Don't take personal checks. Don't follow instructions to get paid in any way that isn't officially sanctioned by eBay. if you accept money via PayPal or other similar service, make sure you understand the rules for those payment mechanisms. Many scammers will exploit payment mechanisms (i.e. they'll send a payment via the "gift to friends and family" feature on PayPal, which removes all of your protections).
Don't accept offers where the details don't line up. Scammers will often steal someone else's payment account and use that stolen account to send you money. If the eBay account doesn't match the details on the payment account, don't accept the business.
Similarly, if you're using PayPal, don't ship to any name or address except the exact match to the official name and address registered with the PayPal account.
Don't accept offers that involve payment, delivery, or other functions offline or outside of eBay's visibility, even if the other party insists doing so will somehow be more profitable or better for you. Use the built-in payment and shipping management tools. See the first bullet: make sure you're staying "in the system" so you have a safety net if you get scammed.
In general, the standard rule of thumb applies: If it's too good to be true, it probably isn't true. Why would this person offer so much of a markup when they can grab those shoes for cheaper from other sellers, or even from you? Offering way over your price doesn't make any sense at all. Even if you're a collector looking for a rare shoe, why wouldn't you just buy them at the listed price of $70? Offering several times that makes no sense at all. Unless you've significantly under-priced a serious collectible shoe, and you've got a half a dozen competing offers bidding the price way up, it doesn't make sense to accept a single offer that's way out in left field at several times your price. Scammers often do things that make no sense as a way to filter out gullible people. If they presume that a reasonable person would reject an outlandish offer, they will purposefully make an outlandish offer in the hopes of catching the few people that are gullible enough to fall for it.
Finally, when in doubt, seek help from experts who have experience in the exact system you're trying to use. Since you've mentioned eBay, you can check out their seller help forums, where experienced sellers will give you guidance on the nuances of scams and how to avoid them. This is especially important as a new seller, since scammers often prey on inexperienced people. If a scammer sees that you're a brand new seller with low rep, they will assume you're an easy target.
It depends on how he is planning to pay - in some countries, specific shoe models are hard to get, and collectors pay quite high prices for them.
What you need to watch for is the mode of payment he offers - scammers use reversible payment methods, and ask you for returning some overpayment in another (non-reversible) way.
So if you get paid in a non-reversible way, you are good. If he wants you to handle 'extra money' or an 'overpayment' in any way, it's a scam - run.
While it’s true that some people pay high prices for limited edition shoes, they do not do so by offering a high price to someone who is offering them for sale for less.
|
STACK_EXCHANGE
|
| Great news. We have just updated AViCAD 2021 with many improvements.|
For those already using AViCAD 2021, please fill out the form to receive download instructions.
A brand new CAD version installs along with these improvements:
– Fully compatible with AutoCAD® 2022
– Updated Mech-Q and fixes
– New Mech-Q centimeter and foot option for large civil projects.
– Both 2D and 3D updates to the program
– Tool Palette and Quick Property improvements
– Stability improvements
Fixed speed of selection of 2d Polylines with many vertexes (+10k)
Fixed FLATTEN command with 3d Solids objects
Added message for unsupported Revit file version
Fixed APPLOAD command to load lisp that contains COMMAND
Fixed purge of Dynamic blocks to avoid delete of referenced dynamic blocks
Implemented print of TIFF file using JPG printer
Fixed selection speed in drawings with overly complex hatches; hatches with more than 5000 loops are now ignored (not regenerated).
Implemented sysvar fields in FIELD command
Reduced size of objects pasted in Word-Excel, Added EMFFACTOR and WMFFACTOR to change size factor of EMF and WMF created during COPYCLIP
Improved PDF2CAD command: now it will display only one message if some of the input files were skipped during Batch conversion.
Improved PDF2CAD: handled skipped files. Improved the "Single File" mode tab: it now displays an error if an invalid PDF file is selected. Code cleanup.
Disabled thumbnail generation during autosave to increase speed of autosave
Fixed OFFSET with lines standing on different Z levels
Fixed the initial height of the DWG Preview in Open File dialog
Restored vertex grips of non associative hatch created selecting objects
Property palette: fixed change of the number of items and the angle between items of a polar array if the angle is less than 360°
Fixed print of images with a page size larger than A3
Fixed lag after copy of block
Removed “last access” from open/save dialog
Fixed a bug in DATAEXTRACTION dialog when user is unable to move from step “Data Source” to step “Select Objects”.
Fixed default option not being recognized by PCAD in a few prompts of the _3DARRAY command
Property palette: fixed change of the total angle of a polar array if the angle is less than 360°
Fixed speed of COPY of some specific blocks in a big drawing
Fixed the command bar position in a multi-monitor context. Now it appears on the correct monitor when AViCAD starts up.
Fixed temporary OSNAP in ZOOM window command with OSNAPCOORD = 2
Fixed zoom and pan with ZOOMDETAIL = -1
Fixed update of Quick property palette changing color with ribbon or toolbar combobox
Fixed insertion of blocks from tool palette: layer names and other styles coming from inserted blocks are preserved
Print: removed message and popup about 3d hidden/shaded view and plot styles
Fixed EXTRUDE direction option with NWUCS
Fixed ELEVZERO command with REGION objects: the command now filters out REGION objects
Fixed display of main window during application startup
Fixed ESNAP and ETRACK markers in different VPORTS: restored SNAPALLVIEWS to 0
Changed selection windows to display only in active window
Quick Property: limited the maximum height to 750 pixels; added a scrollbar when the height limit is exceeded
Improved speed of the command hint list with a great number of results
Added OneDrive business path to cloud commands
Fixed load of some commands opening drawing with double click on PC with QUADRO video cards
PDF2CAD : Fixed crash when attempting to show preview of a protected PDF file.
Fixed display of DGN underlay
Fixed wrong DIST result on layout Viewport
Implemented dimension scale depending on layout viewport scale
Implemented PURGE of empty text objects
Fixed APPLOAD of .NET modules: the filename is no longer requested twice
Fixed exception finding hatch boundary
Added IMPROVETEXTQLTY variable to HWCONFIG dialog
Fixed deletion of unused files in the temp and backup folders; changed the number of backup files to 2; removal of old backup files is now controlled by the BACKUPOPEN variable
Added check to avoid crash using layout tab during block editor
Fixed explorer preview pane after AViCAD start
Fixed edit of text of annotative dimension
Fixed activation/deactivation of POLAR and ORTHO passing from one drawing to another
Fixed problem using PUBLISH with our PDF printer, added license activation and set options for each page
Fixed open time and anomalous memory usage when opening drawings with clipped blocks
Improved speed of regeneration of TTF text
Added IMPROVETEXTQLTY variable to enable or disable change of the TEXTQLTY variable when opening drawings
Fixed DIMRADIUS command when circle stands on z plane different from current elevation
|
OPCFW_CODE
|
//
// DiffReloader.swift
// Artisan
//
// Created by Nayanda Haberty on 16/04/21.
//
import Foundation
#if canImport(UIKit)
import UIKit
public protocol Distinctable {
    var distinctIdentifier: AnyHashable { get }
    func distinct(with other: Distinctable) -> Bool
    func indistinct(with other: Distinctable) -> Bool
}

public extension Distinctable {
    func indistinct(with other: Distinctable) -> Bool {
        distinctIdentifier == other.distinctIdentifier
    }

    func distinct(with other: Distinctable) -> Bool {
        !indistinct(with: other)
    }
}

public protocol DiffReloaderWorker {
    func diffReloader(_ diffReloader: DiffReloader, shouldRemove distinctables: [Int: Distinctable])
    func diffReloader(_ diffReloader: DiffReloader, shouldInsert distinctable: Distinctable, at index: Int)
    func diffReloader(_ diffReloader: DiffReloader, shouldReload distinctables: [Int: (old: Distinctable, new: Distinctable)])
    func diffReloader(_ diffReloader: DiffReloader, shouldMove distinctable: Distinctable, from index: Int, to destIndex: Int)
    func diffReloader(_ diffReloader: DiffReloader, failWith error: ArtisanError)
}

/// Computes the difference between two arrays of `Distinctable` and reports
/// removals, insertions, moves, and reloads to its `DiffReloaderWorker`.
public class DiffReloader {
    let worker: DiffReloaderWorker
    var sequenceLoader: [() -> Void] = []

    public init(worker: DiffReloaderWorker) {
        self.worker = worker
    }

    public func reloadDifference(
        oldIdentities: [Distinctable],
        newIdentities: [Distinctable]) {
        let oldIdentitiesAfterRemoved = removeFrom(oldIdentities: oldIdentities, notIn: newIdentities)
        reloadChanges(in: oldIdentitiesAfterRemoved, to: newIdentities)
        runQueue()
    }

    /// Removes every old identity that no longer appears in the new ones,
    /// queueing a single `shouldRemove` callback for all of them.
    func removeFrom(oldIdentities: [Distinctable], notIn newIdentities: [Distinctable]) -> [Distinctable] {
        var mutableIdentities = oldIdentities
        var removedIndex: [Int: Distinctable] = [:]
        var mutableIndex: Int = 0
        for (identityIndex, oldDistinctable) in oldIdentities.enumerated() {
            guard newIdentities.contains(where: { $0.indistinct(with: oldDistinctable) }) else {
                mutableIdentities.remove(at: mutableIndex)
                removedIndex[identityIndex] = oldDistinctable
                continue
            }
            mutableIndex += 1
        }
        if !removedIndex.isEmpty {
            queueLoad {
                $0.worker.diffReloader($0, shouldRemove: removedIndex)
            }
        }
        return mutableIdentities
    }

    /// Walks the new identities, queueing moves and inserts so the old array
    /// converges to the new one, and collecting reloads along the way.
    func reloadChanges(in oldIdentities: [Distinctable], to newIdentities: [Distinctable]) {
        var mutableIdentities = oldIdentities
        var reloadedIndex: [Int: (old: Distinctable, new: Distinctable)] = [:]
        for (identityIndex, identity) in newIdentities.enumerated() {
            if let oldDistinctable = mutableIdentities[safe: identityIndex],
               oldDistinctable.indistinct(with: identity) {
                reloadedIndex[identityIndex] = (old: oldDistinctable, new: identity)
            } else if let oldIndex = mutableIdentities.firstIndex(where: { $0.indistinct(with: identity) }) {
                let removedDistinctable = mutableIdentities.remove(at: oldIndex)
                guard mutableIdentities.count >= identityIndex else {
                    fail(reason: "Fail move cell from \(oldIndex) to \(identityIndex)")
                    return
                }
                mutableIdentities.insert(removedDistinctable, at: identityIndex)
                queueLoad {
                    $0.worker.diffReloader($0, shouldMove: removedDistinctable, from: oldIndex, to: identityIndex)
                }
                reloadedIndex[identityIndex] = (old: removedDistinctable, new: identity)
            } else {
                guard mutableIdentities.count >= identityIndex else {
                    fail(reason: "Fail add cell to \(identityIndex)")
                    return
                }
                mutableIdentities.insert(identity, at: identityIndex)
                queueLoad {
                    $0.worker.diffReloader($0, shouldInsert: identity, at: identityIndex)
                }
            }
        }
        if !reloadedIndex.isEmpty {
            queueLoad {
                $0.worker.diffReloader($0, shouldReload: reloadedIndex)
            }
        }
    }

    /// Clears any queued work and reports the failure to the worker.
    func fail(reason: String) {
        sequenceLoader.removeAll()
        worker.diffReloader(self, failWith: .whenDiffReloading(failureReason: reason))
    }

    /// Runs all queued callbacks in order.
    func runQueue() {
        sequenceLoader.forEach { $0() }
    }

    func queueLoad(_ loader: @escaping (DiffReloader) -> Void) {
        sequenceLoader.append { [weak self] in
            guard let self = self else { return }
            loader(self)
        }
    }
}
#endif
|
STACK_EDU
|
Given a connected road network on an island without one-way streets, where should I parachute in and what route should I take to deliver mail to all houses on the island (being picked up again by helicopter)?
Or in terms of Graphs: Given an un-directed, connected, weighted Graph, what is the optimal start vertex and shortest path that visits every edge at least once (return to the start vertex not required)?
This question is an altered version of the Route Inspection Problem or Chinese postman problem.
The solution I have working is O(n^5) and I'm trying to improve this.
A Slow-ish Solution
- First we need to change the graph so that an Eulerian Path exists. We can do this by finding the Minimum-Weight Perfect Matching between all odd vertices (removing two odd vertices, since we want a Path and not a Cycle!)
- We can use Edmonds' Blossom Algorithm to make the Matching computation efficient. My implementation is currently
- Adding the computed edges into the Graph ensures an Eulerian Path exists (using the two skipped Vertices as start and end point)
- We then compute the Eulerian Path and have an optimal solution for the chosen start and end vertex
The big question is which two odd vertices to use as the start and end point for the Eulerian Path. So far I am computing all possible Matchings using the Blossom algorithm and taking the one with the minimum weight.
Is there a way that we can do this better? Potentially already integrating it into the Blossom algorithm? Are there any assumptions we can make about the vertices that are optimal? I.e. in a lot of cases two odd vertices with largest distance in the original graph are the best choice, however this is not always true (see Example 3).
How can we adjust e.g. this implementation to consider only Matchings of maximum size - 1?
Here is the optimal solution for a maze (the coloring indicates the chosen path). The computation for a given start-end vertex pair is relatively fast (around 100ms). However since this needs to happen for every odd-vertex pair, the overall computation time is over an hour.
There are 278 odd vertices in the maze. Hence we have to compute a Matching
(278 * (278 - 1)) / 2 = 38503 times.
In this example it is obvious that the optimal start and end points are very far apart.
The maximal distance between two odd Vertices (the one at the top and the one at the bottom) is 50. However for the optimal path the distance is only 11.
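For concreteness, the brute-force strategy described above can be sketched as below, on a toy graph of my own (not the maze data; the naive recursive matching here stands in for the Blossom algorithm and only works for a handful of odd vertices):

```python
import heapq
import itertools
from collections import defaultdict

# Hypothetical road network: undirected weighted edges.
EDGES = [("A", "B", 1), ("B", "C", 2), ("C", "D", 1),
         ("D", "A", 2), ("A", "C", 3), ("B", "D", 1)]

adj = defaultdict(list)
for u, v, w in EDGES:
    adj[u].append((v, w))
    adj[v].append((u, w))

def dijkstra(src):
    """Shortest-path distances from src (standard Dijkstra with a heap)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

dist = {u: dijkstra(u) for u in adj}
odd = sorted(u for u in adj if len(adj[u]) % 2 == 1)
base = sum(w for _, _, w in EDGES)  # every edge must be traversed at least once

def min_matching_cost(vertices):
    """Exact min-weight perfect matching by recursion (fine for tiny inputs;
    Edmonds' Blossom algorithm replaces this at scale)."""
    if not vertices:
        return 0
    first, rest = vertices[0], vertices[1:]
    return min(dist[first][rest[i]] +
               min_matching_cost(rest[:i] + rest[i + 1:])
               for i in range(len(rest)))

# Brute force over every candidate (start, end) pair of odd vertices:
# those two stay odd, the remaining odd vertices are matched at minimum cost.
best = min(
    (base + min_matching_cost(tuple(v for v in odd if v not in (s, t))), s, t)
    for s, t in itertools.combinations(odd, 2)
)
print(best)  # → (11, 'A', 'B')
```

The outer loop over all (start, end) pairs is exactly the O(odd²) bottleneck asked about here; each iteration only changes which two odd vertices are excluded from the matching.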
PS: Had all the terms nicely linked in the question, but can only include two links due to not having reputation. Oh well...
|
OPCFW_CODE
|
Through the course of installing or upgrading Task Factory, you can run into some common component errors in your packages. This article provides an overview for these common errors and ways to troubleshoot them.
Note: SentryOne recommends testing your Task Factory upgrade before upgrading all of your environments.
You may run into the SSIS.ReplacementTask error when you try to run a package after installing a new version of Task Factory.
Note: The SSIS.ReplacementTask error may display as follows:
TITLE: Microsoft Visual Studio ------------------------------ The task with the name "TF Properties Task" and the creation name "SSIS.ReplacementTask" is not registered for use on this computer. Contact Information: File Properties Task;Pragmatic Works, Inc; Task Factory (c) 2009 - 2014 Pragmatic Works, Inc; http://www.pragmaticworks.com;firstname.lastname@example.org
The SSIS.ReplacementTask error has the following symptoms:
- A box icon appears on your Task Factory component or task.
- Your package can't be executed or edited.
- You can't see the data in your Task Factory components.
Warning: If your package(s) exhibits any of these symptoms, DO NOT try to run the package, or open any of its components. It's recommended to delete this package if you have a saved copy or backup of the package.
Important: SentryOne recommends having backups of your packages, and storing these backups in a safe and accessible location.
The SSIS.ReplacementTask error occurs when SSIS is looking for your Task Factory components and can't find them (your component(s) may look online for assemblies, which can take a while). When SSIS can't find your Task Factory components, it replaces the components with generic components, resulting in the SSIS.ReplacementTask error.
You may run into the SSIS.ReplacementTask error if you've done the following:
- You've opened a package that was developed on a higher version of Task Factory in an older version. For example, you developed the package in version 2020.0, and opened it in version 2019.
- You're using the same version of Task Factory that was used to develop the package, but you are targeting a different SQL Server version. For example, your package was created targeting SQL Server 2017, and when you've opened the package on your current machine, the package is targeting SQL Server 2019.
To fix the SSIS.ReplacementTask error in your package(s), you can take one of the following actions:
- Run the package on the version of Task Factory where it was originally created and delete the corrupted package.
- Download the version of Task Factory where the package was developed and ensure that this version is installed across all of your machines.
- Use the version of Task Factory where the package is working across all of your environments.
- Ensure the package is targeting the version of SQL Server that it was targeting during its development.
Warning: You must use the same version of Task Factory across all your environments (dev, production, qa, etc.) to ensure optimal performance of Task Factory components.
Note: If you need any assistance solving this error, reach out to support.sentryone.com.
Task Factory Items not showing up
Recently downloaded Task Factory components are not displaying in the SSIS Toolbox.
Beginning with SQL Server Data Tools 2016, backwards compatibility was added for building and editing packages for previous versions of SQL Server Integration Services. Opening your package in a newer version of Visual Studio may cause the package to target a version of SSIS that does not have Task Factory support. For example, if you custom installed Task Factory to support only SSIS 2014, and then you tried to work on a project that is targeting SSIS 2012, the Task Factory items won't display in your SSIS toolbox.
To ensure that the Task Factory components display in the SSIS toolbox, you can uninstall your current version of Task Factory, then reinstall Task Factory (making sure to select the versions of SSIS that you plan to work with).
You can also change the TargetServerVersion of the SSIS project to match your installed version of Task Factory. To do this, complete the following steps:
1. Open your Visual Studio project. Right click your Project in the Solution Explorer window, then select Properties to open the Property Page window.
2. Select Configuration Properties, then select the desired TargetServerVersion from the drop-down list. Select OK to save your changes.
3. Save and close your package. Re-open the package to display the Task Factory components.
|
OPCFW_CODE
|
Is RAID 1 the highest read speed type?
We have some servers that provide a web server with about 4,000 files with 1GB of each.
We always have a bandwidth problem: the server can't deliver more than 800 Mbps, and this comes from the read speed of the disks (according to the disk monitor graph).
Now we want to maximize read performance; the number of servers and disks is not important.
In the past we used RAID 6 for parity and speed, but we always had speed problems.
Some sites say RAID 1 will support just 2 disks maximum.
So…
If we use RAID 1 with 8 disks of 4 TB, do we get 8x more read speed?
Is RAID 1 the best choice for speed? Is there any other type better than RAID 1 for speed alone?
We used RAID 6 in the past and read access was limited to about 400 concurrently accessed files; will this increase with RAID 1?
EDIT
For anyone who reaches this question: as one of the comments (@EugenRieck) said, the main problem was not solved even with 24 hard disks in RAID 10. The problem was exactly the concurrent read access to files, which hits the HDD limit. In the end we solved it by replacing the HDDs with SSDs.
If it is only 4TB, then why not use el-cheapo SSDs?
@EugenRieck We have many 4TB disks in our warehouse.
If money's not important, why not get a server with 4TB of RAM and push everything from cache?
@mtak we have about 20 pcs of idle server with about 160 pcs of 4TB idle disks.
No matter how much of a wrong solution you throw at a problem, it will still not solve the root cause. This root cause is access time on the spinning disks. You might be able to get a bit more out by using different RAID levels or other tricks, but it won't solve the root cause: Rotating disks are not good for concurrent streams.
Thanks @EugenRieck, I've added the third question on above, we most use our idle component, so I got your hint and thanks for that.
As has been said above, faster disks will give a better improvement than any RAID setup.
But to answer the questions:
If we use RAID 1 with 8 disks of 4 TB, do we get 8x more read speed?
Yes, but RAID 1 (mirroring) takes only 2 disks. With 8 disks you will have an 8x increase in read speed but 50% capacity when using RAID 10.
RAID 10 is a stripe of mirrors:
Is RAID 1 the best choice for speed? Is there any other type better than RAID 1 for speed alone?
Speed is achieved by striping: the more stripes, the better (limited by the disk controller). The RAID method that gives a good amount of parallelism with many disks is RAID 10.
The number of disks that can be added here is not unlimited,
and with expansion the time will come to have a look at
Nested RAID levels
or at expensive disk vaults.
We used RAID 6 in the past and read access was limited to about 400 concurrently accessed files; will this increase with RAID 1?
With N-disk RAID 6, the read speed is (N-2) times faster than the speed of
a single drive.
With RAID 10 the read speed will be N times faster.
RAID 10 is still the best on speed.
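To make the (N-2)× versus N× claim above concrete, here is a quick back-of-the-envelope calculation (assuming a hypothetical ~100 MB/s sequential read per spinning disk; as the question's edit notes, concurrent random access is far slower in practice):

```python
def raid_read_speedup(n_disks, level):
    """Idealized sequential-read multiplier versus a single disk."""
    if level == "raid10":      # striped mirrors: all spindles serve reads
        return n_disks
    if level == "raid6":       # two disks' worth of parity excluded
        return n_disks - 2
    if level == "raid1":       # one mirror set: reads split across copies
        return n_disks
    raise ValueError(f"unhandled level: {level}")

PER_DISK_MBPS = 100  # hypothetical figure for one spinning disk

for level in ("raid6", "raid10"):
    print(level, raid_read_speedup(8, level) * PER_DISK_MBPS, "MB/s")
# → raid6 600 MB/s
# → raid10 800 MB/s
```

For the 8-disk case this gives 600 MB/s (RAID 6) versus 800 MB/s (RAID 10), matching the claim above; note the RAID 10 figure costs half the raw capacity.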
Thanks, but what about concurrent file access on RAID 1 vs RAID 10? Which one supports more concurrent read access?
I'm trying to achieve the highest speed and the most concurrent file access; is RAID 1 better than RAID 10 in this situation?
RAID 1 and RAID 10 are the same for performance, as RAID 10 just multiplies the number of RAID 1 pairs. RAID 10 stands for RAID 1+0, meaning RAID 0 striping applied across multiple RAID 1 mirrors.
It depends upon the use case. Overall, you'll generally get better performance from striping. Where you can see a performance improvement with RAID 1 is when you have multiple processes that are reading different files from the same volume. In those cases, you will see improved performance, because the read tasks can and will be split between the two drives.
For example, I use Acronis backup on a couple of machines to a RAID 1 volume, and if I start 2 Validate Backup jobs on 2 different machines -- where Validate Backup only reads, I see 300-350MB/s because the read requests are split between the drives. Similarly, you'll get throughput performance for file sharing to multiple clients.
Those are server types of loads, not workstation types of loads. So for example, if you start an app on your machine, and that one app needs to read one file as fast as it can, or if it's reading multiple files, but not concurrently, the mirrored volume will give you zero perf improvement over a single drive, because it's just servicing the one read request.
Care to justify the downvote? I know the numbers are accurate, because I measured before posting.
|
STACK_EXCHANGE
|
This repo serves as an example of ansible's pull mode.
I found pull mode somewhat under-documented, so this repo is intended to provide a practical example for people wishing to get started with ansible-pull. In particular, it collects a bit of utility code that should enable some basic workflows.
Running ansible in pull mode makes a different trade-off than the usual centralized ansible workflow. The main benefits are the implicit scaling to a large number of nodes, a simple repository-oriented workflow, and avoiding the need for awx/tower. Drawbacks are mainly that the pull workflow is somewhat obscure, results in eventually consistent infrastructure, and has some gotchas detailed below.
You'll want to invoke ansible like this if you use this ansible-pull setup:
# pull mode (suitable for automation)
$ ansible-pull -U https://git.example.com/ansible.git -i "$(hostname --short),"
# push mode (development)
$ ansible-playbook -i inventory ./playbook.yml --limit foo.example.com
ansible-pull changes the ansible workflow a little. Usually ansible is run on a central server and targets a set of remote hosts. In pull mode, each remote host pulls the whole ansible repository from source control and runs a copy of ansible with only itself as the sole "remote" host. This results in a few oddities:
- groups are unavailable
- each play is only applied to the current host
- pull codebases are usually slower to iterate on when developing
- every host needs a copy of ansible, plus all modules used by the playbooks and their dependencies, installed
- hosts must be able to pull the ansible repo
- credential management requires a separate solution
Some approaches for mitigating these oddities follow:
inventory_hostname is always localhost by default; it can be explicitly specified when invoking ansible-pull (as in the invocation above).
The unavailability of groups is worked around by tagging each host with their groups in host_vars instead of including this grouping in an inventory. Playbooks can then use this mapping to synthesize the equivalent push mode groups.
These synthetic groups are turned into proper groups by the inventory script that I've provided. This enables push-style development, which allows iterating on changes more quickly than relying solely on the pull flow.
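As an illustration, the group-tagging idea can be sketched like this (the variable name pull_groups and the file paths are my assumptions for the example, not necessarily the names this repo actually uses):

```yaml
# host_vars/foo.example.com.yml — tag the host with its groups (hypothetical)
pull_groups:
  - webservers
  - production
```

```yaml
# local.yml — turn the tags into real groups, then run the main playbook
- hosts: all
  gather_facts: false
  tasks:
    - name: Synthesize push-style groups from host_vars
      group_by:
        key: "{{ item }}"
      loop: "{{ pull_groups | default([]) }}"

- import_playbook: playbook.yml
```

group_by creates the groups at runtime, so later plays can target hosts: webservers exactly as they would in push mode.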
In pull mode, ansible calls a playbook named local.yml. local.yml in this repo does the group synthesis that I described, and then goes on to invoke the main playbook.yml. When developing, you'd invoke playbook.yml in push mode instead, using the inventory script.
The one-host-per-play limitation doesn't really have a
workaround. If you rely on
host_vars or facts from other hosts
in a play, you'll need to provide some other data plane for
sharing this information. Some reasonable solutions are static
host_vars, custom lookup plugins, or something like etcd.
However, consider that pull mode may not be the right solution if
your workflows rely on cross-host communication.
You'll want to install at least ansible on every host participating in pull mode. Note that this also applies to the dependencies listed on each module, and the modules themselves, too, if they aren't in ansible base. In large sites this can add up to a considerable amount of total disk space.
Requiring hosts to track the repository containing your playbooks also implies a few things. The load on your repository server scales linearly in the number of hosts using pull mode and firewalling becomes more difficult. Unless your repository is anonymously and globally readable, you'll need some way of provisioning initial credentials on your target hosts to be able to access it at all. SSH certificates may also be of interest here.
This credentials issue also shows up elsewhere. In many setups, the central server will have some level of access to secrets that are then pushed to remote hosts. In pull mode, each remote host is their own central server, so each one requires access to secrets. This makes several solutions that work well in push mode, like ansible vault, difficult to deploy securely in pull mode. Larger setups will probably want to set up something like Hashicorp's Vault or similar secret management services.
Finally, I've provided a sample ansible-pull role and some example host_vars to help you get started.
|
OPCFW_CODE
|
White water bird with black 'hat'
The white bird in this picture, swimming between the ducks, was seen in Beijing, China, in January 2017. It was cold: the water seen in the picture is liquid, but not far from this spot the water was frozen. We also saw it standing on the ice.
It has an orange beak with what seems to be a small black spot at its end. It also has black as the top of its head. We did not find a similar bird when searching ducks, goose or even swans. What species is this?
Interesting species. I wonder if it is a variant of mallard.
@Sanjukta Ghosh. It is not a species. It is a race/variation/form/etc of the species Mallard (Anas platyrhynchos).
This duck has several features that could put it in a number of variants of domestic duck. Could it be a hybrid with a Magpie duck or an Ancona duck? Magpie ducks have black caps, similar to the duck in the photo, but also have black backs. Anconas have some black on the face.
@JC11 What breed is the pure white mallard? It is quite frequent.
Possibly a Magpie Duck, but not certain because of the paucity of black elsewhere on the body.
@theforestecologist and JC11 I just want to remark that all the breeds that you cite still belong to the species "mallard", just as the Dalmatian is a breed of the species "dog". I think that at this stage it is pointless to assign a specific breed, as this bird is most likely a mix of different ones (as suggested).
@Sanjukta Ghosh Domestic Mallards can come in a variety of colours, including white. I think if we were to pinpoint the origin of the duck in question, it is likely a mix of Magpie duck and domestic Mallard, but that mix may not be half and half; it could be much more complex. "have fun", the Magpie is not a breed of Mallard specifically; it could be an ancestor of an Indian Runner, which could be a Muscovy. In domestic breeds, the origin species can be many complex mixtures. I think the best answer for this question is that this is a domestic duck, with possibilities given for which type.
It is a domestic form of mallard. They are bigger, often white or mostly white. Try to google image domestic mallard and you will see the phenotypic variety that domestication has created.
Note the irregular and not symmetrical shape of the "hat" and read the description below from the biggest research center on birds of the world.
Extracted from:
http://www.birds.cornell.edu/crows/domducks.htm
(...)lots of white is often involved, including all-white breeds like the popular Pekin Duck (...) Usually these white spots are not symmetrical across both sides, and that asymmetry should tip you off to think domestic influence.
Please support your claim with citations. A claim not supported with references does not make a good answer, even if you are quite certain of it.
@Sanjukta Ghosh. A Google image search for "domestic mallard" is more than enough to prove that this is a domestic mallard. There is no wild duck with a pattern of colors remotely similar to the one in the picture.
Google images search is not an authentic reference because it extracts images from different kinds of sources (which include flickr and personal blogs) that may not have the correct information. But the webpage of Cornell Lab of Ornithology you linked is a good reference.
|
STACK_EXCHANGE
|
About the Windows Component Wizard
The Windows Component Wizard can be accessed from inside the “Add or Remove Programs” control panel (appwiz.cpl) in Windows XP. On the left hand side of the control panel (in the grey band) is a button to “Add/Remove Windows Components”, which will launch the Windows Components Wizard.
The wizard provides and manages a list of Windows components and component groups which can be selected for installation or removal. Each item can be checked or unchecked to control its installation status. Once the user has selected the configuration they want, they can click the “Next” button and the wizard will perform all of the chosen installation or removal tasks.
About the Sysoc.inf File
All of the entries which are displayed in the Wizard are contained in the file “sysoc.inf” which is stored under the %WinDir%\INF directory (The INF directory may be hidden on some systems, but you can quickly open it by typing “INF” in the Run box).
Many people who tweak XP will recognize this file, as it has hidden components which are not listed in the Wizard (like Windows Messenger) which can be made visible by editing the entries in this file (and then removed using the wizard).
The file can also be used by the System Stand-Alone Component Manager (SYSOCMGR.EXE) tool which is included with Windows 2000 or higher for the unattended addition or removal of Windows components.
The Sysoc.inf Entries
If you open up the file you will see a bunch of INF code. INF is an installation scripting language which looks much more complicated than it actually is. If you’d like to learn about INF files, MSDN has some good documentation here.
Inside the file will be a [Version] section which we don’t have to worry about. The section we want to look at is [Components]. Those are the entries which form the root listing of the Components Wizard.
Each entry will have the following format:
[Component]=[DLL Name],[DLL Entry Point],[INF File],[hide],[Number]
Component is the internal name which is used to reference the component in the INF files. I’m not 100% sure what the DLL name and entry point are for (probably setup procedures). The INF file is a separate INF which contains the component details and installation script. The hide entry is used to hide the item in the Wizard, or is left empty when the item is visible. I don’t know what the last number is for either; it is usually 7. All of the values are required except ‘hide’.
For example, in the line for Windows Messenger:
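Reconstructed from the field descriptions that follow (check your own sysoc.inf to confirm), the entry reads:

msmsgs = msgrocm.dll,OcEntry,msmsgs.inf,hide,7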
‘msmsgs’ is the internal name which is used to refer to the component within the INF files. msgrocm.dll and OcEntry are the DLL file and entry point. The next item is the INF file (msmsgs.inf) which contains the component’s information and installation code. The hide entry means it will not show up in the Component Wizard. Finally, there is the number 7.
The Component INF Entries
If you open up one of the component INF files which is referenced in sysoc.inf, you will see INF code which describes the component or group, and the code which is used to install or remove the component.
The first section that is unique is [OptionalComponents]. This section contains all of the internal component names, with the first being the top-level option and all of its child components after. I believe this defines the items in a group, but have not confirmed it yet.
The component name used in the sysoc.inf line will be the name of the section which contains the component information. For example, the Windows Messenger line listed above specifies the component name as ‘msmsgs’, That means the messenger component’s information will be found under the [msmsgs] section in the msmsgs.inf file.
Under the component’s section will be several directives. The following directives control what is displayed under the Component Wizard:
- Display name of the component.
- Description of the component.
- The INF section used to uninstall the component.
- Number (don’t know what it means).
- Index of the icon within a Windows DLL (shell32.dll?).
- Comma-separated numbers (don’t know).
- Approximate size (in bytes) of the installation.
- Name of the parent group.
The remaining section lines are standard INF directives which are used for the component’s installation process.
After you look at a couple of the component INF’s you should get a decent idea of how they work. Now we move on to:
Adding Your Own Entries
In order to add your own components you will first need to add a new entry under the [Components] section of sysoc.inf.
WARNING: If you edit the sysoc.inf file incorrectly, it will cause the Windows Components Wizard to crash or close unexpectedly. Make sure you backup the sysoc.inf file (or any others) before you modify them.
You can add your entry anywhere under the [Components] section. Make sure your component name is unique. For the DLL name and entry point you can use “ocgen.dll” and “OcEntry”. Some of the others can cause crashes, but I have used OcEntry many times and have had no problems. It may even be a dummy function call, many other items in sysoc.inf use it. Enter the name of your component’s INF file. You can include your component section within an existing INF file, or create a new one. You can leave the next value empty, or put in “hide” if you want it to be hidden. For the final value put 7.
Here’s an example of a custom component sysoc entry to install the Visual Basic 1.0 runtime library:
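Following the format above, such an entry could look like this (the INF file name vbrun10.inf is an assumption for this example):

vbrun10 = ocgen.dll,OcEntry,vbrun10.inf,,7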
After that you will need to create your component’s section in the specified INF file and set its options. If you are creating a new INF file for your component, you will also need the [Version] section with the Signature = "$WINDOWS NT$" directive as a minimum. You should be able to test it in the wizard at this point to see how it looks. The final steps will be to write the installation code and make sure it all works.
For our example component, here’s the INF:
[Version]
Signature = "$WINDOWS NT$"

[vbrun10]
OptionDesc = %CAPTION%
Tip = %INFO%
Uninstall = vbrun10_uninstall
IconIndex = 34
Modes = 0,1,2,3
SizeApproximation = 151552
CopyFiles = vbrun10_copyfiles

[Strings]
CAPTION = "Visual Basic 1.0 Runtime"
INFO = "Allows you to run Visual Basic 1.0 applications."
|
OPCFW_CODE
|
Proposed by:Bhaskar NA
Contact (bhaskar, firstname.lastname@example.org, 9880644833, skype):
Best way and times to contact during RHoK 2.0 Dec 4/5 2010: e.g. email, but on skype 10am-3pm EST (GMT+5)
Very few organizations provide pickup/drop service to their employees.
Employees without office commute facilities have difficulty in using public transport, thereby increasing the use of personal and private vehicles.
Congestion on roads is increased by usage of non-shared transport increasing stress & decreasing productivity.
Western concepts such as car pooling have proven ineffective in the Indian environment.
Pollution is a great challenge in Indian metros, and petrol is a valuable resource which we want to save.
This indicates clear demand for an efficient commute solution targeting office goers.
3 minute demo explains the proof of concept:
Problems with current public transport for office goers
For a given start and end point, office goers don’t get comfortable public transport.
Office goers need to switch multiple buses:
This takes more time.
Commuters cover more distance than required.
Commuters need to align with multiple routes.
For a few start/end points, there will be no public transport at all:
Office goers need to go with the available public transport.
They look for an auto or private vehicle for the remaining uncovered distance.
Office goers don’t know whether the bus is coming or not:
Office goers can’t check whether the expected bus will arrive or has already left.
Create a “commute marketplace” where:
Regular office commuters express their intent to commute, with daily commute details such as pickup/drop points and times.
The TravelVendor provides customized commute solutions based on the expressed interest (time, source/destination, etc.) with optimal routing.
How It Works
The commuter subscribes via an online registration form, providing daily commute details.
The TravelVendor registers with details of available inventory, price expectations, and preferred timings and routes.
A TravelVendor official logs in and queries the list of people interested in a given source/destination/time.
Based on the number of commuters, the TravelVendor computes the vehicle size and sends monthly/quarterly/half-yearly subscription fee details to commuters (email/SMS).
Commuters contact the TravelVendor and sign up with payment terms/route/schedule (offline).
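The querying and vehicle-sizing steps above can be sketched in code. The actual proof of concept is PHP and MySQL; the following is an illustrative Python/sqlite3 sketch, with made-up table, column, and place names rather than the real PoC schema:

```python
# Sketch of the vendor-side matching steps: query interested commuters
# for a source/destination/time, then size the vehicle for the group.
# Schema and data are illustrative assumptions, not the real PoC.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE commuters (
    name TEXT, pickup TEXT, drop_point TEXT, depart_time TEXT)""")
conn.executemany(
    "INSERT INTO commuters VALUES (?, ?, ?, ?)",
    [("A", "Whitefield", "Koramangala", "08:30"),
     ("B", "Whitefield", "Koramangala", "08:30"),
     ("C", "Whitefield", "Koramangala", "08:30"),
     ("D", "Hebbal", "Koramangala", "09:00")])

def interested(pickup, drop_point, depart_time):
    """List commuters for a given source/destination/time (step 3)."""
    rows = conn.execute(
        "SELECT name FROM commuters "
        "WHERE pickup=? AND drop_point=? AND depart_time=?",
        (pickup, drop_point, depart_time)).fetchall()
    return [r[0] for r in rows]

def vehicle_size(n):
    """Pick the smallest standard vehicle that fits n commuters (step 4)."""
    for size in (4, 7, 12, 20, 40):
        if n <= size:
            return size
    return 40

group = interested("Whitefield", "Koramangala", "08:30")
print(group, vehicle_size(len(group)))
```

The same grouped query, run per (pickup, drop, time) combination, is what lets the vendor batch commuters onto one vehicle.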
Benefits for commuters
Stress-free, economical travel in optimal time.
Benefits for TravelVendor
A new source of revenue. Vehicle inventory is used effectively. It also builds commuter confidence in the TravelVendor, so more office goers start adopting the service.
Benefits for the ecosystem
Reduce pollution & congestion
a. We need an organization to set up and own a website where commuters can register to avail of the commute solution.
b. The organization needs to engage good travel vendors.
c. Most travel vendors don't want to take risk, so they might not show interest in participating. In that case, the organization needs to work with multiple travel vendors in the backend and act as a single window of contact for all commuters.
d. The organization might also need to set up a call center.
Since the commuter has already paid the travel vendor, he will almost always use the service for the office commute.
Because this model has a two-way commitment (between travel vendor and commuters), the system should work well.
With more travel vendors participating, commuters may get the solution at competitive rates.
Road congestion will be greatly reduced.
Office goers in Indian metros like Bangalore will benefit greatly:
They can almost entirely stop using their personal vehicles.
Parking demand at offices is reduced.
Petrol consumption is reduced.
Car pooling in Bangalore: this works only if an individual owns/drives a vehicle.
cabpool.com: pools people who mostly want to share a cab to catch a flight, in the US.
How will this work be taken to real users, or further developed? Will this be an ongoing team? Is there a NGO/group that's sponsoring it as an ongoing project? Who/how/when?
We can take two approaches:
1. Build a site and open it up to the community.
Let commuters and travel vendors come together and establish the relationship themselves.
This doesn't require any group or infrastructure.
It requires only building and hosting the site.
The travel vendor needs to be more customer-friendly: set up a call center, issue monthly passes, etc.
The travel vendor should visit companies, do some PR work, and give discounts if more commuters participate from the same company.
As an individual, it is tough to engage a well-known travel vendor, so we need to create some buzz through proper channels. We are seeking more input here.
2. Build a site and own the complete business by abstracting away all the travel vendors.
This requires a proper call center set up by a new business organization.
It needs to abstract away all the travel vendors.
It should take care of registering commuters and providing them uninterrupted commute service from one of the travel vendors.
It should take care of collecting money from commuters and paying the travel vendors, taking a profit share in between to sustain itself.
A proof of concept was developed using PHP and MySQL with the Yahoo Maps APIs. We are seeking more input from the community before making the site live.
|
OPCFW_CODE
|
Now, I may not be a massive fan of Apple, and I particularly get riled up when it comes to their attitude to patents, but as far as the desktop computer is concerned, Apple have got one thing right. Despite the massive success of the iPhone and iPad, they have kept OS X and iOS separate, with just a little convergence between the two. There is Dashboard, iMessage, trackpad gestures and the admittedly silly "Natural Scrolling", but that's about it. OS X is still a reasonably usable desktop OS, albeit a fixed, non-customizable one.
Microsoft, on the other hand, have, in my opinion, got things very wrong! They basically shovelled the Metro (I refuse to call it "Modern") interface onto phone, tablet and, most annoyingly, desktop with Windows 8. Microsoft has confused and annoyed a lot of users with this abomination. Even 8.1 does not really fix things; it merely brings back the Start button rather than what a lot of desktop users actually want, which is the Start menu. Windows 9 will supposedly fix things by running Metro apps in windows on the desktop, but we shall see. By then, though, it may be too late and Windows will be irrelevant; perhaps it already is...
Google have ChromeOS and Android. They have, at least on their own devices, kept them separate. ChromeOS is aimed at laptops and desktops, as the browser is the desktop to Google. Chromebooks are great for those who need a second machine for writing, browsing and other light work. Despite Microsoft's FUD, Chromebooks are actually useful offline too. Things have taken a slightly different turn recently, with some OEMs like Lenovo and HP putting Android onto all-in-one desktops and laptops, presumably to take advantage of the huge number of apps in the Play store. I am not quite sure how I feel about this yet; I have yet to see what they are like in the flesh. If, and that's a big if, they have skinned Android to make it more desktop-like, they might be onto something, but I don't think Android is quite right in its stock form for a desktop PC, at least as a work computer, though it would be great as a touchscreen jukebox or media centre.
For things like photo and video editing or music production, though, I would rather have a full OS such as a 'proper' Linux distro; my personal preference at the moment is Linux Mint with the Cinnamon desktop. With Linux, as with pretty much any desktop, I like a top panel with the usual indicators, a searchable menu on the left and a dock at the base of the screen on one monitor. The great thing about Linux is that it is so customizable!
Again, I am not really keen on Ubuntu's Unity or GNOME Shell, as I think they get in the way of what I want to do; I don't like massive fullscreen menus covering almost a whole screen, which is one reason I hate Metro on Windows 8. One day Ubuntu Touch might work well on phones and tablets, but I just cannot get on with it on the desktop, and I really have tried to like it. One thing I would like (or would have liked) from Canonical is Ubuntu for Android: it would be great to take my phone, dock it to an HDMI monitor, Bluetooth keyboard and mouse, and have Ubuntu instantly load up on the monitor while still being able to use Android apps in a window on the desktop. That would be great for cubicle people who want to travel light.
I love having a (reasonably) powerful, multi-monitor desktop for AV stuff, Steam games etc, then switching over to a laptop for late night blogging and browsing. My Nexus 7 is great for light browsing and casual gaming and my Galaxy S3 is my camera, on-the-go browser, info-finder and above all, phone! Firefox syncs browser history and tabs between all three devices and my files are never far away with Dropbox or Copy or Linux home server on the LAN.
I have also been using PushBullet to receive Android Notifications on the desktop from phone and tablet. It's also good for quickly sending links between devices. It works OK but I wish there was a proper, fully formed desktop app on Linux that would do it better. Something like KDE Connect, but not so tied to KDE, would be great. Overall, I like having different devices and form factors for different purposes but with some synchronicity between them.
|
OPCFW_CODE
|
More quick tales from tech support's trenches
Here are some more quick tales from the days of tech support. They're not really enough to turn into a full post, but in aggregate, it tends to work out.
I was told this one by a friend. One time, a bunch of wise guys swapped the D and K keys around on someone's keyboard. That is, they actually popped off the key caps and moved them to new locations. When he returned, he couldn't unlock his screen or log in. It seems his password contained a K.
As the story goes, he went into single-user mode to fix his workstation, fixed his password somehow, and then rebooted to the usual multi-user mode with X and his web browser. Then he tried to log into the company's web-based system... and couldn't. When this failed, he went "totally Office Space" on his keyboard and reduced it to a pile of slag. They were worried he'd beat them up if any of them fessed up.
The guys on second shift support used to prank each other. One guy left his screen unlocked, so someone sat down, popped open a terminal, did a quick "setxkbmap dvorak" and then closed the window. Obviously, this person was a normal QWERTY sort, so this would have made his day miserable.
I suggested one thing, though: they should find and print out a dvorak keymap and leave it on his desk. That way, after he gets over the "WTF!" and starts looking around, he'll be able to figure it out. He might even learn a thing or two about alternate layouts... and locking his screen.
There was a guy hired as a level 1 "phone firewall" who had a terrible grasp of the language. His whole job was to take phone calls, provide as much support as he was able, and get anything else into a ticket for someone else to handle. I'm not talking about sleepiness and "you cars is the gotem p" here.
This guy was always like that. Sometimes you couldn't even tell what the request was supposed to be. It was embarrassing because you'd have to go back to a customer and say "yeah, uh, what did you want again?", thus admitting that the first person who took their call really mangled it.
I think he lasted about a month at that job. What I find more amazing is that he somehow got hired in the first place. I guess assessing oral and written language skills never occurred to HR, despite the obvious need for both when working in customer support!
Then there was the guy who had been crowned "senior shift engineer", or SSE for a particular support team. It was an additional notch above a "mere level 3" tech. How did he maintain that illusion? Easy. He just farmed out the tough tickets through his underlings so he wouldn't have to actually know the answers.
One time, he farmed out a question which found its way to me. A friend of mine asked me how I might tell if a machine was "64 bit". I said that it varied. There's actual hardware support, there's the kernel, and then there's userspace. Then there's this command, that command, and this other command to see CPU info, which kernel you have, and what userspace stuff is there. You need all three to have an environment that most people would consider worthy of that label.
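That three-level check can be sketched in code. This is my own illustrative Python sketch, not anything from the ticket; the hardware check assumes an x86 CPU and Linux-style /proc/cpuinfo, where the "lm" (long mode) flag advertises 64-bit capability:

```python
# Three levels of "is it 64-bit?":
#   hardware  - does the CPU advertise long mode ("lm" in /proc/cpuinfo)?
#   kernel    - what machine type is the running kernel (uname -m)?
#   userspace - what word size does this process actually use?
import platform
import struct

def cpu_is_64bit(cpuinfo_text):
    """Hardware level: x86 CPUs list 'lm' in their flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "lm" in line.split()
    return False

def kernel_is_64bit():
    """Kernel level: the uname machine string, e.g. 'x86_64' vs 'i686'."""
    return platform.machine() in ("x86_64", "amd64", "aarch64", "arm64")

def userspace_is_64bit():
    """Userspace level: pointer size of the running process."""
    return struct.calcsize("P") * 8 == 64

sample = "processor : 0\nflags : fpu vme lm syscall\n"
print(cpu_is_64bit(sample), kernel_is_64bit(), userspace_is_64bit())
```

A 64-bit-capable CPU can still be running a 32-bit kernel, and a 64-bit kernel can still be running 32-bit userspace, which is exactly why all three checks are needed.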
That's when he mentioned in passing that this "SSE fella" had passed the ticket along because he couldn't answer it himself. I was stunned.
How, exactly, do you receive the title of "top tech" on a team when you don't even know the simplest things about the environment? This guy supposedly had some kind of certification as well, thus adding to the stories about people substituting them for experience and failing at their jobs.
One thing worries me: maybe this guy really was the top tech. I mean, it could just be a relative label. If everyone else knows even less than he does, somehow, then it would actually be totally accurate. It would also be totally depressing.
Finally, there was the guy who would wander up to either my desk or a friend's desk and start blathering on about something forever. This guy was so dense that when he'd get a popup ad which looked like a Windows dialog box, he'd click the (fake) [X] and then wonder why it didn't go away! He'd keep doing this over and over.
Do I need to mention that he also had a certification?
Anyway, I worked out a scheme with my friend. After an appropriate interval, one of us would look at the other, and the one who wasn't directly in the line of fire would resort to some trickery. We'd pick up the handset of our phone without actually turning towards it or bringing it toward us and just set it down on the desk. Then we'd dial his extension.
He'd hear the phone start ringing and would waddle back that way, at which point, naturally, it stopped. Now, he didn't have a display phone, so he didn't have a call log. He had no idea what was doing it. If he had, I would have just tipped out on one of my few remaining analog modem ports and bashed out a quick ATDT xyz to ring him. He'd see some nonsensical incoming name and would chalk it up to spirits or something.
I mean, this guy had no idea what popups were or what you could do about them, and he was expected to manage millions of dollars of Windows boxes. This was the status quo for many years.
For all I know, he may still be there at that job, cornering people while talking about nothing in particular, getting trapped by the latest malware, and occasionally receiving those strange hang-up calls.
|
OPCFW_CODE
|
Creating rails model from nested JSON request - AssociationTypeMismatch
My application has the following model classes:
class Parent < ActiveRecord::Base
# Setup accessible (or protected) attributes for your model
attr_accessible :child_attributes, :child
has_one :child
accepts_nested_attributes_for :child
#This is to generate a full JSON graph when serializing
def as_json(options = {})
super(options.merge(:include => { :child => { :include => :grandchild } }))
end
end
class Child < ActiveRecord::Base
# Setup accessible (or protected) attributes for your model
attr_accessible :grandchild_attributes, :grandchild
belongs_to :parent
has_one :grandchild
accepts_nested_attributes_for :grandchild
end
class Grandchild < ActiveRecord::Base
belongs_to :child
end
Then, in my controller, I have a create method, defined as follows:
def create
@parent = Parent.new(params[:parent])
#Rest omitted for brevity - just respond_with and save code
end
My request is showing up in the logs as:
Parameters: {"parent"=>{"child"=>{"grandchild"=>{"gc_attr1"=>"value1", "gc_attr2"=>"value2"}}, "p_attr1"=>"value1"}}
Which is the full serialization graph that came from my iphone app client that uses RestKit.
I have seen this on other SO questions, like here, which refers to this blog post.
My problem, however is that I don't know how to control the serialized graph from my client side using RestKit in order to build a request like this (and that way, it works.. tested with debugger)
Parameters: {"parent"=>{"child_attributes"=>{"grandchild_attributes"=>{"gc_attr1"=>"value1", "gc_attr2"=>"value2"}}, "p_attr1"=>"value1"}}
Does anyone have ideas whether there is any option I can pass to the Parent.new method, or a way to customize the RestKit JSON output, so that I can achieve the model_attributes structure within nested JSON objects?
Thanks
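For clarity, the difference between the two parameter graphs above is purely mechanical: each nested association key gains an `_attributes` suffix. Here is an illustrative sketch of that renaming (written in Python standing in for whatever client-side hook is used; the function and names are hypothetical, not RestKit API):

```python
# Rename nested association keys to the key_attributes form that
# accepts_nested_attributes_for expects. Purely illustrative.
def suffix_nested_keys(params, associations):
    """Return a copy of params with association keys renamed to key_attributes."""
    out = {}
    for key, value in params.items():
        if key in associations and isinstance(value, dict):
            out[key + "_attributes"] = suffix_nested_keys(value, associations)
        else:
            out[key] = value
    return out

sent = {"child": {"grandchild": {"gc_attr1": "value1"}}, "p_attr1": "value1"}
print(suffix_nested_keys(sent, {"child", "grandchild"}))
# {'child_attributes': {'grandchild_attributes': {'gc_attr1': 'value1'}}, 'p_attr1': 'value1'}
```

Whether the renaming happens on the client (as the RestKit mapping answer below does) or on the server is a design choice; doing it on the client keeps the Rails side using stock accepts_nested_attributes_for behavior.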
What did the blog post say? The link is dead and I don't speak Objective C!
I solved this issue by using RABL, a gem that renders JSON as views. Awesome work.
This allowed me to customize the serialization graph of my model.
On the RestKit side (using OM2.0, the new object mapping), I changed all my relationship mappings to the child_attributes form, for instance:
RKObjectMapping* parentMapping = ... //Initialize your mapping here
RKObjectMapping* childMapping = ... //Initialize your mapping here
//Configure mapping relationship with child
[parentMapping mapKeyPath:@"child_attributes" toRelationship:@"childProperty" withObjectMapping:childMapping];
//Register mappings
RKObjectManager* manager = [RKObjectManager sharedManager];
[manager.mappingProvider registerMapping:parentMapping withRootKeyPath:@"parent"];
[manager.mappingProvider registerMapping:childMapping withRootKeyPath:@"child"];
|
STACK_EXCHANGE
|
Twice a year, Passenger invites operator partners from across the UK to a mutual location to discuss the Passenger product. The fourth Passenger Innovation Day took place at ODI Leeds and it presented a great opportunity to chat through our work with open data, server resiliency and more.
On November 13th, 2018, Passenger held its fourth Innovation Day at the hub for all things open data – the ODI Leeds.
Our Innovation Days comprise a mix of presentations and updates on the Passenger product, but they offer more beyond a look at the roadmap. The Innovation Day events are founded in the spirit of mutual knowledge sharing: we discuss the latest Passenger developments but we also listen to our operator partners to learn about the steps we need to take to make Passenger even better.
With a combination of developers, designers, commercial managers, operations staff and marketing folk all in one room, it’s an excellent opportunity to discuss a variety of topics and feel out new pathways for the future.
At Innovation Day 4 we took a deep dive into our upcoming Alexa voice assistant integration, considered the latest in our disruption data tools, covered Passenger’s server resiliency and explored the management of digital channels in times of crisis.
Hosting the event at the Open Data Institute in Leeds gave us the perfect backdrop to discuss the current climate around open data in the industry in context to the Bus Services Act 2017. We shared an in-depth look at our new Bus Stop Checker tool and what it means for NaPTAN and how our plans to use NetEx might affect operators in the coming months.
Outside of these discussions, operators shared their own ideas in open Q&A sessions, as we fielded input and thought ahead about an improved product for all. We gained valuable understanding about the challenges our customers face and how we can help them to better deliver on the front lines of customer service.
Operators also took the chance to speak with one another, learning from their mutual experiences and discovering how each is tackling the core issues of the moment.
Passenger Innovation Day 4 follows on from our three previous iterations of the event: Innovation Day 1 in Manchester, Innovation Day 2 in Nottingham and Innovation Day 3 in London.
Innovation Day 5 will take place in May of next year. If you’d like to join us – or if you have any feedback about specific developments to the Passenger product that you’d like us to take under consideration – please get in touch.
|
OPCFW_CODE
|
Repository contents: the `cron` and `shepherd` directories; channel files (`channels.scm`, `20200503-channels.scm`, `bnw-channels.scm`, `genecup-channels.scm`, `genome-channels.scm`, `gitea-channels.scm`, `power-channels.scm`, `rn6app-channels.scm`, `singlecell-channels.scm`); run scripts (`run_bnw.sh`, `run_covid19-pubseq.sh`, `run_genecup.sh`, `run_genenetwork1.sh`, `run_genome_browser.sh`, `run_gitea-dump.sh`, `run_gitea.sh`, `run_ipfs.sh`, `run_power.sh`, `run_ratspub.sh`, `run_rn6app.sh`, `run_singlecell.sh`, `update_archive-pubmed.sh`); plus `README`, `running_ratspub`, and `user-shepherd.service`.
This repo contains the files used to run shepherd services.
The `shepherd` and `cron` directories go in `.config`
The shell scripts sit in the home directory.
The systemd service is to start shepherd automatically on system boot.
Working with Shepherd Services
Each service is stored in a separate file. This allows us to reload individual services without needing to restart all of the services in one go. In order to see which services are available run the command `herd status`. 'Started' services are currently running, 'Stopped' services are not running but are still loaded, and 'One-shot' are services which are not running but run a one-off script or service.
The services are set up so that each has code that runs when the service is started and continues to run until it is stopped. Sometimes that code calls other shell scripts in the home directory. If the primary code in a service needs to be changed, the service can be reloaded using a process that does not stop the other services. In our example the service is `foo` and it is located in a file `bar.scm`.
$ herd stop foo
$ herd load root .config/shepherd/init.d/bar.scm
The second command will load the code in `bar.scm` into `shepherd`, and if the service is configured to start automatically at startup it will start immediately. This process should not affect the other running services.
To use shepherd's `herd` command, assuming you have permissions granted in /etc/sudoers, run `sudo -u shepherd /home/shepherd/.guix-profile/bin/herd status`. Adding a bash alias, such as `alias herd-herd='sudo -u shepherd /home/shepherd/.guix-profile/bin/herd'`, makes it easier to interact with shepherd without needing to switch to the shepherd user.

The logs for the various shepherd services are located in /home/shepherd/logs/ but are not yet timestamped. The log for shepherd itself is in /home/shepherd/.config/shepherd/shepherd.log. There is not yet a way to change this from a config file.
*Per service Guix profiles*
Each service gets its own Guix profile. This allows us to upgrade each service individually. If a specialized channel is needed, the command is `guix pull --channels=/path/to/channels/file --profile=/path/to/profile`.
|
OPCFW_CODE
|
Default Jenkins User Password
I have a fresh install of Jenkins as a service on my Linux machine. When Jenkins installs, it creates a 'jenkins' user, but I can't seem to find the default password for it anywhere.
I'm trying to secure my system, so if the default password is '123' or something insecure that I just haven't thought of yet, that's a problem.
Thanks!
Default password location for ubuntu 14.04 version: http://stackoverflow.com/a/39206369/2086869
I log in as jenkins using:
sudo -i -u jenkins
I don't believe it has any password. You should be able to do:
sudo passwd jenkins
This will prompt for you to set a password.
Alternatively you could create the jenkins user prior to installing, and it would leverage that one.
So if I were to try and ssh into the box as the Jenkins user, since there's no password, would I be able to?
You would have to have the following line in /etc/ssh/sshd_config:
PermitEmptyPasswords yes
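One way to confirm whether a system account such as jenkins has a usable password at all is to look at its /etc/shadow entry: a `!` or `*` in the second field means the account is locked and cannot log in with a password, while an empty field means passwordless login. A small illustrative sketch (the shadow lines below are made up, not from a real system):

```python
# Classify the password field of an /etc/shadow line.
# The example entries are fabricated for illustration.
def password_state(shadow_line):
    """Return 'locked', 'empty', or 'set' for one shadow entry."""
    field = shadow_line.split(":")[1]
    if field == "":
        return "empty"   # dangerous: login with no password at all
    if field.startswith("!") or field == "*":
        return "locked"  # no password login possible
    return "set"         # a real password hash is present

print(password_state("jenkins:!:19000:0:99999:7:::"))
print(password_state("alice:$6$salt$hash:19000:0:99999:7:::"))
```

On most distro packages the service account is created locked, which matches the "I don't believe it has any password" answer above; `sudo passwd jenkins` then sets a real hash.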
For fedora,
Go to /root/.jenkins/
open config.xml
In config.xml, set disableSignup to false.
Restart Jenkins.
Go to the Jenkins web page and sign up with a new user.
In config.xml, duplicate one of the hudson.model.Hudson.Administer:username lines and replace username with the new user.
If it's a private server, set disableSignup back to true in config.xml.
Restart Jenkins.
Go to the Jenkins web page and log in as the new user.
Reset the password of the original user.
Log in as the original user.
He was asking about the system user, not the Jenkins UI user.
The default password for the user jenkins is just "jenkins". However, logging in as this user automatically closes your session (probably it is set up only to allow a particular computer to be used as a Jenkins agent, but I'm not sure). That's why `su - jenkins` and typing "jenkins" does not work.
You can try logging as a jenkins user with ssh and you will see that it works, but suddenly the session is closed:
I believe there is a solution for this, but maybe it is not needed in your case. Do you really need to use the system as the jenkins user? I doubt it.
I appreciate the response! If I'm remembering correctly (which I very well may not be, it's been a while) my concern was about potential security vulnerabilities from having a default username/password enabled. I didn't want to do anything with it, just wanted to be sure others couldn't.
Go to C:\Windows\system32\config\systemprofile\.jenkins\secrets\
and find the "initialAdminPassword" file; open it and copy its value.
This is the password for Jenkins!
The OP is talking about the jenkins system user, not the administrator defined in jenkins' users.
The question is not about jenkins administrator user for jenkins web UI console but the system user 'jenkins' created due to jenkins installation.
|
STACK_EXCHANGE
|
Public transport will be the Achilles heel of the 2012 Olympic Games
This summer, London and the United Kingdom as a whole will come alive with one of the largest and most important events in the sporting world: the Olympic Games (27 July - 12 August 2012) and the Paralympic Games (29 August - 9 September). Along with a lot of emotion and an incredible atmosphere, the Games will bring together thousands of visitors. This means that the transport system will be very congested, so it would be wise to start planning your trip right away. There will be no spectator parking at any of the venues, except for a limited number of spaces for spectators with disabilities, which must be booked in advance.
Map of locations of the games in London= http://www.london2012.com/visiting/getting-to-the-games/maps/locog-london-venues.pdf
Map of where the Games are held in the United Kingdom= http://www.london2012.com/visiting/getting-to-the-games/maps/uk-venues-map.pdf
Many spectators from all over the United Kingdom will arrive in London at the main railway stations, which will be very crowded with commuters at peak times. It is anticipated that approximately 80% of spectators will travel by train (including the London Underground and the Docklands Light Railway, DLR), which will cause many problems on an already very busy network.
London Underground and DLR
Some stations in central London will be much busier than usual, or busy at different times than usual. Many parts of the Underground and DLR network will be under pressure, including stations and lines close to the Games venues, which are important interchange points for spectators walking to the events. The main routes to the Olympic Park, the Jubilee line and the Central line, will be among the busiest. Although the Northern line through central London is not a direct route to the venues, it will still be busier than usual, especially on the Bank branch and from the north. A significant number of spectators are expected to use the Northern line to London Bridge to interchange with the Jubilee line and the main railways.
Stations to avoid
Spectators will be directed to use certain stations and lines for travel to and from the venues, and these will be very congested. Where possible, we recommend that you avoid these stations and routes if you are not traveling to a venue, especially during competition hours and commuter peak times.
Are you in an affected area?
Some areas of the city will be congested, especially the areas around major transport hubs, such as King’s Cross St. Pancras, London Bridge and Bank, and the routes that connect central London with the Games venues.
Use this map to select the area you are interested in and download detailed information on surrounding roads and on the local public transport network.
Maps and areas of London
1. Wembley http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map1-wembley.pdf
2. Stratford West http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map2-stratfordwest.pdf
3. Stratford East http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map3-stratfordeast.pdf
4. Paddington http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map4-paddington.pdf
5. Marylebone http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map5-marylebone.pdf
6. King’s Cross http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map6-kingscross.pdf
7. City http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map7-city.pdf
8. Bethnal Green http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map8-bethnalgreen.pdf
9. Bow http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map9-bow.pdf
10. West Ham http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map10-westham.pdf
11. East Ham http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map11-eastham.pdf
12. Earl’s Court http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map13-victoria.pdf
13. Victoria http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map13-victoria.pdf
14. Westminster http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map14-westminster.pdf
15. London Bridge http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map15-londonbridge.pdf
16. Bermondsey http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map16-bermondsey.pdf
17. Canary Wharf http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map17-canarywharf.pdf
18. Greenwich http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map18-greenwich.pdf
19. Woolwich http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map19-woolwich.pdf
20. Lewisham http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map20-lewisham.pdf
21. Blackheath http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map21-blackheath.pdf
22. Wimbledon http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map22-wimbledon.pdf
23. Lee Valley
24. Hadleigh, Essex
25. Weymouth and Portland http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map25-weymouth-portland.pdf
26. Windsor and Eton http://www.london2012.com/get-involved/business-network/travel-advice-for-business/maps/map26-eton-dorney.pdf
Within London, there will be congestion on the local bus network. Extra bus services will be run in areas where additional traffic is expected along or near the Olympic route.
Service on the River
Many spectators are expected to use the river to travel to and from venues in south-east London, such as Greenwich Park and the Royal Artillery Barracks in Woolwich, so even the river services will be more congested than usual. Given the importance of the event, we suggest booking your hotel room now, near the venue for the sport that interests you most, to avoid wasting time, and buying tickets for the opening ceremony, closing ceremony and events early.
By Elsi H
|
OPCFW_CODE
|
Can't Scale Down Kubernetes Deployment (Overscaled)
I've got kubernetes running via docker (Running linux containers on a windows host).
I created a deployment (1 pod with 1 container, simple hello world node app) scaled to 3 replicas.
Scaled to 3 just fine, cool, let's scale to 20, nice, still fine.
So I decided to take it to the extreme to see what happens with 200 replicas (Now I know).
CPU is now at 80%, the dashboard won't run, and I can't even issue a PowerShell command to scale the deployment back down.
I've tried restarting docker and seeing if I can sneak in a powershell command as soon as docker and kubernetes are available, and it doesn't seem to be taking.
Are the kubernetes deployment configurations on disk somewhere so I can modify them when kubernetes is down so it definitely picks up the new settings?
If not, is there any other way I can scale down the deployment?
Thanks
I guess you mean the Kubernetes that comes with docker. There is the reset option in docker under preferences (the bomb icon). I guess you've already tried to force delete (https://stackoverflow.com/questions/50336665/how-do-i-force-delete-kubernetes-pods )?
@RyanDawson Yes I mean the kubernetes that comes with docker. Thanks for pointing me to that SO question, I'm sure it will come in handy in the future, however I don't think it will help with my current problem, because (I assume) a deleted pod will just be re-created by the deployment when it noticed they're gone. Ideally I need to reduce the number of replicas in a deployment outside of kubernetes if possible
Sorry I meant to suggest to delete the deployment. Does force help with that or you can't run that command either?
@RyanDawson oh right, I see what you mean. Just tried it and unfortunately that's still not doing the job.
And docker preferences/factory reset also no good? You might be able to get some logs like https://github.com/docker/for-mac/issues/2536#issuecomment-361861211 That thread gives reset instructions and also mentions increasing the resources available to docker, but I'd suggest trying factory reset based on what you describe
@RyanDawson ahha! got it. I had a look at the settings (As someone suggested in the link you provided) and increased the cpus and memory available to docker.
Glad I didn't need to resort to a factory reset, which I guess I'd have to do if I didn't have extra resources to supply.
https://github.com/docker/for-mac/issues/2536 is a useful thread on this as it gives tips for getting logs, increasing resources or if necessary doing a factory reset (as discussed in the comments)
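For completeness, once Docker's VM has enough CPU/memory for the API server to respond again, scaling back down is a one-liner (the deployment name below is just an illustration, not from the question):

```shell
# Scale the overscaled deployment back down to a sane replica count
kubectl scale deployment hello-world --replicas=3

# Or remove it entirely, so the ReplicaSet stops recreating deleted pods
kubectl delete deployment hello-world
```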
|
STACK_EXCHANGE
|
Honestly scared to at this point. But I did have one turned off and there was about 1-2 degrees difference, nothing too much. But there's one on the bottom by the PSU for intake, and two above the mobo for exhaust, and I'm a bit scared of trying to unplug them because of how inaccessible they are haha. I'll definitely give updates on the wear and tear, because I do plan on moving this around to LANs as well! So far everything is holding up, and there's not really any bend!
Aside from the green label, which I do agree, it's to each their own with the fan :) I like the performance that comes with it, but I'm always open to finding an affordable 92mm fan that is good performance as well! I may just get some black tape or something at least to cover it, but thanks for the input!
Yup! Unfortunately in the part list it isn't listed as a "Case".
Yeah, before shipping it off here I did notice there wasn't much of a performance change, so I ended up flipping the PSU once more, and so now it's back to the way it should be. Also did the same for another ITX machine I have, as it seemed to really let the hot air just stay in the case.
Dang both of them? I can't really take off one side of the case here because it's all one piece. Would over/underclocking make the noise lessen at all? It only started recently too, I know before when I was testing some games it wasn't doing it, but now it seems to be doing it constantly and the only change is that I inverted the PSU. And the sound is only created when the GPU is put under load. Was tempted to throw my 1070 in there as well to see if it makes the same whine.
Hey thanks! I'm surprised I could fit the tiny 80mm there, I used some foam tape as well too, to avoid any possible vibration. I doubt it provides much difference, but hey more airflow is always good!
Thanks! I ended up applying more pressure to the given screws and it seemed to have worked now! Temps are also a little bit better :)
Nice build! I tried inverting my PSU as well, but couldn't get the screws to stay in. Did your case have any weird bends with the frame or anything?
So I guess it would be out of the question to even attempt a 280mm, remove the hard drive cage, and try and insert it below? I'm just trying to measure it all out but would like some more confirmation about it is all. Thanks for the help so far!
EDIT: Or this may seem incredibly stupid, but would it be possible to get a 360mm radiator, and have two fans installed on it, with the middle one missing so that there is enough clearance for the GPU? The Fractal site states that it's a max of 315mm GPU with the installation of a fan. Since a fan is 25mm and the radiator is 27mm, it would still be possible with barely enough clearance, correct? This is provided the motherboard's PCI slots align with the fan layout, and it doesn't happen to get in between two fans, correct?
I can agree with you on the case, I built one recently with the mesh front and it was fairly easy and nothing looked too scratched. However, there was warping on mine as well too and it took a small amount of bending it around to fix it, and the drive bay won't go on anymore (however, I decided against using it).
I honestly didn't think of flipping the PSU to use as an intake! I may borrow your idea here when I get home from work and try doing that... Thanks for the info!
Thanks! There's actually a small metal support that hangs on the other side and is secured in place by a few screws on the frame. I was very hesitant to install it, but it seems to be rather secure :)
Sounds good! Thanks for the input!
Sorry, I guess I should specify, $350 USD or CAD?
No problem! Depends on what VM solution you're using. I've done testing with Virtual Box and had a setup as such:
Kali as my main VM
There's a tutorial on Lynda (if you have access to it) that guides you through setting up an at-home lab like that, but the gist for the networking portion is that in VirtualBox you can create a NAT network in the VirtualBox preferences, and on each VM you can set your network to be a part of that NAT network. This network can have its own range of IPs set, and then you can assign it as the default network connector when setting up your virtual machines. Once all are running, you should be able to ping each virtual machine from within the others! There are some handy videos on how to set it up, and Oracle themselves have a really good blog post on networking within VirtualBox :)
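If it helps, the same NAT network can also be created from the command line with VBoxManage; this is a rough sketch, and the network name, IP range, and VM name are made up:

```shell
# Create a NAT network with its own IP range and DHCP enabled
VBoxManage natnetwork add --netname pentest-lab --network "10.0.2.0/24" --enable --dhcp on

# Attach a VM's first network adapter to that NAT network
VBoxManage modifyvm "Kali" --nic1 natnetwork --nat-network1 pentest-lab
```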
What Darth had is a good starting point for Kali Linux, I've used it before on one of my Pi3's when I was playing around with Linux. I mainly tried using a wireless connection as a Rogue AP, which I had to buy a wireless adapter for, of course. After some research I'm not entirely sure if you can use the Ethernet port as a part of your setup, but you can use WiFi as previously mentioned, and there seems to be a nice tool called Pumpkin Pi to set everything up as needed.
However, if you wish to set everything up yourself (since it's for educational purposes of course), there are plenty of tutorials that explain how to setup your own AP with an RPi. I'm still a complete novice when it comes to security as well, but I believe you could perform a MiTM attack and find a vulnerable system that the victim PC is potentially trying to connect to, steal the credentials that it's using to connect, and login as said user on that system. Of course, you're not really connecting to that PC directly, but you're also compromising possible network credentials (providing there is more than one computer at play here). If you'd like a cheap investment for a WiFi dongle that supports promiscuous mode, I bought this one and it worked well for what I was playing around with. Social engineering tools on Kali are also fun to play around with.
As an aside, I think you would end up learning more about the tools themselves if you setup your own lab via VMs that are all interconnected with each other. The RPi is definitely good for portability, though!
Hmm good point, big oversight by me! Thank you for all the help, you're an absolute legend!
Alright perfect. And speaking of space - how much would I exactly benefit from using an SFX PSU? Would it be much better, management wise? Is the price worth it?
Yeah, I figured it may be a bit overkill... Thanks for the input! I'll go revise that right now. Otherwise, RAM is okay for this build too then, or should I splurge a bit and get higher freq. RAM?
|
OPCFW_CODE
|
org.mockito:mockito-core - J2ObjC continuous build
Mocking framework for unit tests written in Java.
@szczepiq, @bric3 - FYI for you both. This is to support translation of your Java library to Objective-C such that it can be used on iOS. This bug is specifically for testing this as part of a continuous build using Google's J2ObjC conversion tool.
@advayDev1 FYI
I'm not sure how we can help on the matter. Looks a nice initiative though !
@bric3 - no action needed on your part for now. At some point we may ask for changes to Mockito but for lots of us the translated library has been working well for some time.
ok great let us know your advances on the topic ;)
@bric3 - At the moment we are blocked on the issues mentioned here:
https://github.com/google/j2objc/blob/master/testing/mockito/README
Those are all the modifications necessary to Mockito to have it work on iOS.
@brunobowden - I don't think we can do much here. j2objc's repo already has all the modifications needed to build Mockito in ObjC. And I assumed they won't be released to a jcenter jar.
Added a mock maker that works with iOS and Mac OS X.
That's indeed the appropriate way to support other mock makers.
Modified ClassPathLoader to load the iOS mock maker by default.
It shouldn't be necessary to modify this class, as the iOSMockMaker can be in another jar, just as long as our internal PluginLoader finds the mockito-extensions/org.mockito.plugins.MockMaker file in the classpath. (version 1.10.19, 2.0.x)
Stubbed out unused ASM and Cglib classes, removed use of Objenesis, since iOS doesn't support bytecode.
It shouldn't be necessary if the relevant mockmaker is used.
As a side note, Mockito 2.x is using Byte Buddy instead of CGLIB.
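For reference, the plugin mechanism described above boils down to a single classpath resource that names the MockMaker implementation; the class name here is hypothetical:

```
# classpath resource: mockito-extensions/org.mockito.plugins.MockMaker
org.example.ios.IosMockMaker
```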
Thanks @bric3; I'm cc-ing @tomball and @kstanger here who made those modifications to Mockito in the repo linked above. Tom and Keith - had you already considered what Brice mentions above (that fewer modifications to Mockito may be necessary) and found you still needed to make more changes?
What @bric3 says about other jars certainly works for Java jars and normal static libraries, but for testing on iOS we have a special weird case: to support successful loading of unreferenced classes, the test binary needs to be linked with every Objective C class. iOS runs apps in a sandbox that prohibits the loading of code from anywhere other than the main executable, so it all has to be included. I guess you can include all the bytecode support classes, but that adds a lot of useless code to each test binary (and of course, any requests to support bytecode on iOS will be cheerfully ignored :-).
The ClassPathLoader and other modifications may no longer be necessary, as j2objc's JRE support has grown quite a bit since we first supported Mockito. I felt an obligation then to get it done early, since that team was gracious enough to accept Jesse Wilson's and my MockMaker proposal so quickly.
@tomball, depending on what comes out of this, we'd be happy to host modifications to mockito on https://github.com/j2objc-contrib/, then we can build it continuously. That'll be a future activity, while we focus for now on libraries that are easier to support.
iOS runs apps in a sandbox that prohibits the loading of code from anywhere other than the main executable, so it all has to be included.
OK. I don't know much about iOS constraints, but maybe packaging all together and still using the PluginLoader mechanism could still work.
@brunobowden - we can actually submodule out to google/j2objc/testing/mockito. to do it in this repo would require translating their existing make magic to gradle (pretty simple). but it would then no longer build like j2objc's other modified libraries (which do not rely on gradle at all).
cc @kstanger
I expect we'll have a number of cases where we build an existing library with a small number of modifications, e.g. replacing certain files, disabling certain functionality and so forth. It sounds like Mockito will be one of those cases.
The question is where should those changes go?
As far as possible, it would be best to integrate them upstream into the public Mockito repo, as has already happened with Guava 19.0-rc1. @tomball, are there any changes like that appropriate for Mockito?
If there aren't any remaining changes which are J2ObjC specific, that's the ideal and simplest case. If changes do remain, where should they go? The possible places are Google, j2objc-contrib, or the original repository. For many reasons, a library maintainer may be unwilling or unable to have "alien" J2ObjC specific files in their repo. I think the best answer is that changes should be in j2objc-contrib. This means that third-party developers can freely iterate and contribute, likely for many libraries that Google has no interest in.
For something like Mockito though, it's more sensitive for Google about accepting third-party code (if the changes were in j2objc-contrib). If the changes are modest enough in scope and likely stable over time, then it may be best to just maintain a duplicate copy of the code. That'd allow us to avoid the complexity of pulling in both j2objc-contrib and Google code. Normally I'd be against this as a bad code smell ("Number one in the stink parade is duplicated code" according to https://sourcemaking.com/refactoring/duplicated-code). In this case, I want to play devil's advocate.
@tomball, @kstanger - your thoughts on this?
I recommend lowering the priority on this library, or dropping it completely. Mockito is a weird and wonderful library that creates classes out of thin air, and so will need special handling on any platform. I should be able to reduce the size of its fork as Brice suggests, but investing the time to make it perfectly portable (if that's even possible) doesn't seem worth the small benefit, IMO.
Thanks for your input @tomball. I'm closing this bug and so the J2ObjC Gradle Plugin will continue using the J2ObjC distributed version of Mockito. As and when you make upstream changes or have any other comments, please update this issue. We'll likely only open it again when and if you think it's advisable.
|
GITHUB_ARCHIVE
|
Pre-trained Precise models and training data provided by the Mycroft Community
Models are housed in two branches:
- Master - for completed models available for download and use
- Models-dev - for models in development, available for testing
Models should be packaged as a gzipped tar archive. Each archive should contain the model files (wakeword.pb, wakeword.pb.params, wakeword.pb.txt). A SHA256 hash of the archive should be included in the README.md in the models/ directory.
```
│   README.md
│
└───licenses/            # license templates and the licenses of submitted files
│   │   license-template.txt
│   │   license-YYYYMMDD-githubusername.txt
│   │   license-YYYYMMDD-githubusername.txt
│   │   ...
│
└───not-wake-words/      # samples that are clearly not wake words
│   │   README.md
│   │
│   └───lang-short/      # the two-character ISO 639-1 language code eg de, en, es, pt, etc.
│   │   │   README.md
│   │   │   metadata.csv # a transcript for the not-wake-word files
│   │   │   notwakeword-lang-uuid.wav
│   │   │   notwakeword-lang-uuid.wav
│   │   │   ...
│   │
│   └───noises/          # samples that are not audible words
│       │   README.md
│       │   metadata.csv # name, brief description of noise
│       │   noise-uuid.wav
│       │   noise-uuid.wav
│       │   ...
│
└───wake-word/           # the wake word name eg hey-mycroft, computer etc
    └───models/          # Precise models for this wake word
    │       README.md
    │       wakeword-lang-preciseversion-YYYYMMDD-githubusername.tar.gz
    │       ...
    └───lang-short/      # the two-character ISO 639-1 language code eg de, en, es, pt, etc.
            README.md
            wakeword-lang-uuid.wav
            wakeword-lang-uuid.wav
            ...
```
File Naming Conventions
Wake word clips should be named using the format “wakeword-lang-uuid.wav”. UUIDs can be generated using command line tools on unix systems. Not-wake-word clips should be named “notwakeword-lang-uuid.wav”, and noise clips “noise-uuid.wav”. The name of the model archive should be in the format “wakeword-lang-preciseversion-YYYYMMDD-githubusername.tar.gz”. Licenses should use names like “license-YYYYMMDD-githubusername.txt”.
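As a minimal sketch of the naming convention (the helper function is illustrative, not part of this repo):

```python
import uuid

def wake_word_filename(wakeword: str, lang: str) -> str:
    """Build a clip name in the wakeword-lang-uuid.wav format."""
    return f"{wakeword}-{lang}-{uuid.uuid4()}.wav"

# e.g. wake_word_filename("heycomputer", "en") -> "heycomputer-en-<uuid>.wav"
```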
Audio Clip Conventions
Audio files should be WAV files in little-endian, 16-bit, mono, 16000 Hz PCM format. FFmpeg calls this “pcm_s16le”. Clips for wake words should contain only the wake word, with no more than one second of silence before and after. All clips should be at most three seconds in total length.
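A quick way to sanity-check a clip against these conventions before submitting is Python's standard wave module; this checker is a sketch, not an official repo tool:

```python
import wave

def check_clip(path: str) -> None:
    """Assert a WAV clip is 16-bit mono 16000 Hz PCM and at most 3 seconds long."""
    with wave.open(path, "rb") as w:
        assert w.getnchannels() == 1, "must be mono"
        assert w.getsampwidth() == 2, "must be 16-bit (2 bytes per sample)"
        assert w.getframerate() == 16000, "must be 16000 Hz"
        assert w.getnframes() / w.getframerate() <= 3.0, "must be <= 3 seconds"
```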
Pull Request Template
New pull requests containing raw audio data will need to be submitted with a license file. A PR for a wake word should be submitted to the branch for that wake word; if a branch does not exist, create one, along with a README.md for the word. The license file will need to include the names of all files submitted as part of the PR, as well as the GitHub user’s name. Each PR should have its clips and files sorted using the previously described directory structure. It is preferred that a PR contain one set of words or one model, i.e. a single wake word, or only not-wake-words. A compressed/zip file of just the clips can be submitted as part of the PR; please ensure the file is placed in the relevant directory. Clips meant for not-wake-word should also have an updated metadata.csv file, using a format of “filename,transcript”. For models, add an entry about the model to the relevant README.md file. This should indicate at a minimum the model name, license, and SHA256 sum. Additional information such as the datasets used, a link to further info, and the Precise parameters used in training is also helpful.
PR Review Process
- Verify formatting of files is correct
- Review the license file for correct contents.
- Check that included clips are named correctly, formatted correctly (ie, file $clip), and less than three seconds duration (ie, exiftool -Duration *.wav | some grep and things)
- Spot check clips to ensure they are the correct wake word, or that noise, or not wake word speech is contained as appropriate.
- For submitted models, if you speak the submitted language, test the model. (other stuff about testing model here)
- If PR looks correct and sounds right, then approve and merge. If not, post a comment indicating what needs to be updated.
- Any PR open longer than 30 days with an un-replied to comment will be closed.
We prefer audio clips to be submitted under public domain license. Other acceptable audio licenses include the Creative Commons BY, BY-SA, BY-NC, BY-NC-SA licenses. A license template for public domain usage is provided in the Licensing directory. For pre-trained models, we can accept any of the main seven CC licenses including BY-ND and BY-ND-NC. Submissions using other licenses will need to be reviewed more thoroughly before acceptance.
All audio should be sourced by the submitter. No copyright material will be accepted. No material from unknown provenance will be accepted.
Contributors to the repo must include a license during submission. They are not required to sign the Mycroft Contributor License Agreement.
|
OPCFW_CODE
|
Documentation: x-trickster-result & status=proxy-only
I can't seem to find any documentation around the possible values in the x-trickster-result response header.
I've been checking this to attempt to debug why some queries seem to hit cache, while others seem to only go through HttpProxy mode, but not knowing the possible values and the reasons why those values may appear makes this difficult.
For example, on the same grafana dashboard, some panels are getting cache phit while other panels are getting status=proxy-only.
I have tracing enabled, but it doesn't appear that there's a lot of attribution on the traces. All I can see is that, unlike other traces where the request is passed to DeltaProxyCacheRequest, this request is immediately passed to ProxyRequest.
In addition, I don't see any logs (in DEBUG mode) when a query is status=proxy-only.
It'd be nice to be able to know why a query was proxied and not cached. I think this should both be set as an attribute on the parent span and also within DEBUG logs.
One reason that you might get a proxy-only is that a user requests a very old time range that is not in the configured cacheable window. We already drop a debug log when this happens, but can also add a span attribute for the "range is too old" condition.
The other reasons would be backend provider-specific.
With Prometheus, we pretty much trust that any request against /query is object cacheable and against /query_range is timeseries cacheable, unless the "too old" condition applies.
For providers that use a query language where the time range is embedded into the query (as opposed to using separate URL parameters - InfluxDB and ClickHouse), we actually perform a rough parse of the query in order to extract those attributes, since they must be manipulated based on what is in cache versus what is still needed, and then the needed ranges are swapped into the base query when making the backend request. In Trickster 1.x, all of that is based on regex matches, which make a best effort to identify cacheable queries, based on common Grafana patterns. If any part of that process does not work out, we'll go ahead and just proxy the request. That is likely what you are seeing. We can definitely add a debug log and span attribute for this, but the only detail we could give in 1.x is that regex matching failed.
With Trickster 2.0, we've implemented our own extensible lexing and parsing solution that is a little more robust, and should be able to give more detail about why it failed (e.g., when an influxql query is missing the group by time($duration) clause, it will tell you).
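For context, the clause being looked for is the standard Grafana-style time grouping; a typical cacheable InfluxQL query looks something like this (measurement and field names are illustrative):

```sql
SELECT mean("usage_idle") FROM "cpu"
WHERE time > now() - 1h
GROUP BY time(30s) fill(null)
```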
If you can provide examples of queries that are coming back as proxy-only (feel free to obfuscate the non-time field names), we can expand the regex to be inclusive of those patterns. Any chart that Grafana and Chronograf can render, we definitely want to be able to cache.
The X-Trickster-Result header definitely needs its own documentation page. While the Cache Statuses (like proxy-only) are documented with the Caches, we'll get a doc published to cover the main parts of the header's value and the overall formatting.
Gotcha, I'm using Influx so I'm betting that it's something with the parsing, although it's a query directly formulated by Grafana.
Actually, I figured out why it's doing it.
If a panel has multiple queries in it, Grafana will separate each query with a ; when it sends it to the datasource. Now, if you disable a query in that panel, Grafana will keep the ;.
I updated my test repo (https://github.com/disfluxly/trickster_docker_compose) with this. If you look at the cpu panel on the Internal dashboard you'll see what I'm talking about. There are 2 queries on there. Query A is disabled. Grafana still sends Query B as though it's multiple queries, so it's prefixed by a ;. This causes Trickster to just proxy it.
As soon as you enable Query A, caching starts to work.
How does Trickster handle multiple queries being sent? Does it split on ; and cache each individual query separately? Or does it just cache the entire multi-query string? I'd assume the former as the latter would cause a lot of problems.
currently the entire query string is hashed to a single cache key and, if it is a compound query (delimited by ;), the various result sets returned are isolated into their own silos under that cache key. We do it that way because we want to pass the user's original query string 1:1 up to InfluxDB, with only the time ranges modified, just like Grafana does if Trickster were not in the path. So any time you disable or enable specific subqueries (by disabling the legend series in grafana), it will create a new cache object for the new view, since the underlying query string will then hash to a different key. Not ideal, but not the end of the world, since most people are refreshing the same charts periodically and will only incidentally change the view. That would result in a one-time cache miss on the legend change, and then return to subsequent phits as the dashboard auto-refreshes. I think there is some room for optimization there, as you suggest, but it is dependent upon some technology we are adding to 2.0 to do federated dataset merges. So we'll revisit that later this year, since the only benefit would be a marginally smaller cache utilization.
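To illustrate why toggling a subquery causes a one-time miss, here's a toy sketch of hashing the whole query string to a cache key (not Trickster's actual implementation):

```python
import hashlib

def cache_key(query: str) -> str:
    """Toy example: the entire (possibly compound) query string maps to one key."""
    return hashlib.sha256(query.encode("utf-8")).hexdigest()[:16]

both = cache_key('SELECT ... FROM "a"; SELECT ... FROM "b"')
one = cache_key('; SELECT ... FROM "b"')  # Query A disabled in Grafana
print(both != one)  # prints True: a different key means a fresh cache object
```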
We'll take a look at the updated docker-compose and make sure the regex will account for those stray semicolons and issue a 1.1.4 release soon. Since it's not generating critical panics, it may take a bit longer to circle back on, since we're heavily focused on getting the first beta of 2.0 released.
Gotcha. So when's the 2.0 beta targeted for? :)
Also, did you want me to leave this open as a reminder for fixing the miscellaneous ; in the regex & creating docs for x-trickster-result?
Yeah, let's definitely leave this open while we work out the semicolon thing! I was hoping to have the first 2.0 beta out already, but it's taking a bit longer to get fully running. We're now targeting the first week of October, so stay tuned!
@jranson - How's the 2.0 launch coming along?
|
GITHUB_ARCHIVE
|
I’m just about at the halfway mark now for getting the computer to operate the MOV8, ALU and SETAB instructions. The easier cards are out of the way now … time for the slightly more complex ones. In this post it’s the sequencer cards, which will deal with the ‘when’ of instructions by sending out timing pulses which the upcoming control cards will then use to operate the various control lines of the computer at the right time.
Note there that I did say sequencer ‘cards’ and not ‘card’ because there’s two of them this time. This is the first part of the computer that will be constructed across two cards stacked together rather than all on a single card (the upcoming control unit will also be spread across two cards). This is for two reasons: firstly the sequencer needs access to more connections than a ‘regular’ card but secondly there’ll be so much to fit in that it physically wouldn’t fit on one card.
Working across two cards does present additional challenges in the construction … mainly around how to get the required signals that are private to the sequencer between the two cards. I considered many options but in the end a system of stacking header pins seemed the best route forward. So, to start construction I soldered down the ribbon cable connectors and board interconnects … once done it looked like this:
To keep the two cards stable I also added some PCB stand-offs in the front corners. Thankfully the distance between cards in the enclosure can be made up using standard stand-off lengths. I’d like to say I planned it that way but of course I didn’t … it’s just another one of those happy coincidences which happens when everything is on a 0.1 inch grid.
To stack the cards the upper card header pins are aligned and inserted into the respective sockets on the lower card producing a unit that looks like this:
With the basic connections in I moved on to soldering in the wire wrapping posts for the interconnects and LEDs along with the LEDs themselves.
One problem with the board interconnects on the upper card is that it does make soldering a little fiddly as you have something protruding up from the board getting in the way … plus that ‘something’ will melt (or at least the plastic parts of it will) if you apply a soldering iron to it.
For the LEDs I’ve continued using the newer method of soldering the LED cathodes together with bits of trimmed-off diode/LED legs as it makes soldering much easier. This is the first time, however, that I’ve tried this technique on double-height LED holders. Generally the concept stays the same … join all the cathodes together … but for the holders that have LEDs in their upper slot I need to add a small Kynar wire link to get that LED’s anode to the wire wrap post. It’s all a bit fiddly but as long as you have patience and a steady hand it usually comes off OK.
Next up were the relay sockets and associated wire wrap posts:
I’ve actually soldered down the relay sockets for the upcoming 10 and 12-cycle relays on the lower card in addition to the required 8-cycle relay sockets. This is because soldering these sockets down is a really fiddly job and I know that when I come back to this card to add functionality later it’ll be a real pain when there’s relays and all the wiring getting in the way. I’ll not bother soldering the underside of these sockets much further but at least the fiddly upper side is done. I’ve not done anything on the upper card although eventually this will hold further cycle relays all the way up to the full 24-cycle sequencer.
Next job was to solder in the flyback and feedback diodes (which ensure produced outputs don’t feed back into parts of the sequencer’s finite state machine):
The final soldering job was to put in the power and ground lines:
This time around I finally got bored of cutting and stripping all those short bits of coated wire for the ground lines and decided to give the same technique I use for grounding the LEDs a try for the relays. This does mean that all the grounds are exposed for their full lengths but on the plus side it was much quicker and easier to put everything together. I’ll see how I feel about it but I’ll more than likely do the same on the next card.
There’s some temporary Kynar wires in the power and ground lines and these are just to ‘hop’ over future parts of the card where the lines will eventually go through. The Kynar wire, of course, can’t handle all that much current but it should be fine for now.
With the soldering done it’s on with the wire wrapping. The sequencer probably has the most complicated wiring yet (second only, maybe, to the ALU arithmetic unit).
With the wire wrap done the relays can then be placed in their sockets:
This is almost the last step; however, to make the sequencer usable up to the 8th cycle I need to add a couple of extra temporary Kynar links that will connect stages 7 and 8 back to stages 1 and 0. Later on, when longer cycles are implemented, stages 9 and 10 will provide the required lines to keep stages 7 and 8 alive … for now, in their absence, the first stages will do the same job. With the temporary links in place the lower card looks like this:
So, finally, that’s the sequencer complete for 8-cycle instructions. The card will be extended over time as longer cycle instructions are introduced but for now the sequencer as a whole looks like this:
As usual I’ve put a video together that demonstrates the sequencer in operation. In this video I give a quick overview of the cards and then demonstrate running through the 8 stages of the finite state machine and producing the three derived pulses C, D and E.
That’s it for the sequencer … at least for now. It produces all the pulses that will be needed for the computer to copy values between registers, load values from the opcode and perform ALU operations. The final step in making these operations a reality is to construct the control unit … again though, just enough to operate these three 8-cycle instructions. The control unit is similar to the sequencer in that it will also be spread over two cards although fortunately its wiring will be quite a lot simpler as most of it is just combinatorial logic.
This question was closed as being "opinion based," and I am interested in knowing why. The main part of the question is as follows:
I am trying to find a word that can be used in formal situations that means an unprincipled, unpleasant person. I'm looking for a more formal or civil way to say this, rather than the uncivil “He’s a jerk/bastard.”
In other words, the OP is looking to find a more formal, perhaps less objectionable, word to replace 'jerk.' The "opinion-based" part, of course, would be that each person has a different measure of what's civil or formal, but I don't see how that actually detracts from the question's validity or likelihood to be answered with "facts or citations." As long as a dictionary definition is linked and some sort of explanation is offered as to what makes the word formal (for instance, finding instances in a book where the characters use the word in a work setting), I feel that there's no issue with a lack of opportunity for referenced backup.
Hello, Bob. Please show research, as expected in questions on ELU. Using a thesaurus to find synonyms (some of which may be formal in register) is a good place to start; even “the 7 synonyms listed by Allthesaurus have no formal examples”, with a link, would be fine.
I agree with that. The OP needed to include more context and more information about what prior research they'd done, but the question wasn't closed for the "Please include the research you’ve done, or consider if..." reason. (And I do agree that it should have been closed.) Rather, it was closed for being the sort of question that would attract answers that were more of opinions.
Most users of this site are common-sensed individuals who seem to have a fairly good grasp on what words are suitable for formal speech and what words aren't, and all the answers to the question reflect that: uncivil, scoundrel, reprobate, blackguard, or rascal. While none of them offer an explanation as to why those words are more formal, none of them are based on opinion.
Put another way: Would we close the question "What's a more formal way of saying 'I had a dump in the potty'" for being opinion based, or for being answerable by general reference?
To be clear: I agree that the question deserved closure, but not for opinion-based-ness. I think it should have been closed because the OP didn't provide their prior research. I haven't cast a reopen vote either.
So, what I'd like to know:
- What about that question is opinion based?
- Am I misinterpreting the meaning of "opinion based" as a close reason?
- How could that question have been re-worded to not be opinion based?
I have a small server running Ubuntu 12.04 LTS. On it I wanted to set up IRC with SSL support. It appears that ircd-hybrid is the most popular choice. I am not married to using it, so other options are welcome; however, I am not seeing other options.
I have installed using apt-get. That isn't the problem. The problem is that the apt-get install version doesn't enable SSL. Try connecting to the SSL port 6697 using SSL and it will not work. Hence the work to build it myself. If you actually follow the source of the ircd-hybrid package the configuration requires that you edit it and add the "USE_OPENSSL = 1" option.
Because hybrid doesn’t support OpenSSL by default, you have to do a manual patch to get it working.
Anyway, I followed the instructions (I listed a couple at the end) and installed ircd-hybrid no problem. It's actually running now. However, I noticed that SSL isn't working: the port is never listened on. Digging deeper, I looked back at the build process and noticed that OpenSSL isn't being included.
So I have been trying to build and just to clarify, here is the ./configure output:
Compiling ircd-hybrid 7.2.2
Installing into: /usr
Ziplinks ................ yes
**OpenSSL ................. no**
Modules ................. shared
IPv6 support ............ yes
Net I/O implementation .. sigio
EFnet server ............ no (use example.conf)
Halfops support ......... yes
Small network ........... no
G-Line voting ........... yes
A few lines up in the output of the ./configure script I notice that it seems all the encryption algos are unavailable!
checking for OpenSSL... /usr
checking for OpenSSL 0.9.6 or above... found
checking for RSA_free in -lcrypto... yes
checking for EVP_bf_cfb... no
checking for EVP_cast5_cfb... no
checking for EVP_idea_cfb... no
checking for EVP_rc5_32_12_16_cfb... no
checking for EVP_des_ede3_cfb... no
checking for EVP_des_cfb... no
If you look into the configure script, you'll see that at least one of these encryption algorithms needs to be available. This is a hunch as I am not entirely positive about it. It seems that OpenSSL has disabled RC5 and a couple of other algos.
I've rebuilt and installed OpenSSL with the enable-rc5 and other flags but no dice.
This is on Ubuntu 12.04. Help? Anyone? I'd like to enable SSL on IRC on my personal server. It doesn't have to be ircd-hybrid.
EDIT I have been chatting with the fella that figured out the solution on the first link I posted above. He was able to successfully build and run with SSL on a clean 12.04 install using the default OpenSSL package and the patched ircd-hybrid package. The system I am on was originally built using 10.x. I wonder if doing those upgrades to 12.04 LTS somehow broke something along the way. For example, his run of the ircd-hybrid configuration yields these in the crypto part of the setup:
checking for OpenSSL... /usr
checking for OpenSSL 0.9.6 or above... found
checking for RSA_free in -lcrypto... yes
checking for EVP_bf_cfb... yes
checking for EVP_cast5_cfb... yes
checking for EVP_idea_cfb... no
checking for EVP_rc5_32_12_16_cfb... no
checking for EVP_des_ede3_cfb... yes
checking for EVP_des_cfb... yes
I am running the latest OpenSSL:
$ openssl version
OpenSSL 1.0.1 14 Mar 2012
Not sure why the configure script isn't seeing those cipher algos.
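As a rough local sanity check (separate from anything the configure script probes, since configure tests for low-level EVP symbols rather than TLS cipher suites), Python's ssl module can report which OpenSSL build it is linked against and which cipher suites that build exposes:

```python
import ssl

# Report which OpenSSL build Python is linked against and list the
# cipher suites that build exposes. This is only an adjacent sanity
# check of the local OpenSSL, not part of the ircd-hybrid build.
print(ssl.OPENSSL_VERSION)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
names = sorted(c["name"] for c in ctx.get_ciphers())
print(len(names), "cipher suites available")
print(names[:5])
```

If the suites you expect are absent here too, the problem is in the OpenSSL build itself rather than in the ircd-hybrid configure script.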
EDIT 2: Just installed a VM on my ESXi box. Brand new 32-bit install 12.04 LTS. Same problem. Installs fine via apt-get but SSL is not enabled. Tried the patch and it is also missing the algos as I listed above.
Password attacks

There are a number of different types of password attacks. For example, a hacker could perform a dictionary attack against the most popular user accounts found on networks. With a dictionary attack, hackers use a program that typically uses two text files:
- One text file contains the most popular user accounts found on networks, such as administrator, admin, and root.
- The second text file contains a list of all the words in the English dictionary, and then some. You can also get dictionary files for different languages.
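Conceptually, the attack is just a nested loop over those two files against captured password hashes. A toy sketch (the accounts, words and hashes here are all made up for illustration; real cracking tools are far more sophisticated):

```python
import hashlib

# Conceptual sketch of a dictionary attack: try every word against
# each account's captured hash. All values are illustrative only.
wordlist = ["password", "letmein", "dragon", "sunshine"]

# Pretend we captured password hashes for three accounts.
captured = {
    "administrator": hashlib.sha256(b"letmein").hexdigest(),
    "admin": hashlib.sha256(b"Xk3!unguessable").hexdigest(),
    "root": hashlib.sha256(b"dragon").hexdigest(),
}

cracked = {}
for account, target in captured.items():
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target:
            cracked[account] = word
            break

print(cracked)  # {'administrator': 'letmein', 'root': 'dragon'}
```

Note that "admin" survives: its password is not in the dictionary, which is the whole point of the advice that follows.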
To protect against a dictionary attack, be sure employees use strong passwords that mix letters and numbers. This way, their passwords are not found in the dictionary. Also, passwords are normally case sensitive, so educate users on the importance of using both lowercase and uppercase characters. That way, a hacker not only has to guess the password but also the combination of uppercase and lowercase characters.
Also remind users that words found in any dictionary are unsafe for passwords. This means avoiding not only English words, but also French, German, Hebrew . . . even Klingon!
Hackers can also perform a brute force attack. With a brute force attack, instead of trying to use words from a dictionary, the hacker uses a program that tries to figure out your password by trying different combinations of characters. The figure shows a popular password-cracking tool known as LC4. Tools like this are great for network administrators to audit how strong their users' passwords are.
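A brute force attack can be sketched the same way, except the candidates are generated by enumerating character combinations rather than read from a dictionary (again, purely illustrative values; the search space grows exponentially with length, which is why longer passwords help):

```python
import hashlib
import itertools
import string

# Conceptual sketch of a brute force attack: enumerate every
# combination of characters up to a given length and compare hashes.
captured = hashlib.sha256(b"ab1").hexdigest()
alphabet = string.ascii_lowercase + string.digits

found = None
for length in range(1, 4):  # lengths 1..3 only, to keep the sketch fast
    for combo in itertools.product(alphabet, repeat=length):
        candidate = "".join(combo)
        if hashlib.sha256(candidate.encode()).hexdigest() == captured:
            found = candidate
            break
    if found:
        break
print(found)  # prints the recovered password "ab1"
```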
To protect against password attacks, users should use strong passwords: a password comprising letters, numbers, and symbols, with a mix of uppercase and lowercase characters and a minimum length of eight characters.
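That policy translates directly into code. A minimal sketch of a strength check implementing exactly the rules above (the function name is mine):

```python
# Sketch of a password-strength check implementing the policy above:
# at least 8 characters, with lowercase, uppercase, digits and symbols.
def is_strong(password: str) -> bool:
    return (
        len(password) >= 8
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(not c.isalnum() for c in password)
    )

print(is_strong("sunshine"))   # False: dictionary word, no upper/digit/symbol
print(is_strong("P@ssw0rd1"))  # True: meets every rule (though well-known
                               # patterns like this belong on a deny-list too)
```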
Denial of service

Another popular network attack is a denial of service (DoS) attack, which can come in many forms and is designed to cause a system to be so busy that it cannot service a real request from a client, essentially overloading the system and shutting it down.
For example, say you have an email server, and a hacker attacks the email server by flooding the server with email messages, causing it to be so busy that it cannot send any more emails. You have been denied the service that the system was created for.
There are a number of different types of DoS attacks: for example, the ping of death. The hacker continuously pings your system, and your system is so busy sending replies that it cannot do its normal function.
To protect against denial of service attacks you should have a firewall installed and also keep your system patched.
Spoofing

Spoofing is a type of attack in which a hacker modifies the source address of a network packet, which is a piece of information that is sent out on the network. This packet includes the data being sent but also has a header section that contains the source address (where the data is coming from) and the destination address (where the data is headed). If the hacker wants to change "who" the packet looks like it is coming from, the hacker modifies the source address of the packet.
There are three major types of spoofing — MAC spoofing, IP spoofing, and email spoofing. MAC spoofing is when the hacker alters the source MAC address of the packet, IP spoofing is when the hacker alters the source IP address in a packet, and email spoofing is when the hacker alters the source email address to make the email look like it came from someone other than the hacker.
An example of a spoof attack is the smurf attack, which is a combination of a denial of service and spoofing. Here is how it works:
- The hacker pings a large number of systems but modifies the source address of the packet so that the ping request looks like it is coming from a different system.
- All systems that were pinged reply to the modified source address — an unsuspecting victim.
- The victim's system (most likely a server) receives so many replies to the ping request that it is overwhelmed with traffic, causing it to be unable to answer any other request from the network.
To protect against spoof attacks, you can implement encryption and authentication services on the network.
Eavesdropping attack

An eavesdropping attack occurs when a hacker uses some sort of packet sniffer program to see all the traffic on the network. Hackers use packet sniffers to find out login passwords or to monitor activities. This figure shows Microsoft Network Monitor, a program that monitors network traffic by displaying the contents of the packets. There are other sniffer programs available such as Wireshark and Microsoft's Message Analyzer.
Notice that the highlighted packet (frame 8) shows someone logging on with a username of administrator; in frame 11, you can see that this user has typed the password P@ssw0rd. In this example, the hacker now has the username and password of a network account by eavesdropping on the conversation!
To protect against eavesdrop attacks you should encrypt network traffic.
Man-in-the-middle

A man-in-the-middle attack involves the hacker monitoring network traffic but also intercepting the data, potentially modifying the data, and then sending out the modified result. The person the packet is destined for never knows that the data was intercepted and altered in transit.
To protect against man-in-the-middle attacks you should restrict access to the network and implement encryption and authentication services on the network.
Session hijacking

A session hijack is similar to a man-in-the-middle attack, but instead of the hacker intercepting the data, altering it, and sending it to whomever it was destined for, the hacker simply hijacks the conversation — a session — and then impersonates one of the parties. The other party has no idea that he is communicating with someone other than the original partner.
To protect against session hijacking attacks, you should restrict access to the network and implement encryption and authentication services on the network.
Wireless attacks

There are a number of different attacks against wireless networks that you should be familiar with. Hackers can crack your wireless encryption if you are using a weak encryption protocol such as WEP. Hackers can also spoof the MAC address of their system and try to bypass your MAC address filters. Also, there are wireless scanners such as Kismet that can be used to discover wireless networks even though SSID broadcasting is disabled.
To protect against wireless attacks, you should implement encryption protocols such as WPA2 and use an authentication server such as a RADIUS server for network access.
At first, please update your CentOS. Every command here is run as root 😉
yum -y update
Installing database server MariaDB
Next, we install MariaDB and create an empty database for our Nextcloud. Then we start it and enable it to start automatically after boot.
If you wish, you can skip the installation of MariaDB and use the built-in SQLite instead. In that case, you can continue with installing the Apache web server.
yum -y install mariadb mariadb-server
...
systemctl start mariadb
systemctl enable mariadb
Now we run the post-installation script to finish setting up the MariaDB server:
mysql_secure_installation
...
Enter current password for root (enter for none): ENTER
Set root password? [Y/n] Y
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] Y
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y
Now, we can create a database for nextcloud.
mysql -u root -p
...
CREATE DATABASE nextcloud;
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextclouduser'@'localhost' IDENTIFIED BY 'YOURPASSWORD';
FLUSH PRIVILEGES;
exit;
Installing Apache Web Server with ssl (letsencrypt)
Now we install the Apache web server, start it and enable it to start automatically after boot:
yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service
Now we install SSL for Apache and allow the http and https services through the firewall:
yum -y install epel-release
yum -y install httpd mod_ssl
...
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --reload
systemctl restart httpd.service
systemctl status httpd
Now we can access our server via https://out.server.sk
If we want a signed certificate from Let's Encrypt, we can get one with the next commands. Certbot will ask some questions, so answer them.
yum -y install python-certbot-apache
certbot --apache -d example.com
If all went well, we see:
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at /etc/letsencrypt/live/example.com/fullchain.pem.
...
And we can test our page with this:
Install PHP 7
As the creators of Nextcloud recommend PHP 5.4 at minimum, I use PHP 7.
PHP 5.4 has been end-of-life since September 2015 and is no longer supported by the PHP team. RHEL 7 still ships with PHP 5.4, and Red Hat supports it. Nextcloud also supports PHP 5.4, so upgrading is not required. However, it is highly recommended to upgrade to PHP 5.5+ for best security and performance.
Now we must add some additional repositories:
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
And we can install php 7.2:
yum install mod_php72w.x86_64 php72w-common.x86_64 php72w-gd.x86_64 php72w-intl.x86_64 php72w-mysql.x86_64 php72w-xml.x86_64 php72w-mbstring.x86_64 php72w-cli.x86_64 php72w-process.x86_64
php --ini | grep Loaded
Loaded Configuration File: /etc/php.ini
php -v
PHP 7.2.22 (cli) (built: Sep 11 2019 18:11:52) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
In my case, I will use nextcloud as my backup device, so I increase the default upload limit to 200MB.
sed -i "s/post_max_size = 8M/post_max_size = 200M/" /etc/php.ini
sed -i "s/upload_max_filesize = 2M/upload_max_filesize = 200M/" /etc/php.ini
sed -i "s/memory_limit = 128M/memory_limit = 512M/" /etc/php.ini
Restart web server:
systemctl restart httpd
At first, I install wget tool for download and unzip:
yum -y install wget unzip
Now we can download Nextcloud (at this time the latest version is 16.0.4) and extract it from the archive to its final destination. Then we change the ownership of this directory:
wget https://download.nextcloud.com/server/releases/nextcloud-16.0.4.zip
...
unzip nextcloud-16.0.4.zip -d /var/www/html/
...
chown -R apache:apache /var/www/html/nextcloud/
Check if you have SELinux enabled with the sestatus command:
sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31
According to the Nextcloud admin manual, you can run into permission problems. Run these commands as root to adjust permissions:
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/config(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/apps(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.htaccess'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.user.ini'
restorecon -Rv '/var/www/html/nextcloud/'
If you see the error “-bash: semanage: command not found”, find and install the package that provides it:
yum provides /usr/sbin/semanage
yum install policycoreutils-python-2.5-33.el7.x86_64
And finally, we can access our Nextcloud and set up the administrator's password via the web: https://your-ip/nextcloud
Now you must complete the installation via the web interface. Set the administrator’s password and point it to MariaDB with the credentials used above:
Database user: nextclouduser
Database password: YOURPASSWORD
Database name: nextcloud
host: localhost
In my case, I must create a data folder under our Nextcloud directory and set permissions:
mkdir /var/www/html/nextcloud/data
chown apache:apache /var/www/html/nextcloud/data -R
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
restorecon -Rv '/var/www/html/nextcloud/'
For easier access, I created a permanent redirect from my IP/domain to the Nextcloud root folder. This redirect allows you to open the page
and redirect you to:
You must edit httpd.conf file and add this line into directory /var/www/html:
vim /etc/httpd/conf/httpd.conf
...
RedirectMatch ^/$ https://your-ip/nextcloud
...
systemctl restart httpd.service
If we see an error like “Your data directory and files are probably accessible from the Internet. The .htaccess file is not working.”, try editing the directory settings:
vim /etc/httpd/conf/httpd.conf
...
<Directory "/var/www/html">
    AllowOverride All
    Require all granted
    Options Indexes FollowSymLinks
</Directory>
Enable updates via the web interface
To enable updates via the web interface, you may need this to enable writing to the directories:
setsebool httpd_unified on
When the update is completed, disable write access:
setsebool -P httpd_unified off
Disallow write access to the whole web directory
For security reasons it’s suggested to disable write access to all folders in /var/www/ (default):
setsebool -P httpd_unified off
A way to enable enhanced security with own configuration file
vim /etc/httpd/conf.d/owncloud.conf
...
Alias /nextcloud "/var/www/html/nextcloud/"
<Directory /var/www/html/nextcloud/>
    Options +FollowSymlinks
    AllowOverride All
    <IfModule mod_dav.c>
        Dav off
    </IfModule>
    SetEnv HOME /var/www/html/nextcloud
    SetEnv HTTP_HOME /var/www/html/nextcloud
</Directory>
A mode is a user-changeable state that influences how a computer reacts to user input. For example, if the Caps Lock key is active, all letters typed by the user appear in upper case. If it is not active, letters appear in lower case. Whether Caps Lock is active or not is not immediately obvious to the user, which often causes people to accidentally activate Caps Lock. This, in turn, causes unexpected results when users start to type.
Since modes can cause computers to behave in unexpected ways, they are typically frowned upon.
When Apple started to work on the original Macintosh, one of the team's most important goals was to avoid modes. Famously, the original Mac did not even have a "rename file" mode. Instead, when a file was selected, typing anything on the keyboard would instantly change the file's name. This caused a few problems, and Apple eventually recanted. On Folklore.org, Bruce Horn writes:
In the Finder, the startup disk would appear on the desktop, in the top-right corner, ready to be opened. The Finder would initially select it; once selected, typing would replace the current name, following the modeless interaction model that I had learned in the Smalltalk group from Larry Tesler. This meant that whatever anyone typed when they first came up to the Macintosh would end up renaming the disk.
This particular problem was solved by introducing a mode, but as a general rule, the Mac remained mostly mode-free. Following Apple's lead, avoiding modes has been an important design goal in all modern computer systems.
In some cases, modes are unavoidable. On desktop computer systems, quasimodes are often used when modes are not avoidable. Quasimodes are modes that are temporary and only exist as long as the user explicitly activates them. The aforementioned Caps Lock key introduces a mode. Its quasimodal counterpart is the Shift key, which introduces a transient mode that only exists as long as the user holds down said key. Since the user has to keep the quasimode alive explicitly, there is no chance that he or she will be confused by the mode.
A quasimode is thus a benign, harmless version of a mode.
Copy and Paste on the iPhone
We are so used to avoiding modes and using quasimodes instead that it often becomes hard to think outside of this box. When Apple famously claimed that the iPhone did not have copy and paste because it was difficult to find the perfect interaction model, a lot of mockups of how copy and paste might work popped up. Most of them used quasimodes to select text,1 often activated by touching the iPhone's screen with a second finger. Even the Palm Pre uses a quasimode to select text: the "select" quasimode is activated by holding down the shift key on the keyboard.
Apple went a different way. Instead of using a quasimode, they introduced an actual, real mode. To select text, the user taps and holds the screen until the loupe appears. After releasing the finger, the iPhone either goes into a text selection mode, or displays a menu which allows the user to go into a text selection mode. This mode allows the user to move the start and end of the selection, and to cut, copy or paste using a popup menu.
But aren't modes evil?
Not necessarily. Modes are not always bad. Modes cause issues if they make computers behave in unexpected ways. However, if the modes themselves are obvious to the user, if it is always clear how to exit the current mode, and if the modes interfere with as few of the user's actions as possible, these issues disappear. In some cases, modes may even be preferable to quasimodes or to a non-modal interface.
Quasimodes require the user to do several things at the same time, such as holding down the Shift key while typing. Modes, on the other hand, allow users to do things sequentially - hit Caps Lock, type, hit Caps Lock again. Sequential actions, especially if guided well, are often easier to execute than parallel actions.
Additionally, the iPhone has very limited input mechanisms. Basically, the user interacts with most applications by touching the screen. While the iPhone can accept multiple touches at the same time, requiring multitouch interaction is often a poor idea. It makes it impossible to use the app with only one hand, it forces the user to obstruct larger parts of the screen, and it requires precise, coordinated user input.
Instead of overwhelming your users with a quasimodal or non-modal multitouch interface, it may often be a good idea to explore a guided modal interface which allows users to make choices sequentially. But keep a few things in mind:
- The modes should be obvious
- The modes should interfere with as few user actions as possible
- If the user does something outside of the parameters of the active mode (such as tapping text outside the current selection in the iPhone's selection mode), you should exit the mode immediately
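Those three rules can be sketched as a tiny state machine. This is purely an illustration of the guidelines, not Apple's implementation; all names here are hypothetical:

```python
# Tiny state machine illustrating the guidelines above for a
# text-selection mode: the mode is explicit state, entering it is a
# deliberate gesture, and any tap outside the current selection
# exits the mode immediately. All names are hypothetical.
class TextView:
    def __init__(self):
        self.mode = "normal"     # the active mode is explicit, obvious state
        self.selection = None    # (start, end) while selecting

    def long_press(self, position):
        # Entering the mode is a deliberate gesture, so it is obvious.
        self.mode = "selecting"
        self.selection = (position, position)

    def tap(self, position):
        if self.mode == "selecting":
            start, end = self.selection
            if start <= position <= end:
                return  # taps inside the selection stay in the mode
            # Anything outside the mode's parameters exits immediately.
            self.mode = "normal"
            self.selection = None

view = TextView()
view.long_press(10)
view.selection = (10, 20)  # user drags the selection handles
view.tap(30)               # tap outside the selection...
print(view.mode)           # ...exits the mode: prints "normal"
```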
If you use them carefully and with consideration, modes can be a useful and powerful interface design concept.
Not all of them used quasimodes. Lonelysandwich has a mockup with a select mode that is quite close to Apple's own solution. ↩︎
If you require a short url to link to this article, please use http://ignco.de/120
One of my favorite co-workers, Ben Klopfer, wrote a post back in October 2012 that has been one of our most popular and the comments keep coming in. In it, he espouses the new INDEX-MATCH functions that can be nested to vastly outperform VLOOKUP and HLOOKUP. VLOOKUP is my go to function. I tend to feel very comfortable with relational databases so I just think in two dimensional tables. I’ve found uses for HLOOKUP (which is basically VLOOKUP rotated ninety degrees), but I would truly lose a lot of utility out of Excel if VLOOKUP or a similar function didn’t exist, as it easily searches tables.
In Ben’s post, he addresses four limiting issues of VLOOKUP:
- Your data range is limited to a table. I’m okay with that for now; most of my lookups are in a tabular format.
- VLOOKUP always searches the left-most column. Yes, this is a pain.
- You can only specify return values by index number. Well, I am very familiar with VLOOKUP and can use it in my sleep. I’m okay with that.
- VLOOKUP provides a very limited approximate match feature (where data must be sorted low to high). Yes, I’ve run across this issue and just resorted my table. But yeah, I’m not fond of that workaround.
So Ben eloquently explains how INDEX-MATCH can be combined to address these four issues. One of my co-workers ran into an issue with #2 and I directed him to Ben’s post. Problem solved.
But when you get right down to it, if your data was in a tabular format, anyway, wouldn’t it be great if VLOOKUP could just look to the left? I mean your data’s right there! And that sorting issue Ben mentions in #4. Why didn’t Microsoft just decide to add one more parameter to allow you to specify the sort order?
I decided to do something for the little man (I’m not that little, so I guess that makes me magnanimous) and extend the function myself.
Imagine the following worksheet.
I have this spreadsheet divided in four quadrants, which is convenient because it’s my understanding they normally come in groups of four. For simplicity’s sake, let’s name the top left one Q1, the top right one Q2, the bottom left one Q3, and the bottom right one Q4.
Q1 contains a table of grades and the threshold to meet each one. It would appear that to make an A, you would need a 90 or above, a B requires an 80, and so on.
Q2 contains a Grade-Threshold pair which looks up a numeric value for a grade (so A would be 90, and B is 80 as the example shows) and a Percent-Grade pair which looks up a grade based on a percentage. The first pair is very easy to do with VLOOKUP where the formula in cell F2 would be: =VLOOKUP(E2,A2:B6,2,FALSE). This formula looks at the value in cell E2 (a B), considers the range of cells A2:B6, finds the matching row containing the value B, and returns the second column’s value in cell B3: 80. The range lookup is turned off so exact matches are required. This makes sense, since the grade letters wouldn’t come in a range.
However, the threshold of the grades does define a range. If you were to look at the table in Q1, it would appear that from 0 to less than 60 is an F, from 60 to less than 70 is a D, and so on. VLOOKUP has two problems here, both highlighted by Ben back in 2012. First, you can’t look left so you can’t look at the numbers in column B and pull a grade from the range. Second, the range is from high to low, something VLOOKUP’s range ability cannot deal with.
Q3 contains a table which solves both of these issues for the standard VLOOKUP function: by moving the numbers to the first column and the letters to the second; and by resorting the numbers from low to high. Now, the Percent-Grade pair in Q4 will work. The formula in cell F10 is: =VLOOKUP(E10,A10:B14,2,TRUE). You’ll notice that the range parameter has been set to TRUE. This allows the range ability of VLOOKUP to function, where 91 as shown in cell E10 returns an A in cell F10.
But the problem here is that you now have two tables which have the exact same information but must be duplicated to present the data differently. This can cause problems, especially if you change one table and forget to update the other one.
It just so happens that the formula in Cell F5 in Q2 gets around this problem. It queries the table in Q1 just like cell F2 does. However, it looks backwards. It uses the range of numbers from B2:B6 and determines the appropriate grade, even though the sort is from high to low; and this is what the formula looks like: =VLOOKUP_2(E5,B2:B6,0,TRUE,TRUE).
I’m probably not the most creative person in the world. I could have named this MarksVLOOKUP but I wanted to save you the typing, so I just put an underscore in the name, VLOOKUP_2. Once you see the VBA, you’ll be able to change the function’s name to whatever you want; but I digress.
You’ll notice an extra parameter. It’s called LargeToSmall and it defaults to false. In this case, we set it to TRUE because our range sort is from high to low. I probably did better naming this argument than I did naming the function. If you don’t include it, VLOOKUP_2 will behave exactly as VLOOKUP does, except for the whole search to the left thing.
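For readers who want the logic outside Excel, here is a minimal Python sketch of the same descending-range lookup idea (the grade thresholds are the ones from the Q1 table; the function name is my own, not something Excel provides):

```python
def range_lookup(value, table, large_to_small=False):
    """Approximate VLOOKUP's range mode: return the result for the first
    threshold the value meets. `table` holds (threshold, result) pairs,
    sorted high-to-low when large_to_small is True, low-to-high otherwise."""
    rows = table if large_to_small else reversed(table)
    for threshold, result in rows:   # walk from the largest threshold down
        if value >= threshold:
            return result
    return None                      # below every threshold: VLOOKUP's #N/A

grades = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]  # high to low
print(range_lookup(91, grades, large_to_small=True))  # A
print(range_lookup(65, grades, large_to_small=True))  # D
```

The one flag mirrors LargeToSmall: it only changes which direction the thresholds are walked, which is the whole trick.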
I built this function because I figure there are a lot of VLOOKUP aficionados out there who just want their function to step up a notch or two. I am in no way a VBA expert; I've not touched the stuff in many years, actually.
If you want to build this spreadsheet (it’s pretty easy) and test out VLOOKUP_2 for yourself, I’ll iterate through the formulae for you:
I set up E10 so that when I update cell E5, which accepts a number and calculates a grade using VLOOKUP_2, E10 updates automatically and F10 gives me the result VLOOKUP produces with a properly sorted table.
For all this to happen, you’ll need to create a VBA function. In order to do this in Excel, press ALT-F11 and the VBA development environment will open. In the upper left corner you’ll see the following:
Right-click Microsoft Excel Objects > Insert > Module (not Class Module).
Insert the following code:
Function VLookup_2(lookup_value As String, key_values As Range, col_index_num As Integer, Optional range_lookup As Boolean, Optional LargeToSmall As Boolean)
    Dim testValue As Variant, valueOut As Variant, lastMatch As Variant
    Const indexAdder = 1 ' make 0 to make col_index_num zero based, make 1 to act like VLOOKUP where 1 is the key column
    Const maxNumberOfBlanks = 10000 ' number of blank keys found in a row which will make routine time out
    valueOut = "": lastMatch = "" ' if not set, or set to empty, defaults out as zero which is not expected behavior
    Dim bValueFound As Boolean, valueFound As Variant
    Dim countOfBlanks As Long
    Dim bIsEmpty As Boolean
    Dim n As Long

    For n = 1 To key_values.Rows.Count
        testValue = key_values.Cells(n, 1).Value
        valueFound = key_values.Cells(n, 1 - indexAdder + col_index_num).Value
        bIsEmpty = IsEmpty(valueFound)
        If bIsEmpty Then
            countOfBlanks = countOfBlanks + 1
        Else
            countOfBlanks = 0
            If testValue = lookup_value Then
                If range_lookup And Not LargeToSmall Then
                    lastMatch = valueFound ' keep scanning; a later duplicate may still apply
                Else
                    valueOut = valueFound
                End If
                bValueFound = True
                If Not (range_lookup And Not LargeToSmall) Then Exit For ' exact hit: done
            ElseIf range_lookup Then
                Dim compareTest, compareLookup
                ' we want to compare numbers like numbers; otherwise
                ' if we have 300, 400, 500 as key values and our search term is 3000, we'd
                ' return 300 instead of 500.
                If IsNumeric(lookup_value) And IsNumeric(testValue) Then
                    compareTest = Val(testValue)
                    compareLookup = Val(lookup_value)
                Else
                    compareTest = CStr(testValue)
                    compareLookup = CStr(lookup_value)
                End If
                If (compareTest < compareLookup) Then
                    bValueFound = True
                    If Not IsEmpty(key_values.Cells(n, 1 - indexAdder + col_index_num)) Then
                        If LargeToSmall Then
                            valueOut = valueFound
                        Else
                            lastMatch = key_values.Cells(n, 1 - indexAdder + col_index_num).Value
                        End If
                    End If
                End If
            End If
        End If
        If bValueFound And ((compareTest > compareLookup) Xor LargeToSmall) Then Exit For
        If countOfBlanks > maxNumberOfBlanks Then Exit For
    Next n

    If range_lookup And bValueFound And Not LargeToSmall Then
        valueOut = lastMatch
    End If
    If Not bValueFound Then valueOut = CVErr(xlErrNA) ' set value out as #N/A if value is not found in list
    VLookup_2 = valueOut
End Function
Hopefully my code’s not the funniest thing about this blog.
I did want to highlight a couple of things in the code. You’ll notice that VLOOKUP’s third parameter defines the return column, where column 1 is the left-most column. To augment compatibility with this new function, column 1 is still the key column, and the return columns to the left would start at 0 and go negative from there. However, while developing this solution, I realized that having the key column be 0 made more sense to me. Then positive numbers denoted offsets to the right and negative numbers denoted offsets to the left. The comments in the code should let you make this change very easily, should you so desire.
It may take a bit more digging, but you might also realize that the range parameter passed to this function only needs to span one column: the key or lookup column. Whether you offset your return column to the left or right, the range doesn’t need to cover the columns in the entire table. It just needs to define the key values to be searched upon. In fact, if you define multiple columns in your range, the key range used for searching will be the first column and the other columns would be ignored by VLOOKUP_2 in terms of determining key values to search for.
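To make the signed-offset convention concrete, here is a small language-neutral sketch in Python (a list of rows stands in for the worksheet; all names here are mine, not part of the workbook):

```python
def offset_lookup(value, rows, key_col, col_index, index_adder=1):
    """Exact-match lookup where only the key column drives the search and
    the return column is key_col + (col_index - index_adder). With
    index_adder=1 (VLOOKUP-compatible), col_index 1 is the key column
    itself, 0 is one column to the left, 2 is one to the right."""
    for row in rows:
        if row[key_col] == value:
            return row[key_col + col_index - index_adder]
    return None

sheet = [["A", 90], ["B", 80], ["C", 70]]   # grade letters left of thresholds
print(offset_lookup(80, sheet, key_col=1, col_index=0))   # B: looks left
print(offset_lookup("B", sheet, key_col=0, col_index=2))  # 80: looks right
```

Changing `index_adder` to 0 gives the zero-based convention described above, where the key column itself is 0.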
To uninstall Exchange, the general approach is the same across versions:
- Move or remove all mailboxes, databases, and connectors hosted on the server.
- Run the Exchange setup program in uninstall mode (or use Add/Remove Programs / Programs and Features) and follow the wizard's readiness checks.
How To Uninstall Exchange 2003
To uninstall Exchange 2016, you can use the following steps:
Move or delete any mailboxes and mailbox databases still hosted on the server.
Close the Exchange Management Shell and any other programs holding Exchange files open.
Open Control Panel > Programs and Features, select Microsoft Exchange Server 2016, and click Uninstall. (Equivalently, run Setup.exe /mode:Uninstall /IAcceptExchangeServerLicenseTerms from an elevated command prompt.)
Let the readiness checks pass, then click Finish when the removal completes.
To uninstall Exchange 2013, use the following steps:
Move any remaining mailboxes and databases off the server, then open an elevated command prompt.
Run Setup.exe /mode:Uninstall /IAcceptExchangeServerLicenseTerms from the Exchange installation media and follow the prompts.
Restart the server once setup reports that the uninstall has completed.
There are a couple of ways to remove a Microsoft Exchange account from Outlook. One way is File > Account Settings > Account Settings: select the Exchange account and click Remove. Another way is to create a fresh profile in Control Panel > Mail and set it as the default.
There is no one-step answer to this question. The supported method is to move all mailboxes and public folder replicas off the Exchange 2010 server first, then run Setup.com /mode:Uninstall (or use Programs and Features) on the server itself.
An Exchange server cannot be deleted from the Exchange admin center alone. The supported way to remove a server is to uninstall Exchange on that machine. For a server that is permanently offline, the leftover server object can be deleted from the Configuration partition with ADSI Edit, but Microsoft treats that as a last resort.
There is no "demote" operation for Exchange Server 2013 the way there is for a domain controller. To take a server out of service:
Move all mailbox databases and arbitration mailboxes to another server.
Remove the server from any database availability group it belongs to.
Uninstall Exchange 2013 through Programs and Features or Setup.exe /mode:Uninstall.
There are a few ways to stop email from reaching Microsoft Exchange. One way is to set up a mail flow (transport) rule that rejects the messages. Another way is to disable the server's receive connectors, or stop the Microsoft Exchange Transport service, so the server stops accepting mail.
There are a few ways to stop the Microsoft Exchange services. One way is to use PowerShell:
Get-Service MSExchange* | Stop-Service -Force
This stops every service whose name starts with MSExchange. You can also stop an individual service from an elevated command prompt, for example "net stop MSExchangeTransport".
Exchange 2010 can be uninstalled by running Setup.com /mode:Uninstall from the installation media, or through Control Panel > Programs and Features. The Exchange Management Console (EMC) and the Exchange Management Shell (EMS) are for administering the product; they do not uninstall it.
To remove a leftover Exchange 2010 server object from Active Directory (only when the machine itself is permanently gone), you can use ADSI Edit:
Connect ADSI Edit to the Configuration naming context.
Navigate to CN=Services > CN=Microsoft Exchange > your organization > CN=Administrative Groups > CN=Exchange Administrative Group (FYDIBOHF23SPDLT) > CN=Servers.
Right-click the stale server object and choose Delete.
To delete an Exchange 2010 database, you will need to use the following steps:
Open the Exchange Management Shell.
Type the following command and press ENTER (substitute your database name):
Remove-MailboxDatabase -Identity "<DatabaseName>"
The shell will prompt you for confirmation. The command removes the database object; the .edb and log files left on disk must be deleted manually afterwards.
The Exchange Admin Center is not a folder on disk; it is the web-based console introduced with Exchange 2013, opened in a browser at https://<servername>/ecp. (Exchange 2003 predates it and is managed with Exchange System Manager instead.)
Exchange Online is a cloud-hosted subscription service operated by Microsoft, while Exchange Server is on-premises commercial software that you license, install, and run yourself.
The Exchange Management Console (Exchange 2007/2010) is opened on the server from Start > All Programs > Microsoft Exchange Server; Exchange 2013 and later dropped it in favor of the web-based Exchange Admin Center.
No, not entirely. Uninstalling Exchange removes that server's own objects, but the Active Directory schema extensions and many Exchange attributes remain; schema changes cannot be rolled back.
I started working on a binding for the Frigate NVR software. I created a bridge for the server connection, but events are sent through MQTT. Is it possible to have the Frigate bridge rely on another MQTT bridge? Right now I have created a separate MQTT client connection and I think it is causing issues. It throws exceptions when events are called, and I think it's related to MQTT and threading. Any advice would be appreciated.
If you need a working example on how to connect to MQTT, and also how to do auto discovery from mqtt messages, then check out the ‘espmilighthub’ binding. To get a binding merged you will need to use the framework to connect to MQTT and not use paho or another external library to make the connection.
openhab-addons/bundles/org.openhab.binding.mqtt.espmilighthub at main · openhab/openhab-addons (github.com)
I looked at the ‘espmilighthub’ binding as an example. The problem I have is that the MQTT messages do not specify the IP address of the Frigate server. This is the reason I created a bridge for the user to set up the IP and port of the server.
Is your target device running an on-board MQTT broker? I recall some robot lawnmower or vacuum doing something like that.
Frigate can be configured to use any MQTT server. I personally use one MQTT server for all my stuff, so it seems redundant to have multiple connections from different bindings. I did copy the iRobot code that uses an onboard MQTT server, but I am having some threading issues, I think, and I was hoping there is a better way.
Alright, so the expected use would be -
User sets up an MQTT broker, if they haven’t already.
User connects NVR to broker as client.
User installs OH MQTT binding, connects it to broker as client with standard bridge Thing, if they haven’t already.
Your OH add-on is told (by configuring a Thing) or finds out (by subscribing to magic topic) about the NVR, and auto-creates suitable channels.
Sounds like you might look at MQTT Homie extension for an example.
Yes, I understand more now why you're asking this. You can normally have binding-level config options, but your binding will show up as part of the MQTT binding, so you will need to create a new broker type to have the options available for users. This will mean a second connection to the MQTT broker if you have one of each bridge type set up, so the broker needs to allow multiple connections.
You can create extra mqtt bridges, and one did exist until it was recently removed. You can see the source code that was removed here (click on the ‘files changed’ link):
[mqtt] Remove MQTT System Broker by jimtng · Pull Request #12157 · openhab/openhab-addons (github.com)
Your xml file would look something like this. NOTE that the bindingId is just mqtt and that when your jar is in the addons folder, you will click on the mqtt binding to add things to the inbox and your binding will not show up as a separate binding…
<?xml version="1.0" encoding="UTF-8"?>
Also note that I list the supported-bridge-type-refs, so the only bridge the camera thing can use is the frigateBroker, which you will need to define and create your own handler for; PLUS you will need to handle it in your handler factory. I have not created an MQTT bridge before, but it should be the same: you just do it as if it were part of the mqtt binding and not your own binding.
FrigateHandlerFactory needs to return TRUE when the framework asks via supportsThingType(ThingTypeUID thingTypeUID). All the handlerFactories get asked if they support the ‘frigateBroker’, you need to say TRUE.
Once the framework gets a true on your handlerFactory, you need to implement createHandler(Thing thing) in your FrigateHandlerFactory to then create a handler for ‘frigateBroker’.
Your XML file needs to define the frigateBroker.
Another way is to scan all your site-local internal IPs and make an http request that only Frigate would reply to. This code shows an example of this method, but if someone does not use the default port for Frigate it would not be useful…
openhab-addons/IpObserverDiscoveryService.java at main · openhab/openhab-addons (github.com)
Have you done any more work on this? It’s promising that frigate sends snapshots over MQTT, so I was hoping to find a simple script to get these snapshots in OpenHAB. Search brought me here.
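Outside openHAB, picking Frigate's events off MQTT only takes a small script. The topic layout and payload fields below ("frigate/events" carrying JSON with an "after" object holding "camera" and "label") are assumptions based on Frigate's MQTT documentation, so adjust them to your configured topic prefix:

```python
import json

def parse_frigate_event(topic, payload, prefix="frigate"):
    """Return (camera, label) from a Frigate event message, or None if
    the message came from a different topic."""
    if topic != f"{prefix}/events":
        return None
    after = json.loads(payload).get("after") or {}
    return after.get("camera"), after.get("label")

# With a real broker you'd register this as an on_message callback in a
# client such as paho-mqtt; here we feed it a synthetic payload instead.
msg = json.dumps({"after": {"camera": "front_door", "label": "person"}})
print(parse_frigate_event("frigate/events", msg))  # ('front_door', 'person')
```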
Hello to all,
I am trying to integrate Frigate into openHAB through MQTT and the HTTP api. Has this evolved or is there any plan to evolve this binding?
Thank you in advanced for your time.
hey @rjduraocosta ,
here is a new frigate binding from jgow. check it out if it solves your requirements.
What's past the end of `environ`?
I'm facing an issue with the Free Pascal shared library startup code on Android. The Free Pascal RTL sources have the following fragment:
type
TAuxiliaryValue = cuInt32;
TInternalUnion = record
a_val: cuint32; //* Integer value */
{* We use to have pointer elements added here. We cannot do that,
though, since it does not work when using 32-bit definitions
on 64-bit platforms and vice versa. *}
end;
Elf32_auxv_t = record
a_type: cuint32; //* Entry type */
a_un: TInternalUnion;
end;
TElf32AuxiliaryVector = Elf32_auxv_t;
PElf32AuxiliaryVector = ^TElf32AuxiliaryVector;
var
psysinfo: LongWord = 0;
procedure InitSyscallIntf;
var
ep: PPChar;
auxv: PElf32AuxiliaryVector;
begin
psysinfo := 0;
ep := envp;
while ep^ <> nil do
Inc(ep);
Inc(ep);
auxv := PElf32AuxiliaryVector(ep);
repeat
if auxv^.a_type = AT_SYSINFO then begin
psysinfo := auxv^.a_un.a_val;
if psysinfo <> 0 then
sysenter_supported := 1; // descision factor in asm syscall routines
Break;
end;
Inc(auxv);
until auxv^.a_type = AT_NULL;
end;
The procedure InitSyscallIntf is invoked as part of the SO startup sequence. The envp is a unit-level variable that's initialized earlier in the startup sequence to the value of libc's environ. What it looks like to me is that the code scans the environ array past the null pointer (which I thought denoted the end of the environment block), then reads the memory beyond it.
What are they expecting to find past the end of the environ array? Probably they're making some assumptions about the structure of memory of a loaded ELF file - can I see a reference?
The link posted by Seva is what it is going by, and in it the code checks whether the sysenter instruction is supported.
This instruction allows for faster syscalls; on Linux and FreeBSD kernel based systems, Free Pascal programs generally access the kernel directly, not via libc.
See rtl/linux/i386/* and rtl/linux/x86_64/* for the syscall wrapper routines, and you'll see the test for sysenter there.
Thanks Marco; the real issue I'm facing is this.
Looks like they were going by this. There's a set of two-DWORD data chunks called auxiliary vectors past the end of environment. The first DWORD in a vector is its type, the second one is value. The types are documented in linux/auxvec.h; there are about 20 of them.
Specifically, the FPC startup is looking for the vector of type AT_SYSINFO (32).
The aux vectors are past the end of the environment block, but a copy of the aux vector block is available as a file under /proc/self/auxv.
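That /proc file is easy to inspect by hand. Here is a sketch in Python of the same (type, value) pair walk the startup code performs, using synthetic 32-bit data so it runs anywhere (AT_SYSINFO is 32 in linux/auxvec.h, AT_NULL is 0):

```python
import struct

AT_NULL, AT_SYSINFO = 0, 32

def parse_auxv(data, word_size=4):
    """Parse raw auxv bytes: (a_type, a_val) machine-word pairs,
    terminated by an AT_NULL entry, just like the RTL's scan."""
    fmt = "<2L" if word_size == 4 else "<2Q"
    out = {}
    for off in range(0, len(data), 2 * word_size):
        a_type, a_val = struct.unpack_from(fmt, data, off)
        if a_type == AT_NULL:
            break
        out[a_type] = a_val
    return out

# Two synthetic 32-bit entries: an AT_SYSINFO vector, then the terminator.
blob = struct.pack("<2L", AT_SYSINFO, 0xFFFFE400) + struct.pack("<2L", AT_NULL, 0)
print(hex(parse_auxv(blob)[AT_SYSINFO]))  # 0xffffe400
```

On a live Linux system you would pass `open("/proc/self/auxv", "rb").read()` with the native word size instead of the synthetic blob.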
According to Startup state of a Linux/i386 ELF binary the segment layout of an ELF binary has the program name after the environment array.
Actually, I'm not sure if that's what your program is accessing. That's the environment data, the environment array (the pointers to those strings) is in the initial stack frame. The above page doesn't show anything specific after that, I assume it's the local data for the main() function.
Typical use case: one identity in my db is a user, and a second identity is an Active Directory account. In an enterprise environment they belong to the same person, same ID; similarly with government, when dealing with government systems the situation should work the same.
Imagine a uni, or a system that is not strictly based on an employee contract, doing the same thing: a record of a student in the uni, and an id that is used from another org. How do I say whether they belong to the same person or to different ones?
Question: how many of you are from academic community?
?: What we do is we have a matching algo; if the name is too common and there aren't enough parameters around, it will not be matched but sent into a special queue. Once an employee account came in that way, and they found the employee's superior, asked whether he had been a student, and asked for his matriculation number.
R: So a manual process? manual process -> telephone
? Sometimes it's as easy as looking it up in the student register. There we can match on things like place of birth, another data point, but this isn't fed automatically into the system.
R: Do you do something smarter than manual?
? If the name is unique, and if the name is long enough, it might be unique in the uni; it's pretty unique in the general spectrum.
A bit of a list: if someone has the surname Mueller, it's a function of length; long names are uncommon.
? You're asking three questions: technical, business, and data quality. If I've got multiple systems in the record, what data do they have in common that I can look at, a national identifier or a date of birth? In the US, if you have an international student attending, they won't have a US student ID. It still reduces to data quality. Each of these is a great topic. The business-process question is whether there is a manual process, or someone says "I have an employee who is a student as well"; it is sometimes delegated. We got some pointers to it. There are two things we can start from: a REST API, calling systems. I've got a lot of attributes about this person, and the goal of that transaction is to say whether there is a unique identifier or not.
R: Yes no or?
?: Yes, no, maybe. No would be an error message. If I send a bunch of attributes and they match, that is a yes; if the person doesn't exist, that's also a yes. I just need to know whether I got an identifier back. Or I ran some heuristic and you might be one of these three people. In those cases you can communicate that, and then you can either return candidates or have an out-of-band mechanism that is close to the human process. At that point the request has been queued, and a human comes in and resolves it. You can think of it as a batch process as well as an interactive process: in a batch process it doesn't make sense to respond synchronously, in an interactive process it does.
The concept is that: building out a match component. The protocol is called CipherIDmatch. The protocol says whether you have a match. You can configure rules based on the attributes. If I am doing a self-registration, this flows into the match flow and the match engine says that I am one of these people. In the case of a maybe, you can generalize to two cases: one is don't match, create a new record; the other is you do match and join the record. The operations would be join and split; the protocol explains how to do that. From the match engine's side, fixing this is easy: you give me this reference identifier. The hard part is downstream: if you created an account, how do you split them? Is there a way we can automate that? That seems scary.
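A toy sketch of that yes/no/maybe decision (attribute names and the two-tier rule are invented for illustration, not part of the protocol):

```python
def match(candidate, records, strong_keys=("national_id",), weak_keys=("name", "dob")):
    """Return ('yes', id) on a strong-key hit or an unambiguous weak match,
    ('yes', None) for a brand-new person, or ('maybe', ids) to queue for a human."""
    strong = [r for r in records
              if any(k in candidate and candidate[k] == r.get(k) for k in strong_keys)]
    if strong:
        return "yes", strong[0]["id"]
    weak = [r for r in records
            if all(candidate.get(k) == r.get(k) for k in weak_keys)]
    if len(weak) == 1:
        return "yes", weak[0]["id"]
    if weak:
        return "maybe", [r["id"] for r in weak]   # ambiguous: human review queue
    return "yes", None  # no match at all: mint a new reference identifier

records = [{"id": 1, "name": "A. Mueller", "dob": "1990-01-01"}]
print(match({"name": "A. Mueller", "dob": "1990-01-01"}, records))  # ('yes', 1)
```

A "no" would surface as an error response rather than a match outcome, as described above.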
R: Self-service merging?
B: If you have the attribute you can do that; with two Gmail accounts you could allow a self-service merge. It's a bit out of the scope of the API right now; that's a layer on top of it. We haven't seen a lot of that; in the university cases in particular you don't have that attribute.
I haven’t seen any real data but it’s different.
R: How do you walk the user through the use-case?
B: Not much more complicated than to ask: do you already have an id or not? Probably we can trust that, or send a reset token for it. There are a couple of implementations of it, not only the open-source component. The idea is to put it in front of your legacy components and, when you're ready, just point the new stuff to the match component. We've seen some implementations of that, but I am not that sure.
Maybe merge with the scim team?
Possibly. Some tech bits make sense and some don't; SCIM has a much larger protocol than what you need.
R: What about manual processes? B: A goal of getting 99% automated, but there is a level of manual processing. Father and son with the same name but different birthdays. Or twins with the same name born on the same day. There's a desire to not collect national identifiers anymore.
R: We want to limit the data and the number of identifiers with GDPR. We realized this is the same person; what to do? Merge them? B: The concept of the match engine is to persist the reference identifier and maintain it in the match engine; in midPoint you don't track the national identifier, but any of these other identifiers. Then if you realize it's the same person later, there's a portion of the API that allows you to handle this.
Rainer: The law specifically prohibits using the national identifiers, it doesn’t prohibit you to use derived identifiers. a hash of prohibited identifiers is perfectly legal. but if you need to rematch it’s very valuable.
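The derived-identifier point can be made concrete with a keyed hash; the salt name and scheme below are illustrative, not something from the discussion:

```python
import hashlib, hmac

def derived_id(national_id, salt=b"per-deployment-secret"):
    """Store this instead of the raw national identifier: the same input
    always rematches later, but the original value can't be read back out."""
    return hmac.new(salt, national_id.encode(), hashlib.sha256).hexdigest()

assert derived_id("1234567890") == derived_id("1234567890")  # stable for rematching
assert derived_id("1234567890") != derived_id("1234567891")  # distinct people differ
```

Note that hashing discards input checks such as transposition detection, so any check-digit validation has to happen before the value is hashed.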
How would the situation even occur? this use-case.
I am trying to explore the idea.
? But once it’s matched that’s it. But you wouldn’t reevaluate it if it’s in the queue for a quality process?
B: Initial match and don’t do rematching but you can imagine scenarios that you want to do it, if something is manually changed, maybe it should be rematched.
For the student email, do you give the same email to them? The most common pattern, if the student leaves for a semester and comes back, is to try to relink them to their old data and reactivate it. That works if you have a strong identifier; otherwise it may cause a problem.
Does the campus allow the reuse?
B: About the Conway hash: neither the API nor the initial implementation defines any of the attributes. You can hash the national identifiers; the downside is that you lose transposition checking.
Australians have a university national identifier. A very user oriented process.
R: This account starts with something really simple, an anonymous account at university A. If this person wants to get a service, I may want more information, like full name or email address, to be filled out before he gets access to the service. There needs to be a reverse process as well, to shrink it: if I don't need an email, to be able to remove it from the profile.
Reversing can be tricky because if you need the info at some point again, you need to find a way to get the information back.
We’re encrypting passwords and then storing the password on the database and the key into the key store. If anyone attacks the IDM they get both.
Maybe store it on tapes like some systems: what this org does is encrypt everything and hold the key, and under GDPR it's legal that if you delete the key, you have deleted the data. Backups were another reason why you need to do that, because you have logs that contain user information; if you want to delete the user data you'd otherwise have to go through all backups and remove the stuff. That's how you wipe iPhones; it's the same concept.
For every value that is given, accountability is important: to know why you have it. Technically, probably nobody does it. Machines might be able to do it. In some situations it was requested because you needed some kind of service.
ReferenceLoopHandling.Ignore seems to have stopped working
I have been serializing some entity framework classes with JsonConvert.SerializeObject for a few weeks now, using ReferenceLoopHandling.Ignore, to ignore any classes which create reference loops. It seemed to be working fine, up until the past few days (probably when I moved from 12.0.2 to 12.0.3). Around that time, we started getting stack overflows when customers would hit the code that serializes those same entity framework classes (the specific one is called NSCustomerAddress). I ended up having to use a custom contract resolver that ignores any properties that are classes, to solve the problem.
Source/destination types
public class NSCustomerAddress
{
public NSCustomerAddress();
public string Line1 { get; set; }
public string PhoneNo { get; set; }
public string CountryCode { get; set; }
public string Country { get; set; }
public string PostalCode { get; set; }
public string StateProvince { get; set; }
public string City { get; set; }
public string Line3 { get; set; }
public string Line2 { get; set; }
public virtual NSCustomer NSCustomer { get; set; }
public string Attention { get; set; }
public string Addressee { get; set; }
public int ID { get; set; }
public string Label { get; set; }
public bool IsDefaultShippingAddress { get; set; }
public bool IsDefaultBillingAddress { get; set; }
public int CustomerID { get; set; }
public int? AddressID { get; set; }
public string ToString(bool includeLabel = false);
}
Expected behavior
It ignores any referential loops, avoiding any out-of-memory or stack overflow situations.
Actual behavior
It appears to still be serializing referential loops, even though it detects them, and throws an error if you set ReferenceLoopHandling to Error instead of Ignore. Its behavior seems to be as if it's set to Serialize, instead.
Steps to reproduce
recordObject being passed in this instance is NSCustomerAddress.
if (recordObject != null)
{
var serializerSettings = new JsonSerializerSettings {
ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
PreserveReferencesHandling = PreserveReferencesHandling.None,
ContractResolver = new CTSShared.PrimitiveTypesOnlyContractResolver()
};
queueItem.Data = JsonConvert.SerializeObject(recordObject, serializerSettings);
}
Duplicate of #1929 ?
Good call on the custom ContractResolver. I'm going to borrow that approach.
@JamesNK would someone be able to take a look at this bug? It is not good. There are a few issues created about it. Let me know if you want a reproduction case.
Thanks!!
I spent time debugging; like #2196, this is not a bug in the serializer.
I'm certain that if I do this then ReferenceLoopHandling.Ignore will work correctly:
var customerAddress = new NSCustomerAddress();
var customer = new NSCustomer();
customerAddress.NSCustomer = customer;
customer.Addresses.Add(customerAddress);
// serialize customerAddress
The code for reference loop handling hasn't changed in years.
If EF does crazy things when creating the objects, or generating its proxies, to create loops that Newtonsoft.Json can't detect then it isn't going to work.
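For intuition, reference-identity loop detection of the kind ReferenceLoopHandling.Ignore performs can be sketched in Python (a simplification: Json.NET omits the looping property entirely, while this writes None):

```python
def to_jsonable(obj, stack=None):
    """Serialize nested dicts, dropping any object already on the active
    serialization stack, i.e. a reference loop."""
    stack = stack if stack is not None else []
    if any(o is obj for o in stack):      # identity check, as Json.NET does
        return None
    if isinstance(obj, dict):
        stack.append(obj)
        try:
            return {k: to_jsonable(v, stack) for k, v in obj.items()}
        finally:
            stack.pop()
    return obj

address = {"line1": "1 Main St"}
customer = {"name": "Acme", "address": address}
address["customer"] = customer            # the loop EF-style navigation creates
print(to_jsonable(address))
# {'line1': '1 Main St', 'customer': {'name': 'Acme', 'address': None}}
```

If a proxy hands back a fresh instance on every property access, the identity check never fires, which is exactly the class of loop the serializer cannot detect.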
@JamesNK thank you for looking; I will do some digging on my end and either verify that it's an EF thing, or come back here with an Newtonsoft-only reproduction case. Cheers
No one has provided a reproduction of this issue that doesn't involve a third-party library like EF or AutoMapper. Closing.
//
//
//
#![no_std]
#![feature(lang_items)]
mod elf_fmt;
#[inline(never)]
fn log_closure<F: FnOnce(&mut ::core::fmt::Write)>(f: F) {
use core::fmt::Write;
let mut lh = ::Logger;
let _ = write!(lh, "[loader log] ");
f(&mut lh);
let _ = write!(lh, "\n");
}
/// Stub logging macro
macro_rules! log{
($($v:tt)*) => {{
::log_closure(|lh| {let _ = write!(lh, $($v)*);});
}};
}
pub struct ElfFile(elf_fmt::ElfHeader);
impl ElfFile
{
pub fn check_header(&self) {
assert_eq!(&self.0.e_ident[..8], b"\x7FELF\x01\x01\x01\x00"); // Elf32, LSB, Version, Pad
assert_eq!(self.0.e_version, 1);
}
fn phents(&self) -> PhEntIter {
assert_eq!( self.0.e_phentsize as usize, ::core::mem::size_of::<elf_fmt::PhEnt>() );
// SAFE: Assuming the file is correct...
let slice: &[elf_fmt::PhEnt] = unsafe {
let ptr = (&self.0 as *const _ as usize + self.0.e_phoff as usize) as *const elf_fmt::PhEnt;
::core::slice::from_raw_parts( ptr, self.0.e_phnum as usize )
};
log!("phents() - slice = {:p}+{}", slice.as_ptr(), slice.len());
PhEntIter( slice )
}
fn shents(&self) -> &[elf_fmt::ShEnt] {
assert_eq!( self.0.e_shentsize as usize, ::core::mem::size_of::<elf_fmt::ShEnt>() );
// SAFE: Assuming the file is correct...
unsafe {
let ptr = (&self.0 as *const _ as usize + self.0.e_shoff as usize) as *const elf_fmt::ShEnt;
::core::slice::from_raw_parts( ptr, self.0.e_shnum as usize )
}
}
pub fn entrypoint(&self) -> usize {
self.0.e_entry as usize
}
}
struct PhEntIter<'a>(&'a [elf_fmt::PhEnt]);
impl<'a> Iterator for PhEntIter<'a> {
type Item = elf_fmt::PhEnt;
fn next(&mut self) -> Option<elf_fmt::PhEnt> {
if self.0.len() == 0 {
None
}
else {
let rv = self.0[0].clone();
self.0 = &self.0[1..];
Some(rv)
}
}
}
//struct ShEntIter<'a>(&'a [elf_fmt::ShEnt]);
//impl<'a> Iterator for ShEntIter<'a> {
// type Item = elf_fmt::ShEnt;
// fn next(&mut self) -> Option<elf_fmt::ShEnt> {
// if self.0.len() == 0 {
// None
// }
// else {
// let rv = self.0[0].clone();
// self.0 = &self.0[1..];
// Some(rv)
// }
// }
//}
#[no_mangle]
pub extern "C" fn elf_get_size(file_base: &ElfFile) -> u32
{
log!("elf_get_size(file_base={:p})", file_base);
file_base.check_header();
let mut max_end = 0;
for phent in file_base.phents()
{
if phent.p_type == 1
{
log!("- {:#x}+{:#x} loads +{:#x}+{:#x}",
phent.p_paddr, phent.p_memsz,
phent.p_offset, phent.p_filesz
);
let end = (phent.p_paddr + phent.p_memsz) as usize;
if max_end < end {
max_end = end;
}
}
}
// Round the image size to 4KB
let max_end = (max_end + 0xFFF) & !0xFFF;
log!("return load_size={:#x}", max_end);
if max_end == 0 {
log!("ERROR!!! Kernel reported zero loadable size");
loop {}
}
max_end as u32
}
#[no_mangle]
/// Returns program entry point
pub extern "C" fn elf_load_segments(file_base: &ElfFile, output_base: *mut u8) -> u32
{
log!("elf_load_segments(file_base={:p}, output_base={:p})", file_base, output_base);
for phent in file_base.phents()
{
if phent.p_type == 1
{
log!("- {:#x}+{:#x} loads +{:#x}+{:#x}",
phent.p_paddr, phent.p_memsz,
phent.p_offset, phent.p_filesz
);
let (dst,src) = unsafe {
let dst = ::core::slice::from_raw_parts_mut( (output_base as usize + phent.p_paddr as usize) as *mut u8, phent.p_memsz as usize );
let src = ::core::slice::from_raw_parts( (file_base as *const _ as usize + phent.p_offset as usize) as *const u8, phent.p_filesz as usize );
(dst, src)
};
for (d, v) in Iterator::zip( dst.iter_mut(), src.iter().cloned().chain(::core::iter::repeat(0)) )
{
*d = v;
}
}
}
let rv = (file_base.entrypoint() - 0x80000000 + output_base as usize) as u32;
log!("return entrypoint={:#x}", rv);
rv
}
#[repr(C)]
#[derive(Debug)]
pub struct SymbolInfo {
base: *const elf_fmt::SymEnt,
count: usize,
string_table: *const u8,
strtab_len: usize,
}
#[no_mangle]
/// Returns size of data written to output_base
pub extern "C" fn elf_load_symbols(file_base: &ElfFile, output: &mut SymbolInfo) -> u32
{
log!("elf_load_symbols(file_base={:p}, output={:p})", file_base, output);
*output = SymbolInfo {base: 0 as *const _, count: 0, string_table: 0 as *const _, strtab_len: 0};
let mut pos = ::core::mem::size_of::<SymbolInfo>();
for ent in file_base.shents()
{
if ent.sh_type == 2
{
log!("Symbol table at +{:#x}+{:#x}, string table {}", ent.sh_offset, ent.sh_size, ent.sh_link);
let strtab = file_base.shents()[ent.sh_link as usize];
let strtab_bytes = unsafe { ::core::slice::from_raw_parts( (file_base as *const _ as usize + strtab.sh_offset as usize) as *const u8, strtab.sh_size as usize ) };
//log!("- strtab = {:?}", ::core::str::from_utf8(strtab_bytes));
output.base = (output as *const _ as usize + pos) as *const _;
output.count = ent.sh_size as usize / ::core::mem::size_of::<elf_fmt::SymEnt>();
unsafe {
let bytes = ent.sh_size as usize;
let src = ::core::slice::from_raw_parts( (file_base as *const _ as usize + ent.sh_offset as usize) as *const elf_fmt::SymEnt, output.count );
let dst = ::core::slice::from_raw_parts_mut( output.base as *mut elf_fmt::SymEnt, output.count );
for (d,s) in Iterator::zip( dst.iter_mut(), src.iter() ) {
//log!("- {:?} = {:#x}+{:#x}", ::core::str::from_utf8(&strtab_bytes[s.st_name as usize..].split(|&v|v==0).next().unwrap()), s.st_value, s.st_size);
*d = *s;
}
pos += bytes;
}
output.string_table = (output as *const _ as usize + pos) as *const _;
output.strtab_len = strtab.sh_size as usize;
unsafe {
let bytes = strtab.sh_size as usize;
let src = ::core::slice::from_raw_parts( (file_base as *const _ as usize + strtab.sh_offset as usize) as *const u8, bytes );
let dst = ::core::slice::from_raw_parts_mut( output.string_table as *mut u8, bytes );
for (d,s) in Iterator::zip( dst.iter_mut(), src.iter() ) {
*d = *s;
}
pos += bytes;
}
break ;
}
}
log!("- output = {:?}", output);
pos as u32
}
//
//
//
#[lang="eh_personality"]
fn eh_personality() -> ! {
loop {}
}
#[lang="panic_fmt"]
fn panic_fmt() -> ! {
loop {}
}
extern "C" {
fn puts(_: *const u8, _: u32);
}
struct Logger;
impl ::core::fmt::Write for Logger {
fn write_str(&mut self, s: &str) -> ::core::fmt::Result
{
// SAFE: Single-threaded
unsafe {
puts(s.as_ptr(), s.len() as u32);
}
Ok( () )
}
}
I'm thinking of way to use VIOS's file-backed virtual optical device, for backing up my data.
First I've made a read/write-enabled virtual disk on my VIOS (184.108.40.206-FP-22) and loaded it onto my virtual target device.
Size(mb) Free(mb) Parent Pool Parent Size Parent Free
10126 0 vopt_vg 10168 0
Name File Size Optical Access
SLES11_DVD1 2885 None rw
SLES11_DVD2 3734 None rw
rw_device 3507 lpar510_cd rw
(the "rw_device" is the virtual disk, and "lpar510_cd" is the virtual target device)
However, when I try to format the disc on my SLES 11 machine I get the following error, indicating that "controller does not support CD write parameter page."
lpar510:~ # cdrecord -scanbus
0,0,0 0) 'AIX ' 'VOPTA ' '' Removable CD-ROM
0,1,0 1) *
0,2,0 2) *
0,3,0 3) *
0,4,0 4) *
0,5,0 5) *
0,6,0 6) *
0,7,0 7) *
lpar510:~ # cdrecord blank=fast dev=0,0,0
WARNING: the deprecated pseudo SCSI syntax found as device specification.
Support for that may cease in the future versions of wodim. For now,
the device will be mapped to a block device file where possible.
Run "wodim --devices" for details.
Device type : Removable CD-ROM
Version : 4
Response Format: 2
Capabilities : CMDQUE
Vendor_info : 'AIX '
Identification : 'VOPTA '
Revision : ''
Device seems to be: Generic CD-ROM.
wodim: Sorry, no CD/DVD-Recorder or unsupported CD/DVD-Recorder found on this target.
Using generic SCSI-2 CD-ROM driver (scsi2_cd).
Driver flags :
wodim: Warning: controller does not support CD write parameter page.
wodim: Cannot set speed/dummy.
- Is there any information that Linux (on Power) does not support writing on virtual disks?
Pinned topic Using Read/Write enable Virtual Optical Device
Updated on 2010-01-29T07:49:44Z by SystemAdmin
Re: Using Read/Write enable Virtual Optical Device (2010-01-24T23:37:22Z)
A little bit more info: I checked the /proc directory, and it seems that writing is not supported.
- cat /proc/sys/dev/cdrom/info
drive name: sr0
drive speed: 1
drive # of slots: 1
Can close tray: 1
Can open tray: 1
Can lock tray: 1
Can change speed: 0
Can select disk: 0
Can read multisession: 1
Can read MCN: 1
Reports media changed: 1
Can play audio: 1
Can write CD-R: 0
Can write CD-RW: 0
Can read DVD: 0
Can write DVD-R: 0
Can write DVD-RAM: 0
Can read MRW: 0
Can write MRW: 0
Can write RAM: 0
from this may I conclude that you CANNOT write on a virtual disk?
Re: Using Read/Write enable Virtual Optical Device (2010-01-26T16:53:30Z)
- SystemAdmin 110000D4XK
Re: Using Read/Write enable Virtual Optical Device (2010-01-27T09:25:34Z)
Thank you for your support,
Here's more information if it helps.
When mapping a physical CD drive to a virtual optical device, the driver information says that it is capable of writing DVD-RAM (only).
<FROM VIOS >
(lp48_cd is a file-backed optical device, vopt0 is a virtual SCSI device mapped to a physical CD)
lsmap -vadapter vhost5
SVSA Physloc Client Partition ID
vhost5 U9131.52A.067F69G-V1-C108 0x00000008
Backing device /var/vio/VMLibrary/RHEL53
Backing device hdisk8
Backing device cd0
<From RHEL 5.3 >
300000d0 v-scsi U9131.52A.067F69G-V8-C208-T1
1:0:2:0 sg1 sdb U9131.52A.067F69G-V8-C208-T1-L1-L0
3000006c v-scsi U9131.52A.067F69G-V8-C108-T1
0:0:2:0 sg3 sr1 U9131.52A.067F69G-V8-C108-T1-L0-L0
0:0:1:0 sg2 sr0 U9131.52A.067F69G-V8-C108-T1-L0-L0
0:0:3:0 sg0 sda U9131.52A.067F69G-V8-C108-T1-L0-L0
CD-ROM information, Id: cdrom.c 3.20 2003/12/17
drive name: sr1 sr0
drive speed: 24 1
drive # of slots: 1 1
Can close tray: 1 1
Can open tray: 1 1
Can lock tray: 1 1
Can change speed: 1 0
Can select disk: 0 0
Can read multisession: 1 1
Can read MCN: 1 1
Reports media changed: 1 1
Can play audio: 1 1
Can write CD-R: 0 0
Can write CD-RW: 0 0
Can read DVD: 1 0
Can write DVD-R: 0 0
Can write DVD-RAM: 1 0
Can read MRW: 1 0
Can write MRW: 1 0
Can write RAM: 1 1
sr1 shows that it can handle DVD-RAM
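The side-by-side listing above is the multi-drive format of /proc/sys/dev/cdrom/info: one capability column per drive, in the order given on the "drive name" row. A quick way to compare the two drives is to split the columns programmatically; the following is only an illustrative Python sketch (not part of the original thread), assuming the listing text has been read from /proc/sys/dev/cdrom/info:

```python
def parse_cdrom_info(text):
    """Split /proc/sys/dev/cdrom/info columns into a {drive: {field: value}} dict."""
    drives, caps = [], {}
    for line in text.splitlines():
        field, sep, rest = line.partition(':')
        if not sep:
            continue
        values = rest.split()
        if field.strip() == 'drive name':
            drives = values
            caps = {d: {} for d in drives}
        else:
            # Each capability row has one value per drive, in drive-name order
            for drive, value in zip(drives, values):
                caps[drive][field.strip()] = value
    return caps

sample = """drive name:             sr1     sr0
Can write DVD-RAM:      1       0
Can write CD-R:         0       0"""

info = parse_cdrom_info(sample)
# info['sr1']['Can write DVD-RAM'] == '1'; info['sr0']['Can write DVD-RAM'] == '0'
```

This makes the point of the thread easy to verify mechanically: only sr1 (the physically-backed device) reports a writable capability.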
Re: Using Read/Write enable Virtual Optical Device (2010-01-27T09:29:43Z)
- SystemAdmin 110000D4XK
Forgot to explain that the above was tested on a different machine, which runs RHEL 5.3.
Still, both SLES 11 and RHEL 5.3 report that writing isn't supported.
Brian_King 120000K4SY
Re: Using Read/Write enable Virtual Optical Device (2010-01-28T18:30:45Z)
- SystemAdmin 110000D4XK
Re: Using Read/Write enable Virtual Optical Device (2010-01-29T07:49:44Z)
- Brian_King 120000K4SY
I understand the situation: there is no fix or workaround.
As for the alternative, I was actually unaware of the "logical volume backed virtual disk".
I'll try it out and give feedback.
package jsoniter
import (
"fmt"
"github.com/modern-go/reflect2"
"io"
"reflect"
"unsafe"
)
// encoderOfStruct builds a ValEncoder for a struct type: it expands every
// field into its serialized names and resolves conflicts between same-named
// fields before assembling the final field order.
func encoderOfStruct(ctx *ctx, typ reflect2.Type) ValEncoder {
type bindingTo struct {
binding *Binding
toName string
ignored bool
}
orderedBindings := []*bindingTo{}
structDescriptor := describeStruct(ctx, typ)
for _, binding := range structDescriptor.Fields {
for _, toName := range binding.ToNames {
new := &bindingTo{
binding: binding,
toName: toName,
}
for _, old := range orderedBindings {
if old.toName != toName {
continue
}
old.ignored, new.ignored = resolveConflictBinding(ctx.frozenConfig, old.binding, new.binding)
}
orderedBindings = append(orderedBindings, new)
}
}
if len(orderedBindings) == 0 {
return &emptyStructEncoder{}
}
finalOrderedFields := []structFieldTo{}
for _, bindingTo := range orderedBindings {
if !bindingTo.ignored {
finalOrderedFields = append(finalOrderedFields, structFieldTo{
encoder: bindingTo.binding.Encoder.(*structFieldEncoder),
toName: bindingTo.toName,
})
}
}
return &structEncoder{typ, finalOrderedFields}
}
func createCheckIsEmpty(ctx *ctx, typ reflect2.Type) checkIsEmpty {
encoder := createEncoderOfNative(ctx, typ)
if encoder != nil {
return encoder
}
kind := typ.Kind()
switch kind {
case reflect.Interface:
return &dynamicEncoder{typ}
case reflect.Struct:
return &structEncoder{typ: typ}
case reflect.Array:
return &arrayEncoder{}
case reflect.Slice:
return &sliceEncoder{}
case reflect.Map:
return encoderOfMap(ctx, typ)
case reflect.Ptr:
return &OptionalEncoder{}
default:
return &lazyErrorEncoder{err: fmt.Errorf("unsupported type: %v", typ)}
}
}
func resolveConflictBinding(cfg *frozenConfig, old, new *Binding) (ignoreOld, ignoreNew bool) {
newTagged := new.Field.Tag().Get(cfg.getTagKey()) != ""
oldTagged := old.Field.Tag().Get(cfg.getTagKey()) != ""
if newTagged {
if oldTagged {
if len(old.levels) > len(new.levels) {
return true, false
} else if len(new.levels) > len(old.levels) {
return false, true
} else {
return true, true
}
} else {
return true, false
}
} else {
if oldTagged {
return true, false
}
if len(old.levels) > len(new.levels) {
return true, false
} else if len(new.levels) > len(old.levels) {
return false, true
} else {
return true, true
}
}
}
type structFieldEncoder struct {
field reflect2.StructField
fieldEncoder ValEncoder
omitempty bool
}
func (encoder *structFieldEncoder) Encode(ptr unsafe.Pointer, stream *Stream) {
fieldPtr := encoder.field.UnsafeGet(ptr)
encoder.fieldEncoder.Encode(fieldPtr, stream)
if stream.Error != nil && stream.Error != io.EOF {
stream.Error = fmt.Errorf("%s: %s", encoder.field.Name(), stream.Error.Error())
}
}
func (encoder *structFieldEncoder) IsEmpty(ptr unsafe.Pointer) bool {
fieldPtr := encoder.field.UnsafeGet(ptr)
return encoder.fieldEncoder.IsEmpty(fieldPtr)
}
func (encoder *structFieldEncoder) IsEmbeddedPtrNil(ptr unsafe.Pointer) bool {
isEmbeddedPtrNil, converted := encoder.fieldEncoder.(IsEmbeddedPtrNil)
if !converted {
return false
}
fieldPtr := encoder.field.UnsafeGet(ptr)
return isEmbeddedPtrNil.IsEmbeddedPtrNil(fieldPtr)
}
type IsEmbeddedPtrNil interface {
IsEmbeddedPtrNil(ptr unsafe.Pointer) bool
}
type structEncoder struct {
typ reflect2.Type
fields []structFieldTo
}
type structFieldTo struct {
encoder *structFieldEncoder
toName string
}
func (encoder *structEncoder) Encode(ptr unsafe.Pointer, stream *Stream) {
stream.WriteObjectStart()
isNotFirst := false
for _, field := range encoder.fields {
if field.encoder.omitempty && field.encoder.IsEmpty(ptr) {
continue
}
if field.encoder.IsEmbeddedPtrNil(ptr) {
continue
}
if isNotFirst {
stream.WriteMore()
}
stream.WriteObjectField(field.toName)
field.encoder.Encode(ptr, stream)
isNotFirst = true
}
stream.WriteObjectEnd()
if stream.Error != nil && stream.Error != io.EOF {
stream.Error = fmt.Errorf("%v.%s", encoder.typ, stream.Error.Error())
}
}
func (encoder *structEncoder) IsEmpty(ptr unsafe.Pointer) bool {
return false
}
type emptyStructEncoder struct {
}
func (encoder *emptyStructEncoder) Encode(ptr unsafe.Pointer, stream *Stream) {
stream.WriteEmptyObject()
}
func (encoder *emptyStructEncoder) IsEmpty(ptr unsafe.Pointer) bool {
return false
}
type stringModeNumberEncoder struct {
elemEncoder ValEncoder
}
func (encoder *stringModeNumberEncoder) Encode(ptr unsafe.Pointer, stream *Stream) {
stream.writeByte('"')
encoder.elemEncoder.Encode(ptr, stream)
stream.writeByte('"')
}
func (encoder *stringModeNumberEncoder) IsEmpty(ptr unsafe.Pointer) bool {
return encoder.elemEncoder.IsEmpty(ptr)
}
type stringModeStringEncoder struct {
elemEncoder ValEncoder
cfg *frozenConfig
}
func (encoder *stringModeStringEncoder) Encode(ptr unsafe.Pointer, stream *Stream) {
tempStream := encoder.cfg.BorrowStream(nil)
tempStream.Attachment = stream.Attachment
defer encoder.cfg.ReturnStream(tempStream)
encoder.elemEncoder.Encode(ptr, tempStream)
stream.WriteString(string(tempStream.Buffer()))
}
func (encoder *stringModeStringEncoder) IsEmpty(ptr unsafe.Pointer) bool {
return encoder.elemEncoder.IsEmpty(ptr)
}
|Research, training, consultancy and software to reduce IT costs|
Awaken the cyborg within
The science-fiction staple of the superhuman half-man half-computer cyborg is a good model for our use of computers. But to grasp this power, we need to learn more about the true nature of our computers, take more responsibility for them, and apply some common sense to their use.
In a very real sense, we can all be cyborgs. Computers can remember, calculate and communicate. They can confer on us superhuman capabilities of memory, calculation and near telepathic communication. They give us control over the technological world around us. We can combine our form and their abilities to lead efficient and productive lives.
We might fear this view. But the opposite view, that computers are separate from us, is surely more scary. We have to realise that we can use this power, and learn to use it for our good.
To grasp our cyborg potential, we need to refine our attitudes toward IT: what it is, how it helps and how it should be used. We also need to identify and apply appropriate skills to our use of IT.
In the past few weeks, this newsletter has covered many of the attitudes that both IT suppliers and customers need to adopt to use IT effectively. We all need to stop thinking of IT as magic, but see it as an understandable tool which adds value by improving efficiency and making the infeasible possible. We need to avoid misusing IT as a way of fixing business problems, restructuring organisations or leading business change: all these need to be addressed in their own right before IT is considered.
As well as addressing general attitudes in IT, if you manage the use of IT, you need to take charge of your computers' abilities to remember, calculate and communicate. This is vital. It can't be left to the IT department or supplier. A selection of out-of-date specifications and diagrams in a cupboard won't do. You have to understand fully what the computers are doing on your behalf, so that you can control them, just like you have to understand and control your team.
The skills you need are simple, but often overlooked. IT training generally misses the point. Technical training covers hardware, software, analysis and engineering skills. User training covers how to use the computer. These skills may make you more confident and capable using the technology, but they don't directly help you understand how to do anything useful with it. Most IT training is either like learning to operate an oven, or learning to mend an oven: but none of it teaches you how to cook.
Instead, you need to apply everyday common-sense skills to take charge of IT: delegation, careful definition, clear instruction, and disciplines of filing and administration. These have nothing to do with understanding how to configure a web server, draw a process model, or make text bold in a word processor, but they are what you need to master IT's power and use it to serve you.
Coming newsletters will explore these everyday common-sense skills in more detail. By applying these skills and adopting appropriate attitudes to IT, we can all fulfil our destiny as superhuman cyborgs.
Next: Everyday skills part 1: Delegation, definition and instruction
Minimal IT: research, training, consultancy and software to reduce IT costs.
Last week I realized that there was a downside to how I'd hooked up my laptop to my HDTV. The laptop ended up either connected to the TV on top of my stereo cabinet, forcing me to stand at the computer to use it, or in my lap on the sofa but not connected to the TV. So, I started poking around for wireless keyboards and mice. Thinking that a mouse wasn't going to work all that well, and would just be one more "remote" in my living room, I shopped for a wireless keyboard with a built-in trackpad. What I found was the Logitech diNovo Edge Keyboard.
At $149 this keyboard is a smidge on the pricey side, but it's worth every penny. It's light, it's thin, it doesn't need AA or AAA batteries thanks to the built-in Li-Ion batteries and charging station, and it has an on-board "touch disc". The batteries are quoted as lasting a month or more on a single charge, and based on other reviews I have no reason to doubt that. The "touch disc" is a round touchpad, but with two special spots on the disc that allow for both vertical and horizontal scrolling once you get the hang of it.
Stylistically, this is a gorgeous piece of equipment. It's sleek and black, and has many backlit icons for special functions. (For example, the ring around the touch disc lights up when you're using it and slowly fades out when you stop.) And, as other reviewers have said, it even looks great sitting in the charging station.
The wireless connection runs on Bluetooth, and this is where I ran into some problems. My Vista Ultimate laptop has built-in Bluetooth, but I'd not used it in the past as this is my first Bluetooth device for a computer. So, I went into the Bluetooth configuration settings and instructed the computer to find the device. I'd pressed all the right buttons and read all the instructions a dozen times, but the computer continually failed to find the keyboard. So, as a test, I plugged in the USB dongle included for computers that don't have Bluetooth, and everything connected almost instantly. Therefore, the problem was with my computer, not the keyboard.
An hour later, having read many a support document, it seemed that Bluetooth was “running” (at least there was a Bluetooth icon in my system tray) but it was “turned off”. Using the function keys to turn it on didn’t work since, ultimately, the laptop was refusing to recognize the built in Bluetooth hardware. My guess is that something happened in the upgrade to Vista.
I found updated drivers from Gateway, downloaded them, and ran the install program. The install program informed me that I had to first uninstall the old version. Off to Add/Remove Programs to uninstall Bluetooth. Upon reboot, Vista found the hardware and reinstalled the (original) drivers, and before trying to install the new drivers, I tried again. This time, the keyboard connected as it should.
(I don’t blame the keyboard or Logitech for this at all. I mention it since others might have a similar problem.)
I then installed the Logitech software which seems to give me some additional options and customizations but I’m in no hurry to investigate those since so far I’ve been able to do everything I’ve needed to do.
The only other odd thing I've noticed is the keyboard's volume control. Using the keyboard to raise and lower the volume seems only to work within a small range, not silent-to-blaring like you'd expect. I'm sure this again has something to do with the OS's volume settings and not a problem with the keyboard itself. This is also something I'm not all that worried about, as I'll probably use the TV remote's volume control more than anything else.
For those interested, a slideshow of the unpacking, Bluetooth installation(sans screenshots of the hour of troubleshooting) and Logitech software installation, can be found in my Flickr account.
Standalone Version
Hello,
This is awesome work! I was wondering if there is any way to skip the Discord integration and get results by calling the script on the CLI?
Thank you,
Ata G
Do you still want to train the model on another user so that it can imitate that user? Or do you just want to interact with the default chat model?
I guess training could be done with provided text file beforehand.
That sounds awesome.
@atagulalan you can try it out at the following branch https://github.com/CakeCrusher/mimicbot/tree/discord-independent-mimicbot
The forge is ready. Just run python -m mimicbot forge and the CLI will run you through it.
Let me know what you think.
Thank you, trying it now.
First, I tried to run python -m mimicbot forge and it gave me this error:
Traceback (most recent call last):
File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\x\Documents\GitHub\mimicbot\mimicbot\__main__.py", line 1, in <module>
from mimicbot import cli, __app_name__
File "C:\Users\x\Documents\GitHub\mimicbot\mimicbot\cli.py", line 428, in <module>
utils.session_path(utils.callback_config()),
File "C:\Users\x\Documents\GitHub\mimicbot\mimicbot\utils.py", line 78, in callback_config
with open(str(config_path), "w") as config_file:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\x\\AppData\\Roaming\\mimicbot\\config.ini'
I thought to myself, "oh, it couldn't create the config file because we didn't create the folder that should contain it"; so I created the folder myself and got this error instead:
Traceback (most recent call last):
File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\x\Documents\GitHub\mimicbot\mimicbot\__main__.py", line 1, in <module>
from mimicbot import cli, __app_name__
File "C:\Users\x\Documents\GitHub\mimicbot\mimicbot\cli.py", line 428, in <module>
utils.session_path(utils.callback_config()),
File "C:\Users\x\Documents\GitHub\mimicbot\mimicbot\utils.py", line 93, in session_path
GUILD = config.get("discord", "guild")
File "C:\Python310\lib\configparser.py", line 782, in get
d = self._unify_values(section, vars)
File "C:\Python310\lib\configparser.py", line 1153, in _unify_values
raise NoSectionError(section) from None
configparser.NoSectionError: No section: 'discord'
Should I create a prefilled config file containing those values, or is forge supposed to do that?
Thank you!
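The two tracebacks above share a root cause: the code assumes the config directory, file, and its sections already exist. A defensive loader along these lines would sidestep both failures (illustrative sketch only; the path handling and section names here are assumptions, not mimicbot's actual implementation):

```python
import configparser
from pathlib import Path

def load_config(config_path: Path) -> configparser.ConfigParser:
    config = configparser.ConfigParser()
    # Create the containing folder first; open(..., "w") fails if it is missing
    config_path.parent.mkdir(parents=True, exist_ok=True)
    if config_path.exists():
        config.read(config_path)
    # Guarantee the sections later code reads, so .get() can't raise NoSectionError
    for section in ("discord", "general"):  # hypothetical section names
        if not config.has_section(section):
            config.add_section(section)
    with open(config_path, "w") as config_file:
        config.write(config_file)
    return config
```

With this shape, a first run on a clean machine creates the folder, the file, and empty sections, and subsequent reads never hit a missing-section error.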
Thank you so much for pointing that out; that was a fundamental problem that I should have dealt with long ago.
Thank you, I successfully got to the Step 2 now.
Is it really necessary for the author column to be an integer? I get an error on data_preprocessing.py line 58 because of my CSV data, whose author column contains usernames.
You are absolutely right about the int issue; that is fixed. As for the naming, I will get to that soon.
Naming of members could always be the same as id. Now (2c6fb4711c45c992c3abea82b82b69dbe85b0312) the name is the id by default.
That, along with other strict inputs, is something I will need to fix at some point.
import time
import arrow as arw
import json
import requests

with open('conditions_key.json', 'r') as file:
    condition_keys = json.load(file)

def to_f(celsius):
    if celsius:
        f = int((float(celsius) * 1.8) + 32)
    else:
        f = None
    return f

def to_mph(kmh):
    if kmh:
        return int(float(kmh) / 1.609)
    return None

def get_json(url, seconds=10):
    response = requests.get(url).json()
    if response.get('status') == 503:
        print(f'Service did not respond, waiting {seconds} seconds to retry')
        time.sleep(seconds)
        seconds = seconds + 60
        response = get_json(url, seconds)
    return response

def get_nearest_station(station_url):
    stations = get_json(station_url)
    identifier = stations['features'][0]['properties']['stationIdentifier']
    return identifier

def parse_hourly(hourly):
    parsed = {}
    for period in hourly['properties']['periods']:
        position = period['number']
        day = arw.get(period['startTime']).format("DD")
        hour = arw.get(period['startTime']).format("HH")
        temperature = period['temperature']
        wind = period['windSpeed'].replace(' mph', '')
        wind_dir = period['windDirection']
        conditions_icon, precip = url_to_icon_precip(period['icon'])
        parsed[position] = {'position': position,
                            'day': day,
                            'hour': hour,
                            'temp': temperature,
                            'wind': wind,
                            'wind_dir': wind_dir,
                            'conditions_icon': conditions_icon,
                            'precip_percent': precip,
                            }
    return parsed

def get_hourly(station_info=None):
    # Fetch the point metadata only when the caller didn't supply it
    if not isinstance(station_info, dict):
        station_info = get_json(noaa_api)
    hourly_url = station_info['properties']['forecastHourly']
    hourly = get_json(hourly_url)
    formatted_hourly = parse_hourly(hourly)
    return formatted_hourly

def get_current(station_info=None):
    # Fetch the point metadata only when the caller didn't supply it
    if not isinstance(station_info, dict):
        station_info = get_json(noaa_api)
    stations_url = station_info['properties']['observationStations']
    station_id = get_nearest_station(stations_url)
    nearest_conditions_url = 'https://api.weather.gov/stations/{}/observations'.format(station_id)
    current_cond = get_json(nearest_conditions_url)
    formatted_current = parse_current(current_cond)
    return formatted_current

def parse_current(present_conditions):
    latest = present_conditions['features'][0]['properties']
    humidity = latest['relativeHumidity']['value']
    present_weather = {
        'cur_timestamp': latest['timestamp'],
        'cur_temp': to_f(latest['temperature']['value']),
        'cur_wind': to_mph(latest['windSpeed']['value']),
        'cur_wind_dir': latest['windDirection']['value'],
        'cur_humidity': int(humidity) if humidity is not None else None,
        'cur_heat_index': to_f(latest['heatIndex']['value']),
        'cur_wind_chill': to_f(latest['windChill']['value']),
    }
    return present_weather

def url_to_icon_precip(url):
    # Parse the URL down to the conditions short code
    condition_short_code_with_precip = url.split('/')[-1].split('?')[0]
    condition_short_code = condition_short_code_with_precip.split(',')[0]
    # The precip percentage follows a comma, but not always
    try:
        precip = condition_short_code_with_precip.split(',')[1]
    except IndexError:
        precip = " "
    return (condition_keys['icons'][condition_short_code]['icon'], precip)

# TODO - use config file
lat_long = "36.064444,-79.398056"
noaa_api = "https://api.weather.gov/points/" + lat_long
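One way to address the TODO above is to read the coordinates from an INI file and fall back to the hard-coded default when it is absent. This is a minimal sketch; the file name (`weather.ini`), section, and option names are assumptions, not part of the original script:

```python
import configparser

DEFAULT_LAT_LONG = "36.064444,-79.398056"

def load_lat_long(path="weather.ini"):
    # Fall back to the built-in default when the file or key is missing
    config = configparser.ConfigParser()
    found = config.read(path)
    if found and config.has_option("location", "lat_long"):
        return config.get("location", "lat_long")
    return DEFAULT_LAT_LONG
```

With this in place, `lat_long = load_lat_long()` would replace the hard-coded assignment, and users could point the script at a different location without editing the source.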
if __name__ == '__main__':
    station_info = get_json(noaa_api)
    if station_info:
        forecast_url = station_info['properties']['forecast']
        hourly_url = station_info['properties']['forecastHourly']
        grid_data_url = station_info['properties']['forecastGridData']
        stations_url = station_info['properties']['observationStations']
        city_state = station_info['properties']['relativeLocation']['properties']
        station_id = get_nearest_station(stations_url)
        # Pass the already-fetched point metadata so it isn't requested again
        current_cond = get_current(station_info)
        formatted_hourly = get_hourly(station_info)
    else:
        print('Unable to reach weather.gov; please wait and try again.')
Check out the latest Media Extended v2.10.0 update with mobile support: Release 2.10.0 · aidenlx/media-extended · GitHub
Thanks for your hard work. But I still found two more flaws. 1. A bug: when there are two videos on the page and the time codes are, say, 0:50-1:00 and 0:55-1:05, the first video plays fine, but the second one stops at 1:00. 2. Just a suggestion: it would be very convenient if a full-screen video actually filled the whole screen.
Can you open an issue on GitHub, and upload a zip file of a minimal vault with notes and video that can replicate the bug you mentioned?
How do you get this interaction to work?
When I open a YouTube link in a note it opens my browser rather than the linked viewer. Opening a timestamp also opens in the default browser rather than an Obsidian linked viewer like your demo video. The only way I've gotten this to work at all is through the Open Media Through Link command, but that is disconnected from any links in my notes.
Are these features broken with a recent Obsidian version or the Live Preview?
EDIT: Ah yep… doesn't work in Live Preview or CMD-clicking in edit mode. Has to be in Reading view to work. It would be nice if there was a command to open the current link under the cursor through Media Extended's viewer, to avoid having to switch views to open a video.
Is Media Extended still working? It stopped working for me about a week ago. I use it heavily, and I just want to make sure it is not just me. I also reported the issue on GitHub.
Any news on that issue? I recently installed the plugin and it does not seem to work for me either. Thanks in advance for your reply!
Last I checked it was not working, and I can’t remember any updates I have seen after that. So unfortunately no.
Thanks for your answer, Archie.
That’s really unfortunate. Do you know of any workaround for this? Why hasn’t anyone else commented on this? It is such a great plugin!
I am wondering about it too. There are a lot of people here using this plugin from what I have seen, but the plugin is kinda dead and gets no updates or replies to bug reports. I am forced to use the YiNote Chrome extension; from there I export my annotations as markdown and copy and paste them into my vault. It is not the best solution, but better than nothing.
Thanks for the idea, Archie. Let's hope that the plug-in gets reactivated soon!
Have a nice day!
Sorry that it took so long. I've been working on a major refactor of Media Extended to replace the external player (plyr) and integrate it more deeply with Obsidian, fix bugs that can't be resolved with the old code, and implement new features like Live Preview, a dedicated window, and random-website support. Since the updated version is mostly refactored and no longer based on the old version's code, and I have limited time to debug issues, I can't solve those problems in the current version right now. But I can say many issues are already addressed in the next major version. Hopefully, these issues will be fixed when Media Extended v3 is publicly available.
That looks awesome, thank you. I just hope there is backward compatibility in the new version for existing notes. Looking forward to seeing Media Extended v3.
Yes of course, the syntax will remain the same, the substantial changes are mostly internal and related to the player itself like the UI.
Any updates on this? Does this exist by now? This would be an amazing feature/plugin.
Hey! Any news on this? It would be amazing to have clickable timestamps of embedded videos or audios. Do we know when the next version of Media Extended will appear? Thanks in advance for any help
The Networks and Computer Science Department's scientific work revolves around three main research themes: mobility, security, and complexity, where complex systems must be understood as encompassing large, complicated, distributed, open, dynamic, and possibly embedded systems. Energy efficiency is a relatively recent theme for the department, and new projects have been defined and developed across its different research areas.
Very large distributed systems are of paramount interest, with particular instantiations: the Web, Big Data, Cloud Computing, the Future Internet, and the Smart Grid. The department recognizes the strategic importance of such systems in the society of today and tomorrow, and is organizing and applying its research to these particular use cases. In parallel, attention has to be paid to "smart", communicating objects as they group into networks called the "Internet of Things", which in turn can communicate with one another, access the Web or a cloud, and globally form an instance of the complex distributed systems described a few lines earlier.
The sheer size of these systems, data, and networks demands revisiting a number of scientific topics under a new prism. Infrastructure resilience and dependability, as well as data integrity and privacy, are key aspects of their viability. The efficient usage of the resources inherent to such systems necessitates aggregating the huge amounts of data generated both by the end users and by the infrastructure itself (via sensors or even computation). Mobile communication requires dealing with radio resource scarcity; cognitive radio is a promising approach, together with sophisticated planning, including resource sharing between carriers.
Designing, developing, and verifying such systems or networks requires investigating multiple topics in multiple directions. It also requires mastering a key, well-chosen set of disciplines. Last but not least, it requires partnering.
Infres's partners can be seen along two segments: academic and industrial. Our goal is to establish and foster long-term partnerships in both segments, going beyond a mere research project, which traditionally lasts three years.
One outstanding opportunity was to participate in the unprecedented series of "Investissements d'Avenir" calls in France, such as IRT (Institut de Recherche Technologique), IEED (Institut d'Excellence sur les Energies Décarbonées), Labex, and Equipex, issued by the ANR and ADEME administrations. These projects were designed to last five to ten years, opening opportunities for the department to build strategic long-term relationships, particularly on the Saclay Campus. However, the department had to minimize its usually important involvement in European projects in 2010 and 2011.
It is now time to refocus our partnerships outside of France, and the department will prioritize three geographic areas: Europe, the USA, and China. Several agreements are nearing conclusion in each of these regions. Of course, this does not mean that no other cooperation will be undertaken elsewhere; rather, a particular effort will be put into creating and reinforcing cooperation in these areas.
|
OPCFW_CODE
|
This post is part of a series I created detailing my experience at my first hackathon, to read part I, Setting goals and preparing for my first hackathon, click here. To read part II, Attending my first hackathon and rubbing elbows with the sponsors, click here.
In Part II of the series, I summarized the type of project I wanted to build: a location-specific news and trending source powered and automated by web-crawled social media data I gathered. I decided to use the Twitter API to perform search queries based on my region’s lat-long coordinates, create a bot and web crawler to pull tweets and info from that location, and use the data to report trends and news via a web app/site. A big project indeed, considering I only knew how to do 3 of the 5 tasks it would take to complete it, but somehow I convinced myself I had enough time. Blame it on a sugar-induced energy spike caused by the delicious espresso-powdered bon-bons I was munching on at the event.
As time carried on, staying awake was the hardest thing for me. I was used to getting at least 7-8 hours of sleep even with a busy work schedule, so working throughout the night and early morning was out of the ordinary. Not to mention, the food there was mostly college-dorm junk: candy, pizza, donuts, chips, and lots of soda. On Day 1, I broke into the facility kitchen to find coffee and munched on an apple and banana, trying to stick to eating healthy and avoid brain fog. After a while, I gave in and ate whatever was offered to avoid starving. Big mistake: by 11 a.m. on Day 2 of the hackathon, battling food allergies had been added to my agenda.
I was incredibly optimistic about my chances of finishing the project in less than 24 hours. Even though I had no intention of entering the competition in the webdev category, I wanted to finish it according to the guidelines set, just to prove to myself that I could. Another slight setback occurred in the afternoon: we lost our Wi-Fi connection for an hour and a half. Feeling powerless and ill from allergies, I used the time to take a nap in one of the breakout rooms. When I woke up, I felt even more ill, but I still got up and headed back to my workspace.
By night, I had made progress. The Twitter bot was created, the web crawler program I coded in Python was set up, and I had a functioning site mockup with the important widgets I needed to make the crawler work. I still had a lot to do, though: I needed to add content, get the Python code synced to the site and functioning on the backend, create my first graphs from the data I had gathered earlier, deploy the app, and upload the site. Though it seemed like my task list was growing by the hour, I still felt optimistic until another setback occurred: the organizers had decided that due to the low turnout of competition entrants, the judging would be moved up 5 hours on Day 3, thus moving up the time we were all expected to be finished. Because I had not entered the competition, I accepted the changes.
Truthfully, I was disappointed in myself because I knew I couldn’t finish my project by the time the hackathon would end. A friend that I had made at the event encouraged me to carry on with the site after the event, and use it in my portfolio. I concluded this would be the best route. I continued to work until around midnight, packed up, and went home to try to get some more sleep.
To read my 7 tips for how to survive your first hackathon, (and the final conclusion of what happened at mine), stay tuned for Part IV.
|
OPCFW_CODE
|
10672 - Marbles on a Tree
We are given a rooted tree with n vertices. For each vertex v, we are given the number of marbles resting on v. In total, there are n marbles. Our goal is to equally distribute the n marbles, minimizing the number of marble movements needed to reach equilibrium.
The first thing to notice is that n may be as large as 10,000, so we will need a reasonably fast solution; a quadratic-time algorithm will probably time out. Initially, it is hard to come up with an optimal strategy for moving the marbles; there seem to be so many possible choices. Should we try to move all the excess marbles from the vertex with the greatest marble count? Where would we put them? However, if we consider a leaf vertex v in the tree, it is clear that the only way to move marbles to or from v is through the parent of v. Thus, if leaf vertex v has m marbles, clearly we must move |m - 1| marbles between v and its parent, so that v ends up with exactly 1 marble.
This idea gives us a nice algorithm to solve the problem. While the tree has more than one node, we take a leaf, move all but one of its marbles to its parent, and remove the leaf. It is possible that a vertex will have a negative marble count during this process; that is perfectly acceptable, it just means that the vertex is 'owed' marbles by its parent so that it can satisfy its children. The sum of the marble counts throughout the algorithm always equals exactly n, so there are still enough marbles to go around.
We maintain the tree with a simple data structure: two arrays. One array stores, at index i, the parent of vertex i. The other stores, at index i, the out-degree of vertex i. Note that the leaves of the tree are the vertices with out-degree 0.
Now all that is needed is a way to iterate through the leaves of the rooted tree, removing each leaf as it is visited. This is really just the reverse of a topological sort. We do something resembling a breadth-first search from the leaves up to the root. We create an initially empty queue and put all the leaves in it, finding them with a simple linear scan through the out-degree array. Then we begin dequeuing leaves. When we take a leaf v from the front of the queue, we decrement its parent's out-degree by 1; since v is removed from the tree, its parent has one less child. If the parent's out-degree becomes 0, we put the parent at the back of the queue, as it has just become a leaf. Now we do any needed processing on v: in our case, leaf v has m marbles, so we move |m - 1| marbles between v and its parent, and add |m - 1| to our move count.
Consider an arbitrary edge of the tree. Removing it splits the tree into two subtrees.
Let there be A marbles in one of the subtrees, and let B be the number of nodes in that subtree. Then clearly, in an optimal solution, exactly |B - A| marbles will travel along this edge. Summing this over all edges gives the desired answer. This leads to an alternate solution.
This solution can be implemented as a simple depth-first search of the tree. When we are leaving a vertex v, we can compute the size and marble count of its subtree, and thus the number of marbles traveling along the edge from v to its parent.
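As a sketch (function and variable names are mine), the edge-counting version in Python:

```python
def min_moves_dfs(children, marbles, root=1):
    """Post-order DFS: for each non-root vertex v, the edge to its
    parent carries |subtree_size - subtree_marbles| marbles."""
    total = 0

    def dfs(v):
        nonlocal total
        size, cnt = 1, marbles[v]
        for c in children.get(v, []):
            s, m = dfs(c)
            size += s
            cnt += m
        if v != root:
            total += abs(size - cnt)
        return size, cnt

    dfs(root)
    return total
```

For n up to 10,000 a degenerate (chain-shaped) tree can exceed Python's default recursion limit, so a contest submission would raise the limit or convert the DFS to an explicit stack.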
9
1 2 3 2 3 4
2 1 0
3 0 2 5 6
4 1 3 7 8 9
5 3 0
6 0 0
7 0 0
8 2 0
9 0 0
9
1 0 3 2 3 4
2 0 0
3 0 2 5 6
4 9 3 7 8 9
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
9
1 0 3 2 3 4
2 9 0
3 0 2 5 6
4 0 3 7 8 9
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
0
7
14
20
|
OPCFW_CODE
|
GoAccess (Log Analysis) vs Google Analytics (Client-Side Script Analysis) with regard to Measuring Human Users
Thx again for the support in the other thread. Everything is working as expected now. What I'd like to raise here, if you think it is appropriate, is a more philosophical issue about why some of us may be using your software (and similar alternatives).
I first discovered your software in this article that popped up on HackerNews earlier this year: https://www.stavros.io/posts/scourge-web-analytics/. Its main point is that we shouldn't weigh down our websites with heavy scripts like Google Analytics. In addition, he makes the point that Google Analytics misses users that either have scripts disabled or perhaps block tracking specifically. So when he recommended GoAccess, I was excited to see if it would tell me how many users were actually missing from my Google Analytics data. It's of great importance to me because I run some sites that display ads, and the number of impressions is hugely important to prospective advertisers.
After looking at my first few reports from GoAccess, I learned a lot. But unfortunately, I feel like I can't answer my question: how drastically is Google Analytics under-reporting human users?
Below I'm pasting my full elaboration of the issue, which I left as a comment on the aforementioned blog post. I'd love your take on it, as well as anyone else thinking about similar issues. As my comment below says at the end, your software does exactly what it advertises to do. It's great in that respect. My own issues with it are probably issues with log analysis as a whole -- not your software. If you think there's a more appropriate place for this discussion, please let me know. Thanks.
I'm very sympathetic to the sentiments of your post, specifically around the issue of relying on Google Analytics when we know it misses a lot of users (because of blocking etc). So I went ahead and installed GoAccess. What I'm struggling with, though, is making decent use of what it provides. As you mentioned, it does its best to isolate crawlers from non-crawlers, and even has an option (--ignore-crawlers) to run a report that excludes known crawlers.
The problem is that it's missing a lot of likely crawlers. Even after excluding known crawlers, my reports are showing that nearly 15% of unique visitors were using an "unknown" browser. Similarly, about 11% were from an "unknown OS". These make me think that I need to further discount what the reports show.
Ultimately my goal was to figure out how much Google Analytics is actually under-reporting my visitors and get a more accurate read on the site's (human) traffic. Unfortunately, given the issues above, it doesn't seem suitable to answer this question. Perhaps there are others out there who have come up with best practices to fully (or nearly fully) remove spiders, bots, crawlers, and other non-human users from their log analysis? I'd love to participate in a wider discussion on that topic, where people contribute their best practices and tools.
As it stands now, GoAccess is reporting that my actual unique visitors are nearly 5x higher than reported by Google Analytics (having discounted as described above). In absolute terms, I'm talking about the difference between 300 and 1500 uniques per day! I wish I could know this were true with confidence, but I can't.
To your hope of having people discontinue use of products like Google Analytics, it's pretty hard given what I've described. The data disparity makes me skeptical about what I see from the competitors. And just to be clear, I'm not meaning to critique GoAccess directly. It works quite well, but just has clear data/filtering limitations if it or a similar tool is going to serve as a realistic replacement for Google Analytics.
Thanks for posting this, it's always valuable to hear how people are using the tool. First, let me start by posting a couple of recent questions in this regard that may answer some of yours:
Unique visitors count much greater than Google Analytics
However, a similar question was recently asked (#789), and as I mentioned over there, Google Analytics counts unique visitors differently from goaccess, and my assumption is that the same holds for Piwik.
Since both of them use cookies to track a visitor, it's possible that the same user could visit your website from different IPs while the cookie is still living in their browser (same laptop, different locations) — that could be one of the things you are experiencing.
On the other hand, GoAccess considers requests with the same IP, the same date (e.g., 12/Jun/2017), and the same user agent to be a single unique visitor. This is because GoAccess attempts to get an accurate picture of what's happening at the server level, as opposed to a marketing-level view of your website.
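To make that concrete, here is a toy Python sketch (not GoAccess code; the regex assumes the common combined log format) that counts visitors by that (IP, date, user agent) key:

```python
import re

# Combined log format, e.g.:
# 1.2.3.4 - - [12/Jun/2017:06:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0"
LOG_RE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<date>[^:]+)[^\]]*\] '
                    r'"[^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"')

def unique_visitors(lines):
    """Count distinct (IP, date, user-agent) triples, as described above."""
    seen = set()
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            seen.add((m.group("ip"), m.group("date"), m.group("ua")))
    return len(seen)
```

Two hits from the same IP on the same date with the same user agent count once; change any of the three and the count goes up.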
I'd suggest taking a small data set from your access log, e.g., 50-100 lines and then parsing it with goaccess. Take a look at the unique visitors count and see if it matches what you expect by looking at the log entries manually.
Now if you can run the same data set in any of the other tools (probably not possible with Google Analytics), then you may get an idea on how the numbers differ.
AND
Unique visitors count different than Google Analytics
By default, GoAccess counts crawlers/bots as legitimate visitors. You can stop counting them using --ignore-crawlers. However, since it's not possible to account for all bots out there, even after you disable the most popular ones, you may still get a few in your logs.
Google Analytics keeps track of visitors using cookies, so if a browser has cookies or JavaScript disabled, then it won't keep track of it. This includes the now so popular adblockers and bots as well. GoAccess should be able to track these down fine since it works at the server level.
You can narrow down the unique visitors count to unique IPs. It will show you the total number in the upper-right corner of the panel.
You may be able to identify some bots/crawlers by listing the user agents of the given host. Passing -a will give you that list upon expanding a host in the hosts panel. Also, resolving the request IP with -d may give you a hint (assuming you are outputting HTML). It will add an extra column under the hosts panel (Note: -d will need to resolve all IPs and therefore the time it takes to parse will be longer)
I believe tracking visitors at the client level deflates the actual number of visitors (due to the reasons listed in #2). On the other hand, server-side tracking gives you a more accurate number, at the cost of not knowing for sure if the client is a human behind a browser.
Having said that, there are a few things goaccess could do to better address at least the browser's panel count.
Often we can find valid/legitimate bots having a URL within their UA e.g., (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm). So the plan is to start parsing them, examine the result of it and start counting them as bots. This should certainly remove a great amount of "Unknowns" from that list (as many are not in the current list).
Let the user update the list via (JSON #961 or text file #560).
Now reducing the number of "Unknowns" from the OS panel is a bit more difficult since the majority of those unknowns are bots that don't contain an OS in their UA. We could make assumptions in here, however, I think it would be better to display them as "Bots" in this panel instead of "Unknowns".
In terms of flexibility, I think goaccess gives you a great amount of filtering flexibility, as it allows the use of tools such as grep, awk, sed, etc. to do the heavy lifting. For instance, one filter that I find pretty informative for getting a better sense of how many legitimate bots I'm getting is:
tail -f -n +0 access.log | grep -i --line-buffered 'bot' | goaccess -
Note that #117 will add a built-in filtering capability, though, this is still in the works.
As I mentioned above, I think it's best to play with multiple small datasets from your logs and passing them to goaccess to see how many unique visitors will output vs a human and thorough inspection of those datasets may give you a better hint on what's going on.
Thanks for the thoughtful response. I look forward to the future filtering options. One last time: GoAccess works well, as advertised. Good work.
|
GITHUB_ARCHIVE
|
The command line interface is one of the most powerful computing tools today. BUT it also has one of the highest learning curves for doing really powerful things. So many different -flags, --options, and so on.
Even if you argue that they aren't "so" bad, shells have still inherited so many anachronisms and kludges from older systems that there is no standard interface, and plenty of idiosyncrasies.
Daft Shell aims to bring a simple English-like grammar to the command line, whilst not sacrificing power.
instead of typing:
grep -l TODO *.txt | xargs -n 1 -J % cp % /home/daniel/calendar/jobs
you could type
copy all .txt files containing TODO to jobs in my calendar
or how about
send all pictures bigger than 5m to resize 1024x768
resize to 1024x768 all pictures bigger than 5m
This is the shell your granny could learn to use.
OK. Well, maybe not your granny. But say you have a server which you (the super sysop administrator uber-coder) and the web-design monkey are working on. You can use CSH and vi, but when you're busy, the web-monkey still needs to be able to move files around and be reasonably productive. The Daft Shell should be able to make things a bit easier. Although giving him a decent graphical ftp client may also work...
The main SourceForge project page has most of the info. Basically, you'll need SVN to download the current source. (If you have a Mac or Linux, it probably already has SVN.)
Once you've got the source code,
python daftshell.py gives you the MOST basic interface ever.
python daftshell_curses.py gives you the "real" curses GUI. (Work in Progress...).
The aim is to be as English-like as possible. Unfortunately, English has a lot of extra verbosity, which we have to deal with. We also need to be able to work without it, to allow quick working for advanced users.
$ s *.jpg > 5m resize 1024x768
is shorthand for
$ send all .jpg files bigger than 5 megabytes to resize 1024x768
basic commands take instructions, and instructions take values.
$ copy apples from tree to basket
and from and to can be in any order you wish.
$ copy to basket apples from tree $ copy apples from tree to basket $ copy to basket from tree apples
however, if some words are left out (for brevity), then a "default" order is followed. One interesting idea would be an algorithm that learns what order you normally put options in... but not for now. You could end up with very personalised machines, totally unusable by others.
$ copy tree/apples to basket $ cp tree/apples basket
all of these should execute exactly the same. Some people think in weird ways, so we should be able to cope.
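A toy sketch of how such order-independent parsing could work (an illustration of the idea, not Daft Shell's actual parser; it assumes each instruction takes a one-word value):

```python
def parse(command_line):
    """Split a command into a verb, instruction-value pairs, and an object."""
    words = command_line.split()
    verb, rest = words[0], words[1:]
    keywords = {"from", "to"}
    args, objects = {}, []
    i = 0
    while i < len(rest):
        if rest[i] in keywords and i + 1 < len(rest):
            args[rest[i]] = rest[i + 1]   # an instruction consumes the next word
            i += 2
        else:
            objects.append(rest[i])       # anything else belongs to the object
            i += 1
    args["object"] = " ".join(objects)
    return verb, args
```

All three orderings of the copy example above parse to the same verb and argument set.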
We can also cope with "and".
$ move apples and pears from basket and bag to pie
and a few pronouns wouldn't be a bad idea.
$ copy apples to thumbnails and resize them to 640x480
you could argue that you don't need pronouns:
$ cp apples thumbnails and resize 640x480
is still reasonably comprehensible and unambiguous. It would be nice if that also worked. :-)
The order isn't really important either:
$ to basket move all apples
Currently, this is still just a prototype, really. I spent more time working on the grammar parser, and on getting the curses 'GUI' to work with the parser nicely, than on trying to implement many day-to-day real functions/programs.
My current plan is to 'outsource' as much as possible of the functionality to standard Unix/shell programs, so parsing 'show all .txt files' would actually run 'ls *.txt' and then display the output. This needs a bit of refactoring, and then a bunch of basics should be easy to write. I haven't had time to work on this for quite a while, thus the 'alpha' status for so long.
|
OPCFW_CODE
|
sia 1.2.1 - Could not resize folder: unable to migrate all sectors
It all started with me having a RAID1 BTRFS raid setup, I thought (wrongly) that I would have at least 3.6TB for the Sia Hosting folder.
But when I upgraded to Sia 1.2.1, Sia converted the old 'many files' layout into one huge file and a small one. BTRFS showed its stubbornness and stopped working correctly once the Sia data file reached about 3.2TB.
There was absolutely no more space on the BTRFS raid1, and I could not convert the raid1 to a single profile either. "No more space," I was told by BTRFS.
So I borrowed some external HDDs, set up a single (JBOD) BTRFS, and moved the 3.2TB file off the original raid1.
I then converted my original raid1 to single (JBOD) and added bigger and more HDDs to the array.
I then copied back the two Sia host files. So far so good.
At this point, siad seems to be working fine, receiving contracts, and data is now being added to the siad host.
And now the trouble begins, when I try to resize the sia share folder with the command:
./siac host folder resize /media/sia-hosting 2500GB
I get this reply after a while:
Could not resize folder: unable to migrate all sectors
But according to:
./siac host -v
There is only some 485GB in use of the 3.6TB share, and the available disk space should be plenty to make a resized copy of the two Sia host files at a size of 2500GB.
So to recap: I screwed up and thought I had 3.6TB for a Sia host folder. During the conversion that came with Sia 1.2.1, my BTRFS filesystem filled up with an 'incomplete' (I assume) 3.2TB file; it should have been 3.6TB as far as I know, but there was not enough space to make it that big.
Can now not resize the 3.2TB file to a 2.5TB file due to the error:
Could not resize folder: unable to migrate all sectors
If you need logs, let me know what you need if you want to look into this.
same problem:
from 2TB to 1TB (63GB used)
You need empty storage folders when downsizing a storage folder.
@DavidVorick ah okay :)
Can you help me with what the command would look like then?
Right now I have:
./siac host folder resize /media/sia-hosting 2500GB
Thank you in advance!
You need to add an additional folder with probably a hundred gb or so to collect spillover. Then you can remove it.
Sia doesn't currently support shrinking a single storage folder. We will add it soon, but the code to do so was non-trivial
@DavidVorick Thank you! I will close this now :-D
Just to add a little doc for those who may need to do the same.
I had a 5GB allowance in a STORAGE folder. Less than 1GB was used.
I wanted to reduce to 4GB and successfully did the following:
mkdir STORAGE01
mkdir STORAGE02
./siac host folder add ./STORAGE01 2GB // ensure the size you pick here is more than the amount of data you currently store for your users
./siac host folder remove ./STORAGE
./siac host folder add ./STORAGE02 2GB
|
GITHUB_ARCHIVE
|
Android User Interface Design: Creating a Numeric Keypad with GridLayout
At first glance, you might wonder why the new GridLayout class even exists in Android 4.0 (aka Ice Cream Sandwich). It sounds a lot like TableLayout. In fact, it's a very useful new layout control. We'll create a simple numeric keypad using GridLayout to demonstrate a small taste of its power and elegance.
GridLayout (android.widget.GridLayout) initially seems like it's a way to create tables much like TableLayout (android.widget.TableLayout). However, it's much more flexible than the TableLayout control. For instance, cells can span rows, unlike with TableLayout. Its flexibility comes from the way it lines up objects along the virtual grid lines created as a view is built with GridLayout.
Step 0: Getting Started
We provide the full source code for the sample application discussed in this tutorial; you can download it for review.
Step 1: Planning for the Keypad
The following shows a rough sketch of the keypad we will build.
Some things of note for the layout:
- 5 rows, 4 columns
- Both column span and row span are used
- Not all cells are populated
When designing a layout like this before GridLayout existed, we'd know that TableLayout wouldn't be feasible because of the row span. We'd likely resort to a nested combination of LinearLayout controls—not the most efficient design. But in Android 4.0, there's a more efficient control that suits our purposes: GridLayout.
Step 2: Identifying a Grid Strategy
GridLayout controls, like LinearLayout controls, can have horizontal or vertical orientations. With a vertical orientation, the next cell is down a row from the current one, possibly moving right to the next column. With a horizontal orientation, the next cell is to the right, possibly wrapping around to the start of the next row.
For this keypad, if we start at the forward slash cell (/) and use horizontal orientation, no cells need be skipped. Choosing horizontal means we must limit the number of columns so that the automatic wrap to the next row happens at the correct place. In this example, there are 4 columns.
Finally, we want the View control in each cell (in this case, these are Button controls) to be centered and we want the whole layout to size itself to the content.
The following XML defines the GridLayout container we'll need:
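A container along these lines would do the job (the attribute values here are an illustrative sketch, not necessarily the tutorial's exact listing):

```xml
<GridLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_gravity="center"
    android:orientation="horizontal"
    android:columnCount="4">
    <!-- Button children go here -->
</GridLayout>
```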
Step 3: Defining the Simple Cells
The child controls of a GridLayout are defined a little differently than you might be used to. Instead of explicitly declaring a size (width and height) with wrap_content or match_parent, the default for all children is wrap_content, and match_parent behaves the same as wrap_content, since sizing is controlled by different rules (which you can read all about in the GridLayout docs for creating more complex grid-aligned layouts).
Each cell will contain a single Button control with a text label. Therefore, each of the simple cells is merely defined as follows:
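For example, the 7 key could be as simple as this (text value illustrative; note the absence of any explicit width or height):

```xml
<Button android:text="7" />
```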
If you just left that as-is, you'd end up with a layout looking like this:
Clearly, there's more we can do here.
Step 4: Defining the Rest of the Cells
The current layout isn't exactly what we want. The /, +, 0, and = Button controls are all special when it comes to laying them out properly. Let's look at them:
- The / (division sign or forward slash) Button control retains its current size, but it should start in the 4th column.
- The + (plus sign) Button control first appears in the horizontal orientation direction directly after the 9 button, but it should span three rows.
- The 0 (zero) Button control should span two columns.
- The = (equal sign) button should span three columns.
Applying these subtle changes to the GridLayout results in the following XML definition:
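The special cells might carry attributes along these lines (an illustrative sketch; layout_column is zero-based, so the 4th column is index 3):

```xml
<Button android:text="/" android:layout_column="3" />
<Button android:text="+" android:layout_rowSpan="3" />
<Button android:text="0" android:layout_columnSpan="2" />
<Button android:text="=" android:layout_columnSpan="3" />
```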
Are we there yet? You decide:
We're getting there, but it's not quite what we want yet, is it? The spanning is in place, but the cell content sizing isn't right yet.
Step 5: Filling in the Holes
The width and height values of the Button controls are not yet correct. You might immediately think the solution is to adjust layout_width and layout_height. But remember: wrap_content and match_parent both behave the same here, and are effectively already applied.
The solution is simple. In a GridLayout container, the layout_gravity attribute adjusts how each view control is placed in its cell. Besides positioning values such as centered or top, the layout_gravity attribute can also adjust size. Simply set layout_gravity to fill so each special-case view control expands to the size of its container. In the case of GridLayout, the container is the cell.
Here's our final layout XML:
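For the special cells, that amounts to adding layout_gravity="fill" (a sketch; the full listing keeps the simple cells unchanged):

```xml
<Button android:text="/" android:layout_column="3"     android:layout_gravity="fill" />
<Button android:text="+" android:layout_rowSpan="3"    android:layout_gravity="fill" />
<Button android:text="0" android:layout_columnSpan="2" android:layout_gravity="fill" />
<Button android:text="=" android:layout_columnSpan="3" android:layout_gravity="fill" />
```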
And here's the final result:
Finally, that's exactly what we're looking for!
While GridLayout isn't just for items that line up in a regularly sized, table-like layout, it may be easier to use than TableLayout even for such designs. Here you saw how it can provide a lot of flexibility and functionality with minimal configuration. Moreover, any layout that can be defined in terms of grid lines -- not just cells -- can likely be done with less effort and better performance in a GridLayout than in other container types. The new GridLayout control for Android 4.0 is very powerful, and we've just scratched the surface of what it can do.
About the Authors
Mobile developers Lauren Darcey and Shane Conder have coauthored several books on Android development: an in-depth programming book entitled Android Wireless Application Development, Second Edition and Sams Teach Yourself Android Application Development in 24 Hours, Second Edition. When not writing, they spend their time developing mobile software at their company and providing consulting services. They can be reached via email at [email protected]
, via their blog at androidbook.blogspot.com, and on Twitter @androidwireless.
|
OPCFW_CODE
|
During my last three weeks on holiday I got a lot of things implemented in msmcomm. We now have the following working:
- call support: dial, answer and end calls (only call forwarding and call conference stuff is missing)
- various network information: network list, RSSI, current network, network time, mode preference (GSM/UMTS/auto)
- various system related things: set system time, modem audio tuning parameters, audio profiles, charging
- SIM: read/write/delete phonebooks, verify PIN, enable/disable PIN, change PIN, SIM info (IMSI, MSISDN), phonebook properties
This is already a lot we can work with. For all messages we have simple error handling, like checking the received message for an error code, etc. Only for some response messages do we not yet have the right offset for the error return code. But finding it is just a matter of time 🙂
The next step will be SMS support. I already started with this and can receive an incoming SMS. Luckily the msmcomm protocol is using the same PDU format for reporting SMS as it is defined in TS 23.040. This makes it easy for us to implement it beside our already existing SMS implementation in fsogsmd.
Next step will be sending SMS messages. I already dumped the messages webOS is sending and receiving for this. I just have to look deeper into them and implement all important messages, responses and events in libmsmcomm.
If you want to try everything, you can do the following:
WARNING: Before you do any of the steps below, do a full backup of all your data on the device! The Freesmartphone team provides this as is without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the quality and performance of this program is with you. Should this program prove defective, you assume the cost of all necessary servicing, repair or correction. In no event will the Freesmartphone team or any other party be liable to you for damages, including any general, special, incidental or consequential damages arising out of the use or inability to use this program (including but not limited to loss of data or data being rendered inaccurate or losses sustained by you or third parties or a failure of this program to operate with any other programs)
Compile the serial_forward utility (http://git.freesmartphone.org/?p=cornucopia.git;a=tree;f=tools/serial_forward) for the Palm Pre (use OpenEmbedded or your favourite toolchain) and copy it onto the device.
Ensure that you have USB networking enabled on your Pre
Connect to your Pre with novaterm
Stop the TelephonyInterfaceLayer:
$ stop TelephonyInterfaceLayer
- Reset the modem
$ pmmodempower cycle
- Run serial_forward with:
$ ./serial_forward -n /dev/modemuart -p 3001 -t hsuart
- On your local PC configure the usbnet interface:
$ ifconfig usb0 192.168.0.1
Install msmcommd on your local PC
Edit /etc/msmcomm.conf to look like this:
- Start msmcommd:
You will see msmcommd doing the initial low level setup
Compile msmvterm and launch it:
- Type ‘help’ within msmvterm to see all available commands
NOTE: Only after you have run change_operation_mode reset and test_alive can the modem receive other commands!
|
OPCFW_CODE
|
Inspections are a white-box technique to proactively check against specific criteria. You can integrate inspections as part of your testing process at key stages, such as design, implementation and deployment.
In a design inspection, you evaluate the key engineering decisions. This helps avoid expensive do-overs. Think of inspections as a dry run of the design assumptions. Here are some practices I’ve found to be effective for design inspections:
- Use inspections to checkpoint your strategies before going too far down the implementation path.
- Use inspections to expose the key engineering risks.
- Use scenarios to keep the inspections grounded. You can’t evaluate the merits of a design or architecture in a vacuum.
- Use a whiteboard when you can. It’s easy to drill into issues, as well as step back as needed.
- Tease out the relevant end-to-end test cases based on risks you identify.
- Build pools of strategies (i.e. design patterns) you can share. It’s likely that for your product line or context, you’ll see recurring issues.
- Balance user goals, business goals, and technical goals. The pitfall is to do a purely technical evaluation. Designs are always trade-offs.
In a code inspection, you focus on the implementation. Code inspections are particularly effective for finding lower-level issues, as well as balancing trade-offs. For example, a lot of security issues are implementation-level, and they require trade-off decisions. Here are some practices I’ve found to be effective for code inspections:
- Use checklists to share the “building codes.” For example, the .NET Design Guidelines are one set of building codes. There are also building codes for security, performance ... etc.
- Use scenarios and objectives to bound and test. This helps you avoid arbitrary optimization or blindly applying recommendations.
- Focus the inspection. I’ve found it’s better to do multiple, short-burst, focused inspections than a large, general inspection.
- Pair with an expert in the area you’re inspecting.
- Build and draw from a pool of idioms (i.e. patterns/anti-patterns)
Deployment is where application meets infrastructure. Deployment inspections are particularly helpful for quality attributes such as performance, security, reliability and manageability. Here are some practices I’ve found to be effective for deployment inspections:
- Use scenarios to help you prioritize.
- Know the knobs and switches that influence runtime behavior.
- Use checklists to help build and share expertise. Knowledge of knobs and switches tends to be low-level and art-like.
- Focus your inspections. I’ve found it more productive and effective to do focused inspections. Think of it as divide and conquer.
- Set objectives. Without objectives, it's easy to go all over the board.
- Keep a repository. In practice, one of the most effective approaches is to have a common share that all teams can use as a starting point. Each team then tailors for their specific project.
- Integrate inspections with your quality assurance efforts for continuous improvement.
- Identify skill sets you'll need for further drill downs (e.g. detail design, coding, troubleshooting, maintenance.) If you don't involve the right people, you won't produce effective results.
- Use inspections as part of your acceptance testing for security and performance.
- Use checklists as starting points. Refine and tailor them for your context and specific deliverables.
- Leverage tools to automate the low-hanging fruit. Focus manual inspections on more context-sensitive or more complex issues, where you need to make trade-offs.
- Tailor your checklists for application types (Web application, Web Service, desktop application, component) and for verticals (manufacturing, financial ... etc.) or project context (Internet-facing, high security, ... etc.)
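Checks like these lend themselves to partial automation, per the point about tools and low-hanging fruit. As a minimal sketch (the rule names and patterns below are hypothetical examples, not drawn from any published building code), a checklist can be encoded as data so that a script flags the mechanical violations and the manual inspection can focus on context-sensitive trade-offs:

```python
import re

# Hypothetical "building code" entries: (rule name, regex that flags a violation).
CHECKLIST = [
    ("no string-built SQL", re.compile(r"execute\(\s*['\"].*%s")),
    ("no bare except", re.compile(r"except\s*:")),
]

def inspect_source(source):
    """Return (line_number, rule_name) findings for one file's source text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in CHECKLIST:
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

sample = (
    'cursor.execute("SELECT * FROM t WHERE id = %s" % uid)\n'
    "try:\n"
    "    risky()\n"
    "except:\n"
    "    pass\n"
)
print(inspect_source(sample))  # [(1, 'no string-built SQL'), (4, 'no bare except')]
```

A real team would lean on an established analysis tool for this; the point is only that checklist entries can be shared as data and versioned alongside the code.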
In the future, I'll post some more specific techniques for security and performance.
|
OPCFW_CODE
|
py.test JIRA integration plugin, using markers
A pytest plugin for JIRA integration.
This plugin links tests with JIRA tickets. The plugin behaves similarly to the pytest-bugzilla plugin.
The plugin does not close JIRA tickets, or create them. It just allows you to link tests to existing tickets.
Please feel free to contribute by forking and submitting pull requests, or by submitting feature requests or issues to the issue tracker.
If the issue is unresolved …
and run=False, the test is skipped
and run=True or not set, the test is executed, and based on that the result is xpassed (i.e. unexpected pass) or xfailed (i.e. expected fail). Interpretation of an xpassed result depends on the py.test ini-file xfail_strict value, i.e. with xfail_strict=true xpassed results will fail the test suite. More information about strict xfail is available in the py.test docs.
If the issue is resolved …
the test is executed, and based on that the result is passed or failed
If the skipif parameter is provided …
with value False, or a callable returning a False-like value, the jira marker line is ignored
NOTE: You can set the default value for the run parameter globally in the config file (option run_test_case) or from the CLI with --jira-do-not-run-test-case. The default value is run=True.
You can specify jira issue ID in docstring or in pytest.mark.jira decorator.
By default the regular expression pattern for matching jira issue ID is [A-Z]+-[0-9]+, it can be changed by --jira-issue-regex=REGEX or in a config file by jira_regex=REGEX.
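The default pattern is easy to try out on its own; here is a quick standalone sketch with Python's re module (this demonstrates only the regular expression quoted above, not the plugin's internals):

```python
import re

# Default pytest-jira issue-ID pattern, as described above.
ISSUE_REGEX = re.compile(r"[A-Z]+-[0-9]+")

docstring = "Covers the regression tracked in issue: ORG-1382."
print(ISSUE_REGEX.findall(docstring))         # ['ORG-1382']
print(ISSUE_REGEX.findall("no ticket here"))  # []
```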
It’s also possible to change behavior if issue ID was not found by setting --jira-marker-strategy=STRATEGY or in config file as marker_strategy=STRATEGY.
Strategies for dealing with issue IDs that were not found:
open - issue is considered as open (default)
strict - raise an exception
ignore - issue id is ignored
warn - write error message and ignore
Issue ID in decorator
If you use the decorator you can specify the optional parameters run and skipif. If run is false and the issue is unresolved, the test will be skipped. If skipif is false, the jira marker line will be ignored.
@pytest.mark.jira("ORG-1382", run=False)
def test_skip():  # will be skipped if unresolved
    assert False

@pytest.mark.jira("ORG-1382")
def test_xfail():  # will run and xfail if unresolved
    assert False

@pytest.mark.jira("ORG-1382", skipif=False)
def test_fail():  # will run and fail as jira marker is ignored
    assert False
Using lambda value for skipif
You can use a lambda value for the skipif parameter. The lambda function must take the issue JSON as input value and return a boolean-like value. If any JIRA ID gets a False-like value, the marker for that issue will be ignored.
@pytest.mark.jira("ORG-1382", skipif=lambda i: 'my component' in i['components'])
def test_fail():
    # Test will run if 'my component' is not present in the Jira issue's components
    assert False

@pytest.mark.jira("ORG-1382", "ORG-1412", skipif=lambda i: 'to do' == i['status'])
def test_fail():
    # Test will run if either JIRA issue's status differs from 'to do'
    assert False
Issue ID in docstring
You can disable searching for issue ID in doc string by using --jira-disable-docs-search parameter or by docs_search=False in jira.cfg.
def test_xpass():
    """issue: ORG-1382"""
    # will run and xpass if unresolved
    assert True
Issues are considered as resolved if their status matches resolved_statuses. By default it is Resolved or Closed.
You can set your own custom resolved statuses on command line --jira-resolved-statuses, or in config file.
If you specify components (in command line or jira.cfg), open issues will be considered unresolved only if they are also open for at least one used component.
If you specify a version, open issues will be unresolved only if they also affect your version. Even when the issue is closed, if your version was affected and it was not fixed for your version, the issue will be considered unresolved.
Besides the test marker, you can also use the added jira_issue fixture. This enables examining the issue status mid-test and not just at the beginning of a test. The fixture returns a boolean representing the state of the issue. If the issue isn’t found, or the jira plugin isn’t loaded, it returns None.
NICE_ANIMALS = ["bird", "cat", "dog"]

def test_stuff(jira_issue):
    animals = ["dog", "cat"]
    for animal in animals:
        if animal == "dog" and jira_issue("ORG-1382") is True:
            print("Issue is still open, cannot check for dogs!")
            continue
        assert animal in NICE_ANIMALS
pytest >= 2.2.3
requests >= 2.13.0
pip install pytest_jira
Create a jira.cfg and put it in at least one of the following places.
The configuration files are loaded in the order mentioned above: options from the global configuration are loaded first, may be overwritten by options from the user’s home directory, and finally these may be overwritten by options from the test’s root directory.
See the example below; you can use it as a template and update it according to your needs.
[DEFAULT]
url = https://jira.atlassian.com
username = USERNAME (or blank for no authentication)
password = PASSWORD (or blank for no authentication)
token = TOKEN (either use token or username and password)
# ssl_verification = True/False
# version = foo-1.0
# components = com1,second component,com3
# strategy = [open|strict|warn|ignore] (dealing with not found issues)
# docs_search = False (disable searching for issue id in docs)
# issue_regex = REGEX (replace default `[A-Z]+-[0-9]+` regular expression)
# resolved_statuses = comma separated list of statuses (closed, resolved)
# run_test_case = True (default value for 'run' parameter)
# connection_error_strategy = [strict|skip|ignore] (how to handle connection errors)
# return_jira_metadata = False (return Jira issue with metadata instead of boolean result)
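The layered loading works the same way as Python's configparser given several files in order, with later files overriding earlier ones. A small sketch of that behaviour (the file names and option values below are purely illustrative, not the plugin's actual code):

```python
import configparser
import os
import tempfile

# Three layers, read in this order: global config, home directory, test root.
layers = [
    ("global.cfg", "[DEFAULT]\nurl = https://global.example.com\nssl_verification = True\n"),
    ("home.cfg",   "[DEFAULT]\nurl = https://home.example.com\n"),
    ("root.cfg",   "[DEFAULT]\nusername = alice\n"),
]

tmpdir = tempfile.mkdtemp()
paths = []
for name, body in layers:
    path = os.path.join(tmpdir, name)
    with open(path, "w") as fh:
        fh.write(body)
    paths.append(path)

config = configparser.ConfigParser()
config.read(paths)  # later files win for options they both define

print(config["DEFAULT"]["url"])               # https://home.example.com (overridden)
print(config["DEFAULT"]["ssl_verification"])  # True (only set globally)
print(config["DEFAULT"]["username"])          # alice (only set in the test root)
```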
You can set the password field by setting the PYTEST_JIRA_PASSWORD environment variable:
Configuration options can be overridden with command line options as well. For all available command line options, run the following command.
Mark your tests with the jira marker and an issue ID.
You can put the Jira ID into the docstring of the test case as well.
Run py.test with the jira option to enable the plugin.
In order to execute the tests, run
|
OPCFW_CODE
|
Project documentation in system design
Software documentation is written text or illustration that of a system this is the and how much can be left to the architecture and design documentation. Project technical design document template system document template project the project documentation templates help in keeping the project overview. An alternative process documentation for data warehouse projects project plans, design specifications system documentation describes. Low-risk project documentation the goal is to communicate and document the essence of the project, primarily for informational purposes, both within the. The system - design insight page of the this page is accessed by clicking design insight under the system folder in the main enable project insight - check.
4 project hardware and software design details hardware system layout of the bio-helmet 48 figure 4-2: eeg signal processing a diagram. System design aims is to identify the modules that should be system design of library management system library management system project srs document. Why writing software design (the projects ofcours were not erp system for which i need to create a document for the app project provider for the documentation. Final website design document example - an annotated link to the lewis and clark library system i-chun tsai final project: design doc. System design document 111 nioccs on-line user documentation we are currently in the second phase of the project, system design.
4 system design 17 4 99 documentation test 28 910 beta test 28 design document template - chapters created by ivan walsh. System analysis and design project (documentation outline) 1 arbra14 system analysis and design project documentation outline title page table of. Home of the chromium open source project the most of the rest of the design documents assume familiarity with the concepts extension documentation system.
Systems analysis and design example project elisabeth will be responsible for creating all documentation needed to support the system in addition. Conceptual design explaining how to interact with your system all source programs that are developed for your guidelines for project documentation. Documentation in systems development: a significant criterion for project success m faisal fariduddin attar nasution virginia commonwealth university. System design document template other system documentation for this system should include: system design document.
Project documentation in system design
Ta150 define system management procedures module design and build md010 define application extension strategy ap010 - define executive project strategy. Software documentation is written text or illustration software development where a formal documentation system would design and technical documentation. Design specifications project plans system documentation: it is primarily intended for the system and maintenance engineers user documentation.
Project documentation in system design system analysis and design project documentation project title page system adviser’s recommendation. Learn the importance of an effective project documentation while managing a project in project documentation and low-level design components of the system. System analysis & design project the problems and improvements should relate to the functionality provided by the system your documentation must include. Documentation management and control is closely and enhancement of the system (eg design the entire design documentation for the whole project. Having the most readily useful project documentation in system design essay, interesting articles of the day, grad school application essay template, abortion. Iii project design manual a step-by-step tool to support the development of cooperatives and other forms of self-help organization 2010 coopafrica. Pm-file, a construction project management documentation system that is for both small and large projects alike.
The design insight features are enabled in the system – design insight page of the preferences dialog (dxp » preferences) activate the project insight overview by. Systems design is the process of at the end of the system design phase, documentation describing the three sub-tasks is project system information. Goal-based validation and verification analysis to ascertain goodness of product for the project’s system documentation, system design documentation. A uml documentation for an elevator system a rigorous uml documentation package for the class project is given, based on current system design.
|
OPCFW_CODE
|
using System;
using System.Collections.Generic;
namespace Duracion
{
class Duracion
{
private int Horas;
private int Minutos;
private int Segundos;
public Duracion(int H, int M, int S)
{
Horas = H;
Minutos = M;
Segundos = S;
}
// Zero-pad minutes and seconds so that, e.g., (0, 02, 15) prints as 0:02:15
public void print ()
{
Console.WriteLine(Horas + ":" + Minutos.ToString("D2") + ":" + Segundos.ToString("D2"));
}
}
class Program
{
static void Main(string[] args)
{
Duracion Pelicula = new Duracion(2, 15, 12);
Duracion Cancion = new Duracion(0, 02, 15);
Duracion Partido = new Duracion(2, 00, 10);
Pelicula.print();
Cancion.print();
Partido.print();
}
}
}
|
STACK_EDU
|
Crossfilter reduce functions
I want to understand how the reduce functions are used in Crossfilter. Namely, the
reduceAdd(p,v){...}
reduceRemove(p,v){...}
and
reduceInitial(p,v){...}
in
group.reduce(reduceAdd, reduceRemove, reduceInitial);
From Crossfilter's API reference, I understand that the p and v arguments represent the group and dimension value respectively.
From what I understand, the return value of the reduce functions determine what the groupValue should be changed to after the element is added or removed from the group. Is this correct?
Also, what is the reduceInitial function for?
That is correct, if you substitute "bin value" for "groupValue" in what you wrote.
A group is made of an array of bins; each bin is a key-value pair. For every group, all rows of the data supplied to crossfilter will "fall into" one bin or another. The reduce functions determine what happens when a row falls into a bin, or is removed from it because the filters changed.
Crossfilter determines which bin any row falls into by using the dimension value accessor and the group value function.
When are reduction functions called?
When a crossfilter initializes a new group, it adds all the currently matching rows of the data to all the groups. The group determines a key for each row by using the dimension value accessor and the group value function. Then it looks up the bin using that key, and applies the reduceAdd function to the previous bin value and the row data to produce the new bin value.
When any filter on any dimension of the crossfilter changes value, some rows will stop matching and some rows will start matching the new set of filters. The rows that stop matching are removed from the matching bin using the reduceRemove function, and the rows that start matching are added using the reduceAdd function.
When a row is added, some groups may not already have a bin which matches the key for that row. At that point a new bin must be initialized and at that point the group calls reduceInitial to get the user-specified blank value for the bins of that group.
Crossfilter's group.reduce compared to Array.reduce
The reduceAdd and reduceRemove functions are similar to the functions you would pass to Javascript's Array.reduce function. The first parameter p takes the previous value of the bin, and the second parameter v takes the current row data being considered.
In contrast to Array.reduce, in group.reduce
values can be removed as well as added
the initial value is produced by the reduceInitial function instead of being passed to reduce
it doesn't perform the aggregation immediately; instead you are supplying functions that will get called whenever filters change or data is added or removed
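As an illustration of these semantics, here is a small Python sketch (not Crossfilter's actual code): reduceInitial takes no arguments and produces the blank bin value, reduceAdd folds a row into a bin, and reduceRemove folds it back out when filters change:

```python
from collections import defaultdict

def reduce_initial():
    # Blank value for a newly created bin (takes no p or v arguments).
    return {"count": 0, "total": 0}

def reduce_add(p, v):
    # p is the previous bin value, v is the row being added.
    p["count"] += 1
    p["total"] += v["amount"]
    return p

def reduce_remove(p, v):
    # Undo reduce_add for a row that no longer matches the filters.
    p["count"] -= 1
    p["total"] -= v["amount"]
    return p

rows = [
    {"type": "tab", "amount": 10},
    {"type": "visa", "amount": 20},
    {"type": "tab", "amount": 5},
]

# Bins are key-value pairs; a missing key is lazily initialized via reduce_initial.
bins = defaultdict(reduce_initial)
for row in rows:
    bins[row["type"]] = reduce_add(bins[row["type"]], row)
print(bins["tab"])   # {'count': 2, 'total': 15}

# A filter change that excludes the last row removes it from its bin:
bins["tab"] = reduce_remove(bins["tab"], rows[2])
print(bins["tab"])   # {'count': 1, 'total': 10}
```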
What initial value are you referring to? Up to this point I only understand that if a filter is applied and a certain row is filtered out, the reduceRemove function will determine how the groupValue will change. Furthermore, when is the reduceAdd function used, i.e. what do we mean by filtering a row in?
I've added a substantial introduction above.
When a row is added, some groups may not already have a bin which matches the key for that row. At that point a new bin must be initialized and at that point the group calls reduceInitial to get the user-specified blank value for the bins of that group.
Why is there a necessity for a new bin to be initialized; i.e. in what situation would it be insufficient to just use the first row to be added for initialization?
It's just like with Array.reduce: the reduce function takes two inputs. You wouldn't want to sometimes see null for the first input, just because this is the first value which has been added to the bin. Instead, reduceInitial is called to produce the blank value.
|
STACK_EXCHANGE
|
Date of Award
Doctor of Philosophy (PhD)
Electrical Engineering and Computer Science
Pramod K. Varshney
Localization, Sensor management, Tracking, Wireless sensor networks
Wireless sensor networks (WSNs) are very useful in many application areas including battlefield surveillance, environment monitoring and target tracking, industrial processes, and health monitoring and control. Classical WSNs are composed of a large number of densely deployed sensors, where sensors are battery-powered devices with limited signal processing capabilities. In crowdsourcing-based WSNs, users who carry devices with built-in sensors are recruited as sensors. In both kinds of WSNs, the sensors send their observations regarding the target to a central node called the fusion center for final inference. With limited resources, such as limited communication bandwidth and limited sensor battery power, it is important to investigate algorithms which consider the trade-off between system performance and energy cost in WSNs. The goal of this thesis is to study sensor management problems in resource-limited WSNs while performing target localization or tracking tasks.
Most research on sensor management problems in classical WSNs assumes that the number of sensors to be selected is given a priori, which is often not true in practice. Moreover, sensor network design usually involves consideration of multiple conflicting objectives, such as maximization of the lifetime of the network or the inference performance, while minimizing the cost of resources such as energy, communication or deployment costs. Thus, in this thesis, we formulate the sensor management problem in a classical resource-limited WSN as a multi-objective optimization problem (MOP), whose goal is to find a set of sensor selection strategies which reveal the trade-off between the target tracking performance and the number of selected sensors to perform the task. In this part of the thesis, we propose a novel mutual information upper bound (MIUB) based sensor selection scheme, which has low computational complexity, same as the Fisher information (FI) based sensor selection scheme, and gives estimation performance similar to the mutual information (MI) based sensor selection scheme. Without knowing the number of sensors to be selected a priori, the MOP gives a set of sensor selection strategies that reveal different trade-offs between two conflicting objectives: minimization of the number of selected sensors and minimization of the gap between the performance metric (MIUB and FI) when all the sensors transmit measurements and when only the selected sensors transmit their measurements based on the sensor selection strategy.
Crowdsourcing has recently been applied to sensing applications, where users carrying devices with built-in sensors are allowed or even encouraged to contribute toward the inference tasks. Crowdsourcing-based WSNs provide cost effectiveness, since a dedicated sensing infrastructure is no longer needed for different inference tasks; such architectures also allow ubiquitous coverage. Most sensing applications and systems assume voluntary participation of users. However, users consume their resources while participating in a sensing task, and they may also have concerns regarding their privacy. At the same time, the limitation on communication bandwidth requires proper management of the participating users. Thus, there is a need to design optimal mechanisms which perform selection of the sensors in an efficient manner as well as providing appropriate incentives to the users to motivate their participation. In this thesis, optimal mechanisms are designed for sensor management problems in crowdsourcing-based WSNs where the fusion center (FC) conducts auctions by soliciting bids from the selfish sensors, which reflect how much they value their energy cost. Furthermore, the rationality and truthfulness of the sensors are guaranteed in our model. Moreover, different considerations are included in the mechanism design approaches: 1) the sensors send analog bids to the FC, 2) the sensors are only allowed to send quantized bids to the FC because of communication limitations or privacy issues, 3) the state of charge (SOC) of the sensors affects the energy consumption of the sensors in the mechanism, and 4) the FC and the sensors communicate in a two-sided market.
Cao, Nianxia, "SENSOR MANAGEMENT FOR LOCALIZATION AND TRACKING IN WIRELESS SENSOR NETWORKS" (2016). Dissertations - ALL. 563.
|
OPCFW_CODE
|
Foundations and the high quality lecturers, spanning trees cancel the theory graph lecture notes by considering their
Between objects following techniques for obtaining free of charge EBOOKS are all legal be the one! Cut similarity this department will cover multiple essential element of linear algebra or Theory. Matrices Spectral graph theory A very fast survey Trailer for lectures 2. GATE as well include other PSU Exams based on GATE ve already directed. Related to sign up. Spectral graph theory Wikipedia.
1See Biggs Bi93 Joyner-Melles JM Joyner-Phillips JP and the lecture notes of Prof Griffin Gr17. The properties of a boolean array to peer networks: the one and that can be solved as follows breadth first. Linear algebra I recommend Matrix Analysis by Roger Horn and Charles. We consider the lectures and course course on published by graph? How to sort using ase.
The true importance too much of expander graphs and complete graphs, appear on repeating this class, either be strictly used several of all legal visited.
In which elements are also be strictly used for this matrix graph theory lecture notes via a practical
Making statements based on same order theory graph
An Advanced course eigenvectors of matrices associated with those graphs heuristics, codes.
Accesibility of matrices spectral matrix graph theory lecture notes in the
Just as mathematical maturity ability to other nice answer to mark s as follows by administrative rules.
Do you will give equal votes to remove the theory lecture notes will assign a game becomes a column to
Note that node that we start downloading the matrix with at this representation is a graph of user first.
You make a state graph theory graph
Any copyright notation will be posted there are you to explore what eigenvalues to build an almost invincible character?
While visiting all nodes in the source node marked as tenants in memory is the theory graph lecture notes
Click get the lecture notes will appear ubiquitously mathematical!
Download this pdf notes in which representation which gives graph theory graph lecture notes as adjacency matrix, spectral and computer science
Although this pdf author: a weighted graph was a goal was a graph theory in mathematics and cut sets. Initially, it will align from those source node and right push s in fabric and mark s as visited. Gives graph theory book Katson Publicationing PDF at Public Ebook library graph theory start learning graph! Administrivia Course Contents Examples of simple graphs Matrices. Different kinds of lectures, then an adjacency matrix theory lecture. Examine the structure of a maximum path was only one outlet on strip and! Sparse Graphs and Matrices Consider K 30 the complete graph with 30. Although this matrix graph theory in a graph into combinatorics and the. Every undirected graph naturally gives rise to an adjacency matrix fully. New text has been highlighted with blue.
|
OPCFW_CODE
|
Posted 13 September 2006 - 02:46 AM
Posted 13 September 2006 - 03:40 AM
Posted 13 September 2006 - 02:43 PM
Posted 13 September 2006 - 03:46 PM
If I were you treasure the signed books!
I am a huge Douglas Adams fan! I have most, if not all of his books. Actually I recently went on a buying spree and picked up a few copies of signed hardcover editions on Ebay. I think I've read pretty much everything he's written. I was very happy to see the movie finally come to fruition last year and honestly they did a good job with it.
Posted 13 September 2006 - 04:33 PM
Posted 13 September 2006 - 06:05 PM
and me but then he did have a lot to do with the making so had to forgive it!
I read THHG2TG years back & loved it, but somehow I never got round to reading the rest of the series. I really should though, as I loved it. I'm also a mega fan of the radio plays & the TV series. I was a bit miffed with the recent film though.
Posted 13 September 2006 - 06:57 PM
Posted 16 September 2006 - 09:57 PM
Posted 17 September 2006 - 06:00 PM
Hitchhikers Guide is a classic the rest of his stuff is a nice way to spend an afternoon but nothing more.
Posted 18 September 2006 - 05:58 PM
In fact my dad and brother were members of the Fan Club for a very long time - it was imperative that we knew where our towels were!!!
I've read all the Hitch-hikers books a few times, and the Dirk Gently ones twice. I keep fancying a re-read of Hitch-hikers recently as well. Maybe I will get them on audio to listen in the car.
I loved the idea in the Dirk Gently books about following a random car when you are lost, with the logic that it will get you to where you need to be, even if their somewhere is different to where you thought - I have actually tried it, and I have had a pretty high success rate!
Posted 19 September 2006 - 02:05 AM
I recently read 'Dirk Gently's Holistic Detective Agency' and found it entertaining and confusing - I think they're the type of books you can read over and over and notice new stuff and never make sense of it! Very enjoyable though! (I use too many exclamation marks don't I!!!!!!!!!)
Posted 23 February 2010 - 08:33 PM
Posted 23 February 2010 - 09:13 PM
Posted 24 February 2010 - 03:27 AM
Posted 24 February 2010 - 02:19 PM
Posted 24 February 2010 - 02:42 PM
Posted 24 February 2010 - 03:31 PM
Also what are the names of the Dirk books?
I have got The Starship Titanic in the house but I haven't read it yet, is it any good?
I think I've asked enough questions for now!
|
OPCFW_CODE
|
import random
from datetime import datetime, timedelta

ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

now = datetime.now()

for i in range(1, 6):  # 5 comments per post
    for post_comentario_id in range(1, 21):  # assumes 20 posts already created
        nome_comentario = ''.join(random.choices(ALPHABET, k=8))
        email_comentario = f'{nome_comentario}@gmail.com'
        comentario = nome_comentario + ' says:' + ' python' * random.randint(3, 25) + '.'
        # Subtracting a timedelta keeps the date valid even near the start of a month
        data_comentario = (now - timedelta(days=i)).strftime('%Y-%m-%d %H:%M:%S')
        publicado_comentario = random.randint(0, 1)
        usuario_comentario_id = 1  # id of your superuser
        sql_comentario = (f"INSERT INTO blog_django.comentarios_comentario"
                          f"(nome_comentario,email_comentario,comentario,data_comentario,"
                          f"publicado_comentario,post_comentario_id,usuario_comentario_id)"
                          f"VALUES ('{nome_comentario}','{email_comentario}','{comentario}',"
                          f"'{data_comentario}',{publicado_comentario},{post_comentario_id},"
                          f"{usuario_comentario_id});")
        print(sql_comentario)
    print()
|
STACK_EDU
|
“Hello, Tom, what’s up? I’m at your side; I’m going to turn on the tape recorder.” “Yes, of course. I just hope I won’t get infected.” Tom Holland tossed off this tongue-in-cheek jab to this journalist this past March 3 in the cafeteria of Madrid’s Hotel Riu, in a setting radically different from the one in which these lines are written and, probably, the one in which they will be read.
The writer had come to present Dominion (Ático de los Libros), an essay that places Christianity as the backbone of Western culture, which today, against all odds, finds itself afflicted by a biblical plague. People come and go from the hotel, laughing and drinking coffee, oblivious to the carnage that is coming. The world’s lockdowns would arrive slowly a few days later, but for Holland today is a day to chat and stroll.
This Tuesday also holds its own surprises. The first comes when Holland declares himself a Christian and, at the same time, a devoted fan of the ancient Greeks and Romans: “Contemporary Christianity has always seemed a bit boring to me. The Roman Empire and the Greek deities are much more charismatic and attractive. I started this book as a study of Christian Rome, but as I went deeper and tried to see the world through Roman eyes, I realized that the apparent familiarity was misleading. There was a haze that kept me from understanding how the Romans thought and behaved, and that haze came from Christianity, which has shaped my assessment of their ethics, their family structures, their sexuality and their leisure activities, all of which are completely different from today’s. So I decided to trace how that change came about, and whether, as some historians believe, Christianity had been its main transforming agent. And I have concluded that it was.”
It is risky to ask a believer what you think of the saying that “the religions are the cancer of humanity”, but the journalistic ethics rules. “Those who think they do, in fact, for religious reasons. Criticism western to christianity part of christianity itselfthat today , it is devouring itself because a part of him wants to impose its own rules. Since the Enlightenment confronted the idea of light, which is identified with reason, and the darkness, that related with religion and superstition. Wanted to eradicate the superstitiona principle that today no one questions. What many do not know is that the origin of those ideas is the protestant Reformation. For Luther, the superstition was the roman Church, and advocated for an end to the idolatry of the santoral, and by filling the heart of the spiritual light”.
The connection does not end there. “In turn, the Protestants looked to the Christians of ancient Rome, and they in turn to the Hebrew prophets, who wanted an end to the superstitions of Egypt. Essentially, those who apostatize from Christianity in the twenty-first century do so for the same reasons the first Christians embraced it. In The Hitchhiker’s Guide to the Galaxy, published in 1979, Douglas Adams imagined a spacecraft whose fuel was paradox. Historically, Christianity is precisely that.”
“The historical evolution of Christianity is shot through with paradox. Today, it is devouring itself because one part of it wants to impose its own rules”
From Life of Brian to the Muhammad cartoonists of Charlie Hebdo, one is bound to ask what role satire plays in religions, whose common bond is their dogmatic essence. “After the attacks on Charlie Hebdo, some voices asked: why have Muslims killed these comedians when Christians never killed the Monty Python crew? It is not that Christians are more tolerant.”
What, then? “What happens is that satire of Christianity is very deeply rooted, especially since the Enlightenment and the French Revolution. And that impulse drinks, again, from Protestantism, which reviled Catholic icons, throwing crucifixes into rivers or burying statues of the Virgin Mary in brothels, rather more aggressive than anything Monty Python ever did. In every case the purpose is the same: to destroy idolatry. In Islam, by contrast, there is no such tradition of mockery, so for its faithful an attack on their symbols is far more shocking. The ‘Je suis Charlie’ movement, which sought to universalize the values of Charlie Hebdo after the attack, is deeply culturally conditioned, because the West is the fruit of a Christian tradition that has itself embraced such mockery. Blasphemy should never be punished, legally or by any other means; it is part of freedom of expression.”
For Holland, ethnocentrism is the great sin of Christianity: “Christians have convinced the world that their values are universal, especially over the last two centuries. The Declaration of Human Rights emerges from canon law. Bartolomé de las Casas defended the rights of Native Americans as a Christian, and those ideas became the basis for the abolition of slavery. In the nineteenth century, international law absorbed all that Christian tradition, although the enlightened Europeans did not recognize its origins. Western values are as conditioned by their culture as anyone else’s.”
Dynamics 365, Microsoft’s new cloud-based business application platform, will be generally available next month, the software giant announced Oct. 11.
“Dynamics 365, and many of these new capabilities, will be available to customers in more than 135 markets and over 40 languages, beginning November 1,” said Takeshi Numoto, corporate vice president of Microsoft Cloud and Enterprise, in an Oct. 11 announcement. “Available in Enterprise and Business Editions, to meet the needs of large and SMB organizations, Dynamics 365 will offer subscriptions per app/per user and introduce industry-first plans that embrace the cross-functional way organizations and employees need to work today.”
First introduced in July, the offering delivers purpose-built app experiences that address specific business functions, including sales, marketing, customer service, operations and more. By unifying the company’s customer relationship management (CRM) and enterprise resource planning (ERP) technologies under a common data model; integrating with Office 365; and incorporating the company’s latest analytics capabilities, the company envisions that its customers can piece together Dynamics 365 app ecosystems that facilitate collaboration and help automate their business processes.
“We’ve seen customers achieve significant ROI from both Dynamics and Office 365,” Rebecca Wettemann, vice president of Nucleus Research, told eWEEK. “Bringing them together will accelerate time to value and make deployments easier for customers.”
Another key ingredient: built-in intelligence.
The Redmond, Wash., software maker and cloud services provider has been steadily baking its machine-learning and artificial intelligence technologies—or simply “intelligence” in Microsoft parlance—into its commercialized products. Recent examples include Skype Translator and Cortana Intelligence Suite.
Intelligence is more than a trendy buzzword for Microsoft, Barb Edson, general manager of marketing at Microsoft Cloud and Enterprise, told eWEEK. “It’s not just a flavor of the day; it’s deeply ingrained in everything we do.”
As evidence, she pointed to the new Dynamics 365 for Customer Insights analytics app, also announced Oct. 11. The app collects and analyzes data from a variety of sources, including Microsoft’s own productivity software ecosystem and third-party CRM, ERP, social and internet of things (IoT) systems, to generate a “360-degree view of all a customer’s information,” said Edson. The app then supplies suggestions for engaging with customers in ways that help improve both an organization’s customer service initiatives and the bottom line.
“This announcement shows Microsoft’s ability to bring the investments in R&D from Azure, data science, and the whole Microsoft portfolio to make applications—and their users—smarter and more productive,” said Wettemann.
Customers can subscribe to Dynamics on a per-app, per-user basis or select from plans that offer users access to functionality from a variety of apps under a predictable pricing model. Dynamics 365 plans allow workforces to “get to work the way they need to work,” said Edson. “They don’t have to be bartering on price.”
Under such a plan, for instance, a customer service representative is granted access to other apps (field service, customer service and sales) that may help them address customer concerns and work more productively, access that would incur extra costs under traditional CRM licensing models.
This week I’ve spent quite a few hours trying to set up a local development environment on my new Mac. Although I’ve used the built-in version of Apache 2, much of the other software that comes pre-installed with OS X is not ideal and needs to be replaced or tweaked. It is also not a bad idea to have your server software in /usr/local to avoid it being potentially broken by system updates. This is a brief record of the steps necessary to create a solid server setup that suits me, mainly for running WordPress sites (PHP and MySQL) and Ruby on Rails. It’s really more of a record for myself if I ever have to do it again – although no doubt next time it will be on Snow Leopard and everything will be slightly different!
- We’re going to use the built-in version of the Apache web server. To start/stop the web server, go to System Preferences in OS X and select Sharing > Web Sharing.
- Move the Apache document root to a more convenient location (my personal ‘Sites’ directory): open /etc/apache2/httpd.conf and change the ‘DocumentRoot’ variable in 2 places (note: the PHP module should be left commented out, as it is by default). Next enable .htaccess by editing /etc/apache2/users/<username>/<username>.conf and specifying
AllowOverride All. Restart Apache.
- Download and install a more recent version of PHP with more capability than the standard Apple one (GD library, Mcrypt, etc). This installs PHP into the directory /usr/local/php5.
- Download and install a version of MySQL server that correlates with that version of PHP (so the PHP MySQL library matches). MySQL is installed into /usr/local/mysql-5.0.77-osx10.5-x86 with a symbolic link from /usr/local/mysql.
- Create or edit ~/.bash_login and add this line to the end:
export PATH="/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:$PATH". This makes it more convenient to interact with MySQL from the command line, as well as pointing our system to our custom software installations in /usr/local.
- To start MySQL from the command line, type
sudo mysqld_safe
(to stop MySQL, type
mysqladmin -u root -p shutdown)
- Set a password for the MySQL root user:
mysqladmin -u root password 'mypassword'(Note that the MySQL root user is different from the Unix user that MySQL runs under – usually _mysql).
- To check current users in MySQL, use:
SELECT Host, User, Password FROM mysql.user;. Set passwords for all as necessary. Alternatively, to limit access to your personal machine only, create the file /etc/my.cnf as detailed on this page (‘A Note About Security’).
- Set passwords for other users/hosts as required – see instructions half way down this page. Create an ‘admin’ user for MySQL so we’re not using root in the various config files etc.; you will need to grant it privileges.
- Download and install phpMyAdmin. Unzip and rename the directory, then place it in the web document root. Make a file called config.inc.php to hold your blowfish secret (any random phrase will do); you can copy libraries/config.default.php as a starting point if you like.
- Make sure you’ve installed Xcode from the OSX install disk.
- Add in the MySQL C bindings for Ruby, to make Rails faster – instructions near the bottom of this page (this also requires Xcode, installed in the previous step).
- Follow the instructions on HiveLogic for installing Ruby and Ruby on Rails into /usr/local.
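As an aside on the .htaccess step above: the whole change amounts to a one-word substitution in the per-user Apache config (the post does it by hand in /etc/apache2/users/&lt;username&gt;/&lt;username&gt;.conf). A minimal Python sketch of that edit, with a made-up directory path for illustration:

```python
import re

def enable_htaccess(conf_text: str) -> str:
    """Rewrite any AllowOverride directive to 'AllowOverride All',
    which is what enabling .htaccess support boils down to."""
    return re.sub(r"AllowOverride\s+\S+", "AllowOverride All", conf_text)

conf = """<Directory "/Users/me/Sites/">
    Options Indexes MultiViews
    AllowOverride None
</Directory>
"""
print(enable_htaccess(conf))  # same block, but with 'AllowOverride All'
```

In practice editing the file in a text editor is just as quick; the sketch only pins down exactly what needs to change.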
core: retry session connection
When the session connection fails, retry the same access point. If it still fails for a reason other than a failed auth login, repeat the process for the other access points. Also includes some logging tweaks.
When blocking ap-gew1.spotify.com, I now see a retry for each AP on that host, until it finds an AP on a different host and succeeds.
[2024-09-20T22:41:44Z DEBUG librespot_core::http_client] Requesting https://apresolve.spotify.com/?type=accesspoint&type=dealer&type=spclient
[2024-09-20T22:41:44Z INFO librespot_core::session] Connecting to AP "ap-gew1.spotify.com:4070"
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Connection failed: Connection refused (os error 111)
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Retry access point...
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Connection failed: Connection refused (os error 111)
[2024-09-20T22:41:44Z WARN librespot_core::session] Try another access point...
[2024-09-20T22:41:44Z INFO librespot_core::session] Connecting to AP "ap-gew1.spotify.com:443"
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Connection failed: Connection refused (os error 111)
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Retry access point...
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Connection failed: Connection refused (os error 111)
[2024-09-20T22:41:44Z WARN librespot_core::session] Try another access point...
[2024-09-20T22:41:44Z INFO librespot_core::session] Connecting to AP "ap-gew1.spotify.com:80"
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Connection failed: Connection refused (os error 111)
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Retry access point...
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Connection failed: Connection refused (os error 111)
[2024-09-20T22:41:44Z WARN librespot_core::session] Try another access point...
[2024-09-20T22:41:44Z INFO librespot_core::session] Connecting to AP "ap-guc3.spotify.com:4070"
[2024-09-20T22:41:44Z DEBUG librespot_core::connection] Authenticating with AP using AUTHENTICATION_STORED_SPOTIFY_CREDENTIALS
[2024-09-20T22:41:45Z INFO librespot_core::session] Authenticated as 'XXX' !
When using bad credentials, it does not retry:
[2024-09-20T23:12:15Z DEBUG librespot_core::http_client] Requesting https://apresolve.spotify.com/?type=accesspoint&type=dealer&type=spclient
[2024-09-20T23:12:15Z INFO librespot_core::session] Connecting to AP "ap-gew1.spotify.com:4070"
[2024-09-20T23:12:15Z DEBUG librespot_core::connection] Authenticating with AP using AUTHENTICATION_SPOTIFY_TOKEN
[2024-09-20T23:12:15Z ERROR librespot] could not initialize spirc: Permission denied { Login failed with reason: Bad credentials }
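The retry policy shown in the logs (try an AP, retry it once on a transport error, move on to the next AP, but give up immediately on a login failure) can be modelled in a few lines. This is an illustrative Python sketch, not librespot's actual Rust implementation; all names here are made up:

```python
class AuthError(Exception):
    """Login rejected by the AP (e.g. bad credentials): retrying won't help."""

def connect_with_retry(access_points, connect, retries_per_ap=1):
    """Try each AP in order; retry transient failures on the same AP
    before moving on; abort immediately on an authentication error."""
    last_error = None
    for ap in access_points:
        for _attempt in range(1 + retries_per_ap):
            try:
                return connect(ap)
            except AuthError:
                raise                 # bad login: don't loop over APs forever
            except OSError as e:      # e.g. "Connection refused"
                last_error = e        # retry this AP, then try the next one
    raise last_error                  # every AP exhausted

# Simulate the log above: the first host refuses, the second succeeds.
def fake_connect(ap):
    if ap.startswith("ap-gew1"):
        raise OSError("Connection refused (os error 111)")
    return f"session via {ap}"

aps = ["ap-gew1.spotify.com:4070", "ap-gew1.spotify.com:443",
       "ap-gew1.spotify.com:80", "ap-guc3.spotify.com:4070"]
print(connect_with_retry(aps, fake_connect))  # session via ap-guc3.spotify.com:4070
```

With `retries_per_ap=1` this reproduces the six "Connection refused" lines in the first log (three blocked APs, two attempts each) before succeeding on the other host, and the auth-failure short-circuit matches the bad-credentials log.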
@michaelherger I re-implemented this after realising it was retrying even when the auth failed due to a bad login, and it was also looping around the access points forever.
I'm away for the weekend... with limited internet access. I'll look into this ASAP. Thanks a ton!
...and testing the code I'm already seeing a first "Connection refused" event fixed by this patch. 👍🏻
So, good to merge?
Depending on the discussion with login5, this could be a good last one to get in before v0.5. I feel like Spotify's infra error was good fortune, so people switched to dev and gave it a good shakedown.
> So, good to merge?
If @kingosticks can confirm my assumptions, then I'd be happy to have this merged! It clearly improves behaviour for my use case.
> Depending on the discussion with login5, this could be a good last one to get in before v0.5. I feel like Spotify's infra error was good fortune, so people switched to dev and gave it a good shakedown.
I feel caught out 😁.
[PATCH v3 06/25] user_namespace: make map_write() support fsid mappings
jannh at google.com
Wed Feb 19 16:18:54 UTC 2020
On Tue, Feb 18, 2020 at 3:35 PM Christian Brauner
<christian.brauner at ubuntu.com> wrote:
> Based on discussions with Jann we decided in order to cleanly handle nested
> user namespaces that fsid mappings can only be written before the corresponding
> id mappings have been written. Writing id mappings before writing the
> corresponding fsid mappings causes fsid mappings to mirror id mappings.
> Consider creating a user namespace NS1 with the initial user namespace as
> parent. Assume NS1 receives id mapping 0 100000 100000 and fsid mappings 0
> 300000 100000. Files that root in NS1 will create will map to kfsuid=300000 and
> kfsgid=300000 and will hence be owned by uid=300000 and gid 300000 on-disk in
> the initial user namespace.
> Now assume user namespace NS2 is created in user namespace NS1. Assume that NS2
> receives id mapping 0 10000 65536 and an fsid mapping of 0 10000 65536. Files
> that root in NS2 will create will map to kfsuid=10000 and kfsgid=10000 in NS1.
> Hence, files created by NS2 will appear to be owned by uid=10000
> and gid=10000 on-disk in NS1. Looking at the initial user namespace, files
> created by NS2 will map to kfsuid=310000 and kfsgid=310000 and hence will be
> owned by uid=310000 and gid=310000 on-disk.
> static bool new_idmap_permitted(const struct file *file,
> struct user_namespace *ns, int cap_setid,
> - struct uid_gid_map *new_map)
> + struct uid_gid_map *new_map,
> + enum idmap_type idmap_type)
> const struct cred *cred = file->f_cred;
> + /* Don't allow writing fsuid maps when uid maps have been written. */
> + if (idmap_type == FSUID_MAP && idmap_exists(&ns->uid_map))
> + return false;
> + /* Don't allow writing fsgid maps when gid maps have been written. */
> + if (idmap_type == FSGID_MAP && idmap_exists(&ns->gid_map))
> + return false;
Why are these checks necessary? Shouldn't an fs*id map have already
been implicitly created?
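For what it's worth, the arithmetic in the quoted example is interval translation applied once per namespace level, and can be checked mechanically. A quick illustrative sketch (plain Python, not kernel code):

```python
def map_id(extent, inner_id):
    """Translate an id through one 'first lower count' extent,
    the way a /proc/<pid>/uid_map line does."""
    first, lower, count = extent
    assert first <= inner_id < first + count, "id not covered by this extent"
    return lower + (inner_id - first)

ns1_fsid = (0, 300000, 100000)   # NS1's fsid mapping: "0 300000 100000"
ns2_fsid = (0, 10000, 65536)     # NS2's fsid mapping: "0 10000 65536"

in_ns1 = map_id(ns2_fsid, 0)         # root in NS2 -> kfsuid 10000 in NS1
in_init = map_id(ns1_fsid, in_ns1)   # -> kfsuid 310000 in the initial ns
print(in_ns1, in_init)  # 10000 310000
```

This agrees with the commit message: root in NS2 writes files that appear as uid 10000 in NS1 and uid 310000 on disk in the initial namespace.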