Today I am going to show you how to create a simple Python module which you can then reuse in your Python programs again and again. Before we create the module, let's create a main module folder where we can keep all our Python modules in one place. Create that main folder on the C or D drive of your computer, then create a folder inside it to hold our new Python module.
Next, create a new Python project in NetBeans IDE 8.1, just as in the previous tutorial (refer to http://gamingdirectional.com/blog/2016/07/20/how-to-set-up-python-platform-in-netbeans-8-1/).
You can either delete the default .py file in the project and create a brand-new empty module, or turn that default file into the module. In this tutorial I will delete the default file and create an empty Python module: right-click on summation.py and select Delete, then right-click on the project folder and select New->Empty Module. Enter the module name and click Finish. Next we will enter a few lines of code into the NetBeans editor, plus a few lines of comments that explain what this module is for…
#This is the "summation.py" module, and it provides one function called
#sumUpNumber() which will sum up all the numbers you have put into it
def sumUpNumber(*args):
    #Your input can be any number, any length of parameters
    #into this function and it will return the total
    total = 0
    for arg in args:
        total += arg
    return total
Next, copy the summation.py file and paste it into the PythonModule->summation folder which you have just created. Then create a new file with the Notepad++ editor, which is another editor I often use for coding. Enter the code below into that empty file and save it in the PythonModule->summation folder as setup.py. The code you enter will become the metadata of your module distribution.
from distutils.core import setup

setup(
    name = 'summation',
    version = '1.0.0',
    py_modules = ['summation'],
    author = 'choose',
    author_email = 'firstname.lastname@example.org',
    url = 'http://gamingdirectional.com',
    description = 'Module for number summation',
)
That is it for the setup.py part! Next, open up the Windows command prompt, browse to the summation folder where those two files are, type in the command below, and press Enter.
This will create the distribution package for the summation module.
Next install the distribution into your local copy of Python with this command
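The two commands are not shown here; assuming the standard distutils workflow implied by the setup.py above, they would presumably be:

```shell
# Assumed commands (not shown in the original text):
python setup.py sdist     # build the source distribution into the dist folder
python setup.py install   # install the module into Python's site-packages
```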
Now this module should be inside Python's site-packages; it is time to import the module into a new Python program and run it. Create a new Python project in NetBeans IDE 8.1, then enter the code below and run the program!
from summation import sumUpNumber

if __name__ == "__main__":
    print(sumUpNumber(1,2,3))
It works! The outcome is as follows:
So there it is: you have created your first module in Python and installed it in the site-packages folder, so you can import and use that module in your other Python programs again and again! After you have created your first module, what's next? Go ahead and read the second part of this tutorial!
Migration is the process of moving content from one location to another; SharePoint Online site collection migration, in particular, is the process of moving a SharePoint site collection from one environment (such as an on-premises SharePoint server) to another (such as SharePoint Online in Office 365). This can involve copying all of the site collection's content, data, and configurations to the new environment, and ensuring that the site collection functions as expected there. Migrating helps organizations take advantage of the benefits of SharePoint Online, such as increased scalability and improved security, while retaining access to their existing SharePoint content and data. The migration process can be complex, so it is often recommended to use a migration tool to simplify it.
Steps to Migrate a SharePoint Online Site Collection
Here are the steps to migrate SharePoint Online site collection content to another location:
- Plan the migration process, including the scope and timeline.
- Verify that the source and destination environments are compatible.
- Create a backup of the source site collection.
- Use a migration tool or the SharePoint Online Management Shell to collect the data from the source site collection.
- Store the collected data in a secure location.
- Create a new site collection in the destination environment.
- Use a migration tool or the SharePoint Online Management Shell to import the collected data into the new site collection.
- Verify that the data has been imported correctly and that the site collection is functioning as expected.
- Test the site collection for any issues and resolve them.
- Update any URLs or links in the site collection.
- Finalize the migration process and clean up any temporary files.
PowerShell script to migrate SharePoint online site collection
Here is an example of how to use PowerShell to migrate SharePoint Online site collection content to another site collection:
Connect to SharePoint Online:
$username = "firstname.lastname@example.org"
$password = Read-Host -Prompt "Enter password" -AsSecureString
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
Connect-PnPOnline -Url https://mstalk.sharepoint.com -Credentials $cred
Backup the source site collection:
Export-PnPClientSideSolution -Identity sitecollection.sppkg -OutputPath c:\backup
Import the backup to the destination site collection:
Import-PnPClientSideSolution -Path c:\backup\sitecollection.sppkg
Verify the import:
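The article does not show a verification command. One plausible check with the PnP PowerShell module (illustrative only; the package name "sitecollection" is assumed from the earlier steps) is to list the apps installed on the destination site:

```powershell
# Illustrative check, not from the original article:
Get-PnPApp | Where-Object { $_.Title -eq "sitecollection" }
```

If the package appears in the output, the import succeeded.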
This is a basic example; when you migrate, you will need to adjust the PowerShell commands and parameters to the specifics of your migration, such as the size of your site collection and the migration tool you're using. Using a migration tool is recommended to simplify the process, as migrating a SharePoint Online site collection can be complex and time-consuming.
Fausto Ferreira, IEEE Senior Member, OES AdCom Member
It’s always hard to write about yourself. It’s also hard to keep the reader’s attention through the whole article. So, I will start with the fun part and hopefully entice you to read until the end.
My relationship with the ocean started in my childhood. At the age of 4, I was so convinced that I wanted to be a yacht builder that I sent a letter about it to a TV show. Looking at what I am doing now (marine robotics), I didn't end up too far from it. My fascination with robotics also began at an early age, when my aunt gave me a toy robot. Since then, I wanted to study robotics. I ended up finishing a Master's degree in Electrical and Computer Engineering at Instituto Superior Técnico in Lisbon, Portugal. My thesis was on autonomous docking for a search and rescue ground robot, which culminated in a patent. After graduating, I moved to Italy as a Marie Curie Early Stage Researcher (ESR) at the National Research Council (CNR) of Italy. What was supposed to be a one-year contract became a two-year contract, and after 12 years, I still work in Italy.
I can’t jump into what I did in my first job and the first few years of my research career without talking about the missing piece of the puzzle. Before I finished primary school, I discovered a sport that I still practice today, orienteering. In an orienteering competition, athletes are given only a map and a compass and must find their way from one point to another with no other information. It’s an individual sport that gives the freedom to choose your own path in the forest. I guess the orienteering mentality has been reflected in my career.
First, because in my first job I worked in vision-based Simultaneous Localization and Mapping (SLAM) for underwater vehicles. But also because I have chosen an unorthodox career path. In the first few years, I had a linear course. Following the end of my ESR position, I stayed at CNR as a Research Associate and enrolled in PhD studies at the University of Genoa. I worked in the same area, enlarging the scope of my research from underwater computer vision to sonars and Automatic Target Recognition (ATR). Throughout my PhD, I had the opportunity to spend six months as a Visiting Scientist at the NATO STO Centre for Maritime Research and Experimentation (CMRE) in Italy and three months at the University of Miami in the U.S., funded by the Office of Naval Research Global (ONRG). Close to the end of my PhD, in July 2014, I joined NATO STO CMRE as a Scientist. I became more involved with robotics competitions (both marine and multi-domain) and have been Deputy Director of our annual robotics competition ever since. I have also been strongly involved in the organization of UComms, an underwater acoustic conference organized by CMRE with the support of IEEE OES, among others.
By now you might be wondering at what point this story becomes unorthodox. Well, in late 2015, I enrolled in a Bachelor of Science in Political Sciences and International Relations. Confused? Being the son of an engineer and a language teacher, I always had a place in my heart for social sciences, so to me, exploring this new avenue seemed obvious. I graduated in late 2018 with a thesis on regulatory and liability issues of autonomous surface vehicles, which blended both of my backgrounds.
Currently, I am continuing my research in this area. Specifically, I’m interested in collision avoidance regulations for autonomous marine vehicles (surface and underwater) and their relation to the current laws for ships and submarines. I am spending some time as a Visiting Scholar at the Faculty of Law, University of Zagreb.
Volunteering for OES came naturally to me. As a teenager, I volunteered in my local cultural association, helping to organize theater and music festivals. Sometimes I would even participate as I’ve studied music for 11 years (my other hobbies include writing for newspapers, travel and reading). I have also helped my club organize foot and mountain bike orienteering events including World Cups. During my PhD, I collaborated with the School of Robotics in several robotics workshops for youngsters. Later, due to my involvement in marine robotics competitions and the interest of OES in this kind of activity, the two dots connected.
In 2018, I was selected as one of the two Young Professionals for the inaugural OES YP BOOST Program. I started contributing to the society as a judge in the Student Poster Competition and in the social media initiative. But it wasn’t until 2019 that I had the honor of being part of the Administrative Committee (AdCom), joining a great group of fellow scientists and engineers. It is a pleasure to volunteer and give back to the society that organizes OCEANS and so many other workshops and keeps the quality of scientific output high through the Journal of Oceanic Engineering.
In the past year, I became involved in the OCEANS Reconnaissance Committee (RECON) with a particular focus on finding potential European venues. I initiated contact with the University of Limerick to organize OCEANS’23 for which I will serve as OES Liaison. I recently also began a supervisory role of OCEANS tutorials. In this new position, I am guiding the local tutorials chairs in all the phases of organizing the tutorials. We are currently trying new models for the tutorials including free registration (for attendees already registered for OCEANS) and making sure that the content remains relevant and popular. Most recently, I volunteered to join the Membership Development Committee to help my colleagues keep up the excellent work done for attracting students and young professionals. I have some proposals for promoting early career professionals and keeping students engaged after graduation. At the same time, I am part of the Autonomous Marine Systems Technical Committee. Within this committee, I am mainly involved in marine robotics competitions around the world, such as the Singapore AUV Challenge, the European Robotics League, and RobotX. You can always find me at one of those challenges, at any OCEANS, UComms or at the annual Breaking the Surface workshop.
Breaking the Surface (BTS) is an interdisciplinary workshop that gathers practitioners in the field of marine robotics and its applications (archeology, biology, security, and geology). BTS is a very special workshop for me. Not only because I have attended every single edition (10 years in a row), or because I have performed sea trials and demos there several times. Nor just because OES has been a sponsor of this event since 2019, with plans to expand it soon to other geographic areas. But ultimately, because I met my fiancée there at the 2016 edition! We are now spending the quarantine together and trying to plan a wedding during these strange times of COVID-19. It's not easy, but the most important thing is to be safe! My life has been an incredible journey and I hope it continues. I would be delighted to help you get more involved.
% DOMAIN CityID Int
% DOMAIN CityName String
% DOMAIN CountryCode ID
% TABLE City CityID CityName CountryCode District
% TABLE Country CountryCode CountryCode2 CountryName
% TABLE Capital CountryCode CityID
% TABLE Language CountryCode Language IsOfficial
% ...
Country ABW AW [Aruba]
Country AFG AF [Afghanistan]
Country AGO AO [Angola]
Country AIA AI [Anguilla]
Country ALB AL [Albania]
Country AND AD [Andorra]
Country ANT AN [Netherlands Antilles]
Country ARE AE [United Arab Emirates]
Country ARG AR [Argentina]
Country ARM AM [Armenia]
...
WSL is a clean and practical plain text format for relational data. It comprises a schema language and a notation for typed database tuples with schema-supported lexical syntax.
It is accessible to format-agnostic text utilities like grep/sed/awk as well as to specialized tools that understand the format and can take advantage of the schema information. Due to its simplicity it is also amenable to tooling, for example logical or hierarchical query languages.
Here is an example WSL database.
The current specification.
A Python library.
The relational model was popularized by Edgar Codd in the 1970s. The message was: for persistent data, hierarchical representations are often a poor choice; flat tables are often superior.
Why? Hierarchical structures can easily be emulated with flat database tuples that reference each other. Hierarchical representations, on the other hand, are just transformations of relational data that are opinionated about the access path through which information should be extracted from the database: one has to start at the top of a fixed hierarchy and navigate all the way to the bottom. Take, for example, a list of Employers, each containing a list of its Employees. What if one wants to start from a given Employee and find all of that Employee's Employers? Too bad: the hierarchies don't start with Employees, so one has to go through all Employers and check whether the Employee is there, writing a ton of ad-hoc code (which breaks easily when the representation changes).
Similarly, in a nested hierarchy it is difficult to define the structure of what goes where, and which references resolve where. (Maybe the only somewhat popular hierarchical schema language is XML DTD, but it is not easy to use.) If, however, we constrain data modelling to flat tables, it is easy to express and implement integrity constraints in a simple schema language.
The relational model is closely connected to logic programming. It is a restricted version of first-order logic: each table in the schema corresponds to a predicate, and the database is a universe in which a predicate holds for a given tuple if and only if that tuple exists in the corresponding table.
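As an illustration (not part of the WSL specification), the predicate view can be sketched in plain Python, with each table modeled as a set of tuples; the sample rows are invented:

```python
# Each table is a set of tuples; a predicate holds exactly for the
# tuples present in its table. (Sample data is illustrative only.)
Country = {("ABW", "AW", "Aruba"), ("AFG", "AF", "Afghanistan")}
Capital = {("ABW", 129), ("AFG", 1)}

def country(code, code2, name):
    """True if and only if the tuple exists in the Country table."""
    return (code, code2, name) in Country

# A conjunctive query in the spirit of first-order logic:
# "city ids of capitals of countries whose two-letter code is AW"
result = [city for (code, code2, name) in Country
               for (ccode, city) in Capital
               if code2 == "AW" and ccode == code]
```

Any relational query engine does essentially this, just with indexes instead of nested loops.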
WSL is a notation for relational data (flat tables). It does not directly serialize hierarchical data like JSON. However, much hierarchical data found in practice would better be represented as relational data. (See "Relational model").
JSON lacks a schema language, and as a consequence it cannot provide a lexical syntax that is both convenient and canonical ("pick one"), and it cannot support data integrity. It also offers only a few datatypes. In conclusion, it does not offer much support for data modelling. Its success came mostly from mapping easily to the basic built-in types of most dynamic languages.
S-Expressions are somewhat similar to JSON, but less widely used.
The situation is different with XML / DTD / XSLT, which do provide support for well-formedness beyond syntax. But they are syntactically and conceptually heavy.
CSV is the most widespread format for storing relations as text. It is kind-of-portable, and, thanks to its simplicity, immensely popular and supported virtually everywhere. It has a number of shortcomings, though. Compared to CSV, WSL offers
The example database was converted manually from this sample database, which is a (probably not very portable) MySQL dump. (I haven't bothered to convert the JSON data in the CountryInfo table). It is freely available from the MySQL website. This is the kind of data WSL was designed for.
A comparison to dump files might appear silly, but it illustrates WSL's design goals. The conversion was done with tedious regular expressions and manual selection and editing. Conversely, it should be easy to re-create an SQL file from the WSL database with vim or grep + sed.
One design decision was to encourage many tables with few columns, instead of few tables with many columns as is common with SQL and big datasets / heavy database servers.
There are good reasons for this
One way in which WSL encourages fewer columns is making column names optional. The schema designer is encouraged to communicate meaning of data only through table names and their columns' types. This is supported by the separation of the concepts of datatypes (which can't be used directly as columns) and domains (which have to be declared as "instances" of datatypes with optional parameterization).
Take as an example the definition of the City table.
% DOMAIN CityID Integer
% DOMAIN CityName String
% DOMAIN District String
% DOMAIN Population Integer
% TABLE City CityID CityName CountryCode District Population
Here we first create new meaningful domains from available datatypes. Now the meanings of a table's columns are clear from their domains and from the table name. Having many distinct domains is also really useful to avoid comparing apples with oranges in logic queries.
Only rarely is this approach problematic: when a table has two columns of the same datatype and it is not clear which column has what meaning in the relation. On the other hand, anybody who has written their share of SQL joins knows how painful it is to rename columns for each intermediate table because either the names clash or the context has changed and the name is no longer appropriate.
Another means to keep the number of columns low is omitting (first class) NULL-able columns. The practical effects of this are illustrated by the conversion of the example SQL database to WSL. The Country table had to be split in two. The SQL version
CREATE TABLE `Country` (
  `Code` char(3) NOT NULL DEFAULT '',
  `Name` char(52) NOT NULL DEFAULT '',
  `Capital` int(11) DEFAULT NULL,
  `Code2` char(2) NOT NULL DEFAULT ''
);
was translated to
% TABLE Country CountryCode CountryCode2 CountryName
% TABLE Capital CountryCode CityID
(integrity constraints omitted). In this way the number of columns per table was reduced, normalization was improved, and the need for a NULL-able column went away.
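To see how the split recovers the NULL-able column, here is a small Python sketch (sample rows invented): joining Capital back onto Country reintroduces the "no capital" case as an absence rather than a stored NULL.

```python
# Country rows carry no capital; the Capital table has a row only for
# countries that actually have one. (Sample data is illustrative.)
Country = {("ABW", "AW", "Aruba"), ("ATA", "AQ", "Antarctica")}
Capital = {("ABW", 129)}  # no row for Antarctica: nothing to NULL out

capital_of = dict(Capital)

# A left join reintroduces the optional value only at query time:
rows = [(code, name, capital_of.get(code))
        for (code, code2, name) in Country]
```

The None appears only at the query boundary; it is never stored in either table.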
package be.ugent.vopro1.bean;
/**
* Provides a representation of a persistent Task.
*
* @see be.ugent.vopro1.persistence.jdbc.postgresql.ScheduleDAOImpl
*/
public class PersistentTask {
private long workload;
private int priority;
private int useCaseId;
/**
 * Creates a new PersistentTask with the given values. This should only
 * be used in DAO classes, nowhere else!
 *
 * @param useCaseId The id of this task's usecase
 * @param workload The workload of this task
 * @param priority The priority of this task
 */
public PersistentTask(int useCaseId, long workload, int priority) {
this.useCaseId = useCaseId;
this.workload = workload;
this.priority = priority;
}
/**
* A getter for the workload of this task
*
* @return the workload of this task in seconds
*/
public long getWorkload() {
return workload;
}
/**
* A getter for the priority of this task
*
* @return The priority of this task
*/
public int getPriority() {
return priority;
}
/**
* A getter for this task's usecase
*
* @return The identifier of this task's usecase
*/
public int getUseCaseId() {
return useCaseId;
}
/**
 * Returns a copy of this task with the given workload.
 *
 * @param workload workload, in seconds, to set
 * @return a new PersistentTask with the updated workload
 */
public PersistentTask workload(long workload) {
    return new PersistentTask(this.useCaseId, workload, this.priority);
}
/**
 * Returns a copy of this task with the given priority.
 *
 * @param priority priority to set
 * @return a new PersistentTask with the updated priority
 */
public PersistentTask priority(int priority) {
    return new PersistentTask(this.useCaseId, this.workload, priority);
}
/**
 * Returns a copy of this task with the given usecase identifier.
 *
 * @param useCaseId UseCase identifier to set
 * @return a new PersistentTask with the updated usecase identifier
 */
public PersistentTask useCaseId(int useCaseId) {
    return new PersistentTask(useCaseId, this.workload, this.priority);
}
/**
* Provides a Builder for {@link PersistentTask}.
*
* @see PersistentTask
*/
public static class PersistentTaskBuilder {
private long workload;
private int priority;
private int useCaseId;
private PersistentTaskBuilder() {
}
/**
* Creates a new PersistentTaskBuilder.
*
* @return A new PersistentTaskBuilder
*/
public static PersistentTaskBuilder aPersistentTask() {
return new PersistentTaskBuilder();
}
/**
* Sets the usecase identifier.
*
* @param useCaseId UseCase identifier to set
* @return the builder
*/
public PersistentTaskBuilder useCaseId(int useCaseId) {
this.useCaseId = useCaseId;
return this;
}
/**
* Sets the workload
*
* @param workload workload to set in seconds
* @return the builder
*/
public PersistentTaskBuilder workload(long workload) {
this.workload = workload;
return this;
}
/**
* Sets the priority
*
* @param priority priority to set
* @return the builder
*/
public PersistentTaskBuilder priority(int priority) {
this.priority = priority;
return this;
}
/**
* Copies the builder for slightly differing instances.
*
* @return a new PersistentTaskBuilder with the same values as the
* current one
*/
public PersistentTaskBuilder but() {
return aPersistentTask().useCaseId(useCaseId).workload(workload).priority(priority);
}
/**
* Creates a PersistentTask with the current values.
*
* @return PersistentTask with the current values
*/
public PersistentTask build() {
return new PersistentTask(useCaseId, workload, priority);
}
}
}
M: Selling my Android game: week 1 - bendmorris
http://www.bendmorris.com/2011/08/selling-my-android-game-week-1.html
R: TillE
Great stuff, thanks for sharing. If you've done zero advertising, those are
completely decent results. 130+ people trying your game in a week is pretty
cool.
"The full version has had some very modest success, but I've realized that I
need to add more value to convince regular players of the Lite version to
convert to paying customers."
Yeah. As far as I can tell, the only major difference between the two versions
is that more human players can participate. I haven't done any market
research, but I imagine the vast majority of people playing a game on a phone
are playing alone.
Building up the full version with a campaign is a good idea, but I'd seriously
consider stripping down the lite version as well. Ideally, it should be a demo
that teaches players the game and gives them just enough to get them wanting
more. Maybe limit the number of creatures available.
R: wccrawford
One good point in there: The data collection can be quite worthwhile. It's not
something I would have put in, and I'm glad this post mentioned it.
R: alohahacker
thanks for sharing the numbers!! always interesting
I installed OpenSuse 13.1 in dual boot on my Samsung ATIV Book 6 pre-installed with Windows 8 (with graphic card: Radeon HD 8850M) with a usb key (ultrabook, no dvd reader atm). I am having trouble with the graphic card drivers.
just fyi, I am very new to the linux world, i have tried for a month and a half to install Ubuntu with graphic proprietary drivers on my previous macbook pro, gave up, sold it, got this new laptop, tried once again ubuntu, could not even get the live cd working and finally I managed to get Open Suse installed, and this time I intend to finish it :)! However I have several issues still unsolved, and after many unsuccessful searches I decided to ask for your help here.
Current issues I am stuck with:
The resolution is stuck at 800x600.
I had to set nomodeset in the boot load configuration both to start the live cd and to start the installed os afterwards.
I tried to install the proprietary driver fglrx following every procedure indicated in http://en.opensuse.org/SDB:AMD_fglrx, but once it was installed, I could not load the GUI anymore. And I am too uneasy with the zypper command to restore the default graphic drivers (it seemed easier with apt-get in Ubuntu; would you have a tutorial to share on restoring default headers/drivers through the command line once we have messed it all up?), so every time I had to reinstall with the live CD to restore my OS.
I would like at least to use the radeon driver as described in http://en.opensuse.org/SDB:Radeon, but "modprobe radeon" returns "FATAL: Error inserting radeon (/lib/modules/3.11.6-4-desktop/kernel/drivers/gpu/drm/radeon/radeon.ko): Invalid argument". Any hint/procedure on installing the radeon driver from scratch?
I figured out that I could get a normal resolution if I went through samsung boot loader first. How to get it permanently fixed?
I went through the following procedure: press F10 when the laptop starts to choose the partition to boot from, then choose openSUSE in GRUB2 (the GRUB2 UI appears in a higher resolution than if I let the laptop boot normally through GRUB2), and voila! The resolution is correct, but once I restart, I lose it.
Brightness is too low and the keyboard Fn keys do not work: the volume and touchpad-enable keys work, but not brightness or keyboard backlight. I tried the commands xrandr and xbacklight, as advised in http://stackoverflow.com/questions/6625836/how-to-change-the-monitor-brightness-on-linux, but nothing happens.
M: Kernel-Bypass Networking - bbowen
https://www.godaddy.com/engineering/2019/12/10/Kernel-Bypass-Networking/
R: k_sze
Couldn't read that in Hong Kong because GoDaddy automatically redirects me to
the hk.godaddy.com domain, which doesn't have that article.
Update: If you are like me and can't see the article at your localised
GoDaddy.com website, you can (hopefully) select United States at the bottom of
the page to force GoDaddy to serve you the US site.
R: londons_explore
How important is it to bypass the kernel if the kernel doesn't get to
see/handle each packet individually?
As soon as you get the hardware to handle TCP reassembly and just wake the
kernel up once per few megabytes of data sent/received, things scale well
again.
There's work to do though - there are no systems around today that I'm aware
of which can send data from SSD to a TCP socket (common use case for cache
server) without the data itself going through the CPU (despite most chipsets
allowing the network card to be sent data directly from a PCIE-connected SSD).
R: nine_k
I suppose that CPU would schedule a DMA transfer from SSD to RAM, and then
from RAM to a NIC.
For NICs that have memory-mapped buffers, it could be just one transfer.
What am I missing?
R: londons_explore
Nothing. That's how it would work. But today's Linux kernel doesn't do that
(even when you use the sendfile() API)
R: navaati
> But today's Linux kernel doesn't do that (even when you use the sendfile()
> API)
Oh ? I'm quite disappointed, I thought that was the whole point of sendfile()
! What does it do if not that ?
R: toolslive
simple data transfer is : SSD -> kernel -> user-space -> kernel -> NIC
sendfile allows for: SSD -> kernel -> NIC
which is already a major improvement
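(For the curious: the sendfile() shortcut is easy to poke at from Python. This is just an illustration of the file -> socket path on Linux, not of the SSD -> NIC case discussed above.)

```python
import os
import socket
import tempfile

payload = b"x" * 4096

with tempfile.TemporaryFile() as f:
    f.write(payload)
    f.flush()
    # sendfile() pushes file bytes into the socket inside the kernel,
    # skipping the read()/write() round trip through userspace buffers.
    out_sock, in_sock = socket.socketpair()
    sent = os.sendfile(out_sock.fileno(), f.fileno(), 0, len(payload))

    chunks, received = [], 0
    while received < sent:
        chunk = in_sock.recv(65536)
        if not chunk:
            break
        chunks.append(chunk)
        received += len(chunk)
    data = b"".join(chunks)
    out_sock.close()
    in_sock.close()
```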
R: navaati
Right, silly me, I had forgotten userspace :). Thanks !
R: the8472
> Typically, an application using BSD uses system calls to read and write data
> to a socket. Those system calls have overhead due to context switching and
> other impacts.
On the other hand the kernel isn't standing still, overhead reductions have
been trickling in over decades. sendfile, epoll, recv-/sendmmsg, all the
multi-queue stuff, kTLS with hardware offload, io_uring, p2p-dma. The C10k
problem was tackled in 1999, userspace APIs can get you much further today.
R: ra1n85
The current approach is fundamentally not going to work in the long term.
100Gbps at line rate means single digits[1] of nanoseconds between frames. At
that frequency, a cache miss is pretty bad.
This is all not to mention locks, or that there are competing functions
running in most distros (turn off irqbalance completely and watch your
forwarding rate increase).
The low hanging fruit seems to have been picked as well - NAPI polling,
interrupt coalescing, RSS + multique NICs + SMP, etc, are already out there,
and we're still struggling to do 10G line rate in the Kernel...and data
centers are moving quickly to 25/100G.
[1] Edited for terrible math - 10Gbps at line rate is 67ns per packet, 100Gbps
is 6.7ns
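For reference, the arithmetic behind those figures (84 bytes on the wire for a minimum-size frame, counting preamble and interframe gap):

```python
# Minimum Ethernet frame as seen on the wire:
# 64B frame + 8B preamble + 12B interframe gap = 84B = 672 bits.
WIRE_BITS = 84 * 8

def ns_per_frame(gbps):
    # bits divided by gigabits-per-second comes out in nanoseconds
    return WIRE_BITS / gbps

ten_g = ns_per_frame(10)       # 67.2 ns per minimum-size frame
hundred_g = ns_per_frame(100)  # 6.72 ns per minimum-size frame
```

which also matches the ~148.8 Mpps minimum-size figure quoted later in the thread (1e9 / 6.72).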
R: danceparty
We are not struggling to do line rate 10G in the kernel. Modern 100Gbe nics
(mellanox, solarflare) will happily do line rate with stock upstream kernel
for a while now (definitely since 4.x) you only need to tune your irq
balancing, and you can probably get away with not even doing that. If you are
buying 100gbe nics you are also buying server class (xeon, rome) processors
that can keep up.
Source: I operate a CDN with thousands of 100Gbe nics with a stock upstream
LTS kernel, and minimal kernel tuning.
R: ra1n85
You're saying you can forward 100Gbps at line rate (148MPPS) through a stock
kernel?
R: danceparty
You can get within a few percentage points, yes
I just tested this with two hosts with 4.14.127 upstream kernel and upstream
mlx5 driver, and mellanox connectx-5 card. Using 16 iperf threads
[SUM] 0.0-10.0 sec 85.1 Gbits/sec
That's pretty close with no tuning, and well beyond 10gb/s we mentioned
earlier
R: ra1n85
16 iperf threads...sending at what packet size? Do you understand the notion
of line rate? 85Gbps at 1500B is only 7MPPS, which is half of 10Gbps at line
rate.
R: Dylan16807
Where are you getting your definitions? I have never seen "line rate" used to
refer to packets per second.
R: ra1n85
It implies it. Ethernet at 84B per frame is the smallest you can go, thus the
line rate - some examples:
https://events19.linuxfoundation.org/wp-content/uploads/2017/12/jim-Thompson.pdf
https://www.redhat.com/en/blog/pushing-limits-kernel-networking
https://kernel-recipes.org/en/2014/ndiv-a-low-overhead-network-traffic-diverter/
R: Dylan16807
"How do you fill a 100GBps pipe with small packets?"
"achieve 10 Gbps line rate at 60B frames"
"reaching line rate on all packet sizes"
Line rate is just bits per second. You have to add in a qualifier about packet
size before you're talking about packets per second.
R: ra1n85
Nope, I'm sorry you're not quite getting it here. Minimum Ethernet frame is
84B on the wire - it's simple enough from there.
R: big_chungus
I've never heard this weird qualification for the definition of "line rate"
that it somehow requires minimum packet size, so I looked it up. The first
three sources for a quoted big-g search all imply or directly state that it's
the same as bandwidth:
[https://blog.ipspace.net/2009/03/line-rate-and-bit-
rate.html](https://blog.ipspace.net/2009/03/line-rate-and-bit-rate.html)
[https://www.reddit.com/r/networking/comments/4tk2to/bandwidt...](https://www.reddit.com/r/networking/comments/4tk2to/bandwidth_vs_line_rate_vs_throughput_vs/)
[https://www.fmad.io/blog-what-is-10g-line-
rate.html](https://www.fmad.io/blog-what-is-10g-line-rate.html)
Also, for gigabit networks, ethernet packets are padded to at least 512 bytes
because of a bigger slot size:
[https://www.cse.wustl.edu/~jain/cis788-97/ftp/gigabit_ethern...](https://www.cse.wustl.edu/~jain/cis788-97/ftp/gigabit_ethernet/index.html)
R: Hikikomori
Line rate does imply pps at the smallest sized frames in the context of
networking equipment performance. Vendors use it extensively in their docs.
64B is the minimum frame size in Ethernet; including the interframe gap and
preamble, it's 84B on the wire. It is the same with Ethernet, Gigabit Ethernet,
and even 100Gbit Ethernet, so that source is not correct.
[https://kb.juniper.net/InfoCenter/index?page=content&id=KB14...](https://kb.juniper.net/InfoCenter/index?page=content&id=KB14737)
R: bogomipz
No, line rate does not "imply pps at the smallest sized frames."
Network hardware vendors always quote PPS using the smallest sizes, and this
makes sense for things like route and switch processors. Perhaps that is what
you are confusing it with.
You should reread your link a little more carefully. From your link:
">However it is also important to make sure that the device has the capacity
or the ability to switch/route as many packets as required to achieve wire
rate performance."
The key phrase there is "as required." Almost nobody needs to sustain
forwarding Ethernet frames with empty TCP segments or empty UDP datagrams in
them. In fact many vendors will spec for an average size. Since packet size x
PPS will give you your throughput, if the average packet size is larger you
need much less PPS to achieve line rate.
R: benou
Disclaimer: I work on VPP.
The typical use cases are virtual network functions: think virtual
switches/routers used to interconnect VMs or containers, or containerized VPN
gateways. It is also used for high-performance L3-L4 load balancers.
As pointed out by others, what is hard is to move small packets. TCP with
iperf is not relevant for this kind of workload. It is easy to max out 100GbE
with 1500-byte packets, but with 200-byte packets, not so much. This is why
they communicate about PPS, not bandwidth.
These results seem low, but it is hard to tell without knowing the platform or
configuration. VPP can sustain 20+ Mpps per core (2 hyperthreads) on Skylake
@2.5GHz (no turbo boost).
R: GhettoMaestro
VPP is amazing - you made my work life much easier... for free :-). Great
work. More people should dig into high-perf open source networking.
Thank you!!
R: Thorentis
I wonder what motivated GoDaddy to research this at all, since at the end they
say that it isn't necessary to pursue their research any further. Driving
tech-minded traffic with blog posts?
R: angry_octet
Have you seen how much a router costs? And probably they don't need to route
any faster.
R: pjmlp
This was already an issue back at the beginning of the century at CERN, to
handle high data rates.
Here is a relatively recent paper of the kind of work being done in this area,
[https://iopscience.iop.org/article/10.1088/1748-0221/8/12/C1...](https://iopscience.iop.org/article/10.1088/1748-0221/8/12/C12039/pdf)
R: anonymousDan
What are the disadvantages of kernel bypass?
R: parliament32
You don't get kernel features anymore, and have to re-implement the ones you
need yourself.
R: ra1n85
Correct - things you take for granted like ARP and TCP/IP are completely up to
you to take care of. Further, the Kernel has little visibility into most
kernel bypass stacks, so /proc or iproute are often blind to what's happening.
DPDK does have "kernel interfaces", so you can direct packets to the kernel.
R: gnufx
I wonder why this sort of thing seems to be thought radical. Infiniband RDMA is
well established, with low latency and high bandwidth. (There was DMA between
the micro-kernel-ish systems we used in the 1980s which got basically Ethernet
line speed on <1 MIPS systems, as I recall; I assume it wasn't a new idea
then.)
R: StillBored
and fibrechannel, and various other protocols too. The point being that
ethernet+IP/TCP is uniquely poor/difficult at offload and the minimum packet
sizes are tiny.
TCP is genius for a WAN, but unlike most things designed in the past 25 or so
years, robustness precedes performance.
R: gnufx
Kernel bypass isn't the same thing as offload. I don't understand "minimum
packet sizes are tiny". The 1980s system was driving Ethernet, just not with
Unix/sockets.
R: brian_herman__
Berkeley not Berkely
R: sidpatil
[https://www.youtube.com/watch?v=pKoK9znaPSw](https://www.youtube.com/watch?v=pKoK9znaPSw)
|
HACKER_NEWS
|
package com.atexpose.api;
import com.atexpose.api.data_types.DataTypeEnum;
import com.google.common.collect.ImmutableList;
import io.schinzel.basicutils.state.State;
import org.junit.Test;
import java.util.Collections;
import static org.assertj.core.api.Assertions.assertThat;
public class MethodArgumentsTest {
private MethodArguments getThreeArguments() {
Argument argument1 = Argument.builder()
.name("arg1")
.dataType(DataTypeEnum.STRING.getDataType())
.defaultValue("my_default_value")
.build();
Argument argument2 = Argument.builder()
.name("arg2")
.dataType(DataTypeEnum.INT.getDataType())
.defaultValue("1234")
.build();
Argument argument3 = Argument.builder()
.name("arg3")
.dataType(DataTypeEnum.BOOLEAN.getDataType())
.defaultValue("true")
.build();
ImmutableList<Argument> arguments = new ImmutableList.Builder<Argument>()
.add(argument1)
.add(argument2)
.add(argument3)
.build();
return MethodArguments.create(arguments);
}
@Test
public void size_NullArgumentList_0() {
int size = MethodArguments.create(null).size();
assertThat(size).isZero();
}
@Test
public void size_EmptyArgumentList_0() {
int size = MethodArguments.create(Collections.emptyList()).size();
assertThat(size).isZero();
}
@Test
public void size_3ArgumentList_3() {
int size = this.getThreeArguments().size();
assertThat(size).isEqualTo(3);
}
@Test
public void getCopyOfArgumentDefaultValues_3Arguments_DefaultValuesInCorrectDataTypes() {
Object[] copyOfArgumentDefaultValues = this.getThreeArguments().getCopyOfArgumentDefaultValues();
assertThat(copyOfArgumentDefaultValues)
.containsExactly("my_default_value", 1234, true);
}
@Test
public void getState_3Arguments_Arguments() {
State state = this.getThreeArguments().getState();
assertThat(state.getString()).contains("Arguments");
}
}
|
STACK_EDU
|
To learn more about constants, check out Python Constants: Improve Your Code’s Maintainability.
Constants of the math Module
00:18 The value of pi is around 3.141592. It’s an irrational number, and so its digits continue on forever with no predictable pattern. The value of pi is a sort of universal constant in the sense that if you take any circle and you take its circumference and divide it by the diameter, you’re always going to get the value of pi. Now, in Python 3.6, the constant tau was introduced, and this is the value of 2 times pi. Perhaps the next most famous constant in mathematics is Euler’s number.
We’ll take a look at an example involving decay in a future lesson. Then there are another two constants in the
math module that are not technically numerical values, but more of conceptual values.
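Before moving on to those, the numerical constants mentioned so far can all be checked directly in a Python session:

```python
import math

print(math.pi)                   # 3.141592653589793
print(math.tau == 2 * math.pi)   # True: tau is exactly twice pi (Python 3.6+)
print(math.e)                    # 2.718281828459045, Euler's number
```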
The first one is the infinity constant, which is denoted by inf. The
inf constant is there to encapsulate the mathematical concept of something that’s boundless or never-ending. Sometimes in an algorithm, what you’ll want to do is compare a given value to some sort of absolute maximum or minimum, and this is where
inf comes in.
The other conceptual type of constant is nan, which is there to represent the idea of Not a Number. This comes up sometimes when you’re doing a numerical computation and maybe your data gets corrupted in some way, or you do an invalid mathematical computation—like, say, dividing by zero—then a lot of programming languages will return a value of
nan, or Not a Number.
03:24 So the area using 3.14—about 78.5, which is just this value rounded to the first decimal place. Now let’s suppose that you needed to buy a lot of these sheets, and the price that you need to pay per square foot of the sheet is, say, 39 dollars and 49 cents.
Let’s define the cost per square foot of the sheet, 39 dollars and 49 cents. So if you compute the cost of the sheet using the cost per square foot times the area of the sheet obtained using the built-in
pi constant from the
math module, and if you compute the cost using, again, the cost per square foot—but this time using the area computed by using just
3.14—the difference now is 1 dollar and 57 cents. Not a huge difference, but already we see that there’s a non-trivial difference when using an approximation to pi of 3.14 instead of using the value that comes with the
math module. Now, things will get worse if, for example, you have a lot of sheets to buy.
If you needed to buy 10,000 of these sheets, then the cost that you’re underestimating in what you’d have to pay to buy all 10,000 sheets is $15,723. So this is just a quick example of why you would want to use the built-in constant
pi in the
math module if you were going to do any type of computations involving pi.
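The arithmetic in this example is easy to reproduce. A short sketch, assuming a circular sheet with a 5-foot radius (which matches the ~78.5 square foot area above):

```python
import math

radius = 5                # feet; assumed from the ~78.5 sq ft area in the example
cost_per_sq_ft = 39.49    # dollars

area_accurate = math.pi * radius ** 2   # 78.539816...
area_approx = 3.14 * radius ** 2        # 78.5

diff_per_sheet = cost_per_sq_ft * (area_accurate - area_approx)
print(round(diff_per_sheet, 2))         # 1.57 dollars underestimated per sheet
print(round(diff_per_sheet * 10_000))   # 15723 dollars across 10,000 sheets
```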
Let’s now take a look at
inf. I’ll let you explore
nan on your own time. So,
inf, again, is the infinity constant. The
inf constant in the
math module was introduced as an equivalent to the
float('inf') value. So in the
float constructor, if you pass in a value of
'inf'—this is going to be equivalent to the math.inf constant.
If you multiply the
math.inf value by -1, then you’re going to get the concept of negative infinity, and this is less than the largest negative float value that you can store. And so in this case, again, you’re going to get True.
Now, to make it clear that this
math.inf constant is not really a numerical value but more like a concept, if you add to
math.inf a value, say, of
1, and you ask whether that is greater than
math.inf itself, you’re going to get
False. So again, the idea there is that
math.inf is not there to really represent a numerical value, but more like a concept of boundlessness, or “without end.”
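A few quick checks illustrate these conceptual values:

```python
import math

print(float('inf') == math.inf)   # True: the two are equivalent
print(math.inf + 1 == math.inf)   # True: adding 1 to infinity changes nothing
print(math.inf + 1 > math.inf)    # False: so inf + 1 is not greater than inf
print(-1 * math.inf < -1e308)     # True: negative infinity is below any float
print(math.nan == math.nan)       # False: nan never compares equal, even to itself
print(math.isnan(math.nan))       # True: the reliable way to test for nan
```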
|
OPCFW_CODE
|
The dumb reason your fancy Computer Vision app isn’t working: Exif Orientation
I’ve written about lots of computer vision and machine learning projects like object recognition systems and face recognition projects. I also have an open source Python face recognition library that is somehow one of the top 10 most popular machine learning libraries on Github. Together, that means that I get asked a lot of questions from people new to Python and computer vision.
In my experience, there is one technical problem that trips people up more often than any other. No, it’s not a complicated theoretical issue or an issue with expensive GPUs. It’s the fact that almost everyone is loading their images into memory sideways without even knowing it. And computers are less than excellent at detecting objects or identifying faces in sideways images.
How Digital Cameras Auto-Rotate Images
When you take a picture, the camera will sense which end you have tilted up. This is so the picture will appear in the correct orientation when you look at it again in another program:
But the tricky part is that your camera doesn’t actually rotate the image data inside the file that it saves to disk. Because image sensors inside digital cameras are read line-by-line as a continuous stream of pixel information, it’s easier for a camera to always save the pixel data in the same order no matter which way the camera was held.
It’s actually up to the image viewer application to rotate the image correctly before displaying it. Along with the image data, your camera also saves metadata about each picture — lens settings, location data, and of course, the camera’s rotation angle. The image viewer is supposed to use this information to display the image correctly.
The most common format for image metadata is called Exif (short for Exchangeable image file format). The Exif-formatted metadata is shoved inside the jpeg file that your camera saves. You can’t see Exif data as part of the image itself, but it is readable by any program that knows where to look for it.
Here’s the Exif metadata inside our Goose jpeg image as displayed by
Notice the ‘Orientation’ data element. This tells the image viewer program that the image needs to be rotated 90 degrees clockwise before being displayed on screen. If the program forgets to do this, the image will be sideways!
Why does this break so many Python Computer Vision Applications?
Exif metadata is not a native part of the Jpeg file format. It was an afterthought taken from the TIFF file format and tacked onto the Jpeg file format much later. This maintained backwards compatibility with old image viewers, but it meant that some programs never bothered to parse Exif data.
Most Python libraries for working with image data like numpy, scipy, TensorFlow, Keras, etc, think of themselves as scientific tools for serious people who work with generic arrays of data. They don’t concern themselves with consumer-level problems like automatic image rotation — even though basically every image in the world captured with a modern camera needs it.
This means that when you load an image with almost any Python library, you get the original, unrotated image data. And guess what happens when you try to feed a sideways or upside-down image into a face detection or object detection model? The detector fails because you gave it bad data.
You might think this problem is limited to Python scripts written by beginners and students, but that’s not the case! Even Google’s flagship Vision API demo doesn’t handle Exif orientation correctly:
And while Google Vision still manages to detect some of the animals in the sideways image, it detects them with a non-specific “Animal” label. This is because it is a lot harder for a model to detect a sideways goose than an upright goose. Here’s what Google Vision detects if the image is correctly rotated before being fed into the model:
With the correct image orientation, Google detects the birds with the more specific “Goose” label and a higher confidence score. Much better!
This is a super obvious problem if you can see that the image is sideways, like in this demo. But this is where things get insidious: normally you can't see it! Every normal program on your computer will only display the image in its properly rotated form instead of how it is actually stored sideways on disk. So when you try to view the image to see why your model isn't working, it will be displayed the right way and you won't know why your model isn't working!
This inevitably leads to people posting issues on Github complaining that the open source projects that they are using are broken or the models aren’t very accurate. But the problem is so much simpler — they are feeding in sideways and/or upside-down images!
Fixing the Problem
The solution is that whenever you load images in your Python programs, you should check them for Exif Orientation metadata and rotate the images if needed. It’s pretty simple to do, but surprisingly hard to find examples of code online that does it correctly for all orientations.
Here is code to load any image into a numpy array with the correct rotation applied:
From there, you can pass the array of image data to any standard Python ML library that expects arrays of image data, like Keras or TensorFlow.
Since this comes up so often, I published this function as a library on pip called image_to_numpy. You can install it like this:
pip3 install image_to_numpy
You can use it in any Python program to load an image correctly, like this:
import matplotlib.pyplot as plt
import image_to_numpy

# Load your image file
img = image_to_numpy.load_image_file("my_file.jpg")

# Show it on the screen (or whatever you want to do)
plt.imshow(img)
plt.show()
Check out the readme file for more details.
If you liked this article, consider signing up for my Machine Learning is Fun! Newsletter:
|
OPCFW_CODE
|
(!) Available in the Enterprise plan
Salto allows you to leverage your git provider by automatically attaching a pull request to deployment records. These pull requests capture all of the planned configuration changes in Salto deployments and allow you to:
Comment, discuss and get approval from colleagues about the planned configuration changes
Connect your preferred CI/CD tool in order to automate validations and deployments. You can further integrate any other external tool for additional use cases, e.g. static code analysis tools.
Push and deploy additional NACL edits made outside of the Salto platform
Integrating pull requests with Salto deployments
Salto can be configured to automatically create a pull request for any deployment record targeting a certain environment.
In order to enable this behavior, you will first need to:
Connect a git provider with your Salto org
Connect your environment to a git repository and branch. To do so, go to the Audit tab at the top and click on the ‘Connect Git’ CTA
Once the above 2 steps are completed, please navigate to the relevant environment’s Settings tab and enable the ‘Require Pull Requests’ toggle.
Pull requests are not supported with on-prem git providers
Using the Salto CLI for CI/CD Automation
Having a PR that captures your deployments’ information opens the door for CI/CD automated jobs that are triggered upon creation and / or changes to the PR. These automated jobs can leverage Salto’s CLI in order to automate deployment preview checks, validation runs and deployment / promotion of changes.
Setup instructions and complete CLI interface can be found here.
Injecting CI Configuration Files
In order to trigger CI jobs upon PR creation / modification, the git branches should include configuration files for the CI tool that is being used.
To inject your CI config files into the branches created by Salto:
Create a branch in the same repository in which the PRs are created and push any CI configuration files.
From the environment Settings tab, under the
Require Pull Request toggle, click on
Link the location of your CI configuration files and specify the branch that holds your CI configuration files
CI Configuration File Examples
CI configuration file examples for various git providers can be found here
Deploying additional edits using Pull Requests
Users can also use the created pull request to make additional edits to the deployed elements. This can be used to perform advanced editing on your files - for example, replacing a value in many files, or introducing a large amount of new elements created by some internal script.
To include external edits in your Salto deployment:
Clone the git repository to some local repo
Check out the PR "after" branch
Perform your edits directly on the NACL files
Commit and push these edits to your remote repo
Salto will automatically detect incoming changes, and will ask the deployer to pull these commits into the deployment.
When committing additional changes to Pull Requests, avoid force-pushing your commits as that may prevent Salto from pulling these changes properly.
|
OPCFW_CODE
|
Welcome to Organization Science in 2023
It is my great pleasure to be entrusted with the editorship of Organization Science and to have the opportunity to work with a fantastic team of authors, reviewers, and editors. I am grateful for the leadership of the editors before me and hope to build on their tremendous efforts during arguably the most difficult period we’ve ever seen. Organization Science has always been an important part of our research community, and I hope to advance the journal in ways that continue to expand and refine our knowledge on theory, phenomena, and policy. I wanted to (sort of) briefly provide everyone with an update on our vision and goals for the coming years, as well as guidance on some of the changes that we have underway.
But first, let me direct you to our first two issues of 2023, which present a collection of outstanding papers (for which I can take zero credit!). I love how the papers in these issues tackle important organizational and social problems across the globe from a diverse set of theoretical perspectives, disciplines, methods, and authors. We’ll begin to feature these papers in the coming weeks, but hope you will take the time to browse them in the meantime.
Issue 1 is here: https://pubsonline.informs.org/toc/orsc/34/1
Issue 2 is here: https://pubsonline.informs.org/toc/orsc/current.
Now let me address what we’re trying to do at Organization Science. My overarching goal as EIC is to make Org Science a journal that publishes the best work on organizations, across a broad range of disciplines, fields, settings, and methodologies. A key principle for me will be the idea of advancing and refining theory through a portfolio of research. This idea hits at the core of what should be meant by “theory-building,” and what should be expected of any given paper. Papers can be great in many different ways, with a diverse set of research approaches collectively helping us better understand, explain, and predict important phenomena involving organizations. Some papers are purely theoretical, using formal notation or logic. Some papers combine theory and empirics to extend what we believe we already know. Where perhaps I differ from some (but certainly not all) editors in our field, however, is my belief that theory is also built and refined through papers that are purely empirical, which can both provide the groundwork for new theory or even cast doubt on what is commonly believed to be true. Theory needs pruning as well. I also want a journal that cares about the social impact of research and I particularly want more work with an emphasis on understudied or underrepresented populations.
Given these principles, I will continuously work to provide and support an editorial team that represents diversity not only across the type of research, but also across authors, the geography of their work, and the topics they study. I want nearly any author who studies organizations to see multiple editors there that they would be excited to have handle their paper. I am extremely excited about the group of editors that we have in place now, as well as the Editorial Review Board we’ve built to support them. In both cases, I’ve focused on providing opportunities for new generations of scholars who will take the journal and broader field into the future. As some of you have already noticed, past publication in the journal was not a prerequisite. I wanted to introduce new ideas and opportunities. Check out these awesome groups here: https://pubsonline.informs.org/page/orsc/editorial-board
Okay. . . enough pontificating (as a friend of mine might say)! What are some of the practical implications? One key change is that senior editors will now directly issue decisions without the paper returning to my desk for approval. The editors will be making decisions under the input and advisement of reviewers, but ultimately it is the editor’s decision of whether to invite a revision or not, and when to accept a paper. Dissenting reviewers play a crucial role in improving the paper, but are not gatekeepers to publication. Great scholars can disagree on specific papers.
One of our most important initiatives is to build an efficient and equitable process that returns papers quickly and provides clear guidance on the path to publication for revision requests. We recognize this has been a challenge for Organization Science in the past. We are dedicated to substantially reducing average time under review, but even more importantly to eliminating extreme cases. Operational changes are already helping with this, but this will be a team effort for editors and reviewers alike. I’m confident we can achieve it. We are also focused on reducing the number of rounds a paper goes out for review to at most two or occasionally three times, with any additional revision handled directly between the editor and authors. Time is precious, particularly for junior scholars, and there are few things as costly as late round rejections, which we hope to nearly eliminate. I hope these changes will reduce burden on both authors and reviewers, and improve the total time under review.
Our new editorial statement is here: https://pubsonline.informs.org/page/orsc/editorial-statement. I hope you will look to the editorial team as a strong signal of what we value, and strongly consider us as an outlet for your best work. I also hope that you will be willing to serve as timely reviewers with the expectation that your own submissions will receive efficient and fair treatment, even if the editorial decisions are not always what you hoped for.
Organization Science Editor-in-Chief
|
OPCFW_CODE
|
using static Bearded.TD.Utilities.DebugAssert;
namespace Bearded.TD.Game.Simulation.Damage;
readonly struct TypedDamage
{
public HitPoints Amount { get; }
public DamageType Type { get; }
public static TypedDamage Zero(DamageType type) => new(HitPoints.Zero, type);
public TypedDamage(HitPoints amount, DamageType type)
{
Argument.Satisfies(amount >= HitPoints.Zero);
Amount = amount;
Type = type;
}
public TypedDamage WithAdjustedAmount(HitPoints newAmount) => new(newAmount, Type);
public static TypedDamage operator *(int scalar, TypedDamage damage) =>
damage.WithAdjustedAmount(scalar * damage.Amount);
public static TypedDamage operator *(float scalar, TypedDamage damage) =>
damage.WithAdjustedAmount((scalar * damage.Amount.NumericValue).HitPoints());
public static TypedDamage operator *(TypedDamage damage, int scalar) =>
damage.WithAdjustedAmount(scalar * damage.Amount);
public static TypedDamage operator *(TypedDamage damage, float scalar) =>
damage.WithAdjustedAmount((scalar * damage.Amount.NumericValue).HitPoints());
public static TypedDamage operator /(TypedDamage damage, int scalar) =>
damage.WithAdjustedAmount((damage.Amount.NumericValue / scalar).HitPoints());
public static TypedDamage operator /(TypedDamage damage, float scalar) =>
damage.WithAdjustedAmount((damage.Amount.NumericValue / scalar).HitPoints());
}
|
STACK_EDU
|
In this video I’ll show you how to connect your washing machine to the internet without opening it up. It’s easy!
One of the issues I commonly have with my washing machine is that it sometimes gets out of balance …
… and because I have such a large family it’d be nice to know when it finishes so the next load can go in.
This quick hack will allow you to get push alerts on your phone whenever it goes out of balance or whenever it finishes. All you'll need is an accelerometer, a light sensor, and an ESP8266. I used Adafruit's Feather Huzzah with ESP8266.
Adafruit Feather HUZZAH ESP8266
Adafruit TSL2561 Lux sensor
So the first step is to get your hands on an ESP8266 module. The reason why I chose the Feather Huzzah is because it came with onboard LiPo battery support. If power gets disrupted, then it’ll still work.
If you want to make it easier for yourself; put the headers into a breadboard before soldering. This’ll line them up nicely.
Make sure all your soldering is perfect. You don’t want any dry solder joints.
Next you’ll want to make sure you have support for the ESP8266 board loaded up in your Arduino IDE.
Add this to the “Additional Boards Manager URLs” field.
Then go into the Board Manager and enter ESP8266 in the search field.
Click install and once it’s finished you’ll see all the ESP8266 boards appear in the menu.
Now I initially thought I’d use the beeping of the washing machine to tell me when it was out of balance.
So I used another AdaFruit part based on the MAX9814.
I soldered and wired it up and used an algorithm called the Fast Fourier Transform to detect the beep frequencies coming from the washing machine, but I found it used far too much CPU, and it wasn't that reliable.
So I ditched that idea.
I moved on to using an accelerometer instead. I used an MPU6050, which is overkill. All you really need is a basic accelerometer.
I also used a Lux sensor based on the TSL2561, but you could replace this with a cheap photoresistor.
Here’s the circuit. There are also two buttons, which allow you to set the resting position or calibration point of the two sensors.
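The firmware itself is Arduino C++, but the detection idea is simple enough to sketch in plain Python: store a baseline when the calibration button is pressed, then alert when the accelerometer strays too far from it. The threshold value here is an illustrative guess, not the firmware's actual setting:

```python
def is_out_of_balance(reading, baseline, threshold=2.0):
    """True if any axis deviates from its calibrated resting
    value by more than the threshold (in sensor units)."""
    return any(abs(r - b) > threshold for r, b in zip(reading, baseline))

baseline = (0.1, -0.2, 9.8)                           # captured via the calibration button
print(is_out_of_balance((0.3, -0.1, 9.9), baseline))  # False: normal vibration
print(is_out_of_balance((4.5, -0.2, 9.8), baseline))  # True: drum is shaking hard
```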
Wiring up time! I chucked it all onto a breadboard, daisy chaining the two sensor breakouts onto the I2C bus.
Once all my initial testing was completed I moved onto a more permanent solution.
If you’re going to build one yourself, then I used veroboard of this size.
You will want the components to be sitting in these positions, but on the other side.
So, if the components were visible through the PCB, they’d be like this:
Solder up the crossover wires, red for Vcc, black for Ground, and yellow and blue for the two I2C bus wires.
There were several tracks that I didn’t want to collide, so I used a dremel to cut the tracks away.
I used headers so that I could easily reuse the parts later. You don’t need to do this yourself and can directly solder.
I found this great box that fitted everything perfectly …
… and also these small PCB switches. I drilled out two holes in the lid and they fitted perfectly.
Then closed it all up. Notice the extra drill hole in the lid for the light sensor. Depending on your washing machine you may want to place it elsewhere.
Next you’ll need to sign up on the instapush website, which will allow you to receive alerts.
Next add an application, then an event. Make sure you enter in these fields like this.
Click on “Add Event” and you’re done!
Next you’ll need to click on the “Basic Info” tab and record the Application ID and secret somewhere.
Then go to this address in your browser, and if you’re using Firefox you should be able to view the certificate and find the SHA1 fingerprint. Record this as well as you’ll need it.
Then open up my Internet Washer sketch and update these three variables with the information you’ve just collected.
Also update the SSID and password for your WiFi access point.
Next, download and install instapush on your phone. It’s available on both Android and iOS. Sign in using the same credentials as on the website.
Testing time! Seems to all work. In my software I’ve also provided a handy method of being able to control it across the network by telnetting to the IP address.
So I used a bit of Blu-Tack to secure it to the washing machine display.
This allows the light to shine onto the sensor and therefore tell me when the washing cycle has finished.
When I power on the machine it’ll send an alert and when the cycle has finished it’ll turn the display off.
Testing for out of balance is easy. Just use some old gym weights.
If you’ve made one yourself would be great to hear from you! Also if you’ve made any improvements or bug fixes.
|
OPCFW_CODE
|
Rejection Introduction – Backhand Rejection Yoyo Trick Core
Learn the basics of Yoyo Rejections, including detailed instructions on the Basic Backhand Rejection.
This video is an introduction to rejections, and I am going to teach you one rejection in particular, which is the Basic Backhand Rejection, which looks like that. This is a rejection that can be used as a repeater.
Rejections are tricks where the yoyo’s spin interacts with the string in such a way that it actually kicks the string right out of the gap of the yoyo. While there are a lot of different ways you can use rejections, for the most part they are grouped into three main categories. The first one I already showed you, and that is where you use lateral movement on the yoyo to take the yoyo from one mount into a different mount. The second one is similar, but in this case you are going to use the motion of the string coming out of the gap to whip right back into the yoyo, like that. The third one is very different, and that is Magic Drop; because it is so different, we have a video devoted just to that, so you can definitely go check that out if you want to learn the trick Magic Drop.
Now, like I said, the first two are similar in that they use lateral movement. What I mean by that is, for the most part when you are yo-yoing the yoyo moves along the groove of the yoyo, so it just moves side to side like this; lateral movement is where the yoyo moves front to back. You can see that when that happens the string starts to push up against the side of the yoyo, and if there is enough friction there when the yoyo is moving forwards, the string will actually kick right out of the groove of the yoyo like that, and that is what a rejection is.
So because it is the spin of the yoyo that is forcing the string out, one thing that you may not consider is that the direction the yoyo is spinning can affect whether or not the rejection is going to work. What you may find when you are experimenting with your own rejections is that sometimes, when you are doing a trick, the yoyo and the string will reject really naturally; but then if you try the same trick with the yoyo spinning the opposite direction, see how the yoyo just sucks the string right into the groove as opposed to kicking it out. That is because that is how rejections work: it is the friction of the yoyo against the string causing the string to behave a certain way. With some rejections it is going to cause the string to come out of the yoyo, and if you spin the opposite way it will take the string back in. So that is just something you are going to have to experiment with as you are working on your own rejections, to see which way works the easiest.
So when we are talking about the Basic Backhand Rejection, I will give you a couple of tips to make this one work. The first one is, when you are setting this up, you want to swing the yoyo past your yoyo hand and behind your opposite hand, just like this. Sometimes when you are doing that, because the yoyo is sliding down the string, the string might actually start to wind around the yoyo a little bit. To keep that from happening, if you keep your hands really close together the yoyo will not need to slide down the string, so that won’t happen; but if it happens anyway, you can just bounce the yoyo a little bit and that will cause the string to fall out of the gap.
So once you have the yoyo here what you are going to do is you are going to point your finger off to the side and behind you a little bit and you are going to swing the yoyo forward, that lateral movement we talked about, and in order to get the string to reject you might actually have to swing it a little more than you would expect. So what I found is when I am doing this trick I can pretty much guarantee that it is going to reject if I bring the yoyo up to the height of my hand and when I do that you can see that the string comes off pretty easily and that gives me time to attempt to catch the trick.
So to complete the trick, the last thing you want to do is hook the string up here with your opposite hand, just to keep the string from falling off. Then, as the yoyo is coming forward and you hook the string, you are going to bend your hand forward just like this. What you will see is that if you extend your finger at the end, you are in a basic Trapeze. As long as you keep that string from falling off your finger, it should be a pretty simple transition. So that is an introduction to rejections and the Basic Backhand Rejection.
|
OPCFW_CODE
|
EVALITA 2020 is an initiative of AILC (Associazione Italiana di Linguistica Computazionale, http://www.ai-lc.it/).
As in the previous editions (http://www.evalita.it/), EVALITA 2020 will be organized around a few selected tasks, which provide participants with opportunities to discuss and explore both emerging and traditional areas of Natural Language Processing and Speech for Italian. Participation is encouraged for teams working in both academic institutions and industrial organizations.
TASK PROPOSAL SUBMISSION
Task proposals should be no longer than 4 pages and should include:
- task title and acronym;
- names and affiliation of the organizers (minimum 2 organizers);
- brief task description, including motivations and state of the art showing the international relevance of the task;
- description and examples of the data, including information about their availability, development stage, and issues concerning privacy and data sensitivity;
- expected number of participants and attendees;
- names and contact information of the organizers.
In submitting your proposal, please bear in mind that we encourage:
- challenging tasks involving linguistic analysis, e.g., beyond “simple” classification problems;
- tasks focused on multimodality, e.g., considering both textual and visual information;
- tasks characterized by different levels of complexity, e.g., with a straightforward main subtask and one or more sophisticated additional subtasks;
- the re-annotation of datasets from previous years with new annotation types, and texts from publicly available corpora;
- both new tasks and re-runs; for new tasks, you will have to specify in the proposal why the task is needed and why it would attract a reasonable number of participants;
- application-oriented tasks, that is, tasks that have a clearly defined end-user application showcase.
The organizers of the accepted tasks should take care of planning, according to the scheduled deadlines (see below):
- the development and distribution of the data sets needed for the contest, i.e. data for training and development, and data for testing; the scorer to be used to evaluate the submitted systems should be included in the release of the development data
- the development of task guidelines, in which all the instructions for participation are made clear, together with a detailed description of the data and of the evaluation metrics applied to the participants' results
- the collection of the participants' results
- the evaluation of the participants' results according to standard metrics and baseline(s)
- the solicitation of submissions
- the reviewing process of the papers describing the participants' approaches and results (according to the template to be made available by the EVALITA 2020 chairs)
- the production of a paper describing the task (according to the template to be made available by the EVALITA 2020 chairs)
*** Email your proposal in PDF format to evalita2020 at gmail.com with "Evalita 2020 TASK Proposal" as the subject line by the submission deadline (February 7th 2020). ***
Please feel free to contact the EVALITA-2020 chairs at evalita2020 at gmail.com in case of any questions or suggestions.
Deadlines of the task proposal:
February 7th 2020: submission of task proposals
March 6th 2020: notification of task proposal acceptance
Tentative timelines of Evalita 2020:
29th May 2020: development data available to participants
4th September 2020: test data available, registration closes
4th - 24th September 2020: evaluation window
2nd October 2020: assessment returned to participants
6th November 2020: technical reports due to organizers (camera ready)
EVALITA 2020 CHAIRS
Valerio Basile (University of Turin)
Danilo Croce (University of Rome “Tor Vergata”)
Maria Di Maro (University of Naples “Federico II”)
Lucia Passaro (University of Pisa)
Advisor: Nicole Novielli (University of Bari “A. Moro”)
|
OPCFW_CODE
|
Kubernetes Service running on Azure with Public-IP. What is the DNS Name?
I have a kubernetes service running on Azure. After the deployment and service are created, the service publishes an External-IP address and I am able to access the service on that IP:Port.
However, I want to access the service through a regular domain name. I know that the Kubernetes cluster running on Azure has its own DNS, but how can I figure out what the service's DNS name is?
I am running multiple services, and they refer to one another using the <_ServiceName>.<_Namespace>.svc.cluster.local naming convention, but if I attempt to access the service using <_ServiceName>.<_Namespace>.svc.<_kubernetesDNS>.<_location>.azureapp.com, it doesn't work.
Any help would be greatly appreciated.
The decision comes down to how you want to expose the service: through a LoadBalancer or through a NodePort. When you run on Azure, LoadBalancer means that you will get a public IP from Azure that does not have an FQDN (fully qualified domain name) associated. If you choose NodePort, then your service will be exposed on a port of your nodes, and you can access it through agent-fqdn:port. Hope this helps!
If you own a domain, then you can associate a CNAME to the public IP of your service. Please let me know if you need additional information on any of this, glad to expand on them. Good luck!
Hi radu-matei, So, I really appreciate your help. I saw that I can expose a NodePort, but since I'm running a cluster, I'd prefer not to have to hit a specific node. CNAME to the public IP! That's the one. I will look into it, but if you have additional information, that would be great!
In Azure, you can use the "Public IP addresses" resource to associate the public IP that is being used by your service with a DNS name under the default Azure DNS namespace: <dnsname>.<location>.cloudapp.azure.com.
eg: demo.k8s.service.centralindia.cloudapp.azure.com
Note: the DNS record should be unique.
Otherwise, try creating an Azure DNS zone to use your own domain.
Firstly, in order to use DNS you should have a Service of type LoadBalancer; it will create an external IP for your service. If you have a Service whose type is LoadBalancer, you can get the external IP address of the service with the command below:
kubectl get services --all-namespaces
Then copy the external IP and run the commands below in PowerShell.
P.S. Change the IP address to your own external IP address, and the service name to your own service name:
$IP="<IP_ADDRESS>"
$DNSNAME="yourservicename-aks"
$PUBLICIPID=az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[id]" --output tsv
az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME
After these commands, run the command below in PowerShell again and it will show you your DNS name:
az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$IP')].[dnsSettings.fqdn]" -o table
Reference: http://alakbarv.azurewebsites.net/2019/01/25/azure-kubernetes-service-aks-get-dns-of-your-service/
Expose the service as the 'LoadBalancer' type and it will automatically create an Azure Load Balancer with a public IP. An Azure public IP can also have a domain name you can use under the azure.com domain.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-ip-addresses-overview-arm
Yes, exposing the service as a LoadBalancer automatically creates a Load Balancer and a public IP in Azure. But can we specify a value for the domain of the auto-generated public IP?
No, but you can create a DNS record (typically a CNAME) to point your domain to the Azure-provided domain.
That is what I did. But my question was not that; it is: can we specify the Azure domain of the IP automatically created by the Kubernetes LoadBalancer service? By default, the IP is created with no Azure domain.
@gentiane I'm not sure I understand your question, but maybe you are looking for something like https://github.com/kubernetes-incubator/external-dns
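For completeness, one possible way to get Azure to attach a DNS label to the auto-created public IP is the `service.beta.kubernetes.io/azure-dns-label-name` annotation, which the Azure cloud provider honors on LoadBalancer Services. The service and label names below are placeholders, and this is a sketch that needs a live AKS cluster rather than something runnable here:

```shell
# Annotate the LoadBalancer Service; the Azure cloud provider then sets
# the DNS label on the public IP it provisions for the service.
kubectl annotate service myservice \
  "service.beta.kubernetes.io/azure-dns-label-name=myservice-demo"

# The service should then become reachable at:
#   myservice-demo.<location>.cloudapp.azure.com
```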
|
STACK_EXCHANGE
|
Together we can discuss new ideas 💡 and solve issues 🧯 to make Sleekplan even better for you.
User permission settings
Right now any logged in user can add new comments & vote on ideas. We would like to be able to configure permissions so that certain users are not able to comment and vote on existing ideas. (View only).
Add more filter and sorting options
It would be great if you offered more sort options in the lists (e.g. adding the inverse of the current sorting options under Feedback, providing “Oldest” in addition to “Newest”, “Less voted” in addition to “Top voted”, and so on), more filters for the lists in general (e.g. under “Satisfaction” and “Contacts”), and also more customization options (e.g. customizing the columns in the lists or editing “widgets” on the dashboard).
Voting on similar ideas
Add the ability to vote immediately when a similar idea pops up. This will save a few clicks and make the process more efficient
Add an option to remove the email form and force anonymous sign-on!
Change default "status" in the admin dashboard
Hi, I would like to change the default status from "All (not closed, completed)" to "All". It's pretty annoying to change it every time I go to another page and return to the feedback board.
Option to change text color on the top area
If I use #fedf00 as the brand color, the text is white, which is inaccessible
Trash on dashboard
I accidentally deleted a feedback and had no way to get it back. It would be nice to implement a "trash can" system in the dashboard to fix this.
Announcement Shadow with custom button
When the announcement shows and the custom button is clicked before the announcement is dismissed, the announcement is raised with a larger box shadow behind it.
Add notification badges in New Customized HomeScreen
When using the new customized HomeScreen for the widget, it would be nice to have a notification badge count indicating the pending notifications in each section (Feedback, changelog, ...).
Authentication Code should be one number per field.
When entering my authentication token, I am able to enter all four numbers in the first number field. This should automatically skip to the next field after each number, and ideally auto-submit after the fourth to make the login much smoother. (Tested with Microsoft Edge)
Don't delete statuses and roadmap statuses once trial has expired
When on a trial period, sometimes we setup our account to be exactly how we want it to be. If our trial period expires, the following items are deleted: - Status - Roadmap status We subscribed, because Sleekplan is awesome, but it'd be even more awesome if I didn't have to re-setup some of our account again.
Safe-Area Padding in fullscreen (PWA) mode
When a fullscreen-capable web app is installed as a PWA to the homescreen of iOS and Android devices, it is opened in full-screen mode (without a browser chrome). The header of the Sleek widget should have the following css property to ensure content is accessible: padding-top: env(safe-area-inset-top)
Some notifications are not translated
Hey! Even selecting Portuguese (Brazil) as the language for the consumer-facing pages, some of the interactions are not translated, for example in the images I attached here, when the person doesn't complete the required data for feedback. For Portuguese (Brazil), I'd suggest these: . Change "Feedback type is required" to "É obrigatório selecionar a categoria" . Change "Feedback title is required" to "É obrigatório dar um título para o feedback" . Change "Feedback description is required" to "É obrigatório colocar uma descrição no seu feedback" . Change the button "SELECT" to "Selecionar" Thanks!
If you have specific ideas on how you'd like a Zapier integration to work, please comment here. **UPDATE:** Try [__our Zapier beta invite__](http://zapier.com/apps/sleekplan/integrations)! Any feedback is welcome!
Add screenshot when the user send a feedback
It would be nice to allow users to send screenshot when they see bugs on the website.
Templates for new feedback posts
Create templates for new feedback posts. Templates should be customizable based on the selected category.
we run on Sleekplan
|
OPCFW_CODE
|
============== sephaCe ===================
Interface for Phase-Based
Brightfield Cell Segmentation
Rehan Ali and Dr Mark Gooding
Wolfson Medical Vision Lab,
Department of Engineering Sciences,
University of Oxford
sephaCe is a graphical interface for phase-based brightfield cell
segmentation, using the algorithm shown in [Ali et al, 2008].
To find out more about it, or to obtain a copy of the manuscript, contact
Rehan Ali at firstname.lastname@example.org
sephaCe accurately segments cell boundaries using the monogenic signal
[Felsberg and Sommer, 2001] combined with a novel region and orientation
level set contour evolution scheme based on [Gooding et al, 2007].
sephaCe requires two brightfield images, defocused by an equal distance above and below the in-focus plane.
A fluorescence image can also be provided to compare against the segmentation.
See the enclosed LICENSE.TXT file for license and copyright information.
To run sephaCe you need the following:
1. Matlab 7 or above (not tested on earlier versions)
2. Matlab Image Processing Toolbox
3. A C++ compiler (not necessary for Windows)
Unzip the archive into a directory of your choice.
sephaCe consists of a MatLab GUI and a C++ level set application. These files
have been tested on Ubuntu 7.04 (Feisty) and Windows XP.
If you're using a non-Windows O/S, you will need to recompile the level
set code, using a C++ compiler such as GCC (or MinGW GCC in Windows).
1. To run sephaCe, load Matlab, browse to the directory where the files
are stored, and type in "segment_tool". The sephaCe GUI will then appear.
2. Click on "Load Images" to load some images to process. The file
browser window will appear.
3. The minimum requirement to perform a segmentation is two brightfield
microscope images, of equal distance above and below the focal plane.
A defocus distance of 5 um worked for us. If the defocus distance is
too small, the segmentation result will be patchy, but if it's too large,
the final result will be smoothed out as you lose cell resolution.
To load the positively defocused image, select the image in the listbox
control, then click on Ip. To load the negatively defocused image,
select the image and click on Im.
You can also load a fluorescence image, the in-focus image, and a manual
segmentation binary image to compare the segmentation result against.
Once you've loaded all the images you need, close the image browser window.
4. Click on the "Crop" button and select a region of the images which
contains the cells you wish to process.
5. Click on the "Pick Cells" button and click once on each cell in the
image. Left-click on most of the cells, but make sure you RIGHT-CLICK
on the last cell, to stop the Matlab GUI looking for any more inputs.
This helps sephaCe break apart any large clusters of cells in the
image.
6. Click on "Pre-Process" to perform various pre-processing tasks, including
computing the monogenic signal transform, and splitting any clusters of
cells apart. This may take 1-2 minutes depending on the size of your image.
When this finishes, you can view the various pre-processing results.
7. Click on "Run Level Set" to start the level set contour evolution. This
runs the level set for 50 iterations in the background. Click on "Show Results"
to show the latest contour.
Once the level set has completed, the output is saved in a new date-stamped
directory in the "data" directory. The out_regions.png file can be used as
a mask in your own applications.
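For example, the saved mask can be loaded and applied to another image in Matlab; the paths below are placeholders, so substitute your own date-stamped output directory:

```matlab
% Load the segmentation result and use it as a binary mask.
% The file paths here are examples only.
mask = imread(fullfile('data', '01-Jan-2008', 'out_regions.png')) > 0;
fluo = double(imread('my_fluorescence_image.tif'));
total = sum(fluo(mask));   % summed intensity inside the segmented regions
```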
8. A sample application of the segmentation is built into sephaCe.
You can use the result to extract a fluorescence timeseries from a set of
images by clicking on "Fluorescence".
9. If you want to extract a fluorescence timeseries, the "Fluorescence"
button will bring up a Fluorscence Extracter window. Use this to browse
to the directory with your timeseries images. Enter a prefix that identifies
the part of the image filename before the time at which the image was taken.
A usable prefix is "14Oct07_"
Click on "Add images" to analyse all the files with this prefix string.
Click on "Plot" to extract a sum of the intensities within each segmentation
result region from each timeseries image, and plot them on the graph.
* Initial screen created, with pre-processing and
processing controls, based on GUIDE template.
* Revised with new file browser window, and main
interface is tidied up with several improvements.
* Outputs results into a new folder and provides
sufficient information for future analysis.
* Added ability to reload earlier dataset
* Save out local energy file
* Fixed bug where the cropped image sometimes appeared small.
* Added fluorescence timeseries extracter.
* Fixed bug where resetting image resulted in not setting
newly loaded images to double, which messed up the LP.
* On reset, now disable all the processing buttons.
* Modified image browser to allow addition of in-focus and
manual segmentation images.
* Added new validation functionality that compares user-specified
segmentation regions against specific manual seg regions, and
computes the true positive statistics
* Modified fluorescence extracter to import files automatically
based on file prefix
* Made superimposed contour on fluorescence extracter screen
easier to see
* Allowed queueing of large jobs into a "queue.bat" file
* Allowed loading of different ground truth masks
* Improved pre-processing to strip out non-selected regions
that appear after thresholding
* Added fix for local bg noise phase - use variance mask,
threshold value 0.95
* (looking into MI registration...)
* Fixed presentation of LP,LE,LO from saved datasets
* Added tweakable parameters for level set.
* Removed manual validation options (only required for testing).
* Tidied interface and code.
Q. I get the error message "Undefined function or method
'segment_tool' for input arguments of type 'struct'."
A. Not sure what's causing this yet. The only fix is to close
sephaCe and restart.
Q. When I push the "Run Level Set" button, and then click on
"Show Results", nothing happens / the wrong contour appears,
even after a long wait.
A. This happens if sephaCe isn't run from its installation
directory. Close it, browse in Matlab to the install folder, and
run it again.
Q. When I load a previously saved dataset, the images don't
appear.
A. This feature is still under development. For now, it's good
for looking at a previous segmentation result, but if you need
to reprocess the image, you should hit "Reset".
Q. How is sephaCe supposed to be spelt?
A. "see-phase" - as in "see phase object". "sepha" is an anagram
of phase, and Ce is short for Cell.
Q. I get the following error when I run the queue.bat file:
terminate called after throwing an instance of 'std::out_of_range'
A. Something's going wrong with the Level Set code. Send the details to
email@example.com or post them on the sephaCe forum, and we'll look
into the problem.
Ali et al, 2008
"Advanced Phase-Based Segmentation of Multiple Cells from
Brightfield Microscopy Images"
R Ali, M Gooding, M Christlieb, JM Brady
Submitted to IEEE Symposium on Biomedical Imaging 2008
Felsberg and Sommer, 2001
"The Monogenic Signal"
M Felsberg and G Sommer
IEEE Trans Sig Proc 49(2001):12,pp3136-3144
Gooding et al, 2007
"Volume segmentation and reconstruction from freehand 3D
ultrasound data with application to ovarian follicle
M Gooding, S Kennedy and J Noble
Ultrasound Med Biol, 2007, accepted
|
OPCFW_CODE
|
M: Ask HN: Data augmentation tools for 3d object detection - iluvdata
What are good data augmentation techniques which can work with 3d data specifically depth being third dimension. Also, any pointers on tools around them would be of great help.
R: based2
https://medium.com/ymedialabs-innovation/data-augmentation-techniques-in-cnn-using-tensorflow-371ae43d5be9

https://www.quora.com/What-are-good-data-augmentation-techniques-for-a-small-image-data-set

https://www.researchgate.net/post/Is_there_any_data_augmentation_technique_for_text_data_set

https://forums.fast.ai/t/data-augmentation-for-nlp/229/15
R: iluvdata
Thanks for sharing these links. I found
https://github.com/aleju/imgaug and
https://github.com/mdbloice/Augmentor
to be good when you have 2D data. My challenge is that I have x, y and depth (Z);
not all image transformations in colour space can be applied to depth space,
so which ones would work best? Any pointers on tooling around them would be
helpful.
R: yorwba
> all image transformations in colour space can't be applied to depth space
Why? If you treat the depth channel as an additional color, you might even be
able to use the libraries you mentioned without modification (unless they have
hard-coded assumptions on the number of color channels). Depth isn't really
special; all the same ideas for data augmentation still apply. You just have
to transform it with the rest of the data, so if you e.g. mirror the image,
the depth gets mirrored as well.
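As a quick numpy sketch of that idea (shapes and values here are purely illustrative):

```python
import numpy as np

rgb = np.random.rand(4, 6, 3)                  # H x W x 3 colour image
depth = np.random.rand(4, 6, 1)                # matching depth map
rgbd = np.concatenate([rgb, depth], axis=-1)   # depth as a 4th channel

# Geometric augmentations apply to all channels identically:
flipped = rgbd[:, ::-1, :]                     # horizontal mirror

# Photometric augmentations should leave the depth channel alone:
jittered = rgbd.copy()
jittered[..., :3] = np.clip(jittered[..., :3] * 1.2, 0.0, 1.0)
```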
R: iluvdata
But when the RGB scales, tilts or shears, how do you mathematically move the depth
accordingly?
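One way to think about it (a sketch, not from the thread): spatial transforms resample the depth map exactly like any other channel; only when the transform simulates a change in camera distance do the depth *values* need adjusting. Under a pinhole-camera, fronto-parallel approximation, zooming in by a factor s roughly divides metric depth by s:

```python
import numpy as np

def zoom_rgbd(rgbd, s):
    """Centre-crop by 1/s, then nearest-neighbour upsample back: a crude zoom.
    Dividing the depth channel by s models moving the camera closer; this is
    a fronto-parallel approximation, not a general rule."""
    h, w, _ = rgbd.shape
    ch, cw = int(h / s), int(w / s)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = rgbd[y0:y0 + ch, x0:x0 + cw]
    yi = np.arange(h) * ch // h          # nearest-neighbour row indices
    xi = np.arange(w) * cw // w          # nearest-neighbour column indices
    out = crop[yi][:, xi].copy()
    out[..., 3] /= s                     # adjust depth for the simulated zoom
    return out
```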
R: mathgaron
Are you talking about a depth channel on an image plane? It also largely
depends on the problem, but here are a few tricks that helped me for my
problems (object tracking).

- Generating synthetic data is powerful if you have the depth modality, as it
is easy to render. Also, the real/synthetic domain gap is narrow compared to
RGB. I consider it data augmentation: you usually do many renders from a
single 3D model.

- If you can somehow normalize the offset (e.g. compute normals), that can
help. In my case I could set the center of the object as 0 depth, and it
greatly helped the network to converge.

- Classic augmentations like gaussian noise, gaussian blur and also
downsampling the depth help (apply these randomly).

As for tooling, I just use numpy/pytorch for most operations and OpenGL for
renders.
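A minimal numpy sketch of those randomized depth augmentations (the probabilities and noise scale are arbitrary choices, not the poster's values):

```python
import numpy as np

def augment_depth(depth, rng):
    """Randomly apply gaussian noise and a crude downsample to a depth map."""
    d = depth.astype(float).copy()
    if rng.random() < 0.5:                       # additive gaussian noise
        d += rng.normal(0.0, 0.01, size=d.shape)
    if rng.random() < 0.5:                       # downsample, then upsample
        d = d[::2, ::2].repeat(2, axis=0).repeat(2, axis=1)
    return d
```

Gaussian blur would typically come from `scipy.ndimage.gaussian_filter`; it is omitted here to keep the sketch dependency-free.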
R: lovelearning
Regarding tools, OpenCV has a PLY-to-2D-images renderer [1].

[1]: https://github.com/opencv/opencv_contrib/blob/master/modules/cnn_3dobj/samples/sphereview_data.cpp#L83
|
HACKER_NEWS
|
Email is an example of application layer functionality, and the OSI protocols define how it passes down the stack. The physical cable carries the sender's bits, and the transport service delivers data once a connection has been properly established. Above the physical layer sit the data link, network, transport, session, presentation and application layers, with the session layer handling the opening of sessions between hosts.
UDP provides no delivery guarantees, so each communicating field device must manage its own data flow as part of the application layer functionality of the OSI model. Examples include MAC-layer functions at the data link layer and protocols like SMTP at the application layer, with the presentation layer sitting just below the application layer to give a clear distinction between data representation and application logic.
The physical layer is not concerned with protocols or other such higher-layer items. The OSI model lists the protocol layers from the top (layer 7) to the bottom (layer 1). A DHCP server, for example, supplies IP address, subnet mask and default gateway information, as well as the duration of the lease. Network layer protocols accomplish delivery by packaging data with the correct addressing information.
Data on a computer network is represented as a binary expression.
Data originates at the application layer and travels down through the layers of the stack, with services such as file transfer and email provided by protocols at the top. Such a model of layered functionality is also called a protocol stack or protocol suite. For example, in a web browser application the application layer protocol is HTTP, while the network layer maps names to IP addresses and routes data between hosts.
The network layer adds the source and destination addresses to the header of each packet, and the transport layer retransmits data that is lost along the way. The HART protocol, for example, implements layers 1, 2, 3, 4 and 7 of the OSI seven-layer model, whereas the TCP/IP protocols were developed around a smaller set of layers.
At the bottom of the stack, the physical layer ensures bit synchronisation and places the binary pattern that it receives into a receive buffer.
The transport layer adds a header to each segment and will retransmit content when necessary; applications use TCP or UDP as their transport protocol. The presentation layer, also called the syntax layer, handles how data is represented, while the application layer provides a user interface and support for services like email and file transfer.
Server processes run at the application layer, which acts as the interface between the two communicating computers; popular mnemonics are still used to remember the order of the layers.
Application layer protocols, such as those used by a web client, rely on the layers beneath them. Microsoft Windows network drivers implement the bottom four layers of the OSI model. The presentation layer takes care of getting data ready for the application layer, which is why the two are so closely linked.
On the receiving side, each layer passes information up to the layer above it: the data link layer hands frames to the network layer, and so forth, until the data is received by the application layer.
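As a toy sketch of that encapsulation and decapsulation process (the header strings here are purely illustrative, not real wire formats):

```python
def encapsulate(app_data: bytes) -> bytes:
    """Each layer wraps the payload from the layer above with its own header."""
    segment = b"TCP|" + app_data   # transport layer header
    packet = b"IP|" + segment      # network layer adds addresses
    frame = b"ETH|" + packet       # data link layer adds a MAC header
    return frame

def decapsulate(frame: bytes) -> bytes:
    """The receiver strips one header per layer on the way back up."""
    for header in (b"ETH|", b"IP|", b"TCP|"):
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame
```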
|
OPCFW_CODE
|
Crossfire is a Firefly Media Server client written for the Apple iPhone and the iPod touch. It has support for streaming, browsing, and searching music content. It supports both Mobile Safari and FireFox, but can be configured for various other devices such as Windows Mobile, Symbian, etc.
Local Media Browser lets you access your collection of digital media files from a Web browser. It is designed for (but not limited to) low-powered clients (like the Nintendo Wii) on low-resolution displays (like TVs). It uses its own specialized Web server. It is fully customizable through easy HTML-templates, CSS, and ini-files. It currently supports picture and music files. It supports indexing and caching of information (such as thumbnails and ID3 tags) for fast browsing as well as on-the-fly gathering.
Partyman is a simple double-deck audio player that keeps on playing as long as it's running. If it finds entries in its own playlist, these are played, otherwise random tracks are chosen. It also crossfades between the tracks automatically. The main purpose for this program is providing some background music for a party, respecting requests. It is based upon Qt4 and uses DerMixD as a backend.
Acovel is a cross-platform media player (currently running on Linux and Windows). It features an intelligent algorithm that will analyze both ID3 (v1 and v2) tags and filenames to fill up its music collection database. Its search feature is very fast. It is currently limited to playing albums, but upcoming versions will make it a full-featured media player.
Mylene is a command line MPEG audio player. It can play plain and system embedded MPEG audio streams and works with Linux OSS, ALSA-emulated OSS, and Mac OS X. The player can be used interactively by telling it to establish a UNIX or INET server on which commands can be received. It features sophisticated song selection filters, the ability to interpret programs written in a C-like programming language, and user-formatted text output, among others. The seek-h262 MPEG decoder is required for audio and system MPEG decoding.
Lyricod is a server that displays .lrc lyric files on screen in sync with a song as it plays. It uses MPRIS (a D-Bus interface for media player control) to communicate with the media player, so it should be compatible with any MPRIS-enabled player; according to the MPRIS site, XMMS2, BMPx, VLC, Amarok, and Audacious should all be MPRIS-enabled.
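A minimal sketch of the timestamp parsing such a lyric display needs, assuming the standard `[mm:ss.xx]` .lrc line format (this is an illustration, not Lyricod's actual code):

```python
import re

def parse_lrc(text):
    """Parse [mm:ss.xx]-stamped lines of an .lrc lyric file into a list of
    (seconds, lyric) pairs sorted by time, so a player can show each line
    at the right moment during playback."""
    entries = []
    for line in text.splitlines():
        for m in re.finditer(r"\[(\d+):(\d+(?:\.\d+)?)\]", line):
            seconds = int(m.group(1)) * 60 + float(m.group(2))
            # The lyric text is whatever remains after stripping timestamps.
            lyric = re.sub(r"\[\d+:\d+(?:\.\d+)?\]", "", line).strip()
            entries.append((seconds, lyric))
    return sorted(entries)

sample = "[00:12.00]Line one\n[00:17.20]Line two"
```

Lines carrying several timestamps (a common .lrc shorthand for repeated lyrics) produce one entry per timestamp.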
The MediaMVP Media Center (mvpmc) is a media player written in C. It currently runs on the Hauppauge MediaMVP hardware. It can play video (including live TV), audio (including live radio), show pictures, and retrieve Yahoo! weather. mvpmc can access media from a MythTV, ReplayTV, Hauppauge, VLC, or SqueezeCenter (aka SlimServer) server. It can also access media via UPNP, HTTP, NFS, and CIFS. There is a VNC viewer built in. It understands MPEG1 and MPEG2 video, MP3, OGG, WAV, AC3, and FLAC audio and JPG, BMP, and PNG images.
Motio offers software solutions for enhancing IBM Cognos TM1, along with client case studies. Blue Mountain Resorts used IBM Cognos TM1 to reduce labor costs and eliminate excess inventory, as documented in an ROI case study. Cortell Australia has designed and offers a range of IBM Cognos TM1 training courses with featured case studies. SmartERP takes care of all your needs in IBM Cognos, covering Cognos 10 Business Intelligence, Cognos Planning and Cognos TM1. IBM offers "IBM Planning Analytics/Cognos TM1: Analyze and Share Data (V10.2)" as a self-paced virtual class. IBM, Cognos and TM1 are trademarks or registered trademarks of International Business Machines. IBM Cognos TM1 is a completely scalable, high-performance platform, with customer case studies available from IBM Advanced Business Partners.
Resources include an overview and case studies for IBM Planning Analytics 10.2.2 (Cognos TM1 Performance Modeler) and IBM Cognos 11.0.7 (Cognos Analytics). TrustRadius offers 150 verified user reviews comparing Cognos TM1 and IBM SPSS, which can help in finding the appropriate product, along with case studies. Arrow, a top enterprise computing solutions provider and global leader in education services, runs the "IBM Planning Analytics/Cognos TM1: Design and Develop Models in Performance Modeler (V10.2)" IT training course in the UK.
What's new in Cognos TM1 10.2 centers on four themes of innovation and value advancement; existing TM1 sites should already know the value they get from earlier versions of TM1. IBM publishes case studies showcasing client stories leveraging IBM Cognos TM1, including the IBM products and services used in each case. "IBM Cognos TM1: Administer the Technical Environment (V10.2)" is a two-day course on using IBM Cognos TM1.
This section contains TM1 beginner tutorials; just go through them and you'll get a basic understanding of how you can use TM1 to create models. "IBM Cognos TM1: The Official Guide" by Karsten Oehler, Jochen Gruenes and Christopher Ilacqua is available as a Kindle edition; download it once and read it on your Kindle device, PC, phone or tablet.
What are the differences between Cognos TM1 and Cognos 10 BI, and which one does IBM consider its BI tool? Many TM1 users have cubes with rules associated with historical data or completed plan versions, and are needlessly recalculating their data on a continuous basis. CAFE (Cognos Analysis for Excel) lets you use Excel with Cognos BI and TM1; case studies and reviews of new software releases are on our website.
TM1 supports any number of personal scenarios, such as multiple travel budgets (best and worst case) for comparison, alongside IBM Cognos Enterprise Planning. The TM1 and Planning Analytics conference in London included a roadmap presented by IBM, customer case studies, TM1 and Python integration, and the Cubewise Code toolbox. IBM Cognos TM1 is an enterprise planning software platform that can transform your entire planning process; ProStrategy leads IBM Cognos training, with case studies and blogs. Products supported by Budgeting Solutions from IBM Cognos include Planning, TM1 and Express; visit IBM's Analytics Zone for case studies and demonstrations.
Modules written for Lua 5.0, and older ones for 5.1, use this mechanism, but 5.2 modules (and new 5.1 modules) should use the new way: returning a table. As already mentioned above, Lua uses the package library to manage modules, and the package table is part of that library. HTML::TableExtract is a Perl module for extracting the content contained in tables within an HTML document, either as text or encoded element trees; matched tables are returned as table objects, and tables can be matched using column headers, depth, count within a depth, table tag attributes, or some combination of the four. The precision router table module is specifically designed for the TWX7 Workcentre and is compatible with all three Triton plunge routers (TRA001, MOF001 and JOF001) for shaping, planing, rebating and trenching.
A Table Module organizes domain logic with one class per table in the database, and a single instance of a class contains the various procedures that will act on the data. The primary distinction with Domain Model (116) is that, if you have many orders, a Domain Model (116) will have one order object per order, while a Table Module will have a single instance for the whole table. This is a multiple-article series on how to create a module for Joomla; begin with the introduction, and navigate the articles in this series using the navigation drop-down menu, the navigation button at the bottom, or the box. The master branch contains a clean module that simply creates the table I will use in this article to illustrate the process; the exposed-table branch contains, commit by commit, the steps I go through.
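A minimal Python sketch of the Table Module idea described above (the class, field names, and rows are invented for illustration; the list of dicts stands in for a database table):

```python
# Table Module pattern: one class per table, one instance whose methods
# operate on the whole row set, rather than one object per row.
class OrdersTableModule:
    def __init__(self, rows):
        self.rows = rows  # list of dicts, one per order row

    def total_for_customer(self, customer_id):
        # A procedure acting on the table's data as a whole.
        return sum(r["amount"] for r in self.rows
                   if r["customer_id"] == customer_id)

orders = OrdersTableModule([
    {"id": 1, "customer_id": 7, "amount": 25.0},
    {"id": 2, "customer_id": 7, "amount": 10.0},
    {"id": 3, "customer_id": 9, "amount": 99.0},
])
```

Contrast this with a Domain Model, where each of the three rows would become its own order object carrying behavior.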
MS Access modules: a module is a collection of user-defined functions, subroutines and global variables written in VBA code; these objects can then be used or called from anywhere in your Access database. The following is a list of topics that explain how to use modules in Access. Database basics for Access: a database can contain objects such as forms, reports, macros, and modules. Databases created in the Access 2007 format (also used by Access 2016, Access 2013 and Access 2010) have the file extension .accdb, and databases created in earlier Access formats have the file extension .mdb; a database table is similar in… Creating a simple table view from a Drupal module: the secret to displaying a Drupal table view from a Drupal module is to call the Drupal theme function to generate your output, as demonstrated in the following Drupal module function. hook_schema() is still used by Drupal 8 modules to create custom database tables used by the module; even the user and node modules implement it, although user_schema() and node_schema() don't define the schema for the respective entities, which are created in a different way. I am new to Visual Basic: I inserted a new module in an Access database, copied a sample VB code into that module, and saved it; now I want to run the module ("replace TableName with the real name of the table into…").
The ModuleSignature table is a required table: it contains all the information necessary to identify a merge module. The merge tool adds this table to the MSI file if one does not already exist; the ModuleSignature table in a merge module has only one row, containing the ModuleID, language, and version. This module includes a number of functions for dealing with Lua tables; it is a meta-module, meant to be called from other Lua modules, and should not be called directly from #invoke. Moodle activity modules reside in the /mod directory; each module is in a separate subdirectory and consists of a number of mandatory files and any other files the developer is going to use (the image below shows the certificate module's file structure as an example; please note, any reference to modname…). SAP CO tables by module: learn SAP FICO in simple and easy steps, starting from an overview and submodules through company basics, business areas, functional areas, credit control, the general ledger, chart-of-accounts groups, retained earnings accounts, G/L accounts (including blocking and deleting them), financial statement versions, journal entry posting, fiscal years, posting periods, keys, field status variants, groups and document types.
Developers can use libraries to develop vtiger CRM modules that add new functionality to vtiger CRM; these modules can then be packaged for easy installation by the Module Manager. Table Module would be particularly useful in the flexible database architecture you have described for your user profile data, basically the entity-attribute-value design; typically, if you use Domain Model, each row in the underlying table becomes one object instance. Each Table Module class has a data member of a data table, which is the .NET system class corresponding to a table within the data set; this ability to read a table is common to all Table Modules and so can appear in a Layer Supertype (475). The DataTables plugin for jQuery offers enhanced interaction for standard HTML tables: developers are able to create highly interactive tables with dynamic sorting, filtering and pagination without having to write custom server-side code, and, as always with jQuery plugins, there is a module that…
Summary: this module extends the Field Group module and provides a table group format, which renders the fields it contains in a table; the first column of the table contains the field labels and the second column contains the rendered fields. HTML tables: basically, an HTML table is stored as a list of rows; each row is itself a list of cells, and each cell is a Python string or any object which may be rendered as a string using str(). Droptables, a Joomla table manager, is the only table manager for Joomla that offers a real spreadsheet interface to manage tables in Joomla; plus, everything is manageable directly from your editor.
The Divi pricing tables module: it's easier than ever to add, configure and customize pricing tables for your online products; create as many tables as you want, control the pricing and features of each, and even feature a particular plan to increase conversions. The tables module implements variants of an efficient hash table (also often named a dictionary in other programming languages), that is, a mapping from keys to values: table is the usual hash table, orderedTable is like table but remembers insertion order, and countTable is a mapping from a key to its number of occurrences. I have created a module that can test for a substring and return a value of 1 or 0 depending on whether it was found, but I do not know how to integrate that module into the query; is there a way in SQL view to call the module, and would I have to pass values to it from the table for it to run properly? Module:table (from Wiktionary) provides functions for dealing with Lua tables; all of them, except for two helper functions, take a table as their first argument.
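The three table variants described above (plain hash table, insertion-ordered table, and occurrence-counting table) map naturally onto Python's standard mappings; as a rough analogue (not the module's own API):

```python
from collections import Counter, OrderedDict

# plain hash table          -> dict
# insertion-ordered table   -> OrderedDict (or any dict in Python 3.7+)
# key -> occurrence count   -> Counter
plain = {"a": 1, "b": 2}
ordered = OrderedDict([("first", 1), ("second", 2)])
counts = Counter("abracadabra")  # counts each character's occurrences
```

The count-table variant is handy whenever you need a frequency map without writing the increment-or-insert logic by hand.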
Droptables is the only table manager for Joomla that offers a real spreadsheet interface to manage tables, like Excel or Google Sheets; plus, all the tables can be managed from your editor. The PowerShell Excel module lets you create Excel pivot tables and charts from transformed data: from the data generated above (and stored in a separate spreadsheet in the same workbook), you can easily create a pivot table and a chart. A ModuleScript is a script-like object that contains code for a module; unlike other scripts, a module does not execute when a game starts. The typical use of a module is to return a table containing multiple functions, matching how the built-in string, table, and math libraries work. You can add and edit tables in the Tables module and in tables view (layout: four-up table view), import tables from and save tables to CSV, TSV and TXT files, and copy-paste tables between Slicer and other applications (Excel, etc.).
package net.cell_lang;
final class IntStore extends ValueStore {
private static final int INIT_SIZE = 256;
private static final int INV_IDX = 0x3FFFFFFF;
// Bits 0 - 31: 32-bit value, or index of 64-bit value
// Bits 32 - 61: index of next value in the bucket if used or next free index otherwise
// Bits 62 - 63: tag: 00 used (32 bit), 01 used (64 bit), 10 free
private long[] slots = new long[INIT_SIZE];
// INV_IDX when there's no value in that bucket
private int[] hashtable = new int[INIT_SIZE/4];
private int count = 0;
private int firstFree = 0;
private LargeIntStore largeInts = new LargeIntStore();
//////////////////////////////////////////////////////////////////////////////
private int hashIdx(long value) {
int hashcode = (int) (value ^ (value >> 32));
return Integer.remainderUnsigned(hashcode, hashtable.length);
}
private long emptySlot(int next) {
// Miscellanea._assert(next >= 0 & next <= 0x3FFFFFFF);
return (((long) next) | (2L << 30)) << 32;
}
private long filledValueSlot(int value, int next) {
long slot = (((long) value) & 0xFFFFFFFFL) | (((long) next) << 32);
// Miscellanea._assert(!isEmpty(slot));
// Miscellanea._assert(value(slot) == value);
// Miscellanea._assert(next(slot) == next);
return slot;
}
private long filledIdxSlot(int idx, int next) {
// Miscellanea._assert(idx >= 0);
long slot = ((long) idx) | (((long) next) << 32) | (1L << 62);
// Miscellanea._assert(!isEmpty(slot));
// Miscellanea._assert(value(slot) == largeInts.get(idx));
// Miscellanea._assert(next(slot) == next);
return slot;
}
private long reindexedSlot(long slot, int next) {
int tag = (int) (slot >>> 62);
// Miscellanea._assert(tag == 0 | tag == 1);
return tag == 0 ? filledValueSlot((int) slot, next) : filledIdxSlot((int) slot, next);
}
private long value(long slot) {
// Miscellanea._assert(!isEmpty(slot));
int tag = (int) (slot >>> 62);
Miscellanea._assert(tag == 0 | tag == 1);
return tag == 0 ? (int) slot : largeInts.get((int) slot);
}
private int next(long slot) {
// Miscellanea._assert(!isEmpty(slot));
return (int) ((slot >>> 32) & 0x3FFFFFFF);
}
private int nextFree(long slot) {
Miscellanea._assert(isEmpty(slot));
return (int) ((slot >> 32) & 0x3FFFFFFF);
}
private boolean isEmpty(long slot) {
long tag = slot >>> 62;
// Miscellanea._assert(tag == 0 | tag == 1 | tag == 2);
return tag == 2;
// return (slot >>> 62) == 2;
}
//////////////////////////////////////////////////////////////////////////////
public IntStore() {
super(INIT_SIZE);
for (int i=0 ; i < INIT_SIZE ; i++)
slots[i] = emptySlot(i+1);
for (int i=0 ; i < INIT_SIZE ; i++)
Miscellanea._assert(isEmpty(slots[i]));
Array.fill(hashtable, INV_IDX);
}
//////////////////////////////////////////////////////////////////////////////
public void insert(long value, int index) {
// Miscellanea._assert(firstFree == index);
// Miscellanea._assert(index < slots.length);
// Miscellanea._assert(references[index] == 0);
count++;
firstFree = nextFree(slots[index]);
int hashIdx = hashIdx(value);
int head = hashtable[hashIdx];
if (value == (int) value) {
slots[index] = filledValueSlot((int) value, head);
// Miscellanea._assert(!isEmpty(slots[index]));
}
else {
int idx64 = largeInts.insert(value);
slots[index] = filledIdxSlot(idx64, head);
// Miscellanea._assert(!isEmpty(slots[index]));
}
hashtable[hashIdx] = index;
}
public int insertOrAddRef(long value) {
int surr = valueToSurr(value);
if (surr != -1) {
addRef(surr);
return surr;
}
else {
Miscellanea._assert(count <= capacity());
if (count == capacity())
resize(count + 1);
int idx = firstFree;
insert(value, idx);
addRef(idx);
return idx;
}
}
public void resize(int minCapacity) {
int currCapacity = capacity();
int newCapacity = 2 * currCapacity;
while (newCapacity < minCapacity)
newCapacity = 2 * newCapacity;
super.resizeRefsArray(newCapacity);
long[] currSlots = slots;
slots = new long[newCapacity];
hashtable = new int[newCapacity/2];
Array.fill(hashtable, INV_IDX);
for (int i=0 ; i < currCapacity ; i++) {
long slot = currSlots[i];
int hashIdx = hashIdx(value(slot));
slots[i] = reindexedSlot(slot, hashtable[hashIdx]);
hashtable[hashIdx] = i;
}
for (int i=currCapacity ; i < newCapacity ; i++)
slots[i] = emptySlot(i+1);
}
//////////////////////////////////////////////////////////////////////////////
public int count() {
return count;
}
public int capacity() {
return slots.length;
}
public int nextFreeIdx(int index) {
// Miscellanea._assert(index == -1 || index >= capacity() || isEmpty(slots[index]));
if (index == -1)
return firstFree;
if (index >= capacity())
return index + 1;
return nextFree(slots[index]);
}
public int valueToSurr(long value) {
int hashIdx = hashIdx(value);
int idx = hashtable[hashIdx];
while (idx != INV_IDX) {
long slot = slots[idx];
if (value(slot) == value)
return idx;
idx = next(slot);
}
return -1;
}
public long surrToValue(int surr) {
return value(slots[surr]);
}
//////////////////////////////////////////////////////////////////////////////
@Override
public Obj surrToObjValue(int surr) {
return IntObj.get(surrToValue(surr));
}
protected void free(int index) {
long slot = slots[index];
int hashIdx = hashIdx(value(slot));
int idx = hashtable[hashIdx];
Miscellanea._assert(idx != INV_IDX);
if (idx == index) {
hashtable[hashIdx] = next(slot);
}
else {
for ( ; ; ) {
slot = slots[idx];
int next = next(slot);
if (next == index) {
slots[idx] = reindexedSlot(slot, next(slots[next]));
break;
}
idx = next;
}
}
slots[index] = emptySlot(firstFree);
firstFree = index;
count--;
}
}
////////////////////////////////////////////////////////////////////////////////
class LargeIntStore {
private long[] slots = new long[32];
private int firstFree = 0;
public LargeIntStore() {
for (int i=0 ; i < slots.length ; i++)
slots[i] = i + 1;
}
public long get(int idx) {
long slot = slots[idx];
Miscellanea._assert(slot < 0 | slot > Integer.MAX_VALUE);
return slot;
}
public int insert(long value) {
Miscellanea._assert(value < 0 | value > Integer.MAX_VALUE);
int len = slots.length;
if (firstFree >= len) {
slots = Array.extend(slots, 2 * len);
for (int i=len ; i < 2 * len ; i++)
slots[i] = i + 1;
}
int idx = firstFree;
long nextFree = slots[idx];
Miscellanea._assert(nextFree >= 0 & nextFree <= slots.length);
slots[idx] = value;
firstFree = (int) nextFree;
return idx;
}
public void delete(int idx) {
Miscellanea._assert(slots[idx] < 0 | slots[idx] > Integer.MAX_VALUE);
slots[idx] = firstFree;
firstFree = idx;
}
}
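The slot encoding used by IntStore above can be sketched in Python to make the bit layout explicit (illustrative only; the field widths match the comments in the Java source: bits 0-31 hold the 32-bit value or large-int index, bits 32-61 the next pointer, bits 62-63 the tag, where 0 = 32-bit value, 1 = 64-bit index, 2 = free):

```python
# Pack and unpack the 64-bit slot word used by IntStore.
MASK32 = 0xFFFFFFFF  # value / large-int index field
MASK30 = 0x3FFFFFFF  # next-pointer field (30 bits)

def pack(value, nxt, tag):
    # value in bits 0-31, next in bits 32-61, tag in bits 62-63
    return (value & MASK32) | ((nxt & MASK30) << 32) | (tag << 62)

def unpack(slot):
    return slot & MASK32, (slot >> 32) & MASK30, slot >> 62
```

Round-tripping a slot recovers all three fields, which is exactly what the `value`, `next`, and `isEmpty` helpers in the Java class do with shifts and masks.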
Is there a way to change the color of my active window without changing the color of other applications? SetSysColor changes the color of all running applications! Is there an API that can do this job? And how can I display a window without title bar? thanks...
To change the color, you could intercept the WM_NCPAINT message and the WM_PAINT message and draw your own stuff. When you create a window with CreateWindowEx, you have to tell it to give you a titlebar, so all you have to do is not use the WS_CAPTION style.
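Window styles are just bit flags, so "not using WS_CAPTION" is plain bit arithmetic. A Python sketch of that flag math (the constant values are the standard Win32 ones; this only shows the bit manipulation, not an actual CreateWindowEx call):

```python
# Standard Win32 window-style constants (values from the Windows SDK).
WS_OVERLAPPED   = 0x00000000
WS_CAPTION      = 0x00C00000
WS_SYSMENU      = 0x00080000
WS_THICKFRAME   = 0x00040000
WS_MINIMIZEBOX  = 0x00020000
WS_MAXIMIZEBOX  = 0x00010000
WS_OVERLAPPEDWINDOW = (WS_OVERLAPPED | WS_CAPTION | WS_SYSMENU |
                       WS_THICKFRAME | WS_MINIMIZEBOX | WS_MAXIMIZEBOX)

# A normal window style, minus the title bar:
style = WS_OVERLAPPEDWINDOW & ~WS_CAPTION
```

Passing a style with the WS_CAPTION bits cleared to CreateWindowEx is what produces a window with no title bar.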
How can I change the color when I intercept WM_PAINT? I tested the SetBkColor API but it doesn't work.
You should make a brush of the color you want using CreateSolidBrush, then get your window size, get a DC on your window using GetDC, and fill the area with the color using FillRect; that should work. Another, faster way is to make a brush of the color you want: invoke CreateSolidBrush,yourColor then insert the handle in eax into: m2m wc.hbrBackground, eax at the time you create your window.
Does this also work on the caption bar? Can I change the color of the caption bar with SolidBrush/DC/FillRect too? And how can I change the color of the text? Thanks for the help!
You can't change the color of the title bar using this method; I don't know if there is an API to let you. I wanted to do the same, so what I did was make a window without a title bar, then use a bitmap that looked like one in the color I wanted. Then there came the problem that without a title bar you can't move the window, so you have to handle WM_MOUSEMOVE to move your own window. I have a program in which I did this trick; if you want it, just let me know.
Yes, this is a good idea with the painted title bar! It would be great if you can send me the example. Thanks for the help. mailto:firstname.lastname@example.org
More than that: you can make a normal window with a title bar and all, then get a DC to the whole window, not only the client area, and use this DC to paint OVER your window whatever you like. This way you "skin" your app but still have all the system functions for move, minimize, restore, resize, etc.
If you check my pages I have a tut there on custom window captions and shapes too. Kinda just what you asked, huh?
BogdanOntanu, I might try your idea, but when you move the window with the mouse, won't Windows draw it the way it is supposed to be, messing up your title bar again?
Ok, it works with DC painting over the whole window, but the problem now is to change the color of the system buttons (min/max/close). I think the best way is to change the whole color set of the window, like the method with SetSysColor. Is there no way to change only the color scheme of my application? That would also work faster!
SetClassLong? And how can I use it to change the color settings?
Downloads of v 2.0.0
Units.NET gives you all the common units of measurement and the conversions between them. It is light-weight, unit tested and supports PCL.
To install Units.NET, run the following command in the Package Manager Console
PM> Install-Package UnitsNet
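The idea behind the library's typed unit classes can be sketched with a Python analogue (this is not Units.NET's actual C# API, and the class and member names here are invented for illustration): a quantity stores its value in a base unit and exposes named constructors and conversions, so callers never mix up units.

```python
# Illustrative analogue of a typed unit class: store in a base unit (meters),
# convert on the way in and out through named members.
class Length:
    METERS_PER_FOOT = 0.3048  # exact by definition

    def __init__(self, meters):
        self._meters = meters

    @classmethod
    def from_feet(cls, feet):
        return cls(feet * cls.METERS_PER_FOOT)

    @property
    def meters(self):
        return self._meters

    @property
    def feet(self):
        return self._meters / self.METERS_PER_FOOT
```

The payoff is that a `Length` is a single type regardless of which unit it was created from, so arithmetic and comparisons stay consistent.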
v2.0.0: Add support for custom units. Add ratio unit.
Breaking changes: Merge UnitValue and UnitConverter into unit classes.
v1.13: Add mass unit (pound) (thanks @strvmarv).
v1.12: Add speed units (km/h, m/s, ft/s, knots, mph). Add mass units (microgram, nanogram).
v1.11: Fix bugs in Flow and RotationalSpeed units after refactoring to T4 templates (thanks George Zhuikov).
Add Temperature units.
v1.10: Add missing localization to units for US English and Russian cultures (thanks George Zhuikov).
Add RotationalSpeed and Flow unit classes (thanks George Zhuikov).
Add mils and microinches length units (thanks Georgios).
Refactor to generate unit classes with T4 templates, a lot less work to add new units.
v1.9: Improve precision of PoundForce unit (thanks Jim Selikoff).
v1.8: Add angle units of measurement (thanks Georgios). Add tests and fix bug in NewtonPerSquareCentimeter and NewtonPerSquareMillimeter.
v1.7: Add imperial and US units for volume and area.
v1.6: Add area units. Fix exception in TryConvert for volume units.
v1.5: Add volume units of measurement (thanks @vitasimek). Add missing operator overloads.
v1.4: Add ShortTon and LongTon mass units (thanks Cameron MacFarland). Add TryConvert methods.
v1.3: Add pressure units. Add dynamic conversion via UnitConverter and UnitValue
v1.2: Add force, torque, pressure, mass, voltage, length and length2d units of measurement.
Copyright © 2007-2013 Initial Force AS
This package has no dependencies.
|Units.NET 2.0.0 (this version)||194||Sunday, February 09 2014|
|Units.NET 2.0.0-beta||5||Sunday, February 09 2014|
|Units.NET 2.0.0-alpha||4||Wednesday, February 05 2014|
|Units.NET 1.13.0||24||Friday, January 31 2014|
|Units.NET 1.12.0-beta||34||Saturday, January 04 2014|
|Units.NET 1.11.0||269||Monday, November 18 2013|
|Units.NET 1.10.0||24||Friday, November 15 2013|
|Units.NET 1.9.0||30||Thursday, November 07 2013|
|Units.NET 1.8.0||38||Wednesday, October 30 2013|
|Units.NET 1.7.0||114||Thursday, August 08 2013|
|Units.NET 1.6.0||22||Tuesday, August 06 2013|
|Units.NET 1.5.0||34||Friday, August 02 2013|
|Units.NET 1.4.0||31||Monday, July 22 2013|
|Units.NET 1.3.0||25||Sunday, July 21 2013|
|Units.NET 1.2.0||30||Sunday, July 21 2013|
On Friday 15 May 2009 05:44:47 Richard Freeman wrote: > Ciaran McCreesh wrote: > > On Thu, 14 May 2009 20:06:51 +0200 > > > > Patrick Lauer <patr...@gentoo.org> wrote: > >> Let EAPI be defined as (the part behind the = of) the first line of > >> the ebuild starting with EAPI= > > > > Uh, so horribly utterly and obviously wrong. > > > > inherit foo > > EAPI=4 > > > > where foo is both a global and a non-global eclass that sets metadata. > > This seems to come up from time to time but I don't see how this is a > problem that GLEP 55 solves. If the rule is "first line of the ebuild > starting with EAPI=" and the ebuild is as you suggest above, then the > EAPI is 4 (without any regard whatsoever to what might be in "foo"). > > The counterargument seems to be that eclasses should be able to modify > EAPI behavior. However, if you want to do this then you DEFINITELY > don't want to put the EAPI in the filename - unless you want eclasses to > start renaming the ebuilds to change their EAPIs and then trigger a > metadata regen. > > This seems to be a case where a problem is proposed, with a solution. > Somebody proposes an alternate solution and the complaint is raised that > it doesn't handle situation X. However, the original proposed solution > doesn't handle situation X either, so that can hardly be grounds for > accepting it over the alternate. > > I'm actually more in favor of an approach like putting the EAPI in a > comment line or some other place that is more "out-of-band". Almost all > modern file formats incorporate a version number into a fixed position > in the file header so that it is trivial for a program to figure out > whether or not it knows how to handle the file. 
Another common approach > is to put a header-length field and add extensions to the end of a > header, so that as long as you don't break past behavior you could > create a file that is readable by older program versions (perhaps with > the loss of some metadata that the older version doesn't understand). > Just look up the UStar tar file format or the gzip file format for > examples. Of course, such file formats generally aren't designed to be > human-readable or created with a text editor. > > The same applies to executables. It is impossible from the filename to > tell if /bin/bash is in a.out or ELF format, or if it is a shell script. > Instead a simple standard is defined that allows the OS to figure it > out and handle it appropriately. If you try to run an ELF on some > ancient version of linux it doesn't crash or perform erratic behavior - > it will simply tell you that it doesn't understand the file format > (invalid magic number). > > In any case, I'm going to try to restrain myself from replying further > in this thread unless something genuinely new comes up. When I see 26 > new messages in my gentoo-dev folder I should know by now that somebody > has managed to bring up GLEP 55 again... :)
If I understand the problem GLEP 55 is trying to solve correctly, it stems from portage's assumption that an unknown EAPI is equal to EAPI 0. Could that assumption be changed so that an unknown EAPI is treated as the latest supported EAPI? Now, I understand that this change would have to wait until all the ebuilds in the portage tree correctly define their EAPI, but would the idea be technically feasible, at least excluding EAPI 0 ebuilds? I think it would be, if all EAPIs are forward compatible up until the EAPI declaration in the ebuild.
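The "first line of the ebuild starting with EAPI=" rule under discussion can be sketched like this (a toy parser, not portage's implementation; the default mirrors the unknown-means-EAPI-0 assumption described above):

```python
def detect_eapi(ebuild_text, default="0"):
    """Return the value of the first line beginning with EAPI=,
    ignoring anything an inherited eclass might later do."""
    for line in ebuild_text.splitlines():
        if line.startswith("EAPI="):
            # Strip surrounding whitespace and optional quoting.
            return line[len("EAPI="):].strip().strip('"\'')
    return default  # historical assumption: no declaration means EAPI 0

ebuild = "inherit foo\nEAPI=4\n"
```

Under this rule, the `inherit foo` / `EAPI=4` example from the quoted mail unambiguously yields EAPI 4, with no regard to what `foo` contains.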
Reading (GET) Facebook Ads From Ad Library and Reading (GET) Personal Ad Account Billing Data in my Android Application
I'm trying to create an android application for personal use to do the following:
Get ads from the Facebook Ad Library (commercial ones, not the ones on political topics etc.) and then insert them into a spreadsheet for further processing.
Get my personal business account's current billing data by clicking on a simple button that renders in a preferred way and then inserts them to a spreadsheet for further processing other than having to login to the ads manager website and checking the billing tab.
I tried the following approaches for the ads library (first requirement):
Find an API for the Ad Library in order to access ads, but I could only find an API to access political ads.
Hardcode each competitor's id into this URL (between curly braces) then parse the HTML page for the data I need which is possible but kind of complicated and will take much time to load the page.
https://www.facebook.com/ads/library/?active_status=all&ad_type=all&country=US&view_all_page_id={9465008123}&sort_data[direction]=desc&sort_data[mode]=relevancy_monthly_grouped
I tried the following approaches for the ads manager (second requirement):
Find an API for the ads manager in order to access my personal billing details but I couldn't find something similar to this.
Hardcoding, like in the previous requirement, is impossible: if I hardcode my account id into the URL, the page won't be visible because I'm not logged in.
https://www.facebook.com/ads/manager/billing/transactions/?act={ACCOUNT_ID}&pid=p1&page=billing&tab=transactions
Are there any different approaches to do the requirements? (All I'm trying to do is create an app for myself which makes it easier than accessing all the links and this stuff and also process some of the data in a spreadsheet)
I'm also not sure if I got my point across so if there's something unclear let me know.
If you can't find a suitable API, then yes, scraping is the alternative. Note that scraping is expressly prohibited by Facebook, in part to protect your "competitors." See https://www.cpomagazine.com/data-privacy/facebook-goes-all-out-on-data-scraping/
@RobertHarvey Do you think there aren't any suitable APIs, maybe I'm not searching in the right direction because I'm kind of inexperienced with APIs. Also, I'm not trying to extract data that I'm not allowed to view. I can view the data by logging in and navigating to what I need. However, I'm trying to do that through a personal application in a user-friendly way and perform certain processing that I can do using a calculator when viewing the data on a browser.
Over the last six months the Docker Store, which was first introduced as a private beta nearly a year and a half ago in June 2016, has come on in leaps and bounds.
It has quickly, but without much fanfare, become a one-stop shop for all things Docker, with both the documentation and home pages linking back to content now hosted on the store.
So what is the Docker Store?
In short, it is a marketplace for containers, plugins and also Docker itself. There is a mixture of free and paid content, from both Docker themselves and third-party providers.
As you can see from the screenshot above, there are currently four main sections.
Docker EE: you can find Docker Enterprise Edition installers for all supported platforms here, from Red Hat Enterprise Linux to Windows Server 2016, as well as the option to purchase a subscription.
Docker CE: if you are happy with the community edition of Docker then this is where you will find all of the various installers, including:
- Docker for Mac
- Docker for Windows
- Docker for AWS
- Docker for Azure
- As well as CentOS, Ubuntu, Debian and Fedora
Containers: here is where the bulk of the store content is. Here you will find a mixture of free, licensed and subscription-based container images. We will look at this section in more detail in a moment.
Plugins: here you will find the container images used to power the Docker Engine managed plugin system. For example, if you wanted to install the Weave Net plugin into your Docker Swarm cluster you would run:
$ docker plugin install store/weaveworks/net-plugin:2.0.1
This would download the container which contains the plugin from https://store.docker.com/plugins/weave-net-plugin.
At the time of writing, all but one of the plugins listed is Docker Certified.
Docker Certified means that the publisher has submitted the container image to Docker for certification. This gives you, as the consumer of the container, assurance that the image is fully compatible with Docker Enterprise Edition and that it is built to accepted best practices.
As mentioned in the previous section, the bulk of the content on the Docker Store is, no surprises, containers.
The Docker Store is now the official home for all of the core containers curated by Docker themselves. While these containers are still available at the Docker Hub, they are slowly being moved to the Docker Store.
As you can see from the screenshot above of the official image for PHP on the Docker Hub, there is a link to the Docker Store at the top of the page. The Docker Store page for PHP gives you the same view, albeit with a few additions:
You will notice that the Docker Store page lists a price of $0.00, and also highlights the fact that the image is an Official Image.
The instructions for pulling the image are the same on both the Docker Hub and the Docker Store:
$ docker pull php
Hopefully they will update both the Hub and the Store to use the
docker image pull command, but that is just me wanting to use the new Docker client commands :)
Let’s take a look at another image, Couchbase on the Docker Hub looks like any other official image;
However, it’s listing on the Docker Store gives a different story;
Here we can see the image is actually maintained by Couchbase Inc, who, we can tell from clicking on the link, is a verified publisher. You will also notice that there is no
docker pull command listed on the store; instead there is a Proceed to Checkout button. As this image is $0.00, let's try checking it out.
Clicking on the Checkout button takes you to a page which asks for your name, company name, phone number and email address. Once filled in, click on Get Content and you will be taken to your subscription page:
Let’s try pulling the image from an unauthenticated Docker client by running:
$ docker pull store/couchbase/couchbase:3.1.5
Logging in using the
docker login command then trying to
docker pull the image has a lot more success:
From there I could for example run the following command to launch Couchbase;
$ docker container run -d store/couchbase/couchbase:3.1.5
As you can see, I have had to use the full image name and version to ensure that the image from the Docker Store will be used.
Please note, the commands above are not the best ones for launching a Couchbase container; they are purely for example. If you want to know how to run Couchbase in a container, I recommend reviewing the official documentation.
Another type of purchase from the Docker Store is a Developer Tier one. A good example of this is the Oracle Database Enterprise Edition container; to attach the subscription to your Docker Store account you need to agree to the following:
I agree that my use of each program in this Content, including any subsequent updates or upgrades, shall be governed by my existing Oracle license agreement for the program (subject to quantity and license type restrictions in my program license); or, if I don’t have an existing license agreement for the program, then by separate license terms, if any, stated in the program; or, if I don’t have an existing Oracle license agreement for a program and no separate license terms are stated, then by the terms of the Oracle license agreement here .
I don’t have access to an Oracle entitlement so I didn’t agree (I don’t want Larry and his lawyers after me). If I had, the process for pulling would be exactly the same as in the Couchbase example.
There is a lot of additional content on the Docker Store which you will not find on the Docker Hub; I recommend browsing the Docker Store to see what you are missing out on.
|
OPCFW_CODE
|
A simple example of an adjoint with rough output is normal moveout. Performing normal moveout on a trace means looping over values of vertical travel time and solving for the appropriate travel time t. The trace, collected at some offset, is stretched upward to simulate a zero offset trace. Each output bin collects one value from the input space. Performing the adjoint means pushing each input value into an output bin. Multiple inputs will end up in some output bins. The roughness with which the output space samples the input space is dictated by the velocity.
Here is a simple inversion, following an example from Claerbout (1994). Begin with a single seismic trace, which we label the model, m. We also have a solution trace, labeled x, which we wish to construct from a synthetic gather d = Mm, where M is a normal moveout (NMO) operator, and M' its adjoint stacking operator. Thus our regression is:

d ≈ Mx
We have some freedom in the way we define M and M'. M is our NMO operator, but we can choose it to sample the input trace or to sample the output gather, and the same is true of the stacking operator M'. In order to be precisely adjoint to each other, both M and M' must sample the same space. Further, the mathematics of normal moveout recommend always sampling the trace, because of the explicit relationship between vertical travel time and velocity. So initially we choose our NMO operator to model by pushing values from an input trace into the output gather, and to stack by pulling values into the trace. This gives an exact adjoint.
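The push/pull construction is easy to verify numerically. Below is a small sketch (not from the original paper; the nearest-neighbour interpolation and all parameter values are illustrative assumptions) of a single-offset NMO push operator built as an explicit matrix, whose transpose is then the exact pull-stacking adjoint by construction:

```python
import numpy as np

def nmo_push_pair(n, dt, x, v):
    # Build the push-modeling NMO operator M as an explicit matrix:
    # loop over vertical travel time tau (the trace), compute the moveout
    # time t = sqrt(tau^2 + (x/v)^2), and push trace sample i into the
    # nearest gather bin j.  The transpose M.T is then the pull-stacking
    # operator M', and the pair is exactly adjoint by construction.
    M = np.zeros((n, n))
    for i in range(n):
        tau = i * dt
        t = np.sqrt(tau ** 2 + (x / v) ** 2)
        j = int(round(t / dt))
        if j < n:
            M[j, i] = 1.0
    return M, M.T

M, Mt = nmo_push_pair(n=100, dt=0.004, x=200.0, v=2000.0)
rng = np.random.default_rng(0)
m = rng.standard_normal(100)   # model trace
d = rng.standard_normal(100)   # arbitrary gather trace
# The dot-product test <d, Mm> = <M'd, m> holds to machine precision:
assert np.isclose(d @ (M @ m), (Mt @ d) @ m)
```

Replacing M.T with an independently coded pull operator (one that samples the gather rather than the trace) would generally fail this dot-product test, which is exactly the distinction explored in the following experiments.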
I use this operator pair to solve the above regression and get the results seen in Figure 1. We see that the original waveforms are reconstructed almost exactly after just four or five iterations.
Figure 1: Iterations of the inversion. Below is shown the original trace. Note that the original is duplicated almost exactly after just a few iterations.
Now we redefine the operator. M' stays unchanged, but M is replaced. Maintaining similar naming conventions, we can define a stacking operator S which samples an input gather and pushes values into a stacked trace output. S' then must model by pulling values into the gather. We replace M by S' and obtain the regression:

d ≈ S'x
Using this new operator pair in the inversion yields the results seen in Figure 2. Amplitudes for most events are reconstructed more quickly, after a single iteration.
Figure 2: Iterations of the inversion. Below is shown the original trace. Note that amplitudes on most events are recovered more quickly than with the exact adjoint.
To quantify the results, Figure 3 displays the misfit ‖x − m‖ of the model at each iteration with the original trace. This misfit is the model residual, normally an unknown quantity, but a useful measure of success in synthetic problems where the exact answer is known. The inversion which uses exclusively pull operators converges more quickly than that which uses exact adjoints. For this simple example, instant convergence in the model residual would mean unitarity; accelerated convergence suggests that the pull adjoint gives something closer to a unitary operator than the exact adjoint.
Figure 3 Residuals for the two inversions as a function of iteration. Dashed line denotes inversion with the exact adjoint, continuous line with the pull adjoint.
To satisfy curiosity, I measured data residuals (the quantity minimized by the inversion) after several iterations at various slownesses. The results are shown in Figure 4. The smooth curve corresponds to the pull adjoint and the rough one to the exact adjoint. At all velocities, there is a noticeable advantage to using pull adjoint NMO. It is surprising that there is so little velocity dependence, because it seems the aliasing at high dips (or low velocities) should be a problem for push operators more than for pull operators. Further, the magnitude by which the operator pair which includes the pull adjoint fails the dot product test is not systematically dependent on velocity.
Figure 4 Comparison of residual after one iteration for the pull and exact adjoint. The smooth curve denotes the pull adjoint. Click to see movie of residuals at higher numbers of iterations.
Figure 5 Error in the dot product test is independent of velocity.
|
OPCFW_CODE
|
Make the main process wait until all subprocesses finish?
I have a main process which creates two or more subprocesses. I want the main process to wait until all subprocesses finish their operations and exit.
# main_script.py
p1 = subprocess.Popen(['python script1.py'])
p2 = subprocess.Popen(['python script2.py'])
...
#wait main process until both p1, p2 finish
...
use the wait method: p1.wait(); p2.wait()
check this question out: http://stackoverflow.com/questions/6341358/subprocess-wait-not-waiting-for-popen-process-to-finish-when-using-threads
http://stackoverflow.com/questions/100624/python-on-windows-how-to-wait-for-multiple-child-processes
BTW, Popen(['python script1.py']) won't work. Either do Popen(['python', 'script1.py']) (to be preferred) or Popen('python script1.py', shell=True).
Not a duplicate--linked duplicate is specifically regarding Windows.
A Popen object has a .wait() method defined exactly for this: to wait for the completion of a given subprocess (and, besides, for returning its exit status).
If you use this method, you'll prevent process zombies from lying around for too long.
(Alternatively, you can use subprocess.call() or subprocess.check_call() for calling and waiting. If you don't need IO with the process, that might be enough. But probably this is not an option, because the two subprocesses seem to be supposed to run in parallel, which they won't with call()/check_call().)
If you have several subprocesses to wait for, you can do
exit_codes = [p.wait() for p in p1, p2]
(or maybe exit_codes = [p.wait() for p in (p1, p2)] for syntactical reasons)
which returns as soon as all subprocesses have finished. You then have a list of return codes which you maybe can evaluate.
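Putting the pieces together, a minimal self-contained sketch (the inline `-c` snippets stand in for your script1.py/script2.py):

```python
import subprocess
import sys

# Launch two child processes in parallel; using sys.executable with -c
# keeps the example runnable without separate script files.
procs = [
    subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"]),
    subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.1)"]),
]

# wait() blocks until that child exits; the list comprehension therefore
# returns only once ALL children have finished.
exit_codes = [p.wait() for p in procs]
assert exit_codes == [0, 0]
```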
If I have a list of processes that I need to wait for, but I want the wait to be interrupted the moment any of the processes has finished (so I can later resume it to wait for the remaining processes), how would I go about that?
@antred Then you should call .wait() with a (rather small) timeout and process the results. AFAIR, it returns None if the said process wasn't terminated yet and a number if it was.
@glglgl Yeah, that would work, though I wanted to avoid having to actively poll, but I guess there may not be a way around it (I'm on a Windows machine). Also, apparently the same thing was asked and answered here: http://stackoverflow.com/questions/100624/python-on-windows-how-to-wait-for-multiple-child-processes
@antred Of course - you can have a thread wait on each process and tell as soon as it finished. That would be a solution I didn't think of. It would lead to maximum responsiveness.
@glglgl I wish Python on Windows would just emulate what os.waitpid( 0 ) does on Unix. Even if, under the covers, the emulation would have to do the same thing the threaded solutions in that other SO post do, it would still be nice to have Python offer this functionality out-of-the-box.
@glglgl Thanks soo much for explaining the difference between subprocess.call and subprocess.Popen . It is really helpful to know that only the latter shows IO and runs in parallel!
tried with Python 3.8.8, this method does not work anymore.
@Franva I didn't know "does not work" is a valid way of describing an error. But maybe the edit I made to my answer solves your problem.
subprocess.call
automatically waits. You can also use:
p1.wait()
|
STACK_EXCHANGE
|
What Kind of Guidance We Are Seeking from the Quran to Benefit from It
Any intelligent person can learn Arabic, study the Quran, Hadith and Tafsir (detailed explanation), and memorize the Quran in a short time. He or she can understand Allah’s book within a very short period. But the psychological prerequisites needed to benefit from the Quran are a lifelong struggle. They are something we may gain at one point in our life and lose at a later point. They are not something we can simply keep; they are something we have to fight hard to maintain. And more than anything else, we have to fight with ourselves. They are not like a general university course where, once you graduate and meet the prerequisites, you get the degree and never have to go back to college. This is not the case with the Quran and Islam. In Islam we have to struggle every day to maintain it.
So, while reading and understanding Quran we should always remind ourselves that:
- Why am I learning this?
- Why am I learning Arabic?
- Why am I learning Tafsir?
- Why am I reading the translation?
- Why am I memorizing this or that Ayat?
- What is the point of all of this? What is the intent?
The common answer is that we are doing this to seek guidance from Allah. But seeking guidance is not enough, for every day comes with a new set of choices. Should I do this or should I do that? Should I look there or should I look down? Should I talk back or should I stay quiet? Should I think this or should I think that? Should I earn my money this way or that way? Should I pursue this or should I leave it? Every second of the day and night we are faced with choices. So, when we ask for guidance, we ask Allah to give us the strength to make the right choices.
To conclude, the basic motive of this write-up is to encourage one to read the Quran with understanding. It is not written to criticize anyone. This write-up is compiled as per my understanding; it is the result of my deliberation and musings over a period of time. It is a small effort on my part. The write-up might reflect signs of ignorance and gaps in my understanding as well as in my articulation. What appears sound in this write-up should be regarded as a favour from the Almighty and the outcome of the blessings of my parents and all those who have been a part of my life’s journey at some point of time or other.
What appears unsound should be attributed to my own oversight. I request you all to make Du’a for my parents and all of those who have been a part of my life. It will be a great pleasure to receive your guidance, feedback and suggestions.
- Please ignore and discard anything in this write-up which is not in coherence with the Quran and Authentic Hadith.
- Ayats of the Quran mentioned in this write-up are quoted from the translation of the Quran by Abdullah Yusuf Ali.
|
OPCFW_CODE
|
Installing Toree+Spark 2.1 on Ubuntu 16.04
## 23rd January 2017
At the time of writing, the pip install of toree is not compatible with spark 2.x. We need to use the master branch from git.
sudo apt install openjdk-8-jdk-headless
sudo apt install git
sbt isn't available in the Ubuntu repos. Install it manually or do the following:-
echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823 sudo apt-get update sudo apt-get install sbt
Install Anaconda Python
Anaconda Python can be installed to a user's home directory and it contains most of the Python modules needed by the majority of researchers. It can coexist with the normal Ubuntu Python packages
wget https://repo.continuum.io/archive/Anaconda3-4.2.0-Linux-x86_64.sh
chmod +x ./Anaconda3-4.2.0-Linux-x86_64.sh
./Anaconda3-4.2.0-Linux-x86_64.sh
Follow the instructions. When you are asked the following, say yes:
Do you wish the installer to prepend the Anaconda3 install location to PATH in your /home/walkingrandomly/.bashrc ? [yes|no]
Start a new terminal session so that the
.bashrc changes get applied.
wget http://d3kbcqa49mib13.cloudfront.net/spark-2.1.0.tgz
tar -xvzf ./spark-2.1.0.tgz
cd spark-2.1.0/
build/mvn -DskipTests clean package
Check that you can run the spark shell:

./bin/spark-shell
Press CTRL-D to exit the shell
Install toree from source
git clone https://github.com/apache/incubator-toree
cd incubator-toree/
make dist
You'll get this error which you can ignore:
/bin/sh: 1: docker: not found Makefile:212: recipe for target 'dist/toree-pip/toree-0.2.0.dev1.tar.gz' failed make: *** [dist/toree-pip/toree-0.2.0.dev1.tar.gz] Error 127
Now we can install the built package:
cd dist/toree-pip/
python setup.py install
Install the jupyter kernel. I call this one
bespoke_spark to differentiate from any others you may have.
Be sure to change the value of
--spark-home to yours.
jupyter toree install --kernel_name=bespoke_spark --spark_home=/home/walkingrandomly/spark-2.1.0/ --user
Now launch Jupyter with

jupyter notebook
and you'll be able to select the kernel and use spark 2.1
|
OPCFW_CODE
|
I feel a little stupid posting this question, but I’ve search the wiki, the source code and the forum high and low and can’t seem to find anything on how to load a saved AudioNode. Here’s what I’m currently getting. When I save an AudioNode the object looks like this:
So I assume somehow I’m going about this the wrong way. How exactly should you reconstruct your audio nodes after loading a saved file? Do I have to go through all the nodes and restart the audio manually? I can see how I would do this, but I’m wondering if there’s a slicker way to do it. Also, why is this object losing its positional information? And as a matter of fact the status is also incorrect in that it goes from playing to stopped.
I assume this is related to the fact that data, status, and channel in AudioNode are marked as transient and not saved. Why is that?
Thanks for your reply. I setup a simple test case and the issue with not saving the position seems to have gone away. Which is odd, but I’ll try to figure that out on my own. However, the issue with the AudioNode being in a stopped state is still there. Any guidance on what I should be doing in this situation would be helpful. Like I said, I could find no information on how these AudioNodes should be handled during load.
Hey, when you load your audio node you need to call the play() method to play the sound again. To do that, I would just search for the audio nodes in the scene graph; you can use the depthFirstTraversal method of the Node class for that.
If you only want to play certain sounds, you have to add custom user data and evaluate it yourself, for example.
Hmm, ok. Well it seems like bad form for me to be duplicating data, having to extend AudioNode and duplicating the state information that is already present, but not saved, in the superclass, just so I can save and load that data. I assume that’s the path I will have to take?
My setup is pretty straightforward and simple I think. I have an audionode (with an engine sound) attached to my model node (of for example an aircraft), that audio node is positional and looping of course. So I save the aircraft and reload and run into this issue.
I guess I could use user data and not extend audionode, however if I’m going to have to check the playing state and restart the sound anyway, I think it would be easiest to do that from the read method in my subclass. At least that’s what I’m thinking of doing off the top of my head.
Yes I also wrote a simple AudioNode that worked ok from the translation standpoint. Not sure why it’s not working right elsewhere in my code. In some cases I’ve seen it where the localTranslation is correct, but the worldTranslation is not.
I do have code embedded into my object that plays the sound when the aircraft takes off, but doesn’t mess with the sound until the aircraft lands, except to change the pitch based on the engine rpm. The problem is if the object is loaded and in-flight already the sound is obviously not playing. I’ll be able to handle this just fine though, I just wanted to make sure that this wasn’t supposed to be working automagically for me.
I’m also curious if someone knows why the AudioNode status is transient. That’s very useful data that would help with the loading process if it were saved.
I guess I was being stupid as the objects I was looking at were attached to an orphan node that was not attached in any way to the rootNode. That must’ve been causing the world trans to reset to three balls on load I guess even though they had a proper world trans in the original game before the save, despite still being detached. Doah. Oh well.
Well it’s not a problem. I for whatever reason, had commented out the line attaching the parent node to my scene graph. I uncommented it and everything is fine. Also I extended AudioNode and it works great. The nice thing is I didn’t have to duplicate the status from the superclass as I thought I might, but rather all I needed was to do all the work in the save and load methods.
I still don’t get why AudioNode doesn’t do this automatically. But at least it works.
I guess because it can’t know where you want to play it. I mean, if we pretended to save all of the state then that would be the next thing folks complained about “Why is my audio playing from the beginning instead of 5 minutes in like when I saved it?”
The root of the problem is really treating scene graph objects like game objects. If you had real game objects and the scene graph was just a reflection of that then sound on/off would be part of the game object state… and that’s what you’d be saving/loading.
…then you also wouldn’t have to worry about a new version of JME messing up every one of your existing save games.
|
OPCFW_CODE
|
There’s a Catch-22 hidden in the arguments that many people use to rationalize not writing tests.
A Catch-22 is a situation that you can’t escape out of due to contradictory rules or limitations. In case of automated tests for software, the arguments often go like this.
At the start of the project, both developers and managers say that the project is too young and changing all the time. There’s also market pressure to get something minimal out there as fast as possible. So there’s no time to write tests.
But once the project has matured, the code is harder to put under test, the developers haven’t adopted the necessary habits to write tests, and surprise surprise, the market pressure is still there.
So no tests are ever written. Unfortunately, this leads to increasingly complex and highly coupled code that is increasingly harder to test. Progress slows down and frustration follows.
Escaping the Negative Cycle
So when should we start writing tests? When people ask me that question, the answer is usually now. Right now.
If your project is only getting started, it can be OK to write some code without tests: a proof-of-concept, a very small MVP to see if it works, etc. Just to see if and how things would work out.
But this period is shorter than most would expect. In 2-3 weeks, a team of average skill can get something up and running, and then they should start taking things seriously. This includes writing tests.
An experienced team (with TDD experience) will be able to write code that can be tested afterwards with minimal effort. But even they should not allow that “test-less” time to take too long. Inevitably, code will become coupled and difficult to test.
But I’m Not Allowed!
What if management or some lead dev/architect forbids you from writing tests?
My advice? Still do it.
First, management shouldn’t be telling you how to write code, only what to implement. Lead developers and architects are different, but they wouldn’t tell you which keyboard layout or IDE to use, would they? Testing can be seen as a tool to write code. Instead of running the application to verify your changes, you can say you run automated tests.
You can then also explain why this works faster for you. There is less manual testing and debugging to be done. It also leads to better code quality and stops other developers from breaking features you’ve implemented.
But of course, that assumes there is room for rational arguments. If there isn’t, do you really want to work there? With companies screaming desperately for developers, you should be able to find a better place to work. There’s a reason there are so many recruiters in the IT industry.
Of course, this might be different in your specific situation or region, but in general, there are better jobs out there for developers.
So regardless of how old your project is and regardless of what others tell you, my advice is to start writing tests now.
If you have a legacy project and don’t know where to start, try to find a small piece of code that is isolated and write a test for that. It’ll allow you to set up the necessary CI infrastructure giving you the foundations to continue.
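As a concrete illustration (entirely hypothetical; the function and names are made up, not from any particular project), that first test might pin down the behaviour of a small, dependency-free helper:

```python
# A hypothetical first test in a legacy codebase: pick a small, isolated,
# pure function and pin down its current behaviour with one assertion.
def slugify(title):
    """Turn a post title into a URL slug (the isolated piece under test)."""
    return "-".join(title.lower().split())

def test_slugify_joins_lowercased_words_with_dashes():
    assert slugify("Hello Legacy World") == "hello-legacy-world"

# Under pytest this test would be discovered automatically;
# here we simply call it directly.
test_slugify_joins_lowercased_words_with_dashes()
```

One trivial test like this is enough to justify wiring up the CI pipeline, and from there each new test is cheap to add.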
Once you start writing tests, you’ll get better at it and start noticing other places where you can add tests.
Often, it just takes a small push to get others to start writing tests as well. A simple first test written by one developer often leads to more and more tests written by the entire team.
|
OPCFW_CODE
|
Our department (I.T.) already has a bad reputation around the organization. We're badly managed, and budget cuts keep us using outdated equipment, and with fewer and fewer people to maintain it.
Everybody's complaining about things not working, and there's almost nothing I can do to help them!
The hardware is junk, and years old. So systems are dropping like flies. We don't have any replacements, so we're trying to salvage what parts we can from the dead systems. But that takes time to figure out what works and what doesn't.
We don't have any ability to reimage, reinstall, or add programs. Yes, you read that right. We use an app/image deployment system to push things to the client. That system hasn't been working in a couple months, since we decided to 'upgrade' it. We'd just gotten the old one working fairly well, so obviously it was time to throw it out and leap completely into the new and untested version.
So I can't really replace much hardware, and I can't replace or add any software. So a lot of the computers aren't working properly. On top of that, we have network problems. The DNS doesn't seem to be registering clients properly, so it's resolving host names to incorrect IP addresses, which prevents people from getting online. PXE isn't working, and neither is the imaging system I mentioned earlier.
In addition to all of that, we're also using a new(ish) ticket system for our work. It's a lot slower than the last one, and the interface is abysmal. It's truly horrible to use. It takes 3-5 seconds to open a ticket, just so you can look at the problem and go "oh... I can't fix that." It's nearly impossible to sort or categorize your tickets, and it doesn't remember any of your settings when you open the next ticket.
We're so swamped with work that it's taking them about 3 days to even get the tickets to us after they've been put in the system by the users. I'm getting notifications that I have a new ticket, and that that ticket is over-due all at the same time.
People are ready to riot. They're taking things into their own hands trying to get things working (which usually ends up making things worse for us).
I hate feeling helpless to help someone. Time to get on Reddit and complain about it.
tl;dr They finally pushed us beyond what we could do, and we broke.
|
OPCFW_CODE
|
The database log is full of warnings like these!
2020-09-17 11:23:04 7 [Warning] Aborted connection 7 to db: 'unconnected' user: 'unauthenticated' host: '<IP_ADDRESS>' (This connection closed normally without authentication)
2020-09-17 11:23:11 12 [Warning] Aborted connection 12 to db: 'powerdns' user: 'powerdns' host: '<IP_ADDRESS>' (Got an error reading communication packets)
2020-09-17 11:23:12 17 [Warning] Aborted connection 17 to db: 'unconnected' user: 'unauthenticated' host: '<IP_ADDRESS>' (This connection closed normally without authentication)
2020-09-17 11:23:12 18 [Warning] Aborted connection 18 to db: 'powerdns' user: 'powerdns' host: '<IP_ADDRESS>' (Got an error reading communication packets)
2020-09-17 11:23:12 20 [Warning] Aborted connection 20 to db: 'powerdns' user: 'powerdns' host: '<IP_ADDRESS>' (Got an error reading communication packets)
2020-09-17 11:23:21 22 [Warning] Aborted connection 22 to db: 'powerdns' user: 'powerdns' host: '<IP_ADDRESS>' (Got an error reading communication packets)
2020-09-17 11:23:31 24 [Warning] Aborted connection 24 to db: 'powerdns' user: 'powerdns' host: '<IP_ADDRESS>' (Got an error reading communication packets)
2020-09-17 11:23:36 26 [Warning] Aborted connection 26 to db: 'unconnected' user: 'unauthenticated' host: '<IP_ADDRESS>' (This connection closed normally without authentication)
2020-09-17 11:23:41 29 [Warning] Aborted connection 29 to db: 'powerdns' user: 'powerdns' host: '<IP_ADDRESS>' (Got an error reading communication packets)
2020-09-17 11:23:46 31 [Warning] Aborted connection 31 to db: 'unconnected' user: 'unauthenticated' host: '<IP_ADDRESS>' (This connection closed normally without authentication)
2020-09-17 11:23:51 32 [Warning] Aborted connection 32 to db: 'powerdns' user: 'powerdns' host: '<IP_ADDRESS>' (Got an error reading communication packets)
2020-09-17 11:23:56 34 [Warning] Aborted connection 34 to db: 'unconnected' user: 'unauthenticated' host: '<IP_ADDRESS>' (This connection closed normally without authentication)
same as issue 10
Update helm to 0.1.10 with "PDNS_gmysql_innodb_read_committed"
Added new option powerdns.innodb_read_committed default 'no'
I hit the same issue in my database and here is the log for powerdns.
Sep 22 00:09:01 Fatal error: Trying to set unknown parameter 'innodb-read-committed'
Sep 22 00:09:02 Our pdns instance exited with code 1, respawning
Sep 22 00:09:03 Guardian is launching an instance
Sep 22 00:09:03 Loading '/usr/lib/pdns/pdns/libgmysqlbackend.so'
Sep 22 00:09:03 This is a guarded instance of pdns
Sep 22 00:09:03 Fatal error: Trying to set unknown parameter 'innodb-read-committed'
Sep 22 00:09:04 Our pdns instance exited with code 1, respawning
Sep 22 00:09:05 Guardian is launching an instance
Sep 22 00:09:05 Loading '/usr/lib/pdns/pdns/libgmysqlbackend.so'
Sep 22 00:09:05 This is a guarded instance of pdns
Sep 22 00:09:05 Fatal error: Trying to set unknown parameter 'innodb-read-committed'
Sep 22 00:09:06 Our pdns instance exited with code 1, respawning
Sep 22 00:09:07 Guardian is launching an instance
Sep 22 00:09:07 Loading '/usr/lib/pdns/pdns/libgmysqlbackend.so'
|
GITHUB_ARCHIVE
|
Full-stack engineer at Moveline
Las Vegas, Nevada, United States
🇺🇸 (Posted Mar 13 2014)
About the company
Moveline is a technology company that makes life easier for people who are moving. We build software people use to get organized, compare prices, and make decisions about an upcoming move.
- Remote work possible
TLDR: remote workers, full-stack, JS, Node, Angular, Express, Mongo, Holacracy, Golang, Redis, Grunt, Bower, LESS, web + mobile
Moveline is transforming an industry older than the internal combustion engine. We ship every day and play Settlers on Fridays.
We’re looking for a solid full-stack engineer who loves Settlers of Catan, remote development, and can tell the difference between an IPA and a Lager.
- Driven to build software that dramatically improves the customer experience, end-to-end, around moving. Our web product is at the heart of it
- Well-funded by a group of world-class investors and advisors: (angel.co/moveline)
- Our organization is flexible and embraces the Holacracy model of governance. Self-determination is encouraged and self-motivation is essential.
- Have only begun to tackle the problem space. Serious fun and challenges still lie ahead.
- Our stack is primarily MEAN — Mongo/Express/Angular/Node — with some Golang on the backend. We regularly evaluate new tools and technologies for development advantages and not just because they are new and cool.
Market salary and meaningful equity is available. We’re primarily a remote engineering team, with the company (ops, marketing, customer service) based in Las Vegas in the heart of Tony Hsieh’s Downtown Project. Hackers in Vegas or remote in the US welcome. Full Time or Contract-to-Hire only please. No freelancers or recruiters need apply.
Skills & requirements
- Passionate about code, development practices, and maintainable solutions and want to work with others who are similarly so. You can’t sleep at night knowing something is not DRY and unit-tested
- Architected and developed end-to-end products that are currently running business applications in a production environment
- Energized when working closely with others on a small team
- Want to build stuff that solves real human problems
- Can explain the differences, chemical and philosophical, between a lager and an IPA
- Don’t care if the moving industry isn’t sexy
- Would rather make money than make the front page of TechCrunch (though we do that too)
Instructions how to apply
go to the moveline website and contact them
[job website]
Let them know you found the job via https://www.golangprojects.com
(Companies love to know recruiting strategies that work)
|
OPCFW_CODE
|
Disable Firefox feature to choose its own DNS
Let's say I have set up a Windows PC to use <IP_ADDRESS> Cloudflare "Family" DNS (no "adult" websites) as the computer's DNS (in the connection/network adapter settings).
Then navigating on these websites is disabled (ok it's only at DNS level, and not a total protection of course, but it's better than nothing).
But Firefox has a feature to choose its own DNS, so it's very easy for a user to choose <IP_ADDRESS> again which won't block anything, and have access to full internet.
Is there a Windows "group policy" or similar hidden feature in Firefox to prevent Firefox users from choosing their own DNS?
(I use a policies.json file, but I don't find the specific setting to remove the ability to change DNS settings: https://mozilla.github.io/policy-templates)
This is a common problem for sys admin of school computers.
How do they prevent Firefox users from choosing their own DNS?
Firefox supports enterprise policy. Surely there’s a switch available for that.
@DanielB Yes I use a policies.json, but I don't find the specific setting to remove the ability to change DNS settings: https://mozilla.github.io/policy-templates/
Out of curiosity: did you also disable the user's ability to install other browsers? Or to use a VPN or a proxy? Or to use their mobile device's mobile data plan? Or to use Microsoft Bing Video to watch adult content? Or to visit NSFW subreddits? Or to use Twitter? Or to join NSFW Discord servers? Or any of the myriad other ways to circumvent an adult filter?
@Nzall, I also did prevent some of the other things you mentioned, yes. But not all, of course. We probably agree that education about the use of the internet is the key, and "blocking only" is of course not the solution and can easily be defeated, yes. I agree with all of that. Nevertheless, it's not totally useless either to take a few steps to activate parental controls...
Looks like there is an option for configuring DNSOverHTTPS (DoH):
https://mozilla.github.io/policy-templates/#dnsoverhttps
policies.json:
{
"policies": {
"DNSOverHTTPS": {
"Enabled": true | false,
"ProviderURL": "URL_TO_ALTERNATE_PROVIDER",
"Locked": true | false,
"ExcludedDomains": ["example.com"],
"Fallback": true | false
}
}
}
Enabled determines whether DNS over HTTPS is enabled
ProviderURL is a URL to another provider.
Locked prevents the user from changing DNS over HTTPS preferences.
ExcludedDomains excludes domains from DNS over HTTPS.
Fallback determines whether or not Firefox will use your default DNS resolver if there is a problem with the secure DNS provider.
Setting Enabled to false and setting Locked to true will disable the feature and prevent changing the setting (confirmed working):
{
"policies": {
"DNSOverHTTPS": {
"Enabled": false,
"Locked": true
}
}
}
|
STACK_EXCHANGE
|
Refactor: Avoid Passing Controller Context to Service Layer
After reviewing the service implementation, it appears that we're passing the controller's context directly to the service layer. This approach compromises our abstraction boundaries, effectively blending controller and service responsibilities. By injecting the controller's context, we tightly couple the service layer to the specifics of the request lifecycle, which limits where and how the service layer can be reused. This dependency means that services can only function correctly in the context of a controller, restricting flexibility and reducing separation of concerns.
Instead, we should extract the relevant parameters from the request context within the controller and pass only those necessary values to the service layer. This way, our service layer remains decoupled, reusable, and focused solely on the core business logic without knowledge of the request context.
https://github.com/inbox451/inbox451/blob/a927b348fb456ba8433c0132f6a5cbac1eb0ea95/internal/storage/users.go#L12
cc/ @Jalmeida1994
Totally agree with the points raised. Passing context.Context around is definitely the Go way for handling cancellations and timeouts, but we’re being a bit lazy by just forwarding the whole request context as-is.
Right now, our service layer ends up with way more info than it needs, creating unnecessary coupling. Ideally, the service layer shouldn’t be aware of request-level details—it just needs what’s relevant to run its logic.
Like you said, we should adjust things so the controller layer pulls out only the essentials (like auth tokens, request IDs, or deadline info) and passes those directly to the service layer. For pure cancellation or timeout handling, we can still use context.Context, but it should be a fresh one derived from the request context instead of the original. This way, we stick to Go’s context pattern without tightly coupling our layers.
Unless it's really a Go way to do things but I would need to investigate a bit more. But by me, let's go for it!
@Jalmeida1994 I was looking into the authentication concept in general, and with that investigating how echo's middleware work.
With that I started running some experiments and was trying the search for a user using its ID, and that's when it struck me that to call GetUserByID I would need to pass the context from the request.
Holding off on #22 (unit test implementation) pending this refactor. The current design passes request context directly through to the storage layer, which creates tight coupling and makes unit testing more complex - we shouldn't need to mock HTTP request context at the storage level.
Hi @Jalmeida1994, I just noticed that the Context that we are passing is the context.Context interface and not the Echo Context, and this has a special meaning.
The context.Context interface is a core part of Go's concurrency control and request-scoped data management.
type Context interface {
Deadline() (deadline time.Time, ok bool)
Done() <-chan struct{}
Err() error
Value(key any) any
}
The main purposes of the Context are:
Cancellation signal: important when the HTTP connection gets closed, for example, and we are in the middle of a SQL query; with this we can push the cancel to the SQL server to end the query.
Deadline management in Go allows you to set a specific time at which a context should expire. This is useful for enforcing timeouts and managing time-sensitive operations, and we see this across a bunch of apps with context.WithDeadline() and context.WithTimeout().
Request-scoped value propagation: this allows passing values across the call stack without passing parameters through methods; we can use this to pass "core" to the entire app without needing to push it as a method parameter, and can retrieve it from the Context. This also works for auth objects, for example.
Resource cleanup: by looking at ctx.Done() during long-running tasks we can tell whether we can free given resources.
This said, regarding propagating the context to our service layer we can derive the following benefits of keeping it as-is:
allows us to propagate the cancellation signals
allows us to access context scope values
allows us to check for timeouts looking at ctx.Done() before running a request / query, obviously we don't do this but it's good to know :D
Let's have a 1:1 chat regarding this, I'm going to close this ticket.
|
GITHUB_ARCHIVE
|
Traders often have a tough time in determining where to place their stops and Anne-Marie Baiynd discusses why she thinks it should be determined by stop-specific criteria.
TIM: My guest today is Anne-Marie Baiynd, and we're talking about setting stops in your trading so that you don't lose more money than you can afford. Anne-Marie, talk about some of the best practices for setting stops. I know it's a question a lot of traders have, where do I set my stop on my trade.
ANNE-MARIE: You know, I would love for that answer to be simple, but it's a little bit more convoluted because in the end it's stock-specific. If you have a stock that's really noisy or has a high beta, meaning it moves at a faster rate than the broad market, where you set the stops is really going to be dependent on how much the stock moves. If I'm trading a utility with a beta of 0.5, meaning that it moves much less than the broad market, I'm going to be able to have a nice tight stop. If I'm looking at a chart that's really breaking out and all of a sudden it's just left a really tight channel and it's accelerating very quickly, I can look at that range of motion and see that I can also give myself a tight stop.
Many of us structure our trades to go, you know what, I'm only going to lose $300 on this trade, but that means I'm going to have to buy X many shares because that just means a 75-cent stop, without regard to what the stock is. Some stocks only need a 15-cent stop, some stocks need a buck fifty, so it's learning about the instrument that you're trading, understanding, hey, if it breaks out here, what's it most likely to do? How far is it most likely to come back?
One of my favorite things to do is when a stock breaks out, I will look at the candlestick that breaks that relative resistance if we're going to the north, I'll look for that, and the length of that candlestick that broke my level, I will look just underneath it at the bottom of its wick, give it a little bit more room, and I will make that the stop, so it really becomes a function of how sharply it broke resistance or how sharply it will break support that holds.
TIM: So make it objective, not just something subjective.
ANNE-MARIE: Oh, absolutely.
TIM: How about like -
ANNE-MARIE: You can't have flat things.
TIM: How about average true range or something like that? Can I use something like that on a daily basis to find a place? If the stock typically moves a dollar, I don't want to put my stop within that range because it may get hit too easily.
ANNE-MARIE: That's exactly right. You can use that very well. It's an easy way to have a system tell you what it's actually doing so that you don't have to really think through a whole lot. In the end the more you think about that particular stop that you're using, the better it's going to be when you choose that stop.
TIM: Anne-Marie, thanks for your time.
ANNE-MARIE: Thanks for having me.
TIM: You're watching the MoneyShow.com video network.
|
OPCFW_CODE
|
Peter and Matthew Howkins have announced the availability of a new version of free emulator RPCEmu.
Version 0.8.9 brings with it a long list of improvements, fixes and new features. These include, for all the platforms on which the emulator can be run:
- There is now support for emulation of 256 MB of RAM. This is the maximum amount supported by the RiscPC (which, as its name suggests, is hardware the emulator is originally designed to emulate) and A7000.
- A new option has been added to reduce CPU usage. When enabled, RPCEmu will try to reduce the amount of CPU usage by utilising the ‘Idle’ feature of RISC OS. The effects should be seen roughly 30 seconds after booting RISC OS, provided activity is low enough. This feature does not require a ‘Portable’ module, and is partly based on code by Jeffrey Lee.
- A two-button mouse mode has been added which swaps the right and middle mouse button behaviour, meaning the right hand button (normally Adjust on RISC OS) acts like the middle button (normally Menu). When used on systems with a two-button mouse or other pointing device, such as the touch-pad found on most laptops, this means the two buttons become Select and Menu, and there is no need to move your hand to the keyboard to call up a RISC OS menu.
- There have been several fixes to the “Follow host mouse” feature, which should make it more reliable: It now correctly interprets OS_Byte 106 and handles pointer/cursor linking.
- SWIs are now intercepted even when called using CallASWI. This, based on a patch by Alan Buckley, further improves the reliability of the “Follow host mouse” feature.
- When changing between emulating a RiscPC and an A7000, the emulator will now appropriately (re)configure the mouse type, removing the need to issue the necessary
- Resolved a bug in which an ARM instruction including a rotate could set the C flag incorrectly.
- A possible crash in the Dynamic Recompiler has been prevented thanks to a fix from Tom Walker.
- When using RISC OS 4.02 with no VRAM configured, RPCEmu now boots in RiscPC emulation.
- There are accuracy improvements to the emulation of IOMD.
- A workaround has been implemented to the ADC issue on the 64bit recompiler, which prevented RISC OS 5.17 from booting.
- Refactoring of code, particularly relating to RAM and IOMD.
Changes relating to the Windows version:
- The window size has been increased vertically by one pixel. Previously it was too short, resulting in the very bottom row of the display being missing.
- There have been some improvements to the GUI, including the enabling of Windows “Visual Styles” so that windows adopt the native look of the OS, as well as improving the layout of the “Configure” window.
- A potential freeze when choosing “File->Exit” has been fixed.
- An improved icon, which now includes high-resolution variants for Windows Vista and later.
Changes relating to the Linux version:
- Some improvements to the GUI, including improving the layout of the “Configure” window.
Note: Before installing the Windows version, it is recommended that backup copies are made of your cmos.ram and rpc.cfg files – these may be overwritten during the installation process, losing your choices and settings. Once the installation has been carried out, you should be able to copy them back into their original locations, and they will be recognised by the new version.
|
OPCFW_CODE
|
All game developers wish they could get players once and then retain them for the rest of their lives. So that no player leaves the game and they don’t have to spend time wondering why users are leaving their mobile game.
Everyone knows ultimately that perfect retention is impossible. And that there will always be those who dislike your game.
Moreover, even those who like your game may not stay on for long. And unfortunately, that’s just the reality android game developers have to live with.
But putting effort into player retention is worth it. You can consciously invest time and energy before and after your game’s release to ensure as many players as possible stay in your game.
However, first, you must make sure you are not giving your players reasons to leave your game.
Below, we list some of the common mistakes which game developers make, which you should avoid making when you develop your game, to prevent users from leaving your game:
First impressions matter. In the case of games, first impressions are even more critical because players are quick to judge.
Especially mobile gamers. For them, forming an opinion based on a game’s first impression is beneficial, as they have access to thousands of games. So if one game doesn’t impress them within the first few minutes of opening it, users will leave your mobile game and install another one.
But many developers make the mistake of not spending enough time optimizing their onboarding process. As a result, the tutorials in their games are lengthy, the title screens are too many, and the UI screen is confusing—all things, which form the perfect recipe to turn off players.
You should avoid these mistakes in your mobile game, and you should spend time optimizing the onboarding stage of your game.
And this effort should start right from the title screen itself. First, you can use the title screen to create an excellent first impression on your players. Then, you can use the screen to prepare them for the experience they are about to have.
After the title screen comes the UI screen. You should make the UI clean and intuitive. The elements in your game’s UI should be neatly categorized and easy to access. The most crucial buttons and icons should be prominent on the screen, so your players will be free from confusion.
Third, comes the tutorial in your game. Your game’s tutorial should teach your players your game’s basics. But should not spend too much time doing so. Your players will be eager to start playing, and a lengthy tutorial will hinder it.
So, make the tutorial short and to the point. Or, as some game developers do, discard tutorials altogether. Instead, design your game so that the gameplay itself will teach players how to play.
Overpowered weapons, broken progression systems, and sudden difficulty spikes all make your game’s experience a bad one.
For example, if a specific weapon is more powerful than all other weapons, with no negative consequences, the player would find the game less challenging.
Similarly, if the first few levels are tough to complete, the player would deem the game too hard.
And no matter how hard you try to eliminate these from your game during the design stages, they will be present in your game.
However, there is a way you can remedy this: playtesting.
By playtesting, you can see where your game’s gameplay experience needs tweaking. You can identify the areas where players might get turned off in your game.
For this, you will need a team of experienced playtesters. The team would play the game repeatedly and make sure your final gameplay experience is a good one.
Bugs significantly affect the gameplay experience. Some bugs won’t even let the players play in the first place.
Eliminating every single bug in your game is impossible. No matter how hard you try, your game will always have some minor bugs.
However, you should take extra care to make sure your game is free of significant bugs since you don’t want your users leaving your mobile game. Major bugs are bugs that make it impossible for the players to enjoy your game.
You should also make sure your game is free of too many minor bugs. Too many minor bugs can cause your players to have a bad gameplay experience.
To combat this, you should subject your game to rigorous testing.
Hire a group of testers and make sure they test your game for everything, from compatibility to performance.
When you find out how you are losing your players, you can start planning ways to stop losing them. You can use game analytics for exactly this purpose.
With better methods, you can enjoy higher retention. But, for that, you must also avoid making big mistakes.
The best way to do all this is with a top mobile game development company like Juego Studios. Juego Studios has developed numerous mobile games across Android and iOS platforms. The games we have developed have been received warmly by both players and clients.
|
OPCFW_CODE
|
myWidget.doSomeStuff() if you don’t have an instance of
First: what are the more traditional options?
A global function. If the mechanics of the function are not specific to the object’s implementation, then this might be a perfectly acceptable approach. If your function is just “doing stuff to” (or “…with”) some arguments then this might be fine. Performing some mathematical operations? Who needs an object-oriented approach for determining Fibonacci numbers? Or trimming whitespace from strings? Certainly that would be overkill.
A namespaced util method. Not too dissimilar from the global function approach; but if you’re namespacing, you’re less likely to have collisions in your variable and function names, and maybe some better sharing of common code, etc3. As far as “static methods” go, we’re just calling these methods off a singleton—fine (for example) for converting constants, or otherwise working with and manipulating known quantities.
But what about something more sophisticated? What if you have a family of classes (e.g., widget editors) that are all related but may each require an ever so slightly different approach to the context inspection? A global function or namespaced util won’t quite cut it here. What are we to do?
Observe (using our suggested example above):
Imagine a scenario where you have a few dozen classes like this (e.g., each class representing an editor for a specific widget that you might deploy to your WordPress-based blog)—using this technique, each class could perform its own inspection of the context (e.g., the DOM fragment representing the widget) and depending on the outcome of the inspection, assign itself as the appropriate handler5—and all without creating an instance of a given class until it is needed6. Marvelous!
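To make the technique concrete, here is a minimal sketch with entirely hypothetical class names (ButtonEditor, SliderEditor), and with a toy string-matching "inspection" standing in for real DOM-fragment inspection:

```javascript
// Hypothetical family of editor classes. Each exposes an inspection
// method on its prototype that deliberately does NOT rely on `this`
// being an instance -- that's what makes it safe to call "statically".
function ButtonEditor() { this.name = 'button'; }
ButtonEditor.prototype.canHandle = function (fragment) {
  // Toy stand-in for inspecting a DOM fragment.
  return /button/.test(fragment);
};

function SliderEditor() { this.name = 'slider'; }
SliderEditor.prototype.canHandle = function (fragment) {
  return /slider/.test(fragment);
};

// Pick a handler class without instantiating anything until needed.
function pickEditor(fragment) {
  var candidates = [ButtonEditor, SliderEditor];
  for (var i = 0; i < candidates.length; i++) {
    // Calling the method off the prototype: no instance exists yet,
    // so inside canHandle, `this` is the prototype object itself.
    if (candidates[i].prototype.canHandle(fragment)) {
      return new candidates[i]();
    }
  }
  return null;
}
```

Here pickEditor only constructs a class after its prototype method has claimed the fragment, which is the deferred-instantiation payoff described above.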
But this approach is not without some dangers7 and does require careful attention to detail and some discipline. Because you're invoking the specified method from the prototype, you should assume that the method is not "scope safe". This is not to say that this is unsafe; but this might not be what you think it is8. You can9 still tap into this in your "static" method, but you better be damn certain that your
- …only to find out that it’s not even the right object for the task! [↩]
- For the record, I’m a Java neophyte; this is the comparison explained to me, and the one that makes sense. [↩]
- But now we’re just talking “best practices”. [↩]
- We’re (of course?) assuming here that there is a 1:1 binding relationship between editors and widgets for the sake of this example. [↩]
- Granted, in this example, there still needs to be some apparatus in place to manage calling the inspector method from each prototype (bringing us right back around to the “global function and/or namespaced util” question) but depending on the specifics of the implementation, there’s actually an opportunity to cache or curry the results from the inspection. And/but this is not at all to diminish the huge advantage you get from having the class’ method actually on the class; everything you need to know about the class is that it exists (and it takes care of the rest). [↩]
- That is to say, if you think this is an instance, then you'd be mistaken—and you'd be missing the whole point of trying this method in the first place. [↩]
- …and for maximum effectiveness probably should. [↩]
- But when aren’t the arguments important…? [↩]
|
OPCFW_CODE
|
Helping Others Realize the Advantages of Computer Science Assignment Help
Programming is a big subject, and nearly every day students are given a new programming assignment. This means they are always in search of programming assignment help. We offer affordable services and a 24/7 online help center, so students can always find the help they need for their assignments.
100% Accuracy: Computer programming is essentially a mechanism to feed a sequence of specialised instructions to a computer system, in a format the machine can translate and compile, to produce a specific output. Consequently, accuracy is essential when it comes to computer programming.
We cannot stress enough how important customer satisfaction is to us. If any corrections or minor tweaks need to be addressed in a finished project, we will do that free of charge.
Get Solution: We send you a plagiarism-free assignment solution by the deadline. We also leave some buffer in the time so that you can ask for any rework if required.
Plagiarism Report on Request: We offer a free plagiarism report to confirm 100% original work. You can request one for any of your assignments and we will be happy to provide it.
You can be certain of higher scores and overall improvement in your grades if you hire our computer science online tutor right away.
Unlimited Modifications: We offer unlimited modification requests until our customer is satisfied. We believe in providing complete satisfaction to our customers.
We are a very reputed name in the industry. We have earned our reputation through hard work and commitment. Our reputation is not something we have attained for free.
We need your email address so that we can send you an email alert when the tutor responds to your message.
Despite its age, it is used for creating a broad range of applications, from entertainment software to office productivity programs.
Our innovative methods of assistance enable a student to get back lost confidence and prepare for an examination.
Editing and Checking: Our capable editors revise the assignments submitted by students to rule out and fix all poorly structured sentences, grammatical faults, logical errors, etc.
As you embark on your computer science journey, our tutors are available to help every step of the way. 24HourAnswers has a diverse pool of professional tutors who can help with any topic in computer science. If you are just starting with programming or are learning elementary techniques in computer science, we offer introductory support where tutors will get you set up with learning and understanding programming, while also teaching core topics.
My Homework Help charges a budget-friendly price for assignment orders, and there are no hidden or extra costs.
|
OPCFW_CODE
|
Which Dead by Daylight perks can be considered better at tier 1?
In Dead by Daylight, perks can only be leveled up through their three tiers, and once you've unlocked a higher tier version of a perk on a certain character, you can't go back to a lower tier version unless you do a prestige reset, which in itself can only be done up to three times per character. Generally, this isn't a problem since perks get more useful as their level increases.
However, as YouTuber Otzdarva pointed out in this video, this isn't always the case. He points out two specific killer perks that are actually better at level 1, in some or all situations:
Discordance
This perk notifies you when two or more survivors are working together on a generator, and highlights that generator for you. If you increase the perk's level, the duration of that highlighting also increases. Otz considers this a bad thing, because the highlight always remains for the entire duration, even if the survivors leave the generator before the time runs out. Therefore, at a higher level, it will take longer before this information gets "updated" for you.
Make Your Choice (only for certain killers)
When a survivor is rescued from a hook and you are far enough away, this perk makes the rescuer exposed, allowing you to down them in one hit. A higher level increases the duration for which this effect stays active, however this also acts as a cooldown before it can be activated again. This can be a problem for certain fast killers who don't need that much time to catch the rescuer, and could instead benefit from being able to reactivate this perk quicker.
Both of these are conclusions that I'm not sure I would have been able to reach on my own. As I'm soon about to prestige my first character for the third time (meaning I'm going to lose the ability to reset that character's perks), my question is: are there any other perks that could be considered better if they were left at level 1, especially on the survivor's side, since those weren't mentioned in the video at all?
There are some perks in this game that are better at Tier 1 but also become worse in another aspect. If an answer explained the pros and cons of these perks it wouldn’t be opinion based.
Your logic for Discordance is flawed. Survivors can move away from the gen regardless of what level your perk is. It's up to the killer to determine whether it's worth going to the gen or not. The extended highlight just makes it easier to find the gen if you do decide to go to it.
@musefan I'm aware that that's probably the intention behind the increasing durations, having more time to process the information. However, what otzdarva means is that when the duration is shorter, you get to know sooner whether the survivors have left or not, which can be a significant upside depending on your playstyle.
Small Game is better in Tier 1 for locating traps and totems since the covered area is smaller.
I disagree. It's worse because you have to get much closer to the totem before you know it's nearby. So close that you're probably already breaking it when you get alerted.
|
STACK_EXCHANGE
|
Saturated Transimpedance Amplifier Issue
I am using a transimpedance amplifier with a BP104 Silicon PIN photodiode to receive incoming pulses from an infrared LED with a frequency around 100 kHz. Here is the relevant part of my schematic:
I have not included specific values for the feedback network, as the issue occurs for all values I have tested. Now here is the problem: this receiver only works beyond a certain distance between the transmitting LED and the photodiode; this distance depends on the gain, i.e. the value of the feedback resistor. If I have it too close to the IR LED then the output just saturates to my supply voltage (+10V). Any ideas on why this is happening? I know that it is not simply the capacitor charging up, as it also occurs without the capacitor. My best guess is that when I have the LED too close I am generating too much current, which is trying to pull the output voltage beyond its maximum limit, thus causing it to rail and affecting it internally. I am having trouble convincing myself of this though, because my transmission is a 50% duty cycle square wave, so during half of the period there is no signal at all, and yet the output continues to hold at a steady +10V. Is such an intense amount of irradiation on the photodiode causing a current to persist through the LOW portions of the signal? This is the first time that I am using a transimpedance amplifier in a serious application, so this may be obvious. Many thanks in advance for any insight or tips.
Edit: Here is a plot showing the resistance values that I have tested. More specifically, it shows my maximum transmitted distance vs. the resistance value (gain). The blue curve is my theory prediction, and the red points are my data.
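For completeness, the reasoning behind the blue theory curve is a back-of-the-envelope sketch, assuming the LED behaves as a point source (inverse-square irradiance) and photocurrent is proportional to irradiance:

```latex
% Ideal transimpedance relation: output swing is photocurrent times feedback resistance
V_{\text{out}} = I_p R_f
% Point-source assumption: photocurrent falls off with the inverse square of distance d
I_p \propto \frac{1}{d^2}
% Detection limit: the signal stays usable while I_p R_f exceeds some minimum
% detectable voltage V_{\min}, giving a maximum range that grows with the gain
I_p R_f \ge V_{\min} \;\Rightarrow\; d_{\max} \propto \sqrt{R_f}
% By the same scaling, the output rails at the supply whenever I_p R_f \ge V_{\text{sat}},
% so the minimum usable distance also grows as the square root of the gain
I_p R_f \ge V_{\text{sat}} \;\Rightarrow\; d_{\min} \propto \sqrt{R_f}
```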
What resistance and capacitance values have you tried?
@helloworld922 I have edited my post to include a plot that might be useful. A capacitance of 2pF seems to work well, except for high gains, at which having no capacitor seems to be fine. I need some rest, but will respond in a few hours if you reply.
You're overloading the amplifier. Recovery from overload can be very fast, but 1 millisecond is not uncommon, and tens of ms is possible even with multi-MHz GBW op-amps.
There are amplifiers such as the OPA380, which has 100ns overload recovery.
I don't see a number for the LF357; you can measure it yourself, as suggested in the classic Analog Devices application note AN356.
Perfect, thank you very much for the response. Do you have any personal recommendations on how to circumvent this issue without changing the op-amp? I see in that application note (which is great) in Figure 10 that there is a "clamp circuit" for this issue, which I will build if my lab has the components, however what would you recommend if they do not? Thanks again.
The clamp circuit is a reasonable approach, although it's not so simple. I think the end result is that the FD333 diodes end up effectively in parallel with the PD rather than the feedback resistor, which is much better, and the zener capacitance drops out of the equation (the 1N4148 capacitance is a load on the output). Personally, I'd buy the $6 op-amp. The low-leakage diodes are going to be hard to find: http://datasheet.octopart.com/FDH300A-Fairchild-datasheet-28922.pdf You could try it with all 1N4148s and see if it's good enough for you, or use diode-connected JFETs for the diodes.
I see that the maximum output for the OPA380 is less than 10V, which is what I need. Is there another op-amp with a small recovery time that you would suggest? Thanks in advance
Maybe AD8067. Limiting amplifiers are nice, but not many that are high voltage.
Spehro is correct. The "obvious" way to fix the problem is to reduce the value of the feedback resistor. This, of course, will hurt your maximum range.
The second "obvious" way to fix it is to put a zener of, let's say, 7.5 volts across the feedback resistor. Unhappily, this won't work, as such zeners have reverse capacitances of ~50 pF, and this will kill your frequency response with megohm resistors such as you are using. I'd suggest a string of 4 or more 1N4148 diodes, anode at the opamp output. You can sum the knee voltages to whatever value you like, and the series connection will reduce the total capacitance. This will almost certainly mean you'll need to get rid of your feedback capacitor.
Thanks for the response! I am a bit confused on why the feedback diodes would help me out (I am new to electronics). Is the idea similar to the clamp circuit (Fig. 10) in the application note that Spehro linked to? I.e., does it prevent the output from saturating? If so, how are the diodes acting to make that happen? Thanks again, and I would upvote you if I had enough reputation points.
Yes, it's like Fig. 10, but simpler. The problem with your circuit is that, when the input current gets too high, the feedback current can't keep up, because the opamp can only produce a limited output voltage range. With one or more diodes in the feedback loop, after some point (the knee voltage of the diodes, ~0.7 V per diode), the diode current goes up very rapidly and allows the current at the input to equalize. This means that the output is no longer linear with respect to the input, but as long as the combined knee voltages are high enough, that's not a problem.
See, for instance, http://www.nxp.com/documents/data_sheet/1N4148_1N4448.pdf, Fig 3 for the voltage/current of a diode. As you can see, for voltages less than 0.6 volts, the current is essentially zero, but by the time it's up to .8 volts, it's in the tens of mA. Using regular signal diodes this way gives a very soft limit - that is, the output stops being linear long before the limit is reached, but like I say, that's not a problem in your application.
Sorry for the delayed response, I have had a very busy day; hopefully you haven't lost interest. I apologize to be a burden, but I am very confused about your suggestion. I have made a schematic of what I believe you are saying, is this the right idea?
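A rough numerical sketch of the limiting behavior discussed in this thread (all component values below are illustrative assumptions, not measurements): the ideal TIA output is V_out = I_pd × R_f, the op-amp clips at its supply rail, and a string of feedback diodes soft-limits the feedback voltage at roughly 0.7 V per diode.

```python
def tia_output(i_pd, r_f, v_supply=10.0, n_diodes=0, v_knee=0.7):
    """Idealized transimpedance-amplifier output voltage.

    i_pd     -- photodiode current in amps
    r_f      -- feedback resistance in ohms
    n_diodes -- diodes strung across r_f (0 = no clamp)
    """
    v = i_pd * r_f                       # ideal linear response
    if n_diodes:                         # diode string limits feedback voltage
        v = min(v, n_diodes * v_knee)    # crude hard-knee approximation
    return min(v, v_supply)              # the op-amp cannot exceed its rail

# 10 uA into 1 Mohm demands the full 10 V -> rails without a clamp
print(tia_output(10e-6, 1e6))              # pinned at the supply
print(tia_output(10e-6, 1e6, n_diodes=4))  # clamped near 4 x 0.7 V
print(tia_output(1e-6, 1e6))               # small signal stays linear
```

With a 1 MΩ feedback resistor, only 10 µA of photocurrent already demands the whole 10 V rail, which matches the observation that the receiver only works beyond a certain distance.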
|
STACK_EXCHANGE
|
So now that you’re an expert on mempool mining dynamics and fee rates, what other areas should you know about to optimize your fee rates?
- Transaction size and complexity: did you know you can decrease the size of your transaction and pay way less in total fees by using SegWit addresses that start with ‘bc1’ instead of a ‘1’ or ‘3’?
- Bull markets: oftentimes, bull markets cause a ton of network congestion, so if you're looking to transact at a lower fee rate, avoid sending transactions during bull markets
- 9AM EST: did you know that BitMEX (one of the largest Bitcoin derivatives exchanges) usually clogs the mempool with a ton of transactions at 9AM EST every day? Best to avoid sending transactions at this time
- Lower time preference: if you want to optimize fee rates to a point where you pay only 1 sat/vbyte, then you and your recipient will have to lower your time preferences and practice patience 😉
- Batching: you can send bitcoin to multiple recipients in a single transaction
- Consolidation: you can consolidate your bitcoins (UTXOs) so that your future transactions use fewer inputs, decreasing your future transaction sizes
- Child pays for parent transactions (CPFP): your “child” transaction can pay for both its own and its parent’s transaction so that both transactions can be confirmed in the next block
- Lightning Network: this second-layer scaling solution allows you to send transactions in fractions of a second and pay tiny fees, but requires much more knowledge (e.g. HTLCs, state channels, channel management, etc.)
- Weekends: typically, fee rates are lower on weekends due to less on-chain activity
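Several of the tips above (SegWit, batching, consolidation) work for the same underlying reason: the total fee is simply the transaction's virtual size times the fee rate you choose. A quick sketch, where the byte counts are typical ballpark figures for a 1-input, 2-output payment (assumed, not exact):

```python
def tx_fee(vsize_vbytes, fee_rate_sat_per_vbyte):
    """Total fee in satoshis = virtual size x chosen fee rate."""
    return vsize_vbytes * fee_rate_sat_per_vbyte

# Ballpark virtual sizes for a 1-input, 2-output spend (assumed figures):
LEGACY_VSIZE = 226   # inputs from a '1...' address
SEGWIT_VSIZE = 141   # inputs from a native 'bc1...' address

rate = 20  # sat/vbyte, e.g. during moderate congestion
print(tx_fee(LEGACY_VSIZE, rate))  # 4520 sats
print(tx_fee(SEGWIT_VSIZE, rate))  # 2820 sats -- same payment, ~38% cheaper
```

The fee rate is what miners compete over; the vsize is what you control, which is why shrinking the transaction saves money at any fee rate.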
Feel free to read this wiki on other ways to reduce your future transaction fees!
Hope you found this blog post helpful – if you did, please share the article to those who might need it and make sure to subscribe to the newsletter to stay up to date on the latest posts!
What did I miss? What else would you like to learn more about? Let me know on Telegram!
Avoid Stuck transactions in the future
To avoid this situation in the future, always set a sufficiently high transaction fee. If your wallet does not allow setting custom transaction fees, you should upgrade to a new wallet. If you are sending from an exchange, ask the exchange's support to set a high transaction fee for quick confirmation.
Use a hardware wallet like Trezor, or any other wallet that allows you to set a custom transaction fee.
Check in an explorer how many transactions are pending. You can also test by sending a small amount to see how fast your transaction confirms, and base your decision on that.
My Bitcoin Transaction is Stuck and Unconfirmed
You were expecting your bitcoin transaction to get mined and confirmed within the next block (~10 minutes), but for some reason your transaction isn’t going through.
It seems stuck and you’re worried that your transaction will never clear. You ask yourself, “have I just lost some bitcoin?”
If this sounds familiar, don’t worry – your funds are safe. Chances are, the fee you included in your transaction wasn’t high enough for miners to prioritize it.
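One rescue for exactly this situation is the child-pays-for-parent (CPFP) approach listed earlier: miners evaluate the stuck parent and the new child as a package, judged by their combined fee rate. A sketch of that arithmetic (all fees and sizes below are made-up illustrative numbers):

```python
def cpfp_package_rate(parent_fee, parent_vsize, child_fee, child_vsize):
    """Effective fee rate (sat/vbyte) miners see for a parent+child package."""
    return (parent_fee + child_fee) / (parent_vsize + child_vsize)

# Parent stuck at 1 sat/vbyte; the child deliberately overpays to drag it along
parent_fee, parent_vsize = 141, 141    # 1 sat/vbyte -- too low to confirm
child_fee, child_vsize = 4500, 141     # generous fee on the spending child

rate = cpfp_package_rate(parent_fee, parent_vsize, child_fee, child_vsize)
print(round(rate, 1))  # 16.5 sat/vbyte for the pair
```

The child can only bump the parent because a miner must include the parent to collect the child's fee; the pair is mined (or not) together.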
|
OPCFW_CODE
|
Anyone care to suggest the best antivirus for home use either free ones or ones that are low in cost and have a family pack.
You mentioned low cost or free - SEP is neither one. SEP is good for a corporate network, but honestly overkill & probably too resource-intensive for a home computer. If you are not being an idiot on the internet, I would just use AVG (be sure to change the scheduled scans to run at night, though). Then for added security use OpenDNS on your router.
In the upstate area of NY our internet provider, Time Warner Cable, offers CA Suite for free; you might want to see if your provider offers that. TWC does not make a big deal about it, but it was within the self-help section of their web site.
I would question whether or not that is legit then. Not sure why a company would buy extra copies for people's home computer. It USED to be that Symantec offered licensing agreements that were special like that, but they do not do it anymore, so that would be illegal copies of the software.
I used to use AVG Free, but it frequently had problems updating and otherwise let several viruses into my system. This may not be the case for everyone, but I have since used Avira Personal Edition (which is free). It works much more smoothly and I've never noticed it slow down my computer. Avira comes highly recommended from me.
First of all, FREE isn't FREE; if you have been in this business for a while, you have that figured out.
Second, if you don't take a look at Sunbelt Software's Vipre, then your question is meaningless.
If you think that a big name in the business means its effectiveness is better, then you are still naive.
If you don't understand that this industry is dynamic and ever-changing, and evaluate products accordingly, then you need to re-think what you are doing.
This plug is NOT from an SBS employee but from one who has experienced the results of using their product.
I have to second Jeff81871; I am really liking Microsoft Security Essentials. I have it on a few machines, and it is catching things that AVG, Malwarebytes Anti-Malware, and Ad-aware all let through. It is also not a resource hog, so I tip my hat to Microsoft this time.
I would have to throw my vote behind Microsoft Security Essentials myself as well. It does both AV and malware protection and doesn't really eat up too many system resources. That, and it meets the free criterion and does real-time protection of your system. I've been installing it for pretty much anyone I know who doesn't have AV software, and I haven't had any complaints.
I've used Avast for a number of years and found their regular updates to be very reassuring. Yes, it can be a little bit annoying when you boot up, to have an update running, especially if you are low on memory, but it does give peace of mind.
I've recently upgraded to another PC with 4GB of RAM (the original only had 512MB) and also have Microsoft Security Essentials installed. Both seem to co-exist without any problem, although it may be somewhat of an overkill situation.
I have typically used AVAST! in the past and like it. I like it a little better than AVG. It's free - all you have to do is register it. VERY regular updates.
I will be testing out the Microsoft Security Essentials product. The comments made here are the same I'm hearing everywhere else. It seems to be a pretty good product.
- Just remember "you get what you pay for".
- Free isn't always free.
- Do your own evaluation for what works for you. Don't listen to us because our machines aren't your machines. Personally I use McAfee. I see where others have recommended differently.
Overall, it is your decision and your data.
|
OPCFW_CODE
|
LINGUIST List 17.30
Tue Jan 10 2006
Calls: Computational Ling/UK;Text/Corpus Ling/Italy
Editor for this issue: Kevin Burrows
As a matter of policy, LINGUIST discourages the use of abbreviations or acronyms in conference announcements unless they are explained in the text. To post to LINGUIST, use our convenient web form at http://linguistlist.org/LL/posttolinguist.html.
Inference in Computational Semantics
Workshop on 'Strategies for Developing Machine Translation for Minority Languages'
Message 1: Inference in Computational Semantics
From: Johan Bos <jbos@inf.ed.ac.uk>
Subject: Inference in Computational Semantics
Full Title: Inference in Computational Semantics
Short Title: ICoS-5
Date: 20-Apr-2006 - 21-Apr-2006
Location: Buxton, England, United Kingdom
Contact Person: Johan Bos
Web Site: http://www.cs.man.ac.uk/~ipratt/ICoS-5/
Linguistic Field(s): Computational Linguistics; Semantics
Call Deadline: 16-Jan-2006
The next International workshop on Inference in Computational Semantics (ICoS-5) will take place from 20th-21st April at the University of Derby College, Buxton, England. ICoS-5 is intended to bring together researchers interested in inference-oriented NLP from areas such as Computational Linguistics, Artificial Intelligence, Computer Science, Formal Semantics, and Logic.
FINAL CALL FOR PAPERS
5th workshop on
INFERENCE IN COMPUTATIONAL SEMANTICS
Buxton, England, 20-21 April 2006
Submission deadline: 16 January 2006
Endorsed by SIGSEM, the Association for Computational Linguistics (ACL) Special Interest Group (SIG) on computational semantics.
Natural Language Processing has reached a stage where the exploration
and development of inference is one of its most pressing tasks. On the
theoretical side, it is clear that inference plays a key role in such
areas as semantic construction and the management of discourse and
dialogue. On the practical side, the use of sophisticated inference
methods could lead to improvements in application areas such as
natural language generation, automatic question answering, and spoken
dialogue systems.
ICoS-5 is intended to bring together researchers interested in
inference-oriented NLP from areas such as Computational Linguistics,
Artificial Intelligence, Computer Science, Formal Semantics, and
Logic.
We invite submissions addressing the theme of inference in
computational semantics broadly construed. Subjects relevant to ICoS-5
include but are not restricted to:
- natural language generation
- natural language pragmatics
- discourse and dialogue processing
- (spoken) dialogue systems
- underspecified representations
- ambiguity resolution
- interfacing lexical and computational semantics
- lexically-driven inference
- inference for shallow semantics
- inference in question answering
- recognising textual entailment
- background knowledge: use and acquisition
- applications of semantic resources
(e.g. CYC, WordNet, FrameNet, PropBank, ontologies)
- automatic ontology creation
- common-sense reasoning in NLP
- temporal and epistemic reasoning
- resource-bounded inference
- applications of automated reasoning
(e.g. model building, model checking, theorem proving)
- alternative inference strategies
(e.g. abduction, nonmonotonic reasoning, default)
- decidable fragments of natural language
- controlled languages
- natural language inference in decidable logics
(e.g. description logic)
- probabilistic and statistical approaches to inference
- machine learning and inference
- inference and information extraction and/or text mining
- novel applications (e.g. semantic web)
- evaluation methodologies and resources for inference
- robustness and scalability of inference
- system descriptions
Submitted papers should not exceed 10 pages (A4, single column, 12
point font) including references. All submissions must be in PDF, and
must be sent by email to icos5@coli.uni-sb.de.
We also encourage submission of papers describing systems that show
aspects of inference in computational semantics. There will be a
separate slot at the workshop where people can demonstrate their
systems. System descriptions should follow the same submission
guidelines as regular papers.
Submission Deadline: January 16, 2006.
Notification: February 20, 2006.
Final Versions: March 20, 2006.
Conference: April 20-21, 2006.
Christian Ebert (University of Bielefeld)
Patrick Pantel (ISI, University of Southern California)
Stephen Pulman (Oxford University)
Johan Bos (co-chair)
Kees van Deemter
Alexander Koller (co-chair)
Maarten de Rijke
As well as producing the workshop proceedings, we plan to publish a
selection of accepted papers as a book or special issue of a journal.
Message 2: Workshop on 'Strategies for Developing Machine Translation for Minority Languages'
From: Briony Williams <b.williams@bangor.ac.uk>
Subject: Workshop on 'Strategies for Developing Machine Translation for Minority Languages'
Full Title: Workshop on 'Strategies for Developing Machine Translation for Minority Languages'
Date: 23-May-2006 - 23-May-2006
Location: Genoa, Italy
Contact Person: Briony Williams
Linguistic Field(s): Text/Corpus Linguistics; Translation
Call Deadline: 17-Feb-2006
'Strategies for developing machine translation for minority languages'. Fifth SALTMIL Workshop on Minority Languages, Tuesday May 23rd (morning). A satellite workshop of the Language Resources and Evaluation Conference, May 24-26 2006, Genoa, Italy.
FIRST CALL FOR PAPERS
''Strategies for developing machine translation for minority languages''
5th SALTMIL Workshop on Minority Languages
on Tuesday May 23rd 2006 (morning)
Magazzini del Cotone Conference Centre, Genoa, Italy
Organised in conjunction with LREC 2006: Fifth International Conference on Language Resources and Evaluation, Genoa, Italy, 24-26 May 2006
This workshop continues the series of LREC workshops organized by SALTMIL (SALTMIL is the ISCA Special Interest Group for Speech And Language Technology for Minority Languages: http://isl.ntf.uni-lj.si/SALTMIL/ ):
The minority or ''less resourced'' languages of the world are under increasing pressure from the major languages (especially English), and many of them lack full political recognition. Some minority languages have been well researched linguistically, but most have not, and the vast majority do not yet possess basic speech and language resources (such as text and speech corpora, lexicons, POS taggers, etc) which would enable the commercial development of products.
The workshop aims to share information on tools and best practice, so that isolated researchers will not need to start from nothing. An important aspect will be the forming of personal contacts, which can minimise duplication of effort. There will be a balance between presentations of existing language resources, and more general presentations designed to give background information needed by all researchers present.
The workshop will begin with the following presentations from invited speakers:
* Delyth Prys (University of Wales, Bangor): ''The BLARK matrix and its relation to the language resources situation for the Celtic languages.''
* Hermann Ney (Rheinisch-Westfälische Technische Hochschule, Aachen, Germany): ''Statistical Machine Translation with and without a bilingual training corpus''
* Mikel Forcada (Universitat d'Alacant, Spain): ''Open source machine translation: an opportunity for minor languages''
* Lori Levin (Carnegie Mellon University, USA): ''Omnivorous MT: Using whatever resources are available.''
* Anna Sågvall Hein (University of Uppsala, Sweden): ''Approaching new languages in machine translation.''
These talks will then be followed by a poster session featuring contributed papers.
Papers are invited that describe research and development in the following areas:
* The BLARK (Basic Language Resource Kit) matrix at ELDA, and how it relates to minority languages.
* The advantages and disadvantages of different corpus-based strategies for developing MT, with reference to a) speed of development, and b) level of researcher expertise required.
* What open-source or free language resources are available for developing MT?
* Existing resources for minority languages, with particular emphasis on software tools that have been found useful.
All contributed papers will be presented in poster format. All contributions will be printed in the workshop proceedings (CD). They will also be published on the SALTMIL website.
* Paper submission deadline: Feb 17, 2006
* Notification of acceptance: March 10, 2006
* Final version of paper: April 10, 2006
* Workshop: May 23, 2006 (morning)
Abstracts should be in English, and up to 4 pages long. The submission format is PDF.
Papers will be reviewed by members of the organising committee. The reviews are not anonymous.
Accepted papers may be up to 6 pages long. The final papers should be in the format specified for the proceedings by the LREC organisers.
Each submission should include: title; author(s); affiliation(s); and contact author's e-mail address, postal address, telephone and fax numbers.
Abstracts should be sent via e-mail to Briony Williams at b.williams@bangor.ac.uk. The deadline for submission is February 17th.
* Briony Williams (University of Wales, Bangor, UK: b.williams@bangor.ac.uk)
* Kepa Sarasola (University of the Basque Country: ksarasola@si.ehu.es)
* Bojan Petek (University of Ljubljana, Slovenia: bojan.petek@uni-lj.si)
* Julie Berndsen (University College Dublin, Ireland: julie.berndsen@ucd.ie)
* Atelach Alemu Argaw (University of Stockholm, Sweden: atelach@dsv.su.se)
|
OPCFW_CODE
|
Java Downcasting Runtime Error
I have a superclass reference containing a subclass object, and downcasting it works. But when I downcast a reference that actually points at a plain superclass instance, the code still compiles and only fails at runtime. Why is there no error at the compiler level, and when will it be at runtime?
Upcasting (assigning a subclass object to a superclass reference) is always safe, so the compiler performs it implicitly. Downcasting goes the other way, and in general the compiler cannot know what object type the expression will really have at runtime, so it trusts our judgement. Per the Java Language Specification, a cast from S to T compiles whenever the conversion could succeed, i.e. the two types are related (for example, S is Tree and T is Redwood); otherwise, a compile-time error occurs. It is possible for a Vehicle reference to hold a Bike, so the compiler allows the cast from Vehicle to Bike; the actual check is deferred to runtime, and if the object is not really a Bike, the JVM throws a ClassCastException.
So a downcast is valid and legitimate exactly when the superclass reference contains a subclass object. It's really only worthwhile when a method returns an Object (or other supertype) but you know the object's concrete type; you can guard the cast with instanceof, which returns true only when the cast will succeed.
Two side notes from the discussion: Java is case-sensitive, so if the case of your method names does not match, an intended override silently becomes a new method (this is why you should ALWAYS use the @Override annotation); and no, neither Java nor C# supports multiple inheritance of classes.
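The rule above can be shown in a few lines (the Vehicle/Bike names follow the discussion; this is an illustrative toy, not production code): both downcasts compile, but only the reference that really holds a Bike survives the runtime check.

```java
// Downcasting: compiles because the cast *could* be valid; checked at runtime.
public class Main {
    static class Vehicle {}
    static class Bike extends Vehicle {}

    public static void main(String[] args) {
        Vehicle v1 = new Bike();      // upcast: always safe, implicit
        Bike b1 = (Bike) v1;          // downcast succeeds: v1 really holds a Bike

        Vehicle v2 = new Vehicle();
        if (v2 instanceof Bike) {     // guard: false here, so no cast attempted
            Bike b2 = (Bike) v2;
        }

        try {
            Bike b3 = (Bike) v2;      // compiles, but the object is not a Bike
        } catch (ClassCastException e) {
            System.out.println("ClassCastException at runtime");
        }
    }
}
```

Guarding with instanceof is the idiomatic way to keep the deferred runtime check from crashing the program; catching ClassCastException for control flow is poor style.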
|
OPCFW_CODE
|
As part of Siemens Digital Industries Software, Samtech s.a. is an expert in the development of numerical simulation technologies. Founded in 1986 out of the Aerospace Laboratory of the University of Liège for the development and commercialization of general-purpose Finite Element Analysis software, now known as Simcenter™ Samcef® software, Samtech is expanding its developments to Simcenter™ NASTRAN inside the Siemens organization.
Its activities include RTD for the non-linear analysis of structural parts (metallic materials, composites, …), interactions in structural assemblies (contact, friction, fracture mechanics, damage tolerance, …) and the global analysis of machine structural dynamics in operation, including mechatronics. Generative design is also addressed with topology optimization and, together with predictive process simulation, contributes to Siemens' end-to-end software solution for Additive Manufacturing.
Samtech develops solutions available in the Simcenter portfolio:
-Non-Linear Structure FE analysis (Simcenter Nastran SOL 402),
-Topology Optimization (for analyst with Simcenter Nastran SOL 200 and for designer in NX),
-Curing simulation based on thermomechanical capabilities
-Simcenter 3D Additive Manufacturing solution, for predictive process simulation
-Rotor Dynamics solution (Nastran SOL 414)
Samtech is continuously involved in RTD projects at European, national or regional levels:
UPWARDS: "Understanding of the Physics of Wind Turbine and Rotor Dynamics through an Integrated Simulation Framework" (2018-2022) coordinated by SINTEF - Wind Energy.
TECCOMA: "Advanced technologies for complex and integrated parts" (2015-2020) coordinated by SONACA - Aeronautics.
SQEQUIP: "SQRTM Equipment" (2014-2018) coordinated by COEXPAIR - Aeronautics.
ICOGEN: "Composite life-cycle integration" (2013-2017) coordinated by TECHSPACE Aero - Aeronautics.
MAAXIMUS: "More Affordable Aircraft structure through eXtended, Integrated, and Mature nUmerical Sizing" (2008-2016) coordinated by AIRBUS France - Aeronautics.
COSSMAS: "Composite Space Structures Modelling and Analysis Software" (-2016).
LOCOMACHS: "Low Cost Manufacturing and Assembly of Composite and Hybrid Structures" (2012-2016) coordinated by SAAB - Aeronautics.
ECOMISE: "Enabling next generation COmposite Manufacturing by In-situ Structural Evaluation and process adjustment" (2013-2016) coordinated by DLR - Aeronautics.
FIRECOMP: "Modelling the thermo-mechanical behaviour of high pressure vessel in composite materials when exposed to fire conditions" (2013-2016) coordinated by AIR LIQUIDE - Automotive.
COMPACT: "Numerical Simulation of Composite Materials Submitted to High Speed Impacts" (2014-2016) coordinated by SAMTECH - Automotive.
DRAPOPT: "Optimal Draping of Composite Materials" (2013-2015) coordinated by SAMTECH - Aeronautics and Automotive.
ECOTAC: "Efficient Composite Technologies for Aircraft Components" (2011-2015) coordinated by SONACA - Aeronautics.
VIRTUALCOMP: Industrial research for an autonomous software environment for sizing aeronautical structures in composite materials and modeling manufacturing processes (2009-2013) coordinated by SAMTECH - Aeronautics.
CRESCENDO: "Collaborative & Robust Engineering using Simulation Capability Enabling Next Design Optimisation" (2009-2012) coordinated by AIRBUS UK - Aeronautics.
ALCAS: "Advanced Low Cost Aircraft Structures" (2005-2011) coordinated by AIRBUS UK - Aeronautics.
APC: "L’avion Plus Composite" (2006-2011) coordinated by SONACA - Aeronautics.
OPTISTACK: "Optimisation d'empilements composites" (-2008).
VIVACE: "Value Improvement through a Virtual Aeronautical Collaborative Enterprise" (2004-2007) - Aeronautics.
Research, Technology and Development collaborative projects around Simcenter 3D
|
OPCFW_CODE
|
'Forms' does not exist in the namespace system.windows
I have just started working on c#, and was fiddling with some code sample that I got from some forum.
This code is using a namespace using system.windows.forms for which I am getting an error:
Forms does not exist in the namespace system.windows.
Also I am getting some errors related to undefined functions for senddown & sendup, which I believe to be in the Forms namespace.
I am using Visual Studio 2010 (with .NET Framework 4.0). Any idea how to fix this error?
Add a reference to System.Windows.Forms
Sounds like you created a WPF project rather than a Windows Forms project
@todda, thanks, that worked :)
@shf301, Yes it was a WPF project, but adding the mentioned reference worked.
For future reference, when asking an SO question, please paste the exact code, which is very much case-sensitive among other things...
@sara Regarding your bounty, what kind of answer are you looking for here? What "official sources" do you need? The question has clearly been answered: you cannot use items from namespaces that you have not added references to. Are you looking for a citation from the language standard that makes the same point?
I am facing this issue in a console application. How can I resolve it?
I needed to introduce a build script that passed the argument -r:System.Windows.Forms.dll to my compiler: mcs mycsharp.cs -r:System.Windows.Forms.dll
Expand the project in Solution Tree, Right-Click on References, Add Reference, Select System.Windows.Forms on Framework tab.
You need to add reference to some non-default assemblies sometimes.
From comments: for people looking for VS 2019+: Now adding project references is Right-Click on Dependencies in Solution Explorer.
For people looking for VS Code: How do I add assembly references in Visual Studio Code
ok that worked. I was under the impression that I already added it. Just checked it again and as expected, it was missing. Thanks :)
Right click on the References node under the Project.
@naXa Things changed from that time, yes
@naXa Oh you meant "Project > Add reference" in the menu, not in the Solution Explorer windows... Also it only shows up if selecting "References > Analyzers" in the Solution Explorer windows first.
Update for people looking for VS 2019 - Now Adding Project References is Right-Click on Dependencies in Solution Explorer.
How to do this in vscode?
@BonecoSinforoso https://stackoverflow.com/a/42399545/213550
@VMAtm I had already looked at this post. It did not work. I've already switched from vscode to visual studio and I'm still having problems. Anyway, thanks for the try.
@BonecoSinforoso What exact problem do you have?
@VMAtm Sorry for the inconvenience. I was using vscode together with unity. I was referencing the unity button as if it were a windows application button. Hence the error.
In case someone runs into this error when trying to reference Windows Forms components in a .NET Core 3+ WPF app (which is actually not uncommon), the solution is to go into the .csproj file (double-click it in VS2019) and add <UseWindowsForms>true</UseWindowsForms> to the property group node containing the target framework. Like this:
<PropertyGroup>
<TargetFramework>netcoreapp3.0</TargetFramework>
<UseWPF>true</UseWPF>
<UseWindowsForms>true</UseWindowsForms>
</PropertyGroup>
This is exactly the solution that worked for me. Many of the proposed solutions I saw suggested adding a reference to "System.Windows.Forms", but that never worked for me. I was able to add that reference and resolve my missing Forms class, but adding it broke "System.Windows".
Strange: it works on one computer, but opened on another it failed. The solution was the UseWindowsForms tag. You would think, given the problems with Windows Forms in .NET Core, Visual Studio would offer something more useful, both in error finding and in letting you add forms in the GUI without having to close, manually edit, and reload...
Thanks. Also, it was necessary for me to change TargetFramework from net5.0 to net5.0-windows.
@mfvjunior, yes, full explained here: https://stackoverflow.com/a/66098428/842935
Used this answer to use old MS C# example code (aimed at 4.0) with Net 7.0. Only other thing I needed to do was to resolve an ambiguity with Application, by fully qualifying it with System.Windows.Forms.Application. https://learn.microsoft.com/en-us/previous-versions/dotnet/articles/aa480727(v=msdn.10)?redirectedfrom=MSDN
If you are writing Windows Forms code in a .Net Core app, then it's very probable that you run into this error:
Error CS0234 The type or namespace name 'Forms' does not exist in the namespace 'System.Windows' (are you missing an assembly reference?)
If you are using the Sdk style project file (which is recommended) your *.csproj file should be similar to this:
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<OutputType>WinExe</OutputType>
<UseWindowsForms>true</UseWindowsForms>
<RootNamespace>MyAppNamespace</RootNamespace>
<AssemblyName>MyAppName</AssemblyName>
<GenerateAssemblyInfo>false</GenerateAssemblyInfo>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Windows.Compatibility" Version="3.0.0" />
</ItemGroup>
</Project>
Pay extra attention to these lines:
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
<OutputType>WinExe</OutputType>
<UseWindowsForms>true</UseWindowsForms>
<PackageReference Include="Microsoft.Windows.Compatibility" Version="3.0.0" />
Note that if you are using WPF while referencing some WinForms libraries you should add <UseWPF>true</UseWPF> as well.
Hint: since .NET 5.0, Microsoft recommends referencing the SDK Microsoft.NET.Sdk instead of Microsoft.NET.Sdk.WindowsDesktop.
Net >= 5
<TargetFramework>net5.0-windows</TargetFramework>
Quoting Announcing .NET 5.0:
Windows desktop APIs (including Windows Forms, WPF, and WinRT) will only be available when targeting net5.0-windows. You can specify an operating system version, like net5.0-windows7 or net5.0-windows10.0.17763.0 ( for Windows October 2018 Update). You need to target a Windows 10 version if you want to use WinRT APIs.
In your project:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>WinExe</OutputType>
<TargetFramework>net5.0-windows</TargetFramework>
<UseWindowsForms>true</UseWindowsForms>
</PropertyGroup>
</Project>
Also interesting:
net5.0 is the new Target Framework Moniker (TFM) for .NET 5.0.
net5.0 combines and replaces netcoreapp and netstandard TFMs.
net5.0 supports .NET Framework compatibility mode
net5.0-windows will be used to expose Windows-specific functionality, including Windows Forms, WPF and WinRT APIs.
.NET 6.0 will use the same approach, with net6.0, and will add net6.0-ios and net6.0-android.
The OS-specific TFMs can include OS version numbers, like net6.0-ios14.
Portable APIs, like ASP.NET Core will be usable with net5.0. The same will be true of Xamarin forms with net6.0.
You may encounter this problem if you have multiple projects inside a solution and one of them is physically located inside solution folder.
I solved this by right-clicking that folder in the Solution tree and then choosing "Exclude From Project".
Go to project properties->Application->General.
Select the checkbox "Enable Windows Forms for this project".
If you only need some structs and functions from the System.Windows.Forms namespace, you can switch the project back to the old behavior.
Set DisableWinExeOutputInference to true so that OutputType is not overridden by Visual Studio.
Don't forget to add the -windows suffix to TargetFramework:
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net5.0-windows</TargetFramework>
<DisableWinExeOutputInference>true</DisableWinExeOutputInference>
<UseWindowsForms>true</UseWindowsForms>
<StartupObject></StartupObject>
<ApplicationIcon />
</PropertyGroup>
Here is where I found this documented:
https://learn.microsoft.com/en-us/dotnet/core/compatibility/sdk/5.0/sdk-and-target-framework-change
browxy.com
Compilation failed: 1 error(s), 0 warnings
main.cs(7,24): error CS0234: The type or namespace name `Forms' does not exist in the namespace `System.Windows'. Are you missing `System.Windows.Forms' assembly reference?
This is not an answer.
The cleanest solution is to add this nuget package
Link to NuGet package page
This does not provide an answer to the question. Once you have sufficient reputation you will be able to comment on any post; instead, provide answers that don't require clarification from the asker. - From Review
I forgot the details of the context that led to this post; however, from what I remember I was stuck in a situation where I could not reference Forms in my project (old .NET Framework 4.8, so this solution probably does not apply to other frameworks; the poster is using 4.0) and the mentioned NuGet package solved the issue.
|
STACK_EXCHANGE
|
Cannot compile with feature native
Hello,
I have trouble compiling the demo with the native feature. I think there is some conflict between esp-idf-hal and esp-idf-sys.
error[E0560]: struct `esp_idf_sys::spi_bus_config_t` has no field named `mosi_io_num`
--> /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/esp-idf-hal-0.21.1/src/spi.rs:204:13
|
204 | mosi_io_num: SDO::pin(),
| ^^^^^^^^^^^ `esp_idf_sys::spi_bus_config_t` does not have this field
|
= note: available fields are: `__bindgen_anon_1`, `__bindgen_anon_2`, `sclk_io_num`, `__bindgen_anon_3`, `__bindgen_anon_4` ... and 7 others
error[E0560]: struct `esp_idf_sys::spi_bus_config_t` has no field named `miso_io_num`
--> /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/esp-idf-hal-0.21.1/src/spi.rs:205:13
|
205 | miso_io_num: if pins.sdi.is_some() { SDI::pin() } else { -1 },
| ^^^^^^^^^^^ `esp_idf_sys::spi_bus_config_t` does not have this field
|
= note: available fields are: `__bindgen_anon_1`, `__bindgen_anon_2`, `sclk_io_num`, `__bindgen_anon_3`, `__bindgen_anon_4` ... and 7 others
error[E0560]: struct `esp_idf_sys::spi_bus_config_t` has no field named `quadwp_io_num`
--> /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/esp-idf-hal-0.21.1/src/spi.rs:206:13
|
206 | quadwp_io_num: -1,
| ^^^^^^^^^^^^^ `esp_idf_sys::spi_bus_config_t` does not have this field
|
= note: available fields are: `__bindgen_anon_1`, `__bindgen_anon_2`, `sclk_io_num`, `__bindgen_anon_3`, `__bindgen_anon_4` ... and 7 others
error[E0560]: struct `esp_idf_sys::spi_bus_config_t` has no field named `quadhd_io_num`
--> /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/esp-idf-hal-0.21.1/src/spi.rs:207:13
|
207 | quadhd_io_num: -1,
| ^^^^^^^^^^^^^ `esp_idf_sys::spi_bus_config_t` does not have this field
|
= note: available fields are: `__bindgen_anon_1`, `__bindgen_anon_2`, `sclk_io_num`, `__bindgen_anon_3`, `__bindgen_anon_4` ... and 7 others
For more information about this error, try `rustc --explain E0560`.
error: could not compile `esp-idf-hal` due to 4 previous errors
Thanks!
Should be fixed since https://github.com/ivmarkov/rust-esp32-std-demo/commit/3f5d36d0f4509dc1f9d577f53537e7674a8965f4
Please reopen if you still have issues.
Thanks, it works now.
Also saw your work @ https://github.com/ivmarkov/esp-idf-template, that's very cool!
|
GITHUB_ARCHIVE
|
Sprint or Verizon? Recommendations for broadband access card for my Mac?
September 10, 2008
In heading out to the Communications Developer Conference/ITEXPO next week in L.A., the show organizers have already told me there is no free WiFi access at the LA Convention Center... but I can, of course, pay for access through the local provider. (And probably deal with the same usual headaches of getting adequate signal strength.)
I am so incredibly sick of show WiFi, both in terms of paying for it and also just in quality, that yes, indeed, even though I am
a cheap Yankee... er... "frugal", I think I need to suck it up and pay the $720/year to have wireless Internet access over the cell networks. This will also be hugely beneficial for all the wonderful times I spend hanging out in airports.
My choice seems to be either Sprint or Verizon. (AT&T and T-Mobile don't have great coverage in my area.) Both will cover whatever limited roaming I do in my local area... and both have coverage in the major cities I tend to travel to. I've seen both used on the Amtrak train down to New York. They both charge ~$60/month... they both charge $50-100 for your actual broadband access card. They both require a 2-year contract (or reference a 1-yr but then your hardware costs go up.) And they both seem to have 5GB monthly limits (on-network).
On the actual hardware, it seems that I can get either a USB dongle or an ExpressCard. The USB is interesting in the sense that I can plug it into virtually any computer and use it. But the ExpressCard version looks interesting because: 1) I don't use that slot currently for anything else (whereas I do plug things into the USB slots); and 2) it looks like a smaller external form factor, i.e. there's less sticking out of my laptop.
So my questions for you all, dear readers, are these:
- Have you seen any great reason to prefer Sprint or Verizon?
- Do either one work better with the Mac? (my laptop these days)
- Do either work better than the other inside of buildings like convention halls? (I'm imagining neither one works great.)
- Any suggestions of the USB dongle over the ExpressCard card?
Any advice or recommendations are definitely welcome... I'll probably be picking one of these up in the next couple of days. (Thanks in advance!)
Technorati Tags: wireless, connectivity, broadband, sprint, verizon
|
OPCFW_CODE
|
How to use hierarchical models to predict vote share in elections?
I would like to predict overall vote share $Y$ in a national election in the US as well as the vote share individual states. For simplicity, let's assume that the country has a pure two party system, so $Y$ is simply a proportion of votes to one party.
The data $X$ that I would like to use is a little complicated and consists of several parts $X = \{X_{1,S}, X_{2,S}, X_{3,S}, X_4\}$
$X_{1,S}$ is national election data aggregated at the state level $S$: the proportion of people who voted a particular way in the last election. For simplicity, assume that we are only interested in one particular electoral cycle.
$X_{2,S}$ is census data containing relevant demographic information in state $S$
$X_{3,S}$ is a state-level representative opinion poll giving voting intentions, taken at time $t$ in state $S$. These polls often give a breakdown of voting intentions by several demographic categories.
$X_4$ is a nationally representative opinion poll giving voting intentions, taken at time $t$. As with the state polls, it also includes some demographic information.
A simple model I have seen is to use a Kalman filter, treating $Y$ as a latent variable with noisy observations $X_{3}$. Some models go further and try to estimate the bias of particular polling companies using $X_{1, \bullet}$; this is done by Jackman in the linked paper.
I would like to use the state-level information and pool it to estimate $Y$ in the different states. I am interested in this because there may be many state polls around, say, a gubernatorial election, at a point in the electoral cycle where there are few national-level polls.
My state-level sample is going to be biased compared to the national sample, and the selection of states is obviously non-random. But presumably I can do some matching based on demographic and geographic information to inform my belief about how people in the other states vote. For instance, I believe wealthy voters in adjacent regions are likely to vote in a similar fashion. How do I proceed in R in order to build a model that will make use of this additional state-based data?
What kind of state-level data do you have? Would it be data from all 50 states? Or would some states be missing?
@Jonathan, thanks for your reply. Sorry, it's a very vague question! I have summary statistics and vote estimates from the polls. I do not have respondent-level data, so I'm not sure I can apply a Gelman-style hierarchical model. I'm thinking now about how to incorporate hierarchical information into a Simon Jackman-style model. There is a lot of missing data at the state level; in fact, it's mostly missing! I have polls from approximately 1/6 of the states, but these are the most variable states.
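To make the pooling idea from the question concrete: below is a minimal, hypothetical sketch (in Python/numpy rather than R, with invented numbers) of the basic partial-pooling step a hierarchical model performs, shrinking noisy state-level poll shares toward the national mean in proportion to their sampling variance. The function name and the variance decomposition are illustrative assumptions, not the Jackman model.

```python
# Illustrative partial pooling: each state's poll mean is pulled toward the
# grand mean, and noisier polls (larger sampling variance) are pulled harder.
import numpy as np

def partial_pool(state_means, state_vars, between_var):
    """Shrink state poll means toward the grand mean.

    Shrinkage weight = between-state variance / (between + sampling variance),
    so a state with a precise poll keeps most of its own estimate while a
    state with a noisy poll is dominated by the national average.
    """
    state_means = np.asarray(state_means, dtype=float)
    state_vars = np.asarray(state_vars, dtype=float)
    grand_mean = state_means.mean()
    shrink = between_var / (between_var + state_vars)
    return grand_mean + shrink * (state_means - grand_mean)
```

With equal sampling and between-state variances the weight is 0.5, so each state moves halfway toward the national mean; a full hierarchical model would additionally estimate the between-state variance from the data and use demographic covariates to inform the grand mean.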
|
STACK_EXCHANGE
|
Krakatoa MY Licensing
By default, Krakatoa for Maya uses FlexNet to distribute floating "krakatoa-maya" and "krakatoa-render" licenses.
The license manager is compatible with the licensing management of Maya itself, but requires a separate Thinkbox license file and the Thinkbox daemon.
Please consult the Thinkbox Installation Guide for details.
License Types and Behavior
"krakatoa-maya" Workstation License
A "krakatoa-maya" workstation license allows the interactive use of Krakatoa inside of Maya, including the interactive rendering, PRT files saving, and Maya Batch background rendering within Maya.
No additional "krakatoa-render" license is needed in workstation mode.
This Krakatoa MY workstation license is Maya-specific and is not compatible with Krakatoa MX workstation, Krakatoa C4D workstation, or Krakatoa SR licenses.
It is possible to request a "krakatoa-maya" license as node-locked, but this is not recommended. A node-locked license can only be used on a single machine and cannot be shared between workstations.
"krakatoa-render" Network Rendering License
A "krakatoa-render" network rendering license allows the rendering with Maya Batch outside of Maya, typically on network render nodes controlled by a network manager like Autodesk Backburner or Thinkbox Deadline.
The "krakatoa-render" license is universal and compatible with Krakatoa SR, and with Krakatoa MX and Krakatoa C4D in network rendering mode.
In addition to acquiring a "krakatoa-render" floating license from the license manager, when running in network rendering mode, Krakatoa MY will also check whether a "krakatoa-maya" license line exists in the license file, but will NOT acquire one. In other words, a "krakatoa-maya" license must have been purchased for the "krakatoa-render" licenses to work with Maya network rendering, but it will not be checked out (used up). Thus, you can have one workstation and ten network licenses and the latter will function even if the one workstation license is currently in use. You cannot purchase ten network licenses without a workstation license and use Krakatoa MY.
License Acquisition And Release
A "krakatoa-maya" workstation license is acquired when the Krakatoa renderer is first used inside of Maya to render an image, or when the PRT Saver utility is used to save PRT files.
The license will be held for the complete Maya session and returned when the Maya application is closed.
Installing And Setting Up The License Manager
The licensing tools are currently NOT included in the Krakatoa MY installer. They can be downloaded from here. This download includes the thinkbox daemon.
For the license manager installation instructions on Microsoft Windows, please see here.
For the license manager installation instructions on Linux, please see here.
FAQ: How do I specify the license server on multiple render nodes without opening Maya?
In order to license Krakatoa without opening up Maya and entering the information manually, you will need to set the following environment variable on the computers:
|
OPCFW_CODE
|
File and Printer Sharing problem on LinkSys Router
I'd like to get the latest firmware for my Linksys WRT54G v2 router, but the Linksys support page offers no .bin file download for the latest firmware.
|
OPCFW_CODE
|
Hi, I used to be able to modify histogram contents like this
import boost_histogram as bh
import numpy as np

bins = [0, 1, 2]
hist = bh.Histogram(bh.axis.Variable(bins), storage=bh.storage.Weight())
yields = [3, 4]
var = [0.1, 0.2]
hist[...] = np.stack([yields, var], axis=-1)
hist.view().value /= 2
That worked until 0.11.1. In the current master it does not anymore; I think scikit-hep/boost-histogram#475 changes the behavior:
Traceback (most recent call last):
  File "test.py", line 10, in <module>
    hist.view().value /= 2
  File "[...]/boost_histogram/_internal/view.py", line 57, in fset
    self[name] = value
  File "[...]/boost_histogram/_internal/view.py", line 49, in __setitem__
    raise ValueError("Needs matching ndarray or n+1 dim array")
ValueError: Needs matching ndarray or n+1 dim array
Is there another way to achieve this now?
hist.view() /= 2.
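For context on the layout involved: the Weight storage view behaves like a numpy structured array with value and variance fields. This pure-numpy sketch (a stand-in for the boost-histogram view, not using the library itself; the field names mirror the Weight storage) shows that per-field in-place edits by name work on a plain structured array:

```python
# A plain structured array standing in for hist.view() with Weight storage:
# two bins, each carrying a (value, variance) pair.
import numpy as np

view = np.array([(3.0, 0.1), (4.0, 0.2)],
                dtype=[("value", "f8"), ("variance", "f8")])

# In-place update of a single named field; numpy returns a writable view
# of that field, so this halves the values without touching the variances.
view["value"] /= 2
```

Note this only illustrates the numpy layout; whether the library's wrapper exposes the same field-indexing path is version-dependent, which is why the whole-view division above is the recommended route.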
boost-histogram 1.0! I'm adopting the new API for subclassing, and saw in https://boost-histogram.readthedocs.io/en/latest/usage/subclassing.html that family=object() is recommended when only overriding Histogram. What is the difference between object() and object? While trying to understand this, I noticed that object is object is True, while object() is object() (are those instances?) is
False. Is the latter part an issue given the following?
It just has to support is, and be the exact same object on all your subclasses.
object is a class; classes are singletons, there's just one.
object() is an instance, and you can make as many as you want; each will live in a different place in memory (check with id()).
family= can be anything that supports is, which is literally everything, with the exception of the module boost_histogram (as that's already taken by boost-histogram). The module hist would be a bad choice too, as then your axes would come out randomly as Hist's or your own. The old way works fine: put FAMILY = object() at the top of the file, then use family=FAMILY when you subclass. But for most users, a handy existing object is the module you are in, that is, "hist" or "boost_histogram". It's unique to you, and is descriptive. You can use family=None (or the object class, anything works); you just don't want some other extension to also use the same one, because then boost-histogram won't be able to distinguish between them when picking Axis, Storage, etc. If all you use is Histogram, though, then it really doesn't matter.
The point of object() is to make a truly unique object. For example, if I make NOTHING = object(), then use def f(x=NOTHING): if x is NOTHING, I can now always tell if someone passed a keyword argument in. They can't make NOTHING; they have to pull NOTHING out of my source and use it from there, so you can't "remake" it accidentally.
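The sentinel pattern described above can be sketched as follows (hypothetical function and names, not boost-histogram code):

```python
# A module-private sentinel: callers cannot recreate this exact object,
# so an `is` comparison reliably detects "no argument passed".
_NOTHING = object()

def set_label(label=_NOTHING):
    # Unlike a None default, this distinguishes set_label() from
    # set_label(None): None is a value a caller might legitimately pass.
    if label is _NOTHING:
        return "default"
    return f"explicit: {label!r}"
```

Here even set_label(None) takes the explicit branch, something a plain None default could not tell apart from the no-argument call.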
The ideal way would have been the following:
class Hist(bh.Histogram):
    …

class Regular(bh.axis.Regular, parent=Hist)
The problem with this would have been that it is very hard to design without circular imports, as Histogram almost always has Axis usages in it. It can be done, but would have required changes to boost-histogram and user code, which also has to follow this strict regimen. Using a token is much simpler; it doesn't require as much caution in user code (or boost-histogram).
I thought Histogram would create a new object() and then not match the object() in the family definition, but from what I understand now this is not what happens: the object is created once when the class is defined, and any other class defined in my code with family=object() would pick up a different object and be unique too.
If I added a default for family for Histogram, it would have been object(). I could special-case None, that is, if family=None, it just makes an object() for you.
I could also make that the default for Histogram, and only require family= on the other subclasses. But if you have an Axis or other subclass, you have to go back and add family= on the Histogram; that's why I force it to always be dealt with on Histogram: it prepares you for also subclassing other components. I didn't really think too much about only subclassing Histogram.
By the way, can’t you do
import cabinetry

class Histogram(bh.Histogram, family=cabinetry):
    ...
? That would allow to easily add subclasses for axes eventually if you needed to customize them later.
Yes, I could use that too. I was looking at object() following the documentation:
If you only override Histogram, just use family=object().
The additions in my histogram class are rather lightweight and I don't expect to go deeper and subclass axes. On the other hand I see no downside of
@henryiii @jpivarski Can you tell me if this is a hist issue or an uproot issue or neither? https://gist.github.com/matthewfeickert/ab6ac8677aad2e04738111d0af3e0549
(There's a Binder link in the Gist if you want to play with it in browser)
|
OPCFW_CODE
|
Network Time Protocol (NTP) is a protocol for distributing the Coordinated Universal Time (UTC) by means of synchronizing the clocks of computer systems over packet-switched, variable-latency data networks.
I’ve tried synchronizing the clock on a Cisco router and a Catalyst-series workgroup switch. OK, let’s try!
- Enter the privileged EXEC mode
- Enter the global configuration mode
- Enter the command clock timezone zone hours-offset in order to set the local time zone.
AND(config)#clock timezone UTC 7 0
Note: UTC for Jakarta – Indonesia
- Enter the command clock summer-time zone recurring in order to specify daylight savings time. The default is that summer time is disabled.
AND(config)#clock summer-time UTC recurring
- Enter the IP address or hostname of the peer in order to allow the clock on this router to be synchronized with the specified NTP server.
AND(config)#ntp server 0.id.pool.ntp.org
AND#show ntp associations
  address         ref clock       st   when   poll  reach   delay   offset     disp
 ~127.127.1.1     .LOCL.           7      5     16    377    0.000    0.000    0.236
*~188.8.131.52    184.108.40.206   3     45     64     37   17.190  -1423.1   439.32
 * sys.peer, # selected, + candidate, - outlyer, x falseticker, ~ configured
AND#show ntp status
Clock is synchronized, stratum 8, reference is 220.127.116.11
nominal freq is 250.0000 Hz, actual freq is 250.0025 Hz, precision is 2**24
reference time is D44515AC.DFAF0A26 (00:15:24.873 UTC Thu Nov 8 2012)
clock offset is -1423.1677 msec, root delay is 0.00 msec
root dispersion is 2.30 msec, peer dispersion is 190.11 msec
loopfilter state is 'CTRL' (Normal Controlled Loop), drift is -0.000010142 s/s
system poll interval is 64, last update was 140 sec ago.
Clock is synchronized!
AND#show clock detail
00:22:34.570 UTC Thu Nov 8 2012
Time source is NTP
AND#show calendar
00:23:33 UTC Thu Nov 8 2012
|
OPCFW_CODE
|
Azure Service Bus SDK not receiving the specified messages
I am using Azure.Messaging.ServiceBus SDK and I am unable to understand the below statement.
My subscription has 200 messages. My understanding is that if I execute the statement below it should return 5 messages, or if I give the value 200, it should return 200, but it only ever returns one, whatever I do. I tried setting the TimeSpan to very high values too. Basically, I am trying to increase the throughput on the receiving side. I am quite happy with the send throughput using batching, but I guess I am not understanding the SDK well enough to get maximum receive throughput.
var messages = await receiver.ReceiveMessagesAsync(5, TimeSpan.FromDays(1));
I am open to inputs on improving things using "processorClient.CreateProcessor" too. In that case I don't see any other way other than increasing "PrefetchCount".
My understanding is that if I execute the below statement it must return me 5 counts of messages or if I give the value as 200, it must return me 200
This is incorrect. The parameter is called maxMessages and represents the maximum number of messages that will be received, not a guarantee. There is no minimum batch size when receiving messages.
The receiver lets the service know the maximum number of messages that it would like and then gives your application the set returned from that operation. The client does not attempt to build a batch of the requested size across multiple operations.
There are two main reasons for this approach. First, the lock associated with the message is held only for a limited time after which it expires if not renewed. Were the client to hold onto messages across multiple operations to try and build a batch, the time that your application has to process messages would be sporadic and, in the worst case, you would receive messages with expired locks that could not be completed. Second, the client prioritizes providing data to your application as quickly as possible to help maximize throughput.
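The behavior described above can be illustrated with a small self-contained sketch (a hypothetical stub in Python, not the real Azure SDK): each receive call returns up to the requested maximum, so an application that wants an exact batch size must loop and accumulate itself, keeping in mind that messages it holds have expiring locks.

```python
# Hypothetical stand-in for a Service Bus receiver: each receive call
# returns *up to* max_messages, but here never more than 2 per call,
# mimicking the partial batches the service may hand back.
from collections import deque

class FakeReceiver:
    def __init__(self, pending):
        self._queue = deque(pending)

    def receive_messages(self, max_messages):
        batch = []
        while self._queue and len(batch) < min(max_messages, 2):
            batch.append(self._queue.popleft())
        return batch

def receive_exactly(receiver, count):
    """Accumulate across calls until `count` messages are gathered or the
    source is drained; this is the loop the SDK deliberately does not do
    for you, since it would hold message locks across calls."""
    collected = []
    while len(collected) < count:
        batch = receiver.receive_messages(count - len(collected))
        if not batch:
            break
        collected.extend(batch)
    return collected
```

In a real application this accumulation must be weighed against lock expiry: the longer the loop runs, the less processing time remains on the earliest messages.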
I am open to inputs on improving using "processorClient.CreateProcessor" too. In this case I dont see any other way other than increasing "PrefetchCount"
Throughput is impacted by a number of factors from the network, host, workload, message size/composition, and service tier. It is very difficult to generalize advice.
A few things that would normally help:
Ensure that your application is running in the same Azure region as your Service Bus namespace.
Consider increasing concurrency. This may involve using multiple receivers and/or tuning the concurrency settings on the processor.
Consider using prefetch to eagerly stream messages from the service.
(Note: messages held in prefetch are locked and those locks cannot be renewed. Tune accordingly and test thoroughly)
I'd recommend reading through Best Practices for performance improvements using Service Bus Messaging as it goes into quite a bit more depth.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-quotas
This link says the max concurrent connections for a namespace (both Premium and Standard) is 5000 for AMQP. Is it correct to say that every "new ServiceBusClient()" that I create from code (and do NOT close) counts against this quota?
That is correct. Each ServiceBusClient represents a single connection to the service.
|
STACK_EXCHANGE
|
Got a call from a client that their internet was down. There was a system-wide outage of Comcast at the time, so I went down to switch them over to the backup DSL. I'll spare you all the details, but I did all of the following. After each step, no joy...
Reset network setting in router for DSL
Triple checked and confirmed with ISP settings were correct
Replaced router with new unit and tried to configure both Comcast (which had come back up) and DSL
Rebooted server multiple times
Connected laptop directly to DSL modem with network settings set statically on NIC (worked)
Reviewed all connections, cables, and everything else physical multiple times
Could ping anything on the network by IP but nothing outside of it
Finally, out of frustration, I took one of the computers on the network, noted the network settings displayed by ipconfig, and then entered those settings manually on the NIC (same IP, same gateway, but with 126.96.36.199 as primary DNS rather than the server, which was set to secondary instead). This worked! I repeated this for all machines in the office as a temporary fix.
Now I am left trying to figure out what the root problem could possibly be. Note that nothing was changed on the server prior to the problem, other than possibly a temporary failure of the WAN. User access to network shares and server resources was never disrupted.
We are running Server 2012R2 with essentials installed. Server has 2 NICs, one of which is used for Hyper-V. Server is doing both DHCP and DNS. Also added Hamachi VPN to the server fairly recently for use by one of the remote users. Users have be
There have been some issues with the dual NICs in the past, revolving around how services are attached to the different NICs. I had to disable the NIC dedicated to Hyper-V at one point. I suspect this is again the issue, but it is not at all clear why things should fail when ipconfig on all machines shows the correct information (and simply typing in the info manually solves the problem).
I agree with LarryG. Their internal DNS isn't working properly. This isn't a DHCP issue unless your DHCP server is assigning the address of your COMCAST DNS server. If it's DHCP assigning the wrong address, configure DNS on your local server and have it use the root servers for external referrals. That way it will work regardless of your ISP.
I won't be able to check the DNS until later tonight, but I don't see how it can be the issue. I did not make this clear, but when troubleshooting connectivity all of my pings were by IP address (e.g. ping 188.8.131.52 and not ping yahoo.com). I don't understand how the DNS server/configuration would come into play when pinging by IP.
The DHCP service running on the server is setting the workstations' Gateway to the router 10.1.1.1 and DNS to the server 10.1.1.15. I am planning on turning off DHCP on the server and using the router for DHCP instead to see what happens. I will check the DNS forwarders as suggested but I believe it is set to google 184.108.40.206. I do remember having to make a change to the forwarders at some point in past.
I just find this all very perplexing as it seems that I should be able to ping by IP regardless of which DNS server is used. It will be easier to approach things in a more careful fashion this evening when the business is closed and I don't have a gun to my head.
|
OPCFW_CODE
|
Microfacet GGX not integrating as expected
I'm trying to complete a look-up table for an energy-conserving variation of microfacet GGX as implemented here: https://patapom.com/blog/BRDF/MSBRDFEnergyCompensation/
I have a (pretty standard) GGX form of the microfacet equation and my white furnace test behaves similar to what is shown elsewhere (i.e., it loses energy as roughness increases).
However, when I attempt to integrate the brdf and store the total energy in a texture, I get results that are the inverse of my expectation: higher roughness yields more total energy in the integral. Some code below (this is done in a compute shader):
float evalSum = 0.0;
const float theta_i = 0.05;
const float phi_i = 0.05;
for (float phiWi = 0.0; phiWi < PI2; phiWi += phi_i)
{
for (float NoL = 0.0; NoL < 1.0; NoL += theta_i)
{
float nohA = 1.0 + NoL * NoV + (1.0 - NoV * NoV) * (1.0 - NoL * NoL) * cos(phiWi);
float NoH = (NoL + NoV) / sqrt(2.0 * nohA);
float D = Fs.eNDF(NoH, roughness);
float G = Fs.eGS(NoV, NoL, roughness);
float Fr = D * G;
evalSum += Fr * NoL;
}
}
outBuf[idx] = evalSum * phi_i * theta_i;
Here is my texture output:
I'm expecting something more like this:
As the Y axis goes down, roughness increases, and as the X axis goes to the right, NoV (N dot V) increases. Total energy increases with roughness, and I also get a strange valley in the energy which I wasn't expecting.
Here's the two GGX terms:
float getNDF(const float NoH, float linearRoughness)
{
float a2 = linearRoughness * linearRoughness;
float b = NoH * NoH * (a2 - 1.0) + 1.0;
return a2 / (PI * (b * b));
}
float getGS(const float NoV, const float NoL, const float linearRoughness)
{
float a2 = linearRoughness * linearRoughness;
float G_V = NoV + sqrt((NoV - NoV * a2) * NoV + a2);
float G_L = NoL + sqrt((NoL - NoL * a2) * NoL + a2);
return rcp(G_V * G_L);
}
Pretty stumped here. I must be doing the integral wrong somehow, because the in-engine renders look correct and the brdf code seems straightforward. Any ideas on why my LUT is not producing the expected result?
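As a cross-check, the same integral can be estimated on the CPU. The sketch below is a Python port of the two GGX terms above (my own code, not the engine's); it Monte Carlo integrates D·V·cosθ over the hemisphere with uniform sampling. Note that the exact L·V term takes the square root of the product of the squared sines, which may be worth comparing against the shader's nohA line:

```python
import math
import random

def d_ggx(noh: float, a: float) -> float:
    # GGX normal distribution (same form as getNDF above)
    a2 = a * a
    b = noh * noh * (a2 - 1.0) + 1.0
    return a2 / (math.pi * b * b)

def v_smith(nov: float, nol: float, a: float) -> float:
    # Separable Smith visibility (same form as getGS above;
    # the 1/(4 NoV NoL) factor is folded in)
    a2 = a * a
    gv = nov + math.sqrt((nov - nov * a2) * nov + a2)
    gl = nol + math.sqrt((nol - nol * a2) * nol + a2)
    return 1.0 / (gv * gl)

def furnace_albedo(nov: float, a: float, samples: int = 100_000) -> float:
    """Monte Carlo estimate of the directional albedo: the integral of
    D * V * cos(theta_l) over the hemisphere, with Fresnel = 1."""
    rng = random.Random(0)  # fixed seed so the estimate is reproducible
    sin_v = math.sqrt(max(0.0, 1.0 - nov * nov))
    total = 0.0
    for _ in range(samples):
        nol = rng.random()                  # uniform in cos(theta_l)
        phi = 2.0 * math.pi * rng.random()  # uniform azimuth
        sin_l = math.sqrt(max(0.0, 1.0 - nol * nol))
        # exact L.V, then N.H = (N.L + N.V) / |L + V|
        ldotv = nol * nov + sin_l * sin_v * math.cos(phi)
        noh = (nol + nov) / math.sqrt(max(1e-8, 2.0 + 2.0 * ldotv))
        total += d_ggx(noh, a) * v_smith(nov, nol, a) * nol
    # uniform-hemisphere pdf is 1/(2*pi)
    return total * 2.0 * math.pi / samples
```

With this setup the single-scattering albedo should fall as roughness rises (the energy loss the compensation LUT is meant to restore), so the LUT should get darker toward high roughness, not brighter.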
I probably won't find the answer; I find your abbreviations a bit hard to read. However, I noticed you calculate float Fr = D * G; but never use it. Perhaps you forgot something?
Thanks Tare, that's just an artifact of me trying to debug the code. I'll edit the post.
@polyrhythm I second that. Just because dolt researchers like to use abhorrent abbreviations doesn't mean you have to. There's honestly no reason you can't name things properly here. Your names are misleading as well: NoL to me would have translated to "NormalOfLight", not "N dot L". Regardless, I would have just named it what it was (and no, "N dot L" is not an appropriate name: what is N? What is L? What does the combination of the two represent? That's how you name these things).
Thanks for your comment! I think my abbreviations are pretty industry-standard, though. A quick google search turns up a ton of high quality codebases using similar nomenclature, here is an example: https://github.com/google/filament/blob/master/shaders/src/brdf.fs
I have never seen NoL to mean "normal of light"...
Infor Smart Office 10.0.5.2 is now available as a CCSS update package. In this post I’ll cover some of the enhancements in this new version.
First, let’s start with a critical notice:
If you are using Grid 10.1.11 (with LifeCycle Manager 9.1.x), you must first apply the fix for Lawson Grid before upgrading your installation of Smart Office to 10.0.5.2.
The fix is available via CCSS with the ID Lawson_Grid_10.1.11.0_2. Once the fix is applied, the version number of Lawson Grid will change to 10.1.11.0.25.
Smart Office 10.0.5.2 and Smart Office 10.1.1 are released only days apart. One of the changes is a new admin tool, the Mango Administration Tool, which allows you as an administrator to export data from one installation and then import that data into another Smart Office server installation. This is part of the migration support from 10.0.5.2 to 10.1.1. Note however that any “non standard features” such as Mashup Designer, LPA / IPA or LBI need to be installed with LCM, because installed .lawsonapplications are not completely included in the export. The configuration is included, but the application still needs to be installed. How to migrate from 10.0.5.2 to 10.1.1 is described in the documentation for 10.1.1.
A new Mashup control shows message boxes of different types. It can be used to display messages as well as to capture the result from Yes/No and OK/Cancel dialogs, so that different answers trigger different events depending on user input.
Can be used to send and receive messages to/from other Mashups, scripts or SDK features. Messages can be sent to a single recipient or broadcasted to many recipients.
New buttons are available in the advanced mode in the Profile Editor which allow users to manually delete and add profile groups, application entries and properties.
See numbering in the image
1. Creates a new application group.
2. Removes an application group. Deleting is useful if you installed a feature that you don’t use and want to clean up and remove its configuration.
3. Adds a new property to the application.
4. Adds a new application.
If you are working with a feature that uses M3 or is related to M3 and you need to add profile settings to your application, you can select the M3 profile application group on the left and then go into the advanced mode. If your feature depends on S3, you add your settings to the S3 application group instead. Once in the advanced mode, press +Add (4) to add your feature section. Then select to add a property. The names can be anything, but they need to be unique under that group.
This comes in handy if you are an SDK developer or a Mashup developer and would like to manually add settings to the profile without having to configure your own local profile in the SDK scenario.
Mashup File Administration
In the Mashup File Administration tool we have added support for uploading .mashup files directly. These files can be created from within the new version of the Mashup Designer, but even with older versions of the Mashup Designer a .mashup file will be created in the same directory as the .lawson file when you select to create a lawsonapplication file. This file can be uploaded directly via the Mashup File Administration tool (from within Smart Office). The recommended approach for Mashup deployment is still to use Life Cycle Manager, but this is an option for those mashups that do not contain a profile section. When a mashup is uploaded via the tool, the profile section will not be merged into the profile of the installation. For that to work you still need to use Life Cycle Manager.
However, the Profile Editor has been enhanced with new buttons in the advanced mode that allow you to add profile settings manually. What approach you take to managing your mashups is up to you, but please note that if you remove mashups in the Mashup File Administration tool, Life Cycle Manager will not pick up the fact that the Mashup is no longer deployed.
We have received feedback that some customer projects need a simpler way to deploy new versions of a Mashup, and with the new features in the Mashup File Administration tool, only Smart Office administration access is needed to upload a new version of an already deployed Mashup.
My Local Applications
My Applications, accessible from Show -> My Applications, has been enhanced with a new mashup tab that contains a list of all available mashups. It is useful for mashup developers and for users that have local applications with mashups installed. Double-clicking in the list will open the selected Mashup.
Support for authentication using the user@domain format
Previous versions of Smart Office used the domain\user format when a domain was specified. As of 10.0.5.2, the format entered by the user on the login screen will be used.
Application MBrix has been removed
The mbrix schema has been removed. It has not been used since Lawson Smart Client, but there might still be users with links to Companion and Document Archive that use the old mbrix:// format.
Those links will stop working now that the Mbrix application has been removed, and they should be replaced with new links. Drag the menu item from the Navigator widget, or drag the icon from an open application in the taskbar, to the canvas to create a shortcut, then open the settings for the shortcut to check the URI.
Feature updates for Lawson Applications
New API builder
A new API builder utility is available in Smart Office, which can be used as an aid for scripting. It can be used to build Application, Data Query, and Drill-Around APIs.
The API Builder can be launched from any of the following:
– Help menu of the Script Tool
– Smart Office Navigator-> Lawson Transactions -> Other
– Via sclient://apibuilder in the Start Application box.
New File Transfer Wizard
A new File Transfer Wizard can be used to connect to an FTP server. The File Transfer Wizard can be launched from Navigator -> Lawson Transactions -> Other.
– Note however that for a Windows FTP server the directory list output format must be set to UNIX style.
Read only mode now available for list-driven Lawson Application Forms
The read only option can be set by going to Show -> Settings -> Lawson Transactions -> Applications tab.
The prompting for credentials following a DSSO session timeout has been re-implemented.
We have migrated our test environment to ISO 10.0.5 (from LSO 10.0.4) and noted a few problems due to this migration:
– installed Mashups are lost (must be re-installed)
– the ISO canvas is empty (each user must re-apply their saved canvas)
– auto-start applications are lost (each user must reconfigure them)
– specific and customized installed widgets are lost (must be re-installed)
– the specific language is lost (after installation, the default language is French)
Have you got a solution for these migration problems?
It seems like some of the data that should have been migrated into the database was not. Please tell me exactly what version you started with and what version you upgraded to. Did you have anything in the logs? All the files you mention should have been migrated, except perhaps the language. I will check against the code once I know the version. What version does the grid have? It seems like the upgrade worked but the migration that is done when the server starts never took place.
We started from LSO 10.0.4.1.39 and installed ISO 10.0.5.4.19.
You mentioned logs in your last post… but which logs? (LSO? Grid? SubSystem IW B M? Installation log?)
I did get the logs (but I need a log that is newer than the one I got). Generally, when there are server issues it is the MangoServer/LSO log that we are interested in. Depending on the grid version there might be other grid logs of interest as well.
Sent some logs (global logs of LifeCycle and the environment log of Grid) to Karin…
Leave or remove baseboard for hardwood floor install
I removed the carpet and discovered particle board. In the photo, I have removed a piece of the particle board. I plan on installing the ¾ inch hardwood on the subfloor. You can see the hardwood sticks above the baseboard by ¼ inch. How should I do the job?
Butt against the baseboard and install quarterround.
Or get a cutter that cuts that ¼ inch so the hardwood goes underneath, then add quarterround.
Or pull the baseboard.
I think it would be easier to mark and remove the baseboard than to try hacking the floor to butt against it or fit under it. Most floors require free space at the edges for expansion (covered by the baseboard).
I doubt there's a technically correct answer to this. If this were my house I'd almost certainly remove the baseboard because I don't like the look of quarter-round in that application. If this were a house I'm flipping on a tight budget then quarter-round all the way.
@brhans - to your point - I get it. First, quarter round is terrible and should never be used as shoe moulding. Second, there is no way in hell that buying quarter round, making 30 precise cuts, installing it, and so forth takes less time than pulling baseboards that already fit and reinstalling them. The only thing that takes time is actually snipping the nails off the baseboards. And this is free ($3 in nails), while the quarter round for a floor is $150ish. Technically you should always remove the baseboards.
You pull the baseboards.
First, you would need a substantial expansion gap (1/4" minimum, varying with room size and wood). In your picture there is no gap. If you left the baseboards you would for sure have to attach quarter round to them. And that receives a heavy "Booooo!" Why would you buy that nice flooring and then use quarter round?
The proper install method for hardwood flooring is to score the bottom of your drywall 1/4" higher than the top of the hardwood. The hardwood slides right under the drywall, just under the lip, meaning it has ~1/2" of expansion room around the perimeter.
Then the baseboard just gets gently sat on hardwood (you can give it a 1/32" gap) and popped into place with finishing nails.
This is the wrong way
This is the right way
The proper method is to remove the baseboard; then, when your spacing is not exact, the baseboards will cover the edge.
The covered edges will produce a professional look as the boards shrink and expand with changes in humidity. Trying to have a perfect cut at each end just won’t work throughout the year.
For best results, pull the baseboards and lay the hardwood with a target length just short of the wall for expansion.
Make sure the new wood has had some time to adjust to your climate prior to installation; approx. 2 weeks is usually enough for well-cured wood flooring.
Another great answer, Ed. Adding that acclimation is really important: I'd say 2 weeks is a minimum, and 4 weeks would be better.
What is accessible content?
At Harvard Library we’ve built a website that’s designed and developed with accessibility in mind. But accessibility extends beyond designers and developers. It’s the responsibility of content editors to create and maintain content that’s inclusive and accessible to all users.
These five guidelines are a tool to use when creating content on library.harvard.edu or any other Harvard Library digital product.
1. Structure
Web content needs an easy-to-follow structure that’s not dependent on visual presentation. Research has established that users rarely read a webpage from top to bottom, but rather they scan headings to find the section they need. Content must support online reading habits, no matter how it is delivered.
Provide structured content that includes:
- an accurate page title
- descriptive headings
- defined sections
Why structure is important for accessibility
Some users may modify how a browser presents content (for example, by magnifying the text) or use a screen reader to have the content read to them.
View an example of good content structure from Perkins School for the Blind.
Activity: Turn off CSS in your browser using the Web Developer browser extension to make sure your content follows a logical outline and is structured appropriately.
2. Plain Language
When writing web content, consider the user’s needs first and organize accordingly. Write in clear, concise sentences that are grouped into short sections.
Make sure your tone aligns with the Harvard Library writing guide and is consistent throughout.
Use plain language whenever possible and avoid exclusive or ableist terms. Read more about best practices for inclusive language.
Language to use when talking about disabilities
When talking about people with disabilities, use language that appropriately describes them. For example:
- People with disabilities
- People who are deaf or hard of hearing
- People who are blind or have low vision
- Wheelchair users
- People with mobility impairments
- People with cognitive disabilities or people with mental illness
- People with learning disabilities
- Non-disabled users (when the distinction is necessary)
Language to use when talking about accommodations
Use the word accommodations, not exceptions or special treatment. If there is inaccessible software or another service that requires mediation for someone with a disability or who is using an assistive device, use "reasonable accommodations." See an example of reasonable accommodation language on the Harvard Careers site.
Why language is important for accessibility
Plain language and well-organized writing benefits all users, but especially those with cognitive disabilities such as dyslexia and ADHD.
View an example of well-organized, plain language from the Consumer Financial Protection Bureau.
Activity: Review your content for compliance with this checklist for plain language.
3. Meaningful Headings & Links
Always use the built-in options for styling headings, links, tables, and lists. Avoid using these structural elements for anything other than their intended use.
Write link text that clearly explains its purpose. Avoid using the same link text in a page for multiple links that lead to different destinations. Refrain from using ambiguous phrases like “click here”, “learn more”, or “more info”. Instead, use phrases like “use Zotero” or “learn more about Borrow Direct” or “More info on the media studios”.
If a link leads to an attachment, rather than another webpage, explain that in the link text. For example, “Widener Call Numbers PDF”.
Why headings and links are important for accessibility
People who use screen readers depend on headings to find relevant content. Screen reader users may navigate using lists of headings and links so those elements need to make sense out of context.
Example of meaningful headings & links from Mass.gov
Activity: Check your headings and links with the Web Developer browser extension.
4. Image alternatives
Use ALT text to provide a short description of meaningful images. However, you should avoid including ALT text when an image, like an icon, is decorative. As long as the image does not provide additional information or meaning it can be considered decorative. If you’re not sure if your image is decorative or not, check out this ALT text decision tree.
If you are including a data visualization such as an infographic or chart you may want to provide a text alternative or a link to the underlying data.
Why image alternatives are important for accessibility
People who use screen readers need an equitable experience of your content, including graphics. Brief descriptions provide meaning that would otherwise be unapparent to these users.
Example of meaningful alt text from National Oceanic and Atmospheric Administration.
Activity: Check your alt text with the Web Developer browser extension.
5. Media alternatives
For audio files
Provide a transcript of the audio file.
For video files
Provide at least a transcript of the audio or captions. Ideally, provide the user with the option to read a transcript or use captions. Consider adding descriptions of any visual content such as Powerpoint slides. Automated captioning tools, such as those provided by YouTube, are not sufficient in most cases. Complete details about media alternatives can be found on the Harvard IT online accessibility site.
For data visualizations
Provide a description of the conclusions that the visualization explicates in the ALT text. Consider linking to the data that the visualization is based on. If needed, provide a text alternative that gives an in-depth description of the visualization.
Why media alternatives are important for accessibility
Transcripts and captions must be provided so that those who can't see or hear can still experience your content.
View an example of a transcript and captions from Lynda.com.
- Check that synchronized captions give a text equivalent for all spoken and key non-spoken audio, or that a transcript of the spoken and key non-spoken audio is available on the same page as the audio.
- If the video includes important visual information, make sure descriptions are provided in the captions or transcript.
If you have questions about this guide, contact Amy Deschenes who created and maintains it.
This guide focuses on accessible content only. Use the Online Accessibility website from Harvard IT for more details on web design and markup.
setup wasm-vips for next.js/react
First of all, I would like to thank you for your work; it's awesome that people make projects like this one possible. I think it has a bright future.
So I was working with the Node.js sharp library, a wrapper around libvips. I found out that everything can now be done even without a backend, discovered your project, and started trying to make it work, but got stuck. I have no problems when using it on the backend (I didn't try much because it's not what I wanted), but it produces different errors each time I try to make it work with Next.js.
For example I'm constantly getting this error:
Link to my project: https://github.com/serafimsanvol/my-app/blob/main/src/app/page.tsx
It's just a basic Next.js starter project, so I'm sure there is something with the library configuration, but I don't know what the workarounds are here. Can you help/explain/tell me whether it's a bug and actually unexpected behaviour?
On a different project, with a slightly different setup and the headers set as required, it shows a different error:
Any help/suggestions appreciated, thanks
It looks like Next.js is encountering some issues when attempting to bundle wasm-vips' ES6 modules, related to the new URL('./', import.meta.url) syntax.
If you want to enable wasm-vips solely for server-side rendering (SSR), you can use this config:
next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
webpack: (config) => {
// Disable evaluating of `import.meta.*` syntax
// https://webpack.js.org/configuration/module/#moduleparserjavascriptimportmeta
config.module.parser.javascript.importMeta = false;
// Disable parsing of `new URL()` syntax
// https://webpack.js.org/configuration/module/#moduleparserjavascripturl
config.module.parser.javascript.url = false;
// Alternatively, to bundle the CommonJS module:
// Ensure "require" has a higher priority when matching export conditions.
// https://webpack.js.org/configuration/resolve/#resolveconditionnames
//config.resolve.conditionNames = ['require'];
return config;
}
}
module.exports = nextConfig;
To use wasm-vips in client-side environments, it remains essential to opt-in to a cross-origin isolated state and serve vips.js, vips.wasm and vips.worker.js from the same directory. Assuming these files are in the public/ directory, you could do this:
next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
async headers() {
return [
{
source: '/:path*',
headers: [
{
key: 'Cross-Origin-Embedder-Policy',
value: 'require-corp'
},
{
key: 'Cross-Origin-Opener-Policy',
value: 'same-origin'
}
]
}
]
},
webpack: (config) => {
// Ensure "require" has a higher priority when matching export conditions.
// https://webpack.js.org/configuration/resolve/#resolveconditionnames
config.resolve.conditionNames = ['require'];
return config;
}
}
module.exports = nextConfig;
src/app/page.tsx:
'use client';
import { useEffect } from 'react';
import Vips from 'wasm-vips';
export default function Home() {
useEffect(() => {
Vips({
// Disable dynamic modules
dynamicLibraries: [],
// Workers needs to import the unbundled version of `vips.js`
mainScriptUrlOrBlob: './vips.js',
// wasm-vips is served from the public directory
locateFile: (fileName, scriptDirectory) => fileName,
}).then((vips) => {
console.log('libvips version:', vips.version());
});
}, []);
return (<h1>Hello wasm-vips!</h1>)
}
@kleisauke, thanks for your help! Now it's working for me
Great! I'll close, please feel free to re-open if questions remain.
Update mirror networks
Problem
There are several relevant mirrornode endpoints:
hcs.previewnet.mirrornode.hedera.com:5600
hcs.testnet.mirrornode.hedera.com:5600
hcs.mainnet.mirrornode.hedera.com:?
mainnet-public.mirrornode.hedera.com:443
I've been given to believe that endpoints with the port number 5600 are plain connections, and those with port number 443 are TLS encrypted connections.
hcs.mainnet.mirrornode.hedera.com:? is the mainnet mirror node endpoint that the Java SDK has been connecting to by default for a long, long time. I've put a ? in place of the port number because it's not obvious to me what port numbers are supported at this domain name, for reasons that I will make clear.
hcs.mainnet.mirrornode.hedera.com is a non-public mirror node, which can only be connected to via IP addresses that have been allow-listed by that mirror node. Launchbadge does not have an allow-listed IP address that we can use for testing that endpoint (and that's why I'm not sure what port numbers are supported). I am under the impression that this mirror node also does not behave identically to the mainnet-public.mirrornode.hedera.com mirror node, though I am not entirely sure what the differences are.
These domain names support only the non-TLS (port number 5600) endpoints:
hcs.previewnet.mirrornode.hedera.com
hcs.testnet.mirrornode.hedera.com
And this domain name supports only the TLS encrypted (port number 443) endpoint:
mainnet-public.mirrornode.hedera.com:443
The Java SDK currently supports toggling transportSecurity at the level of the Client object. If transportSecurity is true, the Client will connect using TLS to what is presumed to be the TLS endpoint for the Client's already-present MirrorNetwork. In other words, it will attempt to connect using TLS to port 443 on the same domain name. Likewise, if transportSecurity is set to false it will attempt a plain connection to port 5600 on the same domain name.
transportSecurity simultaneously controls whether the Client connects to the consensus network using a TLS connection or a plain connection (and just like with the mirror network, it assumes the existence of TLS and plain endpoints for each node IP address [though for the consensus network, the port numbers are different: 50211 for plain and 50212 for TLS])
We made the assumption that for each domain name or IP address in the mirror network or consensus network there were two endpoints, one TLS endpoint with the expected port number, and one plain endpoint with the expected port number. It appears to me that that assumption is breaking down for the Hedera mirror nodes, and honestly I'm pretty sure that's not a good assumption for the consensus networks either.
The fact that it is entirely impossible to connect to some mirror node domain names using TLS, or to some other mirror node domain names using a plain connection, means that we probably want to decouple configuring TLS for the consensus network from configuring TLS for the mirror network. If it's possible to connect to the testnet consensus network using TLS, but not possible to connect to the testnet mirror network using TLS, it would be reasonable for someone to demand TLS for one and plain for the other.
We also have gotten mixed messages on whether we should switch the default mainnet mirror network from hcs.mainnet.mirrornode.hedera.com to mainnet-public.mirrornode.hedera.com.
Solution
Request clear, unambiguous direction from on high about whether to switch the default mainnet mirror node domain name from hcs.mainnet.mirrornode.hedera.com to mainnet-public.mirrornode.hedera.com.
Add [set|get]MirrorTransportSecurity() methods to Client so that mirror node transport security can be toggled separately. I don't see any obvious or reasonable way for this not to be a breaking change.
Make mirrorTransportSecurity an enum with three options:
DISABLED
ENABLED
AUTO
If AUTO is selected, then the mirror network will attempt to connect with TLS, and will fall back to a plain connection if the connection is refused.
Make AUTO the default value for mirrorTransportSecurity.
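The proposed three-state behaviour can be sketched as pure selection logic. This is an illustrative Python model only (names and structure are mine, not the Java SDK's), assuming the plain and TLS port numbers listed above:

```python
from enum import Enum, auto

class MirrorTransportSecurity(Enum):
    DISABLED = auto()
    ENABLED = auto()
    AUTO = auto()

PLAIN_PORT = 5600  # plain gRPC mirror endpoint
TLS_PORT = 443     # TLS-encrypted mirror endpoint

def select_mirror_port(mode: MirrorTransportSecurity, tls_reachable: bool) -> int:
    """Return the port to dial for a mirror-node domain name.

    AUTO prefers TLS and falls back to a plain connection when the TLS
    endpoint refuses the connection (as the testnet/previewnet domains,
    which only expose port 5600, would).
    """
    if mode is MirrorTransportSecurity.ENABLED:
        return TLS_PORT
    if mode is MirrorTransportSecurity.DISABLED:
        return PLAIN_PORT
    return TLS_PORT if tls_reachable else PLAIN_PORT
```

The point of AUTO is that a single default works against both kinds of domain names without the caller knowing which endpoints exist.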
For the moment, we are leaving the configuration of the mirror network the same as it has been in previous releases until we are certain about what to do.
Alternatives
No response
@steven-sheehy @danielakhterov
We've decided that the solution outlined here was too convoluted to be implemented.
We have opted for a simpler solution instead: we've added setMirrorTransportSecurity(), but it's just a bool, like setTransportSecurity(), and the mirror network for mainnet now defaults to mainnet-public.mirrornode.hedera.com with transportSecurity = true
Big improvements are coming to the Umbraco NuGet package with regards to upgrading to a newer version. We need your help testing this process to make sure it's as good as it can be.
While we all love NuGet for the convenience in automation it brings, it also isn't exactly meant for packages like Umbraco. NuGet is great at managing dlls, but Umbraco also has a lot of configuration and content it brings in and we're really pushing the very borders of what NuGet can do.
But wouldn't it be nice if we could make upgrades even easier? What if "Update-Package UmbracoCms" would "just" work? Well, I've been working hard at that and the results are already in 7.2 beta 2. Now it's time for you to try it out, as I'm sure there will be setups that I haven't thought about, and I would love to help make the process better there as well.
So, give me 5 minutes of your time and do the following:
Note: If you use ReSharper, it has to be on the latest version (8.2.2) else you'll get.. er.. "interesting" errors. :)
The biggest thing: config transforms. I've gone all the way back to 6.0.1, found all the required web.config updates, and baked them into the NuGet package so they will be applied automatically. No more fiddling with merge programs; whatever Umbraco needs to run properly will be there.
We previously had a strange hack where we had to delete all the files from your bin folder in order for the NuGet upgrade not to fail (don't ask.. !). Some of this is still necessary, but now we only delete the files and dependencies that Umbraco ships with, so we'll no longer move all the dlls of your installed packages to the backup folder. Again, this is so that you don't have to take manual steps after doing the upgrade; we want to make things as smooth as possible.
Upgrades would fail if you still had the old "Install" folder in your site; this folder now gets deleted during upgrades. Instead of the install folder we now use an MVC route, so we don't need the folder any more and we no longer have to urge you to delete it (win-win!).
With that said, please do me, us, and the community a huge favor and test at least one of your sites. If you don't have time, I am more than happy to receive Dropbox/WeTransfer links to sites to test. I don't need your database (though it would be nice to see whether the whole upgrade succeeds), but your Visual Studio solution would be great. You know how to email me.
Remember, this blog does not send out notifications for new comments, so if you have problems make sure to create an issue on the tracker so you get notified of follow-up questions etc.
A big THANKS in advance!
DrupalCon Chicago Session Video
DrupalCon Chicago was quite a rush. It was the first Drupal event since we've hit a stable release of the Commerce modules, and the feedback from all directions was extremely positive. The video for my session, Drupal Commerce: Setting up shop on Drupal 7, was posted to archive.org, but the service there isn't the most reliable or feature-rich. As such, I've cross-posted the video to vimeo, so watch away!
Drupal Commerce at DrupalCon Chicago from Ryan Szrama on Vimeo.
The video includes a presentation of the overarching vision and architecture of Drupal Commerce followed by a technical demonstration using the Commerce Guys demo store.
As of DrupalCon Chicago, Drupal Commerce should be considered stable and ready for developers and advanced site builders. We're in solid beta territory and are fixing bugs in preparation for a final 1.0 release. The main limiting factor is the availability and stability of contributed modules.
Hi Ryan, fantastic stuff what you're doing with drupalcommerce. Just a quick question: how did you get the Acquia Prosper theme to work in D7?
I would very much like to migrate from D6 to D7, before it gets too complicated.
No secret sauce... this site
No secret sauce... this site itself is actually running on D6. ; )
My D7 / Commerce demo uses Corolla.
i love it!
a few days ago, after watching this video, I decided to give drupal commerce a try.
instead of forcing magento to do what I want
I had a lot of fun with d7 and your modules:
ok, there is still a lot to do (learn) for me
but today the first tickets were bought.
Very cool! Good luck with the
Very cool! Good luck with the rest of the learning. : )
video voice transcript: build a taxonomy-based product catalog
[see vital footnote]
[this is a transcript of the sound by one viewer, with a little bit of annotation]
Hullo everybody, this is Ryan Szrama with Commerce Guys.
I wanted to show you today how you can build a taxonomy-based product catalog in drupal commerce as I have done on my demo website. http://demo.commerceguys.com/dc/
if you look over here in my sidebar you will see that I have a catalog block that lists out catalog categories ...
["coffee holders", "conference swag", and "wearables"]
...that are actually Taxonomy Terms linking them to their Taxonomy Term Pages:
[this shows differently on the video because it shows the site when logged-on as admin. For admin, the term page shows a coffee holder with two tabs, "view" "edit", a paragraph describing the coffee mug and an add-to-cart form with a drop-down list, that moves your page from the one about black mugs to white mugs]
Term Pages in Drupal 7 have been enhanced a little bit, allowing you to specify custom urls [eg coffee-holders], allowing you to display a description on a page, and giving you both a view and a quick edit link here to edit the Taxonomy Term settings.
This particular one - Coffee Holders - has the description and it shows all of my -er different coffee holder products on the demo website.
This ["read more" link under the mug picture] is just a Node Teaser List of product display nodes. The product display node being a special node type that I've made that has both a
♦ Product Reference Field on it, that turns into this handy dynamic Add To Cart Form, and then it also has a
♦ Taxonomy Term Reference Field on it,
which you can see here lets me tag this node with a particular taxonomy term and links it back to its term page.
1 Create a taxonomy vocabulary
Now if you wanted to build something like this yourself, the first thing you would need to do, is to create a taxonomy vocabulary for your catalog:
admin>structure>taxonomy>edit vocabulary [pictures at about 1'24" on the video]
So you can see here my catalog vocabulary, and if we look at the terms I've listed, my three terms are each present, and each one of them has a name, a description, and a custom url alias that just provides a nice search engine friendly url for this term page on the front.
2 Go build a menu; enable a block
Once you've listed out each of your taxonomy terms, the next step is to go build a menu for this.
So I'm going to go to structure>menus, and you can see here that I have a catalog menu, where I have manually added links to each of the term pages.
[screen shows remembered "search engine friendly url" typed into the box. This can be found by going back to >structure>taxonomy>edit vocabulary to cut-and-paste]
Er - Whenever you create these links you can actually use the search engine friendly path that you have defined, and whenever you save this menu link, it will be converted to the actual Drupal path that has been assigned to that taxonomy term.
Whenever you create a menu you automatically get a block, that you can then enable, to show that menu in any of your sidebars. Here ...
structure>blocks [first option on the structure tab]
...you can see my catalog menu block has been placed into the first sidebar. This is a region in the Corolla theme, which now has to be installed after Adaptive Themes Core. Once installed it has a tab on the blocks menu. From that tab you see the options shown in the video, where shopping cart, catalog, user menu and user login are all selected for the first sidebar, and I've configured this block...
[from the "configure" link on the "catalog" line]
...to not appear on checkout pages - notice I've used checkout asterisk (checkout*) so it will match all of the checkout pages, so that whenever you go to checkout and are in any step of the checkout process, er, you do not have a sidebar. I did this to reduce distraction and noise on the checkout form so that the customer isn't distracted when they're trying to complete the checkout process and give you their money.
Once you have
♦ built the taxonomy vocabulary,
♦ the menu item,
♦ put your bloc in place...
3 Create a Product node type
the next step is to actually have nodes showing up in your teaser lists. Drupal Commerce will install a default product type whenever you first enable everything.
store>products second tab is "product type"
On this demo site I also have a T shirt product type, um, for my T-shirt products, er: I'll discuss that in a different screencast [about sized products].
Once you have product types though, the next step is to create a product display node type. So I'm going to go to my Content Types menu.
structure>content type second option on the structure list
You see here I have a product display node type, and the reason being: even though I have product types in the back end, and I can list out all the products on my website on the back end, there is no automatic point of display for them on the front end. We've separated out the front end from the back end in Drupal Commerce, er, so that you have a lot more freedom to determine how you want to display products to your customers. Whether it's through product display nodes as I am, or some other method involving Views, or Page manager and Panels, or something else entirely!
If we look at the fields that I have put on this product display node type, you can see both my product reference field, and my [Taxonomy] Term reference field.
I like the autocomplete textfield widget...
...because it lets me enter products on this node, using the product SKU with the product title with an autocomplete. And I can have as many as I want to, without having to bother with the multi-select select list, or perhaps just an overwhelming checkboxes list if you have many products on the website.
I also have a catalog category term reference select list.
So what you do is:
whenever you add a term reference field, you have to choose which vocabulary this is for, and then of course the widget - select list, autocomplete, radios [radio buttons] - so that on the product page - which I'll go to right now - um, so that on its edit form, you get to specify exactly how I'm denoting which catalog category this belongs to. So you can see here my Product Reference Field with the autocomplete, my catalog term reference field with the select list, and again how this is presented on the front end, with an add to cart form, and a link going back to the term page.
Well those are all the things that you need to know, to build your taxonomy based drupal commerce product catalog. Let me pull-up a .pdf here that shows you the different steps: [the order is slightly different on the .pdf]
♦ Create a "Catalog" taxonomy vocabulary, with terms for each of your categories.
♦ Create a "Product display" node type using a product reference field and a term reference field, and create nodes for your products.
♦ Create a "Cataolog" menu and display its block.
[There isn't a video on installing Kickstart, for example after a one-click scripted installation of Drupal. The files Kickstart files don't look like Drupal files for FTP installation either and the instructions are only for those who understand this:
"Installing with Commerce Kickstart
...a current bug in the installation profile packaging system on drupal.org prevents us from packaging a release. We'll host a zipped version here ASAP, but in the meantime it is available through its Git repository.."]
There's some discussion of this on a forum thread
Less vital footnote: I transcribed this for myself with comments and then posted to the web. Hope it's clear what's me and what's transcript and I'm happy to change any muddled bits - veganline.
http://vimeo.com/24275526 First steps in Drupal Commerce video
http://vimeo.com/24275526 "First Steps in Drupal Commerce" at Colerado 2011 (making a Drupal Commerce site from scratch) Now with a transcript on:
I've posted on another site because it's a bit informal and someone might want to post a more structured version here. I haven't worked out how the speech relates to what menu choices are selected on screen for example.
I still have to work out why it's difficult to set up a drupal commerce site. I thought it was the lack of step-by-step instructions for those who are in the dark, but it seems to be the difficulty of knowing what is difficult and needs to be stepped through. Those who give lectures state that it's the concept. There are other problems about loading Kickstart, my shared server host tells me.
Hope this helps someone.
|
OPCFW_CODE
|
creating loop over the lines of text files
i is an integer (let's say 4).
I have three text files (a, b, c), each containing one string per line; the total number of lines in each file equals i (4). For example, file "a" contains:
trm320
abc000
dfg1002
der5205
I need to create the output (on the screen or in the text file) with loop like;
a(1) b(1) c(1) (first line of a,b,c files)
a(4) b(4) c(4) (last line of a,b,c files)
What kind of loop do I need to create?
paste does exactly what you want.
DESCRIPTION
Write lines consisting of the sequentially corresponding lines from
each FILE, separated by TABs, to standard output. With no FILE, or
when FILE is -, read standard input.
Mandatory arguments to long options are mandatory for short options
too.
-d, --delimiters=LIST
reuse characters from LIST instead of TABs
In your case
paste -d " " a b c
will do the trick. If you need the output in a file, append > output to redirect it.
To access the n-th line of a file, use sed. For convenience wrap it up in a Bash function (pl is supposed to mean Print Line)
function pl {
sed -n "$1p" $2
}
Calling for example pl 5 a will print the fifth line of file a. To store it in a variable
fifth=$(pl 5 a)
or combine both tasks
paste a b c | pl 5 -
to print the fifth line of the concatenated file.
To get a file into an array, use mapfile, from this answer:
mapfile -t myArray < output.txt
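A minimal sketch tying these together for the original question: read each file into its own array with mapfile, then loop over the indices. File "a" uses the sample lines from the question; the contents of b and c are invented here.

```shell
# Sample data (file "a" is from the question; "b" and "c" are made up):
printf 'trm320\nabc000\ndfg1002\nder5205\n' > a
printf 'b1\nb2\nb3\nb4\n' > b
printf 'c1\nc2\nc3\nc4\n' > c

# Read each file into a Bash array, one element per line:
mapfile -t A < a
mapfile -t B < b
mapfile -t C < c

# Print the corresponding lines side by side:
for i in "${!A[@]}"; do          # "${!A[@]}" expands to the array indices
    echo "${A[$i]} ${B[$i]} ${C[$i]}"
done
```

Note that Bash arrays are 0-based, so the question's a(1) is ${A[0]}.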
+1 much better than any written bash script. Useful utility!
I also need to define variables. For example, when I type $a[1] I want to see the first line. Is there any way to define variables like that?
@Tim Why reinvent the wheel every time, right? The GNU toolbox holds a plethora of these useful helpers.
for example; sentence="this is a story"
stringarray=($sentence)
echo ${stringarray[0]}. I need to handle the lines of the text files in this way.
@deepblue_86 it's helpful to tell us that at the beginning. You could then cat them and add each line of file to an array
@deepblue_86 I would define a bash function using sed; see the addendum.
@Tim Thanks for the edit! Very useful and certainly the best way to read a file into an array.
|
STACK_EXCHANGE
|
Are there any good intermediate deck building resources available in magic
I've gotten to the place where the run of the mill 'basic deck-building' guides offer no more help to me and yet the advanced deck building resources are still a little out of my reach. I am wondering if anyone can recommend any good resources to me that might help.
I realize this is sort of a relative question, so I will provide a hypothetical question I might ask, given the level I am at in my current proficiency-
(this next paragraph is the set up for my hypothetical question)
Say, for example, I want to build a strong enchantment deck: I am at the point where I realize that enchantments have their weaknesses. Most notably, a removal spell can completely ruin all the momentum you have built up in the game with a single card. So, when building this deck, an important aspect of it should be built around either countering or protecting against such an event. So I should think about something like hexproof or being able to return cards from my graveyard.
Question: Whether I go for either hexproof or graveyard return, how many cards in my 60 card deck should I dedicate to this protection?
That's an example of the type of question I would be likely to ask at my current skill level. I would love to know any resources that would be good for someone at my stage in the game.
I am not looking for an answer to this hypothetical question, and I am not looking for an answer such as 'Put some in your deck and play it until it works.'
*I'm looking for a resource that provides, basic, guidelines for this type of thing, at this level.*
Can you give some examples of resources that you think are too advanced? Limited Resources (http://lrcast.com/) is mainly focused on limited but gives good general advice about Magic. Check it out.
Some resources that come to mind are, for example, the 'Next Level Magic/Deckbuilding' books by Patrick Chapin or a deck building series on YouTube by a user named xAmsterdamx. All of these resources are very helpful even to me, and I understand a lot of it. It's just that occasionally they will start to refer to things I've never heard of, or very specific situations that I have yet to encounter in the relatively short time I have been playing. So I'm looking for things that help bridge the gap, basically.
As far as I know, there is not a "catch-all" resource that can answer any intermediate question that anyone could possibly dream up. However, I think that you are looking for what I would call "intermediate building blocks".
Here are some building blocks that are relevant to nearly any deck build, but are not so hard to understand for an intermediate player. Please skip over any material you find to be too advanced (except for the rules - don't skip the rules).
#What are the rules?
Understanding the rules is the most important tool when building a deck. I find The Judge's Corner on Youtube to be a great resource for understanding the rules. Wizards also releases a "rules clarifications" article right before every new set is released. You can read it to gain a more in-depth knowledge of the more complicated cards.
For example, when Red Deck Wins was in the same meta as Master of Waves, I made a decision to leave Anger of the Gods in my sideboard because Skullcrack removes his protection, and I main decked four Skullcracks. Had I not known the rules, Anger would appear to be useless against Master of Waves.
#How much land?
In standard, this boils down to probabilities. If you have an iPhone or Android phone, I strongly suggest you download the free Manalyzer for Magic the Gathering app and play with the numbers. You want the smallest number of lands that will still give you a high probability of being able to play the cards that you are most likely to draw. There are countless other apps that were built to help with the math, so search for them.
Using an extreme case to serve as a basis for all other cases: if your deck consists of only 1-drops, and you have a 90% chance to have 3 lands by turn 3, you will probably run out of cards to play on turn 3. That may be good or bad, depending on what the cards are.
In short, you need to determine on what turn(s) you need your mana to be there for you. This ties in to the next topic.
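For readers without the apps, the underlying math is a plain hypergeometric probability. Here is a minimal sketch (the 24-land, 10-cards-seen numbers are illustrative, not a recommendation; it ignores mulligans and scry):

```python
from math import comb  # Python 3.8+

def prob_at_least(need, lands, deck=60, seen=10):
    """P(at least `need` lands among `seen` cards) for a `deck`-card
    deck containing `lands` lands -- hypergeometric distribution."""
    total = comb(deck, seen)
    hits = sum(comb(lands, k) * comb(deck - lands, seen - k)
               for k in range(need, min(lands, seen) + 1))
    return hits / total

# 24 lands, 60-card deck, 10 cards seen by turn 3 on the draw (7 + 3 draws):
print(round(prob_at_least(3, 24), 3))  # roughly 0.86
```

Playing with `lands` here shows the same trade-off the apps show: each land you cut lowers the chance of hitting your drops on curve.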
#Fundamental Turn
When does your deck win? I can't do this topic justice, so I suggest you read about it here, from the guy who coined the phrase.
Assume you have calculated your fundamental turn to be 6. You know that another deck that is in the meta has a fundamental turn of 4. What tools do you have to delay their fundamental turn? If you have none, you will lose. If you fill your deck with 1/3 creatures and he has a deck filled with 2/2 creatures, that will delay his fundamental turn. This leads to my last topic.
#The Clock
I can't find the article I wanted to reference here, so I'll reference this one instead.
The clock is basically "if nothing changes, given the knowledge and board state that I have, in how many turns will I win or lose". If your opponent has a 4/4 flyer and you have no board, no cards, and 20 life, you are on a 5 turn clock. If you are milling 5 cards per turn and you have 40 left, you are on an 8 turn clock.
You can anticipate changes to your clock by assuming that your opponent has specific cards in hand and knowing what cards remain in your library. This is where understanding your meta comes in.
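The clock arithmetic in the examples above is just a ceiling division; a tiny sketch (the function name is mine, not standard terminology):

```python
import math

def clock_turns(amount_remaining, per_turn):
    """Turns until a resource (life, library, etc.) runs out,
    assuming nothing on the board changes."""
    return math.ceil(amount_remaining / per_turn)

print(clock_turns(20, 4))  # 4/4 flyer vs 20 life -> 5 turn clock
print(clock_turns(40, 5))  # milling 5 per turn, 40 cards left -> 8 turn clock
```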
#The Metagame
There are other decks out there. The ones that show up to tournaments the most form the "tournament meta". The ones that show up to your local store form the "local store meta". If you play with friends, your friends form your meta. Some metas are more stable than others (like legacy), while some are constantly shifting (like your friends).
I determine my meta using TCG player latest decks, by watching the latest StarCityGames tournaments on Twitch.tv, by watching Grand Prix tournaments (also on Twitch), and by playing a lot.
Once you know your meta, you can calculate the fundamental turn of the decks you will be facing, and adjust your deck to handle it.
Now you have the following resources in your arsenal:
The Judge's Corner and other mtg related playlists you may find
The Manalyzer and other probability tools you may find
Knowledge of two important concepts: the fundamental turn and the clock
Your meta and 2-4 ways to discover and explore it
In the case that any of my links go bad, simply search for the bolded keywords. In the case that you know of a better article or tool than one I listed, please edit this answer with your awesome material.
|
STACK_EXCHANGE
|
Any ETA to 11.32?..
> I don't think that is realistic anymore and I'd assume earliest would be Q2 2012 based on current trends.
Our goal is to complete IPv6 support by cPanel & WHM version 11.36. We expect version 11.36 to be available by the end of calendar year 2011.
If you are making business plans to target specific dates, I advise waiting until the target version has propagated to the CURRENT update tier. Most cPanel & WHM users use the RELEASE update tier (the default setting).
MySQL 5.5 is now supported as of version 184.108.40.206. To see if this version has propagated to your update tier, visit Downloads - cPanel Inc.
When is RELEASE build coming?
I am using InnoDB tables and I want to ask: what do I have to do, before or after the WHM upgrade to MySQL 5.5, for everything to keep working?
Do I have to edit my.cnf, and what exactly?
MySQL will not automatically upgrade to 5.5 as part of the upgrade. You need to go to the MySQL Upgrade page in WHM (after upgrading WHM) to be able to upgrade to 5.5 if you want to. Otherwise, you'll stay on the major version of MySQL you are currently running (e.g. 5.0, 5.1 etc.)
folks upgrading may want to read all that is available at MySQL :: MySQL 5.5 Reference Manual :: 220.127.116.11 Upgrading from MySQL 5.1 to 5.5
Yes, I know, and I want to upgrade it. But after the upgrade from WHM, will it work out of the box for my InnoDB tables, or do I have to adjust anything?
Thanks for the link.
Ken had a post earlier in the thread the answered a variety of upgrade questions: http://forums.cpanel.net/f145/mysql-...tml#post905872 - note, you'll notice later in the thread that we are now defaulting to MyISAM rather than MySQL 5.5's manufacturer defaults.
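To illustrate the kind of my.cnf adjustments the 5.5 upgrade notes linked above describe (a hedged sketch, not cPanel-specific; option names are from the MySQL 5.5 changes, so verify them against your own config before upgrading):

```ini
# Options removed or renamed in MySQL 5.5 -- fix these before upgrading:
#   skip-locking          ->  skip-external-locking
#   default-character-set ->  character-set-server   (in the [mysqld] section)
[mysqld]
skip-external-locking
character-set-server = utf8
# 5.5 changes the default storage engine to InnoDB; set it explicitly
# if you depend on MyISAM for newly created tables:
default-storage-engine = MyISAM
```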
Not to be a pain, but I really want to sort this over my christmas break. Do you think 18.104.22.168+ is likely to hit release by Dec 25 2011? If not, will it be current a week or two before?
I know what the E in ETA means, but give me some hope!
Lamped.co.uk Web Development
- cPanel Inc.
Note, we typically do not do releases the last 2 weeks of the calendar year just because so many people that time of year tend to be away from the office. Just thought I'd get that out there for people looking to do things during their end-of-year vacation.
|
OPCFW_CODE
|
[Baypiggies] Avid is also hiring
n8pease at gmail.com
Wed May 18 22:56:12 CEST 2011
I guess I'll put my hand in the air too.
the Consoles group (the group I work in) at Avid Technologies (www.avid.com) is also hiring. We are looking for intermediate and advanced C++ engineers. We use Python extensively for automated test and build tools. Windows and/or Mac GUI chops would be a bonus. There isn't a formal req written yet, but ping me if you have any questions and/or if you'd like to submit a resume.
On May 17, 2011, at 7:13 PM, Glen Jarvis wrote:
> And, as with Aahz and Simeon, my company (mobile spinach) is also hiring. We're trying to find talented software engineers .... I inherited my job and that of another person. And, I'd LOVE to find someone who would work very close with me on projects. We are the backend team that supports the JQuery mobile front end. We do fun things with south, piston, and Django.
> I really like the personalities in the team :). We just need one more so I can go home at night :)
> Let me know if you're interested...
> On May 17, 2011, at 5:09 PM, Aahz <aahz at pythoncraft.com> wrote:
>> I figured I'd try a more narrative approach this time around. We're
>> looking to hire two or three competent Python programmers. We're looking
>> for people who can handle server, client, and web programming. Although
>> our approach is to hire flexible people who are good at solving problems,
>> there are some specific skills we're looking for:
>> * Algorithms/scaling
>> * Windows
>> * Virtual machines and file systems (requires some C programming)
>> I've been working here for almost two years, and I think I have a great
>> job. I do lots of different things, I'm rarely bored. We don't require
>> degrees, we're more interested in what you can do (I don't have a degree,
>> For more info, see
>> Feel free to send your resume either directly to me or to jobs at egnyte.com
>> (if the latter, be sure to mention BayPIGgies).
>> Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/
>> Looking back over the years, after I learned Python I realized that I
>> never really had enjoyed programming before.
>> Baypiggies mailing list
>> Baypiggies at python.org
>> To change your subscription options or unsubscribe:
> Baypiggies mailing list
> Baypiggies at python.org
> To change your subscription options or unsubscribe:
More information about the Baypiggies
|
OPCFW_CODE
|
A circuit that is net charged
What differences would you measure if a circuit were significantly charged negatively? Would the resistance change? To be clear, I mean that excess electrons are added to the system. The circuit can be of any kind you can imagine.
That's a good question. The best way to answer it would be to do that experiment.
The experiment is impossible to do (at least with usual materials at a macroscopic scale). The only way to significantly change the number of charges in a material is to dope it with other atoms, which is exactly what is done in semiconductors. The reason is that when adding significant amounts of electrons you must also add stationary positive charges, otherwise the system will lose the charges to the environment quickly or fall apart.
There are broadly three classes of materials: conductor, semiconductor, insulator.
The conductor contains a LOT of electrons per unit volume. If you were to charge it, you would add a few more electrons. How many?
Let's take copper. It has roughly $8.5\cdot 10^{28}$ electrons per $m^3$. If you have a wire of radius $r$ the number of electrons scales with $r^2$ and capacitance scales with $\log{r}$. So the thinner the wire, the more important the effect of surface charge on total number of electrons.
I will leave it up to you to calculate how thin a wire would have to be before surface electrons contribute significantly to the measured resistance. A quick "back of the iPhone" estimate: For a macroscopic wire capacitance might be a few 100 pF per meter so you could get about $10^9$ electrons per meter on the surface. That would be roughly the same number of electrons as we can get in a 1 nm diameter wire (assuming that at that curvature a wire can hold 1 Volt without discharge to the air - which seems unlikely...) Good luck measuring that.
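For readers who want to check the arithmetic, here is a quick numeric sketch; all inputs are the estimate's own assumed values (100 pF/m, 1 V, 1 nm diameter, copper's electron density):

```python
import math

e = 1.602e-19    # elementary charge, C
n_cu = 8.5e28    # conduction electrons per m^3 in copper (from the text)

# Surface charge on ~100 pF per metre of wire held at 1 V:
surface_electrons_per_m = 100e-12 * 1.0 / e
print(f"surface: {surface_electrons_per_m:.1e} electrons/m")  # ~6e8

# Bulk conduction electrons in 1 m of a 1 nm diameter copper wire:
r = 0.5e-9
bulk_electrons_per_m = n_cu * math.pi * r ** 2
print(f"bulk:    {bulk_electrons_per_m:.1e} electrons/m")     # ~7e10
```

The two counts come out within a couple of orders of magnitude of each other, so treat the "roughly the same number" comparison above as order-of-magnitude only.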
For semiconductors and insulators the number of charge carriers is smaller. This will make the math slightly more favorable. But note that surface effects (would surface electrons even contribute to conduction?) would be very important to consider - the number of electrons in an insulator does not tell the whole story (there are plenty of electrons but they are not free to move. Not at all obvious it would be different for surface charge).
To do this experiment, you will need:
1> a grounded, shielded cable (to prevent electromagnetic interference),
2> an ability to apply a voltage from the cable to ground (say a battery, or better a variable DC source, with the positive terminal grounded and the negative one attached to the cable), and
3> a sensitive ohm-meter.
Then compare the ohms through the cable with voltage applied or not to the cable, to generate more electrons. Keep a log of your results. It might be wise to compare various materials for the cable, to see if some materials have more "hole effect" than others. I would also advise not going above two car batteries in voltage unless you are a trained electrician. (24 volts)
This way you cannot reach "significant" charge (as the whole circuit will not have more than a few nF of capacitance with respect to ground).
|
STACK_EXCHANGE
|
this graph contains an operator of type SquaredDifference for which the quantized form is not yet implemented
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04):Ubuntu14.04
TensorFlow installed from (source or binary):pip tf-nightly
TensorFlow version (or github SHA if from source): tf-nightly1.13.0.dev20181216
Provide the text output from tflite_convert
I convert pb to lite use the following code:
```python
import tensorflow as tf

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    'tflite_graph.pb', ["input_image"], ["result"],
    input_shapes={"input_image": [1, 626, 360, 3]})
converter.allow_custom_ops = True
converter.inference_type = tf.contrib.lite.constants.QUANTIZED_UINT8
converter.quantized_input_stats = {"input_image": (0., 2.)}
converter.default_ranges_stats = (0, 6)
tflite_quantized_model = converter.convert()
open("model.tflite", "wb").write(tflite_quantized_model)
```
I get the following error:
2018-12-21 11:26:06.351171: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-12-21 11:26:06.354986: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency:<PHONE_NUMBER> Hz
2018-12-21 11:26:06.355300: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x53e0ee0 executing computations on platform Host. Devices:
2018-12-21 11:26:06.355325: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
Traceback (most recent call last):
File "test.py", line 25, in <module>
tflite_quantized_model=converter.convert()
File "/home/zhoushaohuang/Virtualenv/python3.4/lib/python3.4/site-packages/tensorflow/lite/python/lite.py", line 455, in convert
**converter_kwargs)
File "/home/zhoushaohuang/Virtualenv/python3.4/lib/python3.4/site-packages/tensorflow/lite/python/convert.py", line 442, in toco_convert_impl
input_data.SerializeToString())
File "/home/zhoushaohuang/Virtualenv/python3.4/lib/python3.4/site-packages/tensorflow/lite/python/convert.py", line 205, in toco_convert_protos
"TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2018-12-21 11:26:07.312638: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 168 operators, 271 arrays (0 quantized)
2018-12-21 11:26:07.314127: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 168 operators, 271 arrays (0 quantized)
2018-12-21 11:26:07.323240: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 102 operators, 183 arrays (1 quantized)
2018-12-21 11:26:07.324611: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 96 operators, 171 arrays (1 quantized)
2018-12-21 11:26:07.325812: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 96 operators, 171 arrays (1 quantized)
2018-12-21 11:26:07.326413: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After pre-quantization graph transformations pass 1: 90 operators, 165 arrays (1 quantized)
2018-12-21 11:26:07.327324: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before default min-max range propagation graph transformations: 90 operators, 165 arrays (1 quantized)
2018-12-21 11:26:07.327972: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 90 operators, 165 arrays (1 quantized)
2018-12-21 11:26:07.328720: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 90 operators, 165 arrays (1 quantized)
2018-12-21 11:26:07.328791: W tensorflow/lite/toco/graph_transformations/quantize.cc:127] Constant array conv1/conv/weight lacks MinMax information. To make up for that, we will now compute the MinMax from actual array elements. That will result in quantization parameters that probably do not match whichever arithmetic was used during training, and thus will probably be a cause of poor inference accuracy.
2018-12-21 11:26:07.328936: F tensorflow/lite/toco/graph_transformations/quantize.cc:491] Unimplemented: this graph contains an operator of type SquaredDifference for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).
Aborted (core dumped)
Also, please include a link to a GraphDef or the model if possible.
I use SquaredDifference in following code:
```python
def instance_norm(x):
    epsilon = 1e-9
    mean = tf.reduce_mean(x, [1, 2])
    mean = tf.expand_dims(mean, [1])
    mean = tf.expand_dims(mean, [1])
    s = x.get_shape()
    var = tf.reduce_sum(tf.squared_difference(x, mean), [1, 2],
                        keep_dims=True) / (s[1].value * s[2].value)
    result = tf.div(tf.subtract(x, mean), tf.sqrt(tf.add(var, epsilon)))
    return result
```
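Since the failure is only the missing quantized kernel for SquaredDifference, one possible workaround is to rewrite the variance so the converter never sees that op: compute d = x - mean once and square it with a multiply (Sub and Mul both have quantized forms). Below is a NumPy model of the refactor, a sketch only; I have not run it through TOCO. The TF analogue would replace tf.squared_difference(x, mean) with tf.multiply(d, d):

```python
import numpy as np

def instance_norm_np(x, eps=1e-9):
    # Same math as the instance_norm above, but the variance uses d * d
    # instead of a squared-difference primitive.
    mean = x.mean(axis=(1, 2), keepdims=True)   # per-sample, per-channel mean
    d = x - mean
    var = (d * d).sum(axis=(1, 2), keepdims=True) / (x.shape[1] * x.shape[2])
    return d / np.sqrt(var + eps)

x = np.random.rand(1, 4, 4, 3)
y = instance_norm_np(x)
print(np.allclose(y.mean(axis=(1, 2)), 0, atol=1e-6))  # normalized: per-channel mean ~ 0
```

Because (x - mean)^2 and d * d are mathematically identical, accuracy should match the original graph if the conversion succeeds.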
Any other info / logs
I generate pb use the following code:
```python
input_saver_def = saver.as_saver_def()
frozen_graph_def = freeze_graph.freeze_graph_with_def_protos(
    input_graph_def=tf.get_default_graph().as_graph_def(),
    input_saver_def=input_saver_def,
    input_checkpoint=FLAGS.model_file,
    output_node_names='result',
    restore_op_name='save/restore_all',
    filename_tensor_name='save/Const:0',
    clear_devices=True,
    output_graph='',
    initializer_nodes='')
binary_graph = 'tflite_graph.pb'
with tf.gfile.GFile(binary_graph, 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())
```
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
@MrCary, @samuallin : Will it be possible for you to share sample code to reproduce the issue?
I believe your sample code is incomplete.
Otherwise you can share the model itself.
Thanks!
Marking issue as resolved due to inactivity. Feel free to re-open this if it's unresolved or file a new issue
|
GITHUB_ARCHIVE
|
Logs all allocated memory to file.
A fully qualified valid file name. The file's folder must exist and be writable. The file will be (re)created.
ACount [in, optional]
True - group duplicate memory blocks, False - report each memory block individually.
AGroup [in, optional]
True - group child blocks with parent, False - report all memory blocks.
AGroupHeuristic [in, optional]
True - group child blocks with parent objects by using call stack heuristic, False - group only explicitly parented blocks. This argument is ignored when AGroup is False.
ASort [in, optional]
True - sort memory blocks, default sort is: blocks with most count (i.e. same type and call stack) will be first, then blocks of bigger size, same size is sorted by type: first - objects, then dyn-arrays/strings, then RAW data; False - do not sort memory blocks, list blocks in order of allocation (older - first, newer - last).
AFilter [in, optional]
A custom filter for filtering out memory blocks.
ACompare [in, optional]
A custom comparator for sorting.
ACompareForCount [in, optional]
A custom comparator for counting. Positive and negative results are considered to be the same (e.g. "not equal"). (This argument could be the same as ACompare)
This function will log all allocated memory blocks to the specified file. Each memory block will be logged with its call stack, properties (i.e. name, type, size, count) and a short memory dump. Additionally, total memory statistics will be written.
This is a diagnostic function which can be used to find the reasons for high memory usage, collect memory usage statistics, find hidden memory leaks, etc.
DumpAllocationsToFile will count memory blocks. Memory blocks are considered to be the same if they have the same size, type, name and call stack (up to 6 lines). Such memory blocks will be grouped together: only the first block will be logged (with its call stack, dump and properties). The other blocks will not be logged; instead, a count will be added to the header of the first logged memory block.
You may override a default comparison method by specifying your own ACompareForCount.
Typically, you are interested in memory blocks with higher counts. For example, when a memory block is allocated 400 times, it may be an indication that such memory blocks are not released when needed.
DumpAllocationsToFile may group child leaks with their parent leak. Any memory block can be declared as a child of another memory block via the MemLeaksSetParentBlock or MemLeaksOwn functions. Additionally, any memory allocated within the constructor of an object will be declared as child memory of that object; this is determined by a heuristic check using call stacks.
Grouping by parent allows you to view many memory blocks as a single object. This feature simplifies the report: it filters out unimportant entries, showing only the root parent for all its children. The idea is that if the parent is leaked, then all of its children are leaked as well. However, you are not interested in the child leaks, because they are not the bug; the bug is the leaked parent. By hiding leaked children and showing the leaked parent, you can easily identify the root cause: once you fix the parent leak, the child leaks disappear too. Of course, if the parent is not leaking but some child is, that child will be included in the report.
If you get many unrelated memory blocks included in your report, you can supply a custom filter function (AFilter) to remove particular memory blocks from the report. Since run-time addresses change on each launch of the application, you cannot use an exact address to filter out memory blocks; you have to rely on a block's size, type, call stack, name, or perhaps its allocation order.
Not all memory blocks are equally interesting. For example, RAW data is often allocated from objects, so objects are usually more interesting than RAW data.
You can use sorting to place blocks of interest first. The default sorting prefers repeated blocks (with higher count), then large blocks (by grouped total size, not the size of single blocks), then the block's type (object, array, string or RAW), then named over unnamed.
You can also supply a custom comparator function (ACompare) to define your own preferred sort order.
Usually it is difficult to identify memory blocks inside a memory report. While there is a lot of information (call stack, type, size, dump), this is not enough in many cases. For example, if memory is allocated in a loop (such as a "while" loop that reads a document and creates entries for each of the document's blocks), such memory blocks will look the same in the report, and it is not possible to distinguish between them.
Headers for memory blocks indicate two sizes: single and total. Single is the size of each memory block inside the entry. Total is the size occupied by the entire entry. For duplicate (repeated) blocks: total size = single size * count. For parent-child grouping: parent total size = single parent size + the sum of each child's total size.
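As a rough illustration of the counting and default sort rules described above, here is a small Python sketch. The record fields and the kind ordering are assumptions based on this description, not the library's actual data structures:

```python
from collections import defaultdict

# Kind rank for the default sort described above: objects first,
# then dynamic arrays/strings, then RAW data (assumed encoding).
KIND_RANK = {"object": 0, "array": 1, "string": 2, "raw": 3}

def group_blocks(blocks):
    """Group blocks sharing size, type, name and call stack (up to 6 frames)."""
    groups = defaultdict(list)
    for b in blocks:
        key = (b["size"], b["kind"], b["name"], tuple(b["stack"][:6]))
        groups[key].append(b)
    return groups

def default_sort(groups):
    """Most-counted entries first, then bigger grouped total size, then kind."""
    entries = []
    for (size, kind, name, stack), items in groups.items():
        entries.append({
            "count": len(items),
            "single_size": size,
            "total_size": size * len(items),  # total = single size * count
            "kind": kind,
            "name": name,
        })
    entries.sort(key=lambda e: (-e["count"], -e["total_size"],
                                KIND_RANK[e["kind"]]))
    return entries
```

A block allocated many times from the same call site ends up as one entry with a high count, which is exactly the kind of entry the report puts first.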
|
OPCFW_CODE
|
Microsoft navision user manual
Name: Microsoft navision user manual
File size: 192mb
24 Nov Am requesting for the user and Admin manual for Navision ,i can be deploy, and configure Microsoft Dynamics NAV Deployment. 24 Jan Hello every one, I have worked in NAV R2 and recently started working in NAV As I am a new user of MS Dynamics NAV , can. 19 Apr Could you please provide a link for a (PDF) user guide/ manual for Dynamics NAV? Supposedly, per module. For example: General Ledger.
How to Obtain a Partner Development License for Microsoft Dynamics NAV . Overview of Training Manuals for Microsoft Dynamics NAV , User Guides, This . 18 Jun Back when I first recommended Microsoft Dynamics NAV to my began to look for a user group for Microsoft Dynamics NAV, and I found one!. FCS user manual. 4. 2. Integration with Microsoft Dynamics NAV. One of FCS 's main features is to be a “perfect – complementary” product for. Microsoft.
User Guide for Navision / Dynamics NAV accounting and ERP software. Includes detailed step-by-step instructions with screen shots for many of the more. 2 Jul I was delighted to be speaking at the NAVUG networking event hosted at the Arsenal Football Club PLC on the 1st of July My approach. Microsoft Dynamics User Instructions. Diagram 1. To login click on the icon and the screen above will appear. Ensure that the Server dropdown is. “Dynamics GP . I'm studing to take the exams about Navision but i don't have the training manuals, can anyone send to me? I'll apreciatte the folowing manuals: Explain the concept of ERP and Microsoft Dynamics NAV • Describe . User manuals are delivered escale-du-bac.com files together with the product software. They.
Microsoft Dynamics NAV Training Materials Classic (v / ). What's New with Document Approval – Setup Guide · Document Approval – User Guide. 6 Feb The Microsoft Dynamics GP team moved from paper manuals to PDF based manuals some time ago. 23 Jun This Microsoft Dynamics NAV Product Guide details many new features in In- Office Connection; Enhanced User Experience; Embedded. The Dynamics NAV User Group (NAVUG) is proud to provide training and education specific to Microsoft Dynamics NAV Microsoft Dynamics NAV focuses on a deeper integration with Microsoft products such Instruction Guide.
29 Aug of Microsoft's most popular ERP solution, Microsoft Dynamics NAV These guides and tutorials will guide you through of some of the. 28 Jan Microsoft Dynamics NAV Windows client or the Microsoft demonstration shows how to manually handle cases where a user has paid the. Capability Guide. Microsoft Dynamics user experience across devices makes it easy for your people to. FRPSOHWH manually in Microsoft Dynamics NAV. We are one of the largest Dynamics NAV houses in Denmark with more than 80 employees in four different departments – in Copenhagen and. Aarhus in.
|
OPCFW_CODE
|
Replace the if, else-if, else below in a shorter way
I would like to replace the if / else-if / else below with something shorter in JavaScript. How can I do that with a ternary operator or filter? Could someone please advise? I have added the ternary below, but I don't know how to include the else condition.
Then('I type {string} in {string} field', (textval, textboxname) => {
if (textboxname == "Trade Price") {
cy.get('#DetailsContainer .allowDigits.tradePrice-js').clear().type(parseInt(textval));
} else if (textboxname == "Email" || textboxname == "Work Phone" || textboxname == "Mobile Phone"|| textboxname == "Home Phone") {
cy.get('#content_container').parent().find('.fieldHeader').contains(textboxname)
.next().find('input')
.type(textval, { force: true })
} else {
cy.get('#content_container').parent().find('.fieldHeader').contains(textboxname)
.next()
.type(textval, { force: true })
}
});
//Ternary operator
function example(…) {
return textboxname ? "Trade Price"
: textboxname ? "Mobile"
: textboxname ? "Work Phone"
: textboxname? "Home Phone"
: value4;
}
Is your else code the same as your else if code? Do they need to be distinguished?
In your ternary operator the last statement is your final else
@Ry- I have added .find('input') in else if condition which is different from else..Sorry I missed that
You don't seem to be using the ternary operator the right way. In order to refactor your first snippet of code into a ternary expression, you would do it like this:
return (textboxname === "Trade Price" ? {...code to be executed for condition}
: (textboxname == "Email" || textboxname == "Work Phone"
|| textboxname == "Mobile Phone"|| textboxname == "Home Phone")
? {...code to be executed for condition}
: {..final else condition}
);
BUT, in use cases like these, it is much cleaner and preferable to use switch-case, as you are only concerned with the value of one variable and need to perform operations accordingly:
switch(textboxname) {
case "Trade Price":
cy.get('#DetailsContainer .allowDigits.tradePrice-js').clear().type(parseInt(textval));
break;
//perform same code for these 4 cases
case "Email":
case "Work Phone":
case "Mobile Phone":
case "Home Phone":
cy.get('#content_container').parent().find('.fieldHeader').contains(textboxname)
.next().find('input')
.type(textval, { force: true });
break;
//final else condition
default:
cy.get('#content_container').parent().find('.fieldHeader').contains(textboxname)
.next()
.type(textval, { force: true });
}
Much cleaner than writing ternary operators for this kind of thing, as you can see.
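A third option is a dispatch table keyed by the field name. The selectors below are taken from the question; to keep the sketch self-contained, the handlers just return the selector and value they would use, while real code would call cy.get(...) inside each handler:

```javascript
// Sketch: dispatch table keyed by field name (names from the question).
const handlers = {
  "Trade Price": (textval) => ({
    selector: "#DetailsContainer .allowDigits.tradePrice-js",
    value: parseInt(textval, 10),
  }),
};

// These four fields share one handler (the .find('input') branch).
for (const name of ["Email", "Work Phone", "Mobile Phone", "Home Phone"]) {
  handlers[name] = (textval) => ({
    selector: ".fieldHeader + input",
    value: textval,
  });
}

function resolve(textboxname, textval) {
  const handler = handlers[textboxname];
  if (handler) return handler(textval);
  // Default branch: the original final else.
  return { selector: ".fieldHeader + *", value: textval };
}
```

Adding a new field then means adding one entry to the table rather than another branch.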
I will give a try and let you know shortly.
Fantastic great way to use switch
|
STACK_EXCHANGE
|
Hi, I have just installed Lion and the Lion update for Scrivener, but can’t get any document to scroll, which makes it unusable at the moment. Any suggestions appreciated.
I’ve seen one other user report something similar, but I’ve never seen this, and all of Scrivener’s scroll views are standard and handled by OS X. What scrolling device are you using? Does TextEdit scroll? Have you checked your System Preferences?
Are you aware that Lion has switched the direction for scrolling on trackpads, so that down is now up and up is now down? Could that perhaps be the issue?
I noticed that backward scrolling on my Magic Trackpad.
Reversed its direction, with the pad tilting downhill.
Now scrolling is normal. Except left and right are reversed.
You can switch that reversal off in the Trackpad System Preferences pane; Scroll & Zoom tab. With it back to the old way, you can turn the trackpad around and get your lefts and rights back to where they should be, too.
I am having this issue too. It was happening before and after the update. I can scroll the pane with the list of documents, but I cannot scroll in the editor. And yes, I am scrolling the “lion” way. I can also scroll in TextEdit just fine.
Anyone have any idea? I am so frustrated
I can scroll when I slightly resize the window. I also can scroll in the new Full Screen mode. But when I open a project, I can’t scroll until I resize the window.
Could you try opening Console and pasting any messages here? Activate Finder, press Shift-Cmd-U to open your Utilities folder, and double click on the Console icon. Make sure the “All Messages” item is selected in the sidebar (click “Show Log List” if necessary). Then try to scroll, and copy paste anything that pops up there to a response.
Update: Interesting, so can you reproduce this in any other project you’ve not opened and fiddled with yet in Lion? Does the bug go away forever in a project once the window is resized, or does it re-appear after a re-launch and require another resize event to fix it?
Is this in page view mode or regular mode? If in page view mode, does the problem only occur in page view mode?
Keith, this is in Page View.
Amber, very strange. Once I resize the window, the problem does not come back. When I close the app and open it again, the problem goes away.
Could it be that it only happens when a window is a certain size? Very weird. It may just be a quirk in Lion’s scrolling, and Scrivener happened to be open at just the right size, etc.? I don’t know.
I will let you know if it comes back. Right now this is the only project I have going, so I don’t have any others to try it with. Hmmm.
That’s very strange that it stayed fixed after closing and reopening the project - good, though! It does indeed sound like a particular UI size was causing it. Fingers crossed that it only affects projects newly opened on Lion, but let us know if it happens again. cantata8660 found exactly the same thing - he scaled the text to 100% then back to 150% and the problem went away, and stayed fixed after quitting and re-launching. So, so far at least, it seems like a settings issue on some projects that are newly opened on Lion, and that resizing or rescaling clears it. We’ll keep our eyes out though.
I’m experiencing the same issue. Scrivener 2.1 on Lion. I get expected scroll functionality at all text scales with the exception of 150%. If I scale the text to a different percentage and return to 150% I again lose scrolling. I’ll simply use 125% until you sort this out. Let me know if providing more info will help you troubleshoot the problem.
Interesting. Could you please let me know the following:
Is this in all views? That is, does this happen in the main window, or only when in Lion’s full screen mode, or both?
What about in Composition Mode?
What happens if you resize the window slightly? Does scrolling return?
Are you in page layout mode or just regular text-wraps-to-fit-the-width-of-the-editor mode?
If you are in full screen mode, what happens if you turn off fixed width in the Editor pane of the preferences?
EDIT: I just briefly reproduced it myself. I entered full screen in page layout mode with the pages centred, and hid the inspector. After that, the scroll bar moved down the page, but the pages did nothing. After resizing the binder, I was then unable to reproduce it no matter what I did, even reopening projects. It seems like some Lion bug where scroll views don’t like certain scales when the scale has resulted in a particular view size (possibly non-integral values?). Hmm. If I could reproduce it consistently I’d be able to investigate better, so anything you can think of would be useful.
Thanks and all the best,
Not a problem here at all, in any % view and with scrolling on either mouse or trackpad.
It only seems to affect page layout mode. It doesn’t matter if I’m in full screen or windowed mode.
Scrolling actually seems to work in composition mode at 150% text size and the others.
Resizing the window has no impact (actually it seemed to fix it on one occasion, as I typed this). It seems to be related to 150% size in page view mode.
no change that I can tell.
Thanks for your help. I am at 125% scale for now
Do you have page layout centred? If so, what happens if you change the preferences to have page layout on the left instead (un-tick “Center pages” in the Editor pane of the preferences)? My guess is that the centring scroll view is the culprit.
good thinking! Sure enough, when I deselected center pages, the document scrolled as it should. I’ll try it on a few more documents to be sure, and let you know if it does not fix all for the time being.
Great, thanks! I have an idea, which has worked in brief testing, for a different way of doing the centring. I’d be grateful if you could confirm that this does indeed only present a problem when page layout view is centred before I go ahead and start coding, though. If so, and if you can reproduce the problem consistently in this situation, then once I have something working I’d be grateful if you could test it for me seeing as I can’t get the problem to show up now.
All the best,
I tried about 8 different documents including the interactive tutorial and some of my own creation and found consistently that at 150% and centered in page view, no scrolling occurs. Uncheck centered and leave the other variables unchanged and scrolling works for me.
I’d be happy to test a fix.
Appreciate the attention you’ve given this. Definitely exceeds my expectations. Thanks!
I’m having a problem scrolling in Lion. One project is working fine. The other won’t scroll when the binder is showing, but will (centered and not centered) when I close the binder. I can show the binder and have the editor not centered and it does scroll.
Jenny, just to clarify - the scrolling problem only occurs when you are in page view and it is centred, right?
|
OPCFW_CODE
|
ExpertPdf Html To Pdf Converter v15.0.0 + Activation Key
What is ExpertPdf Html To Pdf Converter?
ExpertPdf Html To Pdf Converter is a .NET library that lets you create PDF documents from web pages or raw HTML markup in your .NET Framework or .NET Core applications.
ExpertPdf has been in business since 2007 and is used by thousands of companies worldwide. Here you can view a list of our best known customers. Check out the following page if you are interested in the version history of our html to pdf converter.
Key Features of ExpertPdf Html To Pdf Converter:
- Convert from url (web page) to pdf: With ExpertPdf Html To Pdf Converter you can convert full web pages to pdf. The web page can be an url or a local html file.
- Convert from html string (markup) to pdf: ExpertPdf Converter can convert a raw html text. You can specify a base url if the html code references external css files or images.
- Multiple output options: The generated PDF can be output to a file, stream, byte array, or a PDF document object that can be further processed by our SDK.
- Set page size and margins: With ExpertPdf you have full control of pdf page size, orientation, margins and many more elements.
- Set headers and footers: You can have headers and footers displayed in all pdf pages. Full html support is included for these sections also.
- Preserve html links in pdf: ExpertPdf Html To Pdf Converter can preserve the links from your web page into pdf. Alternatively you can disable the links if you do not want them to appear in PDF.
- Automatic and custom page breaks: ExpertPdf automatically inserts page breaks when needed, paying attention not to break lines of text or images. Custom page breaks can be added using simple page-break-before and page-break-after css styles.
- Convert only a part of the web page to pdf: If you do not want to convert the whole web page to pdf, you have the option to convert only a section of it, specified by the html element id.
- Hide some elements from the page when converted to pdf: If you need to convert most of your web page to pdf, this feature allows you to hide certain elements, like a print button or menu.
- Merge several web pages into the same pdf document: With ExpertPdf Pdf Library for .NET you can add several html pages to the same pdf. You can also merge existing pdf documents with the one that is being generated.
- Convert to pdf web pages that require authentication: ExpertPdf supports Windows Authentication (automatically login for the current Windows user), HTTP Authentication (set user and password) and Forms Authentication (pass the application cookies to the converter).
- Select css media type for rendering (screen or print): Many websites have printer friendly style sheets. ExpertPdf can convert to pdf the webpage displayed using @media print instead of the default @media screen.
- Pdf bookmarks support: ExpertPdf offers full control of pdf bookmarks (outlines). You can also set the converter to automatically generate bookmarks (outlines) based on certain tags or css classes from the converted webpage.
- Digital signatures support: ExpertPdf Converter supports digital signatures. Other pdf security options, like setting a password or controlling document permissions, are also available.
- Possibility to retrieve html elements positions in pdf: ExpertPdf can give full details about certain html elements positions in pdf. For example, you can know where a certain image from your webpage was added to the pdf document.
- Support for web fonts (open type, true type or woff): Our pdf converter has full support for locally installed TTF or OTF fonts and can also handle web fonts (TTF or WOFF format).
- No external dependencies (internal browser for html rendering): Starting with v9, besides the IE html rendering engine used by the older versions, a new rendering engine was added. The new rendering engine is internal, with no 3rd party dependencies. It is based on WebKit and can render html5/css3.
- Very easy to use: ExpertPdf Pdf Converter is very easy to use. You can convert a web page to pdf with a single line of code.
Click on the link below to download ExpertPdf Html To Pdf Converter NOW!
DOWNLOAD NOW !
|
OPCFW_CODE
|
Test coverage of active processes and autolaunched flows is 0%, but at least 75% is required. How to identify which ones require test class
With the Spring '19 release we would need to have test coverage for flows as well. The question I had was: how do we find out which flows need coverage?
Does "flows" mean just visual flows, or does it include workflows and process builders?
Update: Can we query for flows which require test coverage? I don't mind writing test classes for them; I just need to know which ones need them, at least to get the current change set out.
I don't have any flows or processes in the change set that is being pushed to production.
UPDATE: It looks like this error message keeps coming up if there are other errors during deployment, like a test class failure or a code coverage issue on other triggers/classes. Once you clear the other errors, this "Test coverage of active processes..." error also goes away.
I hope ProcessBuilders are included. Our PM just had us create test classes for all of them...
I hope it's only process builders and not workflows; I would need to create test classes for a lot of them :(
visual workflows are definitely not included. I would think - but that's an assumption - that it's only automated flows.
Looks like this error comes up alongside other errors, such as test class failures or code coverage issues on other Apex classes or triggers.
If you get this error even though you don't have any processes in the package, it is best to ignore it, clear the other errors, and try deploying your change set again. It should go through.
I spent a lot of time trying to resolve something which never existed.
It covers Process Builder and autolaunched Flows, but not Screen Flows or Workflow Rules. From the Winter '18 release notes:
In production orgs, a new setting lets you deploy a new active version of a process or flow via change sets or Metadata API. This setting doesn't appear in non-production orgs (such as scratch, sandbox, and developer orgs), because you can always deploy a new active version.
When you deploy an active process or flow in a production org, Salesforce runs your org’s Apex tests and confirms that enough of your processes and flows have test coverage. Specifically, the Apex tests must launch at least 75% of the total number of active processes and active autolaunched flows in your org.
(Emphasis mine).
Spring '19 introduces some enhancements to how you can test processes and flows, and also allows you to track coverage more easily:
If your org uses Apex tests to validate processes and autolaunched flows, you’re probably interested in knowing what your flow test coverage is. We’re introducing two Tooling API objects that you can query to calculate test coverage for processes and autolaunched flows.
(Emphasis also mine).
If you're not using the new tool to deploy Flows as active, you don't need to worry about coverage level.
That's interesting... I am not pushing any new flows or processes in my change set but still get this error on deployment.
Are you pushing updated Flows or Processes? Have you activated the setting in production that allows for deployment of active Flows and Processes?
No.. just a few custom objects and triggers
I found this article which exactly answers your question.
https://developer.salesforce.com/docs/atlas.en-us.salesforce_vpm_guide.meta/salesforce_vpm_guide/vpm_admin_deploy_active.htm
To get the names of all active autolaunched flows and processes that don’t have test coverage, use this query.
SELECT Definition.DeveloperName
FROM Flow
WHERE Status = 'Active'
AND (ProcessType = 'AutolaunchedFlow'
OR ProcessType = 'Workflow'
OR ProcessType = 'CustomEvent'
OR ProcessType = 'InvocableProcess')
AND Id NOT IN (SELECT FlowVersionId FROM FlowTestCoverage)
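The 75% requirement quoted above is just a ratio over active processes and autolaunched flows. As a quick sanity-check sketch (the function name and the zero-flow behaviour are my assumptions, not Salesforce's exact implementation):

```python
def flow_coverage_ok(total_active, covered, required=0.75):
    """Return (coverage, passes) for the deployment check described above.

    total_active: number of active processes + active autolaunched flows
    covered: how many of those are launched by at least one Apex test
    """
    if total_active == 0:
        return 1.0, True  # nothing to cover (assumed behaviour)
    coverage = covered / total_active
    return coverage, coverage >= required
```

For example, with 4 active flows/processes, covering 3 of them (75%) passes, while covering 2 (50%) fails the check.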
|
STACK_EXCHANGE
|
You can configure network hardware using the Hardware menu of the Network Configuration Manager:
Select Hardware > Add new LAN adapter. Further configuration will be determined by the bus type of your machine and the ability of the Network Configuration Manager to detect your adapter. See:
If you want to configure an adapter as a backup device in case another adapter fails, is suspended by an administrator, or loses its physical connection, select Hardware > Add new LAN adapter. On the Add Protocol screen, select Backup Device instead of a protocol suite such as TCP/IP. Then select an existing network adapter (the primary device) for which the new adapter will act as a backup device and click OK.
If the primary device is no longer usable because its hardware fails or it loses its physical connection (for example, by its lead becoming disconnected), the system will automatically switch to using the backup device.
For automatic failover to work, the primary device's driver must be capable of recognizing and signaling such problems to the kernel. Some device drivers may be capable of detecting hardware failure, but may be unable to tell if the physical connection has been broken.
See the Compatible Hardware Web Pages (CHWP) at http://www.sco.com/chwp for more information about the known capabilities of various network adapters.
Some packets may be lost when the system switches to a backup device. Retransmission of packets will happen automatically in the case of TCP connections. However, not all applications that use UDP may be capable of handling such interruption of service gracefully.
The Network Configuration Manager indicates that a primary device has failed or is otherwise unavailable for use by placing a cross on the icon for the network adapter. (In character mode, the string ``HX'' replaces ``HW''.)
See ``Configuring a backup device'' for more information.
Once you have replaced a failed primary device or reconnected a network lead, you can switch the system back to using this device instead of the backup device, provided that the primary device is not a Token-Ring adapter.
To revert to the primary device, select the backup device from the list, and then select Hardware > Revert to Primary.
If the primary device is a Token-Ring adapter, you can only revert to using it by shutting down and rebooting the system.
When replacing a failed primary device, you must normally use the same type of network adapter. However, some vendor-specific hardware may support the interchange of different models of network adapter provided that they are supported by the same device driver. Please refer to the documentation provided by the vendor for more information.
Select the adapter to test from the list, and then select Hardware > Test Network Connectivity. This sends a broadcast message to the broadcast MAC (Media Access Control) address using LLC (Logical Link Control) TEST frames.
The response will display one of these messages:
This procedure only tests the adapter's ability to reach the network; it does not detect protocol problems. See ``Troubleshooting network configuration'' for more troubleshooting information.
Select the adapter to view from the list, and then select Hardware > View hardware configuration.
Select the adapter to deconfigure from the list, and then select Hardware > Remove network device. You will be prompted to confirm your choice and informed when the operation is complete.
To reconfigure a backup device as a primary device, first deconfigure the network adapter and then configure the device as a new LAN adapter.
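The primary/backup behaviour described above (automatic failover, manual revert, and the Token-Ring restriction) can be sketched as a tiny state machine. This is purely illustrative, not SCO's implementation:

```python
class AdapterFailover:
    """Sketch of the primary/backup switch described above.

    The driver is assumed to signal link failure; on failure, traffic
    moves to the backup. revert_to_primary models the manual
    "Hardware > Revert to Primary" action, which is not available when
    the primary is a Token-Ring adapter (a reboot is required instead).
    """

    def __init__(self, primary, backup, primary_is_token_ring=False):
        self.primary = primary
        self.backup = backup
        self.primary_is_token_ring = primary_is_token_ring
        self.active = primary
        self.primary_failed = False

    def on_primary_failure(self):
        # Automatic failover: the kernel switches to the backup device.
        self.primary_failed = True
        self.active = self.backup

    def revert_to_primary(self):
        if self.primary_is_token_ring:
            raise RuntimeError("Token-Ring primary: reboot required to revert")
        self.primary_failed = False
        self.active = self.primary
```

Note that, as the text says, some in-flight packets may still be lost at the moment of the switch; TCP retransmits them, but UDP applications must cope on their own.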
|
OPCFW_CODE
|
SharePoint (SP) is a web application platform that runs on IIS and persists its content to a SQL Server database. An SP Farm has 1 or more web applications and a web application can have 1 or more site collections. There are different reasons to have more than 1 SP web application, such as different authentication schemes or tighter control of the site content.
Screenshot of IIS showing 5 websites.
In the diagram above App1 and App2 are SP Web Applications that have been created by the SP farm administrator. The SharePoint Central Administration website and SharePoint Web Services website are added automatically when SharePoint is installed. The "Default Web Site" is added when IIS is installed, it is not a SP website.
Each SP web application has one or more site collections. A new content database is created in SQL Server for each site collection. SP farm administrators can delegate management of a site collection to “Site Collection Administrators”, who work with SharePoint sites and content but do not interact with the server computers and databases. Basically, the site collection is a way to give business users the power to control access.
SharePoint 2013 Central Administration Application
"SharePoint 2013 Central Administration" is a web application that is used to create web applications and site collections.
Screenshot - Shortcut to start the SharePoint 2013 Central Administration
Screenshot - SP 2013 Central Administration
Screenshot SP2013 showing all site collections for the web application found at “app1.dom1.loc”. There is only 1 site collection in this web application, it is named "rnd".
Prerequisite Permissions/ Site Collection Administrator
The documentation says “Assign that user to be the "Site Collection Administrator". Use the web application policy rule to assign these permissions.” Do not set the user as the Site Collection Administrator on any of the site collections; instead, assign full control to the service account using the steps outlined below.
Site Collection Administrator
Web Access Policy
Example: Setting Web Application Policy for SecurityIQ in SP2013
1. Open “SharePoint 2013 Central Administration”
2. Click “Security” in the menu along the left-hand side of the page.
3. Click “Specify web application user policy”
4. Click Add Users
5. Change the Web Application as needed. Keep the default setting of “All zones.” Click Next.
Add the service account to the users list. Check the Full Control box. Click Finish.
Running the PowerShell Script
This PowerShell script generates a text file that contains the SQL statements that will set all the required permissions on SharePoint database objects. You must run the SQL statements on the database server manually; the PowerShell script does not modify the database permissions.
Here is the syntax for running the script
Troubleshooting SharePoint Connections
Check if there are events in SharePoint
You can query the SharePoint database for events to verify that it is in fact generating an audit trail that SecurityIQ can read. If a certain event is missing, you can filter on ItemFullUrl to see if the event you expected to see in SecurityIQ was generated by SP.
The SharePoint connector in SecurityIQ has the option to purge old audit events from the SharePoint server. Be sure to leave the “days to keep” setting long enough for you to troubleshoot certain events.
Sample queries to read the SP audit trail in SP 2013
/* update to use the admin database in your environment */
/* Use the optional where clauses to help you troubleshoot certain events */
select *
from EventCache with (nolock)
where ItemFullUrl like '%someDocThatShouldHaveAuditTrail%'
and EventTime > '2016-05-06 16:37:36.750';
Data Classification Error - Service will not start
Unlike the other data classification services, the SharePoint data classification services do not log in as the local system account. Instead, the installer sets the service to log in as the service account that will connect to SP. This account must have access to the local machine certificate store. If it does not, this error will appear in the logs.
2016-05-10 18:23:37,111,ERROR,WBSearch.Infra.Logger,OnStart,Service OnStart Error:Object reference not set to an instance of an object.
2016-05-10 18:23:39,236,ERROR,WBX.Common.Utilities.RSAHelper,decryptStringPKCS7,Caught Exception:
System.Security.Cryptography.CryptographicException: Keyset does not exist
at System.Security.Cryptography.Pkcs.EnvelopedCms.DecryptContent(RecipientInfoCollection recipientInfos, X509Certificate2Collection extraStore)
at WBX.Common.Utilities.RSAHelper.decryptStringPKCS7(Byte pkcs7ToDecrypt)
2016-05-10 18:23:39,845,ERROR,WBX.whiteOPS.DAO.NHibernate.GenericDAO`2,findAll,Caught Exception:
System.Data.SqlClient.SqlException (0x80131904): Login failed for user 'SecurityIQ_User'.
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
To resolve this error, grant the service account permission to the local certificate store, or add the user to the local administrators group on the server running the SP data classification service.
SP BAM could not turn on auditing
The SP BAM uses the SP API to turn auditing on and off. If the service account does not have rights to turn on auditing, this error will be in the log when the agent starts. Expect one of these per site collection.
2016-04-25 18:54:58,708,4,ERROR,WBX.whiteOPS.Agents.WSSBAMAgent.WSSBAMAgent,turnAuditOn,Error while turning audit on.
2016-04-25 18:54:58,781,4,ERROR,WBX.whiteOPS.Agents.WSSBAMAgent.WSSBAMAgent,turnAuditOn,Caught Exception:
System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
at Microsoft.SharePoint.Library.SPRequest.SetAuditFlags(String bstrUrl, Guid gSiteId, String bstrDirName, String bstrLeafName, Int32 itemType, UInt32 AuditFlags)
at WBX.whiteOPS.Agents.WSSBAMAgent.WSSBAMAgent.turnAuditOn(WSSBusinessService curBS)
This next event will occur after the above events, as the BAM tries to read the audit log that it failed to turn on. Expect one per site collection.
2016-04-25 18:56:09,298,12,ERROR,WBX.whiteOPS.Agents.WSSBAMAgent.WSSBAMAgent,pollSite,Caught Exception:
System.UnauthorizedAccessException: Attempted to perform an unauthorized operation.
at Microsoft.SharePoint.SPAudit.GetEntries(SPAuditQuery query)
at WBX.whiteOPS.Agents.WSSBAMAgent.WSSBAMAgent.pollSite(String siteUrl, Dictionary`2 relevantBRs, DateTime from, DateTime to)
To fix this try one of these options:
BAM Error – No rights to an IIS log file folder
The SP BAM monitors view events by reading the IIS log files from the one or more servers that serve SharePoint content. If the service account does not have access to the log file, due to a bad path/file name or missing permissions, this error will occur.
2016-05-04 16:25:48,577,12,ERROR,WBX.whiteOPS.Agents.WSSBAMAgent.MonitorView.LogFilesThreadManager,init,Caught Exception:
System.ArgumentException: \\server1\c$\inetpub\logs\LogFiles\W3SVC1990221007 does not exists, please check the UNC and verify the service user has permissions to access it
at WBX.whiteOPS.Agents.WSSBAMAgent.MonitorView.LogFileThread..ctor(String logFilePath, LogParser parser)
at WBX.whiteOPS.Agents.WSSBAMAgent.MonitorView.LogFilesThreadManager.init(Object dummy)
Maintenance Tasks and Audit Cleanup
The activity monitor calls the SharePoint API to purge SharePoint audit data (at 1:00 local time by default). This is not the same as the SharePoint maintenance task that is often configured to run at 1:00 am local time and performs a similar function. Whether started by SecurityIQ or SharePoint, if SharePoint auditing has been in use for a long time and the audit logs have not been purged, the purge may create very large database transaction log files on the SharePoint databases. Contact Microsoft support if this becomes a problem; it is due to the SharePoint API, not SecurityIQ.
Recorded from the viewpoint of a single cameraman with a handheld camera, Cloverfield has been called a cross between The Blair Witch Project and Godzilla. There's a key difference though: this movie will still be fun on repeat viewings (which Blair Witch is not), and this movie doesn't suck (which Godzilla does). So that little comparison just doesn't hold water. Truth is, Cloverfield is the most fun "monster" movie in quite some time… even if it is really a thrilling action-drama.
Essentially what we have here is a love story, with the backdrop being a ruined New York City. Rob Hawkins (Michael Stahl-David) plays our hero, hell-bent on navigating his way around the wrecked city, and past whatever is out there, to save the love of his life, Beth (Odette Yustman), who has been injured and can't leave her apartment. All the while, panic and chaos reign as the unidentified monster threatens the city and army tanks fill the streets.
The filmmakers and cast did a fantastic job of painting a chaotic and violent world, one you can immerse yourself in as you follow Rob and his friends around. There's a lot of panic and despair, and you get wrapped up in it as you root for Rob and Beth. It's sci-fi-ish but realistic at the same time: you can feel for the characters, what they're experiencing, and their reactions to the events happening around them.
Bottom line is this: if you're looking for a different sort of film that is pretty darn awesome and chaotic, then you need look no further than Cloverfield. The only downside this movie really has is that it's not rated 'R', which would have made it a bit more realistic, because if the situation in the movie were real, it damn sure wouldn't be PG-13. That's minor quibbling though, mostly because I like 'R' movies a bit more (if I think it adds to the story, heh). This movie hit theaters with a throng of media hype, mostly due to its secretive and mysterious nature. And yes, it more than lived up to the hype.
Don't let the Blair Witch Project and Godzilla talk scare you off from seeing this great new film. It's a non-cheesy, believable "monster" movie that deserves a spot on your DVD shelf.
Cloverfield gets a 4 out of 5: FANTASTIC.
Gary is Owner and Editor-in-Chief of Vortainment. He's usually posting news and reviews, and doing all the back-end stuff too. He loves to play video games, watch movies, wrestling, and college football (Roll Tide Roll).
add support for Play 2.9
Fixes #13
Play's routes-compiler artifact was renamed to play-routes-compiler
Using swagger-core 1.6.11 which supports Jackson 2.14. swagger-core 1.6.12 depends on Jackson 2.15.
@gmethvin any idea why this error happens on Play 2.9?
Error: Exception in thread "specs2-3" java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlRootElement
at io.swagger.jackson.ModelResolver.resolve(ModelResolver.java:323)
at io.swagger.jackson.ModelResolver.resolve(ModelResolver.java:205)
at io.swagger.scala.converter.SwaggerScalaModelConverter.resolve(SwaggerScalaModelConverter.scala:90)
at io.swagger.converter.ModelConverterContextImpl.resolve(ModelConverterContextImpl.java:103)
at io.swagger.jackson.ModelResolver.resolve(ModelResolver.java:289)
at io.swagger.jackson.ModelResolver.resolve(ModelResolver.java:205)
at io.swagger.scala.converter.SwaggerScalaModelConverter.resolve(SwaggerScalaModelConverter.scala:90)
at io.swagger.converter.ModelConverterContextImpl.resolve(ModelConverterContextImpl.java:103)
at io.swagger.jackson.ModelResolver.resolveProperty(ModelResolver.java:177)
at io.swagger.jackson.ModelResolver.resolveProperty(ModelResolver.java:128)
at io.swagger.scala.converter.SwaggerScalaModelConverter.resolveProperty(SwaggerScalaModelConverter.scala:70)
at io.swagger.converter.ModelConverterContextImpl.resolveProperty(ModelConverterContextImpl.java:83)
at io.swagger.converter.ModelConverters.readAsProperty(ModelConverters.java:63)
at io.swagger.converter.ModelConverters.readAsProperty(ModelConverters.java:57)
at play.modules.swagger.PlayReader.parseMethod(PlayReader.java:538)
at play.modules.swagger.PlayReader.read(PlayReader.java:147)
at play.modules.swagger.PlayReader.read(PlayReader.java:76)
at play.modules.swagger.PlayReader.read(PlayReader.java:70)
at play.modules.swagger.ApiListingCache.$anonfun$listing$1(ApiListingCache.scala:17)
at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:454)
at play.modules.swagger.ApiListingCache.listing(ApiListingCache.scala:13)
at PlayApiListingCacheSpec.$anonfun$new$2(PlayApiListingCacheSpec.scala:75)
at org.specs2.matcher.MatchResult$$anon$12.$anonfun$asResult$1(MatchResult.scala:344)
at org.specs2.execute.ResultExecution.execute(ResultExecution.scala:22)
at org.specs2.execute.ResultExecution.execute$(ResultExecution.scala:21)
at org.specs2.execute.ResultExecution$.execute(ResultExecution.scala:123)
at org.specs2.execute.Result$$anon$4.asResult(Result.scala:246)
at org.specs2.execute.AsResult$.apply(AsResult.scala:32)
at org.specs2.matcher.MatchResult$$anon$12.asResult(MatchResult.scala:344)
at org.specs2.execute.AsResult$.apply(AsResult.scala:32)
at org.specs2.specification.core.AsExecution$$anon$1.$anonfun$execute$1(AsExecution.scala:17)
at org.specs2.execute.ResultExecution.execute(ResultExecution.scala:22)
at org.specs2.execute.ResultExecution.execute$(ResultExecution.scala:21)
at org.specs2.execute.ResultExecution$.execute(ResultExecution.scala:123)
at org.specs2.execute.Result$$anon$4.asResult(Result.scala:246)
at org.specs2.execute.AsResult$.apply(AsResult.scala:32)
at org.specs2.execute.AsResult$.$anonfun$safely$1(AsResult.scala:40)
at org.specs2.execute.ResultExecution.execute(ResultExecution.scala:22)
at org.specs2.execute.ResultExecution.execute$(ResultExecution.scala:21)
at org.specs2.execute.ResultExecution$.execute(ResultExecution.scala:123)
at org.specs2.execute.AsResult$.safely(AsResult.scala:40)
at org.specs2.specification.core.Execution$.$anonfun$result$1(Execution.scala:340)
at org.specs2.specification.core.Execution$.$anonfun$withEnvSync$3(Execution.scala:358)
at org.specs2.execute.ResultExecution.execute(ResultExecution.scala:22)
at org.specs2.execute.ResultExecution.execute$(ResultExecution.scala:21)
at org.specs2.execute.ResultExecution$.execute(ResultExecution.scala:123)
at org.specs2.execute.Result$$anon$4.asResult(Result.scala:246)
at org.specs2.execute.AsResult$.apply(AsResult.scala:32)
at org.specs2.execute.AsResult$.$anonfun$safely$1(AsResult.scala:40)
at org.specs2.execute.ResultExecution.execute(ResultExecution.scala:22)
at org.specs2.execute.ResultExecution.execute$(ResultExecution.scala:21)
at org.specs2.execute.ResultExecution$.execute(ResultExecution.scala:123)
at org.specs2.execute.AsResult$.safely(AsResult.scala:40)
at org.specs2.specification.core.Execution$.$anonfun$withEnvSync$2(Execution.scala:358)
at org.specs2.specification.core.Execution.$anonfun$startExecution$3(Execution.scala:142)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:431)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.annotation.XmlRootElement
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:476)
at sbt.internal.ManagedClassLoader.findClass(ManagedClassLoader.java:102)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:594)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:527)
... 59 more
@dwickern Previously jakarta.xml.bind-api was a transitive dependency of Play, but it was removed since it is no longer needed (see removed libraries in the migration guide). The solution is to add the dependency yourself. If you upgrade to the latest version you'll have to switch to the jakarta package name.
I'm afraid adding the dependency doesn't fix the error. Besides, jakarta.xml.bind-api is a transitive dependency of swagger-core.
Actually I think we should just remove all the xml annotations. It appears they are meant to be used for toXmlString but the implementation is broken anyway.
getResource seems broken too. The None case is unreachable and ErrorResponse isn't used anywhere else.
https://github.com/dwickern/swagger-play/blob/03137772e7c8f83f2ea0e2b3a56e808688766338/src/main/scala/controllers/ApiHelpController.scala#L77-L96
Removing the broken XML code didn't fix the issue either. What does work is a dependency on javax.xml.bind rather than jakarta.xml.bind. I'm not sure why.
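For reference, the workaround that worked can be expressed as an sbt build change, roughly as follows; the exact artifact coordinates and version are illustrative (this is one known Maven release of the old javax API), not necessarily what this repo pins:

```scala
// build.sbt — restore the javax.xml.bind API that is no longer a
// transitive dependency on Play 2.9 (workaround; version illustrative)
libraryDependencies += "javax.xml.bind" % "jaxb-api" % "2.3.1"
```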
ok, it's possible that the swagger library itself expects that library to exist. I suppose we can leave it like you have it. At least we cleaned up the broken code.
Ok, so next steps are:
Support for Scala 3
Support for Play 3.0
I'll need to somehow publish JDK 11 binaries for Play 2.9+ and use JDK 8 for older versions
Last updated: Sunday, 3 October 2021
Information We Collect
When you access and use our Minecraft Server, we may collect the following information:
- Your IP address
- Your email address
- User Minecraft UUID
- User password and PIN
- User data, such as player rank
- User actions, such as chat messages
- User statistics, such as playtime
If you contact us directly, we may receive information about you, such as your name, email address, the contents of the message and/or attachments you may send us, and any other information you may choose to provide. Sensitive Personal Information should never be provided to us, nor do we collect any Sensitive Personal Information.
How We Use Information
We use the information we collect in various ways, including to:
- Provide, operate, and maintain our Services
- Ensure valid and secure authentication of users on our Minecraft Server
- Send emails with one-time login passcodes for users to log in to our Minecraft Server
- Reply to emails requesting support related to our Services
- Find and prevent fraud related to our Services
We do not display ads on our Services, nor do we use personal information for ads or marketing purposes.
Information Third Parties Collect
We do not sell or rent personal information, nor do we share personal information with Third Parties for any purposes other than to provide, operate, and maintain our Services. The Third Parties we work with are:
- Tebex Limited (“Tebex”) - The Webstore is operated by Tebex, a licensed seller for goods for our Game Server.
- Mailgun Technologies Inc (“Mailgun”) - Email addresses used to register on our Minecraft Server are shared with Mailgun to send one-time login passcodes from the [email protected] email for user log in to our Minecraft Server.
- GitHub Inc (“GitHub”) - Our Website is hosted with GitHub, which may collect information sent to our Website.
- Cloudflare Inc (“Cloudflare”) - For security purposes, traffic to our Website is proxied through Cloudflare, which may collect information sent to our Website.
- OVH Hosting Inc (“OVH”) - Our Minecraft Server is hosted with OVH, which may collect information sent to our Minecraft Server.
- DatPixel Entertainment Inc (“TCPShield”) - For security purposes, traffic to our Minecraft Server is proxied through TCPShield, which may collect information sent to our Minecraft Server.
- Proton Technologies AG (“ProtonMail”) - The [email protected] contact email and the [email protected] support email (collectively, “Emails”) are hosted with ProtonMail, which may collect information sent to our Emails.
- Discord Inc (“Discord”) - We operate the discord.gg/herobrine Discord server, which you may join for community purposes.
- Twitter Inc (“Twitter”) - We operate the @herobrinedotorg Twitter profile, which you may interact with for community purposes.
We do not knowingly collect any Personal Identifiable Information from children under the age of 13. If you think that your child provided this kind of information to us, we strongly encourage you to contact us immediately and we will use our best efforts to promptly remove such information from our records.
The security of your personal information is important to us, but no method of transmission over the Internet, or method of electronic storage, is 100% secure. While we strive to use commercially acceptable means to protect your personal information, we cannot guarantee its absolute security.
Monitor Consumed Resources
Reserved Cloudlets Configuration
While creating or changing environment topology, in the right part of the wizard you can see the number of Reserved Cloudlets configured for your environment (configured using the Reserved Cloudlets slider).
Here you can find:
- Total Reserved Cloudlets for this environment
- The coloured bar indicates the amount of Reserved Cloudlets configured for each server type within the environment
- The amount of discount you received using these Reserved Cloudlets
- How many Reserved Cloudlets you need to get the next discount level
- The total monthly cost for all Reserved Cloudlets (Total fixed costs), and how much you save compared to using the same amount of resources as Dynamic Cloudlets
- The range of possible costs for the month (variable depending on your resource usage)
- The maximum cost limit for the month assumes that you use all resources at the maximum Scaling Limit, as configured using the Scaling Limit slider
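The fixed-versus-maximum cost range described above can be sketched as a small calculation. Note that all rates below are made-up illustrative numbers (real pricing and the discount schedule depend on your provider), and the sketch ignores discounts:

```python
# Sketch of the monthly cost range for an environment.
# Hypothetical per-cloudlet hourly rates; discounts are ignored.
HOURS_PER_MONTH = 24 * 30

def monthly_cost_range(reserved, scaling_limit,
                       reserved_rate=0.01, dynamic_rate=0.02):
    """Return (fixed_cost, max_cost) in dollars for the month.

    reserved      -- number of Reserved Cloudlets (always billed)
    scaling_limit -- maximum cloudlets usable (Scaling Limit slider)
    """
    # Reserved Cloudlets are billed for the whole month regardless of use.
    fixed = reserved * reserved_rate * HOURS_PER_MONTH
    # Dynamic Cloudlets are billed only when used; the maximum assumes
    # the environment runs at the Scaling Limit all month long.
    max_dynamic = (scaling_limit - reserved) * dynamic_rate * HOURS_PER_MONTH
    return fixed, fixed + max_dynamic

fixed, maximum = monthly_cost_range(reserved=8, scaling_limit=16)
print(f"Total fixed costs: ${fixed:.2f}; maximum for the month: ${maximum:.2f}")
```

The gap between the two numbers is exactly the "range of possible costs" shown in the wizard: actual usage of Dynamic Cloudlets lands somewhere between zero and the Scaling Limit.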
Current Resource Usage
In your dashboard, you can see a list of all of your environments. The right-hand column displays current resource Usage. You can see the amount of disk storage and cloudlets currently being used by the whole environment or, if you expand the environment context using the arrow at the left, you can see the individual resource usage by each server within the environment.
- The first number (HDD icon) in the Usage column is the amount of disk space currently consumed. The amount is shown in MB (in this case 1GB is equal to 1024MB).
- The second number (cloudlet icon) in the Usage column is the amount of cloudlets currently being used (first cloudlet number) out of the cloudlet Scaling Limit (second cloudlet number) you have configured.
Statistics of Consumption
You can also see the amount of consumed HDD, RAM (Memory), CPU and Network Bandwidth for each container by clicking the Statistics button of the desired node.
Use the Billing history button for your environment or navigate to Balance > Billing history item in the upper menu to review the charges applied for consumed resources.
Here you can specify the desired start / end dates, and the time period interval to view the billing data for. The billing history is displayed by time period, with the corresponding charges to the right. Use the expand icon to reveal additional details about resource usage and charges for the particular period of time.
Data shown is grouped by environment. After expanding a particular environment you’ll see the list of nodes it consists of, sorted in alphabetical order. After the node list, the Public IP and SSL entries are shown with the cost of their usage (if you have enabled them for your environment).
You can see the following information regarding every environment node:
- Fixed cloudlets consumed
- Flexible cloudlets consumed
- Storage (amount of disk space used)
- Paid traffic
- The overall charges applied for each node's usage
Note that here you can see whether payment was taken from the main balance or from bonuses.
The total charges between the selected dates are calculated for you at the very bottom of the list.
MOO-cows Mailing List Archive
Proposed change to the MOO-Cows mailing list
I recently received the following proposal for a change to the workings of the
MOO-Cows mailing list:
> As one of the many subscribers to the Moo-cow mailing list I've
> gotten a lot of good advice from my fellow subscribers, however recently
> the large amount of spam, BS and generally unrelated information has
> rendered the list partly useless. I would like to propose a possible
> solution, that the Moo-cow list adopt a format similar to that used by
> the DecStation Managers list.
> The format of the Decstation Managers list is very simple. People
> post their questions to the list, then other subscribers send email to
> the poster with any advice they may have. After waiting a suitable
> amount of time, the original poster posts a follow-up message to the list
> in which he/she summarizes the responses he/she received.
> So if firstname.lastname@example.org sent email to moo-cows with the subject 'Compiling
> Moo under Minix' the listserver software would set the reply-to field of
> the message to email@example.com (so that replys wouldn't go back to the
> list), after a day or two firstname.lastname@example.org would send another message
> to the list 'SUMMARY: Compiling Moo under Minix'.
> Works well for the Decstation Managers, hopefully it would work well
> for the moo-cow list.
I've noticed, as probably all of you have, that the level and sometimes
quality of the traffic on MOO-Cows have taken turns for the worse in recent
months. This suggestion, which would in effect tend to steer the traffic away
from long pointless discussions and towards productive (and perhaps
FAQ-quotable) answers to specific questions, might be the fix that's needed.
I can easily modify the MOO-Cows mailing-list configuration to add the
specified Reply-To: line to every message, and I could even add a little text
at the bottom of each message describing in brief the intent (i.e., send your
questions to the list, gather the responses and later post a summary), so that
members would be reminded of the appropriate use of the list.
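The Reply-To rewrite proposed above can be sketched with Python's standard email library. The addresses here are the placeholder ones from the quoted message, and the list address is hypothetical:

```python
from email.message import EmailMessage

def redirect_replies(msg: EmailMessage) -> EmailMessage:
    """Point replies back at the original poster instead of the list,
    as the proposal above describes."""
    if "Reply-To" in msg:
        del msg["Reply-To"]          # drop any existing Reply-To header
    msg["Reply-To"] = msg["From"]    # replies now go to the poster
    return msg

msg = EmailMessage()
msg["From"] = "firstname.lastname@example.org"
msg["To"] = "moo-cows@example.org"   # list address is hypothetical
msg["Subject"] = "Compiling Moo under Minix"
msg.set_content("How do I compile the MOO server under Minix?")

redirect_replies(msg)
print(msg["Reply-To"])
```

With this header in place, a subscriber hitting "reply" mails the poster directly, and only the later SUMMARY message returns to the list.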
The question is, would the current list membership consider this a positive
change? I have given this message a Reply-To: line pointing responses back to
me, personally. I will summarize the results in a week or so and, if there's a
preponderance of positives, implement the suggested changes.
Yours in service of the goal of a more useful list for us all,
Minimal Mod-k Cuts for Packing, Covering and Partitioning Problems
This research was carried out by Richard W. Eglese and Adam N. Letchford and supported by the EPSRC under research grant number GR/L88795.
This page was last updated on 4th August 2000.
The papers arising from the research are available from http://www.lancs.ac.uk/staff/letchfoa/pubs.htm
Two sets of test problems have been created for set packing and are available from this web site (in a format suitable for CPLEX or LINDO).
These instances were created by setting each constraint coefficient to one randomly with probability 0.05. A description of these problems and layout of the files is available here.
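The construction just described (each constraint coefficient set to one independently with probability 0.05) can be sketched as follows. This is an illustrative re-creation, not the exact generator used for these instances; the instance size, unit weights, and LP layout are assumptions:

```python
import random

def random_packing_instance(m, n, p=0.05, seed=0):
    """Generate a random set packing instance: each of the m constraints
    gets coefficient 1 on variable j independently with probability p.
    Returns a list of m rows, each a sorted list of variable indices."""
    rng = random.Random(seed)
    return [sorted(j for j in range(n) if rng.random() < p)
            for _ in range(m)]

def write_lp(rows, n, path):
    """Write the instance in a simple CPLEX-style LP format (unit weights)."""
    with open(path, "w") as f:
        f.write("maximize\n " + " + ".join(f"x{j}" for j in range(n)) + "\n")
        f.write("subject to\n")
        for i, row in enumerate(rows):
            if row:  # skip empty constraints
                f.write(f" c{i}: " + " + ".join(f"x{j}" for j in row)
                        + " <= 1\n")
        f.write("binary\n " + " ".join(f"x{j}" for j in range(n)) + "\nend\n")

rows = random_packing_instance(m=50, n=100)
```

Fixing the seed makes the instance reproducible, which matters when publishing benchmark results.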
Results for upper bounds from LP relaxation and best known solutions (mostly optimal) are available here.
pk11c.lp pk12c.lp pk13c.lp pk14c.lp
pk21c.lp pk22c.lp pk23c.lp pk24c.lp
pk31c.lp pk32c.lp pk33c.lp pk34c.lp
pk41c.lp pk42c.lp pk43c.lp pk44c.lp
pk11w.lp pk12w.lp pk13w.lp pk14w.lp
pk21w.lp pk22w.lp pk23w.lp pk24w.lp
pk31w.lp pk32w.lp pk33w.lp pk34w.lp
pk41w.lp pk42w.lp pk43w.lp pk44w.lp
These instances were created by transforming some standard max-clique instances into set packing problems. (The original max-clique instances were used in the second DIMACS implementation challenge.)
Results for upper bounds from LP relaxation and best known solutions (many optimal) are available here.
brock200-1.lp brock200-2.lp brock200-3.lp brock200-4.lp
brock400-1.lp brock400-2.lp brock400-3.lp brock400-4.lp
C125-9.lp C250-9.lp C500-9.lp
p-hat300-1.lp p-hat300-2.lp p-hat300-3.lp p-hat500-1.lp p-hat500-2.lp p-hat500-3.lp
san200-7-2.lp san400-5-1.lp san400-7-1.lp san400-7-2.lp san400-7-3.lp
sanr200-7.lp sanr200-9.lp sanr400-5.lp sanr400-7.lp
Test problems for Set Covering and Set Partitioning may be found on ORLIB
Return to Richard Eglese's home page
Corn snakes are a species of rat snake found in the southeastern United States. They are typically found in cornfields, hence their name. Corn snakes are non-venomous and kill their prey by constriction.
Corn snakes are popular pets due to their docile nature and beautiful markings. They can be bred in captivity and many people enjoy breeding them for show or as a hobby.
Corn snakes can be bred year-round, but certain times of year are better for breeding than others. The best time to breed corn snakes is in the late spring or early summer, when the weather is warm and the days are long. Corn snakes typically breed in May or June.
If you are planning on breeding corn snakes, it is important to have a well-ventilated enclosure and to make sure the enclosure is escape-proof. You will also need to provide hiding places for the snakes.
It is best to house male and female corn snakes separately until it is time to breed them. This prevents the snakes from fighting and getting hurt.
When you are ready to breed the corn snakes, you will need to put the male and female together in the same enclosure. The female will usually be the one to initiate breeding.
The female corn snake will shed her skin and then release pheromones to attract the male. The male will then chase the female and attempt to mount her.
Once the male has successfully mounted the female, he will insert one of his hemipenes into her cloaca. This process can take up to an hour.
After the male has successfully bred with the female, he will detach himself and the two snakes will go their separate ways.
The female corn snake will lay her eggs 50 to 60 days after she has been bred. She will lay anywhere from 4 to 40 eggs, but the average is 20.
The eggs will incubate for 60 to 70 days before they hatch. After they hatch the baby corn snakes will be about 10 inches long.
What is the ideal temperature range for corn snakes during the breeding season?
The ideal temperature range for corn snakes during the breeding season is between 77 and 86 degrees Fahrenheit.
How often should you feed corn snakes during the breeding season?
You should feed corn snakes every 5 to 7 days during the breeding season.
How many clutches of eggs can a female corn snake produce in one breeding season?
A female corn snake can produce 2 to 3 clutches of eggs in one breeding season.
How many eggs are in each clutch of corn snakes?
There are usually 8 to 12 eggs in each clutch of corn snakes.
How long does it take for corn snake eggs to hatch?
It takes corn snake eggs 58 to 72 days to hatch.
What is the incubation temperature for corn snake eggs?
The incubation temperature for corn snake eggs is between 82 and 84 degrees Fahrenheit.
How often should you mist corn snake eggs during incubation?
You should mist corn snake eggs 2 to 3 times per week during incubation.
What is the ideal temperature range for corn snakes during the non-breeding season?
The ideal temperature range for corn snakes during the non-breeding season is between 72 and 78 degrees Fahrenheit.
How often should you feed corn snakes during the non-breeding season?
You should feed corn snakes every 7 to 10 days during the non-breeding season.
How long do corn snakes typically live?
Corn snakes typically live 10 to 20 years.
What size enclosure do corn snakes need?
Corn snakes need an enclosure that is at least 10 to 20 gallons.
What kind of substrate should you use for corn snakes?
You should use a substrate that is safe for corn snakes and that can hold moisture, such as cypress mulch, aspen shavings, or paper towels.
What kind of hide should you provide for corn snakes?
You should provide a hide that is big enough for the corn snake to fit inside and that has an opening slightly smaller than the snake, such as a half log, a plastic container, or Tupperware.
What kind of water bowl should you use for corn snakes?
You should use a water bowl that is big enough for the corn snake to soak in and shallow enough that the snake cannot drown, such as a plastic container or Tupperware.
What kind of lighting should you use for corn snakes?
You should use a red or black incandescent bulb, a ceramic heat emitter, or a heat pad for corn snakes.
Pittsburgh Building 1210
Professor Sebastian Souyris is a tenure-track Assistant Professor of Supply Chain and Analytics, holding the Dean R. Wellington ’83 Teaching Professorship in Management at the Lally School of Management. Professor Souyris obtained his Ph.D. in information, risk, and operations management from The University of Texas at Austin in 2019. He joined Lally in 2022 from the University of Illinois Urbana-Champaign, where he was a visiting assistant professor at the Gies College of Business. In addition, he holds an M.Phil. in operations management from New York University, and bachelor’s (industrial engineering) and master’s (operations management) degrees from the University of Chile.
Sebastian’s research broadly addresses issues related to the challenges and the means of achieving environmental and human sustainability from an operations management perspective, combining data-driven optimization, machine learning, and econometrics. His work has been published in Operations Research, Production and Operations Management, the European Journal of Operational Research, and the INFORMS Journal on Applied Analytics, among other outlets. In addition, he is an INFORMS Franz Edelman Laureate, a finalist of the EURO Excellence in Practice Award, and a prizewinner of the INFORMS Revenue Management and Pricing Practice Award.
- Ph.D. in Information, Risk, and Operations Management, The University of Texas at Austin.
- M.Phil. in Operations Management, New York University.
- M.S. in Operations Management, University of Chile.
- B.S. in Industrial Engineering, University of Chile.
Thursdays 12:00 - 1:00 pm
Foundations of Data Science, MGMT 6100
- First prize INFORMS Revenue Management and Pricing Section Practice Award 2022.
- Named Dean R. Wellington ’83 (Junior) Professor in Management, Rensselaer Polytechnic Institute, 2022.
- Second place Global BIGGIES Awards, Excellence in the Use of Predictive Analytics, 2018.
- INFORMS Franz Edelman Laureate 2016.
- Finalist EURO Excellence in Practice Award 2009.
Grants & Fellowships
- Co-PI, Gies College of Business. How to Facilitate Business Continuity by Addressing Supply Chain Constraints Caused by COVID-19?, 2021.
- Co-PI, Jump ARCHES. How to design and operate end-to-end vaccine deployment using social media, addressing supply chain allocation constraints, and utilizing telemedicine?, 2021.
- Co-PI, C3.ai Digital Transformation Institute, Dynamic Resource Management in Response to Pandemics, 2020.
- PI, Carle Illinois College of Medicine, Health Make-a-Thon, 2020.
- Dissertation Writing Fellowship, The University of Texas at Austin, Graduate School, 2016.
- Fellowship for Graduate Studies, The University of Texas at Austin, McCombs School of Business, 2010-2015.
- Doctor Cooper Fellowship for Strong Doctoral Student Research, The University of Texas at Austin, McCombs School of Business, 2011.
- Fellowship Graduate Studies, New York University, Stern School of Business, 2007-2008.
- Chilean government fellowship, Conicyt (declined), 2007.
- Breaking Waves: How Gies research could turn the tide in the next global pandemic, Gies News, March 2023.
- Research presented at the Business Analytics seminar discusses incentives for residential adoption of PV systems, DCS News, January 2023.
- Researchers' Model for TV Ad Scheduling Reaps Revenue Increase for Networks. Rensselaer News, February 2023.
- RSG Media Bests MIT, NYU, and Carnegie Mellon to Take Home the 2022 INFORMS RM&P Section Practice Award. CISION PRWeb, July 2022.
- UICOMP Faculty Receive Jump ARCHES Spring Grant Awards. Pathways UIC, Summer 2021.
- Study: Rapid bulk-testing for COVID-19 key to reopening universities. Illinois Research News, March 2021.
- 10 Ways Tech Executives Can Help Their Organization Grow. Forbes, May 2018.
- ¿Cómo pueden las cadenas televisivas competir en la era Netflix?. El Mercurio, March 2018.
- Ivanov, A., Z. Tacheva, A. Alzaidan, S. Souyris, and A. C. III England (2023) Informational Value of Visual Nudges During Crises: Improving Public Health Outcomes Through Social Media Engagement Amid Covid-19. Production and Operations Management, https://doi.org/10.1111/poms.13982.
- Souyris, S., S. Seshadri, and S. Subramanian (2023) Scheduling Advertisements on Cable Television. Operations Research, 0(0). INFORMS Revenue Management and Pricing Section Practice Award 2022, First prize.
- Souyris, S., S. Hao, S. Bose, A.C. England III, A. Ivanov, U. K. Mukherjee, and S. Seshadri (2022) Detecting and mitigating simultaneous waves of COVID-19 infections. Nature Scientific Reports, 12, 16727.
- Mukherjee, U. K., S. Bose, A. Ivanov, S. Seshadri, S. Souyris, P. Sridhar, R. Watkins, and Y. Xu (2021) Evaluation of reopening strategies for educational institutions during COVID-19 through agent based simulation. Nature Scientific Reports, 11, 6264.
- Alarcón, D. Saure, A. Weintraub, R. Wolf-Yadlin, G. Zamorano, L. Ramírez, G. Durán, M. Guajardo, J. Miranda, M. Ramírez, M. Siebert, and S. Souyris (2017). Operations Research Transforms Scheduling of Chilean Soccer Leagues and South American World Cup Qualifiers. INFORMS Journal on Applied Analytics, 47: 52–69. INFORMS Franz Edelman Award 2016, Finalist.
- Cortés, C. E., M. Gendreau, L. M. Rousseau, S. Souyris, and A. Weintraub (2014) Branch-and-Price and Constraint Programming for Solving a Real-Life Technician Dispatching Problem. European Journal of Operational Research, 238 (1): 300–312.
- Souyris, S., C. E. Cortés, F. Ordoñez, and A. Weintraub (2013) A Robust Optimization Approach to Dispatching Technicians under Stochastic Service Times. Optimization Letters, 7: 1549–1568.
- Durán, G., M. Guajardo, J. Miranda, D. Sauré, S. Souyris, A. Weintraub, and R. Wolf (2007) Scheduling the Chilean Soccer League by Integer Programming. INFORMS Journal on Applied Analytics, 37 (6): 539–552. EURO Excellence in Practice Award 2009, Finalist.
- Bose, S., S. Souyris, A. Ivanov, U. Mukherjee, S. Seshadri, and Y. Xu (2021) Control Of Epidemic Spreads Via Testing And Lock-Down, 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 2021, pp. 4272-4279.
- Noronha, T.F., Ribeiro, C.C., Duran, G., Souyris, S., Weintraub, A. (2007). A Branch-and-Cut Algorithm for Scheduling the Highly-Constrained Chilean Soccer Tournament. In: Burke, E.K., Rudová, H. (eds) Practice and Theory of Automated Timetabling VI. PATAT 2006. Lecture Notes in Computer Science, vol 3867. Springer, Berlin, Heidelberg.
- C. E. Cortés, F. Ordoñez, Souyris, S., and A. Weintraub (2007) Routing Technicians under Stochastic Service Times: A Robust Optimization Approach. TRISTAN VI: The Sixth Triennial Symposium on Transportation Analysis.
- A. Weintraub, C. E. Cortés, and Souyris, S. (2004) Constraint Programming and Column Generation Methods to Solve the Dynamic Vehicle Routing Problem for Repair Services. TRISTAN V: The Fifth Triennial Symposium on Transportation Analysis.
- Souyris, S., J. Duan, A. Balakrishnan, V. Rai. Diffusion of Residential Solar Power Systems: A Dynamic Discrete Choice Approach. Submitted. Available at SSRN: https://ssrn.com/abstract=4301666.
- Ivanov, A., S. Bose, S. Hao, U. K. Mukherjee, S. Seshadri, R. Watkins, A. C. III England, S. Souyris, M. E. Ahsen, J. Suriano. COVID-19 Test-to-Stay Program for K-12 Schools: Opt-In Versus Opt-Out Policies.
- Hao, S., Y. Xu , U. K. Mukherjee, S. Souyris, S. Seshadri, A. Ivanov, M. E. Ahsen. Hotspots for Emerging Epidemics: Multi-Task and Transfer Learning over Mobility Networks. Available at SSRN: https://ssrn.com/abstract=3858274.
- Souyris, S., and J. Miranda. Scheduling Shows on Broadcast Television.
- Souyris, S. (2019) Models to Predict and Influence Consumer Demand: Applications to Television Advertising and Solar Panel Adoption. Ph.D. Dissertation. Advisors: Anant Balakrishnan, and Jason Duan. University of Texas.
- Souyris, S. (2017) Opportunities for Linear Television Using Data Science. Media and Entertainment Journal, Spring 2017.
- Souyris, S (2005) Enfoque Basado en Generación de Columnas y Constraint Programming para Resolver el Problema de Despacho Dinámico de Técnicos. M.S. Thesis. Advisors: Andrés Weintraub. University of Chile.
- Durán, G., M. Guajardo, J. Miranda, D. Sauré, S. Souyris, A. Weintraub, A. Carmash, and F. Chaigneau (2005) Programación Matemática Aplicada al Fixture de la Primera División del Fútbol Chileno. Revista Ingeniería de Sistemas, Volumen XIX:29-48
The following is a selection of recent publications in Scopus. Sebastian Souyris has 10 indexed publications in the subjects of Business, Management and Accounting, Mathematics, and Decision Sciences.
|
OPCFW_CODE
|
ArrayIndexOutOfBoundsException on historical in class GroupByMergingQueryRunnerV2 when grouping on high-cardinality dimension
Affected Version
Druid 0.16.0
Description
An ArrayIndexOutOfBoundsException is thrown by a historical when grouping on a high-cardinality dimension.
This happens reproducibly on a test cluster with two historicals (r4.4xl) and a dataset that was ingested with the new native index_parallel job using the Druid indexer, i.e. not the MiddleManager but the new indexer process.
The dataset is 12 GB in size as displayed in Druid's legacy coordinator console and consists of 500 shards, segment size is around 25MB with 100k records per segment and rollup was disabled during the ingestion. (83 dimensions, 1 metric)
The data covers a single hour, ingested with no query granularity and hourly segment granularity.
The data set was just for testing, so it is not a battle-hardened data model. The data itself is battle-tested insofar as it is authentic production data which we ingest with a Hadoop indexer into our production cluster; in that case the data is rolled up. In contrast, I took this data set and ingested it without rollup using the new experimental ingestion pipeline that was introduced in Druid 0.16.
I was executing the following query
SELECT
"deviceId",
COUNT(*) AS "Count"
FROM "hackathon"
WHERE "__time" >= CURRENT_TIMESTAMP - INTERVAL '1' YEAR
GROUP BY 1
ORDER BY "Count" DESC
LIMIT 100
This always raises the following exception within one of the historicals:
[java] 2019-11-21T18:17:57,132 ERROR [processing-3] org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunnerV2 - Exception with one of the sequences!
[java] java.lang.ArrayIndexOutOfBoundsException
[java] 2019-11-21T18:17:57,132 ERROR [processing-3] com.google.common.util.concurrent.Futures$CombinedFuture - input future failed.
[java] java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException
[java] at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunnerV2$1$1$1.call(GroupByMergingQueryRunnerV2.java:253) ~[druid-processing-0.16.0-incubating.jar:0.16.0-incubating]
[java] at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunnerV2$1$1$1.call(GroupByMergingQueryRunnerV2.java:233) ~[druid-processing-0.16.0-incubating.jar:0.16.0-incubating]
[java] at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_212]
[java] at org.apache.druid.query.PrioritizedListenableFutureTask.run(PrioritizedExecutorService.java:247) [druid-processing-0.16.0-incubating.jar:0.16.0-incubating]
[java] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
[java] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
[java] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
[java] Caused by: java.lang.ArrayIndexOutOfBoundsException
If I instead group on other dimensions then I do not receive any exception, so this specifically happens with a dimension that has a high cardinality because it contains device IDs.
However, I tried to group on other high-cardinality dimensions like a session ID that contains a GUID, and this only resulted in a ResourceLimitExceeded exception, which is fine.
I couldn't provoke another ArrayIndexOutOfBoundsException so far with any of the other columns.
I then relaxed the above query by removing the ORDER BY clause and then a resultset was returned to me.
I was also able to retain the ORDER BY clause by adding a filter condition on "deviceId IS NOT NULL" which did not raise an ArrayIndexOutOfBounds exception anymore but ran into the ResourceLimitExceededException.
In summary, it looks to me as if the error might have to do with high cardinality dimensions that can contain null entries.
Hi @sascha-coenen, groupBy v2 supports array-based aggregation and hash-based aggregation. I'm wondering if this error relates to something else besides too high cardinality. Would you please try the same query with forceHashAggregation = true in your query context?
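For anyone following along, passing that flag over Druid's SQL API would look roughly like this (a sketch; the SQL is abbreviated from the query above, and forceHashAggregation is a groupBy v2 query context parameter — verify against the Druid docs for your version):

```json
{
  "query": "SELECT \"deviceId\", COUNT(*) AS \"Count\" FROM \"hackathon\" GROUP BY 1",
  "context": {
    "forceHashAggregation": true
  }
}
```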
Hi. Sorry for reporting back so late.
I tried to test out what you proposed but I wasn't able to reproduce the issue anymore. I don't think that we made any changes to our setup. Perhaps this was caused by some ephemeral situation like segment distribution or something.
If this exception ever resurfaces I will definitely try out your suggestion. Sorry for having posted a non-reproducible bug. At the time the exception showed up deterministically.
This might be an issue with the merge process that uses the OffheapIncrementalIndex.
When the latter is initialized without any metrics (which is what your query seems to do), it fails on ArrayIndexOutOfBoundsException due to a bug in OffheapIncrementalIndex.
Check out PR #10001 which solved this issue. The main purpose of that PR is performance improvements (it nearly doubles the ingestion throughput), but it so happens that a test added in the PR discovered this bug and fixed it.
|
GITHUB_ARCHIVE
|
Excel introduces new keyboard accelerators accessed using the Alt key. In addition, many of the old Alt keyboard shortcuts still work and all the old Ctrl shortcut keys are still functional. This chapter points out which of the old keyboard shortcuts still work, shows you some new shortcuts, and introduces you to the new keyboard accelerators.
Unsupported Excel table features can cause the following compatibility issues, leading to a minor loss of fidelity:
- The table contains a custom formula or text in the total row. In earlier versions of Excel, the data is displayed without a table.
- A table in this workbook does not display a header row.
- A table style is applied to a table in this workbook. Table style formatting cannot be displayed in earlier versions of Excel.
- A table in this workbook is connected to an external data source. Table functionality will be lost, but the data remains connected; in a later version of Excel you can then connect the data to the external data source again. In other cases, table functionality will be lost, as well as the ability to refresh or edit the connection.
- If table rows are hidden by a filter, they remain hidden in an earlier version of Excel.
- Alternative text is applied to a table in this workbook. Alternative text on tables will be removed in earlier versions of Excel. To display the alternative text in the earlier version, you can copy it into a blank cell on the worksheet, or you could insert a comment that contains the text. To view the alternative text, right-click anywhere in the table, click Table, and then click Alternative Text.
For more information about how to resolve one or more of these compatibility issues, see the following article: Create or delete an Excel table in a worksheet.
Unsupported PivotTable features can cause the following compatibility issues, leading to a significant loss of functionality or a minor loss of fidelity.
Significant loss of functionality:
- A PivotTable in this workbook exceeds former limits and will be lost if it is saved to earlier file formats. Save the workbook in the newer Excel file format, and then re-create this PivotTable report in Compatibility Mode.
- A PivotTable in this workbook contains conditional formatting rules that are applied to cells in collapsed rows or columns. To avoid losing these rules in earlier versions of Excel, expand those rows or columns.
- This workbook contains named sets which are not associated with a PivotTable. These named sets will not be saved.
- A PivotTable in this workbook has what-if analysis turned on. Any unpublished what-if changes will be lost in earlier versions of Excel.
- A PivotTable in this workbook contains a data axis upon which the same measure appears more than once. This PivotTable will not be saved and cannot be displayed in earlier versions of Excel.
- A PivotTable or data connection in this workbook contains server settings which do not exist in earlier versions of Excel. Some PivotTable or data connection server settings will not be saved; custom outputs will be replaced by the original values from the data source.
- Alternative text is applied to a PivotTable in this workbook. Alternative text on PivotTables will be removed in earlier versions of Excel.
- PivotTable style formatting cannot be displayed in earlier versions of Excel.
It’s relatively easy to apply conditional formatting in an Excel worksheet.
It’s a built-in feature on the Home tab of the Excel ribbon, and there many resources on the web to get help (see for example what Debra Dalgleish and Chip Pearson have to say). Conditional formatting of charts is a different story.
Here you'll find a list of common Microsoft Excel formulas and functions explained in plain English, and applied to real life examples.
The tutorials are grouped in line with the Function Library so they're easy to find when you need them.
How to Write a Simple Macro in Microsoft Excel: this wikiHow teaches how to create simple macros for Excel spreadsheets. Open Excel; the process for enabling macros is much the same across recent versions of Excel, with only slight differences.
You can also create a conditional formula that results in another calculation or in values other than TRUE or FALSE. To do this task, use the IF, AND, and OR functions and operators as shown in the following example.
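A representative formula of that kind (the cell references and threshold here are illustrative, not from the original article) is:

```
=IF(AND(A2>0, A2<100), A2*0.05, "Out of range")
```

AND returns TRUE only when both conditions hold, and IF then yields the calculation A2*0.05 instead of a bare TRUE or FALSE.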
Another sparkline solution comes bundled with Excel, and might possibly be superior to the sparkline options you mention.
Not all new features are supported in earlier versions of Excel. When you work in Compatibility Mode or want to save a workbook to the earlier Excel (.xls) file format, the Compatibility Checker can help you identify issues that may cause a significant loss of functionality or a minor loss of fidelity in the earlier version of Excel.
|
OPCFW_CODE
|
AVPlayerLayer in UICollectionViewCell, or how to load gifs as WhatsApp
In the app I have a UICollectionView with an item size such that about 20 items are visible on the screen at the same time. The content that I want to display in each cell is a gif image downloaded from Giphy/Tenor.
However, I realized that gif files take much more space (and time to load) than the corresponding mp4 files that both Tenor and Giphy provide for each animated image, which is actually obvious, since the mp4 format applies real video compression. Sorry if I use the wrong terms.
In order to have the list load faster I decided to switch from a UIImageView with gif images to AVPlayerLayer, since an mp4 file is roughly 10x lighter than the equivalent gif. But I ran into a performance issue similar to what is described HERE. The flow is mostly the same: I have 20+ items visible at the same time, but because of hardware limitations only 16 videos play. I couldn't find any workaround or any other framework that would allow more than 16 AVPlayerLayers showing video at the same time.
I'm wondering how the WhatsApp application works and handles this logic. It also has gif selection from Tenor. I already checked and figured out that WhatsApp downloads small video files and not gif images, which is why it loads very fast. But I have no idea how they can show 20+ items at the same time. HERE is how that works in WhatsApp - https://media.giphy.com/media/33E84h3RAVn0vQWZak/giphy.gif. Also, I noticed that during scrolling small static previews are shown, but I don't see the app making requests for them. Probably they get the first frame of the gif on the fly without any delay on the main thread.
I also tried that, but even if I do every single bit of work on a background thread and the only line on the main thread is "self.imageView.image = myImage", it still lags a little bit if I have 8 items in a row, for example, and scroll very fast.
I see only 2 possible solutions to have it load fast (so we definitely need to load mp4 instead of gifs) and scroll smoothly without lag:
1. WhatsApp uses its own custom Video Core to display video in the UICollectionViewCell .
2. WhatsApp downloads video to speed up the download process but then converts the mp4 file to a gif on the fly and uses a regular animated UIImageView to show the resulting gif. However, I was not able to get this flow working fast enough; it lagged during 'massive' scrolling.
Any thoughts on how to implement the same thing so it works as fast and smooth as in WhatsApp? I'm unable to check how WhatsApp handles the downloaded data, but it definitely downloads mp4 files and not gifs.
Nobody? Any ideas?
I do not have any idea how WhatsApp makes it work, but there is a repo on GitHub: https://github.com/kean/Nuke-Gifu-Plugin. If you do not want to use a 3rd-party library, maybe you can look into their code and get some ideas.
Your WhatsApp example URL is broken. Can you review that and update your question? I'm not entirely sure what you are asking for...
@MihaiFratu hello, I've just updated the link in the post. I'll also copy it here - https://media.giphy.com/media/33E84h3RAVn0vQWZak/giphy.gif
The way collections work is that each cell is reused after it's scrolled off the screen. In your example here there are fewer than 20 simultaneously-visible cells so it stands to reason that there are also fewer than 20 player layers simultaneously in memory. If you layout so that you have fewer than 20 visible videos at once you should have no issue implementing the same way as this example with native tools.
@Dare based on the issue with videos (see the 3rd paragraph of my post) I cannot show more videos than the limit imposed on the hardware/iOS side. At least it won't work for sure with the native AVPlayerLayer; I already tried that. And the layout I have requires more than 20 items in a row. Actually the hardware limit is exactly 16 for most new iPhones, yet in WhatsApp there are more than 16 items simultaneously showing on the screen, so I believe they don't use AVPlayerLayer.
The way I think they do it is they load the videos (as you already said you've noticed) and then extract a couple of frames from each. Those frames are later cycled using the animationImages property of a UIImageView. I'll try making a quick example for you in a bit...
@MihaiFratu, great, thanks. A few more details: I noticed they download the actual video file right in 'willDisplayCell'. I also tried to generate a gif from the video file on the fly but that was too laggy. If you get something that works smoothly, that would be great!
look at this question :https://www.reddit.com/r/iOSProgramming/comments/4512hu/spent_all_day_on_playing_animated_gifs_in_a/
|
STACK_EXCHANGE
|
I see that in SQL, the GROUP BY has to precede ORDER BY expression. Does this imply that ordering is done after grouping discards identical rows/columns?
Because I seem to need to order rows by the timestamp column B first, THEN discard rows with identical values in column A. Not sure how to accomplish this...
I am using MySQL 5.1.41
create table ( A int, B timestamp )
The data could be:
+-----+-----------------------+
| A   | B                     |
+-----+-----------------------+
| 1   | today                 |
| 1   | yesterday             |
| 2   | yesterday             |
| 2   | tomorrow              |
+-----+-----------------------+
The results I am aiming for would be:
+-----+-----------------------+
| A   | B                     |
+-----+-----------------------+
| 1   | today                 |
| 2   | tomorrow              |
+-----+-----------------------+
Basically, I want the rows with the latest timestamp in column B (think ORDER BY), and only one row for each value in column A (think DISTINCT or GROUP BY).
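A minimal sketch of that pattern, shown here with SQLite from Python for illustration (ISO date strings stand in for the today/yesterday/tomorrow placeholders): grouping by A while taking MAX(B) keeps exactly one row per A, paired with its latest timestamp.

```python
import sqlite3

# Dummy table matching the example above; ISO dates stand in for
# "today", "yesterday" and "tomorrow" so MAX() can compare them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (A INTEGER, B TEXT);
    INSERT INTO t VALUES
        (1, '2024-01-02'),   -- today
        (1, '2024-01-01'),   -- yesterday
        (2, '2024-01-01'),   -- yesterday
        (2, '2024-01-03');   -- tomorrow
""")

# One row per A, carrying the greatest B within that group.
rows = conn.execute(
    "SELECT A, MAX(B) AS latest FROM t GROUP BY A ORDER BY A"
).fetchall()
print(rows)  # [(1, '2024-01-02'), (2, '2024-01-03')]
```

The same `SELECT A, MAX(B) ... GROUP BY A` shape works in MySQL 5.1 as well; the point is that the aggregate, not ORDER BY, is what picks the latest row within each group.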
My actual project details, if you need these:
In real life, I have two tables -
create table users (
    phone_nr int(10) unsigned not null,
    primary key (phone_nr)
)

create table payment_receipts (
    phone_nr int(10) unsigned not null,
    payed_ts timestamp default current_timestamp not null,
    payed_until_ts timestamp not null,
    primary key (phone_nr, payed_ts, payed_until_ts)
)
The tables may include other columns, I omitted all that IMO is irrelevant here. As part of a mobile-payment scheme, I have to send SMS to users across the mobile cell network in periodic intervals, depending of course on whether the payment is due or not. The payment is actualized when the SMS is sent, which is premium-taxed. I keep records of all payments done with the
payment_receipts table, for book-keeping, which simulates a real shop where both buyer and seller get a copy of the receipt of purchase, for reference. This table stores my (the seller's) copy of each receipt. The customer's receipt is the received SMS itself. Each time an SMS is sent (and thus a payment is accomplished), a receipt record is inserted into the table, stating who payed, when, and "until when". To explain the latter, imagine a subscription service, but one which spans indefinitely until a user opts out explicitly, at which point the user record is removed. A payment is made a month in advance, so as a rule, the difference between the payed_ts and the payed_until_ts is 30 days worth of time.
Naturally I have a batch job that executes every day and needs to select a list of users that are due a monthly payment as part of automatic subscription renewal. To link this to the dummy example earlier, the phone number column plays the role of column A and payed_until_ts the role of column B, but in the actual code there are two tables, which brings me to the following behavior and its implications: when a user record is removed, the receipt remains, for bookkeeping. So, not only do I need to group payments by date and discard all but the latest payment receipt date, I also need to watch out not to select receipts where there no longer is a matching user record.
I am solving the problem of selecting records that are due payment by finding the receipts with the latest payed_until_ts value for each phone_nr (as in most cases there will be several receipts for each phone number), and out of those rows I further need to keep only the phone numbers where the payed_until_ts is earlier than the time the batch job executes. I then loop over the list of these numbers and send out payments, storing a new receipt for each sent SMS, where the new payed_until_ts is now() + interval 30 days.
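Under the same illustrative assumptions (SQLite, TEXT timestamps, a fixed "now" for reproducibility — not the asker's real schema types), the two-table variant can be sketched as: join receipts to users so orphaned receipts drop out, keep the latest payed_until_ts per phone_nr, then filter to numbers whose coverage has already lapsed.

```python
import sqlite3

# Sketch with simplified types: TEXT ISO timestamps instead of MySQL
# TIMESTAMP, and a hard-coded "now" so the result is deterministic.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (phone_nr INTEGER PRIMARY KEY);
    CREATE TABLE payment_receipts (
        phone_nr INTEGER NOT NULL,
        payed_ts TEXT NOT NULL,
        payed_until_ts TEXT NOT NULL,
        PRIMARY KEY (phone_nr, payed_ts, payed_until_ts)
    );
    INSERT INTO users VALUES (100), (200);
    INSERT INTO payment_receipts VALUES
        (100, '2024-01-01', '2024-01-31'),   -- superseded receipt
        (100, '2024-01-31', '2024-03-02'),   -- latest, not yet due
        (200, '2024-01-01', '2024-01-31'),   -- latest, already lapsed
        (300, '2024-01-01', '2024-01-31');   -- user 300 opted out: no users row
""")

now = '2024-02-10'
due = conn.execute("""
    SELECT r.phone_nr, MAX(r.payed_until_ts) AS latest_until
    FROM payment_receipts r
    JOIN users u ON u.phone_nr = r.phone_nr
    GROUP BY r.phone_nr
    HAVING MAX(r.payed_until_ts) < ?
    ORDER BY r.phone_nr
""", (now,)).fetchall()
print(due)  # [(200, '2024-01-31')]
```

The inner JOIN drops receipts without a matching user, GROUP BY + MAX keeps one latest receipt per number, and HAVING filters on the aggregate so only already-expired subscriptions are selected.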
|
OPCFW_CODE
|
#!/usr/bin/env ruby
###############################
# Author: Timothy Chon (tchon)
# Date: 2013-11-07 08:15 PST
# Program: This script lists the created date, assigned user, and status
# for each of the 10 most recently created incidents in descending
# date order, i.e., most recent incident first.
###############################
if !RUBY_VERSION.match(/^(1\.9|2\.0)/)
puts "NOTE: this script requires at least ruby 1.9.3"
exit 1
end
require "awesome_print"
require "httparty"
require "time"
module PagerDuty
class Incident
include HTTParty
TOKEN = "VxuRAAxQoTgTjbo7wmmG"
URL = "https://webdemo.pagerduty.com/api/v1/incidents"
def run
read
trim
sort
print
end
def read
# default lookback is 30 days per API docs, although
# results limited to past 100 events
@response = HTTParty.get(URL, :headers => headers)
if @response.code != 200
puts "Unsuccessful request! Got HTTP status code: #{@response.code}"
exit 2
end
@payload = JSON.parse(@response.body)
end
def trim
@incidents = @payload['incidents'].map do |incident|
{
:created_on => Time.parse(incident['created_on']),
:assigned_to_user => incident['assigned_to_user'],
:status => incident['status']
}
end
end
def sort
# default order is created_at most recent first
@incidents = @incidents.sort { |a, b| b[:created_on] <=> a[:created_on] }
end
def print
#puts "----- Total Number of incidents: #{@incidents.length} ------"
if @incidents.length > 10
puts "#{@incidents.slice(0,10).awesome_inspect}"
else
puts "#{@incidents.awesome_inspect}"
end
end
private
def headers
{
'Authorization' => "Token token=\"#{TOKEN}\"",
'Content-Type' => 'application/json'
}
end
end
end
### "the big lebowski"
events = PagerDuty::Incident.new
events.run
|
STACK_EDU
|
Our urls for adwords are slightly different from the current urls presented on the site (we used htaccess to help create shorter urls). How important is it that the adwords url matches the sitemap url for keywords on those pages?
samgold last edited by
We have dynamic urls that we have made into short urls through htaccess and code manipulation. Some of our adwords urls are different from our page urls - for example
a) Latest version of page www.abc.com/x-y-z.html
b) Previous version of url www.abc.com/x+y+z.html
c) raw original version www.abc.com/yyy/zzz?category=X&Product-code=Y etc etc.
Would my ranking for keywords on the page improve if I diligently made all of them the same?
They all go to the same page even now, and no 404 errors or anything.
LesleyPaone last edited by
I don't think it is important at all, people have been doing this for years so they can track adwords and other campaigns.
I'm about to launch a redesigned site and worried about overdoing kw presence on-page, primarily using in url's since will already be using kw in titles as well as page content.
What's the current thinking re over-optimisation: if the kw is in titles and page content, is it best not to repeat it again in the url structure, i.e. less is more, even though this will cause things like the SeoMoz on-page grade score to fall, or better to keep/add them?
Personally I think it makes sense to include the kw in the url again, since it helps make the page relevant, and so long as it matches the content it should help rather than hinder rankings for the page's target keyword. However, when I look into this, some say don't do it since it is over-optimisation.
The sites generally ranking quite well for its target kw which i obviously don't want to lose after re-launch & hopefully improve further, in the case of this example they are 'Sports Centre Services' & 'Sports Centre Equipment Rental').
The sites current url structure is similar to this below example:
Would it be better to keep following the existing/above format, or to go with either of the below options, i.e. more kw-rich urls or less:
frankssportscentres.com/sports-centre-services/sports-centre-equipment-rental
Or even less:
frankssportscentres.com/services/equipment-rental
Many Thanks in advance for any helpful comments
Dan
For the past 5-6 months I have consistently ranked at positions #14-16 for snow guards on snoshield.com.
The past 3 days I cannot find the home page anywhere in Google for that keyword. The only thing that has really changed over the past two months is I placed 3 guest blog posts on pretty highly trusted sites that are industry related and created links to the site using suggestions from getlisted.
I've read other reports of others seeing similar things happen recently. I don't think this is a penguin thing, because I can still find the site by searching for the company name, I just can't find it when searching the keyword.
I did notice that a different page on the site is now ranking in position #21 for this keyword, but this page is optimized for a different keyword phrase. Is it possible that even though the sub page is optimized for a different keyword phrase, I am cannibalizing the site?
I was making updates to the content on the following page, and a few days later dropped from #2 SERP ranking to 50+.
Things I checked:
Yes, 301 redirect was implemented right away.
After publishing, I manually requested indexing in search console.
Right after publishing I re-submitted the sitemap manually and Google said they had not crawled it in 9 days.
My question: should I change the URL back to the old one, or give it a little more time (especially since I re-submitted sitemap)
Original URL: https://www.travelinsurancereview.net/plans/travel-medical/
New URL: https://www.travelinsurancereview.net/plans/travel-medical-insurance/
Hi, I have a few pages with duplicate content, but we've added canonical urls to them, and I need help understanding what's going on.
Google is seeing many of our pages as duplicates, even though they have a canonical url on them:
https://www.hijabgem.com/maxi-shirt-dress.html
My question is: which page takes authority? And are they set up correctly? Can you have more than one link rel="canonical" on one page?
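For reference, a correctly set up duplicate page carries exactly one canonical link in its head, pointing at the preferred URL (here the URL from the question); having more than one rel="canonical" on a page is invalid, and search engines may ignore all of them.

```html
<head>
  <!-- One canonical per page: every duplicate/variant page points at the
       single preferred URL. -->
  <link rel="canonical" href="https://www.hijabgem.com/maxi-shirt-dress.html" />
</head>
```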
Can anyone help me out? I am trying to get this site ranked for "Villa General Belgrano". It was on the first page of Google and then it disappeared. Did I over optimize the anchor text?
http://www.opensiteexplorer.org/anchors?site=www.lawebdelvalle.com.ar
I have a client site that is pulling a meta title that is not in his code. I am using Yoast for the titles and descriptions on this site. Not 100% sure why Google is not listing the title we have in place.
Could the code be pulling from somewhere else? Is there a fix for this?
Does anyone know if there's anywhere where I can see what keywords are used in search engines to land on a specific page? I have access to the Google Analytics account and linked it to Moz as a campaign, but I can't find this data.
I'm curious about this because a very uncommon word is used in a page title for a page I try to optimize. It's the Dutch translation of 'malicious'. And now I wonder if it's better to switch to a word that's used more often. Or if it's better to 'win the battle' on this (probably) rarely used word. I've used Google trends to see how many people use it, but it says there's not enough data to show the interest over time.
My understanding is that a URL should be as short as possible and also match the title tag, but in order to keep the URL shorter, can you abbreviate it?
Title Tag: Eat Your Way to Beauty with Superfoods
URL: websitename.com/sbeauty-with-superfoods
|
OPCFW_CODE
|